Xin Feng, Hao-Ming Wu, Yi-Hao Yin, Li-Bin Lan. CGTracker: Center Graph Network for One-Stage Multi-Pedestrian-Object Detection and Tracking[J]. Journal of Computer Science and Technology, 2022, 37(3): 626-640. DOI: 10.1007/s11390-022-2204-8

CGTracker: Center Graph Network for One-Stage Multi-Pedestrian-Object Detection and Tracking

  • Most current online multi-object tracking (MOT) methods comprise two steps: object detection and data association, where the data association step relies on both object feature extraction and affinity computation. This often incurs additional computational cost and degrades the efficiency of MOT methods. In this paper, we combine the object detection and data association modules in a unified framework, while dispensing with the extra feature extraction process, to achieve a better speed-accuracy trade-off for MOT. Considering that pedestrians are the most common object category in real-world scenes and have particular characteristics in their inter-object relationships and motion patterns, we present a novel yet efficient one-stage pedestrian detection and tracking method, named CGTracker. In particular, CGTracker detects each pedestrian as the center point of the object and directly extracts the object's features from the feature representation at the center point, which is then used to predict the axis-aligned bounding box. Meanwhile, the detected pedestrians are organized into an object graph to facilitate multi-object association, where the semantic features, displacement information, and relative positions of targets between two adjacent frames are used to perform reliable online tracking. CGTracker achieves multiple object tracking accuracy (MOTA) of 69.3% and 65.3% at 9 FPS on MOT17 and MOT20, respectively. Extensive experimental results under widely used evaluation metrics demonstrate that our method is among the best on the leader boards for the MOT17 and MOT20 challenges at the time of submission of this work.
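The association step described above can be illustrated with a minimal sketch. This is not the paper's implementation: CGTracker uses a learned center graph network, whereas the sketch below samples per-detection features at center locations and performs a greedy one-to-one matching on a hand-crafted affinity combining feature similarity and center displacement. All function names and the displacement weight are illustrative assumptions.

```python
import math

def center_features(feature_map, centers):
    """Sample each detection's feature vector at its center location.
    Hypothetical stand-in for CGTracker's center-point feature extraction:
    feature_map[y][x] is the feature vector at pixel (x, y)."""
    return [feature_map[cy][cx] for (cx, cy) in centers]

def affinity(f1, c1, f2, c2, disp_weight=0.01):
    """Affinity between two detections: cosine similarity of semantic
    features minus a penalty on center displacement (weight is an
    illustrative choice, not from the paper)."""
    dot = sum(a * b for a, b in zip(f1, f2))
    n1 = math.sqrt(sum(a * a for a in f1))
    n2 = math.sqrt(sum(b * b for b in f2))
    cos = dot / (n1 * n2 + 1e-8)
    disp = math.hypot(c1[0] - c2[0], c1[1] - c2[1])
    return cos - disp_weight * disp

def associate(prev, curr):
    """Greedy one-to-one matching of detections (feature, center) pairs
    between two adjacent frames by descending affinity. The paper instead
    reasons over an object graph; greedy matching is a placeholder."""
    scores = []
    for i, (f1, c1) in enumerate(prev):
        for j, (f2, c2) in enumerate(curr):
            scores.append((affinity(f1, c1, f2, c2), i, j))
    scores.sort(reverse=True)
    used_i, used_j, matches = set(), set(), {}
    for s, i, j in scores:
        if s > 0 and i not in used_i and j not in used_j:
            matches[i] = j
            used_i.add(i)
            used_j.add(j)
    return matches

# Two detections per frame; each moves slightly between frames.
prev = [([1.0, 0.0], (10, 10)), ([0.0, 1.0], (50, 50))]
curr = [([0.0, 1.0], (52, 51)), ([1.0, 0.0], (11, 10))]
print(associate(prev, curr))  # → {1: 0, 0: 1}
```

In a real tracker, unmatched current-frame detections would start new tracks and unmatched previous-frame tracks would be kept alive for a few frames before deletion.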
