Citation: Ma YC, Ma X, Hao TR et al. Knowledge distillation via hierarchical matching for small object detection. JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY 39(4): 798−810, July 2024. DOI: 10.1007/s11390-024-4158-5.

Knowledge Distillation via Hierarchical Matching for Small Object Detection

Knowledge distillation is often used for model compression and has achieved great breakthroughs in image classification, but there is still room for improvement in object detection, especially in knowledge extraction for small objects. The main problem is that the features of small objects are often polluted by background noise and are not prominent due to the down-sampling of the convolutional neural network (CNN), resulting in insufficient refinement of small object features during distillation. In this paper, we propose the Hierarchical Matching Knowledge Distillation Network (HMKD), which operates on pyramid levels P2 to P4 of the feature pyramid network (FPN), aiming to intervene on small object features before they are affected. We employ an encoder-decoder network to encapsulate low-resolution, highly semantic information, akin to eliciting insights from the deep layers of the teacher network, and then match this encapsulated information, as the query, against the high-resolution feature values of small objects from the shallow layers, which serve as the keys. An attention mechanism measures the relevance of the query to these feature values, and knowledge is distilled to the student during decoding. In addition, we introduce a supplementary distillation module to mitigate the effect of background noise. Experiments show that our method achieves significant improvements for both one-stage and two-stage object detectors. In particular, applying the proposed method to Faster R-CNN achieves 41.7% mAP on COCO2017 (with ResNet-50 as the backbone), which is 3.8% higher than the baseline.
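The encode-match-decode idea described in the abstract can be illustrated with a minimal PyTorch-style sketch of cross-attention matching between a low-resolution, semantically rich query and high-resolution shallow-layer features. All module and parameter names below are illustrative assumptions, not the paper's code; the exact distillation objective and the way teacher and student features are plugged in follow the paper, while the loss here is only a generic feature-imitation placeholder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalMatching(nn.Module):
    """Sketch: match a low-resolution, high-level query (e.g., encoded from
    deeper FPN levels) against high-resolution shallow features (P2-like)
    that carry small-object detail, then decode the attended result."""

    def __init__(self, channels: int, embed_dim: int = 256):
        super().__init__()
        self.q_proj = nn.Conv2d(channels, embed_dim, kernel_size=1)
        self.k_proj = nn.Conv2d(channels, embed_dim, kernel_size=1)
        self.v_proj = nn.Conv2d(channels, embed_dim, kernel_size=1)
        self.decode = nn.Conv2d(embed_dim, channels, kernel_size=1)

    def forward(self, deep_feat: torch.Tensor, shallow_feat: torch.Tensor) -> torch.Tensor:
        # deep_feat:    (B, C, h, w)  low resolution, highly semantic
        # shallow_feat: (B, C, H, W)  high resolution, small-object detail
        b, _, h, w = deep_feat.shape
        q = self.q_proj(deep_feat).flatten(2).transpose(1, 2)      # (B, hw, D)
        k = self.k_proj(shallow_feat).flatten(2).transpose(1, 2)   # (B, HW, D)
        v = self.v_proj(shallow_feat).flatten(2).transpose(1, 2)   # (B, HW, D)
        # Attention weights measure the relevance of the query to the values.
        attn = torch.softmax(q @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)      # (B, D, h, w)
        return self.decode(out)

def feature_imitation_loss(student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
    # Placeholder distillation loss between matched student and teacher features.
    return F.mse_loss(student_feat, teacher_feat)
```

In this reading, the encoder-decoder supplies the query from low-resolution semantic information, the shallow high-resolution features act as keys and values, and distillation to the student happens on the decoded output; the paper's actual modules (including the supplementary module for background noise) are more elaborate than this sketch.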
