Chong Zhang, Hong-Zhi Wang, Hong-Wei Liu, Yi-Lin Chen. Fine-Tuning Channel-Pruned Deep Model via Knowledge Distillation[J]. Journal of Computer Science and Technology. DOI: 10.1007/s11390-023-2386-8

Fine-Tuning Channel-Pruned Deep Model via Knowledge Distillation

Deep convolutional neural networks with high performance are hard to deploy in many real-world applications, because the computing resources of edge devices such as smartphones or embedded GPUs are limited. To alleviate this hardware limitation, compressing deep neural networks from the model side becomes important. As one of the most popular compression methods, channel pruning can effectively remove redundant convolutional channels from a CNN without noticeably degrading its performance. Existing methods focus on the pruning design itself, i.e., on evaluating the importance of different convolutional filters in the CNN model; a fast and effective fine-tuning method that restores the accuracy lost by pruning is still urgently needed. In this paper, we propose a fine-tuning method, KDFT, which improves the accuracy of fine-tuned models with almost negligible training overhead by introducing knowledge distillation. Extensive experimental results on benchmark datasets with representative CNN models show that KDFT yields up to 4.86% higher accuracy and 79% training-time savings.
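The abstract does not spell out the exact distillation objective used by KDFT, so the sketch below is only a generic illustration of knowledge-distillation fine-tuning under standard assumptions: a channel-pruned "student" is fine-tuned against an uncompressed "teacher" with the classic soft-target KL term plus a hard-label cross-entropy term (Hinton-style distillation). The function name kd_finetune_step and the hyper-parameters temperature and alpha are illustrative choices, not values taken from the paper.

```python
import torch
import torch.nn.functional as F

def kd_finetune_step(student, teacher, optimizer, images, labels,
                     temperature=4.0, alpha=0.5):
    """One fine-tuning step of a pruned student guided by a teacher model.

    This is a generic knowledge-distillation sketch, not the paper's exact
    KDFT procedure; `temperature` and `alpha` are illustrative assumptions.
    """
    teacher.eval()
    with torch.no_grad():
        teacher_logits = teacher(images)   # soft targets from the teacher

    student_logits = student(images)

    # Hard-label loss on the ground-truth labels.
    ce_loss = F.cross_entropy(student_logits, labels)

    # Soft-label loss: KL divergence between temperature-softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable.
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)

    loss = alpha * ce_loss + (1.0 - alpha) * kd_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this generic setup the teacher is typically the original unpruned network, so its softened outputs guide the pruned student back toward the original accuracy during fine-tuning; the step above would simply replace the plain cross-entropy update in a standard fine-tuning loop.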