Xu YG, Qiu XP, Zhou LG et al. Improving BERT fine-tuning via self-ensemble and self-distillation. JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY 38(4): 853−866 July 2023. DOI: 10.1007/s11390-021-1119-0.

Improving BERT Fine-Tuning via Self-Ensemble and Self-Distillation

Fine-tuning pre-trained language models like BERT has become an effective approach in natural language processing (NLP) and yields state-of-the-art results on many downstream tasks. Recent studies on adapting BERT to new tasks mainly focus on modifying the model structure, re-designing the pre-training tasks, and leveraging external data and knowledge; the fine-tuning strategy itself has yet to be fully explored. In this paper, we improve the fine-tuning of BERT with two effective mechanisms: self-ensemble and self-distillation. The self-ensemble mechanism builds the teacher model from the checkpoints stored in an experience pool. To transfer knowledge from the teacher model to the student model efficiently, we further apply knowledge distillation, which we call self-distillation because the distilled knowledge comes from the model itself across the time dimension. Experiments on the GLUE benchmark and the Text Classification benchmark show that our proposed approach can significantly improve the adaptation of BERT without any external data or knowledge. We conduct exhaustive experiments to investigate the efficiency of the self-ensemble and self-distillation mechanisms, and our proposed approach achieves a new state-of-the-art result on the SNLI dataset.
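The two mechanisms described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's actual code: the helper names (`ensemble_teacher`, `distill_loss`, the experience-pool size, and the MSE stand-in for the distillation loss) are all assumptions made for clarity, and a toy parameter vector stands in for a BERT model.

```python
from collections import deque

def ensemble_teacher(checkpoints):
    # Self-ensemble: the teacher's parameters are the element-wise average
    # of the student checkpoints kept in the experience pool.
    k = len(checkpoints)
    return [sum(c[i] for c in checkpoints) / k for i in range(len(checkpoints[0]))]

def distill_loss(student_out, teacher_out):
    # Self-distillation: penalize divergence between the student's and the
    # teacher's predictions (MSE used here as a stand-in for the actual loss).
    return sum((s - t) ** 2 for s, t in zip(student_out, teacher_out)) / len(student_out)

# Toy fine-tuning loop: each step snapshots the student into a bounded pool,
# rebuilds the teacher by self-ensemble, and computes a distillation penalty
# that would be added to the ordinary task loss in practice.
pool = deque(maxlen=3)                     # experience pool of recent checkpoints
student = [0.0, 0.0]
for step in range(5):
    student = [p + 1.0 for p in student]   # placeholder "gradient update"
    pool.append(list(student))
    teacher = ensemble_teacher(list(pool))
    loss = distill_loss(student, teacher)
```

Because the teacher is derived from the student's own past checkpoints, no second network and no external data are required, which matches the paper's claim that the improvement comes purely from the fine-tuning strategy.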
