Hao Liang, Zhen Hao Wong, Ruitong Liu, Yuhan Wang, Meiyi Qiang, Zhengyang Zhao, Chengyu Shen, Conghui He, Wentao Zhang, Bin Cui. Data Preparation for Large Language Models[J]. Journal of Computer Science and Technology. DOI: 10.1007/s11390-026-5948-8

Data Preparation for Large Language Models

Large Language Models (LLMs) have demonstrated remarkable generalization capabilities across diverse domains, largely attributed to the availability of massive amounts of high-quality training data. Recently, the development paradigm of LLMs has been shifting from a model-centric to a data-centric perspective. In this paper, we provide a comprehensive survey of data preparation algorithms and workflows for LLMs, categorized into three stages: Pre-Training, Continual Pre-Training, and Post-Training. We further summarize widely used datasets along with their associated data preparation methodologies, offering a practical reference for researchers who may lack extensive experience in the field of data preparation. Finally, we outline potential directions for future work, highlighting open challenges and opportunities in advancing data preparation for LLMs.
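To make the notion of a data preparation workflow concrete, the sketch below shows two steps that appear in most pre-training pipelines: heuristic quality filtering followed by deduplication. This is a minimal illustration under assumed settings; the thresholds (minimum word count, alphabetic-character ratio) and function names are hypothetical and are not methods taken from the survey itself.

```python
# Illustrative sketch of a pre-training data preparation pipeline.
# Thresholds and helper names are hypothetical examples.
import hashlib


def passes_quality_filters(text: str) -> bool:
    """Cheap heuristic filters often applied before model-based scoring."""
    words = text.split()
    if len(words) < 50:  # drop very short documents (assumed threshold)
        return False
    alpha_ratio = sum(c.isalpha() for c in text) / max(len(text), 1)
    if alpha_ratio < 0.6:  # drop symbol- or markup-heavy text (assumed threshold)
        return False
    return True


def deduplicate(docs: list[str]) -> list[str]:
    """Exact deduplication by content hash; near-duplicate removal
    would instead use techniques such as MinHash."""
    seen, unique = set(), []
    for doc in docs:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique


def prepare(corpus: list[str]) -> list[str]:
    """Filter low-quality documents, then remove exact duplicates."""
    filtered = [d for d in corpus if passes_quality_filters(d)]
    return deduplicate(filtered)


if __name__ == "__main__":
    doc = "word " * 60  # passes the length and alpha-ratio filters
    print(len(prepare([doc, doc, "too short"])))  # -> 1
```

Real pipelines layer many more stages on top of these (language identification, model-based quality scoring, toxicity filtering, data mixing), but the filter-then-deduplicate skeleton above is a common starting point.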
