Data Preparation for Large Language Models
Abstract
Large Language Models (LLMs) have demonstrated remarkable generalization capabilities across diverse domains, a success largely attributed to the availability of massive amounts of high-quality training data. Recently, the development paradigm of LLMs has been shifting from a model-centric to a data-centric perspective. In this paper, we provide a comprehensive survey of data preparation algorithms and workflows for LLMs, organized into three stages: Pre-training, Continual Pre-training, and Post-training. We further summarize widely used datasets along with their associated data preparation methodologies, offering a practical reference for researchers with limited experience in data preparation. Finally, we outline directions for future work, highlighting open challenges and opportunities in advancing data preparation for LLMs.