WU GenQing, ZHENG Fang. A Method to Build a Super Small but Practically Accurate Language Model for Handheld Devices[J]. Journal of Computer Science and Technology, 2003, 18(6).

A Method to Build a Super Small but Practically Accurate Language Model for Handheld Devices

This paper raises an important question: can a small language model be practically accurate enough? It then analyzes the purpose of a language model, the problems a language model faces, and the factors that affect its performance. Finally, a novel language model compression method is proposed that makes a large language model usable for applications on handheld devices such as mobile phones, smart phones, personal digital assistants (PDAs), and handheld personal computers (HPCs). The proposed compression method has three aspects. First, the language model parameters are analyzed and a criterion based on an importance measure of n-grams is used to determine which n-grams should be kept and which removed. Second, a piecewise linear warping method is proposed to compress the uni-gram count values in the full language model. Third, a rank-based quantization method is adopted to quantize the bi-gram probability values. Experiments show that with this compression method the language model can be reduced dramatically to only about 1 MB while performance hardly decreases. This is good evidence that a language model compressed by a well-designed compression technique is practically accurate enough, making the language model usable on handheld devices.
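The abstract names the three compression steps but does not give their formulas, so the Python sketch below is only a rough illustration under stated assumptions: the importance score (a count-weighted log-probability distance from the uni-gram back-off), the single-knee piecewise linear warp, the `keep_ratio`, `knee`, and `levels` parameters, and all function names are placeholders chosen for illustration, not the paper's actual definitions.

```python
import math

def prune_bigrams(bigram_probs, unigram_probs, counts, keep_ratio=0.2):
    """Keep only the most 'important' bi-grams.

    The importance score here is a stand-in: how often a bi-gram occurs,
    weighted by how far its probability is (in log space) from the
    back-off uni-gram estimate. Bi-grams the uni-gram model already
    predicts well contribute little and are dropped.
    """
    def score(pair):
        (w1, w2), p = pair
        backoff = unigram_probs.get(w2, 1e-9)
        return counts.get((w1, w2), 1) * abs(math.log(p) - math.log(backoff))

    ranked = sorted(bigram_probs.items(), key=score, reverse=True)
    kept = ranked[: max(1, int(len(ranked) * keep_ratio))]
    return dict(kept)


def warp_count(c, knee=1000, slope=0.1):
    """Illustrative piecewise linear warp for uni-gram counts: counts above
    the knee grow with a flatter slope, shrinking the dynamic range so the
    warped counts fit in fewer bits. The breakpoint and slope are assumed."""
    return c if c <= knee else knee + slope * (c - knee)


def rank_quantize(probs, levels=256):
    """Rank-based quantization of bi-gram probabilities.

    Probabilities are sorted, split into `levels` rank buckets, and each
    value is replaced by a small bucket code plus a shared per-bucket
    representative (the bucket mean), so one byte can stand in for a float.
    """
    order = sorted(range(len(probs)), key=lambda i: probs[i])
    bucket = max(1, math.ceil(len(order) / levels))
    codes = [0] * len(probs)
    codebook = []
    for start in range(0, len(order), bucket):
        members = order[start:start + bucket]
        codebook.append(sum(probs[i] for i in members) / len(members))
        for i in members:
            codes[i] = len(codebook) - 1
    return codes, codebook


if __name__ == "__main__":
    # Toy model, purely for demonstration.
    unigrams = {"the": 0.05, "cat": 0.001, "sat": 0.0008, "mat": 0.0005}
    bigrams = {("the", "cat"): 0.02, ("cat", "sat"): 0.3,
               ("sat", "the"): 0.15, ("the", "mat"): 0.01}
    counts = {("the", "cat"): 120, ("cat", "sat"): 80,
              ("sat", "the"): 60, ("the", "mat"): 15}

    kept = prune_bigrams(bigrams, unigrams, counts, keep_ratio=0.5)
    codes, codebook = rank_quantize(list(kept.values()), levels=4)
    warped = {w: warp_count(int(p * 1e6)) for w, p in unigrams.items()}
    print(kept, codes, codebook, warped)
```

Storing a one-byte rank code per bi-gram plus a small shared codebook is what allows the probability table to shrink toward the roughly 1 MB footprint reported in the abstract while preserving the relative ordering of probabilities.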