Journal of Computer Science and Technology 2017, Vol. 32 Issue (4) :667-682    DOI: 10.1007/s11390-017-1750-y
Special Issue on Deep Learning
Balanced Quantization: An Effective and Efficient Approach to Quantized Neural Networks
Shu-Chang Zhou1,2,3, Yu-Zhi Wang3,4, Student Member, IEEE, He Wen3,5, Qin-Yao He3,5, Yu-Heng Zou5,6
1 University of Chinese Academy of Sciences, Beijing 100049, China;
2 State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China;
3 Megvii Inc., Beijing 100190, China;
4 Department of Electronic Engineering, Tsinghua University, Beijing 100084, China;
5 Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China;
6 School of Electronics Engineering and Computer Science, Peking University, Beijing 100871, China

Abstract
Quantized neural networks (QNNs), which use low-bitwidth numbers to represent parameters and perform computations, have been proposed to reduce computational complexity, storage size, and memory usage. In QNNs, parameters and activations are uniformly quantized, so that multiplications and additions can be accelerated by bitwise operations. However, the distributions of parameters in neural networks are often imbalanced, and uniform quantization determined from extremal values may underutilize the available bitwidth. In this paper, we propose a novel quantization method that ensures a balanced distribution of quantized values. Our method first recursively partitions the parameters by percentiles into balanced bins, and then applies uniform quantization. We also introduce computationally cheaper approximations of percentiles to reduce the overhead this step introduces. Overall, our method improves the prediction accuracy of QNNs without introducing extra computation during inference, has negligible impact on training speed, and is applicable to both convolutional and recurrent neural networks. Experiments on standard datasets, including ImageNet and Penn Treebank, confirm the effectiveness of our method. On ImageNet, our 4-bit quantized GoogLeNet model achieves a top-5 error rate of 12.7%, surpassing the state of the art for QNNs.
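The core idea in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: it uses `np.percentile` directly to form equal-frequency bins (the paper instead partitions recursively by medians and proposes cheaper percentile approximations), and it maps bins to uniform levels in [0, 1]. The function name `balanced_quantize` and the bit-width parameter are illustrative assumptions.

```python
import numpy as np

def balanced_quantize(x, bits=2):
    """Sketch of balanced quantization: partition the values into
    equal-frequency bins by percentiles (histogram equalization),
    then assign each bin a uniformly spaced quantized level."""
    n_levels = 2 ** bits
    flat = x.ravel()
    # Percentile boundaries that give (approximately) equal-count bins;
    # this mirrors the effect of recursive partitioning by medians.
    edges = np.percentile(flat, np.linspace(0, 100, n_levels + 1))
    # Assign each value the index of the bin it falls into.
    idx = np.clip(np.searchsorted(edges[1:-1], flat, side="right"),
                  0, n_levels - 1)
    # Map bin indices to uniformly spaced values in [0, 1], so downstream
    # multiplications/additions can use bitwise arithmetic on the indices.
    return (idx / (n_levels - 1)).reshape(x.shape)
```

For example, quantizing 16 evenly spread values with `bits=2` yields four levels, each holding exactly a quarter of the values, whereas plain uniform quantization of a skewed tensor could leave some levels nearly empty.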
Keywords: quantized neural network; percentile; histogram equalization; uniform quantization
Received: 2016-12-20
Cite this article:   
Shu-Chang Zhou, Yu-Zhi Wang, He Wen, Qin-Yao He, Yu-Heng Zou. Balanced Quantization: An Effective and Efficient Approach to Quantized Neural Networks[J]. Journal of Computer Science and Technology, 2017, 32(4): 667-682.
URL: http://jcst.ict.ac.cn:8080/jcst/EN/10.1007/s11390-017-1750-y