Journal of Computer Science and Technology ›› 2019, Vol. 34 ›› Issue (2): 305-317. DOI: 10.1007/s11390-019-1912-1

Special Issue: Artificial Intelligence and Pattern Recognition

• Special Section of Advances in Computer Science and Technology—Current Advances in the NSFC Joint Research Fund for Overseas Chinese Scholars and Scholars in Hong Kong and Macao 2014-2017 (Part 2) •

Space Efficient Quantization for Deep Convolutional Neural Networks

Dong-Di Zhao1, Fan Li1,*, Member, CCF, ACM, IEEE, Kashif Sharif1, Member, CCF, ACM, IEEE, Guang-Min Xia1, Yu Wang2,*, Fellow, IEEE, Senior Member, ACM   

  1 School of Computer Science, Beijing Institute of Technology, Beijing 100081, China;
    2 Wireless Networking and Sensing Laboratory, Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC 28223, U.S.A.
  • Received: 2018-07-15; Revised: 2019-01-27; Online: 2019-03-05; Published: 2019-03-16
  • Contact: Fan Li, Yu Wang. E-mail: fli@bit.edu.cn; yu.wang@uncc.edu
  • About author:Dong-Di Zhao received his B.E. degree in the Internet of Things from the School of Computer Science, Beijing Institute of Technology, Beijing, in 2016. He is currently pursuing his Master's degree at Beijing Institute of Technology, Beijing. His research interests include mobile sensing, mobile computing, and deep learning.
  • Supported by:
    The work of Fan Li is partially supported by the National Natural Science Foundation of China (NSFC) under Grant Nos. 61772077 and 61370192, and Beijing Natural Science Foundation of China under Grant No. 4192051. The work of Yu Wang is partially supported by NSFC under Grant Nos. 61428203 and 61572347.

Deep convolutional neural networks (DCNNs) have shown outstanding performance in the fields of computer vision, natural language processing, and complex system analysis. As their performance improves with deeper layers, DCNNs incur higher computational complexity and larger storage requirements, making it extremely difficult to deploy them on resource-limited embedded systems (such as mobile devices or Internet of Things devices). Network quantization efficiently reduces the storage space required by DCNNs; however, their performance often drops rapidly as the quantization bit width decreases. In this article, we propose a space-efficient quantization scheme which uses eight or fewer bits to represent the original 32-bit weights. We adopt the singular value decomposition (SVD) method to decrease the parameter size of fully-connected layers for further compression. Additionally, we propose a weight clipping method based on a dynamic boundary to improve the performance when lower precision is used. Experimental results demonstrate that our approach can achieve up to approximately 14x compression while preserving almost the same accuracy as the full-precision models. The proposed weight clipping method can also significantly improve the performance of DCNNs when lower precision is required.
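The NumPy sketch below illustrates the two ideas named in the abstract: clipping weights to a dynamic boundary before uniform low-bit quantization, and compressing a fully-connected layer with an SVD-based low-rank factorization. The percentile-based clipping rule, the function names, and the chosen rank are illustrative assumptions for exposition only, not the exact procedure of the paper.

```python
import numpy as np

def clip_and_quantize(weights, num_bits=8, clip_percentile=99.5):
    """Hypothetical sketch: clip weights to a data-dependent (dynamic)
    boundary, then quantize them uniformly to num_bits."""
    # Dynamic boundary derived from the weight distribution rather than
    # the raw min/max (percentile rule is an assumption, not the paper's).
    bound = np.percentile(np.abs(weights), clip_percentile)
    clipped = np.clip(weights, -bound, bound)

    # Symmetric uniform quantization with 2^(num_bits-1) - 1 positive levels.
    levels = 2 ** (num_bits - 1) - 1
    scale = bound / levels
    q = np.round(clipped / scale).astype(np.int8 if num_bits <= 8 else np.int32)
    return q, scale  # de-quantize at inference time as q * scale

def svd_compress_fc(weight_matrix, rank):
    """Low-rank factorization of a fully-connected weight matrix via SVD:
    W (m x n) is replaced by U_r (m x r) and V_r (r x n)."""
    u, s, vt = np.linalg.svd(weight_matrix, full_matrices=False)
    u_r = u[:, :rank] * s[:rank]   # absorb singular values into the left factor
    v_r = vt[:rank, :]
    return u_r, v_r                # m*r + r*n parameters instead of m*n

# Toy usage on random weights.
w_conv = np.random.randn(3, 3, 64, 64).astype(np.float32)
q, scale = clip_and_quantize(w_conv.ravel(), num_bits=8)
w_fc = np.random.randn(4096, 1024).astype(np.float32)
u_r, v_r = svd_compress_fc(w_fc, rank=128)
```

Under these assumptions, the storage saving comes from two sources: each weight is stored in 8 or fewer bits instead of 32, and the fully-connected matrix is replaced by two much smaller factors.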

Key words: convolutional neural network; memory compression; network quantization

[1] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks. In Proc. the 26th Annual Conf. Neural Information Processing Systems, December 2012, pp.1106-1114.
[2] Ren S, He K, Girshick R, Sun J. Faster R-CNN: Towards real-time object detection with region proposal networks. In Proc. the 29th Annual Conf. Neural Information Processing Systems, December 2015, pp.91-99.
[3] Abdel-Hamid O, Mohamed A R, Jiang H, Deng L, Penn G, Yu D. Convolutional neural networks for speech recognition. IEEE/ACM Trans. Audio, Speech, and Language Processing, 2014, 22(10): 1533-1545.
[4] Mao H, Alizadeh M, Menache I, Kandula S. Resource management with deep reinforcement learning. In Proc. the 15th ACM Workshop on Hot Topics in Networks, November 2016, pp.50-56.
[5] Deng J, Dong W, Socher R, Li L J, Li K, Li F F. ImageNet: A large-scale hierarchical image database. In Proc. the 2009 IEEE Computer Society Conf. Computer Vision and Pattern Recognition, June 2009, pp.248-255.
[6] He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In Proc. IEEE Conf. Computer Vision and Pattern Recognition, June 2016, pp.770-778.
[7] Yao S, Hu S, Zhao Y, Zhang A, Abdelzaher T. DeepSense: A unified deep learning framework for time-series mobile sensing data processing. In Proc. the 26th International Conference on World Wide Web, April 2017, pp.351-360.
[8] Guo B, Wang Z, Yu Z, Wang Y, Yen N, Huang R, Zhou X. Mobile crowd sensing and computing: The review of an emerging human-powered sensing paradigm. ACM Computing Surveys, 2015, 48(1): Article No. 7.
[9] Vanhoucke V, Senior A, Mao M Z. Improving the speed of neural networks on CPUs. In Proc. NIPS Deep Learning and Unsupervised Feature Learning Workshop, December 2011, pp.611-620.
[10] Han S, Mao H, Dally W J. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In Proc. Int. Conf. Learning Representations, May 2016, pp.351-360.
[11] Gysel P, Motamedi M, Ghiasi S. Hardware-oriented approximation of convolutional neural networks. arXiv:1604.03168, 2016. https://arxiv.org/abs/1604.03168, October 2018.
[12] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2014. https://arxiv.org/abs/1409.1556, April 2018.
[13] Chen W, Wilson J T, Tyree S, Weinberger K Q, Chen Y. Compressing neural networks with the hashing trick. In Proc. the 32nd Int. Conf. Machine Learning, July 2015, pp.2285-2294.
[14] Wu J, Leng C, Wang Y, Hu Q, Cheng J. Quantized convolutional neural networks for mobile devices. In Proc. the 2016 IEEE Conf. Computer Vision and Pattern Recognition, June 2016, pp.4820-4828.
[15] Zhou A, Yao A, Guo Y, Xu L, Chen Y. Incremental network quantization: Towards lossless CNNs with low precision weights. arXiv:1702.03044, 2017. https://arxiv.org/abs/1702.03044, August 2017.
[16] Park E, Ahn J, Yoo S. Weighted-entropy-based quantization for deep neural networks. In Proc. the 2017 IEEE Conf. Computer Vision and Pattern Recognition, July 2017, pp.7197-7205.
[17] Jaderberg M, Vedaldi A, Zisserman A. Speeding up convolutional neural networks with low rank expansions. In Proc. British Machine Vision Conference, September 2014, Article No. 73.
[18] Hinton G, Vinyals O, Dean J. Distilling the knowledge in a neural network. arXiv:1503.02531, 2015. https://arxiv.org/pdf/1503.02531.pdf, November 2018.
[19] Iandola F N, Han S, Moskewicz M W, Ashraf K, Dally W J, Keutzer K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv:1602.07360, 2016. https://arxiv.org/abs/1602.07360, November 2018.
[20] Chollet F. Xception: Deep learning with depthwise separable convolutions. In Proc. the 2017 IEEE Conf. Computer Vision and Pattern Recognition, July 2017, pp.1800-1807.
[21] Lin D, Talathi S, Annapureddy S. Fixed point quantization of deep convolutional networks. In Proc. the 33rd Int. Conf. Machine Learning, June 2016, pp.2849-2858.
[22] Gupta S, Agrawal A, Gopalakrishnan K, Narayanan P. Deep learning with limited numerical precision. In Proc. the 32nd Int. Conf. Machine Learning, July 2015, pp.1737-1746.
[23] Gong Y, Liu L, Yang M, Bourdev L. Compressing deep convolutional networks using vector quantization. arXiv:1412.6115, 2014. https://arxiv.org/abs/1412.6115, December 2018.
[24] Kullback S, Leibler R A. On information and sufficiency. The Annals of Mathematical Statistics, 1951, 22(1): 79-86.
[25] Abadi M, Barham P, Chen J, Chen Z, et al. TensorFlow: A system for large-scale machine learning. In Proc. the 12th USENIX Symposium on Operating Systems Design and Implementation, November 2016, pp.265-283.