1 State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China;
2 Microprocessor Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China;
3 University of Chinese Academy of Sciences, Beijing 100049, China;
4 Department of Computer Science, University of Science and Technology of China, Hefei 230026, China
Abstract Recently, deep learning processors have become one of the most promising solutions for accelerating deep learning algorithms. Currently, the only way to program a deep learning processor is to write assembly instructions by hand, which requires considerable programming effort and yields low productivity. One solution is to integrate the deep learning processor as a new back-end into a single prevalent high-level deep learning framework (e.g., the TPU (tensor processing unit) is integrated directly into TensorFlow). However, this prevents other frameworks from benefiting from the programming interface. The alternative approach is to design a framework-independent low-level library for deep learning processors, as cuDNN does for GPUs. In this fashion, the library can be conveniently invoked from high-level programming frameworks and provides greater generality. To allow more deep learning frameworks to benefit from this environment, we envision it as a low-level library that can be easily embedded into current high-level frameworks while delivering high performance. We discuss three major issues in designing such a library. The first is the design of the data structures, which should be as few as possible while still supporting all necessary operations; this makes them easier to optimize without compromising generality. The second is the selection of operations, which should cover a sufficiently wide range of operations to support various types of networks with high efficiency. The third is the design of the API, which should offer a flexible and user-friendly programming model and be easy to embed into existing deep learning frameworks. Considering all the above issues, we propose DLPlib, a tensor-filter-based library designed specifically for deep learning processors. It contains two major data structures, tensor and filter, and a set of operators including basic neural network primitives and matrix/vector operations. It exposes a descriptor-based API as a C++ interface. The library achieves 0.79x the performance of hand-written assembly instructions.
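To make the descriptor-based programming model concrete, below is a minimal C++ sketch of how a tensor/filter-descriptor API of this kind might look when invoked. All identifiers here (dlp::TensorDesc, dlp::FilterDesc, convForwardShape) are illustrative assumptions rather than DLPlib's actual interface, and the operator body only computes the output shape so that the sketch runs stand-alone; the real operator would dispatch the computation to the deep learning processor.

namespace dlp {

// Hypothetical descriptor for a 4-D activation tensor (N, C, H, W).
struct TensorDesc {
    int n, c, h, w;
};

// Hypothetical descriptor for a convolution filter bank
// (output channels, input channels, kernel height, kernel width).
struct FilterDesc {
    int co, ci, kh, kw;
};

// Stand-in for a convolution-forward primitive: the real library would
// launch the computation on the accelerator; this sketch only derives
// the output tensor shape from the two descriptors.
TensorDesc convForwardShape(const TensorDesc& in, const FilterDesc& f,
                            int pad, int stride) {
    TensorDesc out;
    out.n = in.n;                                  // batch size is preserved
    out.c = f.co;                                  // one output channel per filter
    out.h = (in.h + 2 * pad - f.kh) / stride + 1;  // standard conv output size
    out.w = (in.w + 2 * pad - f.kw) / stride + 1;
    return out;
}

}  // namespace dlp

int main() {
    dlp::TensorDesc input{1, 3, 224, 224};  // one 3-channel 224x224 image
    dlp::FilterDesc conv1{64, 3, 7, 7};     // 64 filters of shape 3x7x7
    dlp::TensorDesc out =
        dlp::convForwardShape(input, conv1, /*pad=*/3, /*stride=*/2);
    return (out.c == 64 && out.h == 112 && out.w == 112) ? 0 : 1;  // expect 1x64x112x112
}

A descriptor-based design of this kind is what lets a single operator entry point serve many layer shapes: the host framework builds descriptors once per layer and reuses them across invocations, the same pattern cuDNN follows for GPUs.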
This work is partially supported by the National Natural Science Foundation of China under Grant Nos. 61432016, 61472396, 61473275, 61522211, 61532016, 61521092, 61502446, 61672491, 61602441, and 61602446, the National Basic Research 973 Program of China under Grant No. 2015CB358800, and the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No. XDB02040009.
About author: Hui-Ying Lan received her B.E. degree in software engineering from Wuhan University, Wuhan, in 2012, and her Master's degree from the School of Software and Microelectronics, Peking University, Beijing, in 2015. She is currently a Ph.D. student at the Institute of Computing Technology, Chinese Academy of Sciences, Beijing. Her research interests include computer architecture and computational intelligence.
Cite this article:
Hui-Ying Lan, Lin-Yang Wu, Xiao Zhang, Jin-Hua Tao, Xun-Yu Chen, Bing-Rui Wang, Yu-Qing Wang, Qi Guo, Yun-Ji Chen. DLPlib: A library for deep learning processor[J]. Journal of Computer Science and Technology, 2017, 32(2): 286-296.