2016, Vol. 31, Issue (1): 50-59. doi: 10.1007/s11390-016-1611-0

Special Issue: Computer Architecture and Systems; Artificial Intelligence and Pattern Recognition; Emerging Areas

• Special Section on Computer Architecture and Systems with Emerging Technologies •

Modelling Spiking Neural Network from the Architecture Evaluation Perspective

Yu Ji(季宇), You-Hui Zhang(张悠慧), Member, CCF, ACM, IEEE, and Wei-Min Zheng(郑纬民), Fellow, CCF, Member, ACM, IEEE   

  1. Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
  • Received: 2015-07-15; Revised: 2015-11-19; Online: 2016-01-05; Published: 2016-01-05
  • About author: Yu Ji received his B.S. degree in physics from Tsinghua University, Beijing, in 2011. Now he is a Ph.D. student in the Department of Computer Science and Technology at Tsinghua University, Beijing.
  • Supported by:

    The work is supported by the Science and Technology Plan of Beijing, titled "Research on Efficient Parallel Acceleration Technology for Cognitive Computing Platform", and the Brain Inspired Computing Research of Tsinghua University under Grant No. 20141080934.

The brain-inspired spiking neural network (SNN) computing paradigm offers the potential for low-power and scalable computing, suited to many intelligent tasks that conventional computational systems find difficult. On the other hand, network-on-chip (NoC) based very-large-scale integration (VLSI) systems have been widely used to mimic neurobiological architectures (including SNNs). This paper proposes an evaluation methodology for SNN applications from the micro-architecture perspective. First, we extract accurate SNN models from existing simulators of neural systems. Second, a cycle-accurate NoC simulator is implemented to execute the aforementioned SNN applications and obtain timing and energy-consumption information. We believe this method not only benefits the exploration of the NoC design space but also bridges the gap between applications (especially those from the neuroscience community) and neuromorphic hardware. Based on this method, we have evaluated some typical SNNs in terms of timing and energy. The method is valuable for the development of neuromorphic hardware and applications.
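The two-stage flow described above (an SNN model driving a NoC cost model that reports timing and energy) can be sketched roughly as follows. This is an illustrative toy, not the paper's simulator: the leaky integrate-and-fire (LIF) parameters, the 2D-mesh XY-routing topology, and the per-hop cycle/energy figures are all assumed values chosen for demonstration.

```python
# Illustrative sketch (NOT the paper's cycle-accurate simulator):
# LIF neurons mapped onto a 2D mesh, with each emitted spike charged
# NoC latency/energy proportional to its Manhattan hop count.
# All constants below are assumptions for demonstration only.

def lif_step(v, i_in, v_rest=0.0, v_thresh=1.0, tau=20.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.
    Returns (new membrane potential, whether the neuron spiked)."""
    v = v + dt * ((v_rest - v) / tau + i_in)
    if v >= v_thresh:
        return v_rest, True      # fire, then reset to rest
    return v, False

def hops(src, dst):
    """XY-routing hop count between two (x, y) mesh coordinates."""
    return abs(src[0] - dst[0]) + abs(src[1] - dst[1])

def simulate(input_currents, placement, synapses,
             cycles_per_hop=1, energy_per_hop_pj=0.5, steps=50):
    """Drive each neuron with a constant input current; whenever a
    neuron fires, accumulate NoC cycles and energy for delivering the
    spike over every outgoing synapse (communication cost only)."""
    v = {n: 0.0 for n in placement}
    spike_count, total_cycles, total_energy_pj = 0, 0, 0.0
    for _ in range(steps):
        for n in placement:
            v[n], fired = lif_step(v[n], input_currents.get(n, 0.0))
            if fired:
                spike_count += 1
                for dst in synapses.get(n, []):
                    h = hops(placement[n], placement[dst])
                    total_cycles += h * cycles_per_hop
                    total_energy_pj += h * energy_per_hop_pj
    return spike_count, total_cycles, total_energy_pj
```

For example, placing two neurons on opposite corners of a 4x4 mesh (`{"A": (0, 0), "B": (3, 3)}` with a synapse from A to B) charges 6 hops of latency and energy per spike that A emits, which is the kind of per-application timing/energy figure the methodology aims to produce.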

ISSN 1000-9000 (Print), 1860-4749 (Online)
CN 11-2296/TP

Journal of Computer Science and Technology
Institute of Computing Technology, Chinese Academy of Sciences
P.O. Box 2704, Beijing 100190 P.R. China
Tel.: 86-10-62610746
E-mail: jcst@ict.ac.cn
  Copyright ©2015 JCST, All Rights Reserved