
Wang L, Guo SS, Qu LH et al. M-LSM: An improved multi-liquid state machine for event-based vision recognition. JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY 38(6): 1288−1299 Nov. 2023. DOI: 10.1007/s11390-021-1326-8.

M-LSM: An Improved Multi-Liquid State Machine for Event-Based Vision Recognition

Funds: This work was supported in part by the National Natural Science Foundation of China under Grant Nos. 62372461, 62032001 and 62203457, and in part by the Key Laboratory of Advanced Microprocessor Chips and Systems.
More Information
  • Author Bio:

    Lei Wang is currently an associate professor in the College of Computer Science and Technology, National University of Defense Technology, Changsha. She received her B.E. and Ph.D. degrees from National University of Defense Technology, Changsha, in 2000 and 2006, respectively. Her current research interests include computer architecture, asynchronous circuits, artificial intelligence, and neuromorphic computation.

    Sha-Sha Guo received her B.E. degree in information security from National University of Defense Technology, Changsha, in 2017. She is currently a Ph.D. candidate in computer science and technology at the same university. Her research interests include dynamic vision sensor denoising and neuromorphic computing.

    Lian-Hua Qu received his B.E., M.S., and Ph.D. degrees from National University of Defense Technology, Changsha, in 2014, 2016, and 2020, respectively. His current research interests include spiking neural networks, reservoir computing, and nonvolatile memory design.

    Shuo Tian received his B.E. degree from Sichuan University, Chengdu, in 2014, and his M.S. and Ph.D. degrees from National University of Defense Technology, Changsha, in 2016 and 2021, respectively. His current research interests include automatic neural architecture search, reservoir computing, and hardware accelerator design for neural networks.

    Wei-Xia Xu is currently a professor in the College of Computer Science and Technology, National University of Defense Technology, Changsha. He received his B.E. degree from Nanjing University of Science and Technology, Nanjing, in 1984, and his M.S. and Ph.D. degrees from National University of Defense Technology, Changsha, in 1993 and 2018, respectively. His current research interests include computer architecture, high-performance microprocessor design, artificial intelligence, and neuromorphic computation.

  • Received Date: January 26, 2021
  • Accepted Date: November 18, 2021
  • Abstract: Event-based computation has recently gained increasing research interest for vision recognition applications due to its intrinsic advantages in efficiency and speed. However, existing event-based models for vision recognition face several issues, such as large network complexity and expensive training cost. In this paper, we propose an improved multi-liquid state machine (M-LSM) method for high-performance vision recognition. Specifically, we introduce two methods, namely multi-state fusion and multi-liquid search, to optimize the liquid state machine (LSM). Multi-state fusion samples the liquid state at multiple timesteps and thus preserves richer spatiotemporal information. We adapt network architecture search (NAS) to find a potentially optimal architecture for the multi-liquid state machine, and we train the M-LSM with an unsupervised learning rule, spike-timing-dependent plasticity (STDP). Our M-LSM is evaluated on two event-based datasets and demonstrates state-of-the-art recognition performance with superior advantages in network complexity and training cost.
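    The multi-state fusion idea from the abstract can be sketched in a few lines: rather than reading out the liquid (reservoir) state only once at the end of the input, the state is sampled at several intermediate timesteps and the samples are concatenated into one feature vector for the readout. The sketch below is purely illustrative and is not the authors' implementation; the leaky rate-based reservoir, the sampling timesteps, and all names and parameters are assumptions standing in for a spiking liquid.

    ```python
    import numpy as np

    # Illustrative sketch of multi-state fusion (not the paper's code):
    # sample the reservoir state at several timesteps and concatenate
    # the samples, preserving more spatiotemporal information than a
    # single final-state readout would.
    rng = np.random.default_rng(0)

    N_IN, N_RES, T = 16, 100, 60        # input size, liquid size, timesteps
    SAMPLE_AT = (19, 39, 59)            # timesteps at which the state is sampled

    W_in = rng.normal(0.0, 0.5, (N_RES, N_IN))    # fixed random input weights
    W_res = rng.normal(0.0, 0.1, (N_RES, N_RES))  # fixed random recurrent weights
    LEAK = 0.9                                    # leaky-integrator decay

    def fused_state(events):
        """events: (T, N_IN) binary spike raster -> concatenated liquid states."""
        x = np.zeros(N_RES)
        samples = []
        for t in range(T):
            # simple leaky rate reservoir standing in for a spiking liquid
            x = LEAK * x + np.tanh(W_in @ events[t] + W_res @ x)
            if t in SAMPLE_AT:
                samples.append(x.copy())
        # fused feature vector, shape (len(SAMPLE_AT) * N_RES,)
        return np.concatenate(samples)

    spikes = (rng.random((T, N_IN)) < 0.1).astype(float)
    state = fused_state(spikes)
    print(state.shape)  # (300,)
    ```

    A linear readout (or, as in the paper, a readout trained together with STDP-tuned liquids) would then classify these fused vectors instead of single-timestep states.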

  • [1]
    Rathi N, Panda P, Roy K. STDP-based pruning of connections and weight quantization in spiking neural networks for energy-efficient recognition. IEEE Trans. Computer-Aided Design of Integrated Circuits and Systems, 2019, 38(4): 668–677. DOI: 10.1109/TCAD.2018.2819366.
    [2]
    Maass W. Networks of spiking neurons: The third generation of neural network models. Neural Networks, 1997, 10(9): 1659–1671. DOI: 10.1016/S0893-6080(97)00011-7.
    [3]
    Lee C, Srinivasan G, Panda P, Roy K. Deep spiking convolutional neural network trained with unsupervised spike-timing-dependent plasticity. IEEE Trans. Cognitive and Developmental Systems, 2019, 11(3): 384–394. DOI: 10.1109/TCDS.2018.2833071.
    [4]
    Querlioz D, Bichler O, Dollfus P, Gamrat C. Immunity to device variations in a spiking neural network with memristive nanodevices. IEEE Trans. Nanotechnology, 2013, 12(3): 288–295. DOI: 10.1109/TNANO.2013.2250995.
    [5]
    Merolla P A, Arthur J V, Alvarez-Icaza R et al. A million spiking-neuron integrated circuit with a scalable communication network and interface. Science, 2014, 345(6197): 668–673. DOI: 10.1126/science.1254 642.
    [6]
    Davies M, Srinivasa N, Lin T H et al. Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro, 2018, 38(1): 82–99. DOI: 10.1109/MM.2018.112130359.
    [7]
    Du Z D, Rubin D D B D, Chen Y J et al. Neuromorphic accelerators: A comparison between neuroscience and machine-learning approaches. In Proc. the 48th International Symposium on Microarchitecture, Dec. 2015, pp.494–507. DOI: 10.1145/2830772.2830789.
    [8]
    Schuman C D, Potok T E, Patton R M et al. A survey of neuromorphic computing and neural networks in hardware. arXiv: 1705.06963, 2017. https://arxiv.org/abs/1705.06963, Dec. 2023.
    [9]
    Amir A, Taba B, Berg D et al. A low power, fully event-based gesture recognition system. In Proc. the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Jul. 2017, pp.7388–7397. DOI: 10.1109/CVPR.2017.781.
    [10]
    Gehrig D, Loquercio A, Derpanis K, Scaramuzza D. End-to-end learning of representations for asynchronous event-based data. In Proc. the 2019 IEEE/CVF International Conference on Computer Vision, Oct. 27–Nov. 2, 2019, pp.5632–5642. DOI: 10.1109/ICCV.2019.00573.
    [11]
    Lichtsteiner P, Posch C, Delbruck T. A 128x128 120 db 15 μs latency asynchronous temporal contrast vision sensor. IEEE Journal of Solid-State Circuits, 2008, 43(2): 566–576. DOI: 10.1109/JSSC.2007.914337.
    [12]
    Yang M H, Liu S C, Delbruck T. A dynamic vision sensor with 1% temporal contrast sensitivity and in-pixel asynchronous delta modulator for event encoding. IEEE Journal of Solid-State Circuits, 2015, 50(9): 2149–2160. DOI: 10.1109/JSSC.2015.2425886.
    [13]
    He W H, Wu Y J, Deng L et al. Comparing SNNs and RNNs on neuromorphic vision datasets: Similarities and differences. Neural Networks, 2020, 132: 108–120. DOI: 10.1016/j.neunet.2020.08.001.
    [14]
    Shrestha S B, Orchard G. SLAYER: Spike layer error reassignment in time. In Proc. the 32nd International Conference on Neural Information Processing Systems, Dec. 2018, pp.1419–1428.
    [15]
    Ju H, Xu J X, Chong E et al. Effects of synaptic connectivity on liquid state machine performance. Neural Networks, 2013, 38: 39–51. DOI: 10.1016/j.neunet.2012.11.003.
    [16]
    Mi Y Y, Lin X H, Zou X L, Ji Z L, Huang T J, Wu S. Spatiotemporal information processing with a reservoir decision-making network. arXiv: 1907.12071, 2019. https://arxiv.org/abs/1907.12071, Dec. 2023.
    [17]
    Kaiser J, Stal R, Subramoney A et al. Scaling up liquid state machines to predict over address events from dynamic vision sensors. Bioinspiration & Biomimetics, 2017, 12(5): 055001. DOI: 10.1088/1748-3190/aa7663.
    [18]
    Wang Q, Li P. D-LSM: Deep liquid state machine with unsupervised recurrent reservoir tuning. In Proc. the 23rd International Conference on Pattern Recognition (ICPR), Dec. 2016, pp.2652–2657. DOI: 10.1109/ICPR.2016.7900 035.
    [19]
    Srinivasan G, Panda P, Roy K. SpilinC: Spiking liquid-ensemble computing for unsupervised speech and image recognition. Frontiers in Neuroscience, 2018, 12: 524. DOI: 10.3389/fnins.2018.00524.
    [20]
    Orchard G, Jayawant A, Cohen G K, Thakor N. Converting static image datasets to spiking neuromorphic datasets using saccades. Frontiers in Neuroscience, 2015, 9: 437. DOI: 10.3389/fnins.2015.00437.
    [21]
    Goodman D F M, Brette R. The Brian simulator. Frontiers in Neuroscience, 2009, 3: 192–197. DOI: 10.3389/neuro.01.026.2009.
    [22]
    Stimberg M, Brette R, Goodman D F M. Brian 2, an intuitive and efficient neural simulator. eLife, 2019, 8: e47314. DOI: 10.7554/eLife.47314.
    [23]
    Wijesinghe P, Srinivasan G, Panda P, Roy K. Analysis of liquid ensembles for enhancing the performance and accuracy of liquid state machines. Frontiers in Neuroscience, 2019, 13: 504. DOI: 10.3389/fnins.2019.00504.
    [24]
    Liu Q H, Ruan H B, Xing D, Tang H J, Pan G. Effective AER object classification using segmented probability-maximization learning in spiking neural networks. In Proc. the 34th AAAI Conference on Artificial Intelligence, Feb. 2020, pp.1308–1315. DOI: 10.1609/aaai.v34i02.5486.
    [25]
    Reynolds J J M, Plank J S, Schuman C D. Intelligent reservoir generation for liquid state machines using evolutionary optimization. In Proc. the 2019 International Joint Conference on Neural Networks (IJCNN), Jul. 2019, pp.1–8. DOI: 10.1109/IJCNN.2019.8852472.
    [26]
    Wu Y J, Deng L, Li G Q, Zhu J, Shi L P. Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience, 2018, 12: Article No. 331. DOI: 10.3389/fnins.2018.00331.
