
• Special Section on Software Systems 2020-Part 2 •


### Verifying ReLU Neural Networks from a Model Checking Perspective

Wan-Wei Liu1, Senior Member, CCF, Fu Song2,3, Senior Member, CCF, Tang-Hao-Ran Zhang1, and Ji Wang1,4, Distinguished Member, CCF

1 College of Computer Science, National University of Defense Technology, Changsha 410073, China;
2 School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China;
3 Shanghai Engineering Research Center of Intelligent Vision and Imaging, Shanghai 201210, China;
4 State Key Laboratory for High Performance Computing, Changsha 410073, China
• Received: 2020-04-12; Revised: 2020-10-06; Online: 2020-11-20; Published: 2020-12-01
• About author: Wan-Wei Liu received his Ph.D. degree in computer science from the National University of Defense Technology, Changsha, in 2009. He is an associate professor at the College of Computer Science, National University of Defense Technology, Changsha, and a senior member of CCF. His research interests include theoretical computer science (particularly automata theory and temporal logic), formal methods (particularly verification), and software engineering.
• Supported by:
This work is supported by the National Natural Science Foundation of China under Grant No. 61872371, the Open Fund from the State Key Laboratory of High Performance Computing of China (HPCL) under Grant No. 202001-07, and the National Key Research and Development Program of China under Grant No. 2018YFB0204301.

1. Context: Neural networks are an important computing model in the AI domain and have found very wide application. However, the properties of such models, especially value-related properties concerning inputs and outputs, often lack effective means of specification and verification. Meanwhile, model checking is a highly effective technique for assuring the correctness of software and hardware.
2. Objective: The goal of this paper is to establish a model checking framework for ReLU neural networks; more precisely, a verification framework that combines temporal logic with numerical methods.
3. Method: To describe specifications, we define a new temporal logic, ReTL, whose semantics is given with respect to ReLU neural networks. For its fragment with quantifier alternation depth at most 2, verification proceeds as follows: first, a ReLU system is obtained from the given network and the specified property; then, by ReLU-elimination, this ReLU system is transformed into a set of linear equation/inequality systems; finally, the PE-decision algorithm proposed in this paper completes the verification of the original problem.
4. Result & Findings: We prove that the model checking problem for the above fragment of ReTL is in EXPSPACE. Based on the proposed approach, a prototype tool supporting ReTL model checking has been designed and implemented.
5. Conclusions: The proofs are mainly based on results about systems of linear equations/inequalities. Experimental results demonstrate the feasibility of model checking the fragment of ReTL with quantifier alternation depth at most 2. This work offers insights for further research on model checking techniques for neural networks. Other problems related to ReTL (e.g., the synthesis problem) also deserve further study.
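The ReLU-elimination step in item 3 can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: each ReLU constraint y = max(x, 0) is case-split into an active regime (y = x, x ≥ 0) and an inactive regime (y = 0, x ≤ 0), so an n-node ReLU system induces up to 2^n purely linear systems, one per activation pattern. The helper names and the tiny weight matrices below are hypothetical.

```python
import numpy as np
from itertools import product

def relu(x):
    return np.maximum(x, 0.0)

def matching_pattern(pre_activations):
    """Return the activation pattern (one linear regime per ReLU node)
    that a concrete pre-activation vector falls into."""
    return tuple(bool(v >= 0.0) for v in pre_activations)

# Hypothetical hidden layer: pre-activation z = W x + b.
W = np.array([[1.0, -1.0], [2.0, 0.5]])
b = np.array([-0.5, 0.0])

x = np.array([1.0, 0.25])
z = W @ x + b
y = relu(z)

pattern = matching_pattern(z)
# Under a fixed pattern, each node is linear: y_i = z_i if active, else 0.
y_linear = np.where(pattern, z, 0.0)
assert np.allclose(y, y_linear)

# Enumerating all patterns yields the family of candidate linear systems.
num_systems = len(list(product((True, False), repeat=len(z))))
print(pattern, num_systems)
```

Each pattern fixes a convex polyhedral region of the input space, which is what makes the resulting systems amenable to linear equation/inequality reasoning.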

Abstract: Neural networks, as an important computing model, have wide application in the artificial intelligence (AI) domain. From the perspective of computer science, such a computing model requires a formal description of its behaviors, particularly of the relation between input and output, and such specifications ought to be verified automatically. ReLU (rectified linear unit) neural networks are intensively used in practice. In this paper, we present ReLU Temporal Logic (ReTL), whose semantics is defined with respect to ReLU neural networks and which can specify value-related properties of a network. We show that the model checking problem for the Σ2Π2 fragment of ReTL, which can express properties such as output reachability, is decidable in EXPSPACE. We have also implemented our algorithm in a prototype tool, and experimental results demonstrate the feasibility of the presented model checking approach.
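Output reachability, the kind of property the abstract mentions, can be decided for a one-hidden-layer ReLU network in the style described above: enumerate activation patterns and solve one linear feasibility problem per pattern. The sketch below is an assumption-laden illustration, not the paper's PE-decision algorithm; the network weights, the threshold `tau`, and the helper `reachable` are all hypothetical, and `scipy.optimize.linprog` stands in for a generic LP solver.

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

# Hypothetical network: output o = c . ReLU(W x + b), input x in a box.
W = np.array([[1.0, -1.0], [0.5, 2.0]])   # hidden-layer weights
b = np.array([0.0, -1.0])                  # hidden-layer biases
c = np.array([1.0, 1.0])                   # output weights
tau = 1.0                                  # is some output >= tau reachable?
box = [(-1.0, 1.0), (-1.0, 1.0)]           # input domain

def reachable(W, b, c, tau, box):
    """Enumerate ReLU activation patterns; for each, check by LP whether
    some input in the box realizes the pattern and drives o >= tau."""
    n = W.shape[0]
    for pattern in product((True, False), repeat=n):
        A_ub, b_ub = [], []
        out_w = np.zeros(W.shape[1])
        out_b = 0.0
        for i, active in enumerate(pattern):
            if active:              # (Wx+b)_i >= 0  <=>  -W_i x <= b_i
                A_ub.append(-W[i]); b_ub.append(b[i])
                out_w += c[i] * W[i]; out_b += c[i] * b[i]
            else:                   # (Wx+b)_i <= 0  <=>   W_i x <= -b_i
                A_ub.append(W[i]);  b_ub.append(-b[i])
        # Output constraint: out_w . x + out_b >= tau  <=>  -out_w x <= out_b - tau.
        A_ub.append(-out_w); b_ub.append(out_b - tau)
        res = linprog(np.zeros(W.shape[1]), A_ub=np.array(A_ub),
                      b_ub=np.array(b_ub), bounds=box, method="highs")
        if res.status == 0:         # feasible: a witness input exists
            return True, res.x
    return False, None

ok, witness = reachable(W, b, c, tau, box)
print(ok, witness)
```

Since each pattern yields one LP, the overall check is exponential in the number of ReLU nodes, which is consistent with the EXPSPACE upper bound stated in the abstract (the paper's fragment also handles quantifier alternation, which this sketch does not).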
