Special Issue: Software Systems
Andor D, Alberti C, Weiss D, Severyn A, Presta A, Ganchev K, Petrov S, Collins M. Globally normalized transition-based neural networks. arXiv:1603.06042, 2016. https://arxiv.org/abs/1603.06042, June 2020.
Hinton G, Deng L, Yu D, Dahl G, Mohamed A, Jaitly N, Senior A, Vanhoucke V, Nguyen P, Kingsbury B. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 2012, 29(6):82-97.
 He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In Proc. the IEEE Conference on Computer Vision and Pattern Recognition, June 2016, pp.770-778.
 Wang X, Huang C, Yao L, Benatallah B, Dong M. A survey on expert recommendation in community question answering. Journal of Computer Science and Technology, 2018, 33(4):625-653.
 Liu Q, Zhao H K, Wu L, Li Z, Chen E H. Illuminating recommendation by understanding the explicit item relations. Journal of Computer Science and Technology, 2018, 33(4):739-755.
 Silver D, Huang A, Maddison C J et al. Mastering the game of Go with deep neural networks and tree search. Nature, 2016, 529(7587):484-489.
 Ameur H, Jamoussi S, Hamadou A B. A new method for sentiment analysis using contextual auto-encoders. Journal of Computer Science and Technology, 2018, 33(6):1307-1319.
 Bojarski M, Testa D D, Dworakowski D et al. End to end learning for self-driving cars. arXiv:1604.07316, 2016. https://arxiv.org/abs/1604.07316, June 2020.
 Esteva A, Kuprel B, Novoa R A, Ko J, Swetter S M, Blau H M, Thrun S. Dermatologist-level classification of skin cancer with deep neural networks. Nature, 2017, 542(7639):115-118.
Yuan Z, Lu Y, Wang Z, Xue Y. Droid-Sec: Deep learning in Android malware detection. ACM SIGCOMM Computer Communication Review, 2014, 44(4):371-372.
Li Z, Ma X, Xu C, Xu J, Cao C, Lü J. Operational calibration: Debugging confidence errors for DNNs in the field. arXiv:1910.02352, 2019. https://arxiv.org/abs/1910.02352, Sept. 2020.
 Li Z, Ma X, Xu C, Cao C, Xu J, Lü J. Boosting operational DNN testing efficiency through conditioning. In Proc. the 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, August 2019, pp.499-509.
LeCun Y, Bengio Y, Hinton G. Deep learning. Nature, 2015, 521(7553):436-444.
Burrell J. How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 2016, 3(1):Article No. 2053951715622512.
 Goodfellow I J, Shlens J, Szegedy C. Explaining and harnessing adversarial examples. arXiv:1412.6572, 2014. https://arxiv.org/abs/1412.6572, June 2020.
Moosavi-Dezfooli S, Fawzi A, Frossard P. DeepFool: A simple and accurate method to fool deep neural networks. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, June 2016, pp.2574-2582.
Carlini N, Wagner D. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proc. the 10th ACM Workshop on Artificial Intelligence and Security, November 2017, pp.3-14.
Athalye A, Carlini N, Wagner D. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv:1802.00420, 2018. https://arxiv.org/abs/1802.00420, June 2020.
Katz G, Barrett C, Dill D L, Julian K, Kochenderfer M J. Reluplex: An efficient SMT solver for verifying deep neural networks. In Proc. the 29th International Conference on Computer Aided Verification, July 2017, pp.97-117.
 Bastani O, Ioannou Y, Lampropoulos L, Vytiniotis D, Nori A, Criminisi A. Measuring neural net robustness with constraints. In Proc. the Annual Conference on Neural Information Processing Systems, December 2016, pp.2613-2621.
 Hendrycks D, Gimpel K. A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv:1610.02136, 2016. https://arxiv.org/abs/1610.02136, June 2020.
 Weng T, Zhang H, Chen H, Song Z, Hsieh C, Boning D, Dhillon I S, Daniel L. Towards fast computation of certified robustness for ReLU networks. arXiv:1804.09699, 2018. https://arxiv.org/abs/1804.09699, June 2020.
 Singh G, Gehr T, Püschel M, Vechev M. An abstract domain for certifying neural networks. Proceedings of the ACM on Programming Languages, 2019, 3(POPL):Article No. 41.
 Carlini N, Wagner D. Towards evaluating the robustness of neural networks. In Proc. the 2017 IEEE Symposium on Security and Privacy, May 2017, pp.39-57.
 Feinman R, Curtin R R, Shintre S, Gardner A B. Detecting adversarial samples from artifacts. arXiv:1703.00410, 2017. https://arxiv.org/abs/1703.00410, June 2020.
 Ma X, Li B, Wang Y, Erfani S M, Wijewickrema S, Schoenebeck G, Song D, Houle M E, Bailey J. Characterizing adversarial subspaces using local intrinsic dimensionality. arXiv:1801.02613, 2018. https://arxiv.org/abs/1801.02613, June 2020.
 Wang Y, Li Z, Xu J, Yu P, Ma X. Fast robustness prediction for deep neural network. In Proc. the 11th Asia-Pacific Symposium on Internetware, Oct. 2019.
 Kurakin A, Goodfellow I, Bengio S. Adversarial examples in the physical world. arXiv:1607.02533, 2016. https://arxiv.org/abs/1607.02533, June 2020.
 Papernot N, McDaniel P, Jha S, Fredrikson M, Celik Z B, Swami A. The limitations of deep learning in adversarial settings. In Proc. the 2016 IEEE European Symposium on Security and Privacy, March 2016, pp.372-387.
Huang X, Kroening D, Kwiatkowska M, Ruan W, Sun Y, Thamo E, Wu M, Yi X. Safety and trustworthiness of deep neural networks: A survey. arXiv:1812.08342, 2018. https://arxiv.org/abs/1812.08342, June 2020.
 Huang X, Kwiatkowska M, Wang S, Wu M. Safety verification of deep neural networks. In Proc. the 29th International Conference on Computer Aided Verification, July 2017, pp.3-29.
 Wong E, Kolter J Z. Provable defenses against adversarial examples via the convex outer adversarial polytope. arXiv:1711.00851, 2017. https://arxiv.org/abs/1711.00851, June 2020.
 Gopinath D, Pasareanu C S, Wang K, Zhang M, Khurshid S. Symbolic execution for attribution and attack synthesis in neural networks. In Proc. the 41st IEEE/ACM International Conference on Software Engineering, May 2019, pp.282-283.
Pei K, Cao Y, Yang J, Jana S. DeepXplore: Automated whitebox testing of deep learning systems. In Proc. the 26th Symposium on Operating Systems Principles, October 2017, pp.1-18.
Ma L, Juefei-Xu F, Zhang F et al. DeepGauge: Multi-granularity testing criteria for deep learning systems. In Proc. the 33rd ACM/IEEE International Conference on Automated Software Engineering, September 2018, pp.120-131.
 Ma L, Zhang F, Xue M, Li B, Liu Y, Zhao J, Wang Y. Combinatorial testing for deep learning systems. arXiv:1806.07723, 2018. https://arxiv.org/abs/1806.07723, June 2020.
 Zong B, Song Q, Min M, Cheng W, Lumezanu C, Cho D, Chen H. Deep autoencoding Gaussian mixture model for unsupervised anomaly detection. In Proc. International Conference on Learning Representations, February 2018.
 Santhanam G K, Grnarova P. Defending against adversarial attacks by leveraging an entire GAN. arXiv:1805.10652, 2018. https://arxiv.org/abs/1805.10652, June 2020.
 Grosse K, Manoharan P, Papernot N, Backes M, McDaniel P. On the (statistical) detection of adversarial examples. arXiv:1702.06280, 2017. https://arxiv.org/abs/1702.06280, June 2020.
Xu W, Evans D, Qi Y. Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv:1704.01155, 2017. https://arxiv.org/abs/1704.01155, June 2020.
 Benesty J, Chen J, Huang Y, Cohen I. Pearson correlation coefficient. In Noise Reduction in Speech Processing, Cohen I, Huang Y, Chen J, Benesty J (eds.), Springer, 2009, pp.1-4.
LeCun Y, Boser B, Denker J S, Henderson D, Howard R E, Hubbard W, Jackel L D. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1989, 1(4):541-551.
 Krizhevsky A. Learning multiple layers of features from tiny images. Technical Report, University of Toronto, 2009. http://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf, June 2020.
 Netzer Y, Wang T, Coates A, Bissacco A, Wu B, Ng A Y. Reading digits in natural images with unsupervised feature learning. In Proc. the NIPS Workshop on Deep Learning and Unsupervised Feature Learning, Dec. 2011.
Deng J, Dong W, Socher R, Li L J, Li K, Li F F. ImageNet: A large-scale hierarchical image database. In Proc. the 2009 IEEE Conference on Computer Vision and Pattern Recognition, June 2009, pp.248-255.
LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998, 86(11):2278-2324.
 Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2014. https://arxiv.org/abs/1409.1556, June 2020.
 Kim J, Feldt R, Yoo S. Guiding deep learning system testing using surprise adequacy. In Proc. the 41st International Conference on Software Engineering, May 2019, pp.1039-1049.