Journal of Computer Science and Technology, 2017, Vol. 32, Issue 3: 630-643. doi: 10.1007/s11390-017-1746-7

Special Issue: Artificial Intelligence and Pattern Recognition

• Regular Paper •

Length-Changeable Incremental Extreme Learning Machine

You-Xi Wu (1,2,3), Senior Member, CCF; Dong Liu (1,3,4); He Jiang (4,*), Senior Member, CCF

  1. School of Computer Science and Software, Hebei University of Technology, Tianjin 300130, China;
    2. School of Economics and Management, Hebei University of Technology, Tianjin 300130, China;
    3. Hebei Province Key Laboratory of Big Data Calculation, Tianjin 300401, China;
    4. School of Software, Dalian University of Technology, Dalian 116621, China
  • Received: 2015-11-29; Revised: 2017-03-08; Online: 2017-05-05; Published: 2017-05-05
  • Contact: He Jiang, E-mail: jianghe@dlut.edu.cn
  • About author:You-Xi Wu received his Ph.D. degree in theory and new technology of electrical engineering from Hebei University of Technology, Tianjin, in 2007. He is currently a Ph.D. supervisor and a professor with Hebei University of Technology, Tianjin. His current research interests include data mining and machine learning. Dr. Wu is a senior member of CCF.
  • Supported by: This work was partially supported by the National Natural Science Foundation of China under Grant Nos. 61673159 and 61370144, and the Natural Science Foundation of Hebei Province of China under Grant No. F2016202145.

Extreme learning machine (ELM) is a learning algorithm for generalized single-hidden-layer feed-forward networks (SLFNs). To obtain a suitable network architecture, Incremental Extreme Learning Machine (I-ELM) constructs SLFNs by adding hidden nodes one by one. Although various I-ELM-class algorithms have been proposed to improve the convergence rate or to minimize the training error, they either do not change the construction manner of I-ELM or face the risk of over-fitting. Making the testing error converge quickly and stably therefore remains an important issue. In this paper, we propose a new incremental ELM, referred to as Length-Changeable Incremental Extreme Learning Machine (LCI-ELM). It allows more than one hidden node to be added to the network at a time, and the existing network is regarded as a whole when the output weights are tuned. The output weights of the newly added hidden nodes are determined by a partial error-minimizing method. We prove that an SLFN constructed by LCI-ELM has universal approximation capability on any compact input set as well as on any finite training set. Experimental results demonstrate that LCI-ELM achieves a higher convergence rate and a lower over-fitting risk than several competing I-ELM-class algorithms.
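The abstract describes a construction loop: grow the hidden layer by a group of randomly generated nodes, then fit the new nodes' output weights against the current residual error. The Python sketch below illustrates that group-wise growth under stated assumptions; the function and parameter names (lci_elm_sketch, group_size, max_nodes) are hypothetical, the sigmoid activation is our choice, and freezing previously learned output weights is a simplification rather than the paper's whole-network tuning scheme.

import numpy as np

def lci_elm_sketch(X, T, group_size=5, max_nodes=100, seed=None):
    """Illustrative sketch of group-wise incremental ELM training.

    X: (n_samples, n_features) inputs; T: (n_samples,) regression targets.
    Each step adds `group_size` random sigmoid hidden nodes and fits only
    the new nodes' output weights to the current residual by least squares
    (a partial error-minimizing step). Earlier weights stay fixed here,
    which simplifies the whole-network tuning described in the paper.
    """
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    H_blocks, beta_blocks = [], []
    residual = T.astype(float).copy()
    while sum(h.shape[1] for h in H_blocks) < max_nodes:
        # Randomly generate input weights and biases for the new node group.
        W = rng.uniform(-1.0, 1.0, size=(n_features, group_size))
        b = rng.uniform(-1.0, 1.0, size=group_size)
        H_new = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid activations
        # Least-squares fit of the new output weights to the residual error.
        beta_new, *_ = np.linalg.lstsq(H_new, residual, rcond=None)
        residual -= H_new @ beta_new
        H_blocks.append(H_new)
        beta_blocks.append(beta_new)
    return np.hstack(H_blocks), np.concatenate(beta_blocks), residual

On a toy regression problem, the norm of the returned residual should shrink as max_nodes grows, mirroring the training-error convergence the abstract claims for node-by-node growth while adding several nodes per step.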
