Journal of Computer Science and Technology
Indexed by   SCIE, EI ...
Bimonthly    Since 1986
Journal of Computer Science and Technology 2017, Vol. 32 Issue (3) :630-643    DOI: 10.1007/s11390-017-1746-7
Regular Paper
Length-Changeable Incremental Extreme Learning Machine
You-Xi Wu1,2,3, Senior Member, CCF, Dong Liu1,3,4, He Jiang4,*, Senior Member, CCF
1. School of Computer Science and Software, Hebei University of Technology, Tianjin 300130, China;
2. School of Economics and Management, Hebei University of Technology, Tianjin 300130, China;
3. Hebei Province Key Laboratory of Big Data Calculation, Tianjin 300401, China;
4. School of Software, Dalian University of Technology, Dalian 116621, China

Abstract    Extreme learning machine (ELM) is a learning algorithm for generalized single-hidden-layer feed-forward networks (SLFNs). To obtain a suitable network architecture, the Incremental Extreme Learning Machine (I-ELM) constructs SLFNs by adding hidden nodes one at a time. Although various I-ELM-class algorithms have been proposed to improve the convergence rate or to minimize the training error, they either leave the construction scheme of I-ELM unchanged or face the risk of over-fitting. Making the testing error converge quickly and stably is therefore an important issue. In this paper, we propose a new incremental ELM, referred to as the Length-Changeable Incremental Extreme Learning Machine (LCI-ELM). It allows more than one hidden node to be added to the network at each step, and the existing network is treated as a whole when the output weights are tuned. The output weights of the newly added hidden nodes are determined by a partial error-minimizing method. We prove that an SLFN constructed with LCI-ELM has universal approximation capability on any compact input set as well as on a finite training set. Experimental results demonstrate that LCI-ELM achieves a higher convergence rate and a lower over-fitting risk than several competitive I-ELM-class algorithms.
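The incremental construction described in the abstract can be illustrated with a minimal sketch. This is not the authors' exact algorithm (the paper's partial error-minimizing formula is not reproduced here); it is a generic incremental-ELM loop under the stated assumptions: at each step a batch of random sigmoid hidden nodes is generated (more than one per step, as in LCI-ELM), and the output weights of only the new nodes are fit by least squares against the current residual training error. The function name `lci_elm_sketch` and all parameter choices are hypothetical.

```python
import numpy as np

def lci_elm_sketch(X, y, n_steps=20, nodes_per_step=3, rng=None):
    """Hypothetical sketch of length-changeable incremental ELM training.

    At each step, `nodes_per_step` random sigmoid hidden nodes are added,
    and the output weights of the NEW nodes are chosen by least squares
    against the current residual error (a partial error-minimizing step).
    """
    rng = np.random.default_rng(rng)
    n_samples, n_features = X.shape
    residual = y.astype(float).copy()   # current training error e = y - f(X)
    W, b, beta = [], [], []             # accumulated hidden-node parameters

    for _ in range(n_steps):
        # Randomly generate a batch of new hidden nodes (random input
        # weights and biases, never retrained afterwards).
        w_new = rng.uniform(-1, 1, size=(n_features, nodes_per_step))
        b_new = rng.uniform(-1, 1, size=nodes_per_step)
        H_new = 1.0 / (1.0 + np.exp(-(X @ w_new + b_new)))  # sigmoid activations

        # Output weights of the new nodes: least-squares fit to the residual.
        beta_new, *_ = np.linalg.lstsq(H_new, residual, rcond=None)

        residual -= H_new @ beta_new    # update the residual training error
        W.append(w_new); b.append(b_new); beta.append(beta_new)

    return np.hstack(W), np.concatenate(b), np.concatenate(beta), residual

# Toy regression: the residual norm shrinks as hidden nodes are added,
# because each least-squares step can only reduce (never increase) it.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
_, _, _, residual = lci_elm_sketch(X, y, n_steps=30, nodes_per_step=3, rng=1)
print(np.linalg.norm(residual) < np.linalg.norm(y))
```

Because each step projects the residual onto the span of the new nodes' activations, the residual norm is non-increasing, which mirrors the monotone error decrease that I-ELM-class algorithms rely on; the paper's contribution lies in how the batch of new nodes and their weights are chosen so that the testing error also converges quickly and stably.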
Keywords: single-hidden-layer feed-forward network (SLFN); incremental extreme learning machine (I-ELM); random hidden node; convergence rate; universal approximation
Received: 2015-11-29

This work was partially supported by the National Natural Science Foundation of China under Grant Nos. 61673159 and 61370144, and the Natural Science Foundation of Hebei Province of China under Grant No. F2016202145.

Corresponding Authors: He Jiang     Email: jianghe@dlut.edu.cn
About author: You-Xi Wu received his Ph.D. degree in theory and new technology of electrical engineering from Hebei University of Technology, Tianjin, in 2007. He is currently a Ph.D. supervisor and a professor with Hebei University of Technology, Tianjin. His current research interests include data mining and machine learning. Dr. Wu is a senior member of CCF.
Cite this article:   
You-Xi Wu, Dong Liu, He Jiang. Length-Changeable Incremental Extreme Learning Machine[J]. Journal of Computer Science and Technology, 2017, 32(3): 630-643.
Copyright 2010 by Journal of Computer Science and Technology