Journal of Computer Science and Technology
Indexed by   SCIE, EI ...
Bimonthly    Since 1986
Journal of Computer Science and Technology 2017, Vol. 32 Issue (3) :630-643    DOI: 10.1007/s11390-017-1746-7
Regular Paper
Length-Changeable Incremental Extreme Learning Machine
You-Xi Wu1,2,3, Senior Member, CCF, Dong Liu1,3,4, He Jiang4,*, Senior Member, CCF
1. School of Computer Science and Software, Hebei University of Technology, Tianjin 300130, China;
2. School of Economics and Management, Hebei University of Technology, Tianjin 300130, China;
3. Hebei Province Key Laboratory of Big Data Calculation, Tianjin 300401, China;
4. School of Software, Dalian University of Technology, Dalian 116621, China

Abstract: Extreme learning machine (ELM) is a learning algorithm for generalized single-hidden-layer feed-forward networks (SLFNs). Incremental extreme learning machine (I-ELM) is a kind of ELM that obtains a suitable network architecture by adding hidden nodes to the SLFN one by one. Although various I-ELM-class algorithms have been proposed to improve the convergence rate or to minimize the training error, they either leave the constructive procedure of I-ELM unchanged or face the risk of over-fitting. Making the testing error converge quickly and stably therefore remains an important issue. In this paper, we propose a new incremental ELM, referred to as the length-changeable incremental extreme learning machine (LCI-ELM). It allows more than one hidden node to be added to the network at a time, and the existing network is regarded as a whole when the output weights are tuned; the output weights of the newly added hidden nodes are determined by a partial error-minimizing method. We prove that an SLFN constructed using LCI-ELM has universal approximation capability on a compact input set, as well as interpolation capability on a finite training set. Experimental results demonstrate that LCI-ELM achieves a higher convergence rate and a lower over-fitting risk than some competitive I-ELM-class algorithms.
Keywords: single-hidden-layer feed-forward network (SLFN); incremental extreme learning machine (I-ELM); random hidden node; convergence rate; universal approximation
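The construction scheme described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name `lci_elm`, the `nodes_per_step` parameter, and the sigmoid activation are all assumptions. At each step a block of random hidden nodes is added, and the block's output weights are fit to the current residual error by least squares, standing in for the partial error-minimizing step mentioned above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lci_elm(X, y, nodes_per_step=3, max_nodes=30, rng=None):
    """Sketch of an LCI-ELM-style constructive procedure: repeatedly add a
    block of random hidden nodes and fit the block's output weights to the
    current residual by least squares (a partial error-minimizing step)."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    H_blocks, betas = [], []
    residual = y.copy()
    while sum(h.shape[1] for h in H_blocks) < max_nodes:
        # Random input weights and biases for the new block of hidden nodes.
        W = rng.uniform(-1.0, 1.0, size=(d, nodes_per_step))
        b = rng.uniform(-1.0, 1.0, size=nodes_per_step)
        H_new = sigmoid(X @ W + b)            # hidden outputs of the new block
        # Output weights of the new block: least-squares fit to the residual.
        beta, *_ = np.linalg.lstsq(H_new, residual, rcond=None)
        H_blocks.append(H_new)
        betas.append(beta)
        residual = residual - H_new @ beta    # training error shrinks each step
    H = np.hstack(H_blocks)
    beta_all = np.concatenate(betas)
    return H @ beta_all, residual
```

Because the zero vector is always a feasible solution for the new block's weights, each least-squares step can only reduce (never increase) the training residual; this mirrors the monotone error decrease that I-ELM-class constructions rely on.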
Received: 2015-11-29
Funding:
This work was partially supported by the National Natural Science Foundation of China under Grant Nos. 61673159 and 61370144, and the Natural Science Foundation of Hebei Province of China under Grant No. F2016202145.

Corresponding author: He Jiang, Email: jianghe@dlut.edu.cn
About author: You-Xi Wu received his Ph.D. degree in theory and new technology of electrical engineering from Hebei University of Technology, Tianjin, in 2007. He is currently a Ph.D. supervisor and a professor with Hebei University of Technology, Tianjin. His current research interests include data mining and machine learning. Dr. Wu is a senior member of CCF.
Cite this article:
You-Xi Wu, Dong Liu, He Jiang. Length-Changeable Incremental Extreme Learning Machine[J]. Journal of Computer Science and Technology, 2017, 32(3): 630-643.
Link to this article:
http://jcst.ict.ac.cn:8080/jcst/CN/10.1007/s11390-017-1746-7
Copyright 2010 by Journal of Computer Science and Technology