
• Special Section of Advances in Computer Science and Technology—Current Advances in the NSFC Joint Research Fund for Overseas Chinese Scholars and Scholars in Hong Kong and Macao 2014-2017 (Part 2) •

### A 2-Stage Strategy for Non-Stationary Signal Prediction and Recovery Using Iterative Filtering and Neural Network

Feng Zhou1, Hao-Min Zhou2, Zhi-Hua Yang1, Li-Hua Yang3,4, Senior Member, IEEE

1. School of Information Science, Guangdong University of Finance and Economics, Guangzhou 510320, China
2. School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332, U.S.A.
3. Guangdong Province Key Laboratory of Computational Science, Guangzhou 510275, China
4. School of Mathematics, Sun Yat-sen University, Guangzhou 510275, China
• Received: 2018-10-14   Revised: 2019-01-18   Online: 2019-03-05   Published: 2019-03-16
• About author: Feng Zhou received his B.S. degree in information computing science from Minnan Normal University, Zhangzhou, in 2010, and his Ph.D. degree in computational mathematics from Sun Yat-sen University, Guangzhou, in 2015. From 2013 to 2014, he was a visiting student in the School of Mathematics, Georgia Institute of Technology, Atlanta, supported by the China Scholarship Council (CSC). From 2015 to 2017, he worked successively at Baidu and Tencent as an algorithm engineer on recommendation systems. He is now an assistant professor at the School of Information Science, Guangdong University of Finance and Economics, Guangzhou. His research interests include signal analysis, machine learning, and ensemble learning.
• Supported by:
This work was partially supported by the National Natural Science Foundation of China under Grant Nos. 11771458, 431015 and 61628203, the U.S. National Science Foundation under Grant Nos. DMS-1620345 and DMS-1830225, the U.S. Office of Naval Research (ONR) under Award No. N00014-18-1-2852, the Guangdong Youth Innovation Talent Project (Natural Sciences) under Grant No. 2017KQNCX083, the Guangdong Philosophy and Social Science Project of China under Grant No. GD15CGL11, and the Guangzhou Science and Technology Project of China under Grant No. 201707010495.

Predicting future values and recovering missing data are two vital tasks in time series analysis across many application fields. Both pose significant challenges, especially when the signal is nonlinear and non-stationary, which is common in practice. In this paper, we propose a hybrid two-stage approach, named IF2FNN, to predict (both short-term and long-term) and recover general types of time series. In the first stage, we decompose the original non-stationary series into several "quasi-stationary" intrinsic mode functions (IMFs) by the iterative filtering (IF) method. In the second stage, all of the IMFs are fed as inputs to a factorization machine based neural network model to perform the prediction and recovery. We test the strategy on five datasets: an artificially constructed signal (ACS) and four real-world signals, namely the length of day (LOD), the northern hemisphere land-ocean temperature index (NHLTI), the troposphere monthly mean temperature (TMMT), and the National Association of Securities Dealers Automated Quotations index (NASDAQ). The results are compared with those obtained from other prevailing methods. Our experiments indicate that, under the same conditions, the proposed method outperforms the others for both prediction and recovery according to various metrics, including mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE).
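The first stage described above, decomposing a non-stationary series into quasi-stationary IMFs by iterative filtering, can be sketched as follows. This is a deliberately simplified illustration with fixed moving-average mask lengths and a fixed number of sifting iterations, not the paper's exact algorithm (which selects filter lengths adaptively and uses a convergence criterion); all function names and parameters here are hypothetical.

```python
import numpy as np

def iterative_filtering(x, mask_len, n_sift=10):
    """Extract one IMF by repeatedly subtracting a local mean.

    Iterative filtering replaces EMD's envelope mean with a low-pass
    (here: simple moving-average) estimate of the local mean.
    """
    imf = np.asarray(x, dtype=float).copy()
    kernel = np.ones(mask_len) / mask_len
    for _ in range(n_sift):
        local_mean = np.convolve(imf, kernel, mode="same")
        imf = imf - local_mean
    return imf

def decompose(x, mask_lens):
    """Peel off IMFs one at a time; the final residual is the trend.

    Short masks extract fast oscillations first; longer masks then
    capture progressively slower components.
    """
    imfs = []
    resid = np.asarray(x, dtype=float).copy()
    for m in mask_lens:
        imf = iterative_filtering(resid, m)
        imfs.append(imf)
        resid = resid - imf
    return np.array(imfs), resid

# Toy example: a fast oscillation riding on a slow one.
t = np.linspace(0.0, 1.0, 500)
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 5 * t)
imfs, resid = decompose(x, mask_lens=[5, 41])
# The decomposition is exactly additive by construction:
# sum of IMFs + residual reconstructs the input signal.
```

In the full IF2FNN pipeline, sliding windows of these IMFs (and the residual) would then be fed as inputs to the factorization machine based neural network of the second stage to perform prediction or recovery.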
