A 2-Stage Strategy for Non-Stationary Signal Prediction and Recovery Using Iterative Filtering and Neural Network
-
Abstract: Predicting future information and recovering missing data in time series are two vital tasks in various application fields. They often pose big challenges, especially when the signal is nonlinear and non-stationary, which is common in practice. In this paper, we propose a hybrid 2-stage approach, named IF2FNN, to predict (including short-term and long-term predictions) and recover general types of time series. In the first stage, we decompose the original non-stationary series into several "quasi-stationary" intrinsic mode functions (IMFs) by the iterative filtering (IF) method. In the second stage, all of the IMFs are fed as inputs to a factorization machine based neural network model to perform the prediction and recovery. We test the strategy on five datasets: an artificially constructed signal (ACS) and four real-world signals, namely the length of day (LOD), the northern hemisphere land-ocean temperature index (NHLTI), the troposphere monthly mean temperature (TMMT), and the National Association of Securities Dealers Automated Quotations (NASDAQ) index. The results are compared with those obtained from other prevailing methods. Our experiments indicate that, under the same conditions, the proposed method outperforms the others for prediction and recovery according to various metrics such as mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE).
Objective: This work addresses the prediction and recovery of nonlinear, non-stationary time series data.
Highlights: (1) The iterative filtering algorithm is used to decompose the time series, reducing the impact of non-stationarity. (2) The prediction and recovery model is an improved neural network, namely a factorization machine based neural network, which has stronger nonlinear expressive power than a conventional neural network.
Method: Iterative filtering (IF) combined with a factorization machine based neural network (FNN).
Conclusion: By design, the proposed method handles both the non-stationarity of the time series data and the nonlinearity required for prediction and recovery. Experiments on five datasets, one artificially constructed and four real-world, verify this conclusion.
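To make the two-stage idea concrete, the following is a minimal, illustrative NumPy sketch rather than the authors' implementation: stage 1 extracts quasi-stationary components with a simplified iterative-filtering sift that uses a plain moving-average mask (the actual IF method chooses its filter masks and stopping criteria adaptively), and stage 2 shows only the factorization-machine layer that the full FNN model would combine with hidden layers. All function names, mask lengths, and sizes below are hypothetical placeholders.

```python
# Simplified IF2FNN-style pipeline sketch (not the paper's implementation).
import numpy as np

def sift_one_imf(x, mask_len, n_sift=10):
    """Iterative filtering: repeatedly subtract a moving-average local mean
    (instead of EMD's envelope mean) to isolate one quasi-stationary IMF."""
    imf = np.asarray(x, dtype=float).copy()
    window = np.ones(2 * mask_len + 1) / (2 * mask_len + 1)  # uniform mask (simplification)
    for _ in range(n_sift):
        imf = imf - np.convolve(imf, window, mode="same")
    return imf

def if_decompose(x, mask_lens=(4, 16, 64)):
    """Stage 1: decompose x into several IMFs plus a smooth residual trend."""
    imfs, residual = [], np.asarray(x, dtype=float).copy()
    for m in mask_lens:
        imf = sift_one_imf(residual, m)
        imfs.append(imf)
        residual = residual - imf
    imfs.append(residual)                  # keep the trend as the last component
    return np.stack(imfs, axis=1)          # shape: (len(x), n_components)

def make_windows(components, lag):
    """Build lagged feature vectors from the IMF matrix and next-step targets."""
    T, _ = components.shape
    X = np.stack([components[t - lag:t].ravel() for t in range(lag, T)])
    y = components[lag:].sum(axis=1)       # IMFs plus trend sum back to the original value
    return X, y

def fm_forward(X, w0, w, V):
    """Stage 2 core: a factorization-machine layer (bias + linear terms + all
    pairwise interactions), using the O(k*d) identity
    0.5 * sum_f [ (X V)_f^2 - (X^2 V^2)_f ]."""
    interactions = 0.5 * np.sum((X @ V) ** 2 - (X ** 2) @ (V ** 2), axis=1)
    return w0 + X @ w + interactions

# Toy usage on a synthetic non-stationary signal.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
signal = np.sin(2 * np.pi * t) + 0.5 * np.sin(15 * np.pi * t) + 0.1 * t ** 2

components = if_decompose(signal)          # stage 1: quasi-stationary IMFs + trend
X, y = make_windows(components, lag=8)

d, k = X.shape[1], 4                       # feature dimension, FM latent dimension
w0, w, V = 0.0, rng.normal(0, 0.01, d), rng.normal(0, 0.01, (d, k))
y_hat = fm_forward(X, w0, w, V)            # untrained forward pass, shown only for data flow
print(X.shape, y.shape, y_hat.shape)
```

In the full model described by the paper, the FM layer would be combined with fully connected hidden layers and trained by gradient descent on the windowed IMF features; the forward pass above is only meant to show how the decomposed components flow into the predictor.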
-
Keywords:
- iterative filtering
- factorization machine
- neural network
- time series
- data recovery