Citation: Zhang HB, Wang P, Zhang MM et al. Shapelet based two-step time series positive and unlabeled learning. JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY 38(6): 1387−1402 Nov. 2023. DOI: 10.1007/s11390-022-1320-9.

Shapelet Based Two-Step Time Series Positive and Unlabeled Learning

Funds: The work was supported by the National Key Research and Development Program of China under Grant No. 2020YFB1710001.
More Information
  • Author Bio:

    Han-Bo Zhang received his B.S. degree in information engineering from East China University of Science and Technology, Shanghai, in 2013. He received his M.S. degree in computer system architecture from the China Academy of Electronics and Information Technology, Beijing, in 2016. He is currently a Ph.D. candidate in the School of Computer Science, Fudan University, Shanghai. His research interests include databases, data mining, series data processing, and low-quality data processing.

    Peng Wang received his Ph.D. degree in computer science from Fudan University, Shanghai, in 2007. He is currently a professor in the School of Computer Science, Fudan University, Shanghai. His research interests include databases, data mining, and series data processing. He has published more than 30 papers in refereed international journals and conference proceedings.

    Ming-Ming Zhang received her B.S. degree in software engineering from Henan University, Kaifeng, in 2019. She is currently pursuing her M.S. degree in software engineering at Fudan University, Shanghai. Her research interests include motif discovery, time series similarity search, and data mining.

    Wei Wang received his Ph.D. degree in computer science from Fudan University, Shanghai, in 1998. He is currently a professor in the School of Computer Science, Fudan University, Shanghai. His research interests include databases, data mining, and series data processing. He has published more than 100 papers in refereed international journals and conference proceedings.

  • Corresponding author:

    pengwang5@fudan.edu.cn

  • Received Date: January 24, 2021
  • Accepted Date: December 21, 2022
  • Abstract: In the last decade, there has been significant progress in time series classification. However, in real-world industrial settings, obtaining high-quality labeled data is expensive and difficult, so the positive and unlabeled learning (PU-learning) problem has attracted increasing attention. Current PU-learning approaches for time series data suffer from low accuracy due to the lack of negatively labeled time series. In this paper, we propose a novel shapelet based two-step (2STEP) PU-learning approach. In the first step, we generate shapelet features from the positive time series and use them to select a set of reliable negative examples. In the second step, based on both the positive and the selected negative time series, we select the final features and build the classification model. The experimental results show that 2STEP improves the average F1 score on 15 datasets by 9.1% compared with the baselines, and achieves the highest F1 score on 10 out of the 15 datasets.

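The two-step procedure described in the abstract can be illustrated with a minimal Python sketch. This is not the authors' 2STEP implementation: the exhaustive shapelet enumeration, the distance-to-positive-centroid heuristic for picking reliable negatives, the logistic-regression classifier, and the parameter names (shapelet_len, step, neg_ratio) are all simplifying assumptions made here for illustration only.

    # Minimal sketch of a shapelet-based two-step PU-learning pipeline.
    # Illustrative only; NOT the paper's 2STEP algorithm.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def subsequence_distance(series, shapelet):
        # Minimum Euclidean distance between the shapelet and every
        # same-length subsequence of the series (no z-normalization here).
        m = len(shapelet)
        return min(np.linalg.norm(series[i:i + m] - shapelet)
                   for i in range(len(series) - m + 1))

    def extract_candidate_shapelets(positive_series, length, step):
        # Enumerate subsequences of the positive series as shapelet candidates.
        return [s[i:i + length]
                for s in positive_series
                for i in range(0, len(s) - length + 1, step)]

    def shapelet_transform(series_list, shapelets):
        # Represent each series by its distance to every shapelet.
        return np.array([[subsequence_distance(s, sh) for sh in shapelets]
                         for s in series_list])

    def two_step_pu_classifier(positives, unlabeled,
                               shapelet_len=20, step=10, neg_ratio=0.3):
        # Step 1: build shapelet features from the positive series, then pick
        # "reliable negatives" as the unlabeled series farthest from the
        # positive feature centroid (a simplifying heuristic).
        shapelets = extract_candidate_shapelets(positives, shapelet_len, step)
        pos_feat = shapelet_transform(positives, shapelets)
        unl_feat = shapelet_transform(unlabeled, shapelets)
        centroid = pos_feat.mean(axis=0)
        scores = np.linalg.norm(unl_feat - centroid, axis=1)
        n_neg = max(1, int(neg_ratio * len(unlabeled)))
        neg_idx = np.argsort(scores)[-n_neg:]   # farthest -> likely negative

        # Step 2: train an ordinary classifier on positives vs. selected negatives.
        X = np.vstack([pos_feat, unl_feat[neg_idx]])
        y = np.concatenate([np.ones(len(positives)), np.zeros(n_neg)])
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        return clf, shapelets

    # Toy usage on synthetic data: positive series contain a bump,
    # the unlabeled pool is a mix of bump and no-bump series.
    rng = np.random.default_rng(0)

    def make_series(bump):
        x = rng.normal(0.0, 0.3, 100)
        if bump:
            x[40:60] += 2.0
        return x

    positives = [make_series(True) for _ in range(10)]
    unlabeled = [make_series(i % 2 == 0) for i in range(20)]
    clf, shapelets = two_step_pu_classifier(positives, unlabeled)
    test = shapelet_transform([make_series(True), make_series(False)], shapelets)
    print(clf.predict(test))   # typically [1. 0.] on this toy data

The sketch only conveys the overall two-step structure: shapelet features derived from the positives drive the selection of reliable negatives from the unlabeled pool, after which a standard supervised classifier is trained; the paper's actual shapelet and negative-example selection is more elaborate.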