Journal of Computer Science and Technology ›› 2021, Vol. 36 ›› Issue (4): 741-761. DOI: 10.1007/s11390-021-1350-8

Special Issue: Data Management and Data Mining

• Special Section on AI4DB and DB4AI •

WATuning: A Workload-Aware Tuning System with Attention-Based Deep Reinforcement Learning

Jia-Ke Ge1,2, Yan-Feng Chai2,3, and Yun-Peng Chai1,2,*, Member, CCF        

    1 Key Laboratory of Data Engineering and Knowledge Engineering of Ministry of Education, Renmin University of China, Beijing 100872, China;
    2 School of Information, Renmin University of China, Beijing 100872, China;
    3 College of Computer Science and Technology, Taiyuan University of Science and Technology, Taiyuan 030027, China
  • Received: 2021-02-01; Revised: 2021-06-24; Online: 2021-07-05; Published: 2021-07-30
  • Contact: Yun-Peng Chai, E-mail: ypchai@ruc.edu.cn
  • About author: Jia-Ke Ge received his B.E. degree in software engineering from Shanxi University, Taiyuan, in 2017, and his M.S. degree in software engineering from Beijing University of Technology, Beijing, in 2020. He is currently a Ph.D. candidate at Renmin University of China, Beijing. His research interests include the intersection of key-value storage systems and machine learning.
  • Supported by:
    This work was supported by the National Key Research and Development Program of China under Grant No. 2019YFE0198600 and the National Natural Science Foundation of China under Grant Nos. 61972402, 61972275, and 61732014.

Configuration tuning is essential for optimizing the performance of systems such as databases and key-value stores, where high performance usually means high throughput and low latency. At present, most tuning tasks are performed manually (e.g., by database administrators), but it is difficult for humans to achieve high performance across diverse systems and environments. In recent years, several studies have addressed the tuning of traditional database systems, but these methods all have limitations. In this article, we propose WATuning, a tuning system based on attention-based deep reinforcement learning, which can adapt to changes in workload characteristics and optimize system performance efficiently and effectively. Firstly, we design ATT-Tune, the core algorithm of WATuning, to accomplish the tuning task. The algorithm uses workload characteristics to generate a weight matrix that is applied to the internal metrics of the system, and then selects an appropriate configuration based on the weighted internal metrics. Secondly, WATuning can generate multiple instance models according to workload changes, so that it can provide targeted recommendation services for different types of workloads. Finally, WATuning can dynamically fine-tune itself according to the continuously changing workload in practical applications, so that it fits the actual environment better when making recommendations. The experimental results show that, compared with CDBTune, an existing state-of-the-art tuning method, WATuning improves throughput by 52.6% and reduces latency by 31%.
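To make the attention-weighting idea concrete, the following minimal sketch (not the authors' published code) illustrates the mechanism the abstract describes: a small network turns workload characteristics into a weight vector over the system's internal metrics, and an actor network maps the weighted metrics to recommended knob settings. All class names, layer sizes, and dimensions (WorkloadAttention, ConfigActor, the 8/63/16 sizes) are illustrative assumptions; in the full system the weighted metrics would serve as the state of a deep reinforcement learning agent, with throughput and latency changes forming the reward.

    # Hypothetical sketch of workload-conditioned attention over internal metrics.
    import torch
    import torch.nn as nn

    class WorkloadAttention(nn.Module):
        """Maps workload characteristics to attention weights over internal metrics."""
        def __init__(self, workload_dim: int, metric_dim: int, hidden: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(workload_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, metric_dim),
            )

        def forward(self, workload: torch.Tensor) -> torch.Tensor:
            # Softmax yields one normalized weight per internal metric.
            return torch.softmax(self.net(workload), dim=-1)

    class ConfigActor(nn.Module):
        """Recommends knob settings (scaled to [0, 1]) from attention-weighted metrics."""
        def __init__(self, metric_dim: int, knob_dim: int, hidden: int = 128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(metric_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, knob_dim),
                nn.Sigmoid(),
            )

        def forward(self, weighted_metrics: torch.Tensor) -> torch.Tensor:
            return self.net(weighted_metrics)

    if __name__ == "__main__":
        workload_dim, metric_dim, knob_dim = 8, 63, 16   # illustrative sizes only
        attention = WorkloadAttention(workload_dim, metric_dim)
        actor = ConfigActor(metric_dim, knob_dim)

        workload = torch.rand(1, workload_dim)   # e.g., read/write ratio, scan ratio
        metrics = torch.rand(1, metric_dim)      # e.g., buffer hit rate, lock waits

        weights = attention(workload)            # weights conditioned on the workload
        knobs = actor(weights * metrics)         # weighted metrics -> recommended knobs
        print(knobs.shape)                       # torch.Size([1, 16])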

Key words: attention mechanism; auto-tuning system; reinforcement learning (RL); workload-aware

[1] O'Neil P, Cheng E, Gawlick D, O'Neil E. The log-structured merge-tree (LSM-tree). Acta Informatica, 1996, 33(4):351-385. DOI:10.1007/s002360050048.
[2] Dong S Y, Callaghan M, Galanis L, Borthakur D, Savor T, Stumm M. Optimizing space amplification in RocksDB. In Proc. the 8th Biennial Conference on Innovative Data Systems Research, Jan. 2017.
[3] Chai Y P, Chai Y F, Wang X, Wei H C, Bao N, Liang Y S. LDC:A lower-level driven compaction method to optimize SSD-oriented key-value stores. In Proc. the 35th IEEE International Conference on Data Engineering, April 2019, pp.722-733. DOI:10.1109/ICDE.2019.00070.
[4] Chai Y P, Chai Y F, Wang X, Wei H C, Wang Y Y. Adaptive lower-level driven compaction to optimize LSM-Tree key-value stores. IEEE Transactions on Knowledge and Data Engineering. DOI:10.1109/TKDE.2020.3019264.
[5] Zhu Y Q, Liu J X, Guo M Y, Bao Y G, Ma W L, Liu Z Y, Song K P, Yang Y C. BestConfig:Tapping the performance potential of systems via automatic configuration tuning. In Proc. ACM Symposium on Cloud Computing, Sept. 2017, pp.338-350. DOI:10.1145/3127479.3128605.
[6] Van Aken D, Pavlo A, Gordon G J, Zhang B H. Automatic database management system tuning through large-scale machine learning. In Proc. the 2017 ACM International Conference on Management of Data, May 2017, pp.1009-1024. DOI:10.1145/3035918.3064029.
[7] Zhang J, Liu L, Ran M, Li Z K, Liu Y, Zhou K, Li G L, Xiao Z L, Cheng B, Xing J S, Wang Y T, Cheng T H. An end-to-end automatic cloud database tuning system using deep reinforcement learning. In Proc. the 2019 International Conference on Management of Data, June 2019, pp.415-432. DOI:10.1145/3299869.3300085.
[8] Li G L, Zhou X H, Li S F, Gao B. QTune:A query-aware database tuning system with deep reinforcement learning. Proceedings of the VLDB Endowment, 2019, 12(12):2118-2130. DOI:10.14778/3352063.3352129.
[9] Lillicrap T P, Hunt J J, Pritzel A, Heess N, Erez T, Tassa Y, Silver D, Wierstra D. Continuous control with deep reinforcement learning. arXiv:1509.02971, 2015. https://arxiv.org/abs/1509.02971, Jun. 2021.
[10] Van Hasselt H. Double Q-learning. In Proc. the 24th Annual Conference on Neural Information Processing Systems, Dec. 2010, pp.2613-2621.
[11] Kingma D, Ba J. Adam:A method for stochastic optimization. In Proc. the 3rd International Conference on Learning Representations, May 2015.
[12] Munos R, Moore A. Variable resolution discretization in optimal control. Machine Learning, 2002, 49(2/3):291-323. DOI:10.1023/A:1017992615625.
[13] Mnih V, Kavukcuoglu K, Silver D et al. Human-level control through deep reinforcement learning. Nature, 2015, 518(7540):529-533. DOI:10.1038/nature14236.
[14] Ban T W. An autonomous transmission scheme using dueling DQN for D2D communication networks. IEEE Transactions on Vehicular Technology, 2020, 69(12):16348-16352. DOI:10.1109/TVT.2020.3041458.
[15] Chen L, Hu X M, Tang B, Cheng Y. Conditional DQN-based motion planning with fuzzy logic for autonomous driving. IEEE Transactions on Intelligent Transportation Systems. DOI:10.1109/TITS.2020.3025671.
[16] Huang H J, Yang Y C, Wang H, Ding Z G, Sari H, Adachi F. Deep reinforcement learning for UAV navigation through massive MIMO technique. IEEE Transactions on Vehicular Technology, 2020, 69(1):1117-1121. DOI:10.1109/TVT.2019.2952549.
[17] Li J X, Yao L, Xu X, Cheng B, Ren J K. Deep reinforcement learning for pedestrian collision avoidance and human-machine cooperative driving. Information Sciences, 2020, 532:110-124. DOI:10.1016/j.ins.2020.03.105.
[18] Yoo H, Kim B, Kim J W, Lee J H. Reinforcement learning based optimal control of batch processes using Monte-Carlo deep deterministic policy gradient with phase segmentation. Computers & Chemical Engineering, 2021, 144:Article No. 107133. DOI:10.1016/j.compchemeng.2020.107133.
[19] He X M, Lu H D, Du M, Mao Y C, Wang K. QoE-based task offloading with deep reinforcement learning in edge-enabled Internet of Vehicles. IEEE Transactions on Intelligent Transportation Systems, 2020, 22(4):2252-2261. DOI:10.1109/TITS.2020.3016002.
[20] Li L Y, Xu H, Ma J, Zhou A Z. Joint EH time and transmit power optimization based on DDPG for EH communications. IEEE Communications Letters, 2020, 24(9):2043-2046. DOI:10.1109/LCOMM.2020.2999914.
[21] Nguyen D Q, Vien N A, Dang V H, Chung T. Asynchronous framework with Reptile+ algorithm to meta learn partially observable Markov decision process. Applied Intelligence, 2020, 50(11):4050-4062. DOI:10.1007/s10489-020-01748-7.
[22] Gheisarnejad M, Khooban M H. IoT-based DC/DC deep learning power converter control:Real-time implementation. IEEE Transactions on Power Electronics, 2020, 35(12):13621-13630. DOI:10.1109/TPEL.2020.2993635.
[23] Tang Z T, Shao K, Zhao D B, Zhu Y H. Recent progress of deep reinforcement learning:From AlphaGo to AlphaGo Zero. Control Theory & Applications, 2017, 34(12):1529-1546. DOI:10.7641/CTA.2017.70808. (in Chinese)
[24] Silver D, Schrittwieser J, Simonyan K et al. Mastering the game of Go without human knowledge. Nature, 2017, 550(7676):354-359. DOI:10.1038/nature24270.
[25] Ye D H, Chen G B, Zhang W et al. Towards playing full MOBA games with deep reinforcement learning. arXiv:2011.12692, 2020. https://arxiv.org/abs/2011.12692, Dec. 2020.
[26] Li G L. Human-in-the-loop data integration. Proceedings of the VLDB Endowment, 2017, 10(12):2006-2017. DOI:10.14778/3137765.3137833.
[27] Li G L, Zhou X H, Li S H. XuanYuan:An AI-native database. IEEE Data Engineering Bulletin, 2019, 42(2):70-81.
[28] Basu D, Lin Q, Chen W, Vo H T, Yuan Z, Senellart P, Bressan S. Regularized cost-model oblivious database tuning with reinforcement learning. In Transactions on Large-Scale Data- and Knowledge-Centered Systems XXVIII, Hameurlain A, Küng J, Wagner R, Chen Q (eds.), Springer, 2016, pp.96-132. DOI:10.1007/978-3-662-53455-7_5.
[29] Sun J, Li G L. An end-to-end learning-based cost estimator. Proceedings of the VLDB Endowment, 2019, 13(3):307-319. DOI:10.14778/3368289.3368296.
[30] Kraska T, Alizadeh M, Beutel A et al. SageDB:A learned database system. In Proc. the 9th Biennial Conference on Innovative Data Systems Research, Jan. 2019.
[31] Duan S Y, Thummala V, Babu S. Tuning database configuration parameters with iTuned. Proceedings of the VLDB Endowment, 2009, 2(1):1246-1257. DOI:10.14778/1687627.1687767.
[32] Wei Z J, Ding Z H, Hu J L. Self-tuning performance of database systems based on fuzzy rules. In Proc. the 11th International Conference on Fuzzy Systems and Knowledge Discovery, Aug. 2014, pp.194-198. DOI:10.1109/FSKD.2014.6980831.
[33] Zheng C H, Ding Z H, Hu J L. Self-tuning performance of database systems with neural network. In Proc. the 10th International Conference on Natural Computation, Aug. 2014, pp.1-12. DOI:10.1007/978-3-319-09333-8_1.