Wei X, Liu J, Wang Y. Joint participant selection and learning optimization for federated learning of multiple models in edge cloud. JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY 38(4): 754−772 July 2023. DOI: 10.1007/s11390-023-3074-4.

Joint Participant Selection and Learning Optimization for Federated Learning of Multiple Models in Edge Cloud

Funds: A preliminary version of the paper was published in the Proceedings of MASS 2022. This work is partially supported by the US National Science Foundation under Grant Nos. CCF-1908843 and CNS-2006604.
More Information
  • Author Bio:

    Xinliang Wei is a Ph.D. student in the Department of Computer and Information Sciences at Temple University, Philadelphia. He received his M.S. and B.E. degrees, both in software engineering, from Sun Yat-sen University, Guangzhou, in 2016 and 2014, respectively. His research interests include edge computing, federated learning, reinforcement learning, and the Internet of Things. He is a recipient of the Outstanding Research Assistant Award from the College of Science and Technology and the Scott Hibbs Future of Computing Award from the Department of Computer and Information Sciences at Temple University.

    Jiyao Liu is a Ph.D. candidate in the Department of Computer and Information Sciences at Temple University, Philadelphia. He received his B.E. degree in information security from North China University of Technology, Beijing, in 2016. His research interests include AI, security, and privacy in edge computing.

    Yu Wang is a professor in the Department of Computer and Information Sciences at Temple University, Philadelphia. He received his Ph.D. degree from Illinois Institute of Technology, Chicago, and his M.Eng. and B.Eng. degrees from Tsinghua University, Beijing, all in computer science. His research interests include wireless networks, smart sensing, and mobile computing. He has published over 200 papers in peer-reviewed journals and conferences. He is a recipient of the Ralph E. Powe Junior Faculty Enhancement Award from Oak Ridge Associated Universities (2006) and the Outstanding Faculty Research Award from the College of Computing and Informatics at the University of North Carolina at Charlotte (2008), an IEEE Fellow (2018), and an ACM Distinguished Member (2020). He has served as an associate editor for IEEE Transactions on Parallel and Distributed Systems and IEEE Transactions on Cloud Computing, among others.

  • Corresponding author:

    wangyu@temple.edu

  • Received Date: March 20, 2023
  • Accepted Date: July 30, 2023
  • Abstract: To overcome the long latency and privacy concerns of cloud computing, edge computing, together with distributed machine learning such as federated learning (FL), has gained much attention in both academia and industry. Most existing work on FL over the edge focuses on optimizing the training of a single shared global model. However, as FL applications proliferate in edge systems, multiple FL models from different applications may be trained concurrently in a shared edge cloud. Such concurrent training leads to competition for both computing and network resources at the edge, so the models can degrade each other's training performance. Therefore, in this paper, considering a multi-model FL scenario, we formulate a joint participant selection and learning optimization problem in a shared edge cloud. This joint optimization determines the FL participants and the learning schedule for each FL model such that the total training cost of all FL models in the edge cloud is minimized. We propose a multi-stage optimization framework that decouples the original problem into two or three subproblems, which are solved separately and iteratively. Extensive evaluations have been conducted with real-world FL datasets and models. The results show that our proposed algorithms reduce the total cost efficiently compared with prior algorithms.
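  • The multi-stage idea in the abstract can be sketched in miniature: decouple the joint problem into a participant-selection subproblem and a learning-scheduling subproblem, then solve the two alternately. The sketch below is purely illustrative and is not the authors' formulation; the node costs, model demands, shared-load penalty, and both heuristics (`select_participants`, `tune_schedule`) are hypothetical assumptions invented for this example.

```python
# Toy sketch of a decoupled, alternating optimization for multi-model FL.
# Everything here (costs, demands, heuristics) is an illustrative assumption.
import itertools

ROUNDS = 10  # assumed number of synchronization "units" to amortize
NODES = {    # edge node -> (compute cost per local epoch, network cost per sync)
    "n1": (1.0, 0.5), "n2": (2.0, 0.2), "n3": (1.5, 0.4), "n4": (3.0, 0.1),
}
MODELS = {"mA": 3, "mB": 2}  # FL model -> number of participants it needs

def training_cost(selection, schedule):
    # Resource competition: a node's compute cost scales with how many models
    # share it; more local epochs per round mean fewer, cheaper sync rounds.
    load = {n: 0 for n in NODES}
    for nodes in selection.values():
        for n in nodes:
            load[n] += 1
    total = 0.0
    for m, nodes in selection.items():
        for n in nodes:
            comp, net = NODES[n]
            total += comp * load[n] * schedule[m] + net * ROUNDS / schedule[m]
    return total

def select_participants(schedule):
    # Stage 1: with the schedule fixed, greedily take the cheapest nodes for
    # each model under a marginal-cost estimate that ignores node sharing.
    selection = {}
    for m, k in MODELS.items():
        ranked = sorted(NODES, key=lambda n: NODES[n][0] * schedule[m]
                                           + NODES[n][1] * ROUNDS / schedule[m])
        selection[m] = ranked[:k]
    return selection

def tune_schedule(selection, choices=(1, 2, 3)):
    # Stage 2: with participants fixed, exhaustively search the (tiny) space
    # of per-model local-epoch counts for the cheapest joint schedule.
    best_sched, best_cost = None, float("inf")
    for combo in itertools.product(choices, repeat=len(MODELS)):
        sched = dict(zip(MODELS, combo))
        c = training_cost(selection, sched)
        if c < best_cost:
            best_sched, best_cost = sched, c
    return best_sched

def multi_stage(iterations=5):
    schedule = {m: 2 for m in MODELS}  # arbitrary initial schedule
    for _ in range(iterations):
        selection = select_participants(schedule)
        schedule = tune_schedule(selection)
    return selection, schedule, training_cost(selection, schedule)

sel, sched, cost = multi_stage()
print(sel, sched, round(cost, 2))
```

    In the paper the subproblems are solved with more principled optimization; this toy alternation only illustrates the decoupling, and the greedy stage carries no convergence guarantee on its own.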

  • [1]
    McMahan B, Moore E, Ramage D, Hampson S, Arcas B A Y. Communication-efficient learning of deep networks from decentralized data. In Proc. the 20th International Conference on Artificial Intelligence and Statistics, Apr. 2017, pp.1273–1282. DOI: 10.48550/arXiv.1602.05629.
    [2]
    Ji S X, Jiang W Q, Walid A, Li X. Dynamic sampling and selective masking for communication-efficient federated learning. IEEE Intelligent Systems, 2022, 37(2): 27–34. DOI: 10.1109/MIS.2021.3114610.
    [3]
    Sattler F, Wiedemann S, Muller K R, Samek W. Robust and communication-efficient federated learning from non-I. I. D. data. IEEE Trans. Neural Networks and Learning Systems, 2019, 31(9): 3400–3413. DOI: 10.1109/TNNLS.2019.2944481.
    [4]
    Lim W Y B, Luong N C, Hoang D T, Jiao Y T, Liang Y C, Yang Q, Niyato D, Miao C Y. Federated learning in mobile edge networks: A comprehensive survey. IEEE Communications Surveys & Tutorials, 2020, 22(3): 2031–2063. DOI: 10.1109/COMST.2020.2986024.
    [5]
    Liu L M, Zhang J, Song S H, Letaief K B. Client-edge-cloud hierarchical federated learning. In Proc. the 2020 IEEE International Conference on Communications, Jun. 2020. DOI: 10.1109/ICC40277.2020.9148862.
    [6]
    Wang S Q, Tuor T, Salonidis T, Leung K K, Makaya C, He T, Chan K. Adaptive federated learning in resource constrained edge computing systems. IEEE Journal on Selected Areas in Communications, 2019, 37(6): 1205–1221. DOI: 10.1109/JSAC.2019.2904348.
    [7]
    Nishio T, Yonetani R. Client selection for federated learning with heterogeneous resources in mobile edge. In Proc. the 2019 IEEE International Conference on Communications, May 2019. DOI: 10.1109/ICC.2019.8761315.
    [8]
    Luo S Q, Chen X, Wu Q, Zhou Z, Yu S. HFEL: Joint edge association and resource allocation for cost-efficient hierarchical federated edge learning. IEEE Trans. Wireless Communications, 2020, 19(10): 6535–6548. DOI: 10.1109/TWC.2020.3003744.
    [9]
    Jin Y B, Jiao L, Qian Z Z, Zhang S, Lu S L. Learning for learning: Predictive online control of federated learning with edge provisioning. In Proc. the 2021 IEEE Conference on Computer Communications, May 2021. DOI: 10.1109/INFOCOM42981.2021.9488733.
    [10]
    Meng Z Y, Xu H L, Chen M, Xu Y, Zhao Y M, Qiao C M. Learning-driven decentralized machine learning in resource-constrained wireless edge computing. In Proc. the 2021 IEEE Conference on Computer Communications, May 2021. DOI: 10.1109/INFOCOM42981.2021.9488817.
    [11]
    Wang Z Y, Xu H L, Liu J C, Huang H, Qiao C M, Zhao Y M. Resource-efficient federated learning with hierarchical aggregation in edge computing. In Proc. the 2021 IEEE Conference on Computer Communications, May 2021. DOI: 10.1109/INFOCOM42981.2021.9488 756.
    [12]
    Wei X L, Liu J Y, Shi X H, Wang Y. Participant selection for hierarchical federated learning in edge clouds. In Proc. the 2022 IEEE International Conference on Networking, Architecture and Storage, Oct. 2022. DOI: 10.1109/NAS55553.2022.9925313.
    [13]
    Liu J, Wei X, Liu X, Gao H, Wang Y. Group-based hierarchical federated learning: Convergence, group formation, and sampling. In Proc. International Conference on Parallel Processing, Aug. 2023. DOI: 10.1145/3605573.3605584.
    [14]
    Nguyen M N H, Tran N H, Tun Y K, Han Z, Hong C S. Toward multiple federated learning services resource sharing in mobile edge networks. IEEE Trans. Mobile Computing, 2023, 22(1): 541–555. DOI: 10.1109/TMC.2021.3085 979.
    [15]
    Wei X L, Liu J Y, Wang Y. Joint participant selection and learning scheduling for multi-model federated edge learning. In Proc. the 19th International Conference on Mobile Ad Hoc and Smart Systems, Oct. 2022, pp.537–545. DOI: 10.1109/MASS56207.2022.00081.
    [16]
    Yang Z H, Chen M Z, Saad W, Hong C S, Shikh-Bahaei M. Energy efficient federated learning over wireless communication networks. IEEE Trans. Wireless Communications, 2021, 20(3): 1935–1949. DOI: 10.1109/TWC.2020.3037 554.
    [17]
    Li L, Shi D, Hou R H, Li H, Pan M, Han Z. To talk or to work: Flexible communication compression for energy efficient federated learning over heterogeneous mobile edge devices. In Proc. the 2021 IEEE Conference on Computer Communications, May 2021. DOI: 10.1109/INFOCOM42981.2021.9488839.
    [18]
    Wang J Y, Pan J L, Esposito F, Calyam P, Yang Z C, Mohapatra P. Edge cloud offloading algorithms: Issues, methods, and perspectives. ACM Computing Surveys, 2020, 52(1): Article No. 2. DOI: 10.1145/3284387.
    [19]
    Li T, Qiu Z J, Cao L J, Cheng D Z, Wang W C, Shi X H, Wang Y. Privacy-preserving participant grouping for mobile social sensing over edge clouds. IEEE Trans. Network Science and Engineering, 2021, 8(2): 865–880. DOI: 10.1109/TNSE.2020.3020159.
    [20]
    Tan H S, Han Z H, Li X Y, Lau F C M. Online job dispatching and scheduling in edge-clouds. In Proc. the 2017 IEEE Conference on Computer Communications, May 2017. DOI: 10.1109/INFOCOM.2017.8057116.
    [21]
    Yang S, Li F, Trajanovski S, Chen X, Wang Y, Fu X M. Delay-aware virtual network function placement and routing in edge clouds. IEEE Trans. Mobile Computing, 2021, 20(2): 445–459. DOI: 10.1109/TMC.2019.2942306.
    [22]
    Wei X L, Rahman A B M M, Cheng D Z, Wang Y. Joint optimization across timescales: Resource placement and task dispatching in edge clouds. IEEE Trans. Cloud Computing, 2023, 11(1): 730–744. DOI: 10.1109/TCC.2021.3113 605.
    [23]
    Cho Y J, Wang J Y, Joshi G. Client selection in federated learning: Convergence analysis and power-of-choice selection strategies. arXiv: 2010.01243, 2020. https://arxiv.org/abs/2010.01243, Jul. 2023.
    [24]
    Li X, Huang K X, Yang W H, Wang S S, Zhang Z H. On the convergence of FedAvg on non-IID data. arXiv: 1907.02189, 2019. https://arxiv.org/abs/1907.02189, Jul. 2023.
    [25]
    Li Y Q, Li F, Chen L X, Zhu L H, Zhou P, Wang Y. Power of redundancy: Surplus client scheduling for federated learning against user uncertainties. IEEE Trans. Mobile Computing, 2023, 22(9): 5449–5462. DOI: 10.1109/TMC.2022.3178167.
    [26]
    Tran N H, Bao W, Zomaya A, Nguyen M N H, Hong C S. Federated learning over wireless networks: Optimization model design and analysis. In Proc. the 2019 IEEE Conference on Computer Communications, Apr. 29–May 2, 2019, pp.1387–1395. DOI: 10.1109/INFOCOM.2019.8737 464.
    [27]
    Jin Y B, Jiao L, Qian Z Z, Zhang S, Lu S L, Wang X L. Resource-efficient and convergence-preserving online participant selection in federated learning. In Proc. the 40th International Conference on Distributed Computing Systems, Nov. 29–Dec. 1, 2020, pp.606–616. DOI: 10.1109/ ICDCS47774.2020.00049.
    [28]
    Chen M Z, Yang Z H, Saad W, Yin C C, Poor H V, Cui S G. A joint learning and communications framework for federated learning over wireless networks. IEEE Trans. Wireless Communications, 2021, 20(1): 269–283. DOI: 10.1109/TWC.2020.3024629.
    [29]
    Mitchell S, Kean A, Mason A, O’Sullivan M, Phillips A, Peschiera F. PuLP 2.6. 0. https://pypi.org/project/PuLP/, July 2023.
    [30]
    Beal L D R, Hill D C, Martin R A, Hedengren J D. GEKKO optimization suite. Processes, 2018, 6(8): 106. DOI: 10.3390/pr6080106.
    [31]
    Lai P, He Q, Abdelrazek M, Chen F F, Hosking J, Grundy J, Yang Y. Optimal edge user allocation in edge computing with variable sized vector bin packing. In Proc. the 16th International Conference on Service-Oriented Computing, Nov. 2018, pp.230–245. DOI: 10.1007/978-3-030-03596-9_15.
    [32]
    Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V, Vanderplas J, Passos A, Cournapeau D, Brucher M, Perrot M, Duchesnay E. Scikit-learn: Machine learning in Python. The Journal of Machine Learning Research, 2011, 12: 2825–2830.
    [33]
    Xiao H, Rasul K, Vollgraf R. Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms. arXiv: 1708.07747, 2017. https://arxiv.org/abs/1708.07747, Jul. 2023.
    [34]
    Warden P. Speech commands: A dataset for limited-vocabulary speech recognition. arXiv: 1804.03209, 2018. https://arxiv.org/abs/1804.03209, Jul. 2023.
    [35]
    Zhang X, Zhao J B, LeCun Y. Character-level convolutional networks for text classification. In Proc. the 28th Advances in Neural Information Processing Systems, Dec. 2015, pp.649–657. DOI: 10.5555/2969239.2969312.
    [36]
    Wei X, Fan L, Guo Y, Gong Y, Han Z, Wang Y. Quantum assisted scheduling algorithm for federated learning in distributed networks. In Proc. the 32nd International Conference on Computer Communications and Networks, Jul. 2023. DOI: 10.1109/ICCCN58024.2023.10230094.