Lei Liu, Xiu Ma, Hua-Xiao Liu, Guang-Li Li, Lei Liu. FlexPDA: A Flexible Programming Framework for Deep Learning Accelerators[J]. Journal of Computer Science and Technology, 2022, 37(5): 1200-1220. DOI: 10.1007/s11390-021-1406-9

FlexPDA: A Flexible Programming Framework for Deep Learning Accelerators

There are a wide variety of intelligence accelerators with promising performance and energy efficiency, deployed in a broad range of applications such as computer vision and speech recognition. However, low programming productivity hinders the deployment of deep learning accelerators. The low-level library invoked by high-level deep learning frameworks, which support end-to-end execution of a given model, is designed to reduce the programming burden on intelligence accelerators. Unfortunately, it is inflexible: developers must build a network model for every deep learning application, which often leads to unnecessary repetitive implementation. In this paper, FlexPDA, a flexible and efficient programming framework for deep learning accelerators, is proposed; it provides more optimization opportunities than the low-level library and enables quick transplantation of applications to intelligence accelerators for fast upgrades. We evaluate FlexPDA using 10 representative operators selected from deep learning algorithms and an end-to-end network. The experimental results validate the effectiveness of FlexPDA, which achieves an end-to-end performance improvement of 1.620x over the low-level library.
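To make the contrast concrete, the sketch below shows in plain NumPy the difference between composing fixed low-level library kernels call by call and expressing a fused operator directly. It is purely illustrative and assumes nothing about FlexPDA's actual API: the names lib_matmul, lib_bias_add, lib_relu, and fused_matmul_bias_relu are hypothetical. The point is that a hand-expressed fused operator avoids materializing intermediate tensors between fixed kernel calls, which is the kind of optimization opportunity a flexible programming framework can expose on an accelerator.

    import numpy as np

    # --- "Low-level library" style: fixed kernels, composed call by call. ---
    # Each hypothetical kernel writes a full intermediate tensor before the
    # next kernel can start.
    def lib_matmul(x, w):
        return x @ w

    def lib_bias_add(y, b):
        return y + b

    def lib_relu(y):
        return np.maximum(y, 0.0)

    def low_level_library_path(x, w, b):
        y = lib_matmul(x, w)    # intermediate tensor 1
        y = lib_bias_add(y, b)  # intermediate tensor 2
        return lib_relu(y)      # final result

    # --- "Flexible framework" style: the developer expresses the operator ---
    # directly, so matmul, bias add, and ReLU can be fused into one pass and
    # the intermediates never need to be materialized separately.
    def fused_matmul_bias_relu(x, w, b):
        return np.maximum(x @ w + b, 0.0)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        x = rng.standard_normal((4, 8)).astype(np.float32)
        w = rng.standard_normal((8, 16)).astype(np.float32)
        b = rng.standard_normal(16).astype(np.float32)

        ref = low_level_library_path(x, w, b)
        fused = fused_matmul_bias_relu(x, w, b)
        # Both paths compute the same result; the fused form simply leaves
        # more room for the compiler or accelerator to optimize.
        assert np.allclose(ref, fused, atol=1e-5)
        print("fused operator matches library composition")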