Yun Liang, Shuo Wang. Performance-Centric Optimization for Racetrack Memory Based Register File on GPUs[J]. Journal of Computer Science and Technology, 2016, 31(1): 36-49. DOI: 10.1007/s11390-016-1610-1

Performance-Centric Optimization for Racetrack Memory Based Register File on GPUs

The key to high performance in GPU architectures lies in their massive threading capability, which drives a large number of cores and enables execution overlapping among threads. In practice, however, the number of threads that can execute simultaneously is often limited by the size of the register file on GPUs. The traditional SRAM-based register file occupies so much chip area that it cannot scale to meet the increasing demands of GPU applications. Racetrack memory (RM) is a promising technology for designing a large-capacity register file on GPUs due to its high data storage density. However, without careful deployment of an RM-based register file, the lengthy shift operations of RM may hurt performance. In this paper, we explore RM for designing a high-performance register file for GPU architectures. The high storage density of RM helps to improve thread-level parallelism (TLP), but if the bits of a register are not aligned with the access ports, shift operations are required to move the bits to the ports before they can be accessed, delaying reads and writes. We develop an optimization framework for RM-based register files on GPUs that employs three optimization techniques at the application, compilation, and architecture levels, respectively. Specifically, we optimize the TLP at the application level, design a register mapping algorithm at the compilation level, and design a preshifting mechanism at the architecture level. Collectively, these optimizations determine a TLP that avoids cache and register file resource contention and reduce the shift operation overhead. Experimental results on a variety of representative workloads demonstrate that our optimization framework achieves up to 29% (21% on average) performance improvement.
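To make the shift overhead concrete, the following is a minimal sketch of the access-port alignment problem the abstract describes. It is not the paper's model: the track length, port positions, and one-cycle-per-shift cost are all illustrative assumptions. Each access shifts the track so the requested bit lines up under the nearest port, and the register mapping problem amounts to placing frequently co-accessed registers so these shift distances stay small.

```python
# Illustrative model (assumed parameters, not from the paper):
# a single racetrack with DOMAINS bit positions and fixed, evenly
# spaced access ports. Shifting the track by one position costs
# one cycle.

DOMAINS = 64                # bit positions per track (assumed)
PORTS = [0, 16, 32, 48]     # fixed access-port positions (assumed)

def shift_cost(domain, offset):
    """Cycles needed to align `domain` under some port, given the
    current track offset (port p reads domain p + offset)."""
    return min(abs((domain - p) - offset) for p in PORTS)

def total_shifts(access_sequence):
    """Simulate a sequence of register-bit accesses, always using
    the port that requires the fewest shifts, and return the total
    number of shift cycles incurred."""
    offset, total = 0, 0
    for d in access_sequence:
        p = min(PORTS, key=lambda p: abs((d - p) - offset))
        total += abs((d - p) - offset)
        offset = d - p  # track now aligned: port p sits over domain d
    return total

# Port-aligned accesses incur no shifts; scattered ones do.
print(total_shifts([0, 16, 32, 48]))  # → 0
print(total_shifts([5, 60, 7]))       # → 17
```

Under this toy model, a compile-time register mapping that clusters hot registers near port positions directly reduces the per-access shift distance, which is the intuition behind the compilation-level optimization described above.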
