Yan Li, Yun-Quan Zhang, Yi-Qun Liu, Guo-Ping Long, Hai-Peng Jia. MPFFT: An Auto-Tuning FFT Library for OpenCL GPUs[J]. Journal of Computer Science and Technology, 2013, 28(1): 90-105. DOI: 10.1007/s11390-013-1314-8


MPFFT: An Auto-Tuning FFT Library for OpenCL GPUs


Abstract: Fourier methods have revolutionized many fields of science and engineering, such as astronomy, medical imaging, seismology and spectroscopy, and the fast Fourier transform (FFT) is a computationally efficient algorithm for computing the discrete Fourier transform. Emerging high-performance computing architectures such as GPUs achieve high performance and efficiency by exposing a hierarchy of software-managed memories, but the resulting programming complexity poses a significant challenge to developers. In this paper, we propose an automatic performance tuning framework for FFT on various OpenCL GPUs, and implement a high-performance library named MPFFT based on this framework. For power-of-two length FFTs, our library substantially outperforms the clAmdFft library on AMD GPUs and achieves performance comparable to the CUFFT library on NVIDIA GPUs. Furthermore, our library also supports non-power-of-two sizes. For 3D non-power-of-two FFTs, our library is 1.5x to 28x faster than FFTW with 4 threads and achieves an average speedup of 20.01x over CUFFT 4.0 on a Tesla C2050.
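As standard background (not specific to MPFFT or taken from the paper itself): the length-N discrete Fourier transform computed by an FFT, and the radix-2 Cooley-Tukey split that reduces its cost from O(N^2) to O(N log N), are

    X_k = \sum_{n=0}^{N-1} x_n \, e^{-2\pi i n k / N}, \qquad k = 0, \dots, N-1,

    X_k = E_k + e^{-2\pi i k / N} O_k, \qquad X_{k+N/2} = E_k - e^{-2\pi i k / N} O_k, \qquad k = 0, \dots, N/2 - 1,

where E_k and O_k are the length-N/2 DFTs of the even- and odd-indexed samples of x. Recursing on this split, and on its mixed-radix generalizations for non-power-of-two N, gives the O(N log N) decompositions that GPU FFT libraries of this kind typically generate and tune kernels for.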

     
