2013, Vol. 28, Issue (1): 72-89. DOI: 10.1007/s11390-013-1313-9

Special Topic: Computer Architecture and Systems

• Special Section on Selected Papers from NPC 2011 •

Design and Implementation of an Extended Collectives Library for Unified Parallel C

Carlos Teijeiro1, Student Member, IEEE, Guillermo L. Taboada1, Juan Touriño1, Senior Member, IEEE, Member, ACM, Ramón Doallo1, Member, IEEE, José C. Mouriño2, Damián A. Mallón3, and Brian Wibecan4   

  1. Computer Architecture Group, University of A Coruña, A Coruña 15071, Spain;
    2. Galicia Supercomputing Center, Santiago de Compostela 15705, Spain;
    3. Jülich Supercomputing Centre, Institute for Advanced Simulation, Forschungszentrum Jülich, Jülich D-52425, Germany;
    4. Industry Standard Servers Group, Hewlett-Packard Company, Montgomery, Alabama 36117, U.S.A.
  • Received: 2012-02-08; Revised: 2012-09-21; Online: 2013-01-05; Published: 2013-01-05
  • Supported by:

    This work was funded by Hewlett-Packard (Project "Improving UPC Usability and Performance in Constellation Systems: Implementation/Extensions of UPC Libraries"), and partially supported by the Ministry of Science and Innovation of Spain under Project No. TIN2010-16735 and the Galician Government (Consolidation of Competitive Research Groups, Xunta de Galicia ref. 2010/6).

Abstract: Unified Parallel C (UPC) is a parallel extension of ANSI C based on the Partitioned Global Address Space (PGAS) programming model, which provides a shared memory view that simplifies code development while taking advantage of the scalability of distributed memory architectures. UPC thus allows programmers to write parallel applications for hybrid shared/distributed memory architectures, such as multi-core clusters, in a more productive way, accessing remote memory through high-level language constructs such as assignments to shared variables or collective primitives. However, the standard UPC collectives library provides only a reduced set of eight basic primitives with quite limited functionality. This work presents the design and implementation of extended UPC collective functions that overcome the limitations of the standard collectives library, allowing, for example, the use of a specific source or destination thread, or the definition of the amount of data transferred by each thread. The library fulfills the demands of the UPC developer community and implements portable algorithms that are independent of the specific UPC compiler/runtime being used. A representative set of these extended collectives has been evaluated using two applications and four kernels as case studies. The results confirm that the new library provides easier programming without trading off performance, thus achieving high productivity in parallel programming and harnessing the performance of hybrid shared/distributed memory architectures in high performance computing.
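To make the limitation concrete, the sketch below uses only constructs defined in the standard UPC specification (upc.h, upc_collective.h, upc_all_broadcast): the source data location is fixed by the affinity of the source buffer and every thread receives the same number of bytes. The commented-out extended call at the end is purely hypothetical (the name upc_all_broadcast_ext and the root_thread parameter are illustrative assumptions, not the interface defined in the paper) and only suggests the kind of per-thread control the extended library is described as offering.

/* Minimal sketch of a standard UPC broadcast, assuming a UPC compiler
 * (e.g., Berkeley UPC or HP UPC). It illustrates the restrictions that the
 * extended collectives library is described as lifting. */
#include <upc.h>
#include <upc_collective.h>

#define NELEMS 4

/* One block of NELEMS elements with affinity to each thread; only the
 * block on thread 0 (src[0..NELEMS-1]) is used as the broadcast source. */
shared [NELEMS] int src[NELEMS * THREADS];
shared [NELEMS] int dst[NELEMS * THREADS];

int main(void)
{
    /* Thread 0 initializes the data that will be broadcast. */
    if (MYTHREAD == 0)
        for (int i = 0; i < NELEMS; i++)
            src[i] = i;

    /* Standard broadcast: the source location is fixed by the affinity of
     * src (thread 0's block here), there is no explicit root-thread
     * argument, and every thread receives exactly the same nbytes.
     * UPC_IN_ALLSYNC/UPC_OUT_ALLSYNC provide barrier-like synchronization. */
    upc_all_broadcast(dst, src, NELEMS * sizeof(int),
                      UPC_IN_ALLSYNC | UPC_OUT_ALLSYNC);

    /* A hypothetical extended variant (NOT the interface defined in the
     * paper) might expose the extra control the abstract refers to, e.g.:
     *   upc_all_broadcast_ext(dst, src, nbytes, root_thread, flags);
     */
    return 0;
}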
