Journal of Computer Science and Technology ›› 2020, Vol. 35 ›› Issue (1): 145-160. doi: 10.1007/s11390-020-9822-9

Special Section: Computer Architecture and Systems

ExaHDF5: Delivering Efficient Parallel I/O on Exascale Computing Systems

Suren Byna1,*, M. Scot Breitenfeld2, Bin Dong1, Quincey Koziol1, Elena Pourmal2, Dana Robinson2, Jerome Soumagne2, Houjun Tang1, Venkatram Vishwanath3, Richard Warren2        

  1 Lawrence Berkeley National Laboratory, Berkeley, CA 94597, U.S.A.;
  2 The HDF Group, Champaign, IL 61820, U.S.A.;
  3 Argonne National Laboratory, Lemont, IL 60439, U.S.A.
  • Received: 2019-07-06 Revised: 2019-08-28 Online: 2020-01-05 Published: 2020-01-14
  • Contact: Suren Byna E-mail: sbyna@lbl.gov
  • About author: Suren Byna received his Master's degree in 2001 and his Ph.D. degree in 2006, both in computer science, from Illinois Institute of Technology, Chicago. He is a Staff Scientist in the Scientific Data Management (SDM) Group in the Computational Research Division (CRD) at Lawrence Berkeley National Laboratory (LBNL). His research interests are in scalable scientific data management; more specifically, he works on optimizing parallel I/O and on developing systems for managing scientific data. He is the PI of the ECP-funded ExaHDF5 project and of the ASCR-funded object-centric data management (Proactive Data Containers, PDC) and experimental and observational data management (EOD-HDF5) projects.
  • Supported by:
    This research was supported by the Exascale Computing Project under Grant No. 17-SC-20-SC, a joint project of the U.S. Department of Energy's Office of Science and National Nuclear Security Administration, responsible for delivering a capable exascale ecosystem, including software, applications, and hardware technology, to support the nation's exascale computing imperative. This work was also supported by the Director, Office of Science, Office of Advanced Scientific Computing Research, of the U.S. Department of Energy under Contract Nos. DE-AC02-05CH11231 and DE-AC02-06CH11357. This research was funded in part by the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract No. DE-AC02-06CH11357. This research used resources of the National Energy Research Scientific Computing Center, which is a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.

Abstract: Scientific applications at exascale generate and analyze massive amounts of data. A critical requirement of these applications is the capability to access and manage this data efficiently on exascale systems. Parallel I/O, the key technology that moves data between compute nodes and storage, faces monumental challenges from the new application, memory, and storage architectures considered in the designs of exascale systems. As the storage hierarchy expands to include node-local persistent memory, burst buffers, etc., as well as disk-based storage, data movement among these layers must be efficient. Parallel I/O libraries of the future should be capable of handling file sizes of many terabytes and beyond. In this paper, we describe new capabilities we have developed in Hierarchical Data Format version 5 (HDF5), the most popular parallel I/O library for scientific applications and one of the most heavily used libraries at the leadership computing facilities for performing parallel I/O on existing HPC systems. The state-of-the-art features we describe include: the Virtual Object Layer (VOL), Data Elevator, asynchronous I/O, full-featured single-writer/multiple-reader (Full SWMR), and parallel querying. We introduce these features and their implementations, and discuss their performance and feature benefits to applications and other libraries.
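To ground the features above, here is a minimal sketch (ours, not taken from the paper) of the baseline collective-write path that these HDF5 optimizations build on, using the standard HDF5 C API with the MPI-IO file driver; the file name, dataset name, and sizes are illustrative.

/* Minimal sketch of a collective parallel write with the HDF5 C API.
 * Assumes a parallel HDF5 build; compile with: mpicc sketch.c -lhdf5 */
#include <mpi.h>
#include <hdf5.h>

int main(int argc, char **argv) {
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Access the file through the MPI-IO virtual file driver. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("sketch.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* A 2-D dataset with one row of 1024 doubles per MPI rank. */
    hsize_t dims[2] = {(hsize_t)nprocs, 1024};
    hid_t fspace = H5Screate_simple(2, dims, NULL);
    hid_t dset = H5Dcreate2(file, "data", H5T_NATIVE_DOUBLE, fspace,
                            H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    /* Each rank selects and writes only its own row. */
    hsize_t start[2] = {(hsize_t)rank, 0}, count[2] = {1, 1024};
    H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, count, NULL);
    hid_t mspace = H5Screate_simple(2, count, NULL);
    double buf[1024];
    for (int i = 0; i < 1024; i++) buf[i] = rank + i * 1e-4;

    /* Collective transfer lets MPI-IO aggregate all ranks' requests. */
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
    H5Dwrite(dset, H5T_NATIVE_DOUBLE, mspace, fspace, dxpl, buf);

    H5Pclose(dxpl); H5Sclose(mspace); H5Sclose(fspace);
    H5Dclose(dset); H5Pclose(fapl); H5Fclose(file);
    MPI_Finalize();
    return 0;
}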

Key words: parallel I/O, Hierarchical Data Format version 5 (HDF5), I/O performance, virtual object layer, HDF5 optimizations
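Among these features, Full SWMR generalizes the classic single-writer/multiple-reader mode introduced in HDF5 1.10, which restricts the writer to appending to datasets created before the SWMR session begins. The following is a minimal sketch (ours, not taken from the paper) of that classic pattern; the file and dataset names are illustrative.

/* Sketch of the classic SWMR pattern (HDF5 >= 1.10). */
#include <hdf5.h>

/* Writer: create the file and all datasets first, then switch on SWMR. */
void writer(void) {
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_libver_bounds(fapl, H5F_LIBVER_LATEST, H5F_LIBVER_LATEST);
    hid_t file = H5Fcreate("log.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
    /* ... create extensible (chunked, unlimited) datasets here ... */
    H5Fstart_swmr_write(file);  /* readers may open the file from now on */
    /* ... append loop: H5Dset_extent, H5Dwrite, then H5Dflush ... */
    H5Fclose(file);
    H5Pclose(fapl);
}

/* Reader: open with SWMR-read access and refresh to see flushed data. */
void reader(void) {
    hid_t file = H5Fopen("log.h5", H5F_ACC_RDONLY | H5F_ACC_SWMR_READ,
                         H5P_DEFAULT);
    hid_t dset = H5Dopen2(file, "data", H5P_DEFAULT);
    H5Drefresh(dset);           /* pick up elements appended by the writer */
    /* ... query the new extent with H5Dget_space, then H5Dread ... */
    H5Dclose(dset);
    H5Fclose(file);
}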
