Journal of Computer Science and Technology ›› 2020, Vol. 35 ›› Issue (1): 145-160. DOI: 10.1007/s11390-020-9822-9

Special Issue: Computer Architecture and Systems


ExaHDF5: Delivering Efficient Parallel I/O on Exascale Computing Systems

Suren Byna1,*, M. Scot Breitenfeld2, Bin Dong1, Quincey Koziol1, Elena Pourmal2, Dana Robinson2, Jerome Soumagne2, Houjun Tang1, Venkatram Vishwanath3, Richard Warren2   

  1. Lawrence Berkeley National Laboratory, Berkeley, CA 94597, U.S.A;
    2 The HDF Group, Champaign, IL 61820, U.S.A;
    3 Argonne National Laboratory, Lemont, IL 60439, U.S.A
  • Received: 2019-07-06  Revised: 2019-08-28  Online: 2020-01-05  Published: 2020-01-14
  • Contact: Suren Byna, E-mail: sbyna@lbl.gov
  • About author: Suren Byna received his Master's degree in 2001 and Ph.D. degree in 2006, both in computer science, from Illinois Institute of Technology, Chicago. He is a Staff Scientist in the Scientific Data Management (SDM) Group in CRD at Lawrence Berkeley National Laboratory (LBNL). His research interests are in scalable scientific data management. More specifically, he works on optimizing parallel I/O and on developing systems for managing scientific data. He is the PI of the ECP-funded ExaHDF5 project and of the ASCR-funded object-centric data management systems (Proactive Data Containers, PDC) and experimental and observational data management (EOD-HDF5) projects.
  • Supported by:
    This research was supported by the Exascale Computing Project under Grant No. 17-SC-20-SC, a joint project of the U.S. Department of Energy's Office of Science and National Nuclear Security Administration, responsible for delivering a capable exascale ecosystem, including software, applications, and hardware technology, to support the nation's exascale computing imperative. This work was also supported by the Director, Office of Science, Office of Advanced Scientific Computing Research, of the U.S. Department of Energy under Contract Nos. DE-AC02-05CH11231 and DE-AC02-06CH11357. This research was funded in part by the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract No. DE-AC02-06CH11357. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.

Scientific applications at exascale generate and analyze massive amounts of data. A critical requirement of these applications is the capability to access and manage this data efficiently on exascale systems. Parallel I/O, the key technology for moving data between compute nodes and storage, faces monumental challenges from the new application, memory, and storage architectures considered in the designs of exascale systems. As the storage hierarchy expands to include node-local persistent memory and burst buffers alongside disk-based storage, data movement among these layers must be efficient. Parallel I/O libraries of the future should be capable of handling file sizes of many terabytes and beyond. In this paper, we describe new capabilities we have developed in Hierarchical Data Format version 5 (HDF5), the most popular parallel I/O library for scientific applications and one of the most heavily used libraries at the leadership computing facilities for performing parallel I/O on existing HPC systems. The state-of-the-art features we describe include: the Virtual Object Layer (VOL), the Data Elevator, asynchronous I/O, full-featured single-writer and multiple-reader (Full SWMR) access, and parallel querying. We introduce these features, their implementations, and the performance and feature benefits they bring to applications and other libraries.
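To ground the discussion, the following minimal, self-contained sketch (not taken from the paper) shows the baseline that these features build on: a collective parallel write through HDF5's standard MPI-IO file driver. The file name, dataset name, and sizes are illustrative assumptions.

    #include <mpi.h>
    #include <hdf5.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* File access property list: route all file I/O through MPI-IO. */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
        hid_t file = H5Fcreate("demo.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        /* Shared dataset: one row of 1024 doubles per rank. */
        hsize_t dims[2] = {(hsize_t)nprocs, 1024};
        hid_t filespace = H5Screate_simple(2, dims, NULL);
        hid_t dset = H5Dcreate2(file, "data", H5T_NATIVE_DOUBLE, filespace,
                                H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

        /* Each rank selects its own row in the file. */
        hsize_t start[2] = {(hsize_t)rank, 0}, count[2] = {1, 1024};
        H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL, count, NULL);
        hid_t memspace = H5Screate_simple(2, count, NULL);

        double buf[1024];
        for (int i = 0; i < 1024; i++) buf[i] = rank + i * 1e-4;

        /* Dataset transfer property list: request collective I/O. */
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
        H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, buf);

        H5Pclose(dxpl); H5Sclose(memspace); H5Sclose(filespace);
        H5Dclose(dset); H5Pclose(fapl); H5Fclose(file);
        MPI_Finalize();
        return 0;
    }

Each rank writes one disjoint row of the dataset; the H5FD_MPIO_COLLECTIVE transfer property lets the MPI-IO layer aggregate the per-rank requests, which is typically where parallel HDF5 gains its performance.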

Key words: parallel I/O, Hierarchical Data Format version 5 (HDF5), I/O performance, virtual object layer, HDF5 optimizations
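As a rough illustration of the asynchronous I/O capability named in the abstract, the sketch below uses the H5ES event-set interface that shipped in later HDF5 releases (1.13 and onward); the prototype described in the paper may differ. It assumes an asynchronous VOL connector has been loaded (for example via the HDF5_VOL_CONNECTOR environment variable), and the file and dataset names are hypothetical.

    #include <hdf5.h>

    int main(void)
    {
        size_t num_in_progress;
        hbool_t err_occurred;
        static double buf[1024];  /* must remain valid until H5ESwait returns;
                                     assumes "data" is a 1-D dataset of 1024 doubles */

        /* An event set collects asynchronous operations so they can be
           waited on together. */
        hid_t es_id = H5EScreate();
        hid_t file = H5Fopen_async("demo.h5", H5F_ACC_RDWR, H5P_DEFAULT, es_id);
        hid_t dset = H5Dopen_async(file, "data", H5P_DEFAULT, es_id);

        /* Queue the write; control returns before the I/O completes. */
        H5Dwrite_async(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL,
                       H5P_DEFAULT, buf, es_id);

        /* ... overlap computation with the in-flight I/O here ... */

        H5Dclose_async(dset, es_id);
        H5Fclose_async(file, es_id);
        H5ESwait(es_id, H5ES_WAIT_FOREVER, &num_in_progress, &err_occurred);
        H5ESclose(es_id);
        return err_occurred ? 1 : 0;
    }

With only the native VOL connector loaded, these *_async calls complete synchronously, so the code still runs correctly, just without compute/I/O overlap.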

