Abstract
Rapid growth in scientific data and a widening gap between computational speed and I/O bandwidth make it increasingly infeasible to store and share all data produced by scientific simulations. Instead, we need methods for reducing data volumes: ideally, methods that can scale data volumes adaptively so as to enable negotiation of performance and fidelity tradeoffs in different situations. Multigrid-based hierarchical data representations hold promise as a solution to this problem, allowing for flexible conversion between different fidelities so that, for example, data can be created at high fidelity and then transferred or stored at lower fidelity via logically simple and mathematically sound operations. However, the effective use of such representations has been hindered until now by the relatively high costs of creating, accessing, reducing, and otherwise operating on them. We describe here highly optimized data refactoring kernels for GPU accelerators that enable efficient creation and manipulation of data in multigrid-based hierarchical forms. We demonstrate that our optimized design can achieve up to 250 TB/s aggregated data refactoring throughput -- 83% of theoretical peak -- on 1024 nodes of the Summit supercomputer. We showcase our optimized design by applying it to a large-scale scientific visualization workflow and the MGARD lossy compression software.
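To make the notion of multigrid-based refactoring concrete, the following is a minimal, self-contained C++ sketch of the interpolation-residual part of such a decomposition on a 1-D signal of size 2^L + 1. It is an illustration only, not the authors' GPU kernels: the actual MGARD decomposition also applies multilevel correction (projection) steps, and the paper's contribution is the optimized execution of these operations on accelerators.

```cpp
// Conceptual sketch (assumed, simplified): refactor a 1-D signal into a
// coarse approximation plus per-level "coefficients" that record how far
// each dropped node deviates from linear interpolation of its surviving
// neighbors. Smooth data yields small coefficients, which is what makes
// reduced-fidelity storage and lossy compression effective.
#include <cmath>
#include <cstdio>
#include <vector>

// One decomposition level with grid spacing `stride`: nodes at odd multiples
// of `stride` are replaced in place by their interpolation residuals; nodes
// at even multiples form the next, coarser grid.
void decompose_level(std::vector<double>& v, std::size_t stride) {
    for (std::size_t i = stride; i + stride < v.size(); i += 2 * stride) {
        double interp = 0.5 * (v[i - stride] + v[i + stride]);
        v[i] -= interp;  // store multilevel coefficient in place
    }
}

// Inverse of decompose_level: add the interpolant back to each coefficient.
void recompose_level(std::vector<double>& v, std::size_t stride) {
    for (std::size_t i = stride; i + stride < v.size(); i += 2 * stride) {
        double interp = 0.5 * (v[i - stride] + v[i + stride]);
        v[i] += interp;
    }
}

int main() {
    // A tiny 2^3 + 1 = 9 point signal: v[i] = i^2.
    std::vector<double> v(9), orig(9);
    for (std::size_t i = 0; i < v.size(); ++i) orig[i] = v[i] = double(i * i);
    const std::size_t n = v.size() - 1;

    // Full decomposition: repeatedly coarsen by a factor of two.
    for (std::size_t stride = 1; 2 * stride <= n; stride *= 2)
        decompose_level(v, stride);
    // v now holds the coarsest nodal values (indices 0 and n) interleaved
    // with multilevel coefficients; dropping the finest-level coefficients
    // yields a lower-fidelity representation.

    // Recomposition reverses the levels and restores the original data.
    for (std::size_t stride = n / 2; stride >= 1; stride /= 2)
        recompose_level(v, stride);

    double max_err = 0.0;
    for (std::size_t i = 0; i < v.size(); ++i)
        max_err = std::fmax(max_err, std::fabs(v[i] - orig[i]));
    std::printf("round-trip max error: %g\n", max_err);  // expected: 0
    return 0;
}
```

In the paper's setting, each level's interpolation and correction work is mapped onto GPU thread blocks, which is where the reported aggregate throughput comes from; the sketch above only conveys the data layout and the invertibility of the refactoring.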
Recommended Citation
J. Chen et al., "Accelerating Multigrid-Based Hierarchical Scientific Data Refactoring on GPUs," Proceedings of the 35th IEEE International Parallel and Distributed Processing Symposium (2021, Portland, OR), Institute of Electrical and Electronics Engineers (IEEE), May 2021.
Meeting Name
35th IEEE International Parallel and Distributed Processing Symposium (2021: May 17-21, Portland, OR)
Department(s)
Computer Science
Keywords and Phrases
Multigrid; Data Refactoring; GPU
Document Type
Article - Conference proceedings
Document Version
Preprint
File Type
text
Language(s)
English
Rights
© 2021 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.
Publication Date
21 May 2021
Comments
This work was made possible by support from the Department of Energy's Office of Advanced Scientific Computing Research, including via the CODAR and ADIOS projects of the Exascale Computing Project (ECP). This research used resources of the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.