TSM2: Optimizing Tall-And-Skinny Matrix-Matrix Multiplication on GPUs
Linear algebra operations are widely used in big data analytics and scientific computation. Much work has been done on optimizing linear algebra operations on GPUs with regular-shaped input, but few efforts focus on fully utilizing GPU resources when the input is not regular-shaped. Current optimizations do not fully exploit memory bandwidth and computing power, and therefore achieve only sub-optimal performance. In this paper, we propose TSM2, a high-performance tall-and-skinny matrix-matrix multiplication algorithm for GPUs that focuses on optimizing linear algebra operations with non-regular-shaped input. We implement the proposed algorithm and test it on three Nvidia GPU micro-architectures: Kepler, Maxwell, and Pascal. Experiments show that TSM2 speeds up computation by 1.1x - 3x, improves memory bandwidth utilization by 8% - 47.6%, and improves computing power utilization by 7% - 37.3% compared to current state-of-the-art work. We replace the original matrix operations in K-means and Algorithm-Based Fault Tolerance (ABFT) with TSM2 and achieve up to 1.89x and 1.90x speedups, respectively.
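As a minimal illustration of the problem shape (not the paper's CUDA kernel), a "tall-and-skinny" GEMM multiplies a large matrix by a very narrow one, so one output dimension is tiny compared to the other; the sizes below are hypothetical:

```python
import numpy as np

# Hypothetical sizes: n is large, k is small (tall-and-skinny regime).
n, k = 2000, 4

rng = np.random.default_rng(0)
A = rng.random((n, n))   # large square operand
B = rng.random((n, k))   # tall-and-skinny operand: n >> k

# The product C = A @ B is also tall-and-skinny (n x k). On GPUs, this
# shape tends to be memory-bandwidth-bound rather than compute-bound,
# which is the case TSM2 targets.
C = A @ B
print(C.shape)  # (2000, 4)
```

This kind of shape arises in the paper's use cases, e.g. K-means (distances of many points to a few centroids) and ABFT checksum computations.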
J. Chen et al., "TSM2: Optimizing Tall-And-Skinny Matrix-Matrix Multiplication on GPUs," Proceedings of the ACM International Conference on Supercomputing (2019, Phoenix, AZ), pp. 106-116, Association for Computing Machinery (ACM), Jun 2019.
The definitive version is available at https://doi.org/10.1145/3330345.3330355
ACM International Conference on Supercomputing (2019: Jun. 26-28, Phoenix, AZ)
Keywords and Phrases
GEMM; GPU; Matrix-Matrix Multiplication; Optimization; Tall-And-Skinny
Article - Conference proceedings
© 2019 Association for Computing Machinery (ACM), All rights reserved.
26 Jun 2019