Register-Aware Optimizations for Parallel Sparse Matrix-Matrix Multiplication
Journal article, Peer reviewed
Original version: International Journal of Parallel Programming, 2018. DOI: 10.1007/s10766-018-0604-8
General sparse matrix–matrix multiplication (SpGEMM) is a fundamental building block of a number of high-level algorithms and real-world applications. In recent years, several efficient SpGEMM algorithms have been proposed for many-core processors such as GPUs. However, their implementations of sparse accumulators, the core component of SpGEMM, mostly use low-speed on-chip shared memory and global memory, while the high-speed registers are seriously underutilised. In this paper, we propose three novel register-aware SpGEMM algorithms, one for each of three representative sparse accumulators: sort, merge and hash. We fully utilise the GPU registers to fetch data, perform computations and write results back. In the experiments, our algorithms deliver excellent performance on a benchmark suite of 205 sparse matrices from the SuiteSparse Matrix Collection. Specifically, on an Nvidia Pascal P100 GPU, our three register-aware sparse accumulators achieve on average 2.0× (up to 5.4×), 2.6× (up to 10.5×) and 1.7× (up to 5.2×) speedups over their original implementations in the libraries bhSPARSE, RMerge and NSPARSE, respectively.
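To illustrate what a sparse accumulator does, the sketch below shows a hash-style accumulator computing one row of C = A·B with both matrices in CSR form. This is a hypothetical CPU illustration of the general hash-accumulator pattern, not the paper's GPU implementation (which keeps the table in registers rather than in a Python dict); all names here are illustrative.

```python
# Sketch of a hash sparse accumulator for one row of C = A @ B (CSR inputs).
# On a GPU, the accumulation table would live in registers/shared memory;
# here a plain dict plays that role to show the access pattern.

def spgemm_row_hash(row, A_ptr, A_idx, A_val, B_ptr, B_idx, B_val):
    """Accumulate row `row` of C = A @ B using a dict as the hash table."""
    acc = {}
    # For each nonzero A[row, col_a], scatter-add the scaled row B[col_a, :].
    for k in range(A_ptr[row], A_ptr[row + 1]):
        col_a, val_a = A_idx[k], A_val[k]
        for j in range(B_ptr[col_a], B_ptr[col_a + 1]):
            acc[B_idx[j]] = acc.get(B_idx[j], 0.0) + val_a * B_val[j]
    # Emit the row in sorted column order, as CSR requires.
    cols = sorted(acc)
    return cols, [acc[c] for c in cols]

# 2x2 example: A = [[1, 2], [0, 3]], B = [[4, 0], [1, 5]]
A_ptr, A_idx, A_val = [0, 2, 3], [0, 1, 1], [1.0, 2.0, 3.0]
B_ptr, B_idx, B_val = [0, 1, 3], [0, 0, 1], [4.0, 1.0, 5.0]
print(spgemm_row_hash(0, A_ptr, A_idx, A_val, B_ptr, B_idx, B_val))
# Row 0 of C is [6, 10]: columns [0, 1], values [6.0, 10.0]
```

The sort and merge accumulators differ only in how partial products are combined: sort gathers all products for a row and sorts them by column index before reducing duplicates, while merge repeatedly merges pre-sorted partial rows.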