Research Article
Parallel Sparse Matrix Vector Multiplication on Intel MIC: Performance Analysis
@INPROCEEDINGS{10.1007/978-3-319-94180-6_29,
  author={Hana Alyahya and Rashid Mehmood and Iyad Katib},
  title={Parallel Sparse Matrix Vector Multiplication on Intel MIC: Performance Analysis},
  booktitle={Smart Societies, Infrastructure, Technologies and Applications. First International Conference, SCITA 2017, Jeddah, Saudi Arabia, November 27--29, 2017, Proceedings},
  year={2018},
  month={7},
  keywords={SpMV; Intel Many Integrated Core Architecture (MIC); KNC; OpenMP; CSR; Xeon Phi},
  doi={10.1007/978-3-319-94180-6_29}
}
Hana Alyahya
Rashid Mehmood
Iyad Katib
Year: 2018
Parallel Sparse Matrix Vector Multiplication on Intel MIC: Performance Analysis
SCITA
Springer
DOI: 10.1007/978-3-319-94180-6_29
Abstract
Numerous important scientific and engineering applications rely on, and are hindered by, the intensive computational and storage requirements of the sparse matrix-vector multiplication (SpMV) operation. SpMV also forms an important part of many (stationary and non-stationary) iterative methods for solving systems of linear equations. Its performance is affected by factors including the storage format used for the sparse matrix, the specific computational algorithm, and its implementation. While SpMV performance has been studied extensively on conventional CPU architectures, research on its performance on emerging architectures, such as the Intel Many Integrated Core (MIC) architecture, is still in its infancy. In this paper, we provide a performance analysis of a parallel implementation of SpMV on the first generation of the Intel Xeon Phi coprocessor (Intel MIC), codenamed Knights Corner (KNC). We use the offload programming model to offload the SpMV computation to the MIC using OpenMP. We measure performance in terms of execution time, offloading time, and memory usage. Compared to the sequential implementation, we achieve speedups of up to 11.63x in execution time and 3.62x in offloading time using up to 240 threads. Memory usage varies with the size of the sparse matrix and the number of its non-zero elements.