Research Article
Design and Implementation for GPU-based Seamless Rate Adaptive Decoder
@INPROCEEDINGS{10.4108/icst.chinacom.2014.256356,
  author={Lu Qiu and Min Wang and Jun Wu and Zhifeng Zhang and Xinlin Huang},
  title={Design and Implementation for GPU-based Seamless Rate Adaptive Decoder},
  proceedings={9th International Conference on Communications and Networking in China},
  publisher={IEEE},
  proceedings_a={CHINACOM},
  year={2015},
  month={1},
  keywords={gpu seamless rate adaptation massive parallel computing cuda},
  doi={10.4108/icst.chinacom.2014.256356}
}
- Lu Qiu
- Min Wang
- Jun Wu
- Zhifeng Zhang
- Xinlin Huang
Year: 2015
Design and Implementation for GPU-based Seamless Rate Adaptive Decoder
CHINACOM
IEEE
DOI: 10.4108/icst.chinacom.2014.256356
Abstract
Recently, rate adaptation at the receiver has attracted widespread attention. Seamless rate adaptation (SRA) is one of the most promising rate adaptation schemes for wireless communication systems; however, the high complexity of its decoding hinders its application. The graphics processing unit (GPU) provides a low-cost, flexible, software-based multi-core architecture for high-performance computing. This paper proposes a GPU design and implementation of an SRA decoder. First, we discuss the parallelism of the SRA decoding algorithm. To improve the throughput of the GPU-based SRA decoder, a massively parallel architecture consisting of N×L parallel threads is adopted. Taking the GPU hardware architecture fully into account, we partition the work into blocks and select an appropriate number of threads per block to further improve throughput. In addition, we propose an efficient memory-usage mechanism that takes full advantage of the shared memory within each block. Finally, we implement the SRA decoder on the Compute Unified Device Architecture (CUDA) platform. The GPU-based SRA decoder is flexible with respect to the measurement matrix and achieves a 60x speedup over its single-threaded counterpart running on a central processing unit (CPU).
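The abstract describes the thread organization (N×L parallel threads arranged into blocks, with shared memory staged per block) but not the decoding update itself. The following minimal CUDA sketch illustrates only that launch geometry and per-block shared-memory usage under assumptions of our own: the kernel name sra_update_kernel, the constants N and L, the row-major measurement matrix phi, and the placeholder per-thread update are all hypothetical and are not the paper's algorithm.

```cuda
// Sketch of an N x L thread layout: N blocks, L threads per block.
// The per-thread "update" below is a placeholder; a real SRA decoder
// would apply its own message-passing rule at that point.
#include <cuda_runtime.h>
#include <cstdio>

#define L 128   // threads per block (assumed; a multiple of the warp size)

__global__ void sra_update_kernel(const float *phi,     // measurement matrix, N x L, row-major (assumed layout)
                                  const float *msg_in,  // incoming messages
                                  float *msg_out,       // outgoing messages
                                  int n_rows)
{
    // One block per matrix row, one thread per column entry.
    __shared__ float row[L];          // stage one row of phi in fast shared memory
    int r = blockIdx.x;
    int c = threadIdx.x;
    if (r >= n_rows) return;

    row[c] = phi[r * L + c];          // coalesced global load into shared memory
    __syncthreads();                  // make the staged row visible to all threads in the block

    // Placeholder update: combine the cached matrix entry with the input message.
    msg_out[r * L + c] = row[c] * msg_in[r * L + c];
}

int main()
{
    const int N = 1024;               // number of rows, i.e. number of blocks (assumed)
    size_t bytes = (size_t)N * L * sizeof(float);

    float *phi, *msg_in, *msg_out;
    cudaMallocManaged(&phi, bytes);
    cudaMallocManaged(&msg_in, bytes);
    cudaMallocManaged(&msg_out, bytes);
    for (int i = 0; i < N * L; ++i) { phi[i] = 1.0f; msg_in[i] = 0.5f; }

    // N x L parallel threads: N blocks of L threads each.
    sra_update_kernel<<<N, L>>>(phi, msg_in, msg_out, N);
    cudaDeviceSynchronize();

    printf("msg_out[0] = %f\n", msg_out[0]);
    cudaFree(phi); cudaFree(msg_in); cudaFree(msg_out);
    return 0;
}
```

The block/thread split mirrors the partitioning strategy the abstract mentions: choosing L as a multiple of the warp size and staging the reused matrix row in shared memory are common ways to raise throughput on CUDA hardware, but the specific values and update rule used in the paper are not given here.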