
An Empirical Analysis of Image Compression with Huffman Coding Technique on GPGPU

Journal: International Journal of Scientific Engineering and Technology (IJSET) (Vol.6, No. 2)

Publication Date:

Authors :

Page : 91-95

Keywords : GPGPU; CUDA; MLP; GFLOPS; PRAM;

Source : Download | Find it from : Google Scholar

Abstract

Processing large volumes of medical images involves high computational cost and complexity, and GPU (Graphics Processing Unit) technology provides remarkable acceleration for both graphics and general-purpose computations through its highly threaded parallelism. The General Purpose Graphics Processing Unit (GPGPU, or GP2U) achieves superior performance and efficiency with the support of the Compute Unified Device Architecture (CUDA) programming model. This is the primary motivation for adopting GPGPU technology in the medical field to process large volumes of images such as ultrasound (US), computed tomography (CT), and magnetic resonance imaging (MRI) for disease diagnosis. The Central Processing Unit (CPU) must devote attention to these demanding image computations in addition to its administrative tasks. To overcome this CPU overhead, the GPU coprocessor can effectively operate as an accelerator for image and graphical data. This work reviews medical image compression techniques in the GPU environment, which show profound speedups in computation. The paper presents a comparative study of the traditional Huffman coding compression technique with respect to computational performance when implemented in CPU and GPU environments. The results show that the GPU implementations of the compression technique are, on average, 3.9 times faster than CPU processing.
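
The abstract does not reproduce the authors' kernels, so the sketch below is only a rough illustration of where GPU parallelism enters a Huffman pipeline: the symbol-frequency (histogram) stage written as a CUDA kernel. The image size, grid/block dimensions, and 8-bit symbol alphabet are illustrative assumptions, not values taken from the paper; code-tree construction and bit packing would follow on the host or in further kernels.

// Minimal sketch (not the paper's implementation): the frequency-count stage
// of Huffman coding offloaded to the GPU with CUDA. Each block accumulates a
// shared-memory histogram of pixel values and then merges it into the global
// histogram with atomic adds.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

#define NUM_SYMBOLS 256  // 8-bit grayscale pixel values (assumed alphabet)

__global__ void histogramKernel(const unsigned char *image, size_t n,
                                unsigned int *globalHist) {
    __shared__ unsigned int localHist[NUM_SYMBOLS];
    // Zero the block-local histogram.
    for (int i = threadIdx.x; i < NUM_SYMBOLS; i += blockDim.x)
        localHist[i] = 0;
    __syncthreads();

    // Grid-stride loop: each thread counts a strided subset of the pixels.
    for (size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += (size_t)gridDim.x * blockDim.x)
        atomicAdd(&localHist[image[i]], 1u);
    __syncthreads();

    // Merge the block-local counts into the global histogram.
    for (int i = threadIdx.x; i < NUM_SYMBOLS; i += blockDim.x)
        atomicAdd(&globalHist[i], localHist[i]);
}

int main() {
    const size_t n = 1 << 24;  // e.g. a 16-megapixel 8-bit image (assumed size)
    unsigned char *hImage = (unsigned char *)malloc(n);
    for (size_t i = 0; i < n; ++i) hImage[i] = rand() % NUM_SYMBOLS;

    unsigned char *dImage; unsigned int *dHist;
    cudaMalloc(&dImage, n);
    cudaMalloc(&dHist, NUM_SYMBOLS * sizeof(unsigned int));
    cudaMemcpy(dImage, hImage, n, cudaMemcpyHostToDevice);
    cudaMemset(dHist, 0, NUM_SYMBOLS * sizeof(unsigned int));

    histogramKernel<<<256, 256>>>(dImage, n, dHist);

    unsigned int hHist[NUM_SYMBOLS];
    cudaMemcpy(hHist, dHist, sizeof(hHist), cudaMemcpyDeviceToHost);
    printf("count of symbol 0: %u\n", hHist[0]);

    cudaFree(dImage); cudaFree(dHist); free(hImage);
    return 0;
}

The frequencies gathered this way would feed the usual Huffman tree construction; in a CPU-versus-GPU comparison such as the one the paper describes, this counting stage is the part that benefits most directly from the GPU's threaded parallelism.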

Last modified: 2017-02-02 20:13:32