A Multiple Processor Approach to Data Compression
Fast data compression is necessary for efficient use of computer storage and transmission line bandwidth. The importance of utilizing multiple-processor computers to speed up this task grows as the availability of these machines increases. The approach taken in this paper is to distribute the task of data compression among multiple processors in a pipeline architecture. The implementation of this technique was found to be effective at providing increased compression speeds as the number of processors increased. The ability of this algorithm to use different numbers of processors for compression than for decompression provides a basis for its wider use and acceptance. The parallel algorithm was implemented on an Intel iPSC/860 hypercube machine. A wide variety of data sets found in the literature were tested with this technique. Experimental results for speedup and compression ratio are presented. The results provide valuable insight into the effects of software architecture selection and complexity on the compression ratio and speed.
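The paper's specific pipeline decomposition is not reproduced here, but the core idea the abstract describes — dividing compression work across processors, with the decompression side free to use a different processor count — can be illustrated with a minimal data-parallel sketch. The block size, worker counts, and use of zlib below are illustrative assumptions, not details from the paper:

```python
# Illustrative sketch only (not the paper's algorithm): split the input into
# fixed-size blocks, compress the blocks concurrently in a process pool, and
# reassemble the compressed blocks in order.
import zlib
from multiprocessing import Pool

BLOCK_SIZE = 64 * 1024  # assumed block size for illustration


def compress_block(block: bytes) -> bytes:
    # Each block is compressed independently, so blocks can be
    # handled by different processors.
    return zlib.compress(block)


def parallel_compress(data: bytes, workers: int = 4) -> list:
    blocks = [data[i:i + BLOCK_SIZE]
              for i in range(0, len(data), BLOCK_SIZE)]
    with Pool(workers) as pool:
        return pool.map(compress_block, blocks)


def decompress(blocks: list) -> bytes:
    # Decompression here runs on a single process, mirroring the
    # abstract's point that the number of processors used for
    # decompression need not match the number used for compression.
    return b"".join(zlib.decompress(b) for b in blocks)


if __name__ == "__main__":
    original = b"abracadabra " * 50_000
    compressed = parallel_compress(original, workers=4)
    assert decompress(compressed) == original
```

Because each block is compressed in isolation, this scheme trades a small loss of compression ratio (no cross-block redundancy is exploited) for near-linear scaling with the number of workers, a trade-off of the same kind the paper's experimental results examine.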
J. L. Simpson and C. Sabharwal, "A Multiple Processor Approach to Data Compression," Proceedings of the 1998 ACM Symposium on Applied Computing, Association for Computing Machinery (ACM), Jan 1998.
The definitive version is available at http://dx.doi.org/10.1145/330560.331007
Keywords and Phrases
Data Compression; Process Modeling; System Modeling; Entropy; Parallel algorithms
Article - Conference proceedings
© 1998 Association for Computing Machinery (ACM), All rights reserved.