Lossy Compression in Near-Linear Time via Efficient Random Codebooks and Databases

Gioran, C. and Kontoyiannis, I. Lossy Compression in Near-Linear Time via Efficient Random Codebooks and Databases. (Unpublished)

Full text not available from this repository.

Abstract

The compression-complexity trade-off of lossy compression algorithms based on a random codebook or a random database is examined. Motivated, in part, by recent results of Gupta-Verdú-Weissman (GVW) and their underlying connections with the pattern-matching scheme of Kontoyiannis' lossy Lempel-Ziv algorithm, we introduce a non-universal version of the lossy Lempel-Ziv method (termed LLZ). The optimality of LLZ for memoryless sources is established, and its performance is compared to that of the GVW divide-and-conquer approach. Experimental results indicate that the GVW approach often yields better compression than LLZ, but at the price of much higher memory requirements. To combine the advantages of both, we introduce a hybrid algorithm (HYB) that utilizes both the divide-and-conquer idea of GVW and the single-database structure of LLZ. It is proved that HYB shares with GVW the exact same rate-distortion performance and implementation complexity, while, like LLZ, requiring less memory, by a factor which may become unbounded, depending on the choice of the relevant design parameters. Experimental results are also presented, illustrating the performance of all three methods on data generated by simple discrete memoryless sources. In particular, the HYB algorithm is shown to outperform existing schemes for the compression of some simple discrete sources with respect to the Hamming distortion criterion.
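
For context, the sketch below illustrates the basic random-codebook setting the abstract refers to: a binary memoryless source encoded under Hamming distortion by brute-force search over an i.i.d. random codebook. It is a minimal illustrative example, not the LLZ, GVW, or HYB scheme; the parameters (n, R, p, q) and the nearest-codeword search are assumptions made only for the demonstration.

# Minimal sketch of plain random-codebook lossy compression of a
# Bernoulli(p) source under Hamming distortion. All parameter values
# below are hypothetical choices for illustration.
import numpy as np

rng = np.random.default_rng(0)

def random_codebook(n, R, q, rng):
    # Draw M = 2^{nR} i.i.d. Bernoulli(q) codewords of length n.
    M = int(np.ceil(2 ** (n * R)))
    return rng.random((M, n)) < q

def encode_block(x, codebook):
    # Return the index of the codeword closest to x in Hamming distortion.
    dists = (codebook != x).mean(axis=1)
    i = int(np.argmin(dists))
    return i, float(dists[i])

n, R, p, q = 16, 0.5, 0.5, 0.5          # block length, rate, source and codebook biases
codebook = random_codebook(n, R, q, rng)

x = rng.random(n) < p                    # one source block
idx, d = encode_block(x, codebook)       # encoder output: an index of ~nR bits
x_hat = codebook[idx]                    # decoder simply looks the codeword up

print("rate  ~ %.2f bits/symbol" % (np.log2(len(codebook)) / n))
print("distortion = %.3f (Hamming)" % d)

In this naive form the encoder searches, and stores, all 2^{nR} codewords of length n, so both time and memory grow exponentially in the block length; this is the compression-complexity (and memory) trade-off that the database-based constructions compared in the paper are designed to mitigate.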

Item Type: Article
Uncontrolled Keywords: cs.IT math.IT
Subjects: UNSPECIFIED
Divisions: Div F > Signal Processing and Communications
Depositing User: Cron Job
Date Deposited: 08 Jan 2018 20:12
Last Modified: 18 Aug 2020 12:42