
VECTOR QUANTIZATION AND SIGNAL COMPRESSION

THE KLUWER INTERNATIONAL SERIES IN ENGINEERING AND COMPUTER SCIENCE
COMMUNICATIONS AND INFORMATION THEORY
Consulting Editor: Robert Gallager

Other books in the series:
Digital Communication, Edward A. Lee, David G. Messerschmitt. ISBN: 0-89838-274-2
An Introduction to Cryptology, Henk C.A. van Tilborg. ISBN: 0-89838-271-8
Finite Fields for Computer Scientists and Engineers, Robert J. McEliece. ISBN: 0-89838-191-6
An Introduction to Error Correcting Codes with Applications, Scott A. Vanstone and Paul C. van Oorschot. ISBN: 0-7923-9017-2
Source Coding Theory, Robert M. Gray. ISBN: 0-7923-9048-2
Switching and Traffic Theory for Integrated Broadband Networks, Joseph Y. Hui. ISBN: 0-7923-9061-X
Advances in Speech Coding, Bishnu Atal, Vladimir Cuperman and Allen Gersho. ISBN: 0-7923-9091-1
Coding: An Algorithmic Approach, John B. Anderson and Seshadri Mohan. ISBN: 0-7923-9210-8
Third Generation Wireless Information Networks, edited by Sanjiv Nanda and David J. Goodman. ISBN: 0-7923-9128-3

VECTOR QUANTIZATION AND SIGNAL COMPRESSION
by Allen Gersho, University of California, Santa Barbara, and Robert M. Gray, Stanford University
SPRINGER SCIENCE+BUSINESS MEDIA, LLC

Library of Congress Cataloging-in-Publication Data:
Gersho, Allen. Vector quantization and signal compression / by Allen Gersho, Robert M. Gray. p. cm. -- (Kluwer international series in engineering and computer science; SECS 159). Includes bibliographical references and index. ISBN 978-1-4613-6612-6. ISBN 978-1-4615-3626-0 (eBook). DOI 10.1007/978-1-4615-3626-0. 1. Signal processing--Digital techniques. 2. Data compression (Telecommunication). 3. Coding theory. I. Gray, Robert M., 1943- . II. Title. III. Series. TK5102.5.G45 1991 621.382 2--dc20 91-28580 CIP

This book was prepared with LaTeX and reproduced by Kluwer from camera-ready copy supplied by the authors. Copyright © 1992 Springer Science+Business Media New York. Eighth Printing 2001.
Originally published by Kluwer Academic Publishers, New York in 1992. Softcover reprint of the hardcover 1st edition 1992. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, mechanical, photo-copying, recording, or otherwise, without the prior written permission of the publisher, Springer Science+Business Media, LLC. Printed on acid-free paper.

to Roberta

that is, a vector quantizer operates on vectors instead of scalars. Shannon called vector quantizers "block source codes with a fidelity criterion" and they have also been called "block quantizers." Much of the material contained in Part III is relatively recent in origin. The development of useful design algorithms and coding structures began in the late 1970s and interest in vector quantization expanded rapidly in the 1980s. Prior to that time digital signal processing circuitry was not fast enough and memories were not large enough to use vector coding techniques in real time, and there was little interest in design algorithms for such codes. The rapid advance in digital signal processor chips in the past decade made possible low-cost implementations of such algorithms that would have been totally infeasible in the 1970s.

During the past ten years, vector quantization has proved a valuable coding technique in a variety of applications, especially in voice and image coding. This is because of its simple structure, its ability to trade ever cheaper memory for often expensive computation, and the often serendipitous structural properties of the codes designed by iterative clustering algorithms. As an example of the desirable structural properties of vector quantizers, suitably designed tree-structured codes are nested and are naturally optimized for progressive transmission applications where one progressively improves a signal (such as an image) as more bits arrive.
Another example is the ability of clustering algorithms used to design vector quantizers to enhance certain features of the original signal, such as small tumors in a medical image. In many applications the traditional scalar techniques remain dominant and likely will remain so, but their vector extensions are finding a steadily increasing niche in signal compression and other signal processing applications.

This book grew out of the authors' long-standing interests in a wide range of theoretical and practical problems in analog-to-digital conversion and data compression. Our common interest in vector quantization and our cooperation and competition date from the late 1970s and continue to the present. Combined with our pedagogical interests, writing this book has been a common goal and chore for years. Other responsibilities and a constant desire to add more new material made progress slower than we would have liked. Many compromises have finally led to a completed book that includes most of the originally intended contents and provides a useful reference and teaching text, but it is less than the perfect treatment of our fantasies. Many interesting and useful techniques omitted here have emerged in the ever widening research literature of vector quantization. We apologize to the authors of such works for this omission. It was not possible to do justice to the entire research literature in the field, and the book has already reached a length that stretches the ability of Kluwer Academic Publishers to produce this volume at a reasonable price.

We cannot claim to be the first to write a book devoted to signal compression. That honor goes to the excellent text by Jayant and Noll, Digital Coding of Waveforms [195]. Their book and ours have little in common, however, except for the common goal of analyzing and designing A/D conversion and signal compression systems.
Our emphasis is far more on vector quantization than was theirs (although they had one of the first basic treatments of vector quantization published in book form). We spend more time on the underlying fundamentals and basic properties and far more time on the rich variety of techniques for vector quantization. Much has happened since 1984 when their book was published and we have tried to describe the more important variations and extensions. While the two books overlap somewhat in their treatment of scalar quantization, our treatment is designed to emphasize all the ideas necessary for the subsequent extensions to vector quantizers. We do not develop traditional techniques in the detail of Jayant and Noll as we reserve the space for the far more detailed development of vector quantization. For example, they provide an extensive coverage of ADPCM, which we only briefly mention in presenting the basic concepts of DPCM. We do not, however, skimp on the fundamentals. Our treatment of entropy coding differs from Jayant and Noll in that we provide more of the underlying theory and treat arithmetic codes and Ziv-Lempel codes as well as the standard Huffman codes. This is not, however, a text devoted to entropy coding and the reader is referred to the cited references for further details.

Synopsis

Part I

Part I contains the underlying theory required to explain and derive the coding algorithms. Chapter 1 introduces the topic and the history, and further discusses the goals of the book. Chapter 2 provides the basic stochastic and linear systems background. Chapter 3 treats sampling, the conversion of a continuous-time waveform into a discrete-time waveform or sequence of sample values. Sampling is the first step in analog-to-digital conversion. Chapter 3 treats both the traditional one-dimensional sampling of a waveform and the two-dimensional sampling used to convert two-dimensional image intensity rasters into a rectangular array of image pixels.
Chapter 4 presents the basics of prediction theory with an emphasis on linear prediction. Prediction forms an essential component of many coding algorithms and the basic theory of prediction provides a guide to the design of such coding methods.

Part II

The traditional scalar coding techniques are developed in Part II. Chapters 5 through 8 treat analog-to-digital conversion techniques that perform the essential coding operation on individual symbols using simple scalar quantization, and Chapter 9 treats entropy coding and its combination with quantization. Chapters 5 and 6 focus on direct coding of scalars by quantization; in Chapter 7 prediction is used prior to quantization, and in Chapter 8 linear transformations on the data are taken before quantization. Chapter 5 treats the basics of simple scalar quantization: the performance characteristics and common high resolution approximations developed by Bennett. Chapter 6 describes the optimality properties of simple quantizers, the structure of high-resolution optimal quantizers, and the basic design algorithm used throughout the book to design codebooks, the algorithm developed by Stuart Lloyd of Bell Laboratories in the mid 1950s. Chapters 7 and 8 build on scalar quantizers by operating on the signal before quantization so as to make the quantization more efficient. Such pre-processing is intended to remove some of the redundancy in the signal, to reduce the signal variance, or to concentrate the signal energy. All of these properties can result in better performance for a given bit rate and complexity if properly used. Chapter 7 concentrates on predictive quantization wherein a linear prediction based on past reconstructed values is removed from the signal and the resulting prediction residual is quantized. In Chapter 8 vectors or blocks of input symbols are transformed by a simple linear and orthogonal transform and the resulting transform coefficients are quantized.
The issues of the optimal transform and bit allocation among the scalar quantizers are treated. The discrete cosine transform is briefly covered and the basic concepts and performance capability of sub-band coding are presented.

Part III

Part III is a detailed exposition of vector quantization fundamentals, design algorithms, and applications. Chapters 10 and 11 extend the fundamentals of scalar quantization of Chapters 5 and 6 to vectors. Chapter 10 provides the motivation, definitions, properties, structures, and figures of merit of vector quantization. Chapter 11 develops the basic optimality properties for vector quantizers and extends the Lloyd clustering algorithm to vectors. A variety of design examples for random processes, speech waveforms, speech models, and images are described and pursued through the subsequent chapters. Chapter 12 considers the shortcomings in terms of complexity and memory of simple memoryless, unconstrained vector quantizers and provides a variety of constrained coding schemes that provide reduced complexity and better performance in trade for a tolerable loss of optimality. Included are tree-structured vector quantization (TSVQ), classified vector quantizers, transform vector quantizers, product codes such as gain/shape and mean-residual vector quantizers, and multistage vector quantizers. Also covered are fast search algorithms for codebook searching, nonlinear interpolative coding, and hierarchical coding.

Chapters 13 and 14 consider vector quantizers with memory, sometimes called recursive vector quantizers or feedback vector quantizers. Chapter 13 treats the extension of predictive quantization to vectors, predictive vector quantization (PVQ). Here vector predictors are used to form a prediction residual of the original input vector and the resulting residual is quantized. This chapter builds on the linear prediction theory of Chapter 4 and develops some vector extensions for more sophisticated systems.
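The closed loop just described can be sketched in a few lines: the encoder quantizes the residual between the input vector and a prediction formed from the previous *reconstructed* vector, so the decoder, which sees only indices, can form the identical prediction and stay in lockstep. A minimal sketch under illustrative assumptions (a trivial previous-reconstruction predictor and a hypothetical residual codebook, not one of the book's designs):

```python
# Predictive VQ sketch: quantize the prediction residual; both encoder and
# decoder update the reconstruction from the same quantized residuals.

def nearest(v, codebook):
    # Index of the codeword at minimum squared Euclidean distance from v.
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(v, codebook[i])))

def pvq_encode(vectors, residual_cb, dim=2):
    recon = [0.0] * dim            # initial reconstruction, agreed with decoder
    indices = []
    for v in vectors:
        pred = recon               # illustrative predictor: previous reconstruction
        residual = [a - b for a, b in zip(v, pred)]
        i = nearest(residual, residual_cb)
        indices.append(i)
        # Closed-loop update: add the *quantized* residual, as the decoder will.
        recon = [p + q for p, q in zip(pred, residual_cb[i])]
    return indices

def pvq_decode(indices, residual_cb, dim=2):
    recon = [0.0] * dim
    out = []
    for i in indices:
        recon = [p + q for p, q in zip(recon, residual_cb[i])]
        out.append(recon)
    return out
```

Because the prediction is based on reconstructed rather than original vectors, decoder output exactly tracks the encoder's internal reconstruction and quantization error does not accumulate.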
Chapter 14 treats finite-state vector quantization (FSVQ) wherein the encoder and decoder are finite-state machines. Like a predictive VQ, a finite-state VQ uses the past to implicitly predict the future and uses a codebook matched to the likely behavior. Unlike a predictive VQ, a finite-state VQ is limited to only a finite number of possible codebooks. Design algorithms and examples are provided for both coding classes.

Chapter 15 is devoted to tree and trellis encoding systems. These systems have decoders like those of predictive and finite-state vector quantizers, but the encoders are allowed to "look ahead" into the future before making their decision as to which bits to send. At the cost of additional delay, such coding methods can provide improved performance by effectively increasing the input vector size while keeping complexity manageable. Variations on the design algorithms of Chapters 13 and 14 which are matched to such look-ahead coders are considered.

Chapter 16 treats adaptive vector quantizers wherein the codebooks are allowed to change in a slow manner relative to the incoming data rate so as to better track local statistical variations. Both forward and backward adaptation are treated and simple gain and mean adaptive systems are described. More complicated adaptive coding techniques such as residual excited linear prediction (RELP) and code excited linear prediction (CELP) are also described.

Chapter 17 is devoted to variable-rate coding, vector quantizers that can use more bits for active signals and fewer for less active signals while preserving an overall average bit rate. Such coding systems can provide a significantly better tradeoff between bit rate and average distortion, but they can be more complex and can require buffering if they are used in conjunction with fixed-rate communication links.
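The finite-state scheme of Chapter 14 described above can also be sketched briefly: encoder and decoder share per-state codebooks and a next-state function driven only by the transmitted index, so the decoder can track the encoder's state without side information. The two-state setup below (states, codebooks, and next-state rule) is an illustrative assumption, not a designed FSVQ:

```python
# Finite-state VQ sketch: the current state selects which codebook is
# searched; the next state depends only on (state, transmitted index),
# so encoder and decoder stay synchronized.

def fsvq_encode(vectors, codebooks, next_state, state=0):
    indices = []
    for v in vectors:
        cb = codebooks[state]
        # Full search within the current state's codebook only.
        i = min(range(len(cb)),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(v, cb[j])))
        indices.append(i)
        state = next_state(state, i)   # decoder applies the same rule
    return indices

def fsvq_decode(indices, codebooks, next_state, state=0):
    out = []
    for i in indices:
        out.append(codebooks[state][i])
        state = next_state(state, i)
    return out
```

With, say, a "low-activity" codebook for state 0 and a "high-activity" codebook for state 1, the state acts as an implicit prediction of which region of the signal space the next vector will fall in.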
The performance improvement often merits any such increase in complexity, however, and the complexity may in fact be reduced in applications that are inherently variable rate such as storage channels and communication networks. Much of Chapter 17 consists of taking advantage of the similarities of variable-rate vector quantizers and decision trees for statistical pattern classification in order to develop coder design algorithms for unbalanced tree-structured vector quantizers. Methods of growing and pruning such tree-structured coders are detailed. As vector quantizers can be used in conjunction with entropy coding to obtain even further compression at the expense of the added complication and the necessity of variable-rate coding, the design of vector quantizers specifically for such applications is considered. Such entropy-constrained vector quantizers are seen to provide excellent compression if one is willing to pay the price. The techniques for designing variable-rate vector quantizers are shown to provide a simple and exact solution to the bit allocation problem introduced in Chapter 8 and important for a variety of vector quantizer structures, including classified and transform vector quantizers.

Instructional Use

This book is intended both as a reference text and for use in a graduate Electrical Engin