Digital images are becoming increasingly popular thanks to the development of, and easier access to, systems that generate them (e.g., cameras, scanners, imaging software). A digital image basically corresponds to a 2D discrete set of regularly spaced samples, called pixels, where each pixel carries the light intensity information (e.g., luminance, chrominance) of a very localized spatial region of the image. In the case of natural images, pixel values are acquired through one or several arrays of MOS sensors (Charge-Coupled Devices, CCDs), each generating an electrical signal proportional to the incoming light intensity.

The initial purposes of digital images were storage on a dedicated medium (e.g., camera memory, computer hard drive, CD-ROM), possible transmission, and final display on a screen or on paper. With such a narrow scope, the principal goal of image processing and coding tools was to cope with storage and transmission bandwidth limitations through efficient compression algorithms that reduce the size of the image representation. However, with recent developments in computing, algorithms and telecommunications, many new applications (e.g., web publishing, remote browsing) have arisen. They generally require additional and enhanced features (e.g., progressive decoding, random access, region-of-interest support, robustness to transmission errors) and have motivated the creation of a new generation of coding algorithms which, besides good compression performance, offer many other useful features.

Hence, digital images are almost never represented as a simple set of pixel values (raw representation) but, instead, in a specific compact form (compressed or coded representation), chosen according to the features it brings to the considered application. A compressed version of an image is obtained by removing as much spatial, visual and statistical redundancy as possible, using appropriate coding methods, while keeping an acceptable visual quality. Since natural images have most of their energy concentrated in low-frequency components, recent coding algorithms generally first transform the image into a specific frequency domain (e.g., DCT, DWT). The goal is to obtain a representation where few coefficients are sufficient to reconstruct the image with good quality. The precision of the transformed coefficients is then generally reduced by quantization, making them more compressible by an entropy coder that removes the statistical redundancy of the quantization indexes. The final compressed representation, called a codestream, is usually obtained by a rate-allocation process that seeks the best trade-off between compression ratio and reconstructed image quality. JPEG 2000, the new still image coding standard developed by the Joint Photographic Experts Group (JPEG), is based on these state-of-the-art compression techniques, but is also designed to provide the enhanced features required by these new applications.
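
As a rough illustration of the transform-and-quantize principle described above (and not of the actual JPEG 2000 algorithm, which relies on the DWT and on far more elaborate entropy coding and rate allocation), the following Python sketch applies an orthonormal DCT to a single 8x8 block and quantizes the resulting coefficients with a uniform step. All function names, the step size and the toy block are illustrative assumptions.

    import numpy as np

    def dct_matrix(n):
        # Orthonormal DCT-II basis matrix of size n x n.
        k = np.arange(n)
        c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        c[0, :] = np.sqrt(1.0 / n)
        return c

    def transform_and_quantize(block, step):
        # Forward 2D separable DCT followed by uniform scalar quantization.
        c = dct_matrix(block.shape[0])
        coeffs = c @ block @ c.T
        indices = np.round(coeffs / step)   # quantization indexes handed to the entropy coder
        return coeffs, indices

    # Toy 8x8 block with smooth content, typical of natural images.
    block = np.add.outer(np.arange(8.0), np.arange(8.0)) * 4.0
    block -= block.mean()                   # remove the DC offset for clarity

    coeffs, indices = transform_and_quantize(block, step=16.0)

    # Energy compaction: only a handful of quantized coefficients are non-zero,
    # so an entropy coder can represent the block very compactly.
    print("non-zero quantized coefficients:", np.count_nonzero(indices), "out of", indices.size)

    # Dequantize and invert the transform to observe the rate/quality trade-off.
    c = dct_matrix(8)
    recon = c.T @ (indices * 16.0) @ c
    print("max absolute reconstruction error:", np.abs(recon - block).max())

Because the block is smooth, its energy concentrates in a few low-frequency coefficients; after quantization most indexes are zero, which is precisely the redundancy an entropy coder exploits, at the cost of a small reconstruction error controlled by the quantization step.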