Rate–distortion theory
Rate–distortion theory is a major branch of information theory which provides the theoretical foundations for lossy data compression. It addresses the problem of determining the minimal number of bits per symbol, as measured by the rate R, that must be communicated over a channel so that the source (input signal) can be approximately reconstructed at the receiver (output signal) without exceeding an expected distortion D. Rate–distortion theory gives an analytical expression for how much compression can be achieved using lossy compression methods.
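The tradeoff itself is captured by the rate–distortion function. In the standard textbook formulation (notation assumed here, not taken from the passage above), for a source X, reconstruction X̂, and distortion measure d:

    R(D) = \min_{p(\hat{x} \mid x)\,:\; \mathbb{E}[d(X,\hat{X})] \le D} I(X; \hat{X})

that is, the least mutual information between source and reconstruction over all conditional distributions that meet the distortion budget D.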
Lossless compression algorithm
[Figure: comparison of image compression between the JPG (left) and PNG (right) formats; PNG uses lossless compression.]
A lossless compression algorithm is any coding procedure whose goal is to represent a given quantity of information in a smaller space, in a way that permits exact reconstruction of the original data. That is, lossless compression covers the techniques that produce an exact duplicate of the input data stream after one compression/expansion cycle.
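A minimal sketch of that exact round-trip property, using Python's standard zlib module (the sample payload is illustrative):

    import zlib

    data = b"example stream, example stream, example stream"

    compressed = zlib.compress(data, level=9)   # DEFLATE, a lossless codec
    restored = zlib.decompress(compressed)

    # One compression/expansion cycle must yield an exact duplicate.
    assert restored == data
    print(len(data), "->", len(compressed), "bytes")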
Discrete cosine transform
The discrete cosine transform (DCT) is a transform closely related to the discrete Fourier transform (DFT). Its projection kernel is a cosine, so it produces real coefficients, unlike the DFT, whose kernel is a complex exponential and which therefore produces complex coefficients. The DCT can nevertheless be expressed in terms of the DFT, applied in that case to the symmetrically extended signal.
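A numpy sketch of that last remark, assuming SciPy's unnormalized DCT-II convention for the cross-check: the signal is extended symmetrically, a length-2N FFT is taken, and a phase factor leaves purely real coefficients (the function name is illustrative).

    import numpy as np
    from scipy.fft import dct  # reference implementation for the check

    def dct2_via_fft(x):
        # DCT-II computed through the DFT of the even-extended signal.
        n = len(x)
        y = np.concatenate([x, x[::-1]])           # even extension, length 2N
        Y = np.fft.fft(y)
        k = np.arange(n)
        return np.real(np.exp(-1j * np.pi * k / (2 * n)) * Y[:n])

    x = np.random.default_rng(0).standard_normal(8)
    assert np.allclose(dct2_via_fft(x), dct(x))    # scipy default: DCT-II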
Pearson correlation coefficient
In statistics, the Pearson correlation coefficient (PCC) is a correlation coefficient that measures linear correlation between two sets of data. It is the ratio between the covariance of two variables and the product of their standard deviations; thus, it is essentially a normalized measurement of the covariance, such that the result always has a value between −1 and 1. As with covariance itself, the measure can only reflect a linear correlation of variables, and ignores many other types of relationships or correlations.
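In the usual notation (a standard convention, not taken from the passage above), for random variables X and Y with means μ_X, μ_Y and standard deviations σ_X, σ_Y:

    \rho_{X,Y} = \frac{\operatorname{cov}(X,Y)}{\sigma_X \sigma_Y}
               = \frac{\mathbb{E}[(X-\mu_X)(Y-\mu_Y)]}{\sigma_X \sigma_Y}

The normalization by σ_X σ_Y is what confines the value to [−1, 1].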
Bit rate
The bit rate is a measure of the quantity of digital data transmitted per unit of time. Under its normative definitions, it is expressed in bits per second (bit/s, b/s, or bps) or in one of its multiples using the prefixes of the International System of Units (SI): kb/s (kilobits per second), Mb/s (megabits per second), and so on. In computing, the rate is sometimes expressed in bytes per second. One byte (octet) equals 8 bits, the number of bits used by the earliest and simplest machines, and sufficient to transmit one alphanumeric character.
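A one-line illustration of the byte/bit distinction (the rate value is made up for the example):

    # One octet (byte) is 8 bits, so a rate quoted in bytes per second
    # is 8x smaller than the same rate quoted in bits per second.
    rate_bytes_per_s = 1_500_000            # 1.5 MB/s, illustrative
    rate_bits_per_s = rate_bytes_per_s * 8  # 12_000_000 bit/s = 12 Mb/s
    print(rate_bits_per_s)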
Linear network coding
In computer networking, linear network coding is a technique in which intermediate nodes transmit data from source nodes to sink nodes by means of linear combinations. Linear network coding may be used to improve a network's throughput, efficiency, and scalability, as well as its resistance to attacks and eavesdropping. The nodes of a network take several packets and combine them for transmission. This process may be used to attain the maximum possible information flow in a network.
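A minimal sketch of the idea over GF(2), where a linear combination is just a bitwise XOR (the payload contents are illustrative): an intermediate node forwards the XOR of two packets, and a sink that already holds one of them recovers the other.

    import os

    p1 = os.urandom(16)   # packet from source A (random illustrative payload)
    p2 = os.urandom(16)   # packet from source B

    # The intermediate node transmits the linear combination p1 + p2 over GF(2).
    coded = bytes(a ^ b for a, b in zip(p1, p2))

    # A sink holding p1 and the coded packet recovers p2 by XORing again.
    recovered = bytes(a ^ b for a, b in zip(coded, p1))
    assert recovered == p2

This is the mechanism behind the classic butterfly-network example, where coding at the bottleneck link achieves the max-flow bound that plain forwarding cannot.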
Intraclass correlation
In statistics, the intraclass correlation, or the intraclass correlation coefficient (ICC), is a descriptive statistic that can be used when quantitative measurements are made on units that are organized into groups. It describes how strongly units in the same group resemble each other. While it is viewed as a type of correlation, unlike most other correlation measures, it operates on data structured as groups rather than data structured as paired observations.
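One common estimator, the one-way random-effects form (a standard ANOVA-based convention, not taken from the passage above), for groups of size k with between-group and within-group mean squares MS_B and MS_W:

    \mathrm{ICC} = \frac{MS_B - MS_W}{MS_B + (k-1)\,MS_W}

Values near 1 indicate that units within a group are much more alike than units drawn from different groups.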
Digital image processing
Digital image processing is the use of a digital computer to process digital images through an algorithm. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and distortion during processing. Since images are defined over two dimensions (perhaps more), digital image processing may be modeled in the form of multidimensional systems.
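A small sketch of the multidimensional view: an image is a 2-D array, and even a simple algorithm such as a 3x3 mean (box) filter operates along both dimensions at once (the synthetic image and filter size are illustrative).

    import numpy as np
    from scipy.ndimage import uniform_filter

    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(64, 64)).astype(float)  # synthetic 8-bit image

    smoothed = uniform_filter(image, size=3)  # 3x3 neighborhood average
    print(image.shape, smoothed.shape)        # both (64, 64)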
Compression artifact
A compression artifact (or artefact) is a noticeable distortion of media (including images, audio, and video) caused by the application of lossy compression. Lossy data compression involves discarding some of the media's data so that it becomes small enough to be stored within the desired disk space or transmitted (streamed) within the available bandwidth (known as the data rate or bit rate). If the compressor cannot store enough data in the compressed version, the result is a loss of quality, or introduction of artifacts.
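A toy sketch of where such artifacts come from, using coarse quantization of transform coefficients in the spirit of JPEG-style coding (the block contents and quantizer step are illustrative):

    import numpy as np
    from scipy.fft import dctn, idctn

    rng = np.random.default_rng(1)
    block = rng.integers(0, 256, size=(8, 8)).astype(float)  # one 8x8 image block

    step = 64.0                                  # coarse quantizer step, illustrative
    coeffs = dctn(block, norm="ortho")
    quantized = np.round(coeffs / step) * step   # data discarded here is gone for good
    reconstructed = idctn(quantized, norm="ortho")

    print("max error:", np.abs(block - reconstructed).max())  # nonzero: artifacts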
Huffman coding
Huffman coding is a lossless data compression algorithm. It uses a variable-length code to represent each source symbol (for example, a character in a file). The code is determined from an estimate of the probabilities of occurrence of the source symbols, with short codes assigned to the most frequent symbols. A Huffman code is optimal, in the sense of minimal expected length, among per-symbol codes for a known probability distribution.
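A compact sketch of the classic construction with Python's heapq (the function name and sample string are illustrative): the two least-frequent subtrees are merged repeatedly, so frequent symbols end up near the root with short codes.

    import heapq
    from collections import Counter

    def huffman_code(text):
        freq = Counter(text)
        # Heap entries: (weight, unique tiebreaker, {symbol: code so far}).
        heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freq.items())]
        heapq.heapify(heap)
        while len(heap) > 1:
            w1, _, c1 = heapq.heappop(heap)   # two least frequent subtrees
            w2, i, c2 = heapq.heappop(heap)
            # Prepending 0/1 on each merge lengthens rare symbols' codes first.
            merged = {s: "0" + c for s, c in c1.items()}
            merged.update({s: "1" + c for s, c in c2.items()})
            heapq.heappush(heap, (w1 + w2, i, merged))
        return heap[0][2]

    codes = huffman_code("this is an example of a huffman tree")
    print(sorted(codes.items(), key=lambda kv: len(kv[1])))

The resulting code is prefix-free, so the encoded bit stream can be decoded symbol by symbol without separators.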