Digital image processingDigital image processing is the use of a digital computer to process digital images through an algorithm. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and distortion during processing. Since images are defined over two dimensions (perhaps more), digital image processing may be modeled in the form of multidimensional systems.
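As a toy illustration of treating a digital image as a 2-D array processed by an algorithm, the sketch below (NumPy, with illustrative values not taken from the article) applies a 3×3 box blur to a small grayscale image.

```python
# A minimal sketch of digital image processing as a 2-D array operation:
# a 3x3 box blur applied to a synthetic grayscale image. The image and
# kernel here are illustrative only.
import numpy as np

def box_blur(image: np.ndarray) -> np.ndarray:
    """Average each pixel with its 8 neighbours (edges handled by padding)."""
    padded = np.pad(image, 1, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + image.shape[0],
                          1 + dx : 1 + dx + image.shape[1]]
    return out / 9.0

image = np.random.randint(0, 256, size=(8, 8)).astype(float)  # toy 8x8 image
blurred = box_blur(image)
print(blurred.shape)  # (8, 8)
```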
Color quantizationIn computer graphics, color quantization or color image quantization is quantization applied to color spaces; it is a process that reduces the number of distinct colors used in an image, usually with the intention that the new image should be as visually similar as possible to the original image. Computer algorithms to perform color quantization on bitmaps have been studied since the 1970s. Color quantization is critical for displaying images with many colors on devices that can only display a limited number of colors, usually due to memory limitations, and enables efficient compression of certain types of images.
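As a rough illustration, the sketch below applies one of the simplest quantization strategies, snapping each RGB channel to a few evenly spaced levels; adaptive methods such as median cut or k-means instead build the reduced palette from the image itself. The values and function names are illustrative assumptions, not from the article.

```python
# A minimal sketch of uniform color quantization: each 8-bit channel is
# snapped to `levels` evenly spaced values, reducing the number of distinct
# colors in the image. Illustrative only; real quantizers pick palettes adaptively.
import numpy as np

def uniform_quantize(image: np.ndarray, levels: int = 4) -> np.ndarray:
    """Snap each 8-bit channel to `levels` evenly spaced values."""
    step = 255 / (levels - 1)
    return (np.round(image / step) * step).astype(np.uint8)

rgb = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)  # toy image
quantized = uniform_quantize(rgb, levels=4)
print(len(np.unique(quantized.reshape(-1, 3), axis=0)), "distinct colors remain")
```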
Resolving powerThe resolving power, also called separating power, spatial resolution, or angular resolution, expresses the ability of an optical measurement or observation system – microscopes, telescopes, or the eye, but also certain detectors, particularly those used in – to distinguish details. It can be characterized by the minimum angle or distance that must separate two contiguous points for them to be correctly distinguished.
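One common way to make that minimum angle concrete is the Rayleigh criterion for a circular aperture; the excerpt above does not name a specific formula, so the sketch below is offered only as an illustrative assumption.

```python
# A small numeric illustration, assuming the Rayleigh criterion
# (theta ≈ 1.22 * wavelength / aperture diameter) as the measure of the
# minimum resolvable angle; the excerpt above does not name this formula.
import math

def rayleigh_limit(wavelength_m: float, aperture_m: float) -> float:
    """Minimum resolvable angle in radians for a circular aperture."""
    return 1.22 * wavelength_m / aperture_m

# Example: green light (550 nm) through a 100 mm telescope aperture.
theta = rayleigh_limit(550e-9, 0.100)
print(f"{math.degrees(theta) * 3600:.2f} arcseconds")  # roughly 1.4"
```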
ComputationA computation is any type of arithmetic or non-arithmetic calculation that is well-defined. Common examples of computations are mathematical equations and computer algorithms. Mechanical or electronic devices (or, historically, people) that perform computations are known as computers. The study of computation is the field of computability, itself a sub-field of computer science. The notion that mathematical statements should be ‘well-defined’ had been argued by mathematicians since at least the 1600s, but agreement on a suitable definition proved elusive.
Computational complexityIn computer science, the computational complexity or simply complexity of an algorithm is the amount of resources required to run it. Particular focus is given to computation time (generally measured by the number of needed elementary operations) and memory storage requirements. The complexity of a problem is the complexity of the best algorithms that solve it. The study of the complexity of explicitly given algorithms is called analysis of algorithms, while the study of the complexity of problems is called computational complexity theory.
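To make "number of needed elementary operations" concrete, the illustrative sketch below counts the comparisons performed by bubble sort (chosen here only as an example) for growing input sizes, rather than measuring wall-clock time.

```python
# A minimal sketch of measuring computational complexity empirically by
# counting elementary operations (here, comparisons) instead of wall-clock
# time. Bubble sort is used only as an illustrative algorithm.
import random

def bubble_sort_counting(items):
    """Sort a list and return (sorted list, number of comparisons)."""
    data = list(items)
    comparisons = 0
    for i in range(len(data)):
        for j in range(len(data) - 1 - i):
            comparisons += 1
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
    return data, comparisons

for n in (10, 100, 1000):
    _, ops = bubble_sort_counting(random.sample(range(n * 10), n))
    print(f"n={n}: {ops} comparisons (~n^2/2 = {n * n // 2})")
```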
Indexed colorIn computing, indexed color is a technique to manage digital images' colors in a limited fashion, in order to save computer memory and file storage, while speeding up display refresh and file transfers. It is a form of vector quantization compression. When an image is encoded in this way, color information is not directly carried by the image pixel data, but is stored in a separate piece of data called a color lookup table (CLUT) or palette: an array of color specifications. Every element in the array represents a color, indexed by its position within the array.
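A minimal sketch of the palette/CLUT idea described above: the pixel data holds small indices, and decoding is just an array lookup into a separate color table. The 4-color palette and tiny image are illustrative assumptions.

```python
# A minimal sketch of indexed color: pixel data stores small palette indices,
# and a separate color lookup table (CLUT) maps each index back to an RGB
# triple. The palette and image below are illustrative only.
import numpy as np

palette = np.array([              # CLUT: 4 colors, one RGB triple per entry
    [  0,   0,   0],              # index 0: black
    [255,   0,   0],              # index 1: red
    [  0, 255,   0],              # index 2: green
    [255, 255, 255],              # index 3: white
], dtype=np.uint8)

indices = np.array([              # 2x4 image stored as palette indices
    [0, 1, 2, 3],
    [3, 2, 1, 0],
], dtype=np.uint8)

rgb = palette[indices]            # decode: look each index up in the CLUT
print(rgb.shape)                  # (2, 4, 3) -> full-color pixels for display
```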
Theory of computationIn theoretical computer science and mathematics, the theory of computation is the branch that deals with what problems can be solved on a model of computation, using an algorithm, how efficiently they can be solved or to what degree (e.g., approximate solutions versus precise ones). The field is divided into three major branches: automata theory and formal languages, computability theory, and computational complexity theory, which are linked by the question: "What are the fundamental capabilities and limitations of computers?".
Time complexityIn the analysis of algorithms, time complexity is a measure of the time used by an algorithm, expressed as a function of the size of its input. The time counts the number of computation steps taken before reaching a result. Usually, the time associated with inputs of size n is the longest running time over all inputs of that size; this is called worst-case complexity. Complexity studies mostly concern asymptotic behaviour, as the input size tends to infinity, and Landau's big O notation is commonly used.
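An illustrative sketch of worst-case time complexity: counting the comparison steps of linear search, whose worst case (target absent) takes n steps, i.e. O(n). The function and data are assumptions chosen only for illustration.

```python
# A minimal sketch of worst-case time complexity: count the steps linear
# search performs on an input of size n. The worst case (target absent)
# takes n comparisons, i.e. O(n). Illustrative example only.
def linear_search_steps(haystack, target):
    """Return (found, number of comparisons performed)."""
    steps = 0
    for item in haystack:
        steps += 1
        if item == target:
            return True, steps
    return False, steps

data = list(range(1000))
print(linear_search_steps(data, 0))     # best case: 1 comparison
print(linear_search_steps(data, -1))    # worst case: 1000 comparisons -> O(n)
```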
Large language modelA large language model (LLM) is a language model with a large number of parameters (typically on the order of a billion weights or more). These are deep neural networks trained on large quantities of unlabelled text using self-supervised or semi-supervised learning.
Scale-invariant feature transform[Figure: example of matching two images with the SIFT method – green lines connect the descriptors shared by a painting and a lower-quality photograph of the same painting that has undergone transformations (Eugène Delacroix, Fantasia ou Jeu de la poudre, devant la porte d'entrée de la ville de Méquinez, 1832).]
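For context, a sketch of this kind of two-image matching using SIFT descriptors, assuming OpenCV (>= 4.4) is installed; the file names and the 0.75 ratio-test threshold are placeholders, and this is not code from the article.

```python
# A minimal sketch of matching two images with SIFT descriptors and drawing
# the matches, assuming OpenCV >= 4.4. File names are placeholders.
import cv2

img1 = cv2.imread("painting.jpg", cv2.IMREAD_GRAYSCALE)           # reference image
img2 = cv2.imread("photo_of_painting.jpg", cv2.IMREAD_GRAYSCALE)  # transformed copy

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher()                       # brute-force L2 matcher for SIFT
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test

result = cv2.drawMatches(img1, kp1, img2, kp2, good, None)        # lines join matches
cv2.imwrite("matches.jpg", result)
```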