Generative pre-trained transformer
(Figure: architecture of the GPT model)
The generative pre-trained transformer (GPT) is a family of language models generally trained on a large corpus of text data to generate human-like text. It is built from several blocks of the transformer architecture. These models can be fine-tuned for various natural language processing tasks such as text generation, language translation, and text classification.
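As an illustration of what "generating human-like text" looks like in practice, here is a minimal sketch (not from the entry above) that loads a small GPT-style model with the Hugging Face transformers library; the choice of the "gpt2" checkpoint and the sampling settings are illustrative assumptions.

```python
# Sketch: text generation with a small pre-trained GPT-style model.
# Model name "gpt2" and the sampling parameters are illustrative choices.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The transformer architecture", return_tensors="pt")
# Sample a short continuation from the pre-trained model, token by token.
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```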
Rec. 601
The CCIR 601 format, also known as ITU-R BT.601 and Rec. 601, was defined by the CCIR. It governs the digitization of an analog interlaced video stream in the 525-line/60 Hz or 625-line/50 Hz format. Video digitized under this format must use YUV 4:2:2 color encoding, with luminance and chrominance each coded on at least 8 bits.
See also: Rec. 709, an equivalent standard for high definition; Rec. 2020, an equivalent standard for ultra-high definition.
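To make the coding concrete, here is a short sketch of the commonly published Rec. 601 luma/chroma equations with the usual 8-bit quantization ranges (Y in 16–235, Cb/Cr in 16–240); the function name and structure are illustrative, not part of the standard text summarized above.

```python
# Sketch of Rec. 601 R'G'B' -> Y'CbCr conversion with 8-bit quantization.
def rgb_to_ycbcr_601(r, g, b):
    """r, g, b are gamma-corrected values in [0.0, 1.0]."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b   # luma
    cb = 0.564 * (b - y)                     # blue-difference chroma
    cr = 0.713 * (r - y)                     # red-difference chroma
    # 8-bit digital coding: Y in [16, 235], Cb and Cr in [16, 240], centered at 128.
    return (round(16 + 219 * y),
            round(128 + 224 * cb),
            round(128 + 224 * cr))

print(rgb_to_ycbcr_601(1.0, 1.0, 1.0))  # white -> (235, 128, 128)
print(rgb_to_ycbcr_601(0.0, 0.0, 0.0))  # black -> (16, 128, 128)
```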
User research
User research focuses on understanding user behaviors, needs and motivations through interviews, surveys, usability evaluations and other forms of feedback methodologies. It is used to understand how people interact with products and evaluate whether design solutions meet their needs. This field of research aims at improving the user experience (UX) of products, services, or processes by incorporating experimental and observational research methods to guide the design, development, and refinement of a product.
User experience design
User experience design (UX design, UXD, UED, or XD) is the process of defining the experience a user would go through when interacting with a company, its services, and its products. Design decisions in UX design are often driven by research, data analysis, and test results rather than aesthetic preferences and opinions. Unlike user interface design, which focuses solely on the design of a computer interface, UX design encompasses all aspects of a user's perceived experience with a product or website, such as its usability, usefulness, desirability, brand perception, and overall performance.
Large language model
A large language model (LLM) is a language model with a large number of parameters (typically on the order of a billion weights or more). LLMs are deep neural networks trained on large amounts of unlabeled text using self-supervised or semi-supervised learning.
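As a sketch of the self-supervised objective mentioned above (an assumed illustration, not from the entry), language models are typically trained to predict each next token from the preceding ones, so the unlabeled text itself supplies the training labels:

```python
# Sketch: next-token prediction, the usual self-supervised LLM objective, in PyTorch.
import torch
import torch.nn.functional as F

vocab_size, d_model = 1000, 64
embed = torch.nn.Embedding(vocab_size, d_model)
lm_head = torch.nn.Linear(d_model, vocab_size)   # stand-in for a deep transformer stack

tokens = torch.randint(0, vocab_size, (1, 16))   # a "sentence" of 16 token ids
hidden = embed(tokens)                           # a real LLM applies many layers here
logits = lm_head(hidden)

# Each position predicts the following token; the labels come from the text itself.
loss = F.cross_entropy(logits[:, :-1].reshape(-1, vocab_size),
                       tokens[:, 1:].reshape(-1))
loss.backward()                                  # gradients for all parameters
```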
Modality (human–computer interaction)
In the context of human–computer interaction, a modality is the classification of a single independent channel of input/output between a computer and a human. Such channels may differ based on sensory nature (e.g., visual vs. auditory), or other significant differences in processing (e.g., text vs. image). A system is designated unimodal if it has only one modality implemented, and multimodal if it has more than one. When multiple modalities are available for some tasks or aspects of a task, the system is said to have overlapping modalities.
Artificial neural network
An artificial neural network is a system whose design was originally loosely inspired by the functioning of biological neurons, and which subsequently moved closer to statistical methods. Neural networks are generally optimized by probabilistic learning methods, in particular Bayesian ones.
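As an assumed illustration (not from the entry), the basic building block of such a network is an artificial neuron: a weighted sum of inputs passed through a nonlinear activation.

```python
# Sketch of a single artificial neuron: weighted sum of inputs, then a nonlinearity.
import math

def neuron(inputs, weights, bias):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias   # weighted sum
    return 1.0 / (1.0 + math.exp(-z))                        # sigmoid activation

print(neuron([0.5, -1.0, 2.0], [0.1, 0.4, 0.3], bias=-0.2))
```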
Attention (machine learning)
Machine learning-based attention is a mechanism mimicking cognitive attention. It calculates "soft" weights for each word, more precisely for its embedding, in the context window. It can do so either in parallel (as in transformers) or sequentially (as in recurrent neural networks). "Soft" weights can change during each runtime, in contrast to "hard" weights, which are (pre-)trained and fine-tuned and remain frozen afterwards. Multiple attention heads are used in transformer-based large language models.
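The following sketch (an assumed illustration, not from the entry) shows scaled dot-product attention, the common form of this mechanism: soft weights over the context window are computed for every embedding in parallel and used to mix the value vectors.

```python
# Sketch: scaled dot-product attention with NumPy.
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # similarity of queries and keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax -> "soft" weights
    return weights @ V                               # weighted mix of the values

x = np.random.randn(5, 8)        # 5 token embeddings of dimension 8
print(attention(x, x, x).shape)  # (5, 8): one mixed vector per token
```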
Explainable artificial intelligence
Explainable AI (XAI), also known as Interpretable AI or Explainable Machine Learning (XML), refers either to an AI system over which humans can retain intellectual oversight, or to the methods used to achieve this. The main focus is usually on making the reasoning behind the AI's decisions or predictions more understandable and transparent. XAI counters the "black box" tendency of machine learning, where even the AI's designers cannot explain why it arrived at a specific decision.
ITU-T
The International Telecommunication Union Telecommunication Standardization Sector (ITU-T) is one of the three Sectors (branches) of the International Telecommunication Union (ITU). It is responsible for coordinating standards for telecommunications and information and communication technology, such as X.509 for cybersecurity, Y.3172 and Y.3173 for machine learning, and H.264/MPEG-4 AVC for video compression, between its Member States, Private Sector Members, and Academia Members.