Text segmentation is the process of dividing written text into meaningful units, such as words, sentences, or topics. The term applies both to mental processes used by humans when reading text, and to artificial processes implemented in computers, which are the subject of natural language processing. The problem is non-trivial, because while some written languages have explicit word boundary markers, such as the word spaces of written English and the distinctive initial, medial and final letter shapes of Arabic, such signals are sometimes ambiguous and not present in all written languages.
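A minimal sketch of why word segmentation can be ambiguous: a hypothetical dictionary-based maximum-matching segmenter for text written without spaces. The tiny vocabulary and the greedy strategy are illustrative only, not a production method.

```python
# Minimal sketch: dictionary-based greedy (maximum-matching) word segmentation.
# The tiny vocabulary below is hypothetical; real systems use large lexicons
# or statistical/neural models.

VOCAB = {"the", "them", "theme", "me", "men", "at", "mat", "sat", "cat", "on"}

def max_match(text: str, vocab: set[str]) -> list[str]:
    """Greedily take the longest dictionary word at each position."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):      # longest candidate first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:                                   # no match: emit one character
            tokens.append(text[i])
            i += 1
    return tokens

print(max_match("thecatsatonthemat", VOCAB))
# ['the', 'cat', 'sat', 'on', 'them', 'at']
```

Note that the greedy strategy mis-segments the ending as "them" + "at" rather than "the" + "mat", precisely because boundary signals are ambiguous once "them" is also a word.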
Grammar induction (or grammatical inference) is the process in machine learning of learning a formal grammar (usually as a collection of re-write rules or productions, or alternatively as a finite state machine or automaton of some kind) from a set of observations, thus constructing a model which accounts for the characteristics of the observed objects. More generally, grammatical inference is the branch of machine learning where the instance space consists of discrete combinatorial objects such as strings, trees and graphs.
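As an illustration of inferring an automaton from observations, the sketch below builds a prefix-tree acceptor from positive example strings. This is a common starting point for state-merging inference algorithms (such as RPNI), which then generalize by merging states; the code here stops at the acceptor itself, and all names are illustrative.

```python
# Minimal sketch: build a prefix-tree acceptor (PTA) from positive example
# strings. The PTA is a finite automaton accepting exactly the observed strings.

def build_pta(samples):
    transitions = {}          # (state, symbol) -> state
    accepting = set()
    next_state = 1            # state 0 is the start state
    for word in samples:
        state = 0
        for symbol in word:
            if (state, symbol) not in transitions:
                transitions[(state, symbol)] = next_state
                next_state += 1
            state = transitions[(state, symbol)]
        accepting.add(state)
    return transitions, accepting

def accepts(transitions, accepting, word):
    state = 0
    for symbol in word:
        if (state, symbol) not in transitions:
            return False
        state = transitions[(state, symbol)]
    return state in accepting

trans, acc = build_pta(["ab", "abb", "b"])
print(accepts(trans, acc, "abb"))   # True
print(accepts(trans, acc, "aa"))    # False
```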
Lemmatisation refers to a lexical processing step applied to a text for the purpose of indexing or analysing it. It consists of mapping each occurrence of a lexeme subject to inflection (in French, verbs, nouns and adjectives) to a code referring to its common lexical entry, most often the "canonical form" recorded in dictionaries of the language, which is known as the lemma.
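A minimal sketch of lemmatisation as a lookup from inflected forms to lemmas; the tiny English mapping is hypothetical, and real lemmatisers combine large lexicons with morphological rules and part-of-speech information.

```python
# Minimal sketch: lemmatisation as a lookup from inflected forms to lemmas.
# The mapping below is a toy example, not a real lexicon.

LEMMA_TABLE = {
    "was": "be", "were": "be", "is": "be",
    "ran": "run", "running": "run",
    "mice": "mouse", "better": "good",
}

def lemmatise(tokens):
    # Unknown forms fall back to their lowercased surface form.
    return [LEMMA_TABLE.get(tok.lower(), tok.lower()) for tok in tokens]

print(lemmatise(["The", "mice", "were", "running"]))
# ['the', 'mouse', 'be', 'run']
```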
PyTorch is an open-source Python machine learning library based on Torch and developed by Meta. PyTorch is governed by the PyTorch Foundation. PyTorch performs the tensor computations required in particular for deep learning. These computations are optimized and carried out either by the processor (CPU) or, when possible, by a graphics processing unit (GPU) that supports CUDA.
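A short example of the tensor computations described above, choosing a CUDA GPU when one is available and falling back to the CPU otherwise (assumes PyTorch is installed).

```python
import torch

# Run on a CUDA-capable GPU if available, otherwise on the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

a = torch.randn(3, 4, device=device)   # random 3x4 tensor
b = torch.randn(4, 2, device=device)   # random 4x2 tensor
c = a @ b                               # matrix multiplication on the chosen device
print(c.shape, c.device)                # torch.Size([3, 2]) and cpu or cuda:0
```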
Constraint grammar (CG) is a methodological paradigm for natural language processing (NLP). Linguist-written, context-dependent rules are compiled into a grammar that assigns grammatical tags ("readings") to words or other tokens in running text. Typical tags address lemmatisation (lexeme or base form), inflexion, derivation, syntactic function, dependency, valency, case roles, semantic type etc. Each rule either adds, removes, selects or replaces a tag or a set of grammatical tags in a given sentence context.
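A minimal sketch of a constraint-grammar style disambiguation step, using a hypothetical rule and tag set: each token carries a set of candidate readings, and a context-dependent rule removes a reading when the preceding token is a determiner.

```python
# Minimal sketch of a constraint-grammar style rule. The rule below is
# hypothetical: remove the verb reading of an ambiguous token when the
# preceding token is a determiner.

sentence = [
    {"form": "the",  "readings": {"DET"}},
    {"form": "run",  "readings": {"NOUN", "VERB"}},   # ambiguous token
    {"form": "was",  "readings": {"VERB"}},
    {"form": "long", "readings": {"ADJ"}},
]

def remove_verb_after_det(tokens):
    for prev, tok in zip(tokens, tokens[1:]):
        if "DET" in prev["readings"] and len(tok["readings"]) > 1:
            tok["readings"].discard("VERB")
    return tokens

for tok in remove_verb_after_det(sentence):
    print(tok["form"], sorted(tok["readings"]))
# the ['DET'] / run ['NOUN'] / was ['VERB'] / long ['ADJ']
```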
Argument mining, or argumentation mining, is a research area within the natural-language processing field. The goal of argument mining is the automatic extraction and identification of argumentative structures from natural language text with the aid of computer programs. Such argumentative structures include the premises, the conclusion, the argument scheme, and the relationship between the main and subsidiary argument, or between the main argument and counter-argument, within discourse.
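The sketch below shows one possible data model for such argumentative structures, with premises, a conclusion, an argument scheme and counter-arguments. It is purely illustrative and not a standard annotation scheme.

```python
# Illustrative sketch (not a standard schema): a small data model for
# argumentative structures extracted from text.
from dataclasses import dataclass, field

@dataclass
class Argument:
    conclusion: str
    premises: list[str] = field(default_factory=list)
    scheme: str = "unspecified"                                # e.g. practical reasoning
    attacks: list["Argument"] = field(default_factory=list)    # counter-arguments

main = Argument(
    conclusion="The city should expand bike lanes.",
    premises=["Cycling reduces traffic congestion.",
              "Bike lanes improve road safety."],
    scheme="practical reasoning",
)
counter = Argument(
    conclusion="Expanding bike lanes is too costly.",
    premises=["Construction budgets are already overrun."],
)
main.attacks.append(counter)
print(len(main.premises), len(main.attacks))   # 2 1
```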
Sentence boundary disambiguation (SBD), also known as sentence breaking, sentence boundary detection, and sentence segmentation, is the problem in natural language processing of deciding where sentences begin and end. Natural language processing tools often require their input to be divided into sentences; however, sentence boundary identification can be challenging due to the potential ambiguity of punctuation marks. In written English, a period may indicate the end of a sentence, or may denote an abbreviation, a decimal point, an ellipsis, or an email address, among other possibilities.
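A minimal rule-based sketch of sentence splitting that handles a small, hypothetical abbreviation list; production SBD systems rely on richer rules or trained models.

```python
# Minimal rule-based sketch: split on ., ! or ? followed by whitespace and an
# uppercase letter, except after a few known abbreviations.
import re

ABBREVIATIONS = {"dr", "mr", "mrs", "etc", "e.g", "i.e"}

def split_sentences(text: str) -> list[str]:
    sentences, start = [], 0
    for match in re.finditer(r"[.!?]\s+(?=[A-Z])", text):
        candidate = text[start:match.start() + 1]
        words = candidate.rstrip(".!? ").split()
        if words and words[-1].lower() in ABBREVIATIONS:
            continue                      # likely an abbreviation, keep going
        sentences.append(candidate.strip())
        start = match.end()
    sentences.append(text[start:].strip())
    return [s for s in sentences if s]

print(split_sentences("Dr. Smith arrived at 3.15 p.m. today. The meeting began."))
# ['Dr. Smith arrived at 3.15 p.m. today.', 'The meeting began.']
```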
The Association for Computational Linguistics (ACL) is the leading learned society in natural language processing. Every year it organizes the most prestigious scientific conference in the field. Before 1968, the association was named the Association for Machine Translation and Computational Linguistics (AMTCL). It is also responsible for a scientific journal published by MIT Press since 1988.
In computational linguistics or natural language processing, syntactic parsing refers to the automated analysis of a string of words representing a sentence, with the goal of obtaining the relations that hold between those words in the form of a syntax tree. When starting from raw text, the text must first have been segmented into lexical units (tokenization). Usually, a lexical analysis (lemmatisation, morphosyntactic analysis, ...) is also performed beforehand.
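A minimal sketch using NLTK (assumed to be installed), parsing an already tokenized toy sentence with a hand-written context-free grammar; the grammar and sentence are illustrative only.

```python
# Minimal sketch: parse a toy sentence with a small context-free grammar
# and print the resulting syntax tree. Requires the nltk package.
import nltk

grammar = nltk.CFG.fromstring("""
    S  -> NP VP
    NP -> Det N
    VP -> V NP
    Det -> 'the'
    N  -> 'cat' | 'mouse'
    V  -> 'chased'
""")

tokens = ["the", "cat", "chased", "the", "mouse"]   # already tokenized input
parser = nltk.ChartParser(grammar)
for tree in parser.parse(tokens):
    print(tree)
# (S (NP (Det the) (N cat)) (VP (V chased) (NP (Det the) (N mouse))))
```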
Distributional semantics is a research area that develops and studies theories and methods for quantifying and categorizing semantic similarities between linguistic items based on their distributional properties in large samples of language data. The basic idea of distributional semantics can be summed up in the so-called distributional hypothesis: linguistic items with similar distributions have similar meanings. The distributional hypothesis in linguistics is derived from the semantic theory of language usage, i.e. the idea that words which occur in the same contexts tend to have similar meanings.
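A minimal sketch of the distributional hypothesis in action: word vectors built from raw co-occurrence counts over a tiny, hypothetical corpus, compared with cosine similarity. Real systems use much larger corpora and weighted or learned representations.

```python
# Minimal sketch: co-occurrence vectors and cosine similarity on a toy corpus.
from collections import Counter
from math import sqrt

corpus = [
    "the cat drinks milk", "the dog drinks water",
    "the cat chases the mouse", "the dog chases the cat",
]

def cooccurrence_vector(target, sentences):
    # Count words co-occurring with the target word in the same sentence.
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        if target in words:
            counts.update(w for w in words if w != target)
    return counts

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u.keys() & v.keys())
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

cat, dog, milk = (cooccurrence_vector(w, corpus) for w in ("cat", "dog", "milk"))
print(round(cosine(cat, dog), 2), round(cosine(cat, milk), 2))
# "cat" and "dog" share more contexts, so their similarity is higher than cat/milk
```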