Meaning–text theory (MTT) is a theoretical linguistic framework, first put forward in Moscow by Aleksandr Žolkovskij and Igor Mel’čuk, for the construction of models of natural language. The theory provides a large and elaborate basis for linguistic description and, due to its formal character, lends itself particularly well to computer applications, including machine translation, phraseology, and lexicography.
Linguistic models in meaning–text theory operate on the principle that language consists in a mapping from the content or meaning (semantics) of an utterance to its form or text (phonetics). Intermediate between these two poles are additional levels of representation: the syntactic and the morphological.
Representations at the different levels are mapped, in sequence, from the unordered network of the semantic representation (SemR) through the dependency tree-structures of the syntactic representation (SyntR) to a linearized chain of morphemes of the morphological representation (MorphR) and, ultimately, the temporally-ordered string of phones of the phonetic representation (PhonR) (not generally addressed in work in this theory). The relationships between representations on the different levels are considered to be translations or mappings, rather than transformations, and are mediated by sets of rules, called "components", which ensure the appropriate, language-specific transitions between levels.
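As a rough illustration of this architecture, the sketch below (in Python, with invented class and function names, not drawn from the MTT literature) models each component as a function that translates one level of representation into the next, applied in sequence from meaning toward text.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Representation:
    level: str       # e.g. "SemR", "SyntR", "MorphR", "PhonR"
    content: object  # the structure at that level (network, tree, morpheme chain, ...)

# A "component" is a language-specific rule set translating one level into the next.
Component = Callable[[Representation], Representation]

def synthesize(semr: Representation, components: List[Component]) -> Representation:
    """Apply the components in sequence, mapping meaning toward text."""
    rep = semr
    for component in components:
        rep = component(rep)
    return rep

# A trivial stand-in for the semantic component: real MTT rules would map the
# semantic network onto a dependency tree; here we only record the level change.
def semantic_component(rep: Representation) -> Representation:
    return Representation(level="SyntR", content=rep.content)

print(synthesize(Representation("SemR", {"want": ["Mary", "leave"]}),
                 [semantic_component]).level)  # -> SyntR
```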
Semantic representations (SemR) in meaning–text theory consist primarily of a web-like semantic structure (SemS) which combines with other semantic-level structures (most notably the semantic-communicative structure [SemCommS], which represents what is commonly referred to as "information structure" in other frameworks). The SemS itself consists of a network of predications, represented as nodes with arrows running from predicate nodes to argument node(s). Arguments can be shared by multiple predicates, and predicates can themselves be arguments of other predicates.
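The predicate–argument networks described above can be pictured with a small data-structure sketch; the node labels and the example meaning below are invented for illustration and do not follow any particular MTT notation.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class SemNode:
    label: str                                                     # a semanteme, e.g. 'want' or 'Mary'
    arguments: Dict[int, "SemNode"] = field(default_factory=dict)  # numbered argument arcs

# Roughly 'Mary wants to leave': 'want'(1: Mary, 2: 'leave'(1: Mary))
mary  = SemNode("Mary")
leave = SemNode("leave", {1: mary})           # 'leave' is a predicate ...
want  = SemNode("want", {1: mary, 2: leave})  # ... and also argument 2 of 'want'

# The argument node 'Mary' is shared by both predicates, so the structure is a
# directed graph (a network), not a tree.
assert want.arguments[1] is leave.arguments[1]
```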
In linguistics, a discontinuity occurs when a given word or phrase is separated from another word or phrase that it modifies in such a manner that a direct connection cannot be established between the two without incurring crossing lines in the tree structure. The terminology that is employed to denote discontinuities varies depending on the theory of syntax at hand. The terms discontinuous constituent, displacement, long distance dependency, unbounded dependency, and projectivity violation are largely synonymous with the term discontinuity.
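One common way to make "crossing lines in the tree structure" concrete is to test whether any two dependency arcs properly interleave. The sketch below, with an invented sentence and a deliberately schematic arc set, shows such a projectivity check.

```python
from itertools import combinations
from typing import List, Tuple

Arc = Tuple[int, int]  # (head position, dependent position), word indices; 0 is the root

def crossing_arcs(arcs: List[Arc]) -> List[Tuple[Arc, Arc]]:
    """Return all pairs of arcs whose spans properly interleave (i.e. cross)."""
    crossings = []
    for a, b in combinations(arcs, 2):
        (i, j), (k, l) = sorted(a), sorted(b)
        if i < k < j < l or k < i < l < j:
            crossings.append((a, b))
    return crossings

# Schematic arcs for "Which book did you say she read?", where the fronted
# object depends on a verb deep inside the clause (a long-distance dependency).
arcs = [(0, 3),  # root  -> 'did'
        (3, 5),  # 'did' -> 'say'
        (5, 7),  # 'say' -> 'read'
        (7, 2),  # 'read' -> 'book' (the displaced object)
        (2, 1)]  # 'book' -> 'which'

print(crossing_arcs(arcs))  # non-empty: the structure is non-projective
```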
The term phrase structure grammar was originally introduced by Noam Chomsky as the term for grammar studied previously by Emil Post and Axel Thue (Post canonical systems). Some authors, however, reserve the term for more restricted grammars in the Chomsky hierarchy: context-sensitive grammars or context-free grammars. In a broader sense, phrase structure grammars are also known as constituency grammars. The defining trait of phrase structure grammars is thus their adherence to the constituency relation, as opposed to the dependency relation of dependency grammars.
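To make the constituency relation concrete, the toy sketch below encodes a handful of phrase structure (rewrite) rules and a naive recogniser; the grammar, lexicon, and function names are illustrative only and are not taken from any particular formalism.

```python
# Rewrite rules expand a phrasal category into its constituents, in contrast
# with dependency rules, which link individual words directly.
TOY_GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"]],
    "VP": [["V", "NP"], ["V"]],
}
TOY_LEXICON = {
    "Det": {"the", "a"},
    "N":   {"dog", "cat"},
    "V":   {"chased", "slept"},
}

def parses(symbol, words):
    """Yield the remaining words after recognising `symbol` at the start of `words`."""
    if symbol in TOY_LEXICON:                       # preterminal: consume one word
        if words and words[0] in TOY_LEXICON[symbol]:
            yield words[1:]
        return
    for expansion in TOY_GRAMMAR.get(symbol, []):   # nonterminal: try each rule
        remainders = [words]
        for child in expansion:
            remainders = [rest for w in remainders for rest in parses(child, w)]
        yield from remainders

sentence = "the dog chased a cat".split()
print(any(rest == [] for rest in parses("S", sentence)))  # True: the sentence is accepted
```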
In grammar, the predicate is a part of the simple sentence. The notion admits several interpretations, all of which take into account its relationship to another part of the simple sentence, the subject. According to one interpretation, the predicate is a verb phrase, whether it consists of a verb alone or of the verb together with one or more elements subordinate to it. For example, in the sentence Pierre écrit une lettre à sa mère ('Pierre is writing a letter to his mother'), the predicate would be the entire part of the sentence that follows the subject Pierre.
Automatic evaluation of non-native speech accentedness has potential implications not only for language learning and accent identification systems but also for speaker and speech recognition systems. From the perspective of speech production, the two prima ...
The work presented in this thesis deals with several problems met in information retrieval (IR), a task which one can summarise as identifying, in a collection of "documents", a subset of documents carrying the sought information, i.e. relevant to a request ...