Meaning–text theory (MTT) is a theoretical linguistic framework, first put forward in Moscow by Aleksandr Žolkovskij and Igor Mel’čuk, for the construction of models of natural language. The theory provides a large and elaborate basis for linguistic description and, due to its formal character, lends itself particularly well to computer applications, including machine translation, phraseology, and lexicography.
Linguistic models in meaning–text theory operate on the principle that language consists in a mapping from the content or meaning (semantics) of an utterance to its form or text (phonetics). Between these two poles lie additional levels of representation: the syntactic and the morphological.
Representations at the different levels are mapped in sequence: from the unordered network of the semantic representation (SemR), through the dependency tree-structures of the syntactic representation (SyntR), to a linearized chain of morphemes of the morphological representation (MorphR) and, ultimately, the temporally-ordered string of phones of the phonetic representation (PhonR), which is not generally addressed in work in this theory. The relationships between representations on the different levels are considered to be translations or mappings, rather than transformations, and are mediated by sets of rules, called "components", which ensure the appropriate, language-specific transitions between levels.
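As a rough illustration of this pipeline, the sketch below models each component as a function that translates one level of representation into the next, for a single hard-coded sentence. The data structures and rules here are invented for the sketch and are far simpler than MTT's actual formalism.

    # Toy illustration of the meaning-to-text pipeline; every rule
    # below is made up and much cruder than a real MTT component.

    semr = {"see": ("Peter", "Mary")}   # unordered predicate-argument network

    def semantic_component(sem):
        # SemR -> SyntR: pick a lexeme per meaning, hang arguments off the verb
        pred, (arg1, arg2) = next(iter(sem.items()))
        return {"head": pred, "subj": arg1, "obj": arg2}

    def syntactic_component(synt):
        # SyntR -> MorphR: linearize subject-verb-object and split off
        # the agreement morpheme
        return [synt["subj"], synt["head"], "-s", synt["obj"]]

    def morphological_component(morph):
        # MorphR -> (stand-in for) PhonR: fuse morphemes into a surface string
        return " ".join(morph).replace(" -s", "s")

    print(morphological_component(syntactic_component(semantic_component(semr))))
    # -> "Peter sees Mary"

Each function stands in for one of the rule "components" mentioned above; running comparable rules in the opposite direction would model text-to-meaning analysis.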
Semantic representations (SemR) in meaning–text theory consist primarily of a web-like semantic structure (SemS) which combines with other semantic-level structures (most notably the semantic-communicative structure [SemCommS], which represents what is commonly referred to as "information structure" in other frameworks). The SemS itself consists of a network of predications, represented as nodes with arrows running from predicate nodes to argument node(s). Arguments can be shared by multiple predicates, and predicates can themselves be arguments of other predicates.
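The network structure just described can be pictured with a small sketch. The encoding below (a mapping from each node to its numbered argument arcs) is an assumption made for illustration, not MTT's official notation; note that 'Mary' is shared by two predicates and that the predication 'sleep' is itself an argument of 'want'.

    # A SemS-style network for roughly "Mary wants to sleep".
    # Edges run from a predicate node to its numbered arguments.

    sems = {
        "want":  {1: "Mary", 2: "sleep"},   # want(Mary, sleep(Mary))
        "sleep": {1: "Mary"},
        "Mary":  {},                        # plain argument node: no outgoing arcs
    }

    def predicates(net):
        """Nodes with outgoing arcs, i.e. the predicate nodes."""
        return [node for node, args in net.items() if args]

    def shared_arguments(net):
        """Argument nodes pointed to by more than one predicate."""
        seen = {}
        for pred, args in net.items():
            for arg in args.values():
                seen.setdefault(arg, set()).add(pred)
        return {arg: preds for arg, preds in seen.items() if len(preds) > 1}

    print(predicates(sems))        # ['want', 'sleep']
    print(shared_arguments(sems))  # {'Mary': {'want', 'sleep'}} (set order may vary)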
In linguistics, a discontinuity occurs when a given word or phrase is separated from another word or phrase that it modifies in such a manner that a direct connection cannot be established between the two without incurring crossing lines in the tree structure. The terminology that is employed to denote discontinuities varies depending on the theory of syntax at hand. The terms discontinuous constituent, displacement, long distance dependency, unbounded dependency, and projectivity violation are largely synonymous with the term discontinuity.
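A crossing of this kind is easy to detect mechanically. The sketch below flags pairs of dependency arcs whose spans strictly interleave, which is one common way of operationalizing a projectivity violation; the sentence analysis it runs on is invented for the example.

    def crossing_arcs(heads):
        """Return pairs of dependency arcs whose spans strictly interleave."""
        arcs = [(min(h, d), max(h, d)) for d, h in enumerate(heads, start=1)]
        crossings = []
        for i, (a, b) in enumerate(arcs):
            for c, d in arcs[i + 1:]:
                if a < c < b < d or c < a < d < b:
                    crossings.append(((a, b), (c, d)))
        return crossings

    # "What did you say that she bought?" -- an invented analysis where
    # word i's head is heads[i-1] and 0 is the artificial root:
    # 'What' (1) depends on 'bought' (7), crossing the root arc of 'say' (4).
    heads = [7, 4, 4, 0, 7, 7, 4]
    print(crossing_arcs(heads))   # -> [((1, 7), (0, 4))]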
The term phrase structure grammar was originally introduced by Noam Chomsky as the term for grammars studied previously by Emil Post and Axel Thue (Post canonical systems). Some authors, however, reserve the term for more restricted grammars in the Chomsky hierarchy: context-sensitive grammars or context-free grammars. In a broader sense, phrase structure grammars are also known as constituency grammars. The defining trait of phrase structure grammars is thus their adherence to the constituency relation, as opposed to the dependency relation of dependency grammars.
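For concreteness, here is a toy grammar in the context-free style, with rewrite rules over whole constituents (NP, VP) rather than word-to-word dependency links. The grammar is a made-up textbook-style fragment, not one drawn from the sources above.

    # Tiny constituency grammar: each rule rewrites a nonterminal
    # into a sequence of constituents or words.

    rules = {
        "S":  [["NP", "VP"]],
        "NP": [["the", "dog"], ["the", "cat"]],
        "VP": [["sleeps"], ["chases", "NP"]],
    }

    def generate(symbol="S"):
        """Expand a nonterminal by always taking its first rewrite rule."""
        if symbol not in rules:          # a terminal word
            return [symbol]
        return [word for part in rules[symbol][0] for word in generate(part)]

    print(" ".join(generate()))          # -> "the dog sleeps"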
A predicate is one of the two main parts of a sentence (the other being the subject, which the predicate modifies). For the simple sentence "John [is yellow]", John acts as the subject, and is yellow acts as the predicate: a description of the subject headed by a verb. In current linguistic semantics, a predicate is an expression that can be true of something. Thus, the expressions "is yellow" or "is like broccoli" are true of those things that are yellow or like broccoli, respectively.
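Under this semantic view, a predicate can be thought of as a function from things to truth values. The sketch below encodes the two example predicates with invented toy extensions.

    # Predicates modeled as boolean functions; the sample objects and
    # their classifications are made up for the sketch.

    def is_yellow(x):
        return x in {"banana", "lemon"}          # toy extension of "is yellow"

    def is_like_broccoli(x):
        return x in {"cauliflower", "kale"}      # toy extension of "is like broccoli"

    for thing in ["banana", "kale", "stone"]:
        print(thing, is_yellow(thing), is_like_broccoli(thing))
    # banana True False / kale False True / stone False False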
The work presented in this thesis deals with several problems met in information retrieval (IR), a task which can be summarised as identifying, in a collection of "documents", the subset of documents carrying a sought piece of information, i.e. those relevant to a request ...
Automatic evaluation of non-native speech accentedness has potential implications not only for language learning and accent identification systems but also for speaker and speech recognition systems. From the perspective of speech production, the two prima ... (2015)