In linguistics (especially generative grammar), a complementizer or complementiser (glossing abbreviation: comp) is a functional category (part of speech) that includes those words that can be used to turn a clause into the subject or object of a sentence. For example, the word that may be called a complementizer in English sentences like Mary believes that it is raining. The concept of complementizers is specific to certain modern grammatical theories; in traditional grammar, such words are normally considered conjunctions. The standard abbreviation for complementizer is C.
The complementizer is often held to be the syntactic head of a full clause, which is therefore often represented by the abbreviation CP (for complementizer phrase). Evidence that the complementizer functions as the head of its clause includes the fact that it is commonly the last element of a clause in head-final languages like Korean or Japanese, in which other heads follow their complements, whereas it appears at the start of a clause in head-initial languages such as English, in which heads normally precede their complements. The trees below illustrate the sentence "Taro said that he married Hanako" in Japanese and English; syntactic heads are marked in red and show that C falls in head-final position in Japanese and in head-initial position in English.
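To make the head-direction contrast concrete, the following is a minimal sketch, not from the article itself: it models a clause as a small Python tree and linearizes the embedded CP of "Taro said that he married Hanako" under head-initial (English-like) and head-final (Japanese-like) orders. The node labels (CP, C, TP) and the simplified Japanese forms are illustrative assumptions only.

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    """A bare phrase-structure node: a category label plus ordered children."""
    label: str
    children: list = field(default_factory=list)
    word: str | None = None  # terminal nodes carry a word (or word string)

    def leaves(self):
        """Return the words of this subtree in left-to-right order."""
        if self.word is not None:
            return [self.word]
        return [w for child in self.children for w in child.leaves()]


def embedded_cp(head_initial: bool) -> Node:
    """Build the embedded CP: C precedes TP if head-initial, follows it if head-final."""
    c = Node("C", word="that" if head_initial else "to")  # 'to' is the Japanese complementizer
    tp = Node("TP", word="he married Hanako" if head_initial else "Hanako-to kekkon-shita")
    return Node("CP", children=[c, tp] if head_initial else [tp, c])


print(" ".join(embedded_cp(head_initial=True).leaves()))   # that he married Hanako
print(" ".join(embedded_cp(head_initial=False).leaves()))  # Hanako-to kekkon-shita to
```

The only difference between the two outputs is the position of C relative to its TP complement, which is exactly the head-direction contrast described above.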
It is common for the complementizers of a language to develop historically from other syntactic categories, a process known as grammaticalization.
Across the world's languages, pronouns and determiners are an especially common source of complementizers (e.g., English that), as in the following sentence.
I read in the paper that it's going to be cold today.
Another frequent source of complementizers is the class of interrogative words. It is especially common for a form that otherwise means what to be borrowed as a complementizer, but other interrogative words are often used as well, as in the following colloquial English example in which unstressed how is roughly equivalent to that.
I read in the paper how it's going to be cold today.
In linguistics, an empty category, which may also be referred to as a covert category, is an element in the study of syntax that does not have any phonological content and is therefore unpronounced. Empty categories exist in contrast to overt categories, which are pronounced. When representing empty categories in tree structures, linguists use a null symbol (∅) to indicate that a mental category is present at the level being represented even though it is left out of overt speech.
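As a small illustration of this notation, here is a sketch that renders a tree containing a null node, assuming the NLTK library is available; the bracketing for the embedded clause of "Mary believes it is raining" with a phonologically empty complementizer is a hypothetical example, not an analysis taken from the article.

```python
from nltk import Tree

# Hypothetical bracketing: the embedded clause with an unpronounced complementizer (∅).
cp = Tree.fromstring("(CP (C ∅) (TP (DP it) (VP is raining)))")

cp.pretty_print()   # draws the tree in ASCII, with ∅ occupying the C position
print(cp.leaves())  # ['∅', 'it', 'is', 'raining']
```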
English is a Germanic Indo-European language that originated in England and has its roots in the languages of northern Europe (the homeland of the Angles, the Saxons, and the Frisians); its vocabulary was enriched, and its syntax and grammar modified, by Anglo-Norman French, brought by the Normans, and later by French under the Plantagenets. The English language is thus composed of roughly 29% words of Norman and French origin, and more than two-thirds of its vocabulary comes from French or Latin.
A syntactic category is a syntactic unit that theories of syntax assume. Word classes, largely corresponding to traditional parts of speech (e.g. noun, verb, preposition, etc.), are syntactic categories. In phrase structure grammars, the phrasal categories (e.g. noun phrase, verb phrase, prepositional phrase, etc.) are also syntactic categories. Dependency grammars, however, do not acknowledge phrasal categories (at least not in the traditional sense).
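To illustrate phrasal categories in a phrase structure grammar, here is a minimal sketch assuming the NLTK library; the toy grammar, lexicon, and example sentence are made up for this illustration and are not drawn from the article.

```python
import nltk

# A toy phrase structure grammar using the phrasal categories NP, VP, and PP.
grammar = nltk.CFG.fromstring("""
  S  -> NP VP
  NP -> Det N | NP PP
  VP -> V NP | VP PP
  PP -> P NP
  Det -> 'the' | 'a'
  N  -> 'dog' | 'man' | 'park'
  V  -> 'saw'
  P  -> 'in'
""")

parser = nltk.ChartParser(grammar)

# The PP attachment is ambiguous, so the parser returns more than one tree.
for tree in parser.parse("the dog saw a man in the park".split()):
    print(tree)  # each parse is printed as a bracketed tree over NP/VP/PP categories
```

A dependency grammar, by contrast, would describe the same sentence purely in terms of head-dependent relations between words, without NP, VP, or PP nodes.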
This introductory article, along with the eight articles contained within this Special Issue, highlights and brings greater clarity to entrant-incumbent interactions and to firm movement - when entrants traverse market territories for the creation and/or delivery ...
Word embedding is a feature learning technique which aims at mapping words from a vocabulary into vectors of real numbers in a low-dimensional space. By leveraging large corpora of unlabeled text, such continuous space representations can be computed for c ...
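Since this abstract describes word embeddings only at a high level, the following is a minimal sketch of the general technique, assuming the gensim library; the toy corpus, model parameters, and probed word are illustrative assumptions, not the authors' setup.

```python
from gensim.models import Word2Vec

# A tiny unlabeled "corpus"; real embeddings are trained on much larger text collections.
sentences = [
    ["the", "complementizer", "introduces", "an", "embedded", "clause"],
    ["the", "clause", "functions", "as", "the", "object", "of", "the", "verb"],
    ["word", "embeddings", "map", "words", "to", "low", "dimensional", "vectors"],
]

# Learn continuous low-dimensional representations from co-occurrence patterns.
model = Word2Vec(sentences, vector_size=16, window=3, min_count=1, epochs=50, seed=0)

vec = model.wv["clause"]                 # a 16-dimensional real-valued vector
print(vec.shape)                         # (16,)
print(model.wv.most_similar("clause"))   # nearest neighbours in the embedding space
```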