In linguistics (especially generative grammar), a complementizer or complementiser (glossing abbreviation: comp) is a lexical category (part of speech) that includes those words that can be used to turn a clause into the subject or object of a sentence. For example, the word "that" may be called a complementizer in English sentences like "Mary believes that it is raining." The concept of complementizers is specific to certain modern grammatical theories; in traditional grammar, such words are normally considered conjunctions. The standard abbreviation for complementizer is C.
The complementizer is often held to be the syntactic head of a full clause, which is therefore often represented by the abbreviation CP (for complementizer phrase). Evidence that the complementizer functions as the head of its clause includes the fact that it is commonly the final element of a clause in head-final languages like Korean and Japanese, in which other heads also follow their complements, whereas it appears at the start of a clause in head-initial languages such as English, in which heads normally precede their complements. The trees below illustrate the sentence "Taro said that he married Hanako" in Japanese and English; syntactic heads are marked in red and show that C falls in head-final position in Japanese and in head-initial position in English.
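The head-position contrast can also be sketched programmatically. The following Python snippet uses NLTK's Tree class to build simplified versions of the two embedded clauses; the bracketings and the Japanese romanization ("to" as the clause-final complementizer) are illustrative assumptions, not the exact trees from the original figure.

```python
# A minimal sketch, using NLTK's Tree class, of where the complementizer C
# sits in a head-initial vs. a head-final language. The phrase structure is
# deliberately simplified for illustration.
from nltk import Tree

# English embedded clause: C ("that") precedes its TP complement (head-initial).
english_cp = Tree.fromstring(
    "(CP (C that) (TP (NP he) (VP (V married) (NP Hanako))))"
)

# Japanese embedded clause: C ("to") follows its TP complement (head-final).
# Romanization is an assumption: "Hanako-to kekkonshita to" ~ "that (he) married Hanako".
japanese_cp = Tree.fromstring(
    "(CP (TP (NP Hanako-to) (VP (V kekkonshita))) (C to))"
)

english_cp.pretty_print()   # C appears at the left edge of CP
japanese_cp.pretty_print()  # C appears at the right edge of CP
```

Printing the two trees makes the mirror-image placement of C directly visible.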
It is common for the complementizers of a language to develop historically from other syntactic categories, a process known as grammaticalization.
Across the world's languages, pronouns and determiners are especially common sources of complementizers (e.g., English "that").
I read in the paper that it's going to be cold today.
Another frequent source of complementizers is the class of interrogative words. It is especially common for a form that otherwise means "what" to be recruited as a complementizer, but other interrogative words are often used as well, as in the following colloquial English example, in which unstressed "how" is roughly equivalent to "that".
I read in the paper how it's going to be cold today.
In linguistics, an empty category, which may also be referred to as a covert category, is an element in the study of syntax that does not have any phonological content and is therefore unpronounced. Empty categories exist in contrast to overt categories, which are pronounced. When representing empty categories in tree structures, linguists use a null symbol (∅) to convey the idea that there is a mental category at the level being represented, even though the word or words are left out of overt speech.
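As a concrete illustration, English is often analyzed as allowing a null complementizer: in "Mary believes it is raining", the embedded clause can be headed by an unpronounced C. The bracketing below is a simplified sketch of that analysis, again using NLTK's Tree class; the exact TP-internal structure is an assumption made only for illustration.

```python
# A minimal sketch of an empty category: a null complementizer (marked "∅")
# heading the embedded CP in "Mary believes (that) it is raining".
from nltk import Tree

embedded_cp = Tree.fromstring(
    "(CP (C ∅) (TP (NP it) (VP (Aux is) (V raining))))"
)
embedded_cp.pretty_print()  # the null C occupies the same head position as overt "that"
```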
English is a West Germanic language in the Indo-European language family. It originated in early medieval England and, today, is the most spoken language in the world and the third most spoken native language, after Mandarin Chinese and Spanish. English is the most widely learned second language and is either the official language or one of the official languages in 59 sovereign states. There are more people who have learned English as a second language than there are native speakers.
A syntactic category is a syntactic unit that theories of syntax assume. Word classes, largely corresponding to traditional parts of speech (e.g. noun, verb, preposition, etc.), are syntactic categories. In phrase structure grammars, the phrasal categories (e.g. noun phrase, verb phrase, prepositional phrase, etc.) are also syntactic categories. Dependency grammars, however, do not acknowledge phrasal categories (at least not in the traditional sense).