Lexical semantics (also known as lexicosemantics), as a subfield of linguistic semantics, is the study of word meanings. It includes the study of how words structure their meaning, how they act in grammar and compositionality, and the relationships between the distinct senses and uses of a word.
The units of analysis in lexical semantics are lexical units, which include not only words but also sub-words or sub-units such as affixes, and even compound words and phrases. Lexical units make up the catalogue of words in a language, the lexicon. Lexical semantics examines how the meaning of lexical units correlates with the structure of the language, or syntax; this is referred to as the syntax-semantics interface.
The study of lexical semantics concerns:
the classification and decomposition of lexical items
the differences and similarities in lexical semantic structure cross-linguistically
the relationship of lexical meaning to sentence meaning and syntax.
Lexical units, also referred to as syntactic atoms, can stand alone, as root words and parts of compound words do, or they can require association with other units, as prefixes and suffixes do. The former are termed free morphemes and the latter bound morphemes. They fall into a narrow range of meanings (semantic fields) and can combine with each other to generate new denotations, as sketched below.
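As a toy illustration of this distinction, the following sketch treats a few English roots as free morphemes and a few prefixes as bound morphemes, and composes them into new denotations. The tiny FREE and BOUND_PREFIXES dictionaries are invented for the example, not drawn from any real lexicon:

```python
# Free morphemes can stand alone; bound morphemes must attach to a free root.
FREE = {"happy": "feeling pleasure", "do": "perform an action"}
BOUND_PREFIXES = {"un-": "negation of", "re-": "repetition of"}

def combine(prefix: str, root: str) -> tuple[str, str]:
    # Attach a bound prefix to a free root and compose their meanings.
    form = prefix.rstrip("-") + root
    meaning = f"{BOUND_PREFIXES[prefix]} {FREE[root]}"
    return form, meaning

print(combine("un-", "happy"))  # ('unhappy', 'negation of feeling pleasure')
print(combine("re-", "do"))     # ('redo', 'repetition of perform an action')
```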
Cognitive semantics is the linguistic paradigm/framework that since the 1980s has generated the most studies in lexical semantics, introducing innovations like prototype theory, conceptual metaphors, and frame semantics.
Lexical items contain information about category (lexical and syntactic), form and meaning. The semantics associated with these categories attach to each lexical item in the lexicon. Lexical items can also be semantically classified according to whether their meanings are derived from a single lexical unit or from their surrounding environment.
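To make the first point concrete, here is a minimal sketch of how such a lexicon entry might be represented as a data structure. The class and field names (LexicalEntry, lexical_category, and so on) are illustrative choices, not a standard representation:

```python
from dataclasses import dataclass

@dataclass
class LexicalEntry:
    form: str              # surface form, e.g. "bank"
    lexical_category: str  # e.g. "noun", "verb"
    syntactic_frame: str   # how the item behaves in syntax, e.g. "transitive"
    sense: str             # a gloss for one sense

lexicon = [
    LexicalEntry("bank", "noun", "count noun", "financial institution"),
    LexicalEntry("kick the bucket", "verb phrase", "intransitive", "die"),
]
for entry in lexicon:
    print(f"{entry.form} [{entry.lexical_category}]: {entry.sense}")
```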
Lexical items participate in regular patterns of association with each other.
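One common way to surface such association patterns computationally is collocation extraction. A minimal sketch using NLTK's collocation utilities (assuming the Genesis corpus has been downloaded; any tokenized text would do) ranks word pairs by pointwise mutual information:

```python
import nltk
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

# nltk.download('genesis')  # corpus used here; needed on first run
words = nltk.corpus.genesis.words('english-web.txt')

bigram_measures = BigramAssocMeasures()
finder = BigramCollocationFinder.from_words(words)
finder.apply_freq_filter(3)  # drop pairs seen fewer than 3 times
# the 10 word pairs most strongly associated by pointwise mutual information
print(finder.nbest(bigram_measures.pmi, 10))
```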
The objective of this course is to present the main models, formalisms and algorithms necessary for the development of applications in the field of natural language information processing.
This course introduces the foundations of information retrieval, data mining and knowledge bases, which underpin today's Web-based distributed information systems.
In linguistics, a word sense is one of the meanings of a word. For example, a dictionary may list over 50 different senses of the word "play", each having a different meaning based on the context of the word's usage in a sentence, as follows:
We went to see the play Romeo and Juliet at the theater.
The coach devised a great play that put the visiting team on the defensive.
The children went out to play in the park.
In each sentence, different collocates of "play" signal its different meanings.
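Sense inventories such as WordNet catalogue these distinctions. A minimal sketch using NLTK's WordNet interface (assuming the WordNet data is installed) lists a few of the recorded senses of "play":

```python
from nltk.corpus import wordnet as wn

# nltk.download('wordnet')  # sense inventory; needed on first run
for synset in wn.synsets('play')[:5]:
    print(synset.name(), '-', synset.definition())
# e.g. play.n.01 - a dramatic work intended for performance by actors on a stage
```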
Frame semantics is a theory of linguistic meaning developed by Charles J. Fillmore that extends his earlier case grammar. It relates linguistic semantics to encyclopedic knowledge. The basic idea is that one cannot understand the meaning of a single word without access to all the essential knowledge that relates to that word.
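FrameNet is the lexical database built on this theory, and it can be queried through NLTK. A minimal sketch (assuming the framenet_v17 data is installed; Commerce_buy is one of FrameNet's frames, evoked by verbs such as "buy") prints a frame's definition and its frame elements:

```python
from nltk.corpus import framenet as fn

# nltk.download('framenet_v17')  # FrameNet 1.7 data; needed on first run
frame = fn.frame('Commerce_buy')
print(frame.definition)
print(sorted(frame.FE))  # frame elements, e.g. Buyer, Seller, Goods
```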
Word-sense disambiguation (WSD) is the process of identifying which sense of a word is meant in a sentence or other segment of context. In human language processing and cognition, it is usually subconscious and automatic, but it can come to conscious attention when ambiguity impairs clarity of communication, given the pervasive polysemy in natural language. In computational linguistics, it is an open problem that affects other language-processing tasks, such as discourse analysis, improving the relevance of search engines, anaphora resolution, coherence, and inference.
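A classic baseline for WSD is the Lesk algorithm, which picks the sense whose dictionary gloss overlaps most with the words in the context. NLTK ships an implementation; a minimal sketch applies it to one of the "play" sentences above:

```python
from nltk.tokenize import word_tokenize
from nltk.wsd import lesk

# nltk.download('punkt'); nltk.download('wordnet')  # needed on first run
sentence = word_tokenize(
    "The coach devised a great play that put the visiting team on the defensive."
)
sense = lesk(sentence, 'play', pos='n')  # returns a WordNet synset or None
if sense is not None:
    print(sense.name(), '-', sense.definition())
```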
Explores methods for information extraction, including traditional and embedding-based approaches, supervised learning, distant supervision, and taxonomy induction.
We discuss some properties of generative models for word embeddings. Namely, Arora et al. (2016) proposed a latent discourse model implying concentration of the partition function of the word vectors. This concentration phenomenon led to an asymptotic ...
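The concentration claim can be illustrated numerically: under the latent discourse model, the partition function Z_c = Σ_w exp(⟨v_w, c⟩) is nearly the same for almost every unit context vector c. A toy sketch, with random Gaussian vectors standing in for trained embeddings (an assumption made purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, dim = 20000, 100
# random isotropic vectors stand in for trained word embeddings
word_vectors = rng.normal(size=(n_words, dim))

def partition(c):
    # Z_c = sum over words w of exp(<v_w, c>)
    return np.exp(word_vectors @ c).sum()

# sample random unit "discourse" vectors c and measure the spread of Z_c
cs = rng.normal(size=(50, dim))
cs /= np.linalg.norm(cs, axis=1, keepdims=True)
Z = np.array([partition(c) for c in cs])
print(f"mean Z = {Z.mean():.1f}, relative std = {Z.std() / Z.mean():.4f}")
```

The relative standard deviation printed at the end is small, which is the concentration phenomenon the abstract refers to.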
The objective of this study was to evaluate the effect of Motor Imagery (MI) training on language comprehension. In line with literature suggesting an intimate relationship between language and the motor system, we proposed that an MI training could imp ...
Natural Language Processing (NLP) has become increasingly utilized to provide adaptivity in educational applications. However, recent research has highlighted a variety of biases in pre-trained language models. While existing studies investigate bias in di ...