Machine translation can use a method based on dictionary entries, meaning that words are translated as a dictionary would translate them: word by word, usually without much correlation of meaning between them. Dictionary lookups may be done with or without morphological analysis or lemmatisation. While this approach is probably the least sophisticated form of machine translation, it is well suited to translating long lists of phrases at the subsentential level (i.e., not full sentences), e.g. inventories or simple catalogues of products and services.
It can also be used to expedite manual translation if the person carrying it out is fluent in both languages and therefore capable of correcting syntax and grammar.
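The word-by-word lookup described above can be sketched in a few lines. The toy English-German dictionary and the naive suffix-stripping "lemmatiser" below are illustrative assumptions, not part of any real system; note how the output preserves source word order and makes no attempt at agreement, exactly the limitation the text describes.

```python
# Minimal sketch of dictionary-based (word-by-word) machine translation.
# Dictionary entries and the lemmatiser are invented for illustration.

EN_DE = {
    "the": "die",
    "cat": "Katze",
    "cats": "Katzen",
    "drink": "trinken",
    "milk": "Milch",
}

def lemmatise(word: str) -> str:
    """Very naive lemmatisation: strip a plural/3rd-person '-s'."""
    return word[:-1] if word.endswith("s") else word

def translate(sentence: str) -> str:
    out = []
    for word in sentence.lower().split():
        if word in EN_DE:                  # direct dictionary lookup
            out.append(EN_DE[word])
        elif lemmatise(word) in EN_DE:     # retry after lemmatisation
            out.append(EN_DE[lemmatise(word)])
        else:                              # unknown word: pass through
            out.append(word)
    return " ".join(out)

print(translate("the cat drinks milk"))  # -> "die Katze trinken Milch"
```

The output "die Katze trinken Milch" illustrates the method's weakness: "drinks" is found only via lemmatisation, so the verb comes out uninflected and disagrees with its subject.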
LMT, introduced around 1990, is a Prolog-based machine-translation system that works on specially prepared bilingual dictionaries, such as the Collins English-German dictionary (CEG), which have been rewritten in an indexed form that is easily readable by computers. This method uses a structured lexical database (LDB) to correctly identify word categories in the source language and thus construct a coherent sentence in the target language, based on rudimentary morphological analysis. The system uses "frames" to identify the syntactic position a given word should occupy in a sentence. These frames are mapped via language conventions, such as UDICT in the case of English.
In its early (prototype) form, LMT uses three lexicons, accessed simultaneously: source, transfer and target, although it is also possible to encapsulate all of this information in a single lexicon. The program uses a lexical configuration consisting of two main elements. The first is a hand-coded lexicon addendum which contains possible incorrect translations. The second consists of various bilingual and monolingual dictionaries for the source and target languages.
This method of dictionary-based machine translation explores a different paradigm from systems such as LMT.