The Association for Computational Linguistics (ACL) is a scientific and professional organization for people working on natural language processing. Its namesake conference is one of the primary high-impact conferences for natural language processing research, along with EMNLP. The conference is held each summer in locations where significant computational linguistics research is carried out.
It was founded in 1962, originally named the Association for Machine Translation and Computational Linguistics (AMTCL). It became the ACL in 1968. The ACL has a European (EACL), a North American (NAACL), and an Asian (AACL) chapter.
The ACL was founded in 1962 as the Association for Machine Translation and Computational Linguistics (AMTCL). The initial membership was about 100. In 1965 the AMTCL took over the journal Mechanical Translation and Computational Linguistics. This journal was succeeded by the American Journal of Computational Linguistics (1974–1978, 1980–1983) and then by Computational Linguistics (1984–present). Since 1988, the journal has been published for the ACL by MIT Press.
The annual meeting was first held in 1963 in conjunction with the Association for Computing Machinery National Conference. For a long time the annual meeting was relatively informal and published nothing lengthier than abstracts. By 1968, the society had taken on its current name, the Association for Computational Linguistics (ACL). Publication of the annual meeting's Proceedings of the ACL began in 1979 and gradually matured into its modern form. Many of the meetings were held in conjunction with the Linguistic Society of America, and a few with the American Society for Information Science and the Cognitive Science Society.
The United States government sponsored much research from 1989 to 1994, contributing to the maturing of the ACL, reflected in higher author retention rates and increased research on key topics such as speech recognition.
Since the 2020s, computational linguistics has become a near-synonym of either natural language processing or language technology, with deep learning approaches, such as large language models, outperforming the specialized approaches previously used in the field. The field has overlapped with artificial intelligence since the efforts in the United States in the 1950s to use computers to automatically translate texts from foreign languages, particularly Russian scientific journals, into English.
Natural language processing (NLP) is an interdisciplinary subfield of linguistics and computer science. It is primarily concerned with processing natural language datasets, such as text corpora or speech corpora, using either rule-based or probabilistic (i.e. statistical and, most recently, neural network-based) machine learning approaches. The goal is a computer capable of "understanding" the contents of documents, including the contextual nuances of the language within them.
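The contrast drawn above between rule-based and statistical approaches can be made concrete with a minimal sketch. The toy corpus, the regex tokenizer, and the unigram model below are illustrative assumptions, not methods described in this article: the tokenizer is a hand-written rule, while the unigram probabilities are estimated from corpus counts.

```python
# Minimal sketch: processing a tiny text corpus two ways.
# The corpus and all function names are hypothetical examples.
import re
from collections import Counter

corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
]

# Rule-based: tokenize with a hand-written regular expression.
def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

# Statistical: estimate unigram probabilities from corpus counts.
counts = Counter(tok for sent in corpus for tok in tokenize(sent))
total = sum(counts.values())
unigram_prob = {word: n / total for word, n in counts.items()}

print(unigram_prob["the"])  # "the" occurs 4 times among 12 tokens, i.e. 1/3
```

Modern neural methods replace the hand-counted probabilities with learned parameters, but the pipeline shape (tokenize, then model token distributions) is the same.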
The Human Language Technology (HLT) course introduces methods and applications for language processing and generation, using statistical learning and neural networks.
The Deep Learning for NLP course provides an overview of neural network-based methods applied to text. The focus is on models particularly suited to the properties of human language.
The objective of this course is to present the main models, formalisms and algorithms necessary for developing applications in the field of natural language information processing.
Explores the processing of large digital texts, revealing hidden patterns and structures, and the convergence of Humanities Computing and Computational Linguistics.
The archive of science is a place where scientific practices are sedimented in the form of drafts, protocols of rejected hypotheses and failed experiments, obsolete instruments, outdated visualizations and other residues. Today, just as science goes more a ...
In comparison to computational linguistics, with its abundance of natural-language datasets, corpora of music analyses are rather fewer and generally smaller. This is partly due to difficulties inherent to the encoding of music analyses, whose multimodal r ...
In this paper, we trace the history of neural networks applied to natural language understanding tasks, and identify key contributions which the nature of language has made to the development of neural network architectures. We focus on the importance of v ...