The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic).
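Read as a protocol, the setup above admits a compact simulation. The sketch below is purely illustrative, and every name in it (imitation_game, judge_ask, machine_reply, and so on) is a hypothetical stand-in rather than anything from the original description: the machine passes exactly when the judge cannot do better than guessing which hidden label it sits behind.

```python
import random

def imitation_game(judge_ask, judge_decide, machine_reply, human_reply, rounds=5):
    """Sketch of the text-only protocol: the evaluator exchanges typed
    messages with two hidden partners and must say which one is the machine."""
    machine_label = random.choice(["A", "B"])  # hide the machine behind a random label
    if machine_label == "A":
        partners = {"A": machine_reply, "B": human_reply}
    else:
        partners = {"A": human_reply, "B": machine_reply}
    transcript = []
    for _ in range(rounds):
        for label in ("A", "B"):
            question = judge_ask(label, transcript)
            answer = partners[label](question)  # text-only channel: no voice, no appearance
            transcript.append((label, question, answer))
    guess = judge_decide(transcript)            # the judge names "A" or "B"
    return guess != machine_label               # True: the machine went undetected

# Toy participants, purely illustrative:
human = lambda q: "Let me think... " + q[::-1]
machine = lambda q: "Let me think... " + q[::-1]       # a perfect imitation
ask = lambda label, transcript: "What is poetry?"
decide = lambda transcript: random.choice(["A", "B"])  # indistinguishable, so the judge guesses

wins = sum(imitation_game(ask, decide, machine, human) for _ in range(1000))
print(f"machine went undetected in {wins}/1000 games")  # about 500 expected
```

Note that nothing in the loop inspects correctness of the answers, matching Turing's point that the test measures resemblance to human responses, not accuracy.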
The test was introduced by Turing in his 1950 paper "Computing Machinery and Intelligence" while he was working at the University of Manchester. It opens with the words: "I propose to consider the question, 'Can machines think?'" Because "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words." Turing describes the new form of the problem in terms of a three-person game called the "imitation game", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: "Are there imaginable digital computers which would do well in the imitation game?" This question, Turing believed, was one that could actually be answered.
A chatbot (originally chatterbot) is a software application that aims to mimic human conversation through text or voice interactions, typically online. Modern chatbots are artificial intelligence (AI) systems that are capable of maintaining a conversation with a user in natural language and simulating the way a human would behave as a conversational partner. Such technologies often utilize aspects of deep learning and natural language processing.
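To make this concrete, here is a minimal rule-based bot in the spirit of the early chatterbots such as ELIZA; the patterns and canned replies are invented for this sketch. Modern AI chatbots replace such hand-written rules with learned language models, but the conversational loop is the same: user text in, response text out.

```python
import re

# Hand-written pattern/response rules, invented for this example.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "What makes you feel {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that really the reason?"),
]

def reply(user_text):
    """Return the first matching canned response, echoing back the
    captured fragment, in the style of early chatterbots like ELIZA."""
    for pattern, template in RULES:
        match = pattern.search(user_text)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # fallback when no rule matches

print(reply("I am tired of waiting"))  # -> Why do you say you are tired of waiting?
print(reply("The weather is nice"))    # -> Tell me more.
```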
IBM Watson is a question-answering computer system capable of answering questions posed in natural language, developed in IBM's DeepQA project by a research team led by principal investigator David Ferrucci. Watson was named after IBM's founder and first CEO, industrialist Thomas J. Watson. The computer system was initially developed to answer questions on the quiz show Jeopardy! and in 2011, the Watson computer system competed on Jeopardy! against champions Brad Rutter and Ken Jennings, winning the first place prize of 1 million USD.
A CAPTCHA (/ˈkæp.tʃə/) is a type of challenge–response test used in computing to determine whether the user is human in order to deter bot attacks and spam. The term was coined in 2003 by Luis von Ahn, Manuel Blum, Nicholas J. Hopper, and John Langford. It is a contrived acronym for "Completely Automated Public Turing test to tell Computers and Humans Apart." A historically common type of CAPTCHA (displayed as Version 1.0) was first invented in 1997 by two groups working in parallel.
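The challenge-response flow itself is easy to state in code; what actually deters bots is the perceptual step of reading the distorted rendering, which appears only as a comment in the hypothetical sketch below (issue_challenge, verify, and the in-memory store are all assumptions made for illustration).

```python
import random
import secrets
import string

_pending = {}  # token -> expected answer (in practice, an expiring server-side store)

def issue_challenge(length=6):
    """Create a challenge the server knows the answer to. A real CAPTCHA
    would render `answer` as a distorted image at this point; that
    perceptual step is what keeps bots out."""
    answer = "".join(random.choices(string.ascii_uppercase + string.digits, k=length))
    token = secrets.token_hex(8)  # opaque handle sent to the client
    _pending[token] = answer
    return token, answer

def verify(token, response):
    """Check the user's typed response; each token is single-use."""
    expected = _pending.pop(token, None)
    return expected is not None and response.strip().upper() == expected

token, answer = issue_challenge()
assert verify(token, answer.lower())   # a human who read the image passes
assert not verify(token, answer)       # replaying the same token fails
```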
Recent advances in machine learning have contributed to the emergence of powerful models for how humans and other animals reason and behave. In this course we will compare and contrast how such brain ...
The discipline of Human-Computer Interaction (HCI) aims to systematically place the human factor in the design of interactive systems.
We discuss a set of topics that are important for the understanding of modern data science but that are typically not taught in an introductory ML course. In particular we discuss fundamental ideas ...
Large language models (LLMs) have been leveraged for several years now, obtaining state-of-the-art performance in recognizing entities from modern documents. For the last few months, the conversational agent ChatGPT has "prompted" a lot of interest in the ...
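As a rough illustration of what prompt-based entity recognition with a conversational model can look like, the sketch below treats the model as an opaque complete function; that function, the stub, and the prompt wording are assumptions made for this example, not any specific product's API.

```python
import json

# Hypothetical prompt asking a conversational LLM to act as an
# entity recognizer and answer in machine-readable form.
PROMPT = (
    "List every person, organization, and location mentioned in the text "
    'below as a JSON array of {"text": ..., "type": ...} objects, and '
    "output nothing else.\n\nText: "
)

def extract_entities(document, complete):
    raw = complete(PROMPT + document)
    return json.loads(raw)  # relies on the model honoring the JSON-only instruction

# Stubbed model call, purely to make the sketch executable:
fake_complete = lambda prompt: '[{"text": "Alan Turing", "type": "person"}]'
print(extract_entities("Alan Turing worked at the University of Manchester.", fake_complete))
```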
Due to the increasing demands of today's fast-paced world, mental health concerns are on the rise, which necessitates innovative approaches to provide support to those in need. Open-domain conversational agents, known as chatbots, offer a unique opportunity ...
Recently, substantial progress has been made in the understanding of microresonator frequency combs based on dissipative Kerr solitons (DKSs). However, most studies have focused on the single-resonator level. Coupled resonator systems can open new ...