A dialogue system, or conversational agent (CA), is a computer system intended to converse with a human. Dialogue systems employ one or more of text, speech, graphics, haptics, gestures, and other modes of communication on both the input and output channels.
The precise set of elements that makes up a dialogue system is not settled, as the field is still under active research; dialogue systems are, however, distinct from chatbots. The typical GUI wizard engages in a kind of dialogue, but it includes very few of the common dialogue system components, and its dialogue state is trivial.
Following dialogue systems based only on written text processing, which date from the early 1960s, the first spoken dialogue system was issued by a DARPA project in the USA in 1977. After the end of this five-year project, several European projects issued the first dialogue systems able to speak multiple languages (including French, German and Italian). Those first systems were used in the telecom industry to provide various phone services in specific domains, e.g. automated agenda and train timetable services.
Which components are included in a dialogue system, and how those components divide up responsibilities, differs from system to system. Central to any dialogue system is the dialogue manager, the component that manages the state of the dialogue and the dialogue strategy. A typical activity cycle in a dialogue system contains the following phases:
The user speaks, and the input is converted to plain text by the system's input recogniser/decoder, which may include:
automatic speech recogniser (ASR)
gesture recogniser
handwriting recogniser
The text is analysed by a natural language understanding (NLU) unit, which may include:
proper name identification
part-of-speech tagging
syntactic/semantic parsing
The semantic information is analysed by the dialogue manager, which keeps the history and state of the dialogue and manages the general flow of the conversation.
Usually, the dialogue manager contacts one or more task managers, which have knowledge of the specific task domain.
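The phases above can be sketched as a simple pipeline. The following is a minimal, illustrative sketch only: all class and function names are hypothetical, the recogniser is a stand-in that assumes text input, and the NLU and task manager are reduced to keyword spotting over a single toy domain (train timetables), not a real implementation of any system described here.

```python
from dataclasses import dataclass, field

def recognise(utterance: str) -> str:
    """Stand-in for an input recogniser (ASR, gesture, handwriting):
    here the input is already text, so we only normalise it."""
    return utterance.lower()

def understand(text: str) -> dict:
    """Toy NLU unit: keyword spotting in place of name identification,
    part-of-speech tagging, and parsing."""
    if "train" in text:
        return {"intent": "train_timetable",
                "destination": "paris" if "paris" in text else None}
    return {"intent": None}

def lookup_timetable(semantics: dict) -> str:
    """Toy task manager with knowledge of one specific domain."""
    return f"Next train to {semantics['destination']}: 10:42"

@dataclass
class DialogueManager:
    """Keeps the history and state of the dialogue and decides the next act."""
    history: list = field(default_factory=list)

    def handle(self, semantics: dict) -> dict:
        self.history.append(semantics)
        # Contact the task manager when the domain is recognised;
        # otherwise ask the user for clarification.
        if semantics.get("intent") == "train_timetable":
            return {"act": "inform", "answer": lookup_timetable(semantics)}
        return {"act": "request", "slot": "intent"}

dm = DialogueManager()
reply = dm.handle(understand(recognise("When is the next train to Paris?")))
print(reply["answer"])
```

The point of the sketch is the division of responsibilities: the recogniser and NLU unit produce semantic information, while the dialogue manager alone holds the history and delegates domain knowledge to a task manager.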