In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called confabulation or delusion) is a confident response by an AI that does not seem to be justified by its training data. For example, a hallucinating chatbot asked to generate a financial report for a company might falsely state that the company's revenue was $13.6 billion (or some other random number apparently "plucked from thin air"). Such phenomena are termed "hallucinations" in loose analogy with hallucination in human psychology. One key difference, however, is that a human hallucination usually involves false percepts, whereas an AI hallucination is an unjustified response or belief. Some researchers argue that the term "AI hallucination" unreasonably anthropomorphizes computers.

AI hallucination gained prominence around 2022 alongside the rollout of certain large language models (LLMs) such as ChatGPT. Users complained that such bots often embedded plausible-sounding but random falsehoods in their generated content. By 2023, analysts considered frequent hallucination to be a major problem in LLM technology.

Various researchers cited by Wired have classified adversarial hallucinations as a high-dimensional statistical phenomenon, or have attributed hallucinations to insufficient training data. Some researchers believe that certain "incorrect" AI responses classified by humans as hallucinations in object detection may in fact be justified by the training data, or even that the AI is giving a "correct" answer that human reviewers fail to see. For example, an adversarial image that looks to a human like an ordinary picture of a dog may in fact contain tiny patterns that, in authentic images, would only appear when viewing a cat: the AI is detecting real-world visual patterns to which humans are insensitive. However, these findings have been challenged by other researchers.
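Adversarial images of the kind described above are typically constructed with gradient-based attacks. The sketch below illustrates the idea with a targeted fast gradient sign method (FGSM) attack in PyTorch; the specific model (resnet18), the epsilon value, and the ImageNet class index are illustrative assumptions, not details from the text.

```python
# Minimal sketch of a targeted FGSM attack: nudge an image that a human
# sees as a dog toward a "cat" prediction using tiny pixel changes.
# Model choice, epsilon, and class index are assumptions for illustration.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image: torch.Tensor, target_class: int,
                epsilon: float = 0.01) -> torch.Tensor:
    """Shift each pixel by +/- epsilon so the model drifts toward target_class.

    The perturbation stays small enough that a human still perceives the
    original image, while the model picks up the injected pattern.
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    # Loss with respect to the *target* class; descending it steers
    # the model's prediction toward that class.
    loss = F.cross_entropy(logits, torch.tensor([target_class]))
    loss.backward()
    # Step against the gradient of the targeted loss, then keep pixels valid.
    adversarial = image - epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage: a random tensor stands in for a real dog photo.
x = torch.rand(1, 3, 224, 224)
x_adv = fgsm_attack(x, target_class=281)  # 281 = "tabby cat" in ImageNet
print(model(x_adv).argmax(dim=1))
```

The key design point is that the perturbation follows the sign of the loss gradient rather than its magnitude, so every pixel moves by at most epsilon; this keeps the change imperceptible to humans while exploiting exactly the fine-grained patterns the model has learned to associate with the target class.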