Cyc (pronounced /ˈsaɪk/) is a long-term artificial intelligence project that aims to assemble a comprehensive ontology and knowledge base spanning the basic concepts and rules about how the world works. Hoping to capture common-sense knowledge, Cyc focuses on implicit knowledge that other AI platforms may take for granted, in contrast with facts one might find on the internet or retrieve via a search engine or Wikipedia. The goal is to enable semantic reasoners to perform human-like reasoning and to be less "brittle" when confronted with novel situations.
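To make the idea concrete, here is a minimal sketch, in Python rather than Cyc's actual CycL language or API, of the kind of taxonomic assertion-and-inference such a knowledge base supports; the entities and collections (Fido, Dog, etc.) are invented for illustration.

```python
# Toy illustration (not Cyc's actual API): a few CycL-style assertions
# and a simple inference that chains "isa" (instance-of) through
# "genls" (generalization) links, the kind of taxonomic reasoning
# a common-sense knowledge base performs.

genls = {              # collection -> more general collection
    "Dog": "Mammal",
    "Mammal": "Animal",
}
isa = {"Fido": "Dog"}  # individual -> collection

def collections_of(individual):
    """Return every collection an individual belongs to, following
    genls links transitively (Fido isa Dog, Dog genls Mammal, ...)."""
    found = []
    c = isa.get(individual)
    while c is not None:
        found.append(c)
        c = genls.get(c)
    return found

print(collections_of("Fido"))  # ['Dog', 'Mammal', 'Animal']
```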
Douglas Lenat began the project in July 1984 at MCC, where he was Principal Scientist from 1984 to 1994. Since January 1995, the project has been under active development by the Cycorp company, where Lenat is the CEO.
The need for a massive symbolic artificial intelligence project of this kind was recognized in the early 1980s. Early AI researchers had ample experience over the previous 25 years with AI programs that would generate encouraging early results but then fail to "scale up", that is, to move beyond the "training set" and tackle a broader range of cases. Douglas Lenat and Alan Kay publicized this need, and they organized a meeting at Stanford in 1983 to address the problem. Back-of-the-envelope calculations by Lenat, Kay, and their colleagues (including Marvin Minsky, Allen Newell, Edward Feigenbaum, and John McCarthy) indicated that the effort would require between 1,000 and 3,000 person-years, far beyond the standard academic project model. However, events within a year of that meeting enabled an effort of that scale to get underway.
The project began in July 1984 as the flagship project of the 400-person Microelectronics and Computer Technology Corporation (MCC), a research consortium started by two dozen large United States-based corporations "to counter a then ominous Japanese effort in AI, the so-called 'fifth-generation' project."
In information science, an ontology encompasses a representation, formal naming, and definition of the categories, properties, and relations between the concepts, data, and entities that substantiate one, many, or all domains of discourse. More simply, an ontology is a way of showing the properties of a subject area and how they are related, by defining a set of concepts and categories that represent the subject. Every academic discipline or field creates ontologies to limit complexity and organize data into information and knowledge.
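As a small illustration of these ideas, the following sketch uses the rdflib Python library to express a toy ontology as RDF triples: two classes, a subclass relation, and an instance. The namespace and names are invented for illustration, not taken from any published ontology.

```python
# A minimal ontology sketch in RDF using rdflib: categories (classes),
# a relation between them (subClassOf), and an entity (an instance).
from rdflib import Graph, Namespace, RDF, RDFS, Literal

EX = Namespace("http://example.org/")
g = Graph()

g.add((EX.Dog, RDF.type, RDFS.Class))          # Dog is a category
g.add((EX.Animal, RDF.type, RDFS.Class))       # Animal is a category
g.add((EX.Dog, RDFS.subClassOf, EX.Animal))    # Dog is a kind of Animal
g.add((EX.Fido, RDF.type, EX.Dog))             # Fido is an instance of Dog
g.add((EX.Fido, RDFS.label, Literal("Fido")))  # human-readable name

# The graph stores only the asserted triples; an RDFS reasoner would
# additionally entail that Fido is an Animal.
for s, p, o in g:
    print(s, p, o)
```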
YAGO (Yet Another Great Ontology) is an open source knowledge base developed at the Max Planck Institute for Informatics in Saarbrücken. It is automatically extracted from Wikipedia and other sources. As of 2019, YAGO3 has knowledge of more than 10 million entities and contains more than 120 million facts about these entities. The information in YAGO is extracted from Wikipedia (e.g., categories, redirects, infoboxes), WordNet (e.g., synsets, hyponymy), and GeoNames. The accuracy of YAGO was manually evaluated to be above 95% on a sample of facts.
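Since YAGO is distributed as RDF dumps, its facts can be loaded and queried with standard tools. The sketch below parses a hypothetical Turtle excerpt written in the style of a YAGO dump (the specific triples are invented for illustration) and runs a SPARQL query over it.

```python
# Sketch: loading a (hypothetical) excerpt of a YAGO-style Turtle dump
# with rdflib and querying it with SPARQL. The triples are invented
# for illustration; real YAGO dumps are far larger files.
from rdflib import Graph

ttl = """
@prefix yago: <http://yago-knowledge.org/resource/> .
yago:Douglas_Lenat yago:wasBornIn    yago:Philadelphia .
yago:Philadelphia  yago:isLocatedIn yago:United_States .
"""

g = Graph()
g.parse(data=ttl, format="turtle")

# Find where each person mentioned in the excerpt was born.
q = """
PREFIX yago: <http://yago-knowledge.org/resource/>
SELECT ?person ?place WHERE { ?person yago:wasBornIn ?place . }
"""
for person, place in g.query(q):
    print(person, "was born in", place)
```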
In knowledge representation and reasoning, a knowledge graph is a knowledge base that uses a graph-structured data model or topology to integrate data. Knowledge graphs are often used to store interlinked descriptions of entities (objects, events, situations, or abstract concepts) while also encoding the semantics underlying the terminology used. Since the development of the Semantic Web, knowledge graphs have often been associated with linked open data projects, focusing on the connections between concepts and entities.
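Stripped of any particular framework, the graph-structured data model amounts to a set of labeled edges. The sketch below shows one minimal way to represent a knowledge graph as (subject, predicate, object) triples with a simple pattern query; the entity and relation names are invented for illustration.

```python
# Minimal sketch of a knowledge graph as a set of labeled edges
# (subject, predicate, object), with wildcard pattern matching.
triples = {
    ("Cyc", "developedBy", "Cycorp"),
    ("Cyc", "instanceOf", "KnowledgeBase"),
    ("Cycorp", "foundedBy", "Douglas_Lenat"),
}

def match(s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(match(s="Cyc"))        # everything asserted about Cyc
print(match(p="foundedBy"))  # all founding relations
```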
This course introduces information retrieval, data mining, and knowledge bases, which constitute the foundations of today's Web-based distributed information systems.