Reinforcement learning: In artificial intelligence, and more specifically in machine learning, reinforcement learning is the problem faced by an autonomous agent (a robot, a conversational agent, a character in a video game) of learning, from experience, which actions to take so as to optimize a quantitative reward over time. The agent operates within an environment and makes its decisions according to its current state. In return, the environment provides the agent with a reward, which can be positive or negative.
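As an illustration of the agent-environment-reward loop described above, here is a minimal sketch of tabular Q-learning in Python; the GridWorld environment, its reset/step interface, and the hyperparameter values (alpha, gamma, epsilon) are illustrative assumptions rather than part of any particular system.

```python
import random
from collections import defaultdict

class GridWorld:
    """Toy 4x4 grid: the agent starts at (0, 0) and is rewarded for reaching (3, 3)."""
    def reset(self):
        self.pos = (0, 0)
        return self.pos

    def step(self, action):
        moves = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
        dr, dc = moves[action]
        r = min(max(self.pos[0] + dr, 0), 3)
        c = min(max(self.pos[1] + dc, 0), 3)
        self.pos = (r, c)
        done = self.pos == (3, 3)
        reward = 1.0 if done else -0.04   # small step penalty encourages short paths
        return self.pos, reward, done

env = GridWorld()
actions = ["up", "down", "left", "right"]
q = defaultdict(float)                    # Q(state, action), initialised to 0
alpha, gamma, epsilon = 0.1, 0.95, 0.1    # learning rate, discount, exploration rate

for episode in range(500):
    state, done = env.reset(), False
    while not done:
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q[(state, a)])
        next_state, reward, done = env.step(action)
        # Q-learning update: move Q(s, a) toward reward + discounted best next value
        best_next = max(q[(next_state, a)] for a in actions)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
```

After training, the greedy policy read off the Q-table (always taking the highest-valued action in each state) heads toward the rewarded corner.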
Symbolic artificial intelligence: In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search. Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems (in particular, expert systems), symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems.
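To make the notion of production rules concrete, here is a minimal sketch of a forward-chaining rule engine in Python; the facts and rules about a hypothetical bird are invented examples, not drawn from any particular expert system.

```python
# Each rule is (antecedents, consequent): if all antecedents are known facts,
# the rule fires and asserts the consequent as a new fact.
rules = [
    ({"bird(tweety)"}, "has_wings(tweety)"),
    ({"has_wings(tweety)", "healthy(tweety)"}, "can_fly(tweety)"),
]

facts = {"bird(tweety)", "healthy(tweety)"}

# Forward chaining: keep firing applicable rules until no new fact is derived.
changed = True
while changed:
    changed = False
    for antecedents, consequent in rules:
        if antecedents <= facts and consequent not in facts:
            facts.add(consequent)
            changed = True

print(facts)  # now also contains has_wings(tweety) and can_fly(tweety)
```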
Agent-based computational economics: Agent-based computational economics (ACE) is the area of computational economics that studies economic processes, including whole economies, as dynamic systems of interacting agents. As such, it falls in the paradigm of complex adaptive systems. In corresponding agent-based models, the "agents" are "computational objects modeled as interacting according to rules" over space and time, not real people. The rules are formulated to model behavior and social interactions based on incentives and information.
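As a small sketch of what "computational objects interacting according to rules" can look like in practice, the following Python fragment simulates a toy wealth-exchange model; the number of agents, the exchange rule, and the parameter values are illustrative assumptions.

```python
import random

class Agent:
    """Computational agent holding a single state variable: its wealth."""
    def __init__(self, wealth):
        self.wealth = wealth

agents = [Agent(wealth=10.0) for _ in range(100)]

for step in range(10_000):
    giver, receiver = random.sample(agents, 2)
    if giver.wealth >= 1.0:          # behavioural rule: transfer one unit if affordable
        giver.wealth -= 1.0
        receiver.wealth += 1.0

# Aggregate outcomes (e.g. the wealth distribution) emerge from repeated local interactions.
wealths = sorted(a.wealth for a in agents)
print("poorest:", wealths[0], "richest:", wealths[-1])
```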
Technological singularity: The technological singularity (or simply the Singularity) is the hypothesis that the invention of artificial intelligence would trigger a runaway of technological growth, inducing unforeseeable changes in human society. (Figure: Ray Kurzweil's timeline (2005) dates the moment when a supercomputer will be more powerful than a human brain and will then be able to simulate it, an approach envisaged for creating an artificial intelligence and pursued notably in the Blue Brain project.)
Machine ethics: Machine ethics (or machine morality, computational morality, or computational ethics) is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors in man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology, and it should not be confused with computer ethics, which focuses on human use of computers.
Action selection: Action selection is a way of characterizing the most basic problem of intelligent systems: what to do next. In artificial intelligence and computational cognitive science, "the action selection problem" is typically associated with intelligent agents and animats (artificial systems that exhibit complex behaviour in an agent environment). The term is also sometimes used in ethology or animal behavior. One problem for understanding action selection is determining the level of abstraction used for specifying an "act".
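One common, simple framing of action selection is utility-based arbitration: score each candidate action against the agent's current state and pick the highest-scoring one. The sketch below illustrates this in Python; the state variables, candidate actions, and utility function are invented for illustration.

```python
# Toy agent state: internal drives and a perceived external threat, each in [0, 1].
state = {"hunger": 0.8, "fatigue": 0.3, "threat": 0.1}

def utility(action, state):
    """Toy estimates of how well each action addresses the agent's current needs."""
    if action == "eat":
        return state["hunger"] - state["threat"]
    if action == "sleep":
        return state["fatigue"] - state["threat"]
    if action == "flee":
        return 2.0 * state["threat"]
    return 0.0

candidate_actions = ["eat", "sleep", "flee"]
chosen = max(candidate_actions, key=lambda a: utility(a, state))
print(chosen)  # -> "eat" for this state
```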
Hybrid intelligent system: A hybrid intelligent system denotes a software system which employs, in parallel, a combination of methods and techniques from artificial intelligence subfields, such as: neuro-symbolic systems, neuro-fuzzy systems, hybrid connectionist-symbolic models, fuzzy expert systems, connectionist expert systems, evolutionary neural networks, genetic fuzzy systems, rough fuzzy hybridization, and reinforcement learning with fuzzy, neural, or evolutionary methods, as well as symbolic reasoning methods.
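As a rough sketch of the hybrid idea, the Python fragment below combines a symbolic component (hard, human-readable rules that prune candidates) with a stand-in for a connectionist component (a learned scoring function that ranks the remainder); the domain, rules, features, and weights are all invented for illustration.

```python
# Hypothetical loan applications to be screened and ranked.
candidates = [
    {"id": "A", "amount": 5000,  "age": 17, "income": 30000},
    {"id": "B", "amount": 8000,  "age": 34, "income": 45000},
    {"id": "C", "amount": 20000, "age": 52, "income": 38000},
]

def passes_rules(c):
    """Symbolic component: hard, human-readable constraints (invented for illustration)."""
    return c["age"] >= 18 and c["amount"] <= 3 * (c["income"] / 4)

def learned_score(c):
    """Connectionist component (stand-in): in practice a trained network; here a fixed linear model."""
    return 0.6 * (c["income"] / 50000) - 0.4 * (c["amount"] / 20000)

admissible = [c for c in candidates if passes_rules(c)]   # rules prune candidate A
ranked = sorted(admissible, key=learned_score, reverse=True)
print([c["id"] for c in ranked])  # -> ['B', 'C']
```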