In artificial intelligence, reactive planning denotes a group of techniques for action selection by autonomous agents. These techniques differ from classical planning in two respects. First, they operate in a timely fashion and can therefore cope with highly dynamic and unpredictable environments. Second, they compute just one next action at each instant, based on the current context. Reactive planners often (but not always) exploit reactive plans, which are stored structures describing the agent's priorities and behaviour. The term reactive planning goes back to at least 1988 and is synonymous with the more modern term dynamic planning.
There are several ways to represent a reactive plan. All require a basic representational unit and a means to compose these units into plans.
A condition-action rule, or if-then rule, is a rule of the form: if condition then action. These rules are called productions. The meaning of the rule is as follows: if the condition holds, perform the action. The action can be either external (e.g., pick something up and move it) or internal (e.g., write a fact into internal memory, or evaluate a new set of rules). Conditions are normally Boolean, and the action can either be performed or not.
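To make this concrete, here is a minimal sketch of a production-rule interpreter in Python. The world-state keys, rule names, and actions are illustrative assumptions, not taken from any particular system.

```python
# Minimal sketch of a condition-action (production) rule interpreter.
# The rule set, world-state keys, and actions are illustrative only.

from dataclasses import dataclass
from typing import Callable, Dict, List

State = Dict[str, bool]

@dataclass
class Rule:
    name: str
    condition: Callable[[State], bool]   # Boolean test over the current context
    action: Callable[[State], None]      # external or internal effect

def step(state: State, rules: List[Rule]) -> None:
    """Select and execute one action for the current instant."""
    for rule in rules:
        if rule.condition(state):
            rule.action(state)
            return  # commit to a single action per cycle

rules = [
    # Internal action: record a fact in memory.
    Rule("remember-obstacle",
         lambda s: s.get("obstacle_ahead", False) and not s.get("obstacle_noted", False),
         lambda s: s.update(obstacle_noted=True)),
    # External action: turn away from the obstacle.
    Rule("avoid",
         lambda s: s.get("obstacle_ahead", False),
         lambda s: print("turning left")),
    # Default behaviour when nothing else applies.
    Rule("wander",
         lambda s: True,
         lambda s: print("moving forward")),
]

world = {"obstacle_ahead": True}
step(world, rules)   # fires "remember-obstacle" (internal action)
step(world, rules)   # fires "avoid" (external action)
```

Committing to a single rule per cycle is what makes such an interpreter reactive: the decision is revisited from scratch on the next cycle, using fresh sensor data.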
Production rules may be organized in relatively flat structures, but more often they are organized into a hierarchy of some kind. For example, the subsumption architecture consists of layers of interconnected behaviours, each essentially a finite state machine that acts in response to an appropriate input. These layers are then organized into a simple stack, with higher layers subsuming the goals of the lower ones. Other systems may use trees, or may include special mechanisms for changing which goal or rule subset is currently most important. Flat structures are relatively easy to build, but they allow only the description of simple behaviour, or require immensely complicated conditions to compensate for the missing structure.
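The following sketch illustrates subsumption-style layering under simplifying assumptions: each layer is reduced to a plain function rather than a full finite state machine, and the layer names and sensor keys are invented for the example.

```python
# Sketch of subsumption-style layering: the stack is ordered from the
# highest-priority layer to the lowest, and a higher layer that produces
# an action suppresses everything below it. Names and keys are illustrative.

from typing import Dict, Optional

Sensors = Dict[str, float]

def avoid_layer(sensors: Sensors) -> Optional[str]:
    """Higher layer: subsumes wandering when an obstacle is close."""
    if sensors.get("obstacle_distance", float("inf")) < 0.5:
        return "turn-away"
    return None  # no opinion; defer to lower layers

def wander_layer(sensors: Sensors) -> Optional[str]:
    """Lowest layer: default exploratory behaviour."""
    return "move-forward"

LAYERS = [avoid_layer, wander_layer]  # highest priority first

def select_action(sensors: Sensors) -> str:
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action
    return "idle"

print(select_action({"obstacle_distance": 0.3}))  # turn-away
print(select_action({"obstacle_distance": 2.0}))  # move-forward
```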
An important part of any distributed action selection algorithm is a conflict resolution mechanism, which decides what to do when several rules or behaviours are applicable at the same time.
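Here is a minimal sketch of one common conflict resolution strategy, assuming each rule carries an explicit priority and a specificity score used as a tie-breaker; the rule names and numbers are illustrative.

```python
# Sketch of conflict resolution: among all rules whose conditions hold in
# the current cycle, pick the one with the highest priority, breaking ties
# by specificity (e.g. how many conditions the rule tests).

from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

State = Dict[str, bool]

@dataclass
class Rule:
    name: str
    priority: int      # higher wins
    specificity: int   # breaks ties between equal priorities
    condition: Callable[[State], bool]

def resolve(state: State, rules: List[Rule]) -> Optional[Rule]:
    applicable = [r for r in rules if r.condition(state)]
    if not applicable:
        return None
    return max(applicable, key=lambda r: (r.priority, r.specificity))

rules = [
    Rule("recharge", priority=2, specificity=1, condition=lambda s: s["battery_low"]),
    Rule("deliver",  priority=1, specificity=2, condition=lambda s: s["has_package"] and s["at_destination"]),
    Rule("wander",   priority=0, specificity=0, condition=lambda s: True),
]

state = {"battery_low": True, "has_package": True, "at_destination": True}
winner = resolve(state, rules)
print(winner.name)  # recharge: highest priority among the applicable rules
```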
Action selection is a way of characterizing the most basic problem of intelligent systems: what to do next. In artificial intelligence and computational cognitive science, "the action selection problem" is typically associated with intelligent agents and animats—artificial systems that exhibit complex behaviour in an agent environment. The term is also sometimes used in ethology or animal behavior. One problem for understanding action selection is determining the level of abstraction used for specifying an "act".