Summary
In computer science, a state space is a discrete space representing the set of all possible configurations of a "system". It is a useful abstraction for reasoning about the behavior of a given system and is widely used in the fields of artificial intelligence and game theory.

For instance, the toy problem Vacuum World has a discrete finite state space, in which there is a limited set of configurations that the vacuum and dirt can be in. A "counter" system, where states are the natural numbers starting at 1 and are incremented over time, has an infinite discrete state space. The angular position of an undamped pendulum is a continuous (and therefore infinite) state space.

State spaces are useful in computer science as a simple model of machines. Formally, a state space can be defined as a tuple [N, A, S, G] where:
- N is a set of states,
- A is a set of arcs connecting the states,
- S is a nonempty subset of N that contains start states,
- G is a nonempty subset of N that contains the goal states.

A state space has some common properties:
- complexity, where the branching factor is important,
- structure of the space (see also graph theory): directionality of arcs, tree, rooted graph.

For example, Vacuum World has a branching factor of 4, as the vacuum cleaner can end up in one of 4 adjacent squares after moving (assuming it cannot stay in the same square nor move diagonally). The arcs of Vacuum World are bidirectional, since any square can be reached from any adjacent square, and the state space is not a tree, since it is possible to enter a loop by moving among any 4 adjacent squares.

State spaces can be either infinite or finite, and discrete or continuous. The size of the state space for a given system is the number of possible configurations of the space. If the size of the state space is finite, calculating it is a combinatorial problem. For example, in the Eight queens puzzle, the size of the state space can be calculated by counting all possible ways to place 8 pieces on an 8x8 chessboard.
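As a minimal sketch of the [N, A, S, G] formulation, the Python snippet below models a simplified, hypothetical Vacuum World in which only the vacuum's position on a small grid is tracked (dirt is ignored), searches it with breadth-first search, and also counts the Eight queens placements as C(64, 8); the grid size, start state, and goal state are illustrative assumptions, not part of the original problem statement.

```python
# A minimal sketch of a state space as the tuple (N, A, S, G), using a
# simplified, hypothetical Vacuum World: only the vacuum's grid position is
# tracked, and the grid size, start, and goal are illustrative assumptions.
from collections import deque
from math import comb

SIZE = 3  # assumed 3x3 grid

# N: the set of all states (here, every grid position).
N = {(r, c) for r in range(SIZE) for c in range(SIZE)}

# A: arcs connect a state to the 4 adjacent squares (no diagonals, no staying
# put), i.e. the branching factor of 4 mentioned above (fewer at the edges).
def successors(state):
    r, c = state
    candidates = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [s for s in candidates if s in N]

S = {(0, 0)}  # start states (assumed)
G = {(2, 2)}  # goal states (assumed)

def bfs(start):
    """Breadth-first search over the state space; returns a path to a goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] in G:
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print("state-space size:", len(N))           # 9 positions
print("shortest path:", bfs(next(iter(S))))  # 5 states from (0, 0) to (2, 2)

# Eight queens as described above: counting all ways to place 8 identical
# pieces on the 64 squares of the board gives C(64, 8) configurations.
print("eight-queens placements:", comb(64, 8))  # 4,426,165,368
```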
Related courses (12)
ME-425: Model predictive control
Provide an introduction to the theory and practice of Model Predictive Control (MPC). Main benefits of MPC: flexible specification of time-domain objectives, performance optimization of highly complex
ME-221: Dynamical systems
Provides the students with basic notions and tools for the analysis of dynamic systems. Shows them how to develop mathematical models of dynamic systems and perform analysis in time and frequency domains.
ME-427: Networked control systems
This course offers an introduction to control systems using communication networks for interfacing sensors, actuators, controllers, and processes. Challenges due to network non-idealities and opportunities...
Related lectures (34)
State-space representation and system dynamics
Covers the representation of state-space models and system dynamics.
State-space dynamic response
Covers the computation of transfer functions, system poles, and zeros.
Linearization method: examples
Covers the linearization method through two process-control examples.
Related publications (38)
Related units (1)
Related concepts (6)
Dynamical systems theory
Dynamical systems theory commonly refers to the branch of mathematics that studies the properties of dynamical systems. This active area of research develops at the frontier of topology, analysis, geometry, measure theory, and probability. The nature of this study is conditioned by the dynamical system under consideration and depends on the tools used (analytic, geometric, or probabilistic).
Game complexity
Combinatorial game theory measures game complexity in several ways:
- State-space complexity: the number of legal game positions reachable from the initial position.
- Game tree size: the total number of possible games.
- Decision complexity: the number of leaf nodes in the smallest decision tree for the initial position.
- Game-tree complexity: the number of leaf nodes in the smallest full-width decision tree for the initial position.
- Computational complexity: the asymptotic difficulty of a game as it grows arbitrarily large.
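As a hedged illustration of the first measure, state-space complexity, the sketch below enumerates every position reachable from the initial position of tic-tac-toe, a game chosen here only because its state space is small; the board representation and move rules are assumptions of the sketch, not part of the definition above.

```python
# A minimal sketch of state-space complexity: count every legal position
# reachable from the initial position of tic-tac-toe (illustrative choice).
def winner(board):
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def successors(board):
    """Legal moves; the side to move is inferred from the piece counts."""
    if winner(board) or "." not in board:
        return []  # terminal position: no further play
    player = "X" if board.count("X") == board.count("O") else "O"
    return [board[:i] + player + board[i + 1:]
            for i, sq in enumerate(board) if sq == "."]

# Breadth-first enumeration of all positions reachable from the empty board.
start = "." * 9
seen = {start}
frontier = [start]
while frontier:
    nxt = []
    for pos in frontier:
        for succ in successors(pos):
            if succ not in seen:
                seen.add(succ)
                nxt.append(succ)
    frontier = nxt

print("state-space complexity of tic-tac-toe:", len(seen))  # 5478 positions
```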
Markov chain
(Figure: an elementary example of a Markov chain with two states A and E; the arrows indicate the transition probabilities from one state to another.) In mathematics, a Markov chain is a Markov process with discrete time, or with continuous time and a discrete state space. A Markov process is a stochastic process having the Markov property: the information useful for predicting the future is entirely contained in the present state of the process and does not depend on earlier states (the system has no "memory").
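A minimal sketch of a two-state chain like the one described above, assuming illustrative transition probabilities (the values from the figure are not reproduced here); each step is drawn using only the current state, which is exactly the Markov property.

```python
# A minimal sketch of a two-state Markov chain over states A and E; the
# transition probabilities below are illustrative assumptions.
import random

P = {
    "A": {"A": 0.6, "E": 0.4},  # Pr(next state | current state = A)
    "E": {"A": 0.7, "E": 0.3},  # Pr(next state | current state = E)
}

def step(state):
    """Draw the next state using only the current state (Markov property)."""
    r = random.random()
    cumulative = 0.0
    for nxt, p in P[state].items():
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point rounding

random.seed(0)
chain = ["A"]
for _ in range(10):
    chain.append(step(chain[-1]))
print(chain)  # each step's distribution depends only on the previous state
```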
Related MOOCs (4)
Neuronal Dynamics - Computational Neuroscience of Single Neurons
The activity of neurons in the brain and the code used by these neurons are described by mathematical neuron models at different levels of detail.
Neuronal Dynamics 2- Computational Neuroscience: Neuronal Dynamics of Cognition
This course explains the mathematical and computational models that are used in the field of theoretical neuroscience to analyze the collective dynamics of thousands of interacting neurons.