Spatial cognition is the acquisition, organization, utilization, and revision of knowledge about spatial environments. It is concerned less with space itself than with how animals, including humans, behave within space and the knowledge they build about it. These capabilities enable individuals to manage both basic and high-level cognitive tasks in everyday life. Numerous disciplines (such as cognitive psychology, neuroscience, artificial intelligence, geographic information science, and cartography) contribute to the study of spatial cognition.
In cognitive psychology and neuroscience, spatial memory is the form of memory responsible for recording and recovering the information needed to plan a route to a location and to recall where an object is located or where an event occurred. Spatial memory is necessary for orientation in space and can be divided into egocentric and allocentric spatial memory. For example, a person's spatial memory is required to navigate a familiar city, and a rat's spatial memory is needed to learn the location of food at the end of a maze.
The Morris water navigation task, also known as the Morris water maze, is a behavioral procedure used mostly with rodents. It is widely used in behavioral neuroscience to study spatial learning and memory. It enables learning, memory, and spatial working memory to be studied with great accuracy, and it can also be used to assess damage to particular cortical regions of the brain.
In the mathematical field of dynamical systems, an attractor is a set of states toward which a system tends to evolve from a wide variety of starting conditions. System values that get close enough to the attractor remain close even if slightly perturbed. In finite-dimensional systems, the evolving variable may be represented algebraically as an n-dimensional vector, and the attractor is then a region in n-dimensional space.
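To make this concrete, here is a minimal Python sketch (the particular map, its parameters, and the function names are illustrative choices, not taken from the source). Iterating the one-dimensional map x → ax + b with |a| < 1 drives a wide range of starting values toward the fixed-point attractor x* = b/(1 − a), here 2.0:

```python
def step(x, a=0.5, b=1.0):
    """One step of the linear map x -> a*x + b.

    With |a| < 1 the map has a fixed-point attractor at x* = b / (1 - a).
    """
    return a * x + b

# Very different starting conditions all settle onto the same attractor.
for x0 in [-10.0, 0.0, 3.0, 50.0]:
    x = x0
    for _ in range(40):
        x = step(x)
    print(f"x0 = {x0:6.1f}  ->  x_40 = {x:.6f}")  # all approach 2.0
```

Perturbing a state near 2.0 and iterating again illustrates the second property above: values close to the attractor stay close under small disturbances.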
A language model is a probabilistic model of natural language that assigns probabilities to sequences of words, based on the text corpora in one or more languages on which it was trained. Large language models, their most advanced form, are built from deep neural networks, predominantly transformers (which themselves incorporate feedforward layers). They have superseded recurrent neural network-based models, which had in turn superseded purely statistical models such as the word n-gram language model.
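As a concrete illustration of the simplest statistical case mentioned above, below is a minimal Python sketch of a word bigram (n = 2) model; the toy corpus and function names are illustrative assumptions, not from the source. It estimates P(w | w_prev) by maximum likelihood from bigram counts in the corpus and multiplies these conditional probabilities to score a word sequence:

```python
from collections import Counter

# Toy training corpus (already tokenized).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count adjacent word pairs (bigrams) and their left contexts.
bigrams = Counter(zip(corpus, corpus[1:]))
contexts = Counter(corpus[:-1])

def bigram_prob(w_prev, w):
    """Maximum-likelihood estimate of P(w | w_prev): count(w_prev, w) / count(w_prev)."""
    return bigrams[(w_prev, w)] / contexts[w_prev] if contexts[w_prev] else 0.0

def sequence_prob(words):
    """Probability of a word sequence as a product of bigram probabilities
    (start/end-of-sentence tokens and smoothing are omitted for brevity)."""
    p = 1.0
    for w_prev, w in zip(words, words[1:]):
        p *= bigram_prob(w_prev, w)
    return p

print(sequence_prob("the cat sat".split()))  # P(cat|the) * P(sat|cat) = 0.25 * 1.0
```

Real n-gram models add smoothing so that unseen word pairs do not zero out a whole sequence; neural models replace these counted tables with learned, generalizing representations.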