
Hierarchical temporal memory

Hierarchical temporal memory (HTM) is a biologically constrained machine intelligence technology developed by Numenta. Originally described in the 2004 book On Intelligence by Jeff Hawkins with Sandra Blakeslee, HTM is primarily used today for anomaly detection in streaming data. The technology is based on neuroscience, in particular on the physiology and interaction of pyramidal neurons in the neocortex of the mammalian (especially human) brain.

At the core of HTM are learning algorithms that can store, learn, infer, and recall high-order sequences. Unlike most other machine learning methods, HTM continuously learns time-based patterns in unlabeled data through an unsupervised process. HTM is robust to noise and has high capacity: it can learn multiple patterns simultaneously. In software, HTM is well suited for prediction, anomaly detection, classification, and ultimately sensorimotor applications. It has been tested and implemented through example applications from Numenta and a few commercial applications from Numenta's partners.

A typical HTM network is a tree-shaped hierarchy of levels (not to be confused with the "layers" of the neocortex). These levels are composed of smaller elements called regions (or nodes); a single level in the hierarchy may contain several regions, and higher levels often have fewer of them. Higher hierarchy levels can reuse patterns learned at the lower levels by combining them to memorize more complex patterns. Each HTM region has the same basic function. In learning and inference modes, sensory data (e.g. data from the eyes) comes into bottom-level regions; in generation mode, the bottom-level regions output the generated pattern of a given category. The top level usually has a single region that stores the most general and most permanent categories (concepts); these determine, or are determined by, smaller concepts at lower levels that are more restricted in time and space.
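To make the sequence-learning idea concrete, here is a minimal, illustrative Python sketch of continuous, unsupervised learning over a stream with an anomaly score. It is not Numenta's NuPIC implementation: the random encoder, the first-order transition table, and all names below are simplifying assumptions chosen for brevity.

import random

random.seed(42)

N_BITS = 256       # width of each sparse distributed representation (SDR)
ACTIVE_BITS = 8    # active bits per input; sparsity ~3%

_codes = {}        # symbol -> fixed random SDR (a toy stand-in for an encoder)

def encode(symbol):
    if symbol not in _codes:
        _codes[symbol] = frozenset(random.sample(range(N_BITS), ACTIVE_BITS))
    return _codes[symbol]

class ToySequenceMemory:
    """Continuously learns which SDR tends to follow which (unsupervised)."""
    def __init__(self):
        self.transitions = {}   # previous SDR -> union of bits observed next
        self.prev = None

    def step(self, sdr):
        # Anomaly score: fraction of active bits that were not predicted
        # from the previous input. 0.0 = fully expected, 1.0 = fully novel.
        predicted = self.transitions.get(self.prev, set())
        score = len(sdr - predicted) / len(sdr)
        # Learn continuously: remember that `sdr` followed `self.prev`.
        if self.prev is not None:
            self.transitions.setdefault(self.prev, set()).update(sdr)
        self.prev = sdr
        return score

tm = ToySequenceMemory()
stream = list("ABCABCABCABC") + ["X"] + list("ABCABC")
for sym in stream:
    print(sym, round(tm.step(encode(sym)), 2))

Note that this toy model is first-order: it conditions only on the immediately preceding input. A real HTM temporal memory maintains distributed per-cell states so that predictions depend on longer contexts, which is what "high-order sequences" refers to above.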

Related courses (5)
EE-390(a): Lab on hardware-software digital systems codesign
This course explores hardware-software co-design techniques to develop heterogeneous multi-core embedded systems running Linux on FPGAs. It covers high-level synthesis (HLS) tools to design
CS-119(c): Information, Computation, Communication
The objective of this course is to introduce students to algorithmic thinking, to familiarize them with the fundamentals of computer science, and to develop a first competence in programming
CS-119(g): Information, Computation, Communication
The objective of this course is to introduce students to algorithmic thinking, to familiarize them with the fundamentals of computer science and communications, and to develop a first competence
Related lectures (20)
Emerging Memory II
Explores challenges in memory hierarchies, TB-scale address spaces, and optimizing performance through near-memory processing.
Cache Coherence: Basics and Protocols
Explores cache coherence challenges, protocols, and directory-based solutions in multi-core systems.
Analytical Mechanics: Newton's Three Laws
Introduces the mathematical approach to analytical mechanics, emphasizing Newton's three laws for a complete description of phenomena.
Related publications (141)

InterpretCC: Intrinsic User-Centric Interpretability through Global Mixture of Experts

Martin Jaggi, Vinitra Swamy, Jibril Albachir Frej, Julian Thomas Blackwell

Interpretability for neural networks is a trade-off between three key requirements: 1) faithfulness of the explanation (i.e., how perfectly it explains the prediction), 2) understandability of the explanation by humans, and 3) model performance. Most exist ...
2024

TeSLA: Test-Time Self-Learning With Automatic Adversarial Augmentation

Jean-Philippe Thiran, Guillaume Marc Georges Vray, Devavrat Tomar

Most recent test-time adaptation methods focus on only classification tasks, use specialized network architectures, destroy model calibration or rely on lightweight information from the source domain. To tackle these issues, this paper proposes a novel Tes ...
IEEE, 2023

Temporal Prediction of Landslide-Generated Waves Using a Theoretical–Statistical Combined Method.

Christophe Ancey, Zhenzhu Meng, Yating Hu

For the prediction of landslide-generated waves, previous studies have developed numerous empirical equations to express the maximums of wave characteristics as functions of slide parameters upon impact. In this study, we built the temporal relationship be ...
2023
Related concepts (5)
Memory-prediction framework
The memory-prediction framework is a theory of brain function created by Jeff Hawkins and described in his 2004 book On Intelligence. This theory concerns the role of the mammalian neocortex and its associations with the hippocampi and the thalamus in matching sensory inputs to stored memory patterns and how this process leads to predictions of what will happen in the future. The theory is motivated by the observed similarities between the brain structures (especially neocortical tissue) that are used for a wide range of behaviours available to mammals.
Sparse distributed memory
Sparse distributed memory (SDM) is a mathematical model of human long-term memory introduced by Pentti Kanerva in 1988 while he was at NASA Ames Research Center. This memory exhibits behaviors, both in theory and in experiment, that resemble abilities previously unapproached by machines, e.g., rapid recognition of faces or odors, or discovery of new connections between seemingly unrelated ideas. Sparse distributed memory is used for storing and retrieving large amounts (on the order of 2^1000 bits) of information without focusing on accuracy but on similarity of information; a toy sketch of this storage scheme follows at the end of this list.
Cognitive architecture
A cognitive architecture refers both to a theory about the structure of the human mind and to a computational instantiation of such a theory used in the fields of artificial intelligence (AI) and computational cognitive science. The formalized models can be used to further refine a comprehensive theory of cognition and as a useful artificial intelligence program. Successful cognitive architectures include ACT-R (Adaptive Control of Thought - Rational) and SOAR.
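To illustrate the similarity-based storage and retrieval described in the sparse distributed memory entry above, here is a compact, hypothetical Python sketch of a Kanerva-style SDM with random hard locations and signed counters. The parameter values (word length, number of locations, activation radius) and all names are illustrative assumptions, not taken from a specific library.

import numpy as np

rng = np.random.default_rng(0)

WORD = 256          # length of binary address/data words
LOCATIONS = 2000    # number of fixed random "hard locations"
RADIUS = 118        # Hamming radius; chosen so a small fraction activates

class SDM:
    def __init__(self):
        # Hard locations: fixed random binary addresses.
        self.addresses = rng.integers(0, 2, size=(LOCATIONS, WORD))
        # One signed counter per data bit at every location.
        self.counters = np.zeros((LOCATIONS, WORD), dtype=np.int32)

    def _active(self, address):
        # A location participates if its address lies within RADIUS
        # Hamming distance of the query address.
        return np.count_nonzero(self.addresses != address, axis=1) <= RADIUS

    def write(self, address, data):
        # Distributed write: every active location increments counters
        # for 1-bits and decrements them for 0-bits.
        self.counters[self._active(address)] += 2 * data - 1

    def read(self, address):
        # Pool the counters of all active locations; threshold at zero.
        sums = self.counters[self._active(address)].sum(axis=0)
        return (sums > 0).astype(int)

# Autoassociative demo: store a pattern, recall it from a noisy copy.
sdm = SDM()
pattern = rng.integers(0, 2, size=WORD)
sdm.write(pattern, pattern)
noisy = pattern.copy()
noisy[rng.choice(WORD, size=20, replace=False)] ^= 1   # flip 20 of 256 bits
recalled = sdm.read(noisy)
print("bits matching original:", int(np.count_nonzero(recalled == pattern)))

With a single stored pattern, reading from an address that differs in a few bits still activates many of the same hard locations, so the pooled counters reconstruct the original word: retrieval keys on similarity of addresses rather than exact accuracy.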
