Homeomorphism (graph theory)
In graph theory, two graphs are homeomorphic if there is a graph isomorphism from some subdivision of one to some subdivision of the other. If the edges of a graph are thought of as lines drawn from one vertex to another (as they are usually depicted in illustrations), then two graphs are homeomorphic to each other in the graph-theoretic sense precisely if they are homeomorphic in the topological sense. In general, a subdivision of a graph G (sometimes known as an expansion) is a graph resulting from the subdivision of edges in G.
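As an illustration of edge subdivision, here is a minimal Python sketch (the edge-set representation and function name are assumptions for illustration, not from the source) that replaces a single edge (u, v) with a path through a new vertex w.

```python
# Minimal sketch: subdividing one edge of a graph represented as a set of
# undirected edges. Subdividing edge (u, v) replaces it with (u, w) and (w, v),
# where w is a fresh vertex.

def subdivide_edge(edges, u, v, w):
    """Return a new edge set in which the edge (u, v) is subdivided by vertex w."""
    if (u, v) not in edges and (v, u) not in edges:
        raise ValueError("edge not present")
    new_edges = {e for e in edges if e not in {(u, v), (v, u)}}
    new_edges.add((u, w))
    new_edges.add((w, v))
    return new_edges

# Example: subdividing one edge of a triangle
triangle = {(1, 2), (2, 3), (3, 1)}
print(subdivide_edge(triangle, 1, 2, 4))  # {(1, 4), (4, 2), (2, 3), (3, 1)}
```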
Binary tree
In computer science, a binary tree is a k-ary tree data structure in which each node has at most two children, which are referred to as the left child and the right child. A recursive definition using just set theory notions is that a (non-empty) binary tree is a tuple (L, S, R), where L and R are binary trees or the empty set and S is a singleton set containing the root. Some authors allow the binary tree to be the empty set as well. From a graph theory perspective, binary (and k-ary) trees as defined here are arborescences.
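A minimal Python sketch of the recursive definition above (the class and function names are assumptions for illustration, not from the source): a non-empty binary tree is a root value together with left and right subtrees, each of which is itself a binary tree or None (the empty set).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BinaryTree:
    root: int                             # the singleton set S, here a single value
    left: Optional["BinaryTree"] = None   # L: a binary tree or the empty set
    right: Optional["BinaryTree"] = None  # R: a binary tree or the empty set

def size(t: Optional[BinaryTree]) -> int:
    """Number of nodes, following the recursive structure of the definition."""
    if t is None:
        return 0
    return 1 + size(t.left) + size(t.right)

# Example: a tree with root 1 and children 2 and 3
t = BinaryTree(1, BinaryTree(2), BinaryTree(3))
print(size(t))  # 3
```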
B-tree
In computer science, a B-tree is a self-balancing tree data structure that maintains sorted data and allows searches, sequential access, insertions, and deletions in logarithmic time. The B-tree generalizes the binary search tree, allowing for nodes with more than two children. Unlike other self-balancing binary search trees, the B-tree is well suited for storage systems that read and write relatively large blocks of data, such as databases and file systems. B-trees were invented by Rudolf Bayer and Edward M. McCreight.
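A minimal Python sketch of B-tree search under an assumed node layout (not from the source): each node stores a sorted list of keys and, unless it is a leaf, one child per gap between keys, so len(children) == len(keys) + 1.

```python
from bisect import bisect_left

class BTreeNode:
    def __init__(self, keys, children=None):
        self.keys = keys                  # sorted keys within this node
        self.children = children or []    # empty for a leaf node

def btree_search(node, key):
    """Return True if key is stored in the subtree rooted at node."""
    i = bisect_left(node.keys, key)       # binary search within the node
    if i < len(node.keys) and node.keys[i] == key:
        return True
    if not node.children:                 # leaf reached without finding the key
        return False
    return btree_search(node.children[i], key)

# Example: a small three-node B-tree
root = BTreeNode([10], [BTreeNode([3, 7]), BTreeNode([15, 20])])
print(btree_search(root, 7), btree_search(root, 11))  # True False
```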
T-tree
In computer science, a T-tree is a type of binary tree data structure that is used by main-memory databases, such as Datablitz, eXtremeDB, MySQL Cluster, Oracle TimesTen and MobileLite. A T-tree is a balanced index tree data structure optimized for cases where both the index and the actual data are fully kept in memory, just as a B-tree is an index structure optimized for storage on block-oriented secondary storage devices like hard disks.
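A minimal Python sketch of the T-tree node idea under assumed names (not from the source): unlike a plain binary tree node, each node holds a small sorted array of values, with left and right subtrees for keys below the node's minimum or above its maximum. This shows only the node layout and a lookup, not a full balanced tree with rebalancing.

```python
from bisect import bisect_left

class TTreeNode:
    def __init__(self, values, left=None, right=None):
        self.values = sorted(values)      # bounded, sorted in-memory array of keys
        self.left = left                  # subtree of keys below min(self.values)
        self.right = right                # subtree of keys above max(self.values)

def ttree_search(node, key):
    """Return True if key is stored in the T-tree rooted at node."""
    if node is None:
        return False
    if key < node.values[0]:
        return ttree_search(node.left, key)
    if key > node.values[-1]:
        return ttree_search(node.right, key)
    i = bisect_left(node.values, key)     # key falls within this node's bounds
    return i < len(node.values) and node.values[i] == key

# Example
root = TTreeNode([40, 50, 60], TTreeNode([10, 20]), TTreeNode([80, 90]))
print(ttree_search(root, 50), ttree_search(root, 45))  # True False
```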
Dendrite
A dendrite (from Greek δένδρον déndron, "tree") or dendron is a branched protoplasmic extension of a nerve cell that propagates the electrochemical stimulation received from other neural cells to the cell body, or soma, of the neuron from which the dendrites project. Electrical stimulation is transmitted onto dendrites by upstream neurons (usually via their axons) via synapses which are located at various points throughout the dendritic tree.
Density functional theory
Density-functional theory (DFT) is a computational quantum mechanical modelling method used in physics, chemistry and materials science to investigate the electronic structure (or nuclear structure), principally of the ground state, of many-body systems, in particular atoms, molecules, and the condensed phases. Using this theory, the properties of a many-electron system can be determined by using functionals, i.e. functions of another function. In the case of DFT, these are functionals of the spatially dependent electron density.
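As a concrete illustration (standard textbook form, not taken from the source), the Kohn-Sham formulation of DFT maps the interacting many-electron problem onto single-particle equations whose effective potential is a functional of the density:

```latex
% Kohn-Sham equations (standard textbook form; atomic units assumed)
\begin{align}
  \left[ -\tfrac{1}{2}\nabla^2 + v_{\mathrm{eff}}[n](\mathbf{r}) \right] \varphi_i(\mathbf{r})
    &= \varepsilon_i \, \varphi_i(\mathbf{r}), \\
  n(\mathbf{r}) &= \sum_{i=1}^{N} \lvert \varphi_i(\mathbf{r}) \rvert^2 .
\end{align}
```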
Electron density
Electron density or electronic density is the measure of the probability of an electron being present at an infinitesimal element of space surrounding any given point. It is a scalar quantity depending upon three spatial variables and is typically denoted as either ρ(r) or n(r). The density is determined, through definition, by the normalised N-electron wavefunction, which itself depends upon 4N variables (3N spatial and N spin coordinates). Conversely, the density determines the wave function up to a phase factor, providing the formal foundation of density functional theory.
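For reference, the standard definition of the density in terms of the normalised N-electron wavefunction (spin coordinates suppressed for brevity; this is the conventional textbook form, not a formula quoted from the source):

```latex
% Electron density from the normalised N-electron wavefunction
\begin{equation}
  n(\mathbf{r}) \;=\; N \int \lvert \Psi(\mathbf{r}, \mathbf{r}_2, \dots, \mathbf{r}_N) \rvert^2 \, d^3 r_2 \cdots d^3 r_N ,
  \qquad \int n(\mathbf{r}) \, d^3 r = N .
\end{equation}
```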
Pattern recognition
Pattern recognition is the automated recognition of patterns and regularities in data. While similar, pattern recognition (PR) is not to be confused with pattern machines (PM), which may possess PR capabilities but whose primary function is to distinguish and create emergent patterns. PR has applications in statistical data analysis, signal processing, image analysis, information retrieval, bioinformatics, data compression, computer graphics and machine learning.
Autoencoder
An autoencoder is a type of artificial neural network used to learn efficient codings of unlabeled data (unsupervised learning). An autoencoder learns two functions: an encoding function that transforms the input data, and a decoding function that recreates the input data from the encoded representation. The autoencoder learns an efficient representation (encoding) for a set of data, typically for dimensionality reduction. Variants exist, aiming to force the learned representations to assume useful properties.
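A minimal sketch of the encoder/decoder idea (the toy data, dimensions, and training loop are assumptions for illustration, not from the source): a tiny linear autoencoder that compresses 4-dimensional inputs to a 2-dimensional code and reconstructs them, trained by plain gradient descent on the squared reconstruction error. Real autoencoders are nonlinear neural networks; this only illustrates the two learned functions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                # toy data: 200 samples, 4 features

W_enc = rng.normal(scale=0.1, size=(4, 2))   # encoder weights: 4 -> 2
W_dec = rng.normal(scale=0.1, size=(2, 4))   # decoder weights: 2 -> 4
lr = 0.01

for step in range(2000):
    Z = X @ W_enc                            # encoding function: code = f(x)
    X_hat = Z @ W_dec                        # decoding function: reconstruction = g(code)
    err = X_hat - X                          # reconstruction error
    # Gradients of the squared reconstruction error w.r.t. both weight matrices
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print("final reconstruction MSE:", float(np.mean((X @ W_enc @ W_dec - X) ** 2)))
```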