Transistor–transistor logic (TTL) is a logic family built from bipolar junction transistors. Its name signifies that transistors perform both the logic function (the first "transistor") and the amplifying function (the second "transistor"), as opposed to earlier resistor–transistor logic (RTL) and diode–transistor logic (DTL). TTL integrated circuits (ICs) were widely used in applications such as computers, industrial controls, test equipment and instrumentation, consumer electronics, and synthesizers.
The transistor count is the number of transistors in an electronic device (typically on a single substrate or "chip"). It is the most common measure of integrated circuit complexity (although the majority of transistors in modern microprocessors are contained in the cache memories, which consist mostly of the same memory cell circuits replicated many times). The rate at which MOS transistor counts have increased generally follows Moore's law, which observed that the transistor count doubles approximately every two years.
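Since the doubling rule is just exponential growth, projecting a count forward is simple arithmetic. A minimal sketch, with hypothetical starting figures chosen only to illustrate:

```python
def projected_transistor_count(initial_count: float, years: float,
                               doubling_period: float = 2.0) -> float:
    """Project a transistor count forward, assuming a fixed doubling period."""
    return initial_count * 2 ** (years / doubling_period)

# Hypothetical example: 1 billion transistors today, projected 10 years out.
print(projected_transistor_count(1_000_000_000, 10))  # 10/2 = 5 doublings -> 32x
```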
Artificial neural networks (ANNs, also shortened to neural networks (NNs) or neural nets) are machine learning models built on principles of neuronal organization discovered by connectionism in the biological neural networks that constitute animal brains. An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal to other neurons.
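A minimal sketch of such a collection of connected units: a tiny two-layer network in which each unit sums its weighted inputs and passes the result through a sigmoid. The layer sizes, weights, and inputs are illustrative assumptions, not anything specified above.

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer: each unit sums its weighted inputs,
    adds a bias, and applies a sigmoid activation."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

# A 2-input network with 2 hidden units and 1 output unit (made-up weights).
hidden = layer([0.5, -1.0], weights=[[0.8, 0.2], [-0.4, 0.9]], biases=[0.0, 0.1])
output = layer(hidden, weights=[[1.0, -1.0]], biases=[0.0])
print(output)
```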
Diode–transistor logic (DTL) is a class of digital circuits that is the direct ancestor of transistor–transistor logic. It is so called because the logic gating functions AND and OR are performed by diode logic, while logical inversion (NOT) and amplification (providing signal restoration) are performed by a transistor (in contrast with RTL and TTL). A typical DTL circuit consists of three stages: an input diode logic stage (D1, D2, and R1), an intermediate level-shifting stage (R3 and R4), and an output common-emitter amplifier stage (Q1 and R2).
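A behavioral sketch of those stages for a two-input gate, treating the diode AND stage and the transistor inverter as ideal switches; component values and voltage levels are abstracted away, so this shows only the logical behavior:

```python
def dtl_nand(a: bool, b: bool) -> bool:
    """Behavioral model of a two-input DTL NAND gate."""
    diode_and = a and b   # input diode logic stage (D1, D2, R1): AND of the inputs
    return not diode_and  # transistor stage (Q1, R2): inversion and amplification

# Truth table: output is low only when both inputs are high.
for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), int(dtl_nand(a, b)))
```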
Experiential learning (ExL) is the process of learning through experience, and is more narrowly defined as "learning through reflection on doing". Hands-on learning can be a form of experiential learning, but does not necessarily involve students reflecting on their product. Experiential learning is distinct from rote or didactic learning, in which the learner plays a comparatively passive role. It is related to, but not synonymous with, other forms of active learning such as action learning, adventure learning, free-choice learning, cooperative learning, service-learning, and situated learning.
Machine learning (ML) is an umbrella term for approaches that solve problems where developing explicit algorithms by hand would be cost-prohibitive; instead, machines 'discover' their 'own' algorithms from data, without being explicitly told what to do by any human-developed algorithm. Recently, generative artificial neural networks have been able to surpass the results of many previous approaches.
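A minimal sketch of what 'discovering' a rule from data looks like in practice: fitting a line to example pairs by gradient descent rather than hand-coding the relationship. The data, learning rate, and iteration count below are illustrative assumptions.

```python
# Learn y ≈ w*x + b from example pairs instead of hard-coding the rule.
data = [(1.0, 3.1), (2.0, 5.0), (3.0, 6.9)]  # roughly y = 2x + 1
w, b = 0.0, 0.0
lr = 0.05  # learning rate (illustrative choice)

for _ in range(2000):
    # Gradient of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # approaches w ≈ 2, b ≈ 1
```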
An artificial neuron is a mathematical function conceived as a model of biological neurons in a neural network. Artificial neurons are the elementary units of an artificial neural network. An artificial neuron receives one or more inputs (representing excitatory postsynaptic potentials and inhibitory postsynaptic potentials at neural dendrites) and sums them to produce an output (or activation, representing a neuron's action potential which is transmitted along its axon).
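A minimal sketch of that sum-then-activate behavior, using a simple step activation so the unit either 'fires' or stays silent. The weights, bias, and inputs are illustrative assumptions.

```python
def artificial_neuron(inputs, weights, bias):
    """Weighted sum of the inputs plus a bias, passed through a step
    activation: the unit 'fires' (1) only if the sum exceeds zero."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else 0

# Positive weights act like excitatory inputs, negative like inhibitory ones.
print(artificial_neuron([1, 1], weights=[0.6, 0.6], bias=-1.0))  # 1: fires
print(artificial_neuron([1, 0], weights=[0.6, 0.6], bias=-1.0))  # 0: silent
```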
Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize a notion of cumulative reward. Reinforcement learning is one of the three basic machine learning paradigms, alongside supervised learning and unsupervised learning. It differs from supervised learning in not needing labelled input/output pairs to be presented, and in not needing sub-optimal actions to be explicitly corrected.
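A minimal sketch of one standard RL technique, tabular Q-learning, on a toy four-state chain where moving right eventually earns a reward. The environment, rewards, and hyperparameters are all illustrative assumptions, not anything from the text.

```python
import random

# Toy chain environment: states 0..3; action 0 moves left, action 1 moves
# right; reaching state 3 ends the episode with reward 1.
def step(state, action):
    nxt = max(0, min(3, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == 3 else 0.0), nxt == 3

Q = [[0.0, 0.0] for _ in range(4)]  # Q[state][action]
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for _ in range(200):                # episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy: explore occasionally (and on ties), else exploit.
        if random.random() < eps or Q[s][0] == Q[s][1]:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s2, r, done = step(s, a)
        # Q-learning update: nudge Q[s][a] toward reward + discounted best next value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print(Q)  # action 1 (right) should have the higher value in states 0-2
```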
Learning theory describes how students receive, process, and retain knowledge during learning. Cognitive, emotional, and environmental influences, as well as prior experience, all play a part in how understanding, or a worldview, is acquired or changed and knowledge and skills retained. Behaviorists look at learning as an aspect of conditioning and advocate a system of rewards and targets in education.
In digital logic, an inverter or NOT gate is a logic gate which implements logical negation. It outputs a bit opposite of the bit that is put into it. The bits are typically implemented as two differing voltage levels. The NOT gate outputs a zero when given a one, and a one when given a zero; hence, it inverts its input. Colloquially, this inversion of bits is called "flipping" bits. As with all binary logic gates, other pairs of symbols, such as true and false or high and low, may be used in lieu of one and zero.
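The behavior in code is a one-liner; a minimal sketch (the function name is just an illustration):

```python
def not_gate(bit: int) -> int:
    """Logical negation of one bit: 0 -> 1, 1 -> 0."""
    return 1 - bit  # equivalently: int(not bit)

for bit in (0, 1):
    print(bit, "->", not_gate(bit))
```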