Virtual memory

In computing, virtual memory, or virtual storage, is a memory management technique that provides an "idealized abstraction of the storage resources that are actually available on a given machine" which "creates the illusion to users of a very large (main) memory". The computer's operating system, using a combination of hardware and software, maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory.
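As an illustration of that mapping, here is a minimal sketch of page-based address translation in Python; the page size, the page-table entries, and the translate helper are invented for this example and do not describe any particular operating system.

```python
PAGE_SIZE = 4096  # bytes per page; 4 KiB is a common, but not universal, choice

# Toy page table mapping virtual page numbers to physical frame numbers.
# These entries are made up for illustration.
page_table = {0: 7, 1: 3, 2: 12}

def translate(virtual_address):
    """Split a virtual address into (page number, offset) and look up the frame."""
    page_number, offset = divmod(virtual_address, PAGE_SIZE)
    if page_number not in page_table:
        raise RuntimeError(f"page fault: virtual page {page_number} is not mapped")
    return page_table[page_number] * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))  # virtual page 1 -> frame 3, same offset, so 0x3ABC
```

A real memory-management unit does this lookup in hardware (with multi-level page tables and a TLB cache), but the split into page number and offset is the same idea.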
LC circuit

[Images: an LC circuit diagram; an LC circuit (left) consisting of a ferrite coil and capacitor used as a tuned circuit in the receiver for a radio clock; the output tuned circuit of a shortwave radio transmitter from 1938.]

An LC circuit, also called a resonant circuit, tank circuit, or tuned circuit, is an electric circuit consisting of an inductor, represented by the letter L, and a capacitor, represented by the letter C, connected together.
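To make the "resonant circuit" name concrete, the textbook resonance condition (not stated in the text above) sets the inductive and capacitive reactances equal; here ω₀ and f₀ denote the angular and ordinary resonant frequencies.

```latex
% Resonance occurs where the inductive and capacitive reactances are equal:
%   \omega_0 L = \frac{1}{\omega_0 C}
\omega_0 = \frac{1}{\sqrt{LC}},
\qquad
f_0 = \frac{\omega_0}{2\pi} = \frac{1}{2\pi\sqrt{LC}}
```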
Synaptic weight

In neuroscience and computer science, synaptic weight refers to the strength or amplitude of a connection between two nodes, corresponding in biology to the amount of influence the firing of one neuron has on another. The term is typically used in artificial and biological neural network research. In a computational neural network, a vector or set of inputs $\mathbf{x}$ and outputs $\mathbf{y}$, or pre- and post-synaptic neurons respectively, are interconnected with synaptic weights represented by the matrix $w$, where for a linear neuron

$$y_j = \sum_i w_{ji} x_i \qquad \text{or} \qquad \mathbf{y} = w\mathbf{x},$$

where the rows of the synaptic matrix represent the vector of synaptic weights for the output indexed by $j$.
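The linear-neuron case above is just a matrix-vector product; the short sketch below shows it with NumPy. The sizes (3 inputs, 2 outputs) and the numerical values are hypothetical.

```python
import numpy as np

# Hypothetical sizes: 3 pre-synaptic (input) neurons, 2 post-synaptic (output) neurons.
x = np.array([0.5, 1.0, -0.2])            # pre-synaptic activities
w = np.array([[0.8, -0.3, 0.1],           # row j holds the synaptic weights feeding output j
              [0.2,  0.6, 0.9]])

# Linear neuron: y_j = sum_i w[j, i] * x[i], i.e. y = w @ x
y = w @ x
print(y)  # [0.08 0.52]
```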
Address generation unit

The address generation unit (AGU), sometimes also called address computation unit (ACU), is an execution unit inside central processing units (CPUs) that calculates addresses used by the CPU to access main memory. By having address calculations handled by separate circuitry that operates in parallel with the rest of the CPU, the number of CPU cycles required for executing various machine instructions can be reduced, bringing performance improvements.
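The arithmetic an AGU offloads is simple but frequent; the sketch below models the base + index*scale + displacement form of effective-address calculation found in x86-style addressing modes. The function name and the addresses are purely illustrative and are not taken from the text above.

```python
def effective_address(base, index, scale, displacement):
    """Model of the base + index*scale + displacement form of address arithmetic."""
    return base + index * scale + displacement

# Hypothetical example: stepping through an array of 8-byte elements.
base = 0x1000_0000
for i in range(4):
    print(hex(effective_address(base, i, 8, 0)))
# 0x10000000, 0x10000008, 0x10000010, 0x10000018
```

In hardware, this sum is computed by dedicated adders and shifters inside the AGU rather than by the general-purpose ALU, which is where the cycle savings come from.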
Caffe (software)

Caffe (Convolutional Architecture for Fast Feature Embedding) is a deep learning framework, originally developed at the University of California, Berkeley. It is open source, under a BSD license. It is written in C++, with a Python interface. Yangqing Jia created the Caffe project during his PhD at UC Berkeley. It is currently hosted on GitHub. Caffe supports many different types of deep learning architectures geared towards image classification and image segmentation. It supports CNN, RCNN, LSTM and fully connected neural network designs.
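To give a feel for the Python interface mentioned above, here is a minimal pycaffe sketch for loading a trained network and running one forward pass; the file names and the 'data' blob name are placeholders that depend on the model definition, not values given in the text.

```python
import numpy as np
import caffe

caffe.set_mode_cpu()  # CPU mode; caffe.set_mode_gpu() is available in GPU builds

# Load a trained network: architecture from a .prototxt, weights from a .caffemodel.
# Both file names below are placeholders.
net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

# Fill the input blob (here assumed to be named 'data') and run a forward pass.
net.blobs['data'].data[...] = np.random.rand(*net.blobs['data'].data.shape)
out = net.forward()
print(list(out.keys()))  # names of the network's output blobs
```

The network architecture itself lives in the .prototxt file, a declarative layer-by-layer description, so the Python side is mostly used to drive training and inference rather than to define models.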