Brass instrument
A brass instrument is a musical instrument that produces sound by sympathetic vibration of air in a tubular resonator in sympathy with the vibration of the player's lips. Brass instruments are also called labrosones or labrophones, from Latin and Greek elements meaning 'lip' and 'sound'. There are several factors involved in producing different pitches on a brass instrument.
Transformer (machine learning model)
A transformer is a deep learning architecture that relies on the parallel multi-head attention mechanism. The modern transformer was proposed in the 2017 paper 'Attention Is All You Need' by Ashish Vaswani et al. of the Google Brain team. It is notable for requiring less training time than previous recurrent neural architectures, such as long short-term memory (LSTM), and its later variants have been widely adopted for training large language models on large (language) datasets, such as the Wikipedia corpus and Common Crawl, by virtue of the parallel processing of input sequences.
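The attention mechanism at the heart of the architecture can be sketched in a few lines. This is a minimal, illustrative implementation of scaled dot-product attention (a single head, no masking or learned projections), with all matrix sizes chosen arbitrarily:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V               # each output is a weighted sum of values

# Toy example: 3 tokens, dimension 4 (sizes are illustrative)
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one output vector per token
```

Because every token's output is computed from the same matrix products, all positions can be processed in parallel, which is the property that distinguishes transformers from sequential recurrent architectures.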
Live electronic music
Live electronic music (also known as live electronics) is a form of music that can include traditional electronic sound-generating devices, modified electric musical instruments, hacked sound-generating technologies, and computers. Initially the practice developed in reaction to sound-based composition for fixed media such as musique concrète, electronic music and early computer music. Musical improvisation often plays a large role in the performance of this music.
Music appreciation
Music appreciation is a division of musicology that is designed to teach students how to understand and describe the contexts and creative processes involved in music composition. The concept of music appreciation is often taught as a subset of music theory in higher education and focuses predominantly on Western art music, commonly called "Classical music". This study of music is classified in a number of ways, including (but not limited to) examining music literacy and core musical elements such as pitch, duration, structure, texture and expressive techniques.
Large language model
A large language model (LLM) is a language model characterized by its large size. This size is enabled by AI accelerators, which are able to process vast amounts of text data, mostly scraped from the Internet. The underlying artificial neural networks can contain from tens of millions up to billions of weights and are (pre-)trained using self-supervised learning and semi-supervised learning. The transformer architecture contributed to faster training.
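To make the scale concrete, a back-of-the-envelope calculation shows why hardware accelerators matter: just storing the weights of a billion-parameter model takes gigabytes of memory. The parameter counts and 2-byte (fp16) precision below are illustrative assumptions, not figures from any particular model:

```python
def weight_memory_gb(n_params, bytes_per_param=2):
    """Memory needed to hold the weights alone, at 2 bytes per parameter (fp16)."""
    return n_params * bytes_per_param / 1e9

print(weight_memory_gb(70_000_000))     # 0.14 GB for a 70M-parameter model
print(weight_memory_gb(7_000_000_000))  # 14.0 GB for a 7B-parameter model
```

Training requires several times more memory than this (gradients, optimizer state, activations), which is why large models are trained across many accelerators.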
Recording studio
A recording studio is a specialized facility for recording and mixing instrumental or vocal musical performances, spoken words, and other sounds. Studios range in size from a small in-home project studio large enough to record a single singer-guitarist, to a large building with space for a full orchestra of 100 or more musicians. Ideally, both the recording and monitoring (listening and mixing) spaces are specially designed by an acoustician or audio engineer to achieve optimum acoustic properties (acoustic isolation or diffusion or absorption of reflected sound echoes that could otherwise interfere with the sound heard by the listener).
Guitar
The guitar is a fretted musical instrument that typically has six strings. It is usually held flat against the player's body and played by strumming or plucking the strings with the dominant hand, while simultaneously pressing selected strings against frets with the fingers of the opposite hand. A plectrum or individual finger picks may also be used to strike the strings. The sound of the guitar is projected either acoustically, by means of a resonant chamber on the instrument, or amplified by an electronic pickup and an amplifier.
Convolutional neural network
A convolutional neural network (CNN) is a regularized type of feed-forward neural network that learns feature engineering by itself via filter (or kernel) optimization. Vanishing and exploding gradients, seen during backpropagation in earlier neural networks, are prevented by using regularized weights over fewer connections. For example, in a fully connected layer, each neuron would require 10,000 weights to process an image sized 100 × 100 pixels; a convolutional layer instead shares a small kernel across all positions.
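The weight-count comparison above can be checked with simple arithmetic. The 5 × 5 kernel size below is an illustrative choice (kernel sizes vary by architecture):

```python
# Parameter count for one neuron/filter looking at a 100 x 100 grayscale image.
H, W = 100, 100

# Fully connected: every neuron has one weight per pixel.
fc_weights_per_neuron = H * W      # 10,000 weights per neuron

# Convolutional: a 5 x 5 kernel is shared across all image positions.
k = 5
conv_weights_per_filter = k * k    # 25 weights, reused everywhere

print(fc_weights_per_neuron)   # 10000
print(conv_weights_per_filter) # 25
```

This weight sharing is the "fewer connections" the text refers to: far fewer parameters must be learned, which regularizes the network and keeps gradients better behaved.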
Artificial neural network
Artificial neural networks (ANNs, also shortened to neural networks (NNs) or neural nets) are a branch of machine learning models that are built using principles of neuronal organization discovered by connectionism in the biological neural networks constituting animal brains. An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal to other neurons.
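A single artificial neuron can be sketched directly: it forms a weighted sum of its inputs plus a bias, then applies a nonlinearity. The sigmoid activation and the specific weights below are illustrative choices:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs through a sigmoid activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes output into (0, 1)

# Hypothetical 3-input neuron
out = neuron([1.0, 0.5, -1.0], [0.4, -0.2, 0.1], bias=0.0)
print(round(out, 3))  # 0.55
```

In a full network, the outputs of such neurons become the inputs of the next layer, and learning consists of adjusting the weights and biases of every connection.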
Deep reinforcement learning
Deep reinforcement learning (deep RL) is a subfield of machine learning that combines reinforcement learning (RL) and deep learning. RL considers the problem of a computational agent learning to make decisions by trial and error. Deep RL incorporates deep learning into the solution, allowing agents to make decisions from unstructured input data without manual engineering of the state space. Deep RL algorithms are able to take in very large inputs (e.g. every pixel rendered to the screen in a video game) and decide what actions to perform to optimize an objective.
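The trial-and-error learning that RL formalizes can be shown with tabular Q-learning on a toy problem; deep RL replaces the lookup table below with a neural network so the agent can handle unstructured inputs like pixels. The corridor environment, rewards, and hyperparameters are all illustrative assumptions:

```python
import random

# Tiny 1-D corridor: states 0..4, reward only for reaching state 4.
random.seed(0)
n_states, actions = 5, [-1, +1]  # move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.1

for _ in range(500):  # episodes of trial and error
    s = 0
    while s != 4:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == 4 else 0.0
        best_next = 0.0 if s2 == 4 else max(Q[(s2, b)] for b in actions)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy should move right (+1) from every state.
policy = {s: max(actions, key=lambda act: Q[(s, act)]) for s in range(4)}
print(policy)
```

The table Q holds one value per (state, action) pair, which is only feasible for tiny state spaces; a deep RL method such as a deep Q-network approximates Q with a network, removing the need to enumerate states by hand.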