Equalization (audio): Equalization, or simply EQ, in sound recording and reproduction is the process of adjusting the volume of different frequency bands within an audio signal. The circuit or equipment used to achieve this is called an equalizer. Most hi-fi equipment uses relatively simple filters to make bass and treble adjustments. Graphic and parametric equalizers have much more flexibility in tailoring the frequency content of an audio signal.
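As a sketch of how a single parametric EQ band can work, the Python snippet below implements a peaking biquad filter using the widely circulated Audio EQ Cookbook coefficient formulas; the function name and parameters are illustrative, not taken from any particular product.

```python
import numpy as np

def peaking_eq(x, sample_rate, center_hz, gain_db, q=1.0):
    """Boost or cut one frequency band with a biquad peaking filter
    (coefficients from the Audio EQ Cookbook)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * center_hz / sample_rate
    alpha = np.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A
    a0, a1, a2 = 1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A
    y = np.zeros(len(x))
    x1 = x2 = y1 = y2 = 0.0
    for n, xn in enumerate(x):
        # Direct-form I difference equation, normalized by a0.
        y[n] = (b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, xn
        y2, y1 = y1, y[n]
    return y
```

A graphic equalizer can be viewed as a bank of such bands at fixed center frequencies, while a parametric equalizer exposes the center frequency, gain, and Q of each band directly.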
Sound recording and reproduction: Sound recording and reproduction is the electrical, mechanical, electronic, or digital inscription and re-creation of sound waves, such as spoken voice, singing, instrumental music, or sound effects. The two main classes of sound recording technology are analog recording and digital recording. Sound recording is the transcription of invisible vibrations in air onto a storage medium such as a phonograph disc. The process is reversed in sound reproduction, and the variations stored on the medium are transformed back into sound waves.
Digital audio: Digital audio is a representation of sound recorded in, or converted into, digital form. In digital audio, the sound wave of the audio signal is typically encoded as numerical samples in a continuous sequence. For example, in CD audio, samples are taken 44,100 times per second, each with 16-bit sample depth. Digital audio is also the name for the entire technology of sound recording and reproduction using audio signals that have been encoded in digital form.
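A minimal illustration of those CD-audio figures: the sketch below samples a 440 Hz tone 44,100 times per second and quantizes each sample to a 16-bit signed integer (variable names are illustrative).

```python
import numpy as np

SAMPLE_RATE = 44_100   # CD audio: 44,100 samples per second
BIT_DEPTH = 16         # each sample stored as a 16-bit integer

t = np.arange(SAMPLE_RATE) / SAMPLE_RATE   # one second of sample times
wave = np.sin(2 * np.pi * 440 * t)         # 440 Hz tone in [-1.0, 1.0]

# Quantize to 16-bit signed integers, the sample format used on audio CDs.
max_int = 2 ** (BIT_DEPTH - 1) - 1         # 32767
samples = np.round(wave * max_int).astype(np.int16)
```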
Effects unit: An effects unit or effects pedal is an electronic device that alters the sound of a musical instrument or other audio source through audio signal processing. Common sound effects include distortion/overdrive, often used with electric guitar in electric blues and rock music; dynamic effects such as volume pedals and compressors, which affect loudness; filters such as wah-wah pedals and graphic equalizers, which modify frequency ranges; modulation effects, such as chorus, flangers and phasers; pitch effects such as pitch shifters; and time effects, such as reverb and delay, which create echoing sounds and emulate the sound of different spaces.
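As a hedged sketch of the distortion/overdrive family mentioned above, one common software approach is a soft-clipping waveshaper; the tanh curve below is illustrative, not the circuit of any specific pedal.

```python
import numpy as np

def overdrive(x, drive=5.0):
    """Soft-clipping waveshaper: tanh compresses the peaks of the
    waveform, adding the harmonics heard as overdrive."""
    return np.tanh(drive * x) / np.tanh(drive)   # renormalized to [-1, 1]
```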
MP3: MP3 (formally MPEG-1 Audio Layer III or MPEG-2 Audio Layer III) is a coding format for digital audio developed largely by the Fraunhofer Society in Germany under the lead of Karlheinz Brandenburg, with support from other digital scientists in the United States and elsewhere. Originally defined as the third audio format of the MPEG-1 standard, it was retained and further extended — defining additional bit-rates and support for more audio channels — as the third audio format of the subsequent MPEG-2 standard.
Speech synthesis: Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech synthesizer, and can be implemented in software or hardware products. A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations like phonetic transcriptions into speech. The reverse process is speech recognition. Synthesized speech can be created by concatenating pieces of recorded speech that are stored in a database.
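A toy sketch of that concatenative approach, assuming a hypothetical unit database keyed by phoneme-like labels (real systems use large, carefully segmented inventories of recorded speech; the random arrays here are stand-ins):

```python
import numpy as np

# Stand-ins for recorded speech units; a real database stores
# segmented waveforms for each unit label.
unit_db = {
    "HH": np.random.randn(800),
    "AY": np.random.randn(2400),
}

def synthesize(unit_sequence, crossfade=80):
    """Concatenate stored units, crossfading at each join to hide seams."""
    out = unit_db[unit_sequence[0]].copy()
    ramp = np.linspace(0.0, 1.0, crossfade)
    for name in unit_sequence[1:]:
        nxt = unit_db[name]
        out[-crossfade:] = out[-crossfade:] * (1 - ramp) + nxt[:crossfade] * ramp
        out = np.concatenate([out, nxt[crossfade:]])
    return out

speech = synthesize(["HH", "AY"])   # "hi", in this toy unit inventory
```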
Digital signal processing: Digital signal processing (DSP) is the use of digital processing, such as by computers or more specialized digital signal processors, to perform a wide variety of signal processing operations. The digital signals processed in this manner are a sequence of numbers that represent samples of a continuous variable in a domain such as time, space, or frequency. In digital electronics, a digital signal is represented as a pulse train, which is typically generated by the switching of a transistor.
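For example, one of the simplest DSP operations on such a sequence of samples is an FIR moving-average (low-pass) filter, sketched below; the helper name is illustrative.

```python
import numpy as np

def moving_average(samples, width=5):
    """Smooth a sequence of samples by averaging each point with its
    neighbors, a basic FIR low-pass filtering operation."""
    kernel = np.ones(width) / width
    return np.convolve(samples, kernel, mode="same")
```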
Sound: In physics, sound is a vibration that propagates as an acoustic wave, through a transmission medium such as a gas, liquid or solid. In human physiology and psychology, sound is the reception of such waves and their perception by the brain. Only acoustic waves that have frequencies lying between about 20 Hz and 20 kHz, the audio frequency range, elicit an auditory percept in humans. In air at atmospheric pressure, these represent sound waves with wavelengths of about 17 meters to 1.7 centimeters. Sound waves above 20 kHz are known as ultrasound and are not audible to humans.
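Those wavelength figures follow directly from wavelength = speed / frequency; a quick check, assuming a speed of sound of 343 m/s (air at about 20 °C):

```python
SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 °C

for f in (20, 20_000):   # edges of the human audio frequency range
    wavelength = SPEED_OF_SOUND / f
    print(f"{f:>6} Hz -> {wavelength} m")
# 20 Hz -> about 17 m; 20,000 Hz -> about 0.017 m (1.7 cm)
```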
Audio engineer: An audio engineer (also known as a sound engineer or recording engineer) helps to produce a recording or a live performance, balancing and adjusting sound sources using equalization, dynamics processing and audio effects, mixing, reproduction, and reinforcement of sound. Audio engineers work on the "technical aspect of recording—the placing of microphones, pre-amp knobs, the setting of levels. The physical recording of any project is done by an engineer... the nuts and bolts."
Delay (audio effect): Delay is an audio signal processing technique that records an input signal to a storage medium and then plays it back after a period of time. When the delayed playback is mixed with the live audio, it creates an echo-like effect, whereby the original audio is heard followed by the delayed audio. The delayed signal may be played back multiple times, or fed back into the recording, to create the sound of a repeating, decaying echo. Delay effects range from a subtle echo effect to a pronounced blending of previous sounds with new sounds.
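A minimal sketch of that feedback mechanism in Python (parameter names are illustrative): the wet signal is fed back into the delay line, so each repeat decays by the feedback factor.

```python
import numpy as np

def delay_effect(x, sample_rate, delay_seconds=0.3, feedback=0.4, mix=0.5):
    """Feedback delay: each echo is the previous one scaled by `feedback`."""
    d = int(delay_seconds * sample_rate)    # delay length in samples
    wet = np.zeros(len(x))
    for n in range(len(x)):
        delayed = wet[n - d] if n >= d else 0.0   # read from the delay line
        wet[n] = x[n] + feedback * delayed        # feed the output back in
    return (1 - mix) * np.asarray(x, dtype=float) + mix * wet
```

With feedback below 1.0 the repeats die away, giving the repeating, decaying echo described above; a feedback of 0.0 yields a single echo.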