Audio signal: An audio signal is a representation of sound, typically using either a changing level of electrical voltage for analog signals, or a series of binary numbers for digital signals. Audio signals have frequencies in the audio frequency range of roughly 20 to 20,000 Hz, which corresponds to the lower and upper limits of human hearing. Audio signals may be synthesized directly, or may originate at a transducer such as a microphone, musical instrument pickup, phonograph cartridge, or tape head.
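As an illustrative sketch of direct synthesis (not drawn from the text above), the following generates a short sine tone as a sequence of digital samples; the 440 Hz frequency, 44.1 kHz sample rate, and one-second duration are arbitrary example choices.

```python
import math

# Illustrative sketch: directly synthesizing a digital audio signal,
# here a 440 Hz sine tone at a 44,100 Hz sample rate (example values).
SAMPLE_RATE = 44_100   # samples per second
FREQUENCY = 440.0      # Hz, well inside the 20-20,000 Hz audio range
DURATION = 1.0         # seconds

def synthesize_tone(frequency=FREQUENCY, duration=DURATION, sample_rate=SAMPLE_RATE):
    """Return floating-point samples in [-1.0, 1.0]."""
    n_samples = int(duration * sample_rate)
    return [math.sin(2 * math.pi * frequency * t / sample_rate)
            for t in range(n_samples)]

samples = synthesize_tone()
print(f"Generated {len(samples)} samples")  # 44100 samples for one second
```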
Audio signal processing: Audio signal processing is a subfield of signal processing that is concerned with the electronic manipulation of audio signals. Audio signals are electronic representations of sound waves, which are longitudinal waves that travel through air and consist of compressions and rarefactions. The energy contained in audio signals, or sound level, is typically measured in decibels. As audio signals may be represented in either digital or analog format, processing may occur in either domain.
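Since the paragraph above mentions levels measured in decibels, here is a minimal sketch of that computation; the 20 µPa sound-pressure reference used in the example is the conventional one for sound pressure level, included here as an assumption rather than something stated above.

```python
import math

# Minimal sketch: converting a linear ratio to a level in decibels.
def db_from_amplitude(value, reference):
    """Level in dB for an amplitude-like quantity (sound pressure, voltage)."""
    return 20.0 * math.log10(value / reference)

def db_from_power(value, reference):
    """Level in dB for a power-like quantity (intensity, power)."""
    return 10.0 * math.log10(value / reference)

# Example (assumed reference): 1 Pa relative to the 20 micropascal
# threshold of hearing gives roughly 94 dB SPL.
print(round(db_from_amplitude(1.0, 20e-6), 1))  # ~94.0
```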
Hearing aid: A hearing aid is a device designed to improve hearing by making sound audible to a person with hearing loss. Hearing aids are classified as medical devices in most countries and regulated by the respective regulations. Small audio amplifiers such as personal sound amplification products (PSAPs) or other plain sound reinforcing systems cannot be sold as "hearing aids". Early devices, such as ear trumpets or ear horns, were passive amplification cones designed to gather sound energy and direct it into the ear canal.
Signal: In signal processing, a signal is a function that conveys information about a phenomenon. Any quantity that can vary over space or time can be used as a signal to share messages between observers. The IEEE Transactions on Signal Processing includes audio, video, speech, image, sonar, and radar as examples of signals. A signal may also be defined as an observable change in a quantity over space or time (a time series), even if it does not carry information.
Digital audio: Digital audio is a representation of sound recorded in, or converted into, digital form. In digital audio, the sound wave of the audio signal is typically encoded as numerical samples in a continuous sequence. For example, in CD audio, samples are taken 44,100 times per second, each with 16-bit sample depth. Digital audio is also the name for the entire technology of sound recording and reproduction using audio signals that have been encoded in digital form.
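As a back-of-the-envelope check on the CD figures above, the sketch below computes the uncompressed PCM data rate they imply; the two-channel (stereo) count is assumed for the example.

```python
# Sketch: uncompressed PCM data rate implied by the CD-audio figures above.
SAMPLE_RATE = 44_100   # samples per second, per channel
BIT_DEPTH = 16         # bits per sample
CHANNELS = 2           # stereo is assumed here for illustration

bits_per_second = SAMPLE_RATE * BIT_DEPTH * CHANNELS
print(bits_per_second)             # 1411200 bit/s
print(bits_per_second / 8 / 1024)  # ~172.3 KiB of audio data per second
```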
Frequency-hopping spread spectrum: Frequency-hopping spread spectrum (FHSS) is a method of transmitting radio signals by rapidly changing the carrier frequency among many frequencies occupying a large spectral band. The changes are controlled by a code known to both transmitter and receiver. FHSS is used to avoid interference, to prevent eavesdropping, and to enable code-division multiple access (CDMA) communications. The frequency band is divided into smaller sub-bands, and signals rapidly change ("hop") their carrier frequencies among the center frequencies of these sub-bands in a determined order.
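A toy sketch of the hopping idea described above: both ends derive the same pseudo-random order of sub-band center frequencies from a shared seed standing in for the agreed code. The band edges, sub-band count, and seed below are made-up example values, not parameters of any particular FHSS system.

```python
import random

# Toy sketch of frequency hopping: transmitter and receiver share a seed
# (standing in for the agreed code) and derive the same hop sequence.
# Band edges, sub-band count, and seed are made-up example values.
BAND_START_HZ = 2_400_000_000
BAND_WIDTH_HZ = 80_000_000
NUM_SUBBANDS = 79
SHARED_SEED = 0xC0DE

def center_frequencies():
    """Center frequency of each sub-band across the spectral band."""
    width = BAND_WIDTH_HZ / NUM_SUBBANDS
    return [BAND_START_HZ + (i + 0.5) * width for i in range(NUM_SUBBANDS)]

def hop_sequence(seed, hops):
    """Pseudo-random order of sub-band center frequencies known to both ends."""
    rng = random.Random(seed)
    freqs = center_frequencies()
    return [freqs[rng.randrange(NUM_SUBBANDS)] for _ in range(hops)]

# Both sides compute the same sequence from the shared seed.
assert hop_sequence(SHARED_SEED, 10) == hop_sequence(SHARED_SEED, 10)
```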
Audio bit depth: In digital audio using pulse-code modulation (PCM), bit depth is the number of bits of information in each sample, and it directly corresponds to the resolution of each sample. Examples of bit depth include Compact Disc Digital Audio, which uses 16 bits per sample, and DVD-Audio and Blu-ray Disc, which can support up to 24 bits per sample. In basic implementations, variations in bit depth primarily affect the noise level from quantization error, and thus the signal-to-noise ratio (SNR) and dynamic range.
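For an ideal N-bit PCM quantizer driven by a full-scale sine wave, the theoretical SNR (and hence dynamic range) is commonly approximated as 6.02 N + 1.76 dB; the short sketch below evaluates this for the 16-bit and 24-bit depths mentioned above.

```python
# Sketch: theoretical SNR / dynamic range of ideal N-bit PCM quantization,
# using the common 6.02*N + 1.76 dB approximation for a full-scale sine input.
def quantization_snr_db(bits: int) -> float:
    return 6.02 * bits + 1.76

for bits in (16, 24):
    print(f"{bits}-bit PCM: ~{quantization_snr_db(bits):.1f} dB")
# 16-bit PCM: ~98.1 dB
# 24-bit PCM: ~146.2 dB
```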
Unilateral hearing loss: Unilateral hearing loss (UHL) is a type of hearing impairment in which there is normal hearing in one ear and impaired hearing in the other. Patients with unilateral hearing loss have difficulty hearing conversation on their impaired side, localizing sound, understanding speech in the presence of background noise, interacting in social settings, and focusing on individual sound sources in large, open environments, along with heavy impairment of auditory figure–ground perception. In quiet conditions, speech discrimination in those with partial deafness is no worse than in normal hearing; in noisy environments, however, speech discrimination is almost always severely impaired.
Signal processing: Signal processing is an electrical engineering subfield that focuses on analyzing, modifying, and synthesizing signals, such as sound, images, potential fields, seismic signals, altimetry processing, and scientific measurements. Signal processing techniques are used to optimize transmission, improve digital storage efficiency, correct distorted signals, improve subjective video quality, and detect or pinpoint components of interest in a measured signal. According to Alan V. Oppenheim and Ronald W. Schafer, the principles of signal processing can be found in the classical numerical analysis techniques of the 17th century.
Hearing: Hearing, or auditory perception, is the ability to perceive sounds through an organ, such as an ear, by detecting vibrations as periodic changes in the pressure of a surrounding medium. The academic field concerned with hearing is auditory science. Sound may be heard through solid, liquid, or gaseous matter. It is one of the traditional five senses. Partial or total inability to hear is called hearing loss.