Hearing conservation program
Hearing conservation programs are designed to prevent hearing loss due to noise. They require knowledge about risk factors such as noise and ototoxicity; hearing and hearing loss; protective measures to prevent hearing loss at home, in school, at work, in the military, and at social/recreational events; and legislative requirements.
Noise-induced hearing loss
Noise-induced hearing loss (NIHL) is a hearing impairment resulting from exposure to loud sound. People may have a loss of perception of a narrow range of frequencies or impaired perception of sound, including sensitivity to sound or ringing in the ears. When exposure to hazards such as noise occurs at work and is associated with hearing loss, it is referred to as occupational hearing loss. Hearing may deteriorate gradually from chronic and repeated noise exposure (such as loud music or background noise) or suddenly from exposure to impulse noise, which is a short, high-intensity noise (such as a gunshot or airhorn).
Johnson–Nyquist noise
Johnson–Nyquist noise (thermal noise, Johnson noise, or Nyquist noise) is the electronic noise generated by the thermal agitation of the charge carriers (usually the electrons) inside an electrical conductor at equilibrium, which happens regardless of any applied voltage. Thermal noise is present in all electrical circuits; in sensitive electronic equipment (such as radio receivers) it can drown out weak signals, and it can be the limiting factor on the sensitivity of electrical measuring instruments.
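The size of this noise can be estimated from the standard relation for the RMS noise voltage across a resistor, v_rms = sqrt(4 k_B T R Δf). The minimal Python sketch below evaluates that formula; the resistor value, temperature, and bandwidth are illustrative assumptions, not values from the text.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def thermal_noise_vrms(resistance_ohm, temperature_k, bandwidth_hz):
    """RMS Johnson-Nyquist noise voltage: sqrt(4 * k_B * T * R * delta_f)."""
    return math.sqrt(4 * K_B * temperature_k * resistance_ohm * bandwidth_hz)

# A 1 kilo-ohm resistor at room temperature, measured over a 10 kHz bandwidth,
# produces roughly 0.4 microvolts of RMS thermal noise.
print(thermal_noise_vrms(1e3, 300.0, 10e3))   # ~4.07e-07 V
```

This illustrates why thermal noise matters for sensitive instruments: the noise floor scales with resistance, temperature, and measurement bandwidth, so a weak signal below a few hundred nanovolts would be buried in this example.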
Advanced Audio Coding
Advanced Audio Coding (AAC) is an audio coding standard for lossy digital audio compression. Designed to be the successor of the MP3 format, AAC generally achieves higher sound quality than MP3 encoders at the same bit rate. AAC has been standardized by ISO and IEC as part of the MPEG-2 and MPEG-4 specifications. A subset of AAC, HE-AAC ("AAC+"), is part of MPEG-4 Audio and has been adopted into the digital radio standards DAB+ and Digital Radio Mondiale, and the mobile television standards DVB-H and ATSC-M/H.
Soundscape
A soundscape is the acoustic environment as perceived by humans, in context. The term was originally coined by Michael Southworth and popularised by R. Murray Schafer. Its use varies by discipline, ranging from urban design to wildlife ecology to computer science. An important distinction is to separate the soundscape from the broader acoustic environment: the acoustic environment is the combination of all the acoustic resources, natural and artificial, within a given area as modified by the environment.
Coding theory
Coding theory is the study of the properties of codes and their respective fitness for specific applications. Codes are used for data compression, cryptography, error detection and correction, data transmission and data storage. Codes are studied by various scientific disciplines, such as information theory, electrical engineering, mathematics, linguistics, and computer science, for the purpose of designing efficient and reliable data transmission methods.
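As a toy illustration of error detection and correction (not drawn from any particular library), the Python sketch below implements the classic Hamming(7,4) code: four data bits are protected by three parity bits, and the decoder can locate and correct any single flipped bit.

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit Hamming codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity check over positions 4,5,6,7
    err = s1 + 2 * s2 + 4 * s3       # 1-based position of the flipped bit, 0 if none
    if err:
        c[err - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

# Flip one bit "in transit"; the decoder still recovers the original data.
data = [1, 0, 1, 1]
codeword = hamming74_encode(data)
codeword[5] ^= 1
assert hamming74_decode(codeword) == data
```

The design trade-off shown here is central to coding theory: the code spends 3 redundant bits per 4 data bits (rate 4/7) to buy single-error correction on an unreliable channel.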
Numerical weather prediction
Numerical weather prediction (NWP) uses mathematical models of the atmosphere and oceans to predict the weather based on current weather conditions. Though first attempted in the 1920s, it was not until the advent of computer simulation in the 1950s that numerical weather predictions produced realistic results. A number of global and regional forecast models are run in different countries worldwide, using current weather observations relayed from radiosondes, weather satellites and other observing systems as inputs.
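A deliberately simplified sketch of the underlying idea, under the assumption that the Lorenz-63 equations stand in for a real atmospheric model: the model is integrated forward in time from "current conditions", and a tiny error in those initial conditions grows rapidly, which is why accurate observations are fed into forecast models.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 toy model."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

# "Analysis" (current conditions) and a slightly perturbed copy of it.
state_a = np.array([1.0, 1.0, 1.0])
state_b = state_a + np.array([1e-4, 0.0, 0.0])

for _ in range(2000):                 # integrate the model forward in time
    state_a = lorenz_step(state_a)
    state_b = lorenz_step(state_b)

print(np.abs(state_a - state_b))      # the tiny initial error has grown substantially
```

Real NWP systems replace this toy model with discretized equations of atmospheric and oceanic motion on a global or regional grid, but the forward-integration structure is the same.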
Digital audio
Digital audio is a representation of sound recorded in, or converted into, digital form. In digital audio, the sound wave of the audio signal is typically encoded as numerical samples in a continuous sequence. For example, in CD audio, samples are taken 44,100 times per second, each with 16-bit sample depth. Digital audio is also the name for the entire technology of sound recording and reproduction using audio signals that have been encoded in digital form.
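For illustration, the short Python sketch below (the function name and parameters are hypothetical) samples a 440 Hz sine wave 44,100 times per second and quantizes each sample to a signed 16-bit integer, mirroring the CD-audio format described above.

```python
import numpy as np

SAMPLE_RATE = 44_100          # CD audio: 44,100 samples per second
BIT_DEPTH = 16                # each sample stored as a signed 16-bit integer

def sample_sine(freq_hz, duration_s):
    """Sample a sine wave at CD-audio rate and quantize it to 16-bit integers."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    analog = np.sin(2 * np.pi * freq_hz * t)                # waveform in [-1, 1]
    return np.round(analog * (2**(BIT_DEPTH - 1) - 1)).astype(np.int16)

samples = sample_sine(440.0, duration_s=1.0)
print(len(samples))        # 44100 samples for one second of audio
print(samples.dtype)       # int16, i.e. 16-bit sample depth
```

One second of mono CD-quality audio therefore occupies 44,100 × 2 bytes, about 86 KiB, before any compression is applied.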
Signal
In signal processing, a signal is a function that conveys information about a phenomenon. Any quantity that can vary over space or time can be used as a signal to share messages between observers. The IEEE Transactions on Signal Processing includes audio, video, speech, image, sonar, and radar as examples of signals. A signal may also be defined as an observable change in a quantity over space or time (a time series), even if it does not carry information.
Speech recognition
Speech recognition is an interdisciplinary subfield of computer science and computational linguistics that develops methodologies and technologies that enable the recognition and translation of spoken language into text by computers. It is also known as automatic speech recognition (ASR), computer speech recognition, or speech to text (STT). It incorporates knowledge and research in the computer science, linguistics, and computer engineering fields. The reverse process is speech synthesis.