Latency refers to a short period of delay (usually measured in milliseconds) between when an audio signal enters a system and when it emerges. Potential contributors to latency in an audio system include analog-to-digital conversion, buffering, digital signal processing, transmission time, digital-to-analog conversion and the speed of sound in the transmission medium.
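Two of these contributions are easy to estimate from first principles: buffering delay is the buffer length divided by the sample rate, and acoustic propagation delay is the listening distance divided by the speed of sound. The Python sketch below shows the arithmetic; the buffer size, sample rate and distance are illustrative assumptions, not values from the text.

    # Back-of-the-envelope latency contributions (illustrative values).
    BUFFER_SIZE_SAMPLES = 256      # assumed audio interface buffer
    SAMPLE_RATE_HZ = 48_000        # assumed sample rate
    SPEED_OF_SOUND_M_S = 343.0     # speed of sound in air at about 20 °C
    LISTENER_DISTANCE_M = 3.0      # assumed distance from the loudspeaker

    buffering_ms = 1000.0 * BUFFER_SIZE_SAMPLES / SAMPLE_RATE_HZ
    propagation_ms = 1000.0 * LISTENER_DISTANCE_M / SPEED_OF_SOUND_M_S

    print(f"Buffering:   {buffering_ms:.2f} ms")    # ~5.33 ms
    print(f"Propagation: {propagation_ms:.2f} ms")  # ~8.75 ms

In a full-duplex computer audio path a buffer of this kind typically sits on both the input and the output side, so its contribution is counted twice.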
Latency can be a critical performance metric in professional audio, including sound reinforcement systems, foldback systems (especially those using in-ear monitors), live radio and television. Excessive audio latency can degrade call quality in telecommunications applications. Low-latency audio in computers is important for interactivity.
In all systems, latency can be said to consist of three elements: codec delay, playout delay and network delay.
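The mouth-to-ear figure is simply the sum of these elements. A minimal sketch, with each term set to an assumed value rather than a measurement:

    # End-to-end latency as the sum of its three elements (assumed values).
    codec_delay_ms = 25.0    # encode plus decode, e.g. a frame-based codec
    playout_delay_ms = 40.0  # jitter/playout buffer at the receiver
    network_delay_ms = 60.0  # one-way transmission across the network

    mouth_to_ear_ms = codec_delay_ms + playout_delay_ms + network_delay_ms
    print(f"Mouth-to-ear delay: {mouth_to_ear_ms:.0f} ms")  # 125 ms

A budget like this one stays inside the 200 ms guideline discussed below.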
Latency in telephone calls is sometimes referred to as delay; the telecommunications industry also uses the term quality of experience (QoE). Voice quality is measured according to the ITU model; the measurable quality of a call degrades rapidly where the mouth-to-ear delay exceeds 200 milliseconds. The mean opinion score (MOS) is also comparable in a near-linear fashion with the ITU's quality scale, defined in standards G.107, G.108 and G.109, with a quality factor R ranging from 0 to 100. An MOS of 4 ('Good') corresponds to an R score of 80 or above; an R of 100 requires an MOS exceeding 4.5.
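ITU-T G.107 also defines an explicit mapping from the rating factor R to an estimated MOS, which reproduces the correspondence quoted above: R = 80 maps to an MOS of roughly 4.0, and the MOS scale saturates at 4.5 for R = 100. A minimal sketch of that mapping:

    def r_to_mos(r: float) -> float:
        """Map a G.107 E-model rating factor R to an estimated MOS."""
        if r <= 0:
            return 1.0
        if r >= 100:
            return 4.5
        return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

    print(r_to_mos(80))   # ~4.02, i.e. 'Good'
    print(r_to_mos(90))   # ~4.34
    print(r_to_mos(100))  # 4.5, the ceiling of the scale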
The ITU and 3GPP group end-user services into classes based on latency sensitivity: conversational services such as voice calls are the most delay-sensitive, followed by streaming and interactive services, with delay-tolerant background traffic at the other extreme.
Similarly, the G.114 recommendation regarding mouth-to-ear delay indicates that most users are "very satisfied" as long as latency does not exceed 200 ms, corresponding to an R of 90 or above. Codec choice also plays an important role: the highest-quality (and highest-bandwidth) codecs such as G.711 are usually configured to incur the least encode-decode latency, so on a network with sufficient throughput, sub-100 ms latencies can be achieved. G.711 at a bit rate of 64 kbit/s is the encoding method predominantly used on the public switched telephone network.
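Because G.711 encodes each 8 kHz sample independently, its algorithmic delay is negligible and the codec-related contribution is dominated by packetization. The sketch below estimates a one-way budget for G.711 over IP; the 20 ms packet interval is a common VoIP configuration, and the jitter-buffer and network figures are assumptions chosen only to illustrate how a sub-100 ms total can be reached.

    # Rough one-way delay budget for G.711 over IP (illustrative values).
    BIT_RATE_KBPS = 64          # G.711 payload bit rate
    PACKET_INTERVAL_MS = 20.0   # assumed packetization interval
    JITTER_BUFFER_MS = 40.0     # assumed receive-side playout buffer
    NETWORK_ONE_WAY_MS = 30.0   # assumed propagation and queuing delay

    payload_bytes = BIT_RATE_KBPS * 1000 / 8 * PACKET_INTERVAL_MS / 1000  # 160 bytes
    one_way_ms = PACKET_INTERVAL_MS + JITTER_BUFFER_MS + NETWORK_ONE_WAY_MS

    print(f"G.711 payload per packet: {payload_bytes:.0f} bytes")
    print(f"Estimated one-way delay:  {one_way_ms:.0f} ms")  # 90 ms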
This lecture is oriented towards the study of audio engineering, with a special focus on room acoustics applications. The learning outcomes will be the techniques for microphone and loudspeaker design ...
In this class we will explore some of the fundamental ways in which the pervasiveness of digital devices has completely revolutionized the world of music in the last 40 years, both from the point of view ...
This course aims to train students of the Electrical and Electronic Engineering (Génie Electrique et Electronique) section in the design of acoustic systems, using a formalism based on electrical engineering. At the end of the ...
Explores sound wave propagation in waveguides, room modes, and sound pressure distribution in rooms, highlighting the detrimental effects of room modes on sound reproduction.
Opus is a lossy audio coding format developed by the Xiph.Org Foundation and standardized by the Internet Engineering Task Force, designed to efficiently code speech and general audio in a single format, while remaining low-latency enough for real-time interactive communication and low-complexity enough for low-end embedded processors. Opus replaces both Vorbis and Speex for new applications, and several blind listening tests have ranked it higher-quality than any other standard audio format at any given bitrate until transparency is reached, including MP3, AAC, and HE-AAC.
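A codec's own contribution to latency is its algorithmic delay: the frame duration plus whatever look-ahead the encoder needs. The sketch below compares two configurations; the 6.5 ms and 2.5 ms look-ahead figures are the values commonly quoted for Opus's default and low-delay modes and are used here as assumptions rather than measured results.

    # Algorithmic delay = frame duration + encoder look-ahead (assumed figures).
    def algorithmic_delay_ms(frame_ms: float, lookahead_ms: float) -> float:
        return frame_ms + lookahead_ms

    print(algorithmic_delay_ms(20.0, 6.5))  # 26.5 ms, a typical default configuration
    print(algorithmic_delay_ms(2.5, 2.5))   # 5.0 ms, a low-delay configuration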
In audio and broadcast engineering, Audio over Ethernet (sometimes AoE—not to be confused with ATA over Ethernet) is the use of an Ethernet-based network to distribute real-time digital audio. AoE replaces bulky snake cables or audio-specific installed low-voltage wiring with standard network structured cabling in a facility. AoE provides a reliable backbone for any audio application, such as for large-scale sound reinforcement in stadiums, airports and convention centers, multiple studios or stages.
An audio engineer (also known as a sound engineer or recording engineer) helps to produce a recording or a live performance, balancing and adjusting sound sources using equalization, dynamics processing and audio effects, mixing, reproduction, and reinforcement of sound. Audio engineers work on the "technical aspect of recording—the placing of microphones, pre-amp knobs, the setting of levels. The physical recording of any project is done by an engineer... the nuts and bolts."
Acoustical knee health assessment has long promised an alternative to clinically available medical imaging tools, but this modality has yet to be adopted in medical practice. The field is currently led by machine learning models processing acoustical features ...
Due to its high parallelism, belief propagation (BP) decoding is amenable to high-throughput applications and thus represents a promising solution for the ultra-high peak data rate required by future communication systems. To bridge the performance gap compared ...
This thesis demonstrates that it is feasible for systems code to expose a latency interface that describes its latency and related side effects for all inputs, just like the code's semantic interface describes its functionality and related side effects. Sem ...