Publication

Quality Assessment of Image Features in Video Coding

Olivier Verscheure
1997
Report or working paper
Abstract

This paper describes an extension of a vision model for video designed to estimate how specific features in moving pictures are rendered as a consequence of compression. The resulting model makes it possible to evaluate distortions on contours and textures in a sequence, as well as to estimate the strength of the blocking effect introduced by block-DCT compression schemes. The model has been used to evaluate feature rendition in MPEG-2 compressed sequences.
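The blocking effect mentioned in the abstract arises because block-DCT codecs quantize each 8x8 block independently, creating visible luminance discontinuities at block boundaries. A minimal sketch (not the paper's vision model; the function and frame below are illustrative) of one way such a measure can be built is to compare the average pixel difference across 8x8 block boundaries with the average difference inside blocks:

```python
# Minimal sketch (not the paper's model): estimate blockiness of a
# grayscale frame by comparing pixel differences across 8x8 block
# boundaries with differences inside blocks. A ratio well above 1
# suggests visible block-DCT artifacts.

def blockiness(frame, block=8):
    """frame: 2-D list of luminance values; returns boundary/interior ratio."""
    h, w = len(frame), len(frame[0])
    boundary, interior = [], []
    for y in range(h):
        for x in range(w - 1):
            d = abs(frame[y][x + 1] - frame[y][x])
            # A column step from x to x+1 crosses a block boundary
            # exactly when x+1 is a multiple of the block size.
            (boundary if (x + 1) % block == 0 else interior).append(d)
    for y in range(h - 1):
        for x in range(w):
            d = abs(frame[y + 1][x] - frame[y][x])
            (boundary if (y + 1) % block == 0 else interior).append(d)
    if not boundary or not interior:
        return 0.0
    eps = 1e-9  # avoid division by zero on perfectly flat frames
    return (sum(boundary) / len(boundary)) / (sum(interior) / len(interior) + eps)

# A frame made of flat 8x8 tiles at different levels is maximally blocky:
tiles = [[(x // 8 + y // 8) * 32 for x in range(16)] for y in range(16)]
print(blockiness(tiles) > 1.0)  # prints True
```

Real blocking metrics additionally weight the boundary differences by perceptual factors such as local luminance and texture masking, which is the kind of refinement a vision model provides.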

Related concepts (32)
Data compression
In information theory, data compression, source coding, or bit-rate reduction is the process of encoding information using fewer bits than the original representation. Any particular compression is either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in lossless compression. Lossy compression reduces bits by removing unnecessary or less important information.
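The lossless case described above can be illustrated with run-length encoding, one of the simplest ways to remove statistical redundancy (the encoder/decoder below is a generic textbook sketch, not tied to any particular standard):

```python
# Minimal illustration of lossless compression: run-length encoding
# removes statistical redundancy (runs of repeated symbols), and
# decoding restores the input exactly, so no information is lost.

def rle_encode(data):
    """Encode a string as (symbol, run_length) pairs."""
    runs = []
    for ch in data:
        if runs and runs[-1][0] == ch:
            runs[-1][1] += 1
        else:
            runs.append([ch, 1])
    return [(c, n) for c, n in runs]

def rle_decode(runs):
    """Expand (symbol, run_length) pairs back into the original string."""
    return "".join(c * n for c, n in runs)

encoded = rle_encode("aaaabbbcca")
print(encoded)                               # [('a', 4), ('b', 3), ('c', 2), ('a', 1)]
print(rle_decode(encoded) == "aaaabbbcca")   # True: perfect reconstruction
```

Lossy schemes, by contrast, deliberately discard information (e.g. through quantization), so the decoder can only approximate the original.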
Video coding format
A video coding format (or sometimes video compression format) is a content representation format for storage or transmission of digital video content (such as in a data file or bitstream). It typically uses a standardized video compression algorithm, most commonly based on discrete cosine transform (DCT) coding and motion compensation. A specific software, firmware, or hardware implementation capable of compression or decompression to/from a specific video coding format is called a video codec.
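The reason DCT coding underlies most video coding formats is energy compaction: for smooth image content, the transform concentrates most of the signal energy in a few low-frequency coefficients, which can then be quantized and coded cheaply. A small self-contained sketch of the (unnormalized) 1-D DCT-II makes this visible:

```python
import math

# Sketch of the transform at the heart of most video coding formats:
# the DCT-II concentrates the energy of smooth signals in a few
# low-frequency coefficients, which quantization can then code cheaply.

def dct2(x):
    """Unnormalized 1-D DCT-II of a list of samples."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi / n * (i + 0.5) * k) for i in range(n))
            for k in range(n)]

samples = [float(i) for i in range(8)]   # a smooth luminance ramp
coeffs = dct2(samples)

# Nearly all of the energy sits in the first two coefficients:
energy = sum(c * c for c in coeffs)
low = sum(c * c for c in coeffs[:2])
print(low / energy > 0.9)  # prints True
```

Codecs apply this transform in two dimensions on pixel blocks (classically 8x8) and pair it with motion compensation to exploit temporal redundancy between frames.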
Compression artifact
A compression artifact (or artefact) is a noticeable distortion of media (including images, audio, and video) caused by the application of lossy compression. Lossy data compression involves discarding some of the media's data so that it becomes small enough to be stored within the desired space or transmitted (streamed) within the available bandwidth (known as the data rate or bit rate). If the compressor cannot store enough data in the compressed version, the result is a loss of quality, or the introduction of artifacts.
Related publications (93)

Subjective performance evaluation of bitrate allocation strategies for MPEG and JPEG Pleno point cloud compression

Touradj Ebrahimi, Michela Testolina, Davi Nachtigall Lazzarotto

The recent rise in interest in point clouds as an imaging modality has motivated standardization groups such as JPEG and MPEG to launch activities aiming at developing compression standards for point clouds. Lossy compression usually introduces visual arti ...
Springer, 2024

Landmarking for Navigational Streaming of Stored High-Dimensional Media

Pascal Frossard, Yuan Yuan

Modern media data such as 360-degree videos and light field (LF) images are typically captured in much higher dimensions than the observers' visual displays. To efficiently browse high-dimensional media, a navigational streaming model is considered: a cli ...
IEEE-Inst Electrical Electronics Engineers Inc, 2022

Learning residual coding for point clouds

Touradj Ebrahimi

Recent advancements in acquisition of three-dimensional models have been increasingly drawing attention to imaging modalities based on the plenoptic representations, such as light fields and point clouds. Since point cloud models can often contain millions ...
SPIE, 2021
Related MOOCs (6)
Digital Signal Processing [retired]
The course provides a comprehensive overview of digital signal processing theory, covering discrete time, Fourier analysis, filter design, sampling, interpolation and quantization; it also includes a ...
Digital Signal Processing I
Basic signal processing concepts, Fourier analysis and filters. This module can be used as a starting point or a basic refresher in elementary DSP
Digital Signal Processing II
Adaptive signal processing, A/D and D/A. This module provides the basic tools for adaptive filtering and a solid mathematical framework for sampling and quantization
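The quantization topic mentioned in the module descriptions above can be illustrated with an ideal uniform quantizer (a generic textbook sketch under assumed full-scale range, not course material): an A/D converter with a given bit depth maps each sample to the nearest of 2**bits levels, so the worst-case error is half a quantization step.

```python
# Illustrative sketch of uniform quantization as covered in DSP courses:
# an ideal mid-rise A/D maps each sample in [-full_scale, full_scale]
# to the nearest of 2**bits levels; the error is at most half a step.

def quantize(x, bits, full_scale=1.0):
    """Uniformly quantize x to 2**bits levels over [-full_scale, full_scale]."""
    step = 2 * full_scale / (2 ** bits)
    level = round(x / step)
    # Clamp to the representable two's-complement-style level range.
    level = max(-(2 ** (bits - 1)), min(2 ** (bits - 1) - 1, level))
    return level * step

x = 0.3
for bits in (2, 4, 8):
    xq = quantize(x, bits)
    print(bits, xq, abs(xq - x))  # error shrinks as bit depth grows
```

Doubling the bit depth squares the number of levels, which is why each extra bit buys roughly 6 dB of signal-to-quantization-noise ratio.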