In most watermarking systems, masking models inherited from data-compression algorithms are used to preserve fidelity by controlling the perceived distortion that results from adding the watermark to the original signal. So far, little attention has been paid to the consequences of using such models on a key design parameter: the robustness of the watermark to intentional attacks. The goal of this paper is to demonstrate that, by considering fidelity alone, key information on the location and strength of the watermark may become available to an attacker, who can exploit such knowledge to build an effective mask attack. First, within a theoretical framework in which analytical expressions for masking and watermarking are derived, a relation is established between the decrease of the detection statistic and the perceptual distortion introduced by the mask attack; this attack is then compared to the Wiener filter attack. Next, using masking models widely employed in watermarking, experiments on both simulated and real data (audio and images) demonstrate how knowledge of the mask allows an attacker to greatly reduce the detection statistic, even at a small perceptual-distortion cost. The critical tradeoff between robustness and distortion is further discussed, and conclusions are drawn on the use of masking models in watermarking.
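As a rough numerical illustration of the attack principle summarized above (the masking model, detector, and parameters below are assumptions made for this sketch, not the paper's actual models or data), consider a toy additive spread-spectrum watermark whose per-sample amplitude follows a perceptual mask, detected by plain correlation. An attacker who knows the mask can spend a fixed distortion budget exactly where the watermark is strong, lowering the detection statistic more than a uniform attack of equal distortion:

```python
# Toy sketch (assumed setup, not the paper's models): additive spread-spectrum
# watermark whose amplitude follows a perceptual mask, a correlation detector,
# and two attacks constrained to the same MSE budget.
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
sigma_x = 10.0

host = rng.normal(0.0, sigma_x, N)           # original signal samples
mask = 0.1 + 3.0 * rng.random(N) ** 4        # toy perceptual mask (peaky)
chips = rng.choice([-1.0, 1.0], size=N)      # pseudo-random watermark chips
y = host + mask * chips                      # embedding: strength set by the mask

def detect(signal):
    """Correlation detection statistic; roughly mean(mask) when unattacked."""
    return float(np.mean(signal * chips))

def mse(attacked):
    """Perceptual-distortion proxy: MSE against the watermarked signal."""
    return float(np.mean((attacked - y) ** 2))

budget = 2.0  # allowed per-sample MSE for both attacks (toy value)

# Mask attack: the attacker knows the mask, so attenuation is concentrated
# where the watermark is strong (proportional to the mask value).
direction = mask * y / np.sqrt(np.mean((mask * y) ** 2))   # unit-MSE shape
mask_attacked = y - np.sqrt(budget) * direction

# Reference attack: uniform attenuation with the same MSE budget.
g = np.sqrt(budget / np.mean(y ** 2))
uniform_attacked = (1.0 - g) * y

print(f"no attack       stat = {detect(y):.3f}")
print(f"mask attack     stat = {detect(mask_attacked):.3f}  (MSE = {mse(mask_attacked):.2f})")
print(f"uniform attack  stat = {detect(uniform_attacked):.3f}  (MSE = {mse(uniform_attacked):.2f})")
```

In this toy setup, the mask-directed attack lowers the correlation statistic more than uniform attenuation of equal distortion, and the gap widens as the mask becomes more concentrated; the paper studies this robustness-distortion tradeoff analytically and on real audio and image data.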