We adopt an innovation-driven framework and investigate the sparse/compressible distributions obtained by linearly measuring or expanding continuous-domain stochastic models. Starting from first principles, we show that all such distributions are necessarily infinitely divisible. This property is satisfied by many distributions used in statistical learning, such as the Gaussian and Laplace laws, and by a wide range of fat-tailed distributions, including Student's t and $\alpha$-stable laws. However, it excludes some popular distributions used in compressed sensing, such as the Bernoulli-Gaussian distribution and distributions that decay like $\exp(-\mathcal{O}(|x|^p))$ for $1 < p < 2$. We further explore the implications of infinite divisibility for these distributions and conclude that tail decay and unimodality are preserved by all linear functionals of the same continuous-domain process. We explain how these results help in identifying suitable variational techniques for statistically solving inverse problems such as denoising.
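As background, the central notion of infinite divisibility admits the following standard characterization (textbook material, not a result specific to this work): a random variable $X$ with characteristic function $\widehat{p}_X$ is infinitely divisible if, for every $n \in \mathbb{N}$, it can be decomposed into a sum of $n$ i.i.d. random variables, or equivalently,
\[
X \stackrel{d}{=} X_{n,1} + \cdots + X_{n,n}
\qquad \Longleftrightarrow \qquad
\widehat{p}_X(\omega) = \bigl(\widehat{p}_{X_n}(\omega)\bigr)^{n}
\]
for some valid characteristic function $\widehat{p}_{X_n}$. For instance, the Laplace law with $\widehat{p}_X(\omega) = \frac{1}{1+\omega^2}$ is infinitely divisible, since $\bigl(\frac{1}{1+\omega^2}\bigr)^{1/n}$ is itself a valid characteristic function (that of a symmetric gamma law) for every $n$.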