# Proximity Operators of Discrete Information Divergences

Giovanni Chierchia, Mireille El Gheche

IEEE, 2018

Journal paper


Abstract

While phi-divergences have been extensively studied in convex analysis, their use in optimization problems often remains challenging. In this regard, one of the main shortcomings of existing methods is that the minimization of phi-divergences is usually performed with respect to one of their arguments, possibly within alternating optimization techniques. In this paper, we overcome this limitation by deriving new closed-form expressions for the proximity operator of such two-variable functions. This makes it possible to employ standard proximal methods for efficiently solving a wide range of convex optimization problems involving phi-divergences. In addition, we show that these proximity operators are useful for computing the epigraphical projection of several functions. The proposed proximal tools are numerically validated in the context of optimal query execution within database management systems, where the problem of selectivity estimation plays a central role. Experiments are carried out on small- to large-scale scenarios.
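The paper's contribution is closed-form expressions, which are not reproduced here. As a rough numerical illustration of what the joint proximity operator of a phi-divergence computes, the sketch below evaluates the prox of the scalar Kullback-Leibler divergence d(x, y) = x log(x/y) − x + y in both arguments by projected gradient descent. The function name, step sizes, and iteration counts are illustrative choices, and this generic scheme stands in for, but is not, the paper's closed-form solution.

```python
import numpy as np

def kl_div(x, y):
    # Scalar Kullback-Leibler phi-divergence d(x, y) = x log(x/y) - x + y.
    return x * np.log(x / y) - x + y

def prox_kl(v1, v2, gamma=1.0, steps=20000, lr=1e-3, eps=1e-8):
    """Numerically evaluate the joint proximity operator
        prox_{gamma d}(v1, v2)
          = argmin_{x,y>0} gamma*d(x,y) + 0.5*((x - v1)^2 + (y - v2)^2)
    by projected gradient descent (illustrative; the paper derives
    closed-form expressions instead)."""
    x, y = max(v1, 1.0), max(v2, 1.0)
    for _ in range(steps):
        gx = gamma * np.log(x / y) + (x - v1)   # partial derivative in x
        gy = gamma * (1.0 - x / y) + (y - v2)   # partial derivative in y
        x = max(x - lr * gx, eps)               # keep iterates positive
        y = max(y - lr * gy, eps)
    return x, y
```

Because d(x, y) >= 0 with equality iff x = y, a point of the form (v, v) with v > 0 is its own prox; for other inputs the returned point should approximately satisfy the first-order optimality condition of the strongly convex objective.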



Related concepts (10)

Closed-form expression

In mathematics, an expression is in closed form if it is formed with constants, variables, and a finite set of basic functions connected by arithmetic operations (+, −, ×, ÷, and integer powers) and function composition.

Optimization problem

In mathematics, computer science and economics, an optimization problem is the problem of finding the best solution from all feasible solutions. Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete.

Mathematical optimization

Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives.

Related publications (3)


In this thesis we study the calculus of variations for differential forms. In the first part we develop the framework of direct methods of the calculus of variations in the context of minimization problems for functionals of one or several differential forms of the type $\int_{\Omega} f(d\omega), \quad \int_{\Omega} f(d\omega_{1}, \ldots, d\omega_{m}) \quad \text{ and } \quad \int_{\Omega} f(d\omega, \delta\omega).$ We introduce the appropriate convexity notions in each case, called \emph{ext. polyconvexity}, \emph{ext. quasiconvexity} and \emph{ext. one convexity} for functionals of the type $\int_{\Omega} f(d\omega)$; \emph{vectorial ext. polyconvexity}, \emph{vectorial ext. quasiconvexity} and \emph{vectorial ext. one convexity} for functionals of the type $\int_{\Omega} f(d\omega_{1}, \ldots, d\omega_{m})$; and \emph{ext-int. polyconvexity}, \emph{ext-int. quasiconvexity} and \emph{ext-int. one convexity} for functionals of the type $\int_{\Omega} f(d\omega, \delta\omega)$. We study their interrelationships and the connections of these convexity notions with the classical notions of polyconvexity, quasiconvexity and rank one convexity in classical vectorial calculus of variations. We also study weak lower semicontinuity and weak continuity of these functionals in appropriate spaces, address coercivity issues, and obtain existence theorems for minimization problems for functionals of one differential form.

In the second part we study different boundary value problems for linear, semilinear and quasilinear Maxwell type operators for differential forms. We study existence and derive interior regularity and $L^{2}$ boundary regularity estimates for the linear Maxwell operator $\delta (A(x)d\omega) = f$ with different boundary conditions, and for the related Hodge Laplacian type system $\delta (A(x)d\omega) + d\delta\omega = f$ with appropriate boundary data. We also deduce, as a corollary, some existence and regularity results for div-curl type first order systems. We also deduce existence results for semilinear boundary value problems $$\delta ( A (x) \, d\omega ) + f( \omega ) = \lambda\omega \ \text{ in } \Omega, \qquad \nu \wedge \omega = 0 \ \text{ on } \partial\Omega.$$ Lastly, we briefly discuss existence results for the quasilinear Maxwell operator $$\delta ( A ( x, d \omega ) ) = f,$$ with different boundary data.
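The direct-method template underlying the first part can be stated schematically. This is the standard formulation, not a result specific to this thesis: coercivity yields a bounded minimizing sequence, hence a weakly convergent subsequence, and weak lower semicontinuity transfers optimality to the weak limit.

```latex
% Direct method (schematic), for I(\omega) = \int_{\Omega} f(d\omega)
% over an admissible class W:
\[
  \omega_k \rightharpoonup \bar{\omega} \ \text{in } W
  \quad\text{and}\quad
  I(\bar{\omega}) \;\le\; \liminf_{k\to\infty} I(\omega_k) \;=\; \inf_{W} I
  \;\Longrightarrow\;
  I(\bar{\omega}) \;=\; \min_{W} I .
\]
```

The thesis's ext., vectorial ext., and ext-int. convexity notions are precisely what guarantees the weak lower semicontinuity step for each class of functionals.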

Razvan Cristescu, Martin Vetterli

Consider a set of correlated sources located at the nodes of a network, and a set of sinks that are the destinations for some of the sources. The minimization of cost functions which are the product of a function of the rate and a function of the path weight is considered, for both the data-gathering scenario, which is relevant in sensor networks, and general traffic matrices, relevant for general networks. The minimization is achieved by jointly optimizing a) the transmission structure, which is shown to consist in general of a superposition of trees, and b) the rate allocation across the source nodes, which is done by Slepian-Wolf coding. The overall minimization can be achieved in two concatenated steps. First, the optimal transmission structure is found, which in general amounts to finding a Steiner tree, and second, the optimal rate allocation is obtained by solving an optimization problem with cost weights determined by the given optimal transmission structure, and with linear constraints given by the Slepian-Wolf rate region. For the case of data gathering, the optimal transmission structure is fully characterized and a closed-form solution for the optimal rate allocation is provided. For the general case of an arbitrary traffic matrix, the problem of finding the optimal transmission structure is NP-complete. For large networks, in some simplified scenarios, the total costs associated with Slepian-Wolf coding and explicit communication (conditional encoding based on explicitly communicated side information) are compared. Finally, the design of decentralized algorithms for the optimal rate allocation is analyzed.
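For the data-gathering case described above, the cost separates: once nodes are ordered by path weight to the sink, each node transmits at the Slepian-Wolf rate conditioned on all closer nodes, and the total cost is the rate-weighted sum of path weights. The sketch below is a toy illustration of that cost computation; the three-node distances and entropy values are made up, not taken from the paper.

```python
def gathering_cost(dists, rates):
    """Total transmission cost sum_i r_i * d_i for data gathering.

    dists[i] -- path weight from node i to the sink
    rates[i] -- rate allocated to node i (bits)
    Nodes are assumed already sorted by increasing path weight.
    """
    return sum(r * d for r, d in zip(rates, dists))

# Hypothetical 3-node example: with Slepian-Wolf coding, node i sends at
# r_i = H(X_i | X_1, ..., X_{i-1}); correlation makes later rates small.
dists = [1.0, 2.0, 3.0]          # path weights to the sink
sw_rates = [2.0, 0.5, 0.25]      # conditional entropies (made-up values)
marginal_rates = [2.0, 2.0, 2.0] # each node codes independently
```

On these numbers the Slepian-Wolf allocation costs 2.0·1 + 0.5·2 + 0.25·3 = 3.75 versus 12.0 for independent coding, which is the kind of saving the joint optimization exploits.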

2005

Nikolaos Arvanitopoulos Darginis

Millions of digital images are captured by imaging devices on a daily basis. The way imaging devices operate follows an integral process from which the information of the original scene needs to be estimated. The estimation is done by inverting the integral process of the imaging device with the use of optimization techniques. This linear inverse problem, the inversion of the integral acquisition process, is at the heart of several image processing applications such as denoising, deblurring, inpainting, and super-resolution. We describe in detail the use of linear inverse problems in these applications, and we review and compare several state-of-the-art optimization algorithms that invert this integral process.

Linear inverse problems are usually very difficult to solve. Therefore, additional prior assumptions need to be introduced to successfully estimate the output signal. Several priors have been suggested in the research literature, with the Total Variation (TV) being one of the most prominent. In this thesis, we review another prior, the l0 pseudo-norm over the gradient domain. This prior allows full control over how many non-zero gradients are retained to approximate prominent structures of the image. We show the superiority of the l0 gradient prior over the TV prior in recovering genuinely piecewise constant signals. The l0 gradient prior has been shown to produce state-of-the-art results in edge-preserving image smoothing. Moreover, this general prior can be applied to several other applications, such as edge extraction, clip-art JPEG artifact removal, non-photorealistic image rendering, detail magnification, and tone mapping. We review and evaluate several state-of-the-art algorithms that solve the optimization problem based on the l0 gradient prior.

Subsequently, we apply the l0 gradient prior to two applications where we show superior results as compared to the current state-of-the-art. The first application is single-image reflection removal. Existing solutions to this problem have shown limited success because of its highly ill-posed nature. We show that the standard l0 gradient prior with a modified data-fidelity term based on the Laplacian operator is able to sufficiently remove unwanted reflections from images in many realistic scenarios. We conduct extensive experiments and show that our method outperforms the state-of-the-art. In the second application, haze removal from visible-NIR image pairs, we propose a novel optimization framework where the prior term penalizes the number of non-zero gradients of the difference between the output and the NIR image. Due to the longer wavelengths of NIR, an image taken in the NIR spectrum suffers significantly less from haze artifacts. Using this prior term, we are able to transfer details from the haze-free NIR image to the final result. We show that our formulation provides state-of-the-art results compared to haze removal methods that use a single image and also to those that are based on visible-NIR image pairs.