
# Julien René Fageot



Related research domains (28)

Stochastic process

In probability theory and related fields, a stochastic or random process is a mathematical object usually defined as a sequence of random variables, where the index of the sequence is often interpreted as time.
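A random walk is one of the simplest examples of such a process, and a minimal sketch makes the definition concrete: the index n plays the role of discrete time, and each S_n is a random variable built from i.i.d. increments.

```python
import numpy as np

# A random walk: the index n acts as discrete time, and S_n is the
# partial sum of n i.i.d. +/-1 steps. The collection {S_n} is the process.
rng = np.random.default_rng(0)
steps = rng.choice([-1, 1], size=1000)   # i.i.d. +/-1 increments
walk = np.cumsum(steps)                  # S_n = X_1 + ... + X_n
```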

White noise

In signal processing, white noise is a random signal having equal intensity at different frequencies, giving it a constant power spectral density. The term is used, with this or similar meanings, in many scientific and technical disciplines.
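A quick way to see the defining property empirically: samples of white noise are uncorrelated, so the sample autocorrelation at any nonzero lag should be close to zero. A minimal sketch with Gaussian white noise:

```python
import numpy as np

# Gaussian white noise: i.i.d. samples, hence equal average power at all
# frequencies and (near-)zero correlation between distinct time points.
rng = np.random.default_rng(42)
x = rng.standard_normal(100_000)

# Sample autocorrelation at lag 1; close to zero for white noise.
lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
```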

Inverse problem

An inverse problem in science is the process of calculating from a set of observations the causal factors that produced them: for example, calculating an image in X-ray computed tomography, source reconstruction in acoustics, or calculating the density of the Earth from measurements of its gravity field.
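A toy linear inverse problem illustrates the pattern: observations y = A x come from a known forward model A (here a hypothetical 3-tap blur matrix), and the task is to recover x. In this well-conditioned noiseless sketch, inverting the forward model suffices; realistic inverse problems are typically ill-posed and require regularization.

```python
import numpy as np

# Toy inverse problem: recover a spike train x from blurred data y = A x.
n = 50
A = np.eye(n) + 0.5 * np.eye(n, k=1) + 0.5 * np.eye(n, k=-1)  # 3-tap blur
x_true = np.zeros(n)
x_true[[10, 30]] = 1.0              # two point sources
y = A @ x_true                      # noiseless observations
x_hat = np.linalg.solve(A, y)       # invert the known forward model
```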

Related publications (32)


Julien René Fageot, Adrian Thibault Etienne Jarret, Matthieu Martin Jean-André Simeoni

We consider the sparse reconstruction of images via the LASSO regularized optimization problem. In many practical applications, the large dimensions of the objects to be reconstructed limit, or even preclude, the use of classical proximal solvers. This is the case, for example, in radio astronomy. In this article, we detail the workings of the Polyatomic Frank-Wolfe algorithm, developed specifically to solve the LASSO problem in these demanding settings. We demonstrate its superiority over proximal methods in high-dimensional settings with Fourier measurements, when solving simulated problems inspired by radio interferometry.
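For context, a minimal sketch of the classical Frank-Wolfe algorithm on the constrained form of LASSO (not the authors' polyatomic variant, which activates several atoms per iteration): each step solves a linear minimization over the l1 ball, whose extreme points are signed scaled canonical vectors, so at most one new atom enters per iteration and the iterates stay sparse. The problem sizes and step schedule below are illustrative assumptions.

```python
import numpy as np

def frank_wolfe_lasso(A, b, tau, n_iter=500):
    """Classical Frank-Wolfe for min ||Ax - b||^2  s.t.  ||x||_1 <= tau.

    The linear minimization oracle over the l1 ball returns a signed,
    scaled canonical vector, so each iteration adds at most one atom.
    """
    x = np.zeros(A.shape[1])
    for k in range(n_iter):
        grad = A.T @ (A @ x - b)
        i = np.argmax(np.abs(grad))         # best atom (LMO over l1 ball)
        s = np.zeros_like(x)
        s[i] = -tau * np.sign(grad[i])      # extreme point of the ball
        gamma = 2.0 / (k + 2)               # standard step-size schedule
        x = (1 - gamma) * x + gamma * s
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
x0 = np.zeros(100)
x0[[3, 47]] = [2.0, -1.5]                   # sparse ground truth
b = A @ x0
x = frank_wolfe_lasso(A, b, tau=3.5)        # tau = ||x0||_1, so x0 is feasible
```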

2022 · Alessia Caponera, Julien René Fageot, Victor Panaretos, Matthieu Martin Jean-André Simeoni

We propose nonparametric estimators for the second-order central moments of possibly anisotropic spherical random fields, within a functional data analysis context. We consider a measurement framework where each random field among an identically distributed collection of spherical random fields is sampled at a few random directions, possibly subject to measurement error. The collection of random fields could be i.i.d. or serially dependent. Though similar setups have already been explored for random functions defined on the unit interval, the nonparametric estimators proposed in the literature often rely on local polynomials, which do not readily extend to the (product) spherical setting. We therefore formulate our estimation procedure as a variational problem involving a generalized Tikhonov regularization term. The latter favours smooth covariance/autocovariance functions, where the smoothness is specified by means of suitable Sobolev-like pseudo-differential operators. Using the machinery of reproducing kernel Hilbert spaces, we establish representer theorems that fully characterize the form of our estimators. We determine their uniform rates of convergence as the number of random fields diverges, both for the dense (increasing number of spatial samples) and sparse (bounded number of spatial samples) regimes. We moreover demonstrate the computational feasibility and practical merits of our estimation procedure in a simulation setting, assuming a fixed number of samples per random field. Our numerical estimation procedure leverages the sparsity and second-order Kronecker structure of our setup to reduce the computational and memory requirements by approximately three orders of magnitude compared to what a naive implementation would require.
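The generalized Tikhonov idea can be sketched in one dimension, well short of the paper's RKHS machinery on the sphere: a quadratic roughness penalty built from a differential-type operator (here the discrete second difference, standing in for the Sobolev-like operators above) favours smooth estimates and admits a closed-form solution. All sizes and the penalty weight below are illustrative assumptions.

```python
import numpy as np

def tikhonov_smooth(y, lam):
    """Generalized Tikhonov smoothing:
        argmin_x ||x - y||^2 + lam * ||D x||^2,
    with D the discrete second-difference operator. The closed form is
        x = (I + lam * D^T D)^{-1} y,
    a quadratic analogue of the variational problems in the abstract.
    """
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second differences
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(200)  # noisy samples
x = tikhonov_smooth(y, lam=50.0)          # smooth estimate of the trend
```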

2022 · Thomas Jean Debarre, Quentin Alain Denoyelle, Julien René Fageot, Michaël Unser

We study the problem of one-dimensional regression of data points with total-variation (TV) regularization (in the sense of measures) on the second derivative, which is known to promote piecewise-linear solutions with few knots. While there are efficient algorithms for determining such adaptive splines, the difficulty with TV regularization is that the solution is generally non-unique, an aspect that is often ignored in practice. In this paper, we present a systematic analysis that results in a complete description of the solution set with a clear distinction between the cases where the solution is unique and those, much more frequent, where it is not. For the latter scenario, we identify the sparsest solutions, i.e., those with the minimum number of knots, and we derive a formula to compute the minimum number of knots based solely on the data points. To achieve this, we first consider the problem of exact interpolation which leads to an easier theoretical analysis. Next, we relax the exact interpolation requirement to a regression setting, and we consider a penalized optimization problem with a strictly convex data-fidelity cost function. We show that the underlying penalized problem can be reformulated as a constrained problem, and thus that all our previous results still apply. Based on our theoretical analysis, we propose a simple and fast two-step algorithm, agnostic to uniqueness, to reach a sparsest solution of this penalized problem.
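The link between the regularizer and piecewise-linear solutions has a simple discrete illustration: for samples of a piecewise-linear function, the second finite difference vanishes everywhere except at the knots, so an l1 penalty on it (the discrete analogue of TV on the second derivative) effectively counts knots. The signal below is a hypothetical example with a single knot.

```python
import numpy as np

# Hypothetical piecewise-linear signal with one knot at t = 5:
# slope +1 up to the knot, slope -1 after it.
t = np.arange(10, dtype=float)
x = np.where(t <= 5, t, 10 - t)

d2 = np.diff(x, n=2)          # discrete second derivative
knots = np.flatnonzero(d2)    # slope changes only at the knot
```

Penalizing `np.abs(d2).sum()` therefore promotes fits where this vector is sparse, i.e., piecewise-linear reconstructions with few knots.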