
Shayan Aziznejad



Courses taught by this person

No results

Related research domains (7)

Learning

Learning is the process of acquiring new understanding, knowledge, behaviors, skills, values, attitudes, and preferences. The ability to learn is possessed by humans, animals, and some machines; …

Neural network

A neural network can refer to a neural circuit of biological neurons (sometimes also called a biological neural network), or a network of artificial neurons or nodes in the case of an artificial neural network. …

Inverse problem

An inverse problem in science is the process of calculating, from a set of observations, the causal factors that produced them: for example, calculating an image in X-ray computed tomography, or source reconstruction …

Related publications (16)


People doing similar research (117)

Related units (2)

In this thesis, we reveal that supervised learning and inverse problems share similar mathematical foundations. Consequently, we are able to present a unified variational view of these tasks, which we formulate as optimization problems posed over infinite-dimensional Banach spaces. Throughout the thesis, we study this class of optimization problems from a mathematical perspective. We start by specifying adequate search spaces and loss functionals that are derived from applications. Next, we identify conditions that guarantee the existence of a solution and we provide a (finite) parametric form for the optimal solution. Finally, we utilize these theoretical characterizations to derive numerical solvers.

The thesis is divided into five parts. The first part is devoted to the theory of splines, a large class of continuous-domain models that are optimal in many of the studied frameworks. Our contributions in this part include the introduction of the notion of multi-splines, their theoretical properties, and shortest-support generators. In the second part, we study a broad class of optimization problems over Banach spaces and we prove a general representer theorem that characterizes their solution sets. The third and fourth parts of the thesis demonstrate the applicability of our general framework to supervised learning and inverse problems, respectively. Specifically, we derive various learning schemes from our variational framework that inherit a certain notion of "sparsity", and we establish the connection between our theory and deep neural networks, which are state-of-the-art in supervised learning. Moreover, we deploy our general theory to study continuous-domain inverse problems with multicomponent models, which can be applied to various signal and image processing tasks, in particular, curve fitting. Finally, we revisit the notions of splines and sparsity in the last part of the thesis, this time from a stochastic perspective.
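The unified variational view summarized in the abstract can be sketched as a single generic problem. The notation below is an illustrative reconstruction assumed for exposition, not the thesis's exact statement:

```latex
% Generic variational problem over an infinite-dimensional Banach space
% \mathcal{X} (assumed notation: E = data-fidelity loss, \boldsymbol{\nu} =
% linear measurement/sampling operator, \mathrm{L} = regularization
% operator, \|\cdot\| = sparsity-promoting TV-type norm, \lambda > 0):
\min_{f \in \mathcal{X}} \; E\bigl(\mathbf{y}, \boldsymbol{\nu}(f)\bigr)
  \;+\; \lambda \, \|\mathrm{L} f\|

% Under suitable conditions, a representer theorem yields a finite
% parametric (spline-type) form for an optimal solution:
f^{\star}(\cdot) \;=\; \sum_{k=1}^{K} a_k \, \rho_{\mathrm{L}}(\cdot - x_k)
  \;+\; p(\cdot),
% where \rho_{\mathrm{L}} is a Green's function of \mathrm{L}, p lies in
% the (finite-dimensional) null space of \mathrm{L}, and the number K of
% atoms is bounded by the number of measurements.
```

This is how such abstracts' "finite parametric form" is typically made concrete: the infinite-dimensional search reduces to optimizing finitely many coefficients $a_k$ and knot locations $x_k$.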

Shayan Aziznejad, Thomas Jean Debarre, Michaël Unser

We present a novel framework for the reconstruction of 1D composite signals assumed to be a mixture of two additive components, one sparse and the other smooth, given a finite number of linear measurements. We formulate the reconstruction problem as a continuous-domain regularized inverse problem with multiple penalties. We prove that these penalties induce reconstructed signals that indeed take the desired form of the sum of a sparse and a smooth component. We then discretize this problem using Riesz bases, which yields a discrete problem that can be solved by standard algorithms. Our discretization is exact in the sense that we are solving the continuous-domain problem over the search space specified by our bases without any discretization error. We propose a complete algorithmic pipeline and demonstrate its feasibility on simulated data.
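The multi-penalty reconstruction problem described above can be sketched as follows; the operators and norms here are illustrative placeholders, assumed for exposition rather than taken from the paper:

```latex
% Composite 1D signal f = f_1 + f_2, with f_1 sparse and f_2 smooth.
% Assumed notation: \boldsymbol{\nu} = linear measurement operator giving
% the data \mathbf{y}, \mathrm{L}_1, \mathrm{L}_2 = regularization
% operators, \lambda_1, \lambda_2 > 0 = tradeoff parameters.
\min_{f_1,\, f_2} \;
  \bigl\| \mathbf{y} - \boldsymbol{\nu}(f_1 + f_2) \bigr\|_2^{2}
  \;+\; \lambda_1 \,\| \mathrm{L}_1 f_1 \|_{\mathcal{M}}
  \;+\; \lambda_2 \,\| \mathrm{L}_2 f_2 \|_{L_2}^{2}
% The total-variation-type norm \|\cdot\|_{\mathcal{M}} promotes sparsity
% in the first component, while the quadratic L_2 penalty promotes
% smoothness in the second.
```

The Riesz-basis discretization mentioned in the abstract then expresses each component in a fixed basis, so the continuous-domain problem becomes a finite-dimensional one with no discretization error over that search space.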


We develop a novel 2D functional learning framework that employs a sparsity-promoting regularization based on second-order derivatives. Motivated by the nature of the regularizer, we restrict the search space to the span of piecewise-linear box splines shifted on a 2D lattice. Our formulation of the infinite-dimensional problem on this search space allows us to recast it exactly as a finite-dimensional one that can be solved using standard methods in convex optimization. Since our search space is composed of continuous and piecewise-linear functions, our work presents itself as an alternative to training networks that deploy rectified linear units, which also construct models in this family. The advantages of our method are fourfold: the ability to enforce sparsity, favoring models with fewer piecewise-linear regions; the use of a rotation-, scale-, and translation-invariant regularization; a single hyperparameter that controls the complexity of the model; and a clear model interpretability that provides a straightforward relation between the parameters and the overall learned function. We validate our framework in various experimental setups and compare it with neural networks.
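The learning problem outlined above can be sketched schematically. The symbols below (the box-spline generator $B$, the lattice $\Lambda$, the loss $\ell$, and the second-order regularizer $R_2$) are illustrative assumptions, not the paper's exact notation:

```latex
% 2D functional learning over a lattice-shifted box-spline search space.
% Assumed notation: data pairs (\mathbf{x}_n, y_n), n = 1, ..., N;
% B = piecewise-linear box spline; \Lambda = 2D lattice; \ell = pointwise
% loss; R_2 = sparsity-promoting regularizer built from second-order
% derivatives (rotation-, scale-, and translation-invariant).
\min_{f \,\in\, \operatorname{span}\{B(\cdot - \mathbf{k})\}_{\mathbf{k} \in \Lambda}}
  \; \sum_{n=1}^{N} \ell\bigl(f(\mathbf{x}_n),\, y_n\bigr)
  \;+\; \lambda \, R_2(f)
% Because f is a finite linear combination of box splines, this is
% exactly a finite-dimensional convex program in the expansion
% coefficients; the single hyperparameter \lambda controls the number
% of piecewise-linear regions of the learned function.
```

The resulting models are continuous and piecewise-linear, which is why the abstract positions the method as an interpretable alternative to ReLU networks.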