
# Anastasios Kyrillidis



Related units (3)

People doing similar research (99)

Courses taught by this person

No results

Related research domains (25)

In mathematics and computer science, an algorithm (ˈælɡərɪðəm) is a finite sequence of rigorous instructions, typically used to solve a class of specific problems or to perform a computation.

Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets (or, equivalently, maximizing concave functions over convex sets).

In computer science, the computational complexity or simply complexity of an algorithm is the amount of resources required to run it. Particular focus is given to computation time (generally measured by the number of needed elementary operations) and memory storage requirements.

Related publications (23)


Volkan Cevher, Anastasios Kyrillidis, Quoc Tran Dinh

We propose a new proximal, path-following framework for a class of constrained convex problems whose objective may be non-smooth. We consider settings where the non-smooth part is endowed with a proximity operator, and the constraint set is equipped with a self-concordant barrier. Our main contribution is a new re-parametrization of the optimality condition of the barrier problem that allows us to process the objective function with its proximal operator within a new path-following scheme. In particular, our approach relies on two main ideas. First, we re-parameterize the optimality condition as an auxiliary problem, such that a "good" initial point is available. Second, we combine the proximal operator of the objective with path-following ideas to design a single-phase, proximal, path-following algorithm. Our method has several advantages. First, it allows handling non-smooth objectives via proximal operators, which avoids lifting the problem dimension via slack variables and additional constraints. Second, it consists of only a *single phase*, as compared to the two-phase algorithm in [43]. We show how to overcome this difficulty in the proximal setting and prove that our scheme has the same O(√ν log(1/ε)) worst-case iteration complexity as standard approaches [30, 33], where ν is the barrier parameter and ε is the desired accuracy, but our method can handle non-smooth objectives. Finally, our framework allows errors in the calculation of proximal-Newton search directions, without sacrificing the worst-case iteration complexity. We demonstrate the merits of our algorithm via three numerical examples, where proximal operators play a key role in improving performance over off-the-shelf interior-point solvers.
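The path-following idea in this abstract can be caricatured on a toy problem. The sketch below is an illustrative assumption, not the authors' algorithm: it uses proximal *gradient* inner steps rather than the proximal-Newton directions the paper analyzes. It minimizes ||x||_1 over a polytope {x : Ax < b} by running proximal gradient on the barrier-regularized objective while shrinking the barrier weight 1/t along the path:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximity operator of tau * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_path_following(A, b, x0, t0=1.0, mu=2.0,
                        n_outer=15, n_inner=100, step=1e-2):
    """Toy sketch: minimize ||x||_1 subject to A x < b.

    Outer loop follows the central path by shrinking the barrier
    weight 1/t; inner loop runs proximal gradient on
        ||x||_1 + (1/t) * phi(x),   phi(x) = -sum(log(b - A x)),
    handling the nonsmooth ||.||_1 term through its prox.
    """
    x, t = x0.astype(float).copy(), t0
    for _ in range(n_outer):
        for _ in range(n_inner):
            slack = b - A @ x
            grad = (1.0 / t) * (A.T @ (1.0 / slack))   # barrier gradient
            x_new = soft_threshold(x - step * grad, step)
            if np.any(b - A @ x_new <= 0):  # keep iterates strictly feasible
                break
            x = x_new
        t *= mu  # move along the path: barrier weight 1/t -> 0
    return x

# Example: minimize |x| subject to -1 < x < 1 (optimum at x = 0)
A = np.array([[1.0], [-1.0]])
b = np.array([1.0, 1.0])
x_star = prox_path_following(A, b, x0=np.array([0.5]))
```

The single-phase flavor shows up in the warm start: each outer iteration reuses the previous iterate as the initial point for the next barrier weight, rather than running a separate initialization phase.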

Volkan Cevher, Ya-Ping Hsieh, Yu-Chun Kao, Rabeeh Karimi Mahabadi, Anastasios Kyrillidis

We study convex optimization problems that feature low-rank matrix solutions. In such scenarios, non-convex methods offer significant advantages over convex methods due to their lower space complexity as well as faster convergence, and many of these methods feature rigorous approximation guarantees. Non-convex algorithms are simple to analyze and implement, as they perform Euclidean gradient descent on matrix factors. In contrast, this paper derives a non-Euclidean optimization framework in the non-convex setting that takes nonlinear gradient steps on the factors. We prove convergence rates to the global minimum under appropriate assumptions. We provide numerical evidence with Fourier Ptychography and FastText applications using real data, showing that our approach can significantly enhance solution quality.
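For intuition on the factored, non-convex approach, here is a minimal sketch of the standard Euclidean variant that the abstract contrasts against (not the paper's non-Euclidean method): gradient descent directly on a factor U with X = U Uᵀ, which keeps only n·r numbers in memory instead of n²:

```python
import numpy as np

def factored_gd(M, r, steps=2000, lr=0.01, seed=0):
    """Gradient descent on the factor U of X = U @ U.T for
    f(X) = 0.5 * ||X - M||_F^2 restricted to PSD rank-r X.
    By the chain rule through X = U U^T, grad_U = 2 (U U^T - M) U.
    """
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    U = 0.1 * rng.standard_normal((n, r))  # small random initialization
    for _ in range(steps):
        R = U @ U.T - M            # residual in the full matrix space
        U = U - lr * 2.0 * R @ U   # Euclidean gradient step on the factor
    return U

# Toy instance: recover a rank-2 PSD matrix from its factored parametrization
rng = np.random.default_rng(1)
B = rng.standard_normal((10, 2))
M = B @ B.T
U = factored_gd(M, r=2)
rel_err = np.linalg.norm(U @ U.T - M) / np.linalg.norm(M)
```

The problem in U is non-convex (the factorization is only unique up to rotation), which is exactly why the convergence guarantees mentioned in the abstract require care.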

2018

We propose a new proximal path-following framework for a class of constrained convex problems. We consider settings where the nonlinear (and possibly nonsmooth) objective part is endowed with a proximity operator, and the constraint set is equipped with a self-concordant barrier. Our approach relies on the following two main ideas. First, we reparameterize the optimality condition as an auxiliary problem, such that a good initial point is available; by doing so, a family of alternative paths toward the optimum is generated. Second, we combine the proximal operator with path-following ideas to design a single-phase, proximal path-following algorithm. We prove that our algorithm has the same worst-case iteration complexity bounds as standard path-following methods from the literature but does not require an initial phase. Our framework also allows inexactness in the evaluation of proximal Newton directions, without sacrificing the worst-case iteration complexity. We demonstrate the merits of our algorithm via three numerical examples, where proximal operators play a key role.
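The "proximity operator" that both abstracts assume of the objective is defined as prox_g(v) = argmin_x g(x) + ½||x − v||². As a quick sanity check, the snippet below compares the closed-form prox of the ℓ1 norm (soft thresholding, a standard textbook example, not specific to these papers) against brute-force minimization of the defining objective on a fine grid:

```python
import numpy as np

def prox_l1(v, tau):
    """Closed-form prox of tau * ||.||_1: soft thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_numeric(v, tau, grid):
    """Brute-force prox: argmin_x tau*|x| + 0.5*(x - v)^2 over a grid."""
    return grid[np.argmin(tau * np.abs(grid) + 0.5 * (grid - v) ** 2)]

points = [-2.0, -0.3, 0.0, 0.7, 2.5]
grid = np.linspace(-3.0, 3.0, 60001)  # grid spacing 1e-4
closed = prox_l1(np.array(points), 0.5)
numeric = np.array([prox_numeric(v, 0.5, grid) for v in points])
# closed form: shrink toward zero by tau, set to zero inside [-tau, tau]
```

Inputs with |v| ≤ tau map to exactly zero, which is why this operator produces sparse iterates when embedded in the path-following schemes above.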