Modern optimization is tasked with handling applications of increasingly large scale, chiefly due to the massive amounts of widely available data and the ever-growing reach of Machine Learning. Consequently, this area of research is under steady pressure t ...
Distributed learning is key to enabling the training of modern large-scale machine learning models by parallelising the learning process. Collaborative learning is essential for learning from privacy-sensitive data that is distributed across various ...
We develop new tools to study landscapes in nonconvex optimization. Given one optimization problem, we pair it with another by smoothly parametrizing the domain. This is either for practical purposes (e.g., to use smooth optimization algorithms with good g ...
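To make the pairing idea concrete, here is a minimal toy sketch (my own illustration, not the paper's construction): a constrained problem over the nonnegative orthant is paired with an unconstrained smooth one by parametrizing the domain through the elementwise square map.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    """Original objective on the constrained domain x >= 0."""
    return np.sum((x - np.array([1.0, -2.0, 3.0]))**2)

def g(y):
    """Pulled-back objective on the unconstrained domain: g(y) = f(y * y)."""
    return f(y * y)  # x = y*y smoothly parametrizes {x >= 0}

# Solve the smooth unconstrained problem, then map back to the original domain.
res = minimize(g, x0=np.ones(3), method="BFGS")
x_star = res.x**2
print(x_star)  # x >= 0 holds by construction; here x_star ~ (1, 0, 3)
```

The landscape question the abstract alludes to is visible even here: minimizers of g correspond to minimizers of f on the orthant, but the pulled-back problem can acquire additional critical points (e.g., y = 0 in a coordinate), which is exactly the kind of correspondence such tools are meant to analyze.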
This paper develops a fast algorithm for computing the equilibrium assignment with the perturbed utility route choice (PURC) model. This allows the significant advantages of the PURC model to be exploited in large-scale applications without compromise. We form ...
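As a rough illustration of the flavor of such a formulation (an assumed toy form, not the paper's actual model or algorithm): perturbed-utility assignment on a small network can be posed as a convex program in link flows, here with linear link costs, an entropic perturbation, and flow-conservation constraints.

```python
import cvxpy as cp
import numpy as np

# Toy 4-node, 4-link network; all numbers are illustrative assumptions.
A = np.array([[ 1,  1,  0,  0],   # origin
              [-1,  0,  1,  0],   # intermediate node 1
              [ 0, -1,  0,  1],   # intermediate node 2
              [ 0,  0, -1, -1]])  # destination
b = np.array([1.0, 0.0, 0.0, -1.0])  # one unit of demand, origin -> destination
c = np.array([1.0, 2.0, 2.0, 1.0])   # link travel costs

x = cp.Variable(4, nonneg=True)      # link flows
# Linear cost plus an entropic perturbation (cp.entr(x) = -x*log(x)),
# one common convex choice of perturbation function.
objective = cp.Minimize(c @ x - cp.sum(cp.entr(x)))
prob = cp.Problem(objective, [A @ x == b])
prob.solve()
print(x.value)  # equilibrium link flows under this perturbation
```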
Quantum support vector machines employ quantum circuits to define the kernel function. It has been shown that this approach offers a provable exponential speedup compared to any known classical algorithm for certain data sets. The training of such models c ...
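The kernel idea can be sketched classically (a toy simulation, not the paper's circuit): the quantum kernel is the fidelity |⟨φ(x)|φ(y)⟩|² between feature states prepared by a circuit, and the resulting Gram matrix plugs into an ordinary SVM.

```python
import numpy as np
from sklearn.svm import SVC

def feature_state(x):
    """Toy single-qubit feature map |phi(x)> = R_y(x)|0>, simulated directly."""
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def quantum_kernel(X1, X2):
    """Gram matrix of state fidelities |<phi(a)|phi(b)>|^2 over all pairs."""
    K = np.zeros((len(X1), len(X2)))
    for i, a in enumerate(X1):
        for j, b in enumerate(X2):
            K[i, j] = np.abs(feature_state(a[0]) @ feature_state(b[0]))**2
    return K

X = np.array([[0.1], [0.4], [2.8], [3.0]])
y = np.array([0, 0, 1, 1])
clf = SVC(kernel="precomputed").fit(quantum_kernel(X, X), y)
print(clf.predict(quantum_kernel(X, X)))  # training-set predictions
```

On real hardware the fidelity entries would be estimated from circuit measurements rather than computed exactly, which is where the training cost discussed in the abstract arises.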
Multiple tensor-times-matrix (Multi-TTM) is a key computation in algorithms for computing and operating with the Tucker tensor decomposition, which is frequently used in multidimensional data analysis. We establish communication lower bounds that determine ...
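For readers unfamiliar with the computation: Multi-TTM contracts a tensor with a matrix along each of several modes, as when forming the Tucker core. A minimal sketch (sizes are arbitrary assumptions):

```python
import numpy as np

# 3-way tensor and one factor matrix per mode.
X = np.random.rand(30, 40, 50)
U1, U2, U3 = (np.random.rand(n, r) for n, r in [(30, 5), (40, 6), (50, 7)])

# One einsum expresses all three tensor-times-matrix products at once,
# i.e. G = X x_1 U1^T x_2 U2^T x_3 U3^T; the order in which modes are
# contracted affects cost and communication, not the result.
G = np.einsum("ijk,ia,jb,kc->abc", X, U1, U2, U3)
print(G.shape)  # (5, 6, 7)
```

The freedom in contraction order and data layout is precisely what the communication lower bounds constrain.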
We present a combination technique based on mixed differences of both spatial approximations and quadrature formulae for the stochastic variables to solve efficiently a class of optimal control problems (OCPs) constrained by random partial differential equ ...
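To make the mixed-difference construction concrete (schematic form with assumed notation): writing $u_{i,j}$ for the approximation computed with spatial discretization level $i$ and quadrature level $j$, the first-order differences in each direction and the combination-technique approximation read

```latex
\Delta^{\mathrm{s}}_{i} u_{i,j} = u_{i,j} - u_{i-1,j}, \qquad
\Delta^{\mathrm{q}}_{j} u_{i,j} = u_{i,j} - u_{i,j-1},
\qquad u_{-1,j} = u_{i,-1} = 0,
\]
\[
u^{\mathrm{CT}}_{L} \;=\; \sum_{i+j \le L} \Delta^{\mathrm{s}}_{i}\,\Delta^{\mathrm{q}}_{j}\, u_{i,j}.
```

Only combinations with $i + j \le L$ enter the sum, so the expensive fine-fine solves are avoided while the mixed differences retain the leading accuracy, which is the source of the claimed efficiency.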
Orthogonal group synchronization is the problem of estimating $n$ elements $Z_1, \ldots, Z_n$ from the $r \times r$ orthogonal group given some relative measurements $R_{ij} \approx Z_i Z_j^{-1}$. The least-squares formulation is nonconvex. To avoid its local minim ...
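One standard way to sidestep the local minima of the least-squares formulation (a generic spectral approach, not necessarily the method of this paper): stack the measurements into a block matrix, take its top $r$ eigenvectors, and round each $r \times r$ block back onto the orthogonal group.

```python
import numpy as np

# Spectral relaxation sketch for O(r) synchronization; sizes are assumptions.
n, r = 10, 3
Z = [np.linalg.qr(np.random.randn(r, r))[0] for _ in range(n)]  # ground truth
R = np.block([[Z[i] @ Z[j].T for j in range(n)] for i in range(n)])  # clean R_ij

# Leading r-dimensional eigenspace of the symmetric measurement matrix.
w, V = np.linalg.eigh(R)
U = V[:, -r:]  # (n*r) x r matrix of top eigenvectors

# Round each r x r block to the nearest orthogonal matrix via its SVD.
Z_hat = []
for i in range(n):
    B = U[i * r:(i + 1) * r, :]
    P, _, Qt = np.linalg.svd(B)
    Z_hat.append(P @ Qt)

# Up to one global rotation, the estimates reproduce the relative measurements.
print(np.linalg.norm(Z_hat[0] @ Z_hat[1].T - Z[0] @ Z[1].T))  # ~ 0 (noiseless)
```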
A method for optimizing at least one of a geometry, an implantation procedure, or a stimulation protocol of one or more electrodes for an electrical stimulation of a target structure in a nervous system of a living being by a computer device, the method ...
While momentum-based accelerated variants of stochastic gradient descent (SGD) are widely used when training machine learning models, there is little theoretical understanding of the generalization error of such methods. In this work, we first show that th ...
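For reference, the momentum-based (heavy-ball) SGD iteration typically studied in this setting reads (standard form; the paper's exact parametrization may differ):

```latex
v_{t+1} = \mu\, v_t - \eta\, \nabla f(x_t; \xi_t), \qquad
x_{t+1} = x_t + v_{t+1},
```

where $\mu \in [0, 1)$ is the momentum parameter, $\eta$ the step size, and $\xi_t$ the minibatch sampled at step $t$; setting $\mu = 0$ recovers plain SGD.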