Related publications (43)

High-Dimensional Kernel Methods under Covariate Shift: Data-Dependent Implicit Regularization

Volkan Cevher, Fanghui Liu

This paper studies kernel ridge regression in high dimensions under covariate shifts and analyzes the role of importance re-weighting. We first derive the asymptotic expansion of high dimensional kernels under covariate shifts. By a bias-variance decomposi ...
2024

A new variable shape parameter strategy for RBF approximation using neural networks

Jan Sickmann Hesthaven

The choice of the shape parameter strongly affects the behaviour of radial basis function (RBF) approximations, as it needs to be selected to balance between the ill-conditioning of the interpolation matrix and high accuracy. In this paper, we demonstrate ho ...
PERGAMON-ELSEVIER SCIENCE LTD, 2023

Implicit Distance Functions: Learning and Applications in Control

Aude Billard, Mikhail Koptev, Nadia Barbara Figueroa Fernandez

This paper describes a novel approach to learn an implicit, differentiable distance function for arbitrary configurations of a robotic manipulator used for reactive control. By exploiting GPU processing, we efficiently query the learned collision represent ...
2022

Multiscale Representation Learning of Graph Data With Node Affinity

Pascal Frossard, Chenglin Li, Xing Gao

Graph neural networks have emerged as a popular and powerful tool for learning hierarchical representation of graph data. In complement to graph convolution operators, graph pooling is crucial for extracting hierarchical representation of data in graph neu ...
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2021

Eigendecomposition-Free Training of Deep Networks for Linear Least-Square Problems

Pascal Fua, Mathieu Salzmann, Zheng Dang, Kwang Moo Yi, Fei Wang, Yinlin Hu

Many classical Computer Vision problems, such as essential matrix computation and pose estimation from 3D to 2D correspondences, can be tackled by solving a linear least-square problem, which can be done by finding the eigenvector corresponding to the smal ...
IEEE COMPUTER SOC, 2021

Scaling up Kernel Ridge Regression via Locality Sensitive Hashing

Mikhail Kapralov, Amir Zandieh, Navid Nouri

Random binning features, introduced in the seminal paper of Rahimi and Recht '07, are an efficient method for approximating a kernel matrix using locality sensitive hashing. Random binning features provide a very simple and efficient way to approximate the ...
ADDISON-WESLEY PUBL CO, 2020
