Person

Vidit Vidit

This person is no longer with EPFL

Related publications (4)

Improving Object Detection under Domain Shifts

Vidit Vidit

Object detection plays a critical role in various computer vision applications, encompassing domains like autonomous vehicles, object tracking, and scene understanding. These applications rely on detectors that generate bounding boxes around known object c ...
EPFL, 2023

CLIP the Gap: A Single Domain Generalization Approach for Object Detection

Mathieu Salzmann, Martin Pierre Engilberge, Vidit Vidit

Single Domain Generalization (SDG) tackles the problem of training a model on a single source domain so that it generalizes to any unseen target domain. While this has been well studied for image classification, the literature on SDG object detection remai ...
Los Alamitos, 2023

Learning Transformations To Reduce the Geometric Shift in Object Detection

Mathieu Salzmann, Martin Pierre Engilberge, Vidit Vidit

The performance of modern object detectors drops when the test distribution differs from the training one. Most of the methods that address this focus on object appearance changes caused by, e.g., different illumination conditions, or gaps between syntheti ...
Los Alamitos, 2023

Attention-based domain adaptation for single-stage detectors

Mathieu Salzmann, Vidit Vidit

While domain adaptation has been used to improve the performance of object detectors when the training and test data follow different distributions, previous work has mostly focused on two-stage detectors. This is because their use of region proposals make ...
Springer, 2022
