Publication

SafeAMC: Adversarial training for robust modulation classification models

Related publications (47)

Towards Robust Vision Transformer

Shaokai Ye, Yuan He

Recent advances on Vision Transformer (ViT) and its improved variants have shown that self-attention-based networks surpass traditional Convolutional Neural Networks (CNNs) in most vision tasks. However, existing ViTs focus on the standard accuracy and com ...
IEEE COMPUTER SOC, 2022

Safety-compliant Generative Adversarial Networks for Human Trajectory Forecasting

Alexandre Massoud Alahi, Parth Ashit Kothari

Human trajectory forecasting in crowds presents the challenges of modelling social interactions and outputting collision-free multimodal distribution. Following the success of Social Generative Adversarial Networks (SGAN), recent works propose various GAN- ...
2022

On the benefits of robust models in modulation recognition

Pascal Frossard, Javier Alejandro Maroto Morales

Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications. However, in other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations, which consist of impercep ...
2021
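
Both the SafeAMC publication above and this related work concern adversarial perturbations of convolutional modulation classifiers. As a purely illustrative sketch (not code from either paper), the Fast Gradient Sign Method below shows one standard way such an imperceptibly small perturbation is crafted and then reused for adversarial training; the PyTorch model, the epsilon budget, and the input/label tensors are assumptions.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.01):
    # Craft an L-infinity-bounded additive perturbation that increases
    # the cross-entropy loss of the classifier on the batch (x, y).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed gradient step of size eps (Fast Gradient Sign Method).
    return (x_adv + eps * x_adv.grad.sign()).detach()

# Adversarial training then mixes such perturbed samples into each batch,
# e.g. loss = F.cross_entropy(model(fgsm_perturb(model, x, y)), y).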

Predicting optical transmission through complex scattering media from reflection patterns with deep neural networks

Demetri Psaltis, Eirini Kakkava

Deep neural networks (DNNs) are used to reconstruct transmission speckle intensity patterns from the respective reflection speckle intensity patterns generated by illuminated parafilm layers. The dependence of the reconstruction accuracy on the thickness o ...
2021

Watermarking-based Defense against Adversarial Attacks on Deep Neural Networks

James Richard Larus, Xiaoting Li, Dinghao Wu, Junjun Zhang

The vulnerability of deep neural networks to adversarial attacks has posed significant threats to real-world applications, especially security-critical ones. Given a well-trained model, slight modifications to the input samples can cause drastic changes in ...
IEEE, 2021

Biologically plausible unsupervised learning in shallow and deep neural networks

Bernd Albert Illing

The way our brain learns to disentangle complex signals into unambiguous concepts is fascinating but remains largely unknown. There is evidence, however, that hierarchical neural representations play a key role in the cortex. This thesis investigates biolo ...
EPFL, 2021

Robustness via Cross-Domain Ensembles

Shuqing Teresa Yeo, Amir Roshan Zamir, Oguzhan Fatih Kar

We present a method for making neural network predictions robust to shifts from the training data distribution. The proposed method is based on making predictions via a diverse set of cues (called 'middle domains') and ensembling them into one strong predi ...
IEEE, 2021

Generating Smooth Pose Sequences for Diverse Human Motion Prediction

Mathieu Salzmann, Wei Mao

Recent progress in stochastic motion prediction, i.e., predicting multiple possible future human motions given a single past pose sequence, has led to producing truly diverse future motions and even providing control over the motion of some body parts. How ...
IEEE, 2021

Supplementary Material - AL2: Progressive Activation Loss for Learning General Representations in Classification Neural Networks

Sabine Süsstrunk, Majed El Helou, Frederike Dümbgen

In this supplementary material, we present the details of the neural network architecture and training settings used in all our experiments. This holds for all experiments presented in the main paper as well as in this supplementary material. We also show ...
2020

Disentangling feature and lazy training in deep neural networks

Matthieu Wyart, Mario Geiger, Stefano Spigler, Arthur Jacot

Two distinct limits for deep learning have been derived as the network width h -> infinity, depending on how the weights of the last layer scale with h. In the neural tangent Kernel (NTK) limit, the dynamics becomes linear in the weights and is described b ...
IOP PUBLISHING LTD, 2020
