Adversarial Attacks and Defences

Published: 09 Oct 2015 Category: deep_learning


Intriguing properties of neural networks

Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images

Explaining and Harnessing Adversarial Examples
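
This paper introduces the fast gradient sign method (FGSM). A minimal sketch on a toy linear softmax classifier — the weights `W`, `b` and all helper names here are illustrative, not from the paper:

```python
import numpy as np

# Toy linear softmax classifier; W, b and every name below are illustrative.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))   # 3 classes, 8 input features
b = np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss(x, y):
    """Cross-entropy loss of label y."""
    return -np.log(softmax(W @ x + b)[y])

def grad_loss_x(x, y):
    """Gradient of the loss w.r.t. the input x (closed form for a linear model)."""
    p = softmax(W @ x + b)
    p[y] -= 1.0               # dL/dlogits = softmax - onehot(y)
    return W.T @ p            # chain rule through logits = W @ x + b

def fgsm(x, y, eps=0.25):
    """FGSM: one step of size eps along the sign of the input gradient."""
    return x + eps * np.sign(grad_loss_x(x, y))

x = rng.normal(size=8)
y = int(np.argmax(W @ x + b))   # attack the model's own prediction
x_adv = fgsm(x, y)              # perturbation bounded by eps per coordinate
```

The sign operation makes the perturbation an L∞-bounded step, which is why a small `eps` can still move every input coordinate.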

Distributional Smoothing with Virtual Adversarial Training

Confusing Deep Convolution Networks by Relabelling

Exploring the Space of Adversarial Images

Learning with a Strong Adversary

Adversarial examples in the physical world

DeepFool: a simple and accurate method to fool deep neural networks

Adversarial Autoencoders

Understanding Adversarial Training: Increasing Local Stability of Neural Nets through Robust Optimization

(Deep Learning’s Deep Flaws)’s Deep Flaws (By Zachary Chase Lipton)

Deep Learning Adversarial Examples – Clarifying Misconceptions

Adversarial Machines: Fooling A.Is (and turn everyone into a Manga)

How to trick a neural network into thinking a panda is a vulture

Assessing Threat of Adversarial Examples on Deep Neural Networks

Safety Verification of Deep Neural Networks

Adversarial Machine Learning at Scale

Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
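
Feature squeezing compares the model's prediction on an input with its prediction on a "squeezed" (e.g. bit-depth-reduced) copy; a large disagreement flags the input as likely adversarial. A minimal sketch, assuming values in [0, 1] — the `predict` model below is a toy stand-in, not the paper's setup:

```python
import numpy as np

def squeeze_bit_depth(x, bits=4):
    """Reduce bit depth: round each [0, 1] value onto 2**bits - 1 evenly spaced levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def squeezing_score(predict, x, bits=4):
    """L1 distance between predictions on the raw and squeezed inputs.
    Inputs whose score exceeds a threshold (picked on clean data) are rejected."""
    return float(np.abs(predict(x) - predict(squeeze_bit_depth(x, bits))).sum())

# Toy stand-in "model": softmax over a random linear map (illustrative only).
rng = np.random.default_rng(0)
M = rng.normal(size=(3, 16))
def predict(x):
    z = M @ x
    e = np.exp(z - z.max())
    return e / e.sum()

x = rng.uniform(size=16)
score = squeezing_score(predict, x)   # near 0 for clean, benign inputs
```

An already-squeezed input is a fixed point of the squeezer, so its score is zero by construction.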

Parseval Networks: Improving Robustness to Adversarial Examples

Towards Deep Learning Models Resistant to Adversarial Attacks
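
Madry et al. use projected gradient descent (PGD) — iterated FGSM with projection back onto an L∞ ball — as the canonical first-order attack for adversarial training. A minimal sketch on a toy linear softmax model; names and hyperparameters are illustrative, and the paper's random restarts are omitted here:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 8))   # toy 3-class linear model (illustrative)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss(x, y):
    return -np.log(softmax(W @ x)[y])

def grad_loss_x(x, y):
    p = softmax(W @ x)
    p[y] -= 1.0
    return W.T @ p

def pgd(x0, y, eps=0.3, alpha=0.05, steps=20):
    """Iterated signed-gradient ascent, clipped back into the
    L-infinity ball of radius eps around x0 after every step."""
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(grad_loss_x(x, y))
        x = np.clip(x, x0 - eps, x0 + eps)   # projection onto the ball
    return x

x0 = rng.normal(size=8)
y = int(np.argmax(W @ x0))
x_adv = pgd(x0, y)
```

Adversarial training then minimizes the loss at `x_adv` instead of `x0`, which is what makes the resulting model robust to this class of first-order attacks.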

NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles

One pixel attack for fooling deep neural networks
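
The paper searches for a single-pixel perturbation with differential evolution; as a much simpler illustration of the same black-box idea, here is a random-search variant on a toy two-class linear model (all names, the model, and the search range are made up for this sketch):

```python
import numpy as np

def one_pixel_attack(predict, x, y, trials=200, seed=0):
    """Random search: change a single coordinate to a random value and
    return the first perturbed input the model no longer labels y."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x_try = x.copy()
        x_try[rng.integers(len(x))] = rng.uniform(-3.0, 3.0)
        if int(np.argmax(predict(x_try))) != y:
            return x_try
    return None   # search budget exhausted

# Toy two-class linear model (illustrative only).
W = np.array([[1.0, 0.2], [-1.0, -0.2]])
predict = lambda x: W @ x
x = np.array([0.5, 0.5])
y = int(np.argmax(predict(x)))
x_adv = one_pixel_attack(predict, x, y)   # differs from x in one coordinate
```

Only model outputs are queried, never gradients, which is what makes this (like the paper's attack) black-box.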

Enhanced Attacks on Defensively Distilled Deep Neural Networks

Adversarial Attacks Beyond the Image Space

On the Robustness of Semantic Segmentation Models to Adversarial Attacks

Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser

A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations

Training Ensembles to Detect Adversarial Examples

Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models

Where Classification Fails, Interpretation Rises

Query-Efficient Black-box Adversarial Examples

Adversarial Examples: Attacks and Defenses for Deep Learning

Wolf in Sheep’s Clothing - The Downscaling Attack Against Deep Learning Applications

Note on Attacking Object Detectors with Adversarial Stickers

Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning

Awesome Adversarial Examples for Deep Learning

Exploring the Space of Black-box Attacks on Deep Neural Networks

Adversarial Patch

Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition

Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey

Spatially transformed adversarial examples

Generating adversarial examples with adversarial networks

Adversarial Spheres

LaVAN: Localized and Visible Adversarial Noise

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

Adversarial Examples that Fool both Human and Computer Vision

On the Suitability of Lp-norms for Creating and Preventing Adversarial Examples

Protecting JPEG Images Against Adversarial Attacks

Sparse Adversarial Perturbations for Videos

DeepDefense: Training Deep Neural Networks with Improved Robustness

Improving Transferability of Adversarial Examples with Input Diversity

Adversarial Attacks and Defences Competition

Semantic Adversarial Examples

Generating Natural Adversarial Examples

An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks

  • intro: Northeastern University & MIT-IBM Watson AI Lab & IBM Research AI
  • keywords: Deep Neural Networks; Adversarial Attacks; ADMM (Alternating Direction Method of Multipliers)

On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses

VectorDefense: Vectorization as a Defense to Adversarial Examples

On the Limitation of MagNet Defense against L1-based Adversarial Examples

Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models

Siamese networks for generating adversarial examples

Generative Adversarial Examples

Detecting Adversarial Examples via Key-based Network

Adversarial Attacks on Variational Autoencoders

Non-Negative Networks Against Adversarial Attacks

Gradient Similarity: An Explainable Approach to Detect Adversarial Attacks against Deep Learning

Adversarial Reprogramming of Neural Networks

Defend Deep Neural Networks Against Adversarial Examples via Fixed and Dynamic Quantized Activation Functions

Motivating the Rules of the Game for Adversarial Example Research

Defense Against Adversarial Attacks with Saak Transform

Are adversarial examples inevitable?

Open Set Adversarial Examples

Towards Query Efficient Black-box Attacks: An Input-free Perspective

  • intro: 11th ACM Workshop on Artificial Intelligence and Security (AISec) with the 25th ACM Conference on Computer and Communications Security (CCS)

SparseFool: a few pixels make a big difference

Lightweight Lipschitz Margin Training for Certified Defense against Adversarial Examples

Adversarial Defense by Stratified Convolutional Sparse Coding

Learning Transferable Adversarial Examples via Ghost Networks

Feature Denoising for Improving Adversarial Robustness

Defense-VAE: A Fast and Accurate Defense against Adversarial Attacks

Curls & Whey: Boosting Black-Box Adversarial Attacks

Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks

Black-box Adversarial Attacks on Video Recognition Models

Interpreting Adversarial Examples with Attributes

On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method

Deep Neural Rejection against Adversarial Examples

Unrestricted Adversarial Attacks for Semantic Segmentation