Here are 45 public repositories matching the fgsm topic.
Advbox is a toolbox for generating adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, and TensorFlow. Advbox can also benchmark the robustness of machine learning models, and it provides a command-line tool for generating adversarial examples with zero coding.
Updated
Aug 8, 2022
Jupyter Notebook
A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.
Updated
Jun 22, 2022
Python
Implementation of Papers on Adversarial Examples
Updated
Jan 19, 2019
Python
Detection by Attack: Detecting Adversarial Samples by Undercover Attack
Updated
Feb 13, 2021
Python
TensorFlow implementation of adversarial attacks on Capsule Networks
Updated
Nov 9, 2017
Python
PyTorch library for adversarial attack and training
Updated
Jan 16, 2019
Python
SHIELD: Fast, Practical Defense and Vaccination for Deep Learning using JPEG Compression
Updated
Jun 21, 2022
Python
This repository contains implementations of three adversarial example attack methods (FGSM, I-FGSM, and MI-FGSM) and of defensive distillation as a defense against all three attacks, using the MNIST dataset.
Updated
Dec 17, 2020
Jupyter Notebook
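The attacks named in the entry above all perturb the input in the direction of the loss gradient's sign. As a minimal sketch (not taken from any repository listed here), FGSM on a hand-differentiated logistic model looks like this; the model, weights, and epsilon are illustrative assumptions:

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """One-step FGSM against a logistic model p = sigmoid(w.x + b).

    Uses the closed-form gradient of the cross-entropy loss w.r.t. x:
    dL/dx = (p - y) * w.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # model confidence for class 1
    grad_x = (p - y) * w                      # loss gradient w.r.t. the input
    return x + eps * np.sign(grad_x)          # signed step of size eps

# Toy point correctly classified as class 1 (logit = 1.5 > 0).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.5)
# The perturbation moves each coordinate by eps against the weight sign,
# driving the logit toward the decision boundary.
```

I-FGSM repeats this step with a smaller step size, and MI-FGSM additionally accumulates a momentum term over the gradients between steps.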
The first real-world adversarial attack on the MTCNN face detection system to date
Updated
May 27, 2021
Python
Implementation of gradient-based adversarial attacks (FGSM, MI-FGSM, PGD)
Updated
Jul 8, 2021
Python
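PGD, the strongest of the three attacks named above, is essentially iterated FGSM with a projection back into an epsilon-ball around the clean input. A toy sketch on the same hand-differentiated logistic model (all model details are illustrative assumptions, not code from the repository):

```python
import numpy as np

def pgd(x, y, w, b, eps, alpha, steps):
    """PGD against a logistic model p = sigmoid(w.x + b): repeated small
    signed-gradient steps, each followed by projection (clipping) into
    the L-infinity ball of radius eps around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        grad_x = (p - y) * w                      # closed-form dL/dx
        x_adv = x_adv + alpha * np.sign(grad_x)   # small FGSM-style step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the ball
    return x_adv

w, b = np.array([2.0, -1.0]), 0.0
x = np.array([1.0, 0.5])
x_adv = pgd(x, y=1.0, w=w, b=b, eps=0.3, alpha=0.1, steps=10)
```

With enough steps the perturbation saturates at the ball's boundary; MI-FGSM differs only in smoothing the gradient with a momentum buffer before taking the sign.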
Implementation of adversarial training under the fast gradient sign method (FGSM), projected gradient descent (PGD), and CW attacks, using Wide-ResNet-28-10 on CIFAR-10. The sample code remains reusable when the model or dataset is changed.
Updated
May 15, 2020
Python
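Adversarial training, as in the entry above, alternates an inner step that crafts adversarial examples against the current weights with an outer step that descends the loss on those examples. A self-contained toy sketch with logistic regression and FGSM swapped in for the repository's Wide-ResNet and attack suite (data, epsilon, and learning rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable 2D data: label 1 iff x0 > x1.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > X[:, 1]).astype(float)

w, b, eps, lr = np.zeros(2), 0.0, 0.1, 0.5
for _ in range(200):
    # Inner step: FGSM adversarial examples against the current weights.
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    X_adv = X + eps * np.sign((p - y)[:, None] * w)
    # Outer step: gradient descent on the adversarial batch.
    p_adv = 1.0 / (1.0 + np.exp(-(X_adv @ w + b)))
    w -= lr * (X_adv.T @ (p_adv - y)) / len(y)
    b -= lr * np.mean(p_adv - y)

# Clean accuracy after adversarial training.
acc = np.mean(((1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5) == (y == 1))
```

The reusability claim in the description corresponds to the fact that only the inner attack and the model's forward/backward pass depend on the architecture; the outer loop is generic.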
Reproduce multiple adversarial attack methods
Updated
May 5, 2020
Python
Paddle-Adversarial-Toolbox (PAT) is a Python library for Deep Learning Security based on PaddlePaddle.
Updated
Nov 13, 2021
Python
Implementation of Kervolutional Neural Networks (CVPR 2019), compared with a CNN under white-box attack
Updated
May 20, 2019
Jupyter Notebook
The rise and fall of six dynasties pass like a dream; the fleeting months startle the passing of time. Even through cold years and a long road, this resolve shall not be taken away.
Updated
Mar 15, 2020
Python
using adversarial attacks to confuse deep-chicken-terminator 🛡️ 🐔
Updated
Sep 13, 2020
Jupyter Notebook
Adversarial Attack on 3D U-Net model: Brain Tumour Segmentation.
Updated
Jun 16, 2020
Jupyter Notebook
Fast Gradient Sign Method for Adversarial Attack (PyTorch)
Updated
Mar 12, 2019
Python
FGSM attack PyTorch module for semantic segmentation networks, with examples provided for DeepLab V3.
Updated
Feb 21, 2021
Python