break-fix

We perform adversarial attacks on a CNN trained on medical images. Then, we apply countermeasures to make the model robust while maintaining high accuracy.

The project is part of the course "Ethics in Artificial Intelligence" at the University of Bologna.

Contributors:

  • Alessandro Folloni
  • Daniele Napolitano
  • Marco Solime

Breaking

We perform the following attacks (code in the sandbox.ipynb notebook; a minimal FGSM/PGD sketch is shown after the list):

  • Data poisoning
  • Adversarial examples:
    • FGSM: Fast Gradient Sign Method
    • PGD: Projected Gradient Descent
  • Biasing the loss:
    • NoisyLoss
    • FoolingLoss
  • Manipulating gradient direction
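
For illustration, here is a minimal PyTorch sketch of the FGSM and PGD attacks. The function names, epsilon values, and the cross-entropy loss are assumptions for this example and are not taken from the sandbox.ipynb notebook.

import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    # FGSM: take one step along the sign of the input gradient of the loss
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0, 1).detach()

def pgd_attack(model, images, labels, epsilon=0.03, alpha=0.01, steps=10):
    # PGD: iterate small FGSM steps, projecting back into the epsilon-ball
    original = images.clone().detach()
    adv = original.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        loss.backward()
        with torch.no_grad():
            adv = adv + alpha * adv.grad.sign()
            adv = original + (adv - original).clamp(-epsilon, epsilon)
            adv = adv.clamp(0, 1)
    return adv.detach()

Both functions return perturbed copies of the input batch, which can be fed back to the model to measure the drop in accuracy.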

Fixing

We apply the following countermeasures (a short adversarial training sketch follows the list):

  • Data Augmentation: makes the model robust and invariant to small changes in the input
  • Adversarial Training: counters adversarial examples by training on perturbed inputs
  • Pattern Detector: a model trained to detect the possible presence of adversarial examples, intended for the early stages of the production pipeline
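
As a rough illustration of the adversarial training step, the sketch below mixes clean and FGSM-perturbed batches in the loss. It reuses the hypothetical fgsm_attack helper from the sketch above and does not reflect the exact training loop used in the repository.

import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03, adv_weight=0.5):
    # One epoch of adversarial training: combine clean and adversarial losses
    model.train()
    for images, labels in loader:
        adv_images = fgsm_attack(model, images, labels, epsilon)  # helper from the attack sketch
        optimizer.zero_grad()
        clean_loss = F.cross_entropy(model(images), labels)
        adv_loss = F.cross_entropy(model(adv_images), labels)
        # Weighting the two terms keeps clean accuracy while hardening the model
        loss = (1 - adv_weight) * clean_loss + adv_weight * adv_loss
        loss.backward()
        optimizer.step()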

Requirements

To run the code, install the libraries listed in the requirements.txt file by running the following command:

pip install -r requirements.txt
