Python library for adversarial machine learning, attacks and defences for neural networks, logistic regression, decision trees, SVM, gradient boosted trees, Gaussian processes and more with multiple framework support

Making use of the Adversarial Robustness Toolbox (ART) from IBM

This is a library dedicated to adversarial machine learning. Its purpose is to allow rapid crafting and analysis of attack and defence methods for machine learning models. The Adversarial Robustness Toolbox provides implementations of many state-of-the-art methods for attacking and defending classifiers.

The library is still under development. Feedback, bug reports and extensions are highly appreciated. Get in touch with us on Slack (invite here)!

Documentation of ART: https://adversarial-robustness-toolbox.readthedocs.io

Supported Machine Learning Libraries and Applications

Implemented Attacks, Defences, Detections, Metrics, Certifications and Verifications

Evasion Attacks:

Poisoning Attacks:

Defences:

Robustness metrics, certifications and verifications:

Detection of adversarial samples:

  • Basic detector based on inputs
  • Detector trained on the activations of a specific layer
  • Detector based on Fast Generalized Subset Scan (Speakman et al., 2018)
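As a rough, framework-agnostic illustration of the first idea above, a "basic detector based on inputs" can be sketched as a binary classifier trained to separate clean inputs from adversarial ones. This sketch uses only scikit-learn with synthetic stand-in data; ART ships its own detector classes, so consult the documentation for the actual API.

```python
# Illustrative sketch only: a binary classifier acting as an input-based
# detector. The "adversarial" inputs here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
x_clean = rng.rand(200, 8)
# Stand-in "adversarial" inputs: clean inputs plus a noticeable perturbation
x_adv = np.clip(x_clean + rng.normal(scale=0.3, size=x_clean.shape), 0.0, 1.0)

# Label 0 = clean, 1 = adversarial, then fit the detector on both sets
x_train = np.vstack([x_clean, x_adv])
y_train = np.concatenate([np.zeros(200), np.ones(200)])
detector = LogisticRegression(max_iter=1000).fit(x_train, y_train)

flags = detector.predict(x_adv)  # 1 where an input is flagged as adversarial
```

In practice, the activation-based variant listed above works the same way, except the detector is fit on a hidden layer's activations rather than on the raw inputs.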

Detection of poisoning attacks:

Setup

Installation with pip

The toolbox is designed and tested to run with Python 3. ART can be installed from the PyPI repository using pip:

pip install adversarial-robustness-toolbox

Manual installation

The most recent version of ART can be downloaded or cloned from this repository:

git clone https://github.com/IBM/adversarial-robustness-toolbox

Install ART with the following command from inside the cloned project folder:

pip install .

ART provides unit tests that can be run with the following command:

bash run_tests.sh

Get Started with ART

Examples of using ART can be found in examples, and examples/README.md provides an overview and additional information. The folder contains a minimal example for each machine learning framework. All examples can be run with the following command:

python examples/<example_name>.py

More detailed examples and tutorials are located in notebooks, and notebooks/README.md provides an overview and more information.

Contributing

Adding new features, improving documentation, fixing bugs, or writing tutorials are all examples of helpful contributions. Furthermore, if you are publishing a new attack or defense, we strongly encourage you to add it to the Adversarial Robustness 360 Toolbox so that others may evaluate it fairly in their own work.

Bug fixes can be initiated through GitHub pull requests. When making code contributions to the Adversarial Robustness 360 Toolbox, we ask that you follow the PEP 8 coding standard and that you provide unit tests for the new features.

This project uses the Developer Certificate of Origin (DCO). Be sure to sign off your commits using the -s flag or by adding Signed-off-by: Name <Email> to the commit message.

Example

git commit -s -m 'Add new feature'

Further implementations of the CLEVER metric (not in the original repo)

The folder Examples with CLEVER contains example models and implementations of CLEVER metric evaluation.

Note that because CLEVER assumes a white-box setting, the radius R of the ball of minimum perturbations and the gradient norm need to be known and specified when computing CLEVER with ART. However, I have not figured out the derivation of these values, so placeholder values are used for these parameters.

The examples iterate through 10 batches, each sampling 50 repetitions to estimate CLEVER, and use the first 10 test inputs to evaluate the CLEVER score.

Citing ART

If you use ART for research, please consider citing the following reference paper:

@article{art2018,
    title = {Adversarial Robustness Toolbox v1.0.1},
    author = {Nicolae, Maria-Irina and Sinn, Mathieu and Tran, Minh~Ngoc and Buesser, Beat and Rawat, Ambrish and Wistuba, Martin and Zantedeschi, Valentina and Baracaldo, Nathalie and Chen, Bryant and Ludwig, Heiko and Molloy, Ian and Edwards, Ben},
    journal = {CoRR},
    volume = {1807.01069},
    year = {2018},
    url = {https://arxiv.org/pdf/1807.01069}
}
