A Python package for automated selection of an explanation method for CNNs.
Reports for the demo datasets can be found here: CXR dataset, Imagenette dataset, Kandinsky Patterns dataset.
Prerequisites: Python 3.9 installed.
To install all dependencies from the pyproject.toml file using Poetry, run:
```bash
git clone https://github.com/MI2DataLab/autoexplainer.git
cd autoexplainer
poetry config virtualenvs.in-project true
poetry shell  # if you want to create a dedicated .venv inside autoexplainer
poetry install
```
To use the created environment, activate it with `poetry shell`.
To install dependencies the regular way, you can use pip:
```bash
git clone https://github.com/MI2DataLab/autoexplainer.git
cd autoexplainer
pip install .
```
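To quickly check that the installation succeeded, you can try the same import that is used in the usage example later in this README (a minimal sanity check, not part of the original instructions):

```python
# Minimal smoke test: this import should succeed after installation.
from autoexplainer import AutoExplainer

print(AutoExplainer)
```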
After pulling changes that modify dependencies, you can update them with:
```bash
poetry update
```
To use PyTorch with a GPU, the installed torch CUDA version must match the CUDA driver on your machine.
For example, to install torch 1.12.1 with CUDA 11.6 support, run:
```bash
pip uninstall torch torchvision
pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
```
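After installing, you can confirm which CUDA build of torch is active and whether the GPU is visible; a small sanity-check sketch using standard PyTorch calls:

```python
import torch

# The installed torch build and the CUDA version it was compiled against.
print(torch.__version__)          # e.g. 1.12.1+cu116
print(torch.version.cuda)         # e.g. 11.6
# True only if the CUDA build matches a working driver on this machine.
print(torch.cuda.is_available())
```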
The to_pdf function creates both .tex and .pdf versions of the report, so additional tools have to be installed to render the PDF report properly:
- install LaTeX, e.g. MiKTeX, and add it to PATH
- in MiKTeX, enable automatic installation of missing packages
- add the pylatex dependency with pip:

```bash
pip install pylatex
```
See the sample notebook in development/notebooks/auto_explainer_usage.ipynb.
This is the most time-consuming part. First, all explanation methods are evaluated on the provided data. Then, the best method is selected by aggregating the raw results. Finally, a report is generated.
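A minimal sketch of preparing the `model`, `data`, and `targets` arguments used below; the concrete model and random tensors are illustrative assumptions, not part of the package:

```python
import torch
import torchvision

# Any CNN classifier in eval mode; an (untrained) ResNet-18 is used here only
# to keep the sketch self-contained. Replace it with your own trained model.
model = torchvision.models.resnet18(weights=None).eval()

# A small batch of images and their class labels; replace with real data.
data = torch.randn(8, 3, 224, 224)
targets = torch.randint(0, 1000, (8,))
```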
```python
from autoexplainer import AutoExplainer

auto_explainer = AutoExplainer(model, data, targets)

# compute all metric values and see non-aggregated results (very long)
auto_explainer.evaluate()
auto_explainer.raw_results

# aggregate metric scores and see aggregated results (almost instant)
auto_explainer.aggregate()
auto_explainer.first_aggregation_results   # single value per (method, metric) pair
auto_explainer.second_aggregation_results  # single value per method

# produce an HTML report
auto_explainer.to_html('examples/example_report.html')
```
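The to_pdf function mentioned above can presumably be called analogously to get the LaTeX/PDF version of the report; the path and the assumption that it mirrors to_html are illustrative, not verified here:

```python
# Assumption: to_pdf takes an output path like to_html does and writes both
# .tex and .pdf files (requires the LaTeX setup described earlier).
auto_explainer.to_pdf('examples/example_report.pdf')
```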
Later, the selected explanation method can be extracted and used right away to explain more data.
```python
best_explanation = auto_explainer.get_best_explanation()
new_attributions = best_explanation.explain(model, data, targets)
```
This best_explanation object contains all the information about the selected method, including its name, the parameters used, and the attributions computed for the data used during explanation method selection.
```python
best_explanation.name
best_explanation.parameters
best_explanation.attributions
```
It is also possible to calculate metric values for the selected method on other data, using the same metrics as during the selection process. Values can be either raw (one value per data point) or aggregated (a single value, as in auto_explainer.second_aggregation_results).
```python
raw_metric_scores = best_explanation.evaluate(model, data, targets, new_attributions)
aggregated_metric_scores = best_explanation.evaluate(model, data, targets, new_attributions, aggregate=True)
```
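The returned scores can then be inspected directly; the sketch below assumes they behave like dictionaries keyed by metric name, which is an assumption about the return type rather than documented behaviour:

```python
# Hypothetical inspection of the scores returned above.
for metric_name, values in raw_metric_scores.items():
    print(metric_name, values)   # one value per data point

for metric_name, value in aggregated_metric_scores.items():
    print(metric_name, value)    # a single aggregated value
```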
To run the tests (with `-n auto`, tests run in parallel):
```bash
pytest tests -n auto
```
or one test at a time:
```bash
pytest tests
```
or a selected test file at a time:
```bash
pytest tests/test_autoexplainer.py
```
or to print output during tests:
```bash
pytest -s tests
```
To check formatting, linting, and other checks before committing, run:
```bash
pre-commit run --all-files
```
To generate the documentation, run:
```bash
mkdocs build
```
The documentation will be generated in the site/ directory.
To generate the documentation and serve it locally, run:
```bash
mkdocs serve
```
The documentation will be available at http://127.0.0.1:8000/.
If you didn't activate the Poetry shell, precede the commands above with `poetry run`.
This repository contains all code used for our bachelor thesis written at the Faculty of Mathematics and Information Science, Warsaw University of Technology.
This project was generated using the wolt-python-package-cookiecutter template.