https://github.com/allenai/scruples
- explains anecdotes with LIME
- generates a report with statistics on the explanation features
| File | Features |
|---|---|
| anecdotes.py | generates LIME explanations for a random subset of the anecdotes and saves pickled explanations plus an HTML report and plots |
| anecdotes_utils.py | interface functions that call the scruples REST API and read the anecdotes data from the /data/ directory |
| anecdotes_shap.py | explanations with SHAP (Shapley values); currently not functional |
| anecdotes_LIME.py | explanations with LIME; functional |
| anecdotes_anchors.py | explanations with Anchors; functional but slow |
- observation: the model considers connotated verbs and personal pronouns
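The perturbation-and-surrogate idea behind the LIME explanations can be sketched in a few lines. This is a simplified illustration of the technique, not the lime library's actual implementation; the classifier below is a toy stand-in for the scruples REST API, and all names are illustrative:

```python
# Minimal sketch of the LIME idea: mask words at random, score each
# perturbed text with the model, and measure how much the score drops
# when a given word is removed. The drop serves as that word's
# importance weight.
import random

def toy_classifier(text):
    # Toy stand-in for the model: probability the author is in the wrong.
    return 0.9 if "stole" in text else 0.1

def lime_style_weights(text, n_samples=200, seed=0):
    rng = random.Random(seed)
    words = text.split()
    base = toy_classifier(text)
    totals = [0.0] * len(words)
    counts = [0] * len(words)
    for _ in range(n_samples):
        mask = [rng.random() < 0.5 for _ in words]  # True = keep the word
        perturbed = " ".join(w for w, keep in zip(words, mask) if keep)
        score = toy_classifier(perturbed)
        for i, keep in enumerate(mask):
            if not keep:  # attribute the score drop to each masked word
                totals[i] += base - score
                counts[i] += 1
    return {w: totals[i] / max(counts[i], 1) for i, w in enumerate(words)}

weights = lime_style_weights("I stole my roommates food")
print(max(weights, key=weights.get))  # the connotated verb gets the largest weight
```

The real lime library fits a weighted linear surrogate instead of this simple averaging, but the intuition (importance = effect of removing a word) is the same.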
Setup (local server):
- clone XAI_scruples
- create a venv from requirements.txt
- clone and install scruples in the venv from its requirements.txt: https://github.com/allenai/scruples#setup
- download the scruples model & config: https://github.com/allenai/scruples/blob/master/docs/demos.md#norms
- move the model files (.json & .bin) to ./models/anecdotes or ./models/dilemmas
- update the path and config in start_server.sh (for GPU; for CPU use start_server_CPU.sh), then run ./start_server.sh
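The steps above roughly correspond to this shell session. The XAI_scruples clone URL and the model file names are placeholders, and the exact scruples install command should be taken from its setup docs:

```shell
# Sketch of the local-server setup steps above (paths and URLs are
# placeholders; adjust to your checkout).
git clone <XAI_scruples-repo-url> XAI_scruples
cd XAI_scruples
python -m venv venv && . venv/bin/activate
pip install -r requirements.txt
# clone and install scruples inside the venv (see its setup docs)
git clone https://github.com/allenai/scruples.git
pip install -r scruples/requirements.txt
pip install ./scruples
# download the model & config per the scruples demos doc, then:
mkdir -p models/anecdotes
mv <config>.json <model>.bin models/anecdotes/
# edit the paths in start_server.sh (GPU) or start_server_CPU.sh, then:
./start_server.sh
```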
Alternative setup (Docker):
- clone XAI_scruples
- create a venv from requirements.txt
- clone scruples: https://github.com/allenai/scruples
- build and run the Docker container from the Dockerfile:
  docker build --tag scruples .
  docker run --name scruples_instance --publish 5050:8000 scruples
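Once the container is running, a quick smoke test against the published host port (5050, from the docker run command above) confirms the server is reachable. No specific API route is assumed here; check the scruples demo docs for the actual endpoints:

```shell
# Smoke test: the server inside the container listens on 8000,
# published to the host as 5050 by the docker run command above.
curl -i http://localhost:5050/
```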
- download the scruples data: https://github.com/allenai/scruples#data
- move the data to XAI_scruples/data/anecdotes and/or XAI_scruples/data/dilemmas
- configure the parameters in anecdotes.py
- run anecdotes.py
- the HTML report and PNG plots are created in the root directory
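The artifacts the run produces (pickled explanations plus an HTML report) can be sketched like this; the field names and file names are illustrative, not the ones the script actually uses:

```python
# Sketch of the output artifacts: pickle the raw explanation objects
# for later inspection, and render a minimal HTML report listing the
# top feature per anecdote. All names here are illustrative.
import os
import pickle
import tempfile

explanations = [
    {"id": "anecdote_0", "label": "WRONG", "weights": {"stole": 0.8, "food": 0.1}},
]

outdir = tempfile.mkdtemp()

# pickled explanations for later re-analysis
with open(os.path.join(outdir, "explanations.pkl"), "wb") as f:
    pickle.dump(explanations, f)

# minimal HTML report
rows = "".join(
    "<tr><td>{}</td><td>{}</td><td>{}</td></tr>".format(
        e["id"], e["label"], max(e["weights"], key=e["weights"].get))
    for e in explanations
)
html = ("<table><tr><th>id</th><th>label</th><th>top feature</th></tr>"
        + rows + "</table>")
with open(os.path.join(outdir, "report.html"), "w") as f:
    f.write(html)

print(sorted(os.listdir(outdir)))
```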
Runtimes (CPU: i7, GPU: GTX 1080):
- single explanation with 10 perturbations: 14 s (CPU)
- single explanation with 5000 perturbations: 6532 s (CPU), 348 s (GPU)
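Simple arithmetic on the timings above gives the per-perturbation cost and the CPU/GPU speedup at 5000 perturbations:

```python
# Back-of-the-envelope figures derived from the reported timings.
cpu_total, gpu_total, n = 6532.0, 348.0, 5000
cpu_per_sample = cpu_total / n   # ~1.31 s per perturbation on CPU
gpu_per_sample = gpu_total / n   # ~0.07 s per perturbation on GPU
speedup = cpu_total / gpu_total  # GPU is roughly 19x faster
print(round(cpu_per_sample, 2), round(gpu_per_sample, 3), round(speedup, 1))
```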
- Anchors: runtime is extremely high, ~3 hours per anecdote on a GPU
- SHAP is not compatible with embedding layers: shap/shap#595