added more papers #1

* [Lucid](https://github.com/tensorflow/lucid) (activation maximization, heatmaps, Tensorflow)

# Surveys

* Methods for Interpreting and Understanding Deep Neural Networks. _Montavon et al. 2017_ [pdf](https://arxiv.org/pdf/1706.07979.pdf)
* Visualizations of Deep Neural Networks in Computer Vision: A Survey. _Seifert et al. 2017_ [pdf](https://link.springer.com/chapter/10.1007/978-3-319-54024-5_6)
* How convolutional neural network see the world - A survey of convolutional neural network visualization methods. _Qin et al. 2018_ [pdf](https://arxiv.org/abs/1804.11191)
* Understanding Neural Networks via Feature Visualization: A survey. _Nguyen et al. 2019_ [pdf](https://arxiv.org/pdf/1904.08939.pdf)
* Explaining Explanations: An Overview of Interpretability of Machine Learning. _Gilpin et al. 2019_ [pdf](https://arxiv.org/pdf/1806.00069.pdf)
* DARPA updates on the XAI program [pdf](https://www.darpa.mil/attachments/XAIProgramUpdate.pdf)
* A Survey on Explainable Artificial Intelligence (XAI): towards Medical XAI. _Tjoa et al. 2019_ [pdf](https://arxiv.org/pdf/1907.07374.pdf)

#### Definitions of Interpretability
* The Mythos of Model Interpretability. _Lipton 2016_ [pdf](https://arxiv.org/abs/1606.03490)
* Distilling a Neural Network Into a Soft Decision Tree [pdf](https://arxiv.org/abs/1711.09784)
* Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation. _Tan et al. 2018_ [pdf](https://arxiv.org/abs/1710.06169)
* Improving the Interpretability of Deep Neural Networks with Knowledge Distillation. _Liu et al. 2018_ [pdf](https://arxiv.org/pdf/1812.10924.pdf)
* EDiT: Interpreting Ensemble Models via Compact Soft Decision Trees. _Yoo et al. 2019_ [pdf](https://pdfs.semanticscholar.org/7a86/aaa70dc919af0d30eccc364583b9a09839c6.pdf?_ga=2.158361272.382053237.1579885022-681255730.1545175980)
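
The distillation entries above all train a simpler, more interpretable student to mimic a trained network's soft predictions. As a point of reference, here is a minimal sketch of the generic soft-label distillation objective these methods build on (function and argument names are illustrative, not taken from any specific paper above):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student distributions."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=1)
    # batchmean averages the per-example KL; T^2 keeps gradients on the usual scale.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
```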

## A4. Quantitatively characterizing hidden features
* TCAV: Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors. _Kim et al. 2018_ [pdf](https://arxiv.org/abs/1711.11279) | [code](https://github.com/tensorflow/tcav)
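
A minimal sketch of the TCAV idea listed above, assuming the layer activations and class-logit gradients have already been extracted with your framework of choice (the names and the logistic-regression choice are illustrative; see the official code linked above for the real implementation):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_activation_vector(concept_acts, random_acts):
    """CAV = unit normal of a linear boundary separating concept vs. random activations."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    cav = LogisticRegression(max_iter=1000).fit(X, y).coef_.ravel()
    return cav / np.linalg.norm(cav)

def tcav_score(logit_grads, cav):
    """Fraction of class examples whose class logit increases along the CAV direction."""
    return float(np.mean(logit_grads @ cav > 0))
```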
* A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations. _Nie et al. 2018_ [pdf](https://arxiv.org/abs/1805.07039)
* BIM: Towards Quantitative Evaluation of Interpretability Methods with Ground Truth. _Yang et al. 2019_ [pdf](https://arxiv.org/abs/1907.09701)
* On the (In)fidelity and Sensitivity for Explanations. _Yeh et al. 2019_ [pdf](https://arxiv.org/pdf/1901.09392.pdf)

## B2. Learning to explain
* Learning how to explain neural networks: PatternNet and PatternAttribution [pdf](https://arxiv.org/abs/1705.05598)
* Counterfactual Visual Explanations. _Goyal et al. 2019_ [pdf](https://arxiv.org/pdf/1904.07451.pdf)
* Generative Counterfactual Introspection for Explainable Deep Learning. _Liu et al. 2019_ [pdf](https://arxiv.org/abs/1907.03077)
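
Both counterfactual papers above ask what minimal change to the input would alter the prediction. A bare-bones gradient-based variant of that question (not the region-swapping or generative procedures of those specific papers; `model`, shapes, and hyperparameters are assumptions) looks like:

```python
import torch
import torch.nn.functional as F

def counterfactual(model, x, target_class, steps=200, lr=0.05, l1_weight=0.01):
    """Search for a sparse perturbation that flips the prediction to target_class."""
    delta = torch.zeros_like(x, requires_grad=True)  # x: a single input, shape (1, C, H, W)
    optimizer = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class], device=x.device)
    for _ in range(steps):
        # Push the logits toward the desired class while keeping the edit small.
        loss = F.cross_entropy(model(x + delta), target) + l1_weight * delta.abs().sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return (x + delta).detach()
```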

# D. Using Attention for Training
* Squeeze-and-Excitation Networks. _Hu et al. 2017_ [pdf](https://arxiv.org/pdf/1709.01507.pdf)
* CBAM: Convolutional Block Attention Module. _Woo et al. 2018_ [pdf](https://arxiv.org/pdf/1807.06521.pdf) | [code](https://github.com/Jongchan/attention-module/blob/master) (see the sketch after this list)
* Sharpen Focus: Learning with Attention Separability and Consistency. _Wang et al. 2019_ [pdf](https://arxiv.org/pdf/1811.07484.pdf)
* Tell Me Where to Look: Guided Attention Inference Network. _Li et al. 2018_ [pdf](https://arxiv.org/pdf/1802.10171.pdf) | [code](https://github.com/ngxbac/GAIN)
* Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks Via Attention Transfer. _Zagoruyko et al. 2017_ [pdf](https://openreview.net/pdf?id=Sks9_ajex)
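
For readers who want to see what a trainable attention block actually looks like, below is a minimal PyTorch sketch in the spirit of CBAM (channel attention followed by spatial attention). The reduction ratio and kernel size mirror the paper's defaults, but this is an illustration rather than the reference implementation linked above.

```python
import torch
import torch.nn as nn

class CBAMBlock(nn.Module):
    """Channel attention then spatial attention, applied to a feature map (B, C, H, W)."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Shared MLP scores each channel from its avg- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # A single conv scores each spatial position from stacked avg/max channel maps.
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        channel_att = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) + self.mlp(x.amax(dim=(2, 3))))
        x = x * channel_att.view(b, c, 1, 1)
        spatial_in = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(spatial_in))
```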

# E. Others
* Explainable Artificial Intelligence via Bayesian Teaching. _Yang and Shafto, NIPS 2017_ [pdf](http://shaftolab.com/assets/papers/yangShafto_NIPS_2017_machine_teaching.pdf)
* Explainable AI for Designers: A Human-Centered Perspective on Mixed-Initiative Co-Creation [pdf](http://www.antoniosliapis.com/papers/explainable_ai_for_designers.pdf)
* ICADx: Interpretable computer aided diagnosis of breast masses. _Kim et al. 2018_ [pdf](https://arxiv.org/abs/1805.08960)