
Commit

Update docs
lucasdavid committed Nov 29, 2022
1 parent 70b6365 commit 034009c
Showing 23 changed files with 551 additions and 412 deletions.
42 changes: 30 additions & 12 deletions README.rst
@@ -46,15 +46,33 @@ Implemented Explaining Methods
:widths: auto
:align: left

=========================== ========= ========================================================================================
Method                      Kind      Reference
=========================== ========= ========================================================================================
Gradient Back-propagation   gradient  `paper <https://arxiv.org/abs/1312.6034>`_
Full-Gradient               gradient  `paper <https://arxiv.org/abs/1905.00780>`_
CAM                         CAM       `paper <https://arxiv.org/abs/1512.04150>`_
Grad-CAM                    CAM       `paper <https://arxiv.org/abs/1610.02391>`_
Grad-CAM++                  CAM       `paper <https://arxiv.org/abs/1710.11063>`_
Score-CAM                   CAM       `paper <https://arxiv.org/abs/1910.01279>`_
SmoothGrad                  Meta      `paper <https://arxiv.org/abs/1706.03825>`_
TTA                         Meta      `paper <https://journalofbigdata.springeropen.com/articles/10.1186/s40537-019-0197-0/>`_
=========================== ========= ========================================================================================
=========================== ========= ====================================================== ==================
Method                      Kind      Description                                            Reference
=========================== ========= ====================================================== ==================
Gradient Back-propagation   gradient  Computes the gradient of the output activation unit    `docs <https://lucasdavid.github.io/keras-explainable/api/keras_explainable.methods.html#keras_explainable.methods.gradient.gradients>`_
                                      being explained with respect to each unit in the input `paper <https://arxiv.org/abs/1312.6034>`_
                                      signal.
Full-Gradient               gradient  Adds the individual contributions of each bias factor  `docs <https://lucasdavid.github.io/keras-explainable/api/keras_explainable.methods.html#keras_explainable.methods.gradient.full_gradients>`_
                                      in the model to the extracted gradient, forming the    `paper <https://arxiv.org/abs/1905.00780>`_
                                      "full gradient" representation.
CAM                         CAM       Creates class-specific maps by linearly combining the  `docs <https://lucasdavid.github.io/keras-explainable/api/keras_explainable.methods.html#keras_explainable.methods.cams.cam>`_
                                      activation maps extracted from the last convolutional  `paper <https://arxiv.org/abs/1512.04150>`_
                                      layer, scaled by their contributions to the unit of
                                      interest.
Grad-CAM                    CAM       Linear combination of activation maps, weighted by     `docs <https://lucasdavid.github.io/keras-explainable/api/keras_explainable.methods.html#keras_explainable.methods.cams.gradcam>`_
                                      the gradient of the output unit with respect to the    `paper <https://arxiv.org/abs/1610.02391>`_
                                      maps themselves.
Grad-CAM++                  CAM       Weights pixels in the activation maps so that their    `docs <https://lucasdavid.github.io/keras-explainable/api/keras_explainable.methods.html#keras_explainable.methods.cams.gradcampp>`_
                                      contributions are counterbalanced, resulting in        `paper <https://arxiv.org/abs/1710.11063>`_
                                      similar activation intensity over multiple instances
                                      of objects.
Score-CAM                   CAM       Uses each activation map to mask the input signal,     `docs <https://lucasdavid.github.io/keras-explainable/api/keras_explainable.methods.html#keras_explainable.methods.cams.scorecam>`_
                                      feed-forwards the masked input, and computes the       `paper <https://arxiv.org/abs/1910.01279>`_
                                      resulting activation intensity. Maps are then
                                      combined, weighted by their relative activation
                                      retention.
SmoothGrad                  Meta      Consecutive applications of an AI explaining method,   `docs <https://lucasdavid.github.io/keras-explainable/api/keras_explainable.methods.html#keras_explainable.methods.meta.smooth>`_
                                      adding Gaussian noise to the input signal each time.   `paper <https://arxiv.org/abs/1706.03825>`_
TTA                         Meta      Consecutive applications of an AI explaining method,   `docs <https://lucasdavid.github.io/keras-explainable/api/keras_explainable.methods.html#keras_explainable.methods.meta.tta>`_
                                      applying augmentation to the input signal each time.   `paper <https://journalofbigdata.springeropen.com/articles/10.1186/s40537-019-0197-0/>`_
=========================== ========= ====================================================== ==================
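The CAM-family methods in the table share one core idea: a saliency map is a weighted combination of the last convolutional layer's activation maps. A minimal NumPy sketch of that combination (array names, shapes, and the toy data are illustrative assumptions, not the library's API):

```python
import numpy as np

# Hypothetical shapes: a 7x7 spatial grid with 4 activation maps.
H, W, K = 7, 7, 4
rng = np.random.default_rng(0)

maps = rng.random((H, W, K))  # activation maps from the last conv layer
weights = rng.random(K)       # per-map contribution to the unit of interest
                              # (CAM: classifier weights; Grad-CAM: pooled gradients)

cam = maps @ weights          # linear combination of the K maps -> (H, W)

# Keep only positively contributing regions and normalize to [0, 1]
# before visualizing the saliency map.
cam = np.maximum(cam, 0)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

print(cam.shape)  # (7, 7)
```

The methods differ mainly in how `weights` is obtained; the combination step itself is the same.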
Binary file modified _static/images/cover.jpg
69 changes: 52 additions & 17 deletions docs/_static/css/custom.css
@@ -2,15 +2,19 @@
* Code Highlighting.
**/

.highlight {
background-color: #f7f7f7 !important;
.highlight {
background-color: #fafafa !important;
padding: 0 !important;
}

.notranslate {
margin-bottom: 1em;
}

.highlight > pre {
background: none;
border: none;
padding: 1em !important;
/* padding: 1em !important; */

-webkit-box-shadow: none;
-moz-box-shadow: none;
@@ -44,15 +48,23 @@
}

.jupyter_container > .cell_output > .output > .highlight > pre {
padding: 0 1em !important;
padding: 0 20px !important;
}

.jupyter_container > .cell_output > .output:first-child > .highlight > pre {
padding-top: 1em !important;
padding-top: 20px !important;
}

.jupyter_container > .cell_output > .output:last-child > .highlight > pre {
padding-bottom: 1em !important;
padding-bottom: 20px !important;
}

/***
* Sizes
*/

.site-main {
max-width: 1200px;
}

/***
@@ -66,17 +78,10 @@
input[type="text"] {
display: block;
width: 100%;
padding: .375rem .75rem;
font-size: 1rem;
line-height: 1.5;
color: #495057;
background-color: #fff;
background-clip: padding-box;
border: 1px solid #ced4da;
border-radius: .25rem;
transition: border-color .15s ease-in-out, box-shadow .15s ease-in-out;
}

/*
input[type="text"]:focus {
border-color: #007daf;
box-shadow: 0 0 0 3px rgba(54, 198, 255, .25);
@@ -85,9 +90,39 @@ input[type="text"]:focus {
.function>dt,
.method>dt {
overflow-x: auto;
} */

img {
max-width: 100%;
}

/* Spacing */

h1,h2,h3 {
margin-top: 0.5em;
margin-bottom: 0.5em;
}

table p {
margin-bottom: 0.5em;
}

.admonition, .note {
border: 0;
margin-top: 1em;
margin-bottom: 1em;
}

.viewcode-link {
float: right;
}


/***
* Tables
**/

/* table {
color: #666;
border: #eee 1px solid;
width: 100%;
@@ -102,4 +137,4 @@ table td {
text-align: right;
border: #efefef 1px solid;
padding: 0.2em;
}
} */
Binary file modified docs/_static/images/cover.jpg
14 changes: 9 additions & 5 deletions docs/conf.py
@@ -54,6 +54,8 @@
except Exception as e:
print("Running `sphinx-apidoc` failed!\n{}".format(e))

import sphinx_redactor_theme

# -- General configuration ---------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
@@ -141,7 +143,7 @@
# show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = "arduino"
pygments_style = "vs"

# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
@@ -156,20 +158,22 @@

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'sphinx_book_theme'
# html_theme = 'sphinx_book_theme'
html_theme = 'sphinx_redactor_theme'

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {
# "sidebar_width": "300px", "page_width": "1200px"
"repository_url": "https://github.com/lucasdavid/keras-explainable",
"use_repository_button": True,
# "repository_url": "https://github.com/lucasdavid/keras-explainable",
# "use_repository_button": True,
}


# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
html_theme_path = [sphinx_redactor_theme.get_html_theme_path()]

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
@@ -220,7 +224,7 @@
# html_show_sourcelink = True

# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
html_show_sphinx = False

# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
71 changes: 51 additions & 20 deletions docs/explaining.rst
@@ -32,28 +32,28 @@ GPUs and/or workers.

SOURCE_DIRECTORY = 'docs/_static/images/singleton/'
SAMPLES = 8
SIZES = (224, 224)
SIZES = (299, 299)

file_names = os.listdir(SOURCE_DIRECTORY)
image_paths = [os.path.join(SOURCE_DIRECTORY, f) for f in file_names if f != '_links.txt']
images = np.stack([img_to_array(load_img(ip).resize(SIZES)) for ip in image_paths])
images = images.astype("uint8")[:SAMPLES]

We demonstrate below how predictions can be explained using the
ResNet50 network trained over ImageNet, using a few image samples.
Xception network trained over ImageNet, using a few image samples.
Firstly, we load the network:

.. jupyter-execute::

rn50 = tf.keras.applications.ResNet50V2(
model = tf.keras.applications.Xception(
classifier_activation=None,
weights='imagenet',
)

print(f"Spatial map sizes: {rn50.get_layer('avg_pool').input.shape}")
print(f"Spatial map sizes: {model.get_layer('avg_pool').input.shape}")

We can feed-foward the samples once and get the predicted classes for each sample.
Besides making sure the model is outputing the expected classes, this step is
We can feed-forward the samples once and get the predicted classes for each sample.
Besides making sure the model is outputting the expected classes, this step is
required in order to determine the most activating units in the *logits* layer,
which improves performance of the explaining methods.

@@ -62,7 +62,7 @@
from tensorflow.keras.applications.imagenet_utils import preprocess_input, decode_predictions

inputs = images / 127.5 - 1
logits = rn50.predict(inputs, verbose=0)
logits = model.predict(inputs, verbose=0)

indices = np.argsort(logits, axis=-1)[:, ::-1]
probs = tf.nn.softmax(logits).numpy()
@@ -79,29 +79,28 @@ which improves performance of the explaining methods.
Finally, we can simply run all available explaining methods:

.. jupyter-execute::
:hide-output:

explaining_units = indices[:, :1] # First most likely class.

# Gradient Back-propagation
_, g_maps = ke.gradients(rn50, inputs, explaining_units)
_, g_maps = ke.gradients(model, inputs, explaining_units)

# Full-Gradient
logits = ke.inspection.get_logits_layer(rn50)
inters, biases = ke.inspection.layers_with_biases(rn50, exclude=[logits])
rn50_exp = ke.inspection.expose(rn50, inters, logits)
_, fg_maps = ke.full_gradients(rn50_exp, inputs, explaining_units, biases=biases)
logits = ke.inspection.get_logits_layer(model)
inters, biases = ke.inspection.layers_with_biases(model, exclude=[logits])
model_exp = ke.inspection.expose(model, inters, logits)
_, fg_maps = ke.full_gradients(model_exp, inputs, explaining_units, biases=biases)

# CAM-Based
rn50_exp = ke.inspection.expose(rn50)
_, c_maps = ke.cam(rn50_exp, inputs, explaining_units)
_, gc_maps = ke.gradcam(rn50_exp, inputs, explaining_units)
_, gcpp_maps = ke.gradcampp(rn50_exp, inputs, explaining_units)
_, sc_maps = ke.scorecam(rn50_exp, inputs, explaining_units)

Following the original Grad-CAM paper, we only consider the positive contributing regions
in the creation of the CAMs, crunching negatively contributing and non-related regions together:
model_exp = ke.inspection.expose(model)
_, c_maps = ke.cam(model_exp, inputs, explaining_units)
_, gc_maps = ke.gradcam(model_exp, inputs, explaining_units)
_, gcpp_maps = ke.gradcampp(model_exp, inputs, explaining_units)
_, sc_maps = ke.scorecam(model_exp, inputs, explaining_units)

.. jupyter-execute::
:hide-code:

all_maps = (g_maps, fg_maps, c_maps, gc_maps, gcpp_maps, sc_maps)

@@ -110,3 +109,35 @@
_overlays = sum(zip([None] * len(images), *all_maps), ())
ke.utils.visualize(_images, _titles, _overlays, cols=1 + len(all_maps))

The functions above are simply shortcuts for
:func:`~keras_explainable.engine.explaining.explain`, using their conventional
hyper-parameters and post-processing functions.
For more flexibility, you can use the regular form:

.. code-block:: python

    logits, cams = ke.explain(
        ke.methods.cams.gradcam,
        model_exp,
        inputs,
        explaining_units,
        batch_size=32,
        postprocessing=ke.filters.positive_normalize,
    )

While the :func:`~keras_explainable.engine.explaining.explain` function is a convenient
wrapper, transparently distributing the workload based on the distribution strategy
associated with the model, it is not a necessary component in the overall functioning
of the library. Alternatively, one can call any explaining method directly:

.. code-block:: python

    logits, cams = ke.methods.cams.gradcam(model, inputs, explaining_units)

    # Or the following, which is more efficient:
    gradcam = tf.function(ke.methods.cams.gradcam, reduce_retracing=True)
    logits, cams = gradcam(model, inputs, explaining_units)

    cams = ke.filters.positive_normalize(cams)
    cams = tf.image.resize(cams, (299, 299)).numpy()
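Meta methods such as SmoothGrad and TTA are, at their core, averaging loops around a base explaining method. A dependency-free sketch of the SmoothGrad idea, with a toy gradient function standing in for a real explainer (the names, parameters, and signature below are illustrative assumptions, not `keras_explainable`'s API):

```python
import numpy as np

def smooth(method, repetitions=20, noise=0.1):
    """Wrap `method`, averaging its maps over noisy copies of the input."""
    def explain(inputs, units, rng=np.random.default_rng(42)):
        maps = 0.0
        for _ in range(repetitions):
            # Add Gaussian noise to the input signal each time.
            noisy = inputs + rng.normal(scale=noise, size=inputs.shape)
            maps = maps + method(noisy, units)
        return maps / repetitions
    return explain

def toy_gradients(inputs, units):
    # Toy explainer: the gradient of f(x) = (x ** 2).sum() w.r.t. x is 2x.
    # (`units` is ignored here; a real explainer would select output units.)
    return 2.0 * inputs

smoothgrad = smooth(toy_gradients, repetitions=50, noise=0.05)
x = np.ones((1, 4, 4))
maps = smoothgrad(x, units=None)
print(maps.shape)  # (1, 4, 4)
```

TTA follows the same pattern, replacing the noise injection with augmentations (flips, crops) and un-doing them on the resulting maps before averaging.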
