
Conversation

@Abhinavexists (Contributor) commented Jan 4, 2026

@gemini-code-assist (Contributor)

Summary of Changes

Hello @Abhinavexists, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the documentation for Keras image preprocessing augmentation layers by embedding practical code examples directly in the docstrings of 20 layers. The examples give developers immediate, runnable demonstrations of how to configure and apply each augmentation technique, flattening the learning curve and accelerating adoption of these tools for building robust computer vision models.

Highlights

  • Documentation Examples Added: Comprehensive documentation examples have been added for 20 Keras image preprocessing augmentation layers, demonstrating their usage with sample code.
  • Improved Usability: The addition of practical code examples directly within the docstrings significantly enhances the clarity and ease of use for developers integrating these augmentation techniques into their models.


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature               Command               Description
Code Review           /gemini review        Performs a code review for the current pull request in its current state.
Pull Request Summary  /gemini summary       Provides a summary of the current pull request in its current state.
Comment               @gemini-code-assist   Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                  /gemini help          Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its suggestions are incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check its output and use code with caution.

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds valuable documentation examples to 20 image preprocessing augmentation layers, which significantly improves the usability of these layers. The examples are generally clear and demonstrate the functionality well.

However, there are a few areas for improvement:

  1. Consistency in Examples: Many examples are missing necessary imports (numpy, keras) and use placeholders for data (e.g., image = [...]), so they are not directly runnable. It would be great to make all examples self-contained and runnable.
  2. Bug Fix: The PR description mentions a bug fix in AugMix.get_config(), but the bug appears to still be present in the provided file content. This is a critical issue that needs to be addressed.
  3. Explicitness: For augmentation layers, it's good practice to explicitly pass training=True in the examples to make it clear that the augmentation is applied during training.

I've left specific comments with suggestions for each of these points. Addressing them will make this contribution even more impactful.

Comment on lines 78 to 94
# Create an AugMix layer
augmix = keras.layers.AugMix(
    value_range=(0, 255),
    num_chains=3,   # Creates 3 different augmentation chains
    chain_depth=3,  # Each chain applies up to 3 random augmentations
    factor=0.3,     # Controls the strength of augmentations
    all_ops=True
)
# Sample images
images = np.random.randint(0, 255, (8, 224, 224, 3), dtype='uint8')
# Each image is augmented in 3 different ways (chains) and then mixed
augmented_images = augmix(images, training=True)
# At inference, no augmentation is applied
output = augmix(images, training=False)

Severity: medium

The code example is not runnable as it's missing the necessary imports for numpy and keras. To improve documentation quality and user experience, please make the example self-contained and runnable by adding the imports.

import numpy as np
import keras

# Create an AugMix layer
augmix = keras.layers.AugMix(
    value_range=(0, 255),
    num_chains=3,  # Creates 3 different augmentation chains
    chain_depth=3, # Each chain applies up to 3 random augmentations
    factor=0.3,    # Controls the strength of augmentations
    all_ops=True
)

# Sample images
images = np.random.randint(0, 255, (8, 224, 224, 3), dtype='uint8')

# Each image is augmented in 3 different ways (chains) and then mixed
augmented_images = augmix(images, training=True)

# At inference, no augmentation is applied
output = augmix(images, training=False)

Comment on lines 36 to 57
# Create a CutMix layer in which factor controls the patch size variability
cutmix = keras.layers.CutMix(factor=0.8)
# Generate sample images and one-hot encoded labels
images = np.random.randint(0, 255, (8, 224, 224, 3), dtype='uint8')
labels = keras.ops.one_hot(
    np.array([0, 1, 2, 3, 0, 1, 2, 3]),
    num_classes=4
)
# Random rectangular patches are cut from one image and pasted into another
# Labels are also mixed proportionally to the patch area
output = cutmix(
    {"images": images, "labels": labels},
    training=True
)
# At inference, no augmentation is applied
output_inference = cutmix(
    {"images": images, "labels": labels},
    training=False
)

Severity: medium

The code example is not runnable as it's missing the necessary imports for numpy and keras. To improve documentation quality and user experience, please make the example self-contained and runnable by adding the imports.

import numpy as np
import keras

# Create a CutMix layer in which factor controls the patch size variability
cutmix = keras.layers.CutMix(factor=0.8)

# Generate sample images and one-hot encoded labels
images = np.random.randint(0, 255, (8, 224, 224, 3), dtype='uint8')
labels = keras.ops.one_hot(
    np.array([0, 1, 2, 3, 0, 1, 2, 3]),
    num_classes=4
)

# Random rectangular patches are cut from one image and pasted into another
# Labels are also mixed proportionally to the patch area
output = cutmix(
    {"images": images, "labels": labels},
    training=True
)

# At inference, no augmentation is applied
output_inference = cutmix(
    {"images": images, "labels": labels},
    training=False
)

Comment on lines 22 to 39
# Create a MaxNumBoundingBoxes layer to ensure max 10 boxes
max_boxes_layer = keras.layers.MaxNumBoundingBoxes(
    max_number=10,
    fill_value=-1
)
# Sample bounding boxes dict
bounding_boxes = {
    "boxes": np.array([
        [[10, 20, 100, 150], [50, 60, 200, 250], [0, 0, 50, 50]],
    ]),
    "labels": np.array([[1, 2, 3]])
}
# Ensure max 10 boxes per image
# If fewer than 10 boxes, pad with fill_value (-1)
# If more than 10 boxes, truncate to 10
result = max_boxes_layer({"bounding_boxes": bounding_boxes})

Severity: medium

The code example is not runnable as it's missing the necessary imports for numpy and keras. To improve documentation quality and user experience, please make the example self-contained and runnable by adding the imports.

import numpy as np
import keras

# Create a MaxNumBoundingBoxes layer to ensure max 10 boxes
max_boxes_layer = keras.layers.MaxNumBoundingBoxes(
    max_number=10,
    fill_value=-1
)

# Sample bounding boxes dict
bounding_boxes = {
    "boxes": np.array([
        [[10, 20, 100, 150], [50, 60, 200, 250], [0, 0, 50, 50]],
    ]),
    "labels": np.array([[1, 2, 3]])
}

# Ensure max 10 boxes per image
# If fewer than 10 boxes, pad with fill_value (-1)
# If more than 10 boxes, truncate to 10
result = max_boxes_layer({"bounding_boxes": bounding_boxes})

Comment on lines 49 to 61
# Create a RandomPerspective layer with scale factor
# This simulates a 3D-like viewing angle shift
perspective_layer = keras.layers.RandomPerspective(
    factor=1.0,
    scale=0.3  # Control how extreme the perspective shift is
)
# Sample image
image = np.random.randint(0, 255, (224, 224, 3), dtype='uint8')
# Apply perspective transformation
# Different corners of the image will be shifted randomly
output = perspective_layer(image, training=True)

Severity: medium

The code example is not runnable as it's missing the necessary imports for numpy and keras. To improve documentation quality and user experience, please make the example self-contained and runnable by adding the imports.

import numpy as np
import keras

# Create a RandomPerspective layer with scale factor
# This simulates a 3D-like viewing angle shift
perspective_layer = keras.layers.RandomPerspective(
    factor=1.0,
    scale=0.3  # Control how extreme the perspective shift is
)

# Sample image
image = np.random.randint(0, 255, (224, 224, 3), dtype='uint8')

# Apply perspective transformation
# Different corners of the image will be shifted randomly
output = perspective_layer(image, training=True)

Comment on lines 29 to 43
# Create a RandomPosterization layer with 4 bits
random_posterization = keras.layers.RandomPosterization(factor=4)
# Your input image
image = [...] # your input image
# Apply posterization
output = random_posterization(image)
# For more extreme posterization with 2 bits
extreme_posterization = keras.layers.RandomPosterization(
    factor=2,
    value_range=[0.0, 1.0]
)
output_extreme = extreme_posterization(image)

Severity: medium

The code example uses a placeholder image = [...] and doesn't explicitly set training=True when calling the layer. For clarity and consistency with other augmentation layers, it's better to provide a runnable example and be explicit about the training mode.

import numpy as np
import keras

# Create a RandomPosterization layer with 4 bits
random_posterization = keras.layers.RandomPosterization(factor=4)

# Your input image
image = np.random.randint(0, 255, (224, 224, 3), dtype='uint8')

# Apply posterization
output = random_posterization(image, training=True)

# For more extreme posterization with 2 bits
extreme_posterization = keras.layers.RandomPosterization(
    factor=2,
    value_range=[0.0, 1.0]
)
output_extreme = extreme_posterization(image, training=True)

Comment on lines 39 to 50
# Create a RandomSharpness layer
# factor can be sampled between 0.0 (full blur) and 0.5 (no change)
sharpness_layer = keras.layers.RandomSharpness(
    factor=0.5,
    value_range=(0, 255)
)
# Sample image
image = np.array([[[100, 150, 200], [50, 75, 100]]])
# Apply sharpness adjustment
output = sharpness_layer(image)

Severity: medium

The code example is not runnable as it's missing the necessary imports for numpy and keras. Also, for clarity and consistency with other augmentation layers, it's better to be explicit about the training mode by passing training=True.

import numpy as np
import keras

# Create a RandomSharpness layer
# factor can be sampled between 0.0 (full blur) and 0.5 (no change)
sharpness_layer = keras.layers.RandomSharpness(
    factor=0.5,
    value_range=(0, 255)
)

# Sample image
image = np.random.randint(0, 255, (224, 224, 3), dtype='uint8')

# Apply sharpness adjustment
output = sharpness_layer(image, training=True)

Comment on lines 68 to 73
shear_layer = keras.layers.RandomShear(x_factor=0.2, y_factor=0.2)
images = [...] # your input image
# Apply random shear transformation
output = shear_layer(images, training=True)

Severity: medium

The code example uses a placeholder images = [...] which makes it not runnable. To improve the documentation, please provide a complete, runnable example. This can be done by creating a sample image tensor using numpy.

import numpy as np
import keras

shear_layer = keras.layers.RandomShear(x_factor=0.2, y_factor=0.2)

images = np.random.randint(0, 255, (2, 224, 224, 3), dtype='uint8')

# Apply random shear transformation
output = shear_layer(images, training=True)

Comment on lines 94 to 101
translation_layer = keras.layers.RandomTranslation(
    height_factor=0.2, width_factor=0.2
)
images = [...] # your input image
# Apply random translation
output = translation_layer(images, training=True)

Severity: medium

The code example uses a placeholder images = [...] which makes it not runnable. To improve the documentation, please provide a complete, runnable example. This can be done by creating a sample image tensor using numpy.

import numpy as np
import keras

translation_layer = keras.layers.RandomTranslation(
    height_factor=0.2, width_factor=0.2
)

images = np.random.randint(0, 255, (2, 224, 224, 3), dtype='uint8')

# Apply random translation
output = translation_layer(images, training=True)

@codecov-commenter commented Jan 4, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 82.69%. Comparing base (c67eddb) to head (2846fd5).
⚠️ Report is 24 commits behind head on master.

Additional details and impacted files
@@           Coverage Diff            @@
##           master   #21978    +/-   ##
========================================
  Coverage   82.69%   82.69%            
========================================
  Files         588      592     +4     
  Lines       61448    62072   +624     
  Branches     9622     9723   +101     
========================================
+ Hits        50812    51332   +520     
- Misses       8147     8215    +68     
- Partials     2489     2525    +36     
Flag               Coverage Δ
keras              82.52% <ø> (+<0.01%) ⬆️
keras-jax          61.55% <ø> (+<0.01%) ⬆️
keras-numpy        56.56% <ø> (-0.25%) ⬇️
keras-openvino     37.42% <ø> (+0.06%) ⬆️
keras-tensorflow   63.70% <ø> (-0.02%) ⬇️
keras-torch        62.47% <ø> (+0.01%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

@SamanehSaadat SamanehSaadat (Member) left a comment

Thanks for the PR!

Could you please review the comments in the examples and keep only the ones that are necessary? If the code is clear, it doesn't need a comment (e.g. "# Apply layer X"), since extra comments make the example harder to read.

(images, labels), _ = keras.datasets.cifar10.load_data()
images = images.astype("float32")
# Create a RandAugment layer

To make the examples more concise, we can remove comments if the code is clear. For example, here the comment doesn't add much value as it's already clear what the code is doing.

factor=0.5 # Control the strength of augmentations (0-1 normalized)
)
# Apply augmentation during training

same here regarding comment

# Ensure max 10 boxes per image
# If fewer than 10 boxes, pad with fill_value (-1)
# If more than 10 boxes, truncate to 10

Could you make these comments a paragraph?

random_erasing = keras.layers.RandomErasing(factor=1.0)
# Your input image
image = [...] # your input image

Is it possible to create random images similar to previous examples?

output = random_invert(image, training=True)
# For always inverting colors with custom value range
invert_always = keras.layers.RandomInvert(

Remove always from the variable name?

@Abhinavexists (Contributor, Author)

Hi @SamanehSaadat, I have made the changes you suggested. I hope the code is much cleaner now while keeping the important comments.


4 participants