Conversation

@pctablet505
Collaborator

@pctablet505 pctablet505 commented Sep 17, 2025

Added support for exporting keras-hub models.

This PR requires keras-team/keras#21674 (the export feature in Keras) as a prerequisite and builds on top of it.

Simple Demo

Complete Numeric verification tests multiple models for numerical correctness.

Verified models:

  • llama3.2_1b
  • gemma3_1b
  • gpt2_base_en
  • resnet_50_imagenet
  • efficientnet_b0_ra_imagenet
  • densenet_121_imagenet
  • mobilenet_v3_small_100_imagenet
  • dfine_nano_coco
  • retinanet_resnet50_fpn_coco
  • deeplab_v3_plus_resnet50_pascalvoc

pctablet505 and others added 11 commits September 1, 2025 19:11
This reverts commit 62d2484.
This reverts commit de830b1.
export working 1st commit
Refactored exporter and registry logic for better type safety and error handling. Improved input signature methods in config classes by extracting sequence length logic. Enhanced LiteRT exporter with clearer verbose handling and stricter error reporting. Registry now conditionally registers LiteRT exporter and extends export method only if dependencies are available.
@github-actions github-actions bot added the Gemma Gemma model specific issues label Sep 17, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @pctablet505, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a comprehensive and extensible framework for exporting Keras-Hub models to various formats, with an initial focus on LiteRT. The system is designed to seamlessly integrate with Keras-Hub's model architecture, particularly by addressing the unique challenge of handling dictionary-based model inputs during the export process. This enhancement significantly improves the deployability of Keras-Hub models by providing a standardized and robust export pipeline, alongside crucial compatibility fixes for TensorFlow's SavedModel/TFLite export mechanisms.

Highlights

  • New Model Export Framework: Introduced a new, extensible framework for exporting Keras-Hub models, designed to support various formats and model types.
  • LiteRT Export Support: Added specific support for exporting Keras-Hub models to the LiteRT format, verified for models like gemma3, llama3.2, and gpt2.
  • Registry-Based Configuration: Implemented an ExporterRegistry to manage and retrieve appropriate exporter configurations and exporters based on model type and target format.
  • Input Handling for Keras-Hub Models: Developed a KerasHubModelWrapper to seamlessly convert Keras-Hub's dictionary-based inputs to the list-based inputs expected by the underlying Keras LiteRT exporter.
  • TensorFlow Export Compatibility: Added compatibility shims (_get_save_spec and _trackable_children) to Keras-Hub Backbone models to ensure proper functioning with TensorFlow's SavedModel and TFLite export utilities.
  • Automated Export Method Extension: The Task class in Keras-Hub models is now automatically extended with an export method, simplifying the model export process for users (a usage sketch follows below).
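
A minimal sketch of the user-facing flow implied by these highlights; the preset name comes from the verified-models list above, but the exact `export(...)` signature and the `format="litert"` argument are assumptions for illustration, not the confirmed API of this PR:

import keras_hub

# Load a preset and export it to LiteRT in one call. The export path and
# keyword argument are placeholders; see the Simple Demo Colab above for
# the actual invocation.
model = keras_hub.models.CausalLM.from_preset("gemma3_1b")
model.export("gemma3_1b.tflite", format="litert")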

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a significant new feature: model export to LiteRT. The implementation is well-structured, using a modular and extensible registry pattern. However, there are several areas that require attention. The most critical issue is the complete absence of tests for the new export functionality, which is a direct violation of the repository's style guide stating that testing is non-negotiable. Additionally, I've identified a critical bug in the error handling logic within the lite_rt.py exporter that includes unreachable code. There are also several violations of the style guide regarding the use of type hints in function signatures across all new files. I've provided specific comments and suggestions to address these points, which should help improve the robustness, maintainability, and compliance of this new feature.

Introduces the keras_hub.api.export submodule and updates the main API to expose it. The new export module imports various exporter configs and functions from the internal export package, making them available through the public API.
Added ImageClassifierExporterConfig, ImageSegmenterExporterConfig, and ObjectDetectorExporterConfig to the export API. Improved input shape inference and dummy input generation for image-related exporter configs. Refactored LiteRTExporter to better handle model type checks and input signature logic, with improved error handling for input mapping.
Moved the 'import keras' statement to the top of the module and removed redundant local imports within class methods. This improves code clarity and avoids repeated imports.
Deleted the debug_object_detection.py script, which was used for testing object detection model outputs and export issues. This cleanup removes unused debugging code from the repository.
Renames all references of 'LiteRT' to 'Litert' across the codebase, including file names, class names, and function names. Updates exporter registry and API imports to use the new 'litert' naming. Also improves image model exporter configs to dynamically determine input dtype from the model, enhancing flexibility for different input types. Adds support for ImageSegmenter model type detection in the exporter registry.
Refactored InputSpec definitions in exporter configs for improved readability by placing each argument on a separate line. Updated import path in litert.py to import from keras.src.export.litert instead of keras.src.export.litert_exporter.
@divyashreepathihalli
Collaborator

@pctablet505 can you update the Colab to use the changes from this PR? And keep the demo short: load a model, export it, then reload it and verify numerics.
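
For reference, a reload-and-verify step of the kind requested might look roughly like the sketch below; `model`, `inputs`, the exported path, and the tolerances are all placeholders, and it assumes a single-output model whose TFLite input names contain the Keras input names.

import numpy as np
import tensorflow as tf

# Run the original Keras-Hub model on preprocessed inputs.
keras_out = np.asarray(model(inputs))

# Reload the exported file with the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

# Match each interpreter input to the corresponding Keras input by name
# (the exact naming depends on how the exporter builds the signature).
for d in interpreter.get_input_details():
    key = next(k for k in inputs if k in d["name"])
    interpreter.set_tensor(d["index"], inputs[key])
interpreter.invoke()

litert_out = interpreter.get_tensor(
    interpreter.get_output_details()[0]["index"]
)
np.testing.assert_allclose(keras_out, litert_out, rtol=1e-3, atol=1e-3)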

@divyashreepathihalli
Collaborator

Also, if you have addressed the Gemini comments, please respond to them. Is this PR dependent on the PR in the Keras repo, keras-team/keras#21674?

Simplifies and unifies input signature and dummy input generation for text and image models by removing redundant helper methods and centralizing logic. Updates model building in KerasHubExporter to use input signatures and improves error handling. Refactors LiteRT exporter to use the new parameterized input signature and model building approach, reducing code duplication and improving maintainability.
Removed redundant registry initialization in export_model and clarified model building comments in KerasHubExporter. Switched to using warnings.warn for import errors in models/__init__.py instead of print statements for better error reporting.
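
For context, switching to warnings.warn for optional imports looks roughly like this; the module path and symbol name are hypothetical, not the actual contents of models/__init__.py:

import warnings

try:
    from keras_hub.src.models.some_model import SomeModel  # hypothetical
except ImportError as e:
    # Surface the failure as a warning instead of a print() so callers
    # can filter or escalate it.
    warnings.warn(f"Could not import SomeModel: {e}")
    SomeModel = None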
@pctablet505
Collaborator Author

@gemini-code-assist please review it again

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This is a significant and valuable addition to KerasHub, enabling model export to LiteRT. The overall structure is well-designed, following a modular and extensible pattern similar to Hugging Face's Optimum.

I've reviewed this pull request, focusing on correctness, maintainability, and adherence to the repository's style guide. The style guide was instrumental in this review, particularly the sections on Testing and Validation (lines 401-496) and Docstrings and Type Hinting (lines 364-398).

My main findings are:

  • Critical Issue: Missing Tests

    • The pull request introduces substantial new functionality for model exporting across several new files (base.py, configs.py, litert.py, registry.py), but it lacks corresponding tests.
    • The repository style guide is explicit that "Testing is a non-negotiable part of every contribution" (line 403) and "Every .py file containing logic...must have a corresponding _test.py file" (line 406).
    • Please add comprehensive unit tests for the new export logic, covering different model types, configurations, and edge cases. This is crucial to ensure the robustness and correctness of this feature.
  • Other Findings

    • I've also left several inline comments regarding a bug in model type detection, incorrect dtype handling, and violations of the docstring style guide. Please address these to improve code quality and consistency.

Refined dtype extraction logic in image and object model exporter configs to better handle different dtype representations. Updated LiteRT exporter to use Keras io_utils for progress messages and improved verbose flag handling. Added ObjectDetector and ImageSegmenter to export registry model type checks. Enhanced docstrings for clarity and consistency in base exporter classes.
Replaced explicit for-loops with list and dict comprehensions for filtering trackable children in lists and dicts. This improves code readability and conciseness in the Backbone model.
Replaces hardcoded model class list with iteration over registered configs in ExporterRegistry. This improves maintainability and extensibility by removing direct imports and manual class checks.
Removed unnecessary try/except blocks and test skipping logic from multiple test files, making test failures more explicit and improving readability. Updated registry initialization to ensure Seq2SeqLM is registered before CausalLM. Simplified _is_keras_hub_model logic to use isinstance(Task) for more robust model type detection.
@pctablet505
Collaborator Author

@gemini-code-assist review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a significant new feature: support for exporting Keras-Hub models to the LiteRT format. The implementation is well-designed and modular, with base classes for exporters and configurations, a registry for discoverability, and a specific LiteRTExporter. The feature is nicely integrated by extending the export method on Task models, providing a seamless user experience. The accompanying tests are thorough, covering unit tests with mocks and crucial integration tests with production models that include numerical verification. I have identified a couple of minor issues: one related to redundant code in an __init__.py file and another concerning a misleading docstring. These are straightforward to address. Overall, this is a high-quality contribution that adds valuable functionality.

Corrected the docstring in _is_model_compatible to refer to text classifier instead of image classifier.
@@ -0,0 +1,536 @@
"""Tests for LiteRT export with specific production models.
Collaborator

@divyashreepathihalli divyashreepathihalli Oct 27, 2025


A better way is to add a standardized test like this - https://github.com/keras-team/keras-hub/blob/master/keras_hub/src/tests/test_case.py#L414 - and enable it for all models.

Collaborator Author


We can't run the test for all models on keras-hub; some are multiple gigabytes and would require more memory than is available. A system with 50 GB of RAM crashes for 7B models.

Refactored LiteRT export tests to use a standardized helper method for model export and numerical accuracy verification, reducing code duplication and improving maintainability. Removed direct file and interpreter management in favor of a unified approach, tightened numerical accuracy thresholds, and ensured proper resource cleanup. Updated test cases to dynamically determine input shapes and handle model-specific requirements, improving robustness and reliability of export validation.
Adds support for specifying per-output numerical thresholds and input ranges in LiteRT export tests. Refactors test utilities to handle output mapping, threshold configuration, and input preparation for improved accuracy and flexibility across model types.
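
To illustrate the commit above, a per-output threshold configuration might be passed to the test helper roughly like this; the helper name run_litert_export_test is taken from later in this thread, but the keyword arguments shown here are assumptions, and the {"max": ..., "mean": ...} shape with a "*" wildcard mirrors the verification code quoted further down.

# Hypothetical call inside an ObjectDetector export test.
self.run_litert_export_test(
    model,
    output_thresholds={
        "boxes": {"max": 1.0, "mean": 0.05},
        "labels": {"max": 0.0, "mean": 0.0},
        "*": {"max": 10.0, "mean": 0.1},  # fallback for any other output
    },
)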
@pctablet505
Collaborator Author

@gemini-code-assist review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This is an excellent and comprehensive pull request that adds a much-needed model export functionality to LiteRT. The design is well-thought-out, using a registry and adapter pattern to handle various model architectures and their specific requirements. The code is well-structured and accompanied by extensive tests, including unit, integration, and end-to-end tests with production models. The detailed design document is also a great addition. I have one suggestion to improve the code by replacing a private TensorFlow API with a public one.

Comment on lines 314 to 343
            # Fall back to building specs from `self.inputs`.
            try:
                from tensorflow.python.framework import tensor_spec
            except (ImportError, ModuleNotFoundError):
                return None

            inputs = getattr(self, "inputs", None)
            if inputs is None:
                return None

            def _make_spec(t):
                # t is a tf.Tensor-like object
                shape = list(t.shape)
                if dynamic_batch and len(shape) > 0:
                    shape[0] = None
                # Convert to tuple for TensorSpec
                try:
                    name = getattr(t, "name", None)
                    return tensor_spec.TensorSpec(
                        shape=tuple(shape), dtype=t.dtype, name=name
                    )
                except (ImportError, ModuleNotFoundError):
                    return None

            # Handle dict/list/single tensor inputs
            if isinstance(inputs, dict):
                return {k: _make_spec(v) for k, v in inputs.items()}
            if isinstance(inputs, (list, tuple)):
                return [_make_spec(t) for t in inputs]
            return _make_spec(inputs)
Contributor


medium

To improve future compatibility and adhere to best practices, it's better to use public APIs where possible. The current implementation uses a private import, `from tensorflow.python.framework import tensor_spec`. This can be replaced by importing tensorflow and using the public `tf.TensorSpec` API. This change will make the code more robust against future changes in TensorFlow's internal structure.

            # Fall back to building specs from `self.inputs`.
            try:
                import tensorflow as tf
            except (ImportError, ModuleNotFoundError):
                return None

            inputs = getattr(self, "inputs", None)
            if inputs is None:
                return None

            def _make_spec(t):
                # t is a tf.Tensor-like object
                shape = list(t.shape)
                if dynamic_batch and len(shape) > 0:
                    shape[0] = None
                # Convert to tuple for TensorSpec
                name = getattr(t, "name", None)
                return tf.TensorSpec(
                    shape=tuple(shape), dtype=t.dtype, name=name
                )

            # Handle dict/list/single tensor inputs
            if isinstance(inputs, dict):
                return {k: _make_spec(v) for k, v in inputs.items()}
            if isinstance(inputs, (list, tuple)):
                return [_make_spec(t) for t in inputs]
            return _make_spec(inputs)

Replaces class-based test cases with pytest parameterized functions for CausalLM, ImageClassifier, ObjectDetector, and ImageSegmenter LiteRT export tests. This improves test readability, reduces code duplication, and ensures each model configuration is tested independently with clearer output. Cleans up helper methods and consolidates numerical verification logic.
@pctablet505
Collaborator Author

@gemini-code-assist review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a comprehensive framework for exporting Keras-Hub models to the LiteRT format. The changes are well-structured, introducing base classes for exporters and configurations, a registry for extensibility, and specific implementations for various model types (CausalLM, ImageClassifiers, etc.). The integration is handled cleanly by extending the Task class's export method. The pull request also includes extensive tests, from unit tests for the new components to end-to-end numerical verification with production models, which is excellent. I've included a couple of suggestions to improve the maintainability and robustness of the new testing utilities and export registry.

Comment on lines 458 to 545
def _get_litert_output(self, interpreter, output_details):
    """Get output from LiteRT interpreter."""
    if len(output_details) == 1:
        return interpreter.get_tensor(output_details[0]["index"])
    else:
        litert_output = {}
        for detail in output_details:
            output_tensor = interpreter.get_tensor(detail["index"])
            litert_output[detail["name"]] = output_tensor
        return litert_output

def _verify_outputs(
    self,
    keras_output,
    litert_output,
    output_thresholds,
    comparison_mode,
):
    """Verify numerical accuracy between Keras and LiteRT outputs."""
    if isinstance(keras_output, dict) and isinstance(litert_output, dict):
        # Map LiteRT generic keys to Keras semantic keys if needed
        if all(
            key.startswith("StatefulPartitionedCall")
            for key in litert_output.keys()
        ):
            litert_keys_sorted = sorted(litert_output.keys())
            keras_keys_sorted = sorted(keras_output.keys())
            if len(litert_keys_sorted) != len(keras_keys_sorted):
                self.fail(
                    f"Different number of outputs:\n"
                    f"Keras: {len(keras_keys_sorted)} outputs -\n"
                    f"    {keras_keys_sorted}\n"
                    f"LiteRT: {len(litert_keys_sorted)} outputs -\n"
                    f"    {litert_keys_sorted}"
                )
            output_name_mapping = dict(
                zip(litert_keys_sorted, keras_keys_sorted)
            )
            mapped_litert = {
                keras_key: litert_output[litert_key]
                for litert_key, keras_key in output_name_mapping.items()
            }
            litert_output = mapped_litert

        common_keys = set(keras_output.keys()) & set(litert_output.keys())
        if not common_keys:
            self.fail(
                f"No common keys between Keras and LiteRT outputs.\n"
                f"Keras keys: {list(keras_output.keys())}\n"
                f"LiteRT keys: {list(litert_output.keys())}"
            )

        for key in sorted(common_keys):
            keras_val_np = ops.convert_to_numpy(keras_output[key])
            litert_val = litert_output[key]
            output_threshold = output_thresholds.get(
                key, output_thresholds.get("*", {"max": 10.0, "mean": 0.1})
            )
            self._compare_outputs(
                keras_val_np,
                litert_val,
                comparison_mode,
                key,
                output_threshold["max"],
                output_threshold["mean"],
            )
    elif not isinstance(keras_output, dict) and not isinstance(
        litert_output, dict
    ):
        keras_output_np = ops.convert_to_numpy(keras_output)
        output_threshold = output_thresholds.get(
            "*", {"max": 10.0, "mean": 0.1}
        )
        self._compare_outputs(
            keras_output_np,
            litert_output,
            comparison_mode,
            key=None,
            max_threshold=output_threshold["max"],
            mean_threshold=output_threshold["mean"],
        )
    else:
        keras_type = type(keras_output).__name__
        litert_type = type(litert_output).__name__
        self.fail(
            f"Output structure mismatch: Keras returns "
            f"{keras_type}, LiteRT returns {litert_type}"
        )
Contributor


high

The current implementation for verifying numerical accuracy between Keras and LiteRT outputs in _verify_outputs relies on sorting output names, which can be fragile. A more robust approach is to rely on the preserved order of model outputs.

I suggest refactoring _get_litert_output to always return a list of tensors in order, and simplifying _verify_outputs to compare two lists of tensors. This makes the test helper more robust and easier to maintain.

Here's a suggested implementation:

def _get_litert_output(self, interpreter, output_details):
    """Get output from LiteRT interpreter as a list of tensors."""
    return [interpreter.get_tensor(detail["index"]) for detail in output_details]

def _verify_outputs(
    self,
    keras_outputs, # list
    litert_outputs, # list
    output_thresholds,
    comparison_mode,
    output_names=None,
):
    """Verify numerical accuracy between Keras and LiteRT outputs."""
    self.assertEqual(
        len(keras_outputs),
        len(litert_outputs),
        f"Mismatch in number of outputs: Keras has {len(keras_outputs)}, "
        f"LiteRT has {len(litert_outputs)}",
    )
    if output_names is None:
        output_names = [f"output_{i}" for i in range(len(keras_outputs))]

    for i, key in enumerate(output_names):
        keras_val_np = ops.convert_to_numpy(keras_outputs[i])
        litert_val = litert_outputs[i]
        output_threshold = output_thresholds.get(
            key, output_thresholds.get("*", {"max": 10.0, "mean": 0.1})
        )
        self._compare_outputs(
            keras_val_np,
            litert_val,
            comparison_mode,
            key,
            output_threshold["max"],
            output_threshold["mean"],
        )

You would also need to adjust run_litert_export_test to prepare the keras_outputs as a list, for example:

# In run_litert_export_test
keras_output = model(keras_input_data)
litert_output_list = self._get_litert_output(interpreter, output_details)

output_names = None
if isinstance(keras_output, dict):
    output_names = model.output_names
    keras_output_list = [keras_output[name] for name in output_names]
elif isinstance(keras_output, (list, tuple)):
    keras_output_list = list(keras_output)
else:
    keras_output_list = [keras_output]

if verify_numerical_accuracy:
    self._verify_outputs(
        keras_output_list,
        litert_output_list,
        output_thresholds,
        comparison_mode,
        output_names=output_names,
    )

This change will make your tests less dependent on output naming heuristics and more on the stable output order.

Collaborator Author


The current name-based sorting approach is actually more robust here because:

  • Better error messages: "logits output failed" vs "output_2 failed"
  • Per-output thresholds: Can specify different tolerances for different outputs by name
  • Handles name mapping: Automatically maps LiteRT's generic names (like "StatefulPartitionedCall:0") to Keras semantic names

The sorting ensures consistent pairing even when LiteRT generates generic output names. This has proven reliable in practice across different model architectures.

Removed the export registry and related initialization logic, replacing it with a direct model type detection via `get_exporter_config`. The `Task` class now provides its own `export` method for specialized Keras-Hub model export, supporting dictionary inputs and LiteRT export. Cleaned up imports and removed registry-related tests and files.
@pctablet505
Collaborator Author

@gemini-code-assist review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a significant new feature: exporting Keras-Hub models to the LiteRT format. The implementation is well-structured, following an exporter pattern with base classes, model-specific configurations, and a dedicated LiteRT exporter. The use of adapter classes to handle Keras-Hub's dictionary-based inputs is a clever solution. The changes are accompanied by a comprehensive suite of tests, including unit tests, integration tests, and end-to-end numerical verification for a wide range of production models, which is excellent. The addition of run_litert_export_test to the base TestCase will be very valuable for future contributions. Overall, this is a high-quality contribution that greatly enhances the utility of Keras-Hub. I have found one critical issue in the convenience function export_litert that needs to be addressed.

Updates LiteRT exporter and related configs to support dynamic input shapes by default for text and image models, allowing runtime resizing via TFLite's interpreter.resize_tensor_input(). Removes static sequence length inference, adapts input signature logic, and updates tests to verify dynamic shape support and runtime resizing. Also improves multimodal model handling and input mapping for TFLite export.
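
The runtime-resizing flow that this enables looks roughly like the sketch below; a minimal illustration using the standard TFLite interpreter, where the model path and the [1, 256] shape are placeholders and every input is assumed to share the same [batch, sequence] layout.

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")

# Resize every input to the sequence length we actually want to run,
# then re-allocate tensors before inference.
for d in interpreter.get_input_details():
    interpreter.resize_tensor_input(d["index"], [1, 256])
interpreter.allocate_tensors()

# Feed dummy data of the resized shape and run the model.
for d in interpreter.get_input_details():
    interpreter.set_tensor(d["index"], np.zeros([1, 256], dtype=d["dtype"]))
interpreter.invoke()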