Add generic export_to_transformers to the base classes #2346

Open: wants to merge 17 commits into master
Conversation

@Bond099 Bond099 (Contributor) commented Jul 26, 2025

Description of the change

Reference

Colab Notebook

https://colab.research.google.com/drive/1CNUkqbRTBPirTaU1-2UrWuK2dTZhgOtn?usp=sharing

Checklist

  • I have added all the necessary unit tests for my change.
  • I have verified that my change does not break existing code and works with all backends (TensorFlow, JAX, and PyTorch).
  • My PR is based on the latest changes of the main branch (if unsure, rebase the code).
  • I have followed the Keras Hub Model contribution guidelines in making these changes.
  • I have followed the Keras Hub API design guidelines in making these changes.
  • I have signed the Contributor License Agreement.

@gemini-code-assist gemini-code-assist (bot) left a comment:

Summary of Changes

Hello @Bond099, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant enhancement by providing direct export capabilities from Keras Hub models and tokenizers to the Hugging Face Transformers format. This change streamlines the process of converting Keras-based components for use within the Hugging Face ecosystem, improving interoperability and ease of use for developers working with both frameworks.

Highlights

  • Direct Hugging Face Export for Models: I've added a new export_to_transformers method to the Backbone and CausalLM base classes. This allows users to directly export their Keras models (or just the backbone) into a format compatible with Hugging Face Transformers, including configuration and weights, simplifying interoperability.
  • Direct Hugging Face Export for Tokenizers: A corresponding export_to_transformers method has been added to the Tokenizer base class. This enables the direct export of Keras tokenizers, saving their assets in a format that Hugging Face Transformers can readily consume.
  • Refactored Export Logic: The core export logic in keras_hub/src/utils/transformers/export/hf_exporter.py has been modularized. The existing export_to_safetensors function now accepts a verbose argument, and two new functions, export_backbone and export_tokenizer, have been introduced to handle the specific export needs of model backbones and tokenizers, respectively.
  • Gemma Model Export Improvements: For Gemma models, I've updated the Hugging Face configuration to explicitly set tie_word_embeddings to True. Additionally, the explicit weight tying for lm_head.weight in the Gemma exporter has been removed, as this is now implicitly handled by the tie_word_embeddings configuration in Hugging Face.
  • Updated Export Tests: The gemma_test.py file has been updated to reflect and validate the new export_to_transformers methods. Tests now separately export and verify the backbone, tokenizer, and the full CausalLM model, ensuring that all components are correctly converted and loadable by Hugging Face.
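The design described in the highlights above can be sketched in a few lines: base classes expose a generic `export_to_transformers` method that delegates to per-component exporter functions (the PR names these `export_backbone` and `export_tokenizer`). The plumbing below is a simplified illustration of that pattern, not the actual keras-hub implementation; the class bodies and file contents are stand-ins.

```python
import json
import os


# Illustrative exporter functions, modeled on the `export_backbone` /
# `export_tokenizer` split described above. The real exporters write a full
# Hugging Face config and safetensors weights; here we only write stubs to
# show the control flow.
def export_backbone(backbone, path, verbose=True):
    os.makedirs(path, exist_ok=True)
    with open(os.path.join(path, "config.json"), "w") as f:
        json.dump({"model_type": backbone.model_type}, f)
    if verbose:
        print(f"Exported backbone to {path}")


def export_tokenizer(tokenizer, path, verbose=True):
    os.makedirs(path, exist_ok=True)
    with open(os.path.join(path, "tokenizer_config.json"), "w") as f:
        json.dump({"tokenizer_class": type(tokenizer).__name__}, f)
    if verbose:
        print(f"Exported tokenizer to {path}")


class Backbone:
    model_type = "gemma"  # stand-in attribute for illustration

    def export_to_transformers(self, path, verbose=True):
        export_backbone(self, path, verbose=verbose)


class Tokenizer:
    def export_to_transformers(self, path, verbose=True):
        export_tokenizer(self, path, verbose=verbose)


class CausalLM:
    def __init__(self, backbone, tokenizer):
        self.backbone = backbone
        self.tokenizer = tokenizer

    def export_to_transformers(self, path, verbose=True):
        # The full-model export reuses the component exporters, so a user can
        # export the whole task or just one piece with the same method name.
        self.backbone.export_to_transformers(path, verbose=verbose)
        self.tokenizer.export_to_transformers(path, verbose=verbose)
```

With this shape, `CausalLM(...).export_to_transformers(path)` produces a directory containing both model and tokenizer artifacts, while `Backbone` or `Tokenizer` instances can be exported on their own.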

@gemini-code-assist gemini-code-assist (bot) left a comment:

Code Review

This pull request introduces a generic export_to_transformers method to the Backbone, CausalLM, and Tokenizer base classes, enabling models to be exported to the Hugging Face Transformers format. The implementation adds new exporter functions and updates tests accordingly. I've provided feedback to reduce code duplication, improve error message clarity, and enhance code documentation.
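One Gemma-specific change in this PR is setting `tie_word_embeddings` to true and dropping the explicit `lm_head.weight` export, since a tied LM head is just the transpose of the token-embedding matrix and Hugging Face reconstructs it from the config flag. A minimal numpy sketch of what tying means (illustrative only, not exporter code; all names here are made up):

```python
import numpy as np

# With tied embeddings, the output projection reuses the token-embedding
# matrix, so a checkpoint only needs to store `embedding` once. When
# `tie_word_embeddings` is true, the LM head weight is the same tensor.
vocab_size, hidden_dim = 8, 4
rng = np.random.default_rng(0)
embedding = rng.normal(size=(vocab_size, hidden_dim))  # token embeddings

hidden_states = rng.normal(size=(2, hidden_dim))  # last-layer activations
logits = hidden_states @ embedding.T  # tied LM head: no separate weight

# A separately stored lm_head equal to `embedding` would be redundant:
lm_head_weight = embedding
assert np.allclose(logits, hidden_states @ lm_head_weight.T)
```

This is why the exporter can safely stop writing `lm_head.weight` once the config declares the tie.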

@divyashreepathihalli (Collaborator) commented:

Can you add a colab example demo?

@mattdangerw mattdangerw added the kokoro:force-run Runs Tests on GPU label Jul 31, 2025
@kokoro-team kokoro-team removed the kokoro:force-run Runs Tests on GPU label Jul 31, 2025
@mattdangerw mattdangerw (Member) left a comment:

Thanks! Overall looks good! Just had some questions on the API and a request for more testing.

@abheesht17 abheesht17 added the kokoro:force-run Runs Tests on GPU label Aug 7, 2025
@kokoro-team kokoro-team removed the kokoro:force-run Runs Tests on GPU label Aug 7, 2025
@abheesht17 abheesht17 (Collaborator) left a comment:

Thanks! LGTM, mostly. Left one important question and a few nits.

@abheesht17 abheesht17 (Collaborator) left a comment:

LGTM, thanks! I'm going to let @mattdangerw do another pass on this

@mattdangerw mattdangerw (Member) left a comment:

Thanks!

@mattdangerw mattdangerw (Member) left a comment:

One last nit!


    def _create_gemma_for_export_tests(self):
        proto = os.path.join(
            os.path.dirname(__file__),

@mattdangerw mattdangerw (Member) left a comment:

Can't we just do this the same as our other usages?

    proto = os.path.join(self.get_test_data_dir(), "gemma_export_vocab.spm")

@@ -43,3 +49,26 @@ def test_from_preset_errors(self):
        with self.assertRaises(ValueError):
            # No loading on a non-keras model.
            GPT2CausalLMPreprocessor.from_preset("hf://spacy/en_core_web_sm")

    def test_export_supported_preprocessor(self):
        proto = os.path.join(

@mattdangerw mattdangerw (Member) left a comment:

Can't we just do this the same as our other usages?

    proto = os.path.join(self.get_test_data_dir(), "gemma_export_vocab.spm")

@@ -113,3 +114,32 @@ def test_save_to_preset(self, cls, preset_name, tokenizer_type):
        # Check config class.
        tokenizer_config = load_json(save_dir, TOKENIZER_CONFIG_FILE)
        self.assertEqual(cls, check_config_class(tokenizer_config))

    def test_export_supported_tokenizer(self):
        proto = os.path.join(

@mattdangerw mattdangerw (Member) left a comment:

    proto = os.path.join(self.get_test_data_dir(), "gemma_export_vocab.spm")

            user_defined_symbols=["<start_of_turn>", "<end_of_turn>"],
        )
        tokenizer = GemmaTokenizer(proto=f"{proto_prefix}.model")
        proto = os.path.join(

@mattdangerw mattdangerw (Member) left a comment:

    proto = os.path.join(self.get_test_data_dir(), "gemma_export_vocab.spm")

@mattdangerw mattdangerw added the kokoro:force-run Runs Tests on GPU label Aug 15, 2025
@kokoro-team kokoro-team removed the kokoro:force-run Runs Tests on GPU label Aug 15, 2025
5 participants