Add generic export_to_transformers to the base classes #2346
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: master
Conversation
Summary of Changes
Hello @Bond099, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces a significant enhancement by providing direct export capabilities from Keras Hub models and tokenizers to the Hugging Face Transformers format. This change streamlines the process of converting Keras-based components for use within the Hugging Face ecosystem, improving interoperability and ease of use for developers working with both frameworks.
Highlights
- **Direct Hugging Face Export for Models:** A new `export_to_transformers` method has been added to the `Backbone` and `CausalLM` base classes. This allows users to directly export their Keras models (or just the backbone) into a format compatible with Hugging Face Transformers, including configuration and weights, simplifying interoperability.
- **Direct Hugging Face Export for Tokenizers:** A corresponding `export_to_transformers` method has been added to the `Tokenizer` base class. This enables the direct export of Keras tokenizers, saving their assets in a format that Hugging Face Transformers can readily consume.
- **Refactored Export Logic:** The core export logic in `keras_hub/src/utils/transformers/export/hf_exporter.py` has been modularized. The existing `export_to_safetensors` function now accepts a `verbose` argument, and two new functions, `export_backbone` and `export_tokenizer`, have been introduced to handle the specific export needs of model backbones and tokenizers, respectively.
- **Gemma Model Export Improvements:** For Gemma models, the Hugging Face configuration now explicitly sets `tie_word_embeddings` to `True`. Additionally, the explicit weight tying for `lm_head.weight` in the Gemma exporter has been removed, as this is now implicitly handled by the `tie_word_embeddings` configuration in Hugging Face.
- **Updated Export Tests:** The `gemma_test.py` file has been updated to reflect and validate the new `export_to_transformers` methods. Tests now separately export and verify the backbone, tokenizer, and the full `CausalLM` model, ensuring that all components are correctly converted and loadable by Hugging Face.
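The delegation pattern described in the highlights — base classes exposing a generic `export_to_transformers` that forwards to format-specific helpers — can be sketched in a self-contained way. The class and function bodies below are illustrative stand-ins, not the actual keras-hub implementation; only the method and helper names come from the PR summary:

```python
# Sketch of the export delegation pattern: base classes expose
# `export_to_transformers`, which forwards to format-specific helpers
# (`export_backbone`, `export_tokenizer`). Bodies here are stand-ins.
import json
import os
import tempfile


def export_backbone(backbone, path, verbose=True):
    # Stand-in for the real exporter: write an HF-style config.json.
    os.makedirs(path, exist_ok=True)
    config = {"model_type": backbone.model_type, "tie_word_embeddings": True}
    with open(os.path.join(path, "config.json"), "w") as f:
        json.dump(config, f)
    if verbose:
        print(f"Exported backbone config to {path}")


def export_tokenizer(tokenizer, path, verbose=True):
    # Stand-in: persist tokenizer assets alongside the model config.
    os.makedirs(path, exist_ok=True)
    with open(os.path.join(path, "tokenizer_config.json"), "w") as f:
        json.dump({"tokenizer_class": type(tokenizer).__name__}, f)


class Backbone:
    model_type = "gemma"

    def export_to_transformers(self, path, verbose=True):
        export_backbone(self, path, verbose=verbose)


class Tokenizer:
    def export_to_transformers(self, path, verbose=True):
        export_tokenizer(self, path, verbose=verbose)


class CausalLM:
    def __init__(self, backbone, tokenizer):
        self.backbone = backbone
        self.preprocessor = type("P", (), {"tokenizer": tokenizer})()

    def export_to_transformers(self, path, verbose=True):
        # The full-model export covers both the backbone and the tokenizer.
        self.backbone.export_to_transformers(path, verbose=verbose)
        self.preprocessor.tokenizer.export_to_transformers(path, verbose=verbose)


with tempfile.TemporaryDirectory() as out:
    CausalLM(Backbone(), Tokenizer()).export_to_transformers(out, verbose=False)
    print(sorted(os.listdir(out)))  # → ['config.json', 'tokenizer_config.json']
```

The point of the split is that each base class owns a single entry point, while the format-specific logic stays in `hf_exporter.py`, so a `CausalLM` export is just "backbone export + tokenizer export" in one call.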
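As background for the Gemma change above: with `tie_word_embeddings=True`, the output projection (`lm_head`) reuses the input embedding matrix, so the exporter no longer needs to copy `lm_head.weight` explicitly. A minimal plain-Python illustration of what tying means (no framework code, all values invented for the example):

```python
# Weight tying illustrated: the lm_head projection is the *same object*
# as the input embedding matrix, so there is no separate tensor to export.
vocab_size, hidden_dim = 4, 3
embedding = [
    [0.1 * (i + j) for j in range(hidden_dim)] for i in range(vocab_size)
]


def lm_head_logits(hidden_state, weight):
    # Output projection: logits[v] = <hidden_state, weight[v]>.
    return [sum(h * w for h, w in zip(hidden_state, row)) for row in weight]


lm_head_weight = embedding  # tying: shared reference, nothing copied
hidden = [1.0, 0.0, -1.0]
logits = lm_head_logits(hidden, lm_head_weight)
assert lm_head_weight is embedding  # no standalone lm_head.weight exists
print(len(logits))  # → 4, one logit per vocabulary entry
```

This is why setting the flag in the exported config and dropping the explicit `lm_head.weight` copy are two halves of the same fix: Hugging Face re-creates the tie from the config at load time.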
Code Review
This pull request introduces a generic `export_to_transformers` method to the `Backbone`, `CausalLM`, and `Tokenizer` base classes, enabling models to be exported to the Hugging Face Transformers format. The implementation adds new exporter functions and updates tests accordingly. I've provided feedback to reduce code duplication, improve error message clarity, and enhance code documentation.
Can you add a Colab example demo?
Thanks! Overall looks good! Just had some questions on the API and a request for more testing.
Thanks! LGTM, mostly. Left one important question and a few nits.
LGTM, thanks! I'm going to let @mattdangerw do another pass on this
Thanks!
One last nit!
```python
def _create_gemma_for_export_tests(self):
    proto = os.path.join(
        os.path.dirname(__file__),
```
Can't we just do this the same as our other usages?

```python
proto = os.path.join(self.get_test_data_dir(), "gemma_export_vocab.spm")
```
```python
@@ -43,3 +49,26 @@ def test_from_preset_errors(self):
        with self.assertRaises(ValueError):
            # No loading on a non-keras model.
            GPT2CausalLMPreprocessor.from_preset("hf://spacy/en_core_web_sm")

    def test_export_supported_preprocessor(self):
        proto = os.path.join(
```
Can't we just do this the same as our other usages?

```python
proto = os.path.join(self.get_test_data_dir(), "gemma_export_vocab.spm")
```
```python
@@ -113,3 +114,32 @@ def test_save_to_preset(self, cls, preset_name, tokenizer_type):
        # Check config class.
        tokenizer_config = load_json(save_dir, TOKENIZER_CONFIG_FILE)
        self.assertEqual(cls, check_config_class(tokenizer_config))

    def test_export_supported_tokenizer(self):
        proto = os.path.join(
```
```python
proto = os.path.join(self.get_test_data_dir(), "gemma_export_vocab.spm")
```
```python
    user_defined_symbols=["<start_of_turn>", "<end_of_turn>"],
)
tokenizer = GemmaTokenizer(proto=f"{proto_prefix}.model")
proto = os.path.join(
```
```python
proto = os.path.join(self.get_test_data_dir(), "gemma_export_vocab.spm")
```
Description of the change
Reference
Colab Notebook
https://colab.research.google.com/drive/1CNUkqbRTBPirTaU1-2UrWuK2dTZhgOtn?usp=sharing

Checklist