[1/n] torchtune <> llama-stack integration skeleton #540
Merged
Conversation
raghotham (Member) reviewed on Dec 3, 2024 and added some initial comments.
Review comments (now outdated and resolved) were left on:
- llama_stack/providers/inline/post_training/meta_reference/config.py
- ...stack/providers/inline/post_training/meta_reference/recipes/lora_finetuning_single_device.py
- llama_stack/providers/inline/post_training/meta_reference/utils.py
ashwinb reviewed on Dec 4, 2024 (multiple review rounds).
ashwinb reviewed on Dec 11, 2024 (multiple review rounds).
A review comment (resolved) was left on llama_stack/providers/inline/post_training/torchtune/recipes/lora_finetuning_single_device.py.
ashwinb reviewed again on Dec 11 and Dec 12, 2024.
Context
This is the first of a series of PRs that integrate torchtune with llama-stack as the meta reference post-training implementation. For the MVP, we focus on single-device LoRA SFT.
Though this PR is still WIP, we want early feedback on the high-level design of this skeleton while we work out several details.
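To make the shape of the skeleton concrete, here is a rough, hypothetical sketch; the class, method, and field names below are assumptions inferred from the file paths and CLI surface in this PR, not the actual code:

```python
# Hypothetical sketch only: names are inferred from the paths
# llama_stack/providers/inline/post_training/torchtune/... and the
# `post_training supervised_fine_tune` CLI surface; the real PR may differ.
from dataclasses import dataclass, field


@dataclass
class LoraFinetuningConfig:
    """Illustrative single-device LoRA SFT config; field names are assumptions."""
    lora_rank: int = 8
    lora_alpha: int = 16
    lora_attn_modules: list[str] = field(default_factory=lambda: ["q_proj", "v_proj"])


class TorchtunePostTrainingImpl:
    """Hypothetical inline provider that delegates SFT to a torchtune recipe."""

    def __init__(self, config: LoraFinetuningConfig) -> None:
        self.config = config

    async def supervised_fine_tune(self, job_uuid: str, model: str) -> None:
        # Placeholder: the real implementation would construct and run the
        # recipe in recipes/lora_finetuning_single_device.py.
        raise NotImplementedError
```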
Scope
To limit the scope of this PR, we focus on the skeleton of the implementation.
What is included?
What is not included?
Testing
e2e test
Although we haven't added detailed tests and a numerical parity check against torchtune yet, we did run a simple E2E test from client to server:
Build and run the server with:
- `llama stack build --template experimental-post-training --image-type conda`
- `llama stack run experimental-post-training`

Then invoke the endpoint from the client:
- `llama-stack-client --endpoint http://devgpu018.nha2.facebook.com:5000 post_training supervised_fine_tune`

server (screenshot)

client (screenshot)
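Equivalently, the same endpoint could be exercised from Python. This is a hedged sketch of such a call, not the exact client code used above; the method signature and parameter names may differ:

```python
# Hedged sketch of a Python client call mirroring the CLI invocation above.
# The exact llama-stack-client method signature may differ; required config
# arguments (training/algorithm configs) are omitted here for brevity.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5000")

# Kick off a single-device LoRA SFT job; values are illustrative.
job = client.post_training.supervised_fine_tune(
    job_uuid="sft-job-0",
    model="Llama3.2-3B-Instruct",
)
print(job)
```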
parity check

- The torchtune dataloader output and the llama-stack post-training dataloader output are the same.
- torchtune LoRA SFT and llama-stack post-training LoRA SFT on the alpaca dataset with the Llama 3.2 3B Instruct model are a numerical match (see the sketch after this list for how such a check can be expressed).
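A minimal sketch of the numerical-match check, assuming per-step training losses are collected from both runs; the helper name and tolerance are placeholders, not the actual test code:

```python
# Minimal parity-check sketch: compare per-step training losses from a
# torchtune run against the llama-stack recipe. Tolerance is illustrative.
import torch


def assert_losses_match(torchtune_losses: list[float],
                        llama_stack_losses: list[float],
                        atol: float = 1e-5) -> None:
    a = torch.tensor(torchtune_losses)
    b = torch.tensor(llama_stack_losses)
    # Elementwise closeness check over the whole loss curve.
    assert torch.allclose(a, b, atol=atol), \
        f"max abs diff: {(a - b).abs().max().item()}"
```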
**unit test**
(screenshot of unit test results, 2024-12-09; image upload incomplete)