
Add mark step and inplace residual add in llama model code #833

Merged

Conversation

puneeshkhanna
Contributor

The mark step helps reduce workspace memory by approximately twice the size of a (BS, seq_len, hidden_dim) tensor.

The in-place add helps reduce persistent tensors by approximately twice the size of a (BS, seq_len, hidden_dim) tensor.

Also adds a lazy mode parameter.
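The mark-step placement described above can be sketched roughly as follows. This is a hedged illustration of the pattern, not code from the PR: the model/layer structure is hypothetical, and `htcore.mark_step()` (the real Gaudi call that flushes the lazily accumulated graph so workspace buffers can be freed) is stubbed out so the sketch runs off-device.

```python
# Sketch of the mark_step + lazy_mode pattern (structure assumed, not
# taken verbatim from the PR).
try:
    import habana_frameworks.torch.core as htcore  # real call on Gaudi
    mark_step = htcore.mark_step
except ImportError:
    def mark_step() -> None:
        pass  # no-op stand-in when not running on Gaudi

def model_forward(hidden_states, decoder_layers, lazy_mode=True):
    # In lazy mode, breaking the graph before the decoder-layer loop lets
    # the runtime release intermediate workspace buffers early.
    if lazy_mode:
        mark_step()
    for layer in decoder_layers:
        hidden_states = layer(hidden_states)
    return hidden_states
```

The `lazy_mode` flag mirrors the new parameter the PR adds, so eager-mode execution skips the graph break entirely.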


What does this PR do?

Fixes # (issue)

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

…memory consumption (HabanaAI#65)

* Add mark step and inplace add.

Mark step helping in reducing workspace memory by
approx twice of (BS,seq len, hidden dim).

Inplace add helping in reducing persistent tensors by
approx twice of (BS, seq len, hidden dim).

Signed-off-by: Puneesh Khanna <pkhanna@habana.ai>

* Add lazy mode parameter

* Move mark step within the loop

* Move mark step before the loop

* Fix indentation

* update in place add only for inference

---------

Signed-off-by: Puneesh Khanna <pkhanna@habana.ai>
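The "in place add only for inference" change from the commits above can be sketched as follows. This is a pure-Python stand-in for the tensor semantics (real tensor code would use `hidden_states.add_(residual)`); the function name and signature are hypothetical.

```python
def add_residual(hidden_states, residual, training=False):
    # Out-of-place in training: autograd needs the original inputs intact.
    if training:
        return [h + r for h, r in zip(hidden_states, residual)]
    # In-place at inference: reuse hidden_states' buffer instead of
    # allocating another persistent (BS, seq_len, hidden_dim) tensor.
    for i, r in enumerate(residual):
        hidden_states[i] += r
    return hidden_states
```

Guarding on `training` matters because mutating a tensor that autograd saved for backward would corrupt gradients; at inference there is no backward pass, so the buffer can be reused safely.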
@puneeshkhanna
Copy link
Contributor Author

As an example, for a config of BS=172, seq_len=2048, hidden_dim=8192 (tensor size ~5.3 GB) for Llama-70B on 8x:
Max memory usage (without flash attention) reduced from ~86 GB to ~66 GB.
Max memory usage (with flash attention) reduced from ~70 GB to ~59 GB.

With the reduced memory consumption from this PR we can go to even higher batch sizes, so these are very important changes.
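A quick back-of-the-envelope check of the quoted tensor size, assuming bf16 activations (2 bytes per element; the dtype is an assumption, not stated in the PR):

```python
# Size of one (BS, seq_len, hidden_dim) activation tensor for the
# config quoted above, assuming bf16 (2 bytes/element).
BS, seq_len, hidden_dim = 172, 2048, 8192
bytes_per_elem = 2  # bf16 -- an assumption
size_gb = BS * seq_len * hidden_dim * bytes_per_elem / 1024**3
# Roughly 5.4 GB, consistent with the ~5.3 GB figure quoted above;
# saving ~2x this per change lines up with the ~10-20 GB reductions reported.
```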

@regisss - please review and merge.

Collaborator

@regisss regisss left a comment


LGTM! Just need to run make style.

Quick question, the new mark_step in the forward of GaudiLlamaModel will also benefit training right?

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@puneeshkhanna
Contributor Author

@regisss - mark_step has no side effect on training; it should mainly help reduce workspace memory.

@puneeshkhanna
Contributor Author

@regisss - make style is fixed; it just required an empty line after the import statement. It should pass now.

@regisss regisss added the run-test Run CI for PRs from external contributors label Apr 5, 2024
@regisss regisss merged commit 090627d into huggingface:main Apr 5, 2024
11 of 12 checks passed
regisss pushed a commit that referenced this pull request Apr 5, 2024
Signed-off-by: Puneesh Khanna <pkhanna@habana.ai>