Add mark step and inplace residual add in llama model code to reduce memory consumption #65

Merged · 7 commits · Feb 29, 2024
Changes from 1 commit
update in place add only for inference
Puneesh Khanna authored Feb 29, 2024
commit 8eab266b339b013aea80750b5436896952606894
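
The PR title also mentions mark_step, which is not visible in this commit's hunks. For context, below is a minimal sketch of how a mark_step call is typically used on Gaudi; it assumes an HPU environment with habana_frameworks installed, and the placement is illustrative rather than the exact placement from this PR.

    import torch
    import habana_frameworks.torch.core as htcore

    hidden_states = torch.randn(2, 4096, device="hpu")
    hidden_states = hidden_states * 2.0  # some work accumulated in lazy mode
    # mark_step flushes the ops accumulated so far to the device, which bounds
    # how much of the graph (and its intermediate tensors) stays live at once.
    htcore.mark_step()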
15 changes: 11 additions & 4 deletions optimum/habana/transformers/models/llama/modeling_llama.py
@@ -551,8 +551,12 @@ def pre_attn(
     def post_attn_pre_mlp(self, hidden_states, residual):
         hidden_states = self.self_attn.post_attn_forward(hidden_states)

-        residual.add_(hidden_states)
-        hidden_states = residual
+        if self.training:
+            hidden_states = hidden_states + residual
+            residual = hidden_states
+        else:
+            residual.add_(hidden_states)
+            hidden_states = residual

         hidden_states = self.post_attention_layernorm(hidden_states)

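The training branch keeps the out-of-place hidden_states + residual because autograd forbids in-place writes to tensors it has saved for the backward pass. A minimal standalone sketch of that failure mode in plain PyTorch (not the model code):

    import torch

    x = torch.randn(4, requires_grad=True)
    residual = torch.exp(x)        # exp saves its output for the backward pass
    residual.add_(torch.ones(4))   # in-place write clobbers that saved output

    try:
        residual.sum().backward()
    except RuntimeError as err:
        print(err)  # "... has been modified by an inplace operation"

The same reasoning applies to the post_mlp hunk below.
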
@@ -562,8 +566,11 @@ def post_attn_pre_mlp(self, hidden_states, residual):
     def post_mlp(self, hidden_states, residual):
         hidden_states = self.mlp.post_mlp_forward(hidden_states)

-        residual.add_(hidden_states)
-        hidden_states = residual
+        if self.training:
+            hidden_states = hidden_states + residual
+        else:
+            residual.add_(hidden_states)
+            hidden_states = residual

         return hidden_states

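At inference there is no backward pass, so residual.add_(hidden_states) can safely reuse the residual tensor's storage instead of allocating a fresh tensor for every residual connection, which is where the memory saving comes from. A small illustration in plain PyTorch (not the model code):

    import torch

    with torch.inference_mode():
        residual = torch.randn(2, 4096)
        hidden_states = torch.randn(2, 4096)

        out_of_place = residual + hidden_states  # allocates a new tensor
        print(out_of_place.data_ptr() == residual.data_ptr())  # False

        residual.add_(hidden_states)             # writes into residual's existing storage
        hidden_states = residual
        print(hidden_states.data_ptr() == residual.data_ptr())  # True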