
Add support for LongT5 optimization using ORT transformer optimizer script #683

Merged

Conversation

Contributor

@kunal-vaishnavi commented Jan 6, 2023

What does this PR do?

This PR adds support for optimizing LongT5 using the ORT transformer optimizer script.

Example usage:

from optimum.onnxruntime import ORTModelForSeq2SeqLM, AutoOptimizationConfig, ORTOptimizer
import torch
import time

model_name = 'google/long-t5-local-base'
device = 'cuda'
batch_size = 4
sequence_length = 16
inputs = {
    'input_ids': torch.ones((batch_size, sequence_length), dtype=torch.int64).to(device),
    'attention_mask': torch.ones((batch_size, sequence_length), dtype=torch.int64).to(device),
    'decoder_input_ids': torch.ones((batch_size, sequence_length), dtype=torch.int64).to(device)
}

# Export the PyTorch checkpoint to ONNX and run it on the CUDA execution provider
model = ORTModelForSeq2SeqLM.from_pretrained(model_name, from_transformers=True, use_io_binding=True, provider='CUDAExecutionProvider')
start_time = time.time()
outputs_before = model(**inputs)
end_time = time.time()
print(f"Latency before optimization: {end_time - start_time}")

# Apply the O3 (GPU) optimization level, which runs the ORT transformer optimizer
optimized_dir = './long-t5-local-base-optimized'
config = AutoOptimizationConfig.with_optimization_level('O3', for_gpu=True)
optimizer = ORTOptimizer.from_pretrained(model)
optimizer.optimize(save_dir=optimized_dir, optimization_config=config)

# Reload the optimized model, then compare latency and outputs
model_optimized = ORTModelForSeq2SeqLM.from_pretrained(optimized_dir, use_io_binding=True, provider='CUDAExecutionProvider')
start_time = time.time()
outputs_after = model_optimized(**inputs)
end_time = time.time()
print(f"Latency after optimization: {end_time - start_time}")

print(f"Are logits close? {torch.allclose(outputs_before.logits, outputs_after.logits)}")

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

@HuggingFaceDocBuilderDev commented Jan 8, 2023

The documentation is not available anymore as the PR was closed or merged.

Contributor

@fxmarty left a comment


LGTM, thank you for your contribution!

@fxmarty merged commit 4d764e1 into huggingface:main on Jan 10, 2023