This repository has been archived by the owner on Jun 21, 2024. It is now read-only.
Description
from transformers import AutoTokenizer, T5ForConditionalGeneration
import torch

def SteamSHP(input_query: str) -> str:
    # Use a GPU if one is available, otherwise fall back to CPU.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    tokenizer = AutoTokenizer.from_pretrained("stanfordnlp/SteamSHP-flan-t5-large")
    model = T5ForConditionalGeneration.from_pretrained(
        "stanfordnlp/SteamSHP-flan-t5-large"
    ).to(device)
    x = tokenizer([input_query], return_tensors="pt").input_ids.to(device)
    # The model emits a single new token ("A" or "B") naming the preferred response.
    y = model.generate(x, max_new_tokens=1)
    output = tokenizer.batch_decode(y, skip_special_tokens=True)
    # batch_decode returns a list with one entry for our single input.
    return output[0]
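For reference, the SteamSHP model card describes an input template in which the post and the two candidate responses are concatenated, and the model answers with a single token, "A" or "B". A minimal sketch of building such a prompt (the helper `build_shp_prompt` is hypothetical, not part of this repository; the exact template should be checked against the model card):

```python
def build_shp_prompt(post: str, response_a: str, response_b: str) -> str:
    # Hypothetical helper: assembles the preference-query template that
    # SteamSHP-style models are described as expecting. Verify the exact
    # wording and spacing against the stanfordnlp/SteamSHP model card.
    return (
        f"POST: {post}\n\n"
        f"RESPONSE A: {response_a}\n\n"
        f"RESPONSE B: {response_b}\n\n"
        "Which response is better? RESPONSE"
    )

prompt = build_shp_prompt(
    "How do I keep sourdough starter alive while on vacation?",
    "Refrigerate it; feeding once a week is usually enough.",
    "Just throw it out and buy a new one later.",
)
```

The resulting string would then be passed as `input_query` to `SteamSHP`, and the returned token indicates which response the model prefers.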