
Loading directly from Hugging Face #3

@pengzhangzhi

Description


Hi,
In the current codebase, we have to download the checkpoint locally and load it using the following method:

# download "{model}.safetensors" locally
# and load it like below
model = ESM2.from_pretrained("{model}.safetensors", device=0)
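In the meantime, a possible workaround is to fetch the weights from the Hub at runtime with huggingface_hub.hf_hub_download and pass the resulting local path to from_pretrained. A minimal sketch, assuming the repo stores a model.safetensors file and that ESM2.from_pretrained accepts a local path as above:

# workaround sketch: pull the checkpoint from the Hub at runtime
# (the "model.safetensors" filename is an assumption about the repo layout)
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="facebook/esm2_t30_150M_UR50D",
    filename="model.safetensors",
)
model = ESM2.from_pretrained(path, device=0)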

I wonder if we can load the checkpoint directly from Hugging Face, for example:

model = ESM2.from_pretrained("facebook/esm2_t30_150M_UR50D", device=0)

That way, it would be more straightforward to replace an existing ESM2 codebase with a flash-attention version.
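One way this could be implemented, sketched here under the assumption that from_pretrained currently accepts only a local file path: treat any argument that is not an existing file as a Hub repo id and download the weights first. The helper name _resolve_checkpoint and the default filename are hypothetical:

import os
from huggingface_hub import hf_hub_download

def _resolve_checkpoint(name_or_path: str, filename: str = "model.safetensors") -> str:
    # hypothetical helper: return a local file as-is; otherwise treat the
    # argument as a Hugging Face repo id and download the weights from the Hub
    if os.path.isfile(name_or_path):
        return name_or_path
    return hf_hub_download(repo_id=name_or_path, filename=filename)

from_pretrained could then call _resolve_checkpoint before loading, so both ESM2.from_pretrained("esm2.safetensors") and ESM2.from_pretrained("facebook/esm2_t30_150M_UR50D") would work. Note that if the parameter names in this codebase differ from the layout stored in the facebook/esm2_* repos, a key-remapping step would also be needed.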

It seems doable to me because eesm shares the same model architecture as ESM2, except for the use of flash attention.

Would love to hear your thoughts!

Labels

enhancement (New feature or request), question (Further information is requested)
