Support YaRN models (RoFormer implementation in rotary_embedding kernel) #980

Closed
@casper-hansen

Description

YaRN models with context sizes of 64k and 128k were recently released and pre-trained by people from Nous Research and EleutherAI. They use the RoFormer type of rotary embedding, which seems different from the GPT-NeoX and GPT-J variants. They are based on Llama 2, so support is mostly there; only small adjustments are needed.
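
For orientation, here is a minimal PyTorch sketch of the two rotary pairing conventions most kernels implement, the half-split GPT-NeoX style and the interleaved GPT-J style, to illustrate the kind of layout difference at stake. This is an illustrative assumption for context, not vLLM's kernel code:

```python
import torch

def rotate_neox(x: torch.Tensor) -> torch.Tensor:
    # GPT-NeoX layout: channel i in the first half pairs with channel
    # i + head_dim/2 in the second half.
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def rotate_interleaved(x: torch.Tensor) -> torch.Tensor:
    # GPT-J layout: adjacent channels form the rotated pairs,
    # i.e. (x0, x1), (x2, x3), ...
    x1, x2 = x[..., 0::2], x[..., 1::2]
    return torch.stack((-x2, x1), dim=-1).flatten(-2)

def apply_rope(x, cos, sin, interleaved: bool):
    # cos/sin: [seq_len, head_dim/2], expanded to match the pairing.
    if interleaved:
        cos = cos.repeat_interleave(2, dim=-1)
        sin = sin.repeat_interleave(2, dim=-1)
        return x * cos + rotate_interleaved(x) * sin
    cos = torch.cat((cos, cos), dim=-1)
    sin = torch.cat((sin, sin), dim=-1)
    return x * cos + rotate_neox(x) * sin
```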

The original YaRN repository uses the flash-attention rotary embedding implementation, which appears similar in functionality. The original RoFormer implementation from Hugging Face may also be useful as a reference.
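
For reference, the YaRN scheme modifies the base RoPE frequencies rather than the rotation itself. Below is a minimal, hedged sketch of the "NTK-by-parts" interpolation following the notation of the YaRN paper (`scale`, `beta_fast`, `beta_slow`); treat it as a sketch under those assumptions, not this repo's or the reference repo's exact code:

```python
import math
import torch

def yarn_inv_freq(dim: int, base: float = 10000.0, scale: float = 16.0,
                  orig_max_pos: int = 4096,
                  beta_fast: float = 32.0, beta_slow: float = 1.0):
    # Per-pair frequencies of plain RoPE and of linear position interpolation.
    pos_freqs = base ** (torch.arange(0, dim, 2).float() / dim)
    extrapolated = 1.0 / pos_freqs            # unchanged RoPE frequencies
    interpolated = 1.0 / (scale * pos_freqs)  # frequencies divided by scale

    def correction_dim(num_rotations: float) -> float:
        # Pair index whose wavelength completes `num_rotations` full turns
        # over the original context window.
        return (dim * math.log(orig_max_pos / (num_rotations * 2 * math.pi))
                / (2 * math.log(base)))

    low = max(math.floor(correction_dim(beta_fast)), 0)
    high = min(math.ceil(correction_dim(beta_slow)), dim // 2 - 1)
    # Ramp is 0 for high-frequency pairs (kept as-is) and 1 for
    # low-frequency pairs (fully interpolated), linear in between.
    ramp = torch.clamp(
        (torch.arange(dim // 2).float() - low) / max(high - low, 1), 0.0, 1.0)
    return extrapolated * (1 - ramp) + interpolated * ramp
```

The paper also rescales the attention logits by a temperature of roughly 0.1 * ln(scale) + 1, typically folded into the cos/sin tables; that detail is omitted above for brevity.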

Model catalog:
https://huggingface.co/NousResearch/Yarn-Llama-2-7b-64k
https://huggingface.co/NousResearch/Yarn-Llama-2-7b-128k
https://huggingface.co/NousResearch/Yarn-Llama-2-13b-64k
https://huggingface.co/NousResearch/Yarn-Llama-2-13b-128k
