[New Model]: Chameleon support #5721

Closed

Description

@nopperl

The model to consider.

https://huggingface.co/facebook/chameleon
(as of now, the models can only be downloaded after filling out the request form on the model page)

Chameleon is an interesting multimodal model architecture based on Llama 2. It adds image inputs and outputs to Llama 2 by tokenizing images with a VQ-VAE and adding the resulting codebook to Llama's tokenizer vocabulary.
In principle, it supports arbitrary combinations of text and images as both input and output. However, the released models were fine-tuned to prevent image generation.
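
To make the early-fusion idea concrete, here is a minimal sketch of how image tokens could share a single sequence with text tokens. The vocabulary sizes, the `vq_encode` callable, and the offset scheme are illustrative assumptions, not Chameleon's actual constants:

```python
import torch

TEXT_VOCAB_SIZE = 65_536  # assumed size of the text vocabulary
NUM_IMAGE_CODES = 8_192   # assumed size of the VQ-VAE codebook

def image_to_tokens(vq_encode, image: torch.Tensor) -> torch.Tensor:
    """Quantize an image with the VQ-VAE and shift the codebook indices
    past the text vocabulary, so image and text tokens share one id space."""
    codes = vq_encode(image)        # (num_image_tokens,) codebook indices
    return codes + TEXT_VOCAB_SIZE  # image token ids follow the text ids

# Usage sketch: interleave text and image tokens into one sequence that a
# Llama-style decoder can consume unchanged.
# input_ids = torch.cat([text_ids, image_to_tokens(vq.encode, image)])
```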

The closest model vllm already supports.

LlamaForCausalLM

What's your difficulty of supporting the model you want?

For text->text support, the implementation should be fairly easy. The model is based on Llama-2 with the following differences (sketched in code after the list):

  • QK norm (the queries and keys are normalized before attention)
  • norm reordering similar to Swin Transformer (normalizing the outputs of the attention and FFN blocks instead of the inputs)
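
A minimal PyTorch sketch of the modified attention path, showing both differences. The module names and the choice of LayerNorm are assumptions for illustration, not vllm's or Meta's actual code; rotary embeddings are omitted for brevity:

```python
import torch
import torch.nn as nn

class ChameleonStyleAttention(nn.Module):
    """Illustrative: QK norm plus Swin-style norm reordering, where the norm
    is applied to the block output before the residual add."""

    def __init__(self, hidden_size: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = hidden_size // num_heads
        self.qkv = nn.Linear(hidden_size, 3 * hidden_size, bias=False)
        self.out = nn.Linear(hidden_size, hidden_size, bias=False)
        self.q_norm = nn.LayerNorm(self.head_dim)  # QK norm on queries
        self.k_norm = nn.LayerNorm(self.head_dim)  # QK norm on keys
        self.out_norm = nn.LayerNorm(hidden_size)  # norm on the block output

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, s, h = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = self.q_norm(q.view(b, s, self.num_heads, self.head_dim))
        k = self.k_norm(k.view(b, s, self.num_heads, self.head_dim))
        v = v.view(b, s, self.num_heads, self.head_dim)
        attn = nn.functional.scaled_dot_product_attention(
            q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2),
            is_causal=True,
        ).transpose(1, 2).reshape(b, s, h)
        # Llama-2 normalizes the block *input*; here the *output* is
        # normalized instead, then added to the residual stream.
        return x + self.out_norm(self.out(attn))
```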

To enable image inputs, image tokenization using the provided VQ-VAE needs to be added.
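
The core of that tokenization step is a nearest-codebook lookup; a hedged sketch, with shapes and names assumed rather than taken from the released code:

```python
import torch

def vq_tokenize(z: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """Map each latent vector from the VQ-VAE encoder to the index of its
    nearest codebook entry. z: (num_patches, dim) encoder outputs;
    codebook: (num_codes, dim). Both shapes are assumptions."""
    dists = torch.cdist(z, codebook)  # pairwise L2 distances
    return dists.argmin(dim=-1)       # (num_patches,) discrete image tokens
```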

Further info:
