
[HANDS-ON BUG] Error in chat completion in unit 1 dummy agent library #627

@rahulsalgare

Description


Describe the bug
Running the code from the Unit 1 dummy agent library section raises the error 'the requested model is not a chat model'.

To Reproduce

import os
from huggingface_hub import InferenceClient

HF_TOKEN = os.environ.get("HF_TOKEN")

client = InferenceClient(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    token=HF_TOKEN,  # pass the token read above so the request is authenticated
)

output = client.chat.completions.create(
    messages=[
        {"role": "user", "content": "The capital of France is"},
    ],
    stream=False,
    max_tokens=20,
)
print(output.choices[0].message.content)

Running this produces the following error:

Bad request:
{'message': "The requested model 'meta-llama/Llama-4-Scout-17B-16E-Instruct' is not a chat model.", 'type': 'invalid_request_error', 'param': 'model', 'code': 'model_not_supported'}
