WizardCoder Inference accuracy dropped a lot compared to fastchat or vllm #4001

Closed
@sanigochien

Description

Could this be a bug in llama.cpp?
Please forgive me if this is not a proper issue report.
