Evaluation run for all "good open weight models" with all available quantizations and different GPUs #209

@zimmski

Description

See https://www.reddit.com/r/LocalLLaMA/comments/1dlsxab/comment/l9rzjj7/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

Not sure how we should do that yet. CPU-only inference will break us here, and speed metrics are important as well.
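
As a rough illustration of what such an evaluation run would need to record, here is a minimal sketch in Go. All names are hypothetical and not tied to this repository's actual code; it only shows keeping model, quantization, and GPU alongside a throughput metric, and flagging CPU-only configurations, which the description calls out as a problem.

```go
package main

import (
	"errors"
	"fmt"
)

// BenchmarkRun captures one evaluation of a model under a specific
// quantization on a specific device. All field and type names are hypothetical.
type BenchmarkRun struct {
	Model        string  // e.g. "llama-3-8b-instruct"
	Quantization string  // e.g. "Q4_K_M", "Q8_0", "fp16"
	Device       string  // e.g. "RTX 4090", or "cpu" for CPU-only inference
	TokensPerSec float64 // speed metric for the run
	Score        float64 // functional evaluation score
}

// validate rejects configurations that cannot be compared fairly,
// such as CPU-only inference or runs without a speed metric.
func (r BenchmarkRun) validate() error {
	if r.Device == "cpu" {
		return errors.New("skipping CPU-only inference: speed metrics are not comparable")
	}
	if r.TokensPerSec <= 0 {
		return errors.New("missing speed metric (tokens/sec)")
	}
	return nil
}

func main() {
	runs := []BenchmarkRun{
		{Model: "llama-3-8b-instruct", Quantization: "Q4_K_M", Device: "RTX 4090", TokensPerSec: 110.5, Score: 0.82},
		{Model: "llama-3-8b-instruct", Quantization: "Q8_0", Device: "cpu", TokensPerSec: 7.2, Score: 0.84},
	}
	for _, r := range runs {
		if err := r.validate(); err != nil {
			fmt.Printf("%s/%s on %s: %v\n", r.Model, r.Quantization, r.Device, err)
			continue
		}
		fmt.Printf("%s/%s on %s: %.1f tok/s, score %.2f\n", r.Model, r.Quantization, r.Device, r.TokensPerSec, r.Score)
	}
}
```

Whether CPU-only runs are skipped entirely or just flagged is a separate decision; the sketch only illustrates keeping the speed metric alongside the quality score per model/quantization/GPU combination.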

Metadata

Assignees

No one assigned

Labels

enhancement: New feature or request
postponed: This issue/PR is postponed until there is a very good reason (e.g. $$$) to implement it.

Type

No type

Projects

No projects

Relationships

None yet

Development

No branches or pull requests