I was trying to run WebLLM in my Next.js app to run inference with a lightweight model like `mlc-ai/gemma-3-1b-it-q4f16_1-MLC`, but I get "model not found" in the console log. When I use the sample model from their Next.js example setup (`Llama-3.1-8B-Instruct-q4f32_1-MLC`), I can see the model being downloaded in the browser and cached in IndexedDB.

Am I missing something?
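
For reference, this is roughly how I'm loading the model (a minimal sketch, not my full setup; the chat message and the check against `prebuiltAppConfig` are just for illustration, and the unprefixed model ID is my guess at the expected form):

```ts
import { CreateMLCEngine, prebuiltAppConfig } from "@mlc-ai/web-llm";

async function loadModel() {
  // Log the model IDs bundled with the installed web-llm version,
  // to check whether the target model is registered at all.
  console.log(prebuiltAppConfig.model_list.map((m) => m.model_id));

  // Note: the sample model ID ("Llama-3.1-8B-Instruct-q4f32_1-MLC")
  // has no "mlc-ai/" HuggingFace org prefix, so I assume the Gemma ID
  // should be unprefixed as well.
  const engine = await CreateMLCEngine("gemma-3-1b-it-q4f16_1-MLC", {
    initProgressCallback: (report) => console.log(report.text),
  });

  // Simple smoke test once the weights are cached.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Hello!" }],
  });
  console.log(reply.choices[0]?.message.content);
}
```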