Description
📦 Platform
Self-hosting Docker
📦 Deployment mode
server db (lobe-chat-database image)
📌 Version
v1.79.6
💻 Operating System
Ubuntu
🌐 Browser
Chrome
🐛 Bug Description
The chat offers an option to upload a file and send it to the model to work with.
With all four provider APIs I've tried, the quality of the model's response is as if the file had been translated into Ancient Egyptian and back again.
📷 Recurrence Steps
- Take a file containing a scientific publication (paper).
- Upload it to the chat on the OpenAI website and ask for a structured summary via the system prompt.
- Repeat the same steps with the same model in LobeChat.
The difference is huge: in LobeChat the model hallucinates, misses key elements, and speculates wherever it can, while on OpenAI's site the same model behaves as expected and produces a very useful summary.
🚦 Expected Behavior
The same output quality as in the original model provider's service.
📝 Additional Information
The same happens with Claude, Gemini, and Grok.
The system agent settings have no effect; the result is the same.
The question is apparently how LobeChat processes the file and in what form it actually passes it to the models.
Either a mismatched model is used when building the vector representation, or the text is compressed and truncated beyond usefulness; either way, the feature is unusable in its current state.
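To illustrate my suspicion about lossy processing: if LobeChat chunks the file and sends the model only the top-k retrieved chunks instead of the full text, everything outside those chunks is invisible to the model, which would explain the missed key elements. The following is a minimal hypothetical sketch of such a chunk-and-retrieve pipeline; all function names are my own invention and LobeChat's actual implementation may differ entirely.

```python
# Hypothetical sketch of a lossy chunk-and-retrieve (RAG-style) pipeline.
# Only the retrieved chunks reach the model; the rest of the paper is lost.

def chunk(text: str, size: int = 200, overlap: int = 20) -> list[str]:
    """Split text into fixed-size character chunks with a small overlap."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def retrieve(chunks: list[str], query: str, k: int = 2) -> list[str]:
    """Naive keyword-overlap scoring, standing in for real vector search."""
    q = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

paper = ("Abstract: we study X. " * 10
         + "Methods: detailed protocol. " * 10
         + "Results: the key finding is Y. " * 10)

# The model never sees `paper`, only this reduced context:
context = retrieve(chunk(paper), "structured summary of results", k=2)
prompt = "Summarize:\n" + "\n".join(context)
```

If the summary quality depends this heavily on which chunks happen to be retrieved, that would match the behavior I'm seeing across all four providers.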
(Unfortunately I can't provide examples right now, as my LobeChat instance is unavailable due to ISP issues, but I will add them when possible.)