
[Question]: Difference between Local LLM and Online LLM #5137

Open
@TomTigerDong

Description

Describe your problem

When RAGFlow calls a large model deployed locally with Ollama, it answers questions very quickly. However, when it calls an external large-model API, the request seems to get stuck somewhere and the response is very slow. It also appears to "dumb down" the model: the answers are noticeably worse.

Update: this issue has been resolved. When calling an external large-model API, RAGFlow hides the model's thinking process, which is why the response appears slow. The results are actually consistent with expectations, and the model is not being "dumbed down".
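For context, here is a minimal sketch of why hiding the thinking process can make an online model feel slow. This is not RAGFlow's actual implementation; it assumes the external model is a reasoning model that wraps its chain of thought in `<think>...</think>` tags (DeepSeek-R1-style output) and that the frontend strips that segment before display, so the user sees nothing until the reasoning is finished even though tokens are arriving the whole time:

```python
import re

# Matches a hidden reasoning segment emitted before the visible answer
# (assumed <think>...</think> convention; not RAGFlow's real code).
THINK_BLOCK = re.compile(r"<think>.*?</think>", flags=re.DOTALL)

def strip_thinking(raw_answer: str) -> str:
    """Remove the hidden reasoning segment, keeping only the visible answer."""
    return THINK_BLOCK.sub("", raw_answer).strip()

raw = "<think>Compare local vs. online deployment, weigh latency...</think>The answer quality is the same; only latency differs."
print(strip_thinking(raw))  # -> "The answer quality is the same; only latency differs."
```

With a local Ollama model that emits no thinking segment, the first visible token appears almost immediately, which explains the perceived speed difference.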
