Describe your problem
Description:
I've been using RAGFlow for a few weeks and am having trouble maintaining context across sequential questions.
Scenario:
I have a knowledge base containing sales data segmented by month. When I ask:
"How much did product X sell?"
The model correctly asks for clarification:
"Which month?"
However, when I specify the month in the next prompt, the model often loses track of the original question. Instead of maintaining context, it retrieves new chunks that are no longer relevant to the initial inquiry, leading to inconsistencies or hallucinations in the responses.
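My rough mental model of the failure, using made-up function names rather than RAGFlow's actual internals, is that only the latest prompt drives retrieval:

```python
def embed(text: str) -> list[float]:
    # Stand-in for the real embedding model.
    return [float(ord(c)) for c in text[:8]]

def vector_search(query_vec: list[float], top_k: int = 3) -> list[str]:
    # Stand-in for the knowledge-base lookup; in reality this returns the
    # chunks that score highest against query_vec alone.
    return [f"chunk ranked only against the latest prompt (top_k={top_k})"]

# Turn 1: the full question drives retrieval, so the chunks are on-topic.
chunks_turn_1 = vector_search(embed("How much did product X sell?"))

# Turn 2: only the clarification is embedded. "Product X" is no longer part
# of the retrieval query, so the fetched chunks drift away from the original
# question and the answer is generated from unrelated context.
chunks_turn_2 = vector_search(embed("January"))
```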
What I've tried:
- Using metadata in the documents, which improved retrieval but didn't fully resolve the issue (a rough sketch of this setup follows the list).
- Explicit instructions for the model to maintain context across prompts.
- Different approaches to question formulation.
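For the metadata attempt, the setup was roughly as follows (the field names and the toy ranking are only illustrative, not what RAGFlow requires):

```python
# Each chunk carries the month it belongs to, and retrieval is narrowed to
# that month before ranking.
chunks = [
    {"text": "Product X sold 120 units.", "month": "2024-01"},
    {"text": "Product Y sold 80 units.",  "month": "2024-01"},
    {"text": "Product X sold 95 units.",  "month": "2024-02"},
]

def retrieve(query: str, month: str | None = None) -> list[dict]:
    candidates = [c for c in chunks if month is None or c["month"] == month]
    # Real ranking uses embeddings; keyword overlap keeps the sketch short.
    return [c for c in candidates
            if all(w.lower() in c["text"].lower() for w in query.split())]

# Filtering by month helps, but only if the follow-up turn still knows the
# question is about product X, which is exactly where the context gets lost.
print(retrieve("product X", month="2024-01"))
```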
Questions:
- How can I ensure that relevant chunks are aggregated until a complete answer is formed?
- What is an effective strategy to help the model retain context between prompts?
- The issue seems to stem from the fact that the model's instructions apply at generation time and do not directly influence chunk retrieval. What would be the best way to mitigate this? (A sketch of the behavior I'm hoping for is below.)
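For reference, the behavior I'm hoping for is to condense the chat history and the follow-up into a standalone query before any chunks are fetched. This is my own sketch, not an existing RAGFlow feature as far as I know:

```python
def rewrite_query(history: list[str], follow_up: str) -> str:
    # In a real pipeline an LLM would do this condensation; it is hard-coded
    # here only to show the intended input and output.
    return f"{history[0]} (for {follow_up})"

history = ["How much did product X sell?", "Which month?"]
standalone = rewrite_query(history, "January")
# -> "How much did product X sell? (for January)"
# Retrieving against this condensed query keeps product X in scope, so the
# chunks fetched for turn 2 stay relevant to the original question.
print(standalone)
```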
Any guidance would be greatly appreciated!