Description
Chainlit encounters a critical failure when handling files that contain Base64‑encoded text within user conversations.
When a user uploads or downloads such a file and then asks the RAG agent to decode its content, the agent fails to decode it.
For a new user with no prior conversation history, the failure is benign: the agent simply responds with an error such as "I cannot answer this question."
However, once the same user accumulates multiple conversations and several Base64-encoded files in their history, the issue becomes severe.
Upon logging out and back in, Chainlit attempts to reload the full conversation history. During this process, it encounters the previously stored problematic file, causing the entire Chainlit UI to crash.
In Kubernetes environments, this leads to the pod failing and restarting repeatedly, returning 503 Service Unavailable indefinitely.
The application becomes unusable until the database is manually cleared.
Steps to Reproduce
1. Log in as a user with no prior conversation history.
2. Upload or download a file containing Base64-encoded text.
3. Ask the RAG agent to decode the content.
4. Observe that the agent fails to decode it (expected minor failure).
5. Continue generating multiple conversations and upload multiple files containing Base64 text.
6. Log out.
7. Log back in. Chainlit attempts to reload the user's conversation history.
8. The UI crashes, and the Kubernetes pod enters a restart loop; the application returns continuous 503 errors.
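For convenience, the steps above need a file containing Base64 text. A minimal sketch to generate one (the filename and payload are arbitrary; any Base64 content of similar size should do):

```python
import base64

# Arbitrary payload, repeated so the file is non-trivially large.
payload = b"The quick brown fox jumps over the lazy dog." * 100
encoded = base64.b64encode(payload).decode("ascii")

# Write the Base64 text to a file that can then be uploaded to the agent.
with open("sample_base64.txt", "w") as f:
    f.write(encoded)

print(f"Wrote {len(encoded)} Base64 characters to sample_base64.txt")
```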
Expected Behavior
The RAG agent should gracefully handle Base64-encoded content or at least fail safely.
Chainlit should be able to reload past conversations and associated file metadata without crashing.
A corrupt or unreadable file should not break the entire UI or application.
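As an illustration of "fail safely", a decode step could catch the error and return a placeholder instead of raising. This is only a sketch of the expected behavior, not Chainlit's actual code; `safe_decode` is a hypothetical helper:

```python
import base64
import binascii

def safe_decode(text: str) -> str:
    """Attempt to decode Base64 content; on failure, return a readable
    placeholder instead of raising, so one bad file cannot crash the UI."""
    try:
        return base64.b64decode(text, validate=True).decode("utf-8")
    except (binascii.Error, UnicodeDecodeError, ValueError):
        return "[unreadable Base64 content]"
```

The same pattern applied while rendering stored conversation history would let a corrupt element display as a placeholder rather than take down the whole page.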
Actual Behavior
RAG agent cannot decode Base64 content.
After multiple similar files/conversations exist in the user history, Chainlit:
- Crashes on login
- Fails to render the UI
- Causes the Kubernetes pod to fail and restart repeatedly
- Returns persistent 503 Service Unavailable responses
The only recovery method is to delete the database.
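A less destructive workaround than wiping the whole database may be to purge only the affected user's history. The sketch below assumes a SQLite backend with `threads`, `steps`, and `elements` tables keyed by `userId`/`threadId` (roughly following Chainlit's SQLAlchemy data layer); verify the table and column names against your actual schema before running anything like this:

```python
import sqlite3

def purge_user_history(db_path: str, user_id: str) -> None:
    """Delete one user's threads and their dependent rows in a single
    transaction, leaving other users' data untouched."""
    conn = sqlite3.connect(db_path)
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute(
                "DELETE FROM steps WHERE threadId IN "
                "(SELECT id FROM threads WHERE userId = ?)", (user_id,))
            conn.execute(
                "DELETE FROM elements WHERE threadId IN "
                "(SELECT id FROM threads WHERE userId = ?)", (user_id,))
            conn.execute("DELETE FROM threads WHERE userId = ?", (user_id,))
    finally:
        conn.close()
```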
Environment
Deployment: Kubernetes
Issue Reproducibility: Always (after user has multiple conversations and Base64 files)
Impact: Full application outage for affected user; system unusable until DB is wiped
Additional Notes
The problem does not occur for new users with no history.
The issue appears to be triggered only after the user accumulates several conversations and files.