Enhance Stream Chat to Stream Data in Chunks Based on HTTP Status Evaluation #30361

Open

victoralfaro-dotcms opened this issue Oct 16, 2024 · 1 comment

Parent Issue

No response

Problem Statement

dotAI's "Stream Chat" since it currently waits until it gets all the contents instead of sending chunks of data as one would expect in a streaming mode. This happens because, in order to apply an OpenAI model fallback strategy, we wait for the contents to be evaluated to determine what kind of error happened.

Steps to Reproduce

  • Set up the dotAI application correctly
  • Go to Dev Tools->dotAI
  • Select the Stream Chat option
  • Type a question
  • The response is displayed as a whole rather than chunk by chunk

Acceptance Criteria

  • The system streams data in chunks once the HTTP status code is evaluated as 2xx (see the sketch after this list).
  • The fallback strategy remains intact and continues to handle error cases as intended.
  • Testing covers all potential 2xx HTTP status code scenarios and verifies that bytes are streamed correctly.
  • No regression in existing fallback mechanisms.
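A minimal Java sketch of the intended flow, independent of the actual dotCMS classes: read the upstream response headers first, evaluate the status code, stream 2xx bodies chunk by chunk, and buffer everything else for the existing fallback path. The names `StreamingChatSketch`, `relayChat`, and `handleFallback` are hypothetical, and the JDK `HttpClient` stands in for whatever client dotAI actually uses.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StreamingChatSketch {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    /**
     * Relays a chat completion request. The status code is evaluated as soon as
     * the response headers arrive; 2xx bodies are streamed chunk by chunk, while
     * non-2xx bodies are buffered so the existing fallback strategy can inspect them.
     */
    static void relayChat(final HttpRequest request, final OutputStream downstream)
            throws IOException, InterruptedException {
        // BodyHandlers.ofInputStream() returns once the headers are available,
        // so the status can be checked before the body is consumed.
        final HttpResponse<InputStream> response =
                CLIENT.send(request, HttpResponse.BodyHandlers.ofInputStream());

        try (InputStream body = response.body()) {
            if (response.statusCode() / 100 == 2) {
                // Success: forward bytes as they arrive instead of waiting for the end.
                final byte[] buffer = new byte[8192];
                int read;
                while ((read = body.read(buffer)) != -1) {
                    downstream.write(buffer, 0, read);
                    downstream.flush(); // push each chunk to the client immediately
                }
            } else {
                // Error: keep today's behaviour and read the whole payload so the
                // fallback strategy can decide whether to retry with another model.
                handleFallback(response.statusCode(), body.readAllBytes());
            }
        }
    }

    // Placeholder for the existing OpenAI model fallback logic (out of scope here).
    private static void handleFallback(final int status, final byte[] errorPayload) {
        // ...
    }
}
```

The relevant point for the acceptance criteria is that the 2xx check depends only on the headers, so successful responses start flushing bytes immediately while the error branch keeps the current buffered evaluation untouched.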

dotCMS Version

24.10.09

Proposed Objective

Code Maintenance

Proposed Priority

Priority 2 - Important

External Links... Slack Conversations, Support Tickets, Figma Designs, etc.

streaming.results.mp4

No response

Assumptions & Initiation Needs

No response

Quality Assurance Notes & Workarounds

No response

Sub-Tasks & Estimates

No response

Projects
Status: In Progress