
QA: Creation of Test Cases for OpenAI Model Configuration Migration #29567

Closed
josemejias11 opened this issue Aug 13, 2024 · 1 comment
josemejias11 (Contributor) commented Aug 13, 2024

Parent Issue

No response

Task

Create a set of test cases for the migration of OpenAI model configurations from hardcoded values in the code to the dotAI.yml descriptor file. These test cases should ensure that the application behaves as expected with the new configuration approach and that all functionality continues to work correctly without regressions.
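For reference, a descriptor of this kind might look like the sketch below. The key names and values here are illustrative assumptions only, not the actual dotAI.yml schema:

```yaml
# Hypothetical dotAI.yml sketch -- keys and defaults are assumptions
# for illustration, not the real dotCMS descriptor schema.
openai:
  model: gpt-4
  temperature: 0.7
  maxTokens: 1024
```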

Proposed Objective

Quality Assurance

Proposed Priority

Priority 1 - Show Stopper

Acceptance Criteria

  • All test cases are created.
  • The application passes all end-to-end test scenarios.
  • No regressions are found, and all functionalities work as expected.
  • The application handles errors and invalid configurations gracefully.
  • Performance remains consistent with no significant impact.
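The "handles errors and invalid configurations gracefully" criterion could be exercised with a validation routine like the sketch below. This is a minimal illustration, assuming a config already parsed into a dictionary; the key names, defaults, and value ranges are assumptions, not dotCMS's actual schema or implementation.

```python
# Hypothetical sketch: sanitizing an AI model configuration after it has been
# loaded from a descriptor file such as dotAI.yml. Key names, defaults, and
# ranges below are illustrative assumptions, not the real dotCMS schema.

DEFAULTS = {
    "model": "gpt-3.5-turbo",
    "temperature": 0.7,
    "max_tokens": 1024,
}

def validate_config(raw: dict) -> dict:
    """Return a sanitized config, falling back to defaults on invalid values."""
    cfg = dict(DEFAULTS)
    for key, default in DEFAULTS.items():
        value = raw.get(key, default)
        # Reject wrong-typed or out-of-range values instead of failing hard.
        if key == "temperature":
            if not isinstance(value, (int, float)) or not 0.0 <= value <= 2.0:
                value = default
        elif key == "max_tokens":
            if not isinstance(value, int) or value <= 0:
                value = default
        elif key == "model":
            if not isinstance(value, str) or not value:
                value = default
        cfg[key] = value
    return cfg

# An out-of-range temperature falls back to the default:
print(validate_config({"model": "gpt-4", "temperature": 9.5}))
# → {'model': 'gpt-4', 'temperature': 0.7, 'max_tokens': 1024}
```

A test case built this way checks the graceful-degradation criterion directly: invalid values are replaced with safe defaults rather than propagating to the OpenAI client.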

External Links... Slack Conversations, Support Tickets, Figma Designs, etc.

No response

Assumptions & Initiation Needs

No response

Quality Assurance Notes & Workarounds

Related to #29281

Sub-Tasks & Estimates

No response

josemejias11 (Contributor, Author) commented

QA Comment

Test cases created for the migration of the AI model:
https://docs.google.com/spreadsheets/d/1u-_nkTs-ZBhVvUKKhgG0KiySYgUL2GDoZsR9mmGMwq0/edit?usp=sharing

Projects
Status: Done