Test burn-in for Cypress Cloud #25503

Open
muratkeremozcan opened this issue Jan 18, 2023 · 2 comments

What would you like?

Test burn-in was advertised over a year ago as "Coming soon," but it has since fallen off the radar.

We want new or edited tests to be run repeatedly to verify they are flake-free, stateless, and order-independent.

Why is this needed?

Using Cypress at scale, our biggest pain point is low-quality tests that make their way into main, end up being flaky, and consume maintenance time later.

We want any new or edited tests to be run repeatedly to verify they are flake-free, stateless, and order-independent.

Handling low-quality tests up front, as opposed to having to maintain them later, will reduce the overall bandwidth we have to invest in testing.

Other

We have manual test burn-in triggers at the moment, but since they are not part of PRs, they are opt-in and do not get used.
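For illustration, one common way to burn a test in manually is to generate it repeatedly within a spec, for example with the Lodash build that Cypress bundles as `Cypress._`. This is only a minimal sketch; the `burnCount` env value, routes, and selectors are made up for the example:

```ts
/// <reference types="cypress" />

// Minimal burn-in sketch: generate the same test N times in one run.
// Pass the count from the CLI, e.g. `cypress run --env burnCount=10`.
// `Cypress._` is the Lodash instance bundled with Cypress.
const burnCount: number = Cypress.env('burnCount') ?? 5; // hypothetical env var

describe('checkout (burn-in)', () => {
  Cypress._.times(burnCount, (i) => {
    it(`places an order reliably (run ${i + 1} of ${burnCount})`, () => {
      cy.visit('/checkout');                        // example route
      cy.contains('button', 'Place order').click(); // example selector
      cy.contains('Order confirmed').should('be.visible');
    });
  });
});
```

Because this is opt-in per spec, it tends not to get used unless it is wired into the PR workflow, which is exactly the gap this request is about.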

@nagash77 added the routed-to-cloud and Cypress Cloud labels Jan 19, 2023
@ryanpei (Contributor) commented Feb 2, 2023

@muratkeremozcan thanks for adding this! We are actively working on this now. I have a couple of questions (and apologies if someone on my team previously asked you some of these):

  1. What numbers are you using for burn-in? For example, a new test needs to show X passes out of Y attempts. And is this a default setting?
  2. How are you thinking about burn-in versus fail retries? Would you consider it sensible for the same retries configuration to apply to burn-in? Effectively, any new or modified test with retries configured would also use that same number of attempts even if the first attempt passes. (The setting in question is sketched after this list.)
  3. Under what conditions do you want burn-in to apply? For example, on all new or modified tests? Manually selecting specific tests on each run? Applying to specific groups? Specific branches?
  4. How do you treat flaky tests? Must developers fix all flaky tests in their PR before they can merge to main? What happens if a flaky test suddenly appears in main?
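For reference, the retries configuration question 2 refers to is the per-project setting in `cypress.config.ts`; a minimal sketch, with illustrative values only:

```ts
// cypress.config.ts — the test-retries setting referenced in question 2.
// Values are illustrative, not a recommendation.
import { defineConfig } from 'cypress';

export default defineConfig({
  retries: {
    runMode: 2,  // retry failing tests up to 2 extra times in `cypress run`
    openMode: 0, // no retries in interactive `cypress open`
  },
  e2e: {
    baseUrl: 'http://localhost:3000', // placeholder
  },
});
```

The question, then, is whether a burn-in feature should reuse `runMode` as its attempt count or have a separate knob.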

@muratkeremozcan (Author) commented Feb 22, 2023

Answering your questions in order:

  1. It's a wide range; it has been 5, 10, or 20. We want all the tests to pass. We use a GitHub manual trigger where the number is customizable. I would imagine Cypress Cloud would have such a setting per project, and it would be ideal if meta-level controls existed; we have 90+ projects.

  2. Mutually exclusively. For us, retries are always on and set to 2. When burning in, ideally we do not want to see any retries; a retry means something wasn't as good as it should be. But we leave retries on to see how many times a test had to retry to reach n executions. The data is more important in this case.

  3. Run all new and modified tests. Run them with priority before any other tests to save costs, just like Cypress Cloud runs failed tests first right now. At the moment, triggering burn-in manually is possible with GitHub Actions, but the goal is to never even need to perform this chore. I cannot think of a need to apply it to specific groups, or to anything besides PRs as far as branches go. A rough sketch of what we mean follows after this list.

  4. It depends. Ideally we are not in a monorepo and all CI is green; that means "developers fix all flaky tests" to get the PR to merge. In a monorepo, GitHub is limited and it is hard to impossible to require the Cypress flake job to succeed when subfolders conditionally trigger tests, or never trigger them.
     If a flaky test appears in main, we have test leads who monitor test-suite quality daily; ideally we treat flaky tests like production issues and create Jira tickets from Cypress Cloud. The whole goal of this feature request is to significantly reduce, and possibly eliminate, this maintenance/monitoring work.
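To make point 3 concrete, here is a rough sketch of "run new and modified specs first" under simple assumptions (base branch is `origin/main`, specs end in `.cy.ts`, Node 18+); this is not an existing Cypress Cloud feature, just the kind of chore we would like it to absorb:

```ts
// run-changed-specs.ts — rough sketch: diff against main, run changed specs first.
import { execSync } from 'node:child_process';

const changedSpecs = execSync('git diff --name-only origin/main...HEAD', {
  encoding: 'utf8',
})
  .split('\n')
  .filter((file) => file.endsWith('.cy.ts'));

if (changedSpecs.length > 0) {
  // `cypress run --spec` accepts a comma-separated list of files/globs.
  // Burn-in repetition (e.g. the burnCount env sketched earlier) would be layered on top.
  execSync(`npx cypress run --spec "${changedSpecs.join(',')}"`, { stdio: 'inherit' });
} else {
  console.log('No changed specs; nothing to burn in.');
}
```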

@nagash77 added the Triaged label and removed the routed-to-cloud label Apr 19, 2023