
Fail tests on console errors #389

Open
dominiqueclarke opened this issue Oct 1, 2021 · 1 comment

Comments

@dominiqueclarke
Contributor

dominiqueclarke commented Oct 1, 2021

As a front end engineer, I want insight into when console errors are present, and the option to fail synthetics tests when they occur.

#226 added the ability to capture unhandled page errors in addition to console.error and console.warn.

Here are some of the capabilities I imagine may be beneficial:

  • Fail on all errors
    It may be desired to fail synthetics tests for all page errors, whether console.error calls or unhandled page errors.
  • Fail on errors matching a regex
    It may be desired to fail synthetics tests on errors whose messages match a specific pattern.
  • Fail on a certain number of errors
    It may be desired to fail after an error threshold is crossed, especially when application authors expect a specific number of known errors but do not anticipate additional ones.

Option 1: Introduce a flag to fail on page errors

Add a --fail-on-page-error <pageerrorconfig> flag

Introducing a flag to fail on page errors would allow users to fail steps on any error by passing --fail-on-page-error, or only on errors matching a pattern by passing --fail-on-page-error "regex". We may also need a separate --fail-on-page-error-count flag for the error-count threshold. When these flags are in use, we'd wait until the step finishes in order to collect helpful debugging information for the entire step, including the offending console logs, and then fail the step.

Option 2: Allow custom failures

Playwright allows you to test your web applications the way users use them, by identifying the presence of specific elements on the page. Console errors are implementation details that are not of interest to users. For this reason, failing on console errors is a very specific use case: it's rare that a major page failure can't easily be tested based on visible HTML elements. However, we recently identified a use case where that kind of testing is difficult, and capturing page errors would have identified a bug before release.

There may be other, specific criteria that app authors want their synthetics tests to fail on that we may not be able to anticipate. To mitigate this, we could introduce a step.fail mechanism that allows app authors to opt into failing their tests under criteria that may be unknown to the synthetics agent.

App authors can already do this today by throwing an error, but that has drawbacks. Throwing an error stops the step before synthetics finishes recording helpful debugging information for it. In the case of console errors, authors could attempt to inspect page errors themselves using Playwright's page.on('pageerror') handler, like below:

import { journey } from '@elastic/synthetics';

journey('example journey', ({ page, params }) => {
  // Re-throw page errors with a specific message to force a failure.
  page.on('pageerror', (error) => {
    if (error.message === 'boom') {
      throw error;
    }
  });
  // ... steps
});

However, this will cause the journey to exit without capturing the page error within the journey/browserconsole document, where it would be most helpful. Also, as far as I can tell (@vigneshshanmugam, correct me if I'm wrong), the custom thrown error is not captured in any documents.

By introducing a step.fail mechanism, we can better control capturing debugging information before the step ends. Alternatively, we may be able to adjust how we handle custom thrown errors so that debugging information is captured before the step ends.
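To make the idea concrete, here's a rough sketch of how a journey might use such a mechanism. This is purely hypothetical: step.fail does not exist today, and the name, signature, step titles, and URL below are placeholders.

import { journey, step } from '@elastic/synthetics';

journey('example journey', ({ page }) => {
  const pageErrors = [];
  page.on('pageerror', (error) => pageErrors.push(error));

  step('load home page', async () => {
    await page.goto('https://example.com');
  });

  step('check for page errors', async () => {
    // Hypothetical API: mark the step as failed without throwing,
    // so the agent can finish recording screenshots, console logs, etc.
    if (pageErrors.length > 0) {
      step.fail(`Encountered ${pageErrors.length} page error(s): ${pageErrors[0].message}`);
    }
  });
});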

@paulb-elastic
Contributor

I like your idea of a documented mechanism where users can write logic to fail a step in a controlled manner (like your option 2 with something like step.fail), giving them a properly completed step, screenshot, result data, etc.

This gives a lot of flexibility to the user, who can write any logic that suits their needs,
e.g. if (amount_of_items_in_stock < 100) step.fail("maybe some details here");

For specific checks, I think that is pretty flexible. For more general checks (e.g. checking that the number of errors doesn't exceed some threshold, or using a regex to look for a specific type of error, as you've described), this almost warrants something that can be shared (you probably don't want to write that logic manually into each step).

Maybe we could have something like the after() method (as described here), but one that's called after each step (i.e. an afterStep() method).

The user could then write something like checkNumErrorsDontExceed(n), which they call from afterStep().
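For illustration, here's a minimal sketch of that idea. Note that afterStep() and checkNumErrorsDontExceed() are hypothetical: neither exists in the synthetics API described in this issue, and the names, signatures, threshold, and URL are placeholders (the helper here also receives the collected errors explicitly, unlike the checkNumErrorsDontExceed(n) shorthand above).

import { journey, step, afterStep } from '@elastic/synthetics'; // afterStep is hypothetical

// Shared helper that could live in a common module and be reused across journeys.
function checkNumErrorsDontExceed(errors, max) {
  if (errors.length > max) {
    throw new Error(`expected at most ${max} page errors, got ${errors.length}`);
  }
}

journey('example journey', ({ page }) => {
  const pageErrors = [];
  page.on('pageerror', (error) => pageErrors.push(error));

  // Hypothetical hook: runs after every step, so the journey fails
  // as soon as the error threshold is crossed.
  afterStep(() => checkNumErrorsDontExceed(pageErrors, 2));

  step('load home page', async () => {
    await page.goto('https://example.com');
  });
});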
