Show warning messages in command log #4917
Comments
A bunch of warnings already exist in the driver, though they just dump to the console:

- cypress/packages/driver/src/cypress/cookies.coffee, lines 48 to 58 in 0cbd954
- cypress/packages/driver/src/cypress/cy.coffee, lines 65 to 67 in 0cbd954
- cypress/packages/driver/src/cypress/server.coffee, lines 45 to 51 in 0cbd954
- cypress/packages/driver/src/cypress/xml_http_request.coffee, lines 127 to 137 in 0cbd954
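The console-only pattern described above can be sketched in plain JavaScript. `warnOnce` and its message are illustrative stand-ins, not actual driver code:

```javascript
// Sketch (illustrative only): warnings go straight to the browser console
// instead of the Command Log, and each distinct warning fires once per run.
function warnOnce(seen, key, message) {
  // Skip warnings that have already been emitted this run.
  if (seen.has(key)) return false
  seen.add(key)
  console.warn(`Cypress Warning: ${message}`)
  return true
}

const seen = new Set()
warnOnce(seen, 'deprecated-option', 'This option has been renamed.') // emitted
warnOnce(seen, 'deprecated-option', 'This option has been renamed.') // suppressed
```

Because the output only reaches the console, nothing surfaces in the Command Log, which is the gap this issue proposes to close.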
Yeah, we mostly use console.warn for deprecations currently. I like this idea though. I feel like this is what most people would want for the 'uncaught:exception' messages we give. They do not want a test failure, but a warning may be nice. Relevant to error improvements: #3762
@jennifer-shehane I like the yellow background with dark yellow/orange text 👍
K, I need to rework the design to also work for inline commands - i.e. the situation Ben originally described.
**Example: unnecessary assertion**

```js
cy.get('button').should('exist')
cy.get('button', { warn: false }).should('exist')
```

While functionally this won't break, the `.should('exist')` is unnecessary.

**Example: antipattern?**

```js
cy.visit(...)
cy.route(...) // definition for a route that has already happened
```

You likely don't want to define the listener for the `cy.route()` after `cy.visit()` in this situation. Another `cy.route()` example warning: https://github.com/cypress-io/cypress/blob/develop/packages/driver/src/cy/commands/xhr.coffee#L419

**Example: limitations / unexpected behavior**

In certain browsers the behavior differs - like caniuse for MDN docs saying what does or doesn't work in browsers.

```js
cy.get('input').type('foo{home}', { simulated: true })
cy.get('input').type('foo{home}', { simulated: true, warn: false })
```

**Example: deprecation warnings**

We do these in `console.warn` now, but why not have them in the Command Log?

**Example: async weirdness / mixing Promises and Cypress**

**Example: uncaught exceptions**

We don't display:

```js
cy.then(() => {
  throw new Error('foo')
})
```

**Implementation:**

```js
Cypress.log({
  warning: ...
})
```

Should be expandable, but not expanded by default - unless outside of interactive mode.

Have the ability to turn off warnings - maybe by type of warning:

```js
Cypress.Warnings.defaults({
  warnOnMixedAsyncUsage: false,
  warnOnUncaughtExceptions: false,
})
```

**Warning event** (aka like URL, XHR events...): for uncaught exceptions we do some complex things to associate the uncaught exception with a test / command - we should just create an uncaught exception event and highlight that also in red.

**Side discussion**

- As a breaking change, we should switch the logic to not fail by default on `uncaught:exception` - make users turn on failure for uncaught exceptions.
- Maybe add a code frame for the error events - honestly, we should just design the UI of the warnings to be exactly like the 'failed' errors, with a `docsUrl`, code frame, stack trace, etc.

**Dashboard**

We could pin warnings to Dashboard results on the specs ("warnings tab?") - with code frames, or however the Dashboard wants to design it. There could be multiple warnings per test, which would require API changes.
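The per-type opt-out proposed above could behave like this plain-JS sketch. `setWarningDefaults` and `emitWarning` are illustrative stand-ins, since `Cypress.Warnings.defaults()` is a proposal, not a shipped API:

```javascript
// Plain-JS sketch (illustrative only) of per-type warning opt-out semantics.
// Nothing here is an actual Cypress API.
const warningDefaults = {
  warnOnMixedAsyncUsage: true,
  warnOnUncaughtExceptions: true,
}

// Merge user overrides into the defaults, mirroring a defaults()-style call.
function setWarningDefaults(overrides) {
  Object.assign(warningDefaults, overrides)
}

// Emit a warning unless its type has been switched off.
function emitWarning(type, message, log = console.warn) {
  if (warningDefaults[type] === false) return false
  log(`[${type}] ${message}`)
  return true
}

setWarningDefaults({ warnOnMixedAsyncUsage: false })
emitWarning('warnOnMixedAsyncUsage', 'Mixing Promises and Cypress commands')  // suppressed
emitWarning('warnOnUncaughtExceptions', 'Uncaught exception in application')  // logged
```

The design choice here matches the comment above: a disabled warning type is dropped silently instead of reaching the Command Log, rather than being converted into a failure.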
Another application for displaying a 'warning' instead of throwing: a person could screenshot an element that is 0 width and/or 0 height, as described in #5149. I wouldn't necessarily fail the test in this case, but I would want to show a warning that no screenshot will be taken.
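The zero-size screenshot case above could be sketched as a small guard. `screenshotOrWarn` and its message are hypothetical, not a Cypress API:

```javascript
// Hypothetical guard (not a Cypress API): downgrade a zero-size screenshot
// to a warning instead of a failure.
function screenshotOrWarn(el, takeScreenshot, warn) {
  if (el.width === 0 || el.height === 0) {
    warn(`Element is ${el.width}x${el.height}; no screenshot will be taken.`)
    return null // nothing captured, but the test keeps running
  }
  return takeScreenshot(el)
}

const warnings = []
screenshotOrWarn({ width: 0, height: 40 }, () => 'image.png', (m) => warnings.push(m))   // warns, returns null
screenshotOrWarn({ width: 100, height: 40 }, () => 'image.png', (m) => warnings.push(m)) // returns 'image.png'
```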
The ability to warn, and have these not fail a test run but still be visible on the Cypress Dashboard, would be fantastic. We have several use cases, really only specific to our application, where we would like to stub tests in preparation for QA in situations where the amount of tests to write is a huge effort.
This is exactly my use case as well. I have a complex mechanism of filtering 'dev' tests vs 'QA' tests so that no test I've written fails against the environment it is filtered for, but this is not great for non-developers. I want to be able to write tests useful for our QA process that aren't yet satisfied but should be. I'll grant this is a workflow/organization problem, but being able to illustrate to stakeholders that we can write tests that will eventually all pass is a big selling point for adopting a tool like Cypress. Conversely, having those tests just 'fail' right away (because the code isn't yet developed) doesn't endear the process to non-technical people. They just see all the red. It's also bad for a CI pipeline (although my filtering mechanism is well suited to that). Anyway, a 'warning' would be useful in some situations, although I recognize the potential for abuse (everything becomes a warning).
I also see the need for this. Maybe useful as well (or instead of it).
Yeah, I could REALLY use this for passing warnings or annotations to students about the tests they're trying to pass! All green or gray means they just don't read the messages.
For deprecations I would also find a badge in the Dashboard useful, like you have for flaky tests. We have some projects without active development where tests are just run in CI daily. Whenever I get a Cypress release note I just upgrade them in a script (I should test locally first, I know, but I don't care if they fail first). There is hardly ever a problem that I didn't foresee by reading the release notes, even for major releases. I do look at the Dashboard sometimes and investigate flaky tests. It would be good to be able to catch deprecations at the warning stage, though.
This would be fantastic for the new accessibility tests I want to bring in. Having tests fail at the moment would be a nightmare, as there is a long list of legacy issues, but warnings would put us on the right track to fix them!
Another good use case for cy.warn(): we have a great suite of Cypress tests that we run as a smoke test whenever we do a new release of our app. These tests run against the live API and database rather than interceptors with fixtures. If the tests are executed against an API + database environment that does not yet contain some of the datasets required for them to pass, it would be useful to skip those tests and output a cy.warn() rather than having them fail because of the lack of data. I agree with an earlier comment in this thread that it would also be useful to surface those warnings alongside passing and failing tests in the log, and to count warnings in addition to passes and fails at the top of the test execution window.
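The skip-or-warn decision described above could be sketched like this; `planTest` and the dataset names are hypothetical helpers, not a Cypress API:

```javascript
// Sketch (illustrative only): when required datasets are missing from the
// target environment, record a warning and skip instead of failing.
function planTest(requiredDatasets, availableDatasets) {
  const missing = requiredDatasets.filter((d) => !availableDatasets.includes(d))
  if (missing.length > 0) {
    // Downgrade the missing-data case from a failure to a warning + skip.
    return { action: 'skip', warning: `Missing datasets: ${missing.join(', ')}` }
  }
  return { action: 'run' }
}

console.log(planTest(['users', 'orders'], ['users']))
// → { action: 'skip', warning: 'Missing datasets: orders' }
console.log(planTest(['users'], ['users', 'orders']))
// → { action: 'run' }
```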
Would be nice to distinguish "passed" tests from "passed with warnings".
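A "passed with warnings" bucket could be tallied roughly like this; `summarize` is a hypothetical reporter helper, not something Cypress exposes today:

```javascript
// Hypothetical reporter tally (illustrative only) that separates "passed"
// from "passed with warnings" in the run summary.
function summarize(tests) {
  const summary = { passed: 0, passedWithWarnings: 0, failed: 0 }
  for (const t of tests) {
    if (t.state === 'failed') summary.failed++
    else if (t.warnings && t.warnings.length > 0) summary.passedWithWarnings++
    else summary.passed++
  }
  return summary
}

console.log(summarize([
  { state: 'passed' },
  { state: 'passed', warnings: ['unnecessary assertion'] },
  { state: 'failed' },
]))
// → { passed: 1, passedWithWarnings: 1, failed: 1 }
```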
It would be useful to allow commands to print a warning instead of an error in the Command Log. This would let commands alert the user that something they have done may lead to unexpected behavior, such as pressing the `{home}` key when we cannot perform the default action of scrolling, or calling a `cy.wait` after a `cy.visit`. We could have:
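A decision like the `{home}` example could be sketched as a small guard; `checkHomeKey`, `canScroll`, and the message are hypothetical, not part of Cypress:

```javascript
// Hypothetical guard (illustrative only, not a Cypress API): warn when a
// typed special key can have no effect because the default scroll action
// cannot be performed.
function checkHomeKey(typedText, canScroll, warn) {
  if (typedText.includes('{home}') && !canScroll) {
    warn("'{home}' was typed, but the default scrolling action cannot be performed.")
    return false // the key press will have no visible effect
  }
  return true
}

const messages = []
checkHomeKey('foo{home}', false, (m) => messages.push(m)) // records a warning
checkHomeKey('foo{home}', true, (m) => messages.push(m))  // no warning
```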