better stack trace/ sourceMaps #881
Can you provide a screenshot which shows the incorrect stack trace? I'm having a hard time picturing exactly what you're seeing. That would be helpful.
Hmm, I can't actually figure out right now how to invoke a failing command that shows a stack trace 🤔. Throwing an error outright actually shows a correct stack trace pointing to the source code, so that's fine. But that case is not that important, since it doesn't usually happen as a result of a regression/API change. The common use case is that a failing assertion doesn't show any stack trace at all:

And from the command log, it's really impossible to figure out which command is actually failing:
The problem is that the stack trace won't help at all. If we provided the stack trace of the failing assertion, all you'd see is something about two numbers.

It sounds like what you'd want is something like a user-generated stack trace that somehow points to the command in your test code that failed, when it could associate it. I don't believe that's possible in any capacity other than instrumenting the test code and then doing some crazy stuff at that point.

I would say that in the vast majority of cases, the command log + the application under test provide more than enough information to understand the source of the failure. In addition to that, you also get a video.

Now if you're saying you can't figure out what's going on (and you are running tests iteratively in the GUI), that sounds surprising. My suggestion here would be to use
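For what it's worth, the "instrumenting the test code" idea can be sketched in plain JavaScript, outside of Cypress: wrap a function so every call records an `Error` (and thus the caller's file and line). The names here (`instrument`, `callSites`) are illustrative, not any real Cypress API:

```js
// Framework-free sketch of the "user generated stack trace" idea:
// wrap a function so each invocation records where it was called from.
function instrument(fn) {
  const callSites = [];
  const wrapped = (...args) => {
    // An Error created here carries the caller's file:line in its stack,
    // which is exactly what a bare assertion failure is missing.
    callSites.push(new Error().stack);
    return fn(...args);
  };
  wrapped.callSites = callSites;
  return wrapped;
}

const add = instrument((a, b) => a + b);
const sum = add(1, 2);
console.log(sum, add.callSites.length);
```

Each entry in `callSites` then contains the test-file location of the call, which a failure handler could print.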
Yes... a sourceMap is probably a better description of what I'd want. The above spec contained many repetitive, very-similar-to-each-other commands, such as making requests with different credentials and seeing what is returned and rendered. A case could be made that many of these are mostly unit tests and better suited for something other than cypress, but I think doing an e2e test while also checking how the client app behaves at the same time is a good justification for using cypress here.

But even if just front-end UI specs are concerned, there are many cases where it gets pretty hard to find out where the failing command is located in the source file, even when one has a pretty good idea of what part of the app is failing. If the spec file is 300+ LOC, narrowing it down gets crazy tedious.
I managed to hack something up.

EDIT (20-04-17): see this comment for the latest and greatest.

Putting this in

```js
import "./commands";

const stacktraces = [];

before(() => {
  stacktraces.length = 0;
});

const _ = Cypress._;

// monkey patch all cy commands to cache stack trace on each call
_.each( Cypress.Commands._commands, ({ name }) => {
  const fn = cy[name];
  cy[name] = ( ...args ) => {
    const ret = fn.call( cy, ...args );
    try { throw new Error(); } catch ( e ) {
      stacktraces.push({ stack: e.stack, chainerId: ret.chainerId });
    }
    return ret;
  };
});

// on failure, log the stack trace if one is found
Cypress.on( "fail", (err, runnable) => {
  console.error( err );
  const cmd = _.find( runnable.commands, cmd => {
    // "pending" probably means it failed (at the time of this callback,
    // the failures are still not resolved)
    return cmd.state === "pending" && cmd.chainerId;
  });
  if ( !cmd ) return;
  const data = _.find( stacktraces, { chainerId: cmd.chainerId });
  if ( !data || !data.stack ) return;
  console.log( data.stack );
});
```

One problem with this solution is that there are no sourcemaps, and thus the stack trace is from the compiled spec file. But better than nothing, eh.
Yah, you'll see the compiled spec source, but at least the line numbers should take you to the command that failed, right?

I see what you're doing here, but I wonder how many other users want to click on a command and have a stack trace pointing at the test code. I suppose we could branch the logic and say: if it's a CypressError, point to the test code stack, else display the real error. We already hide CypressErrors in the UI, so we kinda already have this logic.
Also this... https://github.com/cypress-io/cypress/blob/develop/packages/driver/src/cypress/commands.coffee#L87

Instead of:

```js
_.each( Cypress.Commands._commands, ({ name }) => { ... })
```

you can use:

```js
Cypress.Commands.each(({ name }) => { ... })
```
Yea, this thing actually just logs the stack trace in the console. You can still click on the command in command log to get the default behavior.
Good to know
Btw, I assume cypress is using babel... could we set

Edit: just found #343
Yeah, this is about to land and then you'll be able to do whatever you want: #684
+1 for better stack traces for tests.

We're looking at upgrading our test suite to Cypress at the Financial Times. We like the way it loads the tests in one iframe and your app in another, and we're thinking of using it as an all-in-one tool for unit, integration, and e2e testing. Stack traces might not be very useful when you're doing integration or e2e tests, but they are very useful when unit testing. I want to be able to click to see the stack trace of my failed assertions, much like mocha provides, so I can find the test file and the line of code that's failing.

If I throw an Error manually from my test file, it makes it to the console just fine. But if an assertion fails, it just displays the message. It would be super useful to have the stack trace output. Maybe a combination of what @dwelle suggested with some source maps?

@brian-mann Any update on if or when we can expect this? It's been a year since the last comment on this issue.
Actually, I think it would be very helpful to show the origin file + line for all Cypress commands (GET, CLICK, VISIT...), not only when there is an error. For Cypress commands it shouldn't be hard to do (just look up the stack to find it).
@yannicklerestif I think that's a good idea, and it should be part of this issue. Perhaps enabled by a config option.
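As a rough illustration of "just look up the stack" (plain JS, not Cypress internals; `wrapCommand` is a hypothetical name), a command wrapper can create an `Error`, skip its own frames, and parse out the caller's `file:line`:

```js
// Hypothetical sketch: wrap a "command" so each call logs the origin
// file + line of the caller, parsed out of a V8-style stack trace.
function wrapCommand(name, fn) {
  return (...args) => {
    // stack line 0 is the "Error" message, line 1 is this wrapper,
    // line 2 is whoever invoked the command -- i.e. the test code.
    const frame = (new Error().stack.split('\n')[2] || '').trim();
    const match = frame.match(/\(?([^()\s]+):(\d+):(\d+)\)?$/);
    if (match) {
      console.log(`${name} called from ${match[1]}:${match[2]}`);
    }
    return fn(...args);
  };
}

const visit = wrapCommand('VISIT', (url) => `visited ${url}`);
const out = visit('/login');
console.log(out);
```

The fragile part is the hard-coded frame index and the stack format, which is engine-specific; that's presumably why doing this robustly belongs inside Cypress itself.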
If anyone ends up implementing my workaround from above, you might want to add this to your

```js
const browserify = require(`@cypress/browserify-preprocessor`);

const options = browserify.defaultOptions;
options.browserifyOptions.transform[1][1].babelrc = true;
options.browserifyOptions.transform[1][1].retainLines = true;

module.exports = ( on ) => {
  on(`file:preprocessor`, browserify(options));
};
```

So that babel uses and install:
@dwelle I tried your code, but there are cases where failing tests are swallowed. I changed all return statements in the
@scharf hmm, yea, that's a mistake (even though the docs are not quite clear that you need to do that). In my codebase I rethrow myself, but it seems I forgot to update my comment above. For future reference, here's the latest code, including the preprocessing:

```js
const stacktraces = [];
const _ = Cypress._;

before(() => {
  stacktraces.length = 0;
});

// monkey patch all cy commands to cache stack trace on each call
Cypress.Commands.each(({ name }) => {
  const fn = cy[name];
  cy[name] = ( ...args ) => {
    const ret = fn.call( cy, ...args );
    stacktraces.push({
      stack: new Error().stack,
      chainerId: ret.chainerId
    });
    return ret;
  };
});

// on failure, log the stack trace if one is found
Cypress.on( 'fail', (err, runnable) => {
  const commands = runnable.commands;
  const cmdIdx = _.findLastIndex( commands, cmd => {
    // "pending" probably means it failed (at the time of this callback,
    // the failures are still not resolved, unless synchronous)
    return (cmd.state === 'pending' || cmd.state === 'failed') && cmd.chainerId;
  });
  const cmd = commands && commands[cmdIdx];
  if ( cmd ) {
    const data = _.find( stacktraces, { chainerId: cmd.chainerId });
    if ( data && data.stack ) {
      console.log( data.stack, data.chainerId );
    }
  }
  throw err;
});
```

```js
const browserify = require('@cypress/browserify-preprocessor');

const options = browserify.defaultOptions;
options.browserifyOptions.transform[1][1].babelrc = true;
options.browserifyOptions.transform[1][1].retainLines = true;

module.exports = ( on ) => {
  on('file:preprocessor', browserify(options));
};
```

So that babel uses and install:
How can this be placed in index.ts? |
@dwelle Thanks for the code! It works really well. For folks who use

```js
on('file:preprocessor', (file) => {
  if (file.filePath.match(/.+\/integration\/components\/.+\.spec\.js/)) {
    // Use webpack if unit-testing a Vue component
    return webpack(optionsForWebpack)(file)
  } else {
    return browserify(optionsForBrowserify)(file)
  }
})
```

If anyone has an idea how to configure one or the other so there's no need to use 2 preprocessors when also unit-testing components, that'd be appreciated!
@Rumpelstinsk right. Below is a fix for that (see EDIT).

EDIT: since cypress uses chai assertions for command assertions, we can't easily differentiate which should take precedence when logging the stack trace. Thus, I log both solutions and you must scan the output manually to get to the root of it.

This is the latest iteration in its full form, for future reference (applies to Cypress 3.8.1; for 4.4.1 you'll need to update the

```js
// chai assertions
// -----------------------------------------------------------------------------
let CHAI_ERROR;

chai.use(function (chai) {
  chai.Assertion.prototype.assert = (orig => function (...args) {
    CHAI_ERROR = new Error();
    return orig.call(this, ...args);
  })(chai.Assertion.prototype.assert);
});

// Cypress commands
// -----------------------------------------------------------------------------
const stacktraces = [];
const _ = Cypress._;

before(() => {
  stacktraces.length = 0;
});

// monkey patch all cy commands to cache stack trace on each call
Cypress.Commands.each(({ name }) => {
  const fn = cy[name];
  cy[name] = ( ...args ) => {
    const ret = fn.call( cy, ...args );
    stacktraces.push({
      stack: new Error().stack,
      chainerId: ret.chainerId
    });
    return ret;
  };
});

// -----------------------------------------------------------------------------
// on failure, log the stack trace if one is found
Cypress.on( 'fail', (err, runnable) => {
  const commands = runnable.commands;
  const cmdIdx = _.findLastIndex( commands, cmd => {
    // "pending" probably means it failed (at the time of this callback,
    // the failures are still not resolved, unless synchronous)
    return (cmd.state === 'pending' || cmd.state === 'failed') && cmd.chainerId;
  });
  const cmd = commands && commands[cmdIdx];
  if ( cmd ) {
    const data = _.find( stacktraces, { chainerId: cmd.chainerId });
    if ( data && data.stack ) {
      const stack = CHAI_ERROR
        ? `COMMAND STACKTRACE:\n\n` + data.stack + `\n\nCHAI STACKTRACE:\n\n` + CHAI_ERROR.stack
        : `COMMAND STACKTRACE:\n\n` + data.stack;
      CHAI_ERROR = null;
      console.log( stack );
    }
  } else if ( CHAI_ERROR ) {
    console.log( CHAI_ERROR.stack );
    CHAI_ERROR = null;
  }
  throw err;
});
```

```js
const browserify = require('@cypress/browserify-preprocessor');

const options = browserify.defaultOptions;
options.browserifyOptions.transform[1][1].babelrc = true;
options.browserifyOptions.transform[1][1].retainLines = true;

module.exports = ( on ) => {
  on('file:preprocessor', browserify(options));
};
```

So that babel uses and install:
This actually breaks the cy commands' stack trace because they use chai asserts, too. So best to log both and scan the output manually for the best effect. (I've updated the comment above, but below is just the diff.)

```js
const cmd = commands && commands[cmdIdx];
if ( cmd ) {
  const data = _.find( stacktraces, { chainerId: cmd.chainerId });
  if ( data && data.stack ) {
    const stack = CHAI_ERROR
      ? `COMMAND STACKTRACE:\n\n` + data.stack + `\n\nCHAI STACKTRACE:\n\n` + CHAI_ERROR.stack
      : `COMMAND STACKTRACE:\n\n` + data.stack;
    CHAI_ERROR = null;
    console.log( stack );
  }
} else if ( CHAI_ERROR ) {
  console.log( CHAI_ERROR.stack );
  CHAI_ERROR = null;
}
```
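The `(orig => function (...args) {...})(proto.method)` construct used in the chai patch above is plain prototype monkey-patching. Here's a self-contained sketch of the same pattern; the `Assertion` class below is a stand-in, not chai's:

```js
// Stand-in for the chai.Assertion patch: wrap a prototype method so each
// call captures an Error (hence a stack trace) before delegating.
class Assertion {
  assert(ok, msg) {
    if (!ok) throw new Error(msg);
    return true;
  }
}

let LAST_CAPTURED = null;

Assertion.prototype.assert = (orig => function (...args) {
  LAST_CAPTURED = new Error(); // records the assertion's call site
  return orig.call(this, ...args); // preserve original behavior
})(Assertion.prototype.assert);

const a = new Assertion();
a.assert(1 + 1 === 2, 'math still works');
console.log(LAST_CAPTURED instanceof Error);
```

Because the wrapper is a regular `function` (not an arrow function), `this` still refers to the assertion instance, which is what makes the delegation via `orig.call(this, ...)` safe.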
Just updated to 4.4.1, and while Cypress added some support for stack traces in 4.3.0 & 4.4.0, it still doesn't map correctly to my tests. But there's been some change to the internal API --- not sure when exactly, but here's the updated

```js
Cypress.on( `fail`, (err) => {
  const { commands } = cy.queue;
  // must take precedence over CHAI_ERROR to ensure cy.blackbox works
  // correctly
  if ( cy.__stack__ ) {
    console.log(cy.__stack__);
  } else {
    const cmd = _.findLast( commands, ({ attributes }) => {
      // "pending" probably means it failed (at the time of this callback,
      // the failures may still not be resolved)
      return attributes.logs.find( log => {
        return /^(failed|pending)$/.test(log.get().state);
      });
    });
    if ( cmd ) {
      const data = _.find( stacktraces, {
        chainerId: cmd.attributes.chainerId
      });
      if ( data && data.stack ) {
        const stack = CHAI_ERROR
          ? `COMMAND STACKTRACE:\n\n` + data.stack + `\n\nCHAI STACKTRACE:\n\n` + CHAI_ERROR.stack
          : `COMMAND STACKTRACE:\n\n` + data.stack;
        CHAI_ERROR = null;
        console.log(stack, data.chainerId);
      }
    } else if ( CHAI_ERROR ) {
      console.log(CHAI_ERROR.stack);
      CHAI_ERROR = null;
    }
  }
  throw err;
});
```

Hopefully, these hacks won't be necessary anymore soon!
The code for this is done in cypress-io/cypress#3930, but has yet to be released. |
Released in

This comment thread has been locked. If you are still experiencing this issue after upgrading to
Currently, there's no useful stack trace on failure. If the failure is a critical exception of some sort (e.g. a command was written incorrectly, or wrong args were supplied), the stack trace leads to cypress reporters (not useful); and if the failure is a regular assertion failure or similar, there's no stack trace at all.
If a test has 30+ similar commands, it's very hard to narrow down which command is failing. Multiply this across several files, and updating failing specs (e.g. due to regression/API changes) is a nightmare.
What we need is some mapping between commands in command log, and the source code.
Is this something on the roadmap?