
🐛 Bug: Hook failures change the shape of the test suite #1955

Open

Description

If a beforeEach hook fails, none of the subsequent tests in the suite, or in any of its sub-suites, are run. For example:

// hook-test.js
describe("outer context", function() {
  beforeEach(function() {
    throw new Error("this is a failure in a before each hook");
  });
  it("reports the first assertion", function() {

  });
  it("does not report the existence of this test case", function() {

  });
  describe("inner context", function() {
    it("does not report its existence in the output", function() {

    });
  });

});

reports only a single testcase, even though there are three defined:

$ mocha --reporter min hook-test.js
0 passing (6ms)
  1 failing

  1) outer context "before each" hook for "reports the first assertion":
     Error: this is a failure in a before each hook
      at Context.<anonymous> (test/mocha-test.js:3:11)

As outlined in #1043, this is the intended behavior for beforeEach as well as the other hooks. This makes sense from an efficiency perspective; after all, there is no point in actually running the testcases when it is assured that they are going to fail.

The problem is that when you're refactoring a large codebase, upgrading an underlying library, or rehabilitating a codebase whose test suite has been allowed to get out of sync (work that might take days or weeks), this behavior alters how many total testcases are reported. As you make changes, the reporting varies widely: we've seen the pass/total numbers of testcases jump from 95/220 to 4/8, drop to 0/1 when a global hook fails, then climb back up to 35/160, all in a matter of minutes.

This can be very disorienting, and it obscures your overall progress toward your goal, which is a completely green suite where all tests are passing. The fact that a test is not run does not mean that it doesn't exist, or that it is unimportant.

Rather than excluding a test completely from the output, it makes more sense to skip running it but still report it as a failure. That way, the shape of the test suite remains constant. If I have 225 testcases, then that's what I have, even if only 26 of them are passing. I at least know that I'm 12% of the way there and which suites are failing for the same reason, and I can track total progress.
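To illustrate the idea, here is a minimal sketch in plain Node of how a runner or reporter could walk the suite tree and report every defined test, marking unrun tests as failures when a hook has failed, so the total count stays constant. This is not Mocha's actual internals; `collectResults`, `hookError`, and the suite object shape are hypothetical stand-ins:

```javascript
// Walk a suite tree. If a beforeEach-style hook has failed anywhere above
// a test, report that test as failed (with the hook's error) instead of
// dropping it from the output entirely.
function collectResults(suite, hookError) {
  hookError = hookError || suite.hookError || null;
  const results = [];
  for (const test of suite.tests || []) {
    results.push(
      hookError
        ? { title: test.title, state: "failed", err: hookError }
        : { title: test.title, state: "passed" }
    );
  }
  for (const child of suite.suites || []) {
    // A hook failure in an outer suite also applies to inner suites.
    results.push(...collectResults(child, hookError));
  }
  return results;
}

// Example mirroring the suite in this issue: a failing beforeEach hook,
// two tests in the outer context, one in an inner context.
const suite = {
  hookError: new Error("this is a failure in a before each hook"),
  tests: [
    { title: "reports the first assertion" },
    { title: "does not report the existence of this test case" },
  ],
  suites: [
    { tests: [{ title: "does not report its existence in the output" }] },
  ],
};

const results = collectResults(suite);
const passing = results.filter((r) => r.state === "passed").length;
const failing = results.filter((r) => r.state === "failed").length;
console.log(`${passing} passing, ${failing} failing of ${results.length} total`);
// All three defined tests appear in the output, so the suite keeps its shape.
```

With this approach the example above would report "0 passing, 3 failing of 3 total" rather than a single failure, and the totals would no longer fluctuate as hooks break and get fixed.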

If this makes sense, it would be something I'd be happy to help implement.


Metadata

Labels: semver-major (implementation requires increase of "major" version number; breaking changes), status: accepting prs (Mocha can use your help with this one!), type: feature (enhancement proposal)
