Description
I'm running betterer on a large TS codebase, and using it to track progress towards enabling a handful of eslint rules. Because I want betterer to track progress toward (and regressions on) each lint rule separately, I create one betterer test per rule.
However, this leads betterer to start up a new ESLint instance for each betterer test, and each ESLint instance seems to result in a new load + parse + partial typecheck of our code for the TypeScript-aware lint rules. The result is a huge amount of repeated work that makes things very slow.
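For reference, my `.betterer.ts` is shaped roughly like the sketch below. The rule names and glob are placeholders, and the exact options shape accepted by `eslint()` may differ between `@betterer/eslint` versions, so treat this as illustrative rather than my real config:

```ts
// .betterer.ts — one betterer test per tracked rule (illustrative sketch).
// Rule names and the include glob are placeholders, not my actual setup.
import { eslint } from '@betterer/eslint';

export default {
  'no explicit any': () =>
    eslint({ rules: { '@typescript-eslint/no-explicit-any': 'error' } })
      .include('./src/**/*.ts'),
  'no floating promises': () =>
    eslint({ rules: { '@typescript-eslint/no-floating-promises': 'error' } })
      .include('./src/**/*.ts')
  // ...three more tests in the same shape, one per rule
};
```

Each of those test functions appears to spin up its own ESLint instance (and therefore its own typescript-eslint program).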
Concretely:
- I'm running on a 12-core machine, and I have only 5 eslint betterer tests (all of which inherit an eslint config that uses `@typescript-eslint/parser`).
- Just running `tsc --noEmit` on my codebase, without any tsbuildinfo cache, takes almost 15 seconds. Therefore, I assume this is close to the lower bound for the linting time of my type-aware rules.
- Currently, my betterer run takes 45 seconds with only 5 tests.
If I simply merge all 5 of my eslint tests into one betterer test with 5 eslint rules (as sketched below), betterer's runtime drops to only 20 seconds. To me, this strongly suggests that the repeated ESLint initialization is the problem.
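The merged variant looks roughly like this, with the same caveats as the sketch above (placeholder rule names and glob, version-dependent `eslint()` options shape):

```ts
// .betterer.ts — merged variant: one betterer test covering all five rules,
// so ESLint (and the typescript-eslint program) is only set up once.
import { eslint } from '@betterer/eslint';

export default {
  'stricter lint rules': () =>
    eslint({
      rules: {
        '@typescript-eslint/no-explicit-any': 'error',
        '@typescript-eslint/no-floating-promises': 'error'
        // ...the other three tracked rules
      }
    }).include('./src/**/*.ts')
};
```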
Of course, if I create one composite betterer test, then I lose the ability to track progress/regressions on each individual lint rule. So I'm wondering: would it make sense for betterer to run one test function containing multiple eslint rules, but then split the violations out somehow (by rule name, error message, whatever) and track them as if they were separate tests, roughly as sketched below? Or do you know of any other way to speed things up?
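Outside of betterer, the splitting I have in mind is straightforward with ESLint's Node API: run ESLint once with all the tracked rules enabled, then bucket the violations by `ruleId`. The sketch below is just an illustration of that idea (rule names are placeholders, and the `@typescript-eslint` parser/project config is assumed to come from the project's existing eslintrc), not a proposal for betterer's actual implementation:

```ts
// Sketch: one ESLint run, violations bucketed per rule so each rule could be
// tracked as its own "logical" test. Placeholder rules; type-aware rules still
// rely on the project's existing @typescript-eslint parser configuration.
import { ESLint } from 'eslint';

async function countViolationsByRule(patterns: string[]): Promise<Map<string, number>> {
  const eslint = new ESLint({
    overrideConfig: {
      rules: {
        '@typescript-eslint/no-explicit-any': 'error',
        '@typescript-eslint/no-floating-promises': 'error'
        // ...the other tracked rules
      }
    }
  });

  const results = await eslint.lintFiles(patterns);
  const counts = new Map<string, number>();
  for (const result of results) {
    for (const message of result.messages) {
      if (!message.ruleId) continue; // skip parse errors and the like
      counts.set(message.ruleId, (counts.get(message.ruleId) ?? 0) + 1);
    }
  }
  return counts; // one entry per rule, from a single ESLint instance
}
```

In betterer's case the tracked data is richer than a count, but the grouping step would be the same idea: key each file's violations by `ruleId` and report them under separate test names.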
45 seconds is excruciatingly slow for a pre-commit hook (and the pre-commit hook is barely faster than a full betterer run, because typescript-eslint still has to resolve a bunch of files for typechecking even when the list of staged files is small). My codebase isn't very amenable to being split up into TS projects, which would be part of an obvious mitigation here, but even if I could do that, betterer would still end up triggering a lot of repeated work.
So, what do you think of the idea of one test function (with the shared-state possibilities that allows) being able to generate results for multiple 'logical' tests?