We have a performance suite where we can test the full build time of individual projects; however, it is limited to just the compiler. It doesn't give a sense of the work being done in the language service (e.g. the work auto-imports perform, the cost of loading a multi-project workspace, the time for find-all-references, etc.). We need to be able to accurately measure these things.
We want to create a tool that runs language service operations and reports back on them so we can discover performance regressions. We've done similar things on Definitely Typed, but we need to run other codebases as well (a rough sketch of timing a single operation follows the list below). Things to think about:
- When and how do these tests run?
- How does a person kick off these tests for a PR?
- Where can we see the results of these tests if they're kicked off manually? If they're kicked off periodically?
- Can we see time series records of the data?
- Are the operations a fixed, consistent set, or randomly applied?
- What counts as an anomaly?
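
As a starting point, here is a minimal sketch of how such a tool might drive the language service and time individual operations. It uses the public `ts.createLanguageService` API; the project path, root file, cursor position, and the `timeOperation` helper are hypothetical placeholders, and a real harness would run many iterations and report medians/percentiles rather than a single sample.

```ts
import * as ts from "typescript";
import * as fs from "fs";
import * as path from "path";

const projectRoot = "/path/to/target/project";               // hypothetical project under test
const rootFiles = [path.join(projectRoot, "src/index.ts")];  // hypothetical entry point
const compilerOptions: ts.CompilerOptions = { strict: true, module: ts.ModuleKind.CommonJS };

// A simple LanguageServiceHost backed by the file system.
const host: ts.LanguageServiceHost = {
  getScriptFileNames: () => rootFiles,
  getScriptVersion: () => "1",                               // bump per edit in a real harness
  getScriptSnapshot: fileName =>
    fs.existsSync(fileName)
      ? ts.ScriptSnapshot.fromString(fs.readFileSync(fileName, "utf8"))
      : undefined,
  getCurrentDirectory: () => projectRoot,
  getCompilationSettings: () => compilerOptions,
  getDefaultLibFileName: options => ts.getDefaultLibFilePath(options),
  fileExists: ts.sys.fileExists,
  readFile: ts.sys.readFile,
  readDirectory: ts.sys.readDirectory,
};

const service = ts.createLanguageService(host);

// Time a single language service call and log the wall-clock duration.
function timeOperation<T>(label: string, run: () => T): T {
  const start = process.hrtime.bigint();
  const result = run();
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${elapsedMs.toFixed(2)} ms`);
  return result;
}

const fileName = rootFiles[0];
const position = 100; // hypothetical offset of an interesting identifier

timeOperation("find-all-references", () =>
  service.findReferences(fileName, position)
);
timeOperation("completions (includes auto-import work)", () =>
  service.getCompletionsAtPosition(fileName, position, {
    includeCompletionsForModuleExports: true,                // pull in auto-import candidates
  })
);
```

Whatever shape the real tool takes, results would presumably need to be recorded per operation and per codebase so we can answer the questions above (time series, anomaly detection, etc.).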