Notes by @RyanCavanaugh
strictFunctionTypes for our codebase
- Most unsoundness is around visitor-like patterns (see the sketch after this list)
- Performance cost of the additional `checkDefined` calls needed is ~zero
- Some possible inference issues identified; Jake will reduce offline
- Update: Fixed at #52123 (Improve logic that chooses co- vs. contra-variant inferences)
- Please code review; will merge for 5.0
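A minimal sketch of the visitor-style variance unsoundness that strictFunctionTypes flags; the `AstNode`/`Identifier`/`Visitor` names are hypothetical, not the compiler's actual types:

```ts
interface AstNode { kind: string; }
interface Identifier extends AstNode { kind: "identifier"; text: string; }

// A general visitor is declared to accept any node.
type Visitor = (node: AstNode) => void;

// This visitor only knows how to handle identifiers.
const visitIdentifier = (node: Identifier) => console.log(node.text.length);

// Without strictFunctionTypes the assignment below is allowed (parameters
// are compared bivariantly) and unsound: calling v with a non-identifier
// node crashes reading node.text. With strictFunctionTypes, parameters are
// compared contravariantly and the assignment is a compile error.
const v: Visitor = visitIdentifier;
v({ kind: "return" }); // runtime TypeError when the assignment is permitted
```

The cost claim is easy to see from a sketch of a `checkDefined`-style helper (the compiler's real one lives on `Debug`; this exact signature is an assumption): it narrows `T | undefined` to `T` with a single comparison per call.

```ts
function checkDefined<T>(value: T | undefined, message?: string): T {
  if (value === undefined) throw new Error(message ?? "Value was undefined");
  return value;
}
```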
Internal RWC Tests
- What's the deal?
  - These are internal (nonpublic) codebases
- Do we need this?
  - Probably not anymore
  - These codebases are ancient and are affected by 5.0 changes
  - Would greatly prefer if external contributors could see all test collateral/results
  - Sufficient coverage from top100, user, etc.
- Someone please write down the differences between all these
- Let's remove
CI coverage in general
- We need to be running ALL test suites on EVERY PR so we stop getting surprised
- Please kick off these runs manually for now
- TODO: Automate that
- Performance testing
- Current tests don't give an accurate statistical picture of what's happening
- New perf tools tell you whether results are statistically significant or not, and to what degree (see the sketch after this section)
- Getting apples-to-apples hardware is difficult
- But we've inarguably "drifted" slower one unmeasurable step at a time
- Option 1: Measure the unmeasurable
  - Pros: Would be good
  - Cons: Not possible
- Option 2: Measure a proxy for performance instead (allocations, comparisons, clock cycles, etc.)
  - Pros: Discrete, no error bars
  - Cons: Proxy measures might only roughly correlate
- Option 3: Measure release-to-release to keep tabs on how we're doing
  - Pros: Works
  - Cons: Misses things when they happen
- Consensus: Keep investigating perf and perf measures
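As a rough illustration of the statistical-significance point above (a sketch, not the team's actual perf tooling; the sample data and the |t| > 2 threshold are assumptions), a Welch-style t-test can say whether two sets of benchmark timings genuinely differ:

```ts
function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

// Sample variance (Bessel-corrected).
function variance(xs: number[]): number {
  const m = mean(xs);
  return xs.reduce((a, x) => a + (x - m) ** 2, 0) / (xs.length - 1);
}

// Welch's t-statistic: difference of means over the combined standard error.
// Suitable when the two samples may have unequal variances.
function welchT(a: number[], b: number[]): number {
  const se = Math.sqrt(variance(a) / a.length + variance(b) / b.length);
  return (mean(a) - mean(b)) / se;
}

// Crude significance call: |t| > 2 roughly corresponds to p < 0.05 for
// reasonable sample sizes (real tooling computes exact p-values and
// confidence intervals, i.e. "to what degree").
function isSignificant(baseline: number[], candidate: number[]): boolean {
  return Math.abs(welchT(baseline, candidate)) > 2;
}

// Hypothetical check times (ms) for the same benchmark on two builds.
const baseline = [512, 505, 520, 498, 510];
const candidate = [530, 541, 528, 535, 539];
console.log(isSignificant(baseline, candidate)); // true: a real slowdown
```

This also shows why Option 2 is tempting: proxy counts like allocations or comparisons are discrete, so they carry no error bars at all, at the cost of only roughly correlating with wall-clock time.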