🧪 Improve schema process benchmarking #1579
Conversation
Run a standard test to make sure the build works, and separate out the benchmark tests.
Pull request overview
This PR restructures the schema benchmark testing to separate functional validation from performance benchmarking. The benchmark project setup is extracted into a reusable fixture (schema_benchmark_app) in conftest.py, enabling both standard tests that verify build correctness and dedicated benchmark tests that measure schema processing performance.
Key changes:
- Extracted benchmark project setup into a reusable fixture
- Added standard test in `test_schema.py` that runs during normal testing
- Simplified benchmark test to focus on performance measurement
- Moved snapshot assertions to the standard test
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| tests/schema/test_schema.py | Adds standard test that validates the benchmark project builds successfully using snapshot assertions |
| tests/schema/snapshots/test_schema.ambr | Contains new snapshot data for the standard benchmark test validation |
| tests/conftest.py | Introduces schema_benchmark_app fixture that dynamically generates benchmark projects with configurable need counts |
| tests/benchmarks/test_schema_benchmark.py | Simplified to focus on performance benchmarking using pytest-benchmark, removing project setup logic |
| tests/benchmarks/__snapshots__sphinx_lt_8/test_schema_benchmark.ambr | Removed as snapshot assertions moved to standard test |
| tests/benchmarks/__snapshots__sphinx_ge_8/test_schema_benchmark.ambr | Removed as snapshot assertions moved to standard test |
Codecov Report

✅ All modified and coverable lines are covered by tests.

```
@@ Coverage Diff @@
## master #1579 +/- ##
==========================================
+ Coverage 86.87% 87.94% +1.06%
==========================================
  Files        56     70      +14
  Lines      6532   9630    +3098
==========================================
+ Hits       5675   8469    +2794
- Misses      857   1161     +304
```
**ubmarco** left a comment
LGTM
Have a standard test, to make sure the build works, and a separate benchmark test.

The standard test runs during standard testing, e.g. when running `tox`, and the benchmark test can be run using e.g.

```shell
tox -e py312-benchmark -- tests/benchmarks/test_schema_benchmark.py --benchmark-columns=min,max,mean
```

Current results on my MacBook: