Better eval for beta #23934
Conversation
✅ Hi, I am the SymPy bot (v167). I'm here to help you write a release notes entry. Please read the guide on how to write release notes. Your release notes are in good order. Here is what the release notes will look like; this will be added to https://github.com/sympy/sympy/wiki/Release-Notes-for-1.12.
Update: The release notes on the wiki have been updated.
I'd like to think more about avoiding automatic evaluation based on assumptions and symbolic identities, preferring to have those in doit. So how do you feel about only evaluating these cases when they are explicit numbers (which are the only things you added tests for anyway), and moving the more fully symbolic cases to doit?
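A rough illustration of the split being proposed here, using SymPy's beta function and the identity B(x, y) = Γ(x)Γ(y)/Γ(x + y); the comments describe the intended behavior under the proposal, not the exact output of any particular SymPy version:

```python
from sympy import beta, symbols

x = symbols("x", positive=True)

# Explicit numbers: cheap and unambiguous, fine to evaluate automatically.
beta(1, 5)           # would evaluate directly to 1/5

# Fully symbolic identities: left alone until explicitly requested.
beta(x, 1)           # stays as beta(x, 1)
beta(x, 1).doit()    # would apply the identity B(x, 1) = 1/x
```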
Sure! I'm actually thinking that in general one may not always want automatic evaluation for numbers either, only for "trivial" cases (however that is defined...). There are e.g. the often very useful trigonometric evaluation tables that one may not always want. So I'm in favor of going in the direction of moving more time-consuming and/or symbolic evaluation to doit.
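A concrete example of the kind of table-driven automatic evaluation mentioned above; this is standard SymPy behavior at the time of writing, not something introduced by this PR:

```python
from sympy import cos, pi

# The special-angle table fires automatically on construction,
# whether or not the caller wanted the exact radical form.
cos(pi / 12)   # -> sqrt(6)/4 + sqrt(2)/4
```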
This opens up some questions about how to deal with this in general, especially when evaluate=False is involved. Also found a printing issue with LaTeX (didn't check the other printers, though...).
Ideally there should be so little automatic evaluation that evaluate=False isn't even necessary. But for things like this, you typically have the same identity in doit, so eval can simply call doit when the arguments are explicit numbers.
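A minimal, self-contained illustration of that escape hatch and the deferred-evaluation call, using core Add rather than anything beta-specific:

```python
from sympy import Add, Integer

# evaluate=False suppresses automatic evaluation at construction time...
expr = Add(Integer(1), Integer(1), evaluate=False)   # stays as 1 + 1

# ...and doit() performs the deferred evaluation on request.
expr.doit()                                          # -> 2
```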
Force-pushed from 290a1ab to fc7714f.
I used this pattern now, having all the actual eval in doit and calling it if both arguments are Numbers. This changes things so that direct evaluation of e.g. the symbolic special cases now requires an explicit doit. This particular function becomes a bit messy as it must support one or two arguments, but apart from that I believe that this is probably quite a good pattern.
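A minimal sketch of that pattern, using a hypothetical MyBeta stand-in rather than the PR's actual code, and only the two-argument form:

```python
from sympy import Function, Number, gamma

class MyBeta(Function):
    """Hypothetical example of the pattern: all real evaluation lives in
    doit(); eval() only triggers it when both arguments are explicit
    Numbers, so symbolic arguments stay unevaluated."""

    @classmethod
    def eval(cls, x, y):
        # Automatic evaluation only for explicit numbers, e.g. MyBeta(2, 3).
        if isinstance(x, Number) and isinstance(y, Number):
            return cls(x, y, evaluate=False).doit(deep=False)

    def doit(self, deep=True, **hints):
        x, y = self.args
        if deep:
            x = x.doit(deep=deep, **hints)
            y = y.doit(deep=deep, **hints)
        # The full identity, also valid symbolically: B(x, y) = Γ(x)Γ(y)/Γ(x + y).
        return gamma(x)*gamma(y)/gamma(x + y)
```

With this sketch, MyBeta(2, 3) returns 1/12 immediately, while MyBeta(x, y) with symbolic arguments stays unevaluated until .doit() rewrites it in terms of gamma.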
For some functions, such as ...
Maybe we should think of ways to make this pattern easier in the Function base class.
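One hypothetical direction (not existing SymPy API, just a sketch of the idea): a small helper base class that implements the numbers-only eval once, so subclasses only have to define doit:

```python
from sympy import Function, Number

class AutoNumberEval(Function):
    """Hypothetical helper base class: subclasses implement only doit(),
    and eval() triggers it automatically when every argument is an
    explicit Number."""

    @classmethod
    def eval(cls, *args):
        if args and all(isinstance(a, Number) for a in args):
            return cls(*args, evaluate=False).doit(deep=False)
```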
Benchmark results from GitHub Actions

Lower numbers are good, higher numbers are bad. A ratio less than 1 means the benchmark got faster; a ratio greater than 1 means it got slower.

Significantly changed benchmark results (PR vs master)
Significantly changed benchmark results (master vs previous release)

       before           after         ratio
[41d90958] [7135ae37]
<sympy-1.11.1^0>
- 959±2μs 614±1μs 0.64 solve.TimeSparseSystem.time_linear_eq_to_matrix(10)
- 2.80±0.01ms 1.15±0ms 0.41 solve.TimeSparseSystem.time_linear_eq_to_matrix(20)
- 5.64±0.01ms 1.68±0ms 0.30 solve.TimeSparseSystem.time_linear_eq_to_matrix(30)
Full benchmark results can be found as artifacts in GitHub Actions.
I rebased.
References to other Issues or PRs
Brief description of what is fixed or changed
Added a few more cases for evaluating beta.

Other comments
Release Notes
beta evaluates directly for more cases.