Move to qgridnext #2814

Merged
merged 9 commits into tardis-sn:master on Sep 9, 2024

Conversation

sarthak-dv
Contributor

📝 Description

Type: 🎢 infrastructure

To avoid future errors that may be caused by the deprecated qgrid library, we are temporarily moving to qgridnext until we find a reliable alternative.

📌 Resources

Examples, notebooks, and links to useful references.

🚦 Testing

How did you test these changes?

  • Testing pipeline
  • Other method (describe)
  • My changes can't be tested (explain why)

☑️ Checklist

  • I requested two reviewers for this pull request
  • I updated the documentation according to my changes
  • I built the documentation by applying the build_docs label

Note: If you are not allowed to perform any of these actions, ping (@) a contributor.

@tardis-bot
Contributor

tardis-bot commented Aug 29, 2024

*beep* *bop*
Hi human,
I ran ruff on the latest commit (3156958).
Here are the outputs produced.
Results can also be downloaded as artifacts here.
Summarised output:

2	I001	[*] Import block is un-sorted or un-formatted
2	E999	[ ] SyntaxError: Expected an expression
1	E712	[*] Avoid equality comparisons to `False`; use `if not ...:` for false checks
1	F401	[*] `re` imported but unused

Complete output (might be large):

.github/workflows/tests.yml:7:4: E999 SyntaxError: Expected an expression
tardis/visualization/widgets/line_info.py:3:1: I001 [*] Import block is un-sorted or un-formatted
tardis/visualization/widgets/line_info.py:3:8: F401 [*] `re` imported but unused
tardis/visualization/widgets/line_info.py:305:21: E712 Avoid equality comparisons to `False`; use `if not ...:` for false checks
tardis/visualization/widgets/util.py:3:1: I001 [*] Import block is un-sorted or un-formatted
tardis_env3.yml:3:10: E999 SyntaxError: Expected an expression
Found 6 errors.
[*] 3 fixable with the `--fix` option (1 hidden fix can be enabled with the `--unsafe-fixes` option).

@sarthak-dv sarthak-dv marked this pull request as draft August 29, 2024 04:48
@sarthak-dv sarthak-dv marked this pull request as ready for review August 29, 2024 17:09

codecov bot commented Aug 29, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 70.79%. Comparing base (b79b61e) to head (ecc90b1).
Report is 1 commit behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #2814      +/-   ##
==========================================
- Coverage   70.96%   70.79%   -0.18%     
==========================================
  Files         209      209              
  Lines       15638    15637       -1     
==========================================
- Hits        11098    11070      -28     
- Misses       4540     4567      +27     

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

Member

@jamesgillanders left a comment

Looks good to me!

@jamesgillanders
Member

@andrewfullard We think it makes sense that this benchmark test fails, but we need a second opinion. If it's a problem, let us know -- if it isn't, we will merge once Jaladh has reviewed it.

@andrewfullard
Contributor

Seems that qgridnext doesn't exist in conda, should be a pip install instead
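For reference, a package that is only available on PyPI can still be declared in a conda environment file through a pip subsection. Below is a minimal sketch of how qgridnext could be listed in tardis_env3.yml this way; the surrounding entries are placeholders, not the project's actual pins.

```yaml
# Sketch of a conda environment file with a pip-installed dependency.
# Package names other than qgridnext are placeholders, not TARDIS's real pins.
name: tardis
channels:
  - conda-forge
dependencies:
  - python
  - numpy
  - pandas
  - pip
  # qgridnext is not packaged on conda channels, so it is pulled from PyPI:
  - pip:
      - qgridnext
```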

@sarthak-dv
Contributor Author

Seems that qgridnext doesn't exist in conda, should be a pip install instead

Yes, that seems to be the issue here. I've added `pip install qgridnext` in tests.yml; do I need to add it somewhere else for the benchmarks to pass?

Member

@jaladh-singhal left a comment

Thank you for this PR @sarthak-dv - I tested the widgets notebooks locally, they work now!

However, there's a problem with the dependency specification - see my comment (it's a small change).

tardis_env3.yml Outdated
Comment on lines +83 to +87
- name: Install qgridnext
if: ${{ !inputs.pip_git }}
run: |
pip install qgridnext

Member

This shouldn't be needed after the change I suggested in the env file. Try removing it.

Contributor Author

I tried removing this, but the tests failed because qgridnext is not installed; this step installs qgridnext first so the tests can run. If we don't need this step, the other option would be to enable the `pip install -e . --user` line in the file, but that would install all the dependencies again.

Member

ok

@jaladh-singhal
Member

@sarthak-dv benchmarks "should" pass after the suggested change.

Unrelated, @andrewfullard do we need to update conda lock files as well?

@andrewfullard
Contributor

@sarthak-dv benchmarks "should" pass after the suggested change.

Unrelated, @andrewfullard do we need to update conda lock files as well?

Yes, we will need to regenerate the environment.
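For context, one common way to regenerate pinned lock files from the updated environment file is the conda-lock tool. The step below is only a hypothetical sketch; the platform list is an assumption, not the project's actual lock-generation setup.

```yaml
# Hypothetical workflow step sketching how lock files could be regenerated
# with conda-lock; the platforms listed are assumptions, not TARDIS's actual
# configuration.
- name: Regenerate conda lock files
  run: |
    pip install conda-lock
    conda-lock --file tardis_env3.yml --platform linux-64 --platform osx-64
```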

@tardis-bot
Contributor

tardis-bot commented Sep 4, 2024

*beep* *bop*
Hi human,
I ran benchmarks as you asked comparing master (be4ec9a) and the latest commit (3156958).
Here are the logs produced by ASV.
Results can also be downloaded as artifacts here.

Significantly changed benchmarks:

All benchmarks:

Benchmarks that have stayed the same:

| Change   | Before [be4ec9a4] <master>   | After [31569583]    | Ratio   | Benchmark (Parameter)                                                                                                               |
|----------|------------------------------|---------------------|---------|-------------------------------------------------------------------------------------------------------------------------------------|
|          | 22.7±6μs                     | 28.6±8μs            | ~1.26   | transport_montecarlo_packet_trackers.BenchmarkTransportMontecarloPacketTrackers.time_generate_rpacket_last_interaction_tracker_list |
|          | 2.59±0.5ms                   | 2.89±0.4ms          | ~1.12   | transport_montecarlo_single_packet_loop.BenchmarkTransportMontecarloSinglePacketLoop.time_single_packet_loop                        |
|          | 252±7ns                      | 204±0.01ns          | ~0.81   | spectrum_formal_integral.BenchmarkTransportMontecarloFormalIntegral.time_intensity_black_body                                       |
|          | 3.80±1μs                     | 2.99±2μs            | ~0.79   | transport_montecarlo_vpacket.BenchmarkMontecarloMontecarloNumbaVpacket.time_trace_bad_vpacket                                       |
|          | 1.67±0.01ms                  | 1.82±0.01ms         | 1.09    | transport_montecarlo_main_loop.BenchmarkTransportMontecarloMontecarloMainLoop.time_montecarlo_main_loop                             |
|          | 3.19±0.02ms                  | 3.44±0.01ms         | 1.08    | opacities_opacity_state.BenchmarkOpacitiesOpacityState.time_opacity_state_initialize('scatter')                                     |
|          | 44.0±30μs                    | 47.0±20μs           | 1.07    | transport_montecarlo_interaction.BenchmarkTransportMontecarloInteraction.time_line_emission                                         |
|          | 65.7±0.3ms                   | 69.5±2ms            | 1.06    | transport_montecarlo_packet_trackers.BenchmarkTransportMontecarloPacketTrackers.time_rpacket_trackers_to_dataframe                  |
|          | 7.10±2μs                     | 7.36±2μs            | 1.04    | transport_montecarlo_vpacket.BenchmarkMontecarloMontecarloNumbaVpacket.time_trace_vpacket_volley                                    |
|          | 39.4±0.1s                    | 40.5±0.3s           | 1.03    | run_tardis.BenchmarkRunTardis.time_run_tardis                                                                                       |
|          | 1.04±0m                      | 1.07±0m             | 1.03    | run_tardis.BenchmarkRunTardis.time_run_tardis_rpacket_tracking                                                                      |
|          | 714±0.3ns                    | 737±1ns             | 1.03    | transport_montecarlo_interaction.BenchmarkTransportMontecarloInteraction.time_thomson_scatter                                       |
|          | 3.72±0.01ms                  | 3.77±0.01ms         | 1.01    | opacities_opacity_state.BenchmarkOpacitiesOpacityState.time_opacity_state_initialize('macroatom')                                   |
|          | 2.07±0m                      | 2.07±0m             | 1.00    | spectrum_formal_integral.BenchmarkTransportMontecarloFormalIntegral.time_FormalIntegrator_functions                                 |
|          | 31.4±0.09μs                  | 31.5±0.02μs         | 1.00    | transport_montecarlo_packet_trackers.BenchmarkTransportMontecarloPacketTrackers.time_generate_rpacket_tracker_list                  |
|          | 1.20±0μs                     | 1.16±0μs            | 0.97    | transport_geometry_calculate_distances.BenchmarkTransportGeometryCalculateDistances.time_calculate_distance_boundary                |
|          | 2.21±2μs                     | 2.14±1μs            | 0.97    | transport_montecarlo_estimators_radfield_estimator_calcs.BenchmarkMontecarloMontecarloNumbaPacket.time_update_line_estimators       |
|          | 521±100ns                    | 501±100ns           | 0.96    | opacities_opacity.BenchmarkMontecarloMontecarloNumbaOpacities.time_compton_opacity_calculation                                      |
|          | 561±200ns                    | 531±100ns           | 0.95    | opacities_opacity.BenchmarkMontecarloMontecarloNumbaOpacities.time_pair_creation_opacity_calculation                                |
|          | 6.95±0.5μs                   | 6.63±0.8μs          | 0.95    | transport_montecarlo_vpacket.BenchmarkMontecarloMontecarloNumbaVpacket.time_trace_vpacket                                           |
|          | 631±100ns                    | 591±100ns           | 0.94    | opacities_opacity.BenchmarkMontecarloMontecarloNumbaOpacities.time_photoabsorption_opacity_calculation                              |
|          | 1.60±0.4μs                   | 1.50±0.4μs          | 0.94    | transport_geometry_calculate_distances.BenchmarkTransportGeometryCalculateDistances.time_calculate_distance_line                    |
|          | 3.37±0.6μs                   | 3.16±0.3μs          | 0.94    | transport_montecarlo_vpacket.BenchmarkMontecarloMontecarloNumbaVpacket.time_trace_vpacket_within_shell                              |
|          | 46.4±30μs                    | 43.3±30μs           | 0.93    | transport_montecarlo_interaction.BenchmarkTransportMontecarloInteraction.time_line_scatter                                          |

If you want to see the graph of the results, you can check it here

@sarthak-dv
Contributor Author

The tests in test_spectrum_solver.py are showing errors; I don't think they are related to the changes made in this PR. Other than that, the benchmarks build is passing after making the changes @jaladh-singhal suggested.

@andrewfullard
Contributor

The tests in test_spectrum_solver.py are showing errors; I don't think they are related to the changes made in this PR. Other than that, the benchmarks build is passing after making the changes @jaladh-singhal suggested.

Yeah, that's fine - the test failures are because of changes to regression data in support of another PR.

@tardis-bot
Contributor

*beep* *bop*

Hi, human.

The docs workflow has failed

Click here to see the build log.

@andrewfullard
Contributor

Looks like the docs build from the lock file, so they won't work until that is updated too.
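As a hedged illustration of why the docs break: a docs job that creates its environment from a pre-generated lock file will not see qgridnext until that lock file is regenerated. The step below is only a sketch; the lock file name and environment name are assumptions, not the actual docs workflow.

```yaml
# Hypothetical docs-workflow step: the environment is created from a pinned
# lock file, so new dependencies such as qgridnext only appear once the lock
# file itself has been regenerated. File and env names are assumptions.
- name: Create docs environment from lock file
  run: |
    conda create --name tardis --file conda-linux-64.lock --yes
```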

Member

@jaladh-singhal left a comment

Thanks for the updates. @andrewfullard if you're happy, please merge it.

Member

@jamesgillanders left a comment

Looks good to me -- thanks @sarthak-dv! I'll leave Andrew to verify that the various failing tests are okay, and if so, he can merge.

@andrewfullard andrewfullard merged commit 260207c into tardis-sn:master Sep 9, 2024
20 of 26 checks passed
andrewfullard added a commit that referenced this pull request Sep 10, 2024

5 participants