
ENH: optimize get_value_opt in class Function #501

Merged: 4 commits merged into develop from enh/get-value-opt-method on Dec 11, 2023

Conversation

@styris00 (Contributor) commented Dec 3, 2023

Pull request type

  • Code changes (bugfix, features)

Checklist

  • Lint (black rocketpy/ tests/) has passed locally
  • All tests (pytest --runslow) have passed locally

Current behavior

Calls to get_value_opt for shepard interpolation were relatively slow.

New behavior

Calls to get_value_opt for shepard interpolation are now faster, thanks to NumPy vectorized operations (an illustrative sketch of the idea follows at the end of this description).

Breaking change

  • No
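
Roughly, the idea is the following minimal sketch. This is not RocketPy's actual implementation: the function names are made up for illustration, and the weighting power of 2 and the handling of exact matches are assumptions. The point is only to show the pure-Python Shepard (inverse distance weighting) loop next to an equivalent NumPy-vectorized version.

import numpy as np

def shepard_loop(x_data, y_data, point, power=2.0):
    # Baseline: pure-Python loop over every sample point
    num, den = 0.0, 0.0
    for xi, yi in zip(x_data, y_data):
        d2 = sum((a - b) ** 2 for a, b in zip(xi, point))
        if d2 == 0.0:
            return yi  # the query coincides with a sample point
        w = d2 ** (-power / 2)  # inverse distance weight
        num += w * yi
        den += w
    return num / den

def shepard_vectorized(x_data, y_data, point, power=2.0):
    # Same computation expressed with NumPy vectorized operations
    diff = np.asarray(x_data) - np.asarray(point)  # shape (n, dim)
    d2 = np.einsum("ij,ij->i", diff, diff)         # squared distances, shape (n,)
    exact = d2 == 0.0
    if exact.any():
        return float(np.asarray(y_data)[exact][0])
    w = d2 ** (-power / 2)
    return float(np.dot(w, y_data) / np.sum(w))

The benchmark script later in this conversation compares the old and new get_value_opt in exactly this spirit: same inputs, same result, shorter execution time for the vectorized version.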


codecov bot commented Dec 3, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison is base (13204ce) 71.17% compared to head (1a822a3) 71.16%.
Report is 7 commits behind head on develop.

Additional details and impacted files
@@             Coverage Diff             @@
##           develop     #501      +/-   ##
===========================================
- Coverage    71.17%   71.16%   -0.02%     
===========================================
  Files           55       55              
  Lines         9277     9273       -4     
===========================================
- Hits          6603     6599       -4     
  Misses        2674     2674              
Flag Coverage Δ
unittests 71.16% <100.00%> (-0.02%) ⬇️

Flags with carried forward coverage won't be shown.


@giovaniceotto (Member) commented

Just sending the report made by @styris00 here so that all reviewers can see it.

Optimization_report.pdf

@giovaniceotto (Member) commented Dec 6, 2023

@styris00, can you share the code you used for your execution time reports? Especially the definition of the Function instances.

@styris00 (Contributor, Author) commented Dec 6, 2023

> @styris00, can you share the code you used for your execution time reports? Especially the definition of the Function instances.

I can't manage to attach a Python file, so I've put the code below.

To perform the tests below and compare the two versions of get_value_opt, you have to modify the set_get_value_opt method so that the Function instances have the following attributes (a rough sketch of this modification is shown just before the script):

  • get_value_opt1: a function corresponding to the former get_value_opt attribute
  • get_value_opt2: a function corresponding to the new get_value_opt attribute

(You may also have to change the RocketPy import.)

Note that the matrix MATRIX_RESULTS displayed by the tests function is the one we put in the table of the optimization report.
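
For reference, a hypothetical sketch of the modification described above. This is not RocketPy's actual set_get_value_opt; the bodies below are placeholders, and only the attribute wiring matters for the benchmark script that follows.

# Hypothetical patch inside the Function class, for benchmarking only.
def set_get_value_opt(self):
    def get_value_opt1(*args):
        ...  # former get_value_opt (loop-based shepard interpolation)

    def get_value_opt2(*args):
        ...  # new get_value_opt (NumPy-vectorized shepard interpolation)

    self.get_value_opt1 = get_value_opt1  # old implementation, kept only for comparison
    self.get_value_opt2 = get_value_opt2  # new implementation
    self.get_value_opt = get_value_opt2   # the public attribute keeps pointing at the new version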

import sys
sys.path.append('..')

import numpy as np
from RocketPy.rocketpy import Function  # May need to be changed
from timeit import Timer
import random as rd



def create_data(dim, size):  # Create a random source array used to build a Function object
    rows = []
    for _ in range(size):
        row = tuple(rd.uniform(-1000, 1000) for _ in range(dim))  # inputs
        row = row + (rd.uniform(-50, 50),)  # output
        rows.append(row)
    return np.array(rows)


def create_point(dim):  # Create a random point, as a tuple, to call get_value_opt with
    return tuple(rd.uniform(-1000, 1000) for _ in range(dim))


dimensions = [2, 3, 4, 5]
sizes = [100, 1000, 10000]


def tests(nb=10):
    MATRIX_RESULTS = [["Data Dim", "Data Length", "Time before", "Time after", "Improvement"]]
    correct_values = 0
    for dim in dimensions:
        for size in sizes:
            print("Dim", dim, "Size:", size)
            correct_values_case = 0
            mean_before = 0
            mean_after = 0
            mean_improvement = 0
            for _ in range(nb):
                array = create_data(dim, size)
                point = create_point(dim)
                fun = Function(  # Function instance built from the random source array
                    array,
                    interpolation="shepard",
                    extrapolation="natural",
                    title="test function",
                )
                fun.set_get_value_opt()  # required for fun.get_value_opt1 AND fun.get_value_opt2 to exist

                # Check that both implementations return (nearly) the same value
                value_ref = fun.get_value_opt1(*point)
                value_new = fun.get_value_opt2(*point)
                same_value = np.isclose(value_ref, value_new, rtol=1e-12)
                if same_value:
                    correct_values += 1
                    correct_values_case += 1

                # Expose fun and np at module level so the Timer setup string can import them
                globals()['fun'] = fun
                globals()['np'] = np

                timer = Timer(lambda: fun.get_value_opt1(*point), 'from __main__ import fun, np')
                execution_time_ref = timer.timeit(number=1)
                mean_before += execution_time_ref

                timer = Timer(lambda: fun.get_value_opt2(*point), 'from __main__ import fun, np')
                execution_time_new = timer.timeit(number=1)
                mean_after += execution_time_new

                # New execution time as a percentage of the old one (lower is better)
                improvement = (execution_time_new / execution_time_ref) * 100
                mean_improvement += improvement

            mean_before = mean_before / nb
            mean_after = mean_after / nb
            mean_improvement = mean_improvement / nb
            LINE_RESULTS = [str(dim), str(size), mean_before, mean_after, str(mean_improvement) + " %"]
            MATRIX_RESULTS.append(LINE_RESULTS)
            print("Correct values", correct_values_case, "/", nb)

    print("CORRECT VALUES :", correct_values, "/", nb * len(dimensions) * len(sizes))
    MATRIX_RESULTS = np.array(MATRIX_RESULTS)
    print(MATRIX_RESULTS)


tests(100)  # run the benchmark

@Gui-FernandesBR added the Enhancement label (New feature or request, including adjustments in current codes) on Dec 7, 2023
@Gui-FernandesBR added this to the Release v1.X.0 milestone on Dec 7, 2023
Commit: TST: Add unit test for get_value_opt method
@Gui-FernandesBR (Member) commented

> Just sending the report made by @styris00 here so that all reviewers can see it.
>
> Optimization_report.pdf

This report was really nice to read!

Just wondering, what were the computer specifications when running all the tests? (for example: an i7-7200U running Python 3.10 on Ubuntu Jammy)

@Gui-FernandesBR (Member) left a review comment


Well done! I'm happy to see this speedup in the code.
Your work on this PR was amazing, thanks for your contributions.

@styris00 (Contributor, Author) commented Dec 7, 2023

> > Just sending the report made by @styris00 here so that all reviewers can see it.
> > Optimization_report.pdf
>
> This report was really nice to read!
>
> Just wondering, what were the computer specifications when running all the tests? (for example: an i7-7200U running Python 3.10 on Ubuntu Jammy)

The table was created with an AMD Ryzen 7 6800H on Windows 11.
The same improvements (%) were found with an Intel i5-1035G1 on Windows 11.

If you've run these tests too, we were wondering whether you got roughly the same results.

@Gui-FernandesBR (Member) commented

> > > Just sending the report made by @styris00 here so that all reviewers can see it.
> > > Optimization_report.pdf
> >
> > This report was really nice to read!
> > Just wondering, what were the computer specifications when running all the tests? (for example: an i7-7200U running Python 3.10 on Ubuntu Jammy)
>
> The table was created with an AMD Ryzen 7 6800H on Windows 11. The same improvements (%) were found with an Intel i5-1035G1 on Windows 11.
>
> If you've run these tests too, we were wondering whether you got roughly the same results.

Actually, I didn't run the same tests; I'm trusting your analysis.

@Gui-FernandesBR (Member) commented

I'm comfortable enough with this PR. I think you can merge it if you are comfortable too, or you can wait for another review if you think it's necessary. The PR's main goal was achieved.

@styris00 merged commit 0d18940 into develop on Dec 11, 2023
@styris00 deleted the enh/get-value-opt-method branch on December 11, 2023 at 19:11
@Gui-FernandesBR changed the title from "ENH: optimization of get_value_opt in set_get_value_opt of class Function" to "ENH: optimize get_value_opt in class Function" on Feb 8, 2024