
Python3 support, except parser #229

Closed
wants to merge 22 commits into from

Conversation

dhblum
Collaborator

@dhblum dhblum commented Aug 6, 2020

For #146 and #228

  • Python3 support and pyfmi from anaconda in the BOPTEST docker container, removing the container's dependence on JModelica.org.
  • Python3 support for all repo modules except parser/parser.py and data/data_generator.py. These still require Python2 due to the JModelica.org dependence for compiling FMUs. Therefore, their unit tests (testing/makefile test_parser and testing/makefile test_data_generator) must still be run in the jm testing docker image, which contains JModelica.org and Python2.
  • Changes config.py to config.json.
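One practical upside of moving from config.py to config.json is that a JSON config can be read from either Python version with the stdlib json module. A minimal sketch; the key names below are purely illustrative, not the actual BOPTEST config schema:

```python
import json

# Hypothetical config.json contents -- these keys are illustrative
# only and are NOT taken from the BOPTEST repository.
sample = '{"name": "testcase1", "fmupath": "models/wrapped.fmu", "step": 3600}'

# For a real file, use: config = json.load(open("config.json"))
config = json.loads(sample)
print(config["name"])  # -> testcase1
```

Because json is in the standard library of both Python2 and Python3, the same config file can be consumed by the Python3 modules and the Python2-only parser without any compatibility shims.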

@dhblum
Collaborator Author

dhblum commented Aug 6, 2020

@JavierArroyoBastida When you get a chance, can you review this?

@javiarrobas javiarrobas self-requested a review August 12, 2020 15:06
@SenHuang19
Contributor

Hi Guys,

I ran into two issues when using this updated Docker image:

  • libgfortran3 has to be added to the image
  • for some models, an error is reported when simulating: assimulo.exception.AssimuloException: No support for SuperLU was detected, please verify that SuperLU and SUNDIALS has been installed correctly.

@dhblum
Collaborator Author

dhblum commented Aug 12, 2020

Thanks Sen for taking a look. All of the tests passed 6 days ago, which included running all of the models in the test case container. I'll need to look into these issues. Can you clarify:

  • Are you saying that you needed to install libgfortran3 on your system to run the Docker container? I'm not sure exactly what issue you're describing here.
  • Did the SuperLU error occur when you initialized the test case, or when you advanced it at some specific time?

@SenHuang19
Contributor

Sorry for the confusion.

  • Are you saying that you needed to install libgfortran3 on your system to run the Docker container? I'm not sure exactly what issue you're describing here.
    Yes, I mean that libgfortran3 has to be added.
  • Did the SuperLU error occur when you initialized the test case, or when you advanced it at some specific time?
    Both, actually.

@dhblum
Collaborator Author

dhblum commented Aug 12, 2020

@SenHuang19 Can you please be more specific?

On both Ubuntu 16.04 and 18.04 I ran $ make setup, then $ make build TESTCASE=bestest_air, then $ make run TESTCASE=bestest_air. Then, in a different terminal, I ran $ curl http://127.0.0.1:5000/advance -d '{}' -H "Content-Type: application/json". This simulated the test case ok. I then repeated with TESTCASE=testcase2 and TESTCASE=testcase3, and everything still ran ok.
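For reference, the curl call above can be reproduced in Python using only the standard library. This is a sketch under the same assumptions as the thread (test case container serving on 127.0.0.1:5000, empty JSON body); the helper name is my own, not part of the BOPTEST API:

```python
import json
import urllib.request

def build_advance_request(base_url="http://127.0.0.1:5000", overwrite=None):
    """Build (but do not send) a POST to the /advance endpoint,
    equivalent to: curl <base_url>/advance -d '{}' -H "Content-Type: application/json"
    """
    return urllib.request.Request(
        base_url + "/advance",
        data=json.dumps(overwrite or {}).encode(),
        headers={"Content-Type": "application/json"},
    )

# Sending it requires the test case container to be running:
# with urllib.request.urlopen(build_advance_request()) as resp:
#     result = json.loads(resp.read())
```

Passing a non-empty dict as overwrite would send control-input overwrites in the request body, mirroring how data is passed to curl with -d.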

What test cases are you having issues with? What commands are you running that invoke the errors? What are some system specs (e.g. OS and version) that you are running on?

Review comments were made on testing/makefile and testing/utilities.py (since resolved).
Contributor

@javiarrobas javiarrobas left a comment


This looks good to me. Most of the tests pass on my local host using a Python3 environment. The tests that don't pass (test_data_manager, test_forecast, and test_kpis) fail due to the inability to load Linux-compiled model binaries on my Windows platform. However, having them pass in Travis, with all the other tests passing locally, confirms the soundness of those modules.
Apart from that, the tests properly invoke the boptest image (using Python3) or the jmodelica image (using Python2) depending on what's needed. This way, the use of Python2 is indeed confined to the parser and the data_generator.
I've included a few comments requesting minor changes. Also, I see now that parsing/SimpleRC_noSignalExchangeBlocks.mo uses lowercase whereas the test uses a capital letter here:

self.mo_path = os.path.join(testing_root_dir,'parsing', 'SimpleRC_NoSignalExchangeBlocks.mo')

This needs correction as well.

@dhblum
Collaborator Author

dhblum commented Aug 14, 2020

Thanks @JavierArroyoBastida. I addressed your minor changes and am also taking care of the seemingly duplicate model file in #230. UPDATE: #230 has been closed and merged to master.

@dhblum
Collaborator Author

dhblum commented Aug 26, 2020

We shouldn't merge this until the issue described here is resolved: #146 (comment)

@dhblum
Collaborator Author

dhblum commented Nov 10, 2020

Closing until the above-mentioned issues can be resolved.

@dhblum dhblum closed this Nov 10, 2020