
Regression testing of Modelica and ModelicaTest

Documentation

MSL regression testing started in 2014. See #1392 and MSLRegressionTesting.pdf for the documentation.

Steps

Regression testing consists of several steps, which should be automated as far as possible.

Run the pedantic check in Dymola

  1. Load the Modelica and ModelicaTest libraries in Dymola and verify that the Dymola-specific ModelicaServices library is loaded as well.

  2. Remove Modelica/Resources/C-Sources/ModelicaInternal.c from the MSL sources when testing with a Dymola version < 2021.

  3. Clone the Release checks repository

git clone https://github.com/modelica-tools/msl-release

  4. Open ReleaseChecks.mo in Dymola.

  5. Run the pedantic check on Modelica and ModelicaTest

ReleaseChecks.printExecutables(libraries={"Modelica", "ModelicaTest"})

  6. Check the created file "log_failed.txt" for all models that fail the pedantic translation and create a new GitHub issue. See #3435 for an example report.

Simulate all example and test models with Dymola

  1. Open a Git bash in the repository under test and run

git config --get remote.origin.url
git rev-parse --short HEAD
git status --porcelain --untracked-files=no

to determine the Git URL, the Git revision and the Git status of the repository under test.
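
For later use in the ReleaseChecks.simulateExecutables call below, the three values can be captured in shell variables; a minimal Git-bash sketch (the variable names are illustrative only):

# Capture repository metadata for the simulateExecutables call
GIT_URL=$(git config --get remote.origin.url)
GIT_REVISION=$(git rev-parse --short HEAD)
# Join the status lines with ", " as used in the gitStatus argument below
GIT_STATUS=$(git status --porcelain --untracked-files=no | sed ':a;N;$!ba;s/\n/, /g')
echo "$GIT_URL $GIT_REVISION $GIT_STATUS"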

  2. Set up the date utility tool

Extract getdatetime.exe from getdatetime.zip to the working directory.

  3. Simulate all example and test models with a default tolerance of 1e-6
ReleaseChecks.simulateExecutables(
  libraries={"Modelica", "ModelicaTest"},
  directories={"modelica://Modelica/Resources/Reference", "modelica://ModelicaTest/Resources/Reference"},
  incrementalRun=false,
  keepResultFile=false,
  numberOfIntervals=5000,
  tolerance=1e-6,
  useTolerance=false,
  gitURL="https://github.com/modelica/ModelicaStandardLibrary.git",
  gitRevision="bd753330a",
  gitStatus="D Modelica/Resources/C-Sources/ModelicaInternal.c, M ModelicaServices/package.mo",
  description="Reg test MSL v4.0.0-rc.1")

Check the options of ReleaseChecks.simulateExecutables. For example, set the option keepResultFile if you want to keep the simulation MAT files containing the trajectories of all variables, or set the compiler option if you want to use a compiler other than the default.

The simulation results will be written to the output directories "modelica://Modelica/Resources/Reference/Modelica" and "modelica://ModelicaTest/Resources/Reference/ModelicaTest", respectively. If these directories already contain simulation log files or simulation result files from previous runs, those files will be deleted first.

Manual steps during and after the run:

  • The simulation of ModelicaTest.Fluid.TestPipesAndValves.DynamicPipeClosingValve does not terminate and needs to be manually stopped during the run. See #3415 for the report.

  • The simulation results of Modelica.Electrical.Analog.Examples.OpAmps.DifferentialAmplifier are invalid due to #3518. The simulation log files need to be deleted manually after the run so that a repeated run simulates the model again (one possible cleanup command is sketched after this list).
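
A Git-bash sketch of this cleanup (the path and file pattern are assumptions; adapt them to the actual output layout):

# Delete the DifferentialAmplifier simulation log files so that a
# repeated run simulates the model again
find Modelica/Resources/Reference/Modelica -path '*DifferentialAmplifier*' -name '*.log' -delete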

  4. Repeat the simulation for all failed models with an explicit tolerance of 1e-6
ReleaseChecks.simulateExecutables(
  libraries={"Modelica", "ModelicaTest"},
  directories={"modelica://Modelica/Resources/Reference", "modelica://ModelicaTest/Resources/Reference"},
  incrementalRun=true,
  keepResultFile=false,
  numberOfIntervals=5000,
  tolerance=1e-6,
  useTolerance=true,
  gitURL="https://github.com/modelica/ModelicaStandardLibrary.git",
  gitRevision="bd753330a",
  gitStatus="D Modelica/Resources/C-Sources/ModelicaInternal.c, M ModelicaServices/package.mo",
  description="Reg test MSL v4.0.0-rc.1")
  5. Repeat the simulation for all failed models with an explicit tolerance of 2e-5
ReleaseChecks.simulateExecutables(
  libraries={"Modelica", "ModelicaTest"},
  directories={"modelica://Modelica/Resources/Reference", "modelica://ModelicaTest/Resources/Reference"},
  incrementalRun=true,
  keepResultFile=false,
  numberOfIntervals=5000,
  tolerance=2e-5,
  useTolerance=true,
  gitURL="https://github.com/modelica/ModelicaStandardLibrary.git",
  gitRevision="bd753330a",
  gitStatus="D Modelica/Resources/C-Sources/ModelicaInternal.c, M ModelicaServices/package.mo",
  description="Reg test MSL v4.0.0-rc.1")
  6. Delete the simulation log files of Modelica.Electrical.Analog.Examples.OpAmps.DifferentialAmplifier and repeat the simulation with an explicit tolerance of 1e-8
ReleaseChecks.simulateExecutables(
  libraries={"Modelica", "ModelicaTest"},
  directories={"modelica://Modelica/Resources/Reference", "modelica://ModelicaTest/Resources/Reference"},
  incrementalRun=true,
  keepResultFile=false,
  numberOfIntervals=5000,
  tolerance=1e-8,
  useTolerance=true,
  gitURL="https://github.com/modelica/ModelicaStandardLibrary.git",
  gitRevision="bd753330a",
  gitStatus="D Modelica/Resources/C-Sources/ModelicaInternal.c, M ModelicaServices/package.mo",
  description="Reg test MSL v4.0.0-rc.1")
  7. Search the two output directories for files "check_failed.log", "translate_failed.log" and "simulate_failed.log" and create new GitHub issues.
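
A Git-bash one-liner to list all failure logs (a sketch, assuming the two output directories mentioned above):

find Modelica/Resources/Reference/Modelica ModelicaTest/Resources/Reference/ModelicaTest -name 'check_failed.log' -o -name 'translate_failed.log' -o -name 'simulate_failed.log'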

Run the comparison test

  1. Download the reference result files to be used as the baseline. It is assumed that this directory sits next to the MSL root directory.

See the Reference results page for how to obtain the reference results.
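
For example, the reference results can be cloned from the MAP-LIB repository also referenced at the end of this page (a sketch; check out the branch or tag matching the release under test):

cd ..
git clone https://github.com/modelica/MAP-LIB_ReferenceResults.git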

  2. Set up the CSV Compare tool

Extract Compare.exe from csv-compare.windows-v2.0.3.x64.zip to the root directory of the MSL repository.
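
If the archive is not at hand, it is presumably available from the releases of the modelica-tools/csv-compare project on GitHub; the exact asset URL below is an assumption, and the archive can just as well be downloaded and extracted manually:

curl -LO https://github.com/modelica-tools/csv-compare/releases/download/v2.0.3/csv-compare.windows-v2.0.3.x64.zip
unzip csv-compare.windows-v2.0.3.x64.zip Compare.exe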

  3. In the MSL root directory, delete the file "log.txt" if it already exists from a previous run

  4. Run the comparison with the "bitmap" option

Compare.exe --mode csvTreeCompare --override --bitmap --tolerance 2e-3 --delimiter "," --verbosity 2 .\Modelica\Resources\Reference\ ..\MAP-LIB_ReferenceResults\ --logfile logall.txt --comparisonflag --reportdir C:\temp\MSL400\
  5. Identify the failed models

The workflow is to repeat the comparison run without the "bitmap" option for the failed models only. To do so, copy all directories containing a file "compare_failed.log" to a separate location while maintaining the directory structure. You can create a script for this job (see the sketch below), or use a file manager such as Total Commander with particular plugins.
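
A Git-bash sketch of the copy step (the target directory C:\compare\Failed matches the Total Commander example below; the loop itself is illustrative):

# Copy every directory containing a compare_failed.log to C:\compare\Failed,
# preserving the relative directory structure
find . -name compare_failed.log -printf '%h\n' | while read -r dir; do
  mkdir -p "/c/compare/Failed/$dir"
  cp -r "$dir"/. "/c/compare/Failed/$dir"
done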

Alternatively, in Total Commander:

  • Delete C:\compare\Failed
  • Find all directories (ALT+F7) containing a file "compare_failed.log" using the content plugin FileInDir

  • Feed the search result to listbox (ALT+L)
  • Select the found directories (CTRL+A) and copy the directory structure to C:\compare\Failed\ using the packer plugin TreeCopyPlus (ALT+F5)

  6. Repository creation

If you want to share the comparison reports, you can use a GitHub repository to host the website.

  • Create a new Git repository (e.g., "MSL400RC1FAILED") at GitHub
  • Clone this empty repository, for example to C:\compare\MSL400RC1FAILED
  7. Repeat the comparison without the "bitmap" option, but with the "failedonly" option
Compare.exe --mode csvTreeCompare --override --tolerance 2e-3 --delimiter "," --verbosity 2 C:\compare\Failed ..\MAP-LIB_ReferenceResults\ --logfile log.txt --failedonly --comparisonflag --reportdir c:\compare\MSL400RC1FAILED\
  8. Upload the comparison report
  • Add all CSS, HTML and JS files to the repository, commit and push (see the sketch after this list).
  • In the settings page of the GitHub repository, set up GitHub Pages to create a static website from the master branch

  • Create a new issue at GitHub pointing to the HTML website of the comparison report
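
The upload itself is plain Git; a sketch (it assumes the report was written to C:\compare\MSL400RC1FAILED as above and that this directory contains only the report files):

cd /c/compare/MSL400RC1FAILED
git add .
git commit -m "Add MSL comparison report"
git push origin master
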
  9. Repeat for ModelicaTest

Repeat steps 3 to 8 for ModelicaTest (see below for the adapted comparison call of step 4).
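
For example, the "bitmap" comparison run of step 4 would then read (assuming the ModelicaTest reference layout mirrors the Modelica one; adapt the report directory as needed):

Compare.exe --mode csvTreeCompare --override --bitmap --tolerance 2e-3 --delimiter "," --verbosity 2 .\ModelicaTest\Resources\Reference\ ..\MAP-LIB_ReferenceResults\ --logfile logall.txt --comparisonflag --reportdir C:\temp\MSL400\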

  10. Upload the simulation log files and simulation data

TODO: Upload to a new branch of https://github.com/modelica/MAP-LIB_ReferenceResults
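
Until this step is formalized, a plausible manual sketch (the branch name and file layout are assumptions):

cd ../MAP-LIB_ReferenceResults
git checkout -b msl-v4.0.0-rc.1-logs
# copy the simulation log files and simulation data into the repository here
git add .
git commit -m "Add simulation logs and data for MSL v4.0.0-rc.1"
git push origin msl-v4.0.0-rc.1-logs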