A dashboard to monitor unit tests for Mantid
Run

```shell
pip install -r requirements.txt
```

Example 1:

```shell
python main.py -u 'https://builds.mantidproject.org/' -p 'build_packages_from_branch' 'main_nightly_deployment_prototype' -n 30 -t "Mac" -f "darwin17.log" -t "Mac" "Linux" "Windows" -f "darwin17.log" "linux-gnu.log" "msys.log"
```

Example 2:

```shell
python main.py -u 'http://localhost:99202/' -p 'ctest-sample' 'ctest-sample-2' -n 35 -a 'your-username' 'your-username' -t "Windows" "Mac" "Linux" -f "windows-64-ci.log" "osx-64-ci.log" "linux-64-ci.log" -t "Windows" -f "windows-64-ci.log"
```
Argument | Description | Example
---|---|---
`-u`, `--jenkins_url` | URL of your Jenkins instance; you have to include the trailing `/` | `https://builds.mantidproject.org/`
`-p`, `--pipeline_name` | list of pipeline names to monitor (pass multiple strings) | `'build_packages_from_branch' 'main_nightly_deployment_prototype'`
`-n`, `--num_past_build` | the number of builds, counted back from the latest, to parse | `15`
`-a`, `--auth` | username and password, in that order, if Jenkins requires login to view builds and artifacts | `'your-username' 'your-password'`
`-t`, `--target` | list of target names to parse. Use one `-t` flag per pipeline: the number and order of `-t` flags must match the pipelines given in `-p` | `"Windows" "Mac" "Linux"`
`-f`, `--file_name` | list of log file names to parse. Use one `-f` flag per pipeline: the number and order of `-f` flags must match the pipelines given in `-p` | `"windows-64-ci.log" "osx-64-ci.log" "linux-64-ci.log"`
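The per-pipeline `-t`/`-f` behaviour described above can be expressed with `argparse` using `action='append'` together with `nargs='+'`, so each occurrence of the flag contributes one list. This is a minimal sketch of the CLI, not the project's actual parser; the validation at the end is illustrative:

```python
import argparse

# Minimal sketch of the CLI described in the table above (illustrative).
parser = argparse.ArgumentParser(description="Unit-test dashboard generator")
parser.add_argument("-u", "--jenkins_url", required=True,
                    help="Jenkins URL, including the trailing /")
parser.add_argument("-p", "--pipeline_name", nargs="+", required=True,
                    help="pipeline names to monitor")
parser.add_argument("-n", "--num_past_build", type=int, default=15,
                    help="number of builds back from the latest to parse")
parser.add_argument("-a", "--auth", nargs=2, metavar=("USER", "PASSWORD"),
                    help="credentials if Jenkins requires login")
# action='append' collects one list per occurrence of the flag, so
# repeating -t/-f once per pipeline yields a list of lists.
parser.add_argument("-t", "--target", nargs="+", action="append", required=True)
parser.add_argument("-f", "--file_name", nargs="+", action="append", required=True)

args = parser.parse_args([
    "-u", "https://builds.mantidproject.org/",
    "-p", "pipeline_a", "pipeline_b",
    "-t", "Mac", "-f", "darwin17.log",
    "-t", "Mac", "Linux", "-f", "darwin17.log", "linux-gnu.log",
])
# one -t/-f group per pipeline, in the same order as -p
assert len(args.target) == len(args.pipeline_name)
```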
The generated webpage will be at `dist/index.html`.
`main.py` does the following:

- gathers the arguments with `argparse`
- sets the general settings in the environment variable section
- copies resources such as logos, Bootstrap, and Plotly JS from `assets` to `dist/assets`
- loads the JSON pickle from `history/{pipeline_name}/{pipeline_name}_by_build_fail_pickle` if it exists
- updates the object by looking at the builds in the build range
- saves the new object to `history/{pipeline_name}/{pipeline_name}_by_build_fail_pickle` (allows object operations) and to `history/{pipeline_name}/{pipeline_name}_by_build_fail.json` (an easier-to-read version)
- generates the `pandas.DataFrame` for the chart data with `chart_helper.get_chart_DF`
- generates the HTML for the Plotly line chart using `chart_helper.plot_line_chart_plotly`
- creates the JSON file for the aggregate data table and copies it
- creates and copies the JSON for the data table using `datatable_helper.fail_test_table_data_gen`. This also saves the tabulated failed-test results in `history/{pipeline_name}/{pipeline_name}_by_name_fail/{pipeline_name}_{agent_key}_failed_detail_store.json` (an easier-to-read version) and `history/{pipeline_name}/{pipeline_name}_by_name_fail/{pipeline_name}_{agent_key}_failed_detail_store_pickle.json` (allows object operations), and it updates the combined data store in `history/combined_result/{OS}_combined_failed_detail_store.json`, which aggregates failed builds across pipelines on the same platform
- generates the HTML for the failed-test DataTable with `jinja2` and `assets/agg_table.html.j2`; note that you have to add the helper functions to the header of `assets/content.html.j2`
- generates the HTML for the page using `jinja2` and `assets/content.html.j2`
- repeats the steps above for every pipeline
- generates a page of aggregate results for each environment (Windows, macOS, Linux, etc.). This includes the test name, the pipelines and builds in which it failed, and the last detected fail date. The last detected fail date is maintained by checking whether the entry for a test in `history/combined_result/{OS}_combined_failed_detail_store.json` has changed and, if so, updating it with the current date
- generates an index page using `jinja2` and `assets/index.html.j2` that links to every pipeline dashboard and to the combined-result dashboard
- copies all the pages and related JSON files to `dist`
This repo is deployed using the legacy GitHub Pages mode: enable GitHub Pages in the repo settings -> Pages, set the source to "Deploy from a branch", and point it at the `gh-pages` branch.

The update action runs daily at 14:00 UTC. You can also run it manually from Actions -> "Build webpage" workflow.
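The daily trigger corresponds to a `schedule` block in the workflow file. This fragment is illustrative (the workflow file name and the `workflow_dispatch` entry are assumptions; only the 14:00 UTC cron time comes from the text above):

```yaml
# .github/workflows/build-webpage.yml (illustrative fragment)
on:
  schedule:
    - cron: "0 14 * * *"   # daily at 14:00 UTC
  workflow_dispatch: {}     # enables the manual "Run workflow" button
```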
- If you change a target file name, e.g. `darwin17.log`, it might cause problems when updating existing data, which could lead to data loss.
- The format of the Jenkins URL may also change in the future; modify `data_collector.Remote_source` to change the logic for how the URL is built.
- Some variables are hard-coded in `main.py`, as they are unlikely to change without a significant system change:
  - styling
  - location for saving files
  - pattern for log parsing (`grok_pattern`)
  - keys of the outcome columns for plotting (`columns`, `x_column`, and `y_columns`)
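A grok pattern is essentially a regular expression with named groups, so the log parsing step can be pictured as below. The pattern and the ctest-style line format are illustrative assumptions, not the project's actual `grok_pattern`:

```python
import re

# Illustrative named-group pattern for a ctest-style failure line, e.g.
# "12/345 Test  #12: MyTestName ........***Failed   1.23 sec"
GROK_PATTERN = re.compile(
    r"Test\s+#(?P<test_id>\d+):\s+(?P<test_name>\S+)\s+"
    r"\.*\*{3}(?P<outcome>Failed|Timeout)\s+(?P<duration>[\d.]+)\s+sec"
)

def parse_failed_tests(log_text):
    """Return one dict of named-group captures per failed test in the log."""
    return [m.groupdict() for m in GROK_PATTERN.finditer(log_text)]
```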