component test ports/fixes in python3 #5082
Conversation
initial test results: however, still to investigate the discrepancy between 104+68 and 159 (104 + 68 = 172, not 159) ??? cc @borisstoyanov @rhtyd

found it, but it yields a discrepancy in the other direction: it seems 15 test-suites either don't have tests or are not being run.
Compared to a python2 baseline, the differences are: hence 53. There are, however, tests that failed in python2 and passed in python3 as well, so it is a bit more complicated. I'll edit this comment with updated tables of the differences, in decreasing level of interest. The categories of differences are:
- failures only in python3:
- different results:
- different times:
- exact matches:
- and last and least, only in python2:
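For what it's worth, a minimal sketch of how such a bucketed comparison can be derived from two JUnit-style XML result files (the file names and bucketing logic are assumptions for illustration, not the actual tooling used here):

```python
# Sketch: bucket test outcomes from two JUnit-style XML result files
# (file names and bucket labels are illustrative assumptions).
import xml.etree.ElementTree as ET

def outcomes(path):
    """Map 'classname.name' -> 'pass'/'failure'/'error'/'skipped'."""
    results = {}
    for case in ET.parse(path).getroot().iter("testcase"):
        key = f"{case.get('classname')}.{case.get('name')}"
        status = "pass"
        for child in case:
            if child.tag in ("failure", "error", "skipped"):
                status = child.tag
        results[key] = status
    return results

py2 = outcomes("py2-results.xml")
py3 = outcomes("py3-results.xml")

only_py3_failures = [t for t, s in py3.items()
                     if s in ("failure", "error") and py2.get(t) == "pass"]
different_results = [t for t in py2.keys() & py3.keys() if py2[t] != py3[t]]
exact_matches = [t for t in py2.keys() & py3.keys() if py2[t] == py3[t]]
only_in_py2 = sorted(py2.keys() - py3.keys())

print(f"failures only in python3: {len(only_py3_failures)}")
print(f"different results: {len(different_results)}")
print(f"exact matches: {len(exact_matches)}")
print(f"only in python2: {len(only_in_py2)}")
```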
Reconsidering (and re-running) the configdrive tests on py2 and py3, I get: these are too similar to give it priority now (@rhtyd)
test_egress_fw_rules.py in a new env shows the new failure succeeding; skipping this test (for now). The difference in time in the first comparison is now an exception in py2 and still a failure in py3. Addendum: a re-test succeeds.
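For reference, a minimal sketch of skipping a single test case; since Marvin tests are standard unittest/nose test cases, the stock decorator applies (class name, method name, and reason string are illustrative, not the actual test):

```python
# Sketch: temporarily skipping a flaky test case in a Marvin/nose test suite
# (class and method names are illustrative placeholders).
import unittest

class TestEgressFWRules(unittest.TestCase):

    @unittest.skip("flaky under the python3 port; re-enable once investigated")
    def test_egress_rule_example(self):
        # ... original test body would run here ...
        pass
```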
force-pushed from 2ed71bb to a522a61
multiple ips per nic; python2: and py3: So, in conclusion: more successes in absolute numbers, but many more tests are run and there are a lot of extra exceptions. We'll have to revisit this.
force-pushed from 9874adb to 2c61cc0
test_templates brought up to par; py2: and py3: The failure is strange and I am not sure, but I think it is in the test itself. I have spent too much time on this one for now; the goal is to be on par with py2 for now.
force-pushed from 072746c to f2187e3
force-pushed from 5e4a12a to 8b23b45
the test_vpc_* failures show a failure to ssh into a VM or router, or a wget that fails, indicating that the wget indeed failed even though the test reports success. I'm going to rebase the full test suites for both python2 and python3 and re-run the full comparison.
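As a minimal sketch, a test could assert on the remote wget exit status instead of reporting success unconditionally (the host, credentials, and use of paramiko here are assumptions for illustration; Marvin provides its own SSH helpers):

```python
# Sketch: assert on the remote wget exit status instead of ignoring it
# (host/credentials are placeholders; Marvin normally wraps SSH itself).
import paramiko

def check_wget(host, user, password, url):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password, timeout=30)
    try:
        # wget returns non-zero on failure; capture the exit status
        _, stdout, _ = client.exec_command(f"wget -q -O /dev/null {url}")
        exit_status = stdout.channel.recv_exit_status()
        assert exit_status == 0, f"wget {url} failed with status {exit_status}"
    finally:
        client.close()
```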
force-pushed from 6ebe479 to 2c02268
force-pushed from ec8fbbe to 55a0f5c
Packaging result: ✔️ el7 ✔️ el8 ✔️ debian. SL-JID 673
@GabrielBrascher @rhtyd I am running a new comparison between the py2 and py3 tests now, to verify we are at least at par. Will let you know.
Will review soon, but +1, great work @DaanHoogland. We've always had component tests but never run them. With your work, we finally will have a verified list of tests/cases that we expect to work, which will further help with running such component tests on a weekly basis to review/monitor branch health. I suppose there is still some work left, but I would want to get something merged soon that could help 4.16, for example:
- this scope is not yet clear and I would like to do that in a future PR.
- good idea, will do
- Travis is already taking a long time (and we are probably not liked by Travis or the rest of Apache); let's discuss this
- yes, but this also seems a separate PR (and internal to BO mostly, not ACS)

thanks for the suggestions @rhtyd
Packaging result: ✔️ el7 ✔️ el8 ✔️ debian. SL-JID 752
Packaging result: ✔️ el7 ✔️ el8 ✔️ debian ✔️ suse15. SL-JID 1061
LGTM - I'm +1 to get the first pass of component tests merged; it shows significant progress on which tests we can run now (and they pass). The next steps:
- check/update these against the Travis yml, i.e. check and include any tests which pass in Travis but aren't already added to the list (see the sketch below)
- figure out a subset that can be worked upon as phase 2
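A minimal sketch of such a cross-check, assuming the Travis test list appears in `.travis.yml` as `TESTS="..."` entries and the verified list is a plain-text file (both file layouts and names are assumptions for illustration):

```python
# Sketch: find tests Travis runs that are missing from a verified list
# (the .travis.yml layout and verified-tests.txt are illustrative assumptions).
import re

with open(".travis.yml") as f:
    travis_text = f.read()

# Collect test names from entries such as: - TESTS="test_a test_b test_c"
travis_tests = set()
for match in re.finditer(r'TESTS="([^"]+)"', travis_text):
    travis_tests.update(match.group(1).split())

with open("verified-tests.txt") as f:
    verified = {line.strip() for line in f if line.strip()}

for test in sorted(travis_tests - verified):
    print(f"in Travis but not in the verified list: {test}")
```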
smoke tests and java code have not changed, but still:

@DaanHoogland a Trillian-Jenkins matrix job (centos7 mgmt + xs71, centos7 mgmt + vmware65, centos7 mgmt + kvmcentos7) has been kicked to run smoke tests

Trillian Build Failed (tid-1860)
Trillian Build Failed (tid-1862)
Trillian Build Failed (tid-1861)

@blueorangutan test matrix keepEnv

@DaanHoogland a Trillian-Jenkins matrix job (centos7 mgmt + xs71, centos7 mgmt + vmware65, centos7 mgmt + kvmcentos7) has been kicked to run smoke tests

Trillian Build Failed (tid-1866)
Trillian Build Failed (tid-1864)
Trillian Build Failed (tid-1865)

@blueorangutan test matrix keepEnv

@nvazquez a Trillian-Jenkins matrix job (centos7 mgmt + xs71, centos7 mgmt + vmware65, centos7 mgmt + kvmcentos7) has been kicked to run smoke tests

Trillian test result (tid-1873)
Trillian test result (tid-1874)
Trillian test result (tid-1875)
LGTM based on the CI test results and overall code review.
Agree with the idea of merging this as it is and working incrementally on new PR(s).
LGTM. As Gabriel and others have said, let's merge this. We don't normally run component tests, but getting incremental PRs merged is a good approach. Smoke tests on this have passed (the errors are not related to this PR; these are tests that shouldn't run).
@DaanHoogland please create another PR to continue any further work. Thanks.
Description
This PR fixes the Marvin component tests to run in the Marvin framework as ported to python3.
part of #3195
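For illustration, a minimal sketch of the kind of python2-to-python3 changes such a port typically involves (generic examples, not taken verbatim from this PR):

```python
# Sketch: common python2 -> python3 changes in test code
# (generic examples; not verbatim from this PR).

# python2:                           python3:
#   print "result:", result           print("result:", result)
#   for k, v in d.iteritems(): ...    for k, v in d.items(): ...
#   except Exception, e: ...          except Exception as e: ...
#   urllib2.urlopen(url)              urllib.request.urlopen(url)

import urllib.request

def fetch_status(url):
    # bytes vs. str: decode the response body explicitly under python3
    with urllib.request.urlopen(url) as response:
        body = response.read().decode("utf-8")
    return body
```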
Types of changes
Feature/Enhancement Scale or Bug Severity
Work to do:
- new failures in python3:
- different results:
- different times:
- exact matches:
- only in python2:
Screenshots (if appropriate):
How Has This Been Tested?