
Refactor QA setup #393

Merged: 1 commit, Jan 22, 2020

Conversation

@aleksihakli (Member) commented on Jan 21, 2020

  • Utilize tox with tox-travis for more UX-friendly test orchestration (a configuration sketch follows this list)
  • Utilize pytest with pytest-django and pytest-cov for more readable test output
  • Utilize setuptools_scm for inferring the package version
  • Improve test matrix definitions for database and Python version combinations
  • Skip a false-positive test for cProfile output
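
A rough sketch of how this kind of tox + tox-travis + pytest orchestration can be wired up (illustrative only; the environment names, dependency pins, and coverage target below are assumptions, not the exact contents of this PR's tox.ini):

```ini
# tox.ini (sketch)
[tox]
envlist =
    py{35,36,37,38}-django22
    py{36,37,38}-django30

# tox-travis: map the Travis CI Python version to the matching tox environments
[travis]
python =
    3.5: py35
    3.6: py36
    3.7: py37
    3.8: py38

[testenv]
deps =
    django22: Django>=2.2,<3.0
    django30: Django>=3.0,<3.1
    pytest
    pytest-django
    pytest-cov
# pytest-django reads the Django settings module from the pytest
# configuration (e.g. a DJANGO_SETTINGS_MODULE entry under [pytest])
commands = pytest --cov=silk {posargs}
```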

@nasirhjafri (Member) left a comment

@aleksihakli Just two things: we dropped Django 1.11 support in the last release, and we are still supporting Python 3.5.

.travis.yml (resolved)
.travis.yml (outdated, resolved)
tox.ini (outdated, resolved)
tox.ini (outdated, resolved)
@aleksihakli (Member Author)

@nasirhjafri @auvipy @jezdez the django-silk QA improvements are ready for upstreaming. There is one false-positive case in the tests, in the cProfile section, that results from a small difference in timings with the Python 3.6 interpreter, but otherwise all tests pass and the test bench is considerably improved.

The test also fails on PyPy3 in addition to Python 3.6 and should be refactored to account for differing cProfile outputs or disabled altogether if it is not necessary:

[screenshot: failing test output]

@nasirhjafri (Member) left a comment

LGTM 👍 Thanks @aleksihakli

@jezdez (Member) left a comment

Some minor updates needed.

setup.py (outdated, resolved)
docs/conf.py (outdated, resolved)
@unittest.skipIf(
    os.environ.get("CONTINUOUS_INTEGRATION") == "true",
    "Skip profiling integration test when running on CI due to frequent false negatives",
)
Member

Hmm, the test should be written differently then.

@aleksihakli (Member Author) commented on Jan 21, 2020

I very much agree; however, the test is not simple to fix, as the result is a more complex object and ideally the format should be validated, not the contents themselves.

Hence, the test would ideally use some sort of regex matcher for the matching list inside the list of profiling results.

This test and the max body size test are also the ones that fail on PyPy3 at the moment.

I would opt for a skip in this PR and then a fix in a later PR that updates the test to a more appropriate format, but I am not interested in refactoring the tests at this moment due to time constraints.
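
For reference, a minimal sketch of what such a format-based check could look like (the class name, example row, and regex below are illustrative assumptions, not the actual django-silk test code):

```python
import re
import unittest


class ProfileOutputFormatSketch(unittest.TestCase):
    """Sketch: check the *shape* of a cProfile-style summary row instead of
    its exact contents, so interpreter-specific timing differences
    (CPython 3.6, PyPy3, ...) cannot break the assertion."""

    # Hypothetical row, mimicking one line of a cProfile stats table:
    # ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
    EXAMPLE_ROW = "12    0.001    0.000    0.003    0.000 views.py:42(dispatch)"

    def test_row_matches_expected_format(self):
        pattern = re.compile(
            r"^\s*\d+(/\d+)?"        # ncalls, possibly "n/m" for recursive calls
            r"(\s+\d+\.\d+){4}"      # tottime, percall, cumtime, percall
            r"\s+\S+:\d+\(\S+\)$"    # filename:lineno(function)
        )
        self.assertRegex(self.EXAMPLE_ROW, pattern)


if __name__ == "__main__":
    unittest.main()
```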

@aleksihakli (Member Author)

@jezdez I believe the version is ready for merging now. Thanks for the accurate review insights!

@jezdez (Member) left a comment

Actually, some test is failing now, not sure why?

@aleksihakli (Member Author)

Seems like it's the same test. For whatever reason it isn't skipping the test with the @unittest.skipIf decorator. I think I'll refactor the test after all to validate the format of the data instead of its exact output so that the test supports PyPy as well.

@aleksihakli (Member Author)

Test refactored to check the output format instead of the specific tool output; seems to be working. Upstreaming the changeset.

@aleksihakli merged commit 3d28203 into jazzband:master on Jan 22, 2020