Updates according to discussion in issues [closes #13, #14, #17, #19, #20, #21, #23, #25, #26, #28] #29
@@ -1,3 +1,3 @@
Please cite as:

-Ed Bennett, Lester Hedges, Matt Williams, "Introduction to automated testing and continuous integration in Python"
+Ed Bennett, Lester Hedges, Julian Lenz, Matt Williams, "Introduction to automated testing and continuous integration in Python"

@@ -414,7 +414,8 @@ being done once.
> behaviour of the tests, and pytest prioritises correctness of the tests over
> their performance.
>
-> What sort of behavior would functions have that failed in this way?
+> What sort of behavior would functions have that failed in this way? Can you
+> come up with example code for this?
>
>> ## Solution
>>

@@ -425,6 +426,80 @@ being done once.
>>
>> Fixtures should only be re-used within groups of tests that do not mutate
>> them.
>>
>> ~~~
>> @pytest.fixture(scope="session")
>> def initially_empty_list():
>>     return []
>>
>>
>> @pytest.mark.parametrize("letter", ["a", "b", "c"])
>> def test_append_letter(initially_empty_list, letter):
>>     initially_empty_list.append(letter)
>>     assert initially_empty_list == [letter]
>> ~~~
>> {: .language-python}
> {: .solution}
{: .challenge}
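
For contrast, a minimal sketch of the fix: with pytest's default `function` scope, every test receives a freshly built list, so the mutation in one test can no longer leak into the next.

~~~
import pytest


@pytest.fixture  # default scope="function": a new list is built for every test
def initially_empty_list():
    return []


@pytest.mark.parametrize("letter", ["a", "b", "c"])
def test_append_letter(initially_empty_list, letter):
    initially_empty_list.append(letter)
    assert initially_empty_list == [letter]  # now passes for all three letters
~~~
{: .language-python}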

> ## Better ways to (unit) test
>
> The above example was explicitly constructed to acquire an expensive resource
> and to show a big advantage of using a fixture, but is it actually a good way
> to test the `word_counts` function? Think about what `word_counts` is
> supposed to do. Do you need a whole book to test this?
>
> List advantages and disadvantages of the above approach. Then come up with
> another way of testing it that cures the disadvantages (maybe also losing
> some of the advantages). Is your approach simpler and less error-prone?
>
> It is safe to assume that whenever you test such a function, it is supposed
> to be used in a larger project. Can you think of a test scenario where the
> original method is the best?
>
>> ## Solution
>>
>> The `word_counts` function is designed to count words in any string. It does
>> not need a whole book to test counting, so we could also have used tiny test
>> strings like `""`, `"hello world"` and `"hello, hello world"` to test all
>> the functionality of `word_counts`. In fact, the original approach has a
>> number of disadvantages:
>>
>> * It is expensive in time because it needs to download the book every time
>>   the test suite is run. (Two seconds per test is a very long time if you
>>   want to run a suite of hundreds of such tests every few minutes.)
>> * It is brittle in several respects:
>>   - If you don't have an internet connection, your test fails.
>>   - If the URL changes, your test fails.
>>   - If the content changes, your test fails (we have had that happen a few
>>     times).
>> * It is rather opaque because you cannot know whether the numbers we have
>>   given you are correct. Maybe the function has a bug that we don't know
>>   about because, admittedly, we also just used the output of that function
>>   to generate our test cases.
>>
>> The one big advantage of the above is that you are using realistic test
>> data. As opposed to the string `"hello world"`, the book likely contains a
>> lot of different words, potentially different capitalisation and spellings,
>> additional punctuation and maybe special characters that your function may
>> or may not handle correctly. You might need a lot of different test strings
>> to cover all these cases (and combinations thereof).

Review comment: Things I might make explicit here:

Reply: I thought I made the first point already but I will try to be more explicit about that.
>>
>> The alternative approach with tiny test strings cures all of the
>> disadvantages listed above, and the tests will be easy to read, understand
>> and verify, particularly if you use expressive test function names and
>> parametrize `ids`. This is the best way to write a unit test, i.e. a test
>> that is concerned with a single unit of functionality in isolation and will
>> likely be run hundreds of times during a coding session.
>>
>> Nevertheless, in a bigger project you would want to have other kinds of
>> tests, too. The `word_counts` functionality will probably be integrated into
>> a larger piece of functionality, e.g. a statistical analysis of books. In
>> such a case, it is equally important to test that the integration of the
>> various individually tested units works correctly. Such integration tests
>> will be run less often than unit tests and might be more meaningful under
>> more realistic circumstances. For such tests, and definitely for the even
>> broader end-to-end tests that run a whole program from the (simulated) user
>> input to a final output, the original approach is well-suited.
> {: .solution}
{: .challenge}
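
To make the tiny-test-string approach concrete, here is a minimal sketch. It assumes that `word_counts` takes a string and returns a mapping from words to counts, that it strips punctuation, and that it is importable from a module named `counting`; both the module name and the punctuation behaviour are assumptions to be adjusted to the actual lesson code.

~~~
import pytest

from counting import word_counts  # hypothetical import path


@pytest.mark.parametrize(
    "text, expected",
    [
        ("", {}),
        ("hello world", {"hello": 1, "world": 1}),
        # assumes punctuation is stripped, so "hello," counts as "hello"
        ("hello, hello world", {"hello": 2, "world": 1}),
    ],
    ids=["empty-string", "two-distinct-words", "repeated-word"],
)
def test_word_counts(text, expected):
    assert word_counts(text) == expected
~~~
{: .language-python}

Labelling the cases through `ids` means a failure report names the scenario instead of an opaque parameter tuple.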

@@ -338,6 +338,23 @@ next to the first commit, a green tick (passed) next to the second, and nothing
> check all code against a defined house style (for example, PEP 8).
{: .callout}

> ## pre-commit
>
> Another helpful developer tool, somewhat related to CI, is
> [pre-commit][pre-commit] (or, more generally, `git` hooks). Hooks allow you
> to perform certain actions locally, triggered by various `git`-related
> events such as before or after a commit, merge, or push. A standard use case
> is running automated formatters or code linters before every commit or push,
> but other things are possible too, such as updating a version number. One
> major difference with respect to CI is that each developer on your team has
> to install the hooks manually and could therefore choose not to do so. As
> opposed to CI in a central repository, `git` hooks are thus not capable of
> enforcing anything; they are a pure convenience for the programmer, whereas
> CI can be used to reject pushes or pull requests automatically. Furthermore,
> you are supposed to commit often, so committing should be a fast and
> lightweight action. The pre-commit developers therefore explicitly
> discourage running expensive test suites as a pre-commit hook.
{: .callout}
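
As an illustration, a minimal `.pre-commit-config.yaml` might run an autoformatter and a linter before every commit; the `rev` values below are placeholders and should be pinned to real release tags of the respective projects.

~~~
# Minimal sketch of a pre-commit configuration: runs black and flake8
# on the staged files before each commit.
repos:
  - repo: https://github.com/psf/black
    rev: 24.8.0  # placeholder; pin to an actual black release tag
    hooks:
      - id: black
  - repo: https://github.com/pycqa/flake8
    rev: 7.1.1  # placeholder; pin to an actual flake8 release tag
    hooks:
      - id: flake8
~~~
{: .language-yaml}

Each developer then activates the hooks once per clone with `pip install pre-commit` followed by `pre-commit install`.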

> ## Try it yourself
>

@@ -366,3 +383,4 @@ next to the first commit, a green tick (passed) next to the second, and nothing
+[pre-commit]: https://pre-commit.com
[pypi]: https://pypi.org
[starter-workflows]: https://github.com/actions/starter-workflows
[yaml]: https://en.wikipedia.org/wiki/YAML

Review comment: (Lots of other instances of each; I won't tag all of them.)

Reply: Yeah, haven't run it through `black`, admittedly. It is just a great illustration of how annoying the absence of autoformatting is.

Review comment: Just spotted the word `positve` here too.

Review comment: And the square bracket started on line 185 should be closed before line 188.