Question about writing tests for issues #1568
Replies: 2 comments
-
For me YES. For years I have used a more advanced version of this approach in HtmlUnit. Every time I face a problem (ok, in theory every time) I write a unit test for that case. I then have a way to annotate the test with the expected behaviour, i.e. the correct result. And for tests that are failing, a second annotation defines the current (wrong) result. This means the test will pass. This leads to
And the really important thing: if a side effect of some change alters something related to this test, I get notified at the next test run. This has been a real lifesaver for that project.
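A minimal sketch of the dual-annotation idea described above, in plain Java (the annotation names `@Expected` and `@CurrentlyWrong`, the issue method, and the check logic are all invented here for illustration; HtmlUnit's real mechanism may differ). While the bug exists the test "passes"; the moment the actual output changes, the run flags it:

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

public class KnownFailureDemo {

    // Hypothetical annotations: @Expected holds the correct result,
    // @CurrentlyWrong holds the result the engine actually produces today.
    @Retention(RetentionPolicy.RUNTIME) @interface Expected { String value(); }
    @Retention(RetentionPolicy.RUNTIME) @interface CurrentlyWrong { String value(); }

    // Stand-in for the engine under test, still producing the wrong answer.
    public static String evaluate() { return "undefined"; }

    @Expected("42")
    @CurrentlyWrong("undefined")
    public static String issue1568() { return evaluate(); }

    // Compare the actual result against both annotations.
    public static String check(Method m) throws Exception {
        String actual = (String) m.invoke(null);
        Expected expected = m.getAnnotation(Expected.class);
        CurrentlyWrong wrong = m.getAnnotation(CurrentlyWrong.class);
        if (wrong != null) {
            if (actual.equals(wrong.value())) return "still broken (test passes)";
            if (actual.equals(expected.value())) return "FIXED: remove @CurrentlyWrong";
            return "changed in a new way: " + actual;
        }
        return actual.equals(expected.value()) ? "ok" : "REGRESSION: " + actual;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(check(KnownFailureDemo.class.getDeclaredMethod("issue1568")));
        // prints: still broken (test passes)
    }
}
```

The key property is the third branch: if a side effect changes the output to something that is neither the known-wrong nor the correct value, that also gets reported instead of silently passing.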
-
I agree, it's very helpful to have a snippet of code reproducing an issue, and even better if it comes in the form of a test. Even just pasting that snippet/test code into the issue already helps. We already have several PRs in the backlog that introduce (breaking) test cases, but I'm afraid they tend not to get much attention. A mechanism like the one @rbri described, where test cases basically check that something is indeed broken, would be helpful: the test cases (with a pointer to the issue) can be committed, and we get notified automatically if they happen to get fixed along the way. But we could start by writing negative test cases and dumping them in a 'failingTests' folder.
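The 'failingTests' folder variant could look like the sketch below (class name, engine call, and values are hypothetical stand-ins): a negative test that asserts the wrong behaviour observed today, so it passes while the bug exists and fails loudly once a change fixes it.

```java
// Hypothetical negative test for issue #1568, living in a 'failingTests'
// folder. It deliberately asserts the CURRENT (wrong) behaviour.
public class FailingTests1568 {

    // Stand-in for the real engine call; assumed to return "undefined"
    // today, while the correct result would be "42".
    public static String currentEngineResult() { return "undefined"; }

    public static void main(String[] args) {
        String actual = currentEngineResult();
        if (!"undefined".equals(actual)) {
            // The bug no longer reproduces: promote this case to the
            // regular test suite with the correct expectation.
            throw new AssertionError("Issue #1568 seems fixed (got " + actual
                + "); move this case to the regular test suite.");
        }
        System.out.println("issue #1568 still reproduces");
    }
}
```

Run as part of CI, this stays green while the issue is open, and the AssertionError is exactly the "it got fixed along the way" notification described above.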
-
Would it be valuable to have people write failing tests for reported issues, even if they are not accompanied by code that resolves the issue? I'm especially thinking about those issues which may not be covered by test262 or are specific to certain implementation details.
If someone were to write these tests, what is the best way to mark them so that we don't forget about them, but they also won't cause the CI builds to fail?
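One conventional answer, assuming the suite runs on JUnit 5 under Maven (tag name `known-failing` is invented here), is to mark such tests with JUnit's `@Tag("known-failing")` and exclude that tag from the default CI run via Surefire, so the tests stay in the tree and discoverable without breaking the build:

```xml
<!-- pom.xml fragment: exclude known-failing tests from the default run -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <excludedGroups>known-failing</excludedGroups>
  </configuration>
</plugin>
```

A separate (e.g. nightly) job could then run only `known-failing` tests to detect issues that got fixed along the way.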