
Errors within test code are not caught in the collection runner #1963

Open · jkrenge opened this issue Apr 2, 2016 · 23 comments

Comments

@jkrenge commented Apr 2, 2016

My setup: I execute the collection runner with a run of 41 tests, where, for example, I check one endpoint with this test code:

tests["Status code is 200"] = responseCode.code === 200;

var jsonData = JSON.parse(responseBody);
tests["Length of result"] = jsonData.length === 2;

This works, unless the endpoint doesn't return valid JSON. In my case the request failed: the endpoint returned a 404 with no JSON, only the string "Not found.".

The first test failed, which is correct, but the second part of the test code crashed. There was an error notification within the collection runner, but it disappeared after about 2 seconds.

The real problem, though: the rest of the test run was not executed at all.

@mstaalesen

While I agree that the rest of the collection should not be blocked by one unexpected event that causes an error, there are workarounds for this issue.

By checking that you got the expected status code before parsing the JSON, you avoid this problem. Something like this:

if (responseCode.code === 200) {
    var jsonData = JSON.parse(responseBody);
    tests["Length of result"] = jsonData.length === 2;
} else {
    // Either set a test to false (so it's easier to see what went wrong)
    // or log the response with console.log().
    tests["Length of result"] = false;
}
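
Another option is to guard the parse itself with a try..catch, so a non-JSON body is recorded as a failed test instead of crashing the script. A rough sketch against the same legacy tests[] sandbox (the test names are illustrative):

var jsonData;
try {
    jsonData = JSON.parse(responseBody);
    tests["Length of result"] = jsonData.length === 2;
} catch (e) {
    // The body was not valid JSON (e.g. a plain "Not found." string):
    // record a failure instead of letting the script error out.
    tests["Response body is valid JSON"] = false;
}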

@jkrenge (Author) commented Apr 3, 2016

Good idea, thanks.

@ChrissiFi

I'm also seeing this. On top of that, my tests that check for a 401 run fine in the builder but fail in the runner, because it picks up a 400 instead of a 401.

@mstaalesen

If it is picking up a 400, then your server returned a 400. Use console.log() on responseBody to see what actually came back, since the collection runner won't show it.
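
For example (a minimal sketch; the Postman console / dev tools will show these lines):

// Log status and raw body so you can see exactly what the server returned.
console.log("Status: " + responseCode.code);
console.log("Body: " + responseBody);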

@mikehedman commented Apr 21, 2016

I'm going to somewhat disagree with @jkrenge that the real problem is that the rest of the test run is not executed. While I agree that being able to proceed after a failure is a worthy goal, the two issues are just that: two separate issues.
Yet another issue is that the results panel gives no indication that anything failed at all. In my current collection, I have 8 items (and the last one is failing). But at the end of the run, the Results panel shows "7 passed, 0 failed". I would rather have seen "7 passed, 1 failed", or "7 passed, 1 failed, 1 error", along with the content of the error message in the list of individual API call results.

@jkrenge (Author) commented Apr 21, 2016

Yes, I agree with @mikehedman. This is the best solution.

@ChrissiFi

I would far prefer all the tests to run and then deal with the failures afterwards, rather than have the run truncated. I want to get to the point where I can schedule my tests to run outside working hours, and the last thing I want is to find that the tests stopped after running only 10%, especially as I know that some of the earlier tests in my suite are currently failing due to outstanding software changes, which would prevent the rest from running.


@abhijitkane (Member)

@ChrissiFi @jkrenge - Hm...I guess there will be mixed opinions on this. There will certainly be users who do not want the run to continue if there's an error (or even a failure). We'll try to add a warning for JSON.parse (one of the most common sources of runtime errors) in the UI that encourages users to surround it with a try..catch, or to use an explicit check like @mstaalesen suggests.

@mikehedman Was there a test failure in your last request, or an error? Test failures will be displayed as (7 passed, 1 failed), but errors in a request's test script will cause the test script for that request to be ignored. We're trying to work around this.

Error: runtime error in the test script (JSON.parse etc).
Failure: a test not passing

@mikehedman

@abhijitkane - I was speaking specifically about an error. My interest is for there to be a record of the error/message.
As for continuing to run after a fail (or an error) - most test frameworks continue after a failure (I'm thinking of PHPUnit, and Jasmine). This makes it easier to debug, rather than run to the fail, fix, run to the next fail, fix...

@abhijitkane (Member)

@mikehedman Noted. Will try to have a persistent message for errors. @aditikul

The difference is that Postman collections are not side-effect-free tests. If you're using a collection of 3 requests to automate a certain task, running request 3 when request 2 failed or errored out might cause problems.

@mikehedman

@abhijitkane - yes, I see the difficulty. I think there are really two different use cases. For me, I am using this capability for scripting. I do have 'tests', but really just to check for 200 codes so I know if my script ran correctly. But if I was using Postman as a testing framework, my needs would be different.

@ChrissiFi

@abhijitkane - thanks - JSON.parse is the source of all my errors at the moment, as I'm just checking API GETs in my phase 1 project (API POSTs will follow in phase 2, so they will result in more complex tests).


@shamasis (Member) commented Aug 5, 2016

So, until we add an assertion library inside Postman, the behaviour of Postman scripts seems accurate to me. That's why I wrote this post: http://blog.getpostman.com/2015/09/29/writing-a-behaviour-driven-api-testing-environment-within-postman/

I would love to close this issue now and restart the discussion once we have our own assertion library inside the Postman script sandbox. Let me know if it's okay to close this issue.

@mikehedman

@shamasis - I would vote not to close this ticket. Yes, a number of issues have been discussed in this thread, specifically what to do if there is a failure. But the original, real issue is that errors are not visible after running a collection. I'm drawing a distinction between an error (for example, a syntax error in your test code) and a failure (expected a 200 but got a 404). When running a collection there is a brief popup saying there was an error, but then you're essentially stuck. To figure out where the error is, you need to run each call independently until the error happens again; if the calls use variables derived from other calls, that is difficult.
I would think that this would be independent of the addition of an assertion library - the need is here now, and would be there after adding assertions.
Thanks

@mistersender

+1 - if a collection stops running because the script failed, at the very least the runner should not indicate that all tests were successful. This is misleading and has caused quite a bit of confusion for us

@shamasis (Member)

I hear you all. I'm assuming I'm misinterpreting things a bit; apologies. Looping in @a85.

@a85 - can the runner highlight syntax errors the way Newman does?

@mistersender

@shamasis I think this has been addressed with the recent updates to the collection runner?

@mstaalesen

It is solved to some extent: if there is a JavaScript error, a red line is shown asking you to check the dev tools. However, if there are a lot of tests, the error message will be hidden, and if the tester does not scroll down to the bottom, it is still possible to miss that there was a problem with the tests themselves.

@sdnts commented Jan 12, 2017

@mstaalesen We're looking into making test script failures more apparent in the updated Runner. Will keep you guys updated here.

@mikehedman

Just a note for the folks working on this - THANKS!! I really love the new test runner, and this morning had a step fail in the middle, and the drop down that shows the request and response info was super helpful!
Just a little nit - the fact that there is a dropdown isn't very well publicized in the results screen. I only found it by noticing that when the mouse hovers over the step description, the description turns into a clickable link. Styling it as a link, or adding a "Details" button, would make more people aware of the feature.
But this new runner is awesome!!

@sdnts commented Jan 26, 2017

@mikehedman Thanks for the feedback! We'll try making it more apparent :)

@pelennor2170

Compared with previous versions, the Collection Runner now makes it much clearer that there were errors during the test (though it could still be more obvious). What would be really useful is a configurable per-run option so that any test script error causes the associated test to fail as well. This would be useful for the Collection Runner, and even more so for tests run via the command line, where otherwise you have no way of knowing anything went wrong during the test. As for whether to continue or abort the run after a test script error, perhaps that could be a per-run configurable option too.
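
Until an option like that exists, one manual workaround (a sketch, assuming the legacy tests[] sandbox; the test name here is illustrative) is to wrap the whole test body so a runtime error shows up as a named failed test, which a command-line run will then report:

try {
    tests["Status code is 200"] = responseCode.code === 200;
    var jsonData = JSON.parse(responseBody);
    tests["Length of result"] = jsonData.length === 2;
} catch (e) {
    // Any runtime error in the script becomes a visible test failure
    // instead of silently aborting the rest of the script.
    tests["Test script error: " + e.message] = false;
}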

@mverma-va

+1. When my test script fails for some reason, I still want the next API call to run. This should be added as an option or setting, either for the whole collection or per request.
