Add exit code tests #275
Conversation
Right now, this only has checks for the analyze command, but I wanted to get feedback on the usage of the helper functions in test. Sorry if it triggers a lot of CI failures!
I like the idea! These are my thoughts on the test implementation. :D
WDYT? :)
I think I chose the worst possible example command 💥 I want to test more than just Eask file errors and warnings, which I might have implied by this choice... A better example might be I'll add a few more examples in a new subfolder of
No worries! I'm more keen to test this feature explicitly, so it's easier to track and test in one place. 🤔
Hey @jcs090218, sorry this PR fell off the TODO list over the xmas period. Turns out Eask has a lot of features to test! This PR has tests that cover a few different issues, so I'm gonna separate them out into smaller PRs for your consideration.
Wow, I didn’t realize the PR had so many changes—87 files and 1,725 lines added! No problem at all! I truly appreciate and am grateful that people are willing to put in the time and effort to improve this project! 😄
That sounds great! I'm not sure if I can handle reviewing such a massive PR! 👍
OK, I'm still trying to figure out the best way to break down this PR. But now we have more options after adding Jest! I can:
I think the choice partly depends on whether you want the new tests to go into. Since Jest is better at grouping tests within files and at reporting where a failure came from than a shell script test, I'd prefer to put the new tests in individual test files. I think the tests for
Agreed. "Test files for each command" is the way to go. It's more explicit. :)
Before changing the behavior of errors as discussed in #273, I wanted to add more tests that check the current behavior.

This PR adds tests which check, for each command:

- `--strict` is set
- when `--allow-error` is set, then exit with error

I'll add some script helpers that check error statuses and note any discrepancies I see. I'll comment out and mark any currently failing expectations with `#FIXME` so that the CI run is clean.