Add exit code tests #275

Draft
@joshbax189 wants to merge 27 commits into master from feat/exit-code-tests

Conversation

@joshbax189 (Contributor)

Before changing the behavior of errors as discussed in #273, I wanted to add more tests that check the current behavior.

This PR adds tests which check, for each command:

  • bad input exits with error
  • input with warnings exits with error when --strict is set
  • multiple inputs (i.e. commands which apply to multiple files or have multiple steps) execute as far as possible when --allow-error is set, then exit with error

I'll add some script helpers that check error statuses (sketched below) and note any discrepancies I see.
I'll comment out any currently failing expectations and mark them with #FIXME so that the CI run stays clean.
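For illustration, here's a minimal sketch of the kind of helper I have in mind; the names and exact shape are hypothetical, not the final testing.sh API:

```sh
# Hypothetical helpers (illustrative only): run a command and assert its exit status.
should_error() {
  if "$@"; then
    echo "FAIL: expected non-zero exit from: $*" >&2
    return 1
  else
    echo "PASS: $* exited with error as expected"
  fi
}

should_ok() {
  if "$@"; then
    echo "PASS: $* exited cleanly"
  else
    echo "FAIL: expected zero exit from: $*" >&2
    return 1
  fi
}

# Example usage:
#   should_ok    eask analyze Eask-normal
#   should_error eask analyze --strict Eask-warn
```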

@joshbax189 (Contributor, Author)

Right now, this only has checks for the analyze command, but I wanted to get feedback on how the helper functions are used in the test scripts. Sorry if it triggers a lot of CI failures!

@jcs090218 added the CI label on Nov 5, 2024
@jcs090218 (Member)

I like the idea! These are my thoughts on the test implementation. :D

  • Open a new workflows/streams.yml and test it under the test/streams/ folder instead of testing each command separately. Since most commands are implemented similarly (Node.js forwards Emacs's streams), a single streams test could be more manageable than one per command.
  • Following from the point above, move all test-related files under test/streams/ (testing.sh, Eask-normal, Eask-warn, etc.)

WDYT? :)

@joshbax189 (Contributor, Author)

I think I chose the worst possible example command 💥 I want to test more than just Eask file errors and warnings, which I might have implied by this choice...

A better example might be eask compile. I want a normal .el file, an .el file that compiles with a warning, and one that errors. I want to specifically test the "recover after error" behavior of the compile command itself, not just the warnings from the Eask file. For example, eask compile --allow-error error.el normal.el should exit with non-zero status but still compile normal.elc. (It does, by the way.)
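Concretely, the check would look something like this (a sketch only; the file names are the example ones above, and the exact harness is still open):

```sh
# Compile a file that errors together with one that compiles cleanly.
eask compile --allow-error error.el normal.el
status=$?

# The command should report failure overall...
test "$status" -ne 0 || { echo "FAIL: expected non-zero exit" >&2; exit 1; }

# ...but still produce output for the file that compiled cleanly.
test -f normal.elc || { echo "FAIL: expected normal.elc to exist" >&2; exit 1; }
echo "PASS: non-zero exit and normal.elc was produced"
```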

I'll add a few more examples in a new subfolder of test, like you suggested, then check in again later.

@jcs090218 (Member)

No worries! I'm more keen to test this feature explicitly, so it's easier to track and test in one place. 🤔

@joshbax189 force-pushed the feat/exit-code-tests branch 2 times, most recently from 561a100 to 570077d on December 12, 2024
@joshbax189 (Contributor, Author)

Hey @jcs090218, sorry this PR fell off the TODO list over the xmas period. Turns out Eask has a lot of features to test!

This PR has tests that cover a few different issues, so I'm gonna separate them out into smaller PRs for your consideration.

@joshbax189 force-pushed the feat/exit-code-tests branch from 570077d to e87618d on March 17, 2025
@jcs090218 (Member)

> Hey @jcs090218, sorry this PR fell off the TODO list over the xmas period. Turns out Eask has a lot of features to test!

Wow, I didn’t realize the PR had so many changes—87 files and 1,725 lines added! No problem at all! I truly appreciate and am grateful that people are willing to put in the time and effort to improve this project! 😄

> This PR has tests that cover a few different issues, so I'm gonna separate them out into smaller PRs for your consideration.

That sounds great! I'm not sure if I can handle reviewing such a massive PR! 👍

@joshbax189 force-pushed the feat/exit-code-tests branch from e87618d to 1eddbb3 on March 31, 2025
@joshbax189 (Contributor, Author)

OK, I'm still trying to figure out the best way to break down this PR. But now we have more options after adding Jest!

I can:

  • port each of the new tests to Jest within this PR, or
  • make a new PR for each command, so the project files and tests can be checked individually

I think the choice partly depends on whether you want the new tests to go into exit-status.test.js or into the test files for each command, e.g. analyze.test.js in 1eddbb3.

Since Jest is better at grouping tests within files and at reporting where a failure came from than a shell-script test, I'd prefer to put the new tests in individual test files. I think the tests for --allow-error and --strict make more sense in the context of the commands than in a group of their own. WDYT?
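To make the second option concrete, a per-command test file could look roughly like this; the run helper and file names are illustrative, not the project's actual Jest setup:

```js
const { execSync } = require("child_process");

// Run a shell command and return its exit code (0 on success).
function run(cmd) {
  try {
    execSync(cmd, { stdio: "pipe" });
    return 0;
  } catch (err) {
    return err.status; // exit code of the failed child process
  }
}

describe("compile", () => {
  test("bad input exits with error", () => {
    expect(run("eask compile error.el")).not.toBe(0);
  });

  test("--allow-error recovers but still exits with error", () => {
    expect(run("eask compile --allow-error error.el normal.el")).not.toBe(0);
  });
});
```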

@jcs090218 (Member)

> I think the choice partly depends on whether you want the new tests to go into exit-status.test.js or into the test files for each command, e.g. analyze.test.js in 1eddbb3.
>
> Since Jest is better at grouping tests within files and at reporting where a failure came from than a shell-script test, I'd prefer to put the new tests in individual test files. I think the tests for --allow-error and --strict make more sense in the context of the commands than in a group of their own. WDYT?

Agreed. "Test files for each command" is the way to go. It's more explicit. :)
