[DNM] Add documentation for hypothetical pre-timeout scripts #2018

Open: wants to merge 1 commit into base `main`.
icon: material/ray-start-arrow
status: experimental
---

# Scripts

<!-- md:version 0.9.59 -->

!!! experimental "Experimental: Setup scripts are not yet stable"

    - **Enable with:** Add `experimental = ["setup-scripts"]` to `.config/nextest.toml`
    - **Tracking issue:** [#978](https://github.com/nextest-rs/nextest/issues/978)

!!! experimental "Experimental: Pre-timeout scripts are not yet stable"

    - **Enable with:** Add `experimental = ["pre-timeout-scripts"]` to `.config/nextest.toml`
    - **Tracking issue:** [#978](https://github.com/nextest-rs/nextest/issues/978)

Nextest supports running _scripts_ when certain events occur during a test run. Scripts can be scoped to particular tests via [filtersets](../filtersets/index.md).

Nextest currently recognizes two types of scripts:

* _Setup scripts_, which are executed at the start of a test run.
* _Pre-timeout scripts_, which are executed before nextest begins terminating a test that has exceeded its timeout.

Scripts are configured in two parts: _defining scripts_, and _setting up rules_ for when they should be executed.

## Defining scripts

Scripts are defined using the top-level `script` configuration. For example, to define a script named "my-script", which runs `my-script.sh`:

```toml title="Script definition in <code>.config/nextest.toml</code>"
[script.my-script]
command = 'my-script.sh'
```

> **Review comment (Member), on lines 34 to 35:** I think it's probably worth making this `script.setup.my-script`. I think the scripts are going to have fairly different semantics and settings etc. We can make this change because scripts are currently experimental.
>
> I would still put all script names in a single namespace, though (i.e. not allow any name overlaps) to reduce confusion.

[…]

```toml
command = 'script.sh -c "Hello, world!"'
command = ['script.sh', '-c', 'Hello, world!']
```

Scripts can have the following configuration options attached to them:

- **`slow-timeout`**: Mark a script [as slow](../features/slow-tests.md) or [terminate it](../features/slow-tests.md#terminating-tests-after-a-timeout), using the same configuration as for tests. By default, scripts are not marked as slow or terminated (this is different from the slow timeout for tests).
- **`leak-timeout`**: Mark scripts [leaky](../features/leaky-tests.md) after a timeout, using the same configuration as for tests. By default, the leak timeout is 100ms.
- **`capture-stdout`**: `true` if the script's standard output should be captured, `false` if not. By default, this is `false`.
- **`capture-stderr`**: `true` if the script's standard error should be captured, `false` if not. By default, this is `false`.

### Example

```toml title="Advanced script definition"
[script.db-generate]
command = 'cargo run -p db-generate'
slow-timeout = { period = "60s", terminate-after = 2 }
# […]
capture-stderr = false
```

## Setting up rules

In configuration, you can create rules for when to use scripts on a per-profile basis. This is done via the `profile.<profile-name>.scripts` array. For example, you can configure a setup script that generates a database if tests from the `db-tests` package, or any packages that depend on it, are run.

```toml title="Basic rules"
[[profile.default.scripts]]
filter = 'rdeps(db-tests)'
setup = 'db-generate'
```

(This example uses the `rdeps` [filterset](../filtersets/index.md) predicate.)

Scripts can also filter based on platform, using the rules listed in [_Specifying platforms_](../configuration/specifying-platforms.md):

```toml title="Platform-specific rules"
[[profile.default.scripts]]
# […]
filter = 'test(/^script_tests::/)'
setup = ['script1', 'script2']
```

Executing pre-timeout scripts follows the same pattern. For example, you can configure a pre-timeout script for every test that contains `slow` in its name.

```toml title="Basic pre-timeout rules"
[[profile.default.scripts]]
filter = 'test(slow)'
pre-timeout = 'capture-backtrace'
```
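
The `capture-backtrace` script named in this rule is hypothetical; it would be defined like any other script under the top-level `script` table. A minimal sketch, with an assumed script name and command:

```toml title="Hypothetical pre-timeout script definition"
# Assumed definition for the pre-timeout script referenced above.
[script.capture-backtrace]
command = 'capture-backtrace.sh'
# Capture output so any diagnostics the script prints are recorded.
capture-stdout = true
```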

A single rule can specify any number of setup scripts and any number of pre-timeout scripts.

```toml title="Combination rules"
[[profile.default.scripts]]
filter = 'test(slow)'
setup = ['setup-1', 'setup-2']
pre-timeout = ['pre-timeout-1', 'pre-timeout-2']
```

## Script execution

### Setup scripts

A given setup script _S_ is only executed if the current profile has at least one rule where the `filter` and `platform` predicates match the current execution environment, and the setup script _S_ is listed in `setup`.

Setup scripts are executed serially, in the order they are defined (_not_ the order they're specified in the rules). If any setup script exits with a non-zero exit code, the entire test run is terminated.

#### Environment variables

> **Review comment (Member), on lines +113 to +119:** Not hugely important at the moment, but I think this level of nesting suggests using a different page for pre-timeout scripts. I think there should probably be a scripts/index.md page with an overview and common options, and then separate pages for setup and pre-timeout scripts.

Setup scripts can define environment variables that will be exposed to tests that match the script. This is done by writing to the `$NEXTEST_ENV` environment variable from within the script.

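For example, a minimal setup script could export a variable to matching tests like this (the script contents and variable name are illustrative assumptions; the mechanism is the `$NEXTEST_ENV` file described above):

```shell
#!/usr/bin/env bash
# Hypothetical setup script: expose MY_ENV_VAR to matching tests by
# appending KEY=VALUE lines to the file nextest provides in $NEXTEST_ENV.
# Defaulting to /dev/null keeps the sketch runnable outside nextest.
echo "MY_ENV_VAR=Hello, world!" >> "${NEXTEST_ENV:-/dev/null}"
```
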
[…]

```rust
fn my_env_test() {
    // […]
}
```

### Pre-timeout scripts

A given pre-timeout script _S_ is executed when the current profile has at least one rule where the `platform` predicates match the current execution environment, the script _S_ is listed in `pre-timeout`, and a test matching the `filter` has reached its configured timeout.
> **Review comment (Member):** One of the things it would be useful to add here is a practical use case that's outlined. For setup scripts, database setup or preparing artifacts for individual test use are illustrative examples. For pre-timeout scripts, it would be good to mention grabbing stack traces and logs. That kind of use case would also help shape their functionality.
>
> It would also be useful to cover some more of the points from my post:
>
> > Should the script be responsible for killing the test, or should nextest do so (after waiting for the script to exit)? I'd lean towards the latter.
>
> Covering this would be useful -- I'm still leaning towards killing the process group anyway, but saying it explicitly would be nice.
>
> > Do we still want to allow a grace period (SIGTERM before SIGKILL) in this case?
>
> This is worth mentioning as well. Pre-timeout scripts add a bunch of states to the state machine, because each unit of work now has a sidecar process involved as well. One approach is to add the pre-timeout script to the process group (Unix) / job object (Windows) -- that would provide fairly clear semantics, I think. The script and the test then live and die together -- the code that waits for the child process to exit can now wait on both processes.
>
> > What should the cwd of the script process be?
>
> This should just be the same as the cwd of the test, I'm quite sure.
>
> > How should information about the script being run be communicated to the user?
>
> The states presented to the user blow up a bit here.
>
> One of the things that's worth thinking about, I think, is passing through stdout and stderr -- unlike setup scripts, pre-timeout scripts aren't executed serially, so capturing stdout/stderr generally makes a lot of sense.
>
> But! There's also a compelling use case for pre-timeout scripts: to put you into an interactive debugger, effectively acting as a break point. In that case, you do want to pass through stdout and stderr, and you want to keep processing existing tests in the background, but (I think) not start new ones. Even if we don't solve it here, it's worth keeping that case in mind.
>
> > Does nextest need to create a directory for the script to write logs and such out to? Even if not necessary, is it good to have?
>
> I really would like this, personally. That information can then be attached in all sorts of ways, e.g. to JUnit reports.
>
> I think it would also be useful to have a protocol for the pre-timeout script to give information back to nextest -- for example, the list of files written out and/or to include. This can just be a file the script writes to, similar to NEXTEST_ENV for setup scripts.
>
> The exact details for much of this can be outlined in an architecture doc (this is more usage-oriented), but a quick summary of a lot of this would be useful.

Pre-timeout scripts are executed serially, in the order they are defined (_not_ the order they're specified in the rules). If any pre-timeout script exits with a non-zero exit code, an error is logged but the test run continues.
> **Review comment (Member):** Is it worth supporting multiple pre-timeout scripts for a single test at all? That seems like it'll add confusion.

> **Review comment:** I'm partial to just one script. Users can wrap multiple scripts themselves if need be.

Nextest sets the following environment variables when executing a pre-timeout script:

* **`NEXTEST_PRE_TIMEOUT_TEST_PID`**: the ID of the process running the test.
* **`NEXTEST_PRE_TIMEOUT_TEST_NAME`**: the name of the running test.
* **`NEXTEST_PRE_TIMEOUT_TEST_BINARY`**: the name of the binary running the test.

> **Review comment (Member), on lines +165 to +167:** I'd call the last one `BINARY_ID`. Maybe also pass in the components of the binary ID as separate variables for convenience.

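As an illustration, a pre-timeout script could use these variables to record which test is stuck and capture diagnostics before nextest terminates it. This is a sketch, not part of the proposed docs; the use of `gdb` is an assumption (any stack-dumping tool would work similarly), and the script skips that step when the tool or PID is unavailable:

```shell
#!/usr/bin/env bash
# Hypothetical pre-timeout script: log the stuck test's identity, then
# try to dump its thread stacks before nextest terminates it.
pid="${NEXTEST_PRE_TIMEOUT_TEST_PID:-}"
name="${NEXTEST_PRE_TIMEOUT_TEST_NAME:-unknown}"
echo "pre-timeout: ${name} (pid ${pid}) exceeded its timeout" >&2
if [ -n "$pid" ] && command -v gdb >/dev/null 2>&1; then
    # gdb is an assumed dependency; tolerate failure since the test
    # process may exit while we attach.
    gdb --batch -p "$pid" -ex 'thread apply all bt' || true
fi
```
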
## Setup scripts in JUnit output

<!-- md:version 0.9.86 -->