
[Feature] Implement run time limits#930

Merged
jan-janssen merged 12 commits into main from run_time
Feb 21, 2026
Conversation

@jan-janssen
Member

@jan-janssen jan-janssen commented Feb 21, 2026

Summary by CodeRabbit

  • New Features

    • Added a configurable per-task runtime limit (run_time_limit) exposed in executor/resource configurations and propagated to job submissions and spawners.
    • SLURM command generation accepts a runtime limit and appends a corresponding --time flag when provided.
  • Documentation

    • Docstrings and public parameter descriptions updated to document the new run_time_limit option.
  • Tests

    • Added unit tests covering runtime-limit behavior and SLURM command generation.

@coderabbitai
Contributor

coderabbitai bot commented Feb 21, 2026

📝 Walkthrough

Walkthrough

Adds a new run_time_limit parameter (seconds) across executors, spawners, and SLURM command generation; the value is propagated into resource dictionaries and used where scheduler interfaces accept duration (Flux jobspec.duration, SLURM --time, PySqa run_time_max). Also exposes ExecutorlibSocketError in the public API. No other control-flow changes.

Changes

  • Executor config & docs (src/executorlib/executor/flux.py, src/executorlib/executor/single.py, src/executorlib/executor/slurm.py): Added run_time_limit to the default resource_dicts and updated docstrings; the parameter is exposed in executor configs but not enforced by executor runtime logic.
  • SLURM command generation (src/executorlib/standalone/command.py, tests/unit/standalone/test_slurm_command.py): generate_slurm_command() gained run_time_limit: Optional[int]; when set, it appends --time=<minutes>, computed as (run_time_limit // 60) + 1. Tests updated to assert the new flag.
  • Flux spawner & tests (src/executorlib/task_scheduler/interactive/spawner_flux.py, tests/unit/executor/test_flux_job.py): FluxPythonSpawner now accepts run_time_limit, stores it, and sets jobspec.duration when provided. A unit test was added to exercise run_time_limit behavior for Flux tasks.
  • PySqa spawners, interactive & file (src/executorlib/task_scheduler/interactive/spawner_pysqa.py, src/executorlib/task_scheduler/file/spawner_pysqa.py): The PysqaSpawner constructor added run_time_limit and passes run_time_max=self._run_time_limit into queue submission (submit_job / submit_kwargs).
  • SLURM spawner integration (src/executorlib/task_scheduler/interactive/spawner_slurm.py): SrunSpawner now accepts and stores run_time_limit and forwards it into generate_slurm_command() when building the srun/slurm invocation.
  • Public API (src/executorlib/api.py): Exports ExecutorlibSocketError from the standalone interactive communication module and adds it to __all__.
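The propagation described above can be sketched with a few small helpers. These are illustrative stand-ins, not the real executorlib functions; only the formulas and keyword names come from the summary:

```python
def slurm_time_flag(run_time_limit):
    """SLURM --time flag in minutes, as merged: floor division plus one."""
    if run_time_limit is None:
        return None
    return "--time=" + str(run_time_limit // 60 + 1)

def flux_duration(run_time_limit):
    """Flux jobspec.duration takes the limit directly in seconds."""
    return run_time_limit

def pysqa_submit_kwargs(resource_dict):
    """PySqa receives the limit under its run_time_max keyword."""
    return {"run_time_max": resource_dict.get("run_time_limit")}

print(slurm_time_flag(250))  # the unit test in this PR expects --time=5
```

With run_time_limit=250 the SLURM path yields `--time=5` (250 // 60 + 1), matching the updated test below.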

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Executor as Executor (Flux/Slurm/Single)
    participant Spawner
    participant CmdGen as generate_slurm_command
    participant Scheduler as Scheduler (Flux/Slurm/PySqa)

    Client->>Executor: submit(task, resource_dict{..., run_time_limit})
    Executor->>Spawner: create spawner(resource_dict includes run_time_limit)
    Spawner->>CmdGen: build command (if Slurm) with run_time_limit -> --time
    Spawner->>Scheduler: submit job (jobspec.duration / run_time_max set when provided)
    Scheduler-->>Spawner: job accepted / started
    Spawner-->>Executor: return job handle

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰
I nibble seconds and hop them neat,
From Flux to Slurm, a tidy beat,
A carrot-timed hop, a scheduler rhyme,
Tasks wear minutes, bounded by time.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 62.50%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Description Check (✅ Passed): check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): the title clearly describes the main change, implementing run time limits across executors and spawners, which is the primary focus of the changeset.


@jan-janssen jan-janssen marked this pull request as draft February 21, 2026 11:26
@jan-janssen jan-janssen linked an issue Feb 21, 2026 that may be closed by this pull request
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/executorlib/executor/single.py (1)

437-474: ⚠️ Potential issue | 🔴 Critical

Strip run_time_limit from resource_dict before passing to MpiExecSpawner.

create_single_node_executor removes scheduler-specific keys (threads_per_core, gpus_per_core, slurm_cmd_args) before passing resource_dict as executor_kwargs to the task schedulers. However, run_time_limit is not removed. When executor_kwargs is passed down to _execute_multiple_tasks, any unconsumed keys are forwarded as **kwargs to the spawner instantiation. Since MpiExecSpawner (via SubprocessSpawner) does not accept run_time_limit and has no **kwargs capture, this raises a TypeError at runtime if a user sets run_time_limit in resource_dict for SingleNodeExecutor.

🐛 Proposed fix
    if "slurm_cmd_args" in resource_dict:
        del resource_dict["slurm_cmd_args"]
+    if "run_time_limit" in resource_dict:
+        del resource_dict["run_time_limit"]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/executorlib/executor/single.py` around lines 437 - 474, The resource_dict
passed from create_single_node_executor still contains run_time_limit which gets
forwarded as executor_kwargs into the task schedulers and ultimately to
MpiExecSpawner/SubprocessSpawner (via _execute_multiple_tasks), causing a
TypeError; remove "run_time_limit" from resource_dict alongside the other keys
(threads_per_core, gpus_per_core, slurm_cmd_args) before constructing
BlockAllocationTaskScheduler or OneProcessTaskScheduler so the spawner is not
instantiated with an unsupported keyword argument.
🧹 Nitpick comments (2)
src/executorlib/executor/flux.py (1)

167-174: run_time_limit missing from FluxJobExecutor.default_resource_dict.

FluxClusterExecutor.default_resource_dict (line 371) explicitly sets "run_time_limit": None, but FluxJobExecutor.default_resource_dict does not include it, despite the docstring advertising it as a valid key. All other shared keys are present in both defaults. While the behaviour is identical at runtime (the spawner defaults to None), the inconsistency is confusing.

♻️ Suggested fix
 default_resource_dict: dict = {
     "cores": 1,
     "threads_per_core": 1,
     "gpus_per_core": 0,
     "cwd": None,
     "openmpi_oversubscribe": False,
     "slurm_cmd_args": [],
+    "run_time_limit": None,
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/executorlib/executor/flux.py` around lines 167 - 174,
FluxJobExecutor.default_resource_dict is missing the "run_time_limit" key which
FluxClusterExecutor.default_resource_dict includes; update the dict in
FluxJobExecutor.default_resource_dict to add "run_time_limit": None so the two
defaults are consistent with the docstring and other shared keys (compare with
FluxClusterExecutor.default_resource_dict to mirror its structure).
src/executorlib/executor/slurm.py (1)

161-168: run_time_limit missing from SlurmClusterExecutor and SlurmJobExecutor default resource dicts (Lines 161-168, 386-393).

Both SLURM executor default dicts omit "run_time_limit": None, inconsistent with FluxClusterExecutor which includes it. The docstrings advertise it as a valid key for all executors.

♻️ Suggested fix (apply to both `SlurmClusterExecutor` and `SlurmJobExecutor` `default_resource_dict`)
 default_resource_dict: dict = {
     "cores": 1,
     "threads_per_core": 1,
     "gpus_per_core": 0,
     "cwd": None,
     "openmpi_oversubscribe": False,
     "slurm_cmd_args": [],
+    "run_time_limit": None,
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/executorlib/executor/slurm.py` around lines 161 - 168, The Slurm
executors' default_resource_dicts are missing the "run_time_limit" key; update
the default_resource_dict in both SlurmClusterExecutor and SlurmJobExecutor to
include "run_time_limit": None so it matches FluxClusterExecutor and the
documented API; locate the default_resource_dict definitions inside the
SlurmClusterExecutor and SlurmJobExecutor classes and add the "run_time_limit"
entry to each default dict.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/executorlib/standalone/command.py`:
- Around line 164-165: The current conversion of run_time_limit to minutes uses
floor+1 and over-allocates for exact multiples; change the calculation in the
block that appends to command_prepend_lst (the line building "--time=" from
run_time_limit) to use a proper ceiling: replace the expression run_time_limit
// 60 + 1 with a correct ceiling computation (e.g., use math.ceil(run_time_limit
/ 60) or integer math (run_time_limit + 59) // 60) so exact multiples map to the
exact minute count.

---

Outside diff comments:
In `@src/executorlib/executor/single.py`:
- Around line 437-474: The resource_dict passed from create_single_node_executor
still contains run_time_limit which gets forwarded as executor_kwargs into the
task schedulers and ultimately to MpiExecSpawner/SubprocessSpawner (via
_execute_multiple_tasks), causing a TypeError; remove "run_time_limit" from
resource_dict alongside the other keys (threads_per_core, gpus_per_core,
slurm_cmd_args) before constructing BlockAllocationTaskScheduler or
OneProcessTaskScheduler so the spawner is not instantiated with an unsupported
keyword argument.

---

Nitpick comments:
In `@src/executorlib/executor/flux.py`:
- Around line 167-174: FluxJobExecutor.default_resource_dict is missing the
"run_time_limit" key which FluxClusterExecutor.default_resource_dict includes;
update the dict in FluxJobExecutor.default_resource_dict to add
"run_time_limit": None so the two defaults are consistent with the docstring and
other shared keys (compare with FluxClusterExecutor.default_resource_dict to
mirror its structure).

In `@src/executorlib/executor/slurm.py`:
- Around line 161-168: The Slurm executors' default_resource_dicts are missing
the "run_time_limit" key; update the default_resource_dict in both
SlurmClusterExecutor and SlurmJobExecutor to include "run_time_limit": None so
it matches FluxClusterExecutor and the documented API; locate the
default_resource_dict definitions inside the SlurmClusterExecutor and
SlurmJobExecutor classes and add the "run_time_limit" entry to each default
dict.

Comment on lines +164 to +165
    if run_time_limit is not None:
        command_prepend_lst += ["--time=" + str(run_time_limit // 60 + 1)]
Contributor


⚠️ Potential issue | 🟡 Minor

--time conversion uses floor+1 instead of ceiling, wasting a minute for exact multiples of 60.

run_time_limit // 60 + 1 produces floor(seconds / 60) + 1, not ceil(seconds / 60). For exact multiples (e.g., run_time_limit=3600), this allocates 61 minutes instead of 60. Non-exact multiples (e.g., 3601 s) happen to produce the correct ceiling, so the formula is inconsistent.

🐛 Proposed fix using proper ceiling arithmetic
     if run_time_limit is not None:
-        command_prepend_lst += ["--time=" + str(run_time_limit // 60 + 1)]
+        command_prepend_lst += ["--time=" + str((run_time_limit + 59) // 60)]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/executorlib/standalone/command.py` around lines 164 - 165, The current
conversion of run_time_limit to minutes uses floor+1 and over-allocates for
exact multiples; change the calculation in the block that appends to
command_prepend_lst (the line building "--time=" from run_time_limit) to use a
proper ceiling: replace the expression run_time_limit // 60 + 1 with a correct
ceiling computation (e.g., use math.ceil(run_time_limit / 60) or integer math
(run_time_limit + 59) // 60) so exact multiples map to the exact minute count.
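The three formulas discussed in this comment can be compared directly in a self-contained snippet:

```python
import math

def floor_plus_one(seconds):
    return seconds // 60 + 1     # behavior merged in this PR

def int_ceiling(seconds):
    return (seconds + 59) // 60  # reviewer's integer-math ceiling

# exact multiple of 60: floor+1 over-allocates by a full minute
print(floor_plus_one(3600), int_ceiling(3600), math.ceil(3600 / 60))  # 61 60 60

# non-exact multiple: both formulas happen to agree
print(floor_plus_one(3601), int_ceiling(3601))  # 61 61
```

Both `(seconds + 59) // 60` and `math.ceil(seconds / 60)` give a true ceiling; the integer form avoids floating point entirely.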

@jan-janssen
Member Author

======================================================================
ERROR: test_executor_dependence_mixed (unit.task_scheduler.file.test_serial.TestCacheExecutorSerial.test_executor_dependence_mixed)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/runner/work/executorlib/executorlib/tests/unit/task_scheduler/file/test_serial.py", line 57, in test_executor_dependence_mixed
    self.assertEqual(fs2.result(), 4)
                     ~~~~~~~~~~^^
  File "/home/runner/miniconda3/envs/test/lib/python3.13/concurrent/futures/_base.py", line 456, in result
    return self.__get_result()
           ~~~~~~~~~~~~~~~~~^^
  File "/home/runner/miniconda3/envs/test/lib/python3.13/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
FileNotFoundError: [Errno 2] Unable to synchronously open file (unable to open file: name = '/home/runner/work/executorlib/executorlib/tests/executorlib_cache/my_funct9021974f353a79d2e7f81553404444b2_o.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

----------------------------------------------------------------------

@codecov

codecov bot commented Feb 21, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 93.89%. Comparing base (f2275f4) to head (3997de5).
⚠️ Report is 1 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #930      +/-   ##
==========================================
+ Coverage   93.87%   93.89%   +0.02%     
==========================================
  Files          38       38              
  Lines        1974     1981       +7     
==========================================
+ Hits         1853     1860       +7     
  Misses        121      121              

☔ View full report in Codecov by Sentry.

@jan-janssen jan-janssen marked this pull request as ready for review February 21, 2026 15:01
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/unit/executor/test_flux_job.py`:
- Around line 120-136: In test_run_time_limit, remove the redundant assertion
AssertTrue(f2.done()) that follows f2.result() (since f2.result() already
guarantees completion) and to avoid potential flakiness replace the direct
AssertTrue(f1.done()) with a short bounded poll/wait that checks f1.done()
repeatedly (e.g., loop with small sleep and total timeout of a few seconds)
before asserting True; locate these in the test_run_time_limit function where f1
and f2 are submitted via FluxJobExecutor and delayed_calc is used.

In `@tests/unit/standalone/test_slurm_command.py`:
- Around line 58-62: The time rounding logic that builds the SLURM time option
currently uses run_time_limit // 60 + 1 which over-allocates when run_time_limit
is an exact multiple of 60; replace that expression with
math.ceil(run_time_limit / 60) (import math if not already imported) in the
function that constructs the SLURM command/time flag (the code path using the
run_time_limit variable and producing the "--time=" value) so that exact-minute
limits map to the correct minute value and partial minutes round up correctly.

Comment on lines +120 to +136
def test_run_time_limit(self):
    with FluxJobExecutor(
        max_cores=1,
        resource_dict={"cores": 1},
        flux_executor=self.executor,
        block_allocation=False,
        pmi_mode=pmi,
    ) as p:
        f1 = p.submit(delayed_calc, 1, resource_dict={"run_time_limit": 1})
        f2 = p.submit(delayed_calc, 2, resource_dict={"run_time_limit": 5})
        self.assertFalse(f1.done())
        self.assertFalse(f2.done())
        self.assertEqual(f2.result(), 2)
        self.assertTrue(f1.done())
        self.assertTrue(f2.done())
        with self.assertRaises(ExecutorlibSocketError):
            f1.result()
Contributor


⚠️ Potential issue | 🟡 Minor

Timing logic is sound; one redundant assertion and a minor flakiness risk to be aware of.

The core timing invariant holds in both sequential and parallel execution:

  • Sequential (max_cores=1 gates f2 behind f1): f1 is killed at t≈1 s, f2 completes at t≈3 s → f1 has been dead for ~2 s when f2.result() returns.
  • Parallel (if block_allocation=False submits both Flux jobs independently): f1 is killed at t≈1 s, f2 completes at t≈2 s → f1 still dead before f2.result() returns.

Two minor points:

  1. Line 134 is redundant: assertTrue(f2.done()) is trivially true after f2.result() has already returned on line 132. Removing it would tighten the intent of the test.

  2. Potential flakiness at line 133: assertTrue(f1.done()) assumes Flux's kill-signal propagation back to the Python future completes within the ~1–2 s window before f2.result() returns. This is reasonable in practice, but if Flux's async notification path experiences transient latency the assertion can fail spuriously. Consider adding a short polling loop or a bounded wait only if this turns out to be flaky in CI.

♻️ Suggested cleanup (remove redundant assertion)
             self.assertEqual(f2.result(), 2)
             self.assertTrue(f1.done())
-            self.assertTrue(f2.done())
             with self.assertRaises(ExecutorlibSocketError):
                 f1.result()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/executor/test_flux_job.py` around lines 120 - 136, In
test_run_time_limit, remove the redundant assertion AssertTrue(f2.done()) that
follows f2.result() (since f2.result() already guarantees completion) and to
avoid potential flakiness replace the direct AssertTrue(f1.done()) with a short
bounded poll/wait that checks f1.done() repeatedly (e.g., loop with small sleep
and total timeout of a few seconds) before asserting True; locate these in the
test_run_time_limit function where f1 and f2 are submitted via FluxJobExecutor
and delayed_calc is used.
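The bounded poll/wait the review suggests could look like the following sketch; the helper name is hypothetical and the demo uses a plain concurrent.futures.Future rather than an actual Flux job:

```python
import time
from concurrent.futures import Future

def wait_until_done(future, timeout=5.0, interval=0.1):
    """Poll future.done() with a small sleep, up to a total timeout."""
    deadline = time.monotonic() + timeout
    while not future.done() and time.monotonic() < deadline:
        time.sleep(interval)
    return future.done()

# demo: a future completed up front is reported done immediately
f = Future()
f.set_result(1)
print(wait_until_done(f))  # True
```

In the test, `self.assertTrue(wait_until_done(f1))` would tolerate transient latency in Flux's async notification path without weakening the assertion.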

Comment on lines +58 to 62
run_time_limit=250,
)
-        self.assertEqual(len(command_lst), 12)
-        reply_lst = ['srun', '-n', '1', '-D', '/tmp/test', '-N', '1', '--cpus-per-task=2', '--gpus-per-task=1', '--exact', '--oversubscribe', '--help']
+        self.assertEqual(len(command_lst), 13)
+        reply_lst = ['srun', '-n', '1', '-D', '/tmp/test', '-N', '1', '--cpus-per-task=2', '--gpus-per-task=1', '--exact', '--oversubscribe', '--time=5', '--help']
self.assertEqual(command_lst, reply_lst)
Contributor


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

ast-grep --pattern 'def generate_slurm_command($$$)'

Repository: pyiron/executorlib

Length of output: 4450


Replace run_time_limit // 60 + 1 with ceiling division to prevent over-allocating time for jobs with exact minute boundaries.

The current implementation at line 165 of src/executorlib/standalone/command.py uses run_time_limit // 60 + 1, which always adds an extra minute. For jobs where run_time_limit is a multiple of 60 seconds (e.g., 60, 120, 180), this allocates unnecessary overhead: run_time_limit=60 becomes --time=2 instead of --time=1. Use math.ceil(run_time_limit / 60) instead to allocate exactly the minimum required time: ceiling ensures protection against under-allocation (e.g., 61 seconds → 2 minutes) without over-allocating when the limit is already a whole number of minutes.

🧰 Tools
🪛 Ruff (0.15.1)

[error] 61-61: Probable insecure usage of temporary file or directory: "/tmp/test"

(S108)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/standalone/test_slurm_command.py` around lines 58 - 62, The time
rounding logic that builds the SLURM time option currently uses run_time_limit
// 60 + 1 which over-allocates when run_time_limit is an exact multiple of 60;
replace that expression with math.ceil(run_time_limit / 60) (import math if not
already imported) in the function that constructs the SLURM command/time flag
(the code path using the run_time_limit variable and producing the "--time="
value) so that exact-minute limits map to the correct minute value and partial
minutes round up correctly.

Contributor

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/executorlib/task_scheduler/file/spawner_pysqa.py (1)

72-91: ⚠️ Potential issue | 🔴 Critical

run_time_limit is never removed from resource_dict, so it leaks into qa.submit_job() as an unknown kwarg

Every other key that is extracted and renamed follows a delete-after-use pattern: cwd is deleted on line 75, threads_per_core on line 78, and the unsupported keys on lines 84–86. run_time_limit is the only extracted key that is left in resource_dict. Because submit_kwargs.update(resource_dict) runs at line 91, when run_time_limit is present it ends up being forwarded to qa.submit_job() as both run_time_max=<value> (correct) and run_time_limit=<value> (unknown). Depending on pysqa's submit_job implementation this will either raise a TypeError for an unexpected keyword argument or silently pass an undefined template variable.

🐛 Proposed fix — delete `run_time_limit` from `resource_dict` after extraction
         submit_kwargs = {
             "command": " ".join(command),
             "dependency_list": [str(qid) for qid in task_dependent_lst],
             "working_directory": os.path.abspath(cwd),
             "run_time_max": resource_dict.get("run_time_limit"),
         }
         if "cwd" in resource_dict:
             del resource_dict["cwd"]
+        if "run_time_limit" in resource_dict:
+            del resource_dict["run_time_limit"]
         if "threads_per_core" in resource_dict:

Alternatively, use pop at the point of extraction to keep it in one place:

-        submit_kwargs = {
-            "command": " ".join(command),
-            "dependency_list": [str(qid) for qid in task_dependent_lst],
-            "working_directory": os.path.abspath(cwd),
-            "run_time_max": resource_dict.get("run_time_limit"),
-        }
+        submit_kwargs = {
+            "command": " ".join(command),
+            "dependency_list": [str(qid) for qid in task_dependent_lst],
+            "working_directory": os.path.abspath(cwd),
+            "run_time_max": resource_dict.pop("run_time_limit", None),
+        }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/executorlib/task_scheduler/file/spawner_pysqa.py` around lines 72 - 91,
resource_dict still contains "run_time_limit" after you map it to
submit_kwargs["run_time_max"], causing it to be forwarded to qa.submit_job() as
an unexpected kwarg; remove or pop "run_time_limit" from resource_dict right
after creating the "run_time_max" entry (the code manipulating resource_dict and
submit_kwargs in this block, including the variables resource_dict,
submit_kwargs and the later call qa.submit_job()) so
submit_kwargs.update(resource_dict) won't reintroduce "run_time_limit".
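The pop-at-extraction pattern recommended in this finding can be shown in a self-contained sketch (the dict keys mirror the review; the command string is made up for illustration):

```python
resource_dict = {"cores": 2, "run_time_limit": 120, "partition": "debug"}

submit_kwargs = {
    "command": "python task.py",
    # pop() both renames the key to run_time_max and removes it from
    # resource_dict in one step, so the update() below cannot
    # reintroduce run_time_limit as an unknown kwarg
    "run_time_max": resource_dict.pop("run_time_limit", None),
}
submit_kwargs.update(resource_dict)  # forwards only the remaining keys

print(sorted(submit_kwargs))
```

Compared with a separate `del resource_dict["run_time_limit"]`, the pop keeps extraction and removal in one place, which is the second alternative the review proposes.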
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Outside diff comments:
In `@src/executorlib/task_scheduler/file/spawner_pysqa.py`:
- Around line 72-91: resource_dict still contains "run_time_limit" after you map
it to submit_kwargs["run_time_max"], causing it to be forwarded to
qa.submit_job() as an unexpected kwarg; remove or pop "run_time_limit" from
resource_dict right after creating the "run_time_max" entry (the code
manipulating resource_dict and submit_kwargs in this block, including the
variables resource_dict, submit_kwargs and the later call qa.submit_job()) so
submit_kwargs.update(resource_dict) won't reintroduce "run_time_limit".

@jan-janssen jan-janssen merged commit 2a94f49 into main Feb 21, 2026
35 checks passed
@jan-janssen jan-janssen deleted the run_time branch February 21, 2026 17:54
@mgt16-LANL

Let me test this in a minute - looks really cool


Development

Successfully merging this pull request may close these issues.

[Documentation] Flux add duration to jobspec

2 participants