[Fix] Rename resource_dict to executor_kwargs (#936)
Conversation
📝 Walkthrough

Renames the `resource_dict` parameter to `executor_kwargs` across the executor and task-scheduler APIs.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/executorlib/executor/flux.py (3)
183-224: ⚠️ Potential issue | 🔴 Critical

`FluxJobExecutor` passes the stale `resource_dict=` keyword to `create_flux_executor`, causing a `TypeError` at runtime.

Both call paths in `FluxJobExecutor.__init__` (the dependency-enabled path at line 187 and the `disable_dependencies` path at line 213) still use `resource_dict=resource_dict` when calling `create_flux_executor`. Since `create_flux_executor`'s parameter was renamed to `executor_kwargs` (line 455), these calls will immediately raise:

```
TypeError: create_flux_executor() got an unexpected keyword argument 'resource_dict'
```

🐛 Proposed fix

```diff
 executor=create_flux_executor(
     max_workers=max_workers,
     cache_directory=cache_directory,
     max_cores=max_cores,
-    resource_dict=resource_dict,
+    executor_kwargs=resource_dict,
     pmi_mode=pmi_mode,
     flux_executor=flux_executor,
     flux_executor_nesting=flux_executor_nesting,
     flux_log_files=flux_log_files,
     hostname_localhost=hostname_localhost,
     block_allocation=block_allocation,
     init_function=init_function,
     log_obj_size=log_obj_size,
     wait=wait,
 ),
```

Apply the same fix to both occurrences (lines 187 and 213).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/executorlib/executor/flux.py` around lines 183 - 224, In FluxJobExecutor.__init__, both calls to create_flux_executor currently pass the old keyword resource_dict=resource_dict which no longer exists; update both occurrences to pass the map under the new parameter name executor_kwargs=resource_dict (i.e., replace resource_dict=resource_dict with executor_kwargs=resource_dict) so create_flux_executor receives the renamed argument; ensure you change this in the dependency-enabled path and the disable_dependencies path inside FluxJobExecutor.__init__ to avoid the TypeError.
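The failure mode described above is easy to reproduce in isolation. A minimal sketch, using a stand-in function rather than the real `create_flux_executor`:

```python
def create_executor(*, executor_kwargs=None):
    # Stand-in for create_flux_executor after the rename; not the real API.
    return dict(executor_kwargs or {})

# Calling with the pre-rename keyword fails at call time, before any body runs.
try:
    create_executor(resource_dict={"cores": 2})
except TypeError as err:
    print(type(err).__name__)  # TypeError

# The renamed keyword is accepted.
print(create_executor(executor_kwargs={"cores": 2}))  # {'cores': 2}
```

Because keyword binding happens before the function body executes, every affected call site fails immediately, which is why the review flags these as critical.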
387-400: ⚠️ Potential issue | 🔴 Critical

`FluxClusterExecutor` passes the stale `resource_dict=` to `create_pysqa_block_allocation_scheduler`, causing a `TypeError` at runtime.

Line 396 still uses the old `resource_dict=resource_dict` keyword argument when calling `create_pysqa_block_allocation_scheduler`. That function's parameter was renamed to `executor_kwargs` (spawner_pysqa.py line 230). The equivalent fix in `SlurmClusterExecutor` was applied correctly (slurm.py line 192). This path is broken for every `FluxClusterExecutor(block_allocation=True)` invocation.

🐛 Proposed fix

```diff
 super().__init__(
     executor=create_pysqa_block_allocation_scheduler(
         max_cores=max_cores,
         cache_directory=cache_directory,
         hostname_localhost=hostname_localhost,
         log_obj_size=log_obj_size,
         pmi_mode=pmi_mode,
         init_function=init_function,
         max_workers=max_workers,
-        resource_dict=resource_dict,
+        executor_kwargs=resource_dict,
         pysqa_config_directory=pysqa_config_directory,
         backend="flux",
     )
 )
```
Verify each finding against the current code and only fix it if needed. In `@src/executorlib/executor/flux.py` around lines 387 - 400, FluxClusterExecutor is still passing the removed keyword resource_dict= to create_pysqa_block_allocation_scheduler which now expects executor_kwargs; update the call inside FluxClusterExecutor (where create_pysqa_block_allocation_scheduler is invoked) to replace resource_dict=resource_dict with executor_kwargs=resource_dict (or rename the variable if needed), matching the fix applied in SlurmClusterExecutor and ensuring the keyword name aligns with the function signature.
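Stale call sites like the ones above can also be found mechanically before they fail at runtime. A sketch of a simple keyword-usage scan over a source tree (the files and their contents here are made up for illustration, not the real executorlib sources):

```python
import re
import tempfile
from pathlib import Path

# A stand-in source tree with one stale call site.
root = Path(tempfile.mkdtemp())
(root / "flux.py").write_text(
    "executor = create_flux_executor(resource_dict=resource_dict)\n"
)
(root / "slurm.py").write_text(
    "executor = create_slurm_executor(executor_kwargs=resource_dict)\n"
)

# Any remaining `resource_dict=` keyword is a call site that will raise
# TypeError once the parameter is renamed.
stale = [
    (path.name, lineno)
    for path in sorted(root.rglob("*.py"))
    for lineno, line in enumerate(path.read_text().splitlines(), start=1)
    if re.search(r"\bresource_dict\s*=", line)
]
print(stale)  # [('flux.py', 1)]
```

The same check can be done with `grep -rn "resource_dict=" src/ tests/`; note that a text scan also flags intentional uses (such as `submit(resource_dict=...)`), so each hit still needs review.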
406-425: ⚠️ Potential issue | 🔴 Critical

Fix the parameter name at line 412: use `resource_dict=resource_dict` instead of `executor_kwargs=resource_dict`.

The `create_file_executor` function signature expects `resource_dict` as its first parameter, not `executor_kwargs`. The calls in single.py (line 351) and slurm.py (line 209) both correctly use `resource_dict=resource_dict`, while this call passes `executor_kwargs=resource_dict`, which will raise a `TypeError` at runtime because the required `resource_dict` parameter is missing.
Verify each finding against the current code and only fix it if needed. In `@src/executorlib/executor/flux.py` around lines 406 - 425, The call to create_file_executor inside the super().__init__ block passes the wrong keyword: it uses executor_kwargs=resource_dict but the function expects resource_dict; update the keyword to resource_dict=resource_dict in the create_file_executor invocation (the surrounding call in the constructor that currently includes executor_kwargs, flux_executor=None, pmi_mode, etc.) so the required resource_dict parameter is provided and no TypeError is raised.
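These breakages all stem from renaming a keyword in one release. An alternative, sketched below under the assumption that backward compatibility matters, is a temporary shim that accepts both keyword names; this is a generic pattern, not what this PR implements:

```python
import warnings

def create_executor(*, executor_kwargs=None, **deprecated):
    # Hypothetical transition shim: accept the old keyword for one release
    # and forward it to the new parameter name with a warning.
    if "resource_dict" in deprecated:
        if executor_kwargs is not None:
            raise TypeError("pass executor_kwargs or resource_dict, not both")
        warnings.warn(
            "resource_dict is deprecated; use executor_kwargs instead",
            DeprecationWarning,
            stacklevel=2,
        )
        executor_kwargs = deprecated.pop("resource_dict")
    if deprecated:
        raise TypeError(f"unexpected keyword arguments: {sorted(deprecated)}")
    return dict(executor_kwargs or {})

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = create_executor(resource_dict={"cores": 1})
print(result, len(caught))  # {'cores': 1} 1
```

The shim keeps old call sites working for a deprecation window instead of failing with a `TypeError`, at the cost of a slightly messier signature.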
⚠️ Outside diff range comments (2)
tests/unit/task_scheduler/file/test_serial.py (2)
124-131: ⚠️ Potential issue | 🔴 Critical

Incomplete rename: `execute_tasks_h5` no longer accepts a `resource_dict` kwarg.

Three test methods pass `"resource_dict"` as a key in the `kwargs` dict for `execute_tasks_h5`, but the function parameter was renamed to `executor_kwargs`. These tests will fail with a `TypeError`.

🐛 Proposed fix

```diff
 process = Thread(
     target=execute_tasks_h5,
     kwargs={
         "future_queue": q,
         "execute_function": execute_in_subprocess,
-        "resource_dict": {"cores": 1, "cwd": None, "cache_directory": cache_dir},
+        "executor_kwargs": {"cores": 1, "cwd": None, "cache_directory": cache_dir},
         "terminate_function": terminate_subprocess,
     },
 )
```

Apply this same change in `test_executor_function` (line 129), `test_executor_function_dependence_kwargs` (line 169), and `test_executor_function_dependence_args` (line 209).

Also applies to: 164-171, 204-211
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/task_scheduler/file/test_serial.py` around lines 124 - 131, Tests are passing a now-removed kwarg name "resource_dict" into execute_tasks_h5; update the Thread kwargs to use the new parameter name "executor_kwargs" instead of "resource_dict" wherever execute_tasks_h5 is invoked in the tests (specifically in the test methods test_executor_function, test_executor_function_dependence_kwargs, and test_executor_function_dependence_args) so the call matches the new execute_tasks_h5 signature.
78-84: ⚠️ Potential issue | 🔴 Critical

Incomplete rename: `FileTaskScheduler` no longer accepts `resource_dict`.

The `FileTaskScheduler.__init__` signature was renamed from `resource_dict` to `executor_kwargs`, but these three tests still pass `resource_dict=...`. They will raise `TypeError: __init__() got an unexpected keyword argument 'resource_dict'` at runtime.

🐛 Proposed fix

```diff
 def test_executor_working_directory(self):
     cwd = os.path.join(os.path.dirname(__file__), "..", "..", "..", "static")
     with FileTaskScheduler(
-        resource_dict={"cwd": cwd}, execute_function=execute_in_subprocess
+        executor_kwargs={"cwd": cwd}, execute_function=execute_in_subprocess
     ) as exe:
         fs1 = exe.submit(list_files_in_working_directory)
         self.assertEqual(fs1.result(), os.listdir(cwd))

 def test_executor_error(self):
     cwd = os.path.join(os.path.dirname(__file__), "..", "..", "..", "static")
     with FileTaskScheduler(
-        resource_dict={"cwd": cwd}, execute_function=execute_in_subprocess
+        executor_kwargs={"cwd": cwd}, execute_function=execute_in_subprocess
     ) as exe:
         fs1 = exe.submit(get_error, a=1)
         with self.assertRaises(ValueError):
             fs1.result()
         self.assertEqual(len(os.listdir(cwd)), 1)

 def test_executor_error_file(self):
     cwd = os.path.join(os.path.dirname(__file__), "..", "..", "..", "static")
     with FileTaskScheduler(
-        resource_dict={"cwd": cwd, "error_log_file": "error.out"},
+        executor_kwargs={"cwd": cwd, "error_log_file": "error.out"},
         execute_function=execute_in_subprocess
     ) as exe:
```

Also applies to: 86-94, 96-108
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/task_scheduler/file/test_serial.py` around lines 78 - 84, The tests use the old constructor keyword "resource_dict" for FileTaskScheduler which no longer accepts it; update the tests (e.g., test_executor_working_directory and the other failing tests) to pass the same dict under the new parameter name executor_kwargs when instantiating FileTaskScheduler (e.g., FileTaskScheduler(executor_kwargs={"cwd": cwd}, execute_function=execute_in_subprocess)); ensure you update every test site that currently uses resource_dict to use executor_kwargs so __init__ no longer raises a TypeError.
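A cheap guard against this kind of regression is a test that inspects the constructor signature directly, so a future rename fails one focused check rather than several unrelated tests. A sketch using a simplified stand-in class, not the real `FileTaskScheduler`:

```python
import inspect

class FileTaskScheduler:
    # Stand-in with the renamed constructor parameter; the real class lives
    # in executorlib and takes more arguments.
    def __init__(self, executor_kwargs=None, execute_function=None):
        self._executor_kwargs = dict(executor_kwargs or {})
        self._execute_function = execute_function

# Assert on the signature itself rather than on a call that happens to work.
params = inspect.signature(FileTaskScheduler.__init__).parameters
print("executor_kwargs" in params, "resource_dict" in params)  # True False
```

In a real test suite these two checks would live in a `unittest` method alongside the behavioral tests quoted above.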
⚠️ Outside diff range comments (2)
tests/unit/task_scheduler/file/test_serial.py (1)
47-47: ⚠️ Potential issue | 🟠 Major

Fix constructor calls in lines 81, 89, and 99: use `executor_kwargs=` instead of `resource_dict=`.

Lines 81, 89, and 99 pass `resource_dict=` to `FileTaskScheduler()`, but the constructor parameter was renamed to `executor_kwargs`. Update these calls from `FileTaskScheduler(resource_dict={"cwd": cwd}, ...)` to `FileTaskScheduler(executor_kwargs={"cwd": cwd}, ...)`.

Note: line 47 is correct; `submit()` still accepts `resource_dict` as a parameter.
Verify each finding against the current code and only fix it if needed. In `@tests/unit/task_scheduler/file/test_serial.py` at line 47, Update the constructor calls to FileTaskScheduler: replace any keyword argument resource_dict= passed into FileTaskScheduler(...) with executor_kwargs= (e.g. change FileTaskScheduler(resource_dict={"cwd": cwd}, ...) to FileTaskScheduler(executor_kwargs={"cwd": cwd}, ...)) so the calls match the renamed constructor parameter; leave submit(...) calls that use resource_dict unchanged.

src/executorlib/executor/flux.py (1)
366-374: ⚠️ Potential issue | 🟡 Minor

Inconsistency: `run_time_limit` is included in the `FluxClusterExecutor` defaults but not in `FluxJobExecutor`.

`FluxClusterExecutor` (line 373) sets `"run_time_limit": None` as a default in `default_resource_dict`, ensuring the key is always present in the `executor_kwargs` passed downstream. `FluxJobExecutor` (lines 96-103) does not include `run_time_limit` in its defaults, so the key is only present if explicitly provided by the caller. While current downstream usage relies on `.get()` or value-level checks (e.g., `resource_dict.get("run_time_limit")`, `if self._run_time_limit is not None`), this inconsistency between the two executors warrants clarification: is the difference intentional, or should both follow the same pattern?
Verify each finding against the current code and only fix it if needed. In `@src/executorlib/executor/flux.py` around lines 366 - 374, FluxClusterExecutor defines "run_time_limit" in its default_resource_dict but FluxJobExecutor's default_resource_dict (in FluxJobExecutor) omits it, causing inconsistent presence of the key downstream; update FluxJobExecutor's default_resource_dict to include "run_time_limit": None (matching FluxClusterExecutor) so executor_kwargs always contains the key, and ensure any code using resource_dict.get("run_time_limit") or self._run_time_limit continues to work unchanged.
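If the two executors are meant to behave identically, the usual pattern is to merge caller input over a shared defaults dict so every key is always present. A sketch with illustrative keys (the real `default_resource_dict` in executorlib contains more entries):

```python
# Illustrative defaults, modeled on the keys discussed in this review.
DEFAULT_RESOURCE_DICT = {
    "cores": 1,
    "cwd": None,
    "run_time_limit": None,  # including it here makes the key always present
}

def build_executor_kwargs(resource_dict=None):
    # Caller-supplied keys override the defaults; missing keys fall back, so
    # downstream code can index the dict directly instead of using .get().
    return {**DEFAULT_RESOURCE_DICT, **(resource_dict or {})}

kwargs = build_executor_kwargs({"cores": 4})
print(kwargs)  # {'cores': 4, 'cwd': None, 'run_time_limit': None}
```

With this pattern both executors would see the same key set regardless of what the caller passed, which is the consistency the review comment asks about.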
🧹 Nitpick comments (1)
src/executorlib/executor/single.py (1)
410-421: `executor_kwargs` docstring omits `run_time_limit`.

The `SingleNodeExecutor` class-level docstring (line 43) and its `__init__` docstring (line 128) document `run_time_limit` as a valid key inside `resource_dict`/`executor_kwargs`, but the `create_single_node_executor` docstring does not list it.

📝 Suggested docstring addition

```diff
 executor_kwargs (dict): A dictionary of arguments required by the executor. With the following keys:
     - cores (int): number of MPI cores to be used for each function call
     - threads_per_core (int): number of OpenMP threads to be used for each function call
     - gpus_per_core (int): number of GPUs per worker - defaults to 0
     - cwd (str/None): current working directory where the parallel python task is executed
     - openmpi_oversubscribe (bool): adds the `--oversubscribe` command line flag (OpenMPI and SLURM only) - default False
     - slurm_cmd_args (list): Additional command line arguments for the srun call (SLURM only)
     - error_log_file (str): Name of the error log file to use for storing exceptions raised by the Python functions submitted to the Executor.
+    - run_time_limit (int): The maximum runtime in seconds for each task. Default: None
```
Verify each finding against the current code and only fix it if needed. In `@src/executorlib/executor/single.py` around lines 410 - 421, Add documentation for the run_time_limit key to the create_single_node_executor docstring: mention that executor_kwargs (aka resource_dict in SingleNodeExecutor.__init__) may include run_time_limit (int/float) to set a per-task runtime cap (seconds) and briefly describe its behavior and default; update the docstring near the executor_kwargs listing in create_single_node_executor to mirror the existing descriptions in SingleNodeExecutor class-level docstring and __init__ so all three places (SingleNodeExecutor, __init__, create_single_node_executor) consistently document run_time_limit.
Codecov Report

❌ Patch coverage is

Additional details and impacted files

```
@@            Coverage Diff             @@
##             main     #936      +/-   ##
==========================================
+ Coverage   93.86%   93.91%   +0.05%
==========================================
  Files          38       38
  Lines        1988     1988
==========================================
+ Hits         1866     1867       +1
+ Misses        122      121       -1
```

☔ View full report in Codecov by Sentry.
Summary by CodeRabbit