docs/setup/mwm-workflow-spec.md: 21 additions & 35 deletions
@@ -24,7 +24,7 @@ A workflow is a standard template that contains a list of tasks that can be ran
The first task to be run will always be the first task in the list. The next task or tasks to be run must be listed in the [Task Destinations](#task-destinations) of the task. A workflow requires at least one task.

Workflows can be created or updated via the [Workflow API](https://github.com/Project-MONAI/monai-deploy-workflow-manager/blob/develop/docs/api/rest/workflow.md).

# Contents
@@ -142,7 +142,7 @@ The following is an example of the structure of a workflow.
The following is an example of a complete workflow:
@@ -240,7 +240,7 @@ It also defines the "PROD_PACS" output destination, meaning that it can be used:
Tasks are the basic building block of a workflow. They are provided as a list - the first Task in the list is executed when the workflow is triggered.
Subsequent tasks are triggered by the `task_destinations` specified by previous tasks.

# Task Object

### Task Types
These tasks are broken down into different types:
@@ -292,8 +292,8 @@ The following are examples of the task json structure including required args fo
    ]
  },
  "task_destinations": [
    {
      "name": "export-task-id"
    }
  ]
}
@@ -364,7 +364,7 @@ Depending of the type of task, the task object may contain additional fields.
Router tasks don't have additional fields. They are used to contain `task_destinations` so that workflow processing can be directed to the desired next step.
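
As a minimal sketch of this (the `id` and `type` field names and their values below are assumptions used purely for illustration, not taken from this excerpt of the spec), a router task would carry little more than its destinations:

```json
{
  "id": "routing-decision",
  "type": "router",
  "task_destinations": [
    { "name": "left-branch-task" },
    { "name": "right-branch-task" }
  ]
}
```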
#### Export
These are task types that allow artifacts to be exported based on the input artifacts list. This task type should not have output artifacts listed.
The task also requires these extra attributes:-
| Property | Type | Description |
@@ -403,7 +403,7 @@ Example (output sent to another task if the patient is female, otherwise to PACS
Export destinations define an external location to which the output of the task can be sent. This will take the form of an event published to a pub/sub service notifying of an available export to a specific destination reference. Most commonly, the export location will be a PACS system and the notification will be picked up by the MONAI Informatics Gateway.
#### Plugin
These tasks are named the same as the installed plugin.
The task also requires these extra attributes:-
| Property | Type | Description |
@@ -414,7 +414,7 @@ The task also requires these extra attributes:-
The args requirements for the Argo plugin can be found [here](#argo).
### Task Arguments

Each task plugin requires specific arguments to be provided in the args dictionary. This allows all task types to support as many additional values as necessary without bloating the workflow spec.

#### Argo
The Argo plugin triggers workflows pre-deployed onto an [Argo workflow server](https://argoproj.github.io/argo-events/).
@@ -425,25 +425,11 @@ The Task's "args" object should contain the following fields:
| Property | Type | Required | Description |
|------|------|------|------|
- |workflow_template_name|str|Yes|The ID of this workflow as registered on the Argo server.|
- |namespace|str|Yes|The namespace of the argo workflow.|
- |server_url|url|Yes|The URL of the Argo server.|
- |allow_insecure|bool|No|Allow insecure connections to argo from the plug-in.|
- |parameters|dictionary|No|Key value pairs, Argo parameters that will be passed on to the Argo workflow.|
- |priority_class|string|No|The name of a valid Kubernetes priority class to be assigned to the Argo workflow pods|
- |resources|dictionary|No|A resource requests & limits object (see below). These will be applied to the Argo workflow pods|
-
- ##### Resource Request Object
-
- Resource request parameters should be included in the task args object dictionary, as a string dictionary. The resources dictionary and all included values below are optional.
- |cpu_reservation|url|A valid [Kubernetes CPU request value](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu).|
- |gpu_limit|dictionary|The number of GPUs to be used by this task.|
- |memory_limit|string|The maximum amount of memory this task may use|
- |cpu_limit|object|The maximum amount of CPU this task may use. See |
+ |workflow_template_name|string|Yes|The ID of this workflow as registered on the Argo server.|
+ |priority_class|string|No|The name of a valid Kubernetes priority class to be assigned to the Argo workflow pods.|
+ |gpu_required|string|No|Whether a GPU is to be used by this task.|
+ |memory_gb|string|No|The maximum amount of memory in gigabytes this task may use.|
+ |cpu|string|No|The maximum amount of CPU this task may use.|
For more information about Kubernetes requests & limits, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/.
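
As a hedged illustration only (the template name and all values below are placeholders, not defaults from the spec), an Argo task's `args` object using the updated fields might look like this:

```json
{
  "args": {
    "workflow_template_name": "lung-segmentation-template",
    "priority_class": "high-priority",
    "gpu_required": "true",
    "memory_gb": "4",
    "cpu": "1"
  }
}
```

Because every field in the table above is typed as a string, the resource values are given as quoted strings rather than numbers.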
@@ -501,7 +487,7 @@ As you can see in the example below, input artifacts require a _value_. This is
#### DICOM Input

If payload DICOM inputs are to be used in a given task, the value of the input must be `context.input.dicom`. This will resolve to the `{payloadId}/dcm` folder within Minio / S3.

Example:
```json
@@ -700,11 +686,11 @@ The following examples both function the same and act as an AND condition.
## Evaluators
Conditional evaluators are logical statement strings that may be used to determine which tasks are executed. They can make use of the execution context _metadata_ and DICOM tags. All conditions must evaluate to true in order for the task to be triggered.

[A detailed breakdown of conditional logic can be found here.](https://github.com/Project-MONAI/monai-deploy-workflow-manager/blob/develop/guidelines/mwm-conditionals.md)

### Supported Evaluators
Conditional evaluators should support evaluating workflow variables against predefined values with the following operators:
< (Valid for integers)
@@ -765,9 +751,9 @@ Example (status):
#### Result Metadata & Execution Stats - Using Dictionary Values

The Result Metadata and Execution Stats are populated by the plugin and are added to the workflow instance once a task is completed, providing output from that task. Each plugin will have its own implementation to populate the result metadata.

Because `result` and `execution_stats` are dictionaries, the section after `context.executions.task_id.result` or `context.executions.task_id.execution_stats` is the key to be checked in the result/execution_stats dictionary.

For conditional statements, the key specified is case sensitive and must exactly match the key which has been output by the model and saved in the result/execution_stats dictionary.
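
For instance, a condition on a result value might look like the statement below; the task id `body-part-identifier` and the `body_part` key are hypothetical stand-ins for whatever your plugin actually writes to its result metadata:

```python
{{context.executions.body-part-identifier.result.body_part}} == 'abdomen'
```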
@@ -807,9 +793,9 @@ The result metadata for an Argo task is populated by a `metadata.json` that is i
}
```

If metadata is to be used in a conditional, the `metadata.json` must be present somewhere in the output directory and must be a valid JSON dictionary. It will be imported automatically if it is in the directory.

An example format of the metadata.json can be found below:

Execution stats are populated automatically from the Argo execution values that are returned.
@@ -916,4 +902,4 @@ Name:
Description:
```python
{{context.workflow.description}} == 'This workflow is a valid workflow'
```