
Commit 8880860

Merge branch 'main' into parallelization_scribe

Authored Mar 14, 2025
2 parents: cd813c3 + 31bd706
14 files changed: +140 −56 lines

.all-contributorsrc (+41)

```diff
@@ -649,6 +649,47 @@
         "bug",
         "doc"
       ]
+    },
+    {
+      "login": "Sunitalama2",
+      "name": "Sunitalama2",
+      "avatar_url": "https://avatars.githubusercontent.com/u/195969937?v=4",
+      "profile": "https://github.com/Sunitalama2",
+      "contributions": [
+        "tutorial",
+        "doc"
+      ]
+    },
+    {
+      "login": "trogers623",
+      "name": "trogers623",
+      "avatar_url": "https://avatars.githubusercontent.com/u/195968055?v=4",
+      "profile": "https://github.com/trogers623",
+      "contributions": [
+        "tutorial",
+        "doc"
+      ]
+    },
+    {
+      "login": "eseigeldanf",
+      "name": "eseigeldanf",
+      "avatar_url": "https://avatars.githubusercontent.com/u/195968072?v=4",
+      "profile": "https://github.com/eseigeldanf",
+      "contributions": [
+        "bug",
+        "tutorial",
+        "doc"
+      ]
+    },
+    {
+      "login": "clhendrikse",
+      "name": "Chloe Hendrikse",
+      "avatar_url": "https://avatars.githubusercontent.com/u/70405404?v=4",
+      "profile": "https://github.com/clhendrikse",
+      "contributions": [
+        "tutorial",
+        "doc"
+      ]
     }
   ],
   "contributorsPerLine": 7,
```

README.md (+18 −12)

Large diffs are not rendered by default.

docs/installation.md (+23 −5)

```diff
@@ -7,8 +7,8 @@
 
 ### Table of contents
 1. [Supported platforms and dependencies](#dependencies)
-2. [Desktop installation step-by-step guide](#desktop)
-3. [Server/command-line step-by-step guide](#cli)
+2. [Server/command-line step-by-step guide](#cli)
+3. [Desktop installation step-by-step guide](#desktop)
 4. [Detailed installation instructions](#detailed)
     1. [Conda](#conda)
     2. [PyPI](#pypi)
@@ -18,14 +18,32 @@
 - macOS x86 (Intel) and M (ARM) processors
 - Windows 64-bit, x86 processors
 
-### Desktop installation step-by-step guide <a name="desktop"></a>
+### Server/command line step-by-step guide <a name="cli"></a>
 
-<iframe src="https://scribehow.com/embed/Install_PlantCV_via_Jupyter_Lab_Desktop__cS9d6VcxRcuDPGZxDfQycw" width="100%" height="640" allowfullscreen frameborder="0"></iframe>
+Use the server/command line installation if you plan to create PlantCV workflows and run workflows in parallel.
+Click through our step-by-step guide below to install PlantCV through conda.
 
-### Server/command-line step-by-step guide <a name="cli"></a>
 
 <iframe src="https://scribehow.com/embed/Installing_PlantCV__MacOSLinux__awAP9Xm2SgWV4SMZadm9CQ" width="640" height="640" allowfullscreen frameborder="0"></iframe>
 
+
+!!!note
+    Once you have installed PlantCV, to get started see our [guide to using PlantCV with Jupyter Notebooks](https://plantcv.readthedocs.io/en/stable/jupyter/)
+    and our [guide to developing workflows in PlantCV](https://plantcv.readthedocs.io/en/stable/analysis_approach/#developing-image-processing-workflows-workflow-development).
+
+---
+
+### Desktop installation step-by-step guide <a name="desktop"></a>
+This is a simple install option if you would just like to test out PlantCV.
+If you plan to use PlantCV for your analyses and run your workflows in parallel, we recommend using the command line installation above.
+
+Click through our step-by-step guide below to install PlantCV through the JupyterLab Desktop app.
+
+<iframe src="https://scribehow.com/embed/Install_PlantCV_via_Jupyter_Lab_Desktop__cS9d6VcxRcuDPGZxDfQycw" width="100%" height="640" allowfullscreen frameborder="0"></iframe>
+
+---
+
 ### Detailed installation instructions <a name="detailed"></a>
 
 PlantCV requires Python (tested with versions 3.9, 3.10, and 3.11) and these [Python packages](https://github.com/danforthcenter/plantcv/blob/main/pyproject.toml).
```
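The version requirement stated above (Python tested with 3.9 through 3.11) can be checked before installing. A minimal sketch — the helper name `plantcv_python_supported` is ours for illustration, not part of PlantCV:

```python
import sys

def plantcv_python_supported(version_info=sys.version_info):
    """Return True if the interpreter is in PlantCV's tested range (3.9-3.11)."""
    # Tuple comparison: (3, 9) <= (major, minor) <= (3, 11)
    return (3, 9) <= tuple(version_info[:2]) <= (3, 11)

print(plantcv_python_supported((3, 10, 4)))  # True: 3.10 is a tested version
```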

docs/kmeans_classifier.md (+2 −2)

```diff
@@ -2,7 +2,7 @@
 
 The first function (`pcv.predict_kmeans`) takes a target image and uses a trained kmeans model produced by [`pcv.learn.train_kmeans`](train_kmeans.md) to classify regions of the target image by the trained clusters. The second function (`pcv.mask_kmeans`) takes a list of clusters and produces the combined mask from clusters of interest. The target and training images may be in grayscale or RGB image format.
 
-**plantcv.kmeans_classifier.predict_kmeans**(img, model_path="./kmeansout.fit", patch_size=10)
+**plantcv.predict_kmeans**(img, model_path="./kmeansout.fit", patch_size=10)
 
 **outputs** An image with regions colored and labeled according to cluster assignment
 
@@ -18,7 +18,7 @@ The first function (`pcv.predict_kmeans`) takes a target image and uses a traine
 - **Example use below**
 
 
-**plantcv.kmeans_classifier.mask_kmeans**(labeled_img, k, cat_list=None)
+**plantcv.mask_kmeans**(labeled_img, k, cat_list=None)
 
 **outputs** Either a combined mask of the requested list of clusters or a dictionary of each cluster as a separate mask with keys corresponding to the cluster number
```
docs/parallel_config.md (+14 −14)

````diff
@@ -8,7 +8,7 @@ to run workflows in parallel.
 Create a configuration file from a template:
 
 ```bash
-plantcv-run-workflow --template my_config.txt
+plantcv-run-workflow --template my_config.json
 ```
 
 *class* **plantcv.parallel.WorkflowConfig**
@@ -64,8 +64,8 @@ Validate parameters/structure of configuration data.
 * **img_outdir**: (str, default = "."): path/name of output directory where images will be saved.
 
 
-* **tmp_dir**: (str, default = `None`): path/name of parent folder for the temporary directory, uses system default
-temporary directory when `None`.
+* **tmp_dir**: (str, default = `"."`): path/name of parent folder for the temporary directory, defaults to the
+current working directory.
 
 
 * **start_date**: (str, default = `None`): start date used to filter images. Images will be analyzed that are newer
@@ -107,15 +107,15 @@ for downstream analysis. The default, `filepath` will create groups of single im
 example of a multi-image group could be to pair VIS and NIR images (e.g. `["timestamp", "camera", "rotation"]`). Supported
 metadata terms are listed [here](pipeline_parallel.md).
 
-* **group_name** (str, default = `"imgtype"`): either a metadata term used to create a unique name for each image in an
+* **group_name** (str, default = `"auto"`): either a metadata term used to create a unique name for each image in an
 image group (created by `groupby`), or `"auto"` to generate a numbered image sequence `image1, image2, ...`. The resulting
 names are used to access individual image filepaths in a workflow.
 
 * **cleanup**: (bool, default =`True`): remove temporary job directory if `True`.
 
 
-* **append**: (bool, default = `True`): if `True` will append results to an existing json file. If `False`, will delete
-previous results stored in the specified JSON file.
+* **append**: (bool, default = `False`): if `False`, will delete previous results stored in the specified JSON file.
+If `True` will append results to an existing json file.
 
 
 * **cluster** (str, default = "LocalCluster"): LocalCluster will run PlantCV workflows on a single machine. All valid
@@ -155,32 +155,32 @@ After defining the cluster, parameters are used to define the size of and reques
 environment. These settings are defined in the `cluster_config` parameter. We define by default the following
 parameters:
 
-**n_workers**: (int, required, default = 1): the number of workers/slots to request from the cluster. Because we
+* **n_workers**: (int, required, default = 1): the number of workers/slots to request from the cluster. Because we
 generally use 1 CPU per image analysis workflow, this is effectively the maximum number of concurrently running
 workflows.
 
-**cores**: (int, required, default = 1): the number of compute cores per workflow. This should be left as 1 unless a
+* **cores**: (int, required, default = 1): the number of compute cores per workflow. This should be left as 1 unless a
 workflow is designed to use multiple CPUs/cores/threads.
 
-**memory**: (str, required, default = "1GB"): the amount of memory/RAM used per workflow. Can be set as a number plus
+* **memory**: (str, required, default = "1GB"): the amount of memory/RAM used per workflow. Can be set as a number plus
 units (KB, MB, GB, etc.).
 
-**disk**: (str, required, default = "1GB"): the amount of disk space used per workflow. Can be set as a number plus
+* **disk**: (str, required, default = "1GB"): the amount of disk space used per workflow. Can be set as a number plus
 units (KB, MB, GB, etc.).
 
-**log_directory**: (str, optional, default = `None`): directory where worker logs are stored. Can be set to a path or
+* **log_directory**: (str, optional, default = `None`): directory where worker logs are stored. Can be set to a path or
 environmental variable.
 
-**local_directory**: (str, optional, default = `None`): dask working directory location. Can be set to a path or
+* **local_directory**: (str, optional, default = `None`): dask working directory location. Can be set to a path or
 environmental variable.
 
-**job_extra_directives**: (dict, optional, default = `None`): extra parameters sent to the scheduler. Specified as a dictionary
+* **job_extra_directives**: (dict, optional, default = `None`): extra parameters sent to the scheduler. Specified as a dictionary
 of key-value pairs (e.g. `{"getenv": "true"}`).
 
 !!! note
     `n_workers` is the only parameter used by `LocalCluster`, all others are currently ignored. `n_workers`, `cores`,
     `memory`, and `disk` are required by the other clusters. All other parameters are optional. Additional parameters
-    defined in the [dask-jobqueu API](https://jobqueue.dask.org/en/latest/api.html) can be supplied.
+    defined in the [dask-jobqueue API](https://jobqueue.dask.org/en/latest/api.html) can be supplied.
 
 ### Example
````
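The defaults this commit changes to `WorkflowConfig` (`tmp_dir`, `group_name`, `append`) can be illustrated by round-tripping a config fragment through JSON. A hedged sketch — field names come from the attribute list above, values are the new defaults; this is not PlantCV's own template code:

```python
import json

# New defaults introduced by this commit (old values noted in comments)
config_fragment = {
    "tmp_dir": ".",        # was null: system default temporary directory
    "group_name": "auto",  # was "imgtype"
    "append": False,       # was True
    "cluster": "LocalCluster",
    "cluster_config": {"n_workers": 1, "cores": 1, "memory": "1GB", "disk": "1GB"},
}

# Serialize and reload, as the template file workflow does
loaded = json.loads(json.dumps(config_fragment))
print(loaded["append"])  # False
```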

docs/pipeline_parallel.md (+9 −8)

````diff
@@ -22,14 +22,15 @@ a configuration file can be edited and input.
 To create a configuration file, run the following:
 
 ```bash
-plantcv-run-workflow --template my_config.txt
+plantcv-run-workflow --template my_config.json
 
 ```
 
 The code above saves a text configuration file in JSON format using the built-in defaults for parameters. The parameters can be modified
 directly in Python as demonstrated in the [WorkflowConfig documentation](parallel_config.md). A configuration can be
 saved at any time using the `save_config` method to save for later use. Alternatively, open the saved config
-file with your favorite text editor and adjust the parameters as needed.
+file with your favorite text editor and adjust the parameters as needed (refer to the attributes section of
+[WorkflowConfig documentation](parallel_config.md) for details about each parameter).
 
 **Some notes on JSON format:**
 
@@ -103,7 +104,7 @@ Sample image filename: `cam1_16-08-06-16:45_el1100s1_p19.jpg`
     "filename_metadata": ["camera", "timestamp", "id", "other"],
     "workflow": "/home/mgehan/pat-edger/round1-python-pipelines/2016-08_pat-edger_brassica-cam1-splitimg.py",
     "img_outdir": "/shares/mgehan_share/raw_data/raw_image/2016-08_pat-edger/data/split-round1/split-cam1/output",
-    "tmp_dir": null,
+    "tmp_dir": ".",
     "start_date": null,
     "end_date": null,
     "imgformat": "jpg",
@@ -115,7 +116,7 @@ Sample image filename: `cam1_16-08-06-16:45_el1100s1_p19.jpg`
     "groupby": ["filepath"],
     "group_name": "auto",
     "cleanup": true,
-    "append": true,
+    "append": false,
    "cluster": "HTCondorCluster",
    "cluster_config": {
        "n_workers": 16,
@@ -179,7 +180,7 @@ in a list to the `filename_metadata` parameter.
     "filename_metadata": ["camera", "plantbarcode", "timestamp"],
     "workflow": "user-workflow.py",
     "img_outdir": "output_directory",
-    "tmp_dir": null,
+    "tmp_dir": ".",
     "start_date": null,
     "end_date": null,
     "imgformat": "jpg",
@@ -191,7 +192,7 @@ in a list to the `filename_metadata` parameter.
     "groupby": ["filepath"],
     "group_name": "auto",
     "cleanup": true,
-    "append": true,
+    "append": false,
     "cluster": "HTCondorCluster",
     "cluster_config": {
         "n_workers": 16,
@@ -231,7 +232,7 @@ To identify each image within our workflow, we will name them based on the `imgt
     "filename_metadata": ["imgtype", "timestamp", "id", "other"],
     "workflow": "/home/mgehan/pat-edger/round1-python-pipelines/2016-08_pat-edger_brassica-cam1-splitimg.py",
     "img_outdir": "/shares/mgehan_share/raw_data/raw_image/2016-08_pat-edger/data/split-round1/split-cam1/output",
-    "tmp_dir": null,
+    "tmp_dir": ".",
     "start_date": null,
     "end_date": null,
     "imgformat": "jpg",
@@ -243,7 +244,7 @@ To identify each image within our workflow, we will name them based on the `imgt
     "groupby": ["timestamp"],
     "group_name": "imgtype",
     "cleanup": true,
-    "append": true,
+    "append": false,
     "cluster": "HTCondorCluster",
     "cluster_config": {
         "n_workers": 16,
````
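The filename-to-metadata mapping these config examples rely on (e.g. `cam1_16-08-06-16:45_el1100s1_p19.jpg` parsed against `["camera", "timestamp", "id", "other"]` with delimiter `_`) can be sketched in a few lines. `parse_filename_metadata` is our illustration of the idea, not PlantCV's internal parser, which also handles timestamp formats and regex delimiters:

```python
def parse_filename_metadata(filename, fields, delimiter="_"):
    """Split the filename stem on the delimiter and zip with metadata field names."""
    stem = filename.rsplit(".", 1)[0]  # drop the extension
    return dict(zip(fields, stem.split(delimiter)))

meta = parse_filename_metadata("cam1_16-08-06-16:45_el1100s1_p19.jpg",
                               ["camera", "timestamp", "id", "other"])
print(meta["camera"])  # cam1
```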

plantcv/parallel/__init__.py (+3 −3)

```diff
@@ -21,7 +21,7 @@ def __init__(self):
         self.workflow = ""
         self.img_outdir = "./output_images"
         self.include_all_subdirs = True
-        self.tmp_dir = None
+        self.tmp_dir = "."
         self.start_date = None
         self.end_date = None
         self.imgformat = "png"
@@ -31,9 +31,9 @@ def __init__(self):
         self.writeimg = False
         self.other_args = {}
         self.groupby = ["filepath"]
-        self.group_name = "imgtype"
+        self.group_name = "auto"
         self.cleanup = True
-        self.append = True
+        self.append = False
         self.cluster = "LocalCluster"
         self.cluster_config = {
             "n_workers": 1,
```

plantcv/parallel/cli.py (+11)

```diff
@@ -5,6 +5,7 @@
 import time
 import datetime
 import plantcv.parallel
+import plantcv.utils
 import tempfile
 import shutil
 
@@ -119,6 +120,16 @@ def main():
     print(f"Processing results took {process_results_clock_time} seconds.", file=sys.stderr)
     ###########################################
 
+    # Convert json results to csv files
+    ###########################################
+    # Convert results start time
+    convert_results_start_time = time.time()
+    print("Converting json to csv... ", file=sys.stderr)
+    plantcv.utils.json2csv(config.json, config.json)
+    convert_results_clock_time = time.time() - convert_results_start_time
+    print(f"Processing results took {convert_results_clock_time} seconds.", file=sys.stderr)
+    ###########################################
+
     # Cleanup
     if config.cleanup is True:
         shutil.rmtree(config.tmp_dir)
```
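The added block follows the same stderr-timing pattern the CLI uses around its other stages (start a clock, announce the step, run it, report elapsed seconds). A generic sketch of that pattern — the `timed_step` helper is ours, not part of the PlantCV CLI:

```python
import sys
import time

def timed_step(label, func, *args):
    """Run func(*args), reporting wall-clock time to stderr like the CLI stages do."""
    start = time.time()
    print(f"{label}... ", file=sys.stderr)
    result = func(*args)
    elapsed = time.time() - start
    print(f"{label} took {elapsed} seconds.", file=sys.stderr)
    return result

# e.g. wrapping a results-conversion call
total = timed_step("Converting json to csv", sum, [1, 2, 3])
```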

plantcv/plantcv/closing.py (+1 −1)

```diff
@@ -22,7 +22,7 @@ def closing(gray_img, kernel=None):
         fatal_error("Input image must be grayscale or binary")
 
     # If image is binary use the faster method
-    if len(np.unique(gray_img)) == 2:
+    if len(np.unique(gray_img)) <= 2:
         bool_img = morphology.binary_closing(gray_img, kernel)
         filtered_img = np.copy(bool_img.astype(np.uint8) * 255)
     # Otherwise use method appropriate for grayscale images
```
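The effect of relaxing `== 2` to `<= 2`: an image with a single unique value (for example, an all-background mask) now takes the fast binary path instead of falling through to the grayscale method. A minimal NumPy illustration of the two predicates:

```python
import numpy as np

# An all-zero binary mask has only ONE unique value (0)
empty_mask = np.zeros((4, 4), dtype=np.uint8)

old_is_binary = len(np.unique(empty_mask)) == 2  # False: misses this case
new_is_binary = len(np.unique(empty_mask)) <= 2  # True: routed to fast path
```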

plantcv/plantcv/fill_holes.py (+1 −1)

```diff
@@ -22,7 +22,7 @@ def fill_holes(bin_img):
     :return filtered_img: numpy.ndarray
     """
     # Make sure the image is binary
-    if len(np.shape(bin_img)) != 2 or len(np.unique(bin_img)) != 2:
+    if len(np.shape(bin_img)) != 2 or len(np.unique(bin_img)) > 2:
         fatal_error("Image is not binary")
 
     # Cast binary image to boolean
```
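This is the same relaxation as in `closing.py`, inverted for an error check: only images that are not 2-D or carry more than two distinct values are rejected. A sketch of the predicate — the helper name is ours for illustration:

```python
def not_binary(ndim, unique_count):
    """Mirror of the relaxed validity check: under the old `!= 2` test an
    all-zero (single-valued) mask was wrongly rejected as non-binary."""
    return ndim != 2 or unique_count > 2
```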

plantcv/plantcv/kmeans_classifier.py (+4 −3)

```diff
@@ -48,6 +48,7 @@ def predict_kmeans(img, model_path="./kmeansout.fit", patch_size=10):
     reshape_params = [[h - 2*mg + 1, w - 2*mg + 1], [h - 2*mg, w - 2*mg]]
     # Takes care of even vs odd numbered patch size reshaping
     labeled = train_labels.reshape(reshape_params[patch_size % 2][0], reshape_params[patch_size % 2][1])
+    labeled = labeled.astype("uint8")
     _debug(visual=labeled, filename=os.path.join(params.debug_outdir, "_labeled_img.png"))
     return labeled
 
@@ -70,7 +71,7 @@ def mask_kmeans(labeled_img, k, cat_list=None):
     mask_dict = {}
     L = [*range(k)]
     for i in L:
-        mask_light = np.where(labeled_img == i, 255, 0)
+        mask_light = np.where(labeled_img == i, 255, 0).astype("uint8")
         _debug(visual=mask_light, filename=os.path.join(params.debug_outdir, "_kmeans_mask_"+str(i)+".png"))
         mask_dict[str(i)] = mask_light
     return mask_dict
@@ -80,9 +81,9 @@ def mask_kmeans(labeled_img, k, cat_list=None):
     params.debug = None
     for idx, i in enumerate(cat_list):
         if idx == 0:
-            mask_light = np.where(labeled_img == i, 255, 0)
+            mask_light = np.where(labeled_img == i, 255, 0).astype("uint8")
         else:
-            mask_light = pcv.logical_or(mask_light, np.where(labeled_img == i, 255, 0))
+            mask_light = pcv.logical_or(mask_light, np.where(labeled_img == i, 255, 0).astype("uint8"))
     params.debug = debug
     _debug(visual=mask_light, filename=os.path.join(params.debug_outdir, "_kmeans_combined_mask.png"))
     return mask_light
```
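The reason for the added `.astype("uint8")` casts: `np.where(cond, 255, 0)` returns a default integer dtype (platform int), while image consumers such as debug plotting and OpenCV-backed operations expect uint8 masks. A minimal illustration:

```python
import numpy as np

# Cluster-label image with three clusters (0, 1, 2)
labeled = np.array([[0, 1], [1, 2]])

# Without the cast, np.where yields a platform integer dtype, not uint8
mask = np.where(labeled == 1, 255, 0).astype("uint8")
print(mask.dtype)  # uint8
```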

plantcv/plantcv/transform/auto_correct_color.py (+5 −2)

```diff
@@ -1,6 +1,6 @@
 # Automatically detect a color card and color correct to standard chip values
 
-from plantcv.plantcv import params
+from plantcv.plantcv import params, deprecation_warning
 from plantcv.plantcv.transform.detect_color_card import detect_color_card
 from plantcv.plantcv.transform.color_correction import get_color_matrix, std_color_matrix, affine_color_correction
 
@@ -28,7 +28,10 @@ def auto_correct_color(rgb_img, label=None, **kwargs):
     # Set label to params.sample_label if None
     if label is None:
         label = params.sample_label
-
+    deprecation_warning(
+        "The 'label' parameter is no longer utilized, since color chip size is now metadata. "
+        "It will be removed in PlantCV v5.0."
+    )
     # Get keyword arguments and set defaults if not set
     labeled_mask = detect_color_card(rgb_img=rgb_img, min_size=kwargs.get("min_size", 1000),
                                      radius=kwargs.get("radius", 20),
```

plantcv/plantcv/transform/detect_color_card.py (+5 −2)

```diff
@@ -6,7 +6,7 @@
 import cv2
 import math
 import numpy as np
-from plantcv.plantcv import params, outputs, fatal_error
+from plantcv.plantcv import params, outputs, fatal_error, deprecation_warning
 from plantcv.plantcv._debug import _debug
 
 
@@ -108,7 +108,10 @@ def detect_color_card(rgb_img, label=None, **kwargs):
     # Set label to params.sample_label if None
     if label is None:
         label = params.sample_label
-
+    deprecation_warning(
+        "The 'label' parameter is no longer utilized, since color chip size is now metadata. "
+        "It will be removed in PlantCV v5.0."
+    )
     # Get keyword arguments and set defaults if not set
     min_size = kwargs.get("min_size", 1000)  # Minimum size for _is_square chip filtering
     radius = kwargs.get("radius", 20)  # Radius of circles to draw on the color chips
```
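Both files now call `deprecation_warning` to flag the unused `label` parameter ahead of its removal in v5.0. A sketch of how such a helper can work — PlantCV's real helper may print or format differently; this version just emits a standard `DeprecationWarning`:

```python
import warnings

def deprecation_warning(message):
    """Emit a DeprecationWarning attributed to the caller (stacklevel=2)."""
    warnings.warn(message, DeprecationWarning, stacklevel=2)

deprecation_warning("The 'label' parameter is no longer utilized. "
                    "It will be removed in PlantCV v5.0.")
```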
*The remainder of the diff could not be loaded.*

0 commit comments