Add image stack scale options and adjust image stack ingest #13
Conversation
📝 Walkthrough

Adds modular stack-pattern detection and per-channel stack reading, introduces stack voxel-scale parsing, prefill, and prompting (env var NAPARI_OME_ARROW_STACK_SCALE), refactors layer-mode inference into new helper modules, and updates …
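For orientation, here is a minimal sketch of how the NAPARI_OME_ARROW_STACK_SCALE override could be parsed; the helper name and the comma-separated Z,Y,X format are assumptions drawn from this summary, not the plugin's actual implementation.

```python
import os


def parse_stack_scale_env() -> tuple[float, ...] | None:
    """Hypothetical helper: parse NAPARI_OME_ARROW_STACK_SCALE.

    Assumes a comma-separated value such as "2.0,0.325,0.325" (Z,Y,X)
    or a five-element T,C,Z,Y,X variant.
    """
    raw = os.environ.get("NAPARI_OME_ARROW_STACK_SCALE", "").strip()
    if not raw:
        return None
    try:
        return tuple(float(part) for part in raw.split(","))
    except ValueError:
        # Malformed overrides fall back to metadata or the scale prompt.
        return None
```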
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User/napari
    participant Reader as _reader.py
    participant Stack as _reader_stack
    participant Infer as _reader_infer
    participant IO as _reader_omearrow
    participant Dialog as QtDialog
    User->>Reader: reader_function(file(s))
    Reader->>Stack: detect pattern / collect files
    Stack-->>Reader: pattern + channel_names
    Reader->>Infer: _get_layer_mode(sample_path, image_type_hint?)
    Infer-->>Reader: layer_mode
    alt multiple channels & image mode
        loop per channel
            Reader->>Stack: _replace_channel_placeholder(pattern, channel)
            Reader->>IO: _read_one(per-channel files, stack_scale_override?)
            alt no env scale
                Reader->>Dialog: _prompt_stack_scale(sample_path, default)
                Dialog-->>Reader: user scale
            end
            IO-->>Reader: array + metadata
            Reader-->>User: emit LayerData (array, meta, name)
        end
    else
        Reader->>IO: _read_one(single-source)
        IO-->>Reader: array + metadata
        Reader-->>User: emit LayerData
    end
```
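To make the flow above concrete, a hedged usage sketch: the file names and scale values are placeholders, and whether a multi-path `viewer.open` call routes through the plugin's stack-pattern detection (as dragging files into the GUI does) is an assumption, not something this PR guarantees.

```python
import os

import napari

# Prefill the voxel-scale override so the Qt prompt never appears
# (placeholder Z,Y,X values).
os.environ["NAPARI_OME_ARROW_STACK_SCALE"] = "2.0,0.325,0.325"

viewer = napari.Viewer()
# Opening several slices at once is what stack-pattern detection keys on;
# a single path is read as an ordinary image.
viewer.open(["stack/img_Z000.tif", "stack/img_Z001.tif", "stack/img_Z002.tif"])
napari.run()
```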
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@src/napari_ome_arrow/_reader.py`:
- Around line 837-840: The z_key function's regex currently uses an escaped
backslash sequence that matches a literal "\d" instead of digits; change the
pattern in z_key (the re.search call) to use a proper digit-capturing group,
e.g. re.search(r"Z[S]?(\d+)", path.name) (or re.search(r"[Zz]S?(\d+)",
path.name) if you need to accept lowercase 'z'), so match.group(1) returns the
numeric Z index correctly and permits integer ordering in _files_from_pattern.
In `@tests/test_reader.py`:
- Around line 219-221: The lambda passed to reader_mod for "_get_layer_mode"
uses unused parameters `sample_path` and `image_type_hint` which triggers Ruff
ARG005; rename them to `_sample_path` and `_image_type_hint` (i.e., prefix both
unused lambda parameters with an underscore) so the unused-argument lint rule is
satisfied while preserving the same behavior of the lambda.
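A minimal sketch of the renamed stub described above, assuming a pytest-style test in which the stub simply forces "image" mode; the test name and import path are illustrative, not the repository's exact code.

```python
import napari_ome_arrow._reader as reader_mod


def test_layer_mode_stub(monkeypatch):
    # Underscore-prefixed parameters mark them as intentionally unused,
    # satisfying Ruff ARG005 without changing the stub's behavior.
    monkeypatch.setattr(
        reader_mod,
        "_get_layer_mode",
        lambda _sample_path, _image_type_hint: "image",
    )
```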
@coderabbitai full review

✅ Actions performed: Full review triggered.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@README.md`:
- Around line 129-135: The README example for NAPARI_OME_ARROW_STACK_SCALE is
ambiguous: clarify that the single-file example `napari "stack/img_000.tif"`
does not trigger CLI stack-pattern detection and is only representative of
dragging multiple files into the GUI; either state explicitly that stack
detection is GUI-only for that form or replace the example with a multi-file CLI
example (e.g., a numeric range or glob) and keep the env var name
NAPARI_OME_ARROW_STACK_SCALE and the "stack" examples so readers can locate and
update the two example lines.
In `@src/napari_ome_arrow/_reader.py`:
- Around line 1249-1252: Remove the redundant setdefault and nested check for
scale: when scale is not None, simply assign add_kwargs["scale"] = scale (drop
add_kwargs.setdefault("scale", scale) and the extra if scale is not None branch)
so there's a single clear assignment; locate the block using the add_kwargs and
scale symbols in _reader.py and replace the three-line sequence with one
assignment guarded by the existing if scale is not None.
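The change described in that last item amounts to a few lines; this standalone sketch only illustrates the before/after shape, with placeholder values rather than the reader's real ones.

```python
add_kwargs: dict = {}
scale = (2.0, 0.325, 0.325)  # placeholder Z,Y,X scale

# Before (redundant): setdefault followed by a guarded re-assignment.
# add_kwargs.setdefault("scale", scale)
# if scale is not None:
#     add_kwargs["scale"] = scale

# After: one clearly guarded assignment.
if scale is not None:
    add_kwargs["scale"] = scale
```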
🧹 Nitpick comments (2)
src/napari_ome_arrow/_reader.py (2)
114-120: Add parentheses to clarify operator precedence. The `and`/`or` chain is technically correct, but the intent is clearer with explicit parentheses, as flagged by Ruff RUF021.

♻️ Suggested fix

```diff
 looks_zarr = (
     s.endswith((".ome.zarr", ".zarr"))
     or ".zarr/" in s
-    or p.exists()
-    and p.is_dir()
-    and p.suffix.lower() == ".zarr"
+    or (p.exists() and p.is_dir() and p.suffix.lower() == ".zarr")
 )
```
976-998: Consider returning from an `else` block for clarity. Per Ruff TRY300, the `return` statement could be placed in an `else` block to improve exception-handling clarity.

♻️ Suggested fix

```diff
 def _scale_from_ome_arrow(
     obj: OMEArrow,
 ) -> tuple[float, float, float] | None:
     try:
         record = obj.data.as_py()
         pixels_meta = record.get("pixels_meta", {}) if record else {}
         if not any(
             k in pixels_meta
             for k in (
                 "physical_size_z",
                 "physical_size_y",
                 "physical_size_x",
             )
         ):
             return None
         z = float(pixels_meta.get("physical_size_z") or 1.0)
         y = float(pixels_meta.get("physical_size_y") or 1.0)
         x = float(pixels_meta.get("physical_size_x") or 1.0)
         if z == 1.0 and y == 1.0 and x == 1.0:
             return None
-        return z, y, x
     except Exception:
         return None
+    else:
+        return z, y, x
```
@coderabbitai full review

✅ Actions performed: Full review triggered.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@README.md`:
- Around line 65-66: Update the README line about "Stack scale prompt +
override" to explicitly state that the voxel-scale prompt only appears when no
scale metadata or NAPARI_OME_ARROW_STACK_SCALE override is present and only when
a Qt UI is available (i.e., not in headless runs); reference the environment
variable NAPARI_OME_ARROW_STACK_SCALE and the stack (Z,Y,X or T,C,Z,Y,X) format
so users know to prefill it to avoid the prompt in non-UI or metadata-containing
scenarios.
In `@src/napari_ome_arrow/_reader.py`:
- Around line 1306-1365: The per-channel loop calls _read_one repeatedly which
can trigger the stack-scale prompt each time; create a small cache for the
resolved stack scale (e.g., resolved_stack_scale = None) before the for label in
channel_names loop, obtain/resolve the scale once (by calling a single
preliminary _read_one or a new helper like _get_stack_scale_for_pattern) and
store it, then update calls to _read_one to accept an optional resolved_scale
parameter (or read metadata without prompting) so subsequent channel reads reuse
the cached scale; keep existing rgb fallback using _read_rgb_stack_pattern and
preserve behavior of _strip_channel_axis/_maybe_set_viewer_3d and how
add_kwargs/name are set.
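A rough, self-contained sketch of the "resolve once, reuse per channel" pattern suggested above; `_get_stack_scale_for_pattern` is the hypothetical helper named in the comment and is stubbed here so the example runs on its own.

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def _get_stack_scale_for_pattern(pattern: str) -> tuple[float, float, float]:
    # In the real reader this would check stack metadata, the
    # NAPARI_OME_ARROW_STACK_SCALE override, and only then prompt the user.
    return (1.0, 1.0, 1.0)


def read_channels(pattern: str, channel_names: list[str]) -> None:
    for label in channel_names:
        channel_pattern = pattern.replace("<channel>", label)
        # All channels reuse the cached scale, so any prompt fires at most
        # once per pattern instead of once per channel.
        scale = _get_stack_scale_for_pattern(pattern)
        print(label, channel_pattern, scale)


read_channels("stack/img_<channel>_Z<z>.tif", ["DAPI", "GFP"])
```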
jenna-tomkinson
left a comment
LGTM! I would recommend more docstrings for the script, just for easier parsing since it is a big file; I might also recommend splitting that script into parts. Overall, nice work!
Co-Authored-By: Jenna Tomkinson <107513215+jenna-tomkinson@users.noreply.github.com>
Thanks @jenna-tomkinson! Addressed your comments and now merging this in.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@src/napari_ome_arrow/_reader_infer.py`:
- Around line 17-53: In _normalize_image_type, the current "image" substring
check runs before label/mask/seg checks causing strings like
"segmentation_image" to be classified as "image"; fix by testing for label-like
tokens (the set
{"labels","label","mask","masks","segmentation","seg","outlines","outline"} and
substring checks "label"|"mask"|"seg"|"outline") first and return "labels" if
any match, and only after that evaluate image tokens (exact matches
"image","intensity","raw" and maybe "image" substring), ensuring the label-path
precedes the "image" substring check in the _normalize_image_type function.
In `@src/napari_ome_arrow/_reader_omearrow.py`:
- Around line 418-470: The .npy fallback calls _maybe_set_viewer_3d on arrays
that may still include a channel axis, which makes 2D multichannel data be
mis-detected as 3D; fix by detecting when a channel axis is present (e.g. the
same condition used earlier: arr.ndim == 3 and arr.shape[0] <= 6 or
add_kwargs.get("channel_axis") is not None) and call _maybe_set_viewer_3d on a
channel-stripped view (e.g. arr[0, ...] or by removing the channel_axis) instead
of the full arr so 3D-toggle only considers spatial dimensions; update the .npy
fallback block around _maybe_set_viewer_3d and reuse the same channel-detection
logic used when setting add_kwargs["channel_axis"].
In `@src/napari_ome_arrow/_reader_stack.py`:
- Around line 341-366: The current _files_from_pattern function uses
re.escape(name) which prevents regex wildcards like .* from working; change the
handling so the pattern is treated as a regex (not fully escaped) and only
convert the angle-bracket placeholders into a non-slash wildcard group.
Concretely, replace the re.escape(name) step with using the raw name string
(e.g. name_raw = name), then run regex = re.sub(r"<[^>]+>", r"[^/]+", name_raw)
and compile that with anchors, keeping the rest of the function (candidates,
matched, z_key, sorted return) unchanged to restore support for patterns
containing regex wildcards in _files_from_pattern.
```python
def _normalize_image_type(value: Any) -> str | None:
    """Normalize an image type hint to "image" or "labels".

    Args:
        value: Raw hint value from metadata or user input.

    Returns:
        "image", "labels", or None if it cannot be inferred.
    """
    if value is None:
        return None
    # Normalize arbitrary input to a comparable string token.
    text = str(value).strip().lower()
    if not text:
        return None
    if text in {"image", "intensity", "raw"} or "image" in text:
        return "image"
    if (
        text
        in {
            "labels",
            "label",
            "mask",
            "masks",
            "segmentation",
            "seg",
            "outlines",
            "outline",
        }
        or "label" in text
        or "mask" in text
        or "seg" in text
        or "outline" in text
    ):
        return "labels"
    return None
```
Label-like image_type values can be misclassified as image.
The "image" in text check runs before label/mask/seg tokens, so values like "segmentation_image" or "label_image" will return "image" instead of "labels".
🔧 Suggested fix
```diff
-    if text in {"image", "intensity", "raw"} or "image" in text:
-        return "image"
     if (
         text
         in {
             "labels",
             "label",
             "mask",
             "masks",
             "segmentation",
             "seg",
             "outlines",
             "outline",
         }
         or "label" in text
         or "mask" in text
         or "seg" in text
         or "outline" in text
     ):
         return "labels"
+    if text in {"image", "intensity", "raw"} or "image" in text:
+        return "image"
```
```python
        # 🔹 Ask viewer to switch to 3D if there is a real Z-stack
        _maybe_set_viewer_3d(arr)

        if looks_stack:
            # Stack scale can be inferred or user-provided.
            scale_override = (
                scale_override
                if scale_override is not None
                else inferred_scale
            )
            scale = (
                _scale_for_array(arr, mode, add_kwargs, scale_override)
                if scale_override is not None
                else None
            )
        else:
            scale = (
                _scale_for_array(arr, mode, add_kwargs, inferred_scale)
                if inferred_scale is not None
                else None
            )
        if scale is not None:
            add_kwargs["scale"] = scale

        return arr, add_kwargs, layer_type

    # ---- bare .npy fallback -----------------------------------------
    if looks_npy:
        # Minimal fallback for plain numpy arrays.
        arr = np.load(src)
        if arr.ndim == 1:
            n = int(np.sqrt(arr.size))
            if n * n == arr.size:
                arr = arr.reshape(n, n)
            else:
                raise ValueError(
                    f".npy is 1D and not a square image: {arr.shape}"
                )

        if mode == "image":
            if arr.ndim == 3 and arr.shape[0] <= 6:
                add_kwargs["channel_axis"] = 0
            layer_type = "image"
        else:
            # labels
            if arr.ndim == 3:  # treat as (C, Y, X) → first channel
                arr = arr[0, ...]
            arr = _as_labels(arr)
            add_kwargs.setdefault("opacity", 0.7)
            layer_type = "labels"

        # 🔹 Same 3D toggle for npy-based data
        _maybe_set_viewer_3d(arr)
```
Avoid using channel axis as Z when toggling 3D.
_maybe_set_viewer_3d is called on arrays that may still include a channel axis (notably the .npy fallback). That can flip the viewer to 3D for 2D multichannel data.
🔧 Suggested fix (strip channel axis for 3D detection only)
```diff
-        _maybe_set_viewer_3d(arr)
+        arr_for_3d = arr
+        channel_axis = add_kwargs.get("channel_axis")
+        if channel_axis is not None:
+            try:
+                arr_for_3d = np.take(arr, 0, axis=int(channel_axis))
+            except Exception:
+                arr_for_3d = arr
+        _maybe_set_viewer_3d(arr_for_3d)
@@
-        _maybe_set_viewer_3d(arr)
+        arr_for_3d = arr
+        channel_axis = add_kwargs.get("channel_axis")
+        if channel_axis is not None:
+            try:
+                arr_for_3d = np.take(arr, 0, axis=int(channel_axis))
+            except Exception:
+                arr_for_3d = arr
+        _maybe_set_viewer_3d(arr_for_3d)
```

🧰 Tools

🪛 Ruff (0.14.14): [warning] 453-455: Avoid specifying long messages outside the exception class (TRY003)
```python
def _files_from_pattern(pattern: str) -> list[Path]:
    """List files that match a stack pattern.

    Args:
        pattern: Stack pattern string.

    Returns:
        Files matching the pattern, sorted by Z-like tokens when possible.
    """
    p = Path(pattern)
    folder = p.parent
    name = p.name
    # Convert <...> placeholders to a wildcard regex.
    regex = re.escape(name)
    regex = re.sub(r"\\<[^>]+\\>", r"[^/]+", regex)
    compiled = re.compile(f"^{regex}$")
    candidates = [f for f in folder.iterdir() if f.is_file()]
    matched = [f for f in candidates if compiled.match(f.name)]

    def z_key(path: Path) -> tuple[int, str]:
        # Prefer Z-like numeric ordering when present.
        match = re.search(r"Z[S]?(\d+)", path.name)
        return (int(match.group(1)) if match else -1, path.name)

    return sorted(matched, key=z_key)
```
Support regex wildcards in stack patterns.
re.escape makes patterns like .* literal, so the prompt’s “regex like .*” guidance (and the fallback patterns) won’t match any files. This can break RGB fallback for valid stack patterns.
🔧 Suggested fix
```diff
-    # Convert <...> placeholders to a wildcard regex.
-    regex = re.escape(name)
-    regex = re.sub(r"\\<[^>]+\\>", r"[^/]+", regex)
+    # Convert <...> placeholders to a wildcard regex, while preserving
+    # explicit regex tokens like ".*" in user-supplied patterns.
+    if re.search(r"[.*+?{}\[\]|()\\]", name):
+        regex = re.sub(r"<[^>]+>", r"[^/]+", name)
+    else:
+        regex = re.escape(name)
+        regex = re.sub(r"\\<[^>]+\\>", r"[^/]+", regex)
```
Description
This PR adds options for specifying image stack scales, both derived from the images and set manually (when unavailable from the data). I also found that the image stack processing needed some adjustments.
Closes #12
What kind of change(s) are included?
Checklist
Please ensure that all boxes are checked before indicating that this pull request is ready for review.
Summary by CodeRabbit
New Features
Documentation
Tests
Chores