
Conversation

@d33bs
Member

@d33bs d33bs commented Jan 21, 2026

Description

This PR adds options for setting image stack scales, both read from the images themselves and provided manually when the data does not include scale information. Along the way I also found that we needed some adjustments to the image stack processing.

Closes #12

What kind of change(s) are included?

  • Documentation (changes docs or other related content)
  • Bug fix (fixes an issue)
  • Enhancement (adds functionality)
  • Breaking change (these changes would cause existing functionality to not work as expected)

Checklist

Please ensure that all boxes are checked before indicating that this pull request is ready for review.

  • I have read and followed the CONTRIBUTING.md guidelines.
  • I have searched for existing content to ensure this is not a duplicate.
  • I have performed a self-review of these additions (including spelling, grammar, and related).
  • These changes pass all pre-commit checks.
  • I have added comments to my code to help provide understanding.
  • I have added a test which covers the code changes found within this PR.
  • I have deleted all non-relevant text in this pull request template.

Summary by CodeRabbit

  • New Features

    • Stack scale support: prompt or prefill voxel spacing when loading image stacks; per-stack override via NAPARI_OME_ARROW_STACK_SCALE (Z,Y,X or T,C,Z,Y,X); see the sketch after this list.
    • Improved stack handling: detect multi-file numbered stacks, infer channelized stacks and produce per-channel layers when applicable.
  • Documentation

    • README updated with stack scale details, usage examples, and multi-select/stack-pattern guidance.
  • Tests

    • Added/updated tests for stack patterns, per-channel layers, nonsequential tokens, and scale overrides.
  • Chores

    • Updated pre-commit configuration and package-data declaration.
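
A minimal sketch of prefilling the stack-scale override referenced above. The spacing values, file path, and plugin name are illustrative assumptions, not taken from this PR:

import os

import napari

# Illustrative values only: Z,Y,X spacing (use T,C,Z,Y,X for higher-dimensional
# stacks). Setting the variable before the reader runs skips the prompt.
os.environ["NAPARI_OME_ARROW_STACK_SCALE"] = "2.0,0.325,0.325"

viewer = napari.Viewer()
# The file path and plugin name below are assumptions for the example.
viewer.open("stack/img_000.tif", plugin="napari-ome-arrow")
napari.run()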

@coderabbitai

coderabbitai bot commented Jan 21, 2026

📝 Walkthrough


Adds modular stack-pattern detection and per-channel stack reading, introduces stack voxel-scale parsing/prefill/prompting (env var NAPARI_OME_ARROW_STACK_SCALE), refactors layer-mode inference into new helper modules, updates _get_layer_mode and _read_one signatures, adds numerous reader helpers and tests, and updates README and packaging/config.

Changes

Cohort / File(s) | Summary
Configuration
.pre-commit-config.yaml
Bumped pre-commit hook revisions: pyproject-fmt and astral-sh/ruff-pre-commit updated.
Packaging
pyproject.toml
Reworked setuptools package-data declaration to include *.yaml via [tool.setuptools.package-data].
Documentation
README.md
Added "Stack scale prompt + override" docs, CLI examples, stack-pattern/multi-file stack usage, and NAPARI_OME_ARROW_STACK_SCALE format.
Core reader entry
src/napari_ome_arrow/_reader.py
Rewrote to import and expose many modular helpers; updated the `_get_layer_mode` (now taking a keyword-only `image_type_hint`) and `_read_one` signatures.
Reader helpers — inference
src/napari_ome_arrow/_reader_infer.py
New module: image/labels normalization and inference utilities (_normalize_image_type, _infer_layer_mode_from_*, _looks_like_ome_source).
Reader helpers — napari UI
src/napari_ome_arrow/_reader_napari.py
New module: napari viewer helpers (_maybe_set_viewer_3d, _as_labels, _enable_grid, _strip_channel_axis).
Reader helpers — OME-Arrow I/O
src/napari_ome_arrow/_reader_omearrow.py
New module: parquet/Vortex/OME-Arrow reading, _read_one implementation for many source types, scale extraction, and robust error handling.
Reader helpers — stack patterns
src/napari_ome_arrow/_reader_stack.py
New module: comprehensive stack-pattern handling — suggestion, prompting, file collection, channel placeholder handling, scale parsing/formatting/inference, RGB-stack assembly (a parsing sketch follows this summary).
Types
src/napari_ome_arrow/_reader_types.py
New type aliases: PathLike and LayerData.
Tests
tests/test_reader.py
Updated/added tests for stack-pattern behavior, per-channel layers, scale override via env var, and image_type-based mode inference; mocks adjusted for new signatures.
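
A rough sketch of how the Z,Y,X / T,C,Z,Y,X override format described in the stack-patterns row above might be parsed. parse_stack_scale is a hypothetical name used for illustration, not necessarily the helper added in this PR:

def parse_stack_scale(text: str) -> tuple[float, ...] | None:
    """Parse a "Z,Y,X" or "T,C,Z,Y,X" override string into floats."""
    parts = [p.strip() for p in text.split(",") if p.strip()]
    if len(parts) not in (3, 5):  # Z,Y,X or T,C,Z,Y,X
        return None
    try:
        return tuple(float(p) for p in parts)
    except ValueError:
        return None

print(parse_stack_scale("2.0, 0.325, 0.325"))  # (2.0, 0.325, 0.325)
print(parse_stack_scale("not-a-scale"))        # None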

Sequence Diagram(s)

sequenceDiagram
    participant User as User/napari
    participant Reader as _reader.py
    participant Stack as _reader_stack
    participant Infer as _reader_infer
    participant IO as _reader_omearrow
    participant Dialog as QtDialog

    User->>Reader: reader_function(file(s))
    Reader->>Stack: detect pattern / collect files
    Stack-->>Reader: pattern + channel_names
    Reader->>Infer: _get_layer_mode(sample_path, image_type_hint?)
    Infer-->>Reader: layer_mode
    alt multiple channels & image mode
        loop per channel
            Reader->>Stack: _replace_channel_placeholder(pattern, channel)
            Reader->>IO: _read_one(per-channel files, stack_scale_override?)
            alt no env scale
                Reader->>Dialog: _prompt_stack_scale(sample_path, default)
                Dialog-->>Reader: user scale
            end
            IO-->>Reader: array + metadata
            Reader-->>User: emit LayerData (array, meta, name)
        end
    else
        Reader->>IO: _read_one(single-source)
        IO-->>Reader: array + metadata
        Reader-->>User: emit LayerData
    end

Estimated Code Review Effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested Reviewers

  • jenna-tomkinson

Poem

🐰 I hop through filenames, sniff each stack we find,
Channels split and patterns matched, scales neatly lined,
A prompt or env var nudges voxels into place,
I stitch each layer, label, name — then off I race,
Hooray — stacks load tidy, every channel primed!

🚥 Pre-merge checks | ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
Docstring Coverage: ⚠️ Warning. Docstring coverage is 75.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
Title Check: ✅ Passed. The title 'Add image stack scale options and adjust image stack ingest' clearly summarizes the main changes: introducing scale options for image stacks and adjusting stack processing logic.
Linked Issues Check: ✅ Passed. The PR fully addresses the issue #12 requirements: it adds automatic scale extraction from metadata via _infer_stack_scale_from_pattern, supports manual override via the NAPARI_OME_ARROW_STACK_SCALE environment variable and _prompt_stack_scale, and integrates stack scale handling throughout the reader pipeline.
Out of Scope Changes Check: ✅ Passed. All code changes are directly related to image stack scale handling or supporting refactoring; minor updates to pre-commit hooks, README documentation, and package configuration are all in scope for the stack scale feature.



@d33bs
Member Author

d33bs commented Jan 21, 2026

@coderabbitai review

@coderabbitai

coderabbitai bot commented Jan 21, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@src/napari_ome_arrow/_reader.py`:
- Around line 837-840: The z_key function's regex currently uses an escaped
backslash sequence that matches a literal "\d" instead of digits; change the
pattern in z_key (the re.search call) to use a proper digit-capturing group,
e.g. re.search(r"Z[S]?(\d+)", path.name) (or re.search(r"[Zz]S?(\d+)",
path.name) if you need to accept lowercase 'z'), so match.group(1) returns the
numeric Z index correctly and permits integer ordering in _files_from_pattern (a short ordering check follows this list).

In `@tests/test_reader.py`:
- Around line 219-221: The lambda passed to reader_mod for "_get_layer_mode"
uses unused parameters `sample_path` and `image_type_hint` which triggers Ruff
ARG005; rename them to `_sample_path` and `_image_type_hint` (i.e., prefix both
unused lambda parameters with an underscore) so the unused-argument lint rule is
satisfied while preserving the same behavior of the lambda.
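
For the z_key fix in the first item above, a small standalone check (not the plugin's code) showing why the digit-capturing group matters for ordering:

import re
from pathlib import Path

def z_key(path: Path) -> tuple[int, str]:
    # Capture the numeric Z index so files sort numerically, not lexically.
    match = re.search(r"Z[S]?(\d+)", path.name)
    return (int(match.group(1)) if match else -1, path.name)

files = [Path("img_Z10.tif"), Path("img_Z2.tif"), Path("img_Z1.tif")]
print([f.name for f in sorted(files, key=z_key)])
# ['img_Z1.tif', 'img_Z2.tif', 'img_Z10.tif'] (a plain string sort puts Z10 before Z2)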

@d33bs
Member Author

d33bs commented Jan 22, 2026

@coderabbitai full review

@coderabbitai

coderabbitai bot commented Jan 22, 2026

✅ Actions performed

Full review triggered.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@README.md`:
- Around line 129-135: The README example for NAPARI_OME_ARROW_STACK_SCALE is
ambiguous: clarify that the single-file example `napari "stack/img_000.tif"`
does not trigger CLI stack-pattern detection and is only representative of
dragging multiple files into the GUI; either state explicitly that stack
detection is GUI-only for that form or replace the example with a multi-file CLI
example (e.g., a numeric range or glob) and keep the env var name
NAPARI_OME_ARROW_STACK_SCALE and the "stack" examples so readers can locate and
update the two example lines.

In `@src/napari_ome_arrow/_reader.py`:
- Around line 1249-1252: Remove the redundant setdefault and nested check for
scale: when scale is not None, simply assign add_kwargs["scale"] = scale (drop
add_kwargs.setdefault("scale", scale) and the extra if scale is not None branch)
so there's a single clear assignment; locate the block using the add_kwargs and
scale symbols in _reader.py and replace the three-line sequence with one
assignment guarded by the existing if scale is not None.
🧹 Nitpick comments (2)
src/napari_ome_arrow/_reader.py (2)

114-120: Add parentheses to clarify operator precedence.

The and/or chain is technically correct but the intent is clearer with explicit parentheses, as flagged by Ruff RUF021.

♻️ Suggested fix
     looks_zarr = (
         s.endswith((".ome.zarr", ".zarr"))
         or ".zarr/" in s
-        or p.exists()
-        and p.is_dir()
-        and p.suffix.lower() == ".zarr"
+        or (p.exists() and p.is_dir() and p.suffix.lower() == ".zarr")
     )

976-998: Consider returning from else block for clarity.

Per Ruff TRY300, the return statement could be placed in an else block to improve exception handling clarity.

♻️ Suggested fix
 def _scale_from_ome_arrow(
     obj: OMEArrow,
 ) -> tuple[float, float, float] | None:
     try:
         record = obj.data.as_py()
         pixels_meta = record.get("pixels_meta", {}) if record else {}
         if not any(
             k in pixels_meta
             for k in (
                 "physical_size_z",
                 "physical_size_y",
                 "physical_size_x",
             )
         ):
             return None
         z = float(pixels_meta.get("physical_size_z") or 1.0)
         y = float(pixels_meta.get("physical_size_y") or 1.0)
         x = float(pixels_meta.get("physical_size_x") or 1.0)
         if z == 1.0 and y == 1.0 and x == 1.0:
             return None
-        return z, y, x
     except Exception:
         return None
+    else:
+        return z, y, x

@d33bs
Member Author

d33bs commented Jan 22, 2026

@coderabbitai full review

@coderabbitai

coderabbitai bot commented Jan 22, 2026

✅ Actions performed

Full review triggered.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@README.md`:
- Around line 65-66: Update the README line about "Stack scale prompt +
override" to explicitly state that the voxel-scale prompt only appears when no
scale metadata or NAPARI_OME_ARROW_STACK_SCALE override is present and only when
a Qt UI is available (i.e., not in headless runs); reference the environment
variable NAPARI_OME_ARROW_STACK_SCALE and the stack (Z,Y,X or T,C,Z,Y,X) format
so users know to prefill it to avoid the prompt in non-UI or metadata-containing
scenarios.

In `@src/napari_ome_arrow/_reader.py`:
- Around line 1306-1365: The per-channel loop calls _read_one repeatedly which
can trigger the stack-scale prompt each time; create a small cache for the
resolved stack scale (e.g., resolved_stack_scale = None) before the for label in
channel_names loop, obtain/resolve the scale once (by calling a single
preliminary _read_one or a new helper like _get_stack_scale_for_pattern) and
store it, then update calls to _read_one to accept an optional resolved_scale
parameter (or read metadata without prompting) so subsequent channel reads reuse
the cached scale; keep existing rgb fallback using _read_rgb_stack_pattern and
preserve behavior of _strip_channel_axis/_maybe_set_viewer_3d and how
add_kwargs/name are set (a caching sketch follows this list).
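
A rough sketch of the caching idea in the second item above. The helper _resolve_stack_scale and the resolved-scale keyword are hypothetical, not this plugin's actual API:

# Resolve the stack scale once, then reuse it for every channel so the
# prompt can appear at most once per stack.
resolved_stack_scale = None
for label in channel_names:
    per_channel_pattern = _replace_channel_placeholder(pattern, label)
    if resolved_stack_scale is None:
        # Hypothetical helper: env var override, metadata, or a single prompt.
        resolved_stack_scale = _resolve_stack_scale(per_channel_pattern)
    data, add_kwargs, layer_type = _read_one(
        per_channel_pattern, stack_scale_override=resolved_stack_scale
    )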

@d33bs d33bs marked this pull request as ready for review January 26, 2026 22:18
@d33bs d33bs requested a review from jenna-tomkinson February 2, 2026 20:57
Member

@jenna-tomkinson jenna-tomkinson left a comment


LGTM! I would recommend more docstrings for the script to make it easier to parse, since it is a big file; I might also recommend splitting that script into parts. Overall, nice work!

d33bs and others added 8 commits February 3, 2026 15:41
@d33bs
Member Author

d33bs commented Feb 3, 2026

Thanks @jenna-tomkinson! Addressed your comments and now merging this in.

@d33bs d33bs merged commit acf8ab3 into WayScience:main Feb 3, 2026
7 checks passed
@d33bs d33bs deleted the image-stack-scales branch February 3, 2026 23:47

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@src/napari_ome_arrow/_reader_infer.py`:
- Around line 17-53: In _normalize_image_type, the current "image" substring
check runs before label/mask/seg checks causing strings like
"segmentation_image" to be classified as "image"; fix by testing for label-like
tokens (the set
{"labels","label","mask","masks","segmentation","seg","outlines","outline"} and
substring checks "label"|"mask"|"seg"|"outline") first and return "labels" if
any match, and only after that evaluate image tokens (exact matches
"image","intensity","raw" and maybe "image" substring), ensuring the label-path
precedes the "image" substring check in the _normalize_image_type function.

In `@src/napari_ome_arrow/_reader_omearrow.py`:
- Around line 418-470: The .npy fallback calls _maybe_set_viewer_3d on arrays
that may still include a channel axis, which makes 2D multichannel data be
mis-detected as 3D; fix by detecting when a channel axis is present (e.g. the
same condition used earlier: arr.ndim == 3 and arr.shape[0] <= 6 or
add_kwargs.get("channel_axis") is not None) and call _maybe_set_viewer_3d on a
channel-stripped view (e.g. arr[0, ...] or by removing the channel_axis) instead
of the full arr so 3D-toggle only considers spatial dimensions; update the .npy
fallback block around _maybe_set_viewer_3d and reuse the same channel-detection
logic used when setting add_kwargs["channel_axis"].

In `@src/napari_ome_arrow/_reader_stack.py`:
- Around line 341-366: The current _files_from_pattern function uses
re.escape(name) which prevents regex wildcards like .* from working; change the
handling so the pattern is treated as a regex (not fully escaped) and only
convert the angle-bracket placeholders into a non-slash wildcard group.
Concretely, replace the re.escape(name) step with using the raw name string
(e.g. name_raw = name), then run regex = re.sub(r"<[^>]+>", r"[^/]+", name_raw)
and compile that with anchors, keeping the rest of the function (candidates,
matched, z_key, sorted return) unchanged to restore support for patterns
containing regex wildcards in _files_from_pattern.

Comment on lines +17 to +53
def _normalize_image_type(value: Any) -> str | None:
    """Normalize an image type hint to "image" or "labels".

    Args:
        value: Raw hint value from metadata or user input.

    Returns:
        "image", "labels", or None if it cannot be inferred.
    """
    if value is None:
        return None
    # Normalize arbitrary input to a comparable string token.
    text = str(value).strip().lower()
    if not text:
        return None
    if text in {"image", "intensity", "raw"} or "image" in text:
        return "image"
    if (
        text
        in {
            "labels",
            "label",
            "mask",
            "masks",
            "segmentation",
            "seg",
            "outlines",
            "outline",
        }
        or "label" in text
        or "mask" in text
        or "seg" in text
        or "outline" in text
    ):
        return "labels"
    return None


⚠️ Potential issue | 🟠 Major

Label-like image_type values can be misclassified as image.

The "image" in text check runs before label/mask/seg tokens, so values like "segmentation_image" or "label_image" will return "image" instead of "labels".

🔧 Suggested fix
-    if text in {"image", "intensity", "raw"} or "image" in text:
-        return "image"
     if (
         text
         in {
             "labels",
             "label",
             "mask",
             "masks",
             "segmentation",
             "seg",
             "outlines",
             "outline",
         }
         or "label" in text
         or "mask" in text
         or "seg" in text
         or "outline" in text
     ):
         return "labels"
+    if text in {"image", "intensity", "raw"} or "image" in text:
+        return "image"
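
A quick check of the reordered version, assuming the fix above has been applied to _reader_infer:

from napari_ome_arrow._reader_infer import _normalize_image_type

# With the label checks evaluated first, hints containing both "seg"/"label"
# and "image" resolve to "labels"; plain image hints are unchanged.
print(_normalize_image_type("segmentation_image"))  # "labels" (was "image")
print(_normalize_image_type("label_image"))         # "labels" (was "image")
print(_normalize_image_type("raw"))                 # "image"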

Comment on lines +418 to +470
        # 🔹 Ask viewer to switch to 3D if there is a real Z-stack
        _maybe_set_viewer_3d(arr)

        if looks_stack:
            # Stack scale can be inferred or user-provided.
            scale_override = (
                scale_override
                if scale_override is not None
                else inferred_scale
            )
            scale = (
                _scale_for_array(arr, mode, add_kwargs, scale_override)
                if scale_override is not None
                else None
            )
        else:
            scale = (
                _scale_for_array(arr, mode, add_kwargs, inferred_scale)
                if inferred_scale is not None
                else None
            )
        if scale is not None:
            add_kwargs["scale"] = scale

        return arr, add_kwargs, layer_type

    # ---- bare .npy fallback -----------------------------------------
    if looks_npy:
        # Minimal fallback for plain numpy arrays.
        arr = np.load(src)
        if arr.ndim == 1:
            n = int(np.sqrt(arr.size))
            if n * n == arr.size:
                arr = arr.reshape(n, n)
            else:
                raise ValueError(
                    f".npy is 1D and not a square image: {arr.shape}"
                )

        if mode == "image":
            if arr.ndim == 3 and arr.shape[0] <= 6:
                add_kwargs["channel_axis"] = 0
            layer_type = "image"
        else:
            # labels
            if arr.ndim == 3:  # treat as (C, Y, X) → first channel
                arr = arr[0, ...]
            arr = _as_labels(arr)
            add_kwargs.setdefault("opacity", 0.7)
            layer_type = "labels"

        # 🔹 Same 3D toggle for npy-based data
        _maybe_set_viewer_3d(arr)

⚠️ Potential issue | 🟡 Minor

Avoid using channel axis as Z when toggling 3D.

_maybe_set_viewer_3d is called on arrays that may still include a channel axis (notably the .npy fallback). That can flip the viewer to 3D for 2D multichannel data.

🔧 Suggested fix (strip channel axis for 3D detection only)
-        _maybe_set_viewer_3d(arr)
+        arr_for_3d = arr
+        channel_axis = add_kwargs.get("channel_axis")
+        if channel_axis is not None:
+            try:
+                arr_for_3d = np.take(arr, 0, axis=int(channel_axis))
+            except Exception:
+                arr_for_3d = arr
+        _maybe_set_viewer_3d(arr_for_3d)
@@
-        _maybe_set_viewer_3d(arr)
+        arr_for_3d = arr
+        channel_axis = add_kwargs.get("channel_axis")
+        if channel_axis is not None:
+            try:
+                arr_for_3d = np.take(arr, 0, axis=int(channel_axis))
+            except Exception:
+                arr_for_3d = arr
+        _maybe_set_viewer_3d(arr_for_3d)
🧰 Tools
🪛 Ruff (0.14.14)

[warning] 453-455: Avoid specifying long messages outside the exception class

(TRY003)


Comment on lines +341 to +366
def _files_from_pattern(pattern: str) -> list[Path]:
    """List files that match a stack pattern.

    Args:
        pattern: Stack pattern string.

    Returns:
        Files matching the pattern, sorted by Z-like tokens when possible.
    """
    p = Path(pattern)
    folder = p.parent
    name = p.name
    # Convert <...> placeholders to a wildcard regex.
    regex = re.escape(name)
    regex = re.sub(r"\\<[^>]+\\>", r"[^/]+", regex)
    compiled = re.compile(f"^{regex}$")
    candidates = [f for f in folder.iterdir() if f.is_file()]
    matched = [f for f in candidates if compiled.match(f.name)]

    def z_key(path: Path) -> tuple[int, str]:
        # Prefer Z-like numeric ordering when present.
        match = re.search(r"Z[S]?(\d+)", path.name)
        return (int(match.group(1)) if match else -1, path.name)

    return sorted(matched, key=z_key)


⚠️ Potential issue | 🟠 Major

Support regex wildcards in stack patterns.

re.escape makes patterns like .* literal, so the prompt’s “regex like .*” guidance (and the fallback patterns) won’t match any files. This can break RGB fallback for valid stack patterns.

🔧 Suggested fix
-    # Convert <...> placeholders to a wildcard regex.
-    regex = re.escape(name)
-    regex = re.sub(r"\\<[^>]+\\>", r"[^/]+", regex)
+    # Convert <...> placeholders to a wildcard regex, while preserving
+    # explicit regex tokens like ".*" in user-supplied patterns.
+    if re.search(r"[.*+?{}\[\]|()\\]", name):
+        regex = re.sub(r"<[^>]+>", r"[^/]+", name)
+    else:
+        regex = re.escape(name)
+        regex = re.sub(r"\\<[^>]+\\>", r"[^/]+", regex)
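
A standalone check, independent of the plugin, showing why re.escape defeats wildcard patterns:

import re

name = "img_.*_Z<z>.tif"

# Fully escaping the name turns ".*" into a literal dot-star, so the compiled
# pattern only matches filenames that literally contain ".*".
print(re.escape(name))                     # img_\.\*_Z<z>\.tif

# Substituting only the <...> placeholder keeps user-supplied regex intact.
print(re.sub(r"<[^>]+>", r"[^/]+", name))  # img_.*_Z[^/]+.tif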



Development

Successfully merging this pull request may close these issues.

Ensure image stacks can receive scale information
