
Support image tracking for device types and modules#17

Merged
marcinpsk merged 26 commits into main from module-images on Feb 20, 2026

Conversation

@marcinpsk
Owner

@marcinpsk marcinpsk commented Feb 17, 2026

Summary by CodeRabbit

  • New Features

    • Detects and reports missing front/rear images and supports image uploads for device and module types.
    • Rich-powered progress UI with optional remaining-time display (--show-remaining-time).
  • Improvements

    • Faster imports via parallel/chunked fetching, preloading and caching of components; improved create/update flows for device/module types.
    • Condensed/deferred console output with grouped logs, richer per-run summaries, and better interrupt handling.
  • Tests

    • Expanded coverage for image detection, progress/logging, import filtering, and API/component caching.
  • Chores

    • Replaced tqdm with Rich.

Swap tqdm dependency for rich and update get_progress_wrapper() to use
rich.progress.track(). Update related docstrings and comments.
Upload images for existing device types regardless of mode, exclude
image properties from property updates, add module-type image attachment
support via extras API, and add change detector tests.
@coderabbitai

coderabbitai bot commented Feb 17, 2026

Important

Review skipped

This PR was authored by the user configured for CodeRabbit reviews. By default, CodeRabbit skips reviewing PRs authored by this user. It's recommended to use a dedicated user account to post CodeRabbit review feedback.

To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Walkthrough

Adds image-property detection/reporting and upload/preload support; replaces tqdm with a Rich-based progress UI and deferred console logging; expands NetBox API with caching, preloading, batched/component creation and image flows; hardens repo parsing for interrupts; and adds many unit tests across these areas.

Changes

Cohort / File(s) Summary
Change detection & reporting
change_detector.py, tests/test_change_detector.py
Adds IMAGE_PROPERTIES and COMPONENT_ALIASES_BY_CANONICAL; implements ChangeDetector._compare_image_properties and integrates image checks into detect_changes; updates alias handling and detailed per-device change/report logging; adds unit tests for front/rear/missing/empty-image cases.
NetBox API, preloading & images
netbox_api.py, tests/test_netbox_api.py
Introduces concurrency and chunked fetches, endpoint caching/preloading, image discovery/upload/attachment APIs, and expanded DeviceTypes/NetBox create/update flows to support images and component creation; adds tests for preload, caching, filtering and verbose logging.
Progress UI & CLI orchestration
nb-dt-import.py, tests/test_nb_dt_import.py, pyproject.toml
Replaces tqdm with rich, adds MyProgress, custom progress columns, get_progress_panel/get_progress_wrapper, --show-remaining-time, and selection/filtering helpers; integrates Rich into parsing/import flows and adds related tests.
Deferred / grouped logging
log_handler.py, tests/test_log_handler.py
Adds console deferral plumbing (set_console, start_progress_group, end_progress_group, _emit), buffers messages during progress groups and flushes on end; updates verbose_log/log to use _emit; tests for buffering and nested groups.
Repo parsing robustness
repo.py, tests/test_repo.py
Adds docstrings, refactors path helpers, makes slug filtering tolerant to missing slug (falls back to model), and adds KeyboardInterrupt-safe cancellation for parallel parsing; test for missing-slug parsing.
Tests & integration coverage
tests/* (new/expanded suites)
Adds and expands tests across change detection, NetBox API preloads/images, progress UI, logging, and repo parsing to cover new behaviors and edge cases.

Sequence Diagram(s)

sequenceDiagram
    participant Importer as Import Process
    participant Detector as ChangeDetector
    participant API as NetBox Client
    participant NB as NetBox Service

    Importer->>Detector: detect_changes(yaml_data, netbox_dt)
    Detector->>Detector: _compare_image_properties(yaml_data, netbox_dt)
    Note over Detector: YAML True + NetBox empty/None -> emit PropertyChange
    Detector-->>Importer: change_report (includes property_changes)

    Importer->>API: select actions (create/update) using change_report
    API->>API: preload component caches (global & scoped) with progress
    API->>NB: upload image attachments and device/module images
    NB-->>API: 200/201 responses
    API-->>Importer: create/upload results and progress updates

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Poem

🐰 I hopped through YAML lanes and peeked in every file,

Front and rear I counted twice and flagged each missing tile.
Rich bars hummed while caches grew and uploads took their flight,
Logs stayed snug inside my pouch and flushed at morning light.
Hoppity-hop — device images tucked in tight.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 66.88%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
  • Description Check (✅ Passed): check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): the title 'Support image tracking for device types and modules' directly and concisely summarizes the main change across the PR, which is consistently reflected in changes to change_detector.py, nb-dt-import.py, netbox_api.py, and related test files.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@change_detector.py`:
- Around line 475-478: The hint "use --verbose" is printed even when verbose
mode is already active; update the conditional around verbose_only (which is
computed from report.modified_device_types and ct_removed) so you only call
self.handle.log(f"  ({verbose_only} more without removals — use --verbose to
list)") when verbose mode is off—i.e., wrap that log in a check of the current
verbosity flag (use your existing flag such as self.options.verbose or
self.verbose) and leave the verbose_log path untouched.
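The guard described above can be sketched as a small predicate; the message format and variable names follow the review comment, but the helper itself is hypothetical:

```python
def verbose_hint(modified_count: int, ct_removed: int, verbose_enabled: bool):
    """Return the hint line to log, or None when it should be suppressed.

    The hint only appears when there are extra entries AND verbose mode is off.
    """
    verbose_only = modified_count - ct_removed
    if verbose_only > 0 and not verbose_enabled:
        return f"  ({verbose_only} more without removals — use --verbose to list)"
    return None
```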

In `@netbox_api.py`:
- Around line 305-329: The current logic uses module_type_ids_with_images to
skip uploading all images for a module if any attachment exists; change this to
perform per-file deduplication: for the module_type_res (module_type_res.id and
module_type_res.model) query existing attachments via the DeviceTypes
image/attachments API (use the same client on self.device_types to list/get
attachments), build a set of existing attachment filenames, then iterate
image_files and only call self.device_types.upload_image_attachment for files
whose basename is not in that set; replace or augment
module_type_ids_with_images with a mapping from module_type_res.id to uploaded
filenames (or update it after successful uploads) so partially uploaded modules
will get only their missing images uploaded.
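The per-file deduplication boils down to a basename filter; a minimal sketch (the helper name is illustrative, not from the PR):

```python
import os


def files_to_upload(image_files, existing_names):
    """Keep only image paths whose basename is not already attached."""
    return [p for p in image_files if os.path.basename(p) not in existing_names]
```

After each successful upload, the caller would add the basename to `existing_names` so a partially uploaded module only receives its missing files on the next pass.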
- Around line 1107-1141: The upload_image_attachment function currently only
catches requests.RequestException but open(image_path, "rb") can raise
FileNotFoundError/OSError; update the method to add an except block for OSError
(or FileNotFoundError) around the file open/POST logic (before/alongside the
existing requests exception handler) that logs a clear error via self.handle
(include image_path and the exception) and returns or re-raises as appropriate
so missing files don't surface as request errors and the images counter isn't
touched; reference upload_image_attachment, the open(image_path, "rb") call, and
the existing requests.RequestException handler when making this change.
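A minimal sketch of the suggested handling, with `upload` standing in for the POST call; the real method would keep its existing `requests.RequestException` handler alongside this branch:

```python
def safe_upload(image_path, upload, log):
    """Open and upload a file, logging instead of raising when it is unreadable."""
    try:
        with open(image_path, "rb") as fh:
            upload(fh)
        return True
    except OSError as e:  # FileNotFoundError is a subclass of OSError
        log(f"Error reading image file {image_path}: {e}")
        return False
```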
- Around line 314-315: IMAGE_EXTENSIONS is being recreated on each iteration of
the module types loop; move it to module scope (top-level constant) so it is
defined once and reused. Update references in the loop that filter image_files
(the list comprehension using os.path.splitext and .lower()) to use the
module-level IMAGE_EXTENSIONS constant, and remove the local reassignment inside
the loop to avoid repeated allocation.
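For illustration, hoisting the constant to module scope looks like this; the exact extension set here is an assumption, not taken from the PR:

```python
import os

# Defined once at import time instead of being rebuilt on every loop iteration.
IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif"}


def filter_image_files(paths):
    """Keep paths whose extension (case-insensitive) is a known image type."""
    return [p for p in paths if os.path.splitext(p)[1].lower() in IMAGE_EXTENSIONS]
```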

In `@tests/test_change_detector.py`:
- Around line 1-4: Tests import unused symbols causing lint F401: remove the
unused imports pytest and PropertyChange from tests/test_change_detector.py so
only required symbols remain (e.g., keep MagicMock and ChangeDetector); update
the import line that currently reads "import pytest\nfrom unittest.mock import
MagicMock\n\nfrom change_detector import ChangeDetector, PropertyChange" to
exclude pytest and PropertyChange.

@marcinpsk marcinpsk changed the title from "Module images" to "Support image tracking for device types and modules" on Feb 17, 2026
- Only show "use --verbose" hint when verbose mode is off
- Deduplicate module-type image uploads per-file instead of per-module
- Handle OSError in upload_image_attachment for missing files
- Move IMAGE_EXTENSIONS to module scope
- Remove unused imports in test file

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
netbox_api.py (1)

1096-1113: ⚠️ Potential issue | 🟡 Minor

upload_images still lacks OSError handling, unlike upload_image_attachment.

Line 1099's open(path, "rb") can raise FileNotFoundError / OSError if the image file is missing or unreadable. The except on Line 1106 only catches requests.RequestException, so the error would propagate uncaught (the finally block handles cleanup but not logging). The newer upload_image_attachment method (Line 1153) already handles OSError — apply the same pattern here for consistency.

🛡️ Proposed fix
         except requests.RequestException as e:
             self.handle.log(f"Error uploading images for device type {device_type}: {e}")
+        except OSError as e:
+            self.handle.log(f"Error reading image file for device type {device_type}: {e}")
         finally:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@netbox_api.py` around lines 1096 - 1113, The upload_images function currently
opens files with open(path, "rb") inside the for loop and only catches
requests.RequestException, so FileNotFoundError/OSError can escape; update
upload_images to mirror upload_image_attachment by adding an except OSError (or
broad OSError/IOError) branch around the file open loop to log the
missing/unreadable file (include device_type and path) and continue/abort as
appropriate, ensuring file_handles cleanup in the finally block still executes;
reference the symbols upload_images, file_handles, open(path, "rb"),
device_type, and the existing upload_image_attachment error handling for the
exact logging/message pattern.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@netbox_api.py`:
- Around line 309-337: The module-type image deduplication works but uses
module_type_existing_images.get(module_type_res.id, set()) then reassigns
module_type_existing_images[module_type_res.id] = existing; simplify by using
module_type_existing_images.setdefault(module_type_res.id, existing_set) to
obtain-or-create the set in one step (reference module_type_existing_images,
module_type_res.id, and the loop that checks os.path.basename(img_path) against
existing), ensuring you still add uploaded filenames into that set so the cache
stays updated.

---

Outside diff comments:
In `@netbox_api.py`:
- Around line 1096-1113: The upload_images function currently opens files with
open(path, "rb") inside the for loop and only catches requests.RequestException,
so FileNotFoundError/OSError can escape; update upload_images to mirror
upload_image_attachment by adding an except OSError (or broad OSError/IOError)
branch around the file open loop to log the missing/unreadable file (include
device_type and path) and continue/abort as appropriate, ensuring file_handles
cleanup in the finally block still executes; reference the symbols
upload_images, file_handles, open(path, "rb"), device_type, and the existing
upload_image_attachment error handling for the exact logging/message pattern.

---

Duplicate comments:
In `@change_detector.py`:
- Around line 473-477: The verbose hint should only be shown when verbose mode
is off: update the conditional that decides to log the "--verbose" hint so it
checks verbose_only = len(report.modified_device_types) - ct_removed and only
calls self.handle.log("  ({verbose_only} more without removals — use --verbose
to list)") when verbose_only > 0 and not self.handle.args.verbose; ensure this
check is used where modified device types are reported (see verbose_only,
ct_removed, report.modified_device_types, self.handle.args.verbose, and
self.handle.log).
- Around line 454-460: Replace the hardcoded tuple ("front_image", "rear_image")
used in the property-update filter in netbox_api.py with an import of the shared
IMAGE_PROPERTIES constant so both modules use the same source of truth;
specifically, import IMAGE_PROPERTIES and use it in place of the literal tuple
wherever the filter checks image property names (the existing literal
("front_image", "rear_image") should be removed and replaced by
IMAGE_PROPERTIES).

- Catch OSError in upload_images for missing/unreadable image files
- Use setdefault() instead of get()+reassign for image dedup map

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
netbox_api.py (1)

285-336: ⚠️ Potential issue | 🟠 Major

Image uploads silently skipped for existing module types when only_new=True.

For existing module types, continue at line 292 exits the loop iteration before reaching the image upload block at lines 309–336. Contrast with the device-type path (lines 160–171), where images are uploaded before the only_new check, explicitly labeled "always, regardless of mode." The module-type flow is inconsistent: existing module types never receive their missing images in only_new mode.

The fix is to replicate the device-type pattern by moving the image upload code (or extracting a helper) to run before line 292, keeping lines 309–336 for newly created module types only.

🐛 Sketch of the fix
         try:
             module_type_res = all_module_types[curr_mt["manufacturer"]["slug"]][curr_mt["model"]]
+
+            # Upload missing images — always, regardless of only_new (mirrors device-type behavior)
+            if src_file and src_file != "Unknown":
+                _upload_module_images(
+                    self, module_type_res, src_file, module_type_existing_images
+                )
+
             if only_new:
                 self.handle.verbose_log(
                     f"Module Type Cached: {module_type_res.manufacturer.name} - "
                     + f"{module_type_res.model} - {module_type_res.id}. Skipping."
                 )
                 continue
             ...
         except KeyError:
             module_type_res = self.netbox.dcim.module_types.create(curr_mt)
             ...

-        # Discover and upload module-type images
-        if src_file and src_file != "Unknown":
-            ...
+        # (image upload now handled above for both existing and new types)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@netbox_api.py` around lines 285 - 336, The module-type branch skips image
uploads when only_new=True because the loop does a continue after detecting a
cached module_type_res; move the image discovery/upload logic (the block that
builds image_dir, filters IMAGE_EXTENSIONS, checks module_type_existing_images
and calls device_types.upload_image_attachment) to run immediately after
resolving module_type_res for cached entries (i.e., before the `if only_new: ...
continue`), or extract that logic into a helper (e.g.,
upload_module_type_images(module_type_res, src_file,
module_type_existing_images)) and call it both for cached and newly-created
module types; keep the existing image-upload block after creation for
newly-created types only and retain the current error/creation handling around
module_types.create.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@netbox_api.py`:
- Around line 313-316: The current construction of image_dir uses str.replace on
os.path.dirname(src_file) which can falsely match substrings (e.g.,
"cisco-module-types") or silently produce incorrect paths; change image_dir
creation to use path-component-aware logic: convert src_file to a pathlib.Path,
locate the path part equal to "module-types", replace only that path component
with "module-images" (or raise/log if "module-types" is not present), then
append module_name to form the final directory; update references to image_dir,
src_file, and module_name accordingly (use pathlib.Path methods such as .parts,
constructing a new Path from the modified parts and / module_name).
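A sketch of the component-aware swap with pathlib; unlike `str.replace`, matching whole path parts cannot falsely hit a substring such as "cisco-module-types". The helper name and the raise-on-absence strategy are assumptions:

```python
from pathlib import Path


def module_image_dir(src_file, module_name):
    """Swap the 'module-types' path component for 'module-images'.

    Raises ValueError if 'module-types' is not a component of the path.
    """
    parts = list(Path(src_file).parent.parts)
    idx = parts.index("module-types")  # exact component match, not substring
    parts[idx] = "module-images"
    return Path(*parts) / module_name
```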
- Around line 271-274: The deduplication key should use NetBox's stable
attachment label instead of the storage filename: when populating
module_type_existing_images (the loop over self.netbox.extras.image_attachments
filtering dcim.moduletype), replace using os.path.basename(att.image) with
att.name (the stored human-readable label), and when checking before upload (the
check that currently compares basenames of img_path to the set), compare
os.path.splitext(os.path.basename(img_path))[0] against the values in
module_type_existing_images so the extension/unique-storage-suffix differences
are ignored.
- Around line 329-336: The call site unconditionally marks files as uploaded by
calling existing.add(os.path.basename(img_path)) regardless of whether
device_types.upload_image_attachment actually succeeded; change
upload_image_attachment to return a boolean success/failure (True on success,
False on any caught exception) and update the caller where
device_types.upload_image_attachment(...) is invoked to check the returned value
before calling existing.add(...): only call
existing.add(os.path.basename(img_path)) when the boolean indicates success.
Ensure upload_image_attachment preserves existing exception handling but maps
outcomes to True/False so the dedup cache is only updated on successful uploads.

---

Outside diff comments:
In `@netbox_api.py`:
- Around line 285-336: The module-type branch skips image uploads when
only_new=True because the loop does a continue after detecting a cached
module_type_res; move the image discovery/upload logic (the block that builds
image_dir, filters IMAGE_EXTENSIONS, checks module_type_existing_images and
calls device_types.upload_image_attachment) to run immediately after resolving
module_type_res for cached entries (i.e., before the `if only_new: ...
continue`), or extract that logic into a helper (e.g.,
upload_module_type_images(module_type_res, src_file,
module_type_existing_images)) and call it both for cached and newly-created
module types; keep the existing image-upload block after creation for
newly-created types only and retain the current error/creation handling around
module_types.create.

- Upload images for cached module types even in only_new mode
- Extract _upload_module_type_images helper for clarity
- Use pathlib for path-component-aware module-types→module-images swap
- Deduplicate by attachment name instead of storage filename
- Return success boolean from upload_image_attachment to avoid
  caching failed uploads
@marcinpsk
Owner Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Feb 17, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@netbox_api.py`:
- Around line 174-177: The "Device Type Cached: … Skipping." message is
misleading because images may have been uploaded earlier; update the log in the
only_new branch to clarify precisely what is being skipped (e.g., "Skipping DB
create for cached device type" or "Device Type cached: skipping creation (images
already handled)") so callers know images were not skipped; locate the
self.handle.verbose_log call that uses dt.manufacturer.name, dt.model, dt.id and
replace the message string accordingly.
- Around line 354-362: The path-replacement using parts.index("module-types")
assumes only one "module-types" in the path which can mis-target the wrong
segment if it appears multiple times; update the block around src_path, parts,
idx, module_name, image_dir to either use parts[::-1].index(...) (or rindex
equivalent) to replace the last occurrence for robustness, or if keeping the
current behavior, add a concise code comment above the idx =
parts.index("module-types") line documenting the assumption that "module-types"
appears only once and that we intentionally replace the first occurrence.
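Targeting the last occurrence can be done by indexing into the reversed sequence; a small illustrative helper:

```python
def replace_last(parts, old, new):
    """Return a copy of parts with the last occurrence of old replaced by new."""
    idx = len(parts) - 1 - parts[::-1].index(old)  # index of the last match
    out = list(parts)
    out[idx] = new
    return out
```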

- Update only_new log to note images were already handled
- Replace last "module-types" path component for robustness

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@netbox_api.py`:
- Around line 217-218: Update the verbose log call that currently uses
self.handle.verbose_log(f"Device Type Cached: {dt.manufacturer.name} - " +
f"{dt.model} - {dt.id}") to a more descriptive, consistent message similar to
the "Skipping updates (images already handled)." line; include context that this
path was reached when only_new is False and there were no updates/change_report
entries (e.g., "Device Type cached (no updates or images already handled):
{dt.manufacturer.name} - {dt.model} - {dt.id}") so readers can immediately see
why caching occurred; locate the log by the call to self.handle.verbose_log and
the dt variables.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
netbox_api.py (1)

357-404: 🧹 Nitpick | 🔵 Trivial

Duplicate API call for module-type image attachments.

create_module_types (line 367) fetches image_attachments.filter(object_type="dcim.moduletype") unconditionally, but filter_actionable_module_types (line 308) already fetches the same data. In --update mode, both are called in sequence from nb-dt-import.py, resulting in a redundant API round-trip.

Consider passing the pre-fetched module_type_existing_images from the caller or caching it.

♻️ Sketch: accept pre-fetched image map
-    def create_module_types(self, module_types, progress=None, only_new=False, all_module_types=None):
+    def create_module_types(self, module_types, progress=None, only_new=False, all_module_types=None, existing_images=None):
         ...
-        module_type_existing_images = {}
-        for att in self.netbox.extras.image_attachments.filter(object_type="dcim.moduletype"):
-            names = module_type_existing_images.setdefault(att.object_id, set())
-            if att.name:
-                names.add(att.name)
+        if existing_images is not None:
+            module_type_existing_images = existing_images
+        else:
+            module_type_existing_images = {}
+            for att in self.netbox.extras.image_attachments.filter(object_type="dcim.moduletype"):
+                names = module_type_existing_images.setdefault(att.object_id, set())
+                if att.name:
+                    names.add(att.name)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@netbox_api.py` around lines 357 - 404, create_module_types currently always
refetches image attachments via module_type_existing_images causing duplicate
API calls; change create_module_types to accept an optional pre-fetched map
(e.g., module_type_existing_images=None) and only call
self.netbox.extras.image_attachments.filter(object_type="dcim.moduletype") when
that argument is None; update callers (such as filter_actionable_module_types or
nb-dt-import.py) to pass their pre-fetched map, and ensure
_upload_module_type_images still consumes the same module_type_existing_images
structure for per-file deduplication.
log_handler.py (1)

86-93: 🧹 Nitpick | 🔵 Trivial

Pre-existing mutable default argument created_ports: list = [].

This is a well-known Python pitfall — mutable defaults are shared across calls. Since the list is only iterated (not mutated), it doesn't cause a bug here, but it's still a footgun.

♻️ Safer default
-    def log_device_ports_created(self, created_ports: list = [], port_type: str = "port"):
+    def log_device_ports_created(self, created_ports: list = None, port_type: str = "port"):
+        if created_ports is None:
+            created_ports = []

Same for log_module_ports_created on line 95.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@log_handler.py` around lines 86 - 93, The functions log_device_ports_created
and log_module_ports_created use a mutable default argument (created_ports: list
= []), which is unsafe; change their signatures to default created_ports to None
and inside the function set created_ports = created_ports or [] (or if
created_ports is None: created_ports = []), then proceed to iterate and return
len(created_ports); this removes the shared mutable default while preserving
current behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@log_handler.py`:
- Around line 62-69: The flush in end_progress_group currently prints deferred
messages with raw print() and should route through the configured console like
_emit does; update end_progress_group to iterate self._deferred_messages and for
each message call the same console-aware emission (i.e., if self._console is set
and has a print method call self._console.print(message) else call built-in
print(message)), then clear self._deferred_messages; reference
end_progress_group, _deferred_messages, _console, and _emit to mirror the
existing console-routing pattern used by log()/verbose_log().
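The console-aware flush could look like the following self-contained sketch; class and attribute names mirror the comment but are reconstructed, not copied from log_handler.py:

```python
class DeferredLogger:
    """Minimal model of the deferral plumbing described above."""

    def __init__(self):
        self._console = None
        self._deferred_messages = []

    def set_console(self, console):
        self._console = console

    def _emit(self, message):
        # Route through the configured console when one is set.
        if self._console is not None and hasattr(self._console, "print"):
            self._console.print(message)
        else:
            print(message)

    def end_progress_group(self):
        # Flush buffered messages through the same path as live logging.
        for message in self._deferred_messages:
            self._emit(message)
        self._deferred_messages.clear()
```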

In `@nb-dt-import.py`:
- Around line 33-59: The render method in ItemsPerSecondColumn duplicates the
fallback speed computation for finished and running tasks; refactor by
extracting the shared logic into a private helper like
_compute_fallback_speed(self, task, base_attr) or _get_effective_speed(self,
task, finished) that takes the task and a flag/attribute name and returns a
float or None, then call that helper from render to obtain speed (use
task.finished_speed for finished branch and task.speed for running branch) and
keep the existing Text formatting (return Text("- it/s") or Text(f"{speed:.1f}
it/s")) in render.

In `@tests/test_nb_dt_import.py`:
- Around line 32-53: Add a new unit test exercising the only_new=True branch of
log_run_mode: create a test named test_log_run_mode_reports_only_new_enabled
that loads the nb_dt_import module (or uses the nb_dt_import fixture),
constructs a MagicMock handle and SimpleNamespace args with only_new=True,
update=False, remove_components=False, calls nb_dt_import.log_run_mode(handle,
args), then asserts that one of the handle.log call messages contains
"--only-new enabled" to verify the only_new branch is logged; reference
log_run_mode and the only_new argument to locate the code path to test.
- Around line 90-102: Add two unit tests for has_missing_device_images to cover
the missing negative branches: 1) a test that passes change_report=None and
asserts False (e.g.,
test_has_missing_device_images_returns_false_for_none_report) and 2) a test that
passes a report whose modified_device_types only include non-image
property_changes and asserts False (e.g.,
test_has_missing_device_images_returns_false_when_no_image_changes); reference
the has_missing_device_images function in tests/test_nb_dt_import.py and use
SimpleNamespace to build the report objects like existing tests so they exercise
both additional return paths.
- Around line 204-218: Add a test that verifies ItemsPerSecondColumn returns
task.finished_speed when finished and finished_speed is not None: create a new
test (e.g., test_items_per_second_column_uses_finished_speed_when_available)
that instantiates nb_dt_import.ItemsPerSecondColumn(), calls column.render(...)
with a SimpleNamespace having finished=True, speed=None, finished_speed=5.0 (or
another numeric value), and asserts that str(rendered) == "5.0 it/s"; reference
ItemsPerSecondColumn.render and the finished_speed field to ensure the primary
fast-path is covered.
- Around line 169-184: The tests
test_items_per_second_column_handles_empty_speed and
test_items_per_second_column_renders_speed_value pass a superfluous
finished_speed attribute when creating SimpleNamespace instances for a
non-finished task; remove the finished_speed=None from those SimpleNamespace
constructors so they only include finished=False and speed (or no speed) when
calling ItemsPerSecondColumn().render (refer to ItemsPerSecondColumn.render and
the two test functions to locate the changes).
- Around line 134-137: The test currently asserts ordered list equality for the
selected device types, which couples the test to iteration order; change the
assertion in tests/test_nb_dt_import.py (the block comparing selected to the
list with entries for cisco/A and juniper/C) to an order-insensitive check
(e.g., compare as sets of serializable items, use assertCountEqual, or sort both
lists by a stable key such as (item["manufacturer"]["slug"], item["model"],
item["slug"]) before asserting equality) and apply the same change to the
analogous assertion in test_select_device_types_for_update_mode_scopes_to_new_and_modified
(lines ~163-166) so the tests validate membership only, not order.
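The sort-by-stable-key variant can be sketched with toy data (dict shapes assumed for illustration):

```python
selected = [
    {"manufacturer": {"slug": "juniper"}, "model": "C", "slug": "c"},
    {"manufacturer": {"slug": "cisco"}, "model": "A", "slug": "a"},
]
expected = [
    {"manufacturer": {"slug": "cisco"}, "model": "A", "slug": "a"},
    {"manufacturer": {"slug": "juniper"}, "model": "C", "slug": "c"},
]

def sort_key(item):
    # Stable composite key; comparing sorted lists ignores iteration order.
    return (item["manufacturer"]["slug"], item["model"], item["slug"])

assert sorted(selected, key=sort_key) == sorted(expected, key=sort_key)
```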
- Around line 7-12: The tests re-execute nb-dt-import.py each time because
load_nb_dt_import_module() calls spec.loader.exec_module without registering the
module; replace that pattern with a module-scoped pytest fixture (e.g., def
nb_dt_import(): scope="module") that loads the module once, registers it in
sys.modules under "nb_dt_import", and returns the module; update all tests to
accept the fixture parameter (nb_dt_import) and remove local calls to
load_nb_dt_import_module(), ensuring top-level imports/side-effects run only
once per test module.

In `@tests/test_netbox_api.py`:
- Around line 200-219: The test uses MagicMock(name=...) which only sets the
mock's display name, not a .name attribute; update the
test_fetch_global_endpoint_records_falls_back_to_all_iteration to assign an
actual .name on each mock (or use a lightweight object with a .name attribute)
so any future assertions that access item.name won't break — locate the list
construction (currently records = [MagicMock(name=f"item-{idx}") ...]) and set
each mock.name = f"item-{idx}" (or replace MagicMock with
types.SimpleNamespace(name=...)) before passing records into
DeviceTypes._fetch_global_endpoint_records.
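The `MagicMock(name=...)` gotcha and both suggested fixes in miniature:

```python
from unittest.mock import MagicMock
from types import SimpleNamespace

# name= in the constructor sets the mock's repr name, NOT a .name attribute:
broken = MagicMock(name="item-0")
assert not isinstance(broken.name, str)  # .name is itself a child mock

# Fix 1: assign .name after construction.
fixed = MagicMock()
fixed.name = "item-0"
assert fixed.name == "item-0"

# Fix 2: use a plain namespace object instead.
plain = SimpleNamespace(name="item-0")
assert plain.name == "item-0"
```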

---

Outside diff comments:
In `@log_handler.py`:
- Around line 86-93: The functions log_device_ports_created and
log_module_ports_created use a mutable default argument (created_ports: list =
[]), which is unsafe; change their signatures to default created_ports to None
and inside the function set created_ports = created_ports or [] (or if
created_ports is None: created_ports = []), then proceed to iterate and return
len(created_ports); this removes the shared mutable default while preserving
current behavior.
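A minimal sketch of the fix (standalone `log_ports_created` mirrors the shape of `log_device_ports_created`; the logging call is elided):

```python
def log_ports_created(created_ports=None):
    # Avoid `created_ports: list = []` — a mutable default is created once at
    # definition time and shared across calls, so mutations leak between calls.
    if created_ports is None:
        created_ports = []
    for port in created_ports:
        pass  # e.g. handle.verbose_log(f"Created port {port}")
    return len(created_ports)

assert log_ports_created() == 0
assert log_ports_created(["eth0", "eth1"]) == 2
```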

In `@netbox_api.py`:
- Around line 357-404: create_module_types currently always refetches image
attachments via module_type_existing_images causing duplicate API calls; change
create_module_types to accept an optional pre-fetched map (e.g.,
module_type_existing_images=None) and only call
self.netbox.extras.image_attachments.filter(object_type="dcim.moduletype") when
that argument is None; update callers (such as filter_actionable_module_types or
nb-dt-import.py) to pass their pre-fetched map, and ensure
_upload_module_type_images still consumes the same module_type_existing_images
structure for per-file deduplication.

…alls, test improvements

- Fix mutable default argument (list=[]) in log_device/module_ports_created
- Route end_progress_group through console-aware emission instead of raw print()
- Extract _effective_speed helper in ItemsPerSecondColumn to deduplicate logic
- Add _fetch_module_type_existing_images() and pass pre-fetched map from
  filter_actionable_module_types to create_module_types to avoid duplicate API call
- Convert load_nb_dt_import_module() to module-scoped pytest fixture
- Fix MagicMock(name=...) not setting .name attribute in test_netbox_api.py
- Make selection test assertions order-insensitive
- Remove superfluous finished_speed=None from non-finished test SimpleNamespaces
- Add tests: only_new log_run_mode, negative has_missing_device_images,
  finished_speed when available
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@log_handler.py`:
- Around line 68-69: The console.print calls in end_progress_group and _emit
currently use Rich's default markup=True which can misinterpret log content;
update both places (the console.print calls in the end_progress_group method and
the _emit method) to pass markup=False (i.e., console.print(message,
markup=False)) so log strings containing brackets or user-supplied content are
printed literally and won't raise rich.markup.MarkupError or produce garbled
output.

In `@nb-dt-import.py`:
- Around line 97-112: The generator iterator() can exit early and never call
progress.stop_task(task_id); wrap the iteration and counting logic in a
try/finally so progress.stop_task(task_id) (and the final progress.update when
total is None) always runs, and ensure the final on_step() is invoked in the
finally block; update the code paths around iterable, count, total, task_id,
progress.update and progress.stop_task to perform cleanup in the finally to
guarantee the task is stopped even if the consumer breaks out early.

In `@netbox_api.py`:
- Around line 333-346: filter_actionable_module_types currently triggers
_get_cached_or_fetch once per (module_type_id, endpoint) via the loop over
component_keys, causing many individual API calls; add a bulk-preload step at
the start of filter_actionable_module_types that, when there are existing module
type ids, collects all required (endpoint_attr, cache_name) pairs for module
components (use ENDPOINT_CACHE_MAP and component_keys), groups them by endpoint,
and calls a new or existing bulk preload helper (analogous to
_preload_global/_preload_scoped) to fetch all components for those module_type
ids and populate self.device_types.cached_components with keys like ("module",
id) before the per-module loop so _get_cached_or_fetch hits the cache instead of
issuing per-id filter() calls; update or add a helper method on device_types to
accept (endpoint, cache_name, list_of_module_ids) and fill the cache
accordingly.

---

Duplicate comments:
In `@nb-dt-import.py`:
- Line 455: The call is coercing an explicit empty dict into None by using
"module_type_existing_images or None", causing create_module_types to treat a
legitimately-empty result as "not provided" and re-fetch via
_fetch_module_type_existing_images; replace that expression with a direct
pass-through of the variable
(module_type_existing_images=module_type_existing_images) so create_module_types
can distinguish None (unset) from {} (checked, nothing found) and avoid the
redundant API call; check usages of filter_actionable_module_types,
create_module_types, and _fetch_module_type_existing_images to ensure no other
places perform the same coercion.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@nb-dt-import.py`:
- Around line 429-435: The module-type parsing branch calls
dtl_repo.parse_files(files, slugs=args.slugs) directly (producing module_types)
without progress tracking; modify the code to wrap the files iterable with the
same get_progress_wrapper used for device types (the helper used around
dtl_repo.get_devices/get_progress_wrapper) so parse_files is iterated with
progress updates—i.e., pass the progress-wrapped iterable of files into
dtl_repo.parse_files (keep the module_types variable and slugs=args.slugs
usage).
- Around line 300-319: Remove the redundant reassignment of preload_vendor_scope
inside the conditional: preload_vendor_scope is already initialized to None and
never changed, so delete the inner "preload_vendor_scope = None" and simplify
the call to netbox.device_types.start_component_preload by passing the existing
variable (or None literal if you prefer) directly; update any related usage
(e.g., cache_preload_job and the call to
netbox.device_types.start_component_preload) to reflect that
preload_vendor_scope is not reassigned and remove the unused local assignment to
avoid misleading code.
- Around line 307-327: Replace the unnecessary dict parse_hook_ref with a simple
local variable (e.g., parse_fn = None) and update on_parse_step to call
parse_fn() when not None; then assign parse_fn = pump_preload where currently
parse_hook_ref["fn"] = pump_preload. Specifically, change references to
parse_hook_ref and parse_hook_ref["fn"] in the on_parse_step closure and the
preload branch to use parse_fn, leaving the rest (get_progress_wrapper,
parse_progress, pump_preload,
netbox.device_types.start_component_preload/pump_preload) unchanged.

…ss_wrapper(progress, files, desc=Parsing Module Types) before passing to parse_files
2. Redundant preload_vendor_scope = None — removed the inner reassignment on line 317
3. parse_hook_ref dict → nonlocal — replaced with parse_fn = None and nonlocal parse_fn in the closure
4. Bulk preload for module components — added preload_module_type_components() on DeviceTypes that pre-populates the cache per endpoint, and called it from filter_actionable_module_types before the per-module loop
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 7

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@nb-dt-import.py`:
- Around line 307-312: Remove the unnecessary "nonlocal parse_fn" declaration
inside the on_parse_step function: on_parse_step only reads the parse_fn
variable defined in the enclosing scope (parse_fn = None) and never assigns to
it, so delete the nonlocal statement to avoid implying mutation while leaving
the enclosing parse_fn definition intact.
- Around line 196-209: The log message about --remove-components is being
emitted even when --only-new is active (where --remove-components is
inapplicable); update log_run_mode(handle, args) to avoid printing the
remove-components guidance when args.only_new is true—i.e., check args.only_new
first and either return or skip the remove-components branch, and only
evaluate/log the args.remove_components vs "use --remove-components with
--update" message when not args.only_new (still keep the existing messages for
args.update and args.only_new).

In `@netbox_api.py`:
- Around line 820-830: The loop that calls future_map[endpoint_name].result() in
_preload_global/_preload_scoped (and the other result() call sites) must
defensively catch exceptions so a single failed
_fetch_global_endpoint_records/_fetch_scoped_endpoint_records call doesn't abort
the entire preload; wrap the future.result() call in a try/except that catches
pynetbox.RequestError, concurrent.futures.CancelledError and a broad Exception
fallback, call self.handle.exception(...) (or self.handle.error(...) with
context) to log the failure, then either set records_by_endpoint[endpoint_name]
to an empty list or trigger a per-item retry/fallback fetch for that endpoint
before continuing to remove endpoint_name from pending and compute final_total
as currently done so the executor cleanup in finally still runs normally.
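The defensive `future.result()` pattern, sketched with a stand-in fetcher (`fetch` and the endpoint names are illustrative, not the real API):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(endpoint_name):
    # Stand-in for _fetch_global_endpoint_records; one endpoint fails.
    if endpoint_name == "console_ports":
        raise RuntimeError("HTTP 500")
    return [f"{endpoint_name}-record"]

records_by_endpoint = {}
with ThreadPoolExecutor(max_workers=2) as executor:
    future_map = {name: executor.submit(fetch, name)
                  for name in ("interfaces", "console_ports")}
    for name, future in future_map.items():
        try:
            records_by_endpoint[name] = future.result()
        except Exception as exc:
            # Log and fall back to an empty list so one failed endpoint
            # doesn't abort the entire preload.
            print(f"Preload failed for {name}: {exc}")
            records_by_endpoint[name] = []

assert records_by_endpoint["interfaces"] == ["interfaces-record"]
assert records_by_endpoint["console_ports"] == []
```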
- Around line 839-855: When an update is dequeued from progress_updates but
endpoint_name not in pending, re-enqueue the tuple instead of dropping it so the
advance isn't lost; modify the loop handling progress_updates (the block using
progress_updates.get(timeout=0.1) that checks endpoint_name, advance, pending,
and task_ids) to call progress_updates.put((endpoint_name, advance)) before
continuing when the endpoint is no longer in pending, then proceed as before to
update progress for task_id if present.
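The re-enqueue behaviour can be demonstrated with a bare `queue.Queue` (the endpoint names and counts are illustrative):

```python
import queue

progress_updates = queue.Queue()
pending = {"interfaces"}  # "console_ports" has already left the pending set
progress_updates.put(("console_ports", 2))
progress_updates.put(("interfaces", 5))

applied = {}
for _ in range(2):
    endpoint_name, advance = progress_updates.get(timeout=0.1)
    if endpoint_name not in pending:
        # Re-enqueue instead of dropping so the advance isn't lost.
        progress_updates.put((endpoint_name, advance))
        continue
    applied[endpoint_name] = applied.get(endpoint_name, 0) + advance

assert applied == {"interfaces": 5}
assert progress_updates.get_nowait() == ("console_ports", 2)  # preserved
```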
- Around line 322-360: The code calls _find_existing_module_type twice per
module_type; build a single-pass mapping from each module_type (use its unique
identifier like module_type.get("slug") or the module_type object itself) to the
found existing_module and populate existing_module_ids from that mapping, then
reuse the mapping in the main loop to look up existing_module instead of calling
_find_existing_module_type again; ensure you still call
device_types.preload_module_type_components(existing_module_ids, component_keys)
after the first pass and then reference the stored existing_module entries when
evaluating module_type_existing_images, component checks (variables:
_find_existing_module_type, existing_module_ids, module_types,
actionable_module_types, module_type_existing_images,
device_types.preload_module_type_components).
- Around line 906-924: The progress task creation can leave tasks running for
endpoints absent from futures: either only create task_ids for endpoints present
in futures or ensure you stop any leftover tasks after handling future_map;
update the code in the block that creates task_ids/components/future_map so
task_ids is built only for endpoint_name values that exist in futures (check
membership when constructing task_ids) or, if you prefer minimal change, after
the for future in as_completed(...) loop iterate the keys of task_ids that were
not written to records_by_endpoint and call progress.update(task_ids[name],
completed=1) followed by progress.stop_task(task_ids[name]) to guarantee every
started task is completed and stopped (referencing task_ids, components,
futures, future_map, progress.update, and progress.stop_task).
- Around line 1096-1125: The preload_module_type_components implementation
(function preload_module_type_components) currently calls endpoint.filter(...)
inside a loop over each module type ID causing N calls; change it to call
endpoint.filter(...) once per endpoint with module_type_id set to the list
module_type_ids (e.g., filter(module_type_id=module_type_ids)) and then
distribute the returned items into the per-module-type cache entries (use the
module type id from each item — e.g., item.module_type.id or item.module_type_id
— to pick the right cache[("module", mid)] and set cache[("module",
mid)][item.name]=item). Also update the function docstring to say "one filter()
call per endpoint (filtering by module_type_id=[...])" instead of the current
incorrect statement.
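The one-call-per-endpoint grouping can be sketched with a fake endpoint (`fake_filter` simulates `endpoint.filter(module_type_id=[...])`; record shapes are assumptions):

```python
from types import SimpleNamespace

def fake_filter(module_type_id):
    # One simulated API call returning records for ALL requested IDs.
    data = [
        SimpleNamespace(name="eth0", module_type=SimpleNamespace(id=1)),
        SimpleNamespace(name="eth1", module_type=SimpleNamespace(id=1)),
        SimpleNamespace(name="con0", module_type=SimpleNamespace(id=2)),
    ]
    return [item for item in data if item.module_type.id in module_type_id]

cache = {}
module_type_ids = [1, 2]
for mid in module_type_ids:
    cache.setdefault(("module", mid), {})  # pre-populate empty buckets
for item in fake_filter(module_type_id=module_type_ids):
    # Distribute each record into its module type's bucket — one call total.
    cache[("module", item.module_type.id)][item.name] = item

assert set(cache[("module", 1)]) == {"eth0", "eth1"}
assert set(cache[("module", 2)]) == {"con0"}
```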

---

Duplicate comments:
In `@nb-dt-import.py`:
- Line 301: The top-level variable preload_vendor_scope is never mutated and can
be removed; delete the assignment preload_vendor_scope = None and replace its
two usages (the references to preload_vendor_scope in the code that currently
read that variable) with direct None literals. Verify there are no other
references to preload_vendor_scope (search for that symbol) and run
tests/linters to ensure no name errors after removing it.

In `@netbox_api.py`:
- Line 220: The verbose log in the cached-with-no-changes path currently emits
only "Device Type Cached: {manufacturer} - {model} - {id}", which is ambiguous;
update the verbose_log call in the same code block that uses dt (the Device Type
object referenced as dt.manufacturer.name / dt.model / dt.id) to include the
reason this path was taken (e.g., "Skipping updates (images already handled)" or
"cached (no changes)") so it matches the intent of the only_new path and clearly
indicates whether images/components were skipped.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@nb-dt-import.py`:
- Line 398: The get_devices() implementation currently concatenates paths and
will break when base paths lack trailing slashes; change get_devices() (in
repo.py) to build glob patterns with os.path.join(base_path, folder,
f"*.{extension}") (and import os if needed), and then update callers in
nb-dt-import.py to call dtl_repo.get_devices(dtl_repo.get_devices_path(),
args.vendors) (and the analogous call to use dtl_repo.get_modules_path() where
modules are loaded) so the new helper methods are used instead of manual string
concatenation.

---

Duplicate comments:
In `@nb-dt-import.py`:
- Around line 399-411: Remove the dead variable preload_vendor_scope (it’s set
to None and never mutated) and pass None directly where vendor_slugs is used
instead; also remove the unnecessary nonlocal parse_fn declaration inside the
on_parse_step function since parse_fn is only read there (keep the parse_fn
closure usage as-is). Locate the preload_vendor_scope declaration and the
vendor_slugs=preload_vendor_scope call sites and update them to use None, and
locate the on_parse_step definition to delete the redundant nonlocal parse_fn
line.
- Around line 302-307: The current unconditional logging about
--remove-components is confusing when args.only_new is set; update the block
around args.remove_components and handle.log so it only runs when not
args.only_new. Specifically, wrap the existing if args.remove_components / else
handle.log(...) block in a check like if not args.only_new: ... (or replace the
else message with a short note that --remove-components is irrelevant when
args.only_new is active) so the hint about using --remove-components with
--update is not shown in --only-new mode; refer to the args.remove_components
and args.only_new flags and the handle.log calls to locate and change the code.

In `@netbox_api.py`:
- Around line 404-413: In filter_actionable_module_types you call
_find_existing_module_type twice per module (once to collect existing_module_ids
and again to filter), causing redundant lookups; change the first loop to store
the found result (map module_type -> existing_module) while building
existing_module_ids, call device_types.preload_module_type_components with
existing_module_ids, and then reuse the cached existing_module objects in the
subsequent filtering loop instead of calling _find_existing_module_type again;
keep references to module_types, all_module_types, existing_module_ids, and
device_types.preload_module_type_components to locate where to implement the
cache.
- Around line 993-995: The future.result() calls in _preload_global and
_preload_scoped (which consume futures created for
_fetch_global_endpoint_records and _fetch_scoped_endpoint_records) lack
exception handling so any pynetbox.RequestError or network exception bubbles
out; wrap each future_map[endpoint_name].result() call in a try/except Exception
as e block, call self.handle.exception(...) with a clear message including
endpoint_name and the exception, and set a safe fallback into
records_by_endpoint[endpoint_name] (e.g., an empty list or dict consistent with
callers) so the import continues; keep the existing finally that shuts down the
executor unchanged.
- Around line 1080-1093: preload_all_components currently calls future.result()
for futures from future_map without handling exceptions and builds task_ids for
all components which can leave progress tasks dangling; change
preload_all_components (and ensure consistency with start_component_preload) to:
construct task_ids only for the same endpoint set used to build
future_map/futures, iterate as_completed and for each future call
future.result() inside a try/except to catch and record/log exceptions (e.g.,
set records_by_endpoint[endpoint_name] to an empty list or error marker), and
ensure progress.update and progress.stop_task are always called in a finally
block so every started task is stopped even on error.
- Around line 1335-1344: preload_module_type_components currently calls
endpoint.filter() once per module_type_id causing M×K API calls; change it to
call endpoint.filter() once per endpoint with module_type_id set to the full
module_type_ids list (i.e., batch the IDs into a single filter call via
endpoint.filter(module_type_id=module_type_ids, ...)), then iterate the returned
items and place each item into cached_components using the correct key (use the
item's module_type identifier or module_type.id/module_type_id) so cache entries
remain grouped by ( "module", mid ); update use of _get_filter_kwargs or build
per-endpoint filter kwargs to accept the list of IDs before calling
endpoint.filter to avoid per-ID calls.
- Around line 1008-1018: The loop is discarding queued progress advances when
endpoint_name is no longer in pending, causing under-reported totals; instead of
dropping the dequeued (endpoint_name, advance) on the `if endpoint_name not in
pending: continue` path, still look up the corresponding task id from `task_ids`
and call `progress.update(task_id, advance=advance)` when a `task_id` exists. In
other words, in the `if pending and not had_updates:` block where you call
`progress_updates.get(timeout=0.1)`, remove the early `continue` and ensure you
call `task_id = task_ids.get(endpoint_name)` and `progress.update(...)` whenever
`task_id is not None` (keeping the existing `queue.Empty` handling).

@marcinpsk
Owner Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Feb 19, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@netbox_api.py`:
- Around line 1004-1008: The except tuple currently lists pynetbox.RequestError
and concurrent.futures.CancelledError alongside Exception which is redundant;
update the try/except in the block that assigns
records_by_endpoint[endpoint_name] = future_map[endpoint_name].result() (and the
other similar blocks) to either catch the specific exceptions only (e.g., except
(pynetbox.RequestError, concurrent.futures.CancelledError) as exc:) or, if you
want a catch‑all, use only except Exception as exc:, then call
self.handle.log(f"Preload failed for {endpoint_name}: {exc}") and set
records_by_endpoint[endpoint_name] = [] as before.
- Around line 744-782: The setup after creating the ThreadPoolExecutor can raise
(e.g., in _get_endpoint_totals, progress.add_task, or executor.submit) and
currently isn't fully protected; extend the existing try/except that surrounds
_get_endpoint_totals to cover the entire setup block that builds
progress_updates, task_ids, update_progress, futures, and the return payload so
that any exception calls executor.shutdown(wait=False, cancel_futures=True) and
then re-raises; specifically, wrap the code that computes endpoint_totals,
populates task_ids via progress.add_task, defines update_progress, and
constructs futures via executor.submit in a single try/except and ensure the
except handler references the same executor variable to shut it down before
re-raising.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@netbox_api.py`:
- Around line 1180-1192: The current assignment
self.cached_components[endpoint_name] = cache in _preload_scoped (and similarly
in _preload_global) overwrites any previously cached entries for that endpoint
and can drop entries when preloading is called incrementally; change this to
merge rather than replace by retrieving the existing dict (e.g. via
self.cached_components.setdefault(endpoint_name, {})) and updating it with the
newly built cache, and add a brief comment on _preload_scoped/_preload_global
explaining why we merge to preserve prior preloads.
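The merge-don't-replace pattern in miniature (cache key shapes are illustrative):

```python
cached_components = {"interfaces": {("dt", 1): {"eth0": "rec"}}}

new_cache = {("dt", 2): {"eth0": "rec2"}}

# Merge instead of replace: a plain assignment
# `cached_components["interfaces"] = new_cache` would drop the ("dt", 1)
# entries preloaded by an earlier pass.
cached_components.setdefault("interfaces", {}).update(new_cache)

assert ("dt", 1) in cached_components["interfaces"]
assert ("dt", 2) in cached_components["interfaces"]
```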
- Around line 726-742: The vendor_slugs branch in start_component_preload
creates the ThreadPoolExecutor (executor) and then submits tasks to
_fetch_scoped_endpoint_records inside a dict comprehension without protection,
risking an executor/future leak if executor.submit raises; fix by wrapping
executor creation and the submission loop in the same try/except/finally (or try
to build futures incrementally and use a finally to shutdown executor on any
exception) so that on error you call executor.shutdown(wait=False) (or the same
shutdown semantics used in the non-vendor path) before re-raising; mirror the
cleanup behavior used in the non-vendor branch to ensure no leaked executors or
partially-submitted futures.

---

Duplicate comments:
In `@netbox_api.py`:
- Around line 1004-1008: The code previously caught a redundant exception tuple;
update the exception handler around future_map[endpoint_name].result() in the
try/except that populates records_by_endpoint to use a single broad Exception
(i.e., "except Exception as exc:") and keep the existing
self.handle.log(f"Preload failed for {endpoint_name}: {exc}") and
records_by_endpoint[endpoint_name] = [] behavior; target the block that
references records_by_endpoint, future_map, endpoint_name, and self.handle.log
to make this change.

@marcinpsk
Owner Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Feb 19, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@netbox_api.py`:
- Around line 1380-1391: The call to endpoint.filter(**{filter_key: id_list})
can create very long URLs; modify preload_module_type_components (and similarly
_fetch_scoped_endpoint_records) to iterate id_list in fixed-size chunks
(introduce FILTER_CHUNK_SIZE, e.g. 200) and call endpoint.filter for each chunk
rather than the whole list at once; add a small helper (_chunked or use
itertools.islice) to yield slices of id_list, keep the pre-population of
cache.setdefault(("module", mid), {}) based on the full id_list, and then
populate cache entries from each chunked endpoint.filter call.
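The chunking idea, with a fake filter recording each simulated request (`FILTER_CHUNK_SIZE` and the helper name are suggestions from the review, not existing code):

```python
FILTER_CHUNK_SIZE = 200  # illustrative cap to keep URLs short

def chunked(seq, size=FILTER_CHUNK_SIZE):
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

calls = []
def fake_filter(module_type_id):
    calls.append(list(module_type_id))  # record one simulated API call
    return [f"rec-{mid}" for mid in module_type_id]

id_list = list(range(450))
records = []
for chunk in chunked(id_list):
    records.extend(fake_filter(module_type_id=chunk))

assert len(calls) == 3      # 200 + 200 + 50
assert len(records) == 450  # nothing lost by chunking
```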
- Around line 1240-1260: _fetch_scoped_endpoint_records currently issues one
endpoint.filter() per dt_id; change it to call endpoint.filter() once with
device_type_id (or devicetype_id for older NetBox) set to the full dt_ids list
(i.e. use _get_filter_kwargs to produce a list-valued filter), then iterate the
returned records and group them into records_by_dt using each record's
device_type.id (similar to how preload_module_type_components groups by
module_type.id); call progress_callback(endpoint_name, len(dt_ids)) once per
endpoint instead of per-ID; finally update the _preload_scoped docstring to
state the progress_callback signature is (endpoint_name, advance) where advance
is the batch size.

---

Duplicate comments:
In `@netbox_api.py`:
- Line 264: Update the verbose log call in the cached branch (the
self.handle.verbose_log that currently prints dt.manufacturer.name, dt.model,
dt.id) to be as descriptive as the only_new path’s message ("Skipping updates
(images already handled).") — include context that the device type is cached,
why no further action will be taken, and what will happen next (e.g., "Skipping
image update (already handled)"). Locate the log at the Device Type cache
handling (referencing dt.manufacturer.name, dt.model, dt.id) and replace the
terse message with one that combines the device brand/model/id and a clear
explanation of the skip/next-step.

…_fetch_scoped_endpoint_records, improve cached device-type log message
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@netbox_api.py`:
- Around line 274-277: The verbose log currently always says "No pending
updates; skipping component creation." even when a dt_change was applied; update
the logging in the block that calls self.handle.verbose_log to inspect dt_change
(or a local flag tracking applied changes) and emit a different message when
changes were applied (e.g., "changes applied" with brief dt_change info) versus
when truly no updates occurred (keep the existing "No pending updates; skipping
component creation."). Ensure you reference dt.manufacturer.name, dt.model,
dt.id and include dt_change or the flag in the message so the log accurately
reflects which path was taken.
- Around line 20-23: The _chunked function currently assumes a sliceable
Sequence by calling len(seq) and seq[i : i + size]; update it to either document
that requirement or make it accept any iterable by reimplementing using
itertools.islice (referencing _chunked and itertools.islice) — if choosing
documentation, change the _chunked docstring to state it requires a
Sequence/Sized and sliceable object; if choosing the iterable variant, rewrite
_chunked to accept any iterable and yield size-length chunks using
itertools.islice so callers no longer need to provide lists.
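The iterable-friendly variant described above, sketched with `itertools.islice` (helper name is illustrative):

```python
from itertools import islice

def chunked_iter(iterable, size):
    # Accepts any iterable (generators included): no len(), no slicing needed.
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

gen = (n * n for n in range(7))  # a generator — unsliceable, unsized
assert list(chunked_iter(gen, 3)) == [[0, 1, 4], [9, 16, 25], [36]]
```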
- Around line 218-229: The device-type image path is built by doing a blind
string replace on os.path.dirname(src_file) which can replace substrings inside
other path components; in create_device_types where saved_images and image paths
are derived (and where device_types.upload_images is later called), replace that
string-replace logic with the same path-component-aware approach used in
_discover_module_image_files: add a small helper that converts src_file to a
pathlib.Path, finds the path part named "device-types" and substitutes that
single path component with "elevation-images" (or builds the new path by walking
parts and swapping that segment), then use that resulting Path for image_base
and saved_images; update the image discovery for both existing-device branch
(where dt and saved_images are used) and the new-device branch to use this
helper so uploads aren’t silently skipped.
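The path-component-aware swap can be sketched as a small helper (`swap_path_component` is a hypothetical name; the real code operates on the YAML source path):

```python
from pathlib import Path

def swap_path_component(src_file, old="device-types", new="elevation-images"):
    # Replace exactly one path component, unlike a blind str.replace which
    # could also hit substrings inside unrelated path components.
    parts = list(Path(src_file).parent.parts)
    try:
        # Find the LAST occurrence of the target component.
        idx = len(parts) - 1 - parts[::-1].index(old)
    except ValueError:
        return None  # component absent; caller skips image discovery
    parts[idx] = new
    return Path(*parts)

p = swap_path_component("repo/device-types/cisco/c9300.yaml")
assert p == Path("repo/elevation-images/cisco")
```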

---

Duplicate comments:
In `@netbox_api.py`:
- Around line 839-854: The _apply_progress_updates loop currently drops
progress_updates entries for endpoints that have no pending task
(allowed_endpoints check and missing task_id) which is inconsistent with the
slow-paths in _preload_global/_preload_scoped that re-enqueue non-pending items;
update _apply_progress_updates (the while loop reading progress_updates and the
subsequent for loop using task_ids and progress.update) to match the slow-path
behavior by re-enqueuing items for endpoints without a pending task_id back onto
progress_updates (or, if you prefer the drop behavior, add a clear comment
explaining the intentional silent drop and why completed tasks are safe to
ignore) so the intent is explicit and behavior is consistent across code paths.

…tiated update/cached log, and drop-comment in _apply_progress_updates
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
netbox_api.py (2)

99-109: ⚠️ Potential issue | 🟡 Minor

Guard version_split[1] access against a single-component version string.

self.netbox.version.split(".") typically produces ["X", "Y"], but an unexpected value (e.g., "3" from a nonstandard NetBox build) would yield a single-element list, causing IndexError at lines 103 and 107.

🛡️ Proposed defensive fix
         version_split = [int(x) for x in self.netbox.version.split(".")]
+        if len(version_split) < 2:
+            self.handle.log(f"Unexpected NetBox version format: {self.netbox.version!r}. Feature flags unchanged.")
+            return

         if version_split[0] > 3 or (version_split[0] == 3 and version_split[1] >= 2):
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@netbox_api.py` around lines 99 - 109, Guard against single-component version
strings when splitting self.netbox.version: ensure version_split has at least
two numeric components before indexing version_split[1] (used in the checks that
set self.modules and self.new_filters and call self.handle.log). Modify the
logic around version_split (and the conditions that set self.modules and
self.new_filters) to either validate/normalize the parsed parts (e.g., pad
missing components as 0 or convert to a tuple of (major, minor) safely) or
explicitly check len(version_split) >= 2 before comparing version_split[1], so
the comparisons for the 3.2 and 4.1 checks cannot raise IndexError.
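The defensive parsing described above can be sketched as a small helper. This is illustrative only: the function name, the leading-digit suffix handling, and the zero-padding behaviour are assumptions, not the repository's actual code.

```python
import re

def parse_netbox_version(raw: str) -> tuple:
    """Parse a NetBox version string into a (major, minor) tuple.

    Tolerates short strings ("3" -> (3, 0)) and non-numeric suffixes
    ("4.1-beta" -> (4, 1)) by keeping only the leading digits of each
    dot-separated component and padding missing components with 0.
    """
    parts = []
    for component in raw.strip().split("."):
        match = re.match(r"\d+", component)
        if match:
            parts.append(int(match.group()))
    # Pad so that indexing [0] and [1] is always safe.
    while len(parts) < 2:
        parts.append(0)
    return parts[0], parts[1]
```

With a tuple in hand, the 3.2 and 4.1 feature checks become single comparisons, e.g. `parse_netbox_version(self.netbox.version) >= (3, 2)`, with no risk of IndexError.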

193-210: ⚠️ Potential issue | 🟡 Minor

except ValueError fallback may emit a misleading "Error locating image file" log.

When "device-types" is absent from the source path, the fallback sets image_base = os.path.dirname(src_file) (the YAML's own directory). If the YAML has front_image: true, glob finds nothing there and line 209 logs "Error locating image file using '...'", which implies a missing image file rather than an unresolvable image directory.

_discover_module_image_files (line 608) handles the same ValueError by returning [] silently, which is the more informative outcome for operators (no spurious error).

🛡️ Proposed fix — align with module-type behaviour
             try:
                 _idx = len(_parts) - 1 - _parts[::-1].index("device-types")
                 _parts[_idx] = "elevation-images"
                 image_base = str(Path(*_parts))
             except ValueError:
-                image_base = os.path.dirname(src_file)
+                # Cannot derive image directory — "device-types" not in path.
+                # Skip image discovery silently (mirrors _discover_module_image_files behaviour).
+                for i in ["front_image", "rear_image"]:
+                    device_type.pop(i, None)
+                image_base = None
+
             for i in ["front_image", "rear_image"]:

Alternatively, extract this logic into a _discover_device_image_files helper (as suggested in the previous review) that returns {} on ValueError, consistent with _discover_module_image_files.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@netbox_api.py` around lines 193 - 210, The current except ValueError fallback
sets image_base to the YAML dir and then proceeds to glob, producing misleading
"Error locating image file" logs; change the behavior to match
_discover_module_image_files by not attempting the glob when "device-types" is
absent: in the block around _src/_parts/image_base, catch ValueError and set
image_base = None (or return an empty dict) and skip the for-loop glob logic
that builds image_glob/saved_images (or call a new helper
_discover_device_image_files that returns {} on ValueError), ensuring no error
log is emitted when the directory cannot be resolved.
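The suggested helper could look roughly like this. The name and return convention are assumptions; the real code manipulates `_parts` and `image_base` inline.

```python
from pathlib import Path
from typing import Optional

def derive_elevation_image_base(src_file: str) -> Optional[str]:
    """Map a device-type YAML path to its elevation-images directory.

    Returns None when the path has no "device-types" component, so the
    caller can skip image discovery silently instead of globbing the
    YAML's own directory and logging a spurious error.
    """
    parts = list(Path(src_file).parent.parts)
    try:
        # Replace the *last* "device-types" component, mirroring the
        # reversed-index trick in the original code.
        idx = len(parts) - 1 - parts[::-1].index("device-types")
    except ValueError:
        return None
    parts[idx] = "elevation-images"
    return str(Path(*parts))
```

A `None` result maps directly onto the "return {} / skip discovery" behaviour of `_discover_module_image_files`.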
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@netbox_api.py`:
- Around line 1032-1061: In _preload_global, pending is built from
future_map.keys() but does not exclude endpoints already marked finished by
pump_preload_progress (preload_job["finished_endpoints"]), causing
double-finalisation via progress.update/stop_task; fix by filtering pending to
remove any endpoint present in preload_job["finished_endpoints"] (or by removing
such endpoints from pending immediately after constructing future_map) so that
progress.update and progress.stop_task are only called for endpoints still
active; update references in _preload_global (and any callers such as
preload_all_components) to ensure task_ids and records_by_endpoint handling
remains consistent for skipped endpoints.


…e, exclude finished_endpoints from _preload_global pending

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
netbox_api.py (2)

213-232: 🧹 Nitpick | 🔵 Trivial

Using KeyError as control flow is a code smell.

The pattern of raise KeyError("Device type not found") at line 232 to jump to the except KeyError at line 306 mixes real key errors (e.g., from device_type["manufacturer"]) with intentional control flow. A dict.get() returning None could instead be handled with an if dt is None: <create logic> / else: <update logic> pattern, making the flow explicit and avoiding accidentally swallowing real KeyErrors from nested dict accesses.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@netbox_api.py` around lines 213 - 232, The code currently raises
KeyError("Device type not found") in the device type lookup to drive control
flow; replace that with explicit conditional handling: after trying primary
lookup via device_types.existing_device_types[(manufacturer_slug,
device_type["model"])] and the slug fallback via
device_types.existing_device_types_by_slug, check if dt is None and call the
"create" branch (or return a not-found result) there, otherwise proceed with the
"update" branch; remove the artificial raise KeyError and adjust the
corresponding except block (in the surrounding method) so it no longer relies on
catching KeyError for normal flow—only let real KeyError from dict accesses
surface or catch more specific exceptions, and use dict.get(...) or safe
accesses for device_type["manufacturer"] to avoid accidental KeyError masking.
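An explicit-branch version of the lookup might look like this. The function and argument names are illustrative; the real code works with `device_types.existing_device_types` and `existing_device_types_by_slug` inline.

```python
def resolve_device_type(existing_by_key, existing_by_slug,
                        manufacturer_slug, device_type):
    """Return ("update", dt) or ("create", None) without KeyError control flow.

    dict.get() makes the not-found case an explicit branch, so a real
    KeyError from a malformed device_type dict is no longer swallowed.
    """
    dt = existing_by_key.get((manufacturer_slug, device_type.get("model", "")))
    if dt is None:
        # Fall back to slug lookup before deciding to create.
        dt = existing_by_slug.get(device_type.get("slug", ""))
    if dt is None:
        return "create", None
    return "update", dt
```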

1473-1477: 🧹 Nitpick | 🔵 Trivial

_create_generic mutates caller-owned item dicts by injecting parent_key.

Line 1477 adds device_type or module_type to each dict in to_create, which are references to the original items list elements. If create_device_types/create_module_types iterates the same YAML dicts across multiple calls or inspects them afterward, the injected keys would persist. Currently safe because YAML dicts are consumed once, but fragile.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@netbox_api.py` around lines 1473 - 1477, The _create_generic function
currently mutates caller-owned dicts by setting parent_key on items in
to_create; avoid side-effects by creating shallow copies before adding the
parent_id. Locate _create_generic and the variables to_create, parent_key and
items, and change the logic so you build new dicts (e.g., via a shallow copy of
each item) and set new[parent_key] = parent_id on the copy, then use those
copies for further processing instead of modifying the original items.
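A minimal sketch of the copy-before-inject approach (the helper name is assumed):

```python
def attach_parent_id(items, parent_key, parent_id):
    """Build create payloads with the parent reference injected into
    shallow copies, leaving the caller's original dicts untouched."""
    return [{**item, parent_key: parent_id} for item in items]
```

Because each payload is a fresh dict, re-iterating or inspecting the original YAML items after the call sees no injected `device_type`/`module_type` keys.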
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@netbox_api.py`:
- Line 38: The docstring claims a "Singleton-style" interface but __new__
currently just calls super().__new__ causing new instances each time; either
remove "Singleton-style" from the module/class docstring or implement real
singleton behavior by changing NetBoxAPI.__new__ to cache and return a single
instance (e.g., use a class attribute like NetBoxAPI._instance, create and store
it on first call, and return it on subsequent calls) while preserving existing
initialization logic.
- Around line 1033-1037: When preload_job contains finished_endpoints we
currently drop their futures and never collect results; update _preload_global
to first extract finished = preload_job.get("finished_endpoints", set()) and for
each endpoint in finished that exists in future_map await/collect its future
result into records_by_endpoint (and update any stats) before subtracting
finished from pending so we still avoid double stop_task calls; in short, ensure
you eagerly gather results for endpoints in preload_job["finished_endpoints"]
from future_map (using the same result-handling logic used later in the pending
loop) prior to entering the progress-wait loop to avoid losing preloaded data
from those endpoints.


…ts results, replace KeyError flow-control with explicit branching, shallow-copy items in _create_generic
@marcinpsk
Owner Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Feb 20, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
netbox_api.py (1)

1502-1513: ⚠️ Potential issue | 🟡 Minor

Index-based error mapping assumes excep.error length matches to_create.

If pynetbox returns an error list shorter than to_create (e.g., the server truncated the response), to_create[i] will not exceed bounds since enumerate(excep.error) governs the loop. However, if the list is longer, to_create[i] raises IndexError.

This is a very unlikely edge case given pynetbox's 1:1 error mapping, but a bounds check would make it robust.

🛡️ Defensive bounds check
                 if isinstance(excep.error, list):
-                    for i, error in enumerate(excep.error):
+                    for i, error in enumerate(excep.error[:len(to_create)]):
                         if error:
                             item_name = to_create[i].get("name", "Unknown")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@netbox_api.py` around lines 1502 - 1513, The loop handling
pynetbox.RequestError can index past to_create when excep.error is longer than
to_create; in the except block for pynetbox.RequestError (referencing excep,
to_create, component_name, context_str and self.handle.log) add a defensive
bounds check before accessing to_create[i] (e.g., if i < len(to_create) then get
item_name = to_create[i].get("name", "Unknown") else set item_name = f"index
{i}" or use error-provided identifier) and log accordingly so you never raise
IndexError while preserving useful diagnostic text.
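One way to make the mapping robust is to pair errors with payloads via zip() and report any surplus errors by position. A sketch (function name and message format are assumptions, not the repository's code):

```python
def describe_batch_errors(errors, to_create):
    """Pair each per-item error with its payload, tolerating mismatches.

    zip() stops at the shorter sequence, so a server returning more
    errors than payloads can never trigger an IndexError; trailing
    errors without a payload are reported by index instead.
    """
    messages = []
    for error, item in zip(errors, to_create):
        if error:
            messages.append(f"{item.get('name', 'Unknown')}: {error}")
    # Surplus errors beyond the payload list, reported by position.
    for i in range(len(to_create), len(errors)):
        if errors[i]:
            messages.append(f"index {i}: {errors[i]}")
    return messages
```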
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@netbox_api.py`:
- Around line 99-100: The version parsing in netbox_api.py currently does int(x)
directly on components from self.netbox.version (used in the _raw and
version_split logic), which will raise ValueError for suffixes like "4.1-beta";
update the parsing to sanitize each component before int conversion (e.g., trim
whitespace and extract leading numeric portion or drop trailing non-digits) so
that converting to int is safe, preserve the existing padding behavior to
produce a (major, minor) tuple, and ensure this change is applied where _raw and
version_split are computed.
- Around line 250-259: The current code does a linear scan over
change_report.modified_device_types for each device_type (setting dt_change),
producing O(N×M) behavior; instead, before the main for device_type in iterator
loop, pre-index change_report.modified_device_types into a dict keyed by
(change.manufacturer_slug, change.model) when update and change_report are
truthy, then inside the loop replace the for-change scan with a single dict
lookup using (manufacturer_slug, device_type.get("model", "")) to set dt_change
(or None if not found); reference change_report.modified_device_types,
dt_change, update, manufacturer_slug, and device_type.get("model", "") when
making the change.
- Around line 1073-1084: The code currently re-enqueues a progress update when
endpoint_name not in pending, which is wasteful because the next loop will
immediately drop it; in the function handling progress updates (the block around
the current logic that reads from progress_updates.get), remove the
progress_updates.put((endpoint_name, advance)) call and simply ignore/drop the
update when endpoint_name not in pending so that only updates for endpoints in
pending are processed (use the existing task_ids lookup and
progress.update(task_id, advance=advance) path unchanged).
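The pre-indexing step suggested for the modified-device-types scan can be sketched as follows. Dict-shaped change records are an assumption here; the real report uses attribute access such as `change.manufacturer_slug`.

```python
def index_changes(modified_device_types):
    """Pre-index change records by (manufacturer_slug, model) so the
    per-device lookup inside the import loop is a single O(1) dict
    access instead of a linear scan per device type (O(N*M) overall)."""
    return {(c["manufacturer_slug"], c["model"]): c
            for c in modified_device_types}
```

Inside the loop, `dt_change = change_by_key.get((manufacturer_slug, device_type.get("model", "")))` then yields either the change record or `None`.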


…rop stale progress re-enqueue, guard _create_generic error handler bounds
@marcinpsk
Owner Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Feb 20, 2026

✅ Actions performed

Review triggered.



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
netbox_api.py (1)

208-219: ⚠️ Potential issue | 🟡 Minor

Silent discard of image flag when source path has no "device-types" component.

When ValueError is raised (line 204–208), image_base is set to None and a comment explains why. But in the loop below (line 212), when device_type[i] is True and image_base is None, the flag is silently deleted (line 219) with no log entry. A YAML with front_image: true whose source file lives outside the expected device-types/… layout will have its image silently skipped — operators will see no feedback.

The existing error path for glob-misses (line 218) already logs "Error locating image file using '…'". Apply the same principle here.

🛠️ Proposed fix
             for i in ["front_image", "rear_image"]:
                 if i in device_type:
                     if device_type[i] and image_base is not None:
                         image_glob = f"{image_base}/{device_type['slug']}.{i.split('_')[0]}.*"
                         images = glob.glob(image_glob, recursive=False)
                         if images:
                             saved_images[i] = images[0]
                         else:
                             self.handle.log(f"Error locating image file using '{image_glob}'")
+                    elif device_type[i] and image_base is None:
+                        self.handle.log(
+                            f"Cannot derive image path for '{i}' of {device_type.get('slug', '?')}: "
+                            f"source path '{src_file}' does not contain a 'device-types' component."
+                        )
                     del device_type[i]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@netbox_api.py` around lines 208 - 219, When ValueError sets image_base to
None, the code silently deletes true image flags; update the loop that iterates
over ["front_image","rear_image"] (the block referencing device_type,
image_base, image_glob, images, saved_images, and self.handle.log) to log a
clear message when device_type[i] is truthy but image_base is None before
deleting the flag (e.g., "Skipping image discovery for '{slug}' because source
path lacks 'device-types'"), then delete device_type[i] as before so operators
see why the image was skipped.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@netbox_api.py`:
- Around line 1214-1222: In _preload_scoped, the blocking get path currently
re-enqueues updates for endpoints no longer in pending
(progress_updates.put((endpoint_name, advance))) which is inconsistent with the
drain loop and _preload_global; change the behavior in the try/except block so
that when endpoint_name not in pending you simply drop the update and continue
(i.e., remove the progress_updates.put(...) call), still calling continue to
skip applying progress, so pending, task_ids and progress usage remains
unchanged and stale updates are not re-queued.
- Around line 1019-1042: start_component_preload can create new task_ids when
preload_job["task_ids"] is None, but the already_done branch assumes
pump_preload_progress stopped those tasks and therefore doesn't stop the
newly-created tasks—leaving them running; to fix, after creating task_ids (the
dict comprehension) and computing already_done from
preload_job.get("finished_endpoints"), call
progress.stop_task(task_ids[endpoint_name]) for each endpoint in already_done
(handling exceptions like in the pending loop), then collect results into
records_by_endpoint as before and remove those endpoints from pending; reference
symbols: start_component_preload, pump_preload_progress, preload_job, task_ids,
already_done, future_map, records_by_endpoint, pending.
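The drop-instead-of-requeue drain can be sketched with a plain queue.Queue, using a dict of counters in place of the Rich Progress tasks. All names here are illustrative.

```python
import queue

def drain_progress_updates(updates, pending, totals):
    """Apply queued (endpoint_name, advance) updates to `totals`,
    silently dropping updates for endpoints no longer in `pending`.

    `totals` stands in for the Rich Progress tasks; stale updates are
    discarded rather than re-enqueued, so the queue always drains."""
    while True:
        try:
            endpoint_name, advance = updates.get_nowait()
        except queue.Empty:
            break
        if endpoint_name not in pending:
            continue  # stale update for a finished endpoint: drop it
        totals[endpoint_name] = totals.get(endpoint_name, 0) + advance
```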


…y_done tasks in _preload_global, drop stale updates in _preload_scoped
@marcinpsk
Owner Author

@CodeRabbit review

@coderabbitai

coderabbitai bot commented Feb 20, 2026

✅ Actions performed

Review triggered.



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@netbox_api.py`:
- Around line 189-194: The pre-indexed change_by_key lookup is good, but the
code uses inconsistent checks for dt_change (once as truthiness `if dt_change:`
and elsewhere as identity `if dt_change is not None:`) which could cause
divergent behavior if a change object is ever falsy; update both checks to the
same style (prefer `if dt_change is not None:`) where you retrieve entries from
change_by_key (built from change_report.modified_device_types) and ensure the
.get(...) usage that fetches dt_change is followed by the normalized identity
check before applying or counting changes.

---

Duplicate comments:
In `@netbox_api.py`:
- Around line 1039-1050: For endpoints in already_done you stop the task without
marking it complete, causing a 0% visual state; before calling
progress.stop_task(task_ids[endpoint_name]) (inside the loop handling
already_done and using task_ids/progress), call the progress API to mark the
task completed (e.g. progress.update_task(task_ids[endpoint_name],
completed=total) or progress.update_task(...,
completed=progress.get_task_total(task_ids[endpoint_name])) — ensure you obtain
the correct total for that task and set completed to that value, then call
progress.stop_task; update the block that handles
already_done/records_by_endpoint to perform this update-before-stop.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@netbox_api.py`:
- Around line 1133-1135: The _preload_scoped docstring is outdated: it claims
"per-device-type API calls" but the implementation now batches NetBox filter()
calls per chunk; update the docstring for _preload_scoped to describe that it
deduplicates and sorts device_type_ids, then performs batched/chunked filter()
requests (not individual per-device-type calls), optionally using
progress_wrapper, preload_job, and progress to report progress. Mention that it
fetches component templates for the given device type IDs in chunks to reduce
API calls.
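The dedupe-sort-chunk step the docstring should describe can be sketched as follows (the chunk size and helper name are assumptions):

```python
def chunked_ids(device_type_ids, chunk_size=50):
    """Deduplicate and sort IDs, then yield fixed-size chunks suitable
    for batched NetBox filter(device_type_id=chunk) calls, replacing
    one API call per device type with one call per chunk."""
    unique = sorted(set(device_type_ids))
    for start in range(0, len(unique), chunk_size):
        yield unique[start:start + chunk_size]
```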

@marcinpsk marcinpsk merged commit 0fde990 into main Feb 20, 2026
6 checks passed
@marcinpsk marcinpsk deleted the module-images branch February 20, 2026 12:29