Add BottomUp model pipeline #52

Merged
merged 25 commits into from Jun 26, 2024
Conversation

gitttt-1234
Contributor

@gitttt-1234 gitttt-1234 commented May 21, 2024

This PR adds the data processing, model training, and inference pipelines for BottomUp models, where the nodes for all instances are predicted in one shot and the nodes belonging to a given instance are grouped using the inference.paf_grouping.PAFScorer module.

The inference modules are refactored, addressing #48.
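
For orientation, here is a rough usage sketch of how a BottomUp predictor might be driven end to end. The method names (from_trained_models, make_pipeline, predict) and arguments follow the flow described in the review summary below and are assumptions for illustration, not the confirmed API.

```python
# Hypothetical sketch of running BottomUp inference; the constructor and
# method names below are assumed for illustration and may differ from the
# actual sleap_nn.inference.predictors interface.
from sleap_nn.inference.predictors import BottomUpPredictor

# Load a trained BottomUp model from its checkpoint directory.
predictor = BottomUpPredictor.from_trained_models("models/bottomup_ckpt/")

# Build the data pipeline over a labels file and run inference.
predictor.make_pipeline(data_path="labels.v001.slp")
labeled_frames = predictor.predict()  # returns SLEAP-style labeled frames
```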

Summary by CodeRabbit

  • New Features

    • Introduced new predictor classes for running inference with different SLEAP-NN models: TopDownPredictor, SingleInstancePredictor, and BottomUpPredictor.
    • Added support for a new pipeline option "BottomUp" in the configuration.
  • Documentation

    • Updated configuration documentation to include new pipeline options and head configurations for the BottomUp model.

Contributor

coderabbitai bot commented May 21, 2024

Walkthrough

The changes introduce new predictor classes (TopDownPredictor, SingleInstancePredictor, BottomUpPredictor) for running inference on different SLEAP-NN models. These classes provide methods for model initialization, data pipeline creation, and converting results into SLEAP-specific data structures. Additionally, documentation updates include new configurations for the BottomUp model and other enhancements.

Changes

File/Path | Change Summary
sleap_nn/inference/predictors.py | Introduced new classes (Predictor, TopDownPredictor, SingleInstancePredictor, BottomUpPredictor) for model inference with related methods.
docs/config.md | Added support for new pipeline option "BottomUp" in data_config and new configurations for pafs_gen, PartAffinityFieldsHead, and CenteredInstanceConfmapsHead.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Predictor
    participant Model
    participant DataPipeline
    participant SLEAPData

    User ->> Predictor: Initialize from trained models
    Predictor ->> Model: Load model
    User ->> Predictor: Create data pipeline
    Predictor ->> DataPipeline: Initialize pipeline
    User ->> Predictor: Run inference
    Predictor ->> Model: Predict data
    Model ->> Predictor: Return predictions
    Predictor ->> SLEAPData: Convert to SLEAP format
    SLEAPData ->> User: Return labeled frames

Possibly related issues

Poem

In the land of code so bright,
Predictors take flight,
Models dance in the night,
Converting data with might.
BottomUp joins the fight,
SLEAP-NN shines in light,
CodeRabbit's delight! 🐇✨


@gitttt-1234 gitttt-1234 changed the base branch from main to divya/fix-model-pipeline May 21, 2024 17:29

codecov bot commented May 21, 2024

Codecov Report

Attention: Patch coverage is 96.32249% with 26 lines in your changes missing coverage. Please review.

Project coverage is 97.71%. Comparing base (f093ce2) to head (a4b9722).
Report is 3 commits behind head on main.

Files | Patch % | Lines
sleap_nn/inference/topdown.py | 92.34% | 14 Missing ⚠️
sleap_nn/inference/predictors.py | 96.62% | 12 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main      #52      +/-   ##
==========================================
+ Coverage   96.64%   97.71%   +1.07%     
==========================================
  Files          23       29       +6     
  Lines        1818     2672     +854     
==========================================
+ Hits         1757     2611     +854     
  Misses         61       61              

☔ View full report in Codecov by Sentry.

This was linked to issues May 23, 2024
@gitttt-1234 gitttt-1234 marked this pull request as ready for review May 24, 2024 00:52
@gitttt-1234 gitttt-1234 changed the base branch from divya/fix-model-pipeline to main May 24, 2024 01:21
@gitttt-1234 gitttt-1234 changed the base branch from main to divya/fix-model-pipeline May 24, 2024 01:22
@gitttt-1234 gitttt-1234 changed the base branch from divya/fix-model-pipeline to main May 24, 2024 01:32
@gitttt-1234 gitttt-1234 changed the base branch from main to divya/fix-model-pipeline May 24, 2024 01:32
@gitttt-1234 gitttt-1234 requested a review from talmo May 24, 2024 01:35
@gitttt-1234 gitttt-1234 changed the base branch from divya/fix-model-pipeline to main May 24, 2024 17:46
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 10

Outside diff range and nitpick comments (23)
tests/inference/test_single_instance.py (1)

1-1: Add a module-level docstring to describe the purpose and contents of this test file (a minimal docstring example is sketched after these nitpick comments).

tests/inference/test_bottomup.py (1)

1-1: Add a module-level docstring to describe the purpose and contents of this test file.

tests/data/test_confmaps.py (1)

Line range hint 1-1: Add a module-level docstring to describe the purpose and contents of this test file.

tests/inference/test_paf_grouping.py (18)

Line range hint 1-1: Add a module-level docstring to describe the purpose and scope of the tests in this file.


Line range hint 28-28: Add a docstring to test_get_connection_candidates to explain the purpose of the test and its parameters.


Line range hint 44-44: Add a docstring to test_make_line_subs to explain the purpose of the test and its parameters.


Line range hint 58-58: Add a docstring to test_get_paf_lines to explain the purpose of the test and its parameters.


Line range hint 78-78: Add a docstring to test_compute_distance_penalty to explain the purpose of the test and its parameters.


Line range hint 94-94: Add a docstring to test_score_paf_lines to explain the purpose of the test and its parameters.


Line range hint 117-117: Add a docstring to test_score_paf_lines_batch to explain the purpose of the test and its parameters.


Line range hint 153-153: Add a docstring to test_match_candidates_sample to explain the purpose of the test and its parameters.


Line range hint 179-179: Add a docstring to test_match_candidates_batch to explain the purpose of the test and its parameters.


Line range hint 205-205: Add a docstring to test_toposort_edges to explain the purpose of the test and its parameters.


Line range hint 245-245: Add a docstring to test_assign_connections_to_instances to explain the purpose of the test and its parameters.


Line range hint 339-339: Add a docstring to test_make_predicted_instances to explain the purpose of the test and its parameters.


Line range hint 367-367: Add a docstring to test_group_instances_sample to explain the purpose of the test and its parameters.


Line range hint 415-415: Add a docstring to test_group_instances_batch to explain the purpose of the test and its parameters.


Line range hint 476-476: Add a docstring to test_paf_scorer_from_config to explain the purpose of the test and its parameters.


Line range hint 487-487: Add a docstring to test_paf_scorer_score_paf_lines to explain the purpose of the test and its parameters.


Line range hint 531-531: Add a docstring to test_paf_scorer_match_candidates to explain the purpose of the test and its parameters.


Line range hint 569-569: Add a docstring to test_paf_scorer_group_instances to explain the purpose of the test and its parameters.

tests/data/test_pipelines.py (2)

Line range hint 1-1: Add a module-level docstring to describe the purpose and contents of this test module.


460-460: Ensure consistency in the configuration of conf_map_gen across different tests to avoid discrepancies in expected outcomes.

Also applies to: 525-525
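
Several of the nitpick comments above ask for module-level and function-level docstrings. A minimal example of the requested style is shown below; the module description and test name are placeholders, not quotes of the actual test files.

```python
"""Tests for the bottom-up inference modules.

This module covers peak grouping and PAF scoring behaviour for the
BottomUp pipeline.
"""


def test_get_connection_candidates():
    """Check that candidate connections are enumerated for each edge type."""
    ...
```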

Review Details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits: Files that changed from the base of the PR and between 06f2a63 and d4482c1.
Files selected for processing (27)
  • docs/config.md (6 hunks)
  • docs/config_bottomup.yaml (1 hunks)
  • docs/config_centroid.yaml (4 hunks)
  • sleap_nn/architectures/model.py (4 hunks)
  • sleap_nn/data/confidence_maps.py (6 hunks)
  • sleap_nn/data/edge_maps.py (1 hunks)
  • sleap_nn/data/instance_cropping.py (1 hunks)
  • sleap_nn/data/pipelines.py (2 hunks)
  • sleap_nn/data/providers.py (1 hunks)
  • sleap_nn/inference/bottomup.py (1 hunks)
  • sleap_nn/inference/paf_grouping.py (4 hunks)
  • sleap_nn/inference/predictors.py (15 hunks)
  • sleap_nn/inference/single_instance.py (1 hunks)
  • sleap_nn/inference/topdown.py (1 hunks)
  • sleap_nn/training/model_trainer.py (6 hunks)
  • tests/assets/minimal_instance_bottomup/initial_config.yaml (1 hunks)
  • tests/assets/minimal_instance_bottomup/training_config.yaml (1 hunks)
  • tests/data/test_confmaps.py (2 hunks)
  • tests/data/test_instance_cropping.py (1 hunks)
  • tests/data/test_pipelines.py (4 hunks)
  • tests/fixtures/datasets.py (1 hunks)
  • tests/inference/test_bottomup.py (1 hunks)
  • tests/inference/test_paf_grouping.py (1 hunks)
  • tests/inference/test_predictors.py (1 hunks)
  • tests/inference/test_single_instance.py (1 hunks)
  • tests/inference/test_topdown.py (1 hunks)
  • tests/training/test_model_trainer.py (3 hunks)
Files skipped from review due to trivial changes (1)
  • sleap_nn/data/edge_maps.py
Additional Context Used
LanguageTool (51)
docs/config.md (51)

Near line 6: Loose punctuation mark.
Context: ... four main sections: - 1. data_config: Creating a data pipeline. - 2. `model_...


Near line 8: Loose punctuation mark.
Context: ...ng a data pipeline. - 2. model_config: Initialise the sleap-nn backbone and he...


Near line 10: Loose punctuation mark.
Context: ... and head models. - 3. trainer_config: Hyperparameters required to train the m...


Near line 12: Loose punctuation mark.
Context: ...with Lightning. - 4. inference_config: Inference related configs. Note:...


Near line 16: Loose punctuation mark.
Context: ... for val_data_loader. - data_config: - provider: (str) Provider class...


Near line 19: Loose punctuation mark.
Context: ...psPipeline" or "BottomUp". - train: - labels_path: (str) Path to ...


Near line 21: Possible missing article found.
Context: ...he image has 3 channels (RGB image). If input has only one channel when this ...


Near line 22: Possible missing article found.
Context: ... is set to True, then the images from single-channel is replicated along the...


Near line 23: Possible missing article found.
Context: ...s replicated along the channel axis. If input has three channels and this is ...


Near line 24: Possible missing article found.
Context: ... to False, then we convert the image to grayscale (single-channel) image. ...


Near line 31: Loose punctuation mark.
Context: ...e same factor. - preprocessing: - anchor_ind: (int) Index...


Near line 32: Possible missing comma found.
Context: ...can significantly improve topdown model accuracy as they benefit from a consistent geome...


Near line 34: Possible missing comma found.
Context: ...space. Larger values are easier to learn but are less precise with respect to the pe...


Near line 34: ‘with respect to’ might be wordy. Consider a shorter alternative.
Context: ...re easier to learn but are less precise with respect to the peak coordinate. This spread is in ...


Near line 36: Loose punctuation mark.
Context: ...n. - augmentation_config: - random crop`: (Dict[...


Near line 64: Loose punctuation mark.
Context: ... to train structure) - model_config: - init_weight: (str) model weigh...


Near line 67: Loose punctuation mark.
Context: ...win_B_Weights"]. - backbone_config: - backbone_type: (str) Backbo...


Near line 78: This phrase might be redundant. Consider either removing or replacing the adjective ‘additional’.
Context: ... - middle_block: (bool) If True, add an additional block at the end of the encoder. default: Tru...


Near line 80: Possible missing comma found.
Context: ... for upsampling. Interpolation is faster but transposed convolutions may...


Near line 93: This phrase might be redundant. Consider either removing or replacing the adjective ‘additional’.
Context: ... - middle_block: (bool) If True, add an additional block at the end of the encoder. default: Tru...


Near line 95: Possible missing comma found.
Context: ... for upsampling. Interpolation is faster but transposed convolutions may...


Near line 121: Possible missing comma found.
Context: ... for upsampling. Interpolation is faster but transposed convolutions may...


Near line 125: Possible missing comma found.
Context: ... for upsampling. Interpolation is faster but transposed convolutions may...


Near line 130: Loose punctuation mark.
Context: .... Default: "tiny". - arch: Dictionary of embed dimension, depths a...


Near line 134: Loose punctuation mark.
Context: .... Default: "tiny". - arch: Dictionary of embed dimension, depths a...


Near line 147: Possible missing comma found.
Context: ... for upsampling. Interpolation is faster but transposed convolutions may...


Near line 150: Possible missing comma found.
Context: ...List[dict]) List of heads in the model. For eg, BottomUp model has both 'MultiInsta...


Near line 156: Possible missing comma found.
Context: ...can significantly improve topdown model accuracy as they benefit from a consistent geome...


Near line 157: Possible missing comma found.
Context: ...space. Larger values are easier to learn but are less precise with respect to the pe...


Near line 157: ‘with respect to’ might be wordy. Consider a shorter alternative.
Context: ...re easier to learn but are less precise with respect to the peak coordinate. This spread is in ...


Near line 160: Loose punctuation mark.
Context: ...ic output in multi-head models. ] - trainer_config: - `train_data...


Near line 168: Possible missing article found.
Context: ...he batch size. If False and the size of dataset is not divisible by the batch size, the...


Near line 172: Possible typo: you repeated a word
Context: ...ease note that the monitors are checked every every_n_epochs epochs. if save_top_k >= 2 and...


Near line 172: Possible typo: you repeated a word
Context: ... the monitors are checked every every_n_epochs epochs. if save_top_k >= 2 and the callback is...


Near line 202: Possible missing comma found.
Context: ...onitored has stopped decreasing; in max mode it will be reduced when the quantity mo...


Near line 206: Possible missing comma found.
Context: ...tience`: (int) Number of epochs with no improvement after which learning rate will be reduc...


Near line 208: Possible missing comma found.
Context: ...arning rate of all param groups or each group respectively. Default: 0. - `inferen...


Near line 210: Loose punctuation mark.
Context: ...ely. Default: 0. - inference_config: - device: (str) Device on which t...


Near line 212: Loose punctuation mark.
Context: ... "ideep", "hip", "msnpu"). - data: - path: (str) Path to .slp ...


Near line 219: Possible missing article found.
Context: ...he image has 3 channels (RGB image). If input has only one channel when this ...


Near line 220: Possible missing article found.
Context: ... is set to True, then the images from single-channel is replicated along the...


Near line 221: Possible missing article found.
Context: ...s replicated along the channel axis. If input has three channels and this is ...


Near line 222: Possible missing article found.
Context: ... to False, then we convert the image to grayscale (single-channel) image. ...


Near line 232: Loose punctuation mark.
Context: ... the default. - preprocessing: - anchor_ind: (int) Inde...


Near line 233: Possible missing comma found.
Context: ...can significantly improve topdown model accuracy as they benefit from a consistent geome...


Near line 237: Loose punctuation mark.
Context: ...the input image. - peak_threshold: float between 0 and 1. Minimum confid...


Near line 238: Loose punctuation mark.
Context: ... be ignored. - integral_refinement: If None, returns the grid-aligned pea...


Near line 239: Loose punctuation mark.
Context: ... regression. - integral_patch_size: Size of patches to crop around each rou...


Near line 240: Loose punctuation mark.
Context: ... integer scalar. - return_confmaps: If True, predicted confidence maps wi...


Near line 241: Loose punctuation mark.
Context: ... values and points. - return_pafs: If True, predicted part affinity fiel...


Near line 242: Loose punctuation mark.
Context: ...es and points. - return_paf_graph: If True, the part affinity field grap...

Ruff (28)
tests/data/test_confmaps.py (1)

1-1: Missing docstring in public module

tests/data/test_instance_cropping.py (2)

1-1: Missing docstring in public module


10-10: Missing docstring in public function

tests/data/test_pipelines.py (1)

1-1: Missing docstring in public module

tests/inference/test_bottomup.py (1)

1-1: Missing docstring in public module

tests/inference/test_paf_grouping.py (18)

1-1: Missing docstring in public module


28-28: Missing docstring in public function


44-44: Missing docstring in public function


58-58: Missing docstring in public function


78-78: Missing docstring in public function


94-94: Missing docstring in public function


117-117: Missing docstring in public function


153-153: Missing docstring in public function


179-179: Missing docstring in public function


205-205: Missing docstring in public function


245-245: Missing docstring in public function


339-339: Missing docstring in public function


367-367: Missing docstring in public function


415-415: Missing docstring in public function


476-476: Missing docstring in public function


487-487: Missing docstring in public function


531-531: Missing docstring in public function


569-569: Missing docstring in public function

tests/inference/test_predictors.py (1)

1-1: Missing docstring in public module

tests/inference/test_single_instance.py (1)

1-1: Missing docstring in public module

tests/inference/test_topdown.py (1)

1-1: Missing docstring in public module

tests/training/test_model_trainer.py (2)

71-71: Missing docstring in public function


271-271: Missing docstring in public function

Markdownlint (193)
docs/config.md (193)

17: Expected: 2; Actual: 4
Unordered list indentation


18: Expected: 2; Actual: 4
Unordered list indentation


19: Expected: 2; Actual: 4
Unordered list indentation


20: Expected: 4; Actual: 8
Unordered list indentation


21: Expected: 4; Actual: 8
Unordered list indentation


26: Expected: 4; Actual: 8
Unordered list indentation


28: Expected: 4; Actual: 8
Unordered list indentation


30: Expected: 4; Actual: 8
Unordered list indentation


31: Expected: 4; Actual: 8
Unordered list indentation


32: Expected: 6; Actual: 12
Unordered list indentation


33: Expected: 6; Actual: 12
Unordered list indentation


34: Expected: 6; Actual: 12
Unordered list indentation


35: Expected: 6; Actual: 12
Unordered list indentation


36: Expected: 6; Actual: 12
Unordered list indentation


37: Expected: 8; Actual: 16
Unordered list indentation


38: Expected: 8; Actual: 16
Unordered list indentation


39: Expected: 8; Actual: 16
Unordered list indentation


40: Expected: 10; Actual: 20
Unordered list indentation


41: Expected: 12; Actual: 24
Unordered list indentation


42: Expected: 12; Actual: 24
Unordered list indentation


43: Expected: 12; Actual: 24
Unordered list indentation


44: Expected: 12; Actual: 24
Unordered list indentation


45: Expected: 12; Actual: 24
Unordered list indentation


46: Expected: 12; Actual: 24
Unordered list indentation


47: Expected: 12; Actual: 24
Unordered list indentation


48: Expected: 12; Actual: 24
Unordered list indentation


49: Expected: 12; Actual: 24
Unordered list indentation


50: Expected: 10; Actual: 20
Unordered list indentation


51: Expected: 12; Actual: 24
Unordered list indentation


52: Expected: 12; Actual: 24
Unordered list indentation


54: Expected: 12; Actual: 24
Unordered list indentation


55: Expected: 12; Actual: 24
Unordered list indentation


56: Expected: 12; Actual: 24
Unordered list indentation


57: Expected: 12; Actual: 24
Unordered list indentation


58: Expected: 12; Actual: 24
Unordered list indentation


59: Expected: 12; Actual: 24
Unordered list indentation


60: Expected: 12; Actual: 24
Unordered list indentation


61: Expected: 12; Actual: 24
Unordered list indentation


62: Expected: 2; Actual: 4
Unordered list indentation


65: Expected: 2; Actual: 4
Unordered list indentation


66: Expected: 2; Actual: 4
Unordered list indentation


67: Expected: 2; Actual: 4
Unordered list indentation


68: Expected: 4; Actual: 8
Unordered list indentation


69: Expected: 4; Actual: 8
Unordered list indentation


70: Expected: 6; Actual: 12
Unordered list indentation


71: Expected: 6; Actual: 12
Unordered list indentation


72: Expected: 6; Actual: 12
Unordered list indentation


73: Expected: 6; Actual: 12
Unordered list indentation


74: Expected: 6; Actual: 12
Unordered list indentation


76: Expected: 6; Actual: 12
Unordered list indentation


78: Expected: 6; Actual: 12
Unordered list indentation


79: Expected: 6; Actual: 12
Unordered list indentation


83: Expected: 6; Actual: 12
Unordered list indentation


84: Expected: 6; Actual: 12
Unordered list indentation


88: Expected: 6; Actual: 12
Unordered list indentation


89: Expected: 6; Actual: 12
Unordered list indentation


91: Expected: 6; Actual: 12
Unordered list indentation


93: Expected: 6; Actual: 12
Unordered list indentation


94: Expected: 6; Actual: 12
Unordered list indentation


98: Expected: 6; Actual: 12
Unordered list indentation


99: Expected: 6; Actual: 12
Unordered list indentation


103: Expected: 6; Actual: 12
Unordered list indentation


104: Expected: 6; Actual: 12
Unordered list indentation


105: Expected: 4; Actual: 8
Unordered list indentation


106: Expected: 6; Actual: 12
Unordered list indentation


107: Expected: 6; Actual: 12
Unordered list indentation


108: Expected: 8; Actual: 16
Unordered list indentation


109: Expected: 8; Actual: 16
Unordered list indentation


110: Expected: 6; Actual: 12
Unordered list indentation


111: Expected: 6; Actual: 12
Unordered list indentation


112: Expected: 6; Actual: 12
Unordered list indentation


113: Expected: 6; Actual: 12
Unordered list indentation


114: Expected: 6; Actual: 12
Unordered list indentation


115: Expected: 6; Actual: 12
Unordered list indentation


116: Expected: 6; Actual: 12
Unordered list indentation


117: Expected: 6; Actual: 12
Unordered list indentation


118: Expected: 6; Actual: 12
Unordered list indentation


119: Expected: 6; Actual: 12
Unordered list indentation


120: Expected: 6; Actual: 12
Unordered list indentation


124: Expected: 6; Actual: 12
Unordered list indentation


128: Expected: 4; Actual: 8
Unordered list indentation


129: Expected: 6; Actual: 12
Unordered list indentation


130: Expected: 6; Actual: 12
Unordered list indentation


133: Expected: 6; Actual: 12
Unordered list indentation


134: Expected: 6; Actual: 12
Unordered list indentation


137: Expected: 6; Actual: 12
Unordered list indentation


138: Expected: 6; Actual: 12
Unordered list indentation


139: Expected: 6; Actual: 12
Unordered list indentation


140: Expected: 6; Actual: 12
Unordered list indentation


141: Expected: 6; Actual: 12
Unordered list indentation


142: Expected: 6; Actual: 12
Unordered list indentation


143: Expected: 6; Actual: 12
Unordered list indentation


144: Expected: 6; Actual: 12
Unordered list indentation


145: Expected: 6; Actual: 12
Unordered list indentation


146: Expected: 6; Actual: 12
Unordered list indentation


150: Expected: 2; Actual: 4
Unordered list indentation


163: Expected: 2; Actual: 4
Unordered list indentation


164: Expected: 4; Actual: 8
Unordered list indentation


165: Expected: 4; Actual: 8
Unordered list indentation


166: Expected: 4; Actual: 8
Unordered list indentation


167: Expected: 4; Actual: 8
Unordered list indentation


168: Expected: 4; Actual: 8
Unordered list indentation


169: Expected: 4; Actual: 8
Unordered list indentation


170: Expected: 2; Actual: 4
Unordered list indentation


171: Expected: 2; Actual: 4
Unordered list indentation


172: Expected: 4; Actual: 8
Unordered list indentation


173: Expected: 4; Actual: 8
Unordered list indentation


174: Expected: 4; Actual: 8
Unordered list indentation


175: Expected: 4; Actual: 8
Unordered list indentation


176: Expected: 4; Actual: 8
Unordered list indentation


177: Expected: 2; Actual: 4
Unordered list indentation


178: Expected: 4; Actual: 8
Unordered list indentation


179: Expected: 4; Actual: 8
Unordered list indentation


180: Expected: 4; Actual: 8
Unordered list indentation


181: Expected: 2; Actual: 4
Unordered list indentation


182: Expected: 2; Actual: 4
Unordered list indentation


183: Expected: 2; Actual: 4
Unordered list indentation


184: Expected: 2; Actual: 4
Unordered list indentation


185: Expected: 2; Actual: 4
Unordered list indentation


186: Expected: 2; Actual: 4
Unordered list indentation


187: Expected: 2; Actual: 4
Unordered list indentation


188: Expected: 2; Actual: 4
Unordered list indentation


189: Expected: 2; Actual: 4
Unordered list indentation


190: Expected: 2; Actual: 4
Unordered list indentation


191: Expected: 4; Actual: 8
Unordered list indentation


192: Expected: 4; Actual: 8
Unordered list indentation


193: Expected: 4; Actual: 8
Unordered list indentation


194: Expected: 4; Actual: 8
Unordered list indentation


195: Expected: 4; Actual: 8
Unordered list indentation


196: Expected: 4; Actual: 8
Unordered list indentation


197: Expected: 2; Actual: 4
Unordered list indentation


198: Expected: 2; Actual: 4
Unordered list indentation


199: Expected: 4; Actual: 8
Unordered list indentation


200: Expected: 4; Actual: 8
Unordered list indentation


201: Expected: 2; Actual: 4
Unordered list indentation


202: Expected: 4; Actual: 8
Unordered list indentation


203: Expected: 4; Actual: 8
Unordered list indentation


204: Expected: 4; Actual: 8
Unordered list indentation


205: Expected: 4; Actual: 8
Unordered list indentation


206: Expected: 4; Actual: 8
Unordered list indentation


207: Expected: 4; Actual: 8
Unordered list indentation


208: Expected: 4; Actual: 8
Unordered list indentation


211: Expected: 2; Actual: 4
Unordered list indentation


212: Expected: 2; Actual: 4
Unordered list indentation


213: Expected: 4; Actual: 8
Unordered list indentation


214: Expected: 4; Actual: 8
Unordered list indentation


216: Expected: 4; Actual: 8
Unordered list indentation


218: Expected: 4; Actual: 8
Unordered list indentation


219: Expected: 4; Actual: 8
Unordered list indentation


224: Expected: 4; Actual: 8
Unordered list indentation


225: Expected: 4; Actual: 8
Unordered list indentation


226: Expected: 4; Actual: 8
Unordered list indentation


227: Expected: 6; Actual: 12
Unordered list indentation


228: Expected: 6; Actual: 12
Unordered list indentation


229: Expected: 6; Actual: 12
Unordered list indentation


230: Expected: 6; Actual: 12
Unordered list indentation


232: Expected: 4; Actual: 8
Unordered list indentation


233: Expected: 6; Actual: 12
Unordered list indentation


234: Expected: 6; Actual: 12
Unordered list indentation


235: Expected: 6; Actual: 12
Unordered list indentation


236: Expected: 6; Actual: 12
Unordered list indentation


237: Expected: 2; Actual: 4
Unordered list indentation


238: Expected: 2; Actual: 4
Unordered list indentation


239: Expected: 2; Actual: 4
Unordered list indentation


240: Expected: 2; Actual: 4
Unordered list indentation


241: Expected: 2; Actual: 4
Unordered list indentation


242: Expected: 2; Actual: 4
Unordered list indentation


16: Expected: 0 or 2; Actual: 1
Trailing spaces


33: Expected: 0 or 2; Actual: 1
Trailing spaces


35: Expected: 0 or 2; Actual: 1
Trailing spaces


39: Expected: 0 or 2; Actual: 1
Trailing spaces


40: Expected: 0 or 2; Actual: 1
Trailing spaces


64: Expected: 0 or 2; Actual: 1
Trailing spaces


110: Expected: 0 or 2; Actual: 1
Trailing spaces


112: Expected: 0 or 2; Actual: 1
Trailing spaces


129: Expected: 0 or 2; Actual: 1
Trailing spaces


133: Expected: 0 or 2; Actual: 1
Trailing spaces


155: Expected: 0 or 2; Actual: 1
Trailing spaces


162: Expected: 0 or 2; Actual: 1
Trailing spaces


188: Expected: 0 or 2; Actual: 1
Trailing spaces


211: Expected: 0 or 2; Actual: 1
Trailing spaces


223: Expected: 0 or 2; Actual: 1
Trailing spaces


232: Expected: 0 or 2; Actual: 1
Trailing spaces


236: Expected: 0 or 2; Actual: 1
Trailing spaces


240: Expected: 0 or 2; Actual: 1
Trailing spaces


241: Expected: 0 or 2; Actual: 1
Trailing spaces


54: null
Spaces inside emphasis markers


54: null
Spaces inside emphasis markers


54: null
Spaces inside emphasis markers


54: null
Spaces inside emphasis markers


204: null
Spaces inside emphasis markers


204: null
Spaces inside emphasis markers


242: null
Files should end with a single newline character

Additional comments not posted (43)
tests/data/test_instance_cropping.py (1)

29-29: Set max_instances to control the number of instances processed. Consider adding a comment explaining why the limit is set to 3.

docs/config_centroid.yaml (2)

8-8: The change in scale from 1.0 to 0.5 should be verified to ensure it aligns with the intended data preprocessing adjustments for the centroid model.

Verification successful

The initial script did not yield any results, which suggests that the scale parameter set to 0.5 might not be present in other configuration files. To ensure thorough verification, we need to check for the scale parameter in other configurations and model files.


The change in the scale parameter from 1.0 to 0.5 in docs/config_centroid.yaml is consistent with the intended data preprocessing adjustments for the centroid model. The verification script confirms that this change is isolated to the specified configuration file and does not conflict with other configurations or model files.

  • docs/config_centroid.yaml: The scale parameter is set to 0.5.

This change appears to be intentional and does not introduce inconsistencies with other parts of the codebase.

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Description: Verify the scale parameter change in related configuration files and model usage.
fd "config_*.yaml" | xargs rg "scale: 0.5"

Length of output: 44



Script:

#!/bin/bash
# Description: Verify the scale parameter change in all YAML configuration files and Python model files.
fd -e yaml -e py | xargs rg "scale:"

Length of output: 6435


229-229: Adjusting the scale parameter in the inference configuration is critical. Confirm that this change aligns with the expected model input dimensions and preprocessing during inference.

Verification successful

The search results indicate that the scale parameter is used in various parts of the codebase, including inference scripts and configuration files. The presence of scale: 0.5 in docs/config_centroid.yaml aligns with other instances where scale is set to different values, suggesting that the parameter is indeed used for resizing images during inference.

Given this context, it appears that the change to scale: 0.5 in the inference configuration is consistent with the model's expected input dimensions and preprocessing steps.

  • docs/config_centroid.yaml: scale: 0.5
  • sleap_nn/inference/topdown.py: input_scale: float = 1.0, and self.input_scale = input_scale
  • sleap_nn/inference/single_instance.py: input_scale: float = 1.0, and self.input_scale = input_scale
  • sleap_nn/inference/predictors.py: input_scale=self.centroid_config.inference_config.data.scale

These references confirm that the scale parameter is used to resize images before inference, and setting it to 0.5 should be valid.

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Description: Confirm the scale parameter adjustment in the inference configuration aligns with model expectations.
fd "config_*.yaml" | xargs rg "scale: 0.5"

Length of output: 44



Script:

#!/bin/bash
# Description: Search for the usage of the `scale` parameter in the codebase to verify its impact and alignment with model requirements.
rg 'scale' --glob '*.py' --glob '*.yaml'

Length of output: 21430
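
As context for the scale verification above, here is a minimal, generic sketch of resizing a batch of images by the configured 0.5 factor before inference. This is standard PyTorch, not the project's actual preprocessing code; the tensor shapes are made up for illustration.

```python
import torch
import torch.nn.functional as F

# A batch of images in (batch, channels, height, width) layout.
images = torch.rand(4, 1, 384, 384)

# Resize by the configured scale factor (0.5 here) before running the model,
# mirroring how a `scale` / `input_scale` setting is typically applied.
scale = 0.5
resized = F.interpolate(images, scale_factor=scale, mode="bilinear", align_corners=False)
print(resized.shape)  # torch.Size([4, 1, 192, 192])
```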

tests/assets/minimal_instance_bottomup/initial_config.yaml (1)

1-238: This new configuration file for the BottomUp model appears comprehensive and well-structured, covering essential aspects like data handling, preprocessing, and model settings. Ensure all parameters align with the BottomUp model's requirements and intended use cases.

tests/assets/minimal_instance_bottomup/training_config.yaml (1)

1-249: This training configuration file for the BottomUp model is well-prepared, covering training data loaders, model checkpoints, and early stopping settings. Verify that all configurations are optimized for the BottomUp model's training and align with the model's performance expectations.

sleap_nn/architectures/model.py (1)

56-56: The modification in the get_backbone function to ensure correct application of the backbone configuration is crucial for model integrity and performance.

docs/config_bottomup.yaml (1)

1-270: The configuration file for the BottomUp model is comprehensive, covering all necessary settings for data handling, model architecture, and inference. Ensure that all parameters are correctly set to optimize the BottomUp model's performance.

sleap_nn/data/providers.py (1)

71-75: The addition of the edge_idxs property is a valuable enhancement for the BottomUp model, facilitating graph-based operations by providing easy access to edge indices.
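
To illustrate what an edge_idxs style property can provide, here is a hedged sketch assuming a skeleton where each edge stores source and destination node names. The class and attribute names are illustrative, not the actual providers.py implementation.

```python
from typing import List, Tuple


class SkeletonLike:
    """Illustrative stand-in for a labels/skeleton provider."""

    def __init__(self, nodes: List[str], edges: List[Tuple[str, str]]):
        self.nodes = nodes
        self.edges = edges

    @property
    def edge_idxs(self) -> List[Tuple[int, int]]:
        # Map each (source, destination) node-name pair to node indices so
        # downstream PAF grouping can work with integer indices.
        return [(self.nodes.index(s), self.nodes.index(d)) for s, d in self.edges]


skel = SkeletonLike(
    nodes=["head", "thorax", "abdomen"],
    edges=[("head", "thorax"), ("thorax", "abdomen")],
)
print(skel.edge_idxs)  # [(0, 1), (1, 2)]
```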

sleap_nn/inference/bottomup.py (4)

10-50: The class BottomUpInferenceModel is well-documented and structured. The attributes are clearly defined and relevant to the bottom-up inference process.


52-78: Initialization of BottomUpInferenceModel is robust, with clear default values and type annotations. Good use of optional parameters for flexibility.


80-101: The method _generate_cms_peaks effectively handles peak detection and adjustment for stride and scale. The use of nested tensors for output is appropriate for handling structured data.


103-161: The forward method is comprehensive, handling the prediction of confidence maps and the inference of peak coordinates. The method efficiently integrates various components like peak detection and PAF scoring.
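
To make the stride and scale adjustment mentioned for _generate_cms_peaks concrete, here is a minimal sketch of the usual bookkeeping: peaks found on a strided output grid are mapped back to input-image coordinates. This illustrates the general technique with made-up numbers, not the module's actual code.

```python
import torch

# Peak locations in (x, y) on the model's output grid.
peaks_on_grid = torch.tensor([[12.0, 7.0], [30.5, 18.25]])

output_stride = 2   # output grid is 1/2 the resolution of the network input
input_scale = 0.5   # the input image was downscaled by 0.5 before inference

# Map grid coordinates back to original-image coordinates.
peaks_in_image = peaks_on_grid * output_stride / input_scale
print(peaks_in_image)  # tensor([[ 48.,  28.], [122.,  73.]])
```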

sleap_nn/data/confidence_maps.py (3)

Line range hint 3-36: The function make_confmaps is well-documented with clear parameter descriptions and return type. The implementation uses efficient tensor operations for generating confidence maps.


Line range hint 55-79: The function make_multi_confmaps extends the functionality to handle multiple instances. The use of torch.maximum to combine confidence maps is a good choice for this context.


129-129: The use of unsqueeze in the MultiConfidenceMapGenerator class is appropriate for handling tensor dimensions correctly. This ensures that the data is correctly formatted for subsequent operations.
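
The torch.maximum choice noted above is the standard way to merge per-instance confidence maps into a single multi-instance map: overlapping peaks keep their full height instead of summing past 1.0. A small self-contained sketch, with made-up Gaussian parameters rather than the project's make_multi_confmaps signature:

```python
import torch


def gaussian_confmap(center, size=64, sigma=2.0):
    """Render a single 2D Gaussian confidence map centred at (x, y)."""
    yy, xx = torch.meshgrid(
        torch.arange(size, dtype=torch.float32),
        torch.arange(size, dtype=torch.float32),
        indexing="ij",
    )
    cx, cy = center
    return torch.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma**2))


# Combine maps from two instances with an element-wise maximum.
cm_a = gaussian_confmap((10.0, 12.0))
cm_b = gaussian_confmap((40.0, 30.0))
multi_cm = torch.maximum(cm_a, cm_b)
print(multi_cm.max())  # tensor(1.)
```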

tests/fixtures/datasets.py (1)

33-37: The fixture minimal_instance_bottomup_ckpt correctly sets up a path for the BottomUp model checkpoint. This is essential for testing the BottomUp model functionality.
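
For context, a fixture of the kind described here might look like the following sketch. The fixture name and asset directory come from this PR's file list; the body (returning a plain Path to the checkpoint assets) is an assumption, since the real fixture may resolve the path differently.

```python
from pathlib import Path

import pytest


@pytest.fixture
def minimal_instance_bottomup_ckpt():
    """Path to the minimal BottomUp checkpoint assets used in the tests."""
    return Path("tests/assets/minimal_instance_bottomup")
```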

sleap_nn/data/pipelines.py (1)

293-381: The BottomUpPipeline class is well-implemented with clear documentation and a structured approach to building the training pipeline. The integration of PartAffinityFieldsGenerator is particularly noteworthy for handling part affinity fields in the bottom-up model.

tests/training/test_model_trainer.py (3)

12-17: The import changes correctly include the new BottomUpModel class, aligning with the PR's objective to add BottomUp model pipeline functionalities.


226-249: The configuration setup for the BottomUp model within the test function is correctly implemented, aligning with the expected parameters for preprocessing and model head configurations.


258-269: The model initialization and training steps are correctly implemented, with appropriate assertions to check the model's output shape and global step.

sleap_nn/inference/topdown.py (3)

233-311: LGTM! Ensure all function calls match the new signature.


16-81: LGTM! Ensure all function calls match the new signature.


313-420: LGTM! Ensure all function calls match the new signature.

Verification successful

The instances of FindInstancePeaks in the codebase match the new constructor signature. Here are the relevant locations:

  • tests/inference/test_topdown.py:
    • Line 60
    • Line 217
    • Line 234
  • tests/inference/test_inference.py:
    • Line 69
    • Line 405
    • Line 422
  • sleap_nn/inference/predictors.py:
    • Line 318

These instances correctly use the new parameters for FindInstancePeaks.

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Description: Verify all function calls to `FindInstancePeaks` match the new signature.

# Test: Search for the function usage. Expect: Only occurances of the new signature.
rg --type python $'FindInstancePeaks'

Length of output: 67



Script:

#!/bin/bash
# Description: Verify all function calls to `FindInstancePeaks` match the new signature.

# Test: Search for the function usage. Expect: Only occurrences of the new signature.
rg --type py 'FindInstancePeaks'

Length of output: 3887



Script:

#!/bin/bash
# Description: Extract lines where `FindInstancePeaks` is instantiated to verify the new signature.

# Extract lines from test files and other relevant files.
rg --type py 'FindInstancePeaks\(' -n

Length of output: 680



Script:

#!/bin/bash
# Description: Extract relevant lines where `FindInstancePeaks` is instantiated to verify the new signature.

# Extract lines from test files and other relevant files.
rg --type py -A 5 -B 5 'FindInstancePeaks\('

Length of output: 6124

sleap_nn/training/model_trainer.py (5)

303-311: LGTM! Ensure all function calls match the new signature.


602-681: LGTM! Ensure all function calls match the new signature.


602-681: LGTM! Ensure all function calls match the new signature.


602-681: LGTM! Ensure all function calls match the new signature.


Line range hint 17-681: LGTM! Ensure all function calls match the new signature.

Verification successful

The search results show multiple instances of ModelTrainer being used in the test files. Each instance appears to be using the new signature correctly, as they all pass a config object to the ModelTrainer constructor.

  • tests/training/test_model_trainer.py
  • tests/inference/test_topdown.py
  • tests/inference/test_bottomup.py
  • tests/inference/test_inference.py

These instances confirm that the ModelTrainer class is being used correctly with the new signature in the test files.

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Description: Verify all function calls to `ModelTrainer` match the new signature.

# Test: Search for the function usage. Expect: Only occurances of the new signature.
rg --type python $'ModelTrainer'

Length of output: 62



Script:

#!/bin/bash
# Description: Verify all function calls to `ModelTrainer` match the new signature.

# Test: Search for the function usage. Expect: Only occurrences of the new signature.
rg 'ModelTrainer'

Length of output: 2107



Script:

#!/bin/bash
# Description: Display lines around `ModelTrainer` usage in test files to verify correct usage.

# Test: Display lines around `ModelTrainer` usage in test files.
rg 'ModelTrainer' -A 5 -B 5 tests/

Length of output: 16793

docs/config.md (4)

18-18: Addition of "BottomUp" pipeline option.

This addition aligns with the PR's objective to introduce a new model pipeline for BottomUp models. Ensure that the implementation details of this pipeline are well-documented elsewhere in the codebase.


35-35: Introduction of pafs_gen configuration for the BottomUp model.

This configuration is crucial for defining the generation of Part Affinity Fields (PAFs), which are essential for the BottomUp model's functionality. It's good to see that the configuration mirrors the existing conf_map_gen, maintaining consistency in the configuration structure.


150-160: Addition of new head types in head_configs.

The inclusion of 'PartAffinityFieldsHead' and 'CenteredInstanceConfmapsHead' is a significant enhancement for the BottomUp model. This change supports the PR's goal of expanding the model's capabilities. Ensure that these new head types are properly integrated and tested within the model's architecture.


236-242: Addition of new inference configuration options.

The new options return_confmaps, return_pafs, and return_paf_graph provide flexibility in the output of the inference process, which is beneficial for detailed analysis and debugging. It's important to ensure that these options are handled correctly in the inference code to avoid any potential performance overhead.
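
To tie the documentation points above together, here is a hedged sketch of the relevant pieces of a BottomUp configuration, written as a plain Python dict. The key names mirror those mentioned in this review (pipeline, pafs_gen, head_configs, return_confmaps/return_pafs/return_paf_graph), but the nesting and values are illustrative, not the authoritative schema; see docs/config.md and docs/config_bottomup.yaml in the PR for the real structure.

```python
# Illustrative fragment of a BottomUp config; values are placeholders.
bottomup_config = {
    "data_config": {
        "pipeline": "BottomUp",  # new pipeline option added in this PR
        "train": {
            "preprocessing": {
                "pafs_gen": {"sigma": 4, "output_stride": 4},  # assumed location
            },
        },
    },
    "model_config": {
        "head_configs": [
            {"head_type": "MultiInstanceConfmapsHead", "head_config": {"sigma": 1.5}},
            {"head_type": "PartAffinityFieldsHead", "head_config": {"sigma": 4}},
        ],
    },
    "inference_config": {
        "return_confmaps": False,
        "return_pafs": False,
        "return_paf_graph": False,
    },
}
```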

tests/data/test_pipelines.py (4)

15-15: The addition of BottomUpPipeline to the import list aligns with the new features introduced in the PR.


584-584: The introduction of the BottomUpPipeline test function is crucial for validating the new pipeline's integration and functionality.


584-851: The comprehensive testing of the BottomUpPipeline across various configurations ensures robustness and functionality. The tests cover different scenarios including scaling, padding, and random cropping, which are essential for validating the pipeline's behavior under different conditions.


584-851: The tests for BottomUpPipeline are well-designed to cover a variety of scenarios, ensuring that the pipeline behaves as expected under different configurations. This is crucial for the reliability of the new pipeline.

sleap_nn/inference/predictors.py (4)

854-884: Introduction of the BottomUpPredictor class.

This class encapsulates the functionality for the BottomUp model prediction, aligning with the PR's objectives to enhance bottom-up processing capabilities.


896-956: Initialization logic in _initialize_inference_model.

The method correctly sets up the inference model with necessary configurations and scorers, ensuring that the model is ready for inference tasks.


1001-1081: Setup of data loading pipeline in make_pipeline.

The method effectively handles different data providers and sets up the data pipeline accordingly. It is well-structured to accommodate both LabelsReader and VideoReader.


1083-1197: Conversion of inference results in _make_labeled_frames_from_generator.

The method accurately converts raw inference results into structured data, facilitating further analysis and visualization. The logic is clear and aligns with the framework's data handling standards.

sleap_nn/inference/paf_grouping.py (3)

974-981: Ensure proper handling of device-specific tensors before conversion to NumPy.

This conversion is necessary for operations not supported directly in PyTorch, and the handling of device transfer is correctly implemented.
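
For readers following the device comments in this file, the usual pattern for safely converting a possibly-GPU tensor to NumPy is shown below. This is generic PyTorch practice, not a quote of paf_grouping.py.

```python
import numpy as np
import torch


def to_numpy(t: torch.Tensor) -> np.ndarray:
    # Detach from the autograd graph and move to host memory before
    # handing the data to NumPy-only code paths.
    return t.detach().cpu().numpy()


x = torch.rand(3, 2)      # works the same for a CUDA tensor
print(to_numpy(x).shape)  # (3, 2)
```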


176-176: Ensure tensor operations are performed on the correct device.

Also applies to: 178-180

Verification successful

Ensure tensor operations are performed on the correct device.

  • The make_line_subs function consistently moves tensors to the device of the input tensors (peaks_sample).
Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Description: Verify that tensor operations in `make_line_subs` are consistently using the device of input tensors.

# Test: Search for device-specific operations in `make_line_subs`. Expect: All operations should use the device of input tensors.
rg --type python 'make_line_subs' sleap_nn/inference/paf_grouping.py

Length of output: 99



Script:

#!/bin/bash
# Description: Verify that tensor operations in `make_line_subs` are consistently using the device of input tensors.

# Test: Search for the definition of `make_line_subs` and inspect its device-specific operations.
rg 'def make_line_subs' -A 20 sleap_nn/inference/paf_grouping.py

Length of output: 1251



Script:

#!/bin/bash
# Description: Verify that tensor operations in `make_line_subs` are consistently using the device of input tensors.

# Test: Search for the definition of `make_line_subs` and capture more lines to inspect its device-specific operations.
rg 'def make_line_subs' -A 50 sleap_nn/inference/paf_grouping.py

Length of output: 2745



Script:

#!/bin/bash
# Description: Verify that tensor operations in `make_line_subs` are consistently using the device of input tensors.

# Test: Search for the definition of `make_line_subs` and capture additional lines to inspect its device-specific operations.
rg 'def make_line_subs' -A 100 sleap_nn/inference/paf_grouping.py

Length of output: 4225


445-447: Verify the calculation of max_edge_length for accuracy.

Verification successful

The calculation of max_edge_length in the score_paf_lines_batch function appears to be accurate. It correctly multiplies the max_edge_length_ratio by the maximum dimension of pafs and the pafs_stride, which logically computes the maximum edge length based on the provided parameters.

  • max_edge_length_ratio: A scalar that represents the maximum expected length of a connected pair of points.
  • max(pafs.shape[-1], pafs.shape[-2], pafs.shape[-3]): The maximum dimension of the pafs tensor.
  • pafs_stride: The stride of the pafs.

This calculation ensures that the max_edge_length is appropriately scaled according to the dimensions and stride of the pafs.

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Description: Verify the calculation of `max_edge_length` in `score_paf_lines_batch`.

# Test: Search for usage of `max_edge_length` in `score_paf_lines_batch`. Expect: Correct calculation based on image dimensions and stride.
rg --type python 'max_edge_length' sleap_nn/inference/paf_grouping.py

Length of output: 100



Script:

#!/bin/bash
# Description: Verify the calculation of `max_edge_length` in `score_paf_lines_batch`.

# Test: Search for usage of `max_edge_length` in `score_paf_lines_batch`. Expect: Correct calculation based on image dimensions and stride.
rg 'max_edge_length' sleap_nn/inference/paf_grouping.py

Length of output: 1042
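
As a quick sanity check of the max_edge_length formula described above, here is a worked example with made-up values (a 96×96 PAF grid at stride 4 and a ratio of 0.25), rather than real model output.

```python
# Worked example of the max_edge_length calculation described above.
max_edge_length_ratio = 0.25
pafs_shape = (32, 96, 96)   # illustrative (channels, height, width) of the PAF tensor
pafs_stride = 4

max_edge_length = max_edge_length_ratio * max(pafs_shape) * pafs_stride
print(max_edge_length)  # 96.0 pixels in full-resolution image coordinates
```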

Resolved review threads:
  • sleap_nn/inference/paf_grouping.py
  • sleap_nn/inference/paf_grouping.py (outdated)
  • tests/training/test_model_trainer.py
  • tests/inference/test_topdown.py
  • tests/inference/test_predictors.py (4 threads)
  • docs/config_centroid.yaml
  • sleap_nn/architectures/model.py (outdated)
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Review Details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits: Files that changed from the base of the PR and between d4482c1 and bdd4d2f.
Files selected for processing (4)
  • docs/config.md (4 hunks)
  • sleap_nn/architectures/model.py (2 hunks)
  • sleap_nn/inference/predictors.py (11 hunks)
  • sleap_nn/training/model_trainer.py (4 hunks)
Files skipped from review as they are similar to previous changes (2)
  • sleap_nn/architectures/model.py
  • sleap_nn/training/model_trainer.py
Additional Context Used
LanguageTool (42)
docs/config.md (42)

Near line 6: Loose punctuation mark.
Context: ... four main sections: - 1. data_config: Creating a data pipeline. - 2. `model_...


Near line 8: Loose punctuation mark.
Context: ...ng a data pipeline. - 2. model_config: Initialise the sleap-nn backbone and he...


Near line 10: Loose punctuation mark.
Context: ... and head models. - 3. trainer_config: Hyperparameters required to train the m...


Near line 12: Loose punctuation mark.
Context: ...with Lightning. - 4. inference_config: Inference related configs. Note:...


Near line 16: Loose punctuation mark.
Context: ... for val_data_loader. - data_config: - provider: (str) Provider class...


Near line 19: Loose punctuation mark.
Context: ...psPipeline" or "BottomUp". - train: - labels_path: (str) Path to ...


Near line 21: Possible missing article found.
Context: ...he image has 3 channels (RGB image). If input has only one channel when this ...


Near line 22: Possible missing article found.
Context: ... is set to True, then the images from single-channel is replicated along the...


Near line 23: Possible missing article found.
Context: ...s replicated along the channel axis. If input has three channels and this is ...


Near line 24: Possible missing article found.
Context: ... to False, then we convert the image to grayscale (single-channel) image. ...


Near line 31: Loose punctuation mark.
Context: ...e same factor. - preprocessing: - anchor_ind: (int) Index...


Near line 32: Possible missing comma found.
Context: ...can significantly improve topdown model accuracy as they benefit from a consistent geome...


Near line 34: ‘with respect to’ might be wordy. Consider a shorter alternative.
Context: ...re easier to learn but are less precise with respect to the peak coordinate. This spread is in ...


Near line 36: Loose punctuation mark.
Context: ...n. - augmentation_config: - random crop`: (Dict[...


Near line 64: Loose punctuation mark.
Context: ... to train structure) - model_config: - init_weight: (str) model weigh...


Near line 67: Loose punctuation mark.
Context: ...win_B_Weights"]. - backbone_config: - backbone_type: (str) Backbo...


Near line 78: This phrase might be redundant. Consider either removing or replacing the adjective ‘additional’.
Context: ... - middle_block: (bool) If True, add an additional block at the end of the encoder. default: Tru...


Near line 80: Possible missing comma found.
Context: ... for upsampling. Interpolation is faster but transposed convolutions may...


Near line 103: Possible missing comma found.
Context: ... for upsampling. Interpolation is faster but transposed convolutions may...


Near line 108: Loose punctuation mark.
Context: .... Default: "tiny". - arch: Dictionary of embed dimension, depths a...


Near line 120: Possible missing comma found.
Context: ... for upsampling. Interpolation is faster but transposed convolutions may...


Near line 123: Possible missing comma found.
Context: ...List[dict]) List of heads in the model. For eg, BottomUp model has both 'MultiInsta...


Near line 128: Possible missing comma found.
Context: ...can significantly improve topdown model accuracy as they benefit from a consistent geome...


Near line 129: ‘with respect to’ might be wordy. Consider a shorter alternative.
Context: ...re easier to learn but are less precise with respect to the peak coordinate. This spread is in ...


Near line 139: Possible missing article found.
Context: ...he batch size. If False and the size of dataset is not divisible by the batch size, the...


Near line 143: Possible typo: you repeated a word
Context: ...ease note that the monitors are checked every every_n_epochs epochs. if save_top_k >= 2 and...


Near line 143: Possible typo: you repeated a word
Context: ... the monitors are checked every every_n_epochs epochs. if save_top_k >= 2 and the callback is...


Near line 173: Possible missing comma found.
Context: ...onitored has stopped decreasing; in max mode it will be reduced when the quantity mo...


Near line 181: Loose punctuation mark.
Context: ...ely. Default: 0. - inference_config: - device: (str) Device on which t...


Near line 183: Loose punctuation mark.
Context: ... "ideep", "hip", "msnpu"). - data: - path: (str) Path to .slp ...


Near line 190: Possible missing article found.
Context: ...he image has 3 channels (RGB image). If input has only one channel when this ...


Near line 191: Possible missing article found.
Context: ... is set to True, then the images from single-channel is replicated along the...


Near line 192: Possible missing article found.
Context: ...s replicated along the channel axis. If input has three channels and this is ...


Near line 193: Possible missing article found.
Context: ... to False, then we convert the image to grayscale (single-channel) image. ...


Near line 203: Loose punctuation mark.
Context: ... the default. - preprocessing: - anchor_ind: (int) Inde...


Near line 204: Possible missing comma found.
Context: ...can significantly improve topdown model accuracy as they benefit from a consistent geome...


Near line 208: Loose punctuation mark.
Context: ...the input image. - peak_threshold: float between 0 and 1. Minimum confid...


Near line 209: Loose punctuation mark.
Context: ... be ignored. - integral_refinement: If None, returns the grid-aligned pea...


Near line 210: Loose punctuation mark.
Context: ... regression. - integral_patch_size: Size of patches to crop around each rou...


Near line 211: Loose punctuation mark.
Context: ... integer scalar. - return_confmaps: If True, predicted confidence maps wi...


Near line 212: Loose punctuation mark.
Context: ... values and points. - return_pafs: If True, predicted part affinity fiel...


Near line 213: Loose punctuation mark.
Context: ...es and points. - return_paf_graph: If True, the part affinity field grap...

Markdownlint (177)
docs/config.md (177)

17: Expected: 2; Actual: 4
Unordered list indentation


18: Expected: 2; Actual: 4
Unordered list indentation


19: Expected: 2; Actual: 4
Unordered list indentation


20: Expected: 4; Actual: 8
Unordered list indentation


21: Expected: 4; Actual: 8
Unordered list indentation


26: Expected: 4; Actual: 8
Unordered list indentation


28: Expected: 4; Actual: 8
Unordered list indentation


30: Expected: 4; Actual: 8
Unordered list indentation


31: Expected: 4; Actual: 8
Unordered list indentation


32: Expected: 6; Actual: 12
Unordered list indentation


33: Expected: 6; Actual: 12
Unordered list indentation


34: Expected: 6; Actual: 12
Unordered list indentation


35: Expected: 6; Actual: 12
Unordered list indentation


36: Expected: 6; Actual: 12
Unordered list indentation


37: Expected: 8; Actual: 16
Unordered list indentation


38: Expected: 8; Actual: 16
Unordered list indentation


39: Expected: 8; Actual: 16
Unordered list indentation


40: Expected: 10; Actual: 20
Unordered list indentation


41: Expected: 12; Actual: 24
Unordered list indentation


42: Expected: 12; Actual: 24
Unordered list indentation


43: Expected: 12; Actual: 24
Unordered list indentation


44: Expected: 12; Actual: 24
Unordered list indentation


45: Expected: 12; Actual: 24
Unordered list indentation


46: Expected: 12; Actual: 24
Unordered list indentation


47: Expected: 12; Actual: 24
Unordered list indentation


48: Expected: 12; Actual: 24
Unordered list indentation


49: Expected: 12; Actual: 24
Unordered list indentation


50: Expected: 10; Actual: 20
Unordered list indentation


51: Expected: 12; Actual: 24
Unordered list indentation


52: Expected: 12; Actual: 24
Unordered list indentation


54: Expected: 12; Actual: 24
Unordered list indentation


55: Expected: 12; Actual: 24
Unordered list indentation


56: Expected: 12; Actual: 24
Unordered list indentation


57: Expected: 12; Actual: 24
Unordered list indentation


58: Expected: 12; Actual: 24
Unordered list indentation


59: Expected: 12; Actual: 24
Unordered list indentation


60: Expected: 12; Actual: 24
Unordered list indentation


61: Expected: 12; Actual: 24
Unordered list indentation


62: Expected: 2; Actual: 4
Unordered list indentation


65: Expected: 2; Actual: 4
Unordered list indentation


66: Expected: 2; Actual: 4
Unordered list indentation


67: Expected: 2; Actual: 4
Unordered list indentation


68: Expected: 4; Actual: 8
Unordered list indentation


69: Expected: 4; Actual: 8
Unordered list indentation


70: Expected: 6; Actual: 12
Unordered list indentation


71: Expected: 6; Actual: 12
Unordered list indentation


72: Expected: 6; Actual: 12
Unordered list indentation


73: Expected: 6; Actual: 12
Unordered list indentation


74: Expected: 6; Actual: 12
Unordered list indentation


76: Expected: 6; Actual: 12
Unordered list indentation


78: Expected: 6; Actual: 12
Unordered list indentation


79: Expected: 6; Actual: 12
Unordered list indentation


83: Expected: 6; Actual: 12
Unordered list indentation


84: Expected: 6; Actual: 12
Unordered list indentation


88: Expected: 6; Actual: 12
Unordered list indentation


89: Expected: 6; Actual: 12
Unordered list indentation


90: Expected: 4; Actual: 8
Unordered list indentation


91: Expected: 6; Actual: 12
Unordered list indentation


92: Expected: 8; Actual: 16
Unordered list indentation


93: Expected: 8; Actual: 16
Unordered list indentation


94: Expected: 6; Actual: 12
Unordered list indentation


95: Expected: 6; Actual: 12
Unordered list indentation


96: Expected: 6; Actual: 12
Unordered list indentation


97: Expected: 6; Actual: 12
Unordered list indentation


98: Expected: 6; Actual: 12
Unordered list indentation


99: Expected: 6; Actual: 12
Unordered list indentation


100: Expected: 6; Actual: 12
Unordered list indentation


101: Expected: 6; Actual: 12
Unordered list indentation


102: Expected: 6; Actual: 12
Unordered list indentation


106: Expected: 4; Actual: 8
Unordered list indentation


107: Expected: 6; Actual: 12
Unordered list indentation


108: Expected: 6; Actual: 12
Unordered list indentation


111: Expected: 6; Actual: 12
Unordered list indentation


112: Expected: 6; Actual: 12
Unordered list indentation


113: Expected: 6; Actual: 12
Unordered list indentation


114: Expected: 6; Actual: 12
Unordered list indentation


115: Expected: 6; Actual: 12
Unordered list indentation


116: Expected: 6; Actual: 12
Unordered list indentation


117: Expected: 6; Actual: 12
Unordered list indentation


118: Expected: 6; Actual: 12
Unordered list indentation


119: Expected: 6; Actual: 12
Unordered list indentation


123: Expected: 2; Actual: 4
Unordered list indentation


134: Expected: 2; Actual: 4
Unordered list indentation


135: Expected: 4; Actual: 8
Unordered list indentation


136: Expected: 4; Actual: 8
Unordered list indentation


137: Expected: 4; Actual: 8
Unordered list indentation


138: Expected: 4; Actual: 8
Unordered list indentation


139: Expected: 4; Actual: 8
Unordered list indentation


140: Expected: 4; Actual: 8
Unordered list indentation


141: Expected: 2; Actual: 4
Unordered list indentation


142: Expected: 2; Actual: 4
Unordered list indentation


143: Expected: 4; Actual: 8
Unordered list indentation


144: Expected: 4; Actual: 8
Unordered list indentation


145: Expected: 4; Actual: 8
Unordered list indentation


146: Expected: 4; Actual: 8
Unordered list indentation


147: Expected: 4; Actual: 8
Unordered list indentation


148: Expected: 2; Actual: 4
Unordered list indentation


149: Expected: 4; Actual: 8
Unordered list indentation


150: Expected: 4; Actual: 8
Unordered list indentation


151: Expected: 4; Actual: 8
Unordered list indentation


152: Expected: 2; Actual: 4
Unordered list indentation


153: Expected: 2; Actual: 4
Unordered list indentation


154: Expected: 2; Actual: 4
Unordered list indentation


155: Expected: 2; Actual: 4
Unordered list indentation


156: Expected: 2; Actual: 4
Unordered list indentation


157: Expected: 2; Actual: 4
Unordered list indentation


158: Expected: 2; Actual: 4
Unordered list indentation


159: Expected: 2; Actual: 4
Unordered list indentation


160: Expected: 2; Actual: 4
Unordered list indentation


161: Expected: 2; Actual: 4
Unordered list indentation


162: Expected: 4; Actual: 8
Unordered list indentation


163: Expected: 4; Actual: 8
Unordered list indentation


164: Expected: 4; Actual: 8
Unordered list indentation


165: Expected: 4; Actual: 8
Unordered list indentation


166: Expected: 4; Actual: 8
Unordered list indentation


167: Expected: 4; Actual: 8
Unordered list indentation


168: Expected: 2; Actual: 4
Unordered list indentation


169: Expected: 2; Actual: 4
Unordered list indentation


170: Expected: 4; Actual: 8
Unordered list indentation


171: Expected: 4; Actual: 8
Unordered list indentation


172: Expected: 2; Actual: 4
Unordered list indentation


173: Expected: 4; Actual: 8
Unordered list indentation


174: Expected: 4; Actual: 8
Unordered list indentation


175: Expected: 4; Actual: 8
Unordered list indentation


176: Expected: 4; Actual: 8
Unordered list indentation


177: Expected: 4; Actual: 8
Unordered list indentation


178: Expected: 4; Actual: 8
Unordered list indentation


179: Expected: 4; Actual: 8
Unordered list indentation


182: Expected: 2; Actual: 4
Unordered list indentation


183: Expected: 2; Actual: 4
Unordered list indentation


184: Expected: 4; Actual: 8
Unordered list indentation


185: Expected: 4; Actual: 8
Unordered list indentation


187: Expected: 4; Actual: 8
Unordered list indentation


189: Expected: 4; Actual: 8
Unordered list indentation


190: Expected: 4; Actual: 8
Unordered list indentation


195: Expected: 4; Actual: 8
Unordered list indentation


196: Expected: 4; Actual: 8
Unordered list indentation


197: Expected: 4; Actual: 8
Unordered list indentation


198: Expected: 6; Actual: 12
Unordered list indentation


199: Expected: 6; Actual: 12
Unordered list indentation


200: Expected: 6; Actual: 12
Unordered list indentation


201: Expected: 6; Actual: 12
Unordered list indentation


203: Expected: 4; Actual: 8
Unordered list indentation


204: Expected: 6; Actual: 12
Unordered list indentation


205: Expected: 6; Actual: 12
Unordered list indentation


206: Expected: 6; Actual: 12
Unordered list indentation


207: Expected: 6; Actual: 12
Unordered list indentation


208: Expected: 2; Actual: 4
Unordered list indentation


209: Expected: 2; Actual: 4
Unordered list indentation


210: Expected: 2; Actual: 4
Unordered list indentation


211: Expected: 2; Actual: 4
Unordered list indentation


212: Expected: 2; Actual: 4
Unordered list indentation


213: Expected: 2; Actual: 4
Unordered list indentation


16: Expected: 0 or 2; Actual: 1
Trailing spaces


33: Expected: 0 or 2; Actual: 1
Trailing spaces


35: Expected: 0 or 2; Actual: 1
Trailing spaces


39: Expected: 0 or 2; Actual: 1
Trailing spaces


40: Expected: 0 or 2; Actual: 1
Trailing spaces


64: Expected: 0 or 2; Actual: 1
Trailing spaces


94: Expected: 0 or 2; Actual: 1
Trailing spaces


107: Expected: 0 or 2; Actual: 1
Trailing spaces


127: Expected: 0 or 2; Actual: 1
Trailing spaces


133: Expected: 0 or 2; Actual: 1
Trailing spaces


159: Expected: 0 or 2; Actual: 1
Trailing spaces


182: Expected: 0 or 2; Actual: 1
Trailing spaces


194: Expected: 0 or 2; Actual: 1
Trailing spaces


203: Expected: 0 or 2; Actual: 1
Trailing spaces


207: Expected: 0 or 2; Actual: 1
Trailing spaces


211: Expected: 0 or 2; Actual: 1
Trailing spaces


212: Expected: 0 or 2; Actual: 1
Trailing spaces


54: null
Spaces inside emphasis markers


54: null
Spaces inside emphasis markers


54: null
Spaces inside emphasis markers


54: null
Spaces inside emphasis markers


175: null
Spaces inside emphasis markers


175: null
Spaces inside emphasis markers


213: null
Files should end with a single newline character

Additional comments not posted (10)
docs/config.md (4)

18-18: Add "BottomUp" to the list of pipeline options.

This addition aligns with the PR's objective to introduce a BottomUp model pipeline, ensuring the documentation is consistent with the new feature.


35-35: Add "pafs_gen" configuration for the BottomUp model.

This configuration is crucial for defining the generation of Part Affinity Fields (PAFs), which are essential for the BottomUp model's functionality.
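For readers skimming the config changes, a minimal sketch of where `pafs_gen` could sit in the config is shown below; the key names and values are assumptions for illustration, not the exact schema.

```python
# Hypothetical sketch of the new "pafs_gen" entry under data_config.train.preprocessing;
# key names and values are assumptions, not the exact schema.
data_config = {
    "pipeline": "BottomUp",
    "train": {
        "preprocessing": {
            "pafs_gen": {
                "sigma": 4.0,        # spread of the Gaussian used to rasterize each edge
                "output_stride": 4,  # PAFs generated at 1/4 of the input resolution
            },
        },
    },
}
```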


123-131: Update "head_configs" to include new head types for the BottomUp model.

The inclusion of 'PartAffinityFieldsHead' and 'MultiInstanceConfmapsHead' is necessary for the BottomUp model to function correctly, as these components handle different aspects of the model's output.
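A hypothetical `head_configs` entry for a BottomUp model, with field names inferred from the config docs in this PR (and therefore to be treated as assumptions), could look like this:

```python
# Hypothetical head_configs for a BottomUp model; field names follow the config docs
# but are assumptions, not the exact schema.
head_configs = [
    {
        "head_type": "MultiInstanceConfmapsHead",
        "head_config": {
            "part_names": ["head", "thorax", "abdomen"],
            "sigma": 2.5,
            "output_stride": 2,
        },
    },
    {
        "head_type": "PartAffinityFieldsHead",
        "head_config": {
            "edges": [["head", "thorax"], ["thorax", "abdomen"]],
            "sigma": 4.0,
            "output_stride": 4,
        },
    },
]
```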


207-213: Add new inference configuration options.

The addition of pafs_output_stride, return_confmaps, return_pafs, and return_paf_graph provides necessary controls for the inference process, particularly for handling output from the BottomUp model.
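A hedged sketch of the new inference flags (key names assumed, defaults illustrative only):

```python
# Hypothetical inference_config flags for a BottomUp run; exact key names may differ.
inference_config = {
    "pafs_output_stride": 4,    # stride at which the PAFs were generated
    "peak_threshold": 0.2,      # minimum confidence for a peak to be kept
    "return_confmaps": False,   # attach predicted confidence maps to the output
    "return_pafs": False,       # attach predicted part affinity fields to the output
    "return_paf_graph": False,  # attach the candidate graph used for PAF grouping
}
```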

sleap_nn/inference/predictors.py (6)

33-33: Added import for BottomUpModel.

This import is necessary for the new BottomUpPredictor class introduced in this PR.


37-37: Added import for BottomUpInferenceModel.

This import is necessary for the new BottomUpPredictor class to function correctly.


98-104: Introduced handling for PartAffinityFieldsHead in model type detection.

This change is crucial for supporting the new BottomUpPredictor which relies on PartAffinityFieldsHead.


155-161: Handling nested tensors during prediction output processing.

This addition ensures that nested tensors are correctly converted to numpy arrays, keeping them compatible with downstream code that expects numpy inputs.

Also applies to: 207-213
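As a sketch of the idea rather than the project's exact helper, converting possibly nested torch tensors to numpy can be done by unbinding the nested tensor first:

```python
import torch

def to_numpy(value):
    """Convert a (possibly nested) torch tensor to numpy; pass other values through."""
    if isinstance(value, torch.Tensor):
        if value.is_nested:
            # Nested tensors are ragged; unbind into a list of per-item arrays.
            return [t.cpu().numpy() for t in value.unbind()]
        return value.cpu().numpy()
    return value
```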


441-456: Added a KeyFilter to the data pipeline to ensure only necessary keys are passed through.

This is a good practice to ensure that only relevant data is processed, which can improve performance and reduce memory usage.
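Purely to illustrate the idea (not the actual KeyFilter class), a minimal key-filtering step over a stream of sample dicts could be written as follows; the key names in the comment are assumptions.

```python
from typing import Any, Dict, Iterable, Iterator

def key_filter(samples: Iterable[Dict[str, Any]], keep_keys: Iterable[str]) -> Iterator[Dict[str, Any]]:
    """Yield copies of each sample containing only the keys needed downstream."""
    keep = set(keep_keys)
    for sample in samples:
        yield {k: v for k, v in sample.items() if k in keep}

# e.g. keep only what inference needs (hypothetical key names):
# filtered = key_filter(pipeline, ["image", "video_idx", "frame_idx", "orig_size"])
```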


846-992: Introduction of BottomUpPredictor class.

This class is well-structured and aligns with the objectives of the PR to add bottom-up model pipeline capabilities. It handles initialization, preprocessing, and prediction for a trained BottomUp SLEAP-NN model.
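For orientation, a rough usage sketch follows; only `from_trained_models` and `make_pipeline` are named in this PR, so the checkpoint argument and the final prediction call are assumptions.

```python
# Rough usage sketch only; argument names and `predict()` are assumptions,
# not the confirmed sleap-nn API.
from sleap_nn.inference.predictors import BottomUpPredictor

predictor = BottomUpPredictor.from_trained_models(
    ckpt_path="models/bottomup_run",  # hypothetical checkpoint directory
)
predictor.make_pipeline()      # builds the data provider + preprocessing pipeline
labels = predictor.predict()   # assumed entry point returning sleap-io labeled frames
```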

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Outside diff range and nitpick comments (3)
tests/architectures/test_encoder_decoder.py (2)

Line range hint 10-10: Add a docstring to describe the purpose and functionality of test_simple_conv_block.

+ """Test the SimpleConvBlock for correct layer types and sequence."""

Line range hint 33-33: Add a docstring to describe the purpose and functionality of test_simple_upsampling_block.

+ """Test the SimpleUpsamplingBlock for correct upsampling behavior and output shapes."""
tests/architectures/test_swint.py (1)

Line range hint 1-1: Add a module docstring to provide an overview of the file's purpose and scope.

+ """Tests for Swin Transformer architecture."""
Review Details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between bdd4d2f and 88ad792.
Files selected for processing (8)
  • sleap_nn/inference/paf_grouping.py (4 hunks)
  • sleap_nn/inference/predictors.py (14 hunks)
  • tests/architectures/test_encoder_decoder.py (1 hunks)
  • tests/architectures/test_swint.py (1 hunks)
  • tests/data/test_normalization.py (2 hunks)
  • tests/inference/test_predictors.py (1 hunks)
  • tests/inference/test_topdown.py (1 hunks)
  • tests/training/test_model_trainer.py (3 hunks)
Files skipped from review as they are similar to previous changes (1)
  • sleap_nn/inference/paf_grouping.py
Additional Context Used
Ruff (9)
tests/architectures/test_encoder_decoder.py (3)

1-1: Missing docstring in public module


10-10: Missing docstring in public function


33-33: Missing docstring in public function

tests/architectures/test_swint.py (1)

1-1: Missing docstring in public module

tests/data/test_normalization.py (1)

1-1: Missing docstring in public module

tests/inference/test_predictors.py (1)

1-1: Missing docstring in public module

tests/inference/test_topdown.py (1)

1-1: Missing docstring in public module

tests/training/test_model_trainer.py (2)

71-71: Missing docstring in public function


271-271: Missing docstring in public function

Additional comments not posted (14)
tests/data/test_normalization.py (1)

26-30: The grayscale conversion test is correctly implemented.

tests/architectures/test_encoder_decoder.py (1)

123-123: The addition of transpose_convs_batch_norm_before_activation enhances model configurability.

tests/architectures/test_swint.py (1)

196-240: The tests for custom architecture configurations are comprehensive and correctly validate the expected outputs.

tests/inference/test_topdown.py (1)

26-66: The initialize_model function is correctly implemented to set up the model and data pipeline for inference testing.

tests/training/test_model_trainer.py (1)

Line range hint 226-430: The test_bottomup_model function is correctly implemented to test the BottomUp model training with comprehensive checks on output shapes and configurations.

tests/inference/test_predictors.py (3)

8-32: LGTM! The test is well-structured and covers various assertions to validate the functionality of the TopDownPredictor.


196-236: LGTM! The test is well-structured and covers various assertions to validate the functionality of the SingleInstancePredictor.


311-435: LGTM! The test is well-structured and covers various assertions to validate the functionality of the BottomUpPredictor.

sleap_nn/inference/predictors.py (6)

33-33: The BottomUpModel class has been added to the import list.

This addition aligns with the PR's objective to enhance the framework with bottom-up model capabilities.


37-37: Introduction of the BottomUpInferenceModel class.

This is a crucial addition for supporting the new bottom-up inference capabilities as described in the PR summary.


98-104: Addition of the BottomUpPredictor to handle specific model types.

This change is essential for supporting the new bottom-up model types and is consistent with the overall goal of the PR.


153-159: Handling of nested tensors during prediction output processing.

This ensures that the output tensors are correctly handled, especially when they are nested, which is important for the robustness of the inference process.


195-211: Enhanced preprocessing and tensor handling in the VideoReader data provider.

These changes are crucial for ensuring that video data is correctly preprocessed and that tensors are properly managed during batch processing.


837-1176: Introduction of the BottomUpPredictor class.

This new class is central to the implementation of the bottom-up model prediction capabilities. It handles initialization, preprocessing, and prediction, aligning perfectly with the PR's objectives to enhance the framework's capabilities.

Resolved review thread: tests/data/test_normalization.py
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Review Details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 88ad792 and 9c0d164.
Files selected for processing (2)
  • sleap_nn/inference/predictors.py (17 hunks)
  • tests/inference/test_predictors.py (1 hunks)
Additional Context Used
Ruff (1)
tests/inference/test_predictors.py (1)

1-1: Missing docstring in public module

Additional comments not posted (18)
tests/inference/test_predictors.py (5)

1-1: Add a module-level docstring to describe the purpose and scope of the tests in this file.


8-12: The test function test_topdown_predictor is well-documented and the logic appears sound. Ensure that the test covers all relevant cases for the TopDownPredictor.


237-237: Add a docstring to test_single_instance_predictor to explain the purpose of the test and its parameters.


353-353: Add a docstring to test_bottomup_predictor to explain the purpose of the test and its parameters.


8-8: Add a docstring to test_topdown_predictor to explain the purpose of the test and its parameters.

sleap_nn/inference/predictors.py (13)

883-943: The initialization method for the BottomUpPredictor is well-implemented, correctly setting up the model and PAF scorer based on the provided configurations.


951-986: The method from_trained_models correctly handles model loading and configuration, ensuring that the model is set to the appropriate device.


988-1068: The make_pipeline method is robust, handling different data providers and correctly setting up the data pipeline.


1070-1180: The method _make_labeled_frames_from_generator effectively converts generator output into SLEAP-specific labeled frames, handling the data structure conversion appropriately.


945-948: The data_config property method correctly retrieves the data configuration from the overall configuration.


Line range hint 283-304: The initialization method for the TopDownPredictor is well-implemented, setting up the model layers based on the provided configurations.


Line range hint 283-304: The method from_trained_models for TopDownPredictor correctly handles model loading and configuration, ensuring that the model is set to the appropriate device.


Line range hint 283-304: The make_pipeline method for TopDownPredictor is robust, handling different data providers and correctly setting up the data pipeline.


Line range hint 283-304: The method _make_labeled_frames_from_generator for TopDownPredictor effectively converts generator output into SLEAP-specific labeled frames, handling the data structure conversion appropriately.


Line range hint 283-304: The initialization method for the SingleInstancePredictor is well-implemented, setting up the model based on the provided configurations.


Line range hint 283-304: The method from_trained_models for SingleInstancePredictor correctly handles model loading and configuration, ensuring that the model is set to the appropriate device.


Line range hint 283-304: The make_pipeline method for SingleInstancePredictor is robust, handling different data providers and correctly setting up the data pipeline.


Line range hint 283-304: The method _make_labeled_frames_from_generator for SingleInstancePredictor effectively converts generator output into SLEAP-specific labeled frames, handling the data structure conversion appropriately.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Review Details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 9c0d164 and b7ad774.
Files selected for processing (3)
  • sleap_nn/architectures/encoder_decoder.py (1 hunks)
  • tests/architectures/test_encoder_decoder.py (1 hunks)
  • tests/architectures/test_unet.py (1 hunks)
Additional Context Used
Ruff (5)
tests/architectures/test_encoder_decoder.py (3)

1-1: Missing docstring in public module


10-10: Missing docstring in public function


33-33: Missing docstring in public function

tests/architectures/test_unet.py (2)

1-1: Missing docstring in public module


10-10: Missing docstring in public function

Additional comments not posted (3)
tests/architectures/test_encoder_decoder.py (1)

108-136: Added new configuration parameters to SimpleUpsamplingBlock for more flexible upsampling strategies. Ensure that these changes are reflected in the corresponding documentation and that all dependent code is updated accordingly.
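As an illustration of what such parameters toggle (not the actual SimpleUpsamplingBlock), a transposed-convolution upsampling step with configurable BatchNorm placement, mirroring the `transpose_convs_batch_norm_before_activation` idea mentioned above, could look like this:

```python
import torch
import torch.nn as nn

def transposed_upsample(channels: int, batch_norm_before_activation: bool) -> nn.Sequential:
    """2x learnable upsampling; the flag controls BatchNorm/activation ordering."""
    layers = [nn.ConvTranspose2d(channels, channels, kernel_size=2, stride=2)]
    if batch_norm_before_activation:
        layers += [nn.BatchNorm2d(channels), nn.ReLU()]
    else:
        layers += [nn.ReLU(), nn.BatchNorm2d(channels)]
    return nn.Sequential(*layers)

x = torch.zeros(1, 8, 32, 32)
print(transposed_upsample(8, batch_norm_before_activation=True)(x).shape)  # [1, 8, 64, 64]
```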

tests/architectures/test_unet.py (1)

174-199: Introduced block_contraction in Encoder configuration to potentially optimize model performance. Verify that this new configuration does not adversely affect the accuracy or performance of the model in real-world scenarios.

sleap_nn/architectures/encoder_decoder.py (1)

240-255: Refined the block_contraction logic in the Encoder class to allow for a configurable reduction in the number of filters. This enhancement could lead to more efficient models, especially in resource-constrained environments.
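Purely illustrative of what a configurable contraction means for filter counts (not the actual Encoder code):

```python
def encoder_filters(base_filters: int, n_blocks: int, block_contraction: bool) -> list:
    """Toy filter schedule: double per block, optionally contract the last block."""
    filters = [base_filters * (2 ** i) for i in range(n_blocks)]
    if block_contraction:
        filters[-1] //= 2  # use fewer filters in the final block to save parameters
    return filters

print(encoder_filters(32, 4, block_contraction=True))   # [32, 64, 128, 128]
print(encoder_filters(32, 4, block_contraction=False))  # [32, 64, 128, 256]
```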

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Outside diff range and nitpick comments (1)
tests/architectures/test_unet.py (1)

Line range hint 10-10: Please add a docstring to test_unet_reference to explain its purpose and the specific aspects of UNet it tests.

Review Details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between b7ad774 and 2cc34ee.
Files selected for processing (2)
  • sleap_nn/inference/paf_grouping.py (4 hunks)
  • tests/architectures/test_unet.py (1 hunks)
Files skipped from review as they are similar to previous changes (1)
  • sleap_nn/inference/paf_grouping.py
Additional Context Used
Ruff (2)
tests/architectures/test_unet.py (2)

1-1: Missing docstring in public module


10-10: Missing docstring in public function

Additional comments not posted (1)
tests/architectures/test_unet.py (1)

174-199: The addition of the block contraction test is well-integrated and correctly checks the expected outputs and features. Good job ensuring comprehensive testing coverage for this configuration.

Resolved review threads: sleap_nn/data/confidence_maps.py (×3, outdated), sleap_nn/data/edge_maps.py (outdated), sleap_nn/data/providers.py (outdated), sleap_nn/inference/bottomup.py (outdated), sleap_nn/inference/predictors.py (×3, two outdated)
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 8

Review Details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 2cc34ee and b092232.
Files selected for processing (10)
  • sleap_nn/data/augmentation.py (2 hunks)
  • sleap_nn/data/confidence_maps.py (11 hunks)
  • sleap_nn/data/instance_centroids.py (1 hunks)
  • sleap_nn/data/instance_cropping.py (3 hunks)
  • sleap_nn/data/normalization.py (1 hunks)
  • sleap_nn/data/pipelines.py (2 hunks)
  • sleap_nn/data/providers.py (2 hunks)
  • sleap_nn/inference/bottomup.py (1 hunks)
  • sleap_nn/inference/predictors.py (1 hunks)
  • tests/inference/test_predictors.py (1 hunks)
Files skipped from review due to trivial changes (2)
  • sleap_nn/data/instance_centroids.py
  • sleap_nn/data/normalization.py
Files skipped from review as they are similar to previous changes (3)
  • sleap_nn/data/confidence_maps.py
  • sleap_nn/data/instance_cropping.py
  • sleap_nn/data/providers.py
Additional Context Used
Ruff (1)
tests/inference/test_predictors.py (1)

1-1: Missing docstring in public module

Additional comments not posted (11)
sleap_nn/inference/bottomup.py (2)

10-78: The implementation of BottomUpInferenceModel is comprehensive and well-documented.


103-160: The forward method is well-structured and handles multiple outputs effectively.
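As a toy illustration of a multi-output forward pass (not the actual BottomUpInferenceModel; head names and flags are assumptions), the shape of the idea is:

```python
import torch
import torch.nn as nn

class TinyBottomUp(nn.Module):
    """Toy model returning several named outputs, so callers can keep confmaps/PAFs."""

    def __init__(self, n_nodes: int = 2, n_edges: int = 1):
        super().__init__()
        self.confmap_head = nn.Conv2d(3, n_nodes, kernel_size=1)
        self.paf_head = nn.Conv2d(3, 2 * n_edges, kernel_size=1)

    def forward(self, image: torch.Tensor, return_confmaps: bool = False,
                return_pafs: bool = False) -> dict:
        cms = self.confmap_head(image)
        pafs = self.paf_head(image)
        out = {"pred_instance_peaks": None}  # peak finding + PAF grouping would go here
        if return_confmaps:
            out["pred_confmaps"] = cms
        if return_pafs:
            out["pred_part_affinity_fields"] = pafs
        return out

out = TinyBottomUp()(torch.zeros(1, 3, 64, 64), return_confmaps=True)
```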

sleap_nn/data/augmentation.py (1)

Line range hint 294-310: Update in __iter__ method correctly handles reshaping of instances.

sleap_nn/data/pipelines.py (1)

293-381: The BottomUpPipeline class is well-implemented and aligns with the PR objectives.

tests/inference/test_predictors.py (4)

1-1: Add a module-level docstring to describe the purpose and scope of the tests in this file.


8-8: Add a docstring to test_topdown_predictor to explain the purpose of the test and its parameters.


237-237: Add a docstring to test_single_instance_predictor to explain the purpose of the test and its parameters.


353-353: Add a docstring to test_bottomup_predictor to explain the purpose of the test and its parameters.

sleap_nn/inference/predictors.py (3)

44-79: The base class Predictor is well-defined with clear documentation and attributes. Good use of type hints for clarity.


162-170: The method _convert_tensors_to_numpy efficiently handles tensor conversion. Ensure that is_nested attribute is properly defined in the tensor class.


286-371: The TopDownPredictor class is well-documented and the methods are clearly defined. However, ensure that the centroid_config and confmap_config are always synchronized when used.

Verification successful

The search results indicate that both centroid_config and confmap_config are used multiple times within sleap_nn/inference/predictors.py. Importantly, there is a conditional check (`if self.centroid_config and self.confmap_config:`), which suggests that synchronization between the two configurations is handled.

  • centroid_config is used in various contexts, including model initialization and configuration settings.
  • confmap_config is similarly used for model initialization and configuration settings.

The presence of the conditional check if self.centroid_config and self.confmap_config: ensures that both configurations are considered together when necessary.

Based on this evidence, it appears that the synchronization concern is addressed in the code.

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Description: Verify synchronization of `centroid_config` and `confmap_config`.

# Test: Search for usage of these configs to ensure they are used together. Expect: Consistent usage.
rg --type python 'centroid_config' 'sleap_nn/inference/predictors.py'
rg --type python 'confmap_config' 'sleap_nn/inference/predictors.py'

Length of output: 196



Script:

#!/bin/bash
# Description: Verify synchronization of `centroid_config` and `confmap_config`.

# Test: Search for usage of these configs to ensure they are used together. Expect: Consistent usage.
rg 'centroid_config' sleap_nn/inference/predictors.py
rg 'confmap_config' sleap_nn/inference/predictors.py

Length of output: 5182

Resolved review threads: sleap_nn/inference/bottomup.py (×2), sleap_nn/inference/predictors.py (×2, outdated)
@gitttt-1234 gitttt-1234 requested a review from talmo May 29, 2024 16:57
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

Review Details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between b092232 and 3288030.
Files selected for processing (1)
  • sleap_nn/inference/predictors.py (1 hunks)
Additional comments not posted (1)
sleap_nn/inference/predictors.py (1)

44-79: The class Predictor is well-documented and serves as a base for other predictors. Good use of Python's attrs library for concise attribute definitions.

Resolved review threads: sleap_nn/inference/predictors.py (×5, three outdated)
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Review Details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 3288030 and 4655651.
Files selected for processing (2)
  • docs/config.md (4 hunks)
  • sleap_nn/inference/predictors.py (1 hunks)
Additional Context Used
LanguageTool (47)
docs/config.md (47)

Near line 6: Loose punctuation mark.
Context: ... four main sections: - 1. data_config: Creating a data pipeline. - 2. `model_...
Rule ID: UNLIKELY_OPENING_PUNCTUATION


Near line 8: Loose punctuation mark.
Context: ...ng a data pipeline. - 2. model_config: Initialise the sleap-nn backbone and he...
Rule ID: UNLIKELY_OPENING_PUNCTUATION


Near line 10: Loose punctuation mark.
Context: ... and head models. - 3. trainer_config: Hyperparameters required to train the m...
Rule ID: UNLIKELY_OPENING_PUNCTUATION


Near line 12: Loose punctuation mark.
Context: ...with Lightning. - 4. inference_config: Inference related configs. Note:...
Rule ID: UNLIKELY_OPENING_PUNCTUATION


Near line 16: Loose punctuation mark.
Context: ... for val_data_loader. - data_config: - provider: (str) Provider class...
Rule ID: UNLIKELY_OPENING_PUNCTUATION


Near line 19: Loose punctuation mark.
Context: ...psPipeline" or "BottomUp". - train: - labels_path: (str) Path to ...
Rule ID: UNLIKELY_OPENING_PUNCTUATION


Near line 21: Possible missing article found.
Context: ...he image has 3 channels (RGB image). If input has only one channel when this ...
Rule ID: AI_HYDRA_LEO_MISSING_THE


Near line 22: Possible missing article found.
Context: ... is set to True, then the images from single-channel is replicated along the...
Rule ID: AI_HYDRA_LEO_MISSING_THE


Near line 23: Possible missing article found.
Context: ...s replicated along the channel axis. If input has three channels and this is ...
Rule ID: AI_HYDRA_LEO_MISSING_THE


Near line 24: Possible missing article found.
Context: ... to False, then we convert the image to grayscale (single-channel) image. ...
Rule ID: AI_HYDRA_LEO_MISSING_A


Near line 31: Loose punctuation mark.
Context: ...e same factor. - preprocessing: - anchor_ind: (int) Index...
Rule ID: UNLIKELY_OPENING_PUNCTUATION


Near line 32: Possible missing comma found.
Context: ...can significantly improve topdown model accuracy as they benefit from a consistent geome...
Rule ID: AI_HYDRA_LEO_MISSING_COMMA


Near line 34: Possible missing comma found.
Context: ...space. Larger values are easier to learn but are less precise with respect to the pe...
Rule ID: AI_HYDRA_LEO_MISSING_COMMA


Near line 34: ‘with respect to’ might be wordy. Consider a shorter alternative.
Context: ...re easier to learn but are less precise with respect to the peak coordinate. This spread is in ...
Rule ID: EN_WORDINESS_PREMIUM_WITH_RESPECT_TO


Near line 36: Loose punctuation mark.
Context: ...n. - augmentation_config: - random crop`: (Dict[...
Rule ID: UNLIKELY_OPENING_PUNCTUATION


Near line 64: Loose punctuation mark.
Context: ... to train structure) - model_config: - init_weight: (str) model weigh...
Rule ID: UNLIKELY_OPENING_PUNCTUATION


Near line 67: Loose punctuation mark.
Context: ...win_B_Weights"]. - backbone_config: - backbone_type: (str) Backbo...
Rule ID: UNLIKELY_OPENING_PUNCTUATION


Near line 78: This phrase might be redundant. Consider either removing or replacing the adjective ‘additional’.
Context: ... - middle_block: (bool) If True, add an additional block at the end of the encoder. default: Tru...
Rule ID: ADD_AN_ADDITIONAL


Near line 80: Possible missing comma found.
Context: ... for upsampling. Interpolation is faster but transposed convolutions may...
Rule ID: AI_HYDRA_LEO_MISSING_COMMA


Near line 103: Possible missing comma found.
Context: ... for upsampling. Interpolation is faster but transposed convolutions may...
Rule ID: AI_HYDRA_LEO_MISSING_COMMA


Near line 108: Loose punctuation mark.
Context: .... Default: "tiny". - arch: Dictionary of embed dimension, depths a...
Rule ID: UNLIKELY_OPENING_PUNCTUATION


Near line 120: Possible missing comma found.
Context: ... for upsampling. Interpolation is faster but transposed convolutions may...
Rule ID: AI_HYDRA_LEO_MISSING_COMMA


Near line 123: Possible missing comma found.
Context: ...List[dict]) List of heads in the model. For eg, BottomUp model has both 'MultiInsta...
Rule ID: AI_HYDRA_LEO_MISSING_COMMA


Near line 128: Possible missing comma found.
Context: ...can significantly improve topdown model accuracy as they benefit from a consistent geome...
Rule ID: AI_HYDRA_LEO_MISSING_COMMA


Near line 129: Possible missing comma found.
Context: ...space. Larger values are easier to learn but are less precise with respect to the pe...
Rule ID: AI_HYDRA_LEO_MISSING_COMMA


Near line 129: ‘with respect to’ might be wordy. Consider a shorter alternative.
Context: ...re easier to learn but are less precise with respect to the peak coordinate. This spread is in ...
Rule ID: EN_WORDINESS_PREMIUM_WITH_RESPECT_TO


Near line 139: Possible missing article found.
Context: ...he batch size. If False and the size of dataset is not divisible by the batch size, the...
Rule ID: AI_HYDRA_LEO_MISSING_THE


Near line 143: Possible typo: you repeated a word
Context: ...ease note that the monitors are checked every every_n_epochs epochs. if save_top_k >= 2 and...
Rule ID: ENGLISH_WORD_REPEAT_RULE


Near line 143: Possible typo: you repeated a word
Context: ... the monitors are checked every every_n_epochs epochs. if save_top_k >= 2 and the callback is...
Rule ID: ENGLISH_WORD_REPEAT_RULE


Near line 146: Possible missing comma found.
Context: ... - monitor: (str) Quantity to monitor for e.g., "val_loss". When None, this saves...
Rule ID: AI_HYDRA_LEO_MISSING_COMMA


Near line 173: Possible missing comma found.
Context: ...onitored has stopped decreasing; in max mode it will be reduced when the quantity mo...
Rule ID: AI_HYDRA_LEO_MISSING_COMMA


Near line 177: Possible missing comma found.
Context: ...tience`: (int) Number of epochs with no improvement after which learning rate will be reduc...
Rule ID: AI_HYDRA_LEO_MISSING_COMMA


Near line 179: Possible missing comma found.
Context: ...arning rate of all param groups or each group respectively. Default: 0. - `inferen...
Rule ID: AI_HYDRA_LEO_MISSING_COMMA


Near line 181: Loose punctuation mark.
Context: ...ely. Default: 0. - inference_config: - device: (str) Device on which t...
Rule ID: UNLIKELY_OPENING_PUNCTUATION


Near line 183: Loose punctuation mark.
Context: ... "ideep", "hip", "msnpu"). - data: - path: (str) Path to .slp ...
Rule ID: UNLIKELY_OPENING_PUNCTUATION


Near line 190: Possible missing article found.
Context: ...he image has 3 channels (RGB image). If input has only one channel when this ...
Rule ID: AI_HYDRA_LEO_MISSING_THE


Near line 191: Possible missing article found.
Context: ... is set to True, then the images from single-channel is replicated along the...
Rule ID: AI_HYDRA_LEO_MISSING_THE


Near line 192: Possible missing article found.
Context: ...s replicated along the channel axis. If input has three channels and this is ...
Rule ID: AI_HYDRA_LEO_MISSING_THE


Near line 193: Possible missing article found.
Context: ... to False, then we convert the image to grayscale (single-channel) image. ...
Rule ID: AI_HYDRA_LEO_MISSING_A


Near line 203: Loose punctuation mark.
Context: ... the default. - preprocessing: - anchor_ind: (int) Inde...
Rule ID: UNLIKELY_OPENING_PUNCTUATION


Near line 204: Possible missing comma found.
Context: ...can significantly improve topdown model accuracy as they benefit from a consistent geome...
Rule ID: AI_HYDRA_LEO_MISSING_COMMA


Near line 208: Loose punctuation mark.
Context: ...the input image. - peak_threshold: float between 0 and 1. Minimum confid...
Rule ID: UNLIKELY_OPENING_PUNCTUATION


Near line 209: Loose punctuation mark.
Context: ... be ignored. - integral_refinement: If None, returns the grid-aligned pea...
Rule ID: UNLIKELY_OPENING_PUNCTUATION


Near line 210: Loose punctuation mark.
Context: ... regression. - integral_patch_size: Size of patches to crop around each rou...
Rule ID: UNLIKELY_OPENING_PUNCTUATION


Near line 211: Loose punctuation mark.
Context: ... integer scalar. - return_confmaps: If True, predicted confidence maps wi...
Rule ID: UNLIKELY_OPENING_PUNCTUATION


Near line 212: Loose punctuation mark.
Context: ... values and points. - return_pafs: If True, predicted part affinity fiel...
Rule ID: UNLIKELY_OPENING_PUNCTUATION


Near line 213: Loose punctuation mark.
Context: ...es and points. - return_paf_graph: If True, the part affinity field grap...
Rule ID: UNLIKELY_OPENING_PUNCTUATION

Markdownlint (185)
docs/config.md (185)

17: Expected: 2; Actual: 4
Unordered list indentation


18: Expected: 2; Actual: 4
Unordered list indentation


19: Expected: 2; Actual: 4
Unordered list indentation


20: Expected: 4; Actual: 8
Unordered list indentation


21: Expected: 4; Actual: 8
Unordered list indentation


26: Expected: 4; Actual: 8
Unordered list indentation


28: Expected: 4; Actual: 8
Unordered list indentation


30: Expected: 4; Actual: 8
Unordered list indentation


31: Expected: 4; Actual: 8
Unordered list indentation


32: Expected: 6; Actual: 12
Unordered list indentation


33: Expected: 6; Actual: 12
Unordered list indentation


34: Expected: 6; Actual: 12
Unordered list indentation


35: Expected: 6; Actual: 12
Unordered list indentation


36: Expected: 6; Actual: 12
Unordered list indentation


37: Expected: 8; Actual: 16
Unordered list indentation


38: Expected: 8; Actual: 16
Unordered list indentation


39: Expected: 8; Actual: 16
Unordered list indentation


40: Expected: 10; Actual: 20
Unordered list indentation


41: Expected: 12; Actual: 24
Unordered list indentation


42: Expected: 12; Actual: 24
Unordered list indentation


43: Expected: 12; Actual: 24
Unordered list indentation


44: Expected: 12; Actual: 24
Unordered list indentation


45: Expected: 12; Actual: 24
Unordered list indentation


46: Expected: 12; Actual: 24
Unordered list indentation


47: Expected: 12; Actual: 24
Unordered list indentation


48: Expected: 12; Actual: 24
Unordered list indentation


49: Expected: 12; Actual: 24
Unordered list indentation


50: Expected: 10; Actual: 20
Unordered list indentation


51: Expected: 12; Actual: 24
Unordered list indentation


52: Expected: 12; Actual: 24
Unordered list indentation


54: Expected: 12; Actual: 24
Unordered list indentation


55: Expected: 12; Actual: 24
Unordered list indentation


56: Expected: 12; Actual: 24
Unordered list indentation


57: Expected: 12; Actual: 24
Unordered list indentation


58: Expected: 12; Actual: 24
Unordered list indentation


59: Expected: 12; Actual: 24
Unordered list indentation


60: Expected: 12; Actual: 24
Unordered list indentation


61: Expected: 12; Actual: 24
Unordered list indentation


62: Expected: 2; Actual: 4
Unordered list indentation


65: Expected: 2; Actual: 4
Unordered list indentation


66: Expected: 2; Actual: 4
Unordered list indentation


67: Expected: 2; Actual: 4
Unordered list indentation


68: Expected: 4; Actual: 8
Unordered list indentation


69: Expected: 4; Actual: 8
Unordered list indentation


70: Expected: 6; Actual: 12
Unordered list indentation


71: Expected: 6; Actual: 12
Unordered list indentation


72: Expected: 6; Actual: 12
Unordered list indentation


73: Expected: 6; Actual: 12
Unordered list indentation


74: Expected: 6; Actual: 12
Unordered list indentation


76: Expected: 6; Actual: 12
Unordered list indentation


78: Expected: 6; Actual: 12
Unordered list indentation


79: Expected: 6; Actual: 12
Unordered list indentation


83: Expected: 6; Actual: 12
Unordered list indentation


84: Expected: 6; Actual: 12
Unordered list indentation


88: Expected: 6; Actual: 12
Unordered list indentation


89: Expected: 6; Actual: 12
Unordered list indentation


90: Expected: 4; Actual: 8
Unordered list indentation


91: Expected: 6; Actual: 12
Unordered list indentation


92: Expected: 8; Actual: 16
Unordered list indentation


93: Expected: 8; Actual: 16
Unordered list indentation


94: Expected: 6; Actual: 12
Unordered list indentation


95: Expected: 6; Actual: 12
Unordered list indentation


96: Expected: 6; Actual: 12
Unordered list indentation


97: Expected: 6; Actual: 12
Unordered list indentation


98: Expected: 6; Actual: 12
Unordered list indentation


99: Expected: 6; Actual: 12
Unordered list indentation


100: Expected: 6; Actual: 12
Unordered list indentation


101: Expected: 6; Actual: 12
Unordered list indentation


102: Expected: 6; Actual: 12
Unordered list indentation


106: Expected: 4; Actual: 8
Unordered list indentation


107: Expected: 6; Actual: 12
Unordered list indentation


108: Expected: 6; Actual: 12
Unordered list indentation


111: Expected: 6; Actual: 12
Unordered list indentation


112: Expected: 6; Actual: 12
Unordered list indentation


113: Expected: 6; Actual: 12
Unordered list indentation


114: Expected: 6; Actual: 12
Unordered list indentation


115: Expected: 6; Actual: 12
Unordered list indentation


116: Expected: 6; Actual: 12
Unordered list indentation


117: Expected: 6; Actual: 12
Unordered list indentation


118: Expected: 6; Actual: 12
Unordered list indentation


119: Expected: 6; Actual: 12
Unordered list indentation


123: Expected: 2; Actual: 4
Unordered list indentation


124: Expected: 4; Actual: 8
Unordered list indentation


125: Expected: 4; Actual: 8
Unordered list indentation


126: Expected: 6; Actual: 12
Unordered list indentation


127: Expected: 6; Actual: 12
Unordered list indentation


128: Expected: 6; Actual: 12
Unordered list indentation


129: Expected: 6; Actual: 12
Unordered list indentation


130: Expected: 6; Actual: 12
Unordered list indentation


131: Expected: 6; Actual: 12
Unordered list indentation


134: Expected: 2; Actual: 4
Unordered list indentation


135: Expected: 4; Actual: 8
Unordered list indentation


136: Expected: 4; Actual: 8
Unordered list indentation


137: Expected: 4; Actual: 8
Unordered list indentation


138: Expected: 4; Actual: 8
Unordered list indentation


139: Expected: 4; Actual: 8
Unordered list indentation


140: Expected: 4; Actual: 8
Unordered list indentation


141: Expected: 2; Actual: 4
Unordered list indentation


142: Expected: 2; Actual: 4
Unordered list indentation


143: Expected: 4; Actual: 8
Unordered list indentation


144: Expected: 4; Actual: 8
Unordered list indentation


145: Expected: 4; Actual: 8
Unordered list indentation


146: Expected: 4; Actual: 8
Unordered list indentation


147: Expected: 4; Actual: 8
Unordered list indentation


148: Expected: 2; Actual: 4
Unordered list indentation


149: Expected: 4; Actual: 8
Unordered list indentation


150: Expected: 4; Actual: 8
Unordered list indentation


151: Expected: 4; Actual: 8
Unordered list indentation


152: Expected: 2; Actual: 4
Unordered list indentation


153: Expected: 2; Actual: 4
Unordered list indentation


154: Expected: 2; Actual: 4
Unordered list indentation


155: Expected: 2; Actual: 4
Unordered list indentation


156: Expected: 2; Actual: 4
Unordered list indentation


157: Expected: 2; Actual: 4
Unordered list indentation


158: Expected: 2; Actual: 4
Unordered list indentation


159: Expected: 2; Actual: 4
Unordered list indentation


160: Expected: 2; Actual: 4
Unordered list indentation


161: Expected: 2; Actual: 4
Unordered list indentation


162: Expected: 4; Actual: 8
Unordered list indentation


163: Expected: 4; Actual: 8
Unordered list indentation


164: Expected: 4; Actual: 8
Unordered list indentation


165: Expected: 4; Actual: 8
Unordered list indentation


166: Expected: 4; Actual: 8
Unordered list indentation


167: Expected: 4; Actual: 8
Unordered list indentation


168: Expected: 2; Actual: 4
Unordered list indentation


169: Expected: 2; Actual: 4
Unordered list indentation


170: Expected: 4; Actual: 8
Unordered list indentation


171: Expected: 4; Actual: 8
Unordered list indentation


172: Expected: 2; Actual: 4
Unordered list indentation


173: Expected: 4; Actual: 8
Unordered list indentation


174: Expected: 4; Actual: 8
Unordered list indentation


175: Expected: 4; Actual: 8
Unordered list indentation


176: Expected: 4; Actual: 8
Unordered list indentation


177: Expected: 4; Actual: 8
Unordered list indentation


178: Expected: 4; Actual: 8
Unordered list indentation


179: Expected: 4; Actual: 8
Unordered list indentation


182: Expected: 2; Actual: 4
Unordered list indentation


183: Expected: 2; Actual: 4
Unordered list indentation


184: Expected: 4; Actual: 8
Unordered list indentation


185: Expected: 4; Actual: 8
Unordered list indentation


187: Expected: 4; Actual: 8
Unordered list indentation


189: Expected: 4; Actual: 8
Unordered list indentation


190: Expected: 4; Actual: 8
Unordered list indentation


195: Expected: 4; Actual: 8
Unordered list indentation


196: Expected: 4; Actual: 8
Unordered list indentation


197: Expected: 4; Actual: 8
Unordered list indentation


198: Expected: 6; Actual: 12
Unordered list indentation


199: Expected: 6; Actual: 12
Unordered list indentation


200: Expected: 6; Actual: 12
Unordered list indentation


201: Expected: 6; Actual: 12
Unordered list indentation


203: Expected: 4; Actual: 8
Unordered list indentation


204: Expected: 6; Actual: 12
Unordered list indentation


205: Expected: 6; Actual: 12
Unordered list indentation


206: Expected: 6; Actual: 12
Unordered list indentation


207: Expected: 6; Actual: 12
Unordered list indentation


208: Expected: 2; Actual: 4
Unordered list indentation


209: Expected: 2; Actual: 4
Unordered list indentation


210: Expected: 2; Actual: 4
Unordered list indentation


211: Expected: 2; Actual: 4
Unordered list indentation


212: Expected: 2; Actual: 4
Unordered list indentation


213: Expected: 2; Actual: 4
Unordered list indentation


16: Expected: 0 or 2; Actual: 1
Trailing spaces


33: Expected: 0 or 2; Actual: 1
Trailing spaces


35: Expected: 0 or 2; Actual: 1
Trailing spaces


39: Expected: 0 or 2; Actual: 1
Trailing spaces


40: Expected: 0 or 2; Actual: 1
Trailing spaces


64: Expected: 0 or 2; Actual: 1
Trailing spaces


94: Expected: 0 or 2; Actual: 1
Trailing spaces


107: Expected: 0 or 2; Actual: 1
Trailing spaces


127: Expected: 0 or 2; Actual: 1
Trailing spaces


133: Expected: 0 or 2; Actual: 1
Trailing spaces


159: Expected: 0 or 2; Actual: 1
Trailing spaces


182: Expected: 0 or 2; Actual: 1
Trailing spaces


194: Expected: 0 or 2; Actual: 1
Trailing spaces


203: Expected: 0 or 2; Actual: 1
Trailing spaces


207: Expected: 0 or 2; Actual: 1
Trailing spaces


211: Expected: 0 or 2; Actual: 1
Trailing spaces


212: Expected: 0 or 2; Actual: 1
Trailing spaces


54: null
Spaces inside emphasis markers


54: null
Spaces inside emphasis markers


54: null
Spaces inside emphasis markers


54: null
Spaces inside emphasis markers


175: null
Spaces inside emphasis markers


175: null
Spaces inside emphasis markers


213: null
Files should end with a single newline character

Additional comments not posted (12)
docs/config.md (8)

18-18: Add "BottomUp" to the pipeline options.

This addition aligns with the PR's objective to introduce a comprehensive pipeline for BottomUp models.


35-35: Addition of pafs_gen configuration for the BottomUp model.

This configuration is crucial for defining the generation of Part Affinity Fields, which are essential for the BottomUp approach.


123-123: Update to head_configs to include new configurations for the BottomUp model.

The inclusion of 'PartAffinityFieldsHead' and 'MultiInstanceConfmapsHead' is necessary for the new model's functionality.


126-126: Specify body parts for model heads.

This configuration allows for flexible specification of body parts, which is important for customizing model outputs.


127-127: Define edges for 'PartAffinityFieldsHead'.

This is a critical addition for the BottomUp model as it defines the connections between body parts.


128-128: Anchor point configuration for 'CenteredInstanceConfmapsHead'.

Properly setting the anchor point can significantly enhance model accuracy by ensuring consistent geometry.


207-207: Add pafs_output_stride configuration.

This setting is important for defining the resolution of the output part affinity fields, which impacts both performance and accuracy.


211-213: Return additional outputs for inference.

These options (return_confmaps, return_pafs, return_paf_graph) provide flexibility in the output, allowing users to choose what additional data they need post-inference.

sleap_nn/inference/predictors.py (4)

883-1218: Addition of BottomUpPredictor class.

The newly added BottomUpPredictor class is well-structured and aligns with the objectives of the PR to enhance the BottomUp model pipeline. The methods within the class are well-documented and the attributes are clearly defined. Good use of type annotations for clarity.


655-881: Addition of SingleInstancePredictor class.

The SingleInstancePredictor class has been successfully added and is consistent with the PR's aim to improve inference capabilities. The class methods are appropriately abstracted and the documentation is clear, aiding in maintainability and understanding.


286-652: Refactor of TopDownPredictor class.

The refactoring of the TopDownPredictor class appears comprehensive and well-executed. The changes are in line with the PR's description of enhancing the inference modules. The class now handles multiple configurations and model types more robustly.


44-284: General enhancements in Predictor base class.

The modifications in the Predictor base class, including the addition of new attributes and methods, enhance its functionality and flexibility. These changes support a wider range of model types and configurations, which is crucial for the scalability of the inference process.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 11

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 4655651 and a4b9722.

Files selected for processing (1)
  • sleap_nn/inference/predictors.py (1 hunks)
Additional context used
Ruff
sleap_nn/inference/predictors.py

82-82: typing.Text is deprecated, use str (UP019)

Replace with str


244-244: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling (B904)


386-386: typing.Text is deprecated, use str (UP019)

Replace with str


387-387: typing.Text is deprecated, use str (UP019)

Replace with str


575-575: Use key in dict instead of key in dict.keys() (SIM118)

Remove .keys()


616-616: Loop control variable org_size not used within loop body (B007)

Rename unused org_size to _org_size


697-697: typing.Text is deprecated, use str (UP019)

Replace with str


813-813: Use key in dict instead of key in dict.keys() (SIM118)

Remove .keys()


851-851: Loop control variable org_size not used within loop body (B007)

Rename unused org_size to _org_size


951-952: Use key in dict instead of key in dict.keys() (SIM118)

Remove .keys()


957-957: Use key in dict instead of key in dict.keys() (SIM118)

Remove .keys()


962-962: Use key in dict instead of key in dict.keys() (SIM118)

Remove .keys()


967-967: Use key in dict instead of key in dict.keys() (SIM118)

Remove .keys()


972-972: Use key in dict instead of key in dict.keys() (SIM118)

Remove .keys()


1004-1004: typing.Text is deprecated, use str (UP019)

Replace with str


1125-1125: Use key in dict instead of key in dict.keys() (SIM118)

Remove .keys()

Resolved review threads: sleap_nn/inference/predictors.py (×10)
@talmo talmo merged commit 0d7034a into main Jun 26, 2024
6 checks passed
@talmo talmo deleted the divya/bottom-up-model branch June 26, 2024 22:18
@coderabbitai coderabbitai bot mentioned this pull request Sep 12, 2024
Labels: none yet
Projects: none yet
Development: Successfully merging this pull request may close these issues: "Refactor inference modules", "Bottom-up predictor".
2 participants