
Conversation

@ebrahimebrahim
Contributor

@ebrahimebrahim ebrahimebrahim commented Nov 19, 2024

Closes #139

This also finally closes #145, though most of the work to actually close it was done in #179.

Remaining TODO

Outdated comments (ignore this section)

(The comments below are from before virtual fitting was as developed as it is now; I just don't want to delete the history.)

This is intended to be a minimal integration of the virtual fitting algorithm OpenwaterHealth/OpenLIFU-python#147 into SlicerOpenLIFU. After this is done we can work on refining the virtual fitting "workflow" and all that, but here we just want to click "virtual fit", have the algorithm run, and then see the transducer position update.

Some development on openlifu-python is needed before this can be ready.


Note to self: I started work on the branch update_vf_placeholder_to_work_with_P#179 at 934e86d, which is in this PR, and that branch is likely to be merged to main before this PR. So when rebasing this PR onto main I may find myself having to remove the first commit.

@ebrahimebrahim ebrahimebrahim linked an issue Nov 19, 2024 that may be closed by this pull request
Comment on lines 80 to 110
def update_transform(self, transform_matrix: np.ndarray, transform_matrix_units: Optional[str] = None):
    """Update the transducer transform by postcomposing an additional matrix on top of the current transform.

    The transform_matrix is assumed to be in "openlifu" style transducer coordinates, which is currently hardcoded to being LPS,
    so this function does the needed conversions.
    This function is useful for applying transform updates that come from algorithms in openlifu-python,
    where the transform would be in openlifu conventions.
    """
Contributor Author

@sadhana-r I just want to make you aware that this SlicerOpenLIFUTransducer.update_transform function is going to be part of my virtual fitting PR. I think your transducer tracking work would also involve an update to the transducer transform, so this could save you the effort of having to write that function.
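For reference, a minimal sketch of the LPS-to-RAS bookkeeping a function like this has to do. The helper name postcompose_openlifu_matrix and its structure are illustrative only; the real update_transform in this PR additionally takes transform_matrix_units and updates the transducer's MRML transform node.

import numpy as np

# LPS <-> RAS is a sign flip on the first two axes; the conversion matrix is its own inverse.
LPS2RAS = np.diag([-1.0, -1.0, 1.0, 1.0])

def postcompose_openlifu_matrix(current_ras: np.ndarray, update_lps: np.ndarray) -> np.ndarray:
    """Postcompose an openlifu-style (LPS) 4x4 update onto a transform stored in RAS.

    Illustrative sketch: conjugate the LPS update into RAS, then apply it on top
    of the current transform.
    """
    update_ras = LPS2RAS @ update_lps @ LPS2RAS  # change of basis; LPS2RAS is its own inverse
    return update_ras @ current_ras

# Sanity check: an identity update leaves the transform unchanged.
assert np.allclose(postcompose_openlifu_matrix(np.eye(4), np.eye(4)), np.eye(4))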

@ltetrel

ltetrel commented Nov 20, 2024

Another thing I am wondering is where the skin surface is extracted. Maybe the skin surface is assumed to be one of the inputs to the virtual fitting algorithm, since we aren't doing "Implement the virtual fitting optimization metric" (OpenLIFU-python#148) yet and we haven't decided how we'd approach that?

IMO the surface extraction should be done by the virtual fitting, because it is really specific to the search for the transducer position. But I am not sure how we will handle the timing; short-term we can have it done through the Slicer API easily (in the UI: I made a segmentation using auto/Otsu thresholding, then right-clicked on the segmentation and chose "Export visible segments to models").
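The same idea as a standalone sketch, assuming a plain NumPy head volume (this is not how the Slicer Segmentations module or openlifu-python actually implements it):

import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import marching_cubes

def extract_skin_surface(volume, spacing=(1.0, 1.0, 1.0)):
    """Return (vertices, faces) of the outer head surface from a scalar volume.

    Otsu thresholding separates the head from the background, then marching cubes
    extracts the boundary surface at that threshold.
    """
    level = threshold_otsu(volume)
    verts, faces, _normals, _values = marching_cubes(
        volume.astype(float), level=level, spacing=spacing
    )
    return verts, faces

# Tiny synthetic example: a bright ball on a dark background stands in for a head.
zz, yy, xx = np.mgrid[:64, :64, :64]
ball = (((xx - 32) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2) < 20 ** 2).astype(float) * 100.0
verts, faces = extract_skin_surface(ball)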

@ltetrel

ltetrel commented Nov 20, 2024

We are currently resampling the volume into the simulation coordinate grid, but we probably want to have a separate "virtual fitting" coordinate grid, so we might need another version of make_xarray_in_transducer_coords_from_volume. See OpenwaterHealth/OpenLIFU-python#165 (comment).

Having everything in LPS can actually help user interaction on the Slicer side (if we stay in the same coordinates, i.e. LPS, we can show output from virtual fitting to the user, like the surface, the transducer position, etc.).
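For concreteness, a generic sketch of resampling a volume onto a regular grid defined in another coordinate system (the names and the scipy-based approach are illustrative; make_xarray_in_transducer_coords_from_volume in openlifu-python has its own interface and conventions):

import numpy as np
from scipy.ndimage import map_coordinates

def resample_volume_on_grid(volume, coords_to_ijk, x, y, z):
    """Sample `volume` (indexed [i, j, k]) on a regular grid given in a target coordinate system.

    coords_to_ijk is a 4x4 affine taking target-space (x, y, z) points to the
    volume's continuous voxel indices; x, y, z are 1-D arrays of grid coordinates.
    """
    X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
    pts = np.stack([X.ravel(), Y.ravel(), Z.ravel(), np.ones(X.size)])  # 4 x N homogeneous points
    ijk = (coords_to_ijk @ pts)[:3]  # 3 x N continuous voxel coordinates
    samples = map_coordinates(volume, ijk, order=1, mode="constant", cval=0.0)
    return samples.reshape(X.shape)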

@ebrahimebrahim ebrahimebrahim force-pushed the 139-add-virtual-fitting-ui branch 2 times, most recently from 0f4e025 to 94e52ab on February 14, 2025 06:05
@ebrahimebrahim ebrahimebrahim force-pushed the 139-add-virtual-fitting-ui branch 6 times, most recently from e91f63f to a3320f6 on March 13, 2025 03:02
@ebrahimebrahim ebrahimebrahim marked this pull request as ready for review March 13, 2025 03:02
@ebrahimebrahim ebrahimebrahim force-pushed the 139-add-virtual-fitting-ui branch from a3320f6 to 0d1a1a4 on March 13, 2025 03:05
@ebrahimebrahim
Contributor Author

@sadhana-r @arhowe00 Added you both as reviewers for visibility, but I am not expecting anything from this review; you can just approve :)

What this adds is:

  • virtual fitting placeholder removed and replaced by actual virtual fitting: when you click "virtual fit" the algorithm runs and at the end the transducer transform is updated
  • virtual fitting status messages are shown in the bottom status bar while the algorithm runs (a rough sketch of this pattern is shown below)
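Roughly this pattern for the status messages, sketched with illustrative names (the actual module code may differ):

import slicer

def report_status(message):
    # Show a message in Slicer's bottom status bar and let the GUI repaint,
    # so updates stay visible during a long-running computation.
    slicer.util.mainWindow().statusBar().showMessage(message)
    slicer.app.processEvents()

# Hypothetical usage around the algorithm call:
# report_status("Virtual fitting: running...")
# run_virtual_fitting(...)  # placeholder for the actual call
# report_status("Virtual fitting: done, updating transducer transform.")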

(@sadhana-r If this gets merged before yours, there will be the annoyance of conflicts with the skinseg tools and IJK transform modifications that are also in one of your working branches, sorry if so! I can always help out if a rebase gets gnarly, so don't hesitate to let me know!)

@sadhana-r
Contributor

(@sadhana-r If this gets merged before yours, there will be the annoyance of conflicts with the skinseg tools and IJK transform modifications that are also in one of your working branches, sorry if so! I can always help out if a rebase gets gnarly, so don't hesitate to let me know!)

I am going to merge my PR now

@ebrahimebrahim ebrahimebrahim force-pushed the 139-add-virtual-fitting-ui branch from 0d1a1a4 to 0df1624 on March 13, 2025 17:51
@ebrahimebrahim
Contributor Author

(just rebased to main, still awaiting approval)

Contributor

@sadhana-r sadhana-r left a comment

Very cool to have this integrated! I just tested it out on a couple of sessions and it works well. I noticed that the virtual fit transforms don't get added under the transducer parent folder (unlike the transforms that get loaded in), but otherwise it seems to work as expected.

@ebrahimebrahim
Contributor Author

Yay, thanks for testing!

Good point, I need to put the nodes under the correct folder... I think I will do that before merging.

@ebrahimebrahim ebrahimebrahim removed the request for review from arhowe00 March 13, 2025 19:22
Comment on lines +172 to +178
def move_node_into_transducer_sh_folder(self, node: vtkMRMLNode) -> None:
    """In the subject hierarchy, move the given `node` into this transducer's transform node folder."""
    shNode = slicer.vtkMRMLSubjectHierarchyNode.GetSubjectHierarchyNode(slicer.mrmlScene)
    shNode.SetItemParent(
        shNode.GetItemByDataNode(node),
        shNode.GetItemParent(shNode.GetItemByDataNode(self.transform_node)),
    )
Contributor Author

@sadhana-r I added this function under SlicerOpenLIFUTransducer to make it simpler to move things into the transducer folder.
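Hypothetical usage once a virtual-fit result transform node exists (the variable names here are illustrative):

# `transducer` is a SlicerOpenLIFUTransducer; `vf_transform_node` is the node produced by virtual fitting.
transducer.move_node_into_transducer_sh_folder(vf_transform_node)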

@ebrahimebrahim ebrahimebrahim merged commit 698f829 into main Mar 14, 2025
ebrahimebrahim added a commit that referenced this pull request Mar 20, 2025
This also updates the ArrayTransform imports since I moved those.
This commit is basically just updating SlicerOpenLIFU to work with
the changes introduced in
OpenwaterHealth/OpenLIFU-python#165
and
OpenwaterHealth/OpenLIFU-python#166

Development

Successfully merging this pull request may close these issues:

  • Update to the transform-saving approach to virtual fitting
  • Add virtual fitting UI