Integrate virtual fitting #141
Conversation
```python
def update_transform(self, transform_matrix: np.ndarray, transform_matrix_units: Optional[str] = None):
    """Update the transducer transform by postcomposing an additional matrix on top of the current transform.

    The transform_matrix is assumed to be in "openlifu" style transducer coordinates, which is
    currently hardcoded to being LPS, so this function does the needed conversions. This function
    is useful for applying transform updates that come from algorithms in openlifu-python, where
    the transform would be in openlifu conventions.
    """
```
@sadhana-r I just want to make you aware that this SlicerOpenLIFUTransducer.update_transform function is going to be part of my virtual fitting PR. I think your transducer tracking work would also involve an update to the transducer transform, so this could save you the effort of having to write that function.
IMO the surface extraction should be done by the virtual fitting, because it is really specific to the search for the transducer position. But I am not sure how we will handle the timing; short-term we can have it done through the Slicer API easily (in the UI: I made a segmentation using auto/Otsu thresholding, then right-clicked on the segmentation and …)
Having everything in LPS can actually help user interaction on the Slicer side (if we stay in the same coordinates, i.e. LPS, we can show output from virtual fitting to the user: surface, transducer position, etc.).
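For reference, the manual UI steps described above can be scripted. The sketch below is a hedged approximation, not code from this PR: it assumes SimpleITK (bundled with Slicer), uses a hypothetical volume node name, and the threshold polarity may need flipping depending on the image.

```python
import SimpleITK as sitk
import sitkUtils
import slicer

volumeNode = slicer.util.getNode("CT_volume")  # hypothetical node name

# Otsu threshold via SimpleITK: insideValue=0 below the threshold, outsideValue=1 above.
image = sitkUtils.PullVolumeFromSlicer(volumeNode)
mask = sitk.OtsuThreshold(image, 0, 1)

# Write the binary mask into a new segmentation node.
segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
segmentationNode.CreateDefaultDisplayNodes()
segmentId = segmentationNode.GetSegmentation().AddEmptySegment("skin")
slicer.util.updateSegmentBinaryLabelmapFromArray(
    sitk.GetArrayFromImage(mask), segmentationNode, segmentId, volumeNode
)

# Export the segment to a closed-surface model (the "right-click on seg" step).
shNode = slicer.vtkMRMLSubjectHierarchyNode.GetSubjectHierarchyNode(slicer.mrmlScene)
exportFolderItemId = shNode.CreateFolderItem(shNode.GetSceneItemID(), "Skin surface")
slicer.modules.segmentations.logic().ExportAllSegmentsToModels(segmentationNode, exportFolderItemId)
```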
Force-pushed from 0f4e025 to 94e52ab
Force-pushed from e91f63f to a3320f6
Force-pushed from a3320f6 to 0d1a1a4
@sadhana-r @arhowe00 Added you both as reviewers for visibility, but I am not expecting anything from this review; you can just approve :) What this adds is:
(@sadhana-r If this gets merged before yours, then there will be the annoyance of conflicts with the skinseg tools and IJK transform modifications that are also in one of your working branches; sorry if so! I can always help out if a rebase gets gnarly, so don't hesitate to let me know!)
I am going to merge my PR now.
Force-pushed from 0d1a1a4 to 0df1624
(just rebased onto main, still awaiting approval)
sadhana-r left a comment:
Very cool to have this integrated! I just tested it out on a couple of sessions and it works well. I noticed that the virtual fit transforms don't get added under the transducer parent folder (unlike the transforms that get loaded in), but otherwise it seems to work as expected.
Yay, thanks for testing! Good point, I need to put the nodes under the correct folder... I think I will do that before merging.
```python
def move_node_into_transducer_sh_folder(self, node: vtkMRMLNode) -> None:
    """In the subject hierarchy, move the given `node` into this transducer's transform node folder."""
    shNode = slicer.vtkMRMLSubjectHierarchyNode.GetSubjectHierarchyNode(slicer.mrmlScene)
    shNode.SetItemParent(
        shNode.GetItemByDataNode(node),
        shNode.GetItemParent(shNode.GetItemByDataNode(self.transform_node)),
    )
```
@sadhana-r Added this function under SlicerOpenLIFUTransducer to make it simpler to move things into the transducer folder.
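A hypothetical usage sketch (the names `transducer` and `vf_transform_node` are placeholders, not from the PR):

```python
# File a node produced by virtual fitting under the transducer's subject
# hierarchy folder, so it appears alongside the transforms that get loaded in.
transducer.move_node_into_transducer_sh_folder(vf_transform_node)
```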
This also updates the ArrayTransform imports since I moved those. This commit is basically just updating SlicerOpenLIFU to work with the changes introduced in OpenwaterHealth/OpenLIFU-python#165 and OpenwaterHealth/OpenLIFU-python#166.
Closes #139. This also finally closes #145, though most of the work to actually close that was in #179.
Remaining TODO
Outdated comments (ignore this section)
(The comments below are from before virtual fitting was as developed as it is now; I just don't want to delete the history.)
This is intended to be a minimal integration of the virtual fitting algorithm OpenwaterHealth/OpenLIFU-python#147 into SlicerOpenLIFU. After this is done we can work on refining the virtual fitting "workflow" and all that, but here we just want to click "virtual fit", have the algorithm run, and then see the transducer position update.
Some development on openlifu-python is needed before this can be ready.
- ~~We probably want to have a separate "virtual fitting" coordinate grid, so we might need another version of `make_xarray_in_transducer_coords_from_volume`.~~ Actually the volume should be provided as a raw array without resampling; it just needs to be in LPS coords (see the sketch after this list). See "Add virtual fit options to Protocol" OpenLIFU-python#165 (comment).
- `python-requirements.txt`
- Note to self: I started work on the branch `update_vf_placeholder_to_work_with_P` (#179) at 934e86d, which is in this PR, and that branch is likely to be merged to main before this PR. So when rebasing this PR to main I may find myself having to remove the first commit.
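A sketch of the "raw array in LPS coords" idea from the list above; the function name is illustrative and nothing here is from the PR:

```python
import numpy as np
import slicer
import vtk

def volume_as_array_with_ijk_to_lps(volumeNode):
    """Return the voxel array (no resampling) and an IJK->LPS affine."""
    array = slicer.util.arrayFromVolume(volumeNode)  # KJI-ordered numpy array
    ijk_to_ras = vtk.vtkMatrix4x4()
    volumeNode.GetIJKToRASMatrix(ijk_to_ras)
    ras_to_lps = np.diag([-1.0, -1.0, 1.0, 1.0])  # flip R/A axes to L/P
    ijk_to_lps = ras_to_lps @ slicer.util.arrayFromVTKMatrix(ijk_to_ras)
    return array, ijk_to_lps
```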