
Conversation

@ebrahimebrahim (Contributor) commented Jun 6, 2025

Closes #387

One thing I slightly dislike about this method is that it modifies the values in the volume array: it makes the background voxels take a value that is 1 less than the minimum value in the volume. This way they can simply be thresholded out. There should be other ways to do this that involve creating a segmentation node out of the foreground mask and then using it as a "cropping ROI" for the volume. This would be nicer but then it's another piece of data to manage. This method keeps things simple, and it doesn't seem to harm the downstream skin segmentation computation.
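
To make that concrete, here is a minimal sketch of the thresholding idea (not the PR's actual implementation; `foreground_mask` and the display-node calls are illustrative assumptions):

```python
import slicer
import numpy as np

def threshold_volume_by_foreground_mask(volume_node, foreground_mask: np.ndarray) -> None:
    # Sketch only: set background voxels to (min - 1) so a display threshold
    # can hide them. Assumes foreground_mask is a boolean array in the same
    # voxel order as slicer.util.arrayFromVolume.
    volume_array = slicer.util.arrayFromVolume(volume_node)
    background_value = volume_array.min() - 1  # for unsigned dtypes this can wrap; see discussion below
    volume_array[~foreground_mask] = background_value
    slicer.util.updateVolumeFromArray(volume_node, volume_array)

    # Hide everything at or below the background value.
    display_node = volume_node.GetDisplayNode()
    display_node.SetApplyThreshold(1)
    display_node.SetLowerThreshold(float(background_value) + 1)
```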

Another thing I slightly dislike about this method is that it takes some time to compute the foreground mask, so this makes loading a volume into the scene a slower process. But it isn't too bad, at least on my system, and the result is definitely very pleasing to look at.

For review: Load a volume into the scene by loading a session, or by doing it manually, and observe that it is immediately masked after being loaded in. Give the code a cursory look.

@ebrahimebrahim linked issue #387 (Mask out background of MRI slices) that may be closed by this pull request, Jun 6, 2025
@ebrahimebrahim ebrahimebrahim requested a review from sadhana-r June 6, 2025 05:30
@sadhana-r (Contributor) left a comment

Code design looks good. Very cool to see the volume masked!

My understanding from the meeting yesterday was that Peter wanted the volume to get masked when the skin segmentation is computed and displayed during virtual fit (or tracking), since that step computes the foreground mask anyway. That way the computation doesn't get repeated between loading and then running those pipelines. But this works too! The loading is a little slower but not too annoying.

One issue I found with the current approach is that volume_array has type uint16, so when the minimum value in the image is 0, you are setting the background to -1, which isn't in the allowable range. The background then gets set to 65535 and shows up as white (seen when loading the petra volume in OW2_001):
[Screenshot 2025-06-06 092327: volume rendered with a white background]

One fix I tested is to instead set the background to volume_array_max + 1, which should hopefully not exceed 65535, but it might be better to include some error handling for those edge cases, or to change the data type?
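
For illustration, the wraparound in question, reproduced with plain numpy:

```python
import numpy as np

# With a uint16 volume whose minimum is 0, "min - 1" is -1, which is not
# representable as uint16 and wraps around to the dtype's maximum:
print(np.array(-1).astype(np.uint16))  # 65535
print(np.iinfo(np.uint16).max)         # 65535 -> background renders bright white
```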

@ebrahimebrahim (Contributor, Author) commented

Oh wow, glad you tried those examples! I will think about how to deal with the integer overflow. Maybe always forcing a float type?

> My understanding from the meeting yesterday was that Peter wanted the volume to get masked when the skin segmentation is computed and displayed during virtual fit (or tracking), since that step computes the foreground mask anyway. That way the computation doesn't get repeated between loading and then running those pipelines. But this works too! The loading is a little slower but not too annoying.

Yeah, I think your understanding of the meeting is correct. I am cheating because I didn't want to add so much logic for sharing the mask result. And if someone saves the session after pre-planning and then loads it again later, the mask would be gone, unless we also add logic for writing it to the database somehow. This seemed like a good tradeoff. I wonder if we can speed up the compute_foreground_mask function a bit...

@sadhana-r (Contributor) commented

> And if someone saves the session after pre-planning and then loads it again later, the mask would be gone, unless we also add logic for writing it to the database somehow. This seemed like a good tradeoff. I wonder if we can speed up the compute_foreground_mask function a bit...

Hmmm, true. Related to this: if a user loads a session where tracking or planning has already been run, the skin segmentation is not shown unless the user re-runs virtual fit or goes through and 're-approves' the tracking result. This isn't ideal, but I left it as is.

One idea: we could, on load_session, compute the skin segmentation if the loaded session includes an approved tracking result or virtual fit result (we already check for that). This would then remove the need to save the mask etc. Basically, whenever you have a VF result or tracking result, compute the skin surface. And we could update openlifu to return the mesh and foreground mask when compute_skin_mesh_from_volume is called. A rough sketch is below.
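
A rough sketch of that flow (every name here is an illustrative stand-in, not the actual module API):

```python
# Illustrative pseudocode only; the hook and helper names are hypothetical.
def on_session_loaded(session, volume_node) -> None:
    # Only recompute when there is an approved result to display.
    if session.has_approved_virtual_fit() or session.has_approved_tracking_result():
        # Idea from the thread: openlifu's compute_skin_mesh_from_volume could
        # be updated to also return the foreground mask, so one computation
        # serves both the skin display and the background thresholding.
        skin_mesh, foreground_mask = compute_skin_mesh_from_volume(volume_node)
        show_skin_mesh(skin_mesh)
        threshold_volume_by_foreground_mask(volume_node, foreground_mask)
```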

@ebrahimebrahim (Contributor, Author) commented

> Related to this: if a user loads a session where tracking or planning has already been run, the skin segmentation is not shown unless the user re-runs virtual fit or goes through and 're-approves' the tracking result. This isn't ideal, but I left it as is.

Yes, I understood it to work that way. I think it is okay -- Peter can request a different behavior later if he wants. It will require core openlifu changes to support storing the skin segmentation, unless we go with this idea:

> One idea: we could, on load_session, compute the skin segmentation if the loaded session includes an approved tracking result or virtual fit result (we already check for that). This would then remove the need to save the mask etc. Basically, whenever you have a VF result or tracking result, compute the skin surface. And we could update openlifu to return the mesh and foreground mask when compute_skin_mesh_from_volume is called.

This is a good idea. I still think we can hold off and get some feedback; it might be fine the way it is. Showing the skin segmentation in the main scene is mainly for the cool factor, I think, and not something functional yet.

@ebrahimebrahim (Contributor, Author) commented

@sadhana-r Ready for another look. I am planning to hold off on merging until some of your PRs get merged though, so no rush. Thanks for the review!

Comment on lines +28 to +34
```python
volume_array = slicer.util.arrayFromVolume(volume_node)
volume_array_min = volume_array.min()

background_value = volume_array_min - 1
if background_value < volume_node.GetImageData().GetScalarTypeMin():  # e.g. if volume_array_min is 0 and it's an unsigned int type
    logging.info("Casting volume to float for the sake of `threshold_volume_by_foreground_mask`.")
    cast_volume_to_float(volume_node)
```
@ebrahimebrahim (Contributor, Author) commented

This is how I addressed the problem you pointed out: if the value I am trying to use for background (e.g. -1) is less than the minimum for the data type (e.g. 0 for unsigned integer types), then we cast to float before proceeding.
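
For reference, a cast like that could be done with VTK's vtkImageCast; this is a sketch of one way to write such a helper, not necessarily the PR's actual cast_volume_to_float:

```python
import vtk

def cast_volume_to_float(volume_node) -> None:
    # Sketch: cast the volume's image data to float in place.
    cast = vtk.vtkImageCast()
    cast.SetInputData(volume_node.GetImageData())
    cast.SetOutputScalarTypeToFloat()
    cast.Update()
    volume_node.SetAndObserveImageData(cast.GetOutput())
```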

@sadhana-r (Contributor) left a comment

Looks good to me!

@ebrahimebrahim (Contributor, Author) commented

I will add a commit to this to also compute the skin segmentation on session load, but only if the session has a virtual fit result.

@ebrahimebrahim (Contributor, Author) commented

Merging this without adding the #399 fixes I had suggested in a meeting; I will do #399 in a separate PR to keep things simpler.
