[fix][METEOR-553] Make the time estimation for overview map correct #2905
base: master
Conversation
src/odemis/acq/align/autofocus.py
Outdated
@@ -546,8 +546,7 @@ def estimateAutoFocusTime(detector, scanner=None, steps=MAX_STEPS_NUMBER):
     scanner (None or model.Emitter): In case of a SED this is the scanner used
     Estimates overlay procedure duration
     """
-    # Add 0.5s per step to account for the focus movement (very roughly approximated)
-    return steps * estimateAcquisitionTime(detector, scanner) + steps * 0.5
+    return steps * estimateAcquisitionTime(detector, scanner)
That means the movement of the focus is instantaneous? That sounds a little bit optimistic. On the other hand, steps is MAX_STEPS_NUMBER, which is probably quite pessimistic.
I can imagine it makes (by chance?) the estimation more realistic on the METEOR, but how many other cases did it worsen?
I think you could try to make the focus movement estimate more precise (see the sketch below) by:
- taking the focus component as argument
- trying to guess the distance to move (on average)
- using estimateMoveDuration()

I'd also suggest using a smaller number than "steps". For instance "steps / 2", but it might be good to check how many steps are typically required.
Hopefully this also gives you a "not too bad" estimate on the METEOR, and doesn't change the estimate too much on other systems.
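A minimal sketch of that suggestion, reusing guessActuatorMoveDuration() (the helper this PR ends up using later in the thread); the focus argument, the steps / 2 heuristic and the typical per-step travel are assumptions, not part of the current code:

# Rough sketch only: estimate the focus movement per step instead of a fixed 0.5 s per step
def estimateAutoFocusTime(detector, scanner=None, focus=None, steps=MAX_STEPS_NUMBER):
    acq_time = steps * estimateAcquisitionTime(detector, scanner)
    if focus is None:
        return acq_time  # no focus component given: acquisition time only
    typical_step = 10e-6  # m, assumed average focus travel per step (placeholder value)
    # Assume roughly half of the maximum number of steps is typically needed
    move_time = (steps / 2) * guessActuatorMoveDuration(focus, "z", typical_step)
    return acq_time + move_time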
"I can imagine it makes (by chance?) the estimation more realistic on the METEOR, but how many other cases did it worsen?" The parameter that I deleted was added by me in a previous commit to improve the acquisition timer before. With that in mind, the other systems should work as usual ?
@@ -524,7 +523,8 @@ def estimateTime(self, remaining=None):
         stitch_time = (self._nx * self._ny * max_pxs * self._overlap) / self.STITCH_SPEED
         try:
-            move_time = max(self._guessSmallestFov(self._streams)) * (remaining - 1) / self._move_speed
+            # Extra time to account for actual stage movement happening in more than 2 dimensions
What do you mean by "more than 2 dimensions"? Isn't the stage move always just X&Y?
The METEOR stage moves in two dimensions, x and y. As y is linked to the y and z of the stage-bare, the movement while stitching the tiles occurs in 3 dimensions.
@@ -524,7 +523,8 @@ def estimateTime(self, remaining=None):
         stitch_time = (self._nx * self._ny * max_pxs * self._overlap) / self.STITCH_SPEED
         try:
-            move_time = max(self._guessSmallestFov(self._streams)) * (remaining - 1) / self._move_speed
+            # Extra time to account for actual stage movement happening in more than 2 dimensions
+            move_time = max(self._guessSmallestFov(self._streams)) * (remaining - 1) / self._move_speed + 0.3 * remaining
Maybe use estimateMoveDuration() here?
Also, I'm not certain that "remaining - 1" is correct, but I would think it should be the same number as for the "0.3 * ..." term. So either use "remaining - 1" or "remaining" on both sides, or something more clever.
Maybe the "- 1" was to handle the case when a 1x1 overview is acquired?! But that would not really explain it for the most standard cases...
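For illustration, a small sketch keeping both terms consistent (purely illustrative; the 0.3 s overhead comes from the diff above, and the choice of "remaining - 1" vs "remaining" is still the open question):

# Sketch: apply the per-move overhead to the same number of moves as the FoV-based term
n_moves = remaining - 1  # moves between consecutive tiles (or use `remaining` in both places)
move_time = n_moves * (max(self._guessSmallestFov(self._streams)) / self._move_speed + 0.3)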
@K4rishma and @pieleric the
Force-pushed from 8437846 to 4cb83f6
move_time = guessActuatorMoveDuration(focus, "z", distance) + 2 * guessActuatorMoveDuration(focus, "z", distance/2)
# pessimistic guess
acquisition_time = MAX_STEPS_NUMBER * estimateAcquisitionTime(detector, emt)
If I understand correctly, you did some tests and the moves were estimated to be so short that the move time is almost null? That's why you had to bump up the acquisition time to MAX_STEPS_NUMBER?
Maybe the moves should be increased? We could consider doing steps_number steps of size distance/steps_number? Maybe add 10 ms per step, to estimate the overhead? My small experience with the autofocus is that the move of the axis is a large part of the time, so it'd be logical to also estimate it in the same way.
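A sketch of what that could look like, reusing guessActuatorMoveDuration() from the diff above (steps_number, distance and the 10 ms overhead are illustrative assumptions):

# Rough sketch: model the focus search as steps_number small moves, each with a fixed overhead
step_overhead = 10e-3  # s, assumed per-step driver/settling overhead
move_time = steps_number * (guessActuatorMoveDuration(focus, "z", distance / steps_number) + step_overhead)
acquisition_time = steps_number * estimateAcquisitionTime(detector, emt)
estimated_time = move_time + acquisition_time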
… time, tile acquisition, stage movement time and when user changes the focus distance
Force-pushed from 668310d to b896339
@nandishjpatel @tepals I have removed the +2 seconds while estimating tiled_acquisition during the z stack acquisition. I have used the function from acqmng to estimate the time during acquisition of the z stack. From past hardware testing sessions, this function has proved to be accurate to the last second.
time_observers = {"acq", "stitch", "move", "save"} | ||
self._save_time = {k: [] for k in time_observers} |
Just write explicitly:
time_observers = {"acq", "stitch", "move", "save"} | |
self._save_time = {k: [] for k in time_observers} | |
self._save_time = {"acq": [], "stitch": [], "move": [], "save": []} |
time_observers = {"focus", "move"} | ||
time_per_action = {k: [] for k in time_observers} |
Write explicitly:
time_observers = {"focus", "move"} | |
time_per_action = {k: [] for k in time_observers} | |
time_per_action = {"focus": [], "move": []} |
for k, v in time_per_action.items():
    if not v:  # empty list => no history
        return None
    observed_time += numpy.mean(v)
No need to use numpy if you don't have a numpy array. You can use statistics.mean(v).
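Applied to the loop quoted above, the change would be a small sketch like:

from statistics import mean  # standard library, avoids the numpy dependency here

observed_time = 0
for k, v in time_per_action.items():
    if not v:  # empty list => no history
        return None
    observed_time += mean(v)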
time_per_action = {k: [] for k in time_observers}
for i, (x, y) in enumerate(focus_points):
    # Update the time progress
    f.set_progress(end=estimate_autofocus_in_roi_time(len(focus_points) - i, ccd, focus, focus_range, time_per_action) + time.time())
This whole time_per_action is really complicated, to end up just computing the average time per point.
A simpler way to do it would be: store the time at the beginning (before the for loop) as start_time.
Here, do:

if i > 0:
    average_focus_time = (time.time() - start_time) / i
else:
    average_focus_time = None

Finally, estimate_autofocus_in_roi_time() takes average_focus_time as an optional argument. If it's available, it uses it to estimate the time for the n_focus_points.
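Putting this together, a minimal sketch of the simplified loop (the extra average_focus_time parameter of estimate_autofocus_in_roi_time() is the suggested change, not what the code currently takes):

start_time = time.time()
for i, (x, y) in enumerate(focus_points):
    # Average time per focus point observed so far; None until the first point is done
    average_focus_time = (time.time() - start_time) / i if i > 0 else None
    remaining = len(focus_points) - i
    f.set_progress(end=time.time() + estimate_autofocus_in_roi_time(remaining, ccd, focus, focus_range, average_focus_time))
    # ... run the autofocus at (x, y) ...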
# Estimate stitching time based on number of pixels in the overlapping part
max_pxs = 0
for s in self._streams:
    for sda in s.raw:
        pxs = sda.shape[0] * sda.shape[1]
        if pxs > max_pxs:
            max_pxs = pxs

stitch_time = (self._nx * self._ny * max_pxs * self._overlap) / self.STITCH_SPEED
I don't think it's correct to remove it. What you could do is: if the registrar is REGISTER_IDENTITY, as on the METEOR, use a much higher STITCH_SPEED, or even set the stitch time to 0. In general, with the "real" registrar, the stitching does take quite some time.
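A sketch of that suggestion; the self._registrar attribute name is an assumption, only REGISTER_IDENTITY and the existing formula come from the discussion above:

# Only count stitching time when real registration is performed
if self._registrar == REGISTER_IDENTITY:  # e.g. on the METEOR
    stitch_time = 0
else:
    stitch_time = (self._nx * self._ny * max_pxs * self._overlap) / self.STITCH_SPEED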
logging.debug(f"Estimated autofocus time: {autofocus_time} s, Tiled acquisition time: {tiled_time} s") | ||
logging.debug(f"Estimated autofocus time: {autofocus_time*len(self.areas)} s, Tiled acquisition time: {tiled_time} s") |
logging.debug(f"Estimated autofocus time: {autofocus_time*len(self.areas)} s, Tiled acquisition time: {tiled_time} s") | |
logging.debug(f"Estimated autofocus time: {autofocus_time * len(self.areas)} s, Tiled acquisition time: {tiled_time} s") |
@@ -1081,11 +1112,12 @@ def estimate_time(self, roi_idx=0, actual_time_per_roi=None) -> float:
    acquisition_time = actual_time_per_roi * remaining_rois
There are now 2 different ways to report the time left: the old "actual_time_per_roi" way, which assumes every RoI is going to take on average the same time, and the new "time_per_task" way, which updates the progress according to the sub-futures' progress.
I don't think you need to keep the old way. Especially, it's possible it will work against the new way, causing weird fluctuations in the expected time. (Of course, right now this function is probably never called with more than 1 RoI, so it's not noticeable.)
=> Remove everything related to "actual_time_per_roi". That's going to simplify the code quite a bit, and avoid odd fluctuations.
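For illustration only, a sketch of what the simplified signature could look like once the old path is dropped (the per-RoI helper name is hypothetical):

def estimate_time(self, roi_idx=0) -> float:
    # Rely only on the per-task estimates; sub-future progress refines the value while running
    remaining_rois = len(self.areas) - roi_idx
    return self._estimate_roi_time() * remaining_rois  # _estimate_roi_time() is a hypothetical helper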
@@ -895,6 +923,7 @@ def run(self):
             raise
         finally:
             logging.info("Tiled acquisition ended")
+            logging.info(f"The actual time taken per tile for each action is {self._save_time}")
That's probably only useful for debugging. We shouldn't keep this in the final version.
def _correct_tiled_acq_time(self):
    """Compute average time taken to complete tiled acquisition for one tile position"""
    observed_time = 0
    for k, v in self._save_time.items():
        if not v:  # empty list => no history
            return None
        observed_time += numpy.mean(v)
    if observed_time == 0:
        return None
    return observed_time
Same comment as for "time_per_action": I don't see the point of storing the time in 4 different categories, once per tile, when the only thing you do is sum them all together.
=> Estimate the time per tile based on the starting time and the number of tiles acquired so far.
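A sketch of that simplification (the start-time and tile-counter attribute names are assumptions, not taken from the PR):

def _correct_tiled_acq_time(self):
    """Average time per tile, based on elapsed time and the number of tiles acquired so far."""
    if self._tiles_done == 0:  # assumed counter, incremented after each acquired tile
        return None  # no history yet
    return (time.time() - self._tiles_start_time) / self._tiles_done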