Adding core_check_resource_exists_at_location_center & first PLR audiofeedback #78
Conversation
After some conversations with @rickwierenga, I decided to implement his idea of moving the audiofeedback functions for Jupyter Notebook execution out of liquid_handler.py and into a new audio module (audio.py). This way the command generation stack stays intact and liquid_handler is not imported into STAR, but audiofeedback is still easily accessible for all other PLR-integrated machines.
```python
center = location + resource.centers()[0] + offset
y_width_to_gripper_bump = resource.get_size_y() - gripper_y_margin*2
assert 9 <= y_width_to_gripper_bump <= int(resource.get_size_y()), \
  f"width between channels must be between 9 and {resource.get_size_y()} mm" \
  " (i.e. the minimal distance between channels and the max y size of the resource)"
```
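The bump-width check quoted above can be sanity-checked without hardware. A minimal sketch, assuming a hypothetical standalone helper (`y_width_to_gripper_bump` here is a free function for illustration, not PLR's actual API):

```python
def y_width_to_gripper_bump(resource_size_y: float, gripper_y_margin: float = 5.0) -> float:
    """Width the grippers close to before 'bumping' the resource walls (sketch).

    The grippers inset by gripper_y_margin on both the front and back wall;
    the result must stay between 9 mm (the minimal distance between channels)
    and the resource's full y size.
    """
    width = resource_size_y - gripper_y_margin * 2
    assert 9 <= width <= resource_size_y, (
        f"width between channels must be between 9 and {resource_size_y} mm"
    )
    return width

# e.g. a standard 85.5 mm plate footprint with the default 5 mm margin
print(y_width_to_gripper_bump(85.5))  # 75.5
```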
How do users move the CoRe gripper arms? Just call `star.move_channel_y`?
Really cool and useful PR. Sorry this is taking a bit longer to merge. Thank you for moving the stuff to audio.py, setting it up for more future audio features. As I said privately/publicly, adding more audio features is really helping PLR become the most fun framework while also being really useful. This code only works in Jupyter notebooks / IPython, which admittedly is where this feature will be most useful. I tried looking into

Just pushed:
On a higher level (not this PR), we should consider moving some of this method out of STAR so the checker can be independent of the interactive stuff. That seems to support more use-cases and be cleaner. In addition, this interactive method can be robot-agnostic (perhaps robot-selective), and will work with at least the Vantage.
…ofeedback (#78)

* adding core_check_resource_exists_at_location_center & first PLR audiofeedback example
* adding display function for Jupyter Notebook execution
* proper exception handling and linting corrections
* use new resource.centers()[0] instead of resource.center() + Coordinate(0, 0, resource.get_size_z()/2)
* fixing liquid_handler import into STAR by generating audio folder + audio.py
* remove Audio and display imports from liquid_handler.py
* formatting
* use decorator

Co-authored-by: Rick Wierenga <rick_wierenga@icloud.com>
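The "use decorator" commit above suggests the audio cue is attached to the checker via a decorator. A minimal, hypothetical sketch of that pattern (`with_audio_feedback`, `play_sound`, and `dummy_check` are illustrative names, not PLR's actual API):

```python
import functools

def with_audio_feedback(check_func):
    """Play an audio cue based on a checker's boolean result (sketch).

    In the PR the real helpers live in the new audio module; here play_sound
    is an injectable stand-in so the wrapper stays testable outside a notebook.
    """
    @functools.wraps(check_func)
    def wrapper(*args, play_sound=lambda found: None, **kwargs):
        found = check_func(*args, **kwargs)
        play_sound(found)  # e.g. gotItemAudio if found else notFoundAudio
        return found
    return wrapper

@with_audio_feedback
def dummy_check(present: bool) -> bool:
    return present  # stand-in for the real resource-existence check

cues = []
dummy_check(True, play_sound=cues.append)
dummy_check(False, play_sound=cues.append)
print(cues)  # [True, False]
```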
Hi everyone,
In this PR I have added 2 new features to PLR:
1. `STAR.core_check_resource_exists_at_location_center()` method

Example:
- `gripper_y_margin` (default: `5` mm): distance between the front/back wall of the resource and the grippers during "bumping"/checking; another term could be gripper_inset
- `enable_recovery` (default: `True`): my proposal for what I see as a major issue in PLR at the moment: not being able to recover from a failed execution because it immediately raises an irrecoverable error mid-run. Instead, if an error is about to be raised - in this case because the grippers do not bump into the resource/plate, i.e. they haven't found the resource - the function asks the liquid_handling operator for input about what to do next
- `audiofeedback` (default: `True`): if True, plays sound `notFoundAudio` if the resource was not found, and `gotItemAudio` if it was found. Designed for increased engagement with the machine.

2. Audiofeedback example
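The `enable_recovery` flow described in the parameter list above can be sketched in isolation. This is a hypothetical helper with an injectable prompt function for testability; the real PR wires this logic into `STAR.core_check_resource_exists_at_location_center()`:

```python
def ask_operator_recovery(resource_name: str, ask=input) -> str:
    """Instead of raising an irrecoverable error mid-run, ask the operator
    what to do next (hypothetical sketch; `ask` is injectable for testing)."""
    answer = ask(
        f"Resource '{resource_name}' was not found by the grippers. "
        "Type 'retry' to check again or 'abort' to raise the error: "
    ).strip().lower()
    # default to the safe option on unrecognized input
    return answer if answer in ("retry", "abort") else "abort"

# simulate an operator typing 'retry' at the prompt
print(ask_operator_recovery("plate_01", ask=lambda prompt: "retry"))  # retry
```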
To ensure audiofeedback can be used on any machine (not just Hamilton), I defined `notFoundAudio` and `gotItemAudio` in `/pylabrobot/liquid_handling/liquid_handler.py` and import these functions into `STAR.py`. The audio is generated by the controlling computer, not the machine it refers to.
As mentioned in this PyLabRobot forum post, this implementation of audiofeedback for liquid handlers originates from my PhD work.
The audio is generated in Jupyter Notebooks using `from IPython.display import Audio` and:

These mp3 files are taken directly from the pygame website https://simpleguics2pygame.readthedocs.io/en/latest/_static/links/snd_links.html and are open-source (to my knowledge), but I think long-term we would want to store all mp3 files used in PLR in PLR's GitHub repo, which will ensure long-term availability and still enable easy code access.
Please let me know whether a different location compared to `/pylabrobot/liquid_handling/liquid_handler.py` makes more sense to enable the use of audiofeedback on all PLR-integrated machines (Hamilton, Tecan, Opentrons, scales, shakers, temperature_control modules, ...).