Adding core_check_resource_exists_at_location_center & first PLR audiofeedback #78

Merged: 10 commits, Mar 19, 2024

Conversation

@BioCam (Contributor) commented Mar 13, 2024

Hi everyone,

In this PR I have added two new features to PLR:

  1. the STAR.core_check_resource_exists_at_location_center() method:
  • primary function: checking whether a resource is at the location we believe it to be;
  • secondary function: pushing labware flush against the surface it is sitting on.
  2. I used 1. to demonstrate the first PLR audiofeedback (to my knowledge), which plays different sounds depending on whether a resource is found or not found at the tested location.

1. STAR.core_check_resource_exists_at_location_center()

Example:

# Define a plate and assign it to e.g. a plate carrier site
plt_carrier_1[0] = test_plate_0 = Cos_96_Rd(name="test_plate_0")

# Check whether the plate is at plt_carrier_1[0]
await lh.backend.core_check_resource_exists_at_location_center(
    location=test_plate_0.get_absolute_location(),
    resource=test_plate_0,
    gripper_y_margin=15,
    enable_recovery=True,
    audiofeedback=True,
    minimum_traverse_height_at_beginning_of_a_command=2750,
    z_position_at_the_command_end=2750,
)
  • gripper_y_margin (default: 5 mm): the distance between the front/back wall of the resource and the grippers during "bumping"/checking; another term could be gripper_inset.
  • enable_recovery (default: True): my proposal for what I see as a major issue in PLR at the moment: a failed execution cannot be recovered, because an irrecoverable error is immediately raised mid-run. Instead, when an error is about to be raised (in this case because the grippers did not bump into the resource/plate, i.e. the resource was not found), the function asks the liquid_handling operator what to do next:
Screenshot 2024-03-13 at 12 51 30 -> this enables recovery of the run even if a plate has not been found (in the future we could add a `skip` option in addition to `yes` and `abort`)
  • audiofeedback (default: True): if True, plays notFoundAudio if the resource was not found, and gotItemAudio if it was found. Designed for increased engagement with the machine.
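The `yes`/`abort` prompt described above could be sketched roughly as follows (a hypothetical illustration, not the actual PLR implementation; the function name and messages are my own):

```python
# Hypothetical sketch of the enable_recovery prompt loop described above.
# The function name and messages are illustrative, not PLR's actual code.

def prompt_operator_recovery(resource_name: str) -> str:
    """Ask the operator how to proceed after a resource was not found.

    Returns "yes" (re-check after manual correction) or "abort" (raise the
    original error). A "skip" option could be added here later.
    """
    valid_answers = {"yes", "abort"}
    while True:
        answer = input(
            f"Resource '{resource_name}' was not found at the expected "
            "location. Re-check? (yes/abort): "
        ).strip().lower()
        if answer in valid_answers:
            return answer
        print("Please answer 'yes' or 'abort'.")
```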

2. Audiofeedback example

To ensure audiofeedback can be used on any machine (not just Hamilton) I defined notFoundAudio and gotItemAudio in /pylabrobot/liquid_handling/liquid_handler.py and import these functions into STAR.py.
The audio is generated by the controlling computer, not the machine it refers to.
As mentioned in this PyLabRobot forum post, this implementation of audiofeedback for liquid handlers originates from my PhD work.
The audio is generated in Jupyter Notebooks using from IPython.display import Audio and:

# Enable audio feedback - "make liquid handlers talk"
from IPython.display import Audio, display

def notFoundAudio():
    # Sound source: https://simpleguics2pygame.readthedocs.io/en/latest/_static/links/snd_links.html
    display(Audio(
        url='https://codeskulptor-demos.commondatastorage.googleapis.com/pang/arrow.mp3',
        autoplay=True))

def gotItemAudio():
    display(Audio(
        url='https://codeskulptor-demos.commondatastorage.googleapis.com/descent/gotitem.mp3',
        autoplay=True))

These mp3 files are taken directly from the pygame website (https://simpleguics2pygame.readthedocs.io/en/latest/_static/links/snd_links.html) and are open-source (to my knowledge). Long-term, however, I think we would want to store all mp3 files used in PLR in PLR's GitHub repo, which ensures long-term availability while still enabling easy code access.
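As a sketch of that long-term idea, files bundled inside the installed package could be read with importlib.resources; the PLR package path in the comment below is hypothetical, since PLR does not currently ship mp3 files:

```python
# Sketch: reading a bundled sound file from inside an installed package,
# instead of fetching it from a remote URL at runtime.
from importlib import resources

def get_bundled_audio_bytes(package: str, filename: str) -> bytes:
    """Return the raw bytes of a data file shipped inside `package`."""
    return resources.files(package).joinpath(filename).read_bytes()

# In a notebook, the bytes could then be played with IPython (hypothetical
# package path):
#   display(Audio(data=get_bundled_audio_bytes("pylabrobot.audio.sounds",
#                                              "gotitem.mp3"), autoplay=True))
```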

Please let me know whether a different location compared to /pylabrobot/liquid_handling/liquid_handler.py makes more sense to enable the use of audiofeedback on all PLR-integrated machines (Hamilton, Tecan, Opentrons, scales, shakers, temperature_control-modules, ...).

@BioCam (Contributor, Author) commented Mar 14, 2024

After some conversations with @rickwierenga, I decided to implement his idea of moving the audiofeedback functions for Jupyter Notebook execution out of liquid_handler.py, and created a new pylabrobot/audio/ directory housing an audio.py file instead.

This way the command-generation stack stays intact and liquid_handler is not imported into STAR, while audiofeedback remains easily accessible to all other PLR-integrated machines.
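The dependency argument can be illustrated with a minimal leaf module: both the STAR backend and liquid_handler may import it, while it imports neither (the name and print stand-in are illustrative, not PLR's actual code):

```python
# Minimal illustration of why a dedicated audio module avoids a circular
# import: it is a leaf that depends on no other PLR module, so both STAR and
# liquid_handler can import it freely. The print call stands in for the real
# IPython-based playback.

def got_item_audio() -> None:
    print("playing gotitem.mp3")

# STAR.py can then use something like:
#   from pylabrobot.audio import got_item_audio
# without ever importing liquid_handler.
```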

Comment on lines +4433 to +4437
center = location + resource.centers()[0] + offset
y_width_to_gripper_bump = resource.get_size_y() - gripper_y_margin*2
assert 9 <= y_width_to_gripper_bump <= int(resource.get_size_y()), \
  f"width between channels must be between 9 and {resource.get_size_y()} mm" \
  " (i.e. the minimal distance between channels and the max y size of the resource)"
Member

How do users move the core gripper arms? Just call star.move_channel_y?

@rickwierenga (Member) commented Mar 16, 2024

Really cool and useful PR. Sorry this is taking a bit longer to merge. Thank you for moving the stuff to audio.py, setting it up for more future audio features. As I said privately/publicly, adding more audio features is really helping PLR become the most fun framework while also being really useful.

This code only works in Jupyter notebooks / IPython, which admittedly is where this feature will be most useful. I tried looking into playsound, but that seems to be broken on Mac now, and I think notebooks are actually fine.

Just pushed:

  • added a can_play_audio utility check function, to which we can add more checks as necessary
  • added a try/except around the IPython import: this 'non-primary' feature should not break the usual mode of operation. We could discuss raising an error, or just ignoring it and not playing audio, as is the case now. I don't want this 'non-primary' feature to ever crash the code, but on the other hand silently doing nothing could be annoying if you specify audio_feedback=True.
  • used STARFirmwareError.errors to loop over the error trace codes and removed the regex. The regex was super hacky, and we now have a better way to express this :)
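The guarded import and can_play_audio check might look roughly like this (a sketch of the pattern described above, not the code as pushed; function names follow the description but are not guaranteed to match):

```python
# Sketch of the guarded-import pattern: audio is a non-primary feature, so a
# missing IPython disables playback instead of crashing the run.
try:
    from IPython.display import Audio, display
    _HAS_IPYTHON = True
except ImportError:
    _HAS_IPYTHON = False

def can_play_audio() -> bool:
    """Return True if the environment can play audio; more checks can be added."""
    return _HAS_IPYTHON

def not_found_audio() -> None:
    """Play the 'not found' sound, silently doing nothing if audio is unavailable."""
    if not can_play_audio():
        return
    display(Audio(
        url="https://codeskulptor-demos.commondatastorage.googleapis.com/pang/arrow.mp3",
        autoplay=True))
```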

On a higher level (not this PR), we should consider moving some of this method out of STAR, so the checker can be independent of the interactive stuff. That seems to support more use cases and be cleaner. In addition, the interactive method can be robot-agnostic (or perhaps robot-selective), and will work with at least the Vantage.

@rickwierenga rickwierenga merged commit 9b4adf1 into PyLabRobot:main Mar 19, 2024
@BioCam BioCam deleted the check_plate_exists branch March 19, 2024 21:57
rickwierenga added a commit that referenced this pull request Nov 13, 2024
…ofeedback (#78)

* adding core_check_resource_exists_at_location_center & first PLR audiofeedback example

* adding display function for Jupyter Notebook execution

* proper exception handling and linting corrections

* use new resource.centers()[0] instead of resource.center() + Coordinate(0, 0, resource.get_size_z()/2)

* fixing liquid_handler import into STAR by generating audio folder + audio.py

* remove Audio and display imports from liquid_handler.py

* formatting

* use decorator

---------

Co-authored-by: Rick Wierenga <rick_wierenga@icloud.com>