Role-Playing with Robot Characters: Increasing User Engagement through Narrative and Gameplay Agency
This repository contains software to run the experiment and data analysis for the paper "Role-Playing with Robot Characters: Increasing User Engagement through Narrative and Gameplay Agency", published at the 19th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI 2024).
Created by Spencer Ng, Ting-Han Lin, You Li, and Sarah Sebo at the Human-Robot Interaction Lab at the University of Chicago.
The system to run the experiment consists of the following hardware and software components:
- Server desktop: Linux PC inside the study room to control the robots and output to the monitor
  - ROS backend: Robot Operating System nodes that pass messages to each other (sketched below). Configuration files are at `launch/`, `CMakeLists.txt`, and `package.xml`.
    - Vosk node for real-time speech recognition of keywords from the researcher (sketched below)
    - Flask server (`src/rprobots/server.py`) to receive commands from the Wizard of Oz controller via HTTP (sketched below)
    - Main controller (`src/rprobots/main.py`) to translate HTTP and voice commands into robot and OBS outputs for each scene
    - Vector & Misty ROS controllers, in-house wrappers around the robots' native SDKs for an easier programming interface
    - Open Broadcaster Software (OBS) & WebSocket connection (`src/rprobots/obs_websocket.py`) to programmatically control the monitor's display as scenes switch
- Experimenter laptop: any laptop connected to the same network as the server and able to send HTTP requests to the server's port (e.g., via a VPN subnet). The following is run on the experimenter laptop:
  - PyQt client (`src/qtwizard/`) for Wizard of Oz control
  - OBS to monitor participants and record study videos
  - Qualtrics survey opened in a browser. The laptop is given to participants at the end of the study to use during the post-study survey.
- Input/output devices
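For orientation, the ROS backend follows the standard publish/subscribe pattern. Below is a minimal sketch of that pattern under ROS Noetic; the topic name and message contents are hypothetical, not the actual topics defined by this package's nodes.

```python
#!/usr/bin/env python3
# Minimal ROS publish sketch (requires a running roscore).
import rospy
from std_msgs.msg import String

rospy.init_node("wizard_bridge_example")
# Hypothetical topic name; the real topics are defined by this package's nodes.
pub = rospy.Publisher("/rprobots/commands", String, queue_size=10)
rospy.sleep(1.0)  # give subscribers time to connect
pub.publish(String(data="misty:speak"))
```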
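The Vosk node restricts recognition to a small set of keywords spoken by the researcher. As a rough illustration of keyword-limited recognition using the standalone `vosk` Python API (the actual node comes from the ROS vosk package), here is a sketch where the model path and keyword list are placeholders:

```python
import json

import pyaudio
from vosk import Model, KaldiRecognizer

model = Model("model")  # placeholder path to a downloaded Vosk model
# Restrict the recognizer to a keyword grammar (placeholder words).
recognizer = KaldiRecognizer(model, 16000, '["misty vector begin stop", "[unk]"]')

mic = pyaudio.PyAudio()
stream = mic.open(format=pyaudio.paInt16, channels=1, rate=16000,
                  input=True, frames_per_buffer=8000)

while True:
    data = stream.read(4000, exception_on_overflow=False)
    if recognizer.AcceptWaveform(data):
        result = json.loads(recognizer.Result())
        print("Heard:", result.get("text", ""))
```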
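The Flask server's job is to turn the wizard's HTTP requests into commands for the main controller. A minimal sketch of that pattern follows; the `/command` route, payload fields, and port are hypothetical, not the actual API in `src/rprobots/server.py`.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/command", methods=["POST"])  # hypothetical route
def command():
    payload = request.get_json(force=True)
    # In the real system, the command would be handed off to the main
    # controller to drive the robots and OBS for the current scene.
    print(f"Received {payload.get('action')!r} for {payload.get('robot')!r}")
    return {"status": "ok"}

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)  # port is an assumption
```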
The repository also contains the following folders:
- `data-analysis/`: scripts for analyzing study results in R, along with our anonymized study data
- `assets/`: experimenter script and physical study materials given to participants (e.g., code sheet). Assets displayed on the monitor can be downloaded from this Google Drive link.
The hardware components described above are physically wired and placed as follows:
- Set up the Misty and Vector robots in the study room, connecting them to a Wi-Fi network that is accessible in the room using their respective apps. If needed, set up a router for Wi-Fi access.
- Position the robots between the monitor and the participant's chair, as shown above. Both robots should be on their charging bases, with the bases plugged in.
- Connect the desktop PC to power and the same network (either via Ethernet or Wi-Fi) as the two robots.
- Connect the PC to the monitor via a display cable, ensuring that the monitor also has power.
- Connect a microphone, mouse, and keyboard to the PC.
The desktop server directly controls the robots and the OBS instance on the monitor. The following setup enables you to run the server on a new Linux computer:
- Install ROS Noetic from this source.
- Create a catkin workspace.
- Clone this repository to the `src` folder of your catkin workspace.
- Download OBS.
- Install OBS WebSocket 4.9-compat.
- Set up the websocket with Tools > WebSocket Server Settings (4.9-compat), then set the port to `4444` and the password to `AgentJay` (see the connection sketch after this list).
- Download the media assets ZIP through this Google Drive link, then unzip it to a `media/` folder at the root of this repository.
- In OBS, go to Scene Collection > Import, then select `RPR_OBS.json` in this repository as the Collection Path. Click Import.
- Switch the scene collection by going to Scene Collection > Role Playing Robots Main.
- A prompt should alert you to missing files in OBS. Click Search Directory..., then select the `media/` folder you extracted. Click Apply. A background with the HRDA logo should appear after a few seconds.
- Clone the Vector Robot ROS wrapper and the Misty Robot ROS wrapper to the `src` folder of your catkin workspace. Complete the steps in their "Usage" sections to install the `anki_vector_ros` and `misty_ros` packages and configure Vector's connection.
- Clone the ROS vosk package into `src` to enable offline voice recognition.
- Build this package from the catkin workspace root via `catkin_make`.
- Modify the serial number in `launch/rprobots.launch` to Vector's serial number.
- Modify the IP address in `launch/rprobots.launch` to Misty's IP address (found via the Misty app).
- Install requirements with `pip3 install -r requirements.txt`.
- Run setup from the `src` directory: `sudo python3 setup.py install`.
- Install Tailscale (or an equivalent subnet solution) and log in with your Google account. Start the connection by running `tailscale up` in the terminal.
- Copy the IP of the PC server by running `tailscale ip --4` in the terminal. Replace the `SERVER_IP` variable in `src/rprobots/main.py` with this value.
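To sanity-check the WebSocket settings above, you can connect from Python with the `obs-websocket-py` client (which speaks the 4.x protocol matching the 4.9-compat plugin) and switch scenes programmatically, roughly as `src/rprobots/obs_websocket.py` does. This is a sketch, assuming OBS runs on the same machine; the scene name is the `Main Scene` referenced later in this README.

```python
from obswebsocket import obsws, requests  # pip3 install obs-websocket-py

# Port and password from the WebSocket Server Settings step above.
ws = obsws("localhost", 4444, "AgentJay")
ws.connect()
try:
    # Switch the monitor's display to the main scene.
    ws.call(requests.SetCurrentScene("Main Scene"))
finally:
    ws.disconnect()
```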
These steps allow you to run the Python/Qt Wizard of Oz client on a laptop (separate from the PC) that the researcher controls during the experiment.
- Clone this repository to the laptop.
- Install Python and pip, then run `pip3 install PyQt5`.
- Install Tailscale and log in with your Google account.
- Replace the `SERVER_IP` variable in `src/qtwizard/wizardcontrol.py` with the IP you previously copied in Step 9 when setting up the ROS nodes (a minimal client sketch follows this list).
- Download OBS.
- To run the study, connect the laptop to the webcam in the room via a long USB cable. Create an OBS scene that adds an Audio Input Capture source with the webcam microphone and a Video Capture Device source with the webcam. Ensure the audio source from the room can be heard by going to Audio Mixer > Gear Icon > [Audio Input source name] > Audio Monitoring > Monitor and Output.
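To illustrate how the wizard client drives the server, here is a minimal PyQt5 sketch that sends an HTTP request when a button is clicked. The endpoint and payload mirror the hypothetical Flask route sketched earlier, not the actual protocol in `src/qtwizard/wizardcontrol.py`, and it assumes the `requests` library is installed.

```python
import sys

import requests
from PyQt5 import QtWidgets

SERVER_IP = "100.64.0.1"  # placeholder; use the Tailscale IP copied in Step 9

def send_speak_command():
    # Hypothetical endpoint and payload; the real protocol may differ.
    requests.post(f"http://{SERVER_IP}:5000/command",
                  json={"robot": "misty", "action": "speak"}, timeout=5)

app = QtWidgets.QApplication(sys.argv)
window = QtWidgets.QWidget()
layout = QtWidgets.QVBoxLayout(window)
button = QtWidgets.QPushButton("Misty: Speak")
button.clicked.connect(send_speak_command)
layout.addWidget(button)
window.setWindowTitle("Wizard sketch")
window.show()
sys.exit(app.exec_())
```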
Note: Modifying the client UI (optional, for development only) is a two-step process. After making changes to `qtwizard/rpwizard.ui` in Qt Designer, run `pyuic5 rpwizard.ui -o rpwizard_ui.py` to regenerate the Python UI file. New button functions can then be managed in `src/qtwizard/wizardcontrol.py`.
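After regenerating, the generated class is typically wired up along these lines; the `Ui_MainWindow` class name and `mistySpeakButton` widget name are assumptions for illustration, not the repository's actual identifiers.

```python
import sys

from PyQt5 import QtWidgets
from rpwizard_ui import Ui_MainWindow  # regenerated by pyuic5; class name is an assumption

class WizardControl(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        self.ui = Ui_MainWindow()
        self.ui.setupUi(self)
        # Connect a newly added button (hypothetical widget name) to its handler.
        self.ui.mistySpeakButton.clicked.connect(self.on_misty_speak)

    def on_misty_speak(self):
        print("Misty speak command triggered")

if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    window = WizardControl()
    window.show()
    sys.exit(app.exec_())
```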
- Turn on the PC, monitor, laptop, Misty robot, and Vector robot.
- On the PC's web browser, go to `<MISTY IP>/sdk/dashboard/index.html` (e.g., `192.168.0.219/sdk/dashboard/index.html`) and set Misty's speech pitch to `0.7`, speech rate to `0.95`, and voice to `English 20 (US)`. Test Misty's voice in the web browser after doing so. Note that Misty's IP address may change from time to time; if it does, use the phone app to check the new address.
- Ensure the PC is connected to the same network as Misty and Vector.
- On the PC, open OBS. If you are starting a new round of user studies, navigate to Sources and make sure that only B2, Timer Text Caption, Timer Text, Goal Text, B1, A4, and Monitor Background are visible (have the eye icon) and that everything else is hidden.
- On the PC, open a terminal and launch the main node and ROS wrapper backend; these steps connect both Misty and Vector:

  ```
  $ cd catkin_ws/src/role-playing-robots/
  $ roslaunch launch/rprobots.launch
  ```
- In OBS on the PC, click `Main Scene` in the Scenes panel. Then, in the middle left panel, right-click > Full Screen Preview > Monitor on the main scene.
- Move the mouse and keyboard connected to the server PC out of reach of the participant's seat.
- Ensure the lab laptop is connected to the Internet. Then navigate to `src/qtwizard/` in this repository and run `python3 wizardcontrol.py`.
- On the experimenter laptop, test that you can make both Misty and Vector speak by clicking the corresponding buttons in the UI.
- Open the Qualtrics survey bookmarked on the lab laptop, and press `Fn + F11` to make the survey full screen.
- Set up the camera and connect it to the lab laptop via the long USB cable.
- On the experimenter laptop, open OBS and adjust the camera angle to include both robots, the PC monitor, and the participant. Make sure you can hear the participant (in preparation for recording the interaction). If you can't hear the study room, check the audio source: in the Audio Mixer, right-click the Audio Input Capture item > Advanced Audio Properties > Audio Monitoring > Monitor and Output. If that still doesn't work, try relaunching OBS and/or checking Settings.
- Run the study by following the experiment script, setting up the survey and recording the study each time.
Our analysis of the study results can be reproduced using the scripts in this repository:
- Open `data-analysis/rp-data-analysis.Rmd` in RStudio, then click Knit to generate the `rp-data-analysis.html` file.
- Open the `rp-data-analysis.html` file to view the data analysis. The section number for each measure matches the measures section in the paper.
Pre-compiled results can also be viewed on the HRI Lab website.
When using this work, please include the following citation:
Spencer Ng, Ting-Han Lin, You Li, and Sarah Sebo. 2024. Role-Playing with Robot Characters: Increasing User Engagement through Narrative and Gameplay Agency. In Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (HRI '24), March 11–14, 2024, Boulder, CO, USA. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3610977.3634941