
squid-game-doll

An attempt to create a "Red Light, Green Light" robot inspired by the Squid Game TV series, using AI for player recognition and tracking. A moving doll signals the game phase. The first working version was demonstrated during Arduino Days 2025 at FabLab Bergamo (Italy). A future version with a laser pan&tilt platform is planned, in order to shoot moving players with a laser.

Gameplay

Players are expected to line up 8-10 m from the screen, stand in line during registration while their faces are saved, then move towards the finish line during green light. If they move during red light, they are eliminated and a fixed sum is added to the prize pool.

| Game phase | Screen | Doll |
| --- | --- | --- |
| Loading screen | loading screen (screenshot) | random movement, to attract the crowd |
| Registration (with 15 s countdown) | registration (screenshot) | facing, no eyes |
| Green light | game screen (screenshot) | rotated, no eyes |
| Red light | game screen (screenshot; note that player 1 has reached the finish line and is therefore shown green) | facing, red eyes |
| Elimination | player elimination (screenshot) | facing, red eyes |
| End game | prize distribution (screenshot) | facing, no eyes |

Open issues / Tasks

  • (DOLL) Build a 3D model for a doll with red LED eyes and a moving head
  • (VISION) How to combine laser red-dot recognition requirements (low exposure) with player recognition requirements (normal exposure)
  • (LASER SHOOTER) Maybe use a depth-estimation model to calculate the angles rather than adjusting based on a video stream
  • (GAMEPLAY) How to terminate the game (finish-line logic missing)
  • (GAMEPLAY) Decide whether to keep a player registration step
  • (GAMEPLAY) The sensitivity threshold for being shot is based on rectangle-center movements, so large moves are tolerated far from the camera but only very small ones close to it
  • (LASER SHOOTER) Laser pointing speed: slow to converge, about 10-15 s
  • (Various) Software quality: GitHub Actions for Python packaging, some basic automated tests

Hardware

  • Installation on Raspberry Pi 5 with AI Kit: see the dedicated file INSTALL.md. A PC can be used instead (best experience with CUDA GPU support)
  • ESP32C2 MINI Wemos board for servo control and doll control with MicroPython (see the esp32 folder)
  • Logitech HD PRO Webcam C920 on Windows 11 / Raspberry Pi 5
  • 1 × SG90 servomotor for head animation
  • 2 × red LEDs for eye animation
  • 3D-printable parts available in hardware/doll-model

For laser shooter (not yet working)

Dev tools used in this project

  • VS Code with Python extension
  • Thonny for ESP32 development (download: https://thonny.org/)
  • Python 3.11/3.12 with main libraries: opencv, ultralytics, hailo, numpy, pygame

Geometry of play space

  • Expected play area 10 x 10 m indoor
  • In order to hit a 50 cm wide target at 10 m, the laser must be precise to about 2.8° on the horizontal axis. This should be doable with standard servos and a 3D-printed pan&tilt platform for the laser (see the hardware folder).
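The 2.8° figure can be checked with basic trigonometry: the full horizontal angle subtended by a 50 cm wide target at 10 m is 2·atan(0.25 / 10). A quick sketch:

```python
import math

def subtended_angle_deg(target_width_m: float, distance_m: float) -> float:
    """Full horizontal angle subtended by a target at a given distance."""
    return math.degrees(2 * math.atan((target_width_m / 2) / distance_m))

angle = subtended_angle_deg(0.50, 10.0)  # ~2.86 degrees
```

So a pointing error of roughly ±1.4° still lands on the target, matching the precision requirement above.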

AI

  • For more details about the neural network model used for player recognition & tracking, see this article.

(VISION) Detecting the red laser dot

In order to reliably point the laser at the eliminated player, the laser dot position must be acquired, the positioning error calculated, and the angles corrected accordingly. In the example picture below, the red laser dot is found in the webcam image and a visor is drawn on top of the predicted position.

(screenshot: detected laser dot with visor overlay)

Current approach

  • Choose a channel from the webcam picture (R, G, B) or convert to grayscale.
  • Apply a threshold to the picture to keep only the brightest pixels
# cv2.threshold returns a (retval, image) tuple
_, diff_thr = cv2.threshold(channel, threshold, 255, cv2.THRESH_TOZERO)

Resulting image: (screenshot after thresholding)

  • Enlarge the remaining spots with dilation (a key ingredient!)
masked_channel = cv2.dilate(masked_channel, None, iterations=4)

Resulting image: (screenshot after dilation)

  • Look for circles using the Hough transform
circles = cv2.HoughCircles(masked_channel, cv2.HOUGH_GRADIENT, 1, minDist=50,
                           param1=50, param2=2, minRadius=3, maxRadius=10)

param2 is a very sensitive parameter. minRadius and maxRadius are dependent on webcam resolution and dilate step.

  • If more than one circle is found, increase the threshold (dichotomy search) and retry.
  • If no circles are found, decrease the threshold (dichotomy search) and retry.
  • If the threshold limits are reached, exit reporting no laser.
  • If exactly one circle is found, exit reporting the circle center coordinates.
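The dichotomy search above can be sketched as a simple bisection over the threshold value. `count_spots` is a hypothetical stand-in for the threshold / dilate / HoughCircles steps, returning the number of circles found at a given threshold:

```python
def find_laser_threshold(count_spots, lo=0, hi=255):
    """Bisection search for a threshold yielding exactly one spot.

    count_spots(threshold) -> number of circles detected after the
    threshold / dilate / Hough steps (hypothetical helper).
    Returns the threshold, or None when the limits are exhausted.
    """
    while lo <= hi:
        mid = (lo + hi) // 2
        n = count_spots(mid)
        if n == 1:
            return mid           # exactly one circle: done
        if n > 1:
            lo = mid + 1         # too many spots: raise the threshold
        else:
            hi = mid - 1         # no spots: lower the threshold
    return None                  # limits reached: report no laser
```

This converges in at most eight iterations over the 0-255 range, assuming the spot count decreases monotonically as the threshold rises.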

What is tricky about laser recognition

  • Webcam exposure is very important:

If the picture is too bright (auto settings tend to produce very bright images), laser detection fails. For this reason, the webcam is set to manual exposure and underexposed. An exposure calibration step is probably required, adjusting until the picture has the right average brightness. With a Logitech C920, results are OK in an interior room with exposure around (-10, -5) at 920x720 resolution. This will vary with different webcams.

| Overexposure (-4) | Under-exposure (-11) |
| --- | --- |
| (screenshot) | (screenshot) |
| (after threshold) | (after dilate) |
  • Exposure is somewhat dependent on the resolution and FPS requested from the webcam. I fixed these parameters in the webcam initialization step to avoid variability.
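An exposure calibration step could be a simple feedback loop over manual exposure values; `set_exposure` and `grab_frame` below are hypothetical camera helpers, and the target brightness is an assumption:

```python
import numpy as np

def calibrate_exposure(set_exposure, grab_frame,
                       target_brightness=60, tolerance=10,
                       exposures=range(-11, -3)):
    """Step through manual exposure values until the mean frame
    brightness lands near the target (hypothetical camera helpers)."""
    for exp in exposures:
        set_exposure(exp)
        mean = float(np.mean(grab_frame()))
        if abs(mean - target_brightness) <= tolerance:
            return exp
    return None  # no exposure produced an acceptable brightness
```

Running this once at startup would remove the dependence on a particular room and webcam model.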

  • Some surfaces absorb more light than others, so the brightest spot is not always the laser. Additional search algorithms are to be tested (e.g. checking maximum R-to-G / R-to-B color ratios)

  • A green laser seems to work better than a red laser on many surfaces. But it may be that my AliExpress green laser is simply more powerful.

  • The trial-and-error loop is slow. Another approach that helped me speed up testing is to generate pictures by adding fake laser spots (ellipses with variable red level/brightness) and compare the known position with the predicted one.
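Generating such test pictures can be sketched with numpy alone: paint a bright elliptical blob at a known position, then run the detector on the result and measure the position error. The function name and spot shape below are illustrative, not from the project code:

```python
import numpy as np

def add_fake_laser(frame, cx, cy, rx=4, ry=3, brightness=255):
    """Paint a bright elliptical 'laser spot' onto a grayscale frame
    so detector accuracy can be measured against a known position."""
    h, w = frame.shape
    ys, xs = np.ogrid[:h, :w]
    mask = ((xs - cx) / rx) ** 2 + ((ys - cy) / ry) ** 2 <= 1.0
    out = frame.copy()
    out[mask] = brightness
    return out

frame = np.full((120, 160), 40, dtype=np.uint8)  # dim background
test_img = add_fake_laser(frame, cx=80, cy=60)
```

The detector's reported center can then be compared against the injected (80, 60) to score its precision, without any webcam in the loop.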

Dev notes regarding laser detection

  • Very slow startup on x64 / Windows 11, fixed by disabling Media Foundation hardware transforms before importing cv2:
import os
os.environ["OPENCV_VIDEOIO_MSMF_ENABLE_HW_TRANSFORMS"] = "0"

import cv2
  • Image processing techniques that did not work:

| Attempt | Why it failed | What could be done |
| --- | --- | --- |
| Switching the laser on programmatically and subtracting images to find the spot | Even without a buffer, webcam images have latency > 250 ms, so difference images contain lots of changed pixels, especially with people in the scene | Solve the webcam latency and retry with fast laser switching (>25 Hz?). Check whether the laser really turns off immediately. |
| Laplace transform to find rapid variations around the spot | It is better suited to contour detection, and finds many rapid variations in normal interior scenes or on faces | ??? |
| HSV thresholds based on fixed values | The red laser is not fully red in the picture; white is present at the center | Implement adaptive thresholding on the V channel? |

The game itself

The game uses pygame as the rendering engine; see game.py.
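The phase sequence from the gameplay table can be sketched as a small state machine. This is a hypothetical simplification, not the actual game.py logic; the finish-line and elimination handling are still open issues:

```python
def next_phase(current: str, players_left: int) -> str:
    """Advance the game phase (illustrative sketch, not game.py code).

    loading -> registration -> green light <-> red light -> end
    """
    if current == "loading":
        return "registration"   # 15 s countdown, faces are saved
    if current == "registration":
        return "green_light"
    if players_left == 0:
        return "end"            # nobody left to play: prize distribution
    # alternate green and red light while players remain
    return "red_light" if current == "green_light" else "green_light"
```

A real implementation would also attach a duration and a doll pose (facing/rotated, eyes on/off) to each phase.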

(game screenshot)

Player detection (YOLO model)

Face detection for player board

  • mediapipe FaceDetection; see FaceExtractor.py
  • Used to create the player tiles on the left part of the screen
  • Quite slow, since it runs on the CPU

How to install/run on PC (see INSTALL.MD for Raspberry)

  • Create a venv and install the requirements listed in the src directory
pip install -r ./src/requirements.txt
  • Install CUDA support for NVIDIA GPU
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121 --force-reinstall
  • Run game
python ./src/squidgamedoll/SquidGame.py
  • Command-line argument examples: force monitor 0, webcam 0, enable the ESP32 tracker on IP 192.168.45.50
python ./src/squidgamedoll/SquidGame.py -m 0 -w 0 -t -i 192.168.45.50
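The flags shown above could be parsed with argparse along these lines; the long option names and help texts are assumptions, only the short flags appear in the example:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """CLI matching the example invocation (long names are guesses)."""
    p = argparse.ArgumentParser(description="Squid Game doll")
    p.add_argument("-m", "--monitor", type=int, default=0,
                   help="monitor index to display the game on")
    p.add_argument("-w", "--webcam", type=int, default=0,
                   help="webcam index for player tracking")
    p.add_argument("-t", "--tracker", action="store_true",
                   help="enable the ESP32 tracker")
    p.add_argument("-i", "--ip", default=None,
                   help="ESP32 tracker IP address")
    return p

args = build_parser().parse_args(["-m", "0", "-w", "0", "-t", "-i", "192.168.45.50"])
```

Parsing the example line yields `monitor=0`, `webcam=0`, `tracker=True`, `ip="192.168.45.50"`.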

How to profile Python and check what is slow

  • Use cProfile + snakeviz
pip install snakeviz
python -m cProfile -o game.prof .\src\squidgamedoll\game.py
snakeviz .\game.prof

Webcam info
