Added memory usage table in readme and bash scripts for building and running docker from terminal #17

Open: wants to merge 2 commits into base `main`
16 changes: 14 additions & 2 deletions README.md
@@ -1,6 +1,6 @@
# Whole-Body Humanoid MPC

This repository contains a Whole-Body Nonlinear Model Predictive Controller (NMPC) for humanoid loco-manipulation control. The approach directly optimizes through the **full-order torque-level dynamics in real time** to generate a wide range of humanoid behaviors, building on an [extended & updated version of ocs2](https://github.com/manumerous/ocs2_ros2).
This repository contains a Whole-Body Nonlinear Model Predictive Controller (NMPC) for humanoid loco-manipulation control. The approach directly optimizes through the **full-order torque-level dynamics in real time** to generate a wide range of humanoid behaviors, building on an [updated version of ocs2](https://github.com/manumerous/ocs2_ros2).

**Interactive Velocity and Base Height Control via Joystick:**
![Screencast2024-12-16180254-ezgif com-optimize(3)](https://github.com/user-attachments/assets/d4b1f0da-39ca-4ce1-b53c-e1d040abe1be)
@@ -46,7 +46,7 @@ The project supports either a Dockerized workspace (recommended) or a local install
<details>
<summary>Dockerized Workspace</summary>

We provide a [Dockerfile](https://github.com/manumerous/wb_humanoid_mpc/blob/main/docker/Dockerfile) to enable running and developing the project from a containerized environment. Check out the [devcontainer.json](https://github.com/manumerous/wb_humanoid_mpc/blob/main/.devcontainer/devcontainer.json) for the arguments that must be supplied to the `docker build` and `docker run` commands.
We provide a [Dockerfile](https://github.com/manumerous/wb_humanoid_mpc/blob/main/docker/Dockerfile) to enable running and developing the project from a containerized environment. Check out the [devcontainer.json](https://github.com/manumerous/wb_humanoid_mpc/blob/main/.devcontainer/devcontainer.json) for the arguments that must be supplied to the `docker build` and `docker run` commands. This repository includes two helper scripts: `image_build.bash` builds the `wb-humanoid-mpc:dev` Docker image using the arguments defined in `devcontainer.json`, and `launch_wb_mpc.bash` starts the Docker container, mounts your workspace, and drops you into a Bash shell ready to build and run the WB Humanoid MPC code.

For working in **Visual Studio Code**, we recommend to install the [Dev Containers](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) extension. Then, with the root of this repository as the root of your VS Code workspace, enter `Ctrl + Shift + P` and select `Dev Containers: Rebuild and Reopen in Container` at the top of the screen. VS Code will then automatically handle calling the `docker build` and `docker run` commands for you and will reopen the window at the root of the containerized workspace. Once this step is completed, you are ready to [build and run the code](https://github.com/manumerous/wb_humanoid_mpc/tree/main?tab=readme-ov-file#building-the-mpc).

@@ -65,6 +65,18 @@ envsubst < dependencies.txt | xargs sudo apt install -y
```
</details>

## Build RAM Usage by PARALLEL_JOBS
**Owner:**
I would make this a subparagraph of `### Building the MPC`.

I think we should also make the README independent from any specific computer setup. I would write something like:

> Warning: Building the Whole-Body Humanoid MPC has a large memory requirement, and we recommend saving critical data before attempting the build for the first time. If you have access to more RAM, we recommend raising the number of parallel build jobs by setting the environment variable `PARALLEL_JOBS` as follows:

| PARALLEL_JOBS | Peak RAM Used |
|--------------:|--------------:|
| 2 (default)   | 16 GiB        |
| 4             | 32 GiB        |
| 6+            | 64 GiB        |

**Owner:**
I just benchmarked the latest build on my machine (AMD Ryzen 9 7950X 16-core processor, 128 GB RAM) and honestly I could not see a large difference in build time.

Building from scratch I got:

  • 1 min 39 s on 2 parallel jobs
  • 1 min 18 s on 6 parallel jobs

So we should just make 2 parallel jobs the new default in the makefile. Would you like to add that to your PR?

In parallel we could also think about switching MuJoCo from building from scratch to linking against prebuilt binaries. That would probably cut the build time by 40% and drastically reduce the RAM needed to run this.

**Owner:**
I think that is something for a separate PR though

**Author:**
```bash
nohup bash -c 'while sleep 1; do date +"%T"; free -h; done' > ram_N2.log 2>&1 &
```

I used the above command to continuously monitor memory usage once per second and store it in a log file. After building the repo, you can extract the peak RAM used during the build (here for `PARALLEL_JOBS=2`) with the following command:

```bash
grep '^Mem:' ram_N2.log | awk '{print $3}' | sort -h | tail -1
```
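As a sanity check, the extraction pipeline can be exercised on a small hand-written sample of `free -h` output (the values below are made up for illustration):

```shell
# Create a tiny sample log in the same shape free -h emits (made-up values)
cat > /tmp/ram_sample.log <<'EOF'
18:00:01
Mem:            15Gi       8.1Gi       2.0Gi       1.0Gi       5.4Gi       6.9Gi
18:00:02
Mem:            15Gi        11Gi       1.0Gi       1.0Gi       3.5Gi       4.0Gi
18:00:03
Mem:            15Gi       9.5Gi       1.5Gi       1.0Gi       4.5Gi       5.5Gi
EOF

# Same pipeline as above: take the "used" column, sort human-readable, keep the max
grep '^Mem:' /tmp/ram_sample.log | awk '{print $3}' | sort -h | tail -1   # prints 11Gi
```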

**Author @moavia90 (Jun 5, 2025):**
To use the prebuilt MuJoCo binaries, do I need to download and install the full MuJoCo SDK?

**Author @moavia90 (Jun 6, 2025):**
For `PARALLEL_JOBS`, dynamically setting the value based on available system RAM would be the best way to go. On my system `PARALLEL_JOBS=1` built without any hanging and was the best choice, as with `PARALLEL_JOBS=2` my system hung for ~5–7 min and then resumed.

The following shell script, run before building, can help identify the best number of parallel jobs for a specific system.

```bash
total_gb=$(free -g | awk '/^Mem:/ {print $2}')

if [ "$total_gb" -ge 64 ]; then
  PARALLEL_JOBS=6
elif [ "$total_gb" -ge 32 ]; then
  PARALLEL_JOBS=4
elif [ "$total_gb" -ge 16 ]; then
  PARALLEL_JOBS=2
else
  PARALLEL_JOBS=1
fi

echo "Detected ${total_gb} GB RAM, using PARALLEL_JOBS=${PARALLEL_JOBS}"
make build-all PARALLEL_JOBS="$PARALLEL_JOBS"
```

On my system this prints:

```
Detected 15 GB RAM, using PARALLEL_JOBS=1
```


| PARALLEL_JOBS | Peak RAM Used | System Response |
|--------------:|--------------:|----------------------------------------------------|
| 1 | 11 GiB | Completed cleanly |
| 2 | 14 GiB | Hung for ~5–7 min, then resumed |
| 4 | 14 GiB | Froze completely → needed forced power‑off |
| 6 | 14 GiB | Froze completely → needed forced power‑off |

> **Note:** All of these builds were attempted inside a VS Code Dev Container but kept crashing. The image was therefore built and run from a **terminal** Docker session, and all RAM measurements come from that terminal-based setup.
> Tested on: HP Victus (i5-11400H, RTX 3050, Ubuntu 22.04, 16 GB RAM)
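The RAM-to-jobs mapping from the script above can also be wrapped in a small reusable function (a sketch using the same thresholds; the name `jobs_for_ram` is my own, not from the PR):

```shell
# Map total RAM in GB to a parallel-jobs count, using the thresholds above
jobs_for_ram() {
  local total_gb=$1
  if   [ "$total_gb" -ge 64 ]; then echo 6
  elif [ "$total_gb" -ge 32 ]; then echo 4
  elif [ "$total_gb" -ge 16 ]; then echo 2
  else                              echo 1
  fi
}

jobs_for_ram 15    # prints 1 (a "16 GB" laptop typically reports 15 GB usable)
jobs_for_ram 128   # prints 6
```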

### Building the MPC

38 changes: 38 additions & 0 deletions image_build.bash
@@ -0,0 +1,38 @@
#!/usr/bin/env bash
# image_build.bash - Build the WB Humanoid MPC Docker image
# Reflects the "build" section of .devcontainer/devcontainer.json
set -euo pipefail

# Paths (relative to this script)
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Assumes this script is in the project root; adjust if needed
CONTEXT="${SCRIPT_DIR}"
DOCKERFILE="${SCRIPT_DIR}/docker/Dockerfile"
TARGET="base"

# Build arguments (from devcontainer.json)
# Workspace directory inside container
WB_HUMANOID_MPC_DIR="/wb_humanoid_mpc_ws"
PYTHON_VERSION="3.12"
USER_ID="$(id -u)"
GROUP_ID="$(id -g)"
GIT_USER_NAME="$(git config --global user.name || echo '')"
GIT_USER_EMAIL="$(git config --global user.email || echo '')"

# Image tag
IMAGE_TAG="wb-humanoid-mpc:dev"

# Build command
docker build \
  --file "${DOCKERFILE}" \
  --target "${TARGET}" \
  --build-arg WB_HUMANOID_MPC_DIR="${WB_HUMANOID_MPC_DIR}" \
  --build-arg PYTHON_VERSION="${PYTHON_VERSION}" \
  --build-arg USER_ID="${USER_ID}" \
  --build-arg GROUP_ID="${GROUP_ID}" \
  --build-arg GIT_USER_NAME="${GIT_USER_NAME}" \
  --build-arg GIT_USER_EMAIL="${GIT_USER_EMAIL}" \
  --tag "${IMAGE_TAG}" \
  "${CONTEXT}"

echo "Built image: ${IMAGE_TAG}"
45 changes: 45 additions & 0 deletions launch_wb_mpc.bash
@@ -0,0 +1,45 @@
#!/usr/bin/env bash
**Owner:**

Could you give me some context on why this file is needed? It seems to me like it is installing dependencies rather than launching the WB MPC. So if it is needed, the naming could be made more precise.

**Author @moavia90 (Jun 5, 2025):**

The helper script `launch_wb_mpc.bash` isn't actually installing any of the ROS or MPC dependencies. It runs the Docker image with various flags:

1. By calling `--gpus all` (plus the X11/Xauthority dance), it makes the host's GPU (an RTX 3050 in my case) visible inside the container.

2. The three bind mounts

   ```bash
   -v "${HOST_WS}:/wb_humanoid_mpc_ws:cached"
   -v "${BUILD_WS}:/wb_humanoid_mpc_ws/build:cached"
   -v "${INSTALL_WS}:/wb_humanoid_mpc_ws/install:cached"
   ```

   mirror the local workspace inside the container, i.e.

   ```
   wb_humanoid_mpc_ws/
   ├─ src/
   ├─ build/
   └─ install/
   ```

3. User permissions:

   ```bash
   bash -c "chown -R ubuntu:ubuntu /wb_humanoid_mpc_ws/build /wb_humanoid_mpc_ws/install && exec su ubuntu -c bash"
   ```

   If you create a file or folder inside the container, it ends up owned by root on the host, so after exiting the container you cannot edit or delete it without `sudo`. The command above eliminates that problem.

4. With `-u root`, you automatically have full root privileges inside the container.
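An alternative worth noting (my suggestion, not part of this PR): passing the host user's UID:GID to `docker run` makes files created in the container owned by the host user from the start, which would remove the need for the `chown` + `su` step:

```shell
# Build the UID:GID string for the host user. Passing it via `docker run -u`,
# e.g. docker run --rm -it -u "${RUN_USER}" ... wb-humanoid-mpc:dev bash,
# avoids root-owned files appearing in the mounted workspace.
RUN_USER="$(id -u):$(id -g)"
echo "${RUN_USER}"
```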

set -euo pipefail

# Allow GUI applications
xhost +SI:localuser:root

# Generate Xauthority file for X11 forwarding
XAUTH=/tmp/.docker.xauth
if [ ! -f "${XAUTH}" ]; then
  touch "${XAUTH}"
  xauth nlist "${DISPLAY}" \
    | sed -e 's/^..../ffff/' \
    | xauth -f "${XAUTH}" nmerge -
  chmod a+r "${XAUTH}"
fi

# Host workspace root (two levels up from the current working directory)
HOST_WS="$(realpath "${PWD}/../..")"
BUILD_WS="${HOST_WS}/build"
INSTALL_WS="${HOST_WS}/install"

# Run the container, mounting the entire workspace
# Bind mounts: src/, build/, and install/ persist on host

docker run --rm -it \
  --name wb-mpc-dev \
  --gpus all \
  --net host \
  --privileged \
  -u root \
  -e DISPLAY \
  -e QT_X11_NO_MITSHM=1 \
  -e XAUTHORITY="${XAUTH}" \
  -e XDG_RUNTIME_DIR="${XDG_RUNTIME_DIR:-/tmp}" \
  -v /tmp/.X11-unix:/tmp/.X11-unix:rw \
  -v "${XAUTH}:${XAUTH}:rw" \
  -v "${HOST_WS}:/wb_humanoid_mpc_ws:cached" \
  -v "${BUILD_WS}:/wb_humanoid_mpc_ws/build:cached" \
  -v "${INSTALL_WS}:/wb_humanoid_mpc_ws/install:cached" \
  --workdir /wb_humanoid_mpc_ws \
  wb-humanoid-mpc:dev \
  bash -c "chown -R ubuntu:ubuntu /wb_humanoid_mpc_ws/build /wb_humanoid_mpc_ws/install && exec su ubuntu -c bash"

echo "Done."