Docker image with tools for Gazebo classic and Bebop. (#3608)

* Docker to run Gazebo Simulation on Ubuntu 24.04
* Add bebop compiler to docker image

---------

Co-authored-by: Aramov1 <acarmolopes@tudelft.nl>
Author: Christophe De Wagter (committed by GitHub)
Date: 2026-02-20 20:09:18 +01:00
parent 979c01cd7c
commit 10040de4cd
5 changed files with 602 additions and 0 deletions

.gitignore (+1 line)

@@ -54,6 +54,7 @@ paparazzi.sublime-workspace
# VSCode IDE
.vscode/ipch/
.vscode/settings.json
# Vagrant VM files
/.vagrant

docker/gazebo-classic/Dockerfile

@@ -0,0 +1,97 @@
FROM ubuntu:22.04
LABEL maintainer="paparazzi-gazebo-classic-sim"
# ── Init ─────────────────────────────────────────────────────────────
ENV TINI_VERSION=v0.19.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
# ── Locale & timezone ────────────────────────────────────────────────
ENV LANG=C.UTF-8
ENV TZ=Etc/UTC
ARG DEBIAN_FRONTEND=noninteractive
RUN ln -fs /usr/share/zoneinfo/$TZ /etc/localtime \
    && apt-get update \
    && apt-get install -y --no-install-recommends tzdata \
    && rm -rf /var/lib/apt/lists/*
# ── Repositories ─────────────────────────────────────────────────────
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        software-properties-common curl lsb-release gnupg git ca-certificates \
    && rm -rf /var/lib/apt/lists/* \
    # Paparazzi PPA
    && add-apt-repository -y ppa:paparazzi-uav/ppa \
    # kisak-mesa: Mesa 24.x backport — required for Intel Arc (Meteor Lake) iris driver
    && add-apt-repository -y ppa:kisak/kisak-mesa \
    # OSRF: Gazebo Classic 11
    && curl -sSL https://packages.osrfoundation.org/gazebo.gpg \
        -o /usr/share/keyrings/pkgs-osrf-archive-keyring.gpg \
    && echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/pkgs-osrf-archive-keyring.gpg] \
        http://packages.osrfoundation.org/gazebo/ubuntu-stable $(lsb_release -cs) main" \
        > /etc/apt/sources.list.d/gazebo-stable.list
# ── All packages in one layer ─────────────────────────────────────────
RUN apt-get update && apt-get install -y --no-install-recommends \
    # Paparazzi build & GCS
    paparazzi-dev \
    pprzgcs \
    # Host build tools
    build-essential \
    cmake \
    python-is-python3 \
    python3-future \
    python3-lxml \
    python3-numpy \
    python3-opencv \
    # Unified Mocap Router
    libboost-program-options-dev \
    libboost-filesystem-dev \
    # Gazebo Classic 11
    gazebo \
    libgazebo-dev \
    # Mesa 24.x for Intel Arc GPU (iris driver)
    mesa-utils \
    libgl1-mesa-dri \
    libglx-mesa0 \
    # X11 / GUI
    x11-apps \
    dbus-x11 \
    libcanberra-gtk3-module \
    # Joystick
    joystick \
    libsdl2-dev \
    # Audio (PulseAudio client only — server runs on host)
    libpulse0 \
    pulseaudio-utils \
    # ARM cross-compiler for the Parrot Bebop
    gcc-arm-linux-gnueabi \
    g++-arm-linux-gnueabi \
    libc6-dev-armel-cross \
    linux-libc-dev-armel-cross \
    binutils-arm-linux-gnueabi \
    # Video (RTP viewer)
    vlc \
    # Speech synthesis (PprzGCS uses --speech for command confirmations)
    speech-dispatcher \
    espeak-ng \
    # Permission helper
    gosu \
    vim \
    && rm -rf /var/lib/apt/lists/*
# ── Runtime environment ───────────────────────────────────────────────
ENV PULSE_SERVER=/run/pulse/native
# ── User setup ───────────────────────────────────────────────────────
ENV USERNAME=pprz
ENV USER_ID=1000
RUN groupadd -f input \
    && useradd --shell /bin/bash -u $USER_ID -o -c "Paparazzi Docker user" -m $USERNAME \
    && usermod -aG sudo,dialout,plugdev,input $USERNAME
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/tini", "--", "/usr/local/bin/entrypoint.sh"]
CMD ["bash"]

docker/gazebo-classic/README.md

@@ -0,0 +1,336 @@
# Running Paparazzi + Gazebo Classic Simulation via Docker (Ubuntu 24.04 Host)
Gazebo Classic (gazebo11) is not available on Ubuntu 24.04. This Docker setup
runs the full simulation stack (Paparazzi Center, GCS, NPS simulator, Gazebo
Classic) inside an Ubuntu 22.04 container with GUI forwarding to your host.
---
## Prerequisites
### 1. Install Docker
```bash
# Install Docker Engine (not Docker Desktop)
sudo apt-get update
sudo apt-get install -y docker.io
# Add your user to the docker group (avoids needing sudo)
sudo usermod -aG docker $USER
# Log out and log back in, then verify:
docker run hello-world
```
### 2. GPU Drivers (for Gazebo 3D rendering)
**Intel/AMD (Mesa) — usually works out of the box:**
```bash
# Verify DRI is available
ls /dev/dri/
# Should show: card0 renderD128
# If your card names differ, update them accordingly in run_gazebo_sim.sh
```
**NVIDIA — requires nvidia-container-toolkit:**
```bash
# Install NVIDIA Container Toolkit
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \
| sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
| sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
| sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```
---
## Quick Start
### Step 1: Build the Docker Image
```bash
cd paparazzi/docker/gazebo-classic
docker build -t paparazziuav/pprz-gazebo-classic .
```
This takes ~10-15 minutes the first time (downloads Ubuntu 22.04 + all
dependencies + Gazebo Classic 11).
### Step 2: Initialize Git Submodules
Make sure the TU Delft Gazebo models are present (needed for CyberZoo worlds):
```bash
cd paparazzi/
git submodule init
git submodule sync
git submodule update sw/ext/tudelft_gazebo_models
```
### Step 3: Launch the Container
```bash
cd paparazzi/docker/gazebo-classic
# Option A: Interactive shell (recommended for first run)
./run_gazebo_sim.sh
# Option B: Launch Paparazzi Center directly
./run_gazebo_sim.sh ./paparazzi
```
### Step 4: Build and Run Simulation (inside the container)
```bash
# If you used Option A (interactive shell):
# First time only — build Paparazzi
make clean
make
# Launch Paparazzi Center
./paparazzi
```
Then in the Paparazzi Center GUI:
1. Select the aircraft: **bebop_orange_avoid** (or your target aircraft)
2. Select the target: **nps** (in the "Target" dropdown)
3. Click **Build** to compile the NPS simulator with Gazebo Classic
4. Select session: **Simulation**
5. Click **Execute** to start all components
This launches:
- **NPS Simulator** — runs the Gazebo Classic physics + the autopilot
- **Server** — IVY bus message broker
- **GCS (PprzGCS)** — Ground Control Station GUI
- **Data Link** — telemetry bridge
---
## Architecture Overview
```
┌─── Ubuntu 24.04 Host ─────────────────────────────────────┐
│ │
│ X11 Server (Wayland/X) ◄──── GUI windows forwarded │
│ PulseAudio Server ◄──── Audio forwarded │
│ /dev/input/js* ◄──── Joystick shared │
│ │
│ ┌─── Docker Container (Ubuntu 22.04) ──────────────────┐ │
│ │ │ │
│ │ Paparazzi Center │ │
│ │ ├── NPS Simulator (simsitl) │ │
│ │ │ └── Gazebo Classic 11 (embedded server) │ │
│ │ ├── Server (IVY bus) │ │
│ │ ├── GCS (PprzGCS) │ │
│ │ └── Data Link │ │
│ │ │ │
│ │ --network=host (shares host network stack) │ │
│ │ IVY bus broadcasts on 127.255.255.255:2010 │ │
│ │ │ │
│ └──────────────────────────────────────────────────────┘ │
│ │
│ /paparazzi (source) ←──volume mount──→ container │
│ │
└────────────────────────────────────────────────────────────┘
```
Key design choices:
- **`--network=host`**: The IVY bus uses UDP broadcast on `127.255.255.255`.
Host networking lets IVY messages flow freely between container processes
(and optionally to host-side tools).
- **Volume mount**: Your paparazzi source tree is mounted into the container.
Build artifacts persist on your host filesystem.
- **UID/GID mapping**: The entrypoint script matches the container user to
your host UID/GID so file permissions stay correct.
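The UID/GID mapping boils down to a simple comparison: remap only when the container user's IDs differ from the host's. A minimal sketch of that decision (function name is illustrative, not from the repo):

```bash
#!/bin/bash
# Sketch of the remap decision made by entrypoint.sh:
# keep the container user as-is when IDs already match, remap otherwise.
decide_remap() {
    local have_uid="$1" want_uid="$2"
    if [ "$have_uid" = "$want_uid" ]; then
        echo "keep UID $have_uid"
    else
        echo "remap UID $have_uid -> $want_uid"
    fi
}
decide_remap 1000 1001   # remap UID 1000 -> 1001
```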
---
## Networking Details
### IVY Bus Communication
All Paparazzi components communicate via the **IVY bus** — a lightweight
publish-subscribe protocol over UDP broadcast.
- Default broadcast address: `127.255.255.255` (Linux)
- Default port: `2010`
- The `--network=host` flag shares the host's network stack with the
container, so IVY messages are visible to both sides.
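Tools that take an IVY bus argument expect it in `address:port` form; a minimal sketch of composing it from the defaults above (variable names are illustrative, not from the repo):

```bash
#!/bin/bash
# Compose the default Linux IVY bus string from its parts.
IVY_ADDR=${IVY_ADDR:-127.255.255.255}   # loopback broadcast (Linux default)
IVY_PORT=${IVY_PORT:-2010}              # default IVY port
IVY_BUS="${IVY_ADDR}:${IVY_PORT}"
echo "$IVY_BUS"   # 127.255.255.255:2010
```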
**Running tools on the host alongside Docker:**
Because of `--network=host`, you can run IVY-compatible tools on the host
that communicate with the simulation inside Docker. For example, you could
run a custom GCS or logging tool on the host while the simulation runs in
the container.
### If You Cannot Use Host Networking
If host networking is not an option (e.g., on Docker Desktop for Mac/Windows),
you would need to:
1. Run ALL Paparazzi components inside the container (which this setup does)
2. Or configure IVY to use a specific multicast address and expose it via
Docker port mappings (more complex, not recommended)
---
## Joystick Support
The run script automatically detects and passes joystick devices
(`/dev/input/js*`, `/dev/input/event*`) into the container.
### Verify Joystick Inside Container
```bash
# Inside the container
ls /dev/input/js* # should list your joystick
jstest /dev/input/js0 # test joystick input
# Use joystick with NPS simulator
# The NPS simulator accepts: --js_dev <index>
# This is passed automatically when using the Paparazzi Center
```
### Disable Joystick Passthrough
```bash
DISABLE_JOYSTICK=1 ./run_gazebo_sim.sh
```
---
## Audio Support
Audio from the container (e.g., Gazebo sounds) is forwarded to your host's
PulseAudio server via a socket mount.
### Verify Audio Inside Container
```bash
# Inside the container
pactl info # should show connection to host PulseAudio
paplay /usr/share/sounds/freedesktop/stereo/bell.oga # test sound
```
### Disable Audio
```bash
DISABLE_AUDIO=1 ./run_gazebo_sim.sh
```
---
## GPU Acceleration
Gazebo Classic needs OpenGL for 3D rendering. The run script handles two cases:
### Intel / AMD (default)
The script detects `/dev/dri` and passes it through. No extra setup needed.
```bash
# Verify inside container
glxinfo | grep "OpenGL renderer"
# Should show your GPU, NOT llvmpipe/software
```
### NVIDIA
Set the `GPU_TYPE` environment variable:
```bash
GPU_TYPE=nvidia ./run_gazebo_sim.sh
```
This uses `--gpus all` which requires `nvidia-container-toolkit` (see
Prerequisites above).
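The selection between the two cases reduces to roughly the following (a sketch of the logic in `run_gazebo_sim.sh`, with `GPU_TYPE` forced to `nvidia` for the sake of the example):

```bash
#!/bin/bash
# NVIDIA wins if requested; otherwise fall back to DRI passthrough if present.
GPU_TYPE=nvidia
GPU_OPTS=()
if [ "$GPU_TYPE" = "nvidia" ]; then
    GPU_OPTS+=(--gpus all)               # requires nvidia-container-toolkit
elif [ -d /dev/dri ]; then
    GPU_OPTS+=(--device=/dev/dri/card0)  # Mesa: pass the DRM node through
fi
echo "${GPU_OPTS[@]}"   # --gpus all
```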
---
## Troubleshooting
### "KeyError: 'XDG_SESSION_TYPE'" when running `./paparazzi`
The Paparazzi launcher checks `XDG_SESSION_TYPE` to detect Wayland. Inside
Docker there is no desktop session, so this variable is missing. The run
script sets `XDG_SESSION_TYPE=x11` automatically. If you see this error,
make sure you launched the container via `run_gazebo_sim.sh` (not plain
`docker run`).
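If you do end up in a shell started with plain `docker run`, exporting the variable by hand before launching works too (sketch):

```bash
#!/bin/bash
# Inside the container: fake an X11 session type for the launcher's check.
export XDG_SESSION_TYPE=x11
echo "$XDG_SESSION_TYPE"   # x11
```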
### "cannot open display" or blank GUI windows
```bash
# On the host, allow X connections from Docker:
xhost +local:docker
# Or for Wayland (Ubuntu 24.04 default):
# Make sure XWayland is running — it usually is by default.
# If still failing, try switching to an X11 session at login.
```
### Gazebo starts but shows black screen / software rendering
```bash
# Check GPU access inside container
glxinfo | grep "OpenGL renderer"
# If it says "llvmpipe" → GPU passthrough is not working
# For Intel/AMD: make sure /dev/dri exists and is readable
# For NVIDIA: make sure nvidia-container-toolkit is installed
```
### "paparazzi-dev package not found"
The Paparazzi PPA may not have packages for all Ubuntu versions. Since the
Dockerfile uses Ubuntu 22.04, this should work. If it fails, the PPA may
be temporarily down — retry later.
### Build errors about "pkg-config gazebo"
This means Gazebo Classic headers are not found. Inside the container, verify:
```bash
pkg-config --modversion gazebo
# Should print: 11.x.x
```
### IVY bus messages not reaching between processes
```bash
# Verify host networking is active
docker inspect <container_id> | grep NetworkMode
# Should show: "host"
# Test IVY connectivity inside container
ivy-c-logger & # if available, shows IVY messages
```
### Permission denied on /dev/input or /dev/dri
```bash
# Add your user to the input group on the host
sudo usermod -aG input $USER
# Log out and back in
# Or run the container with --privileged (not recommended for daily use)
```
### Build artifacts from container don't match host architecture
The build happens inside Ubuntu 22.04 (same x86_64 arch), so binaries are
compatible. However, if you also build natively on the host, the two builds
may conflict. Use `make clean` when switching between host and Docker builds.
---
## Environment Variables Reference
| Variable | Default | Description |
|---------------------|---------|--------------------------------------------|
| `GPU_TYPE` | (auto) | Set to `nvidia` for NVIDIA GPU support |
| `DISABLE_USB` | (unset) | Set to `1` to skip USB device passthrough |
| `DISABLE_JOYSTICK` | (unset) | Set to `1` to skip joystick passthrough |
| `DISABLE_AUDIO` | (unset) | Set to `1` to skip PulseAudio passthrough |
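The `DISABLE_*` toggles are only tested for being set at all (`[ -z … ]` in `run_gazebo_sim.sh`), so any non-empty value disables the feature. A quick sketch:

```bash
#!/bin/bash
# Any non-empty value (1, yes, true, ...) turns the feature off.
DISABLE_AUDIO=1
if [ -z "${DISABLE_AUDIO}" ]; then
    echo "audio passthrough on"
else
    echo "audio passthrough off"
fi
```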
---
## File Structure
```
docker/gazebo-classic/
├── Dockerfile # Ubuntu 22.04 + Gazebo Classic 11 + all deps
├── entrypoint.sh # UID/GID mapping for volume permissions
├── run_gazebo_sim.sh # Launch script with X11/GPU/audio/joystick
└── README.md # This guide
```

docker/gazebo-classic/entrypoint.sh

@@ -0,0 +1,23 @@
#!/bin/bash
# Match container user UID/GID with host user to avoid permission issues
USER_NAME=${USERNAME:-pprz}
USER_ID=${LOCAL_USER_ID:-1000}
GROUP_ID=${LOCAL_GROUP_ID:-1000}
if id -u "$USER_NAME" > /dev/null 2>&1; then
    echo "Starting with UID: $USER_ID and GID: $GROUP_ID"
    groupmod -o --gid "$GROUP_ID" "$USER_NAME"
    # Rewrite both UID and GID fields so /etc/passwd matches the remapped group
    sed -i "s/$USER_NAME:x:[0-9]*:[0-9]*:/$USER_NAME:x:$USER_ID:$GROUP_ID:/" /etc/passwd
else
    echo "Adding new user $USER_NAME with UID: $USER_ID and GID: $GROUP_ID"
    useradd --shell /bin/bash -u "$USER_ID" -o -c "" -m "$USER_NAME"
fi
export HOME=/home/$USER_NAME
# Start speech-dispatcher as the target user so PprzGCS --speech doesn't block.
# Must run as pprz (not root) to avoid creating ~/.config/ with wrong ownership.
gosu "$USER_NAME" speech-dispatcher --spawn 2>/dev/null || true
exec gosu "$USER_NAME" "$@"

docker/gazebo-classic/run_gazebo_sim.sh

@@ -0,0 +1,145 @@
#!/bin/bash
#
# Run Paparazzi with Gazebo Classic simulation inside Docker.
#
# Usage:
# ./run_gazebo_sim.sh # interactive bash shell
# ./run_gazebo_sim.sh ./paparazzi # launch Paparazzi Center directly
#
# Environment variables:
# DISABLE_USB=1 - skip USB device passthrough
# DISABLE_JOYSTICK=1 - skip joystick passthrough
# DISABLE_AUDIO=1 - skip PulseAudio passthrough
# GPU_TYPE=nvidia - use NVIDIA GPU (requires nvidia-container-toolkit)
#
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PAPARAZZI_SRC="$(readlink -m "$SCRIPT_DIR/../..")"
IMAGE_NAME="paparazziuav/pprz-gazebo-classic"
PPRZ_HOME_CONTAINER="/home/pprz/paparazzi"
# Default: interactive bash if no arguments
if [ $# -lt 1 ]; then
    CMD="bash"
else
    CMD="$@"
fi
echo "============================================="
echo " Paparazzi + Gazebo Classic (Docker)"
echo "============================================="
echo " Paparazzi source: $PAPARAZZI_SRC"
echo " Command: $CMD"
echo "============================================="
# ── X11 Display Forwarding ───────────────────────────────────────────
XSOCK=/tmp/.X11-unix
XAUTH=/tmp/.docker.xauth.$$
touch "$XAUTH"
# Remove the temporary xauth file on exit, even if docker run fails (set -e)
trap 'rm -f "$XAUTH"' EXIT
xauth nlist "$DISPLAY" | sed -e 's/^..../ffff/' | xauth -f "$XAUTH" nmerge - 2>/dev/null
X_OPTS=(
    --volume="$XSOCK:$XSOCK"
    --volume="$XAUTH:$XAUTH"
    --env="XAUTHORITY=$XAUTH"
    --env="DISPLAY=$DISPLAY"
    --env="XDG_SESSION_TYPE=x11"
    --env="XDG_RUNTIME_DIR=/tmp/runtime-pprz"
    --env="LIBGL_DRI3_DISABLE=1"
)
# ── GPU Acceleration ─────────────────────────────────────────────────
# The render group owns /dev/dri/renderD128 (mode crw-rw----).
# We pass the host's render and video GIDs into the container so the
# container user can open the DRM device for hardware rendering.
GPU_OPTS=()
if [ "$GPU_TYPE" = "nvidia" ]; then
    echo "[GPU] Using NVIDIA runtime"
    GPU_OPTS+=(--gpus all --env NVIDIA_DRIVER_CAPABILITIES=all)
elif [ -d /dev/dri ]; then
    echo "[GPU] Using Mesa/Intel/AMD (DRI)"
    GPU_OPTS+=(--device=/dev/dri/card0 --device=/dev/dri/renderD128)
    RENDER_GID=$(getent group render 2>/dev/null | cut -d: -f3)
    VIDEO_GID=$(getent group video 2>/dev/null | cut -d: -f3)
    # Use plain if-statements: a failing `[ … ] && …` guard would abort
    # the script under `set -e` when a group is missing.
    if [ -n "$RENDER_GID" ]; then
        GPU_OPTS+=(--group-add="$RENDER_GID")
        echo "[GPU] Added render group (GID $RENDER_GID)"
    fi
    if [ -n "$VIDEO_GID" ]; then
        GPU_OPTS+=(--group-add="$VIDEO_GID")
        echo "[GPU] Added video group (GID $VIDEO_GID)"
    fi
fi
# ── Networking (host mode for IVY bus) ───────────────────────────────
# IVY bus uses UDP broadcast on 127.255.255.255 — host networking is
# the simplest way to allow container ↔ host IVY communication.
NET_OPTS=(--network=host)
# ── PulseAudio ───────────────────────────────────────────────────────
AUDIO_OPTS=()
if [ -z "$DISABLE_AUDIO" ]; then
    USER_UID=$(id -u)
    PULSE_SOCK="/run/user/${USER_UID}/pulse"
    if [ -d "$PULSE_SOCK" ]; then
        echo "[Audio] PulseAudio passthrough enabled"
        AUDIO_OPTS+=(--volume="$PULSE_SOCK:/run/pulse")
    else
        echo "[Audio] PulseAudio socket not found, skipping"
    fi
fi
# ── Joystick / Input Devices ────────────────────────────────────────
INPUT_OPTS=()
if [ -z "$DISABLE_JOYSTICK" ]; then
    # Pass /dev/input for joysticks and gamepads
    if [ -d /dev/input ]; then
        echo "[Input] Passing /dev/input for joystick support"
        INPUT_OPTS+=(--volume=/dev/input:/dev/input)
        # Find and pass specific joystick devices; use if-statements so an
        # unmatched glob does not abort the script under `set -e`.
        for js in /dev/input/js*; do
            if [ -e "$js" ]; then INPUT_OPTS+=(--device="$js"); fi
        done
        for ev in /dev/input/event*; do
            if [ -e "$ev" ]; then INPUT_OPTS+=(--device="$ev"); fi
        done
    fi
fi
# ── USB Devices (serial adapters, etc.) ──────────────────────────────
USB_OPTS=()
if [ -z "$DISABLE_USB" ]; then
    for dev in /dev/ttyACM* /dev/ttyUSB*; do
        if [ -e "$dev" ]; then
            echo "[USB] Passing $dev"
            USB_OPTS+=(--device="$dev")
        fi
    done
fi
# ── Paparazzi Volume Mount ───────────────────────────────────────────
VOL_OPTS=(
    --volume="$PAPARAZZI_SRC:$PPRZ_HOME_CONTAINER"
    --env="PAPARAZZI_HOME=$PPRZ_HOME_CONTAINER"
    --env="PAPARAZZI_SRC=$PPRZ_HOME_CONTAINER"
    -w "$PPRZ_HOME_CONTAINER"
)
# ── Run Container ────────────────────────────────────────────────────
echo ""
echo "Starting container..."
docker run \
    --rm -it \
    --cap-add=SYS_NICE \
    --ulimit rtprio=99 \
    --env="NO_AT_BRIDGE=1" \
    "${X_OPTS[@]}" \
    "${GPU_OPTS[@]}" \
    "${NET_OPTS[@]}" \
    "${AUDIO_OPTS[@]}" \
    "${INPUT_OPTS[@]}" \
    "${USB_OPTS[@]}" \
    "${VOL_OPTS[@]}" \
    -e LOCAL_USER_ID="$(id -u)" \
    -e LOCAL_GROUP_ID="$(id -g)" \
    "$IMAGE_NAME" \
    $CMD
# ── Cleanup ──────────────────────────────────────────────────────────
rm -f "$XAUTH"