Now that I have my Raspberry Pi 4 set up with Docker and VS Code remote development, it's time for the exciting part - running ROS 2 in containers! In this article, I'll share how I build and optimize ROS 2 Docker containers specifically for the Pi's ARM64 architecture and resource constraints.
Why I Love Using Docker for ROS 2 on Pi
Before diving into the technical details, let me explain why this approach has become my go-to method:
- Consistent environments - My containers work the same way across different Pi setups
- Easy deployment - I can build once and deploy anywhere
- Resource isolation - Each ROS node runs in its own container with controlled resources
- Version management - I can run multiple ROS 2 versions side-by-side
- Clean development - No more dependency conflicts or messy installations
The first thing I learned when working with Docker on Pi is that not all Docker images work out of the box. The Raspberry Pi 4 uses an ARM64 architecture, so I need ARM64-compatible images.
Checking my Pi's architecture
I always verify my Pi's architecture first:
uname -m
This should return aarch64, confirming I'm running 64-bit ARM.
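For scripts that select an image automatically, the same architecture check can be done in Python. This is a small sketch; the helper name is my own:

```python
import platform

# Machine strings that indicate a 64-bit ARM system (aarch64 on Linux,
# arm64 on macOS) -- Docker's linux/arm64 images run on both.
ARM64_MACHINES = {"aarch64", "arm64"}

def is_arm64(machine: str) -> bool:
    """Return True if the reported machine string is 64-bit ARM."""
    return machine.lower() in ARM64_MACHINES

if __name__ == "__main__":
    machine = platform.machine()
    print(f"{machine}: {'ARM64' if is_arm64(machine) else 'not ARM64'}")
```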
Finding ARM64 ROS 2 Images
I use the official ROS Docker images that support ARM64:
docker pull ros:humble-ros-base
When I run this command, Docker first checks whether the image already exists locally on the Pi. If it's not there, it downloads it, automatically selecting the ARM64 variant for my architecture.
You'll see output like:
humble-ros-base: Pulling from library/ros
fdf67ba0bcdc: Already exists
b0a77e697580: Already exists
22f546c8afef: Already exists
...
Status: Downloaded newer image for ros:humble-ros-base
I can verify the image is downloaded correctly with:
docker images
Here's how I create my first ROS 2 container optimized for the Pi:
My Basic ROS 2 Dockerfile
I create a file called dockerfile.ros2-pi:
# Using the official ROS 2 Humble base image for ARM64
FROM ros:humble-ros-base
# Set environment variables for Pi optimization
ENV ROS_DOMAIN_ID=42
ENV RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
ENV PYTHONUNBUFFERED=1
# Install additional packages I commonly need
RUN apt-get update && apt-get install -y \
python3-pip \
python3-colcon-common-extensions \
python3-rosdep \
ros-humble-rmw-cyclonedds-cpp \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get clean
# Set up rosdep
RUN rosdep init || true
RUN rosdep update
# Create a workspace
WORKDIR /ros2_ws
RUN mkdir -p src
# Source ROS 2 in bashrc
RUN echo "source /opt/ros/humble/setup.bash" >> ~/.bashrc
# Set the default command
CMD ["bash"]
Building my Container
First, I build the container:
docker build -f dockerfile.ros2-pi -t ros2-pi:humble .
I usually grab a coffee during this build - it takes 10-15 minutes on the Pi.
Running the Container Interactively
I like to start with an interactive container to test things out:
# Run interactively with a terminal
docker run -it --rm --name my-ros2-container ros2-pi:humble
This gives me a bash prompt inside the container where I can run ROS 2 commands directly.
Adding volume mounts for development:
For actual development work, I usually want to share my code between the Pi and the container:
# Run with a workspace directory mounted from the Pi
docker run -it --rm --name my-ros2-container \
-v /home/pi/my_ros2_workspace:/ros2_ws \
ros2-pi:humble
What this does:
- -v /home/pi/my_ros2_workspace:/ros2_ws - Mounts my Pi's workspace folder into the container
- Any changes I make in VS Code (connected to the Pi) appear instantly in the container
- Built packages persist even if I delete the container
Connecting from a second terminal
If my container is already running, I can connect to it from another terminal window:
docker exec -it my-ros2-container bash # Connecting to an already running container
This is incredibly useful when I want to:
- Run multiple ROS 2 nodes in the same container
- Monitor logs while running commands
- Debug issues while keeping the main process running
Running in Background Mode
For production, I run containers in the background:
# Run in background (detached mode)
docker run -d --name my-ros2-container ros2-pi:humble tail -f /dev/null
Then I can still connect to the container anytime with the docker exec command above.
Step 3: Resource Optimization Strategies
The Pi has limited resources compared to a desktop computer, so I've implemented several strategies to make my containers run efficiently. Here's what I've learned works best:
Memory Optimization
The Pi 4 has either 4GB or 8GB of RAM, which needs to be shared between the OS and all running containers.
I always set memory limits for my containers to prevent one container from using all available RAM:
# Limit container to 1GB RAM with 2GB total, including swap
docker run --memory=1g --memory-swap=2g ros2-pi:humble
What this does:
- --memory=1g: Limits RAM usage to 1GB
- --memory-swap=2g: Allows up to 1GB of additional swap (the 2GB total minus the 1GB memory limit)
- Prevents the container from crashing the Pi by using all the memory
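When running several containers, I find a quick back-of-the-envelope check useful before picking limits. This is just an illustrative sketch; the container names, limits, and headroom figure are hypothetical:

```python
# Sanity-check that planned --memory limits fit in the Pi's RAM.
PI_RAM_MB = 4096       # 4GB Pi 4 model
OS_HEADROOM_MB = 1024  # keep ~1GB free for Raspberry Pi OS itself

def fits_in_ram(limits_mb, ram_mb=PI_RAM_MB, headroom_mb=OS_HEADROOM_MB):
    """True if the summed container limits leave headroom for the OS."""
    return sum(limits_mb.values()) <= ram_mb - headroom_mb

# Hypothetical plan: two containers at 1GB each
planned = {"ros2-talker": 1024, "ros2-listener": 1024}
print(fits_in_ram(planned))  # True: 2048MB fits under the 3072MB budget
```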
The Pi 4 has a quad-core CPU, but ROS 2 nodes can be CPU-intensive.
For CPU-intensive nodes, I limit CPU usage:
# Limit to 2 CPU cores maximum
docker run --cpus=2 ros2-pi:humble
I can also set CPU priority with a relative share (the default weight is 1024):
# Lower scheduling priority relative to other containers
docker run --cpus=2 --cpu-shares=512 ros2-pi:humble
What this does:
- Prevents one container from monopolizing all CPU cores
- Ensures the Pi remains responsive for other tasks
- Helps with thermal management (less heat generation)
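Under the hood, Docker implements --cpus with the kernel's CFS scheduler: it keeps the default 100,000µs period and sets the quota to cpus × period. A tiny sketch of that arithmetic:

```python
# Docker translates --cpus into a CFS quota: quota = cpus * period,
# with the period left at its 100,000 microsecond default.
CFS_PERIOD_US = 100_000

def cfs_quota_us(cpus):
    """CFS quota in microseconds corresponding to a --cpus value."""
    return int(cpus * CFS_PERIOD_US)

print(cfs_quota_us(2))    # 200000 -> "200ms of CPU time per 100ms window"
print(cfs_quota_us(0.5))  # 50000  -> half of one core's time
```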
SD cards have limited space and slower I/O compared to SSDs.
I use .dockerignore to keep build contexts small.
# .dockerignore file
*.log
*.tmp
.git/
__pycache__/
*.pyc
node_modules/
And I clean up after package installations:
# dockerfile
RUN apt-get update && apt-get install -y \
package1 \
package2 \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get clean
My Multi-Stage Docker Build Approach
Why I use this: it dramatically reduces the final image size by excluding build tools and temporary files.
To keep container sizes small, I use multi-stage builds:
# Build stage - includes all build tools
FROM ros:humble-ros-base AS builder
WORKDIR /ros2_ws
# Copy source code if src directory exists
COPY src/ src/
# Install build dependencies, these won't be in final image
RUN apt-get update && apt-get install -y \
python3-colcon-common-extensions \
build-essential \
cmake \
&& rm -rf /var/lib/apt/lists/*
# Build the workspace
RUN . /opt/ros/humble/setup.sh && colcon build --cmake-args -DCMAKE_BUILD_TYPE=Release
# Runtime stage - much smaller, only includes what's needed to run
FROM ros:humble-ros-base
# Copy only the built artifacts (not the source or build tools)
COPY --from=builder /ros2_ws/install /ros2_ws/install
# Install only runtime dependencies
RUN apt-get update && apt-get install -y \
python3-pip \
&& rm -rf /var/lib/apt/lists/*
# Set up environment
RUN echo "source /opt/ros/humble/setup.bash" >> ~/.bashrc
RUN echo "source /ros2_ws/install/setup.bash" >> ~/.bashrc
WORKDIR /ros2_ws
CMD ["bash"]
Before building, create the directory:
# Create an empty src directory for testing
mkdir -p src
# Build the image
docker build -f dockerfile.multi-stage -t ros2-pi:multi-stage .
Running with your workspace mounted:
For development work, I mount my workspace directory:
# Run with your workspace mounted from the Pi
docker run -it --rm --name my-ros2-container \
-v /home/pi/my_ros2_workspace:/ros2_ws \
ros2-pi:multi-stage
Why use a multi-stage build approach?
- Final image is 50-70% smaller
- Faster deployment and updates
- Less storage usage on the Pi
- Clean separation of build and runtime environments
What is Docker Compose and Why Do I Need It?
Think of Docker Compose as a way to manage multiple containers like they're one application. Instead of running separate docker run commands for each ROS 2 node (which gets messy fast), I write one configuration file that describes all my containers and how they work together.
Why I love Docker Compose for ROS 2:
- One command starts everything: docker-compose up starts my entire robot system
- Automatic networking: All containers can talk to each other automatically
- Dependency management: Containers start in the right order
- Easy scaling: I can run multiple copies of the same node
- Simplified development: Changes to one container don't affect others
For complex robotics projects, I use Docker Compose to manage multiple ROS 2 nodes:
My ROS 2 Docker Compose Setup
Now, let's create a practical example using the official ROS 2 talker and listener nodes from the Writing a simple publisher and subscriber (Python) tutorial. I'll set up Docker Compose to run both nodes in separate containers.
Create a ROS 2 package called py_pubsub inside /home/pi/my_ros2_workspace/src by following the steps in that tutorial.
I create a docker-compose.yml file:
version: '3.8'
services:
ros2-talker:
build:
context: .
dockerfile: dockerfile.ros2-pi
container_name: ros2-talker
network_mode: host
devices:
- /dev/dri:/dev/dri
volumes:
- /home/pi/my_ros2_workspace:/ros2_ws
- /dev:/dev
environment:
- ROS_DOMAIN_ID=42
command: >
bash -c "source /opt/ros/humble/setup.bash &&
cd /ros2_ws &&
colcon build --packages-select py_pubsub &&
source install/setup.bash &&
ros2 run py_pubsub talker"
restart: unless-stopped
ros2-listener:
build:
context: .
dockerfile: dockerfile.ros2-pi
container_name: ros2-listener
network_mode: host
devices:
- /dev/dri:/dev/dri
volumes:
- /home/pi/my_ros2_workspace:/ros2_ws
- /dev:/dev
environment:
- ROS_DOMAIN_ID=42
command: >
bash -c "source /opt/ros/humble/setup.bash &&
cd /ros2_ws &&
colcon build --packages-select py_pubsub &&
source install/setup.bash &&
ros2 run py_pubsub listener"
restart: unless-stopped
Starting my Multi-Node System
From the directory where the docker-compose.yml was created, run:
docker compose up -d
What this setup demonstrates:
- Talker node: Publishes "Hello World" messages every 0.5 seconds to the 'topic' topic
- Listener node: Subscribes to the 'topic' topic and prints received messages
- Automatic building: Each container builds the package before running
- Volume mounting: Source code is shared between the host and containers
- Network communication: Both containers use host networking for ROS 2 discovery
I can monitor all my nodes with:
docker compose logs -f
You will see output like:
ros2-talker | [INFO] [1758575795.439667580] [minimal_publisher]: Publishing: "Hello World: 0"
ros2-listener | [INFO] [1758575795.440115780] [minimal_subscriber]: I heard: "Hello World: 0"
ros2-talker | [INFO] [1758575795.939564973] [minimal_publisher]: Publishing: "Hello World: 1"
ros2-listener | [INFO] [1758575795.942144191] [minimal_subscriber]: I heard: "Hello World: 1"
Stopping Multi-Node systems
To stop and clean up all containers:
docker compose down
Other useful Docker Compose commands:
# Just stop containers (don't remove them)
docker compose stop
# Start stopped containers again
docker compose start
# View status of all services
docker compose ps
Step 5: Pi-Specific Optimizations I Always Use
DDS Configuration for Pi
What is this, and where do I create it?
DDS (Data Distribution Service) is how ROS 2 nodes communicate with each other. The default settings are designed for powerful computers, but the Pi needs more conservative settings to avoid overwhelming its network and memory.
I create a custom DDS configuration file called cyclonedds.xml in my project directory (the same folder as my dockerfile):
<?xml version="1.0" encoding="UTF-8" ?>
<CycloneDDS xmlns="https://cyclonedds.org/schema/dds/1.0">
<Discovery>
<ParticipantIndex>auto</ParticipantIndex>
<Peers>
<Peer Address="localhost"/>
</Peers>
</Discovery>
<Internal>
<Watermarks>
<WhcHigh>1MB</WhcHigh>
<WhcLow>512KB</WhcLow>
</Watermarks>
</Internal>
</CycloneDDS>
What this does:
- WhcHigh/WhcLow: Limits memory used for message queues (default can be 100MB+)
- Peers: Tells DDS to only look for other nodes on the same Pi
- ParticipantIndex: Lets DDS automatically assign participant IDs
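When I template this file per robot, I sometimes generate it instead of hand-editing. A sketch using the standard library, producing the same elements as the file above (the templating idea itself is my own convenience, not part of ROS 2 tooling):

```python
import xml.etree.ElementTree as ET

# Build the cyclonedds.xml shown above programmatically.
root = ET.Element("CycloneDDS", xmlns="https://cyclonedds.org/schema/dds/1.0")
discovery = ET.SubElement(root, "Discovery")
ET.SubElement(discovery, "ParticipantIndex").text = "auto"
peers = ET.SubElement(discovery, "Peers")
ET.SubElement(peers, "Peer", Address="localhost")
watermarks = ET.SubElement(ET.SubElement(root, "Internal"), "Watermarks")
ET.SubElement(watermarks, "WhcHigh").text = "1MB"
ET.SubElement(watermarks, "WhcLow").text = "512KB"

ET.indent(root)  # pretty-print (Python 3.9+)
xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```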
How to use it in my containers:
In my Docker Compose file:
services:
my-ros-node:
# ... other config
volumes:
- ./cyclonedds.xml:/config/cyclonedds.xml # Mount the config file
environment:
- CYCLONEDDS_URI=file:///config/cyclonedds.xml # Tell ROS 2 to use it
Why this helps:
- Reduces memory usage by 80-90%
- Faster node startup times
- More reliable communication on Pi's limited network
When I need GPU acceleration for computer vision tasks, I pass the Pi's GPU device through to the container (on Raspberry Pi OS, the open-source Mesa/V3D driver inside the container picks it up without any extra library paths):
services:
vision-node:
# ... other config
devices:
- /dev/dri:/dev/dri # GPU access
I2C and GPIO Access
For hardware interfacing:
services:
hardware-node:
# ... other config
devices:
- /dev/i2c-1:/dev/i2c-1
- /dev/gpiomem:/dev/gpiomem
privileged: true
Step 6: Monitoring and Debugging
Checking Container Performance
I regularly monitor my container's resource usage:
docker stats
Debugging Container Issues
For troubleshooting, I exec into running containers:
docker exec -it ros2-talker bash
Then I can check ROS 2 nodes:
ros2 node list
ros2 topic list
ros2 topic echo /topic
My Log Management Strategy
I configure log rotation to prevent storage issues:
services:
my-service:
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
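These two options put a hard ceiling on log disk usage: at most max-size × max-file per container. The arithmetic, as a quick sketch:

```python
# json-file rotation keeps up to max-file files of max-size each,
# so worst-case log usage per container is their product.
def max_log_mb(max_size_mb, max_file):
    """Worst-case log disk usage in MB for one container."""
    return max_size_mb * max_file

# With the settings above: 10m x 3 files = 30MB per container
print(max_log_mb(10, 3))  # 30
```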
Step 7: Performance Tuning Tips I've Learned
Network Performance
Default Docker networking adds overhead that the Pi can't handle well.
I use host networking for ROS 2 containers:
services:
my-ros-node:
network_mode: host # Uses Pi's network directly
Trade-offs:
- Pro: 20-30% better network performance
- Pro: Simpler ROS 2 discovery (no port mapping needed)
- Con: Less container isolation
- Con: Potential port conflicts
When I use each:
- Host networking: For ROS 2 communication (always)
- Bridge networking: For web services, databases (when isolation matters)
The Pi throttles its CPU when it gets too hot, which makes containers run slowly. I monitor for this with:
# Check current temperature
vcgencmd measure_temp
# Check if throttling occurred
vcgencmd get_throttled
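The throttled value is a bitmask; the bit assignments below are the ones Raspberry Pi documents (low bits report the current state, bits 16 and up record what has occurred since boot). A small decoder sketch:

```python
# Decode the bitmask printed by `vcgencmd get_throttled`.
FLAGS = {
    0: "under-voltage detected",
    1: "ARM frequency capped",
    2: "currently throttled",
    3: "soft temperature limit active",
    16: "under-voltage has occurred",
    17: "ARM frequency capping has occurred",
    18: "throttling has occurred",
    19: "soft temperature limit has occurred",
}

def decode_throttled(raw):
    """raw looks like 'throttled=0x50000'."""
    value = int(raw.split("=")[1], 16)
    return [msg for bit, msg in FLAGS.items() if value & (1 << bit)]

print(decode_throttled("throttled=0x50000"))
# ['under-voltage has occurred', 'throttling has occurred']
```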
My Docker prevention strategy:
services:
cpu-intensive-node:
deploy:
resources:
limits:
cpus: '2.0' # Don't use all 4 cores
environment:
- OMP_NUM_THREADS=2 # Limit OpenMP threads
Troubleshooting Common Issues
Container won't start
- Check the Pi's available memory with free -h
- Verify the image architecture matches ARM64
- Look at container logs with docker logs container_name
ROS 2 Nodes can't communicate
- Ensure all containers use the same ROS_DOMAIN_ID
- Verify network_mode is set to host
- Check firewall settings with sudo ufw status
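When checking firewall rules, it helps to know which UDP ports DDS discovery actually uses. The defaults come from the DDSI-RTPS well-known port formula (PB=7400, DG=250, PG=2, d0=0, d1=10); a sketch of the calculation:

```python
# DDSI-RTPS well-known port numbering, using the spec's default constants.
PB, DG, PG, D0, D1 = 7400, 250, 2, 0, 10

def discovery_ports(domain_id, participant_index=0):
    """UDP ports a DDS participant uses for discovery in a given domain."""
    base = PB + DG * domain_id
    return {
        "multicast_discovery": base + D0,
        "unicast_discovery": base + D1 + PG * participant_index,
    }

print(discovery_ports(42))
# {'multicast_discovery': 17900, 'unicast_discovery': 17910}
```

So with ROS_DOMAIN_ID=42, the firewall needs to allow UDP around port 17900.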
Poor Performance
- Monitor CPU usage with htop
- Check if containers are swapping with docker stats
- Verify adequate cooling (Pi can throttle when hot)
Running ROS 2 in Docker containers on the Raspberry Pi has transformed the way I develop robotics projects. The combination of containerization and proper resource optimization gives me:
- Consistent, reproducible deployments
- Better resource management
- Easier debugging and monitoring
- Scalable multi-node architectures
The key is to understand the Pi's limitations and optimize accordingly. With these techniques, I can run surprisingly complex ROS 2 systems on a single Pi 4.
GitHub Repository
All the files, Dockerfiles, and configurations mentioned in this article are available in my GitHub repository: https://github.com/nilutpolkashyap/ros2-docker-arm64