This is a simple project to help you get started with Sony’s IMX500-powered Raspberry Pi AI Camera, running a MobileNet SSD v2 model for real-time object detection at the edge.
What You’ll Need
- Raspberry Pi 5 (RPi 5)
- Raspberry Pi AI Camera (IMX500)
- SD card (32GB+ recommended)
- Internet connection to set up and update the IMX500 libraries
- Keyboard, mouse, monitor (for local setup)
1️⃣ Connect the Camera to the Raspberry Pi 5
Securely connect your Raspberry Pi AI Camera to the CSI port on your RPi 5. Please refer to the video below for the hardware setup.
2️⃣ Set Up Your Raspberry Pi SD Card
Flash the latest Raspberry Pi OS (64-bit) onto your SD card using the official Raspberry Pi Imager.
3️⃣ Update to the Latest Raspberry Pi OS
Start by making sure your Raspberry Pi software is up to date. Open a terminal on the Raspberry Pi 5 and run:
sudo apt update && sudo apt full-upgrade
sudo apt install -y imx500-all
Once the update is complete, reboot the system by running:
sudo reboot now
Please refer to the Raspberry Pi Getting Started documentation for more details.
📌 Note: Steps 1–3 are prerequisites before moving forward.
Environment Setup
4️⃣ Create a Python Virtual Environment
Open a terminal on your Raspberry Pi and create a new Python virtual environment named "picam-mobilenet-v2" (you may choose your own name):
python3 -m venv --system-site-packages picam-mobilenet-v2
5️⃣ Activate the Newly Created Virtual Environment
source picam-mobilenet-v2/bin/activate
You should now see your prompt change like this:
(picam-mobilenet-v2) pi@raspberrypi:~ $
6️⃣ Navigate into the Project Folder
Move into the newly created project folder "picam-mobilenet-v2" (or the name you chose):
cd picam-mobilenet-v2
Code and Dependencies
7️⃣ Download and Save the Source Files
Download the source files into your project folder (e.g. ~/picam-mobilenet-v2): open this page in a browser on the Raspberry Pi and save source.zip there, or download it from the terminal with:
wget https://hacksterio.s3.amazonaws.com/uploads/attachments/1865183/source.zip
8️⃣ Unzip the Source Files
Unzip the source files into the project folder with:
unzip source.zip
9️⃣ Upgrade the Python Package Manager
Upgrade the Python package manager (pip) by running:
pip install --upgrade pip==24.0
📌 Note: pip version 24.1 and above requires packages to follow the version specifiers described here. Some packages have not yet been updated for this change, so we pin pip to version 24.0.
Install all Python libraries with:
pip3 install -r ./requirements.txt
Running the Demo
🔟 Start the application with:
python3 ./main.py
This launches a Python application that hosts a web server.
Access the UI with a Browser:
Open a browser, go to http://localhost:8080, and click "Start". You should now see real-time detection using MobileNet SSD v2 on the IMX500-powered camera!
📌 Note: You can access the UI from another PC by navigating to http://<IP Address of Raspberry Pi>:8080.
Edge AI Basics: Load, Configure & Run Vision Models on IMX500
This guide covers the basics of interacting with the Raspberry Pi AI Camera to run computer vision AI inference.
1. Loading the Model
The IMX500 device from picamera2.devices.imx500 loads an AI model given the path to the model file (*.rpk). For example:
from picamera2.devices.imx500 import IMX500
imx500 = IMX500('/usr/share/imx500-models/imx500_network_ssd_mobilenetv2_fpnlite_320x320_pp.rpk')
Loading the model may take some time, so check the progress with:
imx500.show_network_fw_progress_bar()
2. Configuring the AI Model
Model parameters can be configured using Network Intrinsics. Be sure to call update_with_defaults() to apply any changes.
3. Starting the Video Stream to Run AI Inference
This demo uses start_recording() from picamera2 to start the video stream. You can register a function as pre_callback to process inference results. Since pre_callback is called before the image reaches the video encoder, you can draw on the image to visualize the inference output.
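Putting the loading, configuration, and streaming steps together, here is a condensed sketch modeled on picamera2's published IMX500 examples. The encoder/output choices and the callback body are illustrative assumptions, not the demo's actual code (which ships in source.zip):

```python
# Sketch only: the output-tensor parsing and the H264/file output are
# assumptions for illustration; the demo itself serves frames over HTTP.
from picamera2 import Picamera2
from picamera2.devices.imx500 import IMX500, NetworkIntrinsics
from picamera2.encoders import H264Encoder

MODEL = "/usr/share/imx500-models/imx500_network_ssd_mobilenetv2_fpnlite_320x320_pp.rpk"

imx500 = IMX500(MODEL)  # section 1: load the model onto the sensor

# Section 2: configure model parameters via Network Intrinsics.
intrinsics = imx500.network_intrinsics or NetworkIntrinsics()
intrinsics.task = "object detection"
intrinsics.update_with_defaults()  # apply any changes

picam2 = Picamera2(imx500.camera_num)
imx500.show_network_fw_progress_bar()  # model upload can take a while

def pre_callback(request):
    # Runs before each frame reaches the encoder, so anything drawn
    # here appears in the recorded/streamed video.
    outputs = imx500.get_outputs(request.get_metadata())
    # ...parse boxes/scores/classes from outputs and draw them here...

picam2.pre_callback = pre_callback

# Section 3: start the video stream; inference runs on the IMX500 itself.
picam2.configure(picam2.create_video_configuration())
picam2.start_recording(H264Encoder(), "out.h264")
```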
In this demo, the function Mobilenetv2_Annotator.pre_precallback()
is registered to draw bounding boxes and labels. To generate business value from the inference results, you may add additional logic such as triggering alerts or other automated actions.
Now that you've successfully run your first demo, you can explore more advanced possibilities:
- Swapping out the model with your own.
- Logging detections or triggering actions.
- Building your own apps, from people counters to inventory trackers and smart door alerts.