A portable, intelligent safety sign that notices you first—with bright, adaptive LEDs, clear voice alerts, and seamless ESPHome/Home Assistant integration.
The Moment Before a Slip
It’s raining outside. Inside a busy mall, a cleaner has just mopped the walkway. A small yellow “Wet Floor” sign stands quietly—easy to miss, easier to ignore. Someone’s texting, a kid darts past, a shopper glances down at a bag… and a heel slips.
Static signs don’t adapt to human attention. They can be invisible in noise and clutter. Some people can’t clearly distinguish the yellow-on-gloss contrast. Others assume the sign’s been there “forever” and stop noticing it entirely. Meanwhile, one slip can mean injuries, lost time, and angry calls.
What if the sign noticed you first? What if it reacted to your approach—lit up dynamically, spoke clearly, and escalated if you got too close?
Meet Smart Signage Sentient. In the story he’s the guard on duty—seeing, reacting, and adapting in real time. In the engineering docs, it’s a modular platform you can reuse, extend, and control.
- Problem: Passive signs rely on perfect human attention in imperfect places—malls, hospitals, warehouses, event venues. Distraction is normal; visibility isn’t guaranteed.
- Vision: A human-aware, active sign that engages sight and sound at the right moment, not just a printed warning.
- Approach: Build a platform, not a one-off gadget—profiles for different situations, modular hardware, event-driven firmware, and Home Assistant automation out of the box.
- Outcome: A portable device that can run standalone, or join a building’s automation system, and that you (or anyone) can customize without surgery on the code base.
This project was born for a Seeed Studio challenge—and honestly, the best part was discovering how far their platform could go. I was introduced to this challenge by Seeed Studio Ranger Salman Faris, and that spark set the whole journey in motion.
At first, I prototyped with my own Seeed Studio XIAO ESP32C3 (single core, 4MB flash) board. It worked, but I quickly realized I wanted more horsepower. My buggy code hogged the CPU and caused OTA updates over WiFi to fail, since both stacks ran on the same core. I also needed bigger flash to enable TTS and to store MP3 files in LittleFS. That’s when I switched to the Seeed Studio XIAO ESP32S3: dual cores + 8MB flash turned it into a proper platform, and the identical XIAO pin labels meant I could keep the same peripheral wiring from the XIAO ESP32C3.
Here I have to thank another Seeed Studio Ranger, Abhinav Krishna N, who kindly provided the XIAO ESP32S3 and XIAO ESP32C6 boards. His support helped me hit the ground running and explore features much faster.
From there, the project grew beyond a talking wet-floor sign. With the S3’s power and Seeed’s 24 GHz mmWave radar, it evolved into a full smart signage platform built around the idea of profiles.
A profile is like a role the sign takes on for a specific situation. Each profile defines how the LEDs behave, how the voice sounds, and what messages are spoken. Here are a few examples:
Wet Floor Profile
- “Heads up! This floor is slicker than a buttered slide.”
- “Caution! Wet floor ahead — step like a cat.”
- “Don’t rush! I’d hate to see you moonwalk unintentionally.”
And if the sign itself tips over (a very real problem with wet floor boards):
- “Tip-over alert! Please stand me back up.”
- “I’ve collapsed — irony level: maximum.”
- “Warning! The warning sign needs help.”
Halloween Profile (Imagine the Sentient parked right next to the candy bowl…)
- “Only one piece, mortal — the spirits are watching!”
- “Take one… or face the curse of sticky fingers.”
- “Greedy hands awaken the skeleton dance — don’t test me.”
Gone for Lunch Profile
- “On a snack quest — will return victorious.”
- “Lunch in progress. Productivity resumes shortly.”
Construction Zone Profile
- “Warning! Hard hats, not hard heads.”
- “Noise and dust ahead — you’ve been warned.”
And these are just surface-level examples. In reality, profiles can go much deeper—combining LED patterns, audio cues, radar behavior, and even Home Assistant automation to suit the exact context.
The best part? You can add new profiles without touching the code—just drop them into the profile catalog, push your audio files with the provided script, and the sign takes on its new role instantly. No fuss, no mess.
Don’t feel like recording audio and pushing it to the device? No worries. With a single compilation flag you can enable Text-to-Speech (TTS). Instead of uploading audio files, just write the line directly in the profile catalog.
So instead of:
src: /warning_1.mp3
You can use:
src: Watch your step, the floor is wet!
…and the sign will generate the voice on the fly.
Note: The profile catalog is YAML, so plain strings don’t need quotation marks.
Note: TTS is disabled by default because it uses a lot of flash and slows firmware updates.
Smart Signage Sentient isn’t just one trick — it’s a platform with multiple senses and outputs working together. Each component was added for a reason:
Radar (Seeed Studio 24 GHz mmWave)
- Detects people approaching, even in cluttered or low-light environments.
- More reliable than PIR (no false alarms from sunlight, HVAC, or shadows).
- Enables adaptive responses: escalate when someone gets too close.
- Uses a Kalman filter to smooth radar distance measurements, reducing noise and avoiding false triggers (see the sketch after this list).
- Detection distance can be set per profile at runtime, and it’s remembered automatically across reboots and profile changes — making the behavior completely intuitive.
- Source code
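For reference, the smoothing can be as small as a one-dimensional Kalman filter like the sketch below. This is an illustration only, not the project’s exact filter, and the noise constants are placeholders.

// Minimal 1-D Kalman filter for smoothing noisy distance readings.
// Sketch only — not the project's exact filter; q and r are placeholders.
class Kalman1D {
public:
    Kalman1D(float q, float r) : q_(q), r_(r) {}
    float update(float measurementCm) {
        p_ += q_;                        // predict: uncertainty grows
        float k = p_ / (p_ + r_);        // Kalman gain
        x_ += k * (measurementCm - x_);  // correct toward the measurement
        p_ *= (1.0f - k);                // uncertainty shrinks after update
        return x_;
    }
private:
    float q_;         // process noise (how fast the target can move)
    float r_;         // measurement noise (how jumpy the radar is)
    float x_ = 0.0f;  // estimated distance (cm)
    float p_ = 1.0f;  // estimation uncertainty
};

// Usage: Kalman1D kf(0.05f, 4.0f); float smoothedCm = kf.update(rawCm);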
Inertial Measurement Unit (MPU 6500 / 6050)
- Real-world issue: in malls and offices, when a wet floor sign tips over, people usually ignore it and don’t set it back upright — leaving the hazard completely unmarked.
- The MPU-6500 solves this by instantly detecting tip-over or falls.
- On a fall, the sign reacts with special voice lines like “Help! I’ve fallen over…” and a bright LED blinking pattern you can’t miss.
- This way, the sign never goes invisible — and once someone sets it back up, it resumes duty automatically.
- Instead of the usual pitch-and-roll calculation, which needs a fixed axis parallel to Earth’s gravity and constrains the hardware design, I treat accelerometer data as 3D vectors, use the initial vector as reference, and track the angle against it — enabling fall detection in any direction.
- Source code
LED System
- Red light grabs attention — and when it blinks, it’s almost impossible to ignore. The sign uses this to ensure people always notice it.
- Supports multiple patterns — blinking pulses, smooth breathing (fade in/out), or combinations of both — with fully configurable duty cycle and cycle count (or continuous/infinite mode).
- Hardware-accelerated control ensures smooth lighting effects without burdening the main processor or memory.
- Adjustable brightness makes it effective both indoors and outdoors, and the setting is saved so it persists across reboots and profile changes.
- Each event and profile can have its own LED effect — for example, a unique pattern when starting, in error, active warning, or tip-over state.
- Source code
Audio System (MAX98357A)
- Flexible playback: supports both pre-recorded audio files and TTS. Only one can be enabled at a time for now, but both could run in parallel in the future by checking for a “say:” prefix or a “.mp3” suffix (e.g., audioSrc="/warning.mp3" or “say: warning, wet floor” — see the sketch after this list).
- High-quality output: clear audio without pops or clicks (thanks to arduino-audio-tools).
- Flexible playlist: play a single audio multiple or infinite times, or play a sequence of audios any number of times — with a unique, per-track configurable delay between each playback.
- Composable speech: the same logic can be reused in other projects that need to concatenate audio snippets (e.g., numbers, battery percentage, or dynamic status messages), especially in cases where TTS or continuous audio streaming isn’t practical.
- Source code
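To make the parallel-mode idea concrete, the routing could look like the sketch below. playFile and speakTts are hypothetical stand-ins, not the project’s real function names; the current firmware compiles in only one backend.

#include <string>

// Hypothetical backends — stand-ins, not the project's real function names.
void playFile(const std::string &path);
void speakTts(const std::string &text);

// Sketch of a possible future dual-mode routing: a "say:" prefix goes to
// TTS, a ".mp3" suffix goes to file playback.
void playSource(const std::string &src) {
    static const std::string kSay = "say:";
    if (src.rfind(kSay, 0) == 0) {
        speakTts(src.substr(kSay.size()));  // e.g. "say: warning, wet floor"
    } else if (src.size() > 4 && src.compare(src.size() - 4, 4, ".mp3") == 0) {
        playFile(src);                      // e.g. "/warning.mp3"
    }
}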
ESPHome — what you get
- IoT by default: Wi-Fi setup and OTA firmware updates.
- Home Assistant–ready: entities (Profile, Start, Range, Brightness, Volume, Session) for automations/scenes. Note: I haven’t tested HA integration on this build yet.
- Web UI: if you’re not using Home Assistant, enable ESPHome’s local web server for a simple dashboard.
- Plug-and-play extras: add sensors, displays, relays, etc., from ESPHome’s component library by just updating YAML.
Better to show than tell — let’s see him in action.
Demo 1: Setup and Warning
In this demo, Smart Signage Sentient is mounted on a wet floor sign.
- The device is powered on and confirms it’s ready
- With a press of the "obvious" green Start button, the system enters active mode.
- After a short delay, the radar begins monitoring the surroundings.
- As I approach, the sign detects my movement, triggering blinking LEDs and voice alerts.
- The closer I get, the faster the LED breathing becomes, escalating the warning to grab attention.
Demo 2: Fall Detection
In this demo, Smart Signage Sentient is already on and active.
- When tipped over, it instantly detects the fall.
- The sign announces the user-defined fall messages repeatedly, while switching to a special blinking LED pattern.
- It keeps calling for help until restored.
- Once set upright again, the sign returns to its active state and confirms with a voice message.
Demo 3: Dashboard
This is where you give Smart Signage Sentient his marching orders.
Demo 4: Build & Flash (Docker) via USB
I made a little script to make my life easier. One command, and it finds the board, spins up a clean Docker workspace, compiles, flashes to device, and streams logs—no toolchain juggling, no mess on the host. The video shows the flow; this is just my “press once, relax” button.
Demo 5: Build & Flash (Docker) via OTA
Relax even more — no USB cable needed. I use the same one-liner (./esphome_docker.sh run). The script checks for a serial port first; if none is found, it automatically targets smart-signage.local and pushes the update over Wi-Fi. The build happens in Docker, the firmware uploads OTA, the device reboots, and logs roll in.
Note: OTA is super convenient but a bit slower. I use USB during rapid dev, then switch to OTA once things are stable.
Note: In OTA logs, you might miss the initial setup logs.
This file is a catalog of profiles. Each profile names the mode and lists what to do on key moments like start, detected, tip-over, recovered, and error (e.g., which LED effect to show and which audio to play — or a say: line for TTS).
profiles:
  - name: WetFloor
    events:
      Ready:
        audio: { playCnt: 1, playList: [ { src: /ready.mp3, delayMs: 100 } ] }
        led: { pattern: blink, periodMs: 800, cnt: 1 }
      Error:
        audio: { playCnt: 1, playList: [ { src: /error.mp3, delayMs: 0 } ] }
        led: { pattern: blink, periodMs: 150, cnt: 0 }
      Start:
        audio: { playCnt: 1, playList: [ { src: /start.mp3, delayMs: 0 } ] }
        led: { pattern: twinkle, periodMs: 120, cnt: 3 }
      Stop:
        audio: { playCnt: 1, playList: [ { src: /stop.mp3, delayMs: 0 } ] }
        led: { pattern: blink, periodMs: 600, cnt: 1 }
      Detected:
        audio:
          playCnt: 0
          playList:
            - { src: /test/warning_1.mp3, delayMs: 2000 }
            - { src: /test/warning_2.mp3, delayMs: 2000 }
            - { src: /test/warning_3.mp3, delayMs: 2000 }
      DetectedDistanceMax:
        led: { pattern: twinkle, periodMs: 2000, cnt: 0 }
      DetectedDistanceMin:
        led: { pattern: twinkle, periodMs: 120, cnt: 0 }
      Fell:
        audio:
          playCnt: 0
          playList:
            - { src: /test/fallen_1.mp3, delayMs: 5000 }
            - { src: /test/fallen_2.mp3, delayMs: 5000 }
        led: { pattern: blink, periodMs: 300, cnt: 0 }
      Rose:
        audio: { playCnt: 1, playList: [ { src: /ready3.mp3, delayMs: 0 } ] }
        led: { pattern: twinkle, periodMs: 1000, cnt: 2 }
      SessionEnd:
        audio: { playCnt: 1, playList: [ { src: /eod.mp3, delayMs: 0 } ] }
        led: { pattern: twinkle, periodMs: 800, cnt: 2 }
  - name: Halloween
    events:
      ...
Code That Understands Developers’ Pain
On boot, the firmware runs sanity logs so you don’t have to chase ghosts later. It prints:
- Profiles loaded (names + count)
- Partition table (type, subtype, offsets, sizes)
- LittleFS tree (paths & sizes of every asset)
Result: instant confirmation that the right profile catalog, partitions, and audio files are in place—before any “why is it silent?” debugging begins.
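The LittleFS tree dump in particular is tiny to implement — here’s a sketch using the stock Arduino LittleFS API, not the project’s exact logging code:

#include <LittleFS.h>

// Sketch of a boot-time LittleFS tree dump using the stock Arduino API;
// the project's actual logging differs.
void printTree(File dir, const String &prefix) {
    for (File f = dir.openNextFile(); f; f = dir.openNextFile()) {
        if (f.isDirectory()) {
            printTree(f, prefix + f.name() + "/");
        } else {
            Serial.printf("%s%s (%u bytes)\n", prefix.c_str(), f.name(),
                          (unsigned) f.size());
        }
    }
}

// At the end of setup(): if (LittleFS.begin()) printTree(LittleFS.open("/"), "/");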
Step 1 — Build the Hardware
- Download the KiCad schematic (link). That’s the source of truth; the images are only a preview.
- Wire it up using the schematic as your map—either on a breadboard (quick) or perfboard (sturdy—fingers crossed).
- Download the 3D print files (links) — front panel, back box, and nose — then print and assemble to mount the LEDs, radar, and speaker.
- Note: This build uses off-the-shelf breakout boards, so I didn’t design a custom PCB yet.
Step 2 — Software (Flash + Configure)
- Clone & flash
# 1) Get the code
git clone https://github.com/rahuljeyaraj/smart-signage-esphome.git
cd smart-signage-esphome
# 2) Create your secrets file
cp secrets_sample.yaml secrets.yaml
# 3) Edit Wi-Fi credentials
# Open secrets.yaml and set:
# wifi_ssid: "YourNetwork"
# wifi_password: "YourPassword"
# 4) Build, upload, and stream logs
./esphome_docker.sh run
- Watch the logs — if you see something like this, SmartSignage initialized successfully. Order may vary—hardware init runs in parallel.
Guard: RadarReady - 1/4 ready → all=NO
Guard: ImuReady - 2/4 ready → all=NO
Guard: LedReady - 3/4 ready → all=NO
Guard: AudioReady - 4/4 ready → all=Yes
Entered Ready state
- If you see the log below, it’s time to debug (the fun part). If everything worked all the time, we’d never look inside 🙂.
Entered Error state!
- Add audio (MP3)
# put your .mp3 files here
cp -v /path/to/*.mp3 smart-signage-esphome/data/
# upload them to LittleFS
./lfs_tool.sh upload
- Tweak the profile catalog:
# 1) edit the catalog
nano smart-signage-esphome/profile_catalog.yaml
# or use your editor
# 2) compile (sanity check)
cd smart-signage-esphome
./esphome_docker.sh compile
# 3) flash
./esphome_docker.sh run
- Open the dashboard: http://smart-signage.local/
- If mDNS doesn’t resolve, use the device IP from the logs or your router.
I haven’t made a one-click Windows script yet. To set up the ESPHome CLI, this might help:
py -m pip install --upgrade esphome esptool littlefs-python
Compile & upload:
# First time over USB (replace COM9 with your port)
esphome run .\smart_signage.yaml --device COM9
# Later you can use OTA by name/IP
# esphome run .\smart_signage.yaml --device smart-signage.local
Flash LittleFS (files from .\data):
# Set these from partitions.csv
$FS_OFFSET = 0x47D000
$FS_SIZE = 0x383000
# Build the image from .\data\
littlefs-python create --block-size 4096 --fs-size $FS_SIZE .\data\ littlefs.img
# Flash the image
esptool.py --chip esp32s3 --port COM9 write_flash $FS_OFFSET littlefs.img
Disclaimer
I leaned heavily on ChatGPT (Plus) throughout this build. I usually write C; this project nudged me into C++ and object-oriented patterns—with ChatGPT as my tutor and sounding board.
I love code reviews—the kind where the reviewer not only flags an issue but explains the why. I’ve been lucky to learn from great mentors (especially at Qualcomm) who sharpened my coding style. ChatGPT has become one of those mentors. With it, the hesitation to “ask one more question” disappeared. Like a kid who learns fast by asking relentlessly, I kept pressing on things like patterns, trade-offs, and memory usage. That journey led me to gems like ETL and Boost.SML, and I now use C++ features where they genuinely benefit the design.
It’s not all sunshine and rainbows. ChatGPT still hallucinates at times and can amplify your own tendencies—mine is obsessive polishing—which leads to endless reviews and tweaks. Even when I’m satisfied, a single new suggestion can hook me into another round. ChatGPT is excellent at contained logic: the two scripts—esphome_docker.sh and lfs_tools.sh—were written entirely by ChatGPT based on my requirements and were so solid I didn’t dig into them much. But as my confidence in it grew (or when my brain felt tired), I sometimes handed over the design’s steering wheel. Every time I did, I regretted it—things drifted toward complexity—and I had my Thanos moment: “Fine. I’ll do it myself.”
Block Diagram
These are the system blocks. My first attempt used a bottom-up approach: bring up each hardware interface first, then layer the control logic on top. I will show a top-down pass later. For now, let’s walk through the hardware interfaces.
Radar Subsystem (Seeed Studio 24GHz mmWave for XIAO)
Wiki link.
Verify it’s alive (two quick ways):
- PC (USB-UART): Power at 3.3 V, cross TX↔RX, open the vendor tool; you’ll see live presence data and can tweak params.
- Android app (BLE): Power the sensor, pair in the app, and adjust thresholds/baud without wires.
UART basics:
- Default is 256000 baud, 8-N-1 on LD2410(B/C).
- Seeed’s XIAO carrier uses soft serial on D2/D3; it often won’t sustain 256000. Either rewire to a hardware UART (keep 256000) or lower the radar’s baud (e.g., 115200/9600) in the app and keep soft serial (Refer).
Wiring tips:
Xiao Board Radar/Module
TX ------------------> RX
RX <------------------ TX
GND ------------------- GND
3V3 ------------------- VCC (3.3 V only)
Protocol:
Full frame/command spec is in the LD2410 protocol manual — handy if you’re rolling your own parser.
What the radar reports:
It streams frames continuously with status and three distance fields:
- Target status: No target / Moving / Stationary / Moving & Stationary.
- Moving target distance (cm)
- Stationary target distance (cm)
- Detection distance (cm) — a single “best” distance based on energy (I guess)
How I decided which distance field to use:
- Test: Walked back and forth across 6 m and plotted mTargetDistance (moving, red), sTargetDistance (stationary, green), distance (detection distance, blue), and presence (yellow).
- The generic distance (blue) tracks the moving target and falls back to stationary; I tried per-gate sensitivity tuning to catch movement across the range, but it still wasn’t reliable.
- The static (green) distance lags but is steadier. No real-time response.
- The moving (red) distance tracks quickly but is noisy.
- As I wanted real-time response for my device, I went with the moving target distance, and dealt with the noise using a Kalman filter.
The radar pushes data every ~50 ms (from experience only, not documented AFAIK). If you poll slower (e.g., 500 ms), the UART buffer stacks old frames and you’ll read a stale one—making updates seem slow. Solution: flush the serial buffer and use the last one so you always act on the latest frame.
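In code, the drain-and-keep-latest idea looks roughly like this. It’s a sketch: parseFrame() below is a hypothetical stand-in for the LD2410 library’s frame parser.

#include <Arduino.h>

// Hypothetical stand-in for the LD2410 library's parser: consumes one frame
// from the UART buffer and yields the target distance.
bool parseFrame(HardwareSerial &uart, uint16_t &distanceCm);

// Drain the UART so we always act on the newest frame; the radar pushes a
// frame every ~50 ms, so a slow poller would otherwise read stale data.
bool readLatestDistance(HardwareSerial &radarUart, uint16_t &distanceCm) {
    bool gotFrame = false;
    while (radarUart.available() > 0) {
        uint16_t d;
        if (parseFrame(radarUart, d)) {  // consume one buffered frame
            distanceCm = d;              // keep overwriting: last one wins
            gotFrame = true;
        }
    }
    return gotFrame;
}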
Libraries I tried
ncmreynolds/ld2410
iavorvel/MyLD2410
Both are solid Arduino libs. I did hit a subtle parse bug in the first one and opened a fix (PR #42).
Hardware issue:
When I first brought up the XIAO ESP32-C3 with the 24 GHz radar, everything crashed the moment I wired the radar. I chased loose breadboard leads and a flaky power supply before noticing the real culprit: I was feeding the radar from the board’s 3V3 pin; the radar draws ~100 mA, which pulled the 3.3 V rail down to ~2.4 V and triggered a brownout. Powering the radar from an external 3.3 V source (common ground) fixed it immediately. I couldn’t reproduce this on the XIAO ESP32-S3, so it’s likely regulator margin on that particular C3 board.
Plugging in Your Own Radar:
The radar interface, like all other hardware interfaces in this project, is written as a Hardware Abstraction Layer (HAL) — it doesn’t mix with the core logic, which keeps it clean and reusable. You can check the code here: radar/hal. To add your own radar, just create a new FooRadarHal.{h,cpp} implementing the same interface, iradar_hal.h, and the system will work with it. The design pattern here is essentially Strategy with dependency injection.
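The contract is small; here’s a simplified sketch of its shape (illustrative names only — see iradar_hal.h in the repo for the real interface):

#include <cstdint>

// Simplified sketch of the radar HAL contract; names are illustrative.
class IRadarHal {
public:
    virtual ~IRadarHal() = default;
    virtual bool begin() = 0;                       // bring up the sensor
    virtual bool readDistanceCm(uint16_t &cm) = 0;  // latest target distance
};

// A new sensor only implements the same contract...
class FooRadarHal : public IRadarHal {
public:
    bool begin() override { /* sensor-specific init */ return true; }
    bool readDistanceCm(uint16_t &cm) override { cm = 0; return false; }
};

// ...and is injected where the core logic expects an IRadarHal, so the
// control code never names a concrete sensor (Strategy + dependency injection).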
When you hear IMU, it simply means Inertial Measurement Unit—a tiny package that usually combines an accelerometer, a gyroscope, and sometimes extras like a magnetometer or a temperature sensor. Together, these sensors let a device sense tilt, motion, and rotation.
- Accelerometer feels gravity and linear acceleration. At rest, it gives you a reliable tilt. Once you start moving, the motion vector sneaks in as error.
- Gyroscope measures angular velocity (rotation). It tracks angles beautifully while moving, but when you stop, it suffers from drift.
- Magnetometer (not present on all IMUs) gives compass direction.
- Temperature sensor is usually onboard to help compensate readings.
The above details come purely from my decade-old experience with this sensor. Back then, I was trying to build a self-balancing two-wheeled robot. I quickly discovered the trade-off: the accelerometer is honest when parked, the gyroscope is honest when sprinting. The solution was to fuse them, and the textbooks pointed me to the Kalman filter. That project never quite balanced itself, but the experience etched these quirks into memory.
Fast-forward to today
When it came time to give my signage a sense of “fall” and “rise,” I reached for what Google recommended: the classic MPU-6050. I ordered one, waited eagerly for a week, and then—nothing. The device simply wasn’t detected. I tried different Arduino libraries, including Adafruit MPU6050 and other MPU6050 drivers, but still no luck.
Before contacting the seller, I went into full on debug mode—cross-checking everything on both the hardware and software side: wiring, power supply, breadboard continuity, pin mapping and even alternative libraries. That’s when I stumbled upon the FastIMU library, which includes a handy bus scanner. Running it revealed the surprise: my board wasn’t a 6050 at all, but an MPU-6500. Same package, same silkscreen, but a different chip inside.
Once I switched to the correct 6500 driver in FastIMU, it worked perfectly. The label may have lied, but the library saved me.
Both the MPU-6050 and the MPU-6500 share the same I²C address (0x68). So when the library tried to talk to the device, it found something at that address—but then failed the WhoAmI check. That register is meant to confirm the identity of the chip. For the MPU-6050, it returns 0x68, while the MPU-6500 responds with 0x70.
How orientation is measured during operation:
A 3-D accelerometer vector sample is cached when the Start button is pressed. At each predefined interval, the IMU task wakes up and computes the angle between the current accelerometer vector and the cached reference vector:
#include <algorithm>  // std::clamp
#include <cmath>      // std::acos, std::lround

// Angle between the current accelerometer vector and the cached reference.
uint16_t FSM::computeTiltAngle(const Vector &curAccel, const Vector &refAccel) const {
    double dotProd = curAccel.dot(refAccel);
    double magProd = curAccel.norm() * refAccel.norm();
    if (magProd <= 0.0) return 0;  // degenerate vector: report no tilt
    double cosTheta = std::clamp(dotProd / magProd, -1.0, 1.0);  // guard acos domain
    return static_cast<uint16_t>(std::lround(std::acos(cosTheta) * 180.0 / M_PI));
}
If the angle exceeds the threshold for n consecutive samples, it is treated as a fall. I used etl::debounce to simplify the sample-counting logic, as sketched below.
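With ETL, the consecutive-sample counting collapses to a couple of lines. A sketch — the threshold and sample count here are illustrative, not the project’s tuned values:

#include <cstdint>
#include "etl/debounce.h"

// Treat "tilt above threshold" like a noisy switch: etl::debounce requires
// N agreeing samples before the state flips. Values are illustrative.
constexpr uint16_t kFallAngleDeg = 60;
etl::debounce<5> fallFilter;  // 5 consecutive samples to change state

void onImuSample(uint16_t tiltAngleDeg) {
    if (fallFilter.add(tiltAngleDeg > kFallAngleDeg)) {  // true on state change
        if (fallFilter.is_set()) {
            // tilt confirmed -> raise the Fell event
        } else {
            // back upright -> raise the Rose event
        }
    }
}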
Here too, as with the radar, you can replace the IMU with minimal change. Explore here.
LED Subsystem
The LED subsystem supports the usual controls: on, off, and several blinking/breathing waveforms with configurable timing. Each waveform phase is parameterized as shown below:
/**
* Breath Waveform:
* highLevel ____________
* / \
* / \ (repeat)
* / \ /
* lowLevel _/ \_____________/
* ^ ^ ^ ^ ^
* | | | | |
* | | holdHighMs | | holdLowMs |
* toHighMs toLowMs
*/
Waveform phases:
A waveform cycle is composed of four phases:
Phase       | Period      | Driven by
------------+-------------+-------------
Low → High  | toHighMs    | LEDC HW
Stay High   | holdHighMs  | SW/HW timer
High → Low  | toLowMs     | LEDC HW
Stay Low    | holdLowMs   | SW/HW timer
From these parameters you can produce different waveforms:
- Square
- Triangular
- Sawtooth (positive and negative)
- Sin-ish
Using LEDC fade hardware:
- LEDC handles PWM ramping in hardware; program a fade and hardware runs it.
- Once a fade starts you cannot change its timing or target duty.
- Parameter updates take effect only after the running fade finishes — apply them at the next phase boundary.
- Avoid very long fade durations to keep responsiveness.
- Fade completion is normally signalled by a fade-end interrupt.
- ISR rules followed: handlers in IRAM and only ISR-safe APIs (e.g., xQueueSendFromISR).
- In combined use with audio, the LEDC fade-end ISR caused crashes despite these mitigations.
- LEDC is ESP32-specific hardware; you may add a new HAL to port the LED subsystem to a different chipset.
Workaround (stable):
- Disable LEDC fade interrupt.
- Start a software timer alongside each fade with the same expected duration.
- When the timer fires, wake the LED task to move to the next phase.
- Result: hardware still performs fades, but no LEDC ISR — crashes disappeared; LED + audio stable.
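Condensed to a sketch, the workaround looks like this — simplified from the actual LED HAL, and assuming the LEDC channel and fade service are configured elsewhere:

#include "driver/ledc.h"
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "freertos/timers.h"

// Hardware still ramps the duty, but phase sequencing comes from a FreeRTOS
// one-shot timer instead of the fade-end ISR (which is never registered).
static TaskHandle_t ledTask;  // LED task to wake at each phase end
static TimerHandle_t phaseTimer;

static void onPhaseEnd(TimerHandle_t) {
    xTaskNotifyGive(ledTask);  // timer-task context, not an ISR
}

void initLedPhase(TaskHandle_t task) {
    ledTask = task;
    phaseTimer = xTimerCreate("ledPhase", pdMS_TO_TICKS(1), pdFALSE, nullptr, onPhaseEnd);
}

void startFadePhase(uint32_t targetDuty, uint32_t durationMs) {
    // Program the hardware fade and return immediately.
    ledc_set_fade_with_time(LEDC_LOW_SPEED_MODE, LEDC_CHANNEL_0, targetDuty, durationMs);
    ledc_fade_start(LEDC_LOW_SPEED_MODE, LEDC_CHANNEL_0, LEDC_FADE_NO_WAIT);
    // Arm a one-shot software timer with the same duration to mark phase end.
    xTimerChangePeriod(phaseTimer, pdMS_TO_TICKS(durationMs), 0);
}

// In the LED task: ulTaskNotifyTake(pdTRUE, portMAX_DELAY); then next phase.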
Varying breathing frequency with distance:
- Make the breathing period a function of detected distance.
- Tried linear, exponential, logarithmic mappings — linear worked best in practice.
- Eq: period(d) = P_max - (P_max - P_min) * d / D where d∈[0,D]; if D==0 use P_max.
- D = user-defined radar max distance.
- P_max and P_min are set by the user in the profile catalog:
DetectedDistanceMax:
  led: { pattern: twinkle, periodMs: 2000, cnt: 0 }
DetectedDistanceMin:
  led: { pattern: twinkle, periodMs: 120, cnt: 0 }
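As code, the mapping is just a clamped linear interpolation of the equation above (a sketch):

#include <cstdint>

// period(d) = P_max - (P_max - P_min) * d / D, clamped to d in [0, D].
// d and D are centimeters; periods are milliseconds; assumes pMax >= pMin.
uint32_t breathPeriodMs(uint32_t d, uint32_t D, uint32_t pMax, uint32_t pMin) {
    if (D == 0) return pMax;  // no range configured
    if (d > D) d = D;         // clamp to the configured max distance
    return pMax - (uint64_t) (pMax - pMin) * d / D;
}

// e.g. pMax=2000, pMin=120, D=600: d=0 → 2000 ms, d=300 → 1060 ms, d=600 → 120 ms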
In the video you can see how the LED breathing period (violet, ms) changes with detected distance (green, cm). This was recorded using an older control logic that included stricter rate-limiting; I removed those rules for simplicity, which is why the distance value reported to the controller (yellow) looks a bit stepped compared to the actual distance (green). The stricter rate-limiting was there to avoid spamming the controller’s event queue, which could affect the other interfaces.
The Explorer’s Workshop
Context: Before diving into the audio subsystem, here are the experiments I ran with various IDEs and frameworks during this project.
Arduino IDE
- I began in the Arduino IDE because I thought that’s where Arduino development happens.
- I quickly ran into limits: poor folder/library management and no real project structure for medium-sized work.
PlatformIO + Arduino framework (VS Code)
- Switched to VS Code + PlatformIO + Arduino framework — this was a much better development experience:
- Standard src, include, libs, and test layout.
- Easy dependency management and many libraries available.
- Well-documented build system.
- Development progressed fast and hardware checks passed.
- I dove into FreeRTOS primitives (tasks + IPC) instead of ESP event loops for portability and a cleaner mental model — if we ever move off ESP, the same RTOS-based architecture can be reused.
- I implemented a generic header-only library (rtos-module-lib) that packages each subsystem as a module with pluggable IPC channels (queues, message buffers, etc.), so you can swap the channel without rewriting subsystem code.
- I then intentionally picked message buffers (for their variable message size) — the hardest choice — as the IPC channel. Message buffers are the hardest because they lack built-in multi-reader/multi-writer support and force explicit serialization/deserialization. It was probably overkill and a fun rabbit hole, but an excellent stress test and learning exercise.
PlatformIO + ESPHome (Arduino framework)
- I wanted a dashboard and easy IoT integration, so I evaluated ESPHome.
- Goal: make the core logic a reusable library (smart-signage-lib) so it can be built both as a PlatformIO-Arduino standalone project (smart-signage-pio-arduino) and as an ESPHome-Arduino external component (smart-signage-esphome-arduino).
- I refactored the code into a library so both build targets could include the same core.
- Up to this point I followed a bottom-up approach: hardware interfaces first, then control logic and UI.
- I designed low-level drivers with a clear idea of how the top layer would communicate with them.
- But as I wrote the top controller layer, I realized I wasn’t writing code — I was cooking spaghetti of casts, if/else, and switch statements.
- I tried a switch-case state machine, but could see where it would lead, so I stopped and reassessed.
- With just over two weeks left before the contest deadline, I refused to submit code I wasn’t proud of.
- The only way forward was a full redesign while keeping the HAL intact.
- Fueled by the deadline, I jumped in with one mindset: "Maximum Effort."
ESPHome + ESP-IDF framework
- While redesigning I actually switched to the ESPHome–ESP-IDF framework for finer control 🤦🏽 (why not?).
- Menuconfig and the native APIs were irresistible, so I rebuilt the external component — including the UI, controller FSM, subsystem stubs, and even a tested LittleFS interface — under ESP-IDF.
- I rewrote the whole stack with boost::sml for the FSMs — the result was cleaner, more maintainable, and far easier to test (see the toy example after this list).
- Then the nastiness began: as I brought in my HAL layers, I hit a wall — several libraries I depended on were Arduino-only and wouldn’t run under pure ESP-IDF without nontrivial porting.
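For a taste of the Boost.SML style, here’s a toy cut-down in the spirit of the controller FSM — illustrative events and states, not the project’s real transition table:

#include <boost/sml.hpp>

namespace sml = boost::sml;

// Toy cut-down showing the Boost.SML transition-table style; the real FSM
// has more states, guards, and actions.
struct StartPressed {};
struct TippedOver {};
struct SetUpright {};

struct SignFsm {
    auto operator()() const {
        using namespace sml;
        return make_transition_table(
            *"ready"_s + event<StartPressed> / [] { /* play start clip  */ } = "active"_s,
            "active"_s + event<TippedOver>   / [] { /* call for help    */ } = "fallen"_s,
            "fallen"_s + event<SetUpright>   / [] { /* confirm recovery */ } = "active"_s
        );
    }
};

// Usage: sml::sm<SignFsm> sm; sm.process_event(StartPressed{});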
ESPHome + Arduino framework
- To minimize rework, I switched back to the ESPHome with Arduino framework.
- Even though there was only one week left before the deadline, I was confident I could finish because I had tested all hardware communication in the PlatformIO-Arduino combo—so no surprises were expected. Or so I thought. ESPHome had other plans for me.
You guessed right: that’s when things went off the rails.
I brought my audio subsystem to ESPHome.
Version mismatches brought the chaos.
When I was at Qualcomm, version mismatches were a constant problem. Android builds had to match Secure TrustZone builds, and the same codebase needed to run across multiple chipsets and hundreds of branches. That same old headache showed up — again.
Here are the problems I hit this time:
Version-mismatch issues:
- Missing header: WiFiClientSecure.h
- I2S/DMA RAM allocation error
Other issues:
- Custom partition table is not picked up by the compiler.
- Crash when LED activity overlaps with audio playback.
- Playback happens only once per boot.
Before we dive into fixes, let’s understand the audio subsystem.
Audio Subsystem - What we expect:
- Give it a file path or a message string.
- The subsystem resolves a path to a file on the device and plays the audio,
- or converts the text to speech and plays the generated audio.
Storage & format:
- Flash: 8 MB on the module; a portion is reserved for a LittleFS partition where we store the files.
- Why MP3: To save flash space. WAV was rejected due to size; MP3 keeps it small enough for multiple prompts.
Playback path:
- Open the file from LittleFS.
- Do software mp3 decoding.
- Stream it to the I2S bus.
- MAX98357A converts the I2S digital audio to analog and amplifies it (Class-D).
- Gain set to 12 dB (supports 3/6/9/12/15 dB).
Library choice (and why we switched)
- First tried: ESP8266Audio. Observed clicks/pops and a grainy tail after playback, plus deprecated-dependency warnings.
- Switched to: arduino-audio-tools. Widely used, powerful; I’m using only a small slice of it.
It worked perfectly in PlatformIO + Arduino framework (VS Code) in the earlier bottom-up approach, but the exact same code failed in the ESPHome build.
In the following sections, I’ll walk through each error, its cause, and how I fixed it.
Disclaimer
I fixed all the issues except the LED-interrupt crash, for which I provided a stable working workaround. That said — everything happened late at night under deadline pressure. While I aimed for the cleanest fixes, I may have made assumptions or taken shortcuts. If you spot a better solution or any mistakes in my approach, please let me know (or send a PR).
Issue 1: Missing header: WiFiClientSecure.h
Symptom:
- The smart-signage-esphome fails to compile when arduino-audio-tools is included, even with local audio playback:
../URLStream.h:9:11: fatal error: WiFiClientSecure.h: No such file or directory
Environment:
- ESPHome 2025.7.5
- default Arduino framework 3.1.3.
RCA:
- PlatformIO builds that worked earlier used Arduino 2.x (which kept WiFiClientSecure.h), while ESPHome upgraded to Arduino 3.1.3, where NetworkClientSecure.h is the canonical header for TLS clients. arduino-audio-tools attempted to pull in Wi-Fi streaming that referenced the old header name or layout—triggering the error under ESPHome.
What I tried:
- Pinning PIO to Arduino 2.x via ESPHome overrides was attempted, but it failed with a missing-script error, so I stayed on 3.1.3.
- Stubbed the missing header file (I’m not using streaming); the compile progressed but failed at link.
Fix:
- Created AudioConfigLocal.h with:
// Do NOT auto-include the giant umbrella that drags HTTP/URL code
#define AUDIO_INCLUDE_CORE false
- And refactored my Audio HAL to avoid using the AudioPlayer, which pulls streaming headers by default. In the new HAL I only include what’s needed for local playback and it worked.
#include <AudioToolsConfig.h>
#include <AudioTools/CoreAudio/AudioI2S/I2SStream.h>
#include <AudioTools/CoreAudio/VolumeStream.h>
#include <AudioTools/AudioCodecs/CodecMP3Helix.h>
#include <AudioTools/AudioCodecs/AudioEncoded.h>
#include <AudioTools/CoreAudio/StreamCopy.h>
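With just those headers, a minimal local-playback chain looks roughly like the sketch below — condensed from arduino-audio-tools’ standard examples; the I2S pins are placeholders for your wiring, and the real HAL also inserts a VolumeStream:

#include <LittleFS.h>
#include <AudioToolsConfig.h>
#include <AudioTools/CoreAudio/AudioI2S/I2SStream.h>
#include <AudioTools/AudioCodecs/CodecMP3Helix.h>
#include <AudioTools/AudioCodecs/AudioEncoded.h>
#include <AudioTools/CoreAudio/StreamCopy.h>

using namespace audio_tools;

// LittleFS file -> software MP3 decode -> I2S -> MAX98357A.
I2SStream i2s;
EncodedAudioStream decoder(&i2s, new MP3DecoderHelix());
StreamCopy copier;
File mp3;

void setup() {
    LittleFS.begin();
    auto cfg = i2s.defaultConfig(TX_MODE);
    cfg.pin_bck = 4; cfg.pin_ws = 5; cfg.pin_data = 6;  // placeholder pins
    i2s.begin(cfg);
    decoder.begin();
    mp3 = LittleFS.open("/warning_1.mp3");
    copier.begin(decoder, mp3);  // file -> decoder -> I2S
}

void loop() { copier.copy(); }  // pump one chunk per call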
Notes:
- Network/HTTP streaming is intentionally disabled by this change.
- TTS HAL not tested with this change.
Issue 2: I2S/DMA RAM allocation error
Symptom:
I2S init fails with the GDMA/IRAM error:
E gdma: gdma_register_tx_event_callbacks(...): user context not in internal RAM
E i2s_common: i2s_init_dma_intr(...): Register tx callback failed
E i2s_std: i2s_channel_init_std_mode(...): initialize dma interrupt failed
Environment:
RCA:
- What actually breaks: On the ESP32-S3, the I2S driver hooks GDMA interrupts. GDMA rejects the registration if the callback’s user context lives in PSRAM. This happens when CONFIG_I2S_ISR_IRAM_SAFE is off and PSRAM is enabled, so the I2S channel/context can land in PSRAM. (GitHub)
- Why you’re seeing it on S3 with PSRAM: Espressif’s docs note the I2S “IRAM-safe” option forces driver objects into internal RAM specifically to avoid accidental PSRAM linking; callbacks used in ISRs must be in internal RAM. With PSRAM enabled, not using IRAM-safe mode exposes this constraint. (docs.espressif.com)
- Regression & fix window: Community reports show it worked up to Arduino-ESP32 3.1.1, and the bug appeared in 3.1.2 on S3 with PSRAM; the symptom is exactly this log chain. It was later fixed in 3.2.0-RC2 (and thus in 3.2.0). (GitHub)
What I tried:
- I tried to upgrade my Arduino core to 3.2.1, which has the fix picked up, using the following change in my smart_signage.yaml:
esphome:
  name: smart-signage
  platformio_options:
    lib_ldf_mode: deep
esp32:
  board: seeed_xiao_esp32s3
  flash_size: 8MB
  framework:
    type: arduino
    version: 3.2.1 # <- New line
- But got following error:
error: implicit declaration of function 'i2c_ll_slave_init';
- This was a new error and seems to be due to a mismatch between the Arduino core and the underlying ESP-IDF (the same issue was reported).
I luckily came across the platform-espressif32 versions table, which gave me a lot of clarity about the versions of platform-espressif32, Arduino Core, and ESP-IDF.
I am really grateful to sivar2311 for sharing this table.
- From the table it was clear that the current code was:
- And I needed:
- Referring to the table, I chose to upgrade my platform-espressif32 to 54.03.21, hoping it would pick up both Arduino Core 3.2.1 and ESP-IDF 5.4.2, and I added an explicit platform version under platformio_options.
esphome:
  name: smart-signage
  platformio_options:
    platform: https://github.com/pioarduino/platform-espressif32/releases/download/54.03.21/platform-espressif32.zip # <- New line
    lib_ldf_mode: deep
  libraries:
    - "https://github.com/ETLCPP/etl.git"
esp32:
  board: seeed_xiao_esp32s3
  flash_size: 8MB
  framework:
    type: arduino
- But I again got a version-mismatch error:
error: 'IN6_IS_ADDR_V4MAPPED' was not declared in this scope
- Until now I was working on Windows, only because the first Arduino IDE was already installed there. Since I couldn’t verify whether the issue was with the fix or with residual files on the system, I switched to my Ubuntu machine, set up Docker, and tested in a clean environment. The issue persisted even with the clean env.
- Finally, when I looked into the generated platformio.ini, I found the issue with my fix.
- If I just set the Arduino core version:
framework:
  type: arduino
  version: 3.2.1
- The generated .esphome/build/smart-signage/platformio.ini contains:
platform = https://github.com/pioarduino/platform-espressif32/releases/download/53.03.13/platform-espressif32.zip
platform_packages = pioarduino/framework-arduinoespressif32@https://github.com/espressif/arduino-esp32/releases/download/3.2.1/esp32-3.2.1.zip
Which means:
- platform-espressif32: 53.03.13 (remained the same)
- i.e. ESP-IDF: 5.3.2 (remained the same)
- Arduino core = 3.2.1 (got updated)
They won’t match, hence the error.
- And if I just pin the platform-espressif32 version:
esphome:
  platformio_options:
    platform: https://github.com/pioarduino/platform-espressif32/releases/download/54.03.21/platform-espressif32.zip
- The generated .esphome/build/smart-signage/platformio.ini contains:
platform = https://github.com/pioarduino/platform-espressif32/releases/download/54.03.21/platform-espressif32.zip
platform_packages = pioarduino/framework-arduinoespressif32@https://github.com/espressif/arduino-esp32/releases/download/3.1.3/esp32-3.1.3.zip
Which means:
- platform-espressif32 = 54.03.21 (got updated)
- i.e. ESP-IDF: 5.4.2 (got updated)
- Arduino core = 3.1.3 (remained the same)
This too won’t match, hence the error.
Fix:
So once I understood this, I did the obvious fix and added both lines to my YAML:
esphome:
  name: smart-signage
  platformio_options:
    platform: https://github.com/pioarduino/platform-espressif32/releases/download/54.03.21/platform-espressif32.zip # <-- New
esp32:
  board: seeed_xiao_esp32s3
  framework:
    type: arduino
    version: 3.2.1 # <-- New
Which finally generated the correct platformio.ini:
platform = https://github.com/pioarduino/platform-espressif32/releases/download/54.03.21/platform-espressif32.zip
platform_packages = pioarduino/framework-arduinoespressif32@https://github.com/espressif/arduino-esp32/releases/download/3.2.1/esp32-3.2.1.zip
With this, the issue was resolved. I2S was successfully initialized.
Issue 3: Custom partition table is not picked up by the compiler
Here is a copy of the issue I reported to ESPHome, with a workaround fix and the likely spot in their script where the actual fix belongs:
Environment
- ESPHome 2025.7.5
- board: seeed_xiao_esp32s3
- framework: arduino
- platform: https://github.com/pioarduino/platform-espressif32/releases/download/54.03.21/platform-espressif32.zip
The problem
When building an ESP32 Arduino project that sets:
esp32:
  board: seeed_xiao_esp32s3
  flash_size: 8MB
  partitions: "custom_partitions.csv"
  framework:
    type: arduino
    version: 3.2.1
the build fails with:
*** [.pioenvs/<env>/partitions.bin] Source `custom_partitions.csv' not found, needed by target `.pioenvs/<env>/partitions.bin'.
Placing custom_partitions.csv next to the YAML does not help because ESPHome does not copy the file into the generated build directory for Arduino. (The docs say partitions is a filename/path, which implies it should “just work”.)
Workaround Fix:
What’s odd is that a strange relative path like board_build.partitions = "../../../custom_partitions.csv" does work — because PlatformIO resolves it relative to the SCons working dir inside .pioenvs/<env>/, which happens to traverse back to the project dir. Absolute host paths also fail.
Expected behavior:
If esp32.partitions: "<file>.csv" is provided, ESPHome should ensure the file is available in the build directory and handed to PlatformIO consistently (same behavior as with ESP-IDF).
Actual behavior:
For Arduino, ESPHome passes board_build.partitions = "<user value>" but does not copy the file into the build directory. I’m not sure about ESPHome’s behaviour on ESP-IDF.
Result:
Arduino builds fail unless the user manually games the relative path to land in the right place.
Suggestion for a fix:
I guess we need a line here to copy custom_partitions.csv to the build folder if CONF_PARTITIONS is set.
Issue 4: Crash when LED activity overlaps with audio playback
<TBD> (a stable workaround is described in the LED subsystem section above)
Issue 5: Playback happens only once per boot
<TBD>
Looks I Tried
- I began with a mental sketch of the finished product. I fed that description to ChatGPT, which produced the concept renders below.
- Fusion 360 iterations for Smart Signage Sentient: exploring friendly, protective forms and serviceable internal layouts.
- Big eye = speaker.
- Small eye = warning LED.
- The two tiny holes = charging status LEDs.
- I couldn’t place a knob without breaking the character’s silhouette, so I dropped this idea.
Why skull face? you may ask.
A peek at my workbench
If the LEDs don’t get your attention, the skull with the glowing red eye definitely will.
----------------------------------------------------------------------------------------------
Stay tuned—more in-depth project details are buffering… ⏳🤖
----------------------------------------------------------------------------------------------