In the heart of a dense rainforest, a ranger walks a daily patrol. The forest is alive with sounds — birds, insects, the rustling of leaves. Yet, hidden among these natural rhythms, there may be other, more dangerous noises: the harsh buzz of chainsaws, the crack of gunfire, or the rumble of illegal logging trucks. Detecting these threats early is critical, but human ears can't always be there, and by the time someone notices, it's often too late.
Meanwhile, beneath the waves, marine ecosystems face a quieter but equally dangerous challenge. The hum of boat engines, the shift in natural soundscapes, or even signs of coral reef stress can go unnoticed until the damage is irreversible. Ocean rangers and scientists need a way to listen — continuously, sustainably, and without harming the environment.
That is why we are building TerraSono.
TerraSono is a sustainable, modular acoustic intelligence system designed to give nature a voice — whether in forests or oceans. By capturing and analyzing sounds with low-power Edge AI, TerraSono can detect threats like illegal logging, poaching, or wildfires on land, and monitor biodiversity shifts or harmful human activity at sea. Its modular sensor design means the same core hardware can adapt to both environments: a MEMS microphone module for forests, a hydrophone module for marine life.
Solar-powered and decentralized, TerraSono can work in remote, off-grid regions for months at a time, transmitting only critical events via LoRa to save energy and bandwidth. Its eco-friendly, repairable design ensures that monitoring doesn’t add to environmental waste, aligning with a circular economy approach.
TerraSono empowers rangers, conservationists, and researchers with real-time, intelligent insights, giving them the tools to respond faster and protect ecosystems more effectively.
With TerraSono, forests and oceans no longer go unheard — they tell their story, and we listen.
Components Used
- ESP32-S3 Wroom Module
- SX1262 LoRa Module
- LoRa Antenna Kit
- BQ25185 Battery Charging IC
- TPS62162 Buck Converter
- 18650 3.4Ah Li-Ion Cell
- MAX16054 ON/OFF controller
- INMP441 MEMS I2S Mic
- 1 Inch Piezoelectric Disc (For Hydrophone)
- PCM1808 ADC to I2S converter (For Hydrophone)
- Connectors and Passive Components
Software Used
- Arduino IDE
- Edge Impulse Studio
- Node-RED
- InfluxDB
- Grafana
Implementation Steps
Step 1: Component Selection
We started by carefully selecting components that balance low power, modularity, and ruggedness:
- Core MCU: ESP32-S3 (for I²S audio input, Wi-Fi for uplink, and low-power modes).
- LoRa Radio: SX1262 module for long-range, low-power communication between nodes.
- Microphones:
- INMP441 MEMS microphone (I²S) for forest acoustic sensing.
- Piezo-based hydrophone (through PCM1808 ADC) for underwater monitoring.
- Power:
- 18650 Li-Ion cell (3.4Ah) for energy storage.
- Solar panel: 6V, 100mA for off-grid charging.
- Charge Controller: BQ25185 for safe charging and battery protection.
- Antenna: LoRa-optimized 868MHz antenna with U.FL to SMA interface for robust signal.
- Enclosures: IP-rated, ruggedized, with modular sensor mounting:
- Forest version: sealed with acoustic vents.
- Marine version: waterproof float housing with acoustic coupling for hydrophone.
For forest monitoring, we selected the INMP441 digital MEMS microphone, which provides a direct I²S interface to the ESP32-S3 microcontroller. This eliminates the need for an external ADC and reduces analog noise pickup. The microphone supports sampling rates of 32 kHz at 16-bit resolution or up to 48 kHz at 24-bit resolution, sufficient for detecting typical forest events such as chainsaws, gunshots, vehicle engines, and bird calls. To ensure reliable outdoor operation, the microphone is protected using a windscreen and a Gore hydrophobic vent, which allow acoustic transparency while blocking rain, dust, and insects.
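As an illustration, here is a minimal capture sketch for the INMP441 on the ESP32-S3, assuming the Arduino-ESP32 core's legacy I2S driver; the pin numbers and the 16 kHz rate are placeholder choices for this example, not the final firmware values:

// Illustrative I2S capture for the INMP441 (Arduino-ESP32 legacy I2S driver).
// Pin numbers and the 16 kHz sample rate are placeholders.
#include <driver/i2s.h>

#define I2S_SCK 5   // bit clock
#define I2S_WS  4   // word select (LRCLK)
#define I2S_SD  6   // serial data from the mic

void micInit() {
  i2s_config_t cfg = {};
  cfg.mode = (i2s_mode_t)(I2S_MODE_MASTER | I2S_MODE_RX);
  cfg.sample_rate = 16000;                          // enough bandwidth for the classifier
  cfg.bits_per_sample = I2S_BITS_PER_SAMPLE_32BIT;  // INMP441 sends 24-bit data in 32-bit slots
  cfg.channel_format = I2S_CHANNEL_FMT_ONLY_LEFT;   // L/R pin tied low -> left channel
  cfg.communication_format = I2S_COMM_FORMAT_STAND_I2S;
  cfg.dma_buf_count = 4;
  cfg.dma_buf_len = 256;

  i2s_pin_config_t pins = {};
  pins.bck_io_num = I2S_SCK;
  pins.ws_io_num = I2S_WS;
  pins.data_out_num = I2S_PIN_NO_CHANGE;            // receive only
  pins.data_in_num = I2S_SD;

  i2s_driver_install(I2S_NUM_0, &cfg, 0, NULL);
  i2s_set_pin(I2S_NUM_0, &pins);
}

// Read one DMA buffer and scale the 24-bit samples down to 16-bit PCM
// for the feature-extraction pipeline.
size_t micRead(int16_t *out, size_t maxSamples) {
  static int32_t raw[256];
  size_t n = min(maxSamples, (size_t)256), bytesRead = 0;
  i2s_read(I2S_NUM_0, raw, n * sizeof(int32_t), &bytesRead, portMAX_DELAY);
  size_t samples = bytesRead / sizeof(int32_t);
  for (size_t i = 0; i < samples; i++) out[i] = raw[i] >> 14;
  return samples;
}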
For marine monitoring, we are in the process of implementing a piezoelectric hydrophone module. Piezo elements are naturally suited for underwater acoustics due to their wide bandwidth and durability. Since these sensors produce high-impedance analog signals, we add a low-noise preamplifier stage (OPA1652) to boost and condition the signal. The output is digitized using a PCM1808 I²S ADC, which supports 48 kHz or 96 kHz at 24-bit resolution for high-fidelity underwater recordings. This digital stream is processed by the ESP32-S3 running a TinyML classifier to detect marine events such as boat traffic, reef activity, and biodiversity changes. The hydrophone will be deployed in a tethered float housing with an M8 waterproof connector, ensuring durability in long-term marine deployments.
By designing TerraSono with modular acoustic front-ends, the same core hardware can be reconfigured to monitor either forest or marine habitats, simply by swapping sensor modules and enclosures.
Step 2: Schematic & PCB Design
We designed a two-layer custom PCB integrating:
- ESP32-S3 MCU
- SX1262 LoRa transceiver
- Power management (BQ25185 + battery holder + solar input)
- I²S microphone interface for MEMS Mic and hydrophone
- SD card for event logging
- Debug headers for programming & testing
Ensured:
- Low-power routing with wide power traces and low-dropout regulators.
- Modularity: sensor input routed via JST connectors for easy swapping.
Step 3: PCB Fabrication & Assembly
A special acknowledgment goes to NextPCB for sponsoring the PCB and PCBA for this project. Their commitment to on-time delivery and consistent quality in manufacturing ensured smooth progress during development.
The high reliability of their boards, along with professional support, enabled TerraSono to move from concept to functional hardware quickly and effectively.
This contribution was instrumental in achieving a robust and sustainable implementation.
PCB quality (silkscreen, solder mask, pad alignment) was excellent — ensuring reliable performance and professional finish.
Step 4: Enclosure Design
Forest Node:
- IP65 casing with waterproof membrane vent for microphone.
- Solar panel integrated on top.
- Backplate designed for tree-mounting.
Marine Node:
- Buoyant float system with hydrophone cable going underwater.
- Transparent solar window for charging.
- Double O-ring sealing for electronics bay.
Power Budget
Battery:
- Nominal voltage: 3.7V
- Capacity: 3400mAh (3.4Ah × 3.7V ≈ 12.6 Wh)
Consumption:
- Deep sleep: 0.2 mA
- Active sensing (audio + inference): 80 mA for ~5 sec/event
- LoRa Tx burst: 120 mA for ~1 sec/event
- Wi-Fi uplink (gateway only): 160 mA for ~5 sec every 10 min
- Average consumption (slave node) ≈ 3–5 mA (assuming a few events per day)
- Average consumption (gateway) ≈ 10–12 mA
Backup Time (No Solar):
- Slave node: 3400mAh ÷ 5mA ≈ 680 hours ≈ 28 days
- Gateway node: 3400mAh ÷ 12mA ≈ 280 hours ≈ 11–12 days
Solar Recharge:
- At 4h sun/day → 0.6W × 4h = 2.4 Wh/day
- Node usage: ~0.25–0.3 Wh/day
Solar easily replenishes daily consumption, making the device sustainable indefinitely.
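The backup-time figures above are a straight capacity-over-draw division; a small helper (hypothetical, for quick what-if checks when tuning duty cycles) reproduces them:

// Back-of-envelope backup time from battery capacity and average draw.
float backupDays(float capacity_mAh, float avg_mA) {
  return capacity_mAh / avg_mA / 24.0f;   // hours -> days
}
// backupDays(3400, 5)  ~= 28.3 days (slave node)
// backupDays(3400, 12) ~= 11.8 days (gateway node)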
The TerraSono system operates as a two-node architecture consisting of an acoustic sensing node (slave) and a gateway node (master). Both nodes share identical hardware, but perform different roles based on firmware configuration.
Acoustic Event Detection (Slave Node):
- The slave node continuously samples audio using the INMP441 I²S MEMS microphone (forest) or the planned piezo-hydrophone + PCM1808 ADC (marine).
- The ESP32-S3 executes an on-device TinyML classifier that runs in low-power mode, monitoring for acoustic signatures such as chainsaws, gunshots, vehicles, or abnormal soundscapes.
- When the classifier detects an event exceeding a defined threshold, the node logs a 1–5 second audio snippet in local memory for validation or offline analysis.
Event Packaging & Transmission:
- Instead of streaming raw audio (to save bandwidth and power), the node transmits a compact event packet over LoRa (SX1262).
The packet includes:
- Node ID (unique identifier for sensor node)
- Event type (classified acoustic activity)
- Timestamp (synchronized clock or relative counter)
- Battery status (remaining charge %)
This ensures low-power, long-range communication in remote areas with no cellular coverage.
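As a sketch of how such a packet could be assembled with the ArduinoJson library (field names mirror the list above; the helper function and its inputs are illustrative):

#include <ArduinoJson.h>

// Assemble the compact event packet (ArduinoJson v6 syntax).
// Field names mirror the packet description above; values are illustrative.
String buildEventPacket(uint8_t nodeId, const char *eventType,
                        uint32_t timestamp, uint8_t batteryPct) {
  StaticJsonDocument<128> doc;
  doc["node_id"]   = nodeId;       // unique identifier for the sensor node
  doc["event"]     = eventType;    // classified acoustic activity
  doc["timestamp"] = timestamp;    // synchronized clock or relative counter
  doc["battery"]   = batteryPct;   // remaining charge %
  String payload;
  serializeJson(doc, payload);
  return payload;                  // e.g. {"node_id":1,"event":"chainsaw",...}
}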
Gateway Processing (Master Node):
- The gateway node receives LoRa packets and parses the data.
- It is connected to a Wi-Fi network using its onboard ESP32-S3 radio.
- Parsed event data is forwarded to an MQTT broker (in this implementation, the FlowFuse platform).
Data Pipeline (Node-RED + InfluxDB):
- Node-RED is used to bridge MQTT messages into the data storage layer.
- Incoming event payloads are cleaned, tagged with metadata (e.g., node location), and written into InfluxDB, a time-series database optimized for IoT telemetry.
Visualization & Dashboard (Grafana):
- Grafana queries InfluxDB to visualize real-time and historical data.
The dashboard includes:
- Event timeline with activity classification (chainsaw/gunshot/vehicle).
- Battery level trends for each deployed node.
- Geographic mapping (if GPS modules are later integrated).
- Status of nodes (online/offline).
This provides rangers, researchers, or stakeholders with intuitive monitoring tools accessible from any browser.
Event-driven operation and LoRa’s low-power transmission ensure that the system can remain deployed for extended periods, with solar recharging extending operational life indefinitely under adequate sunlight.
Model Training & Edge AI Integration
To enable on-device acoustic classification, we used Edge Impulse Studio, a platform optimized for training and deploying TinyML models on microcontrollers like the ESP32-S3.
Data Collection:
- Acoustic samples (chainsaw, gunshot, vehicle noise, and background forest ambiance) were recorded directly from the INMP441 microphone on the sensing node.
- Short 1–5 second audio clips were stored on the ESP32-S3 and then uploaded to Edge Impulse Studio for dataset preparation.
- Multiple environmental variations (different distances, noise levels, and ambient conditions) were captured to improve robustness.
Dataset Preparation & Training:
- The dataset was annotated into classes: chainsaw, gunshot, vehicle, and background.
- Edge Impulse’s MFCC (Mel-Frequency Cepstral Coefficients) audio feature extraction pipeline was used to convert raw audio into spectrogram-like features.
- A lightweight neural network (1D CNN + dense layers) was trained with on-device-compatible parameters to balance accuracy and computational efficiency.
- Model performance was validated on a train/test split, confirming the model is light enough for real-time inference on the ESP32-S3.
Deployment to ESP32-S3 (Slave Node):
- Once trained, the model was exported as an Arduino library (Edge Impulse firmware).
- The library was integrated into the Arduino IDE project running on the ESP32-S3.
- The ESP32-S3 continuously samples audio over I²S, extracts features in real time, and passes them to the TinyML model for inference.
- When suspicious activity is detected, the slave node generates an event packet (Node ID, Event Type, Timestamp, Battery Level).
LoRa Communication Integration:
- The event packet is transmitted via SX1262 LoRa module to the gateway node.
- The gateway node is programmed (Arduino IDE + LoRa library) to:
- Receive LoRa packets.
- Parse payload fields.
- Append gateway metadata (e.g., timestamp, RSSI).
Data Uplink to Server:
- The gateway node connects to local Wi-Fi using ESP32-S3’s onboard radio.
- Event data is published to an MQTT broker (FlowFuse platform) using the PubSubClient library.
- A Node-RED flow subscribes to MQTT topics, processes incoming payloads, and stores them in InfluxDB.
- Grafana dashboards visualize the events, node status, and trends in real time.
1. Arduino Libraries Used
The following Arduino libraries are used in the firmware:
- WiFi.h – to connect the ESP32 to Wi-Fi networks.
- PubSubClient.h – for MQTT communication with the broker.
- ArduinoJson.h – for building and parsing JSON payloads.
- RadioLib.h – for LoRa communication (SX1262 module).
- SPI.h – to handle SPI bus communication between the ESP32 and the LoRa module.
All libraries can be installed through the Arduino IDE Library Manager or manually via GitHub.
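For reference, the shared include block at the top of the sketches looks like this:

#include <WiFi.h>          // Wi-Fi uplink (gateway node)
#include <PubSubClient.h>  // MQTT client
#include <ArduinoJson.h>   // JSON payload handling
#include <RadioLib.h>      // SX1262 LoRa driver
#include <SPI.h>           // SPI bus for the radio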
2. Slave Node Firmware (LoRa Transmitter)
The slave node is responsible for detecting acoustic events using the connected microphone and ML model. For demonstration, events are randomly selected from a predefined list (e.g., Gunshot, Chainsaw, Dolphin Sound, Logging, Vehicle).
Logic Flow (a condensed sketch follows the list):
- Initialize LoRa module using RadioLib.
- Run the ML model in a loop to detect events.
- Log event details to the SD card.
- Build a string message (e.g., JSON-like format) containing node ID, event type, and battery status.
- Transmit the packet over LoRa.
- Enter low-power mode to conserve energy.
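A condensed sketch of this flow, assuming RadioLib's SX1262 driver; the pin mapping and the helper functions (runClassifier, logToSD, readBattery, buildEventPacket) are placeholders that stand in for the firmware's actual routines:

#include <RadioLib.h>
#include <esp_sleep.h>

// NSS, DIO1, NRST, BUSY pins -- placeholders, adjust to the board wiring
SX1262 radio = new Module(10, 2, 3, 9);

// Placeholder helpers implemented elsewhere in the firmware:
String runClassifier();                    // TinyML inference on the latest audio frame
void logToSD(const String &event);         // append the event to the SD card log
uint8_t readBattery();                     // remaining charge %
String buildEventPacket(uint8_t id, const char *ev, uint32_t ts, uint8_t batt);

void setup() {
  radio.begin(868.0);                      // EU868 band, matching the antenna kit
  esp_sleep_enable_timer_wakeup(2 * 1000000ULL);  // wake every ~2 s to sample
}

void loop() {
  String eventType = runClassifier();
  if (eventType != "background") {
    logToSD(eventType);
    String pkt = buildEventPacket(1, eventType.c_str(), millis(), readBattery());
    radio.transmit(pkt);                   // blocking send of the compact packet
  }
  esp_light_sleep_start();                 // low-power wait until the next cycle
}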
3. Gateway Node Firmware (LoRa Receiver)
The gateway node acts as a bridge between LoRa slave nodes and the MQTT broker.
Logic Flow (a condensed sketch follows the list):
- Initialize LoRa module with RadioLib.
- Continuously listen for incoming LoRa packets.
- On receiving a packet:
- Parse the event data.
- Build a JSON payload with fields:
node_id
,location
,event
,battery
,nodes_active
,total_events
,lat
,lon
, andtimesta
- Connect to Wi-Fi using WiFi.h.
- Reconnect to MQTT broker if disconnected.
- Publish the JSON payload to the configured MQTT topic.
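A matching gateway sketch, with placeholder Wi-Fi credentials and broker address (the FlowFuse endpoint in the real deployment); the topic string matches the publish call shown later:

#include <WiFi.h>
#include <PubSubClient.h>
#include <ArduinoJson.h>
#include <RadioLib.h>

SX1262 radio = new Module(10, 2, 3, 9);       // placeholder pin mapping
WiFiClient wifiClient;
PubSubClient mqtt(wifiClient);

void setup() {
  WiFi.begin("your-ssid", "your-password");   // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) delay(100);
  mqtt.setServer("broker.example.com", 1883); // placeholder broker (FlowFuse endpoint)
  radio.begin(868.0);
}

void loop() {
  if (!mqtt.connected()) mqtt.connect("terrasono-gateway");  // reconnect if dropped
  mqtt.loop();

  String pkt;
  if (radio.receive(pkt) == RADIOLIB_ERR_NONE) {   // blocking LoRa receive
    StaticJsonDocument<256> doc;                   // ArduinoJson v6 syntax
    if (deserializeJson(doc, pkt) == DeserializationError::Ok) {
      doc["rssi"] = radio.getRSSI();               // append gateway metadata
      String out;
      serializeJson(doc, out);
      mqtt.publish("terrasano/events", out.c_str());
    }
  }
}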
On the slave node, the ESP32-S3 integrates an Edge Impulse TinyML model compiled as an Arduino library. The INMP441 microphone provides audio frames via I²S, which are fed into the model for real-time inference.
The model is invoked using the standard Edge Impulse inference API:
// Run inference on captured audio
signal_t signal;
numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);

ei_impulse_result_t result;
EI_IMPULSE_ERROR res = run_classifier(&signal, &result, false);
The inference output provides class probabilities for each label (e.g., chainsaw, gunshot, vehicle, background). We select the class with the highest confidence and apply a threshold (e.g., >80%) to trigger an event.
String eventType = "background";
float confidence = 0.0;
for (int i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
  if (result.classification[i].value > confidence) {
    confidence = result.classification[i].value;
    eventType = String(result.classification[i].label);
  }
}
If the detected class is not background noise, the system constructs a JSON payload with metadata and sends it via LoRa:
// JSON packet
{
  "node_id": 1,
  "event": "gunshot",
  "confidence": 0.87,
  "timestamp": 253800,
  "battery": 3240
}
The gateway node receives the LoRa packet, validates the JSON string, and forwards it to the MQTT broker:
client.publish("terrasano/events", payload.c_str());
Downstream, Node-RED subscribes to this topic, parses the JSON fields (event, confidence, timestamp, battery), and writes them into InfluxDB. The Grafana dashboard visualizes event timelines and node health in real time.
As part of my commitment to conservation and knowledge sharing, I am engaged with WILDLABS, a global network of conservation practitioners, to present the TerraSono project and gather feedback from experts in the field. I have shared my project concept, objectives, and preliminary results in the WILDLABS discussion forum here. This engagement allowed me to receive valuable insights on acoustic monitoring for wildlife and forest health, and helped align TerraSono with real-world conservation needs.
The TerraSono – Sustainable Acoustic Intelligence System demonstrates a complete solution for real-time environmental monitoring using acoustic sensing, edge intelligence, and IoT technologies. By combining LoRa-based slave nodes with a Wi-Fi gateway, the system reliably transmits detected events in JSON format to an MQTT broker, where Node-RED, InfluxDB, and Grafana handle data processing, storage, and visualization.
With features like solar-powered operation, SD card event logging, configurable modularity, and swappable plug-and-play microphone probes, TerraSono is adaptable to both forest and marine deployments. Machine learning models trained with Edge Impulse enable accurate detection of events such as chainsaws, gunshots, logging, vehicles, and marine life sounds.