Forest Guard: A LoRa-Based Decentralized Edge-AI Mesh Network for Forest Monitoring
Protecting Forests Where the Internet Cannot Reach
Forests are the lungs of our planet, yet they remain vulnerable to poaching, illegal logging, and devastating wildfires. Remote regions often go unmonitored because they lack basic infrastructure: no cellular coverage, no internet, and no reliable power. Traditional solutions depend on towers, GSM networks, or satellite links, all of which are either unreliable or prohibitively expensive in deep forest zones.
Forest Guard redefines forest monitoring with a self-sustaining, decentralized, and intelligent mesh network that brings security where no traditional network can.
What Makes Forest Guard Different?
Instead of relying on costly connectivity, our system builds a solar-powered sensor mesh using LoRa radios running Meshtastic. Each node is intelligent at the edge: an onboard microphone feeds an Edge Impulse classifier that runs locally to detect events such as gunshots. Coupled with environmental sensors and a smoke detector, the system can issue real-time alerts about fire outbreaks or human intrusion.
When an anomaly is detected, the alert propagates through the LoRa mesh to a gateway node, which syncs with the cloud when internet is available. The data is visualized on a web-based dashboard, showing sensor activity, live alerts, and precise node locations on a map.
This means no single point of failure, no dependency on fragile infrastructure, and the ability to scale across vast landscapes with just low-power radios and the sun.
Why It Matters
- Early Fire Detection - Prevent small sparks from becoming catastrophic forest fires.
- Anti-Poaching & Logging Defense - Gunshot detection provides actionable intelligence for rangers.
- Sustainable Design - Fully solar-powered nodes with custom PCBs for durability.
- Decentralized & Resilient - Operates even without internet; data flows peer-to-peer until a gateway is reached.
- Community & Conservation Impact - Helps safeguard biodiversity, human settlements, and natural heritage.
With NextPCB, we fabricated custom PCBs for the sensor nodes. These PCBs integrate:
- ESP32-S3 & RP2040 LoRa modules
- Solar & battery management
- Environmental, smoke, and audio sensors
This ensures ruggedness, consistent quality, and rapid deployment of multiple nodes, transforming our prototype into a scalable, field-ready system.
The Big Picture
Forest Guard isn’t just a hardware project; it’s a blueprint for protecting forests worldwide. By combining edge AI, mesh networking, and sustainable power, we deliver a system that communities, conservationists, and governments can deploy today to build a safer, greener tomorrow.
Supplies
Components For 1x Node Unit:
- 1x Custom Node PCB
- 1x Gravity: Multifunctional Environmental sensor
- 1x Gravity: GNSS Sensor
- 1x Fermion: I2S MEMS Microphone
- 1x Fermion: MEMS Smoke Detection Sensor
- 1x RP2040 LoRa with Type-C adapter
- 1x Li-Po Battery
- 1x 70x70mm Solar Panel
- 8x M3x10mm Screws
Components For 1x Gateway Unit:
- 1x Arduino Uno R4 WiFi
- 1x Fermion: 3.5” 480x320 TFT LCD Display
- 1x RP2040 LoRa
- 1x Li-Po Battery
- 1x Micro Push Switch
- 4x M2x5mm Screws
- 1x 3V Buzzer
Tools
- 3D Printer (for enclosures and mounting parts)
- Soldering Kit (iron, solder wire, flux, wick)
- Screwdriver Kit (for M2/M3 hardware)
Designing the Forest Guard PCB was the very first milestone in this project.
I am not a professional PCB designer, but with hands-on experience in electronics and by studying references from existing ESP32-S3 development boards, I created a custom PCB in EasyEDA that integrates:
- ESP32-S3 as the main controller
- Battery management and charging circuit
- Type-C USB for programming/power
- Headers for plugging in LoRa module and sensors
The PCB design files (Gerber + BOM) are available on my GitHub repository:
👉 Forest-Guard GitHub Repository
To bring this design to life, I had 5× PCBs fabricated and 2× boards fully assembled (SMD assembly for the ESP32-S3, battery & power management, Type-C, etc.) by NextPCB. The sensors and LoRa modules are mounted later as through-hole or header components.
This combination gave me the flexibility to test multiple prototypes, while the assembled PCBs saved me time and ensured professional quality soldering of fine-pitch SMD parts.
We’ll flash the Meshtastic firmware onto the RP2040 LoRa modules and configure them for UART communication.
⚠️ Important Safety Note:
Always connect the antenna before powering on the LoRa module to prevent damage.
1. Flashing Meshtastic Firmware
- Go to Meshtastic Downloads.
- Click Go to Flasher.
- Select Target Device: RP2040 LoRa.
- Choose a version → click Flash → then Continue.
- Download the .UF2 firmware file.
2. Upload Firmware to RP2040
- Press and hold the BOOT button on the module.
- While holding BOOT, connect the USB Type-C cable to your PC.
- A new drive named RP2 will appear.
- Copy the downloaded .UF2 file into the RP2 drive.
- Once copied, press the RESET button.
- The device will reboot with the new firmware.
3. Connect to Meshtastic Client
- Open Meshtastic Client.
- Click New Connection.
- Select Serial.
- Click New Device → choose the COM port where your module is connected.
- You should now see the Meshtastic Node Page.
4. Configure LoRa Region
- Go to Config → LoRa.
- Set the Region according to your country’s LoRa regulations.
5. Configure Serial UART
- Go to Module Config → Serial.
- Enable Serial Output.
- Set pins:
- Receive Pin (RX): 8
- Transmit Pin (TX): 9
- Save by clicking the top-right save button.
This configures the module to communicate via UART with external devices.
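To sanity-check the link from the microcontroller side, here is a minimal test sketch for the ESP32-S3, assuming UART1 on GPIO 17/18 and 38400 baud (match both to your wiring and to the module’s serial settings):

#include <Arduino.h>

constexpr int LORA_RX = 17;  // ESP32 pin wired to the module's TX (pin 9)
constexpr int LORA_TX = 18;  // ESP32 pin wired to the module's RX (pin 8)

void setup() {
  Serial.begin(115200);
  // UART1 to the RP2040 LoRa module: 8 data bits, no parity, 1 stop bit
  Serial1.begin(38400, SERIAL_8N1, LORA_RX, LORA_TX);
}

void loop() {
  Serial1.print("#TEST*");          // framed test message into the mesh
  while (Serial1.available()) {
    Serial.write(Serial1.read());   // echo anything arriving from the mesh
  }
  delay(5000);
}

With the serial module enabled, text written into this UART is relayed over the mesh and received mesh text is written back out, which is what the Node firmware relies on.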
6. Repeat for All Modules
Repeat the above steps for every LoRa module you plan to use in your project.
Step 3: PCB Assembly
With the custom PCB manufactured, the next step is to carefully solder the sensor modules and communication hardware onto the board.
Components to Solder
- Gravity: Multifunctional Environmental Sensor
- Fermion I²S MEMS Microphone
- Fermion MEMS Smoke Detection Sensor
- RP2040 LoRa Module with Type-C Adapter
Prepare the Workspace
- Use a clean, static-free surface.
- Preheat your soldering iron to around 350 °C (for leaded solder) or 370–380 °C (for lead-free).
- Have tweezers and flux ready to handle small pins.
Solder Components One by One
- Begin with the smallest modules first: the MEMS microphone and the smoke detection sensor.
- Then carefully align the environmental sensor and solder its I²C pins.
- Finally, solder the LoRa module.
- Double-check pin alignment before applying solder. Incorrect orientation can damage the modules.
Continuity Testing
- After soldering each module, use a multimeter in continuity mode.
- Probe between the module pin and the corresponding PCB pad/trace.
- A beep (near-zero resistance) confirms proper connectivity.
To make the Forest Guard Node truly field-ready, I designed a custom enclosure in Fusion 360. I first imported all the standard components, then brought in the PCB’s 3D model exported from EasyEDA, ensuring that every cutout and mount point lined up perfectly.
Enclosure Features
The Node enclosure is made up of multiple parts:
- Housing - Holds the custom PCB, with cutouts for the Type-C port, push switch, and top-mounted LoRa antenna. A large center cutout allows light from the onboard RGB LED to pass through.
- Diffuser - A dedicated piece that diffuses the RGB LED light, making it visible in the field without being harsh.
- Cover - Designed to mount the solar panel on top and provide space for the GNSS sensor.
- Mount & Clip Set - Allows the node to be attached securely to trees, walls, or other structures.
The enclosure is secured with 12× M3 screws, giving it the feel and robustness of a professional product enclosure.
3D Printing
I printed the parts on a Bambu Lab P1S 3D printer:
- Housing and cover were printed in light gray PLA for durability and aesthetics.
- Diffuser was printed in pure white PLA to achieve soft light diffusion from the RGB LED.
Files for You
- STL files - Ready-to-print files for direct 3D printing.
- Fusion 360 design file - For anyone who wants to modify or customize the design further.
To make the RGB LED indicator and environmental sensor light input effective, we add a diffuser and a light visor to the node housing. This ensures the LED glow is soft and visible in the field, while the environmental sensor gets accurate light readings without interference.
Parts Needed
- Housing
- Diffuser
- Small piece of clear plastic (cut from packaging or acrylic sheet)
- Quick glue (super glue or instant adhesive)
Attach the Diffuser
- Apply a thin line of quick glue around the Diffuser cutout in the housing.
- Carefully snap the diffuser into place as shown (it should align flush with the cutout).
- Hold gently for a few seconds until the glue sets.
Install the Clear Plastic Visor
- Locate the cutout for the Environmental Sensor light input.
- Apply a small amount of quick glue around the edges of this cutout.
- Place the clear plastic piece over the opening. This acts as a protective window and ensures correct light transmission for the sensor.
- Cut two wires, each about 10 cm long (one red, one black).
- Solder the red wire to the + pad on the back of the battery connector.
- Solder the black wire to the – pad.
- Take the assembled PCB, housing, battery, and the LoRa antenna.
- First, connect the antenna to the LoRa module.
- ⚠️ Never power on without the antenna connected.
- Connect the battery to the PCB.
- Place the PCB inside the housing, aligning the Type-C port with the cutout.
- Secure the PCB using 4× M3 screws.
- Unscrew the antenna, pass it through the top housing hole, and screw it back in place.
- Finally, use double-sided tape to fix the battery to the back of the PCB.
- Take the solar panel, cover, and quick glue.
- Align the solar panel with the cutout on the cover and snap it into place.
- From the back side of the cover, locate the four holes.
- Apply a small amount of quick glue into each hole to secure the panel firmly.
- Let it sit for a few minutes to allow the glue to set fully.
- Take the cover and the GNSS sensor module.
- Connect the GNSS antenna to the GNSS module.
- Place the module over the mounting holes on the cover.
- Secure the module using 4× M3 screws.
- Use double-sided tape to secure the antenna on the cover so it stays in place.
Take the Housing Assembly and the Cover Assembly.
Use the 4-pin connector that came with the GNSS sensor:
- Cut the connector in half using a cutter.
- Plug one side into the GNSS sensor.
- Strip the wires on the other side and solder them to the PCB as follows:
- Red to 3V3
- Black to GND
- Green to SDA
- Blue to SCL
Now connect the solar wires coming from the PCB to the solar panel:
- Black to –
- Red to +
Double-check all connections before powering on.
Step 11: Final Assembly
- Take the assembled housing and the assembled cover.
- Carefully align the cover on top of the housing.
- ⚠️ Make sure no wires get pinched during this step.
- Once aligned, snap the cover into place.
- Use 4× M3 screws to securely fasten the cover to the housing.
Now your Forest Guard Node is fully assembled and ready for field testing!
Step 12: Pre-Requisite to Program Node (Edge Impulse)
Before uploading the final Node firmware, we need to prepare the machine learning (ML) model that runs locally on the ESP32-S3. This is done using Edge Impulse, a powerful platform for developing and deploying ML models directly to embedded devices.
What is Edge Impulse?
Edge Impulse is an edge AI development platform that makes it simple to:
- Collect and label sensor data (audio, vibration, environmental, camera, etc.).
- Train ML models using classical algorithms or neural networks.
- Optimize models for low-power microcontrollers like ESP32, RP2040, and STM32.
- Generate ready-to-use Arduino libraries that can be imported directly into your Node firmware.
This enables us to bring AI directly to the forest, without needing internet access or cloud inference — the model runs entirely on the Node itself.
Audio Classification for Gunshot Detection
For this project, we focus on audio classification using the onboard MEMS microphone:
- Data Collection
- Record short audio clips of gunshots and background forest sounds (wind, birds, insects, etc.).
- Upload these samples into your Edge Impulse project.
- Feature Extraction
- Edge Impulse automatically converts raw audio into spectrograms (MFCCs), which represent the frequency patterns of the sound.
- This allows the model to detect unique signatures of gunshot sounds compared to other noises.
- Model Training
- A classification model is trained to output labels like:
- "gunshot"
- "background"
- The model learns the difference in frequency and amplitude patterns.
- Deployment
- Once trained and tested, export the model as an Arduino library.
- Include this library in your Node code.
- The ESP32-S3 runs the inference on its second core, ensuring real-time classification without blocking sensor updates or LoRa communication.
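As a rough sketch of that dual-core pattern (run_classifier() and the signal_t callback come from the exported Edge Impulse Arduino library; buffer and label names are illustrative, so match the label string to your own project):

#include <Forest_Guard_Gunshot_Detector_inferencing.h>  // your exported model

static float audioBuf[EI_CLASSIFIER_RAW_SAMPLE_COUNT];  // filled by the mic driver
static volatile float gunshotScore = 0.0f;              // read by the main loop

static int getAudioData(size_t offset, size_t length, float* out) {
  memcpy(out, audioBuf + offset, length * sizeof(float));
  return 0;
}

void inferenceTask(void*) {
  for (;;) {
    // (capture one window of audio into audioBuf here)
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_RAW_SAMPLE_COUNT;
    signal.get_data = &getAudioData;
    ei_impulse_result_t result;
    if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
      for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        if (strcmp(result.classification[i].label, "gunshot") == 0) {
          gunshotScore = result.classification[i].value;  // 0.0 to 1.0
        }
      }
    }
    vTaskDelay(pdMS_TO_TICKS(10));  // yield between inference windows
  }
}

void setup() {
  // Pin the classifier to core 0; the Arduino loop() task runs on core 1
  xTaskCreatePinnedToCore(inferenceTask, "ei", 8192, nullptr, 1, nullptr, 0);
}

void loop() { /* sensor reads and LoRa framing run here, never blocked */ }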
Why This Matters
This setup means that every Node becomes an intelligent sentinel:
- Capable of hearing gunshots in the forest.
- Making real-time decisions without cloud dependency.
- Sending alerts through the LoRa mesh instantly.
And importantly, this is just the beginning — with Edge Impulse, you can retrain the model on other audio events like chainsaws (illegal logging) or calls of endangered animals, making the Forest Guard system highly adaptable and future-proof.
Create Edge Impulse Project
To train and deploy your ML model, you first need to set up a project in Edge Impulse Studio.
Create a Project
- Open Edge Impulse Studio.
- Login with your account credentials.
- Click on “Create New Project”.
- Give your project a meaningful name, e.g., Forest Guard Gunshot Detector.
Get Your Project Key
- After the project is created, go to Dashboard → Keys.
- Locate your Project API Key.
- Copy this key and keep it handy — you’ll need it in the Flask tool and Node code to connect data and models to Edge Impulse.
One of the biggest hurdles when working with Edge Impulse is data collection, especially for audio and image inputs. While numeric sensor streams (like temperature or humidity) can be pushed directly via serial, Edge Impulse currently doesn’t offer an equally direct way to stream raw audio or image frames from the ESP32 into the platform.
This means we normally have to:
- Log data to an SD card.
- Remove the card.
- Copy files to the computer.
- Upload them manually to Edge Impulse.
This process quickly becomes tedious when collecting hundreds of samples.
My Solution: Flask Data Uploader
To make this seamless, I built a Flask-based desktop tool that bridges the ESP32 and Edge Impulse:
- ESP32 Data Firmware
- First, flash a simple Arduino sketch onto the ESP32 that streams audio (from the microphone) or images (from a camera) over Serial USB.
- Flask App
- On the PC side, run my Flask tool.
- It listens to the ESP32’s serial port and captures the incoming raw data.
- Using your Edge Impulse API key, the tool automatically uploads this data into your project.
- Benefits
- No need for SD cards or manual file transfers.
- Data is organized and labeled as it’s uploaded.
- Faster iteration when training models with new samples.
Before we can collect and upload audio samples into Edge Impulse, we need the ESP32-S3 to stream raw microphone data over Serial USB. This is done by flashing a small Arduino sketch that continuously records from the I²S microphone and sends the audio buffer to the PC.
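The repository’s esp32_audio_serial.ino is the reference implementation; the outline below shows the core capture-and-stream idea using the legacy ESP-IDF I2S driver bundled with Arduino core 2.x (pin numbers and the 16 kHz sample rate are assumptions; newer 3.x cores replace this driver with ESP_I2S):

#include <Arduino.h>
#include <driver/i2s.h>

constexpr int SAMPLE_RATE = 16000;
constexpr int PIN_BCLK = 4, PIN_WS = 5, PIN_DIN = 6;  // assumed mic wiring

void setup() {
  Serial.begin(921600);  // fast serial link for raw audio
  i2s_config_t cfg = {};
  cfg.mode = (i2s_mode_t)(I2S_MODE_MASTER | I2S_MODE_RX);
  cfg.sample_rate = SAMPLE_RATE;
  cfg.bits_per_sample = I2S_BITS_PER_SAMPLE_16BIT;
  cfg.channel_format = I2S_CHANNEL_FMT_ONLY_LEFT;
  cfg.communication_format = I2S_COMM_FORMAT_STAND_I2S;
  cfg.dma_buf_count = 4;
  cfg.dma_buf_len = 256;
  i2s_driver_install(I2S_NUM_0, &cfg, 0, nullptr);

  i2s_pin_config_t pins = {};
  pins.mck_io_num   = I2S_PIN_NO_CHANGE;
  pins.bck_io_num   = PIN_BCLK;
  pins.ws_io_num    = PIN_WS;
  pins.data_out_num = I2S_PIN_NO_CHANGE;
  pins.data_in_num  = PIN_DIN;
  i2s_set_pin(I2S_NUM_0, &pins);
}

void loop() {
  int16_t buf[256];
  size_t bytesRead = 0;
  i2s_read(I2S_NUM_0, buf, sizeof(buf), &bytesRead, portMAX_DELAY);
  for (size_t i = 0; i < bytesRead / 2; i++) {
    Serial.println(buf[i]);  // one sample per line, matching the line-by-line stream
  }
}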
Install the ESP32 Board Package (Board Manager)
- Open Arduino IDE → File → Preferences.
- In Additional Boards Manager URLs, add:
https://raw.githubusercontent.com/espressif/arduino-esp32/gh-pages/package_esp32_index.json
- Click OK.
- Go to Tools → Board → Boards Manager….
- Search “ESP32” and install esp32 by Espressif Systems (latest).
Tip: After install, restart Arduino IDE if the boards list doesn’t refresh.
- Open the provided esp32_audio_serial.ino sketch in the Arduino IDE.
- This code initializes the microphone, records a buffer, and streams it line-by-line over Serial.
- Inside the sketch, you’ll see a configurable parameter:
constexpr int SECONDS_TO_GRAB = 10;
Change this value if you want longer or shorter recordings.
Default is 10 seconds per sample.
- Go to Tools → Board → ESP32 → DFRobot FireBeetle 2 ESP32-S3.
- Connect your ESP32-S3 to the PC with USB-C.
- Under Tools → Port, choose the correct COM port.
- Under Tools → USB CDC On Boot → Enable
- Click Upload to flash the code onto your ESP32-S3.
Now that your ESP32-S3 is streaming microphone data over Serial, let’s use the Flask Data Tool to capture it and upload directly into your Edge Impulse project.
Setup the Flask Tool
- Download the project repository:
- 👉 Forest-Guard GitHub Repository
- Open the Edge Impulse Data Tool folder.
- Run the Flask app:
python app.py
(File path: Edge Impulse Data Tool/app.py)
Access the Web Interface
- Once the server is running, open your browser and go to:
- http://127.0.0.1:5000/
- You will see the data collection dashboard.
Collect Audio Data
- Select COM Port → Choose the port where your ESP32 is connected.
- Paste API Key → Enter your Edge Impulse project API key (from Step 14).
- Choose Mode → Select whether this sample is for training or testing.
- Enter Label → e.g., gunshot or background.
- Select Data Type → Choose Audio.
- Click Capture → Recording will begin.
- The Node LED will glow green while audio is being recorded.
- Once the LED turns off, the captured audio file is automatically uploaded to your Edge Impulse project.
You should now see your labeled audio samples appear inside the Edge Impulse Studio → Data Acquisition tab. From here, you can repeat the process to build up your dataset of gunshots and background noise.
Step 16: Collect Data
Now that the Flask tool is ready and connected to Edge Impulse, it’s time to build our training dataset. A good dataset is the most important factor for achieving a reliable classification model.
Collect Background Noise Data
- Set the label flag to Noise.
- Start recording samples in different environments:
- Indoors → quiet rooms, fan noise, people talking.
- Outdoors → wind, birds, insects, cars, etc.
- Collect at least 120 seconds of audio in each scenario.
- The more variety, the better the model can tell background noise apart from gunshots.
Collect Gunshot Data
- Set the label flag to Gun.
- Play different gunshot audio samples (different calibers, environments, echo levels).
- Record up to 120 seconds of audio in total.
Using multiple gunshot sound samples with slightly different characteristics helps the model generalize better to real-world scenarios.
To make sure the model is reliable:
- Split your dataset 80:20 → 80% for training, 20% for testing.
- Edge Impulse automatically suggests the split, but you can also move samples manually if needed.
Tips for Better Results
- Collect data at different volumes and distances.
- Try to balance the number of Noise and Gunshot samples.
- Keep background data diverse — this prevents false positives.
Right now, each recorded audio sample is 10 seconds long. For better accuracy, we need to split these into smaller 1-second samples that can be used as training features in Edge Impulse.
Splitting Process in Edge Impulse
- In Edge Impulse Studio, go to the Data Acquisition tab.
- Find one of your 10-second audio samples (either Noise or Gunshot).
- Click on the three dots (…) menu next to the sample.
- Choose Split Sample.
- Use the tool to crop each segment into 1-second chunks.
- Example: a 10-second audio file becomes 10× 1-second samples.
- For gunshot recordings, isolate the exact segment of the shot to ensure the model learns the event clearly.
- Click Split to save.
With your dataset ready and split into 1-second audio clips, the next step in Edge Impulse is to design the impulse: the pipeline that converts raw audio into features and then trains a classification model.
Create a New Impulse
- In Edge Impulse Studio, go to the Create Impulse tab.
- Set the Window Size and Frequency as shown in the reference image (these define how much audio is processed in each slice and at what sample rate).
Add Blocks
- Processing Block: Select Audio (MFCC).
- MFCC (Mel-Frequency Cepstral Coefficients) transforms raw sound waves into a compact, spectrogram-like representation of sound patterns that the ML model can learn from.
- Learning Block: Select Classification.
- This will train a neural network to classify between labels like Gunshot and Noise.
Save the Impulse
- Once both blocks are added and configured, click Save Impulse. This locks in the pipeline that will be used in the next steps for feature extraction and training.
Now that the impulse is created, we need to extract features from our audio samples. This is the process that converts raw sound into meaningful patterns (MFCCs) that the classifier can learn from.
- In Edge Impulse Studio, go to the MFCC block (under Impulse Design).
- Click Save Parameters to confirm the default MFCC settings.
- Press Generate Features.
- Edge Impulse will now process all your audio samples.
- This step can take a few minutes depending on dataset size.
- Once finished, you’ll see a Feature Explorer graph on the right side of the screen.
- Each point on the graph represents a 1-second audio sample.
- Samples with similar characteristics (like background noise) will cluster together, while distinct sounds (like gunshots) will form separate groups.
- Clear separation between Gunshot and Noise clusters is a good sign — it means your model will be easier to train accurately.
With your features generated, it’s time to train the Neural Network classifier that will distinguish between Gunshot and Noise.
- In Edge Impulse Studio, go to the Classifier tab.
- Click Save and Train.
- Training will take a few minutes depending on dataset size.
Default training settings usually work well:
- Number of training cycles: 100
- Learning rate: 0.005
- Processor: CPU
- Architecture: 1D Convolutional Neural Network (recommended for audio)
Results
Once training is complete, you’ll see:
- Accuracy → ~96% (based on your dataset).
- Loss → around 0.25 (lower is better).
Confusion Matrix →
- Gunshot classified correctly ~94% of the time.
- Noise classified correctly ~100% of the time.
Metrics →
- Precision: 0.97
- Recall: 0.96
- F1 Score: 0.96
On-device performance →
- Inferencing time: ~3 ms
- RAM usage: ~12.5 KB
- Flash usage: ~45 KB
Once your classifier is trained and performing well, the next step is to export the model so it can run directly on your ESP32-S3 Node. Edge Impulse makes this very easy by packaging the trained model into an Arduino-compatible library.
- In Edge Impulse Studio, go to the Deployment tab.
- Under Deployment options, select Arduino library.
- This will create a .zip library that can be imported into the Arduino IDE.
- Click Build.
Once the build completes, Edge Impulse will automatically download the library to your computer.
The file will be named something like:
Forest_Guard_Gunshot_Detector_arduino-1.0.0.zip
Step 22: Arduino Setup
Now that we have our trained Edge Impulse model ready, let’s set up the Arduino IDE with all the required libraries to compile and upload the Node code.
Open the Project
- Launch Arduino IDE.
- Open the Node_V2.ino file (this is the main code for the Forest Guard Node).
Install Required Libraries
1. Edge Impulse Model Library
- Go to Sketch → Include Library → Add .ZIP Library…
- Select the .zip file you downloaded from Edge Impulse in Step 20.
- This adds your custom ML model to the project.
2. GNSS Library
Download and install the GNSS driver library from DFRobot:
- Install it the same way (Add .ZIP Library).
3. Environmental Sensor Library
Download the library for the multifunction environmental sensor:
👉 DFRobot Environmental Sensor Library
- Install it the same way (Add .ZIP Library).
4. NeoPixel Library
- In Arduino IDE, open Library Manager (Sketch → Include Library → Manage Libraries…).
- Search for Adafruit NeoPixel.
- Install the latest version.
Now that everything is configured, it’s time to flash the Node firmware to the ESP32-S3.
Code Adjustments Before Upload
Open the Node_V2.ino sketch in Arduino IDE and check the following user configuration section:
Edge Impulse Include
- Change the #include <...inferencing.h> line to match the filename of the model you downloaded in Step 20.
- Example:
#include <Forest_Guard_Gunshot_Detector_inferencing.h>
Node ID
- Set a unique NODE_ID for each device.
- Example: "01", "02", etc.
GNSS Availability
- If your Node has a GNSS sensor attached → set GNSS_AVAILABLE = true.
- If not → set it to false.
Manual Location (Optional)
- When GNSS is disabled, update the fallback latitude and longitude:
static const float INITIAL_LAT = <your_latitude>;
static const float INITIAL_LON = <your_longitude>;
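Taken together, the user-configuration section looks roughly like this (the values shown are placeholders; follow the exact names used in Node_V2.ino):

#include <Forest_Guard_Gunshot_Detector_inferencing.h>  // your model's filename

static const char* NODE_ID        = "01";    // unique per device
static const bool  GNSS_AVAILABLE = false;   // true if the GNSS sensor is fitted

// Fallback location, used only when GNSS is disabled
static const float INITIAL_LAT = 28.6139;    // <your_latitude>
static const float INITIAL_LON = 77.2090;    // <your_longitude>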
Arduino IDE Settings
- Go to Tools → Board → ESP32 → DFRobot FireBeetle 2 ESP32-S3.
- Connect your ESP32-S3 via USB-C cable.
- Under Tools → Port, select the correct COM port.
- Go to Tools → USB CDC On Boot → Disable.
Upload the Code
- Click the Upload button in Arduino IDE.
- The code will compile (this may take a while since the Edge Impulse model is large).
- Once complete, the firmware will be flashed to your ESP32-S3 Node.
After Upload
- The Node should boot with a Blue breathing LED (boot + LoRa init).
- After registration with the Gateway, it will begin sending sensor data and detecting events.
For the Gateway enclosure, I started by importing the Arduino Uno R4 WiFi and the 3.5” TFT display model into Fusion 360. This allowed me to design the case around the exact dimensions of the components.
Enclosure Features
- Housing - Includes cutouts for the TFT display, LoRa antenna, and the Arduino Type-C port.
- Cover - Designed with mounting holes to securely fix the Arduino board inside.
3D Printing
I 3D printed both the housing and the cover in light gray on my Bambu Lab P1S printer. The parts came out strong, precise, and professional-looking, making the gateway unit both robust and visually consistent with the Node design.
- Take the gateway housing and the TFT display.
- Place the display into the housing, making sure it is in the correct orientation with the screen aligned to the cutout.
- Secure the display using 4× M2 screws.
- Double-check that the screen sits flush with the housing and is firmly fixed in place.
- Take the LoRa antenna.
- Unscrew the antenna connector from the module.
- Pass the antenna through the antenna hole on the housing.
- Screw the antenna back onto the LoRa module from the outside.
- Make sure the antenna is firmly seated and facing upright.
- Take the Arduino Uno R4 WiFi and the gateway cover.
- Align the Arduino with the mounting holes on the cover.
- Secure it in place using 4× M2 screws.
- Ensure the Type-C port and headers remain accessible through the cover cutouts.
- Take the buzzer, the power switch, and some quick glue.
- Insert the buzzer into its dedicated slot on the cover.
- Insert the power switch into its cutout hole on the cover.
- Apply a small amount of quick glue around the switch edges to secure it in place.
Now it’s time to wire everything together. Follow the circuit diagram carefully when connecting the Arduino, TFT Display, and LoRa module.
I used male header pins to avoid soldering directly to the Arduino. This way, the display and modules can be plugged and unplugged easily for debugging or replacement.
Arduino ↔ Display (TFT)
- Connect as shown in the wiring diagram above (image).
- Ensure all data and control pins are matched correctly, with 5V and GND powering the display.
Arduino ↔ LoRa Module
- GND → GNS (LoRa GND)
- 5V → VSys (LoRa Power)
- Pin 2 → Pin 9 (LoRa UART RX/TX pair)
- Pin 3 → Pin 8 (LoRa UART TX/RX pair)
Power & Peripherals
- Connect the battery and power switch between GND and 5V of the Arduino.
- Connect the buzzer:
- GND → Arduino GND
- +Ve → Arduino Pin 5
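As a minimal sketch of that serial link on the gateway side, assuming a SoftwareSerial port on pins 2/3 (recent UNO R4 board packages include one) and the same baud rate as the Meshtastic serial module; Gateway_V1.ino is the reference implementation:

#include <SoftwareSerial.h>

SoftwareSerial lora(2, 3);  // RX = pin 2 (from LoRa TX), TX = pin 3 (to LoRa RX)

void setup() {
  Serial.begin(115200);
  lora.begin(38400);        // must match the module's serial settings
}

void loop() {
  while (lora.available()) Serial.write(lora.read());  // mirror mesh traffic to USB
}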
Now let’s program the Gateway so it can communicate with the nodes, process sensor/event data, and upload everything to Firebase.
Download the Code
- Go to the Forest Guard GitHub repository.
- Download and extract the files.
- Open Gateway_V1.ino in the Arduino IDE.
Setup Arduino IDE
- Make sure the Arduino Uno R4 WiFi board package is installed via Board Manager.
- Install all required libraries as shown in the reference images (WiFiS3, ArduinoHttpClient, NTPClient, DFRobot UI/TFT libraries, etc.).
Add Your Credentials
Inside the sketch:
- Enter your WiFi SSID and password.
- Enter your Google Firebase host URL and authentication key.
// Wi-Fi
const char* WIFI_SSID = "<your-ssid>";
const char* WIFI_PASS = "<your-pass>";
// Firebase RTDB (no https://, no trailing slash)
const char* FB_HOST = "<project-id>-default-rtdb.asia-southeast1.firebasedatabase.app";
// Legacy database secret copied in Step 3
const char* FB_AUTH = "<DATABASE_SECRET>";
Upload the Code
- In Tools → Board, select Arduino UNO R4 WiFi.
- In Tools → Port, select the correct COM port for your board.
- Click Upload.
Once uploaded, the Gateway will:
- Connect to WiFi.
- Sync time via NTP.
- Register nodes and receive LoRa messages.
- Push ENV, LOC, and event data into Firebase.
- Drive the TFT display and buzzer for real-time monitoring.
1) Create a Firebase project
- Open https://console.firebase.google.com/
- Create project → (Google Analytics optional; you can keep default).
- Wait for provisioning to finish.
2) Create a Realtime Database
- Left sidebar → Build → Realtime Database → Create Database
- Choose a region close to you (e.g., asia-southeast1 / Singapore).
- For quick testing select Start in Test mode (Firebase allows open read/write for 30 days).
Copy the Database URL shown at the top of the Data tab.
It looks like:
https://<your-project-id>-default-rtdb.asia-southeast1.firebasedatabase.app/
You will use this as FB_HOST in the Gateway sketch.
3) Get an auth token (Database Secret) for REST
The Gateway (GA) sketch uses simple HTTPS REST with the ?auth=... query parameter.
- Project settings (gear) → Service accounts
- Click Database secrets → Show → Copy the secret.
You will use this as FB_AUTH in the Gateway sketch.
Add a Web App (for your dashboard)
- Project Overview → Add app → Web
- Give it a name (e.g., Forest Guard) → Register app
- On the next screen you’ll see your Web SDK config:
const firebaseConfig = {
apiKey: "...",
authDomain: "...",
databaseURL: "https://<project-id>-default-rtdb.<region>.firebasedatabase.app",
projectId: "...",
storageBucket: "...",
messagingSenderId: "...",
appId: "..."
};
Copy these values into your dashboard (Lovable.dev) settings.
1) Node (NA) boot & registration
- NA = ESP32-S3 with Env + Smoke + Mic + (optional) GNSS + RP2040 LoRa (Meshtastic).
On boot:
- LED Blue breath.
- Initializes sensors.
- Checks GNSS_AVAILABLE. If present, uses GNSS time; location is sent only when satsUsed > 3.
- Registers with GA by broadcasting #<NODE_ID>* every 10 s until GA replies #<NODE_ID>+OK* (sketched below).
- Only after registration do Edge Impulse (gunshot) and fire/smoke checks start.
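A sketch of that registration loop, reusing the NODE_ID constant and UART wiring from the node configuration (frame syntax as described above):

bool registered = false;
unsigned long lastTry = 0;
String rx;

void pollRegistration() {
  if (registered) return;
  if (millis() - lastTry >= 10000UL) {            // rebroadcast every 10 s
    Serial1.print(String("#") + NODE_ID + "*");   // e.g. "#01*"
    lastTry = millis();
  }
  while (Serial1.available()) {
    char c = Serial1.read();
    if (c == '#') rx = "";                        // frame start
    else if (c == '*') {                          // frame end: check for the ACK
      if (rx == String(NODE_ID) + "+OK") registered = true;
    }
    else rx += c;
  }
}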
2) Periodic telemetry (non-blocking)
- Every 10 s the NA sends:
- ENV: #E, <ID>, temp, humidity, uv, lux, pressure, alt*
- LOC: #L, <ID>, lat, lon* (only if the GNSS fix has >3 sats; if GNSS is not fitted, the system uses your preset initial location).
- LED Green breath on successful send.
3) Event detection & retry
- Gunshot: Edge Impulse score crosses threshold (e.g., ≥0.90).
- Fire: Smoke reading crosses threshold with hysteresis.
- Node latches a single “current event” and creates eventId = random(0..100).
- Sends every 10 s until cleared by GA:
- Fire: #F+<id>, <ID>, <smoke>, YYYY/MM/DD, HH:MM:SS* or NT if no GNSS time.
- Gun: #G+<id>, <ID>, <score>, YYYY/MM/DD, HH:MM:SS* or NT.
- LED Red breath while event is latched.
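Sketched out, the latch-and-retry logic looks like this (variable names are illustrative; field order follows the frame formats above):

bool  eventLatched = false;
int   eventId      = 0;
float eventScore   = 0.0;
unsigned long lastEventSend = 0;

void latchGunshot(float score) {
  if (eventLatched) return;            // only one "current event" at a time
  eventLatched = true;
  eventId      = random(0, 101);       // 0..100, the gateway's de-dup key
  eventScore   = score;
}

// timestamp is "YYYY/MM/DD, HH:MM:SS" from GNSS, or "NT" if no GNSS time
void resendEvent(const String& timestamp) {
  if (!eventLatched || millis() - lastEventSend < 10000UL) return;
  Serial1.print(String("#G+") + eventId + "," + NODE_ID + "," +
                String(eventScore, 2) + "," + timestamp + "*");
  lastEventSend = millis();
}

// The latch clears only when the gateway broadcasts "#<NODE_ID>+C*".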
4) Gateway (GA) reception & reliability
- GA = Arduino UNO R4 WiFi + TFT UI + Buzzer.
- LoRa noise-proofing: both sides parse only bytes between # and *; everything else is ignored (see the sketch at the end of this section).
- On #<ID>* → replies #<ID>+OK* (register ACK).
On telemetry:
- Maintains last posted values and only uploads to Firebase when a value has changed:
- ENV changed by ≥ ±1.0 per field
- LOC changed by ≥ 0.00010° (~11 m)
- NTP gate: GA writes to Firebase only after epoch ≥ 2025-01-01 (NTP warmup).
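A compact version of that framing filter and the upload gates (handleFrame() is a hypothetical dispatcher; 0.00010° of latitude is roughly 11 m because one degree spans about 111 km):

String frame;
bool inFrame = false;

void handleFrame(const String& payload);  // dispatch on the E/L/F/G prefix

void pumpLora(Stream& lora) {
  while (lora.available()) {
    char c = lora.read();
    if (c == '#')      { inFrame = true; frame = ""; }        // start marker
    else if (c == '*') { if (inFrame) handleFrame(frame); inFrame = false; }
    else if (inFrame)  frame += c;                            // keep payload only
    // bytes outside #...* (mesh noise, debug text) are silently dropped
  }
}

// Upload gates: per-field delta for ENV, ~11 m of movement for LOC
bool envChanged(float now, float last) { return fabsf(now - last) >= 1.0f; }
bool locChanged(float now, float last) { return fabsf(now - last) >= 0.00010f; }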
5) Cloud logging (your schema)
GA writes to Firebase RTDB paths:
- nodes/<ID>/env/<epoch> → { temp, humi, uvi, li, pres, alt }
- nodes/<ID>/Loc/<epoch> → { lat, lon } (capital L)
- nodes/<ID>/fire/<epoch> → { value, NodeTime }
- nodes/<ID>/gun/<epoch> → { score, NodeTime }
- nodes/<ID>/meta → { Event, lastSeenAt }
When an event frame arrives:
- Sets meta/Event = true.
- Logs the event (de-duplicates by eventId).
- Starts buzzer (non-blocking toggle).
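For reference, a single ENV write over Firebase’s REST API might look like this sketch, reusing FB_HOST and FB_AUTH from the gateway sketch and the WiFiS3 + ArduinoHttpClient libraries installed earlier (pushEnv() and its fields are illustrative):

#include <WiFiS3.h>
#include <ArduinoHttpClient.h>

extern const char* FB_HOST;  // "<project-id>-default-rtdb....firebasedatabase.app"
extern const char* FB_AUTH;  // legacy database secret

WiFiSSLClient ssl;

bool pushEnv(const char* nodeId, unsigned long epoch, float temp, float humi) {
  HttpClient http(ssl, FB_HOST, 443);
  String path = String("/nodes/") + nodeId + "/env/" + epoch
              + ".json?auth=" + FB_AUTH;
  String body = String("{\"temp\":") + temp + ",\"humi\":" + humi + "}";
  http.put(path, "application/json", body);   // PUT creates/overwrites this key
  int status = http.responseStatusCode();
  http.stop();
  return status == 200;
}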
6) Dashboard + operator loop
- Dashboard reads RTDB to render map, charts, and alerts.
- When the site is inspected and safe, the operator sets meta/Event = false in the dashboard.
7) Clearing the event (end-to-end handshake)
- GA polls meta/Event. When it becomes false:
- GA broadcasts #<ID>+C* (a few times for reliability).
- Stops buzzer, unlatches its local event, and remembers the last cleared eventId.
- If NA keeps repeating the same eventId, GA does not re-log the event; it simply re-ACKs CLEAR and moves on.
- NA receives #<ID>+C* → clears its event latch and resumes normal telemetry.
8) LED summary (NA)
- Blue: boot/LoRa/registration
- Green: data sent
- Red: event latched
To visualize the data coming from the Forest Guard Nodes, I built a custom web dashboard using Lovable.dev. This dashboard connects directly to Firebase and provides both a quick overview and detailed insights into the forest monitoring network.
Setup
- When the dashboard is first opened, it takes you to a Firebase configuration page.
- Here, you enter your Firebase host and authentication key.
- Once saved, the dashboard connects to the database and loads the real-time data.
Map View
- The map view shows the live location of all deployed nodes.
- Each node is color-coded by status:
- Gray → Inactive
- Green → Active
- Red → Alert (fire or gunshot detected)
- By clicking on a node, you can quickly check its latest sensor data and status.
Quick Cards
At the top of the dashboard, quick cards summarize the system:
- Total Nodes → Number of nodes in the network.
- Online Status → Active vs inactive nodes.
- Recent Alerts → Count of fire/gunshot events in the last 12 hours.
- Data Points → Total environmental readings logged.
Node Details
Clicking on “View Node Details” opens a full dashboard view for that node. Here you can monitor:
- Current Environmental Conditions (temperature, humidity, pressure, light, UV, altitude).
- Trends over Time with graphs for Temperature & Humidity, Light & UV Index.
- Fire Detection Events (timestamped alerts from smoke sensor).
- Gunshot Detection Events (with AI confidence scores from Edge Impulse model).
Why It Matters
This dashboard transforms raw sensor data into a clear, real-time interface for rangers, researchers, or conservation teams. With one glance, you can see:
- Which nodes are active, where they are, and what conditions they’re reporting.
- Whether a fire or gunshot event has been detected.
- Historical trends that help understand the forest’s environmental conditions.
It essentially turns the Forest Guard network into a living digital twin of the forest.
Step 34: Conclusion
With the completion of this build, we have created Forest Guard, a decentralized forest surveillance system that can detect and raise alerts for critical events such as gunshots or forest fires — even in regions with no internet or cellular coverage. By combining low-power LoRa mesh networking, solar-powered sensor nodes, and edge AI intelligence, this project proves that modern technology can play a vital role in safeguarding our forests and protecting wildlife.
The Gateway provides a central bridge to the cloud, where data is stored and visualized in real time, while the Nodes tirelessly monitor the environment, detect anomalies, and forward alerts across the mesh. Together, they form a scalable, resilient, and sustainable system that can make a real difference for conservationists, rangers, and environmental researchers.
What makes this system truly exciting is the flexibility of Edge AI. Using the Edge Impulse platform, we trained a model to detect gunshots, but the same pipeline can be extended further:
- By training on audio recordings of chainsaws or tree cutting, the system could become an anti-illegal logging detector.
- With audio datasets of endangered or extinct species calls, it could serve as a wildlife discovery and monitoring system, helping scientists and communities identify rare animals in the wild.
This adaptability shows that Forest Guard is not just a single-purpose project, but a platform for innovation in forest conservation. From early fire detection to biodiversity monitoring, the possibilities are vast.
In the end, this project is a step toward a future where technology and nature coexist, where smart sensors and AI extend the eyes and ears of humans into places we cannot always reach — ensuring our forests remain safe, vibrant, and full of life for generations to come. 🌲🌍💡
Join the discussion:
I have started a thread on WILDLABS to talk about Forest Guard and the wildlife protection challenges it targets. Share your field insights, deployment constraints, and ideas for improving the system here:
https://wildlabs.net/article/forest-guard
Let’s collaborate to adapt this project to real-world conservation needs around the world.