In the dense forests and mountainous regions of Kerala, especially areas like Nelliampathy, wild elephants frequently traverse routes that have been part of their migratory patterns for centuries. These pathways, known locally as “Aanathaara” (elephant corridors), are deeply embedded in the landscape and ecology. However, with rapid human expansion — including roads, farmlands, and settlements — these natural trails now intersect with human activity, often with devastating consequences.
While locals may instinctively understand where and when to be cautious, most visitors, tourists, and even daily commuters remain unaware of the risks. In early 2025, a tragic incident occurred in Nelliampathy, Kerala: a German tourist, unfamiliar with the terrain and unaware of a wild elephant blocking the road, ventured forward despite the locals' warnings and lost his life. Traditional static signage, such as painted boards warning of elephants, often fades into the background and fails to provide real-time, actionable alerts. The result? Dangerous — and sometimes fatal — encounters that could have been avoided with better awareness.
That's where EleTect 1.5 comes in — combining TinyML, LoRa, solar power, and interactive signage to proactively warn and deter.
🛠️ What It Does
EleTect 1.5 is an advanced extension of the EleTect 1.0 system. It introduces an interactive digital signage system that provides real-time warnings to riders and drivers when elephants are present ahead on forest roads.
🐘 EleTect Node (Detection Unit)
- Detects elephants using a TinyML-powered camera model.
- Uses LoRa to send elephant presence status to the signage node.
- Triggers a deterrent mechanism (e.g., honeybee sound) only when vehicles are present.
🚦 Signage Node (Warning System)
- Placed 500m before known elephant crossings.
- Displays a bright, red, flashing elephant warning.
- Integrated camera detects the presence of vehicles.
- Sends vehicle presence data to EleTect Node.
- All powered entirely by solar energy.
System flow:
- Elephant detected ➡️ EleTect Node triggers LoRa alert to Sign Board.
- Signboard flashes elephant warning if vehicles are approaching.
- Signboard checks for vehicles using its camera:
  - If vehicles are detected → message sent to EleTect Node.
  - EleTect Node waits 10 minutes → plays deterrent bee sound.
- After elephants leave, detection stops → Signboard resets.
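Under the hood, this exchange boils down to three newline-terminated text messages over the LoRa link. As a minimal sketch, they could live in a small header shared by both nodes (the namespace and constant names are illustrative assumptions; the message strings match the sketches later in this article):

#include <Arduino.h>

// Shared vocabulary for the EleTect ↔ Signboard LoRa link.
// Both nodes exchange plain newline-terminated text lines.
namespace eletect {
  // EleTect Node → Signboard Node
  constexpr const char* ELEPHANT_DETECTED = "ELEPHANT_DETECTED";
  constexpr const char* ELEPHANT_LEFT     = "ELEPHANT_LEFT";
  // Signboard Node → EleTect Node
  constexpr const char* VEHICLE_PRESENT   = "VEHICLE_PRESENT";
  // Delay between a vehicle report and the bee-sound deterrent.
  constexpr unsigned long DETER_DELAY_MS = 10UL * 60UL * 1000UL; // 10 minutes
}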
EleTect is a technology-driven system designed to detect elephants early, deter them harmlessly, and alert nearby communities. Its goal is to protect lives, foster coexistence, and contribute to wildlife conservation.
Despite their size and power, elephants have a surprising vulnerability: they are instinctively afraid of the buzzing of bees. By carefully and harmlessly using this natural deterrent, EleTect can safely redirect elephants away from human settlements without harming them. This peaceful strategy respects both humans and elephants.
⚙️ How it Works
At the forest boundaries, multiple TinyML-powered nodes are deployed. Each node can:
- Detect elephants using a vision-based TinyML model on the Seeed Studio Grove Vision AI V2 module.
- Analyze sound using a Seeed Studio XIAO ESP32S3 Sense, running a TinyML audio model to detect elephant vocalizations.
- Trigger deterrents by playing honeybee buzzing sounds through an onboard speaker.
- Communicate via LoRa/LoRaWAN with a central master node to ensure real-time updates even in remote areas.
The system is completely solar-powered, making it sustainable and ideal for deployment in remote forest regions.
🔗 Components Used
- XIAO ESP32S3
- Grove LoRa-E5 Module
- Solar charging modules
- Custom battery pack
- Custom LED panel
- Enclosure made from acrylic sheet
🛠️ Step 1: Build the Custom Signage Enclosure
In this step, we’ll create a weatherproof and visually impactful enclosure that houses the electronics for the elephant warning signage system. The enclosure is made from 5mm clear acrylic sheets, designed in Fusion 360, and laser cut for precision.
🧰 Materials Needed
- 5mm thick clear acrylic sheet
- Access to a laser cutter
- Acrylic glue (e.g., Weld-On 3 or Fevikwik)
- Vinyl cutter and precision knife
- Reflective vinyl sticker sheet (yellow and red)
- Matte black vinyl sheet
- Clamps or tape for alignment
- Fusion 360 or similar CAD software
- Cooling film
📐 Step 1.1: Design the Enclosure in Fusion 360
Create a design in Fusion 360 for the enclosure.
Sketch the front panel dimensions based on your component layout (camera, LEDs, LoRa antenna, etc.). Ensure the box has enough depth to house the electronics.
Export each face of the enclosure as a DXF file for laser cutting.
🔦 Step 1.2: Laser Cut the Panels
- Upload the DXF files to your laser cutter’s software.
- Set your laser cutter to the appropriate power/speed settings for 5mm acrylic.
- Carefully cut each panel and label them as you go to avoid confusion during assembly. Peel off any protective film after cutting.
👉 Safety first! Wear proper eye protection and operate the cutter in a ventilated area.
🧩 Step 1.3: Assemble the Enclosure
Lay out all the cut pieces on a clean surface.
Begin with the base and edges. Apply acrylic glue along the joining edges and press the pieces together. Use clamps or masking tape to hold parts in place until dry.
Continue assembling all sides until the box is complete.
Let the entire assembly cure for several hours to ensure strong bonding.
👉 Tip: Double-check the alignment before applying glue — acrylic bonds instantly! Afterwards, grind and smooth any irregular edges using a grinding tool.
✨ Step 1.4: Apply the Reflective Graphics
Design an elephant silhouette and the text “ELEPHANTS AHEAD” using vector software (e.g., Adobe Illustrator or Inkscape).
Cut the design using a vinyl cutter from reflective vinyl sheet.
Clean the front acrylic panel with a microfiber cloth. Carefully transfer the reflective vinyl design onto the panel using transfer tape.
Cover the remaining back and side edges with matte black vinyl to hide the internal components and focus attention on the warning.
👉 Result: A bold, reflective front that is highly visible when headlights or onboard LEDs shine on it.
💡 Step 2: Building the Custom LED Panel
In this step, we’ll design and assemble a high-visibility LED panel in the shape of an elephant, mounted inside our previously built acrylic enclosure. This panel serves as a visual alert, visible from a distance even in low-light conditions.
🧰 Materials Required
- 4x generic dotted PCBs (perforated board)
- 400x 5mm Red Clear LEDs
- 200x 68Ω resistors
- 22 AWG hookup wire
- Soldering iron + solder wire
- Black matte spray paint (optional, for aesthetics)
- 1x N-channel MOSFET (e.g., IRFZ44N)
- 1x 220Ω resistor (for MOSFET gate)
- 1x Custom 3S3P LiPo battery pack (11.1V nominal)
- Heat shrink, glue, basic tools
Take your four dotted PCBs and paint them with matte black paint — this step is optional but gives a professional look and improves contrast with the red LEDs.
Let them dry completely.
👉 Tip: Paste a sheet of paper on the back of the PCB before painting so that the paint won’t reach the back side.
✂️ Step 2.2: Join the PCBs to Form a Large Panel
Measure and cut the boards to your desired dimensions.
Carefully align and glue the four PCBs together to create a larger panel.
Make sure all the solder pads align properly and the board is flat.
🐘 Step 2.3: Trace and Plan the LED Layout
Place the vinyl elephant signage or sticker over the panel as a reference. Using a white marker or chalk, roughly trace the outline of the elephant and the text “ELEPHANTS AHEAD”.
Plan the LED positions inside this trace to match the shape as closely as possible.
👉 Tip: Leave a bit of spacing between each LED to avoid overcrowding.
🔗 Step 2.4: LED Chain Design
We’ll use a simple and efficient wiring scheme:
2 LEDs in series + 1 resistor (68Ω) = 1 chain
Multiple such chains are wired in parallel across the panel
Why this configuration? With a 3S (11.1V) LiPo battery, each chain of two red LEDs (approx. 2V forward voltage each) leaves about 7.1V across the 68Ω resistor, giving roughly 100 mA per chain and balanced brightness across the panel. Check this figure against your LEDs' current rating during the prototype test in Step 2.5.
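As a quick sanity check on those numbers, the short program below works out the per-chain and worst-case whole-panel current (the 2V forward voltage is an assumed typical value for red LEDs; substitute your datasheet figure):

#include <cstdio>

int main() {
    const float v_batt = 11.1f; // 3S LiPo nominal voltage
    const float v_led  = 2.0f;  // assumed forward voltage of one red LED
    const float r_ohm  = 68.0f; // series resistor per chain
    const int   chains = 200;   // 400 LEDs / 2 LEDs per chain

    // Ohm's law across the resistor: I = (Vbatt - 2*Vf) / R
    const float i_chain = (v_batt - 2.0f * v_led) / r_ohm;
    std::printf("Per-chain current: %.0f mA\n", i_chain * 1000.0f);   // ~104 mA
    std::printf("Worst-case panel draw: %.1f A\n", i_chain * chains); // ~20.9 A
    return 0;
}

The worst-case figure assumes every chain lit continuously; the flashing duty cycle and real LED characteristics will bring the average well down. It is, however, why the design calls for a high-current 3S3P pack and a beefy MOSFET; measure the real draw in Step 2.5 before committing to the full panel.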
🧪 Step 2.5: Prototype and Test the LED Circuit
First, build one LED chain on a breadboard.
Power it using a bench power supply.
Confirm brightness and measure current draw. Once satisfied, continue soldering the full design onto the board.
🔩 Step 2.6: Solder All LED Chains
Start from the top of the board, following your traced outline.
Insert 2 LEDs in series and connect the 68Ω resistor to complete the chain. Continue placing and soldering LED chains across the board, following your elephant outline and text.
Use thin wires to connect common positive and negative rails at the back.
After soldering, check for shorts and test small sections individually.
⚡ Step 2.7: Power and Drive Circuit
- Connect the negative line of each LED chain to the MOSFET drain.
- Connect the MOSFET source to ground.
- Use a 220Ω resistor on the MOSFET gate and connect it to your microcontroller’s digital output (for PWM or ON/OFF control).
- The positive rail of all LED chains goes directly to the power supply.
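Before the panel goes into the enclosure, it is worth testing the driver stage on its own. A minimal test sketch along these lines (the gate on pin 5 is an assumption, chosen to match the signboard code later) pulses the MOSFET so you can confirm clean switching and check for heating:

#include <Arduino.h>

const int GATE_PIN = 5; // MOSFET gate, driven through the 220Ω resistor

void setup() {
  pinMode(GATE_PIN, OUTPUT);
}

void loop() {
  digitalWrite(GATE_PIN, HIGH); // panel on
  delay(500);
  digitalWrite(GATE_PIN, LOW);  // panel off
  delay(500);
}

One caveat: the IRFZ44N and IRF540N suggested in this article have standard gate thresholds, and a 3.3V microcontroller pin may not switch them fully on at high currents. If the MOSFET runs hot during this test, consider a logic-level part such as the IRLZ44N.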
✅ Final Testing
Securely mount the pack inside the enclosure and wire it to the LED panel via a toggle switch or microcontroller control. Turn on the system and ensure all LEDs light up in the correct pattern.
Check heat levels and ensure no resistors or LEDs are overheating.
Step 3: Custom Model for Vehicle Detection
In this part, we'll kick off by labeling our dataset with the intuitive tools provided by Roboflow. From there, we'll advance to training our model within Google Colab's collaborative environment. Next up, we'll explore deploying our trained model using the SenseCraft Model Assistant, a process designed to smoothly bridge the gap between training and real-world applications. By the conclusion of this part, you'll have your very own custom model ready to detect vehicles, operational on Grove Vision AI V2.
From dataset to model deployment, our journey consists of the following key stages:
1. Dataset Labeling — This section details the process of acquiring datasets suitable for training models. There are two primary methods: utilizing labeled datasets from the Roboflow community or curating your own dataset with scenario-specific images, necessitating manual labeling.
2. Model Training with Google Colab — Here, we focus on training a model capable of deployment on Grove Vision AI V2, leveraging the dataset obtained in the previous step via the Google Colab platform.
3. Model Upload via SenseCraft Model Assistant — This segment explains how to employ the exported model file to upload our vehicle detection model to Grove Vision AI V2 using the SenseCraft Model Assistant.
Step 1. Create a free Roboflow account
Roboflow provides everything you need to label, train, and deploy computer vision solutions. To get started, create a free Roboflow account.
Step 2. Creating a New Project and Uploading images
Once you've logged into Roboflow, click on Create Project.
Name your project (e.g., "EleTect 1.5"). Define the project type as Object Detection. Set the Output Labels as Categorical.
Now it's time to upload vehicle images.
Collect images of vehicles. Ensure you have a variety of backgrounds and lighting conditions. On your project page, click "Add Images".
You can drag and drop your images or select them from your computer. Upload at least 100 images for a robust dataset.
Click on Save and Continue.
Step 3: Annotating Images
After uploading, you'll need to annotate the images by labeling the vehicles.
Roboflow offers three different ways of labelling images: Auto Label, Roboflow Labeling and Manual Labeling.
- Auto Label: Use a large generalized model to automatically label images.
- Roboflow Labeling: Work with a professional team of human labelers. No minimum volumes. No upfront commitments. Bounding Box annotations start at $0.04 and Polygon annotations start at $0.08.
- Manual Labeling: You and your team label your own images.
The following describes the most commonly used method of manual labelling.
Click on the "Manual Labeling" button. Roboflow will load the annotation interface.
Select the "Start Annotating" button. Draw bounding boxes around the vehicle in each image.
Label each bounding box as vehicle.
Use the ">" button to move through your dataset, repeating the annotation process for each image.
Step 4: Review and Edit Annotations
It's essential to ensure annotations are accurate.
Review each image to make sure the bounding boxes are correctly drawn and labeled. If you find any mistakes, select the annotation to adjust the bounding box or change the label.
Step 5: Generating and Exporting the Dataset
Once all images are annotated, go to Annotate and click the Add x images to Dataset button in the top right corner.
Then click the Add Images button at the bottom of the new pop-up window.
Click Generate in the left toolbar and click Continue in the third Preprocessing step.
In the Augmentation step (step 4), select Mosaic, which increases generalization.
In the final Create step, choose the number of generated images sensibly according to Roboflow's boost. In general, the more images you have, the longer the model takes to train; more pictures will not necessarily make the model more accurate, since accuracy depends mainly on the quality of the dataset.
Click on Create to create a version of your dataset. Roboflow will process the images and annotations, creating a versioned dataset. After the dataset is generated, click Export Dataset. Choose the COCO format that matches the requirements of the model you'll be training.
Click on Continue and you'll then get the Raw URL for this model. Keep it; we'll use this link in the model training step a bit later.
Congratulations! You have successfully used Roboflow to upload, annotate, and export a dataset for the vehicle detection model. With your dataset ready, you can proceed to train a machine learning model using platforms like Google Colab.
Step 1. Access the Colab Notebook
You can find Google Colab notebooks for different kinds of models on the SenseCraft Model Assistant's Wiki. If you don't know which one to choose, pick the notebook matching the class of your model (object detection or image classification).
If you are not already signed into your Google account, please sign in to access the full functionalities of Google Colab.
Click on "Connect" to allocate resources for your Colab session.
- Select the panel showing RAM and Disk, then choose "Change runtime type" and select "T4 GPU".
- Run the "Setup SSCMA" cell. You will get a warning; click "Run anyway".
- Wait until the repository is fully cloned and all of the dependencies are installed.
- Once that is finished, run the "Download the pretrain model weights file" cell.
Step 2. Add your Roboflow Dataset
Before running the code blocks step by step, we need to modify the code so that it can use the dataset we prepared. We have to provide a URL that downloads the dataset directly into the Colab filesystem.
To customize this code for your own model link from Roboflow:
1) Replace Gesture_Detection_Swift-YOLO_192 with the desired directory name where you want to store your dataset.
2) Replace the Roboflow dataset URL (https://universe.roboflow.com/ds/xaMM3ZTeWy?key=5bznPZyI0t) with the link to your exported dataset (it's the Raw URL we got in the last step). Make sure to include the key parameter if required for access.
3) Adjust the output filename in the wget command if necessary (-O your_directory/your_filename.zip).
4) Make sure the output directory in the unzip command matches the directory you created, and that the filename matches the one you set in the wget command.
Step 3. Adjustment of model parameters
The next step is to adjust the input parameters of the model. Please jump to the Train a model with SSCMA section and you will see the following code snippet.
This command is used to start the training process of a machine learning model, specifically a YOLO (You Only Look Once) model, using the SSCMA (Seeed Studio SenseCraft Model Assistant) framework.
To customize this command for your own training, you would:
1) Replace configs/swift_yolo/swift_yolo_tiny_1xb16_300e_coco.py with the path to your own configuration file if you have a custom one.
2) Change work_dir to the directory where you want your training outputs to be saved.
3) Update num_classes to match the number of classes in your own dataset. It depends on the number of labels you have; for example, rock, paper, scissors would be three classes.
4) Adjust epochs to the desired number of training epochs for your model. Recommended values are between 50 and 100.
5) Set height and width to match the dimensions of the input images for your model.
6) Change data_root to point to the root directory of your dataset.
7) If you have a different pre-trained model file, update the load_from path accordingly.
Step 4. Export the model
After training, you can export the model to a format suitable for deployment. SSCMA currently supports exporting to ONNX and TensorFlow Lite.
Step 5. Evaluate the model
When you get to the Evaluate the model section, you have the option of executing the Evaluate the TFLite INT8 model code block.
Step 6. Download the exported model file
After the Export the model section, you will get the model files in various formats, which will be stored in the Model Assistant folder by default. Our stored directory is EleTect 1.5.
Select the "ModelAssistant" folder.
In the directory above, the .tflite model files are available for both the XIAO ESP32S3 and the Grove Vision AI V2. For the Grove Vision AI V2, we prefer the vela.tflite files, which are accelerated and have better operator support. Due to the device's limited memory, we also recommend choosing the INT8 model.
After locating the model files, it's essential to promptly download them to your local computer. Google Colab might clear your storage directory if there's prolonged inactivity. With these steps completed, we now have exported model files compatible with Grove Vision AI V2. Next, let's proceed to deploy the model onto the device.
Upload the model to Grove Vision AI V2 via SenseCraft Model Assistant
After selecting Grove Vision AI V2, connect the device and then select Upload Custom AI Model at the bottom of the page.
You will then need to prepare the name of the model, the model file, and the labels. One point worth highlighting is how the label IDs are determined.
If you are using a custom dataset, you can view the different categories and their order on the Health Check page. Enter the labels here in that same order.
Then click Send Model in the bottom right corner. This may take about 3 to 5 minutes. If all goes well, you will see the results of your model in the Model Name and Preview windows above.
Click Deploy and connect your Grove Vision AI V2.
Press Confirm and you are good to go. Now that the vision-based vehicle detection model is trained and deployed, we can connect the signboard to the EleTect node.
🧠 Step 4: Connecting the Signboard to EleTect Node
Now that we have our physical enclosure and the custom LED warning panel ready, it's time to connect the system to the EleTect detection node using LoRa communication.
📡 Communication Architecture
1. EleTect Node (Forest-side)
Detects elephant presence using:
- Grove Vision AI V2 → vision-based elephant detection.
- XIAO ESP32S3 Sense → sound-based detection.
- On detection → sends ELEPHANT_DETECTED message via LoRa to the Signboard Node.
- If VEHICLE_PRESENT is received back → waits 10 minutes → activates the bee sound deterrent via DFPlayer Mini + speaker.
- Sends ELEPHANT_LEFT when elephants leave → resets the system.
2. Signboard Node (Roadside)
- Listens for elephant alerts from the EleTect Node.
- On detection → flashes the warning LED (with elephant symbol) continuously until the elephant leaves.
- Uses Grove Vision AI V2 running a TinyML vehicle detection model to constantly check for vehicles.
- If vehicles are detected while elephants are present → sends VEHICLE_PRESENT message to the EleTect Node.
Outcome
- 🚦 Elephant but no vehicles → Only flashing signboard (no sound, less disturbance).
- 🚦 Elephant + vehicles present → Signboard flashes + EleTect triggers bee sound after 10 minutes.
- ✅ Once elephant leaves → Signboard turns off, deterrent stops, system resets.
Components used on the signboard side:
- XIAO ESP32S3 – For LoRa and LED control
- LoRa-E5 Grove Module – For receiving data from the EleTect node
- 3S3P Li-ion Pack – Custom power solution for the high LED current draw
- MOSFET (e.g., IRF540N or similar) – To drive the LED panel
- 220Ω Resistor – Gate resistor for the MOSFET
- Jumper Wires
Here's a basic sketch to control the LED flashing when elephant presence data is received from the EleTect node.
🧾 Code for the Signboard Node (Receiver):
#include <Arduino.h>

#define LED_PIN 5 // Gate of the MOSFET driving the LED panel
#define LORA_RX 6 // XIAO pin connected to the LoRa-E5 TX
#define LORA_TX 7 // XIAO pin connected to the LoRa-E5 RX

// Second hardware UART of the ESP32S3, used for the LoRa-E5 module.
// The LoRa-E5 is an AT-command modem; this sketch assumes it has been
// pre-configured (e.g., transparent/test mode) so plain text lines pass through.
HardwareSerial loraSerial(1);
// State flags
bool elephantPresent = false;
bool vehiclePresent = false;
unsigned long lastBlink = 0;
bool ledState = false;
void setup() {
pinMode(LED_PIN, OUTPUT);
digitalWrite(LED_PIN, LOW);
Serial.begin(115200); // Debug
loraSerial.begin(9600, SERIAL_8N1, LORA_RX, LORA_TX); // LoRa
Serial.println("Signboard Node Ready");
}
void loop() {
// 1. Listen for LoRa messages
if (loraSerial.available()) {
String msg = loraSerial.readStringUntil('\n');
msg.trim();
Serial.println("LoRa IN: " + msg);
if (msg == "ELEPHANT_DETECTED") {
elephantPresent = true;
} else if (msg == "ELEPHANT_LEFT") {
elephantPresent = false;
vehiclePresent = false;
digitalWrite(LED_PIN, LOW);
}
}
// 2. Read Grove Vision AI V2 output (vehicle detection).
// Simplification: assumes the detection result arrives as a plain text
// label ("vehicle") on the same serial port used for debug output.
if (Serial.available()) {
String visionData = Serial.readStringUntil('\n');
visionData.trim();
if (visionData == "vehicle") {
vehiclePresent = true;
if (elephantPresent) {
loraSerial.println("VEHICLE_PRESENT");
Serial.println("Vehicle present → Sent alert to EleTect Node");
}
} else {
vehiclePresent = false;
}
}
// 3. Flash LED if elephant detected
if (elephantPresent) {
if (millis() - lastBlink > 500) { // Blink every 500ms
ledState = !ledState;
digitalWrite(LED_PIN, ledState ? HIGH : LOW);
lastBlink = millis();
}
}
}
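For completeness, here is a matching sketch for the transmitter side. This is a minimal sketch rather than the full EleTect node firmware: elephantDetected() is a placeholder for the Grove Vision AI V2 / sound-model result, the DFPlayer Mini pins (8/9) are an assumed wiring, and the LoRa-E5 is again assumed to pass plain text lines through.
🧾 Code for the EleTect Node (Transmitter):
#include <Arduino.h>
#include <DFRobotDFPlayerMini.h> // plays the honeybee deterrent sound

#define LORA_RX 6
#define LORA_TX 7
#define DF_RX 8 // assumed wiring for the DFPlayer Mini
#define DF_TX 9

HardwareSerial loraSerial(1); // LoRa-E5 (transparent mode assumed, as above)
HardwareSerial dfSerial(2);   // DFPlayer Mini
DFRobotDFPlayerMini dfPlayer;

bool elephantPresent = false;
bool vehicleReported = false;
unsigned long vehicleReportedAt = 0;
const unsigned long DETER_DELAY_MS = 10UL * 60UL * 1000UL; // 10 minutes

// Placeholder: replace with the actual Grove Vision AI V2 / sound-model check.
bool elephantDetected() { return false; }

void setup() {
  Serial.begin(115200); // Debug
  loraSerial.begin(9600, SERIAL_8N1, LORA_RX, LORA_TX);
  dfSerial.begin(9600, SERIAL_8N1, DF_RX, DF_TX);
  if (dfPlayer.begin(dfSerial)) dfPlayer.volume(25);
}

void loop() {
  bool detected = elephantDetected();

  if (detected && !elephantPresent) { // elephants arrived
    elephantPresent = true;
    loraSerial.println("ELEPHANT_DETECTED");
  } else if (!detected && elephantPresent) { // elephants left → reset
    elephantPresent = false;
    vehicleReported = false;
    loraSerial.println("ELEPHANT_LEFT");
  }

  // Signboard reports approaching vehicles
  if (loraSerial.available()) {
    String msg = loraSerial.readStringUntil('\n');
    msg.trim();
    if (msg == "VEHICLE_PRESENT" && elephantPresent && !vehicleReported) {
      vehicleReported = true;
      vehicleReportedAt = millis();
    }
  }

  // 10 minutes after a vehicle report, if elephants are still present,
  // play the bee sound (track 001 on the DFPlayer's SD card).
  if (vehicleReported && elephantPresent &&
      millis() - vehicleReportedAt > DETER_DELAY_MS) {
    dfPlayer.play(1);
    vehicleReported = false; // avoid replaying on every loop pass
  }
}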
🔋 Powering the System:
The entire signboard is powered by a custom 3S3P battery pack made from Li-ion cells (11.1V nominal, up to 12.6V fully charged), stepped down to 5V with a buck converter for the logic circuitry.
The MOSFET allows the microcontroller to switch the high current LED panel without overloading the XIAO.
🧩 Step 5: 3D Printing and Mounting the LED Panel to a Custom Stand
Stand Construction from Scrap Metal
To reduce costs and promote sustainability, the stand for the EleTect warning signage and solar panel was made entirely from scrap metal. Despite being built from repurposed material, it is sturdy, weather-resistant, and highly visible.
Materials Used
- 1-inch square metal pipe (scrap, reused)
- Welding machine (for joints)
- Cutting tool (angle grinder)
- Anti-rust paint (to protect against corrosion in forest conditions)
- Mounting bolts & brackets (for signage + solar panel)
Design overview:
- Height: ~8 feet (tall enough for clear visibility to drivers).
- Pole: Single vertical square pipe serves as the main post.
- Base: Welded flat support with angled bracing, bolted into the ground for stability.
Top Section:
- Solar panel mounted with a small angled bracket for optimal sunlight.
- Warning signage (“Elephants Ahead”) firmly attached just below the panel.
Cutting the Scrap Pipe:
The scrap pipe was cut into one 8 ft piece (main vertical) and smaller pieces for the base support.
Building the Base:
A flat base with short stabilizers was welded to the bottom so it could be anchored securely into the ground with bolts.
Mounting the Sign & Solar Panel:
- A horizontal bracket at the top holds the solar panel at an angle.
- The triangular signage board is bolted slightly below the solar panel.
Painting & Finishing:
The entire stand was coated with anti-rust paint, ensuring long durability outdoors.
Advantages of this approach:
- Ultra Low Cost: Made from waste scrap material.
- Eco-Friendly: Reuses metal that would otherwise be discarded.
- Durable: Strong enough to withstand wind, rain, and wildlife contact.
- Scalable: The simple design can be replicated in bulk for multiple forest corridors.
Designing the Mount
To ensure stability and long-term deployment in outdoor environments, we designed a custom 3D printed mount that securely holds the LED signboard and aligns it on the custom metal pipe-based stand.
🛠️ Materials Used:
- PLA filament (black)
- 3D printer
- M4 bolts and nuts
- Screws (for mounting the enclosure)
- Steel pipe (for pole mounting)
CAD Modeling:
Using Fusion 360, design a mounting bracket that matches the dimensions of the acrylic signboard enclosure.
Printing:
Print the bracket in PLA.
Assembly:
Once printed, carefully slot the enclosure into the mount and secure it with M4 bolts on either side to lock it in place. The mount is then clamped or screwed onto a metal/PVC pole, completing the physical installation.
🌍 Sustainability and Edge Deployment
- No internet required: Uses LoRa for remote communication
- Fully solar-powered, ideal for deployment in forest areas
- Deterrent only activates when necessary, conserving energy and minimizing disturbance to wildlife
One of the most impactful upgrades we are planning for EleTect is the integration of Google Maps into the system. Currently, EleTect is capable of detecting elephant movement and triggering local deterrents or alerts. However, by incorporating Google Maps, we aim to create a real-time, centralized monitoring system that will drastically improve response times and ensure safer coexistence between humans and elephants.
🔹 How It Will Work:
- Live Location Mapping – Each EleTect unit deployed in the field will send detection data (timestamp, location coordinates, and event details) to a central cloud server; a possible payload format is sketched after this list.
- Google Maps Visualization – The data will be displayed on Google Maps, showing the exact position of elephant sightings or conflicts in real-time.
- Risk Zone Alerts – Areas with frequent detections will be marked as “High Risk” zones, allowing authorities and locals to take precautionary measures.
- Community Access – Farmers, drivers, and forest officials will be able to access a live web/app dashboard powered by Google Maps, ensuring they are instantly informed of nearby elephant movement.
- Historical Data & Prediction – By overlaying historical data on Google Maps, the system can predict elephant routes and hotspots, helping in long-term conflict mitigation planning.
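To make the Live Location Mapping idea concrete, each detection event could be serialized as a small JSON object before upload. The sketch below uses made-up field names and example coordinates near Nelliampathy; none of this is a finalized schema:

#include <cstdio>

int main() {
    // Hypothetical detection event from one EleTect unit.
    char payload[160];
    std::snprintf(payload, sizeof(payload),
        "{\"unit_id\":\"%s\",\"lat\":%.4f,\"lon\":%.4f,"
        "\"event\":\"%s\",\"timestamp\":%lu}",
        "eletect-07", 10.5312, 76.6941, "ELEPHANT_DETECTED", 1736930000UL);
    std::printf("%s\n", payload); // this string would be POSTed to the server
    return 0;
}

Each unit would send such a record over its uplink (for example via a LoRaWAN gateway), and the dashboard would plot the lat/lon pairs on Google Maps.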
🔹 Benefits:
- Prevent Road Accidents – Drivers will receive live alerts on elephant crossings ahead, reducing the risk of collisions (like the unfortunate German tourist incident).
- Safer Agriculture – Farmers can check real-time elephant movement before stepping into their fields at night.
- Better Resource Deployment – Forest officials can allocate patrols and deterrents more effectively.
- Community Awareness – Local communities gain accessible, visual information, fostering safer human-elephant coexistence.
While designed for mitigating human-elephant conflict, the system can also be adapted for other regions and animals, such as kangaroos, deer, or bison, with only minimal modifications required. The overall goal remains the same: to reduce accidents, save human lives, and protect wildlife.
This integration will make EleTect not just a detection and deterrent system, but a smart, location-aware safety network that can save countless lives – both human and elephant.
🌱 Let’s Save Lives — Humans and Elephants Alike
With EleTect 1.5, we bridge the communication gap between the wild and the road. Let’s make our forests safer — not by blocking wildlife, but by understanding and respecting their paths.
📎 Resources