Agriculture is one of the essential components of human civilization: it supplies food and supports the economy. During cultivation, plant leaves and crops are vulnerable to various diseases that halt the growth of the affected plants. Early and precise detection and classification of these diseases can reduce the chance of further damage, yet doing so remains a serious problem. The traditional way farmers predict and classify plant leaf diseases is tedious and error-prone, and identifying disease types manually is often unreliable. Failure to detect and classify plant diseases quickly can destroy crop plants, resulting in a significant decrease in yield. Farmers who use computerized image processing methods in their fields can reduce losses and increase productivity.
Numerous techniques have been adopted for detecting and classifying plant diseases from images of infected leaves or crops, and researchers have made significant progress by exploring a variety of approaches. Machine learning (ML) and deep learning (DL) techniques, particularly convolutional neural networks (CNNs), are often the favored choice for image detection and classification because of their inherent capacity to learn relevant image features automatically and to capture spatial hierarchies.
In this project, I will show how I built a paddy plant disease identification system using the RT-Thread Vision Board and the Edge Impulse machine learning platform. Because the system relies on machine learning, a good dataset is essential for accurate results. I downloaded a pre-prepared dataset from Kaggle (link: https://www.kaggle.com/datasets/jay7080dev/rice-plant-diseases-dataset) that contains rice leaf images for three diseases: bacterial blight, brown spot, and leaf smut. Using this dataset, I trained a machine learning model with Edge Impulse to detect any of these diseases automatically from a leaf image. The model can be deployed on any microcontroller board with a camera.
About the Hardware

The RT-Thread Vision Board comes loaded with hardware well suited to embedded vision development. It is built around a Renesas RA8D1 microcontroller (Arm Cortex-M85) paired with a 5-megapixel camera (OV5640) for capturing high-quality images and video. It also offers ample storage (8MB flash and 32MB SDRAM) to hold image data. The board integrates an additional AT32F425 microcontroller (Arm Cortex-M4) that acts as a DAP-Link debugger, allowing easy programming and debugging via USB. For wireless connectivity, the board includes a separate RW007 Wi-Fi module (Realtek RTL8710BN inside) that handles communication over SPI/UART using simple AT commands. This makes it ideal for projects that require remote data transfer or control. And the best part? The board's schematics are available in the official GitHub repository, making it easy for experienced developers to customize the hardware for their specific projects.
Board layout:
Edge Impulse® is a platform that simplifies the process of creating machine learning models by choosing reasonable defaults for the countless parameters you could set when building an ML model. It provides a simple user interface that not only lets you train an ML model but also inspect the data and test the model.
To train an ML model to classify an image, we need to feed it image data of that object. During training, the model learns through a concept called supervised learning: we train the model on known data and tell it, while it is "practicing" its predictions, whether they are correct or not. This is similar to what happens when a toddler points at a donkey and says "horse", and you tell them it is actually a donkey. The next few times they see a donkey they may still get it wrong, but over time, under your supervision, they will learn to identify a donkey correctly. Conceptually, that is also how our ML model learns.
To train the model, I first created a project in Edge Impulse and uploaded the image data I had downloaded from Kaggle.
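Uploading through the web UI works fine, but the same step can also be scripted against Edge Impulse's ingestion API. Below is a minimal sketch using Python's requests library; the API key value and the local folder name are placeholders you would replace with your own.

```python
import os
import requests

API_KEY = "ei_xxxxxxxx"          # placeholder: your Edge Impulse project API key
LABEL = "bacterial_blight"       # label applied to every file in this batch
FOLDER = "bacterial_blight"      # hypothetical local folder of leaf images

# Edge Impulse ingestion endpoint for training data
URL = "https://ingestion.edgeimpulse.com/api/training/files"

for fname in sorted(os.listdir(FOLDER)):
    if not fname.lower().endswith((".jpg", ".jpeg", ".png")):
        continue
    with open(os.path.join(FOLDER, fname), "rb") as f:
        res = requests.post(
            URL,
            headers={"x-api-key": API_KEY, "x-label": LABEL},
            files={"data": (fname, f, "image/jpeg")},
        )
    print(fname, "->", res.status_code)
```

Run once per class folder with the matching label, or pass the label per file if your folder mixes classes.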
The data was automatically divided into training and test sets. Then I created an impulse as shown in the following screenshot.
Then I trained the model, and the result was quite good.
After training the model, I tested it in Edge Impulse; the test result is shown in the following screenshot.
After getting a satisfactory result, the next step is to build the model for the target platform. If your result is not satisfactory, you can retrain the model after changing the training parameters or the dataset. As my output is okay, I am ready to build the model. I am going to use the OpenMV IDE to program the RT-Thread Vision Board, so I chose the OpenMV library as shown in the following image.
It was built successfully. The library was automatically downloaded as a zip file.
I downloaded and installed the latest version of the OpenMV IDE on my computer. After launching the IDE, I connected the Vision Board to my computer using its USB-OTG port.
The connect icon changed as shown in the above image. After I clicked the USB plug icon, the board connected successfully and the icon changed as follows:
The Hello World program opens by default. I clicked the play button and the program ran on the Vision Board, with the live camera image shown in the box at the top right.
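For reference, the default OpenMV Hello World script looks roughly like this (exact contents may vary slightly between IDE versions): it initializes the camera, grabs frames in a loop, and prints the frame rate.

```python
# OpenMV Hello World (abridged): stream frames and print FPS
import sensor
import time

sensor.reset()                      # reset and initialize the camera sensor
sensor.set_pixformat(sensor.RGB565) # capture in RGB565 color
sensor.set_framesize(sensor.QVGA)   # 320x240 resolution
sensor.skip_frames(time=2000)       # let the camera settle after changes

clock = time.clock()                # used to measure frames per second

while True:
    clock.tick()                    # mark the start of a new frame
    img = sensor.snapshot()         # grab a frame (shown in the IDE's frame buffer)
    print(clock.fps())              # print the current frame rate
```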
So, the Vision Board and the IDE are working perfectly. Now I will upload the machine learning model to the board. As the OpenMV library was downloaded as a zip file, I unzipped it and found three files inside.
I copied the labels.txt and trained.tflite files to the Vision Board, which appeared as a mass storage drive named DAPLink on my computer when connected through the USB-DBG port, as shown in the following image.
Then I opened ei_image_classification.py (the third file from the zip) in the OpenMV IDE and ran the program.
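For readers following along without the export, here is a condensed sketch of what the Edge Impulse OpenMV classification example typically looks like (the generated file may differ between Edge Impulse and OpenMV firmware versions): it loads trained.tflite and labels.txt from the board's storage and prints a confidence score per class for each frame.

```python
# Condensed Edge Impulse image classification example for OpenMV (may vary by version)
import sensor
import time
import tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((240, 240))    # square crop to match the model input
sensor.skip_frames(time=2000)

net = tf.load("trained.tflite")     # the model file copied to the board
labels = [line.rstrip("\n") for line in open("labels.txt")]

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    # classify() runs the model over the image and returns classification results
    for obj in net.classify(img):
        for label, score in zip(labels, obj.output()):
            print("%s = %f" % (label, score))
    print(clock.fps(), "fps")
```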
If it runs successfully, your model is working. Now you can modify the example code to suit your requirements. You can also connect actuators to the Vision Board and drive them according to the inference result, as in the sketch below.
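As an illustration, here is one way the inference loop above could be extended to drive an actuator. The pin name "P0", the relay wiring, and the 0.8 confidence threshold are hypothetical placeholders that depend on your board and setup.

```python
# Hypothetical extension: switch a relay/LED when a disease is detected.
# The pin name "P0" and the 0.8 threshold are placeholders for illustration.
from machine import Pin

alarm = Pin("P0", Pin.OUT)          # hypothetical output pin driving a relay or LED

THRESHOLD = 0.8                     # minimum confidence to act on a prediction

def handle_prediction(labels, scores):
    # Pick the class with the highest confidence
    best_label, best_score = max(zip(labels, scores), key=lambda p: p[1])
    if best_score >= THRESHOLD:
        print("Detected:", best_label, "with confidence", best_score)
        alarm.on()                  # e.g., trigger an indicator or sprayer relay
    else:
        alarm.off()
```

Inside the classification loop, you would call handle_prediction(labels, obj.output()) after each frame.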