The Grove Vision AI Module V2 is a game-changer in the world of microcontroller-based AI vision modules. Powered by the ARM Cortex-M55, it outperforms regular ESP32 CAM-based boards while consuming significantly less power. After extensive testing, we found it to be exceptionally powerful and precise.
Comparison with Xiao ESP32-S3 Sense Board
In our tests, we compared the Grove Vision AI Module V2 with the Xiao ESP32-S3 Sense board; the difference is clear in the comparison video. The Grove Vision AI Module V2 delivers a higher frame rate while maintaining lower power consumption.
Unboxing
The product arrives in standard Seeed Studio packaging. Inside the box, you'll find:
- The Vision AI Module V2
- A connecting wire
- A sticker with a brief introduction to the module
Specifications
The module features the WiseEye2 HX6538 processor, which includes:
- Dual-core ARM Cortex-M55:
  - High-performance core clocked at 400 MHz
  - High-efficiency core clocked at 150 MHz
- ARM Ethos-U55 microNPU (Neural Processing Unit) clocked at 400 MHz
- PUF (Physical Unclonable Function) hardware security
These features enable rapid AI and ML processing, making it ideal for computer vision projects requiring high frame rates and low power consumption.
Memory and Connectivity
- 60MB of onboard flash memory
- PDM microphone
- SD card slot
- External camera connectivity via the CSI port
- Grove connector
- Dedicated pinout for connecting Xiao series microcontroller boards from Seeed Studio
Software Compatibility
The module supports a wide range of AI models and frameworks:
- SenseCraft AI models, including MobileNet V1/V2, EfficientNet-Lite, and YOLO v5/v8
- TensorFlow and PyTorch frameworks
It is compatible with popular development platforms like Arduino, Raspberry Pi, and ESP dev boards, making it versatile for further development.
Applications
Our tests confirmed that the Grove Vision AI Module V2 is suitable for a variety of applications, including:
- Industrial Automation: Quality inspection, predictive maintenance, voice control
- Smart Cities: Device monitoring, energy management
- Transportation: Status monitoring, location tracking
- Smart Agriculture: Environmental monitoring
- Mobile IoT Devices: Wearable and handheld devices
After rigorous testing, we can confidently say that the Grove Vision AI Module V2 delivers unmatched AI processing capabilities, flexible model support, a wealth of peripheral options, high compatibility, and an entirely open-source environment. Its low power consumption and high performance make it a great option for a wide variety of AI and computer vision applications.
Hardware Overview
Refer to the article 2024 MCU AI Vision Boards: Performance Comparison to see how powerful the Grove Vision AI (V2) is compared to the Seeed Studio Grove - Vision AI Module, Espressif ESP-EYE, XIAO ESP32S3, and Arduino Nicla Vision. Do check it out.
Connecting to a CSI interface camera
Once you have the Grove Vision AI V2 and a camera ready to go, you can connect them via the CSI connection cable. When connecting, pay attention to the orientation of the row of pins and don't plug the cable in backwards.
Boot / Reset / Driver
Boot
If something unusual has left the Grove Vision AI unable to work properly at the software level, you may need to put the device into BootLoader mode to revive it. Here is how to enter BootLoader mode.
Method 1
Disconnect the cable between the Grove Vision AI and your computer, then press and hold the Boot button. While holding it, connect the Grove Vision AI to your computer with a USB Type-C data cable, then release the button. The device will now be in BootLoader mode.
Method 2
With the Grove Vision AI connected to your computer, you can enter BootLoader mode by pressing the Boot button and then quickly pressing the Reset button.
Reset
If you're experiencing problems with device data suddenly not uploading or images getting stuck, you can try restarting your device using the Reset button.
Driver
If the Grove Vision AI V2 is not recognised after connecting it to your computer, you may need to install the CH343 driver. Here are some links to download and install the CH343 driver:
Windows Vendor VCP Driver One-Click Installer: CH343SER.EXE
Windows Vendor VCP Driver: CH343SER.ZIP
Windows CDC driver one-click installer: CH343CDC.EXE
Windows CDC driver: CH343CDC.ZIP
macOS Vendor VCP Driver: CH34xSER_MAC.ZIP
Below is a block diagram of the Grove Vision AI (V2) system, including a camera and a master controller.
Edge Impulse is a platform for developing machine learning models specifically designed for edge devices and embedded systems. It provides a comprehensive set of tools and services that enable developers to quickly create, train, and deploy machine learning models without requiring deep expertise in machine learning.
In this tutorial, we will cover two chapters focused on developing vision and audio TinyML models with the Seeed Studio Grove Vision AI V2 using Edge Impulse.
CHAPTER 1: VISION-BASED TINYML MODEL IN EDGE IMPULSE
Create a new project in Edge Impulse
1) Start by creating an account on the Edge Impulse platform.
2) Click on the Create new project option.
3) Enter a name for your new project.
4) Finish by clicking Create new project.
You have now successfully created a new project in Edge Impulse.
Installing dependencies
To set this board up in Edge Impulse, you will need to install the following software:
1. The Edge Impulse CLI.
Note: Make sure you have CLI tools version 1.27.1 or later. You can check with:
edge-impulse-daemon --version
2. On Linux, please install screen:
sudo apt install screen
In my case, I had an old version installed.
To update the CLI tools to the required version, open a command prompt or terminal and run:
npm install -g edge-impulse-cli --force
Now let's check the version once more to confirm it has been updated to the required version.
You should now have the tools available in your PATH.
Once the software is ready, you can connect the board to Edge Impulse.
1. Update Edge Impulse Firmware
The board doesn't come with the appropriate Edge Impulse firmware out of the box. To update it:
1. Download and unzip the latest Edge Impulse firmware.
2. Use a USB Type-C cable to connect the board to your PC, Mac, or Linux machine.
3. Inside the extracted firmware folder, you'll find scripts to flash your device:
For macOS:
./flash_mac.command
For Windows:
"C:.\flash_windows.bat"
For Linux:
./flash_linux.sh
4. The flashing process will ask you to choose the serial port for your device, and the script will handle the firmware update.
Press the Reset button when requested.
Now that you have updated the firmware of the Grove Vision AI V2, press any key to continue.
If the script asks you to press the "reset" (RST) button but doesn't continue, it might be because your himax-flash-tool is outdated. In that case, you'll need to update the tool on your host system as described in the previous steps.
2. Configuring Keys
Open a command prompt or terminal and run the following command:
edge-impulse-daemon
This will launch a wizard that prompts you to log in and select an Edge Impulse project. If you need to switch projects, you can re-run the command with the --clean option.
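For example:
edge-impulse-daemon --clean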
You will be asked to enter your Edge Impulse login credentials. After logging in, you will see an option to select the project you want to connect the device to; select the project we created in the first steps.
You will be redirected to your project dashboard.
Now let's collect the dataset needed to train the custom model. Go to the Data acquisition tab on the left side of the window.
In the Collect data tab, select the Device and the Sensor we are going to use to collect the data, and give the data a proper Label.
As an example, I have selected the Seeed Studio Wio Terminal as the object to train on. Click on Start sampling to collect the images, and repeat the procedure until you have the required number of images.
Now change the label and start collecting the dataset for the next object. Here I am using the Seeed Studio SenseCAP Watcher.
Repeat the procedure until you have the required dataset and image classes.
Now that we have successfully collected the dataset, let's move on to the image annotation part.
There are two ways to perform AI-assisted labeling in Edge Impulse Studio (free version):
- Using YOLOv5
- Tracking objects between frames
Edge Impulse launched an auto-labeling feature for Enterprise customers, easing labeling tasks in object detection projects.
Here I am using the "track objects between frames" option.
For the next object, enter the new label and continue annotating.
Continue with this process until the queue is empty. At the end, all images should have the objects labeled.
Next, go to the top of the Data acquisition tab to check the dataset split.
Here you can see that there is no data in the test split; all of it has been loaded into the training set.
To rebalance the dataset, we will use the Perform train/test split option and split it 80/20.
Type "perform split" to confirm.
Now data has been loaded into the test split as well, but it is not yet an exact 80/20 split.
We can examine which data is not split accurately and move it to the test or train split accordingly. Here we can see that the Watcher data has not been loaded into the test split.
Click on the three dots to move the data to the desired set. Here we are moving some of the Watcher data to the test set.
Now the dataset is split properly between the train and test sets.
Impulse Design
An impulse takes raw data (in this case, images), extracts features (resizes the pictures), and then uses a learning block to classify new data.
In this phase, you should define how to:
- Pre-process the data: resize the individual images from 320 x 240 to 192 x 192 and squash them (square form, without cropping), then convert them from RGB to Grayscale (see the sketch after this list).
- Design a model: in this case, "Object Detection."
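For intuition, here is a rough offline equivalent of that pre-processing step. It is only an illustration (the Studio's Image block does all of this for you), and sample.jpg is a hypothetical 320 x 240 capture:

```python
# Illustrative stand-in for the Image processing block (not Edge Impulse code)
from PIL import Image

img = Image.open("sample.jpg")   # hypothetical 320 x 240 capture
img = img.resize((192, 192))     # "squash" to a square, without cropping
img = img.convert("L")           # convert RGB to grayscale
img.save("sample_processed.png")
```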
Now add the appropriate processing and learning blocks: the Image block as the processing block and the Object Detection block as the learning block.
Click on "Save Impulse"; you have now successfully stored the impulse.
Pre-processing (Feature generation)
Besides resizing the images, we can convert them to grayscale or keep the actual RGB color depth. Let's select Grayscale. Doing that, each of our data samples will have 9,216 features (96 x 96 x 1). Keeping RGB, this dimension would be three times bigger. Working with grayscale helps reduce the amount of memory needed for inference.
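As a quick sanity check of those numbers (the feature count here corresponds to a 96 x 96 input):

```python
# Feature count per sample is width x height x channels
width, height = 96, 96
grayscale_features = width * height * 1   # 9,216 features
rgb_features = width * height * 3         # 27,648 features, three times bigger
print(grayscale_features, rgb_features)
```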
Do not forget to click Save parameters. This will generate the features to be used in training.
The Studio moves automatically to the next section, Generate features, where all samples will be pre-processed.
Click on "Generate features".
The feature explorer shows that all samples have a good separation after the feature generation.
Model Design, Training, and Test
Now go to the Object detection tab.
Regarding the training hyper-parameters, the model will be trained with:
- Epochs: 60
- Learning Rate: 0.001
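For intuition only, a rough Keras equivalent of those settings might look like the sketch below. This is a hypothetical stand-in (Edge Impulse generates and runs its own training code); the toy model and random data exist purely to make the snippet self-contained:

```python
import numpy as np
import tensorflow as tf

EPOCHS = 60
LEARNING_RATE = 0.001

# Toy stand-in model; the real network is the object-detection model
# that Edge Impulse builds for you.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# Random placeholder data so the snippet runs end to end
x = np.random.rand(8, 96, 96, 1).astype("float32")
y = tf.keras.utils.to_categorical(np.random.randint(0, 2, 8), 2)
model.fit(x, y, epochs=EPOCHS, verbose=0)
```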
Click on Train.
Once the model training is complete, you can see the confusion matrix.
Now we can move to the deployment part. Search for Seeed Grove Vision AI Module V2 in the deployment options.
In Model Optimization, select TensorFlow Lite and Quantized (int8).
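If you are curious what "Quantized (int8)" does under the hood, the sketch below shows full-integer quantization with the TensorFlow Lite converter. It is only illustrative (Edge Impulse performs the conversion for you when you click Build); the model and calibration data here are stand-ins:

```python
import numpy as np
import tensorflow as tf

# Stand-in model purely so the snippet is self-contained
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

def representative_data():
    # Calibration samples the converter uses to pick int8 scales
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```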
Click on Build to generate the model file.
Now extract the downloaded file.
Repeat the same flashing procedure from the earlier steps to burn this new firmware to the Grove Vision AI V2.
Now that everything is finished, let's take a look at the results. Run the command:
edge-impulse-run-impulse --debug
You will get a URL; copy and paste it into your browser to see the live classification on a web page.
Congratulations! You've successfully created a vision-based TinyML model for the Grove Vision AI V2 using Edge Impulse.
CHAPTER 2: AUDIO-BASED TINYML MODEL IN EDGE IMPULSE
Create a new project in Edge Impulse
1) Click on the Create new project option.
2) Enter a name for your new project.
3) Finish by clicking Create new project.
You have now successfully created a new project in Edge Impulse.
2. Configuring Keys
Open a command prompt or terminal and run the following command:
edge-impulse-daemon
This will launch a wizard that prompts you to log in and select an Edge Impulse project. If you need to switch projects, you can re-run the command with the --clean option.
You will be asked to enter your Edge Impulse login credentials. After logging in, select the project we created above to connect the device to.
You will be redirected to your project dashboard.
Now let's collect the dataset needed to train the custom model. Go to the Data acquisition tab on the left side of the window.
In the Collect data tab, select the Device and the Sensor (Microphone) we are going to use to collect the data, and give the data a proper Label.
As an example, I will train two keywords, "HI" and "HELLO". Click on Start sampling to collect the audio data, and repeat the procedure until you have the required number of samples.
Now change the label and start collecting the dataset for the next keyword, "HELLO".
We also need to collect the background noise; label it "NOICE" and collect the data.
All samples in the dataset should be 1 s long, but the samples recorded in the previous section are 10 s long and must be split into 1 s samples to be compatible. Click on the three dots next to a sample name and select Split sample.
Once inside the tool, split the data into 1-second records. If necessary, add or remove segments. Repeat this procedure for all samples.
Now that we have finished collecting the data, make sure the train and test sets are balanced; 80/20 is the recommended ratio. Perform the train/test split as before.
Go to Impulse design.
An impulse takes raw data, uses signal processing to extract features, and then uses a learning block to classify new data. First, we will take the data points with a 1-second window, augmenting the data by sliding that window every 500 ms. Note that the zero-pad data option is set. This is important because it fills samples shorter than 1 second with zeros (in some cases, I reduced the 1000 ms window in the split tool to avoid noise and spikes).
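To make the windowing concrete, here is a small sketch of that idea; the 16 kHz sample rate is an assumption for illustration:

```python
import numpy as np

SAMPLE_RATE = 16_000          # assumed capture rate for this sketch
WINDOW = SAMPLE_RATE          # 1000 ms window
STRIDE = SAMPLE_RATE // 2     # slide the window every 500 ms

def windows(audio):
    # Zero-pad samples shorter than one window, as the zero-pad option does
    if len(audio) < WINDOW:
        audio = np.pad(audio, (0, WINDOW - len(audio)))
    for start in range(0, len(audio) - WINDOW + 1, STRIDE):
        yield audio[start:start + WINDOW]

clip = np.random.randn(SAMPLE_RATE * 2)    # fake 2-second clip
print(sum(1 for _ in windows(clip)))       # -> 3 overlapping windows
```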
The next step is to create the images to be trained in the next phase. We can keep the default parameter values or take advantage of the DSP Auto tune parameters option, which we will do.
Click on "Generate features"
We will use a Convolutional Neural Network (CNN) model. The basic architecture is two blocks of Conv1D + MaxPooling (with 8 and 16 neurons, respectively) and a 0.25 dropout; on the last layer, after flattening, there are four neurons, one for each class.
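Here is a minimal Keras sketch of that architecture. The input shape is a placeholder (the real one comes from the DSP block in Edge Impulse), and the class count follows the description above:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # one output neuron per class, as described above

model = models.Sequential([
    layers.Input(shape=(49, 13)),   # hypothetical (frames, coefficients)
    layers.Conv1D(8, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(16, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.summary()
```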
Click on "save & train" to train the model.
Now we can move to the deployment part. Search for Seeed Grove Vision AI Module V2 in the deployment options.
In Model Optimization, select TensorFlow Lite and Quantized (int8), as in the vision chapter.
Click on Build to generate the model file.
Now extract the downloaded file.
Repeat the same flashing procedure from the earlier steps to burn this new firmware to the Grove Vision AI V2.
Now that everything is finished, let's take a look at the results. Run the command:
edge-impulse-run-impulse
Now you can see the inference results in the terminal window.
Great job! You've successfully created an audio-based TinyML model for the Grove Vision AI V2 using Edge Impulse.