Last summer I got free-trial access to the Imagimob machine learning platform, and they also provided me with an Acconeer radar sensor as a giveaway.
In this article I will describe how that sensor works, how to use the Imagimob platform to create and deploy neural network (NN) models, and what I did with all of this for my evaluation project.
Imagimob has built its platform as a service that helps you collect data from any sensor, label it in a very convenient way, build a signal processing pipeline and an NN model, and translate the result into C code for deployment on a microcontroller.
The Acconeer radar sensor is a single chip with the antenna embedded in the package; it emits and receives 60 GHz radio waves in pulsed mode. The sensor is aimed at near-field sensing applications: it provides a good IQ signal, has tiny dimensions and consumes next to nothing (current in the microamp range).
For my evaluation project I wanted to build a system that would help turn my apartment into a smarter one. The radar sensor was installed on top of the entrance door, and I wanted it to tell me when: the door is open, the door is closed, I am near the door, or Mijin is near the door.
I set up the radar sensor to send envelope data. In this configuration the sensor returns a vector in which the position of each element corresponds to the distance from the sensor, and the value of each element corresponds to the power of the signal reflected from that distance.
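To make the envelope format concrete, here is a minimal sketch (the range limits, number of bins and the data are made up, and this is not the Acconeer API, just an illustration of how an element index maps to a distance):

```python
import numpy as np

# Hypothetical sweep configuration (the real values depend on the sensor setup)
RANGE_START_M = 0.2   # start of the measured range, metres
RANGE_END_M = 2.0     # end of the measured range, metres
NUM_BINS = 400        # number of distance bins in one envelope sweep

def bin_to_distance(i):
    """Map an element index in the envelope vector to a distance in metres."""
    return RANGE_START_M + i * (RANGE_END_M - RANGE_START_M) / (NUM_BINS - 1)

# One simulated envelope sweep: reflected power per distance bin
sweep = np.random.rand(NUM_BINS)

# The strongest reflection tells us where the dominant reflector (e.g. the door) is
peak_bin = int(np.argmax(sweep))
print(f"Strongest reflection at ~{bin_to_distance(peak_bin):.2f} m "
      f"(amplitude {sweep[peak_bin]:.2f})")
```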
Even though the entrance door is made of metal, it didn't reflect enough signal back to the sensor at that installation angle, so I attached a small corner reflector made of aluminum foil.
I recorded data representing four conditions: the door open, the door closed, me standing under the sensor, and Mijin standing under the sensor. I collected several files of each condition for training, validation and testing of the future NN models.
The Imagimob utils script collects raw sensor data and simultaneously records video of the event. In the Studio the raw data and the recorded video are shown together, and you can explore what you have recorded by scrubbing back and forth.
When all the data is labeled, the Studio checks the files and labels for mistakes and shows how evenly the labels are distributed among the source files, both in quantity and in duration.
In the next step you can prepare the raw data so that it better fits the future NN models: you can reshape it, apply filtering, windowing or any other DSP function (either ready to use in the Studio or from your own custom Python scripts).
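As one hedged example of what a custom preprocessing script could do (the function name, signature and window length here are my own assumptions, not the exact interface the Studio expects), a simple moving average over consecutive sweeps can suppress noise before training:

```python
import numpy as np

def preprocess(sweeps, window=5):
    """Smooth envelope sweeps with a simple moving average over time.

    sweeps: 2-D array of shape (num_sweeps, num_bins), one envelope per row.
    Returns an array of the same shape, averaged along the time axis, which
    can make the classes easier for the NN to separate.
    """
    sweeps = np.asarray(sweeps, dtype=float)
    out = np.empty_like(sweeps)
    for i in range(len(sweeps)):
        start = max(0, i - window + 1)
        out[i] = sweeps[start:i + 1].mean(axis=0)
    return out

# Example: smooth 100 simulated sweeps of 400 bins each
smoothed = preprocess(np.random.rand(100, 400))
```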
That's it, all the preparation is done. After that the Studio proposes several NN models and a few parameters to adjust. Once you are happy with them, the Studio sends the data to the cloud and the training process begins.
When training is complete, you can see the accuracy and other metrics of each NN model on the training, validation and test datasets.
If you are not satisfied with the result, you can go back to the previous steps. But if you like the scores, you can save the best model and evaluate it on hardware in real time using one of the Imagimob utils.
As soon as the result is confirmed, the model can be translated into C code for deployment on a microcontroller.
You can see the results of my evaluation project in the video:
Using a subsequence of recognized states, simple post-processing can transform them into more sophisticated events. For example, [door_close -> door_open -> Vladi -> door_close] can be transformed into "Vladi has come home", or [door_close -> Vladi -> door_open -> door_close] into "Vladi went out", which would be more useful for smart home applications.

I stopped at this point though. I had fun playing with these tools, and I was surprised how quickly a PoC project with an NN on the edge can be done: it took me a couple of hours to collect the data and another couple of hours to label it, build the model and evaluate the result.
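As a quick illustration of the post-processing idea above, here is a minimal sketch of such a sequence-to-event mapping (the state labels and event texts mirror my examples; the rules and window length are assumptions, not something I actually built):

```python
# Map the last few recognized states to a higher-level smart-home event.
# Rules and window length are illustrative assumptions only.
EVENTS = {
    ("door_close", "door_open", "Vladi", "door_close"): "Vladi has come home",
    ("door_close", "Vladi", "door_open", "door_close"): "Vladi went out",
}

def detect_event(recent_states):
    """Return an event for the last four recognized states, or None."""
    return EVENTS.get(tuple(recent_states[-4:]))

print(detect_event(["door_close", "door_open", "Vladi", "door_close"]))
# -> Vladi has come home
```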
Thanks to Imagimob and Acconeer for this chance to try out their technologies.
And thanks for reading!