Welcome to this project! Let's dive in and explore it by reading through this documentation.
Hangman is one of the most popular word games out there, and I am quite sure most of you have come across it. It is usually played by ages 6 and up, and students are often introduced to it to make vocabulary learning fun and effective. That said, the game is enjoyed by people of all ages, not just young players and students.
Due to the controversy around the Hangman imagery, which some find offensive, kid-friendly alternatives like the Snowman game were introduced as well.
This project is based on the Hangman game, but the number of attempts left will be denoted by hearts representing the remaining lives instead of the stick figure.
I used ChatGPT to help me generate the background image and the word lists for this game.
Hangman is a word-guessing game played by two or more players. It starts with one player (the "puzzle setter") thinking of a word and drawing dashes, each representing a letter of the secret word, while the other player (the "guesser") takes turns guessing letters.
The puzzle setter gives a hint for the word, or sometimes the category it belongs to; for example, the hint is "a fruit" if the word is "apple". If the guesser makes a mistake, a life is lost, and they win if they manage to guess the word before running out of lives.
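The rules above boil down to a small piece of game logic. Here is a minimal Python sketch of handling a single guess; the function and variable names are my own illustration, not the project's actual code.

```python
# Sketch of one Hangman guess: reveal matching letters, or lose a life.
def apply_guess(secret, revealed, lives, letter):
    """Return the updated revealed letters and remaining lives."""
    if letter in secret:
        # Reveal every position where the guessed letter occurs.
        revealed = [c if c == letter else r for c, r in zip(secret, revealed)]
    else:
        lives -= 1  # wrong guess costs a life
    return revealed, lives

secret = "apple"
revealed, lives = apply_guess(secret, ["_"] * len(secret), 6, "p")
print("".join(revealed), lives)  # _pp__ 6
```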
What is the role of the M5Stack Core2 in this project?
For this project, I will be using the M5Stack Core2 to input the letters, while the computer takes up the role of the puzzle setter. The M5Stack Core2 is deployed with a TinyML model that classifies handwritten English alphabet strokes as the corresponding letters and then "communicates" with the Python application (the game) via the serial port to input the letters as we guess each letter of the secret word.
I was not able to train the model with all letters of the English alphabet yet, but I am planning to collect more data for each letter and complete the model. For now, the words of this friendly Hangman game will avoid the following letters:
d, e, h, j, k, n, q, t, u, x, z
The TinyML model that I have used in this project is actually from one of my previous TinyML projects, which recognizes handwritten letters and displays colors that begin with the corresponding letters.
Data Collection: Preparing the Training and Test Datasets
I used my M5Stack to collect the data and prepare the training and test datasets. I decided to assign 75% of the data to the training dataset and the remaining 25% to the test dataset.
The target variable of my training dataset will be the 'Label' variable.
I collected 400 samples for each letter: 300 for the training dataset and 100 for the test dataset. There are 255 feature variables that contain the pixel values of the respective pixels. I verified the approximate number of pixels that had significant pixel values as I drew my letters and decided to use 255 feature variables.
The screen resolution of the M5Stack is 320 x 240 pixels. To calculate the pixel location, I used the following formula:
x = i * 320 + j
The factor by which you multiply the pixel's row number is simply the width of your touchscreen. To store the pixel values, I first tried the list method: I declared a list and appended the pixel values within a for loop (with 255 iterations), but it was not successful, as I kept getting an error saying that the pixel location did not have an appropriate data type. To save time, I switched to the buffer method instead, where I allocated memory up front to store the pixel values.
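The buffer idea can be illustrated in a few lines of Python. This is only a sketch of the indexing scheme, not the on-device code; the names `frame` and `set_pixel` are my own.

```python
# Pre-allocated buffer approach: one byte per pixel, flat layout.
# With a 320-pixel-wide screen, pixel (row i, column j) lives at
# offset i * 320 + j, matching the formula above.
WIDTH, HEIGHT = 320, 240
frame = bytearray(WIDTH * HEIGHT)  # allocated once, up front

def set_pixel(i, j, value):
    """Store a pixel value at row i, column j."""
    frame[i * WIDTH + j] = value

set_pixel(10, 25, 255)
print(frame[10 * 320 + 25])  # 255
```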
Each person will have a different way to write a letter, so I made sure to draw out each letter in all possible ways and collect sufficient samples for each way.
Neuton TinyML requires the datasets in CSV format, so I prepared my training and test datasets as CSV files. Both your training and test datasets will have to meet some other requirements as well, but don't worry, you can always view them in the Support Library on the platform.
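As an illustration of the dataset shape, here is a hedged sketch of writing samples to CSV with a 'Label' column plus 255 pixel-value features, split 75%/25%. The file names and the randomly generated placeholder data are my own, not from the project's actual collection script.

```python
# Sketch: build train/test CSVs in the Label + 255-feature layout.
import csv
import random

NUM_FEATURES = 255
header = ["Label"] + [f"px{i}" for i in range(NUM_FEATURES)]

# Placeholder samples standing in for real touchscreen captures.
samples = [["a"] + [random.randint(0, 255) for _ in range(NUM_FEATURES)]
           for _ in range(400)]

random.shuffle(samples)
split = int(len(samples) * 0.75)  # 75% train, 25% test
for name, rows in [("train.csv", samples[:split]),
                   ("test.csv", samples[split:])]:
    with open(name, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)
```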
The code for data collection is available in the Code section below.
The next step is model training.
Training the Model
Visit Neuton TinyML's web page (neuton.ai) and click on ‘Get Started’. Click on the ‘Start for Free’ button and you will be redirected to the welcome page, where you can sign in using your Google account and get started. Set up your GCP account and you will receive free credits to upload your own data and train your models. Subscribe to Neuton's Zero Gravity Plan and you are good to go!
Click on 'Add New Solution' and you will see something like this:
Once you are done, click ‘Next’ and you will be required to upload your training dataset. The dataset will be validated and if it meets all requirements, it will show a green tick and allow you to continue. You should not have duplicate rows or any missing values.
Click ‘OK’ and proceed to the next step. Choose your target variable, which is the 'Label' variable; if you want to eliminate any other variables, you can do that here as well.
The next step will require you to specify the task type, the metric, and TinyML model settings. The platform can identify the target metric and task type itself but I will explain why I used the Classification task type.
This model should be able to classify the given input as a letter of the alphabet within a-z, and this is supervised machine learning, as we are training the model with target and feature variables.
This is classification since we are not predicting a continuous dependent variable from independent variables, like predicting yearly income from the number of hours worked per week. There are two types of classification: binary and multi-class. Binary classification assigns the input to one of two classes, but in this project we are classifying the input into one of twenty-six classes, so the task type is Multi Classification in this case.
The target metric is Accuracy, and you will see why the platform chose it once your model's training is complete. The target metric evaluates the model's predictions on the validation dataset and represents the model quality.
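For reference, accuracy is simply the fraction of correct predictions on the validation set. A quick illustration (the function and data here are my own, not from the platform):

```python
# Accuracy = correct predictions / total predictions.
def accuracy(y_true, y_pred):
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Three of four predictions match the true labels.
print(accuracy(["a", "b", "c", "a"], ["a", "b", "c", "b"]))  # 0.75
```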
If you want to create tiny models for microcontrollers, enable the TinyML mode using the slider and set the model settings.
The input data type is FLOAT32 and the normalization type is 'Unified scale for all features'. Choose this normalization type if the data from your feature variables are within the same range; doing so will also reduce the time required for training. Enable float datatype support and select 8 bits as the bit depth for calculations. Once you are done, click 'Start training' and the training process will begin.
You can view the quality of your model, its accuracy, and other analytics once your training is complete.
Training results
My model had an accuracy of 97.2% and a model quality index of 97%. I am satisfied with the results!
I enabled prediction to see how well my model performed. For this, I used my test dataset.
The results were better than expected and I felt quite confident about my TinyML model. I downloaded the C library and got ready to deploy it on my M5Stack.
Embedding the Neuton Model on M5Stack
Create an Arduino sketch file to deploy your model. After downloading the C library, extract the zipped folder and copy the contents into the folder containing your sketch file. Read the README text file within the downloaded content to learn how to embed your model.
According to the README file, the two main functions are:
- neuton_model_set_inputs - to set input values
- neuton_model_run_inference - to make predictions
You will need to make an array with the model inputs. In my case, I have used a buffer, as my input data type was not suitable for an array. Please make sure that the input count and order are the same as in the training dataset. Pass this to the neuton_model_set_inputs function. The function will return 0 when the buffer is full, which indicates that the model is ready for prediction.
You should call the neuton_model_run_inference function with two arguments when your buffer is ready. These two arguments are:
- pointer to the index of the predicted class
- pointer to the neural net outputs
As you can see in the code below, 0 is returned by the neuton_model_run_inference function when the prediction is successful.
if (neuton_model_set_inputs(inputs) == 0)
{
    uint16_t index;
    float* outputs;
    if (neuton_model_run_inference(&index, &outputs) == 0)
    {
        // code for handling prediction result
    }
}
After a successful prediction, classification takes place and the inference results are mapped to your classes. Note that the inference results are encoded (0..n). Use the dictionaries binary_target_dict_csv.csv / multi_target_dict_csv.csv for the mapping process.
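The mapping step can be sketched in Python. I am assuming a two-column layout (encoded index, original label) for the dictionary CSV here; check your downloaded multi_target_dict_csv.csv for its exact format, as it may differ.

```python
# Sketch: map Neuton's encoded class index back to the original label,
# assuming a two-column (index, label) dictionary CSV.
import csv
import io

def load_label_map(csv_text):
    """Build {encoded index -> original label} from dictionary CSV text."""
    return {int(i): label for i, label in csv.reader(io.StringIO(csv_text))}

sample = "0,a\n1,b\n2,c\n"   # stand-in for the real dictionary file
labels = load_label_map(sample)
print(labels[1])  # b
```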
I have uploaded the complete source code in the Code section for your convenience.
Python application
Now we have a successful model that can recognize handwritten letters of the English alphabet. Right now, the TinyML model displays the classification output on the Serial Monitor. The next step is to develop the friendly Hangman game and help our M5Stack "communicate" with the Python application.
You will need the following libraries for this project:
- Tkinter - the standard GUI library for Python
- Serial (pyserial) - allows access to the serial port
- Time - used in this project to set delay periods
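To give an idea of the serial side, here is a minimal sketch of how the game could read a guessed letter sent by the M5Stack. The port name, baud rate, and function names are assumptions on my part; match them to your own setup and the Arduino sketch.

```python
# Sketch: receive a single-letter guess from the M5Stack over serial.
def parse_guess(raw: bytes):
    """Validate a raw serial line as a single lowercase letter."""
    text = raw.decode("ascii", errors="ignore").strip().lower()
    return text if len(text) == 1 and text.isalpha() else None

def read_guess(port="COM3", baud=115200, timeout=5):
    # pyserial imported here so parse_guess stays usable without hardware;
    # the port and baud values are placeholders for your setup.
    import serial
    with serial.Serial(port, baud, timeout=timeout) as ser:
        return parse_guess(ser.readline())
```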
The code can be found in the Code section below for your convenience.
I will give you a brief overview of this Python application and its features.
You will first be welcomed by a menu screen where you will be able to see the scoreboard as well as the Play Game button.
When you press the button, you will be redirected to another screen where you will be prompted to choose a difficulty level to begin the game. The game modes are as follows:
- Easy - Common kid-friendly words (3 letters)
- Medium - Common words but slightly longer (5-7 letters)
- Hard - Uncommon words with trickier spelling
- Expert - Rare, abstract or technical words (words longer than 8 letters)
- Random - You can ask the game to surprise you with a random difficulty mode
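Word selection for each mode also has to skip the letters the model was not trained on (listed earlier). Here is a hedged sketch of that filtering; the word lists below are placeholders of my own, not the game's actual ChatGPT-generated lists.

```python
# Sketch: pick a secret word for a difficulty level, avoiding
# letters the TinyML model cannot yet recognize.
import random

UNTRAINED = set("dehjknqtuxz")  # letters the model was not trained on

WORDS = {  # placeholder word lists for illustration
    "Easy": ["cow", "map", "owl"],
    "Medium": ["mirror", "spiral"],
}

def pick_word(level):
    """Choose a random word whose letters are all recognizable."""
    pool = [w for w in WORDS[level] if not set(w) & UNTRAINED]
    return random.choice(pool)

print(pick_word("Easy"))
```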
Once you choose the difficulty mode, the screen shifts to the game screen, where the number of lives is displayed on top as red hearts. Just under them, you will see a string of dashes, each of which denotes a letter of the secret word. The hint is displayed below the dashes, and at the bottom you will see the Guess button, which should be pressed before you can input your guess using the M5Stack Core2.
The Scoreboard will be updated after every win and loss.
This is the current preview of the application, but I will be developing it further with more features in the future.
Conclusion
TinyML models have a range of applications, and this project is one of them. I am satisfied with the resulting model, as it works great on the device, and I had a nice experience recreating this interactive word game. I hope you liked my tutorial and found it helpful. I'm always open to suggestions, so please feel free to share your feedback below.
Future Work
I would like to train my TinyML model with more data for each of the 26 letters of the English alphabet and develop this game further with a multiplayer mode as well. I am currently working on the next stages of this project, and a second, improved version will be published soon.