I’ve tried to extend the micro_speech application by training a new model, as the TF Lite docs suggest, using Google Colab to generate a new model.tflite and model.cc for microcontrollers.
Google Colab Link for training: Google Colab
The micro_speech example works with two models: a preprocessor model and the model that performs the detection. As far as I understand, if I want to change which words are recognized (“yes” and “no” in the provided example), I need to train a new model (I did, for “on” and “off”) and replace the .cpp and .h files with the quantized model.
I generated those .cpp and .h files from the model.cc byte array of the quantized model obtained in training (I tried both copying the raw bytes over my current model and using generate_cc_arrays.py to generate the C++ array). I then updated kCategoryLabels in micro_model_settings.h, replacing “yes” and “no” with “on” and “off”, and left kCategoryCount at 4.
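For reference, this is a minimal sketch of what I mean by that settings change. The names and layout follow the upstream micro_speech example; the “on”/“off” labels are my edit, and I’m assuming the first two categories stay fixed:

```cpp
// Sketch of the edited micro_model_settings, following the layout of the
// upstream micro_speech example. Only the two keyword labels change, so
// kCategoryCount stays at 4.
constexpr int kCategoryCount = 4;

const char* kCategoryLabels[kCategoryCount] = {
    "silence",  // fixed category in the example
    "unknown",  // fixed category in the example
    "on",       // was "yes"
    "off",      // was "no"
};
```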
I’m getting this error when running the same code with the new model array and its size, plus the changes in micro_model_settings.h:
Feature generation failed Requested feature_data_ size -268435456 doesn’t match 1960
To fill the .cpp array I tried two routes: the generate_cc_arrays.py output (feeding the new quantized model.tflite into that script and taking the resulting array), and the model.cc C byte array produced in training via xxd.
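In case it helps to verify what I generated: either route should produce a byte array that is byte-identical to the .tflite file, with a length equal to the file size. A minimal C++ sketch of that conversion (equivalent in spirit to `xxd -i`; the function names here are mine, not part of any tool):

```cpp
#include <cstdio>
#include <fstream>
#include <iterator>
#include <vector>

// Read a .tflite flatbuffer into memory.
std::vector<unsigned char> ReadFile(const char* path) {
  std::ifstream in(path, std::ios::binary);
  return std::vector<unsigned char>(std::istreambuf_iterator<char>(in),
                                    std::istreambuf_iterator<char>());
}

// Print the bytes as a C array plus a length constant, the same shape of
// output that `xxd -i` or generate_cc_arrays.py produces. The reported
// length must match the file size exactly.
void EmitCcArray(const std::vector<unsigned char>& bytes, const char* name) {
  std::printf("alignas(16) const unsigned char %s[] = {\n", name);
  for (size_t i = 0; i < bytes.size(); ++i) {
    std::printf("0x%02x,%s", bytes[i], (i + 1) % 12 == 0 ? "\n" : " ");
  }
  std::printf("\n};\nconst unsigned int %s_len = %zu;\n", name, bytes.size());
}
```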
The error is being triggered in the FeatureProvider class, where kFeatureElementCount is not being matched correctly.
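As I understand it, the 1960 in the error comes from the feature tensor dimensions in the example’s settings. A sketch of that arithmetic, assuming my fork keeps the upstream 49×40 spectrogram values:

```cpp
// Feature tensor dimensions from the upstream micro_speech settings
// (assumed unchanged in my fork): 49 time slices of 40 filter-bank
// channels each, giving the 1960 the error message compares against.
constexpr int kFeatureSize = 40;   // channels per time slice
constexpr int kFeatureCount = 49;  // time slices per 1 s audio window
constexpr int kFeatureElementCount = kFeatureSize * kFeatureCount;

static_assert(kFeatureElementCount == 1960,
              "FeatureProvider expects a 1960-element feature buffer");
```

The provider rejects the buffer when the size it was handed differs from this constant, which is why a garbage value like -268435456 shows up in the message.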
Do I need to replace the audio preprocessor as well? Am I missing a quantization pass or some other step after obtaining the trained model.tflite?
I thought it would be as easy as replacing the example model array with the contents of the new model and updating its size…
For reference, here is the GitHub repo of the micro_speech example from istvanzk that I used to import the example into the TF Lite framework:
I hope I’m just missing an easy step that you can spot; any help getting my own model into the example is appreciated.