Dear OpenEars team,
Thank you very much for this excellent iOS library.
I would like to use OpenEars in my application so that users can activate commands by voice instead of tapping the buttons they use most of the time.
There are six voice commands that my application should respond to: Cancel, Reply, Reply All, Send, Block, and Delete.
I have created a dictionary and a language model using only these six words, and I have also added the Rejecto plugin.
Here is the problem: if I say “Bad”, it is recognized as “Block”, and if I say anything else that is not in the dictionary, it is still recognized as one of my voice commands. I would like recognition to succeed only when I actually say one of the items in the list. What is the best approach to achieve this? Can I use something like a confidence value? (Although that may not work out if the value applies to the entire detected sentence rather than to a single word.)
Here are the approaches I am planning to try:
1. Provide a large language-model file so that words which are not commands can be processed and skipped.
2. Use the recognitionScore as a confidence value (however, I don’t know the details of recognitionScore, such as its maximum and minimum possible values).
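To make approach 2 concrete, here is a minimal, framework-agnostic sketch of the filtering logic I have in mind. It assumes the hypothesis and recognitionScore strings from the recognition callback are passed in, that the score is a numeric string where higher (less negative) means more confident, and that the threshold value is a placeholder I would have to calibrate against real recordings (it is not a documented constant):

```python
# Sketch: accept a hypothesis only if it is an exact in-vocabulary command
# AND its recognition score clears a tuned threshold. The threshold value
# below is an assumption to be calibrated, not a documented OpenEars value.
COMMANDS = {"CANCEL", "REPLY", "REPLY ALL", "SEND", "BLOCK", "DELETE"}
SCORE_THRESHOLD = -150000.0  # placeholder; calibrate empirically

def should_accept(hypothesis: str, recognition_score: str) -> bool:
    try:
        score = float(recognition_score)
    except ValueError:
        return False  # malformed score string: reject
    # Reject out-of-vocabulary hypotheses and low-confidence matches.
    return hypothesis.upper() in COMMANDS and score >= SCORE_THRESHOLD
```

For example, `should_accept("Send", "-100000")` would be accepted, while `should_accept("Bad", "-100000")` would be rejected because “Bad” is not in the command list, and `should_accept("Send", "-200000")` would be rejected for falling below the threshold.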
Please let me know which approach would be best. I would appreciate any suggestions you may have.
Thank you.