Reply To: how to use openears framework to "parrot" back to you what you said


#1017572
Halle Winkler
Politepix

Hi doogie,

That’s correct, it’s always necessary to create a language model or grammar containing the words that can be recognized. Interestingly, this isn’t unique to offline recognition: even Google Voice Search and Siri rely on pre-defined language models and grammars.
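
For example, a bare-bones model for a “parrot” app could be generated from a short word list at launch. This is only a sketch against the OpenEars 1.x-style API; the word list and file name are made up for illustration, and the exact LanguageModelGenerator selector and result keys can differ between OpenEars versions:

```objc
#import <OpenEars/LanguageModelGenerator.h>

// Build a small language model containing only the words that should be recognizable.
LanguageModelGenerator *generator = [[LanguageModelGenerator alloc] init];

NSArray *words = @[@"HELLO", @"GOODBYE", @"SAY THAT AGAIN"]; // hypothetical vocabulary
NSString *name = @"ParrotModel"; // base name for the generated model/dictionary files

NSError *error = [generator generateLanguageModelFromArray:words withFilesNamed:name];

NSString *lmPath = nil;
NSString *dicPath = nil;

if ([error code] == noErr) {
    // In the 1.x sample code, the generated file paths come back in the error's userInfo dictionary.
    NSDictionary *results = [error userInfo];
    lmPath = [results objectForKey:@"LMPath"];
    dicPath = [results objectForKey:@"DictionaryPath"];
} else {
    NSLog(@"Language model generation error: %@", [error localizedDescription]);
}
```

Those two paths are then what you hand to PocketsphinxController when you start listening.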

The difference is just that their recognition runs on enormous server farms, with the models shared across many user sessions simultaneously, so their language and acoustic models can be large enough, and held in enough memory, to create the illusion of detecting “anything”, even though at some point a (very big) language model like the output of LanguageModelGenerator was still created. Since we’re running on a phone, which has only a sliver of the memory and CPU power of even a single server, we have to be very frugal and efficient about what can be recognized, which means constrained vocabularies that are in some way specific to the task at hand.

Luckily, you can swap between vocabularies very quickly with OpenEars, or even generate them on the fly with LanguageModelGenerator based on the needs of the moment, so the usual approach for offline recognition is to use vocabularies that change with the mode of the app.
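
As a sketch of that swapping approach, assuming an OpenEars 1.x PocketsphinxController that is already listening, and hypothetical paths produced by an earlier LanguageModelGenerator run like the one above:

```objc
#import <OpenEars/PocketsphinxController.h>

// Hypothetical example: when the app switches modes, swap in a different
// vocabulary without stopping recognition. parrotLmPath and parrotDicPath
// are assumed to be paths returned by a previous LanguageModelGenerator call.
[self.pocketsphinxController changeLanguageModelToFile:parrotLmPath
                                        withDictionary:parrotDicPath];
```

Each mode of the app can keep its own small model/dictionary pair and switch to it as needed, which keeps every individual vocabulary small enough to stay fast and accurate on the device.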