Good to know. I wonder whether it's actually a bug that the smaller grammars are less accurate. What was the thought process behind opting for a grammar rather than a language model in a case where you're looking for a single word from a set?
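For context, the single-word-from-a-set case maps naturally onto a one-rule JSGF grammar (the grammar format Pocketsphinx consumes); the grammar name and word list below are just hypothetical placeholders:

```jsgf
#JSGF V1.0;

grammar colors;

// One public rule listing the alternatives; the recognizer
// must return exactly one of these words.
public <color> = red | green | blue | yellow;
```

A statistical language model, by contrast, assigns probabilities to word sequences rather than constraining the hypothesis to a fixed set, which is why the choice between the two matters for this kind of task.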
Additionally, the performance of OpenEars (and Pocketsphinx) at generating grammar or language model data on the fly is really good.
Thanks. Just to clarify: Pocketsphinx doesn't generate models or grammars. OpenEars generates the grammars and dictionaries, and the ARPA files are mostly produced by CMUCLMTK, with some modifications.