Sorry, no specific ideas, but you can collect audio of these cases for your own QA so you can investigate issues with your app when people outside of your team are using it. OpenEars has two features that can help with this. First, you can collect audio of other users' OpenEars sessions with SaveThatWave (the demo version is fine for collecting problem audio without buying the plugin, as long as you don't ship your SaveThatWave-enabled build to the App Store): its startSessionDebugRecord method captures all of the speech from an entire app session as a WAV. Second, once you have that WAV and add it to your app bundle, you can use OpenEars' pathToTestFile to replay the session in your app, which may show you what is happening when there is a problem. You can then (for instance) try different words in your model, change the vadThreshold, or work through the other standard troubleshooting steps available to you. You will also want to collect information about which devices and OS versions are in use, in case you need to break out separate logic for those cases.
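For reference, the two halves of that workflow might look roughly like this. This is a sketch, not a drop-in implementation: I'm assuming the OESaveThatWaveController and OEPocketsphinxController class names from OpenEars 2.x, the header paths, and a hypothetical WAV filename of session_debug.wav, so adjust all of those to your actual setup:

```objc
// --- In the build you give to outside users: record the whole session. ---
// (The demo version of SaveThatWave is fine here, as long as this
// SaveThatWave-enabled build never ships to the App Store.)
#import <SaveThatWaveDemo/OESaveThatWaveController.h> // assumed header name

self.saveThatWaveController = [[OESaveThatWaveController alloc] init];
[self.saveThatWaveController startSessionDebugRecord]; // entire session -> one WAV

// --- Later, in your own debugging build: replay the collected session. ---
// Add the WAV users sent you to your app bundle, then point OpenEars at it
// so the whole session is run back through recognition.
#import <OpenEars/OEPocketsphinxController.h>

[OEPocketsphinxController sharedInstance].pathToTestFile =
    [[NSBundle mainBundle] pathForResource:@"session_debug" // hypothetical filename
                                    ofType:@"wav"];
```

With the replay in place you can change one variable at a time (different words in your model, a different vadThreshold, and so on) and observe the effect on the exact audio that caused the problem.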
Sorry, it isn’t possible for me to troubleshoot very generalized issues such as a model that works well for some speakers and less well for others (this is a standard issue in speech recognition), but if you do your own QA and find something that can be reported as a specific, replicable bug, please feel free to let me know about it.