No worries Norm, we’ve all been there at the beginning of learning a new language. I don’t think this has a role in that code, so it’s better to remove it:
What jumps out at me is that the suspend call comes at a somewhat arbitrary moment after the PocketsphinxController has been started. We don’t really know whether it has finished setting up, or whether it set up successfully; we just know that it’s getting suspended after however long the AVSpeechUtterance instantiation happened to take. suspendListening is normally expected to be called on a session that we know for sure has gotten past calibration and has started actively listening, although maybe I need to make it a little more bulletproof for situations like this.
This is a personal design decision, but I think you’ll have better results if you suspend listening only after listening has begun, which you can find out when you receive the - (void) pocketsphinxDidStartListening callback. Alternatively, you could use that callback to decide when it’s OK to start your speech synthesis, using code similar to what is in your viewDidLoad. Either way, the idea is to only call suspend when you know for sure there is something to suspend, which the delegate methods of OpenEarsEventsObserver should help with. Let me know if that helps.
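As a rough sketch of the first approach (this assumes your view controller is already set as the OpenEarsEventsObserver delegate and owns a pocketsphinxController property; adapt the property names to your own code):

```objectivec
// Sketch only: delegate method from OpenEarsEventsObserverDelegate.
// It fires once Pocketsphinx has passed calibration and is actively listening,
// so at this point there is definitely something to suspend.
- (void) pocketsphinxDidStartListening {
    NSLog(@"Pocketsphinx is now listening.");
    [self.pocketsphinxController suspendRecognition];
    // ...safe to start your AVSpeechUtterance playback here...
}

// Optional: confirm the suspension actually happened.
- (void) pocketsphinxDidSuspendRecognition {
    NSLog(@"Pocketsphinx has suspended recognition.");
}
```

The design point is just sequencing: let the delegate callback, not a timer or instantiation order, tell you when the listening session exists.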