OK. It’s important to do testing and issue reporting only from a device, since the Simulator merely simulates low-latency audio, but it sounds like you’re now testing on a real device anyway.
It sounds a bit like one of the audio objects you’re using could be changing the audio session, which isn’t compatible with running PocketsphinxController at the same time. Can you show the code where you create the AVSpeechSynthesizer and AVAudioPlayer objects, as well as your output from verbosePocketsphinx and OpenEarsLogging?
Another question I have is how you’re handling suspend/resume to make sure the TTS output isn’t being heard and analyzed by PocketsphinxController (i.e., which events do you use to decide when it’s safe to suspend and when it’s safe to resume?). Are you 100% sure that recognition isn’t in progress at the same time your app is playing sounds through the speaker?
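For reference, this is roughly the pattern I’d expect: suspend before any TTS audio starts, and resume only from the synthesizer’s did-finish delegate callback. A minimal Swift sketch follows — the `suspendListening`/`resumeListening` closures are placeholders I’ve invented here, standing in for whatever you call on your PocketsphinxController instance (e.g. its suspend/resume recognition methods):

```swift
import AVFoundation

// Hedged sketch, not your actual code: the suspend/resume hooks are
// assumptions standing in for your PocketsphinxController calls.
final class SpeechCoordinator: NSObject, AVSpeechSynthesizerDelegate {
    private let synthesizer = AVSpeechSynthesizer()
    private let suspendListening: () -> Void  // e.g. suspend recognition here
    private let resumeListening: () -> Void   // e.g. resume recognition here

    init(suspend: @escaping () -> Void, resume: @escaping () -> Void) {
        self.suspendListening = suspend
        self.resumeListening = resume
        super.init()
        synthesizer.delegate = self
    }

    func speak(_ text: String) {
        // Suspend recognition BEFORE any TTS audio reaches the speaker.
        suspendListening()
        synthesizer.speak(AVSpeechUtterance(string: text))
    }

    // Resume only once the utterance has completely finished playing,
    // so the tail of the TTS audio is never analyzed as speech input.
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didFinish utterance: AVSpeechUtterance) {
        resumeListening()
    }
}
```

If you’re resuming on some other event (a timer, or right after calling speak), that would explain recognition overlapping with playback.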