It’s quite possible that this behavior changes from iOS version to iOS version or from device to device. A reduced sampling rate, by the way, is not unexpected: OpenEars has to set the sampling rate on the audio session, so if you play audio with a higher rate during OpenEars’ session, it may be downsampled. This isn’t documented in Core Audio, but it appears to have been the consistent default behavior across versions. However, if session mixing is turned on and the audio object still behaves that unexpectedly, that is a Core Audio bug or undocumented Core Audio behavior. As a result, although I’d like a different result as much as you would, it’s unclear what this framework could do about it, particularly if the behavior doesn’t manifest in every iOS version or on every device. I’ve documented that audio object coexistence during recognition is going to be very limited and problematic, although AVAudioPlayer is known to work with 16kHz PCM files, since that format is also a major element of OpenEars. I wouldn’t be surprised if compressed codecs behave differently, but since these topics aren’t documented in Core Audio and appear to change across devices and iOS versions, I’m not in a position to simply improve those results by troubleshooting for one version and one device (changing the default audio mode or using a non-recording mode as a semi-workaround is a non-starter, regrettably).
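To sketch the known-good path mentioned above – playing a 16kHz PCM WAV via AVAudioPlayer while OpenEars is listening – something like the following should work (the file name here is just a placeholder for illustration):

```objc
#import <AVFoundation/AVFoundation.h>

// Hypothetical 16kHz mono PCM WAV bundled with the app; the file name
// is an assumption for this example.
NSURL *soundURL = [[NSBundle mainBundle] URLForResource:@"prompt16k" withExtension:@"wav"];

NSError *error = nil;
// AVAudioPlayer is the coexistence case known to behave, since 16kHz PCM
// matches the sampling rate OpenEars sets on the audio session.
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:soundURL error:&error];

if (player && !error) {
    [player prepareToPlay]; // preload buffers so playback starts promptly
    [player play];
} else {
    NSLog(@"Couldn't create player: %@", error);
}
```

Keep a strong reference to the player (e.g. a property) so it isn't deallocated before playback finishes.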
I can recommend taking a look at the new OEPocketsphinxController overrides added with 2.052, which are intended to improve behavior with Bluetooth devices that don’t exactly match up to spec, and converting your audio to a PCM format such as WAV, to see if the results are better. Consider trying out disableSessionResetsWhileStopped if this is happening after listening ends. Make sure to set these overrides after activating the OEPocketsphinxController instance and before starting listening.
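As a sketch of that ordering (the model paths and acoustic model name are placeholders – check the current OEPocketsphinxController header for the full list of 2.052 overrides):

```objc
#import <OpenEars/OEPocketsphinxController.h>
#import <OpenEars/OEAcousticModel.h>

// Hypothetical paths to a previously generated language model and dictionary.
NSString *lmPath = ...;
NSString *dicPath = ...;

// Order matters: activate the controller first, then set overrides,
// then start listening.
[[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil];

// The override named above; only relevant if the issue appears after
// listening ends.
[OEPocketsphinxController sharedInstance].disableSessionResetsWhileStopped = TRUE;

[[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:lmPath
                                                                dictionaryAtPath:dicPath
                                                             acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]
                                                             languageModelIsJSGF:FALSE];
```

If you change an override after listening has started, stop listening, set it, and start again so it takes effect.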