OK, I believe the crash is fixed now.
OpenEars isn’t designed to listen for speech with the internal mic while the same device is playing sound out of the built-in speaker in an open environment; that is unfortunately a worst-case recognition scenario. The intended design, when the speaker and internal mic are used together, is to suspend listening while music or other sound is playing from the device. So in this scenario you would want to use it according to its design: don't listen while the music is on, and resume listening when the user pauses the music to speak.
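To illustrate the suspend/resume pattern described above, here is a minimal Swift sketch. The `SpeechListener` protocol and `PlaybackCoordinator` class are hypothetical names introduced for illustration; in a real app the listener role would be played by OpenEars' own recognition controller, and you would call its suspend/resume methods at these points.

```swift
import Foundation

// Hypothetical stand-in for the OpenEars recognition controller.
// In a real app, forward these calls to OpenEars' suspend/resume methods.
protocol SpeechListener {
    func suspendRecognition()
    func resumeRecognition()
}

// Hypothetical coordinator that pauses listening while music plays
// through the built-in speaker, per the design described above.
final class PlaybackCoordinator {
    private let listener: SpeechListener
    private(set) var isListening = true

    init(listener: SpeechListener) {
        self.listener = listener
    }

    // Call when music starts playing out of the device speaker.
    func musicDidStart() {
        guard isListening else { return }
        listener.suspendRecognition()
        isListening = false
    }

    // Call when the user pauses the music to speak.
    func musicDidPause() {
        guard !isListening else { return }
        listener.resumeRecognition()
        isListening = true
    }
}
```

The coordinator simply guarantees that recognition is never active while the speaker is in use, which keeps the mic from hearing the device's own output.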
Is there any specific reason for this? Is it because OpenEars takes over the audio session, or because of something else? Actually, we have run some tests in the past: with a mixed audio session set, recognition worked even with music on (at least at low volume). We were using an audio out, though, sending the music to external speakers.