Correct, OpenEars uses the AVAudioSession category PlayAndRecord, which is what lets it use the mic input for speech recognition. If it didn't use a recording audio session, no recognition would be possible, since there would be no microphone access. Any app making low-latency use of the mic is expected to have that kind of audio session.
Starting with iOS 7, the user is asked for mic permission the first time the mic goes live in the app. In my opinion this is a very good thing, especially for applications like speech recognition. Apple's rejection is essentially saying that you are obtaining this permission, but it isn't clear to them what you are doing with it, and they want any mic-using feature to be made apparent to new users of the app. I don't know what your UI is like, so I can't say whether this is an accurate critique; not every reviewer is the same, they have a lot to do, and they can occasionally overlook things like anyone else. But let's operate on the assumption that it's correct.
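For reference, the system prompt described above is tied to AVAudioSession's requestRecordPermission: call (available from iOS 7). A minimal sketch of using it explicitly, so you control when the prompt appears; the method name prepareForSpeechRecognition is hypothetical, and the handler bodies are placeholders for your own logic:

```objectivec
#import <AVFoundation/AVFoundation.h>

// requestRecordPermission: shows the system mic-permission dialog the first
// time it runs; on later runs it just reports the answer the user gave.
- (void)prepareForSpeechRecognition {
    AVAudioSession *session = [AVAudioSession sharedInstance];
    [session requestRecordPermission:^(BOOL granted) {
        dispatch_async(dispatch_get_main_queue(), ^{
            if (granted) {
                // Safe to start OpenEars listening here.
            } else {
                // Explain that the voice UI can't work without mic access.
            }
        });
    }];
}
```

Triggering this yourself, rather than letting the first mic activation do it, lets you make sure the user has already seen your explanation of the voice UI when the dialog appears.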
Solving this definitively is an app implementation issue rather than a framework issue, and it is probably more in the realm of visual UI than speech UI, which isn't within the scope of what OpenEars tries to do (beyond giving you hooks such as the audio input volume so you can create a visual UI in your app). But since it is clearly a concern for Apple, which makes it a concern for people using OpenEars, I will add a warning to the OpenEars documentation about the necessity of communicating that the app is using the mic for speech recognition.
I can make some suggestions about how to indicate to the user that there is ongoing speech recognition. You could describe it via an introductory alert on first run, shown right before starting speech recognition, so that the permission request appears in the context of allowing speech recognition. Alternatively, you could indicate it with a label or an icon.
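The first-run alert approach could be sketched like this. Everything here is an assumption about your app: the ShownVoiceUIIntro defaults key, the method names, and the property names lmPath/dicPath are hypothetical, and the startListening call follows the OpenEars 1.x PocketsphinxController API, so adapt it to your setup:

```objectivec
// Show a one-time explanatory alert before the first call that makes the
// mic live, so the system permission dialog appears right after the user
// has read why the app listens.
- (void)startVoiceUI {
    NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
    if (![defaults boolForKey:@"ShownVoiceUIIntro"]) {
        UIAlertView *alert = [[UIAlertView alloc]
            initWithTitle:@"Voice Control"
                  message:@"This app listens for spoken keywords, so it will ask for microphone access next."
                 delegate:self // needs UIAlertViewDelegate
        cancelButtonTitle:@"OK"
        otherButtonTitles:nil];
        [alert show];
        [defaults setBool:YES forKey:@"ShownVoiceUIIntro"];
    } else {
        [self beginListening];
    }
}

- (void)alertView:(UIAlertView *)alertView didDismissWithButtonIndex:(NSInteger)buttonIndex {
    [self beginListening]; // start OpenEars only after the intro is dismissed
}

- (void)beginListening {
    // Assumed OpenEars 1.x call; substitute your model and dictionary paths.
    [self.pocketsphinxController startListeningWithLanguageModelAtPath:self.lmPath
                                                      dictionaryAtPath:self.dicPath
                                                   languageModelIsJSGF:NO];
}
```

Because the alert is dismissed before listening starts, the mic permission dialog shows up in the context of a feature the user has just been told about, which is exactly what the rejection is asking for.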
Since you have the rejection email, though, I would simply ask them directly. “My app is performing speech recognition by doing keyword spotting as part of its user interface, which is why the PlayAndRecord audio session is in use. Based on the rejection reason, this voice UI feature isn’t apparent. What would be the best way for me to indicate that my app has a voice UI to my users so that the features or functionality that use the microphone for audio recording are clearly apparent?”