Tagged: AVAudioSession, AVAudioSessionCategoryRecord, iOS7
This topic has 26 replies, 3 voices, and was last updated 9 years, 9 months ago by Halle Winkler.
November 12, 2013 at 12:18 pm · #1018871 · pablowaa (Participant)
Hi,
I’m using the OpenEars framework in one of my apps, and it was recently rejected. The reason:
“During review we were prompted to provide consent to use the microphone, however, we were not able to find any features or functionality that use the microphone for audio recording.
The microphone consent request is generated by the use of either AVAudioSessionCategoryRecord or AVAudioSessionCategoryPlayAndRecord audio categories.
If you do not intend to record audio with your application, it would be appropriate to choose the AVAudioSession session category that fits your application’s needs or modify your app to include audio-recording features.”

Is there any way to fix this? The only part of the app where I’m using the microphone is where the OpenEars framework is being used.
Thank you
November 12, 2013 at 12:44 pm · #1018872 · Halle Winkler (Politepix)
Welcome pablowaa,
I’m sorry you had an app rejection — this is the first time I’ve ever heard of one as a result of using the framework for a purpose it was designed for (I heard of one other temporary rejection back in 2011, but it was due to an obvious misuse of the framework). Are you doing speech recognition in your app or just TTS?
November 12, 2013 at 1:00 pm · #1018874 · pablowaa (Participant)
Hi, thanks for your response.
I’m doing speech recognition, but only recognising two words (“next”/“forward”) to navigate between slides in one section of the app.
I’ll give you some more background about the app:
This app is part of a family of apps that has been using the OpenEars framework since iOS 5. When I release a new app, I change the content and themes, and I update the UI and code to work with the new iOS version requirements.
The only place where I use the microphone is in the OpenEars framework, and Apple has never reported a problem with it. This is the first time I have released an app from this family for iOS 7, so I don’t know if it has something to do with iOS 7.

According to Apple I’m using audio recording, but the only thing I do is recognise two words with OpenEars.
November 12, 2013 at 1:45 pm · #1018875 · Halle Winkler (Politepix)
Correct, OpenEars uses the AVAudioSession category PlayAndRecord, which is how it is able to use the mic input for speech recognition. If it didn’t use a recording audio session, no recognition would be possible because there would be no microphone access. It’s expected that any app making low-latency use of the mic will have that kind of audio session.
Starting with iOS 7, whenever the mic is first made live in the app, the user is asked for permission for mic access. This is a very good thing in my opinion, especially for applications like speech recognition. Apple’s rejection is basically saying that you are getting this permission, but it isn’t clear to them what you are doing with it, and they want any mic-using feature to be made apparent to new users of the app. I don’t know what your UI is like, so I can’t say whether this is an accurate critique; not every reviewer is the same, they have a lot to do, and they can occasionally overlook things like everyone else, but let’s operate on the assumption that it’s correct.
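For context, iOS 7 also lets an app trigger that permission prompt at a moment of its own choosing, rather than waiting for the first live mic use. This is a general AVFoundation sketch, not part of the OpenEars API; the `startListening` and `showVoiceUIDisabledMessage` methods are hypothetical placeholders for whatever your app does:

```objectivec
#import <AVFoundation/AVFoundation.h>

// iOS 7+: surface the microphone permission prompt at a controlled moment,
// e.g. right after explaining the voice UI, instead of on first mic use.
- (void)requestMicPermissionThenListen {
    [[AVAudioSession sharedInstance] requestRecordPermission:^(BOOL granted) {
        // The completion block may arrive on a background queue, so hop to main.
        dispatch_async(dispatch_get_main_queue(), ^{
            if (granted) {
                [self startListening]; // hypothetical method that starts recognition
            } else {
                [self showVoiceUIDisabledMessage]; // hypothetical fallback UI
            }
        });
    }];
}
```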
Solving this in a definitive way is an app implementation issue rather than a framework issue, and is probably something more for the realm of visual UI than speech UI, which isn’t inside the scope of things OpenEars tries to do (besides giving you hooks like the audio input volume so you can create a visual UI in your app). But since it is clearly a concern for Apple, which means it’s a concern of people using OpenEars, I will add a warning to the OpenEars documentation about the necessity of communicating the fact that the app is using the mic for speech recognition.
I can make some suggestions about how to indicate to the user that there is ongoing speech recognition. You could describe it via an introductory alert on first run, right before starting the speech recognition so that the permission request appears in the context of allowing speech recognition. Alternately you could probably indicate it with a label or an icon.
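As a rough sketch of the first suggestion (an introductory alert shown right before recognition starts, so that the system mic prompt appears in the context of the voice feature), something like the following could work. The alert wording, the defaults key, and the `startListening` wrapper are illustrative, not taken from this thread:

```objectivec
// First-run explanation shown immediately before starting recognition,
// so the iOS 7 microphone permission request appears in a clear context.
- (void)explainAndStartListening {
    NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
    if (![defaults boolForKey:@"HasSeenVoiceUIExplanation"]) {
        UIAlertView *alert = [[UIAlertView alloc]
            initWithTitle:@"Voice Navigation"
                  message:@"Say \"next\" or \"forward\" to move between slides. "
                          @"The microphone is used only for this voice control."
                 delegate:self
        cancelButtonTitle:@"OK"
        otherButtonTitles:nil];
        [alert show]; // start listening in alertView:clickedButtonAtIndex:
        [defaults setBool:YES forKey:@"HasSeenVoiceUIExplanation"];
        [defaults synchronize];
    } else {
        [self startListening]; // hypothetical wrapper that starts recognition
    }
}
```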
Since you have the rejection email, though, I would simply ask them directly. “My app is performing speech recognition by doing keyword spotting as part of its user interface, which is why the PlayAndRecord audio session is in use. Based on the rejection reason, this voice UI feature isn’t apparent. What would be the best way for me to indicate that my app has a voice UI to my users so that the features or functionality that use the microphone for audio recording are clearly apparent?”
December 19, 2013 at 10:39 am · #1019207 · Halle Winkler (Politepix)
I’m marking this resolved because although it isn’t possible for this project to handle the visual UI requirements or user education requirements of indicating microphone use, it provides hooks for easy UI building, and I’ve added a note about this requirement to the FAQ so everyone understands that Apple wants an indication of mic usage to be part of the interface of a mic-using app.
January 16, 2014 at 12:10 am · #1019546 · SPatrickB (Participant)
My app update (for iOS 7) just got rejected for the exact same reason given by Apple. The trouble is that my app only uses TTS and so should not use the microphone.
If a flag could be set in the API for TTS only (to use AVAudioSessionCategoryPlayback), then I think this could be resolved.

January 16, 2014 at 12:14 am · #1019548 · Halle Winkler (Politepix)
Welcome,
Sorry, that’s frustrating. You can set this behavior by setting the property noAudioSessionOverrides to TRUE in your FliteController. Please test that it gives the desired result of no audio session overrides on a new install of your app before re-submitting!
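For reference, setting that property before the first say call could look something like this, assuming a lazily instantiated FliteController and a voice object (here called `slt`) as in the OpenEars sample code:

```objectivec
// Disable OpenEars' audio session management for a TTS-only app, so no
// recording audio session is set up and no mic permission is requested.
- (FliteController *)fliteController {
    if (_fliteController == nil) {
        _fliteController = [[FliteController alloc] init];
        _fliteController.noAudioSessionOverrides = TRUE; // must be set before the first say: call
    }
    return _fliteController;
}

- (void)speak:(NSString *)text {
    [self.fliteController say:text withVoice:self.slt];
}
```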
January 16, 2014 at 12:00 pm · #1019751 · Halle Winkler (Politepix)
I’ve decided that the next version will do this automatically when you are doing TTS without speech recognition, so this doesn’t affect anyone else. The ETA for version 1.65 is within the next week or so.
January 16, 2014 at 12:52 pm · #1019752 · SPatrickB (Participant)
I set that flag, but I still see an entry for my app in Settings > Privacy > Microphone. However, it’s OK; I’ll just wait for the next version of OpenEars.
January 16, 2014 at 12:57 pm · #1019753 · Halle Winkler (Politepix)
You would need to completely uninstall the app, including its settings, and let it reinstall as a new app, since that setting is made on the very first install when you accept the permission request.
January 16, 2014 at 12:59 pm · #1019754 · Halle Winkler (Politepix)
The other possibility is that you have a call to PocketsphinxController somewhere.
January 19, 2014 at 5:14 am · #1019790 · SPatrickB (Participant)
When I delete the app and reboot the device, I see that it is also gone from Settings, but when I reinstall, it reappears in Settings. I don’t know what else I can do.
January 19, 2014 at 9:37 am · #1019792 · Halle Winkler (Politepix)
Can you show me how you are initializing fliteController and setting noAudioSessionOverrides? There is really no way for the app to use the mic as a result of OpenEars if you have noAudioSessionOverrides set to true and you aren’t using PocketsphinxController.
You can verify it in the framework source code; the only place in the framework that the AudioSessionManager is initialized from is PocketsphinxController, and once in FliteController if noAudioSessionOverrides isn’t set.
According to Stack Overflow, permissions are not necessarily reset on app removal, so you could try changing your bundle ID to be sure a new install won’t ask permission: http://stackoverflow.com/a/18625274/119717
January 20, 2014 at 12:46 pm · #1019802 · SPatrickB (Participant)
Changing the bundle ID worked, thanks.
January 20, 2014 at 12:53 pm · #1019803 · Halle Winkler (Politepix)
Very glad to hear it. I’ll keep in mind for future support questions that new mic permissions settings won’t take effect when testing an already-installed bundle.
January 28, 2014 at 12:20 pm · #1019938 · SPatrickB (Participant)
OK, Settings > General > Reset > Reset Location & Privacy will remove the previous settings, i.e. there is no need for a bundle ID change.
However, after setting noAudioSessionOverrides to TRUE, it appears to work only for the initial call to the say method. I still have the microphone consent request popping up on subsequent calls, even though the noAudioSessionOverrides flag is still set to TRUE.

January 28, 2014 at 12:40 pm · #1019940 · Halle Winkler (Politepix)
Hmm, I’m not sure how that could happen unless you are using a different memory management approach than the recommended one. This is the only code in the 1.64 version of FliteController which calls AudioSessionManager:
```objectivec
if(audioSessionHasBeenInstantiated == FALSE) {
    if(self.noAudioSessionOverrides == FALSE) {
        audioSessionHasBeenInstantiated = TRUE;
        // We want to do this before the first AVAudioPlayer instantiation, but only
        // once, and only if the developer hasn't set noAudioSessionOverrides TRUE
        AudioSessionManager *audioSessionManager = [AudioSessionManager sharedAudioSessionManager];
        [audioSessionManager startAudioSession];
    }
}
```
It does something with AudioSessionManager if there has never been an instantiated audio session and if overrides have been allowed. If there is a call to the audio session, it is either because PocketsphinxController is being used or because this check is failing:
if(self.noAudioSessionOverrides == FALSE) {
The 1.65 version will not even do this much, but if your FliteController has an issue where a BOOL property you are setting isn’t seen after the first call to the FliteController, that is something to investigate regardless.
January 28, 2014 at 12:59 pm · #1019943 · SPatrickB (Participant)
I can see in my app that audioSessionHasBeenInstantiated is never set to TRUE in initial or subsequent calls to the say method. Is this pertinent?
January 28, 2014 at 1:19 pm · #1019944 · Halle Winkler (Politepix)
Sort of; it vaguely suggests that whatever is setting your audio session isn’t related to OpenEars. If you only have FliteController and aren’t making any use of PocketsphinxController, the call above is the only opportunity for audio session calls, so having noAudioSessionOverrides set is the last word, because it is checked after audioSessionHasBeenInstantiated. If noAudioSessionOverrides is set, OpenEars isn’t making a call to its AudioSessionManager (which is why audioSessionHasBeenInstantiated is never set to TRUE). If a call is being made to the audio session, it should perhaps be suspected of coming from other objects.
January 28, 2014 at 1:25 pm · #1019945 · SPatrickB (Participant)
Ah, I am using AudioServicesPlaySystemSound elsewhere in my app and am therefore calling:
[[AudioSessionManager sharedAudioSessionManager] setSoundMixing:true];
January 28, 2014 at 1:33 pm · #1019946 · Halle Winkler (Politepix)
OK, if that is for OpenEars, it shouldn’t be necessary when you have noAudioSessionOverrides on. But that also shouldn’t result in a call to request the mic if OpenEars never gets to the point of making its own audio session call.
January 28, 2014 at 1:50 pm · #1019947 · SPatrickB (Participant)
I can confirm that removing the call to setSoundMixing (from my controller init) has stopped the mic consent request and has not affected the AudioServicesPlaySystemSound operation either.
Still, I do not understand why this was happening.

January 28, 2014 at 2:06 pm · #1019948 · Halle Winkler (Politepix)
Glad to hear it. Have you searched your project and other linked libraries for the case-insensitive word “session”? That can often turn up overlooked session calls. A lot of audio sample code on Stack Overflow and elsewhere contains extraneous calls to the audio session using one or the other audio session API, and sometimes they sneak into otherwise-good code by that vector.
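As an illustration of that search, a case-insensitive, recursive grep over the project sources might look like the following. The `MyApp` directory and the sample file contents are made up for the example:

```shell
# Illustrative setup: a source file containing an overlooked audio session call
mkdir -p MyApp/Classes
printf '[[AVAudioSession sharedInstance] setActive:YES error:nil];\n' \
  > MyApp/Classes/SoundHelper.m

# Case-insensitive search for "session" across .m and .h files, with
# filename and line number, to locate extraneous audio session calls
grep -rin --include='*.m' --include='*.h' 'session' MyApp
```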
January 28, 2014 at 2:11 pm · #1019949 · Halle Winkler (Politepix)
Sorry, my mistake: the call you are making above is to OpenEars’ AudioSessionManager, so it is OpenEars which is doing the audio session management. That isn’t mysterious, since the mixing call exists in order to keep OpenEars’ audio session management functional while it is used alongside another object that affects the audio session.
January 28, 2014 at 2:42 pm · #1019950 · SPatrickB (Participant)
There was only one reference to an audio session in my code, which was the call to setSoundMixing.
January 28, 2014 at 3:05 pm · #1019951 · Halle Winkler (Politepix)
That’s correct: sending any call directly to the audio session manager will result in some audio session management. That isn’t part of the documented API, but is a very old workaround from before PocketsphinxController got its own soundMixing property, which ensures that soundMixing is only ever set when using PocketsphinxController.
February 13, 2014 at 9:09 pm · #1020136 · Halle Winkler (Politepix)
This was fixed in version 1.65 of OpenEars and should no longer be an issue for app submissions, regardless of noAudioSessionOverrides settings.