[Resolved] App rejected


Viewing 27 posts - 1 through 27 (of 27 total)

  • #1018871
    pablowaa
    Participant

    Hi,

    I’m using the OpenEars framework in one of my apps. It’s been recently rejected. The reason:

    “During review we were prompted to provide consent to use the microphone, however, we were not able to find any features or functionality that use the microphone for audio recording.
    The microphone consent request is generated by the use of either AVAudioSessionCategoryRecord or AVAudioSessionCategoryPlayAndRecord audio categories.
    If you do not intend to record audio with your application, it would be appropriate to choose the AVAudioSession session category that fits your application’s needs or modify your app to include audio-recording features.”

    Is there any way to fix it? The only part of the app where I’m using the microphone is where the OpenEars framework is being used.

    Thank you

    #1018872
    Halle Winkler
    Politepix

    Welcome pablowaa,

    I’m sorry you had an app rejection — this is the first time I’ve ever heard of one as a result of using the framework for a purpose it was designed for (I heard of one other temporary rejection back in 2011, but it was due to an obvious misuse of the framework). Are you doing speech recognition in your app or just TTS?

    #1018874
    pablowaa
    Participant

    Hi, thanks for your response.

I’m doing speech recognition, but it only recognises two words (“next”/“forward”), used to navigate between slides in one section of the app.

    I’ll give you some more background about the app:
This app is part of a family of apps that has been using the OpenEars framework since iOS 5. When I release a new app I change the content and themes, and I update the UI and code to meet the new iOS version requirements.
The only place where I use the microphone is the OpenEars framework, and Apple has never reported a problem with it before. This is the first app from this family that I’ve released for iOS 7, so could it have something to do with iOS 7?

According to Apple I’m using audio recording, but the only thing I do is recognise two words with OpenEars.

    #1018875
    Halle Winkler
    Politepix

Correct, OpenEars uses the AVAudioSessionCategoryPlayAndRecord audio session category, which is how it is able to use the mic input for speech recognition. If it didn’t use a recording audio session, no recognition would be possible because there would be no microphone access. It’s expected that any app making low-latency use of the mic will have that kind of audio session.

Starting with iOS 7, whenever the mic is first made live in the app, the user is asked for permission to access it. This is a very good thing in my opinion, especially for applications like speech recognition. Apple’s rejection is basically saying that you are getting this permission, but it isn’t clear to them what you are doing with it, and they want any mic-using feature to be apparent to new users of the app. I don’t know what your UI is like, so I can’t say whether this is an accurate critique; not every reviewer is the same, and they have a lot to do and can occasionally overlook things like everyone else, but let’s operate on the assumption that it’s correct.
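For reference, here is a minimal sketch of how that permission state can be surfaced explicitly on iOS 7. This is standard AVFoundation usage, not an OpenEars API, and the method name checkMicPermission is invented for illustration:

```objc
#import <AVFoundation/AVFoundation.h>

// Sketch only: explicitly request (or confirm) mic permission before
// starting speech recognition, so the system prompt appears in context.
- (void)checkMicPermission {
    AVAudioSession *session = [AVAudioSession sharedInstance];
    if ([session respondsToSelector:@selector(requestRecordPermission:)]) { // iOS 7+
        [session requestRecordPermission:^(BOOL granted) {
            if (granted) {
                // Safe to start listening; the user has consented to mic use.
            } else {
                // Explain in your UI why the voice feature is unavailable.
            }
        }];
    }
}
```

On earlier iOS versions the selector doesn’t exist, which is why it is guarded with respondsToSelector:.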

    Solving this in a definitive way is an app implementation issue rather than a framework issue and is probably something more for the realm of visual UI than speech UI, which isn’t inside of the scope of things OpenEars tries to do (besides giving you hooks like the audio input volume so you can create a visual UI in your app). But since it is clearly a concern for Apple, which means it’s a concern of people using OpenEars, I will add a warning about the necessity of communicating the fact that the app is using the mic for speech recognition in the OpenEars documentation.

    I can make some suggestions about how to indicate to the user that there is ongoing speech recognition. You could describe it via an introductory alert on first run, right before starting the speech recognition so that the permission request appears in the context of allowing speech recognition. Alternately you could probably indicate it with a label or an icon.
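As a hypothetical sketch of the first suggestion (none of these names are OpenEars API; UIAlertView is the stock iOS 7-era alert class, and the method name and wording are invented for illustration):

```objc
// Sketch only: explain the voice UI on first run, then start listening
// after the alert is dismissed so the mic prompt appears in context.
- (void)explainVoiceUIThenListen {
    UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Voice control"
        message:@"Say \"next\" or \"forward\" to change slides. The microphone is used only for this."
        delegate:self
        cancelButtonTitle:@"OK"
        otherButtonTitles:nil];
    [alert show]; // start speech recognition in alertView:didDismissWithButtonIndex:
}
```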

    Since you have the rejection email, though, I would simply ask them directly. “My app is performing speech recognition by doing keyword spotting as part of its user interface, which is why the PlayAndRecord audio session is in use. Based on the rejection reason, this voice UI feature isn’t apparent. What would be the best way for me to indicate that my app has a voice UI to my users so that the features or functionality that use the microphone for audio recording are clearly apparent?”

    #1019207
    Halle Winkler
    Politepix

    I’m marking this resolved because although it isn’t possible for this project to handle the visual UI requirements or user education requirements of indicating microphone use, it provides hooks for easy UI building, and I’ve added a note about this requirement to the FAQ so everyone understands that Apple wants an indication of mic usage to be part of the interface of a mic-using app.

    #1019546
    SPatrickB
    Participant

My app update (for iOS 7) just got rejected for the exact same reason given by Apple. The trouble is, my app only uses TTS and so should not need the microphone.
If a flag could be set in the API for TTS-only use (i.e. AVAudioSessionCategoryPlayback), I think this could be resolved.

    #1019548
    Halle Winkler
    Politepix

    Welcome,

    Sorry, that’s frustrating. You can set this behavior by setting the property noAudioSessionOverrides to TRUE in your FliteController. Please test that it gives the desired result of no audio session overrides on a new install of your app before re-submitting!
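For example, assuming a lazily-instantiated FliteController property named fliteController and an Slt voice property named slt, as in the usual OpenEars sample code (your property names may differ), the usage would look roughly like:

```objc
// Sketch only: set the flag before the first say: call so no recording
// audio session is ever instantiated for TTS-only use.
self.fliteController.noAudioSessionOverrides = TRUE;
[self.fliteController say:@"Welcome to the app." withVoice:self.slt];
```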

    #1019751
    Halle Winkler
    Politepix

    I’ve decided that the next version will do this automatically if you are doing TTS without speech recognition so this doesn’t affect anyone else. ETA for version 1.65 is in the next week or so.

    #1019752
    SPatrickB
    Participant

I set that flag but I still see an entry for my app in Settings > Privacy > Microphone. However, it’s OK, I’ll just wait for the next version of OpenEars.

    #1019753
    Halle Winkler
    Politepix

    You would need to completely uninstall the app including its settings and let it reinstall as a new app, since that setting is made on the first install ever when you accept the permission request.

    #1019754
    Halle Winkler
    Politepix

    The other possibility is that you have a call to PocketsphinxController somewhere.

    #1019790
    SPatrickB
    Participant

When I delete the app and reboot the device, the entry is also gone from Settings, but when I reinstall, it reappears. I don’t know what else I can do.

    #1019792
    Halle Winkler
    Politepix

    Can you show me how you are initializing fliteController and setting noAudioSessionOverrides? There is really no way for the app to use the mic as a result of OpenEars if you have noAudioSessionOverrides set to true and you aren’t using PocketsphinxController.

    You can verify it in the framework source code; the only place in the framework that the AudioSessionManager is initialized from is PocketsphinxController, and once in FliteController if noAudioSessionOverrides isn’t set.

    According to Stack Overflow, permissions are not necessarily reset on app removal so you could try changing your bundle ID to be sure a new install won’t ask permission: http://stackoverflow.com/a/18625274/119717

    #1019802
    SPatrickB
    Participant

    Changing the bundle ID worked, thanks.

    #1019803
    Halle Winkler
    Politepix

    Very glad to hear it. I’ll keep in mind for future support questions that new mic permissions settings won’t take effect when testing an already-installed bundle.

    #1019938
    SPatrickB
    Participant

OK, Settings > General > Reset > Reset Location & Privacy will remove the previous settings, i.e. no need for the bundle ID change.
However, after setting noAudioSessionOverrides to TRUE, it appears to work only for the initial call to the say method. I still have the microphone consent request popping up on subsequent calls, even though the noAudioSessionOverrides flag is still set to TRUE.

    #1019940
    Halle Winkler
    Politepix

    Hmm, I’m not sure how that could happen unless you are using a different memory management approach than the recommended. This is the only code in the 1.64 version of FliteController which calls AudioSessionManager:

        if(audioSessionHasBeenInstantiated == FALSE) {
            if(self.noAudioSessionOverrides == FALSE) {
                audioSessionHasBeenInstantiated = TRUE;
                // We want to do this before the first AVAudioPlayer instantiation, but only once, and only if the developer hasn't set noAudioSessionOverrides to TRUE.
                AudioSessionManager *audioSessionManager = [AudioSessionManager sharedAudioSessionManager];
                [audioSessionManager startAudioSession];
            }
        }
    

    It does something with AudioSessionManager if there has never been an instantiated audio session and if overrides have been allowed. If there is a call to the audio session, it is either because PocketsphinxController is being used or because this check is failing:

    if(self.noAudioSessionOverrides == FALSE) {
    

    The 1.65 version will not even do this much, but if your FliteController has an issue with a BOOL property you are setting not being seen after the first call to the FliteController, that is something to investigate regardless.

    #1019943
    SPatrickB
    Participant

    I can see in my app that audioSessionHasBeenInstantiated is never set to TRUE in initial or subsequent calls to the say method. Is this pertinent?

    #1019944
    Halle Winkler
    Politepix

Sort of. It vaguely suggests that whatever is setting your audio session isn’t related to OpenEars. If you only have FliteController and aren’t making any use of PocketsphinxController, the code above is the only opportunity for audio session calls, and noAudioSessionOverrides has the last word because it is checked after audioSessionHasBeenInstantiated. If noAudioSessionOverrides is set, OpenEars isn’t making a call to its AudioSessionManager (which is why audioSessionHasBeenInstantiated is never set to TRUE). If a call is being made to the audio session, it should be suspected of coming from other objects.

    #1019945
    SPatrickB
    Participant

    Ah, I am using AudioServicesPlaySystemSound elsewhere in my app and am therefore calling [[AudioSessionManager sharedAudioSessionManager] setSoundMixing:true];

    #1019946
    Halle Winkler
    Politepix

OK, if that is for OpenEars, it shouldn’t be necessary when you have noAudioSessionOverrides on. But it also shouldn’t result in a call to request the mic if OpenEars never gets to the point of making its own audio session call.

    #1019947
    SPatrickB
    Participant

    I can confirm that removing the call to setSoundMixing (from my controller init) has stopped the mic consent and has not affected the AudioServicesPlaySystemSound operation either.
    Still, I do not understand why this was happening.

    #1019948
    Halle Winkler
    Politepix

Glad to hear it. Have you searched your project and other linked libraries for the case-insensitive word “session”? That can often turn up overlooked session calls. A lot of audio sample code on Stack Overflow and elsewhere contains extraneous calls to the audio session using one audio session API or the other, and sometimes they sneak into otherwise-good code by that vector.
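As a concrete example of that kind of search (a sketch assuming a typical Objective-C project layout with .m and .h sources; adjust the path and extensions as needed):

```shell
# Recursively, case-insensitively search Objective-C sources for "session"
# to surface any overlooked audio session calls, with file and line numbers.
grep -rni --include='*.m' --include='*.h' 'session' .
```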

    #1019949
    Halle Winkler
    Politepix

    Sorry, my mistake, the call you are making above is to OpenEars’ AudioSessionManager so it is OpenEars which is doing audio session management. That isn’t mysterious since the mixing call is there in order to keep OpenEars’ audio session management functional while being used with another object that is affecting the audio session.

    #1019950
    SPatrickB
    Participant

    There was only the one reference to an audio session in my code which was the call to setSoundMixing.

    #1019951
    Halle Winkler
    Politepix

That’s correct: sending any call directly to the audio session manager will result in some audio session management. That isn’t part of the documented API; it’s a very old workaround from before PocketsphinxController got its own soundMixing property, which ensures that soundMixing is only ever set when PocketsphinxController is in use.

    #1020136
    Halle Winkler
    Politepix

This was fixed in version 1.65 of OpenEars and should no longer be an issue for app submissions, regardless of noAudioSessionOverrides settings.
