The app plays short MP3 audio using
This should be OK, but a) are you suspending/resuming recognition when you play this, or is the playback audio being submitted as part of the sound to be recognized, and b) is it actively playing during the timeframe in which the recognition never returns a hypothesis?
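To illustrate the suspend/resume pattern being asked about, here is a minimal sketch of suspending recognition before playing a prompt and resuming only after playback finishes, so the prompt's audio is never submitted for recognition. The `Recognizer` and `PromptPlayer` types below are illustrative stand-ins, not the framework's real classes (if this is OpenEars, the actual suspend/resume calls live on its shared recognition controller, and the completion would come from an `AVAudioPlayer` delegate callback).

```swift
import Foundation

// Hypothetical stand-in for the speech recognizer's suspend/resume surface.
// The real framework object and method names may differ.
final class Recognizer {
    private(set) var isSuspended = false
    private(set) var log: [String] = []
    func suspendRecognition() { isSuspended = true; log.append("suspended") }
    func resumeRecognition() { isSuspended = false; log.append("resumed") }
}

// Hypothetical player; in a real app this would be AVAudioPlayer, with
// audioPlayerDidFinishPlaying(_:successfully:) driving the completion.
final class PromptPlayer {
    func play(_ name: String, completion: () -> Void) {
        // ... short MP3 playback would happen here ...
        completion() // fired when playback finishes
    }
}

let recognizer = Recognizer()
let player = PromptPlayer()

// Suspend before the prompt plays so its audio is not recognized,
// and resume only once playback has completed.
recognizer.suspendRecognition()
player.play("prompt.mp3") {
    recognizer.resumeRecognition()
}

print(recognizer.log) // ["suspended", "resumed"]
```

The key point is the ordering: if the resume happens before playback actually ends (or never happens at all), the recognizer will either hear the prompt or stay suspended, both of which can look like "no hypothesis ever returned."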
Video usage is more than just a set of JPEGs being displayed
Is this displayed with any kind of video object, or is it displayed with UIImage?
The problem is … but if the noise appears and then stops, no end of speech is ever again detected.
Does this also happen with the sample app that ships with the distribution if you don’t make any changes to it?
Which acoustic model are you using?