Sorry, no. I generally encourage developers not to emulate a push-to-talk recognizer with a continuous recognizer, because that design fights the design of the library. In your case, the interesting question is why recognition isn’t completing when there is silence – that isn’t how the library usually behaves, so it may be worth troubleshooting.
However, there is a method which performs recognition on an arbitrary complete WAV file:
- (void) runRecognitionOnWavFileAtPath:(NSString *)wavPath usingLanguageModelAtPath:(NSString *)languageModelPath dictionaryAtPath:(NSString *)dictionaryPath acousticModelAtPath:(NSString *)acousticModelPath languageModelIsJSGF:(BOOL)languageModelIsJSGF;
So if all you want to do is perform recognition on a recording you’ve made, capture a WAV on button press using one of the many ways to do that with iOS, stop capturing after two seconds, and submit it to runRecognitionOnWavFileAtPath:. The WAV must be mono, 16-bit linear PCM at a 16kHz sample rate.
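As a rough sketch (untested), here is one way to do that with AVAudioRecorder. The settings dictionary produces the mono/16-bit/16kHz linear PCM format mentioned above; the property names (self.pocketsphinxController, self.languageModelPath, self.dictionaryPath, self.acousticModelPath) are assumptions standing in for whatever your own OpenEars setup uses:

```objc
#import <AVFoundation/AVFoundation.h>

NSString *wavPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"capture.wav"];

// AVAudioRecorder settings for mono, 16-bit, 16kHz linear PCM.
NSDictionary *settings = @{
    AVFormatIDKey           : @(kAudioFormatLinearPCM),
    AVSampleRateKey         : @16000.0,
    AVNumberOfChannelsKey   : @1,
    AVLinearPCMBitDepthKey  : @16,
    AVLinearPCMIsBigEndianKey : @NO,
    AVLinearPCMIsFloatKey   : @NO
};

NSError *error = nil;
AVAudioRecorder *recorder = [[AVAudioRecorder alloc] initWithURL:[NSURL fileURLWithPath:wavPath]
                                                        settings:settings
                                                           error:&error];
[recorder record];

// Two seconds later (e.g. from a dispatch_after block, or on button release),
// stop the recorder and hand the file to OpenEars:
[recorder stop];
[self.pocketsphinxController runRecognitionOnWavFileAtPath:wavPath
                                  usingLanguageModelAtPath:self.languageModelPath
                                          dictionaryAtPath:self.dictionaryPath
                                       acousticModelAtPath:self.acousticModelPath
                                       languageModelIsJSGF:NO];
```

The hypothesis files (self.hypothesisReceived etc.) aren’t shown here – you would receive the result through the same OpenEarsEventsObserver delegate callbacks you already use for live recognition.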