Sorry for being slow, but I believe your second question about suspend/resume holds the answer to my problem. I am using this code to determine the event from the hypothesis:
- (void) pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore utteranceID:(NSString *)utteranceID {
    NSLog(@"The received hypothesis is %@ with a score of %@ and an ID of %@", hypothesis, recognitionScore, utteranceID);
    if ([hypothesis isEqualToString:@"HELP"]) {
        [self slotHelp:self];
    } else if ([hypothesis isEqualToString:@"BET"]) {
        [self slotBet:self];
    } else if ([hypothesis isEqualToString:@"SPIN"]) {
        [self slotSpin:self];
    } else if ([hypothesis isEqualToString:@"BALANCE"]) {
        [self slotScore:self];
    }
}
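One possibility I am considering (assuming OpenEars' PocketsphinxController, which exposes suspendRecognition, and a hypothetical self.pocketsphinxController property holding that instance) is suspending as soon as a hypothesis is acted on, before the event's speech starts, so the microphone never hears the app's own output. A sketch of one branch:

// Sketch: suspend before the event's TTS/audio begins.
// self.pocketsphinxController is an assumed property holding the
// PocketsphinxController instance used to start listening.
if ([hypothesis isEqualToString:@"SPIN"]) {
    [self.pocketsphinxController suspendRecognition]; // stop listening first
    [self slotSpin:self]; // event plays its TTS/audio while suspended
}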
During each event there is speech from both the TTS synthesizer and the audio players. I use this conditional to test whether any speech is playing at the beginning of each event, but I am not sure where to suspend and resume:
if (self.synthesizer.speaking == NO && ![audioPlayer1 isPlaying] && ![audioPlayer2 isPlaying] && ![audioPlayer3 isPlaying] && ![audioPlayer4 isPlaying]) {
    // no speech currently playing
}
I thought I could suspend recognition after this conditional and let the event take place, but I am not sure where or how to resume recognition once the speech has completed.
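One place to resume might be in the completion delegate callbacks — AVSpeechSynthesizerDelegate's speechSynthesizer:didFinishSpeechUtterance: and AVAudioPlayerDelegate's audioPlayerDidFinishPlaying:successfully: — resuming only when the conditional above reports that nothing is still making sound. A sketch under those assumptions (self.pocketsphinxController is a hypothetical property for the PocketsphinxController instance, and resumeListeningIfQuiet is a helper I invented):

// Resume only when neither TTS nor any audio player is still playing.
- (void)resumeListeningIfQuiet {
    if (self.synthesizer.speaking == NO &&
        ![audioPlayer1 isPlaying] && ![audioPlayer2 isPlaying] &&
        ![audioPlayer3 isPlaying] && ![audioPlayer4 isPlaying]) {
        [self.pocketsphinxController resumeRecognition];
    }
}

// AVSpeechSynthesizerDelegate: the TTS finished an utterance.
- (void)speechSynthesizer:(AVSpeechSynthesizer *)synthesizer
  didFinishSpeechUtterance:(AVSpeechUtterance *)utterance {
    [self resumeListeningIfQuiet];
}

// AVAudioPlayerDelegate: one of the players finished.
- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player
                       successfully:(BOOL)flag {
    [self resumeListeningIfQuiet];
}

This assumes the view controller is set as the delegate of the synthesizer and of each of the four audio players; the helper is called from every completion callback so recognition resumes only after the last sound stops.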
Thanks so much for your patience and help. I am hoping to develop several apps using OpenEars for people who are visually challenged, and OpenEars is a vital part of my vision. Again, thanks for your help.