rapidEarsDidDetectLiveSpeechAsWordArray not being called
This topic has 11 replies, 4 voices, and was last updated 7 years, 8 months ago by Halle Winkler.
February 5, 2015 at 3:03 pm · #1024707 · foobar8675 (Participant)
I have RapidEars implemented and can see that the following two methods are being called:
– (void) rapidEarsDidReceiveLiveSpeechHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore
– (void) rapidEarsDidReceiveFinishedSpeechHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore
whereas the next two methods are not:
– (void) rapidEarsDidDetectFinishedSpeechAsWordArray:(NSArray *)words scoreArray:(NSArray *)scores startTimeArray:(NSArray *)startTimes endTimeArray:(NSArray *)endTimes
– (void) rapidEarsDidDetectLiveSpeechAsWordArray:(NSArray *)words scoreArray:(NSArray *)scores startTimeArray:(NSArray *)startTimes endTimeArray:(NSArray *)endTimes
Is there something special I need to do to get these callbacks?
As you might (or might not) know, I’m doing this with the objective of counting words in a “kind of” real-time way. I experimented with rapidEarsDidReceiveLiveSpeechHypothesis, but it returns a lot of redundant words. I was hoping to use the start/end time arrays as a way of filtering out some of the redundancies.
Thank you Halle
February 5, 2015 at 4:27 pm · #1024709 · Halle Winkler (Politepix)
Sure, the requirements for those callbacks are covered in the OEPocketsphinxController+RapidEars header and docs, so take a look.
February 5, 2015 at 5:12 pm · #1024710 · foobar8675 (Participant)
Thank you Halle.
I believe I have implemented it as per the docs, since the first two delegate callbacks from my initial post are being called. The issue is that the second two delegate callbacks are not being called.
Just to double-check things, I went through the following steps:
– I added the import #import <RapidEarsDemo/OEEventsObserver+RapidEars.h>
– I made sure OEEventsObserver is a property
– I double-checked that I am using startRealtimeListeningWithLanguageModelAtPath instead of startListeningWithLanguageModelAtPath

February 5, 2015 at 5:15 pm · #1024711 · Halle Winkler (Politepix)
OK, from the OEPocketsphinxController+RapidEars header and docs:
/** Setting this to true will cause you to receive your hypotheses as separate words rather than a single NSString. This is a requirement for using OEEventsObserver delegate methods that contain timing or per-word scoring. This can't be used with N-best.*/
- (void) setReturnSegments:(BOOL)returnSegments;
/** Setting this to true will cause you to receive segment hypotheses with timing attached. This is a requirement for using OEEventsObserver delegate methods that contain word timing information. It only works if you have setReturnSegments set to TRUE. This can't be used with N-best.*/
- (void) setReturnSegmentTimes:(BOOL)returnSegmentTimes;
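[Editor's note: taken together, those header comments say the word-array and timing callbacks only fire when segment return is switched on before listening starts. A minimal Swift sketch of that configuration follows; it assumes the OpenEars and RapidEars frameworks are linked, an OEEventsObserver with a delegate is already set up, and lmPath/dicPath are placeholders for your own model file paths.]

```swift
// Sketch only, not from the thread: lmPath and dicPath are placeholders.
let controller = OEPocketsphinxController.sharedInstance()
do {
    // Activate the controller before any other configuration calls.
    try controller.setActive(true)
} catch {
    print("Could not activate OEPocketsphinxController: \(error)")
}
controller.setReturnSegments(true)       // required for the word-array callbacks
controller.setReturnSegmentTimes(true)   // required for the start/end time arrays
controller.startRealtimeListeningWithLanguageModelAtPath(lmPath,
    dictionaryAtPath: dicPath,
    acousticModelAtPath: OEAcousticModel.pathToModel("AcousticModelEnglish"))
```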
February 5, 2015 at 5:32 pm · #1024712 · foobar8675 (Participant)
Whoops. Thank you for pointing that out.
February 6, 2015 at 1:53 pm · #1024733 · Halle Winkler (Politepix)
No prob! Glad it’s working.
May 28, 2016 at 5:20 pm · #1030407 · krniadi (Participant)
Hi, I have a problem in Swift. I can make RapidEars work, but when I use setReturnSegments(true), rapidEarsDidDetectLiveSpeechAsWordArray is never called. This is how I set it up in Swift:
var rapidEventsObserver = OEEventsObserver()
....

func loadOpenEars() {
    rapidEventsObserver = OEEventsObserver()
    self.rapidEventsObserver.delegate = self
    .....
}

func startListening() {
    do {
        try OEPocketsphinxController.sharedInstance().setReturnSegments(true)
        try OEPocketsphinxController.sharedInstance().setReturnSegmentTimes(true)
        try OEPocketsphinxController.sharedInstance().returnNbest = false
        try OEPocketsphinxController.sharedInstance().setRapidEarsReturnNBest(false)
        try OEPocketsphinxController.sharedInstance().setActive(true)
        try OEPocketsphinxController.sharedInstance().setFinalizeHypothesis(false)
    }
    catch {}
    OEPocketsphinxController.sharedInstance().startRealtimeListeningWithLanguageModelAtPath(lmPath, dictionaryAtPath: dicPath, acousticModelAtPath: OEAcousticModel.pathToModel("AcousticModelEnglish"))
}

func rapidEarsDidDetectLiveSpeechAsWordArray(words: [AnyObject]!, andScoreArray scores: [AnyObject]!) {
    print("delegate accessed")
}

func rapidEarsDidDetectFinishedSpeechAsWordArray(words: [AnyObject]!, andScoreArray scores: [AnyObject]!) {
    print("delegate accessed")
}

/** The engine has detected in-progress speech. Words and respective scores and timing are delivered in separate arrays with corresponding indexes. */
func rapidEarsDidDetectLiveSpeechAsWordArray(words: [AnyObject]!, scoreArray scores: [AnyObject]!, startTimeArray startTimes: [AnyObject]!, endTimeArray endTimes: [AnyObject]!) {
    print("delegate accessed")
}

But none of them is being called by RapidEars. Have I missed something?
Thank you Halle
May 28, 2016 at 5:27 pm · #1030408 · Halle Winkler (Politepix)
Welcome,
This needs to precede any calls to OEPocketsphinxController:
try OEPocketsphinxController.sharedInstance().setActive(true)
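[Editor's note: applied to the startListening() function from the post above, that means moving the setActive(true) call to the top of the do block. A rough corrected sketch, keeping the poster's own lmPath/dicPath placeholders and surfacing any error instead of swallowing it in an empty catch:]

```swift
func startListening() {
    do {
        // setActive(true) must precede any other
        // OEPocketsphinxController configuration calls.
        try OEPocketsphinxController.sharedInstance().setActive(true)
        try OEPocketsphinxController.sharedInstance().setReturnSegments(true)
        try OEPocketsphinxController.sharedInstance().setReturnSegmentTimes(true)
        try OEPocketsphinxController.sharedInstance().setRapidEarsReturnNBest(false)
    } catch {
        print("OEPocketsphinxController setup failed: \(error)")
    }
    OEPocketsphinxController.sharedInstance().startRealtimeListeningWithLanguageModelAtPath(lmPath,
        dictionaryAtPath: dicPath,
        acousticModelAtPath: OEAcousticModel.pathToModel("AcousticModelEnglish"))
}
```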
May 28, 2016 at 5:42 pm · #1030409 · krniadi (Participant)
Oh my god, I was not aware of that. Big thank you, Halle. Now it works. :)
May 28, 2016 at 5:48 pm · #1030410 · Halle Winkler (Politepix)
Glad to hear it!
July 21, 2016 at 7:41 am · #1030722 · sid (Participant)
rapidEarsDidDetectLiveSpeechAsWordArray is not called.
I have set setReturnSegments:true
Still having the issue.

July 21, 2016 at 10:46 am · #1030723 · Halle Winkler (Politepix)
Welcome,
This is known to be working fine, so give the documentation (OEPocketsphinxController as well as RapidEars) and this thread a closer look (what you tried isn’t what solved the issue for the other poster) and you’ll get it working.
The topic ‘rapidEarsDidDetectLiveSpeechAsWordArray not being called’ is closed to new replies.