rapidEarsDidDetectLiveSpeechAsWordArray not being called

    #1024707
    foobar8675
    Participant

    I have RapidEars implemented and can see that the following two methods are being called:

    - (void) rapidEarsDidReceiveLiveSpeechHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore
    - (void) rapidEarsDidReceiveFinishedSpeechHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore

    while the following two are not:

    - (void) rapidEarsDidDetectFinishedSpeechAsWordArray:(NSArray *)words scoreArray:(NSArray *)scores startTimeArray:(NSArray *)startTimes endTimeArray:(NSArray *)endTimes

    - (void) rapidEarsDidDetectLiveSpeechAsWordArray:(NSArray *)words scoreArray:(NSArray *)scores startTimeArray:(NSArray *)startTimes endTimeArray:(NSArray *)endTimes

    Is there something special I need to do to get these callbacks?

    As you may (or may not) know, my goal is to count words in a ‘kind of’ real-time way. I experimented with rapidEarsDidReceiveLiveSpeechHypothesis, but it returns a lot of redundant words, so I was hoping to use the start/end time arrays to filter out some of the redundancies.
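
    To illustrate, the idea is to only count a word when its start time falls after the end time of the last word already counted, so repeated live hypotheses are skipped. A hypothetical sketch, in Swift (lastCountedEndTime and wordCount are just illustrative properties, nothing from the docs):

    var lastCountedEndTime = 0.0
    var wordCount = 0

    // Hypothetical filtering idea: count only words whose start time falls
    // after the end time of the last word already counted.
    func rapidEarsDidDetectLiveSpeechAsWordArray(words: [AnyObject]!, scoreArray scores: [AnyObject]!, startTimeArray startTimes: [AnyObject]!, endTimeArray endTimes: [AnyObject]!) {
        guard let starts = startTimes as? [NSNumber], ends = endTimes as? [NSNumber] else { return }
        for (index, start) in starts.enumerate() where start.doubleValue > lastCountedEndTime {
            wordCount += 1
            lastCountedEndTime = ends[index].doubleValue
        }
    }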

    Thank you Halle

    #1024709
    Halle Winkler
    Politepix

    Sure, the requirements for those callbacks are covered in the OEPocketsphinxController+RapidEars header and docs, so take a look.

    #1024710
    foobar8675
    Participant

    Thank you Halle.

    I believe I have implemented it as per the docs, since the first two delegate callbacks from my initial post are being called. The issue is that the second two are not being called.

    Just to double-check things, I went through the following steps:

    – added the #import <RapidEarsDemo/OEEventsObserver+RapidEars.h> header
    – made sure my OEEventsObserver is a property
    – double-checked that I am using startRealtimeListeningWithLanguageModelAtPath instead of startListeningWithLanguageModelAtPath

    #1024711
    Halle Winkler
    Politepix

    OK, from the OEPocketsphinxController+RapidEars header and docs:

    /** Setting this to true will cause you to receive your hypotheses as separate words rather than a single NSString. This is a requirement for using OEEventsObserver delegate methods that contain timing or per-word scoring. This can't be used with N-best.*/
    - (void) setReturnSegments:(BOOL)returnSegments; 
    /** Setting this to true will cause you to receive segment hypotheses with timing attached. This is a requirement for using OEEventsObserver delegate methods that contain word timing information. It only works if you have setReturnSegments set to TRUE. This can't be used with N-best.*/
    - (void) setReturnSegmentTimes:(BOOL)returnSegmentTimes;
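
    Both of those need to be called on the shared OEPocketsphinxController instance before starting real-time listening. As a minimal sketch in Swift (assuming the standard shared-instance setup, with lmPath and dicPath standing in for your own model paths):

    do {
        // The shared instance has to be activated before anything else is called on it.
        try OEPocketsphinxController.sharedInstance().setActive(true)
    } catch {
        print("Error activating OEPocketsphinxController: \(error)")
    }
    OEPocketsphinxController.sharedInstance().setReturnSegments(true)      // required for the word-array callbacks
    OEPocketsphinxController.sharedInstance().setReturnSegmentTimes(true)  // additionally required for word timing
    OEPocketsphinxController.sharedInstance().startRealtimeListeningWithLanguageModelAtPath(lmPath, dictionaryAtPath: dicPath, acousticModelAtPath: OEAcousticModel.pathToModel("AcousticModelEnglish"))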

    #1024712
    foobar8675
    Participant

    Whoops. Thank you for pointing that out.

    #1024733
    Halle Winkler
    Politepix

    No prob! Glad it’s working.

    #1030407
    krniadi
    Participant

    Hi, I have a problem in Swift. I can make RapidEars work, but when I use setReturnSegments(true), rapidEarsDidDetectLiveSpeechAsWordArray is never called. This is how I set it up in Swift:

    var rapidEventsObserver = OEEventsObserver()
    ….
    func loadOpenEars() {
        rapidEventsObserver = OEEventsObserver()
        self.rapidEventsObserver.delegate = self
        …..
    }

    func startListening() {
        do {
            try OEPocketsphinxController.sharedInstance().setReturnSegments(true)
            try OEPocketsphinxController.sharedInstance().setReturnSegmentTimes(true)
            try OEPocketsphinxController.sharedInstance().returnNbest = false
            try OEPocketsphinxController.sharedInstance().setRapidEarsReturnNBest(false)
            try OEPocketsphinxController.sharedInstance().setActive(true)
            try OEPocketsphinxController.sharedInstance().setFinalizeHypothesis(false)
        }
        catch {

        }
        OEPocketsphinxController.sharedInstance().startRealtimeListeningWithLanguageModelAtPath(lmPath, dictionaryAtPath: dicPath, acousticModelAtPath: OEAcousticModel.pathToModel("AcousticModelEnglish"))
    }

    func rapidEarsDidDetectLiveSpeechAsWordArray(words: [AnyObject]!, andScoreArray scores: [AnyObject]!) {
        print("delegate accessed")
    }

    func rapidEarsDidDetectFinishedSpeechAsWordArray(words: [AnyObject]!, andScoreArray scores: [AnyObject]!) {
        print("delegate accessed")
    }

    /** The engine has detected in-progress speech. Words and respective scores and timing are delivered in separate arrays with corresponding indexes. */
    func rapidEarsDidDetectLiveSpeechAsWordArray(words: [AnyObject]!, scoreArray scores: [AnyObject]!, startTimeArray startTimes: [AnyObject]!, endTimeArray endTimes: [AnyObject]!) {
        print("delegate accessed")
    }

    But none of them are being called by RapidEars. Have I missed something?

    Thank you Halle

    #1030408
    Halle Winkler
    Politepix

    Welcome,

    This needs to precede any calls to OEPocketsphinxController:

    try OEPocketsphinxController.sharedInstance().setActive(true)
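
    Applied to your startListening() above, the corrected ordering would look something like this (assuming setActive(true) is the only throwing call there, the other trys can also be dropped):

    func startListening() {
        do {
            // setActive(true) must come first, before any other OEPocketsphinxController call.
            try OEPocketsphinxController.sharedInstance().setActive(true)
        }
        catch {
            print("Error activating OEPocketsphinxController: \(error)")
        }
        OEPocketsphinxController.sharedInstance().setReturnSegments(true)
        OEPocketsphinxController.sharedInstance().setReturnSegmentTimes(true)
        OEPocketsphinxController.sharedInstance().returnNbest = false
        OEPocketsphinxController.sharedInstance().setRapidEarsReturnNBest(false)
        OEPocketsphinxController.sharedInstance().setFinalizeHypothesis(false)
        OEPocketsphinxController.sharedInstance().startRealtimeListeningWithLanguageModelAtPath(lmPath, dictionaryAtPath: dicPath, acousticModelAtPath: OEAcousticModel.pathToModel("AcousticModelEnglish"))
    }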

    #1030409
    krniadi
    Participant

    Oh my god, I was not aware of that. Big thank you, Halle. Now it works. :)

    #1030410
    Halle Winkler
    Politepix

    Glad to hear it!

    #1030722
    sid
    Participant

    rapidEarsDidDetectLiveSpeechAsWordArray is not being called.

    I have set setReturnSegments:true and am still having the issue.

    #1030723
    Halle Winkler
    Politepix

    Welcome,

    This is known to be working fine, so give the documentation (OEPocketsphinxController as well as RapidEars) and this thread a closer look (what you tried isn’t what solved the issue for the other poster) and you’ll get it working.
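
    To recap the requirements collected in this thread as a minimal Swift sketch (lmPath and dicPath stand in for real paths, and the word-array delegate methods shown earlier in the thread still have to be implemented by the observer's delegate):

    // The observer must be kept in a property so it isn't deallocated.
    let rapidEventsObserver = OEEventsObserver()

    func startListening() {
        rapidEventsObserver.delegate = self
        do {
            // Activation comes first, before any other OEPocketsphinxController call.
            try OEPocketsphinxController.sharedInstance().setActive(true)
        }
        catch {
            print("Error activating OEPocketsphinxController: \(error)")
        }
        OEPocketsphinxController.sharedInstance().setReturnSegments(true)        // word-array callbacks
        OEPocketsphinxController.sharedInstance().setReturnSegmentTimes(true)    // word-timing callbacks
        OEPocketsphinxController.sharedInstance().setRapidEarsReturnNBest(false) // segments can't be combined with N-best
        OEPocketsphinxController.sharedInstance().startRealtimeListeningWithLanguageModelAtPath(lmPath, dictionaryAtPath: dicPath, acousticModelAtPath: OEAcousticModel.pathToModel("AcousticModelEnglish"))
    }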

The topic ‘rapidEarsDidDetectLiveSpeechAsWordArray not being called’ is closed to new replies.