deliverRejectedSpeechInHypotheses: Ignored if model is not generated

    #1026936
    Bruno
    Participant

    Hi,

    I’m using Rejecto plus RapidEars to detect voice commands; my set of supported commands has 6 simple words. I’m trying to avoid regenerating the language model each time I start listening, but I always receive rejected phonemes unless I generate the language model every time.

    I’m running generateRejectingLanguageModelFromArray: only if I can’t get a valid path from pathToSuccessfullyGeneratedXXXXWithRequestedName:.

    If I don’t perform the generation, OEEventsObserver always delivers the rejected hypotheses, no matter what value I pass to my OELanguageModelGenerator instance in deliverRejectedSpeechInHypotheses:.
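
    This is roughly what I’m doing with the generator (a simplified sketch rather than my exact code – the variable name is a placeholder, and the header names may differ between the demo and licensed frameworks):

        #import <OpenEars/OELanguageModelGenerator.h>
        #import <Rejecto/OELanguageModelGenerator+Rejecto.h> // header/framework name may differ (demo vs. licensed)

        OELanguageModelGenerator *languageModelGenerator = [[OELanguageModelGenerator alloc] init];

        // I expected this call on its own to control whether rejected phonemes are
        // delivered in my hypotheses, but it seems to make no difference unless I
        // also run generateRejectingLanguageModelFromArray: in the same session.
        [languageModelGenerator deliverRejectedSpeechInHypotheses:FALSE];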

    Also, I think there is a typo in the docs for deliverRejectedSpeechInHypotheses:. It says:

    …during your app session, it is still necessary to instantiate OELanguageModelGenerator+Rejecto and run deliverRejectedSpeechInHypotheses:FALSE if you want to see the phonemes that Rejecto is rejecting as part of your hypotheses.

    It says that I should pass FALSE to see the rejected phonemes as part of my hypotheses, but it should be TRUE.

    Thanks.

    #1026937
    Halle Winkler
    Politepix

    Welcome Bruno,

    Good to talk to you again. Could you forward to me your demo download email so I can see if it’s actually pointing to the right version download? It’s the email you got when you requested the demo.

    Thanks,

    Halle

    #1026938
    Bruno
    Participant

    Hey,

    I’m working for a client who already had the plugins integrated into the code base some time ago, so I don’t have the email that was used to request the plugins. Sorry.

    I updated the OpenEars framework a couple of weeks ago, but I didn’t get the latest Rejecto and RapidEars plugins. Could that be an issue?

    Also, I’m using Swift 2 and Xcode 7.

    Thanks for your quick answer. Nice that you remember me :)

    Regards,

    #1026939
    Halle Winkler
    Politepix

    Hi,

    No prob. I think that the main issue you’re encountering is that the demo version of Rejecto doesn’t write out a file – it can only be used to dynamically generate models. Currently, the licensed version can produce written out files; this is a small difference between the demo and the licensed version. However, you shouldn’t have to generate them every time you start listening – once in a session should work fine.

    Can you clarify these issues for me a bit more? I’m not quite following them yet:

    I’m running generateRejectingLanguageModelFromArray: only if I can’t get a valid path from pathToSuccessfullyGeneratedXXXXWithRequestedName:.

    If I don’t perform the generation, OEEventsObserver always delivers the rejected hypotheses, no matter what value I pass to my OELanguageModelGenerator instance in deliverRejectedSpeechInHypotheses:.

    #1026940
    Halle Winkler
    Politepix

    To briefly clarify, in order to not have Rejecto phonemes returned, it is necessary to use Rejecto to dynamically generate your models at runtime (this takes no notable time), and to pass the model path to OEPocketsphinxController startListening:etc using pathToSuccessfullyGeneratedLanguageModelWithRequestedName, which will work fine.
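
    Concretely, something along these lines once at the start of your app session should be enough (a rough sketch only – the method name, the example words, the “MyCommands” file name and the exact Rejecto parameter list here are illustrative, so check them against the current Rejecto and RapidEars docs):

        #import <OpenEars/OELanguageModelGenerator.h>
        #import <OpenEars/OEAcousticModel.h>
        #import <OpenEars/OEPocketsphinxController.h>
        #import <Rejecto/OELanguageModelGenerator+Rejecto.h>     // header/framework names may vary with your install
        #import <RapidEars/OEPocketsphinxController+RapidEars.h>

        - (void) startCommandListeningSession { // hypothetical method, called once per app session
            // Generate the Rejecto model dynamically – this takes no notable time.
            OELanguageModelGenerator *generator = [[OELanguageModelGenerator alloc] init];
            NSArray *words = @[@"OPEN", @"CLOSE", @"START", @"STOP", @"UP", @"DOWN"]; // example words – use your 6 commands

            NSError *error = [generator generateRejectingLanguageModelFromArray:words
                                                                  withFilesNamed:@"MyCommands"
                                                          withOptionalExclusions:nil
                                                                 usingVowelsOnly:FALSE
                                                                      withWeight:nil
                                                          forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];

            if(!error) {
                // Pass the freshly generated paths straight to the RapidEars start method.
                NSString *lmPath = [generator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:@"MyCommands"];
                NSString *dicPath = [generator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"MyCommands"];

                [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil];
                [[OEPocketsphinxController sharedInstance] startRealtimeListeningWithLanguageModelAtPath:lmPath
                                                                                        dictionaryAtPath:dicPath
                                                                                     acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
            }
        }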

    #1026941
    Bruno
    Participant

    I think that the main issue you’re encountering is that the demo version of Rejecto doesn’t write out a file – it can only be used to dynamically generate models.

    I get the model files saved to my app Cache folder, so that means I’m not using the demo version, right? If I use them in different app sessions (application runs), I always get the rejected phonemes as hypotheses in the OEEventsObserver callbacks.

    The algorithm I’m using to get the model paths needed to call startRealtimeListeningWithLanguageModelAtPath is (sketched in code after the list):

    • Call pathToSuccessfullyGeneratedXXXXWithRequestedName: for each path needed (model and dictionary in my case)
    • If I get paths to existing files, I use those to call startRealtimeListening...
    • If I don’t get a valid path, I use generateRejectingLanguageModelFromArray: and start over
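
    In simplified form it looks roughly like this (placeholder names, same imports as the snippet above, and the helper at the end is just a stand-in for my generation code):

        - (void) startListeningReusingCachedModelsIfPossible { // hypothetical method name
            OELanguageModelGenerator *generator = [[OELanguageModelGenerator alloc] init];

            NSString *lmPath = [generator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:@"MyCommands"];
            NSString *dicPath = [generator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"MyCommands"];

            // I treat the paths as "valid" if the files already exist on disk from a previous run.
            BOOL filesExist = lmPath != nil && dicPath != nil &&
                              [[NSFileManager defaultManager] fileExistsAtPath:lmPath] &&
                              [[NSFileManager defaultManager] fileExistsAtPath:dicPath];

            if(filesExist) {
                // Reuse the files written in a previous app session – no Rejecto call at all.
                [[OEPocketsphinxController sharedInstance] startRealtimeListeningWithLanguageModelAtPath:lmPath
                                                                                        dictionaryAtPath:dicPath
                                                                                     acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
            } else {
                // No valid paths: run generateRejectingLanguageModelFromArray: and start over.
                [self regenerateModelsAndStartOver]; // hypothetical helper wrapping the generation call
            }
        }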

    Does that make sense?

    To briefly clarify, in order to not have Rejecto phonemes returned, it is necessary to use Rejecto to dynamically generate your models at runtime…

    Does that mean that if I don’t want to receive rejected phonemes I need to always generate the language model? That’s what we’ve been doing so far and it works well; my intention is to avoid generating the model files each time I start listening, mostly because the input parameters never change.

    #1026944
    Halle Winkler
    Politepix

    I get the model files saved to my app Cache folder, so that means I’m not using the demo version, right?

    If it’s a demo, the word ‘demo’ will be in the name of the linked framework.

    If I use them in different app sessions (application runs), I always get the rejected phonemes as hypotheses in the OEEventsObserver callbacks.

    Right, that is expected because you have to be running Rejecto in order for it to do things. Since it doesn’t take any notable time to create the models once at the start of your app session, it’s necessary to let Rejecto create its models.

    One big reason for this is that it is the only way you will pick up many fixes and improvements in future updates, since with Rejecto they are frequently in model generation (this also includes any lm-related changes to OpenEars which affect Rejecto, though).

    The biggest reason is that the Rejecto implementation (which does other things besides output a language model) is designed around the hard requirement that Rejecto be instantiated in any session where its features are needed (like in this case), so if it isn’t, you will encounter strange outcomes like this one because the plugin’s functionality has effectively been shut off.

    #1026966
    Bruno
    Participant

    Thanks for the explanation. I’m generating the models each time I need them now and it’s working as expected.

    #1026967
    Halle Winkler
    Politepix

    That’s great, I’m glad it’s working for you.
