How to Initially set language model without listening?


Viewing 5 posts - 1 through 5 (of 5 total)


    I have a place where I initially set the language model, and one where I change it.

    When I initially set it I use: startListeningWithLanguageModelAtPath
    When I change it later, I use: changeLanguageModelToFile

    My problem is that I need to be able to do this without actually starting voice recognition yet — I want recognition to begin only when I later resume or call some other method that starts it.

    Is there some way to do this? Those calls take several seconds to run, and I’m trying to set everything up in advance so that recognition can be enabled instantly when needed.

    Halle Winkler

    Hi Lann,

    Switching language models shouldn’t take a notable amount of time under normal circumstances. Are you sure it isn’t some other part of the logical flow that is taking the time?

    For instance, the initial startup call takes time because of starting the audio driver and calibrating. But the part where the language model is loaded isn’t a significant part of its duration, so setting it in advance of starting wouldn’t make a UX difference.
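    One quick way to confirm where the time goes is to timestamp around the call (a sketch, not part of the original posts; `CACurrentMediaTime()` is a standard QuartzCore timing function, and the controller/path property names are the poster’s placeholders):

    ```objectivec
    #import <QuartzCore/QuartzCore.h> // for CACurrentMediaTime()

    // Measure how long the blocking portion of the start call takes.
    CFTimeInterval t0 = CACurrentMediaTime();
    [self.pocket_sphinx_controller startListeningWithLanguageModelAtPath:self.path_to_dynamic_language_model
                                                        dictionaryAtPath:self.path_to_dynamic_grammar
                                                     acousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]
                                                     languageModelIsJSGF:NO];
    NSLog(@"startListening call took %.2f s", CACurrentMediaTime() - t0);
    ```

    If most of the measured time sits in the start call rather than in model generation or switching, that points at audio-driver startup and calibration rather than language-model loading.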

    If loading/switching the models is really taking a lot of time, maybe there is an unusual issue of some kind. What in the logging is leading to that impression?


    My logs show that it takes 4 seconds to execute the following statement on an iPhone 4S:

    [self.pocket_sphinx_controller startListeningWithLanguageModelAtPath:self.path_to_dynamic_language_model
                                                        dictionaryAtPath:self.path_to_dynamic_grammar
                                                     acousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]
                                                     languageModelIsJSGF:NO];

    Here is how the language model was earlier initialized (if this is relevant):

    NSError *error = [self.language_model_generator generateRejectingLanguageModelFromArray:languageArray
                                                                    forAcousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]];

    [self.language_model_generator deliverRejectedSpeechInHypotheses:TRUE];


    Oh, maybe the issue is that I’m using startListeningWithLanguageModelAtPath every time I change the language model. Shouldn’t I be using changeLanguageModelToFile on successive calls?

    Halle Winkler

    That’s correct. startListeningWithLanguageModelAtPath only needs to be called once per listening session, because it starts the recognition engine and the audio driver and calibrates the speech/silence levels to the room. Once listening has started, use changeLanguageModelToFile to switch models within the same session; the switch should be more or less instant.
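    Put together, the intended session flow looks roughly like this (a sketch based on the method names in this thread; the `path_to_other_*` properties are hypothetical, and the exact parameter list of changeLanguageModelToFile may differ by OpenEars version, so verify against your PocketsphinxController header):

    ```objectivec
    // Once per listening session: starts the engine and audio driver and
    // calibrates to the room -- this is the slow part.
    [self.pocket_sphinx_controller startListeningWithLanguageModelAtPath:self.path_to_dynamic_language_model
                                                        dictionaryAtPath:self.path_to_dynamic_grammar
                                                     acousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]
                                                     languageModelIsJSGF:NO];

    // Within the running session: swap vocabularies near-instantly instead of
    // restarting. (Hypothetical paths; the dictionary argument is an assumption
    // modeled on the start call above.)
    [self.pocket_sphinx_controller changeLanguageModelToFile:self.path_to_other_language_model
                                              withDictionary:self.path_to_other_grammar];
    ```

    The design point is that the expensive work (audio driver startup and calibration) happens once at session start, so switching models later carries almost none of that cost.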
