How to make OpenEars detect single words only, not phrases.


Viewing 4 posts - 1 through 4 (of 4 total)

  • Author
    Posts
  • #1020577
    Xiangxin
    Participant

    I’m currently using the dynamic language model.

    
    NSArray *languageArray = [NSArray arrayWithObjects:
                                  @"UP",
                                  @"DOWN",
                                  @"FIRE",
                                  nil];
    LanguageModelGenerator *languageModelGenerator = [[LanguageModelGenerator alloc] init];
    NSError *error = [languageModelGenerator generateLanguageModelFromArray:languageArray
                                                             withFilesNamed:@"OpenEarsDynamicGrammar"
                                                     forAcousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]];

    It automatically detects combinations of the words in my vocabulary. This can introduce a delay while it waits for the next word. Is there a way to make OpenEars detect single words only?

    Thanks!

    #1020580
    Halle Winkler
    Politepix

    Welcome,

    Sorry, I don’t quite understand the question yet. Is the wish that the recognition engine would only attempt to listen for the first word-like utterance a user makes and then ignore subsequent ones?

    #1020582
    Xiangxin
    Participant

    Yes. I just want one word, no phrase or sentence. Is that possible?

    #1020583
    Halle Winkler
    Politepix

    The issue is that there is no concept of a word for the speech recognition engine until it has already found a hypothesis. Until it has searched for a hypothesis, it is just dealing with a set of phonemes in the utterance that it found, and the utterance is defined by speech followed by a pause. After it attempts to recognize the utterance, there is a concept of a word because the phonemes in the utterance were matched with words with a high probability of being the spoken words. So concepts like “one word” would have to come into effect after the speech utterance has been analyzed by the engine, not before.

    What you could do if you only want to know the first word a user said is to take the delivered hypothesis (either in OpenEars or RapidEars) and split it into an array along the whitespace separator and then just use the word at index zero.
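    For example, the splitting step might look like this once a hypothesis has been delivered (a sketch; the `hypothesis` string here is a stand-in for whatever OpenEars or RapidEars actually delivers to your delegate callback):

    ```objc
    // Given a delivered hypothesis such as @"UP DOWN FIRE",
    // split on whitespace and keep only the first word.
    NSString *hypothesis = @"UP DOWN FIRE"; // placeholder for the delivered hypothesis
    NSArray *words = [hypothesis componentsSeparatedByCharactersInSet:
                          [NSCharacterSet whitespaceCharacterSet]];
    NSString *firstWord = [words firstObject]; // @"UP"
    ```

    Splitting on a whitespace character set rather than a literal @" " is slightly more robust if the hypothesis ever contains multiple spaces.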
