Applying a tricky format to JSGF grammar file in OpenEars stack usage

Viewing 6 posts - 1 through 6 (of 6 total)

    #1025786
    wuwen128
    Participant

    If I generate a grammar file for the sentence “SO HOW LONG HAVE YOU BEEN FEELING LIKE THIS” like this:

    @{
    OneOfTheseWillBeSaidOnce = (
    "SO HOW LONG HAVE YOU BEEN FEELING LIKE THIS",
    "HOW LONG HAVE YOU BEEN FEELING LIKE THIS",
    "LONG HAVE YOU BEEN FEELING LIKE THIS",
    "HAVE YOU BEEN FEELING LIKE THIS",
    "YOU BEEN FEELING LIKE THIS",
    "BEEN FEELING LIKE THIS",
    "FEELING LIKE THIS",
    "LIKE THIS",
    "THIS"
    );
    }

    What do you think of this? Would using this grammar file as the language model give a higher recognition rate than using a DMP + ARPA language model?

    This applies to any OpenEars version.
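A small sketch of how the trailing-subphrase list above could be generated programmatically (in Python, purely for illustration; `trailing_subphrases` is a hypothetical helper, not part of OpenEars):

```python
def trailing_subphrases(sentence):
    """Return the full sentence plus each suffix produced by dropping
    leading words one at a time, as in the grammar alternatives above."""
    words = sentence.split()
    return [" ".join(words[i:]) for i in range(len(words))]

phrases = trailing_subphrases("SO HOW LONG HAVE YOU BEEN FEELING LIKE THIS")
for p in phrases:
    print(p)
```

For the nine-word sentence this yields the same nine alternatives listed in the grammar, from the full phrase down to "THIS".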

    #1025788
    Halle Winkler
    Politepix

    Hi,

    What is the UX purpose of using this kind of grammar?

    #1025793
    wuwen128
    Participant

    The purpose is to recognise the words of a sentence individually, not only in sequence, just as when using .DMP + .arpa files.

    The point is that using a JSGF .gram file as the language model will only ever return sequential sub-phrases of the sentence.

    #1025795
    Halle Winkler
    Politepix

    What is the advantage over ARPA? A grammar will be slower than ARPA to return a hypothesis, while an ARPA model is not limited to words in sequence – it can still deliver clusters of words out of sequence over multiple hypotheses.
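The distinction can be sketched conceptually (in Python, for illustration only – this is not the PocketSphinx internals): a grammar accepts only the phrases it lists, while an n-gram (ARPA) model, via backoff, can assign some probability to any sequence of in-vocabulary words, so out-of-sequence clusters remain reachable.

```python
# A grammar defines a closed set of allowable utterances.
GRAMMAR_PHRASES = {"SO HOW LONG HAVE YOU BEEN FEELING LIKE THIS"}

# An n-gram model knows a vocabulary and scores any sequence over it.
VOCAB = set("SO HOW LONG HAVE YOU BEEN FEELING LIKE THIS".split())

def grammar_accepts(utterance):
    # Strict: the whole utterance must match a listed phrase.
    return utterance in GRAMMAR_PHRASES

def arpa_can_score(utterance):
    # Permissive: any in-vocabulary word sequence gets a (possibly low) score.
    return all(w in VOCAB for w in utterance.split())

print(grammar_accepts("SO LONG YOU FEELING THIS"))  # False
print(arpa_can_score("SO LONG YOU FEELING THIS"))   # True
```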

    #1025799
    wuwen128
    Participant

    If I generate this:
    @{
    ThisWillBeSaidOnce = (
    "SO HOW LONG HAVE YOU BEEN FEELING LIKE THIS");
    }

    And speak "SO LONG YOU FEELING THIS", OpenEars will never output a hypothesis like "SO LONG YOU FEELING THIS", no matter how good the pronunciation is. I guess that is because it doesn't have DTW as a preprocessing module?

    BTW, I am always using

    pocketsphinxController.pathToTestFile = wavFilePath;

    and

    [self.pocketsphinxController startListeningWithLanguageModelAtPath:self.pathToGrammarToStartAppWith
                                                      dictionaryAtPath:self.pathToDictionaryToStartAppWith
                                                   acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]
                                                   languageModelIsJSGF:YES];

    #1025800
    Halle Winkler
    Politepix

    If I generate this:
    @{
    ThisWillBeSaidOnce = (
    "SO HOW LONG HAVE YOU BEEN FEELING LIKE THIS");
    }

    And speak "SO LONG YOU FEELING THIS", OpenEars will never output a hypothesis like "SO LONG YOU FEELING THIS", no matter how good the pronunciation is.

    This is the correct behavior – the purpose of a grammar is to restrict utterances to the allowable phrases in the grammar. If you want to be able to recognize various subsets of a phrase, a language model is the correct approach.
