Recognizing words which aren't in the vocabulary


  • #1018841
    quietp3512
    Participant

    Will RapidEars recognize whatever words we have spoken, or will it only recognize a group of words which is already defined inside a text file?

    #1018842
    Halle Winkler
    Politepix

    Welcome,

    RapidEars is a plugin for OpenEars which extends its functionality in one specific way, by doing recognition in realtime as the user is speaking instead of performing recognition after the user has completed their speech and paused for a certain period of time. To learn about how OpenEars uses a vocabulary that you define dynamically at runtime from an array of words or phrases, there is a good explanation in the documentation about the basics of offline speech recognition from the main OpenEars page and examples in the OpenEars tutorial and the sample app that ships with the OpenEars distribution.
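
    To make that concrete, here is a rough sketch of the dynamic-vocabulary workflow in Objective-C. The method names follow the OpenEars 1.x tutorial style used elsewhere in this thread, but the accessors for the generated files and the RapidEars start method have changed between versions, so treat this as an illustration and check the tutorial and RapidEars documentation for the version you have installed (self.pocketsphinxController is assumed to be a PocketsphinxController property set up as in the sample app):

    // Build a language model and phonetic dictionary at runtime from an array of words/phrases.
    LanguageModelGenerator *lmGenerator = [[LanguageModelGenerator alloc] init];
    NSArray *words = [NSArray arrayWithObjects:@"HELLO", @"GOODBYE", @"CHANGE SCREEN", nil];
    NSString *name = @"MyLanguageModelFiles";

    NSError *err = [lmGenerator generateLanguageModelFromArray:words
                                                 withFilesNamed:name
                                         forAcousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]];

    if([err code] == noErr) {
        // Retrieve the paths to the generated files (accessor names vary by OpenEars version).
        NSString *lmPath = [lmGenerator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:name];
        NSString *dicPath = [lmGenerator pathToSuccessfullyGeneratedDictionaryWithRequestedName:name];

        // With the RapidEars plugin installed, start realtime listening; partial hypotheses are
        // then delivered through the OpenEarsEventsObserver delegate while the user is speaking.
        [self.pocketsphinxController startRealtimeListeningWithLanguageModelAtPath:lmPath
                                                                  dictionaryAtPath:dicPath
                                                               acousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]];
    } else {
        NSLog(@"Language model generation error: %@", [err localizedDescription]);
    }

    Nothing outside the words array can be returned as a hypothesis, which is the point of the next paragraph.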

    The short answer is that all speech recognition uses a predefined list of words in a text file or memory structure of some kind. However, cloud-based speech recognition is able to use a much larger word list than recognition that is being performed on a handheld device because of the greater CPU and memory resources available, meaning that it can give the impression of recognizing “everything a user says”, while an offline SDK such as OpenEars needs to be used with a smaller chosen word set that applies to the specific task of the application rather than being generalized to any application.

    #1018843
    quietp3512
    Participant

    Is there a method to get the words even if the word doesn’t match?

    #1018844
    Halle Winkler
    Politepix

    No, there is no kind of speech recognition that can recognize words that aren’t defined somewhere within the recognition system, or in a data source that the recognition system has access to.

    #1018845
    quietp3512
    Participant

    Actually, can’t we get the word we spoke?

    #1018846
    Halle Winkler
    Politepix

    I’m sorry, I don’t quite understand how your last question is different from the two I just answered. If it is a new question but I’m not quite following it, could you please clarify it further so I can help? Otherwise I think it may have already been addressed above in my explanation and in the links I gave, and also in the documentation, sample app, and the tutorial for OpenEars, which all talk about the kind of predefined-vocabulary recognition that OpenEars is designed for. Thanks for your interest!

    #1018847
    quietp3512
    Participant

    If a person speaks a word, I would like to get the word which the person spoke. Is there a way to get it? If I speak out “WORD”, I should get WORD.

    #1018848
    Halle Winkler
    Politepix

    OK, take a look at my answer above.

    #1019027
    man oram
    Participant

    There seem to be problems with the Rejecto programming: it is responding to words not in its vocabulary, when the whole reason the software is called Rejecto is that it should reject any word not in its vocabulary. For example, when I say “Australia” or “My name is Geoff”, custom message 1 is recognized, even though those words are not in the vocabulary. Please help with this.

    See my Rejecto code:

    LanguageModelGenerator *lmGenerator = [[LanguageModelGenerator alloc] init];
    NSArray *words = [NSArray arrayWithObjects:@"CUSTOMONE", @"CUSTOMTWO", @"CUSTOMTHREE",
                      @"THANKYOUSCREEN", @"HOMESCREEN", @"TOOCLOSESCREEN", nil];
    NSString *name = @"NameIWantForMyLanguageModelFiles";
    NSError *err = [lmGenerator generateRejectingLanguageModelFromArray:words
                                                         withFilesNamed:name
                                                 withOptionalExclusions:nil
                                                        usingVowelsOnly:TRUE
                                                             withWeight:nil
                                                 forAcousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]];

    Mano

    #1019028
    Halle Winkler
    Politepix

    Welcome Mano,

    Speech recognition is a complex application and depending on the requirements of the app, Rejecto may or may not work right out of the box in the way you’re expecting. It isn’t a scam or a programming error, it’s just the usual challenges of machine perception that are the reasons we don’t have perfect universal speech recognition working without the network in our phones already.

    The language model generation command has several arguments that are designed to let its behavior be customized to a particular vocabulary for best rejection performance, which you can read about in its documentation. The argument you might want to check out first is withWeight:.
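
    For illustration only, here is the same call from your snippet with an explicit weight instead of nil, reusing your lmGenerator, words and name variables. The NSNumber form and the value 1.5 are assumptions meant to show where the tuning happens, not a recommended setting; the right weight depends on your vocabulary and has to be found by testing against recordings of in-vocabulary and out-of-vocabulary speech:

    // Hypothetical starting point: pass an explicit rejection weight rather than nil (the default).
    // 1.5 is a placeholder value to tune up or down against your own test material.
    NSError *err = [lmGenerator generateRejectingLanguageModelFromArray:words
                                                         withFilesNamed:name
                                                 withOptionalExclusions:nil
                                                        usingVowelsOnly:TRUE // another argument worth experimenting with, per the docs
                                                             withWeight:[NSNumber numberWithFloat:1.5]
                                                 forAcousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]];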

    In general, offline speech in an app is the kind of project where you’ll be happier with the results if you go into it expecting to spend a bit of time testing, refining, asking constructive questions and reading the docs, because every application is different and approaches that work well for one might need to be altered for the next. Thanks for giving Rejecto a try!
