lm/dic files



#1024051
    MissKitty
    Participant

I have tennis.lm and tennis.dic files generated using http://www.speech.cs.cmu.edu/tools/lmtool-new.html.

I am at a loss as to where OpenEars wants these files to be placed. Below is my ViewController code:

    [OEPocketsphinxController sharedInstance].returnNullHypotheses = TRUE;

    [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil];

NSString *correctPathToMyLanguageModelFile = [NSString stringWithFormat:@"%@/tennis%@",[NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES) objectAtIndex:0],@"gram"];

    NSString *lmPath = correctPathToMyLanguageModelFile;
    NSString *dicPath = correctPathToMyLanguageModelFile;

[[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:lmPath dictionaryAtPath:dicPath acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:NO];

Below are the messages I get when testing:

    startListeningWithLanguageModelAtPath:(NSString *)languageModelPath dictionaryAtPath:(NSString *)dictionaryPath acousticModelAtPath:(NSString *)acousticModelPath languageModelIsJSGF:(BOOL)languageModelIsJSGF

    with a languageModelPath which is nil. If your call to OELanguageModelGenerator did not return an error when you generated this language model, that means the correct path to your language model that you should pass to this method’s languageModelPath argument is as follows:

NSString *correctPathToMyLanguageModelFile = [NSString stringWithFormat:@"%@/TheNameIChoseForMyLanguageModelAndDictionaryFile.%@",[NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES) objectAtIndex:0],@"DMP"];

    Feel free to copy and paste this code for your path to your language model, but remember to replace the part that says “TheNameIChoseForMyLanguageModelAndDictionaryFile” with the name you actually chose for your language model and dictionary file or you will get this error again.

Could you point me to an example/doc for using lm/dic files?

    #1024052
    Halle Winkler
    Politepix

OpenEars does its own language model generation, so take a look at the tutorial to learn how to generate language models with the OELanguageModelGenerator class:

    https://www.politepix.com/openears/tutorial

    There is also a clear example of generating language models using OELanguageModelGenerator in the sample app.
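
For reference, the basic pattern from the tutorial looks roughly like this (a minimal sketch; the word list and the requested file name below are placeholders to replace with your own):

OELanguageModelGenerator *lmGenerator = [[OELanguageModelGenerator alloc] init];

NSArray *words = @[@"BACKHAND", @"FOREHAND", @"LOVE", @"THIRTY"]; // placeholder vocabulary
NSString *name = @"TennisLanguageModel"; // placeholder name for the generated files

// Generate the language model and dictionary files in the Caches directory.
NSError *error = [lmGenerator generateLanguageModelFromArray:words withFilesNamed:name forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];

NSString *lmPath = nil;
NSString *dicPath = nil;

if(error == nil) {
// Ask the generator for the exact paths to pass to startListeningWithLanguageModelAtPath:dictionaryAtPath:…
lmPath = [lmGenerator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:name];
dicPath = [lmGenerator pathToSuccessfullyGeneratedDictionaryWithRequestedName:name];
} else {
NSLog(@"Error generating language model: %@", [error localizedDescription]);
}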

    #1024054
    MissKitty
    Participant

My LM consists of 329 sentences and 1289 words. It runs fine using Pocketsphinx on Android, and I get excellent recognition. I can write some code to convert this file to your format so that I will only have to maintain one LM file. Do you see any problem with this approach?

    #1024055
    Halle Winkler
    Politepix

    I’d recommend going ahead and setting it up with the built-in language model generator so it’s possible for you to use the existing documentation and tutorials. There are many possible approaches that would theoretically work, but unfortunately the resources are not there on my end to create new custom documentation and troubleshooting on demand for porting from Android without domain knowledge of Objective-C or Cocoa.

    #1024057
    MissKitty
    Participant

The question I am asking is whether OpenEars will have any problem with an LM of my size. Android has nothing to do with it, except that it works on Android.

    #1024058
    Halle Winkler
    Politepix

    Without knowing anything about your app or your language model, I don’t have any insight into the likely user experience with your model. It will only take ~10 minutes to test it out for yourself using the tutorial tool, so go ahead and give it a try.

    #1024059
    Halle Winkler
    Politepix

    (Keeping in mind that accuracy should only be evaluated on an actual device rather than the Simulator).

    #1024114
    MissKitty
    Participant

I just bought a new Mac, transferred my source, and re-downloaded OpenEars. Xcode 6.1.1 in both cases. It was working on the old Mac. I now have a new problem: it cannot find
OpenEars/OEEventsObserver.h

The Framework Search Path is the same as on the old Mac except for the user name change. Any suggestions? I will not have access to the old Mac after today.

    #1024115
    Halle Winkler
    Politepix

    Sorry, no suggestions if you’ve verified the correct framework search path. It could help to see your complete copy/pasted error rather than a rephrase.

    #1024143
    MissKitty
    Participant

I have speech recognition now working with the large LM. My app is designed to start speech recognition when the user presses a button and stop when a hypothesis is returned. In the code below I am doing resumeRecognition and suspendRecognition. I then do multiple speaks via Flite.
1. From the log below it appears that every time Flite has finished speaking, Pocketsphinx has resumed recognition. Is there a way to prevent this?
2. From the log below I use Flite to say “error netfor: hand”, then “Love”, and finally “Thirty”. I only hear “error netfor: hand”. Is there a way to wait after each phrase to make sure speech is complete, as there is with Google TTS?
3. I am using the punctuation “:” in speech. Is this any problem for Flite?

    **************************** xcode log *********************************
    2015-01-08 05:54:25.777 MTC[1257:91066] MTC button pressed.
    2015-01-08 05:54:26.026 MTC[1257:91066] MTC button released.
    2015-01-08 05:54:26.026 MTC[1257:91066] Local callback: Pocketsphinx has resumed recognition.
    2015-01-08 05:54:26.890 MTC[1257:91066] Local callback: Pocketsphinx has detected speech.
    2015-01-08 05:54:28.307 MTC[1257:91066] Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    2015-01-08 05:54:28.313 MTC[1257:91066] MTC to gotWords
    2015-01-08 05:54:28.314 MTC[1257:91066] error netfor: hand
    2015-01-08 05:54:28.354 MTC[1257:91066] Love
    2015-01-08 05:54:28.377 MTC[1257:91066] Thirty
    2015-01-08 05:54:28.400 MTC[1257:91066] Local callback: Pocketsphinx has suspended recognition.
    2015-01-08 05:54:28.401 MTC[1257:91066] Local callback: Flite has started speaking
    2015-01-08 05:54:28.401 MTC[1257:91066] Local callback: Flite has started speaking
    2015-01-08 05:54:28.401 MTC[1257:91066] Local callback: Flite has started speaking
    2015-01-08 05:54:28.401 MTC[1257:91066] Local callback: Pocketsphinx has suspended recognition.
    2015-01-08 05:54:28.401 MTC[1257:91066] Local callback: Pocketsphinx has suspended recognition.
    2015-01-08 05:54:28.401 MTC[1257:91066] Local callback: Pocketsphinx has suspended recognition.
    2015-01-08 05:54:29.946 MTC[1257:91066] Local callback: Flite has finished speaking
    2015-01-08 05:54:29.946 MTC[1257:91066] Local callback: Flite has finished speaking
    2015-01-08 05:54:29.946 MTC[1257:91066] Local callback: Pocketsphinx has resumed recognition.
    2015-01-08 05:54:29.946 MTC[1257:91066] Local callback: Pocketsphinx has resumed recognition.
    2015-01-08 05:54:30.100 MTC[1257:91066] Local callback: Pocketsphinx has detected speech.
    2015-01-08 05:54:30.758 MTC[1257:91066] Local callback: Flite has finished speaking
    2015-01-08 05:54:30.759 MTC[1257:91066] Local callback: Pocketsphinx has resumed recognition.

    ****************** in didViewLoad ***************************

    // This is how to start the continuous listening loop of an available instance of OEPocketsphinxController.

    [OEPocketsphinxController sharedInstance].returnNullHypotheses = TRUE;

    [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil];

    if(![OEPocketsphinxController sharedInstance].isListening) {
[[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:FALSE];

    // Start speech recognition if we aren’t already listening.
    }

    [self startDisplayingLevels];

    // This suspends listening without ending the recognition loop

    [[OEPocketsphinxController sharedInstance] suspendRecognition];

    ************************* end didViewLoad

- (IBAction)buttonDown:(id)sender {

NSLog(@" MTC button pressed.");

    }

- (IBAction)buttonUp:(id)sender {

NSLog(@" MTC button released.");

    AudioServicesPlaySystemSound(1005);

    [OEPocketsphinxController sharedInstance].returnNullHypotheses = TRUE;

    [[OEPocketsphinxController sharedInstance] resumeRecognition];

    }

    -(void) handleLongPress : (id)sender
    {
    //Long Press done by the user

NSLog(@" MTC long press");

    }

- (void)speakWithNSString:(NSString *)text {

self.fliteController = [[OEFliteController alloc] init];
self.slt = [[Slt alloc] init];

NSLog(@"%@", text);

[self.fliteController say:[NSString stringWithFormat:@" %@", text] withVoice:self.slt];
}

- (void)myLogWithNSString:(NSString *)text {

NSLog(@"%@", text);
}
    //

- (void) pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore utteranceID:(NSString *)utteranceID {

// This suspends listening without ending the recognition loop

[[OEPocketsphinxController sharedInstance] suspendRecognition];

NSLog(@" MTC to gotWords");

MTCccActivity *theInstance = [MTCccActivity getInstance];
[theInstance gotWordsWithNSString:hypothesis];
}

    @end

    #1024148
    Halle Winkler
    Politepix

1. From the log below it appears that every time Flite has finished speaking, Pocketsphinx has resumed recognition. Is there a way to prevent this?

No, but you can just submit one complete utterance to OEFliteController rather than multiple ones in a row if you don’t want suspending and resuming between them. Or, if for whatever reason it is a requirement to submit separate utterances to OEFliteController, you could also just suspend again after each speech segment is complete until your last utterance.
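
For example, a minimal sketch of the single-utterance approach (the phrases array is just an example taken from your log):

// Join the phrases and submit them as one Flite utterance, so there is only one
// suspend/resume cycle around the whole spoken response.
NSArray *phrases = @[@"error netfor: hand", @"Love", @"Thirty"]; // example phrases from your log
NSString *combinedUtterance = [phrases componentsJoinedByString:@", "];
[self.fliteController say:combinedUtterance withVoice:self.slt];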

Is there a way to wait after each phrase to make sure speech is complete, as there is with Google TTS?

    Yes, check out the docs for OEEventsObserver’s delegate methods.
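
For example, a rough sketch of sequencing separate phrases from the Flite delegate callback (phraseQueue is a hypothetical NSMutableArray property you would manage yourself):

// OEEventsObserverDelegate callback, fired when Flite finishes speaking an utterance.
- (void) fliteDidFinishSpeaking {
if([self.phraseQueue count] > 0) {
// Speak the next queued phrase only after the previous one has completed.
NSString *nextPhrase = [self.phraseQueue firstObject];
[self.phraseQueue removeObjectAtIndex:0];
[self.fliteController say:nextPhrase withVoice:self.slt];
}
}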

3. I am using the punctuation “:” in speech. Is this any problem for Flite?

    I would expect it to just be ignored – are you seeing a different result?

    #1024150
    MissKitty
    Participant

Problems solved. The app is running on the Mac; time to move to the iPhone. Thanks for your help, you can close this out.

    #1024151
    Halle Winkler
    Politepix

Super, glad it helped. I noticed a bug with punctuation in language models today, so there will be a fix for it out tomorrow after the current version has passed testing. Definitely keep an eye out for the update when it is released and apply it, since you are using punctuation. You can see releases at https://www.politepix.com/openears/changelog
