Error in generateLanguageModelFromArray call



    #1029954
    maxgarmar
    Participant

    Hi Halle,

    I hope you are doing well.
    I downloaded the newest version of OpenEars and, because of changes my app needed, the initialization now happens in a viewController that is recreated every time I tap a button in another viewController (could that be the reason for the problem?).
    So here is my problem:

    When the viewController is created, I initialize all the needed parameters in viewDidLoad, just as I did in my old version.
    On my iPhone 6 device there is no problem recreating and initializing OpenEars every time I create the viewController, but when I run it on the iPad 2 simulator, the second time I open the viewController and it initializes all the OpenEars variables, I get an exception here and the app crashes:

    err = [lmGenerator generateLanguageModelFromArray:words withFilesNamed:name forAcousticModelAtPath:[OEAcousticModel pathToModel:voiceLanguage]];
    

    The log from OpenEars is:

    2016-04-06 21:19:51.855 QuickCart[8766:185061] Starting OpenEars logging for OpenEars version 2.501 on 32-bit device (or build): iPad Simulator running iOS version: 8.100000
    2016-04-06 21:19:51.859 QuickCart[8766:185061] Starting dynamic language model generation

    2016-04-06 21:19:51.877 QuickCart[8766:185061] Done creating language model with CMUCLMTK in 0.017872 seconds.
    2016-04-06 21:19:51.877 QuickCart[8766:185061] Since there is no cached version, loading the language model lookup list for the acoustic model called AcousticModelEnglish
    2016-04-06 21:19:51.924 QuickCart[8766:185061] The word CEMENTO was not found in the dictionary of the acoustic model /Users/maxgarmar/Library/Developer/CoreSimulator/Devices/7C36E2E2-8FDD-4649-898C-61EC41D5C49E/data/Containers/Bundle/Application/4B29B1AE-C310-4E79-B6ED-6EE230383647/QuickCart.app/AcousticModelEnglish.bundle. Now using the fallback method to look it up. If this is happening more frequently than you would expect, likely causes can be that you are entering words in another language from the one you are recognizing, or that there are symbols (including numbers) that need to be spelled out or cleaned up, or you are using your own acoustic model and there is an issue with either its phonetic dictionary or it lacks a g2p file. Please get in touch at the forums for assistance with the last two possible issues.
    2016-04-06 21:19:51.925 QuickCart[8766:185061] Using convertGraphemes for the word or phrase cemento which doesn’t appear in the dictionary
    VAL: tried to access cart in -1 type val

    The last line is the one that differs from the runs where the app works correctly.
    Perhaps, as I said, initializing the OpenEars variables every time I open the viewController is messing something up? But then it should happen on my iPhone 6 too, right?

    Thanks, I hope my explanation helps.

    Maxi

    #1029957
    Halle Winkler
    Politepix

    Hi,

    I’ve never heard of it, but I’d be looking for situations where you have two view controllers up simultaneously that both have access to the OELanguageModelGenerator objects for English-language unknown pronunciation generation and some kind of reference cycle where an unused view controller can’t release its objects, or specifically its OELanguageModelGenerator. It seems like a race condition to make use of a flite voice to do g2p that probably needs to be singular.

    #1029958
    maxgarmar
    Participant

    Hi Halle,

    That’s not what I meant. I have just one viewController, and every time it is created its viewDidLoad runs, which calls the OpenEars initialization. So there aren’t two viewControllers using it; it’s just one being dismissed and created again when I need it. Perhaps this explanation helps.
    Another point: the device runs iOS 9 and the simulator runs iOS 8, although I also tried the same iPad 2 simulator with iOS 9.3 and it fails there as well.

    Thank you

    #1029959
    Halle Winkler
    Politepix

    Sorry, I don’t know the cause of that. You can troubleshoot it more by testing with the default English acoustic model that ships with OpenEars 2.5 rather than a custom one, by testing using real devices only, and by testing against other unknown words.

    What OpenEars initialization are you referring to specifically? The issue you have seems to only relate to OELanguageModelGenerator, but OEPocketsphinxController no longer is really initialized per se, since it just has a shared object. I would take a look at the way things are set up in the sample app and compare it to your app to make sure there isn’t any unnecessary or out-of-date code as another troubleshooting step.

    it’s just one being dismissed and created again when I need it

    It might be a good idea to look at the logging for this behavior when you stop the engine before the view controller is dismissed to see if it is able to shut down cleanly.
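
    Roughly something like this, as an untested sketch (assuming your view controller owns the OpenEars objects the way the sample app does and is the OEEventsObserver delegate):

    - (void)viewWillDisappear:(BOOL)animated {
        [super viewWillDisappear:animated];
        // Shut the engine down before the view controller goes away, and check the
        // OELogging output to confirm that the teardown completes cleanly.
        if ([OEPocketsphinxController sharedInstance].isListening) {
            NSError *error = [[OEPocketsphinxController sharedInstance] stopListening];
            if (error) NSLog(@"Error stopping listening before dismissal: %@", error);
        }
    }

    // Optional OEEventsObserver delegate callback that confirms a clean stop.
    - (void)pocketsphinxDidStopListening {
        NSLog(@"Pocketsphinx stopped listening before the view controller was dismissed.");
    }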

    #1029960
    maxgarmar
    Participant

    I should add: the error happens in all the simulators.
    You can reproduce it like this:

    Create a viewController which is loaded from another one.
    Initialize OpenEars every time the view is created, in viewDidLoad, like this:

    // Fix
    lmGenerator = [[OELanguageModelGenerator alloc] init];

    fliteController = [[OEFliteController alloc] init];

    [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil];

    //[OEPocketsphinxController sharedInstance].verbosePocketSphinx = TRUE;

    [OELogging startOpenEarsLogging];

    //lmGenerator.verboseLanguageModelGenerator = TRUE;

    [[OEPocketsphinxController sharedInstance] setSecondsOfSilenceToDetect:0.7];

    if([voiceLanguage isEqualToString:@"AcousticModelSpanish"]){

        // Noise suppression for Spanish
        if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
            [[OEPocketsphinxController sharedInstance] setVadThreshold:2.0];
        } else {
            [[OEPocketsphinxController sharedInstance] setVadThreshold:3.5];
        }

    } else {

        // Noise suppression for English
        if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
            [[OEPocketsphinxController sharedInstance] setVadThreshold:2.0];
        } else {
            [[OEPocketsphinxController sharedInstance] setVadThreshold:3.5];
        }
    }

    //NSError* err = [self reloadAcousticModel];

    NSMutableArray *words = [[NSMutableArray alloc] init];
    NSMutableDictionary *listProduct = [[NSMutableDictionary alloc] init];

    [listProduct setObject:@"AGUA" forKey:@"1"];
    [listProduct setObject:@"SAL" forKey:@"2"];
    [listProduct setObject:@"COCACOLA" forKey:@"3"];
    [listProduct setObject:@"DETERGENTE" forKey:@"4"];
    [listProduct setObject:@"PEGAMENTO" forKey:@"5"];
    [listProduct setObject:@"ZUMO NARANJA" forKey:@"6"];
    [listProduct setObject:@"PAPEL HIGIENICO" forKey:@"7"];
    [listProduct setObject:@"ZUMO PIÑA" forKey:@"8"];
    [listProduct setObject:@"ZUMO MELOCOTON" forKey:@"9"];
    [listProduct setObject:@"ZUMO PERA" forKey:@"10"];
    [listProduct setObject:@"CEMENTO" forKey:@"11"];

    for (NSString *key in listProduct) {
        NSString *value = [listProduct objectForKey:key];
        NSError *error = nil;
        NSRegularExpression *regex = [NSRegularExpression regularExpressionWithPattern:@"[-*/_;.()',+]" options:NSRegularExpressionCaseInsensitive error:&error];
        NSString *modifiedString = [regex stringByReplacingMatchesInString:value options:0 range:NSMakeRange(0, [value length]) withTemplate:@" "];
        [words addObject:[modifiedString uppercaseString]];
    }
    //    NSArray *palabras = [[NSArray alloc] initWithArray:words];

    NSString *name = @"NameIWantForMyLanguageModelFiles";
    NSError *err;

    if([words count] > 0){
        err = [lmGenerator generateLanguageModelFromArray:words withFilesNamed:name forAcousticModelAtPath:[OEAcousticModel pathToModel:voiceLanguage]]; // Change "AcousticModelEnglish" to "AcousticModelSpanish" to create a Spanish language model instead of an English one.
    }else{
        [words addObject:@"VACIO"];
        err = [lmGenerator generateLanguageModelFromArray:words withFilesNamed:name forAcousticModelAtPath:[OEAcousticModel pathToModel:voiceLanguage]];
    }

    // Call this once before setting properties of the OEPocketsphinxController instance.

    if([err code] == noErr) {
        self.pathToFirstDynamicallyGeneratedLanguageModel = [lmGenerator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:@"NameIWantForMyLanguageModelFiles"];
        self.pathToFirstDynamicallyGeneratedDictionary = [lmGenerator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"NameIWantForMyLanguageModelFiles"];
    } else {
        NSLog(@"Error: %@", [err localizedDescription]);
    }

    /// END OpenEars

    [self.openEarsEventsObserver setDelegate:self];

    //******** starting OpenEars ********
    

    Xcode Version 7.3 (7D175)

    I hope it helps

    Maxi

    #1029961
    maxgarmar
    Participant

    Sorry, I don’t know the cause of that. You can troubleshoot it more by testing with the default English acoustic model that ships with OpenEars 2.5 rather than a custom one, by testing using real devices only, and by testing against other unknown words.

    I am using the English acoustic model shipped with OpenEars 2.5; the model name is just a variable (voiceLanguage) that I initialize depending on the device language.

    What OpenEars initialization are you referring to specifically? The issue you have seems to only relate to OELanguageModelGenerator, but OEPocketsphinxController no longer is really initialized per se, since it just has a shared object. I would take a look at the way things are set up in the sample app and compare it to your app to make sure there isn’t any unnecessary or out-of-date code as another troubleshooting step.

    I compared my code with the example and the steps are the same.

    It might be a good idea to look at the logging for this behavior when you stop the engine before the view controller is dismissed to see if it is able to shut down cleanly.

    I am stopping the engine with the stopListening method, but I don’t think that releases the OELanguageModelGenerator variable.
    Is there any way to release or clean the OELanguageModelGenerator before dismissing the viewController?
    By the way, I just saw something strange:

    When the problem was happening I was loading Spanish words into AcousticModelEnglish (only because I am testing the app; in the future the dictionary should only contain English words), but if I change the Spanish words to 3 English words, I can reload the viewController as many times as I want without crashing.
    Interestingly enough, if I load English words into AcousticModelSpanish the crash does not happen. Does that help you narrow down the problem?

    As I told you, the dictionary will contain English or Spanish words depending on the country, and it will match the acoustic model accordingly. But what I am afraid of is that if the dictionary contains some strange English word because the user typed it incorrectly, the error will also happen and the app will never run, because it will crash every time the viewController is loaded.

    Thank you

    #1029964
    Halle Winkler
    Politepix

    Hi,

    testing using real devices only

    It might be a good idea to look at the logging for this behavior when you stop the engine before the view controller is dismissed to see if it is able to shut down cleanly.

    Please verify that this isn’t a Simulator-only issue with stopping listening. Generally, no Simulator-only bugs are taken as reports here, from the post Please read before you post – how to troubleshoot and provide logging info here:

    The Simulator can’t be used for testing or evaluation with OpenEars (there is more about this in the documentation and the source) so please do not submit any questions or bug reports relating to Simulator results.

    If this can either be replicated on a real device or disproven that it relates to Simulator environment differences from the device, I can help further.

    #1029985
    maxgarmar
    Participant

    Hi Halle,

    As you suggested, the next step was to try it on my devices. The same thing happened:

    – I change my iPad language to English, so my app uses AcousticModelEnglish.
    – Then I load 10 Spanish words into it.
    – I open the viewController; the first time everything is OK.
    – I open the viewController a second time and the app crashes, on the same line as in the simulator:

    err = [lmGenerator generateLanguageModelFromArray:words withFilesNamed:@"AcousticModelEnglish" forAcousticModelAtPath:[OEAcousticModel pathToModel:voiceLanguage]];

    If I use AcousticModelSpanish and load English words into it, there is no problem recreating the viewController as many times as I want.

    Device: iPad mini 3 with iOS 9.2.1
    Let me know if you need something from me to help you out and find the error.

    Thanks

    Maxi

    #1029986
    Halle Winkler
    Politepix

    If you can create a code-only replacement for the main viewcontroller in the sample app that demonstrates it, it can be recreated and fixed faster.

    #1030055
    maxgarmar
    Participant

    Hi Halle,

    Here you go:

    Just substitute the code below for the ViewController of the sample app and the sample app will crash like my app does.

    Thanks

    //  ViewController.m
    //  OpenEarsSampleApp
    //
    //  ViewController.m demonstrates the use of the OpenEars framework. 
    //
    //  Copyright Politepix UG (haftungsbeschränkt) 2014. All rights reserved.
    //  https://www.politepix.com
    //  Contact at https://www.politepix.com/contact
    //
    //  This file is licensed under the Politepix Shared Source license found in the root of the source distribution.
    
    // **************************************************************************************************************************************************************
    // **************************************************************************************************************************************************************
    // **************************************************************************************************************************************************************
    // IMPORTANT NOTE: Audio driver and hardware behavior is completely different between the Simulator and a real device. It is not informative to test OpenEars' accuracy on the Simulator, and please do not report Simulator-only bugs since I only actively support 
    // the device driver. Please only do testing/bug reporting based on results on a real device such as an iPhone or iPod Touch. Thanks!
    // **************************************************************************************************************************************************************
    // **************************************************************************************************************************************************************
    // **************************************************************************************************************************************************************
    
    #import "ViewController.h"
    #import <OpenEars/OEPocketsphinxController.h>
    #import <OpenEars/OEFliteController.h>
    #import <OpenEars/OELanguageModelGenerator.h>
    #import <OpenEars/OELogging.h>
    #import <OpenEars/OEAcousticModel.h>
    #import <Slt/Slt.h>
    
    @interface ViewController()
    
    // UI actions, not specifically related to OpenEars other than the fact that they invoke OpenEars methods.
    - (IBAction) stopButtonAction;
    - (IBAction) startButtonAction;
    - (IBAction) suspendListeningButtonAction;
    - (IBAction) resumeListeningButtonAction;
    
    // Example for reading out the input audio levels without locking the UI using an NSTimer
    
    - (void) startDisplayingLevels;
    - (void) stopDisplayingLevels;
    
    // These three are the important OpenEars objects that this class demonstrates the use of.
    @property (nonatomic, strong) Slt *slt;
    
    @property (nonatomic, strong) OEEventsObserver *openEarsEventsObserver;
    @property (nonatomic, strong) OEPocketsphinxController *pocketsphinxController;
    @property (nonatomic, strong) OEFliteController *fliteController;
    
    // Some UI, not specifically related to OpenEars.
    @property (nonatomic, strong) IBOutlet UIButton *stopButton;
    @property (nonatomic, strong) IBOutlet UIButton *startButton;
    @property (nonatomic, strong) IBOutlet UIButton *suspendListeningButton;	
    @property (nonatomic, strong) IBOutlet UIButton *resumeListeningButton;	
    @property (nonatomic, strong) IBOutlet UITextView *statusTextView;
    @property (nonatomic, strong) IBOutlet UITextView *heardTextView;
    @property (nonatomic, strong) IBOutlet UILabel *pocketsphinxDbLabel;
    @property (nonatomic, strong) IBOutlet UILabel *fliteDbLabel;
    @property (nonatomic, assign) BOOL usingStartingLanguageModel;
    @property (nonatomic, assign) int restartAttemptsDueToPermissionRequests;
    @property (nonatomic, assign) BOOL startupFailedDueToLackOfPermissions;
    
    // Things which help us show off the dynamic language features.
    @property (nonatomic, copy) NSString *pathToFirstDynamicallyGeneratedLanguageModel;
    @property (nonatomic, copy) NSString *pathToFirstDynamicallyGeneratedDictionary;
    @property (nonatomic, copy) NSString *pathToSecondDynamicallyGeneratedLanguageModel;
    @property (nonatomic, copy) NSString *pathToSecondDynamicallyGeneratedDictionary;
    
    // Our NSTimer that will help us read and display the input and output levels without locking the UI
    @property (nonatomic, strong) 	NSTimer *uiUpdateTimer;
    
    @end
    
    @implementation ViewController
    
    #define kLevelUpdatesPerSecond 18 // We'll have the ui update 18 times a second to show some fluidity without hitting the CPU too hard.
    
    //#define kGetNbest // Uncomment this if you want to try out nbest
    #pragma mark - 
    #pragma mark Memory Management
    
    - (void)dealloc {
        [self stopDisplayingLevels];
    }
    
    #pragma mark -
    #pragma mark View Lifecycle
    
    - (void)viewDidLoad {
        [super viewDidLoad];
        self.fliteController = [[OEFliteController alloc] init];
        self.openEarsEventsObserver = [[OEEventsObserver alloc] init];
        self.openEarsEventsObserver.delegate = self;
        self.slt = [[Slt alloc] init];
        
        self.restartAttemptsDueToPermissionRequests = 0;
        self.startupFailedDueToLackOfPermissions = FALSE;
        
         [OELogging startOpenEarsLogging]; // Uncomment me for OELogging, which is verbose logging about internal OpenEars operations such as audio settings. If you have issues, show this logging in the forums.
        [OEPocketsphinxController sharedInstance].verbosePocketSphinx = TRUE; // Uncomment this for much more verbose speech recognition engine output. If you have issues, show this logging in the forums.
        
        [self.openEarsEventsObserver setDelegate:self]; // Make this class the delegate of OpenEarsObserver so we can get all of the messages about what OpenEars is doing.
        
        [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil]; // Call this before setting any OEPocketsphinxController characteristics
        
        // This is the language model we're going to start up with. The only reason I'm making it a class property is that I reuse it a bunch of times in this example, 
        // but you can pass the string contents directly to OEPocketsphinxController:startListeningWithLanguageModelAtPath:dictionaryAtPath:languageModelIsJSGF:
        
    // Spanish words on AcousticModelEnglish
        NSArray *firstLanguageArray = @[@"AGUA",
                                        @"COCACOLA",
                                        @"DETERGENTE",
                                        @"PEGAMENTO",
                                        @"ZUMO NARANJA",
                                        @"PAPEL HIGIENICO",
                                        @"ZUMO PIÑA",
                                        @"ZUMO MELOCOTON"];
        
        OELanguageModelGenerator *languageModelGenerator = [[OELanguageModelGenerator alloc] init]; 
        
        // languageModelGenerator.verboseLanguageModelGenerator = TRUE; // Uncomment me for verbose language model generator debug output.
        
        NSError *error = [languageModelGenerator generateLanguageModelFromArray:firstLanguageArray withFilesNamed:@"FirstOpenEarsDynamicLanguageModel" forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]]; // Change "AcousticModelEnglish" to "AcousticModelSpanish" in order to create a language model for Spanish recognition instead of English.
        
    // The second time I allocate this variable, simulating that the viewController was closed and recreated again
        languageModelGenerator = [[OELanguageModelGenerator alloc] init];
    
    // Then I generate the language model again, and here the app crashes just like it does in my app
        NSError *error2 = [languageModelGenerator generateLanguageModelFromArray:firstLanguageArray withFilesNamed:@"FirstOpenEarsDynamicLanguageModel" forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]]; // Change "AcousticModelEnglish" to "AcousticModelSpanish" in order to create a language model for Spanish recognition instead of English.
    
        
        if(error) {
            NSLog(@"Dynamic language generator reported error %@", [error description]);	
        } else {
            self.pathToFirstDynamicallyGeneratedLanguageModel = [languageModelGenerator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:@"FirstOpenEarsDynamicLanguageModel"];
            self.pathToFirstDynamicallyGeneratedDictionary = [languageModelGenerator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"FirstOpenEarsDynamicLanguageModel"];
        }
        
        self.usingStartingLanguageModel = TRUE; // This is not an OpenEars thing, this is just so I can switch back and forth between the two models in this sample app.
        
        // Here is an example of dynamically creating an in-app grammar.
        
        // We want it to be able to response to the speech "CHANGE MODEL" and a few other things.  Items we want to have recognized as a whole phrase (like "CHANGE MODEL") 
        // we put into the array as one string (e.g. "CHANGE MODEL" instead of "CHANGE" and "MODEL"). This increases the probability that they will be recognized as a phrase. This works even better starting with version 1.0 of OpenEars.
        
        NSArray *secondLanguageArray = @[@"SUNDAY",
                                         @"MONDAY",
                                         @"TUESDAY",
                                         @"WEDNESDAY",
                                         @"THURSDAY",
                                         @"FRIDAY",
                                         @"SATURDAY",
                                         @"QUIDNUNC",
                                         @"CHANGE MODEL"];
        
        // The last entry, quidnunc, is an example of a word which will not be found in the lookup dictionary and will be passed to the fallback method. The fallback method is slower,
        // so, for instance, creating a new language model from dictionary words will be pretty fast, but a model that has a lot of unusual names in it or invented/rare/recent-slang
        // words will be slower to generate. You can use this information to give your users good UI feedback about what the expectations for wait times should be.
        
        // I don't think it's beneficial to lazily instantiate OELanguageModelGenerator because you only need to give it a single message and then release it.
        // If you need to create a very large model or any size of model that has many unusual words that have to make use of the fallback generation method,
        // you will want to run this on a background thread so you can give the user some UI feedback that the task is in progress.
        
        // generateLanguageModelFromArray:withFilesNamed returns an NSError which will either have a value of noErr if everything went fine or a specific error if it didn't.
        error = [languageModelGenerator generateLanguageModelFromArray:secondLanguageArray withFilesNamed:@"SecondOpenEarsDynamicLanguageModel" forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]]; // Change "AcousticModelEnglish" to "AcousticModelSpanish" in order to create a language model for Spanish recognition instead of English.
        
        //    NSError *error = [languageModelGenerator generateLanguageModelFromTextFile:[NSString stringWithFormat:@"%@/%@",[[NSBundle mainBundle] resourcePath], @"OpenEarsCorpus.txt"] withFilesNamed:@"SecondOpenEarsDynamicLanguageModel" forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]]; // Try this out to see how generating a language model from a corpus works.
        
        
        if(error) {
            NSLog(@"Dynamic language generator reported error %@", [error description]);	
        }	else {
            
            self.pathToSecondDynamicallyGeneratedLanguageModel = [languageModelGenerator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:@"SecondOpenEarsDynamicLanguageModel"]; // We'll set our new .languagemodel file to be the one to get switched to when the words "CHANGE MODEL" are recognized.
        self.pathToSecondDynamicallyGeneratedDictionary = [languageModelGenerator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"SecondOpenEarsDynamicLanguageModel"]; // We'll set our new dictionary to be the one to get switched to when the words "CHANGE MODEL" are recognized.
            
            // Next, an informative message.
            
            NSLog(@"\n\nWelcome to the OpenEars sample project. This project understands the words:\nBACKWARD,\nCHANGE,\nFORWARD,\nGO,\nLEFT,\nMODEL,\nRIGHT,\nTURN,\nand if you say \"CHANGE MODEL\" it will switch to its dynamically-generated model which understands the words:\nCHANGE,\nMODEL,\nMONDAY,\nTUESDAY,\nWEDNESDAY,\nTHURSDAY,\nFRIDAY,\nSATURDAY,\nSUNDAY,\nQUIDNUNC");
            
            // This is how to start the continuous listening loop of an available instance of OEPocketsphinxController. We won't do this if the language generation failed since it will be listening for a command to change over to the generated language.
            
            [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil]; // Call this once before setting properties of the OEPocketsphinxController instance.
            
            //   [OEPocketsphinxController sharedInstance].pathToTestFile = [[NSBundle mainBundle] pathForResource:@"change_model_short" ofType:@"wav"];  // This is how you could use a test WAV (mono/16-bit/16k) rather than live recognition. Don't forget to add your WAV to your app bundle.
            
            if(![OEPocketsphinxController sharedInstance].isListening) {
                [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:FALSE]; // Start speech recognition if we aren't already listening.
            }
            // [self startDisplayingLevels] is not an OpenEars method, just a very simple approach for level reading
            // that I've included with this sample app. My example implementation does make use of two OpenEars
            // methods:	the pocketsphinxInputLevel method of OEPocketsphinxController and the fliteOutputLevel
            // method of fliteController. 
            //
            // The example is meant to show one way that you can read those levels continuously without locking the UI, 
            // by using an NSTimer, but the OpenEars level-reading methods 
            // themselves do not include multithreading code since I believe that you will want to design your own 
            // code approaches for level display that are tightly-integrated with your interaction design and the  
            // graphics API you choose. 
            
            [self startDisplayingLevels];
            
            // Here is some UI stuff that has nothing specifically to do with OpenEars implementation
            self.startButton.hidden = TRUE;
            self.stopButton.hidden = TRUE;
            self.suspendListeningButton.hidden = TRUE;
            self.resumeListeningButton.hidden = TRUE;
        }
    }
    
    #pragma mark -
    #pragma mark OEEventsObserver delegate methods
    
    // What follows are all of the delegate methods you can optionally use once you've instantiated an OEEventsObserver and set its delegate to self. 
    // I've provided some pretty granular information about the exact phase of the Pocketsphinx listening loop, the Audio Session, and Flite, but I'd expect 
    // that the ones that will really be needed by most projects are the following:
    //
    //- (void) pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore utteranceID:(NSString *)utteranceID;
    //- (void) audioSessionInterruptionDidBegin;
    //- (void) audioSessionInterruptionDidEnd;
    //- (void) audioRouteDidChangeToRoute:(NSString *)newRoute;
    //- (void) pocketsphinxDidStartListening;
    //- (void) pocketsphinxDidStopListening;
    //
    // It isn't necessary to have a OEPocketsphinxController or a OEFliteController instantiated in order to use these methods.  If there isn't anything instantiated that will
    // send messages to an OEEventsObserver, all that will happen is that these methods will never fire.  You also do not have to create a OEEventsObserver in
    // the same class or view controller in which you are doing things with a OEPocketsphinxController or OEFliteController; you can receive updates from those objects in
    // any class in which you instantiate an OEEventsObserver and set its delegate to self.
    
    // This is an optional delegate method of OEEventsObserver which delivers the text of speech that Pocketsphinx heard and analyzed, along with its accuracy score and utterance ID.
    - (void) pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore utteranceID:(NSString *)utteranceID {
        
        NSLog(@"Local callback: The received hypothesis is %@ with a score of %@ and an ID of %@", hypothesis, recognitionScore, utteranceID); // Log it.
        if([hypothesis isEqualToString:@"CHANGE MODEL"]) { // If the user says "CHANGE MODEL", we will switch to the alternate model (which happens to be the dynamically generated model).
            
            // Here is an example of language model switching in OpenEars. Deciding on what logical basis to switch models is your responsibility.
            // For instance, when you call a customer service line and get a response tree that takes you through different options depending on what you say to it,
            // the models are being switched as you progress through it so that only relevant choices can be understood. The construction of that logical branching and 
            // how to react to it is your job; OpenEars just lets you send the signal to switch the language model when you've decided it's the right time to do so.
            
            if(self.usingStartingLanguageModel) { // If we're on the starting model, switch to the dynamically generated one.
                
                [[OEPocketsphinxController sharedInstance] changeLanguageModelToFile:self.pathToSecondDynamicallyGeneratedLanguageModel withDictionary:self.pathToSecondDynamicallyGeneratedDictionary]; 
                self.usingStartingLanguageModel = FALSE;
                
            } else { // If we're on the dynamically generated model, switch to the start model (this is an example of a trigger and method for switching models).
                
                [[OEPocketsphinxController sharedInstance] changeLanguageModelToFile:self.pathToFirstDynamicallyGeneratedLanguageModel withDictionary:self.pathToFirstDynamicallyGeneratedDictionary];
                self.usingStartingLanguageModel = TRUE;
            }
        }
        
        self.heardTextView.text = [NSString stringWithFormat:@"Heard: \"%@\"", hypothesis]; // Show it in the status box.
        
        // This is how to use an available instance of OEFliteController. We're going to repeat back the command that we heard with the voice we've chosen.
        [self.fliteController say:[NSString stringWithFormat:@"You said %@",hypothesis] withVoice:self.slt];
    }
    
    #ifdef kGetNbest   
    - (void) pocketsphinxDidReceiveNBestHypothesisArray:(NSArray *)hypothesisArray { // Pocketsphinx has an n-best hypothesis dictionary.
        NSLog(@"Local callback:  hypothesisArray is %@",hypothesisArray);   
    }
    #endif
    // An optional delegate method of OEEventsObserver which informs that there was an interruption to the audio session (e.g. an incoming phone call).
    - (void) audioSessionInterruptionDidBegin {
        NSLog(@"Local callback:  AudioSession interruption began."); // Log it.
        self.statusTextView.text = @"Status: AudioSession interruption began."; // Show it in the status box.
        NSError *error = nil;
        if([OEPocketsphinxController sharedInstance].isListening) {
            error = [[OEPocketsphinxController sharedInstance] stopListening]; // React to it by telling Pocketsphinx to stop listening (if it is listening) since it will need to restart its loop after an interruption.
            if(error) NSLog(@"Error while stopping listening in audioSessionInterruptionDidBegin: %@", error);
        }
    }
    
    // An optional delegate method of OEEventsObserver which informs that the interruption to the audio session ended.
    - (void) audioSessionInterruptionDidEnd {
        NSLog(@"Local callback:  AudioSession interruption ended."); // Log it.
        self.statusTextView.text = @"Status: AudioSession interruption ended."; // Show it in the status box.
        // We're restarting the previously-stopped listening loop.
        if(![OEPocketsphinxController sharedInstance].isListening){
            [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:FALSE]; // Start speech recognition if we aren't currently listening.    
        }
    }
    
    // An optional delegate method of OEEventsObserver which informs that the audio input became unavailable.
    - (void) audioInputDidBecomeUnavailable {
        NSLog(@"Local callback:  The audio input has become unavailable"); // Log it.
        self.statusTextView.text = @"Status: The audio input has become unavailable"; // Show it in the status box.
        NSError *error = nil;
        if([OEPocketsphinxController sharedInstance].isListening){
            error = [[OEPocketsphinxController sharedInstance] stopListening]; // React to it by telling Pocketsphinx to stop listening since there is no available input (but only if we are listening).
            if(error) NSLog(@"Error while stopping listening in audioInputDidBecomeUnavailable: %@", error);
        }
    }
    
    // An optional delegate method of OEEventsObserver which informs that the unavailable audio input became available again.
    - (void) audioInputDidBecomeAvailable {
        NSLog(@"Local callback: The audio input is available"); // Log it.
        self.statusTextView.text = @"Status: The audio input is available"; // Show it in the status box.
        if(![OEPocketsphinxController sharedInstance].isListening) {
            [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:FALSE]; // Start speech recognition, but only if we aren't already listening.
        }
    }
    // An optional delegate method of OEEventsObserver which informs that there was a change to the audio route (e.g. headphones were plugged in or unplugged).
    - (void) audioRouteDidChangeToRoute:(NSString *)newRoute {
        NSLog(@"Local callback: Audio route change. The new audio route is %@", newRoute); // Log it.
        self.statusTextView.text = [NSString stringWithFormat:@"Status: Audio route change. The new audio route is %@",newRoute]; // Show it in the status box.
        
        NSError *error = [[OEPocketsphinxController sharedInstance] stopListening]; // React to it by telling the Pocketsphinx loop to shut down and then start listening again on the new route
        
        if(error)NSLog(@"Local callback: error while stopping listening in audioRouteDidChangeToRoute: %@",error);
        
        if(![OEPocketsphinxController sharedInstance].isListening) {
            [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:FALSE]; // Start speech recognition if we aren't already listening.
        }
    }
    
    // An optional delegate method of OEEventsObserver which informs that the Pocketsphinx recognition loop has entered its actual loop.
    // This might be useful in debugging a conflict between another sound class and Pocketsphinx.
    - (void) pocketsphinxRecognitionLoopDidStart {
        
        NSLog(@"Local callback: Pocketsphinx started."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx started."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is now listening for speech.
    - (void) pocketsphinxDidStartListening {
        
        NSLog(@"Local callback: Pocketsphinx is now listening."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx is now listening."; // Show it in the status box.
        
        self.startButton.hidden = TRUE; // React to it with some UI changes.
        self.stopButton.hidden = FALSE;
        self.suspendListeningButton.hidden = FALSE;
        self.resumeListeningButton.hidden = TRUE;
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected speech and is starting to process it.
    - (void) pocketsphinxDidDetectSpeech {
        NSLog(@"Local callback: Pocketsphinx has detected speech."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has detected speech."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected a second of silence, indicating the end of an utterance. 
    // This was added because developers requested being able to time the recognition speed without the speech time. The processing time is the time between 
    // this method being called and the hypothesis being returned.
    - (void) pocketsphinxDidDetectFinishedSpeech {
        NSLog(@"Local callback: Pocketsphinx has detected a second of silence, concluding an utterance."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has detected finished speech."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx has exited its recognition loop, most 
    // likely in response to the OEPocketsphinxController being told to stop listening via the stopListening method.
    - (void) pocketsphinxDidStopListening {
        NSLog(@"Local callback: Pocketsphinx has stopped listening."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has stopped listening."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop but it is not
    // Going to react to speech until listening is resumed.  This can happen as a result of Flite speech being
    // in progress on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
    // or as a result of the OEPocketsphinxController being told to suspend recognition via the suspendRecognition method.
    - (void) pocketsphinxDidSuspendRecognition {
        NSLog(@"Local callback: Pocketsphinx has suspended recognition."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has suspended recognition."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop and after recognition
    // having been suspended it is now resuming.  This can happen as a result of Flite speech completing
    // on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
    // or as a result of the OEPocketsphinxController being told to resume recognition via the resumeRecognition method.
    - (void) pocketsphinxDidResumeRecognition {
        NSLog(@"Local callback: Pocketsphinx has resumed recognition."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has resumed recognition."; // Show it in the status box.
    }
    
    // An optional delegate method which informs that Pocketsphinx switched over to a new language model at the given URL in the course of
    // recognition. This does not imply that it is a valid file or that recognition will be successful using the file.
    - (void) pocketsphinxDidChangeLanguageModelToFile:(NSString *)newLanguageModelPathAsString andDictionary:(NSString *)newDictionaryPathAsString {
        NSLog(@"Local callback: Pocketsphinx is now using the following language model: \n%@ and the following dictionary: %@",newLanguageModelPathAsString,newDictionaryPathAsString);
    }
    
    // An optional delegate method of OEEventsObserver which informs that Flite is speaking, most likely to be useful if debugging a
    // complex interaction between sound classes. You don't have to do anything yourself in order to prevent Pocketsphinx from listening to Flite talk and trying to recognize the speech.
    - (void) fliteDidStartSpeaking {
        NSLog(@"Local callback: Flite has started speaking"); // Log it.
        self.statusTextView.text = @"Status: Flite has started speaking."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Flite is finished speaking, most likely to be useful if debugging a
    // complex interaction between sound classes.
    - (void) fliteDidFinishSpeaking {
        NSLog(@"Local callback: Flite has finished speaking"); // Log it.
        self.statusTextView.text = @"Status: Flite has finished speaking."; // Show it in the status box.
    }
    
    - (void) pocketSphinxContinuousSetupDidFailWithReason:(NSString *)reasonForFailure { // This can let you know that something went wrong with the recognition loop startup. Turn on [OELogging startOpenEarsLogging] to learn why.
        NSLog(@"Local callback: Setting up the continuous recognition loop has failed for the reason %@, please turn on [OELogging startOpenEarsLogging] to learn more.", reasonForFailure); // Log it.
        self.statusTextView.text = @"Status: Not possible to start recognition loop."; // Show it in the status box.	
    }
    
    - (void) pocketSphinxContinuousTeardownDidFailWithReason:(NSString *)reasonForFailure { // This can let you know that something went wrong with the recognition loop startup. Turn on [OELogging startOpenEarsLogging] to learn why.
        NSLog(@"Local callback: Tearing down the continuous recognition loop has failed for the reason %@, please turn on [OELogging startOpenEarsLogging] to learn more.", reasonForFailure); // Log it.
        self.statusTextView.text = @"Status: Not possible to cleanly end recognition loop."; // Show it in the status box.	
    }
    
    - (void) testRecognitionCompleted { // A test file which was submitted for direct recognition via the audio driver is done.
        NSLog(@"Local callback: A test file which was submitted for direct recognition via the audio driver is done."); // Log it.
        NSError *error = nil;
        if([OEPocketsphinxController sharedInstance].isListening) { // If we're listening, stop listening.
            error = [[OEPocketsphinxController sharedInstance] stopListening];
            if(error) NSLog(@"Error while stopping listening in testRecognitionCompleted: %@", error);
        }
        
    }
    /** Pocketsphinx couldn't start because it has no mic permissions (will only be returned on iOS7 or later).*/
    - (void) pocketsphinxFailedNoMicPermissions {
        NSLog(@"Local callback: The user has never set mic permissions or denied permission to this app's mic, so listening will not start.");
        self.startupFailedDueToLackOfPermissions = TRUE;
        if([OEPocketsphinxController sharedInstance].isListening){
            NSError *error = [[OEPocketsphinxController sharedInstance] stopListening]; // Stop listening if we are listening.
            if(error) NSLog(@"Error while stopping listening in micPermissionCheckCompleted: %@", error);
        }
    }
    
    /** The user prompt to get mic permissions, or a check of the mic permissions, has completed with a TRUE or a FALSE result  (will only be returned on iOS7 or later).*/
    - (void) micPermissionCheckCompleted:(BOOL)result {
        if(result) {
            self.restartAttemptsDueToPermissionRequests++;
            if(self.restartAttemptsDueToPermissionRequests == 1 && self.startupFailedDueToLackOfPermissions) { // If we get here because there was an attempt to start which failed due to lack of permissions, and now permissions have been requested and they returned true, we restart exactly once with the new permissions.
    
                if(![OEPocketsphinxController sharedInstance].isListening) { // If there was no error and we aren't listening, start listening.
                    [[OEPocketsphinxController sharedInstance] 
                     startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel 
                     dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary 
                     acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] 
                     languageModelIsJSGF:FALSE]; // Start speech recognition.
                    
                    self.startupFailedDueToLackOfPermissions = FALSE;
                }
            }
        }
    }
    
    #pragma mark -
    #pragma mark UI
    
    // This is not OpenEars-specific stuff, just some UI behavior
    
    - (IBAction) suspendListeningButtonAction { // This is the action for the button which suspends listening without ending the recognition loop
        [[OEPocketsphinxController sharedInstance] suspendRecognition];	
        
        self.startButton.hidden = TRUE;
        self.stopButton.hidden = FALSE;
        self.suspendListeningButton.hidden = TRUE;
        self.resumeListeningButton.hidden = FALSE;
    }
    
    - (IBAction) resumeListeningButtonAction { // This is the action for the button which resumes listening if it has been suspended
        [[OEPocketsphinxController sharedInstance] resumeRecognition];
        
        self.startButton.hidden = TRUE;
        self.stopButton.hidden = FALSE;
        self.suspendListeningButton.hidden = FALSE;
        self.resumeListeningButton.hidden = TRUE;	
    }
    
    - (IBAction) stopButtonAction { // This is the action for the button which shuts down the recognition loop.
        NSError *error = nil;
        if([OEPocketsphinxController sharedInstance].isListening) { // Stop if we are currently listening.
            error = [[OEPocketsphinxController sharedInstance] stopListening];
            if(error)NSLog(@"Error stopping listening in stopButtonAction: %@", error);
        }
        self.startButton.hidden = FALSE;
        self.stopButton.hidden = TRUE;
        self.suspendListeningButton.hidden = TRUE;
        self.resumeListeningButton.hidden = TRUE;
    }
    
    - (IBAction) startButtonAction { // This is the action for the button which starts up the recognition loop again if it has been shut down.
        if(![OEPocketsphinxController sharedInstance].isListening) {
            [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:FALSE]; // Start speech recognition if we aren't already listening.
        }
        self.startButton.hidden = TRUE;
        self.stopButton.hidden = FALSE;
        self.suspendListeningButton.hidden = FALSE;
        self.resumeListeningButton.hidden = TRUE;
    }
    
    #pragma mark -
    #pragma mark Example for reading out Pocketsphinx and Flite audio levels without locking the UI by using an NSTimer
    
    // What follows are not OpenEars methods, just an approach for level reading
    // that I've included with this sample app. My example implementation does make use of two OpenEars
    // methods:	the pocketsphinxInputLevel method of OEPocketsphinxController and the fliteOutputLevel
    // method of OEFliteController. 
    //
    // The example is meant to show one way that you can read those levels continuously without locking the UI, 
    // by using an NSTimer, but the OpenEars level-reading methods 
    // themselves do not include multithreading code since I believe that you will want to design your own 
    // code approaches for level display that are tightly-integrated with your interaction design and the  
    // graphics API you choose. 
    // 
    // Please note that if you use my sample approach, you should pay attention to the way that the timer is always stopped in
    // dealloc. This should prevent you from having any difficulties with deallocating a class due to a running NSTimer process.
    
    - (void) startDisplayingLevels { // Start displaying the levels using a timer
        [self stopDisplayingLevels]; // We never want more than one timer valid so we'll stop any running timers first.
        self.uiUpdateTimer = [NSTimer scheduledTimerWithTimeInterval:1.0/kLevelUpdatesPerSecond target:self selector:@selector(updateLevelsUI) userInfo:nil repeats:YES];
    }
    
    - (void) stopDisplayingLevels { // Stop displaying the levels by stopping the timer if it's running.
        if(self.uiUpdateTimer && [self.uiUpdateTimer isValid]) { // If there is a running timer, we'll stop it here.
            [self.uiUpdateTimer invalidate];
            self.uiUpdateTimer = nil;
        }
    }
    
    - (void) updateLevelsUI { // And here is how we obtain the levels.  This method includes the actual OpenEars methods and uses their results to update the UI of this view controller.
        
        self.pocketsphinxDbLabel.text = [NSString stringWithFormat:@"Pocketsphinx Input level:%f",[[OEPocketsphinxController sharedInstance] pocketsphinxInputLevel]];  //pocketsphinxInputLevel is an OpenEars method of the class OEPocketsphinxController.
        
        if(self.fliteController.speechInProgress) {
            self.fliteDbLabel.text = [NSString stringWithFormat:@"Flite Output level: %f",[self.fliteController fliteOutputLevel]]; // fliteOutputLevel is an OpenEars method of the class OEFliteController.
        }
    }
    
    @end
    
    #1030056
    Halle Winkler
    Politepix

    Super, I’ll add a test case and see if I can fix it for the next update (or let you know what code needs to be changed if it’s a code issue).

    #1030103
    Halle Winkler
    Politepix

    OK, I have replicated this and I have a pretty good sense of what the issue is. Thank you for the good test case and the report. It isn’t going to be an easy fix, so I can’t guarantee it will be in the next update, although that is a goal. In the meantime, I believe you can work around this by passing the same instance of your OELanguageModelGenerator to your subviews rather than re-instantiating it.
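
    As a rough, untested sketch of that workaround (with hypothetical property and class names, just to show the shape of it), the presenting view controller could own the generator and hand the same instance to the view controller it presents:

    // In the presenting view controller (hypothetical names):
    @property (nonatomic, strong) OELanguageModelGenerator *lmGenerator; // created once and reused

    - (void)showProductsViewController {
        if (!self.lmGenerator) {
            self.lmGenerator = [[OELanguageModelGenerator alloc] init]; // instantiated only once
        }
        ProductsViewController *productsViewController = [[ProductsViewController alloc] init]; // hypothetical class
        productsViewController.lmGenerator = self.lmGenerator; // pass the existing instance instead of re-creating it
        [self presentViewController:productsViewController animated:YES completion:nil];
    }

    // In the presented view controller's viewDidLoad, call generateLanguageModelFromArray:withFilesNamed:forAcousticModelAtPath:
    // on the lmGenerator that was handed in, rather than alloc/init-ing a new OELanguageModelGenerator.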

    #1030104
    Halle Winkler
    Politepix

    OK, I think I have a fix for this and subject to more testing it will be in the next update. I don’t have an ETA for that update but there is only one other high-priority bug on the list so it shouldn’t be too long.

    #1030110
    maxgarmar
    Participant

    Hi Halle,

    OK, looking forward to it.

    Thank you

    #1030518
    Halle Winkler
    Politepix

    Hi Max,

    Yesterday’s OpenEars 2.502 update (http://changelogs.politepix.com) should fix this.

    #1033198
    minilv
    Participant

    I am using the latest code from
    https://www.politepix.com/openears/

    But I still encountered a crash in generateLanguageModelFromArray.

    Detailed crash stack:

    
    10  myApp                   0x00000001060b208c -[OECMUCLMTKModel runCMUCLMTKOnCorpusFile:withBin:binarySuffix:] + 21012620 (OECMUCLMTKModel.m:278)
    11  myApp                   0x00000001061110b8 -[OELanguageModelGenerator createLanguageModelFromFilename:] + 21401784 (OELanguageModelGenerator.m:488)
    12  myApp                   0x0000000106111ba8 -[OELanguageModelGenerator generateLanguageModelFromArray:withFilesNamed:forAcousticModelAtPath:] + 21404584 (OELanguageModelGenerator.m:601)
    
    
    Code is:
    
    

    let languageModelGenerator = OELanguageModelGenerator()

    let keywordList = ["hey", "test", "start"]

    let firstLanguageArray = keywordList

    let firstVocabularyName = "FirstVocabulary"

    // AcousticModelEnglish from bundle.
    let firstLanguageModelGenerationError: Error! = languageModelGenerator.generateLanguageModel(from: firstLanguageArray, withFilesNamed: firstVocabularyName, forAcousticModelAtPath: OEAcousticModel.path(toModel: "AcousticModelEnglish"))

    #1033202
    Halle Winkler
    Politepix

    Please check out the post Please read before you post – how to troubleshoot and provide logging info here so you can see how to turn on and share the logging that provides troubleshooting information for this kind of issue.

    #1033203
    minilv
    Participant

    I have only hit this issue once, and it was in my release build. When I turn on full logging, the bug does not reproduce.

    But this OELanguageModelGenerator.m is private, and I can’t see the internal implementation.

    Can you help me check whether this function can cause a crash? It appears to happen at line 601.

    OELanguageModelGenerator.m:601

    #1033204
    Halle Winkler
    Politepix

    Sorry, I can’t help you without all the information from the debugging post I linked to.

    > But this OELanguageModelGenerator.m is private, and I can’t see the internal implementation.

    I see. Where did you obtain a version of OpenEars with an OELanguageModelGenerator.m whose implementation you can’t see?

    #1033205
    minilv
    Participant

    I went to this website (https://www.politepix.com/openears/) and clicked “Go to the quickstart tutorial (Swift)” to download it.

    The OELanguageModelGenerator class is packaged as a static library.

    It is not public. Could you open-source the code of this class so I can take a look? Thanks a lot!

    #1033206
    Halle Winkler
    Politepix

    If you downloaded OpenEars from this site, all of the source is in /OpenEarsDistribution/OpenEars/.
