maxgarmar

Forum Replies Created

in reply to: Error in generateLanguageModelFromArray call #1030110
    maxgarmar
    Participant

    Hi Halle,

Ok. Looking forward to it.

    Thank you

    in reply to: Error in generateLanguageModelFromArray call #1030055
    maxgarmar
    Participant

    Hi Halle,

    Here you go:

Just substitute the code below for the ViewController of the sample app, and the sample app will crash the same way my app does.

    Thanks

    //  ViewController.m
    //  OpenEarsSampleApp
    //
    //  ViewController.m demonstrates the use of the OpenEars framework. 
    //
    //  Copyright Politepix UG (haftungsbeschränkt) 2014. All rights reserved.
    //  https://www.politepix.com
    //  Contact at https://www.politepix.com/contact
    //
    //  This file is licensed under the Politepix Shared Source license found in the root of the source distribution.
    
    // **************************************************************************************************************************************************************
    // **************************************************************************************************************************************************************
    // **************************************************************************************************************************************************************
    // IMPORTANT NOTE: Audio driver and hardware behavior is completely different between the Simulator and a real device. It is not informative to test OpenEars' accuracy on the Simulator, and please do not report Simulator-only bugs since I only actively support 
    // the device driver. Please only do testing/bug reporting based on results on a real device such as an iPhone or iPod Touch. Thanks!
    // **************************************************************************************************************************************************************
    // **************************************************************************************************************************************************************
    // **************************************************************************************************************************************************************
    
    #import "ViewController.h"
    #import <OpenEars/OEPocketsphinxController.h>
    #import <OpenEars/OEFliteController.h>
    #import <OpenEars/OELanguageModelGenerator.h>
    #import <OpenEars/OELogging.h>
    #import <OpenEars/OEAcousticModel.h>
    #import <Slt/Slt.h>
    
    @interface ViewController()
    
    // UI actions, not specifically related to OpenEars other than the fact that they invoke OpenEars methods.
    - (IBAction) stopButtonAction;
    - (IBAction) startButtonAction;
    - (IBAction) suspendListeningButtonAction;
    - (IBAction) resumeListeningButtonAction;
    
    // Example for reading out the input audio levels without locking the UI using an NSTimer
    
    - (void) startDisplayingLevels;
    - (void) stopDisplayingLevels;
    
    // These three are the important OpenEars objects that this class demonstrates the use of.
    @property (nonatomic, strong) Slt *slt;
    
    @property (nonatomic, strong) OEEventsObserver *openEarsEventsObserver;
    @property (nonatomic, strong) OEPocketsphinxController *pocketsphinxController;
    @property (nonatomic, strong) OEFliteController *fliteController;
    
    // Some UI, not specifically related to OpenEars.
    @property (nonatomic, strong) IBOutlet UIButton *stopButton;
    @property (nonatomic, strong) IBOutlet UIButton *startButton;
    @property (nonatomic, strong) IBOutlet UIButton *suspendListeningButton;	
    @property (nonatomic, strong) IBOutlet UIButton *resumeListeningButton;	
    @property (nonatomic, strong) IBOutlet UITextView *statusTextView;
    @property (nonatomic, strong) IBOutlet UITextView *heardTextView;
    @property (nonatomic, strong) IBOutlet UILabel *pocketsphinxDbLabel;
    @property (nonatomic, strong) IBOutlet UILabel *fliteDbLabel;
    @property (nonatomic, assign) BOOL usingStartingLanguageModel;
    @property (nonatomic, assign) int restartAttemptsDueToPermissionRequests;
    @property (nonatomic, assign) BOOL startupFailedDueToLackOfPermissions;
    
    // Things which help us show off the dynamic language features.
    @property (nonatomic, copy) NSString *pathToFirstDynamicallyGeneratedLanguageModel;
    @property (nonatomic, copy) NSString *pathToFirstDynamicallyGeneratedDictionary;
    @property (nonatomic, copy) NSString *pathToSecondDynamicallyGeneratedLanguageModel;
    @property (nonatomic, copy) NSString *pathToSecondDynamicallyGeneratedDictionary;
    
    // Our NSTimer that will help us read and display the input and output levels without locking the UI
    @property (nonatomic, strong) 	NSTimer *uiUpdateTimer;
    
    @end
    
    @implementation ViewController
    
    #define kLevelUpdatesPerSecond 18 // We'll have the ui update 18 times a second to show some fluidity without hitting the CPU too hard.
    
    //#define kGetNbest // Uncomment this if you want to try out nbest
    #pragma mark - 
    #pragma mark Memory Management
    
    - (void)dealloc {
        [self stopDisplayingLevels];
    }
    
    #pragma mark -
    #pragma mark View Lifecycle
    
    - (void)viewDidLoad {
        [super viewDidLoad];
        self.fliteController = [[OEFliteController alloc] init];
        self.openEarsEventsObserver = [[OEEventsObserver alloc] init];
        self.openEarsEventsObserver.delegate = self;
        self.slt = [[Slt alloc] init];
        
        self.restartAttemptsDueToPermissionRequests = 0;
        self.startupFailedDueToLackOfPermissions = FALSE;
        
         [OELogging startOpenEarsLogging]; // Uncomment me for OELogging, which is verbose logging about internal OpenEars operations such as audio settings. If you have issues, show this logging in the forums.
        [OEPocketsphinxController sharedInstance].verbosePocketSphinx = TRUE; // Uncomment this for much more verbose speech recognition engine output. If you have issues, show this logging in the forums.
        
        [self.openEarsEventsObserver setDelegate:self]; // Make this class the delegate of OpenEarsObserver so we can get all of the messages about what OpenEars is doing.
        
        [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil]; // Call this before setting any OEPocketsphinxController characteristics
        
        // This is the language model we're going to start up with. The only reason I'm making it a class property is that I reuse it a bunch of times in this example, 
        // but you can pass the string contents directly to OEPocketsphinxController:startListeningWithLanguageModelAtPath:dictionaryAtPath:languageModelIsJSGF:
        
        //spanish words on AcousticModelEnglish
        NSArray *firstLanguageArray = @[@"AGUA",
                                        @"COCACOLA",
                                        @"DETERGENTE",
                                        @"PEGAMENTO",
                                        @"ZUMO NARANJA",
                                        @"PAPEL HIGIENICO",
                                        @"ZUMO PIÑA",
                                        @"ZUMO MELOCOTON"];
        
        OELanguageModelGenerator *languageModelGenerator = [[OELanguageModelGenerator alloc] init]; 
        
        // languageModelGenerator.verboseLanguageModelGenerator = TRUE; // Uncomment me for verbose language model generator debug output.
        
        NSError *error = [languageModelGenerator generateLanguageModelFromArray:firstLanguageArray withFilesNamed:@"FirstOpenEarsDynamicLanguageModel" forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]]; // Change "AcousticModelEnglish" to "AcousticModelSpanish" in order to create a language model for Spanish recognition instead of English.
        
        //Second time I allocate this variable simulating that the viewController was closed and recreated again
        languageModelGenerator = [[OELanguageModelGenerator alloc] init];
    
        //then I generate the languageModel again and here the app crashes like it does on my app
        NSError *error2 = [languageModelGenerator generateLanguageModelFromArray:firstLanguageArray withFilesNamed:@"FirstOpenEarsDynamicLanguageModel" forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]]; // Change "AcousticModelEnglish" to "AcousticModelSpanish" in order to create a language model for Spanish recognition instead of English.
    
        
        if(error) {
            NSLog(@"Dynamic language generator reported error %@", [error description]);	
        } else {
            self.pathToFirstDynamicallyGeneratedLanguageModel = [languageModelGenerator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:@"FirstOpenEarsDynamicLanguageModel"];
            self.pathToFirstDynamicallyGeneratedDictionary = [languageModelGenerator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"FirstOpenEarsDynamicLanguageModel"];
        }
        
        self.usingStartingLanguageModel = TRUE; // This is not an OpenEars thing, this is just so I can switch back and forth between the two models in this sample app.
        
        // Here is an example of dynamically creating an in-app grammar.
        
    // We want it to be able to respond to the speech "CHANGE MODEL" and a few other things. Items we want to have recognized as a whole phrase (like "CHANGE MODEL") 
        // we put into the array as one string (e.g. "CHANGE MODEL" instead of "CHANGE" and "MODEL"). This increases the probability that they will be recognized as a phrase. This works even better starting with version 1.0 of OpenEars.
        
        NSArray *secondLanguageArray = @[@"SUNDAY",
                                         @"MONDAY",
                                         @"TUESDAY",
                                         @"WEDNESDAY",
                                         @"THURSDAY",
                                         @"FRIDAY",
                                         @"SATURDAY",
                                         @"QUIDNUNC",
                                         @"CHANGE MODEL"];
        
        // The last entry, quidnunc, is an example of a word which will not be found in the lookup dictionary and will be passed to the fallback method. The fallback method is slower,
        // so, for instance, creating a new language model from dictionary words will be pretty fast, but a model that has a lot of unusual names in it or invented/rare/recent-slang
        // words will be slower to generate. You can use this information to give your users good UI feedback about what the expectations for wait times should be.
        
        // I don't think it's beneficial to lazily instantiate OELanguageModelGenerator because you only need to give it a single message and then release it.
        // If you need to create a very large model or any size of model that has many unusual words that have to make use of the fallback generation method,
        // you will want to run this on a background thread so you can give the user some UI feedback that the task is in progress.
        
        // generateLanguageModelFromArray:withFilesNamed returns an NSError which will either have a value of noErr if everything went fine or a specific error if it didn't.
        error = [languageModelGenerator generateLanguageModelFromArray:secondLanguageArray withFilesNamed:@"SecondOpenEarsDynamicLanguageModel" forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]]; // Change "AcousticModelEnglish" to "AcousticModelSpanish" in order to create a language model for Spanish recognition instead of English.
        
        //    NSError *error = [languageModelGenerator generateLanguageModelFromTextFile:[NSString stringWithFormat:@"%@/%@",[[NSBundle mainBundle] resourcePath], @"OpenEarsCorpus.txt"] withFilesNamed:@"SecondOpenEarsDynamicLanguageModel" forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]]; // Try this out to see how generating a language model from a corpus works.
        
        
        if(error) {
            NSLog(@"Dynamic language generator reported error %@", [error description]);	
        }	else {
            
            self.pathToSecondDynamicallyGeneratedLanguageModel = [languageModelGenerator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:@"SecondOpenEarsDynamicLanguageModel"]; // We'll set our new .languagemodel file to be the one to get switched to when the words "CHANGE MODEL" are recognized.
        self.pathToSecondDynamicallyGeneratedDictionary = [languageModelGenerator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"SecondOpenEarsDynamicLanguageModel"]; // We'll set our new dictionary to be the one to get switched to when the words "CHANGE MODEL" are recognized.
            
            // Next, an informative message.
            
            NSLog(@"\n\nWelcome to the OpenEars sample project. This project understands the words:\nBACKWARD,\nCHANGE,\nFORWARD,\nGO,\nLEFT,\nMODEL,\nRIGHT,\nTURN,\nand if you say \"CHANGE MODEL\" it will switch to its dynamically-generated model which understands the words:\nCHANGE,\nMODEL,\nMONDAY,\nTUESDAY,\nWEDNESDAY,\nTHURSDAY,\nFRIDAY,\nSATURDAY,\nSUNDAY,\nQUIDNUNC");
            
            // This is how to start the continuous listening loop of an available instance of OEPocketsphinxController. We won't do this if the language generation failed since it will be listening for a command to change over to the generated language.
            
            [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil]; // Call this once before setting properties of the OEPocketsphinxController instance.
            
            //   [OEPocketsphinxController sharedInstance].pathToTestFile = [[NSBundle mainBundle] pathForResource:@"change_model_short" ofType:@"wav"];  // This is how you could use a test WAV (mono/16-bit/16k) rather than live recognition. Don't forget to add your WAV to your app bundle.
            
            if(![OEPocketsphinxController sharedInstance].isListening) {
                [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:FALSE]; // Start speech recognition if we aren't already listening.
            }
            // [self startDisplayingLevels] is not an OpenEars method, just a very simple approach for level reading
            // that I've included with this sample app. My example implementation does make use of two OpenEars
            // methods:	the pocketsphinxInputLevel method of OEPocketsphinxController and the fliteOutputLevel
            // method of fliteController. 
            //
            // The example is meant to show one way that you can read those levels continuously without locking the UI, 
            // by using an NSTimer, but the OpenEars level-reading methods 
            // themselves do not include multithreading code since I believe that you will want to design your own 
            // code approaches for level display that are tightly-integrated with your interaction design and the  
            // graphics API you choose. 
            
            [self startDisplayingLevels];
            
            // Here is some UI stuff that has nothing specifically to do with OpenEars implementation
            self.startButton.hidden = TRUE;
            self.stopButton.hidden = TRUE;
            self.suspendListeningButton.hidden = TRUE;
            self.resumeListeningButton.hidden = TRUE;
        }
    }
    
    #pragma mark -
    #pragma mark OEEventsObserver delegate methods
    
    // What follows are all of the delegate methods you can optionally use once you've instantiated an OEEventsObserver and set its delegate to self. 
    // I've provided some pretty granular information about the exact phase of the Pocketsphinx listening loop, the Audio Session, and Flite, but I'd expect 
    // that the ones that will really be needed by most projects are the following:
    //
    //- (void) pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore utteranceID:(NSString *)utteranceID;
    //- (void) audioSessionInterruptionDidBegin;
    //- (void) audioSessionInterruptionDidEnd;
    //- (void) audioRouteDidChangeToRoute:(NSString *)newRoute;
    //- (void) pocketsphinxDidStartListening;
    //- (void) pocketsphinxDidStopListening;
    //
    // It isn't necessary to have a OEPocketsphinxController or a OEFliteController instantiated in order to use these methods.  If there isn't anything instantiated that will
    // send messages to an OEEventsObserver, all that will happen is that these methods will never fire.  You also do not have to create a OEEventsObserver in
    // the same class or view controller in which you are doing things with a OEPocketsphinxController or OEFliteController; you can receive updates from those objects in
    // any class in which you instantiate an OEEventsObserver and set its delegate to self.
    
    // This is an optional delegate method of OEEventsObserver which delivers the text of speech that Pocketsphinx heard and analyzed, along with its accuracy score and utterance ID.
    - (void) pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore utteranceID:(NSString *)utteranceID {
        
        NSLog(@"Local callback: The received hypothesis is %@ with a score of %@ and an ID of %@", hypothesis, recognitionScore, utteranceID); // Log it.
        if([hypothesis isEqualToString:@"CHANGE MODEL"]) { // If the user says "CHANGE MODEL", we will switch to the alternate model (which happens to be the dynamically generated model).
            
            // Here is an example of language model switching in OpenEars. Deciding on what logical basis to switch models is your responsibility.
            // For instance, when you call a customer service line and get a response tree that takes you through different options depending on what you say to it,
            // the models are being switched as you progress through it so that only relevant choices can be understood. The construction of that logical branching and 
            // how to react to it is your job; OpenEars just lets you send the signal to switch the language model when you've decided it's the right time to do so.
            
            if(self.usingStartingLanguageModel) { // If we're on the starting model, switch to the dynamically generated one.
                
                [[OEPocketsphinxController sharedInstance] changeLanguageModelToFile:self.pathToSecondDynamicallyGeneratedLanguageModel withDictionary:self.pathToSecondDynamicallyGeneratedDictionary]; 
                self.usingStartingLanguageModel = FALSE;
                
            } else { // If we're on the dynamically generated model, switch to the start model (this is an example of a trigger and method for switching models).
                
                [[OEPocketsphinxController sharedInstance] changeLanguageModelToFile:self.pathToFirstDynamicallyGeneratedLanguageModel withDictionary:self.pathToFirstDynamicallyGeneratedDictionary];
                self.usingStartingLanguageModel = TRUE;
            }
        }
        
        self.heardTextView.text = [NSString stringWithFormat:@"Heard: \"%@\"", hypothesis]; // Show it in the status box.
        
        // This is how to use an available instance of OEFliteController. We're going to repeat back the command that we heard with the voice we've chosen.
        [self.fliteController say:[NSString stringWithFormat:@"You said %@",hypothesis] withVoice:self.slt];
    }
    
    #ifdef kGetNbest   
    - (void) pocketsphinxDidReceiveNBestHypothesisArray:(NSArray *)hypothesisArray { // Pocketsphinx has an n-best hypothesis dictionary.
        NSLog(@"Local callback:  hypothesisArray is %@",hypothesisArray);   
    }
    #endif
    // An optional delegate method of OEEventsObserver which informs that there was an interruption to the audio session (e.g. an incoming phone call).
    - (void) audioSessionInterruptionDidBegin {
        NSLog(@"Local callback:  AudioSession interruption began."); // Log it.
        self.statusTextView.text = @"Status: AudioSession interruption began."; // Show it in the status box.
        NSError *error = nil;
        if([OEPocketsphinxController sharedInstance].isListening) {
            error = [[OEPocketsphinxController sharedInstance] stopListening]; // React to it by telling Pocketsphinx to stop listening (if it is listening) since it will need to restart its loop after an interruption.
            if(error) NSLog(@"Error while stopping listening in audioSessionInterruptionDidBegin: %@", error);
        }
    }
    
    // An optional delegate method of OEEventsObserver which informs that the interruption to the audio session ended.
    - (void) audioSessionInterruptionDidEnd {
        NSLog(@"Local callback:  AudioSession interruption ended."); // Log it.
        self.statusTextView.text = @"Status: AudioSession interruption ended."; // Show it in the status box.
        // We're restarting the previously-stopped listening loop.
        if(![OEPocketsphinxController sharedInstance].isListening){
            [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:FALSE]; // Start speech recognition if we aren't currently listening.    
        }
    }
    
    // An optional delegate method of OEEventsObserver which informs that the audio input became unavailable.
    - (void) audioInputDidBecomeUnavailable {
        NSLog(@"Local callback:  The audio input has become unavailable"); // Log it.
        self.statusTextView.text = @"Status: The audio input has become unavailable"; // Show it in the status box.
        NSError *error = nil;
        if([OEPocketsphinxController sharedInstance].isListening){
            error = [[OEPocketsphinxController sharedInstance] stopListening]; // React to it by telling Pocketsphinx to stop listening since there is no available input (but only if we are listening).
            if(error) NSLog(@"Error while stopping listening in audioInputDidBecomeUnavailable: %@", error);
        }
    }
    
    // An optional delegate method of OEEventsObserver which informs that the unavailable audio input became available again.
    - (void) audioInputDidBecomeAvailable {
        NSLog(@"Local callback: The audio input is available"); // Log it.
        self.statusTextView.text = @"Status: The audio input is available"; // Show it in the status box.
        if(![OEPocketsphinxController sharedInstance].isListening) {
            [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:FALSE]; // Start speech recognition, but only if we aren't already listening.
        }
    }
    // An optional delegate method of OEEventsObserver which informs that there was a change to the audio route (e.g. headphones were plugged in or unplugged).
    - (void) audioRouteDidChangeToRoute:(NSString *)newRoute {
        NSLog(@"Local callback: Audio route change. The new audio route is %@", newRoute); // Log it.
        self.statusTextView.text = [NSString stringWithFormat:@"Status: Audio route change. The new audio route is %@",newRoute]; // Show it in the status box.
        
        NSError *error = [[OEPocketsphinxController sharedInstance] stopListening]; // React to it by telling the Pocketsphinx loop to shut down and then start listening again on the new route
        
        if(error)NSLog(@"Local callback: error while stopping listening in audioRouteDidChangeToRoute: %@",error);
        
        if(![OEPocketsphinxController sharedInstance].isListening) {
            [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:FALSE]; // Start speech recognition if we aren't already listening.
        }
    }
    
    // An optional delegate method of OEEventsObserver which informs that the Pocketsphinx recognition loop has entered its actual loop.
    // This might be useful in debugging a conflict between another sound class and Pocketsphinx.
    - (void) pocketsphinxRecognitionLoopDidStart {
        
        NSLog(@"Local callback: Pocketsphinx started."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx started."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is now listening for speech.
    - (void) pocketsphinxDidStartListening {
        
        NSLog(@"Local callback: Pocketsphinx is now listening."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx is now listening."; // Show it in the status box.
        
        self.startButton.hidden = TRUE; // React to it with some UI changes.
        self.stopButton.hidden = FALSE;
        self.suspendListeningButton.hidden = FALSE;
        self.resumeListeningButton.hidden = TRUE;
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected speech and is starting to process it.
    - (void) pocketsphinxDidDetectSpeech {
        NSLog(@"Local callback: Pocketsphinx has detected speech."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has detected speech."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected a second of silence, indicating the end of an utterance. 
    // This was added because developers requested being able to time the recognition speed without the speech time. The processing time is the time between 
    // this method being called and the hypothesis being returned.
    - (void) pocketsphinxDidDetectFinishedSpeech {
        NSLog(@"Local callback: Pocketsphinx has detected a second of silence, concluding an utterance."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has detected finished speech."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx has exited its recognition loop, most 
    // likely in response to the OEPocketsphinxController being told to stop listening via the stopListening method.
    - (void) pocketsphinxDidStopListening {
        NSLog(@"Local callback: Pocketsphinx has stopped listening."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has stopped listening."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop but it is not
// going to react to speech until listening is resumed. This can happen as a result of Flite speech being
    // in progress on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
    // or as a result of the OEPocketsphinxController being told to suspend recognition via the suspendRecognition method.
    - (void) pocketsphinxDidSuspendRecognition {
        NSLog(@"Local callback: Pocketsphinx has suspended recognition."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has suspended recognition."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop and after recognition
    // having been suspended it is now resuming.  This can happen as a result of Flite speech completing
    // on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
    // or as a result of the OEPocketsphinxController being told to resume recognition via the resumeRecognition method.
    - (void) pocketsphinxDidResumeRecognition {
        NSLog(@"Local callback: Pocketsphinx has resumed recognition."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has resumed recognition."; // Show it in the status box.
    }
    
    // An optional delegate method which informs that Pocketsphinx switched over to a new language model at the given URL in the course of
    // recognition. This does not imply that it is a valid file or that recognition will be successful using the file.
    - (void) pocketsphinxDidChangeLanguageModelToFile:(NSString *)newLanguageModelPathAsString andDictionary:(NSString *)newDictionaryPathAsString {
        NSLog(@"Local callback: Pocketsphinx is now using the following language model: \n%@ and the following dictionary: %@",newLanguageModelPathAsString,newDictionaryPathAsString);
    }
    
    // An optional delegate method of OEEventsObserver which informs that Flite is speaking, most likely to be useful if debugging a
    // complex interaction between sound classes. You don't have to do anything yourself in order to prevent Pocketsphinx from listening to Flite talk and trying to recognize the speech.
    - (void) fliteDidStartSpeaking {
        NSLog(@"Local callback: Flite has started speaking"); // Log it.
        self.statusTextView.text = @"Status: Flite has started speaking."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Flite is finished speaking, most likely to be useful if debugging a
    // complex interaction between sound classes.
    - (void) fliteDidFinishSpeaking {
        NSLog(@"Local callback: Flite has finished speaking"); // Log it.
        self.statusTextView.text = @"Status: Flite has finished speaking."; // Show it in the status box.
    }
    
    - (void) pocketSphinxContinuousSetupDidFailWithReason:(NSString *)reasonForFailure { // This can let you know that something went wrong with the recognition loop startup. Turn on [OELogging startOpenEarsLogging] to learn why.
        NSLog(@"Local callback: Setting up the continuous recognition loop has failed for the reason %@, please turn on [OELogging startOpenEarsLogging] to learn more.", reasonForFailure); // Log it.
        self.statusTextView.text = @"Status: Not possible to start recognition loop."; // Show it in the status box.	
    }
    
    - (void) pocketSphinxContinuousTeardownDidFailWithReason:(NSString *)reasonForFailure { // This can let you know that something went wrong with the recognition loop startup. Turn on [OELogging startOpenEarsLogging] to learn why.
        NSLog(@"Local callback: Tearing down the continuous recognition loop has failed for the reason %@, please turn on [OELogging startOpenEarsLogging] to learn more.", reasonForFailure); // Log it.
        self.statusTextView.text = @"Status: Not possible to cleanly end recognition loop."; // Show it in the status box.	
    }
    
    - (void) testRecognitionCompleted { // A test file which was submitted for direct recognition via the audio driver is done.
        NSLog(@"Local callback: A test file which was submitted for direct recognition via the audio driver is done."); // Log it.
        NSError *error = nil;
        if([OEPocketsphinxController sharedInstance].isListening) { // If we're listening, stop listening.
            error = [[OEPocketsphinxController sharedInstance] stopListening];
            if(error) NSLog(@"Error while stopping listening in testRecognitionCompleted: %@", error);
        }
        
    }
    /** Pocketsphinx couldn't start because it has no mic permissions (will only be returned on iOS7 or later).*/
    - (void) pocketsphinxFailedNoMicPermissions {
        NSLog(@"Local callback: The user has never set mic permissions or denied permission to this app's mic, so listening will not start.");
        self.startupFailedDueToLackOfPermissions = TRUE;
        if([OEPocketsphinxController sharedInstance].isListening){
            NSError *error = [[OEPocketsphinxController sharedInstance] stopListening]; // Stop listening if we are listening.
            if(error) NSLog(@"Error while stopping listening in micPermissionCheckCompleted: %@", error);
        }
    }
    
    /** The user prompt to get mic permissions, or a check of the mic permissions, has completed with a TRUE or a FALSE result  (will only be returned on iOS7 or later).*/
    - (void) micPermissionCheckCompleted:(BOOL)result {
        if(result) {
            self.restartAttemptsDueToPermissionRequests++;
            if(self.restartAttemptsDueToPermissionRequests == 1 && self.startupFailedDueToLackOfPermissions) { // If we get here because there was an attempt to start which failed due to lack of permissions, and now permissions have been requested and they returned true, we restart exactly once with the new permissions.
    
                if(![OEPocketsphinxController sharedInstance].isListening) { // If there was no error and we aren't listening, start listening.
                    [[OEPocketsphinxController sharedInstance] 
                     startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel 
                     dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary 
                     acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] 
                     languageModelIsJSGF:FALSE]; // Start speech recognition.
                    
                    self.startupFailedDueToLackOfPermissions = FALSE;
                }
            }
        }
    }
    
    #pragma mark -
    #pragma mark UI
    
    // This is not OpenEars-specific stuff, just some UI behavior
    
    - (IBAction) suspendListeningButtonAction { // This is the action for the button which suspends listening without ending the recognition loop
        [[OEPocketsphinxController sharedInstance] suspendRecognition];	
        
        self.startButton.hidden = TRUE;
        self.stopButton.hidden = FALSE;
        self.suspendListeningButton.hidden = TRUE;
        self.resumeListeningButton.hidden = FALSE;
    }
    
    - (IBAction) resumeListeningButtonAction { // This is the action for the button which resumes listening if it has been suspended
        [[OEPocketsphinxController sharedInstance] resumeRecognition];
        
        self.startButton.hidden = TRUE;
        self.stopButton.hidden = FALSE;
        self.suspendListeningButton.hidden = FALSE;
        self.resumeListeningButton.hidden = TRUE;	
    }
    
    - (IBAction) stopButtonAction { // This is the action for the button which shuts down the recognition loop.
        NSError *error = nil;
        if([OEPocketsphinxController sharedInstance].isListening) { // Stop if we are currently listening.
            error = [[OEPocketsphinxController sharedInstance] stopListening];
            if(error)NSLog(@"Error stopping listening in stopButtonAction: %@", error);
        }
        self.startButton.hidden = FALSE;
        self.stopButton.hidden = TRUE;
        self.suspendListeningButton.hidden = TRUE;
        self.resumeListeningButton.hidden = TRUE;
    }
    
    - (IBAction) startButtonAction { // This is the action for the button which starts up the recognition loop again if it has been shut down.
        if(![OEPocketsphinxController sharedInstance].isListening) {
            [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"] languageModelIsJSGF:FALSE]; // Start speech recognition if we aren't already listening.
        }
        self.startButton.hidden = TRUE;
        self.stopButton.hidden = FALSE;
        self.suspendListeningButton.hidden = FALSE;
        self.resumeListeningButton.hidden = TRUE;
    }
    
    #pragma mark -
    #pragma mark Example for reading out Pocketsphinx and Flite audio levels without locking the UI by using an NSTimer
    
    // What follows are not OpenEars methods, just an approach for level reading
    // that I've included with this sample app. My example implementation does make use of two OpenEars
    // methods:	the pocketsphinxInputLevel method of OEPocketsphinxController and the fliteOutputLevel
    // method of OEFliteController. 
    //
    // The example is meant to show one way that you can read those levels continuously without locking the UI, 
    // by using an NSTimer, but the OpenEars level-reading methods 
    // themselves do not include multithreading code since I believe that you will want to design your own 
    // code approaches for level display that are tightly-integrated with your interaction design and the  
    // graphics API you choose. 
    // 
    // Please note that if you use my sample approach, you should pay attention to the way that the timer is always stopped in
    // dealloc. This should prevent you from having any difficulties with deallocating a class due to a running NSTimer process.
    
    - (void) startDisplayingLevels { // Start displaying the levels using a timer
        [self stopDisplayingLevels]; // We never want more than one timer valid so we'll stop any running timers first.
        self.uiUpdateTimer = [NSTimer scheduledTimerWithTimeInterval:1.0/kLevelUpdatesPerSecond target:self selector:@selector(updateLevelsUI) userInfo:nil repeats:YES];
    }
    
    - (void) stopDisplayingLevels { // Stop displaying the levels by stopping the timer if it's running.
        if(self.uiUpdateTimer && [self.uiUpdateTimer isValid]) { // If there is a running timer, we'll stop it here.
            [self.uiUpdateTimer invalidate];
            self.uiUpdateTimer = nil;
        }
    }
    
    - (void) updateLevelsUI { // And here is how we obtain the levels.  This method includes the actual OpenEars methods and uses their results to update the UI of this view controller.
        
        self.pocketsphinxDbLabel.text = [NSString stringWithFormat:@"Pocketsphinx Input level:%f",[[OEPocketsphinxController sharedInstance] pocketsphinxInputLevel]];  //pocketsphinxInputLevel is an OpenEars method of the class OEPocketsphinxController.
        
        if(self.fliteController.speechInProgress) {
            self.fliteDbLabel.text = [NSString stringWithFormat:@"Flite Output level: %f",[self.fliteController fliteOutputLevel]]; // fliteOutputLevel is an OpenEars method of the class OEFliteController.
        }
    }
    
    @end
    
    in reply to: Error in generateLanguageModelFromArray call #1029985
    maxgarmar
    Participant

    Hi Halle,

As you suggested, the next step was to try it on my devices. The same thing happened:

– I change my iPad language to English, so my app uses AcousticModelEnglish.
– Then I load 10 Spanish words into it.
– I open the viewController; the first time everything is OK.
– I open the viewController a second time and the app crashes, on the same line as in the Simulator:

    err = [lmGenerator generateLanguageModelFromArray:words withFilesNamed:@"AcousticModelEnglish" forAcousticModelAtPath:[OEAcousticModel pathToModel:voiceLanguage]];

If I use AcousticModelSpanish and load English words into it, there is no problem recreating the viewController as many times as I want.

    Device: iPad mini 3 with iOS 9.2.1
Let me know if you need anything from me to help you track down the error.

    Thanks

    Maxi

    in reply to: Error in generateLanguageModelFromArray call #1029961
    maxgarmar
    Participant

    Sorry, I don’t know the cause of that. You can troubleshoot it more by testing with the default English acoustic model that ships with OpenEars 2.5 rather than a custom one, by testing using real devices only, and by testing against other unknown words.

I am using the English acoustic model shipped with OpenEars 2.5; the model name is just held in a variable (voiceLanguage) that I initialize depending on the device language.

    What OpenEars initialization are you referring to specifically? The issue you have seems to only relate to OELanguageModelGenerator, but OEPocketsphinxController no longer is really initialized per se, since it just has a shared object. I would take a look at the way things are set up in the sample app and compare it to your app to make sure there isn’t any unnecessary or out-of-date code as another troubleshooting step.

I compared my code with the sample app and the steps are the same.

    It might be a good idea to look at the logging for this behavior when you stop the engine before the view controller is dismissed to see if it is able to shut down cleanly.

I am stopping the engine with the stopListening method, but I don’t think that releases the OELanguageModelGenerator variable.
Is there any way to release or clean up the OELanguageModelGenerator before dismissing the viewController?
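(Just to clarify what I mean by releasing it: under ARC I assume it would be something like this minimal sketch, where lmGenerator is the strong property from my snippets further down this page.)

    // Sketch only: drop the strong reference so ARC can deallocate the generator
    // when the view controller is dismissed.
    - (void)viewWillDisappear:(BOOL)animated {
        [super viewWillDisappear:animated];
        self.lmGenerator = nil; // ARC releases it once nothing else retains it.
    }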
    By the way, I just saw something strange:

When the problem was happening I was loading Spanish words into AcousticModelEnglish (only because I am testing the app; in the future the dictionary should contain only English words), but if I change the Spanish words to 3 English words, I can reload the viewController as many times as I want without crashing.
Interestingly enough, if I load English words into AcousticModelSpanish the crash does not happen either. Does that help you narrow down the problem?

As I told you, the dictionary will contain English or Spanish words depending on the country, and it will match the acoustic model accordingly. What I am afraid of is that if the dictionary contains some strange English word (because the user typed it incorrectly), the error will happen again and the app will never run, because it will crash every time the viewController is loaded.

    Thank you

    in reply to: Error in generateLanguageModelFromArray call #1029960
    maxgarmar
    Participant

I should add: the error happens on all the simulators.
You can reproduce it like this:

Create a viewController which is loaded from another one.
Initialize OpenEars in viewDidLoad every time the view is created, like this:

        // TODO: fix
            lmGenerator = [[OELanguageModelGenerator alloc] init];
            
            fliteController = [[OEFliteController alloc] init];
            
            [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil];
            
            
        //[OEPocketsphinxController sharedInstance].verbosePocketSphinx = TRUE;
        [OELogging startOpenEarsLogging];
        //lmGenerator.verboseLanguageModelGenerator = TRUE;
        
            [[OEPocketsphinxController sharedInstance] setSecondsOfSilenceToDetect:0.7];
            
        if([voiceLanguage isEqualToString:@"AcousticModelSpanish"]){
            // Noise suppression for Spanish
            if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad)
            {
                [[OEPocketsphinxController sharedInstance] setVadThreshold:2.0];
            }
            else
            {
                [[OEPocketsphinxController sharedInstance] setVadThreshold:3.5];
            }
        }else{
            // Noise suppression for English
            if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad)
            {
                [[OEPocketsphinxController sharedInstance] setVadThreshold:2.0];
            }
            else
            {
                [[OEPocketsphinxController sharedInstance] setVadThreshold:3.5];
            }
        }
            
            //NSError* err=[self reloadAcousticModel];
            
            NSMutableArray *words = [[NSMutableArray alloc]init];
            NSMutableDictionary *listProduct = [[NSMutableDictionary alloc]init];
            
            [listProduct setObject:@"AGUA" forKey:@"1"];
            [listProduct setObject:@"SAL" forKey:@"2"];
            [listProduct setObject:@"COCACOLA" forKey:@"3"];
            [listProduct setObject:@"DETERGENTE" forKey:@"4"];
            [listProduct setObject:@"PEGAMENTO" forKey:@"5"];
            [listProduct setObject:@"ZUMO NARANJA" forKey:@"6"];
            [listProduct setObject:@"PAPEL HIGIENICO" forKey:@"7"];
            [listProduct setObject:@"ZUMO PIÑA" forKey:@"8"];
            [listProduct setObject:@"ZUMO MELOCOTON" forKey:@"9"];
            [listProduct setObject:@"ZUMO PERA" forKey:@"10"];
            [listProduct setObject:@"CEMENTO" forKey:@"11"];
            
            for (NSString* key in listProduct) {
                NSString* value =[listProduct objectForKey:key];
                NSError *error = nil;
                NSRegularExpression *regex = [NSRegularExpression regularExpressionWithPattern:@"[-*/_;.()',+]" options:NSRegularExpressionCaseInsensitive error:&error];
                NSString *modifiedString = [regex stringByReplacingMatchesInString:value options:0 range:NSMakeRange(0, [value length]) withTemplate:@" "];
                [words addObject: [modifiedString uppercaseString]];
            }
            //    NSArray* palabras = [[NSArray alloc]initWithArray:words];
            
            NSString *name = @"NameIWantForMyLanguageModelFiles";
            NSError *err;
            
            if([words count]>0){
                err = [lmGenerator generateLanguageModelFromArray:words withFilesNamed:name forAcousticModelAtPath:[OEAcousticModel pathToModel:voiceLanguage]]; // Change "AcousticModelEnglish" to "AcousticModelSpanish" to create a Spanish language model instead of an English one.
        }else{
            [words addObject:@"VACIO"];
            err = [lmGenerator generateLanguageModelFromArray:words withFilesNamed:name forAcousticModelAtPath:[OEAcousticModel pathToModel:voiceLanguage]]; // Change "AcousticModelEnglish" to "AcousticModelSpanish" to create a Spanish language model instead of an English one.
        }
        
        // Call this once before setting properties of the OEPocketsphinxController instance.
        
        if([err code] == noErr) {
            self.pathToFirstDynamicallyGeneratedLanguageModel = [lmGenerator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:@"NameIWantForMyLanguageModelFiles"];
            self.pathToFirstDynamicallyGeneratedDictionary = [lmGenerator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"NameIWantForMyLanguageModelFiles"];
        } else {
            NSLog(@"Error: %@",[err localizedDescription]);
        }
        
        /// END OpenEars
        
        [self.openEarsEventsObserver setDelegate:self];
        
        // ******** starting OpenEars ********
    

    Xcode Version 7.3 (7D175)

I hope this helps.

    Maxi

    in reply to: Error in generateLanguageModelFromArray call #1029958
    maxgarmar
    Participant

    Hi Halle,

That’s not what I meant. I have just one viewController; every time it is created, viewDidLoad runs and calls the OpenEars initialization. So there aren’t two viewControllers using it, just one that is dismissed and then created again when I need it. Perhaps this explanation helps.
Another point:
The device has iOS 9 and the simulator is iOS 8, although I also tried the same iPad 2 simulator with iOS 9.3 and it fails as well.

    Thank you

    in reply to: OpenEars support for Apple Watch? #1029731
    maxgarmar
    Participant

    Hi all,

Halle, how is it going? Will there be a version for the Watch soon?

Looking forward to it, because I am back on my project.

    Thanks

    in reply to: OpenEars support for Apple Watch? #1026166
    maxgarmar
    Participant

    Hi Halle and Rikk,

I am Maxi; perhaps you remember me from other topics here in the forum. I am sorry to jump into this conversation, but I am also really interested in OpenEars for watchOS, and now with watchOS 2 I think it could be possible, because the microphone and other sensors are now open. It would be awesome.
    What do you think?

    Thanks as usual

    in reply to: Accuracy with just one word in the dictionary #1025338
    maxgarmar
    Participant

    Hi Halle,

Trying to create the dictionary with a grammar like this:

     NSDictionary *grammar = @{
                                          ThisWillBeSaidOnce : @[
                                                  @{ OneOfTheseWillBeSaidOnce : @[@"CREATE A TASK", @"CREATE A TASK FOR", @"TASK FOR"]},
                                                  @{ OneOfTheseWillBeSaidOnce : @[@"YURIY", @"JOHN", @"ANN"]},
                                                  ]
                                          };
                
            [dictRecognition setObject:array forKey:@"OneOfTheseCanBeSaidOnce"];
            
            //err = [lmGenerator generateLanguageModelFromArray:words withFilesNamed:name forAcousticModelAtPath:[OEAcousticModel pathToModel:voiceLanguage]]; // Change "AcousticModelEnglish" to "AcousticModelSpanish" to create a Spanish language model instead of an English one.
            
            err = [lmGenerator generateGrammarFromDictionary:grammar withFilesNamed:name forAcousticModelAtPath:[OEAcousticModel pathToModel:voiceLanguage]];
        }
        
        // Call this once before setting properties of the OEPocketsphinxController instance.
        
        if([err code] == noErr) {
            self.pathToFirstDynamicallyGeneratedLanguageModel = [lmGenerator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:@"NameIWantForMyLanguageModelFiles"];
            self.pathToFirstDynamicallyGeneratedDictionary = [lmGenerator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"NameIWantForMyLanguageModelFiles"];
            
            NSLog(@"path1: %@", self.pathToFirstDynamicallyGeneratedLanguageModel);
            NSLog(@"path2: %@", self.pathToFirstDynamicallyGeneratedDictionary);
        } else {
            NSLog(@"Error: %@",[err localizedDescription]);
        }
        
        /// END OpenEars
        
        [self.openEarsEventsObserver setDelegate:self];
        
        if(![OEPocketsphinxController sharedInstance].isListening){
            [self.pocketsphinxController startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:voiceLanguage] languageModelIsJSGF:TRUE];
        }
    

    I get this error:

    2015-04-07 14:46:13.783 QuickCart[8478:256415] Error: you have invoked the method:

    startListeningWithLanguageModelAtPath:(NSString *)languageModelPath dictionaryAtPath:(NSString *)dictionaryPath acousticModelAtPath:(NSString *)acousticModelPath languageModelIsJSGF:(BOOL)languageModelIsJSGF

    with a languageModelPath which is nil. If your call to OELanguageModelGenerator did not return an error when you generated this language model, that means the correct path to your language model that you should pass to this method’s languageModelPath argument is as follows:

NSString *correctPathToMyLanguageModelFile = [NSString stringWithFormat:@"%@/TheNameIChoseForMyLanguageModelAndDictionaryFile.%@",[NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES) objectAtIndex:0],@"DMP"];

    Feel free to copy and paste this code for your path to your language model, but remember to replace the part that says “TheNameIChoseForMyLanguageModelAndDictionaryFile” with the name you actually chose for your language model and dictionary file or you will get this error again.

What am I doing wrong? I also tried setting languageModelIsJSGF to FALSE, but it does not help.
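The only extra thing I can do on my side is guard against the nil paths before calling startListening, so at least the problem shows up in the log; a diagnostic sketch, not a fix (the name requested here must match whatever I pass to withFilesNamed:):

    // Diagnostic sketch only: surface the nil path instead of hitting the error above.
    if(self.pathToFirstDynamicallyGeneratedLanguageModel == nil || self.pathToFirstDynamicallyGeneratedDictionary == nil) {
        NSLog(@"A generated path is nil; check that the requested name matches the name passed to withFilesNamed:.");
    } else if(![OEPocketsphinxController sharedInstance].isListening) {
        [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:voiceLanguage] languageModelIsJSGF:TRUE];
    }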

    Thanks

    in reply to: Accuracy with just one word in the dictionary #1025289
    maxgarmar
    Participant

    Hi Halle,

    Thanks for the answer.
Ok, I will think about using Rejecto, but what do you mean by creating a grammar? How can I create one?

    Thanks

    maxgarmar
    Participant

    ok

    maxgarmar
    Participant

But then, when should I call suspendRecognition if I don’t want my app listening when it starts?
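For example, would the right place be the callback that fires when listening starts? A sketch of what I have in mind, using the OEEventsObserver delegate method from the sample app:

    // Sketch only: suspend recognition as soon as the listening loop reports it has started,
    // so the app is not actively recognizing right at launch.
    - (void) pocketsphinxDidStartListening {
        NSLog(@"Pocketsphinx is now listening; suspending until needed.");
        [[OEPocketsphinxController sharedInstance] suspendRecognition];
    }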

    in reply to: [Resolved] generateLanguageModelFromArray problem #1024214
    maxgarmar
    Participant

The problem was that in viewDidLoad I was instantiating OELanguageModelGenerator (as a class property). Later on, when a new word was added and the model for the word list had to be refreshed, I was instantiating that variable once again, and then the problem came up when calling generateLanguageModelFromArray. This only happens with the English acoustic model; the Spanish one is not affected.

    in reply to: [Resolved] generateLanguageModelFromArray problem #1024211
    maxgarmar
    Participant

Ok, sorry for that senseless word; I meant that it is crazy to create a new language model with a different name for every word the user adds in the app. Anyway, I don’t know how I ran the test before, but I did not do it correctly, because you are right about:

    3. Does it happen if you make OELanguageModelGenerator a property of your view controller and alloc/init it in viewDidLoad (or similar) rather than using it as a method variable?

If the OELanguageModelGenerator is created just once in viewDidLoad and not allocated again, it does not fail.
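In other words, something like this is what works for me now (a minimal sketch; regenerateModelWithWords: is just a hypothetical helper name, the rest matches my earlier snippets):

    // Sketch only: allocate the generator once and reuse the same instance every time the
    // word list changes, instead of calling alloc/init again for each regeneration.
    // Property declared in the class extension:
    // @property (nonatomic, strong) OELanguageModelGenerator *lmGenerator;
    
    - (void)viewDidLoad {
        [super viewDidLoad];
        self.lmGenerator = [[OELanguageModelGenerator alloc] init]; // allocated once only
    }
    
    // Hypothetical helper called whenever a word is added:
    - (void)regenerateModelWithWords:(NSArray *)words {
        NSError *err = [self.lmGenerator generateLanguageModelFromArray:words withFilesNamed:@"NameIWantForMyLanguageModelFiles" forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
        if([err code] != noErr) NSLog(@"Error regenerating the model: %@", [err localizedDescription]);
    }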

    Many thanks again, Halle. I hope I don’t have to disturb you anymore

You can also close this thread.

    in reply to: [Resolved] generateLanguageModelFromArray problem #1024209
    maxgarmar
    Participant

But the problem is that there are not just two models. I am recreating the model any time a new word is added, either from user input or coming from iCloud. The models grow with the users’ input in the application; they are not fixed.
So in that case I would need a new name for every word the user adds.
That’s what I meant when I said “crazy”.

Could you guide me to a workaround or something?

    in reply to: [Resolved] generateLanguageModelFromArray problem #1024206
    maxgarmar
    Participant

Ok, answering:

1. I can’t test that, because I need all of the words again; it does not make sense to generate a second model with a word missing. What’s more, every time I run the app the problematic word changes. I mean, it is not always CALAMARES, it is a different word every time.
2. No; if I use a different name it runs without an exception. But generating new names every time is crazy.
3. I tried making it a property of my ViewController and setting it only in viewDidLoad, but that does not help/solve it.
4. I don’t see anything strange causing this. In addition, as I told you, it does not happen with the Spanish language, and I did this before with the 1.x version and it ran well in English.

    Thanks

    in reply to: [Resolved] generateLanguageModelFromArray problem #1024203
    maxgarmar
    Participant

    Here it goes,
    [749:161996] Starting dynamic language model generation
    ## Vocab generated by v2 of the CMU-Cambridge Statistcal
    ## Language Modeling toolkit.
    ##
    ## Includes 21 words ##
    wfreq2vocab : Done.
    text2idngram
    Vocab : /var/mobile/Containers/Data/Application/461A3153-5928-4041-832A-D3F5AD119C33/Library/Caches/NameIWantForMyLanguageModelFiles.vocab
    Output idngram : /var/mobile/Containers/Data/Application/461A3153-5928-4041-832A-D3F5AD119C33/Library/Caches/NameIWantForMyLanguageModelFiles.idngram
    N-gram buffer size : 10
    Hash table size : 5000
    Temp directory : /var/mobile/Containers/Data/Application/461A3153-5928-4041-832A-D3F5AD119C33/Library/Caches/cmuclmtk-Uoiyta
    Max open files : 20
    FOF size : 10
    n : 3
    Initialising hash table…
    Reading vocabulary…
    Allocating memory for the n-gram buffer…
    Reading text into the n-gram buffer…
    20,000 n-grams processed for each “.”, 1,000,000 for each line.

    Sorting n-grams…
    Writing sorted n-grams to temporary file /var/mobile/Containers/Data/Application/461A3153-5928-4041-832A-D3F5AD119C33/Library/Caches/cmuclmtk-Uoiyta/1
    Merging 1 temporary files…

    2-grams occurring: N times > N times Sug. -spec_num value
    0 35 45
    1 31 4 14
    2 3 1 11
    3 0 1 11
    4 0 1 11
    5 0 1 11
    6 0 1 11
    7 0 1 11
    8 0 1 11
    9 0 1 11
    10 0 1 11

    3-grams occurring: N times > N times Sug. -spec_num value
    0 50 60
    1 48 2 12
    2 2 0 10
    3 0 0 10
    4 0 0 10
    5 0 0 10
    6 0 0 10
    7 0 0 10
    8 0 0 10
    9 0 0 10
    10 0 0 10
    text2idngram : Done.

    read_wlist_into_siht: a list of 21 words was read from “/var/mobile/Containers/Data/Application/461A3153-5928-4041-832A-D3F5AD119C33/Library/Caches/NameIWantForMyLanguageModelFiles.vocab”.
    read_wlist_into_array: a list of 21 words was read from “/var/mobile/Containers/Data/Application/461A3153-5928-4041-832A-D3F5AD119C33/Library/Caches/NameIWantForMyLanguageModelFiles.vocab”.
    Unigram was renormalized to absorb a mass of 0.463415
    prob[UNK] = 1e-99
    ARPA-style 3-gram will be written to /var/mobile/Containers/Data/Application/461A3153-5928-4041-832A-D3F5AD119C33/Library/Caches/NameIWantForMyLanguageModelFiles.arpa
    idngram2lm : Done.
    INFO: cmd_ln.c(702): Parsing command line:
    sphinx_lm_convert \
    -i /var/mobile/Containers/Data/Application/461A3153-5928-4041-832A-D3F5AD119C33/Library/Caches/NameIWantForMyLanguageModelFiles.arpa \
    -o /var/mobile/Containers/Data/Application/461A3153-5928-4041-832A-D3F5AD119C33/Library/Caches/NameIWantForMyLanguageModelFiles.DMP \
    -debug 10

    Current configuration:
    [NAME] [DEFLT] [VALUE]
    -case
    -debug 10
    -help no no
    -i /var/mobile/Containers/Data/Application/461A3153-5928-4041-832A-D3F5AD119C33/Library/Caches/NameIWantForMyLanguageModelFiles.arpa
    -ienc
    -ifmt
    -logbase 1.0001 1.000100e+00
    -mmap no no
    -o /var/mobile/Containers/Data/Application/461A3153-5928-4041-832A-D3F5AD119C33/Library/Caches/NameIWantForMyLanguageModelFiles.DMP
    -oenc utf8 utf8
    -ofmt

    INFO: ngram_model_arpa.c(504): ngrams 1=21, 2=34, 3=21
    INFO: ngram_model_arpa.c(137): Reading unigrams
    INFO: ngram_model_arpa.c(543): 21 = #unigrams created
    INFO: ngram_model_arpa.c(197): Reading bigrams
    INFO: ngram_model_arpa.c(561): 34 = #bigrams created
    INFO: ngram_model_arpa.c(562): 6 = #prob2 entries
    INFO: ngram_model_arpa.c(570): 3 = #bo_wt2 entries
    INFO: ngram_model_arpa.c(294): Reading trigrams
    INFO: ngram_model_arpa.c(583): 21 = #trigrams created
    INFO: ngram_model_arpa.c(584): 4 = #prob3 entries
    INFO: ngram_model_dmp.c(518): Building DMP model…
    INFO: ngram_model_dmp.c(548): 21 = #unigrams created
    INFO: ngram_model_dmp.c(649): 34 = #bigrams created
    INFO: ngram_model_dmp.c(650): 6 = #prob2 entries
    INFO: ngram_model_dmp.c(657): 3 = #bo_wt2 entries
    INFO: ngram_model_dmp.c(661): 21 = #trigrams created
    INFO: ngram_model_dmp.c(662): 4 = #prob3 entries
    2015-01-13 13:52:16.863[749:161996] Done creating language model with CMUCLMTK in 0.106283 seconds.
    2015-01-13 13:52:16.928[749:161996] The word CALAMARES was not found in the dictionary /private/var/mobile/Containers/Bundle/Application/057D1FCC-0E5A-42AE-B064-B8524A54A8A2//AcousticModelEnglish.bundle/LanguageModelGeneratorLookupList.text/LanguageModelGeneratorLookupList.text.
    2015-01-13 13:52:16.928[749:161996] Now using the fallback method to look up the word CALAMARES
    2015-01-13 13:52:16.928[749:161996] If this is happening more frequently than you would expect, the most likely cause for it is since you are using the English phonetic lookup dictionary is that your words are not in English or aren’t dictionary words, or that you are submitting the words in lowercase when they need to be entirely written in uppercase. This can also happen if you submit words with punctuation attached – consider removing punctuation from language models or grammars you create before submitting them.
    2015-01-13 13:52:16.928[749:161996] Using convertGraphemes for the word or phrase CALAMARES which doesn’t appear in the dictionary
    (lldb) bt
    * thread #1: tid = 0x278cc, 0x000000010019e000`feat_copy_into + 24, queue = ‘com.apple.main-thread’, stop reason = EXC_BAD_ACCESS (code=1, address=0x0)
    frame #0: 0x000000010019e000`feat_copy_into + 24
    frame #1: 0x00000001001abeb8`utt_init + 32
    frame #2: 0x000000010019a59c`flite_synth_text + 52
    frame #3: 0x000000010019a3bc `___lldb_unnamed_function417$$ + 128
    frame #4: 0x00000001001af090 `___lldb_unnamed_function551$$ + 1384
    frame #5: 0x00000001001b1100 `___lldb_unnamed_function564$$ + 468
    frame #6: 0x00000001001b098c `___lldb_unnamed_function559$$ + 560
    * frame #7: 0x000000010011b194 `-[MRSCViewController openEarsRefreshProduct](self=0x000000015551cee0, _cmd=0x000000010042265a) + 1648 at MRSCViewController.m:1969
    frame #8: 0x0000000100119ec4 `-[MRSCViewController refreshTableNotif:](self=0x000000015551cee0, _cmd=0x0000000100421e98, notification=0x0000000170056aa0) + 4616 at MRSCViewController.m:1850
    frame #9: 0x0000000182f801e0 CoreFoundation`__CFNOTIFICATIONCENTER_IS_CALLING_OUT_TO_AN_OBSERVER__ + 20
    frame #10: 0x0000000182ebf370 CoreFoundation`_CFXNotificationPost + 2060
    frame #11: 0x0000000183dbacc0 Foundation`-[NSNotificationCenter postNotificationName:object:userInfo:] + 72
    frame #12: 0x0000000100123f98 `-[MRSCAppDelegate storeDidChange:](self=0x0000000170056bc0, _cmd=0x0000000100422d55, notification=0x0000000174245850) + 2000 at MRSCAppDelegate.m:265
    frame #13: 0x0000000182f801e0 CoreFoundation`__CFNOTIFICATIONCENTER_IS_CALLING_OUT_TO_AN_OBSERVER__ + 20
    frame #14: 0x0000000182ebf370 CoreFoundation`_CFXNotificationPost + 2060
    frame #15: 0x0000000183dbacc0 Foundation`-[NSNotificationCenter postNotificationName:object:userInfo:] + 72
    frame #16: 0x0000000183f7a148 Foundation`-[NSUbiquitousKeyValueStore _postDidChangeNotificationExternalChanges:sourceChangeCount:] + 396
    frame #17: 0x0000000183f7a538 Foundation`__53-[NSUbiquitousKeyValueStore _syncConcurrentlyForced:]_block_invoke_2 + 256
    frame #18: 0x0000000100518e30 libdispatch.dylib`_dispatch_call_block_and_release + 24
    frame #19: 0x0000000100518df0 libdispatch.dylib`_dispatch_client_callout + 16
    frame #20: 0x000000010051d75c libdispatch.dylib`_dispatch_main_queue_callback_4CF + 1056
    frame #21: 0x0000000182f916a0 CoreFoundation`__CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ + 12
    frame #22: 0x0000000182f8f748 CoreFoundation`__CFRunLoopRun + 1492
    frame #23: 0x0000000182ebd1f4 CoreFoundation`CFRunLoopRunSpecific + 396
    frame #24: 0x000000018c00f5a4 GraphicsServices`GSEventRunModal + 168
    frame #25: 0x00000001877ee784 UIKit`UIApplicationMain + 1488
    frame #26: 0x00000001001229e0 `main(argc=1, argv=0x000000016fd13a70) + 116 at main.m:16
    frame #27: 0x000000019404aa08 libdyld.dylib`start + 4
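
    Side note on the fallback warning in the log above: a small sketch of how words could be normalized to uppercase, punctuation-free strings before being submitted to generateLanguageModelFromArray, so the fast English lookup list is used instead of the fallback grapheme converter. rawWordsFromUserInput is a placeholder; this is not code from the crashing app.

    // Keep only uppercase A-Z letters and spaces; drop punctuation and digits.
    NSMutableArray *cleanedWords = [NSMutableArray array];
    NSCharacterSet *allowed = [NSCharacterSet characterSetWithCharactersInString:@"ABCDEFGHIJKLMNOPQRSTUVWXYZ "];
    for(NSString *word in rawWordsFromUserInput) { // rawWordsFromUserInput is a placeholder word source
        NSString *upper = [word uppercaseString]; // the English lookup list expects uppercase entries
        NSString *stripped = [[upper componentsSeparatedByCharactersInSet:[allowed invertedSet]] componentsJoinedByString:@""];
        if(stripped.length > 0) [cleanedWords addObject:stripped];
    }
    // cleanedWords can then be passed to generateLanguageModelFromArray:withFilesNamed:forAcousticModelAtPath:.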

    maxgarmar
    Participant

    Not for this; I am already using it, so I just wanted to improve it as much as I can.
    I think we can close this thread (although I will open a new one for a different problem :D).

    Thanks for the help and support.

    maxgarmar
    Participant

    Which language are you talking about, Halle?
    I saw in the changelog that there are many improvements in 2.03; I will test it soon.

    Thanks

    maxgarmar
    Participant

    OK Halle, now I get it. Anyway, thanks so much for increasing that value to improve the response to noise with the Spanish model. But now I have a question.
    Regarding:

    I did notice that it looks a bit like for Spanish recognition, the vadThreshold values would be more useful if they went higher as you requested earlier. In 2.01 I had similar results to 1.70 when I used a vadThreshold of 4.4 or 4.3, which more similar to lower values with the English model (although I had better accuracy with 2.01)

    I would like to know, then, what the “lower” values for the English model would be to make it behave like Spanish. I mean, if 4.4 or 4.3 works for Spanish, what do you think is best for English? My app also works in English, so it would help me.

    Thanks, and I will give you feedback about my test of 2.0.2.
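
    For reference, a sketch of how a per-language vadThreshold could be wired up once a good English value is known; usingSpanishModel is a placeholder flag, 4.3 comes from the discussion above, and the English number below is only an assumed starting point to be tuned by testing, not a recommendation from this thread.

    // Pick the threshold for the acoustic model in use before starting to listen.
    float vadThreshold = usingSpanishModel ? 4.3f : 2.5f; // 2.5 for English is an assumption, not a tested value
    [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil]; // activate before setting properties
    [[OEPocketsphinxController sharedInstance] setVadThreshold:vadThreshold];
    [[OEPocketsphinxController sharedInstance] setSecondsOfSilenceToDetect:0.5];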

    maxgarmar
    Participant

    Well ok, let’s do it again.

    1. The devices and versions are the same. I simply have my app built with 1.7 installed, and also your sample app taken directly from the 2.0.1 download, on my iPhone 5 running iOS 7.0.1.
    Sorry, but in my app I could not capture any recording values, because it is in production testing and I can’t modify the code. But I can tell you that it is not sensitive to noise the way 2.0.1 is.
    Anyway, your phrase “a vadThreshold as high as 3.5 will suppress actual user speech in testing” is enough to tell me that something is really wrong with 2.0.1, because here is my test and you will see the results:

    2.

    //  ViewController.m
    //  OpenEarsSampleApp
    //
    //  ViewController.m demonstrates the use of the OpenEars framework.
    //
    //  Copyright Politepix UG (haftungsbeschränkt) 2014. All rights reserved.
    //  https://www.politepix.com
    //  Contact at https://www.politepix.com/contact
    //
    //  This file is licensed under the Politepix Shared Source license found in the root of the source distribution.
    
    // **************************************************************************************************************************************************************
    // **************************************************************************************************************************************************************
    // **************************************************************************************************************************************************************
    // IMPORTANT NOTE: Audio driver and hardware behavior is completely different between the Simulator and a real device. It is not informative to test OpenEars' accuracy on the Simulator, and please do not report Simulator-only bugs since I only actively support
    // the device driver. Please only do testing/bug reporting based on results on a real device such as an iPhone or iPod Touch. Thanks!
    // **************************************************************************************************************************************************************
    // **************************************************************************************************************************************************************
    // **************************************************************************************************************************************************************
    
    #import "ViewController.h"
    #import <OpenEars/OEPocketsphinxController.h>
    #import <OpenEars/OEFliteController.h>
    #import <OpenEars/OELanguageModelGenerator.h>
    #import <OpenEars/OELogging.h>
    #import <OpenEars/OEAcousticModel.h>
    #import <Slt/Slt.h>
    
    @interface ViewController()
    
    // UI actions, not specifically related to OpenEars other than the fact that they invoke OpenEars methods.
    - (IBAction) stopButtonAction;
    - (IBAction) startButtonAction;
    - (IBAction) suspendListeningButtonAction;
    - (IBAction) resumeListeningButtonAction;
    
    // Example for reading out the input audio levels without locking the UI using an NSTimer
    
    - (void) startDisplayingLevels;
    - (void) stopDisplayingLevels;
    
    // These three are the important OpenEars objects that this class demonstrates the use of.
    @property (nonatomic, strong) Slt *slt;
    
    @property (nonatomic, strong) OEEventsObserver *openEarsEventsObserver;
    @property (nonatomic, strong) OEPocketsphinxController *pocketsphinxController;
    @property (nonatomic, strong) OEFliteController *fliteController;
    
    // Some UI, not specifically related to OpenEars.
    @property (nonatomic, strong) IBOutlet UIButton *stopButton;
    @property (nonatomic, strong) IBOutlet UIButton *startButton;
    @property (nonatomic, strong) IBOutlet UIButton *suspendListeningButton;
    @property (nonatomic, strong) IBOutlet UIButton *resumeListeningButton;
    @property (nonatomic, strong) IBOutlet UITextView *statusTextView;
    @property (nonatomic, strong) IBOutlet UITextView *heardTextView;
    @property (nonatomic, strong) IBOutlet UILabel *pocketsphinxDbLabel;
    @property (nonatomic, strong) IBOutlet UILabel *fliteDbLabel;
    @property (nonatomic, assign) BOOL usingStartingLanguageModel;
    @property (nonatomic, assign) int restartAttemptsDueToPermissionRequests;
    @property (nonatomic, assign) BOOL startupFailedDueToLackOfPermissions;
    
    // Things which help us show off the dynamic language features.
    @property (nonatomic, copy) NSString *pathToFirstDynamicallyGeneratedLanguageModel;
    @property (nonatomic, copy) NSString *pathToFirstDynamicallyGeneratedDictionary;
    @property (nonatomic, copy) NSString *pathToSecondDynamicallyGeneratedLanguageModel;
    @property (nonatomic, copy) NSString *pathToSecondDynamicallyGeneratedDictionary;
    
    // Our NSTimer that will help us read and display the input and output levels without locking the UI
    @property (nonatomic, strong) 	NSTimer *uiUpdateTimer;
    
    @end
    
    @implementation ViewController
    
    #define kLevelUpdatesPerSecond 18 // We'll have the ui update 18 times a second to show some fluidity without hitting the CPU too hard.
    
    #define kGetNbest // Uncomment this if you want to try out nbest
    #pragma mark -
    #pragma mark Memory Management
    
    - (void)dealloc {
        [self stopDisplayingLevels];
    }
    
    #pragma mark -
    #pragma mark View Lifecycle
    
    - (void)viewDidLoad {
        [super viewDidLoad];
        self.fliteController = [[OEFliteController alloc] init];
        self.openEarsEventsObserver = [[OEEventsObserver alloc] init];
        self.openEarsEventsObserver.delegate = self;
        self.slt = [[Slt alloc] init];
        
        self.restartAttemptsDueToPermissionRequests = 0;
        self.startupFailedDueToLackOfPermissions = FALSE;
        
         [OELogging startOpenEarsLogging]; // Uncomment me for OELogging, which is verbose logging about internal OpenEars operations such as audio settings. If you have issues, show this logging in the forums.
        [OEPocketsphinxController sharedInstance].verbosePocketSphinx = TRUE; // Uncomment this for much more verbose speech recognition engine output. If you have issues, show this logging in the forums.
        
        [self.openEarsEventsObserver setDelegate:self]; // Make this class the delegate of OpenEarsObserver so we can get all of the messages about what OpenEars is doing.
        
        [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil]; // Call this before setting any OEPocketsphinxController characteristics
        
        [OEPocketsphinxController sharedInstance].returnNbest = TRUE;
        [OEPocketsphinxController sharedInstance].nBestNumber = 5;
        
        [[OEPocketsphinxController sharedInstance] setSecondsOfSilenceToDetect:0.5];
        [[OEPocketsphinxController sharedInstance] setVadThreshold:3.5];
        
        // This is the language model we're going to start up with. The only reason I'm making it a class property is that I reuse it a bunch of times in this example,
        // but you can pass the string contents directly to OEPocketsphinxController:startListeningWithLanguageModelAtPath:dictionaryAtPath:languageModelIsJSGF:
        
        NSArray *firstLanguageArray = @[@"ADIOS",
                                         @"LECHUGA",
                                         @"MADRID",
                                         @"BARCELONA",
                                         @"PARIS",
                                         @"ROMA",
                                         @"MAINZ",
                                         @"HOLA",
                                         @"CHORIZO",
                                         @"HORCHATA"];
        
        OELanguageModelGenerator *languageModelGenerator = [[OELanguageModelGenerator alloc] init];
        
        languageModelGenerator.verboseLanguageModelGenerator = TRUE; // Uncomment me for verbose language model generator debug output.
        
        NSError *error = [languageModelGenerator generateLanguageModelFromArray:firstLanguageArray withFilesNamed:@"FirstOpenEarsDynamicLanguageModel" forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelSpanish"]]; // Using "AcousticModelSpanish" here for Spanish recognition; change it to "AcousticModelEnglish" to create a language model for English recognition instead.
        
        
        if(error) {
            NSLog(@"Dynamic language generator reported error %@", [error description]);
        } else {
            self.pathToFirstDynamicallyGeneratedLanguageModel = [languageModelGenerator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:@"FirstOpenEarsDynamicLanguageModel"];
            self.pathToFirstDynamicallyGeneratedDictionary = [languageModelGenerator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"FirstOpenEarsDynamicLanguageModel"];
        }
        
        self.usingStartingLanguageModel = TRUE; // This is not an OpenEars thing, this is just so I can switch back and forth between the two models in this sample app.
        
        // Here is an example of dynamically creating an in-app grammar.
        
        // We want it to be able to response to the speech "CHANGE MODEL" and a few other things.  Items we want to have recognized as a whole phrase (like "CHANGE MODEL")
        // we put into the array as one string (e.g. "CHANGE MODEL" instead of "CHANGE" and "MODEL"). This increases the probability that they will be recognized as a phrase. This works even better starting with version 1.0 of OpenEars.
        
        
        NSArray *secondLanguageArray = @[@"ADIOS",
                                         @"LECHUGA",
                                         @"MADRID",
                                         @"BARCELONA",
                                         @"PARIS",
                                         @"ROMA",
                                         @"MAINZ",
                                         @"HOLA",
                                         @"CHORIZO",
                                         @"HORCHATA"];
        
        // Words which are not found in the lookup dictionary are passed to the fallback method. The fallback method is slower,
        // so, for instance, creating a new language model from dictionary words will be pretty fast, but a model that has a lot of unusual names in it or invented/rare/recent-slang
        // words will be slower to generate. You can use this information to give your users good UI feedback about what the expectations for wait times should be.
        
        // I don't think it's beneficial to lazily instantiate OELanguageModelGenerator because you only need to give it a single message and then release it.
        // If you need to create a very large model or any size of model that has many unusual words that have to make use of the fallback generation method,
        // you will want to run this on a background thread so you can give the user some UI feedback that the task is in progress.
        
        // generateLanguageModelFromArray:withFilesNamed returns an NSError which will either have a value of noErr if everything went fine or a specific error if it didn't.
        error = [languageModelGenerator generateLanguageModelFromArray:secondLanguageArray withFilesNamed:@"SecondOpenEarsDynamicLanguageModel" forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelSpanish"]]; // Using "AcousticModelSpanish" here for Spanish recognition; change it to "AcousticModelEnglish" to create a language model for English recognition instead.
        
        //    NSError *error = [languageModelGenerator generateLanguageModelFromTextFile:[NSString stringWithFormat:@"%@/%@",[[NSBundle mainBundle] resourcePath], @"OpenEarsCorpus.txt"] withFilesNamed:@"SecondOpenEarsDynamicLanguageModel" forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelSpanish"]]; // Try this out to see how generating a language model from a corpus works.
        
        
        if(error) {
            NSLog(@"Dynamic language generator reported error %@", [error description]);
        }	else {
            
            self.pathToSecondDynamicallyGeneratedLanguageModel = [languageModelGenerator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:@"SecondOpenEarsDynamicLanguageModel"]; // We'll set our new .languagemodel file to be the one to get switched to when the words "CHANGE MODEL" are recognized.
            self.pathToSecondDynamicallyGeneratedDictionary = [languageModelGenerator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"SecondOpenEarsDynamicLanguageModel"]; // We'll set our new dictionary to be the one to get switched to when the words "CHANGE MODEL" are recognized.
            
            // Next, an informative message.
            
            NSLog(@"\n\nWelcome to the OpenEars sample project. This project understands the words:\nBACKWARD,\nCHANGE,\nFORWARD,\nGO,\nLEFT,\nMODEL,\nRIGHT,\nTURN,\nand if you say \"CHANGE MODEL\" it will switch to its dynamically-generated model which understands the words:\nCHANGE,\nMODEL,\nMONDAY,\nTUESDAY,\nWEDNESDAY,\nTHURSDAY,\nFRIDAY,\nSATURDAY,\nSUNDAY,\nQUIDNUNC");
            
            // This is how to start the continuous listening loop of an available instance of OEPocketsphinxController. We won't do this if the language generation failed since it will be listening for a command to change over to the generated language.
            
            [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil]; // Call this once before setting properties of the OEPocketsphinxController instance.
            
            
            [[OEPocketsphinxController sharedInstance] setSecondsOfSilenceToDetect:0.5];
            [[OEPocketsphinxController sharedInstance] setVadThreshold:3.5];
            
            
            [OEPocketsphinxController sharedInstance].pathToTestFile = [[NSBundle mainBundle] pathForResource:@"openears4" ofType:@"wav"];  // This is how you could use a test WAV (mono/16-bit/16k) rather than live recognition
            
            
            if(![OEPocketsphinxController sharedInstance].isListening) {
                [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelSpanish"] languageModelIsJSGF:FALSE]; // Start speech recognition if we aren't already listening.
            }
            // [self startDisplayingLevels] is not an OpenEars method, just a very simple approach for level reading
            // that I've included with this sample app. My example implementation does make use of two OpenEars
            // methods:	the pocketsphinxInputLevel method of OEPocketsphinxController and the fliteOutputLevel
            // method of fliteController.
            //
            // The example is meant to show one way that you can read those levels continuously without locking the UI,
            // by using an NSTimer, but the OpenEars level-reading methods
            // themselves do not include multithreading code since I believe that you will want to design your own
            // code approaches for level display that are tightly-integrated with your interaction design and the
            // graphics API you choose.
            
            [self startDisplayingLevels];
            
            // Here is some UI stuff that has nothing specifically to do with OpenEars implementation
            self.startButton.hidden = TRUE;
            self.stopButton.hidden = TRUE;
            self.suspendListeningButton.hidden = TRUE;
            self.resumeListeningButton.hidden = TRUE;
        }
    }
    
    #pragma mark -
    #pragma mark OEEventsObserver delegate methods
    
    // What follows are all of the delegate methods you can optionally use once you've instantiated an OEEventsObserver and set its delegate to self.
    // I've provided some pretty granular information about the exact phase of the Pocketsphinx listening loop, the Audio Session, and Flite, but I'd expect
    // that the ones that will really be needed by most projects are the following:
    //
    //- (void) pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore utteranceID:(NSString *)utteranceID;
    //- (void) audioSessionInterruptionDidBegin;
    //- (void) audioSessionInterruptionDidEnd;
    //- (void) audioRouteDidChangeToRoute:(NSString *)newRoute;
    //- (void) pocketsphinxDidStartListening;
    //- (void) pocketsphinxDidStopListening;
    //
    // It isn't necessary to have a OEPocketsphinxController or a OEFliteController instantiated in order to use these methods.  If there isn't anything instantiated that will
    // send messages to an OEEventsObserver, all that will happen is that these methods will never fire.  You also do not have to create a OEEventsObserver in
    // the same class or view controller in which you are doing things with a OEPocketsphinxController or OEFliteController; you can receive updates from those objects in
    // any class in which you instantiate an OEEventsObserver and set its delegate to self.
    
    // This is an optional delegate method of OEEventsObserver which delivers the text of speech that Pocketsphinx heard and analyzed, along with its accuracy score and utterance ID.
    - (void) pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore utteranceID:(NSString *)utteranceID {
        
        NSLog(@"Local callback: The received hypothesis is %@ with a score of %@ and an ID of %@", hypothesis, recognitionScore, utteranceID); // Log it.
        if([hypothesis isEqualToString:@"CHANGE MODEL"]) { // If the user says "CHANGE MODEL", we will switch to the alternate model (which happens to be the dynamically generated model).
            
            // Here is an example of language model switching in OpenEars. Deciding on what logical basis to switch models is your responsibility.
            // For instance, when you call a customer service line and get a response tree that takes you through different options depending on what you say to it,
            // the models are being switched as you progress through it so that only relevant choices can be understood. The construction of that logical branching and
            // how to react to it is your job, OpenEars just lets you send the signal to switch the language model when you've decided it's the right time to do so.
            
            if(self.usingStartingLanguageModel) { // If we're on the starting model, switch to the dynamically generated one.
                
                // You can only change language models with ARPA grammars in OpenEars (the ones that end in .languagemodel or .DMP).
                // Trying to switch between JSGF models (the ones that end in .gram) will return no result.
                [[OEPocketsphinxController sharedInstance] changeLanguageModelToFile:self.pathToSecondDynamicallyGeneratedLanguageModel withDictionary:self.pathToSecondDynamicallyGeneratedDictionary];
                self.usingStartingLanguageModel = FALSE;
            } else { // If we're on the dynamically generated model, switch to the start model (this is just an example of a trigger and method for switching models).
                [[OEPocketsphinxController sharedInstance] changeLanguageModelToFile:self.pathToFirstDynamicallyGeneratedLanguageModel withDictionary:self.pathToFirstDynamicallyGeneratedDictionary];
                self.usingStartingLanguageModel = TRUE;
            }
        }
        
        self.heardTextView.text = [NSString stringWithFormat:@"Heard: \"%@\"", hypothesis]; // Show it in the status box.
        
        // This is how to use an available instance of OEFliteController. We're going to repeat back the command that we heard with the voice we've chosen.
        [self.fliteController say:[NSString stringWithFormat:@"You said %@",hypothesis] withVoice:self.slt];
    }
    
    #ifdef kGetNbest
    - (void) pocketsphinxDidReceiveNBestHypothesisArray:(NSArray *)hypothesisArray { // Pocketsphinx has an n-best hypothesis dictionary.
        NSLog(@"Local callback:  hypothesisArray is %@",hypothesisArray);
    }
    #endif
    // An optional delegate method of OEEventsObserver which informs that there was an interruption to the audio session (e.g. an incoming phone call).
    - (void) audioSessionInterruptionDidBegin {
        NSLog(@"Local callback:  AudioSession interruption began."); // Log it.
        self.statusTextView.text = @"Status: AudioSession interruption began."; // Show it in the status box.
        NSError *error = nil;
        if([OEPocketsphinxController sharedInstance].isListening) {
            error = [[OEPocketsphinxController sharedInstance] stopListening]; // React to it by telling Pocketsphinx to stop listening (if it is listening) since it will need to restart its loop after an interruption.
            if(error) NSLog(@"Error while stopping listening in audioSessionInterruptionDidBegin: %@", error);
        }
    }
    
    // An optional delegate method of OEEventsObserver which informs that the interruption to the audio session ended.
    - (void) audioSessionInterruptionDidEnd {
        NSLog(@"Local callback:  AudioSession interruption ended."); // Log it.
        self.statusTextView.text = @"Status: AudioSession interruption ended."; // Show it in the status box.
        // We're restarting the previously-stopped listening loop.
        if(![OEPocketsphinxController sharedInstance].isListening){
            [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelSpanish"] languageModelIsJSGF:FALSE]; // Start speech recognition if we aren't currently listening.
        }
    }
    
    // An optional delegate method of OEEventsObserver which informs that the audio input became unavailable.
    - (void) audioInputDidBecomeUnavailable {
        NSLog(@"Local callback:  The audio input has become unavailable"); // Log it.
        self.statusTextView.text = @"Status: The audio input has become unavailable"; // Show it in the status box.
        NSError *error = nil;
        if([OEPocketsphinxController sharedInstance].isListening){
            error = [[OEPocketsphinxController sharedInstance] stopListening]; // React to it by telling Pocketsphinx to stop listening since there is no available input (but only if we are listening).
            if(error) NSLog(@"Error while stopping listening in audioInputDidBecomeUnavailable: %@", error);
        }
    }
    
    // An optional delegate method of OEEventsObserver which informs that the unavailable audio input became available again.
    - (void) audioInputDidBecomeAvailable {
        NSLog(@"Local callback: The audio input is available"); // Log it.
        self.statusTextView.text = @"Status: The audio input is available"; // Show it in the status box.
        if(![OEPocketsphinxController sharedInstance].isListening) {
            [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelSpanish"] languageModelIsJSGF:FALSE]; // Start speech recognition, but only if we aren't already listening.
        }
    }
    // An optional delegate method of OEEventsObserver which informs that there was a change to the audio route (e.g. headphones were plugged in or unplugged).
    - (void) audioRouteDidChangeToRoute:(NSString *)newRoute {
        NSLog(@"Local callback: Audio route change. The new audio route is %@", newRoute); // Log it.
        self.statusTextView.text = [NSString stringWithFormat:@"Status: Audio route change. The new audio route is %@",newRoute]; // Show it in the status box.
        
        NSError *error = [[OEPocketsphinxController sharedInstance] stopListening]; // React to it by telling the Pocketsphinx loop to shut down and then start listening again on the new route
        
        if(error)NSLog(@"Local callback: error while stopping listening in audioRouteDidChangeToRoute: %@",error);
        
        if(![OEPocketsphinxController sharedInstance].isListening) {
            [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelSpanish"] languageModelIsJSGF:FALSE]; // Start speech recognition if we aren't already listening.
        }
    }
    
    // An optional delegate method of OEEventsObserver which informs that the Pocketsphinx recognition loop has entered its actual loop.
    // This might be useful in debugging a conflict between another sound class and Pocketsphinx.
    - (void) pocketsphinxRecognitionLoopDidStart {
        
        NSLog(@"Local callback: Pocketsphinx started."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx started."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is now listening for speech.
    - (void) pocketsphinxDidStartListening {
        
        NSLog(@"Local callback: Pocketsphinx is now listening."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx is now listening."; // Show it in the status box.
        
        self.startButton.hidden = TRUE; // React to it with some UI changes.
        self.stopButton.hidden = FALSE;
        self.suspendListeningButton.hidden = FALSE;
        self.resumeListeningButton.hidden = TRUE;
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected speech and is starting to process it.
    - (void) pocketsphinxDidDetectSpeech {
        NSLog(@"Local callback: Pocketsphinx has detected speech."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has detected speech."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected a second of silence, indicating the end of an utterance.
    // This was added because developers requested being able to time the recognition speed without the speech time. The processing time is the time between
    // this method being called and the hypothesis being returned.
    - (void) pocketsphinxDidDetectFinishedSpeech {
        NSLog(@"Local callback: Pocketsphinx has detected a second of silence, concluding an utterance."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has detected finished speech."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx has exited its recognition loop, most
    // likely in response to the OEPocketsphinxController being told to stop listening via the stopListening method.
    - (void) pocketsphinxDidStopListening {
        NSLog(@"Local callback: Pocketsphinx has stopped listening."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has stopped listening."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop but it is not
    // Going to react to speech until listening is resumed.  This can happen as a result of Flite speech being
    // in progress on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
    // or as a result of the OEPocketsphinxController being told to suspend recognition via the suspendRecognition method.
    - (void) pocketsphinxDidSuspendRecognition {
        NSLog(@"Local callback: Pocketsphinx has suspended recognition."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has suspended recognition."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop and after recognition
    // having been suspended it is now resuming.  This can happen as a result of Flite speech completing
    // on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
    // or as a result of the OEPocketsphinxController being told to resume recognition via the resumeRecognition method.
    - (void) pocketsphinxDidResumeRecognition {
        NSLog(@"Local callback: Pocketsphinx has resumed recognition."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has resumed recognition."; // Show it in the status box.
    }
    
    // An optional delegate method which informs that Pocketsphinx switched over to a new language model at the given URL in the course of
    // recognition. This does not imply that it is a valid file or that recognition will be successful using the file.
    - (void) pocketsphinxDidChangeLanguageModelToFile:(NSString *)newLanguageModelPathAsString andDictionary:(NSString *)newDictionaryPathAsString {
        NSLog(@"Local callback: Pocketsphinx is now using the following language model: \n%@ and the following dictionary: %@",newLanguageModelPathAsString,newDictionaryPathAsString);
    }
    
    // An optional delegate method of OEEventsObserver which informs that Flite is speaking, most likely to be useful if debugging a
    // complex interaction between sound classes. You don't have to do anything yourself in order to prevent Pocketsphinx from listening to Flite talk and trying to recognize the speech.
    - (void) fliteDidStartSpeaking {
        NSLog(@"Local callback: Flite has started speaking"); // Log it.
        self.statusTextView.text = @"Status: Flite has started speaking."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Flite is finished speaking, most likely to be useful if debugging a
    // complex interaction between sound classes.
    - (void) fliteDidFinishSpeaking {
        NSLog(@"Local callback: Flite has finished speaking"); // Log it.
        self.statusTextView.text = @"Status: Flite has finished speaking."; // Show it in the status box.
    }
    
    - (void) pocketSphinxContinuousSetupDidFailWithReason:(NSString *)reasonForFailure { // This can let you know that something went wrong with the recognition loop startup. Turn on [OELogging startOpenEarsLogging] to learn why.
        NSLog(@"Local callback: Setting up the continuous recognition loop has failed for the reason %@, please turn on [OELogging startOpenEarsLogging] to learn more.", reasonForFailure); // Log it.
        self.statusTextView.text = @"Status: Not possible to start recognition loop."; // Show it in the status box.
    }
    
    - (void) pocketSphinxContinuousTeardownDidFailWithReason:(NSString *)reasonForFailure { // This can let you know that something went wrong with the recognition loop startup. Turn on [OELogging startOpenEarsLogging] to learn why.
        NSLog(@"Local callback: Tearing down the continuous recognition loop has failed for the reason %@, please turn on [OELogging startOpenEarsLogging] to learn more.", reasonForFailure); // Log it.
        self.statusTextView.text = @"Status: Not possible to cleanly end recognition loop."; // Show it in the status box.
    }
    
    - (void) testRecognitionCompleted { // A test file which was submitted for direct recognition via the audio driver is done.
        NSLog(@"Local callback: A test file which was submitted for direct recognition via the audio driver is done."); // Log it.
        NSError *error = nil;
        if([OEPocketsphinxController sharedInstance].isListening) { // If we're listening, stop listening.
            error = [[OEPocketsphinxController sharedInstance] stopListening];
            if(error) NSLog(@"Error while stopping listening in testRecognitionCompleted: %@", error);
        }
        
    }
    /** Pocketsphinx couldn't start because it has no mic permissions (will only be returned on iOS7 or later).*/
    - (void) pocketsphinxFailedNoMicPermissions {
        NSLog(@"Local callback: The user has never set mic permissions or denied permission to this app's mic, so listening will not start.");
        self.startupFailedDueToLackOfPermissions = TRUE;
    }
    
    /** The user prompt to get mic permissions, or a check of the mic permissions, has completed with a TRUE or a FALSE result  (will only be returned on iOS7 or later).*/
    - (void) micPermissionCheckCompleted:(BOOL)result {
        if(result) {
            self.restartAttemptsDueToPermissionRequests++;
            if(self.restartAttemptsDueToPermissionRequests == 1 && self.startupFailedDueToLackOfPermissions) { // If we get here because there was an attempt to start which failed due to lack of permissions, and now permissions have been requested and they returned true, we restart exactly once with the new permissions.
                NSError *error = nil;
                if([OEPocketsphinxController sharedInstance].isListening){
                    error = [[OEPocketsphinxController sharedInstance] stopListening]; // Stop listening if we are listening.
                    if(error) NSLog(@"Error while stopping listening in micPermissionCheckCompleted: %@", error);
                }
                if(!error && ![OEPocketsphinxController sharedInstance].isListening) { // If there was no error and we aren't listening, start listening.
                    [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelSpanish"] languageModelIsJSGF:FALSE]; // Start speech recognition.
                    self.startupFailedDueToLackOfPermissions = FALSE;
                }
            }
        }
    }
    
    #pragma mark -
    #pragma mark UI
    
    // This is not OpenEars-specific stuff, just some UI behavior
    
    - (IBAction) suspendListeningButtonAction { // This is the action for the button which suspends listening without ending the recognition loop
        [[OEPocketsphinxController sharedInstance] suspendRecognition];
        
        self.startButton.hidden = TRUE;
        self.stopButton.hidden = FALSE;
        self.suspendListeningButton.hidden = TRUE;
        self.resumeListeningButton.hidden = FALSE;
    }
    
    - (IBAction) resumeListeningButtonAction { // This is the action for the button which resumes listening if it has been suspended
        [[OEPocketsphinxController sharedInstance] resumeRecognition];
        
        self.startButton.hidden = TRUE;
        self.stopButton.hidden = FALSE;
        self.suspendListeningButton.hidden = FALSE;
        self.resumeListeningButton.hidden = TRUE;
    }
    
    - (IBAction) stopButtonAction { // This is the action for the button which shuts down the recognition loop.
        NSError *error = nil;
        if([OEPocketsphinxController sharedInstance].isListening) { // Stop if we are currently listening.
            error = [[OEPocketsphinxController sharedInstance] stopListening];
            if(error)NSLog(@"Error stopping listening in stopButtonAction: %@", error);
        }
        self.startButton.hidden = FALSE;
        self.stopButton.hidden = TRUE;
        self.suspendListeningButton.hidden = TRUE;
        self.resumeListeningButton.hidden = TRUE;
    }
    
    - (IBAction) startButtonAction { // This is the action for the button which starts up the recognition loop again if it has been shut down.
        if(![OEPocketsphinxController sharedInstance].isListening) {
            [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelSpanish"] languageModelIsJSGF:FALSE]; // Start speech recognition if we aren't already listening.
        }
        self.startButton.hidden = TRUE;
        self.stopButton.hidden = FALSE;
        self.suspendListeningButton.hidden = FALSE;
        self.resumeListeningButton.hidden = TRUE;
    }
    
    #pragma mark -
    #pragma mark Example for reading out Pocketsphinx and Flite audio levels without locking the UI by using an NSTimer
    
    // What follows are not OpenEars methods, just an approach for level reading
    // that I've included with this sample app. My example implementation does make use of two OpenEars
    // methods:	the pocketsphinxInputLevel method of OEPocketsphinxController and the fliteOutputLevel
    // method of OEFliteController.
    //
    // The example is meant to show one way that you can read those levels continuously without locking the UI,
    // by using an NSTimer, but the OpenEars level-reading methods
    // themselves do not include multithreading code since I believe that you will want to design your own
    // code approaches for level display that are tightly-integrated with your interaction design and the
    // graphics API you choose.
    //
    // Please note that if you use my sample approach, you should pay attention to the way that the timer is always stopped in
    // dealloc. This should prevent you from having any difficulties with deallocating a class due to a running NSTimer process.
    
    - (void) startDisplayingLevels { // Start displaying the levels using a timer
        [self stopDisplayingLevels]; // We never want more than one timer valid so we'll stop any running timers first.
        self.uiUpdateTimer = [NSTimer scheduledTimerWithTimeInterval:1.0/kLevelUpdatesPerSecond target:self selector:@selector(updateLevelsUI) userInfo:nil repeats:YES];
    }
    
    - (void) stopDisplayingLevels { // Stop displaying the levels by stopping the timer if it's running.
        if(self.uiUpdateTimer && [self.uiUpdateTimer isValid]) { // If there is a running timer, we'll stop it here.
            [self.uiUpdateTimer invalidate];
            self.uiUpdateTimer = nil;
        }
    }
    
    - (void) updateLevelsUI { // And here is how we obtain the levels.  This method includes the actual OpenEars methods and uses their results to update the UI of this view controller.
        
        self.pocketsphinxDbLabel.text = [NSString stringWithFormat:@"Pocketsphinx Input level:%f",[[OEPocketsphinxController sharedInstance] pocketsphinxInputLevel]];  //pocketsphinxInputLevel is an OpenEars method of the class OEPocketsphinxController.
        
        if(self.fliteController.speechInProgress) {
            self.fliteDbLabel.text = [NSString stringWithFormat:@"Flite Output level: %f",[self.fliteController fliteOutputLevel]]; // fliteOutputLevel is an OpenEars method of the class OEFliteController.
        }
    }
    
    @end
    

    Definitely, it is not suppressing at all. Here is the result in the log:

    2015-01-07 13:01:28.314 OpenEarsSampleApp[842:60b] Starting OpenEars logging for OpenEars version 2.01 on 32-bit device (or build): iPhone running iOS version: 7.000000
    2015-01-07 13:01:28.318 OpenEarsSampleApp[842:60b] Creating shared instance of OEPocketsphinxController
    2015-01-07 13:01:28.380 OpenEarsSampleApp[842:60b] Starting dynamic language model generation
    ## Vocab generated by v2 of the CMU-Cambridge Statistcal
    ## Language Modeling toolkit.
    ##
    ## Includes 12 words ##
    wfreq2vocab : Done.
    text2idngram
    Vocab : /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/FirstOpenEarsDynamicLanguageModel.vocab
    Output idngram : /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/FirstOpenEarsDynamicLanguageModel.idngram
    N-gram buffer size : 10
    Hash table size : 5000
    Temp directory : /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/cmuclmtk-jsQ2W3
    Max open files : 20
    FOF size : 10
    n : 3
    Initialising hash table…
    Reading vocabulary…
    Allocating memory for the n-gram buffer…
    Reading text into the n-gram buffer…
    20,000 n-grams processed for each “.”, 1,000,000 for each line.

    Sorting n-grams…
    Writing sorted n-grams to temporary file /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/cmuclmtk-jsQ2W3/1
    Merging 1 temporary files…

    2-grams occurring: N times > N times Sug. -spec_num value
    0 21 31
    1 20 1 11
    2 0 1 11
    3 0 1 11
    4 0 1 11
    5 0 1 11
    6 0 1 11
    7 0 1 11
    8 0 1 11
    9 0 1 11
    10 1 0 10

    3-grams occurring: N times > N times Sug. -spec_num value
    0 30 40
    1 30 0 10
    2 0 0 10
    3 0 0 10
    4 0 0 10
    5 0 0 10
    6 0 0 10
    7 0 0 10
    8 0 0 10
    9 0 0 10
    10 0 0 10
    text2idngram : Done.

    read_wlist_into_siht: a list of 12 words was read from “/var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/FirstOpenEarsDynamicLanguageModel.vocab”.
    read_wlist_into_array: a list of 12 words was read from “/var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/FirstOpenEarsDynamicLanguageModel.vocab”.
    Unigram was renormalized to absorb a mass of 0.5
    prob[UNK] = 1e-99
    ARPA-style 3-gram will be written to /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/FirstOpenEarsDynamicLanguageModel.arpa
    idngram2lm : Done.
    INFO: cmd_ln.c(702): Parsing command line:
    sphinx_lm_convert \
    -i /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/FirstOpenEarsDynamicLanguageModel.arpa \
    -o /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/FirstOpenEarsDynamicLanguageModel.DMP \
    -debug 10

    Current configuration:
    [NAME] [DEFLT] [VALUE]
    -case
    -debug 10
    -help no no
    -i /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/FirstOpenEarsDynamicLanguageModel.arpa
    -ienc
    -ifmt
    -logbase 1.0001 1.000100e+00
    -mmap no no
    -o /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/FirstOpenEarsDynamicLanguageModel.DMP
    -oenc utf8 utf8
    -ofmt

    INFO: ngram_model_arpa.c(504): ngrams 1=12, 2=20, 3=10
    INFO: ngram_model_arpa.c(137): Reading unigrams
    INFO: ngram_model_arpa.c(543): 12 = #unigrams created
    INFO: ngram_model_arpa.c(197): Reading bigrams
    INFO: ngram_model_arpa.c(561): 20 = #bigrams created
    INFO: ngram_model_arpa.c(562): 3 = #prob2 entries
    INFO: ngram_model_arpa.c(570): 3 = #bo_wt2 entries
    INFO: ngram_model_arpa.c(294): Reading trigrams
    INFO: ngram_model_arpa.c(583): 10 = #trigrams created
    INFO: ngram_model_arpa.c(584): 2 = #prob3 entries
    INFO: ngram_model_dmp.c(518): Building DMP model…
    INFO: ngram_model_dmp.c(548): 12 = #unigrams created
    INFO: ngram_model_dmp.c(649): 20 = #bigrams created
    INFO: ngram_model_dmp.c(650): 3 = #prob2 entries
    INFO: ngram_model_dmp.c(657): 3 = #bo_wt2 entries
    INFO: ngram_model_dmp.c(661): 10 = #trigrams created
    INFO: ngram_model_dmp.c(662): 2 = #prob3 entries
    2015-01-07 13:01:28.439 OpenEarsSampleApp[842:60b] Done creating language model with CMUCLMTK in 0.058217 seconds.
    2015-01-07 13:01:28.477 OpenEarsSampleApp[842:60b] The word ADIOS was not found in the dictionary /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/OpenEarsSampleApp.app/AcousticModelSpanish.bundle/LanguageModelGeneratorLookupList.text/LanguageModelGeneratorLookupList.text.
    2015-01-07 13:01:28.479 OpenEarsSampleApp[842:60b] Now using the fallback method to look up the word ADIOS
    2015-01-07 13:01:28.480 OpenEarsSampleApp[842:60b] If this is happening more frequently than you would expect, the most likely cause for it is since you are using the Spanish phonetic lookup dictionary is that your words are not in Spanish or aren’t dictionary words, or that you are submitting the words in lowercase when they need to be entirely written in uppercase.
    2015-01-07 13:01:28.493 OpenEarsSampleApp[842:60b] The word HORCHATA was not found in the dictionary /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/OpenEarsSampleApp.app/AcousticModelSpanish.bundle/LanguageModelGeneratorLookupList.text/LanguageModelGeneratorLookupList.text.
    2015-01-07 13:01:28.495 OpenEarsSampleApp[842:60b] Now using the fallback method to look up the word HORCHATA
    2015-01-07 13:01:28.496 OpenEarsSampleApp[842:60b] If this is happening more frequently than you would expect, the most likely cause for it is since you are using the Spanish phonetic lookup dictionary is that your words are not in Spanish or aren’t dictionary words, or that you are submitting the words in lowercase when they need to be entirely written in uppercase.
    2015-01-07 13:01:28.503 OpenEarsSampleApp[842:60b] The word LECHUGA was not found in the dictionary /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/OpenEarsSampleApp.app/AcousticModelSpanish.bundle/LanguageModelGeneratorLookupList.text/LanguageModelGeneratorLookupList.text.
    2015-01-07 13:01:28.505 OpenEarsSampleApp[842:60b] Now using the fallback method to look up the word LECHUGA
    2015-01-07 13:01:28.506 OpenEarsSampleApp[842:60b] If this is happening more frequently than you would expect, the most likely cause for it is since you are using the Spanish phonetic lookup dictionary is that your words are not in Spanish or aren’t dictionary words, or that you are submitting the words in lowercase when they need to be entirely written in uppercase.
    2015-01-07 13:01:28.513 OpenEarsSampleApp[842:60b] The word MAINZ was not found in the dictionary /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/OpenEarsSampleApp.app/AcousticModelSpanish.bundle/LanguageModelGeneratorLookupList.text/LanguageModelGeneratorLookupList.text.
    2015-01-07 13:01:28.514 OpenEarsSampleApp[842:60b] Now using the fallback method to look up the word MAINZ
    2015-01-07 13:01:28.516 OpenEarsSampleApp[842:60b] If this is happening more frequently than you would expect, the most likely cause for it is since you are using the Spanish phonetic lookup dictionary is that your words are not in Spanish or aren’t dictionary words, or that you are submitting the words in lowercase when they need to be entirely written in uppercase.
    2015-01-07 13:01:28.520 OpenEarsSampleApp[842:60b] I’m done running performDictionaryLookup and it took 0.053911 seconds
    2015-01-07 13:01:28.526 OpenEarsSampleApp[842:60b] I’m done running dynamic language model generation and it took 0.200190 seconds
    2015-01-07 13:01:28.532 OpenEarsSampleApp[842:60b] Starting dynamic language model generation
    ## Vocab generated by v2 of the CMU-Cambridge Statistcal
    ## Language Modeling toolkit.
    ##
    ## Includes 12 words ##
    wfreq2vocab : Done.
    text2idngram
    Vocab : /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/SecondOpenEarsDynamicLanguageModel.vocab
    Output idngram : /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/SecondOpenEarsDynamicLanguageModel.idngram
    N-gram buffer size : 10
    Hash table size : 5000
    Temp directory : /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/cmuclmtk-no6FQp
    Max open files : 20
    FOF size : 10
    n : 3
    Initialising hash table…
    Reading vocabulary…
    Allocating memory for the n-gram buffer…
    Reading text into the n-gram buffer…
    20,000 n-grams processed for each “.”, 1,000,000 for each line.

    Sorting n-grams…
    Writing sorted n-grams to temporary file /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/cmuclmtk-no6FQp/1
    Merging 1 temporary files…

    2-grams occurring: N times > N times Sug. -spec_num value
    0 21 31
    1 20 1 11
    2 0 1 11
    3 0 1 11
    4 0 1 11
    5 0 1 11
    6 0 1 11
    7 0 1 11
    8 0 1 11
    9 0 1 11
    10 1 0 10

    3-grams occurring: N times > N times Sug. -spec_num value
    0 30 40
    1 30 0 10
    2 0 0 10
    3 0 0 10
    4 0 0 10
    5 0 0 10
    6 0 0 10
    7 0 0 10
    8 0 0 10
    9 0 0 10
    10 0 0 10
    text2idngram : Done.

    read_wlist_into_siht: a list of 12 words was read from “/var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/SecondOpenEarsDynamicLanguageModel.vocab”.
    read_wlist_into_array: a list of 12 words was read from “/var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/SecondOpenEarsDynamicLanguageModel.vocab”.
    Unigram was renormalized to absorb a mass of 0.5
    prob[UNK] = 1e-99
    ARPA-style 3-gram will be written to /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/SecondOpenEarsDynamicLanguageModel.arpa
    idngram2lm : Done.
    INFO: cmd_ln.c(702): Parsing command line:
    sphinx_lm_convert \
    -i /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/SecondOpenEarsDynamicLanguageModel.arpa \
    -o /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/SecondOpenEarsDynamicLanguageModel.DMP \
    -debug 10

    Current configuration:
    [NAME] [DEFLT] [VALUE]
    -case
    -debug 10
    -help no no
    -i /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/SecondOpenEarsDynamicLanguageModel.arpa
    -ienc
    -ifmt
    -logbase 1.0001 1.000100e+00
    -mmap no no
    -o /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/SecondOpenEarsDynamicLanguageModel.DMP
    -oenc utf8 utf8
    -ofmt

    INFO: ngram_model_arpa.c(504): ngrams 1=12, 2=20, 3=10
    INFO: ngram_model_arpa.c(137): Reading unigrams
    INFO: ngram_model_arpa.c(543): 12 = #unigrams created
    INFO: ngram_model_arpa.c(197): Reading bigrams
    INFO: ngram_model_arpa.c(561): 20 = #bigrams created
    INFO: ngram_model_arpa.c(562): 3 = #prob2 entries
    INFO: ngram_model_arpa.c(570): 3 = #bo_wt2 entries
    INFO: ngram_model_arpa.c(294): Reading trigrams
    INFO: ngram_model_arpa.c(583): 10 = #trigrams created
    INFO: ngram_model_arpa.c(584): 2 = #prob3 entries
    INFO: ngram_model_dmp.c(518): Building DMP model…
    INFO: ngram_model_dmp.c(548): 12 = #unigrams created
    INFO: ngram_model_dmp.c(649): 20 = #bigrams created
    INFO: ngram_model_dmp.c(650): 3 = #prob2 entries
    INFO: ngram_model_dmp.c(657): 3 = #bo_wt2 entries
    INFO: ngram_model_dmp.c(661): 10 = #trigrams created
    INFO: ngram_model_dmp.c(662): 2 = #prob3 entries
    2015-01-07 13:01:28.583 OpenEarsSampleApp[842:60b] Done creating language model with CMUCLMTK in 0.049580 seconds.
    2015-01-07 13:01:28.621 OpenEarsSampleApp[842:60b] The word ADIOS was not found in the dictionary /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/OpenEarsSampleApp.app/AcousticModelSpanish.bundle/LanguageModelGeneratorLookupList.text/LanguageModelGeneratorLookupList.text.
    2015-01-07 13:01:28.622 OpenEarsSampleApp[842:60b] Now using the fallback method to look up the word ADIOS
    2015-01-07 13:01:28.624 OpenEarsSampleApp[842:60b] If this is happening more frequently than you would expect, the most likely cause for it is since you are using the Spanish phonetic lookup dictionary is that your words are not in Spanish or aren’t dictionary words, or that you are submitting the words in lowercase when they need to be entirely written in uppercase.
    2015-01-07 13:01:28.636 OpenEarsSampleApp[842:60b] The word HORCHATA was not found in the dictionary /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/OpenEarsSampleApp.app/AcousticModelSpanish.bundle/LanguageModelGeneratorLookupList.text/LanguageModelGeneratorLookupList.text.
    2015-01-07 13:01:28.638 OpenEarsSampleApp[842:60b] Now using the fallback method to look up the word HORCHATA
    2015-01-07 13:01:28.639 OpenEarsSampleApp[842:60b] If this is happening more frequently than you would expect, the most likely cause for it is since you are using the Spanish phonetic lookup dictionary is that your words are not in Spanish or aren’t dictionary words, or that you are submitting the words in lowercase when they need to be entirely written in uppercase.
    2015-01-07 13:01:28.646 OpenEarsSampleApp[842:60b] The word LECHUGA was not found in the dictionary /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/OpenEarsSampleApp.app/AcousticModelSpanish.bundle/LanguageModelGeneratorLookupList.text/LanguageModelGeneratorLookupList.text.
    2015-01-07 13:01:28.648 OpenEarsSampleApp[842:60b] Now using the fallback method to look up the word LECHUGA
    2015-01-07 13:01:28.649 OpenEarsSampleApp[842:60b] If this is happening more frequently than you would expect, the most likely cause for it is since you are using the Spanish phonetic lookup dictionary is that your words are not in Spanish or aren’t dictionary words, or that you are submitting the words in lowercase when they need to be entirely written in uppercase.
    2015-01-07 13:01:28.657 OpenEarsSampleApp[842:60b] The word MAINZ was not found in the dictionary /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/OpenEarsSampleApp.app/AcousticModelSpanish.bundle/LanguageModelGeneratorLookupList.text/LanguageModelGeneratorLookupList.text.
    2015-01-07 13:01:28.658 OpenEarsSampleApp[842:60b] Now using the fallback method to look up the word MAINZ
    2015-01-07 13:01:28.659 OpenEarsSampleApp[842:60b] If this is happening more frequently than you would expect, the most likely cause for it is since you are using the Spanish phonetic lookup dictionary is that your words are not in Spanish or aren’t dictionary words, or that you are submitting the words in lowercase when they need to be entirely written in uppercase.
    2015-01-07 13:01:28.664 OpenEarsSampleApp[842:60b] I’m done running performDictionaryLookup and it took 0.053968 seconds
    2015-01-07 13:01:28.670 OpenEarsSampleApp[842:60b] I’m done running dynamic language model generation and it took 0.141725 seconds
    2015-01-07 13:01:28.672 OpenEarsSampleApp[842:60b]

    Welcome to the OpenEars sample project. This project understands the words:
    BACKWARD,
    CHANGE,
    FORWARD,
    GO,
    LEFT,
    MODEL,
    RIGHT,
    TURN,
    and if you say “CHANGE MODEL” it will switch to its dynamically-generated model which understands the words:
    CHANGE,
    MODEL,
    MONDAY,
    TUESDAY,
    WEDNESDAY,
    THURSDAY,
    FRIDAY,
    SATURDAY,
    SUNDAY,
    QUIDNUNC
    2015-01-07 13:01:28.674 OpenEarsSampleApp[842:60b] Attempting to start listening session from startListeningWithLanguageModelAtPath:
    2015-01-07 13:01:28.678 OpenEarsSampleApp[842:60b] User gave mic permission for this app.
    2015-01-07 13:01:28.680 OpenEarsSampleApp[842:60b] Valid setSecondsOfSilence value of 0.500000 will be used.
    2015-01-07 13:01:28.681 OpenEarsSampleApp[842:60b] Successfully started listening session from startListeningWithLanguageModelAtPath:
    2015-01-07 13:01:28.682 OpenEarsSampleApp[842:1803] Starting listening.
    2015-01-07 13:01:28.683 OpenEarsSampleApp[842:1803] about to set up audio session
    2015-01-07 13:01:28.718 OpenEarsSampleApp[842:3b03] Audio route has changed for the following reason:
    2015-01-07 13:01:28.723 OpenEarsSampleApp[842:3b03] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
    2015-01-07 13:01:28.732 OpenEarsSampleApp[842:3b03] This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is —SpeakerMicrophoneBuiltIn—. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x1667dc20,
    inputs = (null);
    outputs = (
    “<AVAudioSessionPortDescription: 0x1667d800, type = Speaker; name = Altavoz; UID = Speaker; selectedDataSource = (null)>”
    )>.
    2015-01-07 13:01:29.142 OpenEarsSampleApp[842:1803] done starting audio unit
    INFO: cmd_ln.c(702): Parsing command line:
    \
    -lm /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/FirstOpenEarsDynamicLanguageModel.DMP \
    -vad_prespeech 10 \
    -vad_postspeech 50 \
    -vad_threshold 3.500000 \
    -remove_noise yes \
    -remove_silence yes \
    -bestpath yes \
    -lw 6.500000 \
    -dict /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic \
    -hmm /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/OpenEarsSampleApp.app/AcousticModelSpanish.bundle

    Current configuration:
    [NAME] [DEFLT] [VALUE]
    -agc none none
    -agcthresh 2.0 2.000000e+00
    -allphone
    -allphone_ci no no
    -alpha 0.97 9.700000e-01
    -argfile
    -ascale 20.0 2.000000e+01
    -aw 1 1
    -backtrace no no
    -beam 1e-48 1.000000e-48
    -bestpath yes yes
    -bestpathlw 9.5 9.500000e+00
    -bghist no no
    -ceplen 13 13
    -cmn current current
    -cmninit 8.0 8.0
    -compallsen no no
    -debug 0
    -dict /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic
    -dictcase no no
    -dither no no
    -doublebw no no
    -ds 1 1
    -fdict
    -feat 1s_c_d_dd 1s_c_d_dd
    -featparams
    -fillprob 1e-8 1.000000e-08
    -frate 100 100
    -fsg
    -fsgusealtpron yes yes
    -fsgusefiller yes yes
    -fwdflat yes yes
    -fwdflatbeam 1e-64 1.000000e-64
    -fwdflatefwid 4 4
    -fwdflatlw 8.5 8.500000e+00
    -fwdflatsfwin 25 25
    -fwdflatwbeam 7e-29 7.000000e-29
    -fwdtree yes yes
    -hmm /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/OpenEarsSampleApp.app/AcousticModelSpanish.bundle
    -input_endian little little
    -jsgf
    -kdmaxbbi -1 -1
    -kdmaxdepth 0 0
    -kdtree
    -keyphrase
    -kws
    -kws_plp 1e-1 1.000000e-01
    -kws_threshold 1 1.000000e+00
    -latsize 5000 5000
    -lda
    -ldadim 0 0
    -lextreedump 0 0
    -lifter 0 0
    -lm /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/FirstOpenEarsDynamicLanguageModel.DMP
    -lmctl
    -lmname
    -logbase 1.0001 1.000100e+00
    -logfn
    -logspec no no
    -lowerf 133.33334 1.333333e+02
    -lpbeam 1e-40 1.000000e-40
    -lponlybeam 7e-29 7.000000e-29
    -lw 6.5 6.500000e+00
    -maxhmmpf 10000 10000
    -maxnewoov 20 20
    -maxwpf -1 -1
    -mdef
    -mean
    -mfclogdir
    -min_endfr 0 0
    -mixw
    -mixwfloor 0.0000001 1.000000e-07
    -mllr
    -mmap yes yes
    -ncep 13 13
    -nfft 512 512
    -nfilt 40 40
    -nwpen 1.0 1.000000e+00
    -pbeam 1e-48 1.000000e-48
    -pip 1.0 1.000000e+00
    -pl_beam 1e-10 1.000000e-10
    -pl_pbeam 1e-5 1.000000e-05
    -pl_window 0 0
    -rawlogdir
    -remove_dc no no
    -remove_noise yes yes
    -remove_silence yes yes
    -round_filters yes yes
    -samprate 16000 1.600000e+04
    -seed -1 -1
    -sendump
    -senlogdir
    -senmgau
    -silprob 0.005 5.000000e-03
    -smoothspec no no
    -svspec
    -tmat
    -tmatfloor 0.0001 1.000000e-04
    -topn 4 4
    -topn_beam 0 0
    -toprule
    -transform legacy legacy
    -unit_area yes yes
    -upperf 6855.4976 6.855498e+03
    -usewdphones no no
    -uw 1.0 1.000000e+00
    -vad_postspeech 50 50
    -vad_prespeech 10 10
    -vad_threshold 2.0 3.500000e+00
    -var
    -varfloor 0.0001 1.000000e-04
    -varnorm no no
    -verbose no no
    -warp_params
    -warp_type inverse_linear inverse_linear
    -wbeam 7e-29 7.000000e-29
    -wip 0.65 6.500000e-01
    -wlen 0.025625 2.562500e-02

    INFO: cmd_ln.c(702): Parsing command line:
    \
    -feat s3_1x39

    Current configuration:
    [NAME] [DEFLT] [VALUE]
    -agc none none
    -agcthresh 2.0 2.000000e+00
    -alpha 0.97 9.700000e-01
    -ceplen 13 13
    -cmn current current
    -cmninit 8.0 8.0
    -dither no no
    -doublebw no no
    -feat 1s_c_d_dd s3_1x39
    -frate 100 100
    -input_endian little little
    -lda
    -ldadim 0 0
    -lifter 0 0
    -logspec no no
    -lowerf 133.33334 1.333333e+02
    -ncep 13 13
    -nfft 512 512
    -nfilt 40 40
    -remove_dc no no
    -remove_noise yes yes
    -remove_silence yes yes
    -round_filters yes yes
    -samprate 16000 1.600000e+04
    -seed -1 -1
    -smoothspec no no
    -svspec
    -transform legacy legacy
    -unit_area yes yes
    -upperf 6855.4976 6.855498e+03
    -vad_postspeech 50 50
    -vad_prespeech 10 10
    -vad_threshold 2.0 3.500000e+00
    -varnorm no no
    -verbose no no
    -warp_params
    -warp_type inverse_linear inverse_linear
    -wlen 0.025625 2.562500e-02

    INFO: acmod.c(252): Parsed model-specific feature parameters from /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/OpenEarsSampleApp.app/AcousticModelSpanish.bundle/feat.params
    INFO: feat.c(715): Initializing feature stream to type: ‘s3_1x39′, ceplen=13, CMN=’current’, VARNORM=’no’, AGC=’none’
    INFO: cmn.c(143): mean[0]= 12.00, mean[1..12]= 0.0
    INFO: mdef.c(518): Reading model definition: /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/OpenEarsSampleApp.app/AcousticModelSpanish.bundle/mdef
    INFO: bin_mdef.c(181): Allocating 27954 * 8 bytes (218 KiB) for CD tree
    INFO: tmat.c(206): Reading HMM transition probability matrices: /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/OpenEarsSampleApp.app/AcousticModelSpanish.bundle/transition_matrices
    INFO: acmod.c(124): Attempting to use SCHMM computation module
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/OpenEarsSampleApp.app/AcousticModelSpanish.bundle/means
    INFO: ms_gauden.c(292): 2630 codebook, 1 feature, size:
    INFO: ms_gauden.c(294): 16×39
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/OpenEarsSampleApp.app/AcousticModelSpanish.bundle/variances
    INFO: ms_gauden.c(292): 2630 codebook, 1 feature, size:
    INFO: ms_gauden.c(294): 16×39
    INFO: ms_gauden.c(354): 16 variance values floored
    INFO: acmod.c(126): Attempting to use PTHMM computation module
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/OpenEarsSampleApp.app/AcousticModelSpanish.bundle/means
    INFO: ms_gauden.c(292): 2630 codebook, 1 feature, size:
    INFO: ms_gauden.c(294): 16×39
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/OpenEarsSampleApp.app/AcousticModelSpanish.bundle/variances
    INFO: ms_gauden.c(292): 2630 codebook, 1 feature, size:
    INFO: ms_gauden.c(294): 16×39
    INFO: ms_gauden.c(354): 16 variance values floored
    INFO: ptm_mgau.c(792): Number of codebooks exceeds 256: 2630
    INFO: acmod.c(128): Falling back to general multi-stream GMM computation
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/OpenEarsSampleApp.app/AcousticModelSpanish.bundle/means
    INFO: ms_gauden.c(292): 2630 codebook, 1 feature, size:
    INFO: ms_gauden.c(294): 16×39
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/OpenEarsSampleApp.app/AcousticModelSpanish.bundle/variances
    INFO: ms_gauden.c(292): 2630 codebook, 1 feature, size:
    INFO: ms_gauden.c(294): 16×39
    INFO: ms_gauden.c(354): 16 variance values floored
    INFO: ms_senone.c(149): Reading senone mixture weights: /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/OpenEarsSampleApp.app/AcousticModelSpanish.bundle/mixture_weights
    INFO: ms_senone.c(200): Truncating senone logs3(pdf) values by 10 bits
    INFO: ms_senone.c(207): Not transposing mixture weights in memory
    INFO: ms_senone.c(268): Read mixture weights for 2630 senones: 1 features x 16 codewords
    INFO: ms_senone.c(320): Mapping senones to individual codebooks
    INFO: ms_mgau.c(141): The value of topn: 4
    INFO: dict.c(320): Allocating 4106 * 20 bytes (80 KiB) for word entries
    INFO: dict.c(333): Reading main dictionary: /var/mobile/Applications/3258C065-2A15-463F-A98B-D502DF01812B/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(336): 10 words read
    INFO: dict2pid.c(396): Building PID tables for dictionary
    INFO: dict2pid.c(406): Allocating 26^3 * 2 bytes (34 KiB) for word-initial triphones
    INFO: dict2pid.c(132): Allocated 8216 bytes (8 KiB) for word-final triphones
    INFO: dict2pid.c(196): Allocated 8216 bytes (8 KiB) for single-phone word triphones
    INFO: ngram_model_arpa.c(79): No \data\ mark in LM file
    INFO: ngram_model_dmp.c(166): Will use memory-mapped I/O for LM file
    INFO: ngram_model_dmp.c(220): ngrams 1=12, 2=20, 3=10
    INFO: ngram_model_dmp.c(266): 12 = LM.unigrams(+trailer) read
    INFO: ngram_model_dmp.c(312): 20 = LM.bigrams(+trailer) read
    INFO: ngram_model_dmp.c(338): 10 = LM.trigrams read
    INFO: ngram_model_dmp.c(363): 3 = LM.prob2 entries read
    INFO: ngram_model_dmp.c(383): 3 = LM.bo_wt2 entries read
    INFO: ngram_model_dmp.c(403): 2 = LM.prob3 entries read
    INFO: ngram_model_dmp.c(431): 1 = LM.tseg_base entries read
    INFO: ngram_model_dmp.c(487): 12 = ascii word strings read
    INFO: ngram_search_fwdtree.c(99): 9 unique initial diphones
    INFO: ngram_search_fwdtree.c(148): 0 root, 0 non-root channels, 4 single-phone words
    INFO: ngram_search_fwdtree.c(186): Creating search tree
    INFO: ngram_search_fwdtree.c(192): before: 0 root, 0 non-root channels, 4 single-phone words
    INFO: ngram_search_fwdtree.c(326): after: max nonroot chan increased to 163
    INFO: ngram_search_fwdtree.c(339): after: 9 root, 35 non-root channels, 3 single-phone words
    INFO: ngram_search_fwdflat.c(157): fwdflat: min_ef_width = 4, max_sf_win = 25
    2015-01-07 13:01:30.877 OpenEarsSampleApp[842:1803] Listening.
    2015-01-07 13:01:30.879 OpenEarsSampleApp[842:1803] Project has these words or phrases in its dictionary:
    ADIOS
    BARCELONA
    CHORIZO
    HOLA
    HORCHATA
    LECHUGA
    MADRID
    MAINZ
    PARIS
    ROMA
    2015-01-07 13:01:30.880 OpenEarsSampleApp[842:1803] Recognition loop has started
    2015-01-07 13:01:30.905 OpenEarsSampleApp[842:60b] Local callback: Pocketsphinx is now listening.
    2015-01-07 13:01:30.908 OpenEarsSampleApp[842:60b] Local callback: Pocketsphinx started.
    2015-01-07 13:01:31.210 OpenEarsSampleApp[842:1803] Speech detected…
    2015-01-07 13:01:31.212 OpenEarsSampleApp[842:60b] Local callback: Pocketsphinx has detected speech.
    2015-01-07 13:01:33.335 OpenEarsSampleApp[842:1803] End of speech detected…
    INFO: cmn_prior.c(131): cmn_prior_update: from < 8.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 >
    INFO: cmn_prior.c(149): cmn_prior_update: to < 9.20 0.70 -0.18 -0.14 -0.30 -0.40 -0.35 -0.32 -0.38 -0.21 -0.22 -0.25 -0.19 >
    INFO: ngram_search_fwdtree.c(1550): 277 words recognized (1/fr)
    INFO: ngram_search_fwdtree.c(1552): 14886 senones evaluated (66/fr)
    INFO: ngram_search_fwdtree.c(1556): 3780 channels searched (16/fr), 505 1st, 2058 last
    INFO: ngram_search_fwdtree.c(1559): 304 words for which last channels evaluated (1/fr)
    INFO: ngram_search_fwdtree.c(1561): 54 candidate words for entering last phone (0/fr)
    INFO: ngram_search_fwdtree.c(1564): fwdtree 0.34 CPU 0.152 xRT
    INFO: ngram_search_fwdtree.c(1567): fwdtree 2.44 wall 1.084 xRT
    INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 2 words
    2015-01-07 13:01:33.337 OpenEarsSampleApp[842:60b] Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: ngram_search_fwdflat.c(938): 318 words recognized (1/fr)
    INFO: ngram_search_fwdflat.c(940): 17608 senones evaluated (78/fr)
    INFO: ngram_search_fwdflat.c(942): 5444 channels searched (24/fr)
    INFO: ngram_search_fwdflat.c(944): 473 words searched (2/fr)
    INFO: ngram_search_fwdflat.c(947): 57 word transitions (0/fr)
    INFO: ngram_search_fwdflat.c(950): fwdflat 0.23 CPU 0.103 xRT
    INFO: ngram_search_fwdflat.c(953): fwdflat 0.23 wall 0.103 xRT
    INFO: ngram_search.c(1215): </s> not found in last frame, using HOLA.223 instead
    INFO: ngram_search.c(1268): lattice start node <s>.0 end node HOLA.2
    INFO: ngram_search.c(1294): Eliminated 32 nodes before end node
    INFO: ngram_search.c(1399): Lattice has 36 nodes, 1 links
    INFO: ps_lattice.c(1368): Normalizer P(O) = alpha(HOLA:2:223) = -478053
    INFO: ps_lattice.c(1403): Joint P(O,S) = -478053 P(S|O) = 0
    INFO: ngram_search.c(890): bestpath 0.01 CPU 0.002 xRT
    INFO: ngram_search.c(893): bestpath 0.00 wall 0.000 xRT
    2015-01-07 13:01:33.569 OpenEarsSampleApp[842:1803] Pocketsphinx heard “HOLA” with a score of (0) and an utterance ID of 0.
    2015-01-07 13:01:33.570 OpenEarsSampleApp[842:60b] Flite sending interrupt speech request.
    2015-01-07 13:01:33.572 OpenEarsSampleApp[842:60b] Local callback: The received hypothesis is HOLA with a score of 0 and an ID of 0
    2015-01-07 13:01:33.574 OpenEarsSampleApp[842:60b] I’m running flite
    2015-01-07 13:01:33.716 OpenEarsSampleApp[842:60b] I’m done running flite and it took 0.140953 seconds
    2015-01-07 13:01:33.717 OpenEarsSampleApp[842:60b] Flite audio player was nil when referenced so attempting to allocate a new audio player.
    2015-01-07 13:01:33.719 OpenEarsSampleApp[842:60b] Loading speech data for Flite concluded successfully.
    2015-01-07 13:01:33.770 OpenEarsSampleApp[842:60b] Local callback: hypothesisArray is (
    {
    Hypothesis = HOLA;
    Score = “-9037”;
    }
    )
    2015-01-07 13:01:33.772 OpenEarsSampleApp[842:60b] Flite sending suspend recognition notification.
    2015-01-07 13:01:33.774 OpenEarsSampleApp[842:60b] Local callback: Flite has started speaking
    2015-01-07 13:01:33.780 OpenEarsSampleApp[842:60b] Local callback: Pocketsphinx has suspended recognition.
    2015-01-07 13:01:34.979 OpenEarsSampleApp[842:60b] AVAudioPlayer did finish playing with success flag of 1
    2015-01-07 13:01:35.132 OpenEarsSampleApp[842:60b] Flite sending resume recognition notification.
    2015-01-07 13:01:35.635 OpenEarsSampleApp[842:60b] Local callback: Flite has finished speaking
    2015-01-07 13:01:35.642 OpenEarsSampleApp[842:60b] Valid setSecondsOfSilence value of 0.500000 will be used.
    2015-01-07 13:01:35.643 OpenEarsSampleApp[842:60b] Local callback: Pocketsphinx has resumed recognition.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 9.20 0.70 -0.18 -0.14 -0.30 -0.40 -0.35 -0.32 -0.38 -0.21 -0.22 -0.25 -0.19 >
    INFO: cmn_prior.c(149): cmn_prior_update: to < 9.20 0.70 -0.18 -0.14 -0.30 -0.40 -0.35 -0.32 -0.38 -0.21 -0.22 -0.25 -0.19 >
    INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 0 words
    2015-01-07 13:01:37.487 OpenEarsSampleApp[842:1803] Speech detected…
    2015-01-07 13:01:37.489 OpenEarsSampleApp[842:60b] Local callback: Pocketsphinx has detected speech.
    2015-01-07 13:01:41.814 OpenEarsSampleApp[842:1803] End of speech detected…
    INFO: cmn_prior.c(131): cmn_prior_update: from < 9.20 0.70 -0.18 -0.14 -0.30 -0.40 -0.35 -0.32 -0.38 -0.21 -0.22 -0.25 -0.19 >
    INFO: cmn_prior.c(149): cmn_prior_update: to < 9.31 0.33 -0.27 0.11 -0.30 -0.45 -0.30 -0.15 -0.25 -0.18 -0.20 -0.17 -0.11 >
    INFO: ngram_search_fwdtree.c(1550): 1881 words recognized (4/fr)
    INFO: ngram_search_fwdtree.c(1552): 95871 senones evaluated (209/fr)
    INFO: ngram_search_fwdtree.c(1556): 29159 channels searched (63/fr), 3690 1st, 15184 last
    INFO: ngram_search_fwdtree.c(1559): 2097 words for which last channels evaluated (4/fr)
    INFO: ngram_search_fwdtree.c(1561): 1023 candidate words for entering last phone (2/fr)
    INFO: ngram_search_fwdtree.c(1564): fwdtree 1.66 CPU 0.362 xRT
    INFO: ngram_search_fwdtree.c(1567): fwdtree 6.03 wall 1.316 xRT
    INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 12 words
    2015-01-07 13:01:41.815 OpenEarsSampleApp[842:60b] Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: ngram_search_fwdflat.c(938): 1483 words recognized (3/fr)
    INFO: ngram_search_fwdflat.c(940): 94703 senones evaluated (207/fr)
    INFO: ngram_search_fwdflat.c(942): 37245 channels searched (81/fr)
    INFO: ngram_search_fwdflat.c(944): 3256 words searched (7/fr)
    INFO: ngram_search_fwdflat.c(947): 1500 word transitions (3/fr)
    INFO: ngram_search_fwdflat.c(950): fwdflat 1.24 CPU 0.270 xRT
    INFO: ngram_search_fwdflat.c(953): fwdflat 1.22 wall 0.266 xRT
    INFO: ngram_search.c(1215): </s> not found in last frame, using <sil>.456 instead
    INFO: ngram_search.c(1268): lattice start node <s>.0 end node <sil>.386
    INFO: ngram_search.c(1294): Eliminated 5 nodes before end node
    INFO: ngram_search.c(1399): Lattice has 67 nodes, 69 links
    INFO: ps_lattice.c(1368): Normalizer P(O) = alpha(<sil>:386:456) = -1095420
    INFO: ps_lattice.c(1403): Joint P(O,S) = -1112740 P(S|O) = -17320
    INFO: ngram_search.c(890): bestpath 0.00 CPU 0.000 xRT
    INFO: ngram_search.c(893): bestpath 0.00 wall 0.000 xRT
    2015-01-07 13:01:43.036 OpenEarsSampleApp[842:1803] Pocketsphinx heard “MAINZ HORCHATA LECHUGA ADIOS” with a score of (-17320) and an utterance ID of 1.
    2015-01-07 13:01:43.037 OpenEarsSampleApp[842:60b] Flite sending interrupt speech request.
    2015-01-07 13:01:43.038 OpenEarsSampleApp[842:60b] Local callback: The received hypothesis is MAINZ HORCHATA LECHUGA ADIOS with a score of -17320 and an ID of 1
    2015-01-07 13:01:43.041 OpenEarsSampleApp[842:60b] I’m running flite
    2015-01-07 13:01:43.054 OpenEarsSampleApp[842:1803] Speech detected…
    2015-01-07 13:01:43.292 OpenEarsSampleApp[842:60b] I’m done running flite and it took 0.249377 seconds
    2015-01-07 13:01:43.293 OpenEarsSampleApp[842:60b] Flite audio player was nil when referenced so attempting to allocate a new audio player.
    2015-01-07 13:01:43.294 OpenEarsSampleApp[842:60b] Loading speech data for Flite concluded successfully.
    2015-01-07 13:01:43.337 OpenEarsSampleApp[842:60b] Local callback: hypothesisArray is (
    {
    Hypothesis = “MAINZ HORCHATA LECHUGA ADIOS”;
    Score = “-20469”;
    },
    {
    Hypothesis = “MAINZ HORCHATA LECHUGA PARIS”;
    Score = “-20469”;
    },
    {
    Hypothesis = “MAINZ HORCHATA LECHUGA MAINZ”;
    Score = “-20469”;
    },
    {
    Hypothesis = “MAINZ MAINZ HORCHATA LECHUGA ADIOS”;
    Score = “-20469”;
    },
    {
    Hypothesis = “BARCELONA LECHUGA ADIOS”;
    Score = “-20469”;
    }
    )
    2015-01-07 13:01:43.339 OpenEarsSampleApp[842:60b] Local callback: Pocketsphinx has detected speech.
    2015-01-07 13:01:43.342 OpenEarsSampleApp[842:60b] Flite sending suspend recognition notification.
    2015-01-07 13:01:43.344 OpenEarsSampleApp[842:60b] Local callback: Flite has started speaking
    2015-01-07 13:01:43.351 OpenEarsSampleApp[842:60b] Local callback: Pocketsphinx has suspended recognition.
    2015-01-07 13:01:45.939 OpenEarsSampleApp[842:60b] AVAudioPlayer did finish playing with success flag of 1
    2015-01-07 13:01:46.092 OpenEarsSampleApp[842:60b] Flite sending resume recognition notification.
    2015-01-07 13:01:46.595 OpenEarsSampleApp[842:60b] Local callback: Flite has finished speaking
    2015-01-07 13:01:46.602 OpenEarsSampleApp[842:60b] Valid setSecondsOfSilence value of 0.500000 will be used.
    2015-01-07 13:01:46.603 OpenEarsSampleApp[842:60b] Local callback: Pocketsphinx has resumed recognition.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 9.31 0.33 -0.27 0.11 -0.30 -0.45 -0.30 -0.15 -0.25 -0.18 -0.20 -0.17 -0.11 >
    INFO: cmn_prior.c(149): cmn_prior_update: to < 9.31 0.35 -0.27 0.13 -0.30 -0.46 -0.30 -0.16 -0.25 -0.19 -0.20 -0.17 -0.12 >
    INFO: ngram_search_fwdtree.c(1550): 149 words recognized (3/fr)
    INFO: ngram_search_fwdtree.c(1552): 8840 senones evaluated (173/fr)
    INFO: ngram_search_fwdtree.c(1556): 2560 channels searched (50/fr), 423 1st, 1352 last
    INFO: ngram_search_fwdtree.c(1559): 199 words for which last channels evaluated (3/fr)
    INFO: ngram_search_fwdtree.c(1561): 74 candidate words for entering last phone (1/fr)
    INFO: ngram_search_fwdtree.c(1564): fwdtree 0.51 CPU 0.994 xRT
    INFO: ngram_search_fwdtree.c(1567): fwdtree 3.76 wall 7.378 xRT
    INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 5 words
    INFO: ngram_search_fwdflat.c(938): 55 words recognized (1/fr)
    INFO: ngram_search_fwdflat.c(940): 8242 senones evaluated (162/fr)
    INFO: ngram_search_fwdflat.c(942): 3030 channels searched (59/fr)
    INFO: ngram_search_fwdflat.c(944): 262 words searched (5/fr)
    INFO: ngram_search_fwdflat.c(947): 144 word transitions (2/fr)
    INFO: ngram_search_fwdflat.c(950): fwdflat 0.12 CPU 0.226 xRT
    INFO: ngram_search_fwdflat.c(953): fwdflat 0.12 wall 0.236 xRT
    2015-01-07 13:01:48.672 OpenEarsSampleApp[842:1803] Speech detected…
    2015-01-07 13:01:48.673 OpenEarsSampleApp[842:60b] Local callback: Pocketsphinx has detected speech.
    INFO: cmn_prior.c(99): cmn_prior_update: from < 9.31 0.35 -0.27 0.13 -0.30 -0.46 -0.30 -0.16 -0.25 -0.19 -0.20 -0.17 -0.12 >
    INFO: cmn_prior.c(116): cmn_prior_update: to < 9.32 0.34 -0.26 0.13 -0.30 -0.46 -0.33 -0.17 -0.25 -0.19 -0.20 -0.17 -0.13 >
    2015-01-07 13:01:49.894 OpenEarsSampleApp[842:1803] End of speech detected…
    INFO: cmn_prior.c(131): cmn_prior_update: from < 9.32 0.34 -0.26 0.13 -0.30 -0.46 -0.33 -0.17 -0.25 -0.19 -0.20 -0.17 -0.13 >
    INFO: cmn_prior.c(149): cmn_prior_update: to < 9.41 0.38 -0.27 0.09 -0.32 -0.45 -0.29 -0.18 -0.23 -0.20 -0.20 -0.15 -0.14 >
    INFO: ngram_search_fwdtree.c(1550): 440 words recognized (4/fr)
    INFO: ngram_search_fwdtree.c(1552): 26366 senones evaluated (214/fr)
    INFO: ngram_search_fwdtree.c(1556): 7946 channels searched (64/fr), 1022 1st, 4338 last
    INFO: ngram_search_fwdtree.c(1559): 544 words for which last channels evaluated (4/fr)
    INFO: ngram_search_fwdtree.c(1561): 279 candidate words for entering last phone (2/fr)
    INFO: ngram_search_fwdtree.c(1564): fwdtree 0.53 CPU 0.427 xRT
    INFO: ngram_search_fwdtree.c(1567): fwdtree 2.97 wall 2.418 xRT
    INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 8 words
    2015-01-07 13:01:49.896 OpenEarsSampleApp[842:60b] Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: ngram_search_fwdflat.c(938): 325 words recognized (3/fr)
    INFO: ngram_search_fwdflat.c(940): 25549 senones evaluated (208/fr)
    INFO: ngram_search_fwdflat.c(942): 9753 channels searched (79/fr)
    INFO: ngram_search_fwdflat.c(944): 826 words searched (6/fr)
    INFO: ngram_search_fwdflat.c(947): 472 word transitions (3/fr)
    INFO: ngram_search_fwdflat.c(950): fwdflat 0.34 CPU 0.278 xRT
    INFO: ngram_search_fwdflat.c(953): fwdflat 0.34 wall 0.276 xRT
    INFO: ngram_search.c(1215): </s> not found in last frame, using HOLA.121 instead
    INFO: ngram_search.c(1268): lattice start node <s>.0 end node HOLA.100
    INFO: ngram_search.c(1294): Eliminated 1 nodes before end node
    INFO: ngram_search.c(1399): Lattice has 31 nodes, 26 links
    INFO: ps_lattice.c(1368): Normalizer P(O) = alpha(HOLA:100:121) = -331976
    INFO: ps_lattice.c(1403): Joint P(O,S) = -342116 P(S|O) = -10140
    INFO: ngram_search.c(890): bestpath 0.01 CPU 0.006 xRT
    INFO: ngram_search.c(893): bestpath 0.00 wall 0.000 xRT
    2015-01-07 13:01:50.237 OpenEarsSampleApp[842:1803] Pocketsphinx heard “ADIOS ROMA HOLA” with a score of (-10140) and an utterance ID of 2.
    2015-01-07 13:01:50.238 OpenEarsSampleApp[842:60b] Flite sending interrupt speech request.
    2015-01-07 13:01:50.240 OpenEarsSampleApp[842:60b] Local callback: The received hypothesis is ADIOS ROMA HOLA with a score of -10140 and an ID of 2
    2015-01-07 13:01:50.243 OpenEarsSampleApp[842:60b] I’m running flite
    2015-01-07 13:01:50.459 OpenEarsSampleApp[842:60b] I’m done running flite and it took 0.214843 seconds
    2015-01-07 13:01:50.460 OpenEarsSampleApp[842:60b] Flite audio player was nil when referenced so attempting to allocate a new audio player.
    2015-01-07 13:01:50.461 OpenEarsSampleApp[842:60b] Loading speech data for Flite concluded successfully.
    2015-01-07 13:01:50.489 OpenEarsSampleApp[842:60b] Local callback: hypothesisArray is (
    {
    Hypothesis = “ADIOS ROMA HOLA”;
    Score = “-5782”;
    },
    {
    Hypothesis = “MAINZ ROMA HOLA”;
    Score = “-5782”;
    },
    {
    Hypothesis = “CHORIZO ROMA HOLA”;
    Score = “-5782”;
    },
    {
    Hypothesis = “ROMA HOLA”;
    Score = “-5782”;
    },
    {
    Hypothesis = “ADIOS ROMA HOLA”;
    Score = “-5782”;
    }
    )
    2015-01-07 13:01:50.491 OpenEarsSampleApp[842:60b] Flite sending suspend recognition notification.
    2015-01-07 13:01:50.493 OpenEarsSampleApp[842:60b] Local callback: Flite has started speaking
    2015-01-07 13:01:50.499 OpenEarsSampleApp[842:60b] Local callback: Pocketsphinx has suspended recognition.
    2015-01-07 13:01:52.627 OpenEarsSampleApp[842:60b] AVAudioPlayer did finish playing with success flag of 1
    2015-01-07 13:01:52.780 OpenEarsSampleApp[842:60b] Flite sending resume recognition notification.
    2015-01-07 13:01:53.283 OpenEarsSampleApp[842:60b] Local callback: Flite has finished speaking
    2015-01-07 13:01:53.289 OpenEarsSampleApp[842:60b] Valid setSecondsOfSilence value of 0.500000 will be used.
    2015-01-07 13:01:53.290 OpenEarsSampleApp[842:60b] Local callback: Pocketsphinx has resumed recognition.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 9.41 0.38 -0.27 0.09 -0.32 -0.45 -0.29 -0.18 -0.23 -0.20 -0.20 -0.15 -0.14 >
    INFO: cmn_prior.c(149): cmn_prior_update: to < 9.41 0.38 -0.27 0.09 -0.32 -0.45 -0.29 -0.18 -0.23 -0.20 -0.20 -0.15 -0.14 >
    INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 0 words
    2015-01-07 13:01:54.077 OpenEarsSampleApp[842:1803] Speech detected…
    2015-01-07 13:01:54.079 OpenEarsSampleApp[842:60b] Local callback: Pocketsphinx has detected speech.
    INFO: cmn_prior.c(99): cmn_prior_update: from < 9.41 0.38 -0.27 0.09 -0.32 -0.45 -0.29 -0.18 -0.23 -0.20 -0.20 -0.15 -0.14 >
    INFO: cmn_prior.c(116): cmn_prior_update: to < 9.19 0.49 -0.17 0.08 -0.34 -0.46 -0.31 -0.20 -0.24 -0.22 -0.23 -0.16 -0.18 >
    2015-01-07 13:01:57.566 OpenEarsSampleApp[842:1803] End of speech detected…
    INFO: cmn_prior.c(131): cmn_prior_update: from < 9.19 0.49 -0.17 0.08 -0.34 -0.46 -0.31 -0.20 -0.24 -0.22 -0.23 -0.16 -0.18 >
    INFO: cmn_prior.c(149): cmn_prior_update: to < 8.68 0.43 -0.14 0.05 -0.31 -0.46 -0.32 -0.19 -0.22 -0.20 -0.23 -0.15 -0.18 >
    INFO: ngram_search_fwdtree.c(1550): 1269 words recognized (4/fr)
    INFO: ngram_search_fwdtree.c(1552): 74257 senones evaluated (206/fr)
    INFO: ngram_search_fwdtree.c(1556): 21743 channels searched (60/fr), 3037 1st, 11112 last
    INFO: ngram_search_fwdtree.c(1559): 1527 words for which last channels evaluated (4/fr)
    INFO: ngram_search_fwdtree.c(1561): 803 candidate words for entering last phone (2/fr)
    INFO: ngram_search_fwdtree.c(1564): fwdtree 1.27 CPU 0.353 xRT
    INFO: ngram_search_fwdtree.c(1567): fwdtree 4.12 wall 1.143 xRT
    INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 9 words
    2015-01-07 13:01:57.567 OpenEarsSampleApp[842:60b] Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: ngram_search_fwdflat.c(938): 966 words recognized (3/fr)
    INFO: ngram_search_fwdflat.c(940): 65712 senones evaluated (183/fr)
    INFO: ngram_search_fwdflat.c(942): 24595 channels searched (68/fr)
    INFO: ngram_search_fwdflat.c(944): 2147 words searched (5/fr)
    INFO: ngram_search_fwdflat.c(947): 882 word transitions (2/fr)
    INFO: ngram_search_fwdflat.c(950): fwdflat 0.87 CPU 0.242 xRT
    INFO: ngram_search_fwdflat.c(953): fwdflat 0.88 wall 0.244 xRT
    INFO: ngram_search.c(1268): lattice start node <s>.0 end node </s>.354
    INFO: ngram_search.c(1294): Eliminated 0 nodes before end node
    INFO: ngram_search.c(1399): Lattice has 77 nodes, 73 links
    INFO: ps_lattice.c(1368): Normalizer P(O) = alpha(</s>:354:358) = -1031992
    INFO: ps_lattice.c(1403): Joint P(O,S) = -1042543 P(S|O) = -10551
    INFO: ngram_search.c(890): bestpath 0.00 CPU 0.001 xRT
    INFO: ngram_search.c(893): bestpath 0.00 wall 0.000 xRT
    2015-01-07 13:01:58.446 OpenEarsSampleApp[842:1803] Pocketsphinx heard “ADIOS ROMA CHORIZO HORCHATA HORCHATA HORCHATA” with a score of (-10551) and an utterance ID of 3.
    2015-01-07 13:01:58.448 OpenEarsSampleApp[842:60b] Flite sending interrupt speech request.
    2015-01-07 13:01:58.449 OpenEarsSampleApp[842:60b] Local callback: The received hypothesis is ADIOS ROMA CHORIZO HORCHATA HORCHATA HORCHATA with a score of -10551 and an ID of 3
    2015-01-07 13:01:58.452 OpenEarsSampleApp[842:60b] I’m running flite
    2015-01-07 13:01:58.738 OpenEarsSampleApp[842:1803] Speech detected…
    2015-01-07 13:01:58.815 OpenEarsSampleApp[842:60b] I’m done running flite and it took 0.361972 seconds
    2015-01-07 13:01:58.816 OpenEarsSampleApp[842:60b] Flite audio player was nil when referenced so attempting to allocate a new audio player.
    2015-01-07 13:01:58.818 OpenEarsSampleApp[842:60b] Loading speech data for Flite concluded successfully.
    2015-01-07 13:01:58.847 OpenEarsSampleApp[842:60b] Local callback: hypothesisArray is (
    {
    Hypothesis = “ADIOS ROMA CHORIZO HORCHATA HORCHATA HORCHATA”;
    Score = “-18498”;
    },
    {
    Hypothesis = “ADIOS ROMA CHORIZO LECHUGA ROMA HORCHATA”;
    Score = “-18498”;
    },
    {
    Hypothesis = “ADIOS ROMA CHORIZO HORCHATA HOLA HORCHATA”;
    Score = “-18498”;
    },
    {
    Hypothesis = “ADIOS ROMA CHORIZO HORCHATA HORCHATA”;
    Score = “-18498”;
    },
    {
    Hypothesis = “HOLA ROMA CHORIZO HORCHATA HORCHATA HORCHATA”;
    Score = “-18498”;
    }
    )
    2015-01-07 13:01:58.848 OpenEarsSampleApp[842:60b] Local callback: Pocketsphinx has detected speech.
    2015-01-07 13:01:58.850 OpenEarsSampleApp[842:60b] Flite sending suspend recognition notification.
    2015-01-07 13:01:58.852 OpenEarsSampleApp[842:60b] Local callback: Flite has started speaking
    2015-01-07 13:01:58.859 OpenEarsSampleApp[842:60b] Local callback: Pocketsphinx has suspended recognition.
    2015-01-07 13:02:02.658 OpenEarsSampleApp[842:60b] AVAudioPlayer did finish playing with success flag of 1
    2015-01-07 13:02:02.810 OpenEarsSampleApp[842:60b] Flite sending resume recognition notification.
    2015-01-07 13:02:03.314 OpenEarsSampleApp[842:60b] Local callback: Flite has finished speaking
    2015-01-07 13:02:03.320 OpenEarsSampleApp[842:60b] Valid setSecondsOfSilence value of 0.500000 will be used.
    2015-01-07 13:02:03.321 OpenEarsSampleApp[842:60b] Local callback: Pocketsphinx has resumed recognition.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 8.68 0.43 -0.14 0.05 -0.31 -0.46 -0.32 -0.19 -0.22 -0.20 -0.23 -0.15 -0.18 >
    INFO: cmn_prior.c(149): cmn_prior_update: to < 8.79 0.45 -0.13 0.05 -0.30 -0.47 -0.36 -0.19 -0.22 -0.21 -0.24 -0.16 -0.19 >
    INFO: ngram_search_fwdtree.c(1550): 600 words recognized (5/fr)
    INFO: ngram_search_fwdtree.c(1552): 34910 senones evaluated (277/fr)
    INFO: ngram_search_fwdtree.c(1556): 10828 channels searched (85/fr), 1096 1st, 6179 last
    INFO: ngram_search_fwdtree.c(1559): 736 words for which last channels evaluated (5/fr)
    INFO: ngram_search_fwdtree.c(1561): 563 candidate words for entering last phone (4/fr)
    INFO: ngram_search_fwdtree.c(1564): fwdtree 0.99 CPU 0.788 xRT
    INFO: ngram_search_fwdtree.c(1567): fwdtree 5.01 wall 3.973 xRT
    INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 10 words
    INFO: ngram_search_fwdflat.c(938): 321 words recognized (3/fr)
    INFO: ngram_search_fwdflat.c(940): 34110 senones evaluated (271/fr)
    INFO: ngram_search_fwdflat.c(942): 14032 channels searched (111/fr)
    INFO: ngram_search_fwdflat.c(944): 1130 words searched (8/fr)
    INFO: ngram_search_fwdflat.c(947): 464 word transitions (3/fr)
    INFO: ngram_search_fwdflat.c(950): fwdflat 0.46 CPU 0.363 xRT
    INFO: ngram_search_fwdflat.c(953): fwdflat 0.46 wall 0.362 xRT

Here is the link to download the speech I am using:

    https://dl.dropboxusercontent.com/u/6380067/openears4.wav.zip

In this recording I just said two words in Spanish: "MADRID" at second 10 and "ROMA" at second 20. The rest is noise around me.

    I hope this helps you again.

    maxgarmar
    Participant

Ok Halle, I have done an exhaustive test comparing the noise handling of OpenEars 2.0 and 1.x. OpenEars 2.0.1 has improved a lot, congratulations and thanks by the way, but the sensitivity is still higher than in 1.x. If you like I can send you another .wav with cases that 2.0.1 is still picking up, but I don't think it's necessary.
Thinking about apps that use this library, almost all of them will have noise around them, so I would increase the default vadThreshold to a bigger value like 6.0, or whatever rejects more noise, for people like me who think about how the library will actually be used. If I record with Apple's Voice Memos app, the noise around me doesn't even move the app's level bars until I am speaking directly into the mic, but OpenEars 2 is still listening to it.

People who want recognition to pick up more, noise included, could then decrease this value, to 1.0 for instance.
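Just to make the suggestion concrete, here is a minimal sketch of the kind of setup I have in mind for the sample app (6.0 is only an example value, not something I have benchmarked as the ideal default):

#import <OpenEars/OEPocketsphinxController.h>

// Activate the shared controller before setting any of its characteristics.
[[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil];
// A higher voice activity detection threshold makes background noise less likely to be treated as speech.
[[OEPocketsphinxController sharedInstance] setVadThreshold:6.0];
// Anyone who actually wants to pick up quieter or noisier input could lower it instead, e.g.:
// [[OEPocketsphinxController sharedInstance] setVadThreshold:1.0];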

What do you think? Please don't hesitate to ask me for any further information; I am just trying to help.

    Thanks

    maxgarmar
    Participant

    Hi Halle,

How far did you get with the problem I was facing? As you can see, the recognition never concludes an utterance while my recording is playing (because of the TV in the background), whereas with 1.x it stops after every word I say. That's the problem, I think: 2.0 never stops listening because of the noise.

By the way, playing around with your nice framework I just realized that if you add any word that was previously not being recognized to the LanguageModelGeneratorLookupList.text file, then recognition works perfectly. So my problem is solved for that one specific word, which is crazy if I have to add every word the framework is not recognizing.
Then I looked at the same file in the 2.0 version and I see that the dictionary is exactly the same as in 1.x, but it still works for words that 1.x does not recognize, except that you have to add them to the dictionary.
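For what it's worth, my understanding (an assumption on my part, not something I have confirmed) is that LanguageModelGeneratorLookupList.text is simply a plain pronunciation dictionary inside the acoustic model bundle, so the kind of line I add for a missing word looks roughly like the following; the phonemes here are only illustrative and would have to match the Spanish acoustic model's real phoneset:

HORCHATA	O R CH A T A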
I hope this helps you find out what's happening.

    Thanks

    maxgarmar
    Participant

Sorry, I was trying to add the link via the "link" tag in the editor but I don't see it in the result. I edited the post twice but it did not work, or perhaps it is hidden?

Anyway, here it is without the tag:

    https://dl.dropboxusercontent.com/u/6380067/openears.wav.zip

and it is compressed.

    Sorry for writing again.

    Thanks

    maxgarmar
    Participant

    Ok here we go.

This is the complete code I changed in ViewController.m of the test application. You can see how I set the values and everything.

    //  ViewController.m
    //  OpenEarsSampleApp
    //
    //  ViewController.m demonstrates the use of the OpenEars framework. 
    //
    //  Copyright Politepix UG (haftungsbeschränkt) 2014. All rights reserved.
    //  https://www.politepix.com
    //  Contact at https://www.politepix.com/contact
    //
    //  This file is licensed under the Politepix Shared Source license found in the root of the source distribution.
    
    // **************************************************************************************************************************************************************
    // **************************************************************************************************************************************************************
    // **************************************************************************************************************************************************************
    // IMPORTANT NOTE: Audio driver and hardware behavior is completely different between the Simulator and a real device. It is not informative to test OpenEars' accuracy on the Simulator, and please do not report Simulator-only bugs since I only actively support 
    // the device driver. Please only do testing/bug reporting based on results on a real device such as an iPhone or iPod Touch. Thanks!
    // **************************************************************************************************************************************************************
    // **************************************************************************************************************************************************************
    // **************************************************************************************************************************************************************
    
    #import "ViewController.h"
    #import <OpenEars/OEPocketsphinxController.h>
    #import <OpenEars/OEFliteController.h>
    #import <OpenEars/OELanguageModelGenerator.h>
    #import <OpenEars/OELogging.h>
    #import <OpenEars/OEAcousticModel.h>
    #import <Slt/Slt.h>
    
    @interface ViewController()
    
    // UI actions, not specifically related to OpenEars other than the fact that they invoke OpenEars methods.
    - (IBAction) stopButtonAction;
    - (IBAction) startButtonAction;
    - (IBAction) suspendListeningButtonAction;
    - (IBAction) resumeListeningButtonAction;
    
    // Example for reading out the input audio levels without locking the UI using an NSTimer
    
    - (void) startDisplayingLevels;
    - (void) stopDisplayingLevels;
    
    // These three are the important OpenEars objects that this class demonstrates the use of.
    @property (nonatomic, strong) Slt *slt;
    
    @property (nonatomic, strong) OEEventsObserver *openEarsEventsObserver;
    @property (nonatomic, strong) OEPocketsphinxController *pocketsphinxController;
    @property (nonatomic, strong) OEFliteController *fliteController;
    
    // Some UI, not specifically related to OpenEars.
    @property (nonatomic, strong) IBOutlet UIButton *stopButton;
    @property (nonatomic, strong) IBOutlet UIButton *startButton;
    @property (nonatomic, strong) IBOutlet UIButton *suspendListeningButton;	
    @property (nonatomic, strong) IBOutlet UIButton *resumeListeningButton;	
    @property (nonatomic, strong) IBOutlet UITextView *statusTextView;
    @property (nonatomic, strong) IBOutlet UITextView *heardTextView;
    @property (nonatomic, strong) IBOutlet UILabel *pocketsphinxDbLabel;
    @property (nonatomic, strong) IBOutlet UILabel *fliteDbLabel;
    @property (nonatomic, assign) BOOL usingStartingLanguageModel;
    @property (nonatomic, assign) int restartAttemptsDueToPermissionRequests;
    @property (nonatomic, assign) BOOL startupFailedDueToLackOfPermissions;
    
    // Things which help us show off the dynamic language features.
    @property (nonatomic, copy) NSString *pathToFirstDynamicallyGeneratedLanguageModel;
    @property (nonatomic, copy) NSString *pathToFirstDynamicallyGeneratedDictionary;
    @property (nonatomic, copy) NSString *pathToSecondDynamicallyGeneratedLanguageModel;
    @property (nonatomic, copy) NSString *pathToSecondDynamicallyGeneratedDictionary;
    
    // Our NSTimer that will help us read and display the input and output levels without locking the UI
    @property (nonatomic, strong) 	NSTimer *uiUpdateTimer;
    
    @end
    
    @implementation ViewController
    
    #define kLevelUpdatesPerSecond 18 // We'll have the ui update 18 times a second to show some fluidity without hitting the CPU too hard.
    
    //#define kGetNbest // Uncomment this if you want to try out nbest
    #pragma mark - 
    #pragma mark Memory Management
    
    - (void)dealloc {
        [self stopDisplayingLevels];
    }
    
    #pragma mark -
    #pragma mark View Lifecycle
    
    - (void)viewDidLoad {
        [super viewDidLoad];
        self.fliteController = [[OEFliteController alloc] init];
        self.openEarsEventsObserver = [[OEEventsObserver alloc] init];
        self.openEarsEventsObserver.delegate = self;
        self.slt = [[Slt alloc] init];
        
        self.restartAttemptsDueToPermissionRequests = 0;
        self.startupFailedDueToLackOfPermissions = FALSE;
        
        // [OELogging startOpenEarsLogging]; // Uncomment me for OELogging, which is verbose logging about internal OpenEars operations such as audio settings. If you have issues, show this logging in the forums.
        //[OEPocketsphinxController sharedInstance].verbosePocketSphinx = TRUE; // Uncomment this for much more verbose speech recognition engine output. If you have issues, show this logging in the forums.
        
        [self.openEarsEventsObserver setDelegate:self]; // Make this class the delegate of OpenEarsObserver so we can get all of the messages about what OpenEars is doing.
        
        [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil]; // Call this before setting any OEPocketsphinxController characteristics
        
        [[OEPocketsphinxController sharedInstance] setSecondsOfSilenceToDetect:0.3];
        [[OEPocketsphinxController sharedInstance] setVadThreshold:4.0];
        
        // This is the language model we're going to start up with. The only reason I'm making it a class property is that I reuse it a bunch of times in this example, 
        // but you can pass the string contents directly to OEPocketsphinxController:startListeningWithLanguageModelAtPath:dictionaryAtPath:languageModelIsJSGF:
        
        NSArray *firstLanguageArray = @[@"ADIOS",
                                        @"LECHUGA",
                                        @"MADRID",
                                        @"BARCELONA",
                                        @"PARIS",
                                        @"ROMA",
                                        @"MAINZ",
                                        @"HOLA"];
        
        OELanguageModelGenerator *languageModelGenerator = [[OELanguageModelGenerator alloc] init];
        
        // languageModelGenerator.verboseLanguageModelGenerator = TRUE; // Uncomment me for verbose language model generator debug output.
    
    NSError *error = [languageModelGenerator generateLanguageModelFromArray:firstLanguageArray withFilesNamed:@"FirstOpenEarsDynamicLanguageModel" forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelSpanish"]]; // Using "AcousticModelSpanish" here so that the language model is generated for Spanish recognition instead of English.
        
        
        if(error) {
            NSLog(@"Dynamic language generator reported error %@", [error description]);	
        } else {
            self.pathToFirstDynamicallyGeneratedLanguageModel = [languageModelGenerator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:@"FirstOpenEarsDynamicLanguageModel"];
            self.pathToFirstDynamicallyGeneratedDictionary = [languageModelGenerator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"FirstOpenEarsDynamicLanguageModel"];
        }
        
        self.usingStartingLanguageModel = TRUE; // This is not an OpenEars thing, this is just so I can switch back and forth between the two models in this sample app.
        
        // Here is an example of dynamically creating an in-app grammar.
        
    // We want it to be able to respond to the speech "CHANGE MODEL" and a few other things.  Items we want to have recognized as a whole phrase (like "CHANGE MODEL") 
    // we put into the array as one string (e.g. "CHANGE MODEL" instead of "CHANGE" and "MODEL"). This increases the probability that they will be recognized as a phrase. This works even better starting with version 1.0 of OpenEars.
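    // Hedged illustration (not part of this repro): a mixed array where one item should be matched as a whole phrase
    // and the others as single words could look like the commented line below.
    //
    // NSArray *exampleMixedArray = @[@"CHANGE MODEL", @"HOLA", @"ADIOS"]; // "CHANGE MODEL" is kept together as one string so it is favored as a phrase.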
        
        
        NSArray *secondLanguageArray = @[@"ADIOS",
                                        @"LECHUGA",
                                        @"MADRID",
                                        @"BARCELONA",
                                        @"PARIS",
                                        @"ROMA",
                                        @"MAINZ",
                                        @"HOLA"];
        
    // In the original sample, the last entry, QUIDNUNC, was an example of a word which will not be found in the lookup dictionary and will be passed to the fallback method. The fallback method is slower,
    // so, for instance, creating a new language model from dictionary words will be pretty fast, but a model that has a lot of unusual names in it or invented/rare/recent-slang
    // words will be slower to generate. You can use this information to give your users good UI feedback about what the expectations for wait times should be.
        
        // I don't think it's beneficial to lazily instantiate OELanguageModelGenerator because you only need to give it a single message and then release it.
        // If you need to create a very large model or any size of model that has many unusual words that have to make use of the fallback generation method,
        // you will want to run this on a background thread so you can give the user some UI feedback that the task is in progress.
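    // Hedged sketch (not part of the original sample or of this repro): if generation were slow because of many
    // fallback-method words, one background-thread approach -- under the assumption that any UI work is dispatched
    // back to the main queue -- could look like the commented lines below.
    //
    // dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    //     NSError *backgroundError = [languageModelGenerator generateLanguageModelFromArray:secondLanguageArray withFilesNamed:@"SecondOpenEarsDynamicLanguageModel" forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelSpanish"]];
    //     dispatch_async(dispatch_get_main_queue(), ^{
    //         if(backgroundError) NSLog(@"Background dynamic language generator reported error %@", [backgroundError description]);
    //         // ...otherwise retrieve the generated paths and start or switch listening here, on the main queue...
    //     });
    // });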
            
        // generateLanguageModelFromArray:withFilesNamed returns an NSError which will either have a value of noErr if everything went fine or a specific error if it didn't.
    error = [languageModelGenerator generateLanguageModelFromArray:secondLanguageArray withFilesNamed:@"SecondOpenEarsDynamicLanguageModel" forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelSpanish"]]; // Using "AcousticModelSpanish" here as well so the second model is also generated for Spanish recognition.
        
        //    NSError *error = [languageModelGenerator generateLanguageModelFromTextFile:[NSString stringWithFormat:@"%@/%@",[[NSBundle mainBundle] resourcePath], @"OpenEarsCorpus.txt"] withFilesNamed:@"SecondOpenEarsDynamicLanguageModel" forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelSpanish"]]; // Try this out to see how generating a language model from a corpus works.
        
        
        if(error) {
            NSLog(@"Dynamic language generator reported error %@", [error description]);	
        }	else {
            
            self.pathToSecondDynamicallyGeneratedLanguageModel = [languageModelGenerator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:@"SecondOpenEarsDynamicLanguageModel"]; // We'll set our new .languagemodel file to be the one to get switched to when the words "CHANGE MODEL" are recognized.
        self.pathToSecondDynamicallyGeneratedDictionary = [languageModelGenerator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"SecondOpenEarsDynamicLanguageModel"]; // We'll set our new dictionary to be the one to get switched to when the words "CHANGE MODEL" are recognized.
            
            // Next, an informative message.
            
        NSLog(@"\n\nWelcome to the OpenEars sample project. In this modified version, both dynamically-generated language models contain the same words:\nADIOS,\nLECHUGA,\nMADRID,\nBARCELONA,\nPARIS,\nROMA,\nMAINZ,\nHOLA");
            
            // This is how to start the continuous listening loop of an available instance of OEPocketsphinxController. We won't do this if the language generation failed since it will be listening for a command to change over to the generated language.
            
            [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil]; // Call this once before setting properties of the OEPocketsphinxController instance.
            
            
            [[OEPocketsphinxController sharedInstance] setSecondsOfSilenceToDetect:0.3];
            [[OEPocketsphinxController sharedInstance] setVadThreshold:4.0];
    
            [OEPocketsphinxController sharedInstance].pathToTestFile = [[NSBundle mainBundle] pathForResource:@"openears" ofType:@"wav"];  // This is how you could use a test WAV (mono/16-bit/16k) rather than live recognition
           
            
            if(![OEPocketsphinxController sharedInstance].isListening) {
                [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelSpanish"] languageModelIsJSGF:FALSE]; // Start speech recognition if we aren't already listening.
            }
            // [self startDisplayingLevels] is not an OpenEars method, just a very simple approach for level reading
            // that I've included with this sample app. My example implementation does make use of two OpenEars
            // methods:	the pocketsphinxInputLevel method of OEPocketsphinxController and the fliteOutputLevel
            // method of fliteController. 
            //
            // The example is meant to show one way that you can read those levels continuously without locking the UI, 
            // by using an NSTimer, but the OpenEars level-reading methods 
            // themselves do not include multithreading code since I believe that you will want to design your own 
            // code approaches for level display that are tightly-integrated with your interaction design and the  
            // graphics API you choose. 
       
            [self startDisplayingLevels];
            
            // Here is some UI stuff that has nothing specifically to do with OpenEars implementation
            self.startButton.hidden = TRUE;
            self.stopButton.hidden = TRUE;
            self.suspendListeningButton.hidden = TRUE;
            self.resumeListeningButton.hidden = TRUE;
        }
    }
    
    #pragma mark -
    #pragma mark OEEventsObserver delegate methods
    
    // What follows are all of the delegate methods you can optionally use once you've instantiated an OEEventsObserver and set its delegate to self. 
    // I've provided some pretty granular information about the exact phase of the Pocketsphinx listening loop, the Audio Session, and Flite, but I'd expect 
    // that the ones that will really be needed by most projects are the following:
    //
    //- (void) pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore utteranceID:(NSString *)utteranceID;
    //- (void) audioSessionInterruptionDidBegin;
    //- (void) audioSessionInterruptionDidEnd;
    //- (void) audioRouteDidChangeToRoute:(NSString *)newRoute;
    //- (void) pocketsphinxDidStartListening;
    //- (void) pocketsphinxDidStopListening;
    //
    // It isn't necessary to have an OEPocketsphinxController or an OEFliteController instantiated in order to use these methods.  If there isn't anything instantiated that will
    // send messages to an OEEventsObserver, all that will happen is that these methods will never fire.  You also do not have to create an OEEventsObserver in
    // the same class or view controller in which you are doing things with an OEPocketsphinxController or OEFliteController; you can receive updates from those objects in
    // any class in which you instantiate an OEEventsObserver and set its delegate to self.
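    // Hedged sketch (not part of this repro): receiving the same callbacks in a different class. This assumes the
    // delegate protocol is named OEEventsObserverDelegate and uses a hypothetical class called StatusMonitor.
    //
    // @interface StatusMonitor : NSObject <OEEventsObserverDelegate>
    // @property (nonatomic, strong) OEEventsObserver *statusObserver;
    // @end
    //
    // @implementation StatusMonitor
    // - (instancetype) init {
    //     if((self = [super init])) {
    //         _statusObserver = [[OEEventsObserver alloc] init];
    //         _statusObserver.delegate = self; // Callbacks such as pocketsphinxDidStartListening will now also arrive here.
    //     }
    //     return self;
    // }
    // - (void) pocketsphinxDidStartListening {
    //     NSLog(@"StatusMonitor: Pocketsphinx is now listening.");
    // }
    // @end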
    
    // This is an optional delegate method of OEEventsObserver which delivers the text of speech that Pocketsphinx heard and analyzed, along with its accuracy score and utterance ID.
    - (void) pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore utteranceID:(NSString *)utteranceID {
        
        NSLog(@"Local callback: The received hypothesis is %@ with a score of %@ and an ID of %@", hypothesis, recognitionScore, utteranceID); // Log it.
        if([hypothesis isEqualToString:@"CHANGE MODEL"]) { // If the user says "CHANGE MODEL", we will switch to the alternate model (which happens to be the dynamically generated model).
            
            // Here is an example of language model switching in OpenEars. Deciding on what logical basis to switch models is your responsibility.
            // For instance, when you call a customer service line and get a response tree that takes you through different options depending on what you say to it,
            // the models are being switched as you progress through it so that only relevant choices can be understood. The construction of that logical branching and 
            // how to react to it is your job, OpenEars just lets you send the signal to switch the language model when you've decided it's the right time to do so.
            
            if(self.usingStartingLanguageModel) { // If we're on the starting model, switch to the dynamically generated one.
                
                // You can only change language models with ARPA grammars in OpenEars (the ones that end in .languagemodel or .DMP). 
                // Trying to switch between JSGF models (the ones that end in .gram) will return no result.
                [[OEPocketsphinxController sharedInstance] changeLanguageModelToFile:self.pathToSecondDynamicallyGeneratedLanguageModel withDictionary:self.pathToSecondDynamicallyGeneratedDictionary]; 
                self.usingStartingLanguageModel = FALSE;
            } else { // If we're on the dynamically generated model, switch to the start model (this is just an example of a trigger and method for switching models).
                [[OEPocketsphinxController sharedInstance] changeLanguageModelToFile:self.pathToFirstDynamicallyGeneratedLanguageModel withDictionary:self.pathToFirstDynamicallyGeneratedDictionary];
                self.usingStartingLanguageModel = TRUE;
            }
        }
        
        self.heardTextView.text = [NSString stringWithFormat:@"Heard: \"%@\"", hypothesis]; // Show it in the status box.
        
        // This is how to use an available instance of OEFliteController. We're going to repeat back the command that we heard with the voice we've chosen.
        [self.fliteController say:[NSString stringWithFormat:@"You said %@",hypothesis] withVoice:self.slt];
    }
    
    #ifdef kGetNbest   
    - (void) pocketsphinxDidReceiveNBestHypothesisArray:(NSArray *)hypothesisArray { // Pocketsphinx has delivered an array of n-best hypotheses.
        NSLog(@"Local callback:  hypothesisArray is %@",hypothesisArray);   
    }
    #endif
    // An optional delegate method of OEEventsObserver which informs that there was an interruption to the audio session (e.g. an incoming phone call).
    - (void) audioSessionInterruptionDidBegin {
        NSLog(@"Local callback:  AudioSession interruption began."); // Log it.
        self.statusTextView.text = @"Status: AudioSession interruption began."; // Show it in the status box.
        NSError *error = nil;
        if([OEPocketsphinxController sharedInstance].isListening) {
            error = [[OEPocketsphinxController sharedInstance] stopListening]; // React to it by telling Pocketsphinx to stop listening (if it is listening) since it will need to restart its loop after an interruption.
            if(error) NSLog(@"Error while stopping listening in audioSessionInterruptionDidBegin: %@", error);
        }
    }
    
    // An optional delegate method of OEEventsObserver which informs that the interruption to the audio session ended.
    - (void) audioSessionInterruptionDidEnd {
        NSLog(@"Local callback:  AudioSession interruption ended."); // Log it.
        self.statusTextView.text = @"Status: AudioSession interruption ended."; // Show it in the status box.
        // We're restarting the previously-stopped listening loop.
        if(![OEPocketsphinxController sharedInstance].isListening){
            [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelSpanish"] languageModelIsJSGF:FALSE]; // Start speech recognition if we aren't currently listening.    
        }
    }
    
    // An optional delegate method of OEEventsObserver which informs that the audio input became unavailable.
    - (void) audioInputDidBecomeUnavailable {
        NSLog(@"Local callback:  The audio input has become unavailable"); // Log it.
        self.statusTextView.text = @"Status: The audio input has become unavailable"; // Show it in the status box.
        NSError *error = nil;
        if([OEPocketsphinxController sharedInstance].isListening){
            error = [[OEPocketsphinxController sharedInstance] stopListening]; // React to it by telling Pocketsphinx to stop listening since there is no available input (but only if we are listening).
            if(error) NSLog(@"Error while stopping listening in audioInputDidBecomeUnavailable: %@", error);
        }
    }
    
    // An optional delegate method of OEEventsObserver which informs that the unavailable audio input became available again.
    - (void) audioInputDidBecomeAvailable {
        NSLog(@"Local callback: The audio input is available"); // Log it.
        self.statusTextView.text = @"Status: The audio input is available"; // Show it in the status box.
        if(![OEPocketsphinxController sharedInstance].isListening) {
            [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelSpanish"] languageModelIsJSGF:FALSE]; // Start speech recognition, but only if we aren't already listening.
        }
    }
    // An optional delegate method of OEEventsObserver which informs that there was a change to the audio route (e.g. headphones were plugged in or unplugged).
    - (void) audioRouteDidChangeToRoute:(NSString *)newRoute {
        NSLog(@"Local callback: Audio route change. The new audio route is %@", newRoute); // Log it.
        self.statusTextView.text = [NSString stringWithFormat:@"Status: Audio route change. The new audio route is %@",newRoute]; // Show it in the status box.
        
        NSError *error = [[OEPocketsphinxController sharedInstance] stopListening]; // React to it by telling the Pocketsphinx loop to shut down and then start listening again on the new route
        
        if(error)NSLog(@"Local callback: error while stopping listening in audioRouteDidChangeToRoute: %@",error);
    
        if(![OEPocketsphinxController sharedInstance].isListening) {
            [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelSpanish"] languageModelIsJSGF:FALSE]; // Start speech recognition if we aren't already listening.
        }
    }
    
    // An optional delegate method of OEEventsObserver which informs that the Pocketsphinx recognition loop has entered its actual loop.
    // This might be useful in debugging a conflict between another sound class and Pocketsphinx.
    - (void) pocketsphinxRecognitionLoopDidStart {
        
        NSLog(@"Local callback: Pocketsphinx started."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx started."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is now listening for speech.
    - (void) pocketsphinxDidStartListening {
        
        NSLog(@"Local callback: Pocketsphinx is now listening."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx is now listening."; // Show it in the status box.
        
        self.startButton.hidden = TRUE; // React to it with some UI changes.
        self.stopButton.hidden = FALSE;
        self.suspendListeningButton.hidden = FALSE;
        self.resumeListeningButton.hidden = TRUE;
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected speech and is starting to process it.
    - (void) pocketsphinxDidDetectSpeech {
        NSLog(@"Local callback: Pocketsphinx has detected speech."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has detected speech."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected a second of silence, indicating the end of an utterance. 
    // This was added because developers requested being able to time the recognition speed without the speech time. The processing time is the time between 
    // this method being called and the hypothesis being returned.
    - (void) pocketsphinxDidDetectFinishedSpeech {
        NSLog(@"Local callback: Pocketsphinx has detected a second of silence, concluding an utterance."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has detected finished speech."; // Show it in the status box.
    }
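    // Hedged sketch (not part of this repro): one way to measure the processing time described above, assuming a
    // hypothetical NSTimeInterval property on this class named utteranceEndTime.
    //
    // - (void) pocketsphinxDidDetectFinishedSpeech {
    //     self.utteranceEndTime = CFAbsoluteTimeGetCurrent(); // Mark the moment the utterance was considered finished.
    // }
    //
    // ...and then inside pocketsphinxDidReceiveHypothesis:recognitionScore:utteranceID: log
    // CFAbsoluteTimeGetCurrent() - self.utteranceEndTime as the recognition processing time.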
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx has exited its recognition loop, most 
    // likely in response to the OEPocketsphinxController being told to stop listening via the stopListening method.
    - (void) pocketsphinxDidStopListening {
        NSLog(@"Local callback: Pocketsphinx has stopped listening."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has stopped listening."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop but it is not
    // going to react to speech until listening is resumed.  This can happen as a result of Flite speech being
    // in progress on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
    // or as a result of the OEPocketsphinxController being told to suspend recognition via the suspendRecognition method.
    - (void) pocketsphinxDidSuspendRecognition {
        NSLog(@"Local callback: Pocketsphinx has suspended recognition."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has suspended recognition."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop and after recognition
    // having been suspended it is now resuming.  This can happen as a result of Flite speech completing
    // on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
    // or as a result of the OEPocketsphinxController being told to resume recognition via the resumeRecognition method.
    - (void) pocketsphinxDidResumeRecognition {
        NSLog(@"Local callback: Pocketsphinx has resumed recognition."); // Log it.
        self.statusTextView.text = @"Status: Pocketsphinx has resumed recognition."; // Show it in the status box.
    }
    
    // An optional delegate method which informs that Pocketsphinx switched over to a new language model at the given URL in the course of
    // recognition. This does not imply that it is a valid file or that recognition will be successful using the file.
    - (void) pocketsphinxDidChangeLanguageModelToFile:(NSString *)newLanguageModelPathAsString andDictionary:(NSString *)newDictionaryPathAsString {
        NSLog(@"Local callback: Pocketsphinx is now using the following language model: \n%@ and the following dictionary: %@",newLanguageModelPathAsString,newDictionaryPathAsString);
    }
    
    // An optional delegate method of OEEventsObserver which informs that Flite is speaking, most likely to be useful if debugging a
    // complex interaction between sound classes. You don't have to do anything yourself in order to prevent Pocketsphinx from listening to Flite talk and trying to recognize the speech.
    - (void) fliteDidStartSpeaking {
        NSLog(@"Local callback: Flite has started speaking"); // Log it.
        self.statusTextView.text = @"Status: Flite has started speaking."; // Show it in the status box.
    }
    
    // An optional delegate method of OEEventsObserver which informs that Flite is finished speaking, most likely to be useful if debugging a
    // complex interaction between sound classes.
    - (void) fliteDidFinishSpeaking {
        NSLog(@"Local callback: Flite has finished speaking"); // Log it.
        self.statusTextView.text = @"Status: Flite has finished speaking."; // Show it in the status box.
    }
    
    - (void) pocketSphinxContinuousSetupDidFailWithReason:(NSString *)reasonForFailure { // This can let you know that something went wrong with the recognition loop startup. Turn on [OELogging startOpenEarsLogging] to learn why.
        NSLog(@"Local callback: Setting up the continuous recognition loop has failed for the reason %@, please turn on [OELogging startOpenEarsLogging] to learn more.", reasonForFailure); // Log it.
        self.statusTextView.text = @"Status: Not possible to start recognition loop."; // Show it in the status box.	
    }
    
    - (void) pocketSphinxContinuousTeardownDidFailWithReason:(NSString *)reasonForFailure { // This can let you know that something went wrong with the recognition loop teardown. Turn on [OELogging startOpenEarsLogging] to learn why.
        NSLog(@"Local callback: Tearing down the continuous recognition loop has failed for the reason %@, please turn on [OELogging startOpenEarsLogging] to learn more.", reasonForFailure); // Log it.
        self.statusTextView.text = @"Status: Not possible to cleanly end recognition loop."; // Show it in the status box.	
    }
    
    - (void) testRecognitionCompleted { // A test file which was submitted for direct recognition via the audio driver is done.
        NSLog(@"Local callback: A test file which was submitted for direct recognition via the audio driver is done."); // Log it.
        NSError *error = nil;
        if([OEPocketsphinxController sharedInstance].isListening) { // If we're listening, stop listening.
            error = [[OEPocketsphinxController sharedInstance] stopListening];
            if(error) NSLog(@"Error while stopping listening in testRecognitionCompleted: %@", error);
        }
        
    }
    /** Pocketsphinx couldn't start because it has no mic permissions (will only be returned on iOS7 or later).*/
    - (void) pocketsphinxFailedNoMicPermissions {
        NSLog(@"Local callback: The user has never set mic permissions or denied permission to this app's mic, so listening will not start.");
        self.startupFailedDueToLackOfPermissions = TRUE;
    }
    
    /** The user prompt to get mic permissions, or a check of the mic permissions, has completed with a TRUE or a FALSE result  (will only be returned on iOS7 or later).*/
    - (void) micPermissionCheckCompleted:(BOOL)result {
        if(result) {
            self.restartAttemptsDueToPermissionRequests++;
            if(self.restartAttemptsDueToPermissionRequests == 1 && self.startupFailedDueToLackOfPermissions) { // If we get here because there was an attempt to start which failed due to lack of permissions, and now permissions have been requested and they returned true, we restart exactly once with the new permissions.
                NSError *error = nil;
                if([OEPocketsphinxController sharedInstance].isListening){
                    error = [[OEPocketsphinxController sharedInstance] stopListening]; // Stop listening if we are listening.
                    if(error) NSLog(@"Error while stopping listening in micPermissionCheckCompleted: %@", error);
                }
                if(!error && ![OEPocketsphinxController sharedInstance].isListening) { // If there was no error and we aren't listening, start listening.
                    [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelSpanish"] languageModelIsJSGF:FALSE]; // Start speech recognition.
                        self.startupFailedDueToLackOfPermissions = FALSE;
                }
            }
        }
    }
    
    #pragma mark -
    #pragma mark UI
    
    // This is not OpenEars-specific stuff, just some UI behavior
    
    - (IBAction) suspendListeningButtonAction { // This is the action for the button which suspends listening without ending the recognition loop
        [[OEPocketsphinxController sharedInstance] suspendRecognition];	
        
        self.startButton.hidden = TRUE;
        self.stopButton.hidden = FALSE;
        self.suspendListeningButton.hidden = TRUE;
        self.resumeListeningButton.hidden = FALSE;
    }
    
    - (IBAction) resumeListeningButtonAction { // This is the action for the button which resumes listening if it has been suspended
        [[OEPocketsphinxController sharedInstance] resumeRecognition];
        
        self.startButton.hidden = TRUE;
        self.stopButton.hidden = FALSE;
        self.suspendListeningButton.hidden = FALSE;
        self.resumeListeningButton.hidden = TRUE;	
    }
    
    - (IBAction) stopButtonAction { // This is the action for the button which shuts down the recognition loop.
        NSError *error = nil;
        if([OEPocketsphinxController sharedInstance].isListening) { // Stop if we are currently listening.
            error = [[OEPocketsphinxController sharedInstance] stopListening];
            if(error)NSLog(@"Error stopping listening in stopButtonAction: %@", error);
        }
        self.startButton.hidden = FALSE;
        self.stopButton.hidden = TRUE;
        self.suspendListeningButton.hidden = TRUE;
        self.resumeListeningButton.hidden = TRUE;
    }
    
    - (IBAction) startButtonAction { // This is the action for the button which starts up the recognition loop again if it has been shut down.
        if(![OEPocketsphinxController sharedInstance].isListening) {
            [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:self.pathToFirstDynamicallyGeneratedLanguageModel dictionaryAtPath:self.pathToFirstDynamicallyGeneratedDictionary acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelSpanish"] languageModelIsJSGF:FALSE]; // Start speech recognition if we aren't already listening.
        }
        self.startButton.hidden = TRUE;
        self.stopButton.hidden = FALSE;
        self.suspendListeningButton.hidden = FALSE;
        self.resumeListeningButton.hidden = TRUE;
    }
    
    #pragma mark -
    #pragma mark Example for reading out Pocketsphinx and Flite audio levels without locking the UI by using an NSTimer
    
    // What follows are not OpenEars methods, just an approach for level reading
    // that I've included with this sample app. My example implementation does make use of two OpenEars
    // methods:	the pocketsphinxInputLevel method of OEPocketsphinxController and the fliteOutputLevel
    // method of OEFliteController. 
    //
    // The example is meant to show one way that you can read those levels continuously without locking the UI, 
    // by using an NSTimer, but the OpenEars level-reading methods 
    // themselves do not include multithreading code since I believe that you will want to design your own 
    // code approaches for level display that are tightly-integrated with your interaction design and the  
    // graphics API you choose. 
    // 
    // Please note that if you use my sample approach, you should pay attention to the way that the timer is always stopped in
    // dealloc. This should prevent you from having any difficulties with deallocating a class due to a running NSTimer process.
    
    - (void) startDisplayingLevels { // Start displaying the levels using a timer
        [self stopDisplayingLevels]; // We never want more than one timer valid so we'll stop any running timers first.
        self.uiUpdateTimer = [NSTimer scheduledTimerWithTimeInterval:1.0/kLevelUpdatesPerSecond target:self selector:@selector(updateLevelsUI) userInfo:nil repeats:YES];
    }
    
    - (void) stopDisplayingLevels { // Stop displaying the levels by stopping the timer if it's running.
        if(self.uiUpdateTimer && [self.uiUpdateTimer isValid]) { // If there is a running timer, we'll stop it here.
            [self.uiUpdateTimer invalidate];
            self.uiUpdateTimer = nil;
        }
    }
    
    - (void) updateLevelsUI { // And here is how we obtain the levels.  This method includes the actual OpenEars methods and uses their results to update the UI of this view controller.
        
        self.pocketsphinxDbLabel.text = [NSString stringWithFormat:@"Pocketsphinx Input level:%f",[[OEPocketsphinxController sharedInstance] pocketsphinxInputLevel]];  //pocketsphinxInputLevel is an OpenEars method of the class OEPocketsphinxController.
        
        if(self.fliteController.speechInProgress) {
            self.fliteDbLabel.text = [NSString stringWithFormat:@"Flite Output level: %f",[self.fliteController fliteOutputLevel]]; // fliteOutputLevel is an OpenEars method of the class OEFliteController.
        }
    }
    
    @end
    

    This was the result in the console:

    2015-01-02 20:48:16.057 OpenEarsSampleApp[1154:60b] Local callback: Pocketsphinx is now listening.
    2015-01-02 20:48:16.062 OpenEarsSampleApp[1154:60b] Local callback: Pocketsphinx started.
    2015-01-02 20:48:16.115 OpenEarsSampleApp[1154:60b] Local callback: Pocketsphinx has detected speech.
    2015-01-02 20:48:32.753 OpenEarsSampleApp[1154:60b] Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    2015-01-02 20:48:33.136 OpenEarsSampleApp[1154:60b] Local callback: The received hypothesis is ROMA with a score of 0 and an ID of 0
    2015-01-02 20:48:33.364 OpenEarsSampleApp[1154:60b] Local callback: Flite has started speaking
    2015-01-02 20:48:33.372 OpenEarsSampleApp[1154:60b] Local callback: Pocketsphinx has suspended recognition.
    2015-01-02 20:48:35.130 OpenEarsSampleApp[1154:60b] Local callback: Flite has finished speaking
    2015-01-02 20:48:35.137 OpenEarsSampleApp[1154:60b] Local callback: Pocketsphinx has resumed recognition.

    Here is the link to download the voice recording. Note that I made it with the internal microphone of an iPhone 5, which worked perfectly with version 1.66 of OpenEars. I hope this helps us find out what’s going on.

    maxgarmar
    Participant

    Hi,

    regarding what you told me about where to set vadThreshold and secondsOfSilenceToDetect: I was setting those values before the setActive line, so I expected to see a real difference once I moved them after it. But I’m sorry to say that nothing changed.
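
    For reference, this is the order I am using now after moving them, the same as in the sample code above (the exact numbers are just my current test values):

    [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil]; // Activate first
    [[OEPocketsphinxController sharedInstance] setSecondsOfSilenceToDetect:0.3];
    [[OEPocketsphinxController sharedInstance] setVadThreshold:4.0];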

    Tests:

    vadThreshold = 4.0 (maximum)
    secondsOfSilenceToDetect = 0.3

    With the TV on in the background at a normal volume, recognition does not stop, and it recognizes 6 or 7 words when I only said one.

    After this test I changed secondsOfSilenceToDetect to 0.4 and 0.5, but with no better results… I really don’t get what’s going on. What am I doing wrong, Halle?

    Could I somehow get version 1.7.1? I would like to run tests with it to see the results.

    Thanks, I hope we can get this working

    maxgarmar
    Participant

    Thanks Halle for that fast answer.

    OK, I will try the couple of things you suggested, but first I would like to know a couple of things to fully understand how this works.

    First, what are realistic values for secondsOfSilenceToDetect, given that the minimum is 0.3 as you told me? What value would you recommend if I tell you that I’m recognizing at most 3 words in a phrase, but normally it will be just one word?

    Another thing: I would like to know the maximum value for vadThreshold so I can experiment with it.

    Thanks a lot

Viewing 27 posts - 1 through 27 (of 27 total)