JeroenNX

Forum Replies Created

Viewing 3 posts - 1 through 3 (of 3 total)

  • in reply to: Mimic Pocketsphinx's handling of background noise #1031844
    JeroenNX
    Participant

    Hi Halle,

Yes, I temporarily disabled Rejecto as you asked, by replacing this:

    NSError *err = [lmGenerator generateRejectingLanguageModelFromArray:words withFilesNamed:name
                                                     withOptionalExclusions:nil
                                                            usingVowelsOnly:FALSE
                                                                 withWeight:nil
                                                     forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];

With this:

NSError *err = [lmGenerator generateLanguageModelFromArray:words
                                            withFilesNamed:name
                                    forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];

The difference is noticeable instantly: if the trigger word is, for example, ‘Caroline’, then in silence wíth Rejecto the app (mostly) only responds to ‘Caroli’ and ‘Caroline’, whereas without Rejecto it also responds to ‘Care’, ‘Caro’, ‘Carol’, etc. However, in both cases, with and without Rejecto, it stops triggering completely as soon as a bit of background noise (for example music) is introduced.

For both pocketsphinx (Pi) and OpenEars (iOS) I am using the default acoustic model that comes with the package.

I’ll try to reproduce the issue with the OpenEarsSampleApp.

    in reply to: Mimic Pocketsphinx's handling of background noise #1031840
    JeroenNX
    Participant

I have tried what you suggested, but unfortunately it does not solve the problem.
I have tried every value between 1.5 and 4.5 for vadThreshold, but no value helps: for values above 4.1 it never perceives my word, and for values below 4.2 it perceives my word properly, but only in very quiet surroundings.
Maybe my original post was unclear: the problem is not that it reacts to the music; the problem is that it no longer perceives/reacts to my word at all once I introduce very soft background noise. Music is just an easy-to-reproduce example; wind or anything else has the same effect. So it does not react to the music, and it also does not react to my voice/the trigger word.

In summary: without any background noise, my trigger word is detected (almost) every time; however, as soon as I introduce a very soft background noise (for example music playing at very low volume from an iPhone 3 feet away), the iPad/OpenEars becomes completely deaf and never detects my trigger word again, regardless of how loud I speak or how close I get.
I do not have this issue when I try the exact same thing on a Raspberry Pi with Pocketsphinx (see opening post); even when I move the iPhone playing music much closer to the mic and turn up the volume, my trigger word is still detected when I say it.
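For completeness, this is roughly how I am setting the threshold during these tests — just a sketch, assuming OpenEars 2.x, where vadThreshold is a float property on OEPocketsphinxController that needs to be set before listening starts:

    // Higher vadThreshold = more incoming audio is treated as non-speech.
    [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil];
    [OEPocketsphinxController sharedInstance].vadThreshold = 3.5; // swept from 1.5 to 4.5 in my tests
    [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:lmPath
                                                                    dictionaryAtPath:dicPath
                                                                 acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]
                                                                 languageModelIsJSGF:NO];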

    in reply to: Mimic Pocketsphinx's handling of background noise #1031838
    JeroenNX
    Participant

    Thanks for your reply.

    Language: English.

    Rejecto settings/relevant code snippets:

    OELanguageModelGenerator *lmGenerator = [[OELanguageModelGenerator alloc] init];
    NSString *name = @"NameIWantForMyLanguageModelFiles";
    NSArray *words = [NSArray arrayWithObjects:@"CAROLINE", nil];
NSError *err = [lmGenerator generateRejectingLanguageModelFromArray:words
                                                     withFilesNamed:name
                                             withOptionalExclusions:nil
                                                    usingVowelsOnly:FALSE
                                                         withWeight:nil
                                             forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
if (err == nil) {
    lmPath = [lmGenerator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:name];
    dicPath = [lmGenerator pathToSuccessfullyGeneratedDictionaryWithRequestedName:name];
} else {
    NSLog(@"Error: %@", [err localizedDescription]);
}

self.openEarsEventsObserver = [[OEEventsObserver alloc] init];
[self.openEarsEventsObserver setDelegate:self];
[[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil];
[[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:lmPath
                                                                dictionaryAtPath:dicPath
                                                             acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]
                                                             languageModelIsJSGF:NO];
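And this is the delegate callback where I check for the trigger word — a minimal sketch; the method signature is the OEEventsObserverDelegate hypothesis callback from OpenEars 2.x, and the containsString: check is just my illustration:

    // OEEventsObserverDelegate callback: fires whenever pocketsphinx produces a hypothesis.
    - (void) pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis
                             recognitionScore:(NSString *)recognitionScore
                                  utteranceID:(NSString *)utteranceID {
        NSLog(@"Heard: %@ (score %@)", hypothesis, recognitionScore);
        if ([hypothesis containsString:@"CAROLINE"]) {
            // Trigger word detected; react here.
        }
    }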