Yes, I did temporarily disable Rejecto as you asked, by replacing this:
NSError *err = [lmGenerator generateRejectingLanguageModelFromArray:words withFilesNamed:name withOptionalExclusions:nil usingVowelsOnly:FALSE withWeight:nil forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];

with this:

NSError *err = [lmGenerator generateLanguageModelFromArray:words withFilesNamed:name forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
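(Side note: rather than removing Rejecto entirely, the same call can take an explicit weight instead of nil, which should make the comparison less all-or-nothing. This is only a sketch based on the signature above; the @1.5 value is a placeholder and the supported weight range should be checked against the Rejecto documentation.)

// Sketch: the same Rejecto call as above, but with an explicit
// weight instead of nil. A higher weight is supposed to reject
// out-of-vocabulary speech more aggressively (placeholder value;
// consult the Rejecto docs for the valid range).
NSError *err = [lmGenerator generateRejectingLanguageModelFromArray:words
                                                     withFilesNamed:name
                                             withOptionalExclusions:nil
                                                    usingVowelsOnly:FALSE
                                                         withWeight:@1.5
                                             forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];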
The difference is instantly noticeable: if the trigger word is, for example, 'Caroline', then (in silence) with Rejecto the app (mostly) only responds to 'Caroli' and 'Caroline', whereas without Rejecto it also responds to 'Care', 'Caro', 'Carol', etc. However, in both cases, with and without Rejecto, as soon as a bit of background noise (for example music) is introduced, it stops triggering completely.
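Since the failure under background noise happens both with and without Rejecto, it may be a voice-activity-detection issue rather than a language-model one. If I understand the OpenEars docs correctly, OEPocketsphinxController exposes a vadThreshold property for exactly this; the value below is a guess, so the property name and useful range should be verified against the current OpenEars documentation:

// Sketch, assuming OpenEars 2.x: raise the voice-activity-detection
// threshold so speech is better separated from background noise such
// as music. The value 3.2 is an assumption; the docs suggest
// experimenting within roughly 2.0-4.0 for English models.
#import <OpenEars/OEPocketsphinxController.h>

[OEPocketsphinxController sharedInstance].vadThreshold = 3.2;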
For both pocketsphinx (Pi) and OpenEars (iOS) I am using the default acoustic model that ships with the package.
I'll see if I can reproduce the issue with the OpenEarsSampleApp.