Yes, any non-English accent will unfortunately have a distinct effect on recognition. I wish that weren't the case, but it's unavoidable: the actual phoneme sounds (the building blocks of recognition) differ between native speakers of different languages, even when the speaker is highly fluent in the second (or third, etc.) language being spoken. I have the same problem when I use German speech recognition, since German is my second language, even though many native German speakers say my pronunciation sounds only mildly accented to their ear. Since the result is uncertainty for the engine, it isn't surprising that the wrong results can be differently wrong.
However, I think the bigger issue is that what you've shown above appears to be an NSDictionary containing an NSArray of further NSDictionaries – is that correct? That doesn't match the format of any input OpenEars accepts for creating language models or grammars, so it can't be what you are successfully passing to LanguageModelGenerator. Perhaps no model is being created at all and a different model, with different words, is being used instead; or a very buggy model is being created from that input. Take a look at the standard input for LanguageModelGenerator in the tutorial to get a sense of what a successful model generation looks like. It's also a very good idea to turn on verboseLanguageModelGenerator, verbosePocketsphinxController and OpenEarsLogging while troubleshooting these issues, so that you can see any error or warning messages encountered.
BTW, OpenEars vocabulary has to be submitted in uppercase – submitting it in lowercase can have a big effect on accuracy.
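To illustrate, here's a minimal sketch of the kind of input LanguageModelGenerator expects – a flat NSArray of uppercase NSStrings, not nested dictionaries. The word list and file name are just examples, and the exact selector can vary between OpenEars versions (newer versions also take an acoustic model path argument), so please check the tutorial for your version rather than copying this verbatim:

```objc
// Sketch of standard LanguageModelGenerator input, per the tutorial.
// Note: a flat array of uppercase NSStrings -- no nested dictionaries.
LanguageModelGenerator *generator = [[LanguageModelGenerator alloc] init];

// Example vocabulary (illustrative words only), submitted in uppercase:
NSArray *words = @[@"HELLO", @"GOODBYE", @"OPEN", @"CLOSE"];

NSError *error = [generator generateLanguageModelFromArray:words
                                            withFilesNamed:@"MyLanguageModel"];
if (error) {
    // With verboseLanguageModelGenerator on, more detail appears in the log.
    NSLog(@"Language model generation error: %@", [error description]);
}
```

If your data naturally lives in dictionaries, flatten it down to the array of word/phrase strings first, then hand that array to LanguageModelGenerator.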