OpenEars can’t be used with vocabularies of that size. A good vocabulary size for offline recognition on a handheld device is probably somewhere under 200-500 words, depending on content.
If you have a language model and dictionary like the one you described (oversized for offline recognition, and containing words that are unlikely to be spoken by an iPhone user, such as the CMU general language model I think you are referring to), you can still use it with OpenEars very easily via its method:
- (void) startListeningWithLanguageModelAtPath:(NSString *)languageModelPath dictionaryAtPath:(NSString *)dictionaryPath acousticModelAtPath:(NSString *)acousticModelPath languageModelIsJSGF:(BOOL)languageModelIsJSGF;
This method is documented in the following places:
In the sample app with examples
In the OpenEars documentation, including the PDF and epub versions
In the OpenEars tutorial with examples
In the PocketsphinxController.h header
The path to the DMP is passed to the languageModelPath argument and the path to the .dic is passed to the dictionaryPath argument.
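Putting that together, a call would look something like the sketch below. The file names and the way the acoustic model path is obtained are illustrative assumptions, not part of the original answer; substitute the actual locations of your .DMP, .dic, and acoustic model. The last argument is FALSE because a DMP language model is not JSGF.

```
// Hypothetical resource names -- replace with your own files.
NSString *lmPath = [[NSBundle mainBundle] pathForResource:@"cmu_general" ofType:@"DMP"];
NSString *dicPath = [[NSBundle mainBundle] pathForResource:@"cmu_general" ofType:@"dic"];
// Assumed path to a bundled acoustic model; consult the docs for how
// your OpenEars version locates its acoustic model.
NSString *acousticModelPath = [[NSBundle mainBundle] pathForResource:@"AcousticModelEnglish" ofType:@"bundle"];

PocketsphinxController *pocketsphinxController = [[PocketsphinxController alloc] init];
[pocketsphinxController startListeningWithLanguageModelAtPath:lmPath
                                             dictionaryAtPath:dicPath
                                          acousticModelAtPath:acousticModelPath
                                          languageModelIsJSGF:FALSE];
```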
But doing so will result in inaccurate speech recognition, because the model is too large and its contents will not correspond to the language your app’s users will actually speak.