You can also import a vocabulary list from a text file using this method:
- (NSError *) generateLanguageModelFromTextFile:(NSString *)pathToTextFile withFilesNamed:(NSString *)fileName forAcousticModelAtPath:(NSString *)acousticModelPath;
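As a minimal sketch of calling this method — assuming a file named Vocabulary.txt in your app bundle (one word or phrase per line) and the English acoustic model that ships with OpenEars; the generator name "MyVocabulary" is just an example:

```objc
#import <OpenEars/LanguageModelGenerator.h>
#import <OpenEars/AcousticModel.h>

LanguageModelGenerator *generator = [[LanguageModelGenerator alloc] init];

// Path to a plain-text vocabulary file bundled with the app (assumed name).
NSString *textPath = [[NSBundle mainBundle] pathForResource:@"Vocabulary"
                                                     ofType:@"txt"];

NSError *error = [generator generateLanguageModelFromTextFile:textPath
                                               withFilesNamed:@"MyVocabulary"
                                       forAcousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]];
if (error) {
    NSLog(@"Language model generation failed: %@", error);
}
```

On success, the generated language model and phonetic dictionary files are written out under the name you passed in withFilesNamed:, ready to hand to the recognizer.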
When you have a very large vocabulary that you want to recognize accurately offline, think about how to split it into multiple smaller language models. Consider the analogous UI problem: if a website listed 14,000 items, you wouldn't show users all 14,000 on a single page and ask them to click one — you'd have them navigate a short product hierarchy, e.g. "Technology", then "Smartphones", then "iPhone 5S". If there are 14 departments, the first step already excludes 13,000 items on average. If each department has 10 subsections, the second step excludes another 900 on average, leaving about 100 smartphones the user can view and click on, from which they choose the iPhone 5S.

For accuracy, you apply the same trick to reduce the search space in speech. Have the user first state a major category, then use language model switching to substitute a language model that contains only the items from that category. If needed, you could drill down once more, but depending on the items, a vocabulary of 1,000 might work well enough, so do a bit of experimentation and see.
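The switching step above can be sketched roughly as follows — assuming per-category models were generated earlier (the property names smartphonesLMPath and smartphonesDicPath are hypothetical placeholders for the paths returned by the generator), and using the OpenEarsEventsObserver hypothesis callback plus PocketsphinxController's changeLanguageModelToFile:withDictionary::

```objc
// Hedged sketch: when the recognizer hears a top-level category name,
// swap in the smaller language model for just that category.
- (void) pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis
                         recognitionScore:(NSString *)recognitionScore
                              utteranceID:(NSString *)utteranceID {

    if ([hypothesis isEqualToString:@"SMARTPHONES"]) {
        // Replace the active 14,000-item model with the ~100-item
        // smartphones model; paths assumed to have been produced by
        // LanguageModelGenerator at startup.
        [self.pocketsphinxController changeLanguageModelToFile:self.smartphonesLMPath
                                                withDictionary:self.smartphonesDicPath];
    }
}
```

Because each swapped-in model is small, the recognizer only has to discriminate among the items in the chosen category, which is what makes the accuracy gain possible.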