This is unfortunately something that speech recognition doesn't handle all that well; accuracy for this kind of task is never great. But there shouldn't be any problem using dynamic generation to create a language model that recognizes letters and numbers. In what sense is it not working: are you seeing low accuracy (which is unfortunately to be expected for this requirement), or do the language models lack entries for your letters and numbers?
One thing you can try, if you are dealing with whole acronyms that are known at the time the language model is created (as opposed to arbitrary combinations of letters), is to use the entire acronym as the corpus or array entry and then edit the phonetic dictionary.
So, instead of having these in the array: @"A", @"B", @"C"
You would have this in the array: @"ABC"
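As a rough sketch, generating the model with the acronym as a single entry might look like this. This assumes the OpenEars 2.x `OELanguageModelGenerator` API and an English acoustic model; the class and method names have changed between OpenEars versions, so check the headers for the version you are using:

```objc
#import <OpenEars/OELanguageModelGenerator.h>
#import <OpenEars/OEAcousticModel.h>

OELanguageModelGenerator *generator = [[OELanguageModelGenerator alloc] init];

// The whole acronym as one array entry, rather than @"A", @"B", @"C" separately.
NSArray *words = @[@"ABC"];

// "AcronymModel" is a hypothetical name for the generated files.
NSError *error = [generator generateLanguageModelFromArray:words
                                            withFilesNamed:@"AcronymModel"
                                    forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
if (error) {
    NSLog(@"Language model generation error: %@", error);
}
```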
And then you would edit the phonetic dictionary that is generated so that the pronunciation associated with the word "ABC" is the phoneme sequence for "A", "B" and "C" spoken in order.
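For example, the edited entry in the generated .dic file might look something like the line below, using ARPAbet-style phonemes as found in the CMU pronouncing dictionary ("A" = EY, "B" = B IY, "C" = S IY). The exact phoneme symbols depend on the acoustic model you are using, so treat this as an illustration rather than the definitive entry:

```
ABC	EY B IY S IY
```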
To work with already-created language model and dictionary files instead of generating new ones at runtime, you can follow these instructions from the docs:
If you need to create a fixed language model ahead of time instead of creating it dynamically in your app, just use this method (or generateLanguageModelFromTextFile:withFilesNamed:) to submit your full language model using the Simulator and then use the Simulator documents folder script to get the language model and dictionary file out of the documents folder and add it to your app bundle, referencing it from there.
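Once the files are in your app bundle, referencing them might look like the following sketch. It assumes the hypothetical file name "AcronymModel" from above and the OpenEars 2.x `OEPocketsphinxController` API; again, method names may differ in other versions:

```objc
#import <OpenEars/OEPocketsphinxController.h>
#import <OpenEars/OEAcousticModel.h>

// Load the pre-created files from the app bundle rather than the documents folder.
NSString *lmPath  = [[NSBundle mainBundle] pathForResource:@"AcronymModel" ofType:@"languagemodel"];
NSString *dicPath = [[NSBundle mainBundle] pathForResource:@"AcronymModel" ofType:@"dic"];

[[OEPocketsphinxController sharedInstance] setActive:YES error:nil];
[[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:lmPath
                                                                dictionaryAtPath:dicPath
                                                             acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]
                                                             languageModelIsJSGF:NO];
```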