Thanks for the response, Halle. I'm going to finish reading the resources you provided, and also spend some time playing with sample JSGF grammars when I get on my development machine. I had a hunch that JSGF would be the way to go, but the added delay involved in stopping and starting the listener may rule it out for me, since my current implementation uses 2 distinct ARPA models (about 1200 phrases each) that I frequently switch between. I think I could only make it work with my project if I combined both models into a single JSGF grammar, eliminating the need to switch language models. My dilemma then is deciding whether I would get better results from one large JSGF grammar or from 2 medium-sized ARPA models plus some post-processing of the hypothesis. I'm currently working on the latter option, and I think I might be able to process the hypothesis well enough to produce a valid NSString for my purposes.

One issue that comes up constantly is that the library gets tripped up by similar-sounding words; for example, it might interpret EIGHTY as EIGHT or vice versa. Would changing the grammar to JSGF help avoid this at all?
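To make the question concrete, here's the kind of combined grammar I have in mind, based on my reading of the JSGF spec (the rule names and phrases below are just placeholders, not my real vocabulary):

```
#JSGF V1.0;

grammar combined;

// Confusable words listed as explicit alternatives in one rule
<number> = EIGHT | EIGHTY | NINE | NINETY;

// A single public rule covering phrases that currently live
// in two separate ARPA models
public <command> = (GO TO | SHOW) <number>;
```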
Also, it is my understanding that a JSGF grammar would allow multiple language model phrases in a single hypothesis, which would be necessary for my project. Is that the case, or would every hypothesis have to exactly match a single phrase specified in the language model?
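In other words, I'm picturing something like JSGF's repetition operator to allow several phrases in one utterance. Again, this is just a sketch from the spec, and I don't know yet whether the listener actually handles it this way:

```
#JSGF V1.0;

grammar multi;

<phrase> = TURN LEFT | TURN RIGHT | STOP;

// The + is meant to allow one or more phrases in a single
// hypothesis, e.g. "TURN LEFT STOP TURN RIGHT"
public <utterance> = <phrase>+;
```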