I actually originally developed OpenEars as part of an app that recognized spoken names from the contact list, and my experience was pretty good. However, the names on my list were of either English or German origin, so their phonemes could be estimated fairly well. Basically, the farther the names in the contact list are from English words/phonology, the worse OEPocketsphinxController will do with them. However, you can also add names with explicit phonetic transcriptions to the lookup list if you want to.
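For context, the lookup dictionary that Pocketsphinx (and therefore OpenEars) reads is plain text in the CMU Pronouncing Dictionary style: one word per line followed by its ARPAbet phones. A sketch of what hand-added name entries might look like (the first two transcriptions match cmudict; the German one is my rough approximation, so treat it as illustrative only):

```
SMITH    S M IH TH
SCHMIDT  SH M IH T
JUERGEN  Y UH R G AH N
```

Appending entries like these for names the automatic grapheme-to-phoneme guesser handles poorly is usually much cheaper than trying to fix recognition after the fact.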
Can you elaborate on the multiple-dictionaries question? I'm not sure yet whether you are asking about having more than one language model and switching between them over time (very easy with OpenEars) or about doing two separate recognition passes on the same speech input using two different language models (not so easy).