1) Do you think it could be possible to convert Pocketsphinx’s current decoder output to something that could be sent to my server/database? My problem is that I don’t fully understand the data that Pocketsphinx uses to find a hypothesis.
You can use SaveThatWave to save utterances as WAV files and send them to a cloud decoder if you like. It should also be possible to turn off local recognition by setting self.pocketsphinxController.processSpeechLocally = FALSE (I’m slightly hesitant about that one because I just noticed that I left it out of my testbed, so I’m not 100% sure it’s currently working, but I would expect it to be).
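A minimal sketch of that setup, assuming a `pocketsphinxController` property as above; the server URL is a placeholder, and `wavPath` stands in for whatever path SaveThatWave reports for the saved file:

```objc
// Bypass local vocabulary matching (untested flag, as noted above).
self.pocketsphinxController.processSpeechLocally = FALSE;

// Upload a saved WAV to your own cloud decoder with plain NSURLSession.
// "https://example.com/decode" is a placeholder endpoint.
NSURL *serverURL = [NSURL URLWithString:@"https://example.com/decode"];
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:serverURL];
request.HTTPMethod = @"POST";
[request setValue:@"audio/wav" forHTTPHeaderField:@"Content-Type"];

NSURL *wavURL = [NSURL fileURLWithPath:wavPath]; // path reported by SaveThatWave
NSURLSessionUploadTask *task =
    [[NSURLSession sharedSession] uploadTaskWithRequest:request
                                               fromFile:wavURL
                                      completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        // Parse your server's hypothesis here.
    }];
[task resume];
```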
2) I do know that phoneme recognition is inaccurate, but I plan on making up for it on the database query side of things. Each record in the DB has multiple phoneme-based pronunciations and MySQL has very fast searching capabilities. Would it be possible for you to show me or explain to me how to do this with Rejecto?
I don’t think the underlying issue with phoneme-based recognition is that there isn’t enough data about the phonemes in words, but that there isn’t any context anymore. Still, it’s reasonable to experiment and find out how it works in your own implementation. You can receive the actual phonemes Rejecto heard if you set Rejecto’s setter method – (void) deliverRejectedSpeechInHypotheses:(BOOL)trueorfalse; to TRUE when initially setting up the LanguageModelGenerator.
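For instance, a sketch of that setup step (only the setter shown above is confirmed here; check the Rejecto documentation for the model-generation call itself):

```objc
// Enable phoneme delivery before generating the Rejecto model, so
// hypotheses will include the phonemes Rejecto actually heard.
LanguageModelGenerator *generator = [[LanguageModelGenerator alloc] init];
[generator deliverRejectedSpeechInHypotheses:TRUE];
// ...then generate your rejecting language model with this generator as usual.
```

The phoneme strings delivered in the hypotheses are what you would then match against the phoneme-based pronunciations stored in your MySQL records.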