How to integrate a dynamic language model



    #1025254
    viveknagpal16
    Participant

    Hi,

    I am currently working on a transcription app and using OpenEars to transcribe. However, there are some issues I am facing for which I need your help.

    I am using your SDK in my iOS app to convert voice to text. Currently this functionality works fine; however, it only recognizes a specific set of words, which I have entered manually. I now want to integrate a universal vocabulary and thereby switch to a dynamic model, so that the app can identify ALL known words in the English language and convert recorded voice to a text file.
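    For reference, my current fixed-vocabulary setup looks roughly like this. It is a sketch written from memory, assuming the OpenEars 2.x class names OELanguageModelGenerator, OEAcousticModel and OEPocketsphinxController; the word list and model name are just placeholders:

    #import <OpenEars/OELanguageModelGenerator.h>
    #import <OpenEars/OEAcousticModel.h>
    #import <OpenEars/OEPocketsphinxController.h>

    - (void)startListeningWithFixedVocabulary {
        // Generate a small language model and phonetic dictionary from a hand-picked word list.
        OELanguageModelGenerator *generator = [[OELanguageModelGenerator alloc] init];
        NSArray *words = @[@"HELLO", @"START", @"STOP", @"TRANSCRIBE"]; // placeholder vocabulary
        NSString *modelName = @"MyFixedVocabulary";                     // placeholder file name
        NSError *error = [generator generateLanguageModelFromArray:words
                                                     withFilesNamed:modelName
                                             forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
        if (error != nil) return;

        NSString *lmPath = [generator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:modelName];
        NSString *dicPath = [generator pathToSuccessfullyGeneratedDictionaryWithRequestedName:modelName];

        // Start listening; only the words above are recognized.
        [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil];
        [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:lmPath
                                                                         dictionaryAtPath:dicPath
                                                                      acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]
                                                                      languageModelIsJSGF:NO];
    }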

    I am not sure about the procedure or method needed to a) get a universal vocabulary and b) add it to my app. If you could list the steps, or send a link where my question is addressed, it would really help me.

    I look forward to an early reply from you.

    Thanks
    Vivek

    #1025255
    Halle Winkler
    Politepix

    Hello,

    Transcription of any possible word spoken in a language isn’t a feature of offline recognition performed on a handheld device, sorry. With offline recognition on a handheld device you can work with vocabularies of perhaps up to 2000 words of your choosing, and you can also switch between them at runtime. What you can’t do is use vocabularies with the hundreds of thousands of words that would be needed to understand any possible utterance a user might say, since, among other things, that would require more memory than a single app has available on the device. You will want to investigate online services that provide transcription, such as the Nuance NDK.
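    As a rough illustration of what is practical offline, here is how a second small vocabulary could be generated at runtime and swapped in while listening. This is only a sketch from memory, assuming OpenEars 2.x method names, with a placeholder word list and file name, so check the current documentation for the exact signatures:

    // Assumes listening was already started with an initial language model.
    - (void)switchToSecondVocabulary {
        OELanguageModelGenerator *generator = [[OELanguageModelGenerator alloc] init];
        NSArray *secondWordList = @[@"MONDAY", @"TUESDAY", @"WEDNESDAY"]; // placeholder vocabulary
        NSError *error = [generator generateLanguageModelFromArray:secondWordList
                                                     withFilesNamed:@"MySecondVocabulary"
                                             forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
        if (error != nil) return;

        NSString *newLmPath = [generator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:@"MySecondVocabulary"];
        NSString *newDicPath = [generator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"MySecondVocabulary"];

        // Swap the running recognizer over to the new vocabulary without stopping listening.
        [[OEPocketsphinxController sharedInstance] changeLanguageModelToFile:newLmPath
                                                               withDictionary:newDicPath];
    }

    Switching between small vocabularies like this keeps each active model manageable while still letting the app adapt to the current context, which is the practical extent of “dynamic” behavior in offline recognition.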
