JSGF code example?


Viewing 11 posts - 1 through 11 (of 11 total)

  • Author
    Posts
  • #1016981
    catdsnny
    Participant

    Is there any example code on implementing a JSGF grammar?  The documentation only seems to show how to use the ARPA model.

    #1016982
    Halle Winkler
    Politepix

    There is a JSGF file shipped with the sample app (OpenEars1.gram) that you can pass into startListening: instead of the starting language model (remembering to set isJSGF to TRUE).
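For readers who haven't seen one, a minimal JSGF grammar file looks roughly like this (the rule name and words here are illustrative, not the actual contents of the shipped OpenEars1.gram):

```
#JSGF V1.0;

grammar openears1;

public <command> = (GO | STOP) (FORWARD | BACKWARD);
```

This is a config-style text file; OpenEars only needs its path, plus a matching phonetic dictionary, as discussed below.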

    #1016983
    catdsnny
    Participant

I’m more looking for what the call is – I can create a grammar file. The ARPA examples all refer to creating a LanguageModelGenerator object. Is the process the same for JSGF? The docs are very light on how to build a JSGF-based app.

    #1016984
    Halle Winkler
    Politepix

    Is your question about how to dynamically create a JSGF/dictionary pair using LanguageModelGenerator? You can’t create a JSGF using LanguageModelGenerator, but you can put the words from an existing JSGF into LanguageModelGenerator and use the resulting .dic file along with your pre-existing JSGF. Then you just pass the two files into startListening: (remembering to set isJSGF to TRUE) instead of an ARPA file and a .dic file.
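Sketched in code, assuming the OpenEars 1.x Objective-C API (exact method signatures vary between versions, and the word list and file names here are illustrative), the workflow described above looks roughly like this:

```objc
// Sketch only, not a supported example: generate a .dic for the words in a
// pre-existing JSGF file, then start listening with the JSGF + .dic pair.
#import <OpenEars/LanguageModelGenerator.h>
#import <OpenEars/PocketsphinxController.h>

LanguageModelGenerator *generator = [[LanguageModelGenerator alloc] init];

// These words must match the words used in your pre-existing JSGF file.
NSArray *words = @[@"GO", @"STOP", @"FORWARD", @"BACKWARD"];
NSError *error = [generator generateLanguageModelFromArray:words
                                            withFilesNamed:@"MyJSGFWords"];

if([error code] == noErr) {
    // Only the .dic output is used; the generated ARPA model is ignored
    // because the pre-existing JSGF file takes its place.
    NSString *dicPath =
        [generator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"MyJSGFWords"];
    NSString *gramPath =
        [[NSBundle mainBundle] pathForResource:@"OpenEars1" ofType:@"gram"];

    PocketsphinxController *pocketsphinxController = [[PocketsphinxController alloc] init];
    [pocketsphinxController startListeningWithLanguageModelAtPath:gramPath
                                                 dictionaryAtPath:dicPath
                                              languageModelIsJSGF:TRUE];
}
```

The "two files" are thus the .gram file you wrote yourself and the .dic file that LanguageModelGenerator produced.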

    It isn’t heavily documented because I can’t provide much support for the fairly complex subject of the JSGF standard and the Sphinx implementation thereof, so it is more something I make available as an option than actively promote, if that makes sense.

    #1016985
    catdsnny
    Participant

I guess what I’m suggesting is that it would be helpful to have a short tutorial on using a JSGF grammar. I’m not suggesting that JSGF be the focal point of the tutorial. What seems to be missing is a step-by-step guide to implementing a JSGF grammar with OpenEars – not how to build a JSGF grammar, just how to use JSGF grammars with the library. I think a few lines of code showing the steps you just outlined above would be useful for developers. For example, you refer to “two files” – what two files? How are they created? What do they contain? None of this seems to be covered in the docs, as opposed to the ARPA model, which is heavily documented in a step-by-step manner.

    #1016995
    catdsnny
    Participant

Any chance of getting a few lines of code showing how to implement a JSGF grammar with OpenEars?

    #1016996
    Halle Winkler
    Politepix

    I’m sorry, I don’t understand what you require if you already have a JSGF grammar. If you already have a JSGF grammar, there isn’t anything to describe beyond what I said previously:

    you can put the words from an existing JSGF into LanguageModelGenerator and use the resulting .dic file along with your pre-existing JSGF. Then you just pass the two files into startListening: (remembering to set isJSGF to TRUE) instead of an ARPA file and a .dic file.

    OpenEars doesn’t handle any other aspects of JSGF for you (unlike ARPA) so there isn’t anything else I can give you instructions for.

    #1016998
    catdsnny
    Participant

    Ok, so to confirm, there is no “gram” file?  I just create a text file with my grammar, use that as the input to generateLanguageModelFromTextFile and then pass the lmpath/dicpath that results into the startListeningWithLanguageModelAtPath?

Given that the only example is ARPA, and OpenEars supports JSGF, I think it would be helpful for users to have a few lines of sample code showing how JSGF is implemented.

    #1016999
    Halle Winkler
    Politepix

    Nope, you are using your pre-created JSGF file (with whatever ending you like — the one that ships with the sample app has the ending “.gram”) and if you don’t already have a phonetic dictionary that goes with it you can use LanguageModelGenerator to create one as I described. Then you can pass your JSGF file and the .dic file that you used LanguageModelGenerator to create to startListeningWithLanguageModelAtPath:.

As I mentioned in your other thread, JSGF is a more difficult approach because you are responsible for your own research on its requirements. There is nothing to document on the OpenEars side of things; you have to already have a JSGF file and phonetic dictionary and know what those files are (the fact that there is a hacky, unsupported way to create a .dic file using LanguageModelGenerator notwithstanding).

    OpenEars supports JSGF because it’s a common format that is used by existing speech UI designers, but it isn’t a goal of the project to introduce users to developing JSGF or its components.

    #1020924
    Halle Winkler
    Politepix

Please check out the new dynamic grammar generation for OpenEars added with version 1.7: https://www.politepix.com/2014/04/10/openears-1-7-introducing-dynamic-grammar-generation/
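For anyone arriving at this thread now, the 1.7 approach replaces a hand-written JSGF file with an NSDictionary describing the rules. A rough sketch follows (the key names are OpenEars grammar constants; see the linked post for the authoritative key list and the exact generation call for your version):

```objc
// Sketch of the OpenEars 1.7+ dynamic grammar format; the phrases here are
// illustrative. The dictionary is handed to LanguageModelGenerator in place
// of a hand-written JSGF file, which then emits the grammar and .dic pair.
NSDictionary *grammar = @{
    ThisWillBeSaidOnce : @[
        @{ OneOfTheseWillBeSaidOnce : @[@"HELLO COMPUTER", @"GREETINGS ROBOT"]},
        @{ ThisWillBeSaidOnce : @[@"THANK YOU"]}
    ]
};
```

This sidesteps the JSGF standard entirely while producing a rule-based grammar.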

    #1021033
    Halle Winkler
    Politepix

    In addition to the dynamic grammar generation that has been added to stock OpenEars in version 1.7, there is also a new plugin called RuleORama which can use the same API in order to generate grammars which are a bit faster and compatible with RapidEars: https://www.politepix.com/ruleorama
