This topic has 10 replies, 2 voices, and was last updated 10 years ago by Halle Winkler.
April 17, 2013 at 3:03 pm · #1016981 · catdsnny (Participant)
Is there any example code on implementing a JSGF grammar? The documentation only seems to show how to use the ARPA model.
April 17, 2013 at 3:17 pm · #1016982 · Halle Winkler (Politepix)
There is a JSGF file shipped with the sample app (OpenEars1.gram) that you can pass into startListening: instead of the starting language model (remembering to set isJSGF to TRUE).
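For readers unfamiliar with the format, a minimal JSGF grammar looks something like the following. This is an illustrative sketch, not the contents of the actual OpenEars1.gram file; the grammar name, rules, and words are invented for the example:

```
#JSGF V1.0;
grammar commands;

public <command> = <action> <object>;
<action> = OPEN | CLOSE;
<object> = WINDOW | DOOR;
```

Every word that appears as a terminal in the grammar also needs an entry in the accompanying phonetic dictionary.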
April 17, 2013 at 3:34 pm · #1016983 · catdsnny (Participant)
I’m looking more for what the call is – I can create a grammar file. The ARPA examples all refer to creating a LanguageModelGenerator object. Is the process the same for JSGF? The docs are very light on how to build a JSGF-based app.
April 17, 2013 at 3:45 pm · #1016984 · Halle Winkler (Politepix)
Is your question about how to dynamically create a JSGF/dictionary pair using LanguageModelGenerator? You can’t create a JSGF using LanguageModelGenerator, but you can put the words from an existing JSGF into LanguageModelGenerator and use the resulting .dic file along with your pre-existing JSGF. Then you just pass the two files into startListening: (remembering to set isJSGF to TRUE) instead of an ARPA file and a .dic file.
It isn’t heavily documented because I can’t provide much support for the fairly complex subject of the JSGF standard and the Sphinx implementation thereof, so it is more something I make available as an option than actively promote, if that makes sense.
April 17, 2013 at 3:58 pm · #1016985 · catdsnny (Participant)
I guess what I’m suggesting is that it would be helpful to have a short tutorial on using a JSGF grammar. I’m not suggesting that JSGF be the focal point of the tutorial. What seems to be missing is a step-by-step guide to implementing a JSGF grammar with OpenEars – not how to build a JSGF grammar, just how to use JSGF grammars with the library. I think a few lines of code showing how to do the steps you just outlined above would be useful for developers. For example, you refer to “two files” – what two files? How are they created? What do they contain? None of this seems to be covered in the docs, as opposed to the ARPA model, which is heavily documented in a step-by-step manner.
April 18, 2013 at 7:12 pm · #1016995 · catdsnny (Participant)
Any chance of getting a few lines of code showing how to implement a JSGF grammar with OpenEars?
April 18, 2013 at 7:20 pm · #1016996 · Halle Winkler (Politepix)
I’m sorry, I don’t understand what you require if you already have a JSGF grammar. If you already have a JSGF grammar, there isn’t anything to describe beyond what I said previously:
you can put the words from an existing JSGF into LanguageModelGenerator and use the resulting .dic file along with your pre-existing JSGF. Then you just pass the two files into startListening: (remembering to set isJSGF to TRUE) instead of an ARPA file and a .dic file.
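To make the “two files” concrete: they are the pre-existing JSGF grammar and a phonetic dictionary (.dic). The .dic file is plain text in CMU pronouncing dictionary format, one word per line followed by its ARPAbet phonemes. A hypothetical dictionary covering four invented words might look like:

```
CLOSE	K L OW S
DOOR	D AO R
OPEN	OW P AH N
WINDOW	W IH N D OW
```

LanguageModelGenerator produces a file in this format automatically when given the word list, which is why it can supply the .dic half of the pair even though it cannot write the JSGF half.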
OpenEars doesn’t handle any other aspects of JSGF for you (unlike ARPA) so there isn’t anything else I can give you instructions for.
April 18, 2013 at 7:59 pm · #1016998 · catdsnny (Participant)
Ok, so to confirm, there is no “gram” file? I just create a text file with my grammar, use that as the input to generateLanguageModelFromTextFile, and then pass the lmpath/dicpath that results into startListeningWithLanguageModelAtPath:?
Given that the only example is ARPA, and OpenEars supports JSGF, I think it would be helpful for users to have a few lines of sample code showing how JSGF is implemented.
April 18, 2013 at 8:13 pm · #1016999 · Halle Winkler (Politepix)
Nope, you use your pre-created JSGF file (with whatever file extension you like; the one that ships with the sample app has the extension “.gram”), and if you don’t already have a phonetic dictionary that goes with it, you can use LanguageModelGenerator to create one as I described. Then you pass your JSGF file and the .dic file that you used LanguageModelGenerator to create to startListeningWithLanguageModelAtPath:.
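The workflow described above might be sketched roughly as follows in Objective-C. This is an unofficial sketch against the OpenEars 1.x API as referenced in this thread, not code from the project’s docs; method signatures and the location where the generated .dic lands changed between releases, so check the headers of the version you are using. The word list, file names, and the dictionary path are assumptions for the example:

```objc
#import <OpenEars/LanguageModelGenerator.h>
#import <OpenEars/PocketsphinxController.h>

- (void) startListeningWithJSGF {
    // 1. Generate a phonetic dictionary (.dic) covering every terminal word
    //    that appears in your pre-written JSGF grammar.
    LanguageModelGenerator *generator = [[LanguageModelGenerator alloc] init];
    NSArray *words = @[@"OPEN", @"CLOSE", @"WINDOW", @"DOOR"]; // hypothetical vocabulary
    NSError *error = [generator generateLanguageModelFromArray:words
                                                withFilesNamed:@"MyGrammar"];
    if (error) {
        NSLog(@"Dictionary generation failed: %@", error);
        return;
    }

    // 2. Pass the pre-written JSGF file plus the generated .dic file to the
    //    listening call, flagging the model as JSGF. The .dic path below is a
    //    placeholder; use wherever your OpenEars version writes generated files.
    NSString *grammarPath = [[NSBundle mainBundle] pathForResource:@"MyGrammar"
                                                            ofType:@"gram"];
    NSString *dictionaryPath = @"/path/to/generated/MyGrammar.dic"; // placeholder
    [self.pocketsphinxController startListeningWithLanguageModelAtPath:grammarPath
                                                      dictionaryAtPath:dictionaryPath
                                                   languageModelIsJSGF:TRUE];
}
```

The key point from the thread is that only step 1 involves OpenEars tooling; the .gram file itself is authored entirely by you.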
As I mentioned in your other thread, JSGF is a more difficult approach because you are responsible for your own research into its requirements. There is nothing to document on the OpenEars side of things; you have to already have a JSGF file and phonetic dictionary and know what those files are (the fact that there is a hacky, unsupported way to create a .dic file using LanguageModelGenerator notwithstanding).
OpenEars supports JSGF because it’s a common format that is used by existing speech UI designers, but it isn’t a goal of the project to introduce users to developing JSGF or its components.
April 21, 2014 at 3:33 pm · #1020924 · Halle Winkler (Politepix)
Please check out the new dynamic grammar generation for OpenEars added in version 1.7: https://www.politepix.com/2014/04/10/openears-1-7-introducing-dynamic-grammar-generation/
April 24, 2014 at 6:08 pm · #1021033 · Halle Winkler (Politepix)
In addition to the dynamic grammar generation added to stock OpenEars in version 1.7, there is also a new plugin called RuleORama, which uses the same API to generate grammars that are a bit faster and compatible with RapidEars: https://www.politepix.com/ruleorama
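For comparison with the hand-written JSGF approach discussed earlier in the thread, the 1.7 dynamic grammar API announced in the linked post is driven by an NSDictionary of rule keywords rather than a .gram file. The sketch below is based on one reading of that announcement; the keyword constants, method signature, and acoustic model name should all be verified against the 1.7 documentation, and the phrases are invented:

```objc
#import <OpenEars/LanguageModelGenerator.h>
#import <OpenEars/AcousticModel.h>

// Describe the grammar with rule keywords instead of writing JSGF by hand;
// the generator emits both the grammar and its phonetic dictionary.
LanguageModelGenerator *generator = [[LanguageModelGenerator alloc] init];
NSDictionary *grammar = @{
    ThisWillBeSaidOnce : @[
        @{ OneOfTheseWillBeSaidOnce : @[@"HELLO COMPUTER", @"GREETINGS ROBOT"] },
        @{ OneOfTheseCanBeSaidOnce : @[@"PLEASE"] }
    ]
};
NSError *error = [generator generateGrammarFromDictionary:grammar
                                           withFilesNamed:@"MyDynamicGrammar"
                                   forAcousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]];
```

This sidesteps the documentation gap raised above, since the grammar rules are expressed through the OpenEars API itself rather than through the JSGF standard.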