- This topic has 5 replies, 2 voices, and was last updated 7 years, 4 months ago by Halle Winkler.
November 13, 2015 at 10:03 pm #1027308
I am using a JSGF grammar, which allows ordering and optional phrases. I would like the grammar to contain separate phrases/commands that could be given. Example (paraphrasing the syntax):
(Rule1) ( MASTER | COMPUTER ) ( LIFT | PICK UP | GET ) ( SHOVEL | SPADE )
(Rule2) ( DROP | EMPTY ) ( INVENTORY | SATCHEL )
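Written out as actual JSGF, the two rules above might look something like this (a sketch – the grammar name and rule names are placeholders of my own, not anything OpenEars requires):

```
#JSGF V1.0;
grammar commands;

public <command> = <rule1> | <rule2>;

<rule1> = ( MASTER | COMPUTER ) ( LIFT | PICK UP | GET ) ( SHOVEL | SPADE );
<rule2> = ( DROP | EMPTY ) ( INVENTORY | SATCHEL );
```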
I am able to do this using JSGF, but I would like to know if it is possible to determine which rule triggered the hypothesis. Otherwise, I would need to scan the returned hypothesis and duplicate the same logic OpenEars just used to trigger the recognition.
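That fallback scan could be sketched like this (Python, with a hypothetical rule table of my own – not part of the OpenEars API – just to show the app-side logic I would be duplicating):

```python
# Hypothetical rule table mirroring the two JSGF rules above.
# Each rule is a sequence of slots; each slot is a tuple of alternatives.
RULES = {
    "Rule1": [("MASTER", "COMPUTER"), ("LIFT", "PICK UP", "GET"), ("SHOVEL", "SPADE")],
    "Rule2": [("DROP", "EMPTY"), ("INVENTORY", "SATCHEL")],
}

def matching_rule(hypothesis):
    """Return the name of the first rule whose slots can consume the
    whole hypothesis in order, or None if nothing matches."""
    for name, slots in RULES.items():
        remaining = hypothesis.upper()
        matched = True
        for alternatives in slots:
            for alt in alternatives:
                if remaining == alt or remaining.startswith(alt + " "):
                    remaining = remaining[len(alt):].lstrip()
                    break
            else:
                matched = False
                break
        if matched and not remaining:
            return name
    return None
```

For example, `matching_rule("COMPUTER PICK UP SPADE")` returns `"Rule1"`, which is exactly the information I was hoping the engine could hand back directly.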
PS. Thanks for creating the framework – I am excited to use it.
November 13, 2015 at 10:15 pm #1027309
I think this would probably be quite easy to do if you used OpenEars’ dynamic grammar language rather than writing JSGF directly, since you’d then have all of your rules in a dictionary and you can easily enumerate through it to find the match. Additionally, it will make dynamic JSGF generation easier to do programmatically at runtime. Here is a blog post about it, and there is a bit more info in the docs: https://www.politepix.com/2014/04/10/openears-1-7-introducing-dynamic-grammar-generation/
November 13, 2015 at 10:23 pm #1027310
No, sorry, let me amend that – it will be possible for rules which are fixed phrases or simple enough combinations to match to your dictionary contents without logic that gets very complex, but it won’t work well for rules where there are optionals and multiple OR cases, I’d expect.
November 13, 2015 at 10:39 pm #1027311
I was actually going down that path and had read the doc you mentioned.
The problem I ran into was generating an example like the one I had (actually I used the one you wrote in the article and simply added a second set of commands).
Since it is a key followed by an array of phrases, the compiler complained that there were duplicate keys (if re-using the key ThisCanBeSaidOnce over and over at the top level).
I then tried to nest the two command structures a level deeper so that the top key was only one ThisWillBeSaidOnce. At one point I was able to get it to run, but then my program complained the dictionary was too complex (I had only added one additional command to your example).
Finally, I ran across your other document outlining a way to have the dictionary and grammar files pre-made, which would save the generation time at initialization. I thought I could then avoid the additional level of nested dictionary keys.
But now I see it would still be better to have the dictionary available to parse the resulting hypothesis.
Maybe the way I was nesting the 2nd set of commands was not correct. I also searched for additional examples of OpenEars grammar dictionary setup in code but could not find any.
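For concreteness, the nesting I was attempting was shaped roughly like this (shown here as a Python dict for readability – the real API takes an NSDictionary, the key names are the ones from your blog post, and whether this exact nesting is legal is precisely what I’m unsure about):

```python
# Two alternative commands nested under a single top-level
# ThisWillBeSaidOnce, with OneOfTheseWillBeSaidOnce choosing
# between the two rule branches. Shape is my assumption, not
# a confirmed OpenEars grammar.
grammar = {
    "ThisWillBeSaidOnce": [
        {"OneOfTheseWillBeSaidOnce": [
            # Rule 1: (MASTER | COMPUTER) (LIFT | PICK UP | GET) (SHOVEL | SPADE)
            {"ThisWillBeSaidOnce": [
                {"OneOfTheseWillBeSaidOnce": ["MASTER", "COMPUTER"]},
                {"OneOfTheseWillBeSaidOnce": ["LIFT", "PICK UP", "GET"]},
                {"OneOfTheseWillBeSaidOnce": ["SHOVEL", "SPADE"]},
            ]},
            # Rule 2: (DROP | EMPTY) (INVENTORY | SATCHEL)
            {"ThisWillBeSaidOnce": [
                {"OneOfTheseWillBeSaidOnce": ["DROP", "EMPTY"]},
                {"OneOfTheseWillBeSaidOnce": ["INVENTORY", "SATCHEL"]},
            ]},
        ]},
    ],
}
```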
Thanks for your quick reply.
November 13, 2015 at 10:47 pm #1027312
Thinking ahead I also like the idea of having the dictionary files already available at runtime and potentially using multiple ones (switching them in and out) per context of the application. This would allow for larger vocabularies.
I think you mention this in other posts.
November 14, 2015 at 10:24 am #1027314
Yup, the design of the language is intended to give you the same kind of dynamism and small-as-possible model size that you have with the language model generation tools, and I think it’s also better to be able to read the logic in plain language for maintenance/tweaking reasons, particularly if it may be worked on by someone who doesn’t know the JSGF spec someday.
In stock OpenEars, the grammar language actually results in a well-formed JSGF file being output (in RuleORama, the same grammar language is used but the endpoint isn’t a JSGF file, so it’s probably a good idea to think of the output file type as an implementation detail rather than a promise), so the engine complaining that the JSGF is too big is interesting to me. Are you sure your direct-to-JSGF version and your OpenEars grammar language version have the same rules?
The grammar language doesn’t get nearly the road-testing that language model generation gets, because even though it’s more flexible than writing JSGF, it’s still notably harder to use than submitting an NSArray of words, which means that bugs can appear. (There is currently one with switching between JSGFs which I expect to have fixed in the upcoming version – right now you have to stop the engine and start again with a new JSGF, which doesn’t have a big time impact but is more complex to write than it needs to be.) Let me know about any unexpected results like the one you’ve described.