Tagged: grammar, script, verification
September 9, 2016 at 4:19 pm #1030961 | sdsacc (Participant)
Hello, we would like to use OpenEars/RapidEars to perform verification of voice scripts used in a sales setting. The idea is that a salesperson is on a call with a client (assume the audio of the call is piped into the mic input of the iPhone). The sales agent is to read statements that may or may not include the client’s name and ask the client yes/no questions. For example, one such script is:
—
Today’s date is _______ and the time is __________. My name is ____________ with My Sales Company.
Mr./Mrs. __________ do I have permission to record this call?
Can you please state your mother’s maiden name for security purposes?
Please speak the last four of your social security number.
Mr./Mrs. __________ this call is to verify that you received the Pennsylvania Disclosure Statement that was mailed to you.
Can you please state your name on the recording?
—
Do you have any suggestions for how best to construct a grammar to ensure the script was read properly and the client answered in the appropriate spot?
September 9, 2016 at 4:42 pm #1030962 | Halle Winkler (Politepix)
Welcome,
I think the docs have a good rundown of how the human-readable grammar language works with RuleORama, RapidEars and OpenEars, and for statistical language models there are examples in the tutorial and the sample app to start with. Unfortunately I can’t construct the grammar for you on a case-by-case basis, but I’ll be happy to answer specific questions if you have unexpected results with specific grammar rules after taking a look at the docs.
September 15, 2016 at 10:52 pm #1030989 | sdsacc (Participant)
Thanks very much.
The docs did have some good examples but I am a little confused by what I am seeing. I have the following grammar defined:
NSDictionary *grammarOpening = @{
    ThisWillBeSaidWithOptionalRepetitions : @[
        @{ThisWillBeSaidOnce : @[
            @{ ThisWillBeSaidOnce : @[@"TODAYS"]},
            @{ ThisWillBeSaidOnce : @[@"DATE"]},
            @{ ThisWillBeSaidOnce : @[@"IS"]}
        ]}
    ]};

Basically I am looking for the phrase “TODAYS DATE IS”. The issue is that I get a hypothesis whether I just say the word TODAY or I say the whole thing. This is true even if I group the words into a single line. Is that expected? If so, how can I determine the whole phrase was spoken?
September 16, 2016 at 1:05 pm #1030990 | Halle Winkler (Politepix)
OK, this may be more what you are intending:
NSDictionary *grammarOpening = @{
    ThisWillBeSaidOnce : @[
        @{ ThisWillBeSaidOnce : @[@"TODAYS"]},
        @{ ThisWillBeSaidOnce : @[@"DATE"]},
        @{ ThisWillBeSaidOnce : @[@"IS"]}
    ]};
Wrapping the entire ruleset in optional repetitions doesn’t do anything on the outside of the ruleset (it’s a given that the whole ruleset may be used more than once), but it can probably lead to some unexpected outcomes, even more so with RuleORama, which doesn’t support it as a tag (this can be seen in the logging if you turn it on). Is this grammar for use with stock OpenEars or with RuleORama?
BTW, you can use lowercase/mixed case and apostrophes with OpenEars if you want to.
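For instance, the rule above could also be written with natural casing and the apostrophe (purely illustrative; it is the same rule as before, just cased differently):

NSDictionary *grammarOpening = @{
    ThisWillBeSaidOnce : @[
        @{ ThisWillBeSaidOnce : @[@"Today's"]},
        @{ ThisWillBeSaidOnce : @[@"date"]},
        @{ ThisWillBeSaidOnce : @[@"is"]}
    ]};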
September 17, 2016 at 4:15 pm #1030992 | Halle Winkler (Politepix)
Due to a server migration, part of this discussion was lost, but this is the most recent post from the original poster:
Well, I understand that, so I will ask something more specific.
The first item in the grammar I gave is the phrase “TODAYS DATE IS”. I have a wav file that says “Today’s date is September 16th”.
When I pass this file to runRecognitionOnWavFileAtPath, it identifies the phrase at the beginning. If I tack on extra seconds of speech after the “September 16th”, not containing any other parts of the grammar, the phrase at the beginning is not recognized.
I am not sure what I could send you to help understand the issue. I have a project adapted from the OpenEarsSample app I could send along if that would help.
September 18, 2016 at 10:46 am #1030999 | Halle Winkler (Politepix)
This is correct behavior. The reason to use a ruleset is so that utterances that don’t conform to the rules are rejected. If your grammar is compared to an utterance that has information that doesn’t match the ruleset, the utterance shouldn’t be recognized. The options for this situation are either to use a language model rather than a grammar (examples of this can be found in the tutorial and sample app) or to use a grammar which contains entries representing the additional speech.
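As a rough sketch of the language model option (the phrase list and file name here are placeholders, and the generator calls are the ones shown in the OpenEars tutorial, so check them against the current docs):

OELanguageModelGenerator *generator = [[OELanguageModelGenerator alloc] init];

// Script phrases plus representative extra speech that may occur around them.
NSArray *phrases = @[@"TODAYS DATE IS", @"MY NAME IS", @"DO I HAVE PERMISSION TO RECORD THIS CALL"];

NSError *error = [generator generateLanguageModelFromArray:phrases
                                             withFilesNamed:@"SalesScript"
                                     forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];

if (error == nil) {
    NSString *lmPath = [generator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:@"SalesScript"];
    NSString *dicPath = [generator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"SalesScript"];
    // Use lmPath and dicPath for recognition in place of the grammar, with languageModelIsJSGF set to FALSE.
}

Unlike a grammar, a statistical model will return its best-matching hypothesis for non-conforming speech rather than rejecting it, so the app would need to compare the hypothesis against the expected script text itself.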
September 18, 2016 at 11:39 pm #1031003 | sdsacc (Participant)
Ah, I see. That makes sense. So my usage model is going to be recording a snippet using the iPhone mic and then passing the recording to runRecognitionOnWavFileAtPath. When I set up the AVAudioRecorder I am using the following setup:
// Define the recorder settings
NSMutableDictionary *recordSetting = [[NSMutableDictionary alloc] init];
[recordSetting setValue:[NSNumber numberWithInt:kAudioFormatLinearPCM] forKey:AVFormatIDKey];
[recordSetting setValue:[NSNumber numberWithFloat:16000.0] forKey:AVSampleRateKey];
[recordSetting setValue:[NSNumber numberWithInt:1] forKey:AVNumberOfChannelsKey];

// Initiate and prepare the recorder
NSError *error = nil;
recorder = [[AVAudioRecorder alloc] initWithURL:outputFileURL settings:recordSetting error:&error];

Will this produce a file of the proper format for OpenEars or do I need to do some other conversion?
Thanks!
September 19, 2016 at 12:13 am #1031004 | Halle Winkler (Politepix)
That’s the correct format.
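For reference, once recording has finished, the file might be handed off roughly like this (a sketch only: the exact selector should be checked against the OEPocketsphinxController header, the grammarPath and dictionaryPath values come from the grammar generator, and languageModelIsJSGF is TRUE because a grammar rather than a statistical model is being used):

// Activate the controller, then run recognition on the recorded file.
[[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil];

[[OEPocketsphinxController sharedInstance] runRecognitionOnWavFileAtPath:[outputFileURL path]
                                                 usingLanguageModelAtPath:grammarPath
                                                         dictionaryAtPath:dictionaryPath
                                                      acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]
                                                      languageModelIsJSGF:TRUE];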
November 30, 2016 at 4:46 am #1031328 | sdsacc (Participant)
On the product page there is an option for 2 months of email support. Does this include any help in designing grammars for applications, or is it only for answering email questions?
November 30, 2016 at 7:49 am #1031329 | Halle Winkler (Politepix)
Greetings,
It’s for answering questions of the type that would be posted in this forum, but via email. Here are the support terms: https://www.politepix.com/supportterms/