trying to use two "say" statements

    #1022180
    boombatz
    Participant

    Hi,
    I’m trying to use two say statements one right after the other: the first one directly, and the second one called in a method. The first one plays, but the one in the method does not. Like this:

    [self.fliteController say:@"Welcome. " withVoice:self.slt];
    [self newMethod];

    - (void)newMethod {
        [self.fliteController say:@"Please say your name" withVoice:self.slt];
    }

    #1022181
    Halle Winkler
    Politepix

    Welcome,

    When you say a statement, it stops an existing statement and overrides it, so if you say two simultaneously, only the second one will be heard. If you want to play multiple statements, you have to wait for the fliteDidFinishSpeaking callback from OpenEarsEventsObserver before calling the next one.

    #1022182
    boombatz
    Participant

    Just tried this test, one right after another, and only the first statement plays:

    [self.fliteController say:[NSString stringWithFormat:@"Hello and Welcome"] withVoice:self.slt];

    [self.fliteController say:[NSString stringWithFormat:@"This is the second statement"] withVoice:self.slt];

    Thanks.
    Marc

    #1022183
    Halle Winkler
    Politepix

    You have to wait to get the OpenEarsEventsObserver callback that the statement is complete before going to the next statement.

    #1022189
    boombatz
    Participant

    Hi,
    I have tried to do that, and I have also put a delay in my code, and I can see in the console that it has “finished speaking”, but I still cannot get this to work.
    Do you have any example code that might get me going? It would be much appreciated.
    Thanks.

    BTW, I am building an app to help Parkinson’s patients moderate their speech

    #1022193
    Halle Winkler
    Politepix

    OK, can you show me the code you are using to wait for the callback before starting the second speech utterance?

    #1022197
    boombatz
    Participant

    I’ve tried GCD and NSOperation but I can’t get either to work.
    I don’t think I understand the idea of waiting for a callback. Perhaps you have some sample code, or could point me to a sample that illustrates this? That would be very helpful and much appreciated.
    Thanks.

    #1022198
    Halle Winkler
    Politepix

    Sure, there is sample code in the OpenEars sample app which shows how the OpenEarsEventsObserver callbacks work. The general question about how callbacks work and how Objective-C uses delegate methods as callbacks is a little beyond the scope of the support I can give in these forums, regretfully, but there is a lot of info online if you search for those keywords (e.g. “objective-c” and “delegates” and “delegate methods” and “callbacks”). Very briefly, it is a way that you can have a method in your view controller that, rather than _you_ calling it yourself to cause something to happen, some other object in your app can call the method when something happens to _it_. And when that method gets called, you know that the event you are waiting for happened, and you can take the opportunity to react to that. For instance, if you are waiting to hear that FliteController finished playing a piece of speech so you can start a new piece of speech.

    In the OpenEars sample app, there is an example of how this works using the delegate method of OpenEarsEventsObserver called fliteDidFinishSpeaking. The chunk of code you can search for in the sample app looks like this:

    - (void) fliteDidFinishSpeaking {
        NSLog(@"Flite has finished speaking"); // Log it.
        self.statusTextView.text = @"Status: Flite has finished speaking."; // Show it in the status box.
    }

    So when FliteController finishes saying a phrase, it (rather than you) can call this callback via OpenEarsEventsObserver and you receive notification in that method that the speaking is complete and it’s an OK time to start a new phrase. When you run the sample app, this will be demonstrated when the logging statement “Flite has finished speaking” appears in the console.
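
    In case it helps, here is a rough sketch of the setup side, i.e. how a view controller becomes the delegate that receives fliteDidFinishSpeaking in the first place. The class and property names are only illustrative, and the header location can differ between OpenEars versions:

    #import <OpenEars/OpenEarsEventsObserver.h> // header path may vary by OpenEars version

    @interface ViewController () <OpenEarsEventsObserverDelegate> // adopt the delegate protocol
    @property (strong, nonatomic) OpenEarsEventsObserver *openEarsEventsObserver;
    @end

    // In the @implementation:
    - (void)viewDidLoad {
        [super viewDidLoad];
        self.openEarsEventsObserver = [[OpenEarsEventsObserver alloc] init]; // create the observer
        [self.openEarsEventsObserver setDelegate:self]; // this class now receives the OpenEars callbacks
    }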

    #1022201
    boombatz
    Participant

    Thanks. I guess where I’m having trouble is making the app wait until I receive the callback. I’ve tried all kinds of things but I just can’t get the app to pause until I receive the callback. Any suggestions?
    Thanks.

    #1022202
    Halle Winkler
    Politepix

    OK, there shouldn’t be any kind of timing- or pausing-related need – you can just give the first “say:” in viewDidLoad (or whatever method contains the first speech requirement) and then, for example, give the second one in the callback once it is called. Don’t do anything with threads, sleeping, anything like that, just use the fact that the callback was called as the event which tells you to make the second call to the say method.
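
    As a concrete sketch of that flow (assuming the fliteController, slt, and OpenEarsEventsObserver properties are already set up, and using a hypothetical BOOL property so the second phrase is only spoken once):

    - (void)viewDidLoad {
        [super viewDidLoad];
        [self.fliteController say:@"Welcome." withVoice:self.slt]; // first utterance
    }

    - (void) fliteDidFinishSpeaking {
        if (self.saidSecondStatement == NO) { // hypothetical flag so this callback does not re-trigger itself
            self.saidSecondStatement = YES;
            [self.fliteController say:@"Please say your name" withVoice:self.slt]; // second utterance
        }
    }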

    #1022203
    boombatz
    Participant

    OK, so in other words I should put the second “say” right in the fliteDidFinishSpeaking delegate method? I can see how that might work for this one time, but doesn’t that cause a problem, because every time it finishes speaking it will then execute the “say” that is written in the delegate method?
    I guess I’m still confused, but getting there.
    Thanks

    #1022204
    Halle Winkler
    Politepix

    Without knowing all of the things that your app does, I can’t suggest a logical flow for how to handle your speech synthesis (that’s a little beyond what I can help with), but initiating an action and waiting for a callback is a pretty common pattern in asynchronous software (for instance, it’s also the way that AVAudioPlayer lets you know it’s done playing back audio files).

    So for this:

    doesn’t that cause a problem, because every time it finishes speaking it will then execute the “say” that is written in the delegate method?

    This would be the case if there was no logic in the callback and it just said [self.fliteController say:@"hi there" withVoice:self.slt]; but if your goal is to speak a continuous statement in a few parts, you could, for instance, have a switch or series of if/else statements which track what has already been said and what is next to say if it’s very short, or you could put your NSStrings into a mutable array and work through them one at a time, removing the ones which have been spoken, or track what is next to do in other ways.
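
    A rough sketch of the mutable array idea might look something like this (the property and method names are only illustrative):

    @property (strong, nonatomic) NSMutableArray *phrasesToSay; // hypothetical queue of statements

    - (void)startSpeaking {
        self.phrasesToSay = [NSMutableArray arrayWithObjects:@"Welcome.", @"Please say your name.", nil];
        [self sayNextPhrase];
    }

    - (void)sayNextPhrase {
        if ([self.phrasesToSay count] > 0) {
            NSString *phrase = [self.phrasesToSay objectAtIndex:0];
            [self.phrasesToSay removeObjectAtIndex:0]; // remove it so it is only spoken once
            [self.fliteController say:phrase withVoice:self.slt];
        }
    }

    - (void) fliteDidFinishSpeaking {
        [self sayNextPhrase]; // speak the next queued phrase, if there is one
    }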

    NeatSpeech does this kind of queueing and tracking of long statements and new statements automatically, but it still uses callbacks to do some things and logic design requirements like this will still exist in other areas of the app.

    #1022205
    boombatz
    Participant

    OK! Now I’ve got it. Thanks a lot.

    One other question . . . . I notice in your sample app that you define the same variables twice – once as instance variables and again as properties. Should I do that in my app, or can I skip the instance variables and just do them as properties?
    Thanks.

    #1022206
    Halle Winkler
    Politepix

    Glad it’s working for you! I’d say you can just do them as properties – the sample app is pretty old and that style came from some of the Apple sample code of that moment, but if I were to write it now it would be a bit different.
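
    For instance, assuming a reasonably recent Xcode (4.4 and later synthesize the backing instance variable for each property automatically), the declarations can be just the properties; the class name here is only a placeholder:

    @interface ViewController : UIViewController
    // No separate instance variable declarations needed; the compiler creates
    // a backing ivar (e.g. _fliteController) for each property automatically.
    @property (strong, nonatomic) FliteController *fliteController;
    @property (strong, nonatomic) Slt *slt;
    @end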

    #1022207
    boombatz
    Participant

    Great. That’s what I thought.
    Thanks a lot for your help.
