Without knowing everything your app does, I can’t suggest a logical flow for handling your speech synthesis (that’s a little beyond what I can help with). But initiating an action and waiting for a callback is a very common pattern in asynchronous software — for instance, it’s also the way that AVAudioPlayer lets you know it’s done playing back audio files.
So for this:
doesn’t that cause a problem because every time it finishes speaking it will then execute the “say” that is written in the delegate method?
That would be the case if there were no logic in the callback and it just said [self.fliteController say:@"hi there" withVoice:self.slt];. But if your goal is to speak a continuous statement in a few parts, there are several ways to handle it. You could, for instance, use a switch or a series of if/else statements that track what has already been said and what to say next (fine if the list is very short), or you could put your NSStrings into a mutable array and work through them one at a time, removing each one once it has been spoken, or track what is next to do in some other way.
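As a concrete illustration of the mutable-array approach, here is a minimal sketch. It assumes a fliteController and slt voice property like the ones in your snippet, a phraseQueue property you add yourself, and a speech-completion delegate callback (I’ve used the name fliteDidFinishSpeaking here; check the exact callback name against your OpenEars version):

```objc
// Assumed to exist in your class extension or interface:
// @property (strong, nonatomic) NSMutableArray *phraseQueue;

- (void)startSpeaking {
    // Load the parts of the long statement into a queue.
    self.phraseQueue = [NSMutableArray arrayWithObjects:
        @"This is the first part.",
        @"This is the second part.",
        @"And this is the last part.", nil];
    [self speakNextPhrase];
}

- (void)speakNextPhrase {
    if ([self.phraseQueue count] == 0) {
        return; // Nothing left to say, so the callback chain stops here.
    }
    NSString *next = [self.phraseQueue objectAtIndex:0];
    [self.phraseQueue removeObjectAtIndex:0]; // Remove it so it isn't repeated.
    [self.fliteController say:next withVoice:self.slt];
}

// Speech-completion callback (name is an assumption; use the delegate
// method your OpenEars version actually provides). Because speakNextPhrase
// checks the queue first, this does not loop forever once the array is empty.
- (void)fliteDidFinishSpeaking {
    [self speakNextPhrase];
}
```

The queue check at the top of speakNextPhrase is what prevents the endless-loop problem you’re asking about: the callback always fires, but once the array is empty it simply does nothing.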
NeatSpeech does this kind of queueing and tracking of long and new statements automatically, but it still uses callbacks for some things, and logic-design requirements like this will still come up in other areas of your app.