Error when integrating the NeatSpeech demo

#14891
appcontrol
Participant

    Hi there,

I have a project with OpenEars (1.2.4) integrated into it, for speech synthesis only. When I try to replace the SLT voice with one of the demo voices, the app crashes at the first synthesis with the following message in the debug console: “Error: flite_hts_engine: specify models(trees) for each parameter.”

    Did I miss something in the installation document?

    Thanks,
    Vincent.

    #14892
    Halle Winkler
    Politepix

    Hi,

    Can you show the code you used? It just sounds a bit like the Emma voice (or whichever voice) is not instantiated at the time you are calling it.

    #14893
    Halle Winkler
    Politepix

    You can also contact me through the contact form and I’ll give you an address to email your code or project to if you want to use your free support email.

    #14894
    appcontrol
    Participant

    Hi,

Thanks for the swift reaction; this is really good.

Here’s a code extract; the crash occurs on the line where stEmma (a static variable) is initialized.

    Thanks,
    Vincent

    ==========

+ (void)sayText:(NSString *)text {

    if (stFliteController == nil) {
        stFliteController = [[FliteController alloc] init];
    }

    if (stEmma == nil) {
        stEmma = [[Emma alloc] initWithPitch:0.0f speed:0.0f transform:0.0f]; // <-- crash happens here
    }

    if (theQueue == nil) {
        theQueue = [[NSMutableArray alloc] initWithCapacity:10];
    }

    if ([stFliteController speechInProgress]) {
        // NSLog(@"AudioSingleton sayText:%@ --> Will queue it", text);
        [theQueue addObject:text];
    }
    else {
        // NSLog(@"AudioSingleton sayText:%@ --> will say it", text);
        [stFliteController sayWithNeatSpeech:text withVoice:stEmma];
    }
}

    #14895
    Halle Winkler
    Politepix

OK, I see a few issues. The first is that the tutorial gives an example of how to do the memory management for both FliteController and the FliteController+NeatSpeech voices, and it’s a good idea to use it since it avoids memory-management issues. Here the initialization occurs inside a class method of a shared object, which suggests a few ways it could be going wrong. There’s also no need to wrap NeatSpeech in a singleton or do your own queueing: NeatSpeech manages its own queue internally, it is multithreaded, and it expects to be instantiated in a view controller, not in a singleton whose thread we don’t know.

    I would just set it up like the tutorial example:

    https://www.politepix.com/openears/tutorial

    #14896
    Halle Winkler
    Politepix

Just to explain a bit more about the internal queueing: you can send text to sayWithNeatSpeech: whenever you want, and if speech is currently in progress the new text will be queued behind the scenes and spoken when the previously queued speech is done. Or you can send a single very large piece of text and NeatSpeech will break it down and queue it up on its own. You can also dump the queue. It’s built on the assumption that you will need to queue, and it manages the whole process of putting synthesis on a secondary thread while keeping the results delivered by OpenEarsEventsObserver on mainThread.
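
For example, with the internal queue the call site can stay this simple (a sketch using the lazily instantiated fliteController and emma properties from the tutorial; longDocumentText is a placeholder variable, not part of the API):

// Back-to-back calls are fine; NeatSpeech queues the second utterance behind
// the first, so there is no need to check speechInProgress or keep your own
// NSMutableArray of pending text.
[self.fliteController sayWithNeatSpeech:@"First chunk of text." withVoice:self.emma];
[self.fliteController sayWithNeatSpeech:@"Second chunk, spoken when the first is done." withVoice:self.emma];

// A single large string also works; NeatSpeech breaks it down and queues it itself.
[self.fliteController sayWithNeatSpeech:longDocumentText withVoice:self.emma];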

    #14897
    appcontrol
    Participant

    Thanks for your answers.

The singleton is there because of the way the app uses sound, not to encapsulate NeatSpeech. And this works with the Slt voice, so I hoped it would be easy to switch to NeatSpeech.

Regarding the queuing, it’s in place because Slt and the previous sound mechanism I used couldn’t handle it, and the text comes in chunks because of the nature of the app.

    I did not expect such a difference between OpenEars and NeatSpeech, but I’ll test further, and let you know if I need more support.

    Vincent.

    #14901
    appcontrol
    Participant

    So, at this point I have something equivalent to the tutorial working, and by the way, the NeatSpeech voices are really good compared to the free ones…

But I’m still struggling to integrate it into my application, even in one of the many view controllers I have. Does NeatSpeech assume that the objects will be properties of the root view controller, or can it be any VC?

    Thanks,
    Vincent

    #14902
    Halle Winkler
    Politepix

Any VC. They can also be instantiated in a model that is controlled by a VC without any multithreading; the only reason I say to put them in a VC is that they should be on mainThread, and not in a singleton but in something which has a particular location in the view hierarchy and is memory managed normally.

How are you struggling? Are you instantiating the voice and the fliteController in the emma and fliteController lazy instantiation methods shown in the tutorial, and then referencing them with self., as in:

[self.fliteController sayWithNeatSpeech:@"I have always wished for my computer to be as easy to use as my telephone; my wish has come true because I can no longer figure out how to use my telephone." withVoice:self.emma];

    ?

    I’m here to help, just let me know what the hangup is and I’m sure we can figure it out.

    #14903
    Halle Winkler
    Politepix

    This would be the lazy instantiation approach:

    1. Make sure you’ve imported FliteController+NeatSpeech.h in the VC header after the import of FliteController.h,
    2. Create an ivar and property of the voice and of the FliteController in the VC header, synthesize both in the VC implementation, and for each, override their accessor method with the following lazy accessors:

- (Emma *)emma {
    if (emma == nil) {
        emma = [[Emma alloc] initWithPitch:0.0 speed:0.0 transform:0.0];
    }
    return emma;
}

- (FliteController *)fliteController {
    if (fliteController == nil) {
        fliteController = [[FliteController alloc] init];
    }
    return fliteController;
}
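
For completeness, the declarations from step 2 might look something like this (a sketch in the pre-ARC style of that era; MyViewController and the Emma.h import are placeholder assumptions, not taken verbatim from the tutorial):

// MyViewController.h
#import <UIKit/UIKit.h>
#import "FliteController.h"
#import "FliteController+NeatSpeech.h" // imported after FliteController.h, per step 1
#import "Emma.h" // placeholder; import the header for whichever voice you use

@interface MyViewController : UIViewController {
    Emma *emma;
    FliteController *fliteController;
}
@property (nonatomic, retain) Emma *emma;
@property (nonatomic, retain) FliteController *fliteController;
@end

// MyViewController.m
@synthesize emma;
@synthesize fliteController;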

Then you never initialize either one yourself or check whether they are instantiated, and you don’t have to queue; you just reference them like so:

[self.fliteController sayWithNeatSpeech:@"I have always wished for my computer to be as easy to use as my telephone; my wish has come true because I can no longer figure out how to use my telephone." withVoice:self.emma];

    Also, just for sanity, double-check that you’ve added the -ObjC other linker flag to the target.

    #14904
    Halle Winkler
    Politepix

    by the way, the NeatSpeech voices are really good compared to the free ones…

    And thanks for this! Very nice to hear.

    #14905
    appcontrol
    Participant

    O-keeee, I got it working now :-)

The problem was that I have a single Xcode project with several targets, and when I copied the Voices folder into my project, it got added to only one target, so the voice files were missing from that target’s bundle resources (Build Phases -> Copy Bundle Resources).

    So the problem had nothing to do with architecture, singleton or VCs.
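
For anyone hitting the same thing: a quick runtime sanity check is to ask the main bundle for one of the voice files before synthesizing (a sketch; the resource name and extension here are placeholders for whatever the Voices folder actually contains):

// Placeholder resource name/type; substitute a real file from the Voices folder.
NSString *voicePath = [[NSBundle mainBundle] pathForResource:@"Emma" ofType:@"dat"];
if (voicePath == nil) {
    NSLog(@"Voice data is missing from this target's Copy Bundle Resources.");
}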

Thanks for your good support, and expect an order for the full version next week; but for now, time for some more testing, and… party.

    With kind regards,
    Vincent

    #14906
    Halle Winkler
    Politepix

    Fantastic! Glad it’s working for you and enjoy the party.
