NeatSpeech: Better Voices For OpenEars

Introducing NeatSpeech!

NeatSpeech is a plugin for OpenEars which adds improved voices that sound better and speak continuously with almost no lag, even on extremely long statements, and with no network connection needed. NeatSpeech handles all speech queuing and multithreading for you, so you can concentrate on making fantastic apps. Transparently priced for your peace of mind.

Introduction, Installation and Basic Concepts

Introduction

NeatSpeech is a plugin for OpenEars® that adds fast, higher-quality speech synthesis (TTS) including multithreaded speech queueing to speak very long phrases without lag or blocking. It uses HTS voices which are ideal for mobile applications due to their diminutive file size, great processing speed and improved speech quality over the default Flite voices. All TTS is performed offline so absolutely no network connection is required.

NeatSpeech supports US English, UK English, Latin American Spanish and Castilian Spanish, in each case with both female and male voices. There are two voice variations for US English.

Installation

How to install and use NeatSpeech:

NeatSpeech is a plugin for OpenEars, so it is added to an already-working OpenEars project in order to enable new OpenEars features. In these instructions we are using the OpenEars sample app as an example for adding the plugin and new features, but the steps are basically the same for any app that already has a working OpenEars installation. Please note that NeatSpeech requires OpenEars 2.5 or greater.

  1. Download the OpenEars distribution and try out the OpenEars sample app. NeatSpeech is a plug-in for OpenEars that is added to an OpenEars project, so you first need a known-working OpenEars app to work with; the OpenEars sample app is fine for getting started. You can also get a complete tutorial on both creating an OpenEars app and adding NeatSpeech to it using the automatic customized tutorial.

  2. Open up the OpenEars sample app in Xcode. Drag your downloaded NeatSpeechDemo.framework into the OpenEars sample app's project file navigator, along with the entire contents of the Voices folder (you can later remove voice content you aren't using, although the extra voices aren't large).

  3. Open up the Build Settings tab of your app or OpenEarsSampleApp target, find the entry "Other Linker Flags", and add the linker flag "-ObjC". Do this for both debug and release builds. This step is explained further in the live recognition tutorial, which also shows exactly how to use the new methods added by NeatSpeech.

  4. Next, navigate to Xcode's Build Settings for your target and find the setting "Framework Search Paths".

  5. If adding the framework in the previous step did not automatically add it to "Framework Search Paths", add it manually. You can find the path by going into the Project Navigator (the main Xcode project file view), selecting your just-added NeatSpeechDemo.framework, and typing ⌥⌘1 to open the File Inspector for it (it may already be open; it is the narrow pane on the far right of the main Xcode interface). The full path to your added framework is shown under "Identity and Type"->"Full Path". The "Framework Search Path" is this path minus the last path element, so if it says /Users/yourname/Documents/YourApp/Resources/Framework/NeatSpeechDemo.framework, the path to add to "Framework Search Paths" is /Users/yourname/Documents/YourApp/Resources/Framework/ and you should keep the "Recursive" checkbox unchecked.

  6. While you're here, take a moment to look at your Framework Search Paths build setting and verify that it doesn't contain any peculiar entries (for instance, entries with many extra quotes or backslashed quote marks), that each search path is on its own line and hasn't been concatenated to another entry, and that the setting isn't pointing to old versions of the frameworks you're installing that are in other locations.

  7. Also verify that the Framework Search Paths are present for any voices you have added.

  8. For the last step, change the name of the implementation source file in which you are going to call NeatSpeech methods from .m to .mm (for instance, if the implementation is named ViewController.m, change its name to ViewController.mm and verify in the Finder that the name of the file has changed), and then make sure that in your target Build Settings, under the section "C++ Standard Library", the setting "libstdc++ (GNU C++ standard library)" is selected. If you receive errors like "Undefined symbols for architecture i386: std::basic_ios<char, std::char_traits<char> >::widen(char) const", this step needs special attention.

Support

With your demo download you can receive support via the forums, according to their rules of conduct. To receive forum support, you must have provided accurate information with your initial demo download, such as a valid email address and your name.

Once you have completed licensing of the framework for your app, forum support will continue to be available to you, and if you need private support via email it is possible to purchase a support contract or individual support incidents at the shop.

Licensing the framework requires giving the exact application name that the framework will be linked to, so don't purchase the license until you know the app name, and again, please try the demo first. It is not possible to change bundle IDs after a purchase, and there are no refunds post-purchase, since the comprehensive demo can be fully tested over a complete development period.

Please read on for the NeatSpeech documentation.


OEEventsObserver+NeatSpeech Category Reference

Detailed Description

The NeatSpeech plugin's extensions to OEEventsObserver.

Usage examples

What to add to your implementation:

At the top of your header, after the line
#import <OpenEars/OEEventsObserver.h>
add the line
#import <NeatSpeechDemo/OEEventsObserver+NeatSpeech.h>
Then, after this OEEventsObserver delegate method that you added to your implementation when setting up your OpenEars app:
- (void) testRecognitionCompleted {
	NSLog(@"A test file that was submitted for recognition is now complete.");
}
just add the following extended delegate methods:
- (void) neatSpeechWillSay:(NSString *)statement {
    NSLog(@"neatSpeechWillSay %@",statement);
}

- (void) NeatSpeechFliteControllerIsPrimedForVoice:(NSString *)voiceName {
    NSLog(@"NeatSpeechFliteControllerIsPrimedForVoice %@", voiceName); 
}

Warning
Any OEEventsObserver you use in a view controller or other object must be a property of that object, or it won't work.
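As a sketch of the requirement above (the class and property names other than OEEventsObserver are illustrative, and this assumes an already-working OpenEars setup):

```objc
// ViewController.mm
#import <OpenEars/OEEventsObserver.h>
#import <NeatSpeechDemo/OEEventsObserver+NeatSpeech.h>

@interface ViewController () <OEEventsObserverDelegate>
// The observer must be a strong property of this object, not a local variable,
// or its delegate callbacks will never arrive.
@property (strong, nonatomic) OEEventsObserver *openEarsEventsObserver;
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    self.openEarsEventsObserver = [[OEEventsObserver alloc] init];
    [self.openEarsEventsObserver setDelegate:self];
}

- (void)neatSpeechWillSay:(NSString *)statement {
    NSLog(@"About to say: %@", statement); // e.g. update a label here
}

@end
```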

Method Documentation

- (void) neatSpeechWillSay:(NSString *) statement

An OEEventsObserver delegate callback that tells you which statement or sub-statement NeatSpeech is about to say. This is primarily intended to let you design a UI which also indicates the spoken statement visually at the time it is being said. Swift 3: neatSpeechWillSay(_ statement: String!)

- (void) NeatSpeechFliteControllerIsPrimedForVoice:(NSString *) voiceName

An OEEventsObserver delegate callback that tells you when you have successfully primed a voice using the OEFliteController+Neatspeech method primeNeatSpeechFliteControllerForVoice. This is primarily intended to let you design a UI indicating to the user that the voice is setting up. Swift 3: neatSpeechFliteControllerIsPrimed(forVoice voiceName: String!)


OEFliteController+NeatSpeech Category Reference

Detailed Description

A plugin which adds the ability to do higher-quality multithreaded speech to OEFliteController.

Usage examples

Preparing to use the class:

OEFliteController+NeatSpeech preconditions

In order to use NeatSpeech, as well as importing the framework into your OpenEars-enabled project, it is also necessary to import the voices and voice data files by dragging the "Voices" folder in the disk image into your app project (once your app is working, you can read more here about how to remove the elements you don't need in order to keep your app binary small).

Very important: when you drag in the voices and framework folders, make sure that in Xcode's "Add" dialog, "Create groups for any added folders" is selected. Make sure that "Create folder references for any added folders" is not selected or your app will not work.

For the last step, change the name of the implementation source file in which you are going to call NeatSpeech methods from .m to .mm (for instance, if the implementation is named ViewController.m, change its name to ViewController.mm and verify in the Finder that the name of the file has changed) and then make sure that in your target Build Settings, under the section "C++ Standard Library", the setting "libstdc++ (Gnu C++ standard library)" is selected.

If you receive errors like "Undefined symbols for architecture i386: std::basic_ios<char, std::char_traits<char> >::widen(char) const", that means that this step needs special attention.

What to add to your implementation:

OEFliteController+NeatSpeech implementation

OEFliteController+NeatSpeech simply replaces OEFliteController's voice type with the advanced NeatSpeech voice type, and it replaces OEFliteController's say:withVoice: method with NeatSpeech's sayWithNeatSpeech:withVoice: method.

In your header replace this:

#import <Slt/Slt.h>
#import <OpenEars/OEFliteController.h>
with this:
#import <Emma/Emma.h>
#import <OpenEars/OEFliteController.h>
#import <NeatSpeechDemo/OEFliteController+NeatSpeech.h>
and replace this:
Slt *slt;
with this:
Emma *emma;
and replace this:
@property (strong, nonatomic) Slt *slt;
with this:
@property (strong, nonatomic) Emma *emma;
and in your implementation, replace this:
self.slt = [[Slt alloc] init];
with this:
self.emma = [[Emma alloc] initWithPitch:0.0 speed:0.0 transform:0.0];
and replace this:
[self.fliteController say:@"A short statement" withVoice:self.slt];
with this:
[self.fliteController sayWithNeatSpeech:@"Alice was getting very tired of sitting beside her sister on the bank, and having nothing to do: once or twice she had peeped into the book her sister was reading, but it had no pictures or conversations in it, and what is the use of a book, thought Alice, without pictures or conversations?" withVoice:self.emma];
And replace any other calls to say:withVoice: with sayWithNeatSpeech:withVoice:. Once this is definitely working, you can remove the Slt or other Flite voice frameworks from your app to reduce app size. You can also replace references to the Emma framework and object with any of the other voices to try them out.


The available voice frameworks you'll find in the Voices folder in the distribution are as follows:

Emma (US English, female)
EmmaAlternate (US English, female)
William (US English, male)
WilliamAlternate (US English, male)
Beatrice (UK English, female)
Elliott (UK English, male)
Daniel (Castilian Spanish, male)
Martina (Castilian Spanish, female)
Mateo (Latin American Spanish, male)
Valeria (Latin American Spanish, female)
You can also change the speaking speed, pitch, and inflection of each voice using the voice's initializer arguments speed, pitch and transform respectively. As an example, to initialize the Emma voice with a higher pitch you could use the following initialization:
Emma *emma = [[Emma alloc] initWithPitch:0.2 speed:0.0 transform:0.0];

Once you know how your project is to be configured you can remove the unused voices following these instructions in order to make your app binary size as small as possible.

You can pass the sayWithNeatSpeech:withVoice: method as much data as you want at a time. It will process the speech in phases in the background and return it for playback once it is ready. This means that you should rarely experience long pauses while waiting for synthesis, even for very long paragraphs. Very long statements need to include pause indicators such as periods, exclamation points, question marks, commas, colons, semicolons, etc.

To interrupt ongoing speech while it is in progress, send the message [self.fliteController stopSpeaking];. This will not interrupt speech instantaneously, but will halt it at the next available opportunity.
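A minimal sketch of queueing a long passage and then interrupting it, assuming a working fliteController property and an initialized emma voice as in the snippets above:

```objc
// Queue a long passage; NeatSpeech synthesizes it in phases in the background
// and streams it to playback, so even long paragraphs start quickly.
// Note the pause indicators (periods, commas), which long statements need.
[self.fliteController sayWithNeatSpeech:@"First sentence of a long passage. "
                                        @"Second sentence, synthesized in the "
                                        @"background while the first one plays."
                              withVoice:self.emma];

// Later, for instance in a button handler, halt speech at the next opportunity:
[self.fliteController stopSpeaking];
```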

Warning
There can only be one OEFliteController+NeatSpeech instance in your app.

Method Documentation

- (void) sayWithNeatSpeech:(NSString *) statement
withVoice:(NeatSpeechVoice *) voiceToUse 

Swift 3:

say(withNeatSpeech: String!, with: NeatSpeechVoice!)

Say a word, phrase or paragraph, using a voice which you have already instantiated. Speech will be processed in the background and streamed back into the output, so feel free to send as much speech at once as you want. You can put in phrase separators (this can improve performance for very long sentences) by inserting the text token "####" and you can insert one pause at an arbitrary interval by inserting the text token "##PAUSE##"
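For instance, using the tokens described above (the fliteController and emma names are as in the earlier snippets):

```objc
// "####" marks a phrase boundary, which can improve performance in a very
// long sentence; "##PAUSE##" inserts a single pause at an arbitrary point.
[self.fliteController sayWithNeatSpeech:@"Here is the first phrase #### and "
                                        @"here is a long continuation of it. "
                                        @"##PAUSE## And here is what follows "
                                        @"the pause."
                              withVoice:self.emma];
```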

- (void) stopSpeaking

This will stop speech at the next available opportunity. Swift 3: stopSpeaking

- (void) stopSpeakingAfterCurrentItemInQueue

Stop all speech once the current item in the queue is complete. Effectively, says "don't continue with the queue after current speech is complete." Swift 3: stopSpeakingAfterCurrentItemInQueue

- (void) primeNeatSpeechFliteControllerForVoice:(NeatSpeechVoice *) voice

Use this to set up a voice for playback ahead of time. Useful for displaying a "setting up" activity monitor or similar UI to give the user feedback. Reports success via the OEEventsObserver+NeatSpeech delegate method NeatSpeechFliteControllerIsPrimedForVoice:. Swift 3: primeNeatSpeechFliteController(for: NeatSpeechVoice!)
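A sketch of the priming flow described above (the activity indicator property and action method are illustrative; fliteController and emma are as in the earlier snippets):

```objc
// Ask NeatSpeech to set up the voice ahead of time...
- (IBAction)prepareVoice:(id)sender {
    [self.activityIndicator startAnimating]; // hypothetical "setting up" UI
    [self.fliteController primeNeatSpeechFliteControllerForVoice:self.emma];
}

// ...and dismiss the UI when the OEEventsObserver delegate reports success.
- (void)NeatSpeechFliteControllerIsPrimedForVoice:(NSString *)voiceName {
    [self.activityIndicator stopAnimating];
    NSLog(@"Voice %@ is ready for playback.", voiceName);
}
```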


NeatSpeechVoice Class Reference

Detailed Description

The NeatSpeech Voice superclass.

Method Documentation

- (id) initWithPitch:(float) pitch
speed:(float) speed
transform:(float) transform 

Swift 3:

init(pitch: Number!, speed: Number!, transform: Number!)

The designated initializer for any NeatSpeech voice (such as Emma, William, Beatrice or Elliott). To use the default settings for the voice, set pitch, speed and transform to 0.0. To change the pitch (highness or lowness of the voice register), speed (duration of speech) or transformation, use a scale of -1.0 <–> 1.0. For instance, setting pitch to 0.3 will make a higher voice and setting it to -0.3 will make a lower voice; small values have a big impact. In order to use OEFliteController+NeatSpeech's sayWithNeatSpeech:withVoice: method it is necessary to have an initialized voice that you can pass to withVoice:. An example of initializing the Emma voice to pass to this method:
Emma *emma = [[Emma alloc] initWithPitch:0.0 speed:0.0 transform:0.0];
after which you could send the message:
[self.fliteController sayWithNeatSpeech:@"My statement" withVoice:emma];
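As a sketch of the -1.0 to 1.0 scale described above, using the documented William voice (the specific parameter values are illustrative):

```objc
// Defaults for William: 0.0 on every axis.
William *william = [[William alloc] initWithPitch:0.0 speed:0.0 transform:0.0];

// A slightly lower, slower William; remember that small values
// have a big impact, so stay close to 0.0 at first.
William *lowSlowWilliam = [[William alloc] initWithPitch:-0.2
                                                   speed:-0.1
                                               transform:0.0];
```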


The demo is identical to the licensed version, but times out after a few minutes.


OpenEars™ Plugins


RapidEars

RapidEars is a paid plugin for OpenEars™ that lets you perform live recognition on in-progress speech for times that you can't wait for the user to pause! Try out the RapidEars demo free of charge.

Rejecto

Rejecto is a paid plugin for OpenEars™ that improves accuracy and UX by letting OpenEars™ ignore utterances of words that aren't in its vocabulary. Try out the Rejecto demo free of charge.

RuleORama

Did you know that the free version of OpenEars™ can perform recognition of fixed phrases using rules-based grammars? RuleORama is a paid plugin that lets you use the same grammar format as stock OpenEars™, but with grammars fast enough to work with RapidEars. Try out the RuleORama demo free of charge.

NeatSpeech

NeatSpeech is a plugin for OpenEars™ that lets it do fast, high-quality offline speech synthesis that is compatible with iOS 6.1, and even lets you edit the pronunciations of words! Try out the NeatSpeech demo free of charge.


Learn more about the OpenEars™ Plugin Platform


Help with OpenEars™

There is free public support for OpenEars™ in the OpenEars Forums, and you can also purchase private email support at the Politepix Shop. Most OpenEars™ questions are answered in the OpenEars support FAQ.