Frequently Asked Questions/Support

If you are having trouble with some aspect of using OpenEars, and carefully re-reading the documents and examining the example app hasn't helped, you can ask a question in the OpenEars forum. Please turn on OpenEarsLogging (and, if the issue relates to recognition, PocketsphinxController's verbosePocketSphinx property) before posting an issue, so that there is some information to troubleshoot from. The forum is a place to ask questions free of charge; free private email support is not given for OpenEars, but you can purchase a support incident if you would like to discuss a question via private email.
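As a minimal sketch of turning both on (OpenEars 1.x-era class and property names assumed; check the headers of your copy, since exact spellings can vary between versions):

    // Turn on OpenEars' general logging as early as possible, e.g. before starting listening:
    [OpenEarsLogging startOpenEarsLogging];

    // For recognition issues, also turn on Pocketsphinx's verbose output:
    self.pocketsphinxController.verbosePocketSphinx = TRUE;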

FAQ

Q: I’d like to recognize exact phrases or exact words, or define a rules-based grammar for recognition. Can I do this with OpenEars or the plugins?

A: Yes. Regular OpenEars has a new API for dynamically generating rules-based grammars at runtime, which is the best way to identify fixed phrases with the words in a certain order. If you need grammars with faster response times than JSGF, or grammars that are compatible with RapidEars, you can also try the new plugin RuleORama, which uses the same API but outputs a format that is as fast to recognize as OpenEars' language models and is compatible with RapidEars.
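As an illustrative sketch only (method and key names follow the OpenEars 1.7-era grammar documentation and are an assumption here; check your version's LanguageModelGenerator header), a grammar requiring "GO" followed by exactly one color might be generated like this:

    // Hypothetical rule: the user says "GO", then exactly one of three colors.
    NSDictionary *grammar = @{
        ThisWillBeSaidOnce : @[
            @{ ThisWillBeSaidOnce : @[@"GO"] },
            @{ OneOfTheseWillBeSaidOnce : @[@"RED", @"GREEN", @"BLUE"] }
        ]
    };

    LanguageModelGenerator *generator = [[LanguageModelGenerator alloc] init];
    NSError *error = [generator generateGrammarFromDictionary:grammar
                                               withFilesNamed:@"MyGrammar"
                                       forAcousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]];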

Q: OpenEars recognizes noises as words and I want to reduce this.

A: Rejecto is designed to deal with this issue, which affects all speech recognition. Before trying Rejecto out, please make sure you aren't testing on the Simulator, since the problem of noise being recognized as speech is much worse on the Simulator than on the real devices your users will use.

Q: I’m having an issue with Rejecto, or with an app that Rejecto is added to.

A: Please make sure you are using OpenEars version 1.66 or newer.

Q: I followed the tutorial and I'm sure that I did every step, but I'm getting an error similar to "'Slt/Slt.h' file not found".

A: Xcode 5 with a build number of 5A1413 or later has a bug which causes frameworks linked by reference to be linked at incorrect URL paths. When you add the frameworks, it is therefore necessary to also check the box that says "Copy items into destination group's folder (if needed)"; otherwise you may receive errors that header files can't be found in frameworks which were already added. If the issue persists, take a look at your app's Framework Search Paths build setting, since that is the entry being changed into a non-working URL when frameworks are added. I hope this bug is fixed soon.

Q: I just tried the tutorial and PocketsphinxController didn’t understand the words that I said.

A: 95% of the time, this is either because you were saying words which aren't in the vocabulary that PocketsphinxController is listening for, meaning it has no way of recognizing them, or because you are testing recognition on the Simulator. Take a look at which words the app is listening for, test recognition of those words, and make sure to test on a real device. A misspelled word in your vocabulary array is also very damaging to recognition accuracy, since LanguageModelGenerator will not be able to look it up in the pronunciation dictionary and will have to make its best guess at the pronunciation, which may be different from what you or a user is actually saying to the device.

Q: But I want to write an app that uses different words from the ones in the sample app.

A: LanguageModelGenerator is the OpenEars class that lets you define which words to listen for; OpenEars works by creating a specific vocabulary and listening for it. The tutorial explains how to create your own vocabulary, and there are also examples of creating custom vocabularies in the sample app.
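As a sketch, creating a custom vocabulary looks something like this (1.7-era method names assumed; the words shown are placeholders for your own, correctly spelled, vocabulary):

    LanguageModelGenerator *generator = [[LanguageModelGenerator alloc] init];
    NSArray *words = @[@"HELLO", @"COMPUTER", @"GOODBYE"]; // your own words to listen for
    NSError *error = [generator generateLanguageModelFromArray:words
                                                withFilesNamed:@"MyVocabulary"
                                        forAcousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]];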

Q: I’m trying to use a sound framework like Finch or another OpenAL wrapper and things aren’t working as expected.

A: PocketsphinxController has very specific audio session and audio unit requirements and it can’t be run simultaneously with another framework which requires control over the audio session and audio input. If you want to run such a framework alongside FliteController and you don’t need PocketsphinxController, you can set FliteController’s noAudioSessionOverrides property to TRUE so it doesn’t interact with the audio session.
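For example (a sketch using the property named above, for an app doing text-to-speech only):

    // No PocketsphinxController in the app, so FliteController can leave the
    // audio session to your OpenAL wrapper:
    self.fliteController.noAudioSessionOverrides = TRUE;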

Q: When I license an OpenEars plugin (not OpenEars, one of its plugins), the license is for one app. Does that mean I need a license for each app user?

A: No, the license is for the app itself, so you need one license for one listing in the App Store. No matter how many users your app gets, it’s just one license needed, and Politepix hopes you get a whole lot.

Q: My app crashes when listening starts

A: This is almost always because the acoustic model wasn't successfully added to your app. If you turn on OpenEarsLogging and verbosePocketSphinx you can verify this by searching for an error with "acmod" in it. To fix it, follow these instructions from the tutorial: "Inside your downloaded OpenEars distribution there is a folder called "Frameworks". Drag that folder into your app project in Xcode. Make absolutely sure that in the add dialog "Create groups for any added folders" is selected and NOT "Create folder references for any added folders", because the wrong setting here will prevent your app from working." A missing acoustic model has been the cause of every crash report I've received in the last year with only two exceptions, so even if you're pretty sure that you added the acoustic model, double-check that the files found in the Frameworks folder are all present in your app. The only other known cause is passing a bad path for your language model (.lm, .languagemodel or .dmp), grammar (.gram or .jsgf) or phonetic dictionary (.dic), as illustrated below.
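To illustrate the path issue, here is a sketch of starting listening with explicitly constructed paths (1.7-era method names assumed, and the Caches location is an assumption based on where LanguageModelGenerator of that era wrote its output; adjust to your version):

    // If any of these files doesn't exist at its path, listening can crash at startup.
    NSString *cachesDir = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES) objectAtIndex:0];
    NSString *lmPath  = [cachesDir stringByAppendingPathComponent:@"MyVocabulary.languagemodel"];
    NSString *dicPath = [cachesDir stringByAppendingPathComponent:@"MyVocabulary.dic"];

    [self.pocketsphinxController startListeningWithLanguageModelAtPath:lmPath
                                                      dictionaryAtPath:dicPath
                                                   acousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]
                                                   languageModelIsJSGF:FALSE];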

Q: There is a bug on the Simulator/recognition isn’t good on the Simulator

A: OpenEars has a low-latency audio driver written with Audio Units which requires an audio session setting that isn't supported by the Simulator. Because it can be slow to debug app logic without using the Simulator, OpenEars has a fallback audio driver that is compatible with it. However, the fallback driver isn't as good as the device driver, and very little time has been spent debugging it, since it is only provided as a convenience. With that understanding, please don't evaluate OpenEars' accuracy or behavior based on the Simulator, since it uses a completely different audio driver, and please don't report Simulator-only bugs, since there is no way to fairly allocate resources toward fixing Simulator-only bugs when no users run apps on the Simulator.

Q: If I purchase RapidEars, will OpenEars be able to recognize anything a user says?

A: No, RapidEars does exactly the same small-vocabulary offline recognition that OpenEars does, but it does it in realtime on speech that is still in progress, rather than waiting for the user to pause for a second before beginning recognition. That's pretty cool, actually! Both OpenEars and RapidEars are recommended for vocabularies smaller than 1000 words, and they give the best results with vocabularies of around 100 words or fewer.

Q: I’m using RapidEars or OpenEars with an acoustic model that I made or downloaded elsewhere and I’m getting the following unexpected results…

A: Politepix can only support the acoustic models that it ships, since it can only test against these models.

Q: I’m getting a linker error with RapidEars, NeatSpeech, Rejecto, SaveThatWave, or another plugin — what should I do?

A: It is necessary to set the -ObjC linker flag for your target when using the plugins, and it is equally necessary that the -all_load linker flag is not set anywhere in your project (whether using the plugins or not). If this isn't the issue, it is otherwise always because the plugin requires a certain minimum version of OpenEars and an older version is being used, or an earlier version is still somehow linked into your project, so update to the current version of OpenEars. In the case of NeatSpeech, it is also necessary to give extra attention to this step from the instructions: "For the last step, change the name of the implementation source file in which you are going to call NeatSpeech methods from .m to .mm (for instance, if the implementation is named ViewController.m, change its name to ViewController.mm and verify in the Finder that the name of the file has changed) and then make sure that in your target Build Settings, under the section "C++ Standard Library", the setting "libstdc++ (Gnu C++ standard library)" is selected. If you receive errors like "Undefined symbols for architecture i386: std::basic_ios<char, std::char_traits<char> >::widen(char) const", that means that this step needs special attention."

Q: I have tried a fix for a known issue which others have been able to solve definitively, but it doesn’t work for me.

A: This is often solved by cleaning your project before testing again.

Q: What license does OpenEars use?

A: There are actually five libraries used by OpenEars-enabled projects, only one of which is the OpenEars framework itself; you can see the licenses (which are very liberal) for CMU Pocketsphinx, CMU Sphinxbase, CMU Flite and CMUCLMTK here. You need to observe the terms of those licenses in your app as well as the OpenEars one, which shouldn't be difficult since they are commercial-friendly licenses.

OpenEars is licensed under the Politepix Public License version 1.0. It gives you the right to use OpenEars to make apps. You have some obligations (such as crediting the libraries involved, including OpenEars, either in your app or on its web page), so please read the license.

Q: So I can use this in commercial, closed-source apps?

A: Yes.

Q: Can I or should I reference OpenEars in my support/marketing/etc materials?

A: I'd love it if you want to talk about OpenEars in your marketing! If you discuss it in your support documents, please do so in a way that doesn't cause any confusion for your end users about where to seek support (i.e. it must be clear that you are responsible for supporting your app) and that doesn't imply an endorsement of your app by Politepix or by any of the maintainers of the libraries that OpenEars links to.

Q: How can I trim down the size of the final binary for distribution?

A: There are instructions on doing this here.

Q: I thought that this version of OpenEars supported the -all_load linker flag, but I’m getting a duplicate symbol error when I use OpenEars with the flag enabled.

A: Starting with OpenEars and plugins version 1.64, the -all_load linker flag is no longer supported, and using it will prevent building. Any use of -all_load that another library requires can be replaced with -force_load plus a reference to that library only, and this has been possible since the early versions of Xcode 4, so there is no longer any reason at all to use -all_load.
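For instance, if another library's instructions call for -all_load, the narrower replacement in Other Linker Flags looks like this (libSomeOtherLibrary.a is a hypothetical name standing in for that library):

    Other Linker Flags (before):  -ObjC -all_load
    Other Linker Flags (after):   -ObjC -force_load $(PROJECT_DIR)/Libraries/libSomeOtherLibrary.a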

Q: Have any apps ever been rejected for using OpenEars?

A: I have never heard of an app being rejected for using OpenEars, and I wouldn't expect one to be, since I've taken care to make sure OpenEars doesn't do anything questionable, and where I've had any questions I've written Apple and asked for guidance directly. There is a very long list of apps using OpenEars that were (unsurprisingly) accepted. I have heard of two apps in the last three years being rejected that linked to OpenEars, but they were not rejected because they linked to OpenEars or because of anything related to OpenEars; it was because of other details of those apps that did not originate with OpenEars.

Something that is quite important as of iOS 7 for easy, painless app acceptance: when you obtain a device capability permission, it is necessary to make clear to the user what the permission is being used for. There can't be any "stealth" usage of a device capability that isn't transparent to the user. This is a great, positive development, since we want to be building a user-respecting, forthright platform where users have a basis for trusting their apps. In practice, this means that if you perform speech recognition and the user is asked to give microphone permission, there has to be some kind of explanation or indication in the app UI, description, or introductory text that speech recognition is performed in the app. If you ask for mic permission and then perform speech recognition with nothing in the UI to indicate that recognition is being performed, Apple will probably ask you to improve that so the user knows what the mic stream is used for.

OpenEars gives you UI hooks, such as the decibel levels of incoming and outgoing speech, so that it is easy for you to build such a UI. It isn't a UI framework, though, so questions about how best to show the user what is being done with the mic stream are outside the support given here; I just wanted to mention that this is something you need to consider for your app now that there is a permission system and an Apple UI guideline for use of capabilities with permission.
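As a sketch of those hooks (1.x property names assumed; per the documentation of that era these levels are meant to be polled off the main thread, as the sample app does):

    // Poll periodically and feed the values into a level meter or similar indicator,
    // so users can see that the mic is live and being used for recognition.
    float micLevel    = self.pocketsphinxController.pocketsphinxInputLevel; // level of incoming speech
    float speechLevel = self.fliteController.fliteOutputLevel;              // level of outgoing synthesized speech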

Q: I still have a question, how do I get more support?

A: You can always ask for help in the forums and I'll do my best to answer your question. Please turn on OpenEarsLogging and (if the issue relates to recognition) PocketsphinxController's verbosePocketSphinx property before posting, so I have some information to troubleshoot from. Free private email support is not given for OpenEars, but forum support is free, and you can purchase a support incident if you would like to discuss a question via private email. Other emails regarding OpenEars (i.e. not support requests) can be sent via the contact form.

Q: Can I hire you to create an OpenEars-enabled app for me or adapt OpenEars?

A: Sorry, this is no longer offered.

Q: Anything else?

A: Politepix would like to take this opportunity to thank the CMU Sphinx project for all of its excellent work, Nickolay Shmyrev very specifically for answering many questions from this project, and, additionally, Cody Brimhall for the Sscribe class from his project Silabas.


OpenEars Plugins


RapidEars

RapidEars is a paid plugin for OpenEars that lets you perform live recognition on in-progress speech, for times when you can't wait for the user to pause! Try out the RapidEars demo free of charge.

Rejecto

Rejecto is a paid plugin for OpenEars that improves accuracy and UX by letting OpenEars ignore utterances of words that aren't in its vocabulary. Try out the Rejecto demo free of charge.

RuleORama

Did you know that the free version of OpenEars can perform recognition of fixed phrases using rules-based grammars? RuleORama is a paid plugin that lets you use the same grammar format as stock OpenEars, but with grammars that are fast enough to work with RapidEars. Try out the RuleORama demo free of charge.

NeatSpeech

NeatSpeech is a plugin for OpenEars that lets it do fast, high-quality offline speech synthesis compatible with iOS 6.1, and it even lets you edit the pronunciations of words! Try out the NeatSpeech demo free of charge.


Learn more about the OpenEars Plugin Platform


Help with OpenEars

There is free public support for OpenEars in the OpenEars Forums, and you can also purchase private email support incidents at the Politepix Shop. Most OpenEars questions are answered in the OpenEars support FAQ.