August 27, 2013 at 12:52 pm #1018148
As of the OpenEars Platform 1.5 release, OpenEars uses a much nicer bundle system for accessing alternate acoustic models in the same app. OpenEars ships with English and Spanish acoustic models, but you may want to use (or may already be using) a different acoustic model, and you might be wondering how to package it the same way so that you can reference it with the new AcousticModel class. Here is how to do it.
Step 1. In Xcode, create a new project. When choosing the project type, go to the “OS X” section of the sidebar, choose “Framework & Library”, and then select “Bundle”. When naming it, I recommend sticking with the OpenEars naming convention for acoustic models: something like AcousticModelCustom, AcousticModelChinese, or AcousticModelRussian, whatever best describes your model, so you can easily reference it in your app. Don’t use AcousticModelEnglish or AcousticModelSpanish, since OpenEars already uses those names. In this example we’ll call it AcousticModelCustom. The framework “Core Foundation” should be selected at the bottom of the dialog:
Step 2. Select the project target and open its Build Settings. For the setting “Base SDK”, select “Latest iOS”; then, under “Architectures”, select “Standard”. That’s it: the bundle is now an iOS bundle and is ready for your acoustic model files.
Step 3. Drag your desired acoustic model files into your bundle project, making sure to add them to the bundle target. For instance, if you are creating an acoustic model bundle for this Mandarin Chinese acoustic model: http://sourceforge.net/projects/cmusphinx/files/Acoustic%20and%20Language%20Models/Mandarin%20Broadcast%20News%20acoustic%20models/
You will add the acoustic model files contained in the download; for a typical CMU Sphinx acoustic model these are files such as mdef, means, variances, transition_matrices, feat.params, noisedict, and sendump.
In the case of the Mandarin model there is another file in there called mixture_weights, and other acoustic models may have such a file as well. Usually you do not add it: if the model contains a file called sendump, do not include mixture_weights, since it will make your app unnecessarily large; just add sendump. Only if the model has no sendump file should you include mixture_weights.
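If you are scripting your bundle preparation, the sendump/mixture_weights rule above can be sketched in a small Foundation tool like this (the model path is hypothetical; this is only an illustration of the rule, not part of OpenEars):

```objc
#import <Foundation/Foundation.h>

int main(int argc, const char *argv[]) {
    @autoreleasepool {
        // Hypothetical path to the unpacked acoustic model download.
        NSString *modelDir = @"/path/to/unpacked/acoustic/model";
        NSFileManager *fileManager = [NSFileManager defaultManager];
        BOOL hasSendump = [fileManager fileExistsAtPath:
            [modelDir stringByAppendingPathComponent:@"sendump"]];
        // Rule from above: prefer sendump; fall back to mixture_weights
        // only when sendump is absent, to keep the app small.
        NSString *statsFile = hasSendump ? @"sendump" : @"mixture_weights";
        NSLog(@"Add %@ to the bundle target.", statsFile);
    }
    return 0;
}
```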
OK, that’s basically it. Build the bundle, and in the Finder open it as a package to verify that your acoustic model files are inside. At that point you can drag the product “AcousticModelCustom.bundle” into your OpenEars app project and reference it in the methods that require an acoustic model path as follows: [AcousticModel pathToModel:@"AcousticModelCustom"]. It should happily coexist in mainBundle alongside the other OpenEars acoustic models. You will still need to make your own arrangements for creating your ARPA/DMP or JSGF file and phonetic dictionary, since OpenEars can currently only dynamically generate language models for the acoustic models it ships with. Most available acoustic models have a corresponding general language model (with the extension .lm, .DMP, .arpa, or perhaps .gram or .jsgf) and a .dic file that you can use as a starting point for creating your own language models.
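In context, that call might look like the following sketch, assuming the OpenEars 1.x startListening signature and language model and dictionary files you have created yourself (the pocketsphinxController property and the two paths are placeholders from your own app):

```objc
// Sketch: starting listening with the custom acoustic model bundle.
// languageModelPath and dictionaryPath point to files you created yourself.
NSString *acousticModelPath = [AcousticModel pathToModel:@"AcousticModelCustom"];
[self.pocketsphinxController startListeningWithLanguageModelAtPath:languageModelPath
                                                  dictionaryAtPath:dictionaryPath
                                               acousticModelAtPath:acousticModelPath
                                               languageModelIsJSGF:NO];
```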
That should be it for most developers, but there are a couple of rare cases:
For a custom acoustic model, you can’t use LanguageModelGenerator except in the unusual case that the model uses exactly the same phonemes as the English or Spanish acoustic models that ship with OpenEars. This is unlikely, and if it is the case you probably already know it. In this very specific situation you will also want to add a basic lookup dictionary to your acoustic model bundle: if your model is English, add LanguageModelGeneratorLookupList.txt from the default English acoustic model; if it is Spanish, add the one from the Spanish model.
Another uncommon case is when you aren’t using an alternate acoustic model but you want to use a custom language model generator lookup dictionary for phonetic dictionary generation. (This used to be handled with the LanguageModelGenerator property dictionaryPathAsString, which has since been removed because it made multi-language support extremely difficult.) Name the custom phonetic dictionary file LanguageModelGeneratorLookupList.txt and put it wherever you want: in mainBundle if it doesn’t need to be writeable, or copied to Library/Caches if you intend to write to it. Then, when calling generateLanguageModelFromArray:withFilesNamed:forAcousticModelAtPath:, pass the path to that directory as an NSString and the file will be found automatically by the method.
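As a sketch of that call, assuming your custom LanguageModelGeneratorLookupList.txt lives in Library/Caches (the vocabulary and the output file name here are just examples):

```objc
// Sketch: generating a language model with a custom lookup list.
LanguageModelGenerator *generator = [[LanguageModelGenerator alloc] init];
NSArray *words = @[@"HELLO", @"WORLD"]; // example vocabulary
NSString *cachesDirectory = [NSSearchPathForDirectoriesInDomains(
    NSCachesDirectory, NSUserDomainMask, YES) objectAtIndex:0];
// cachesDirectory contains your LanguageModelGeneratorLookupList.txt,
// so its path is passed where an acoustic model path is expected.
NSError *error = [generator generateLanguageModelFromArray:words
                                            withFilesNamed:@"MyLanguageModel"
                                    forAcousticModelAtPath:cachesDirectory];
if (error) NSLog(@"Language model generation error: %@", error);
```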
Please feel free to ask for help with this if you encounter any issues.