Reply To: How to combine wave files generated by SaveThatWave?
I downloaded and installed the beta distribution, and I no longer see the VAD recalibration problem, so thank you for fixing that one.
On the other hand, the major problem in Rejecto is still there. I modified the test to use only one language model from the main bundle at a time. The two models were created separately, in two simulator runs, using the generateRejectingLanguageModelFromArray method as described in the documentation. I then built them into the main bundle and tested them one at a time on a real iPhone 5s. With both language models, I still see high CPU utilization when I say a medium or long sentence (5+ words) made up of unrelated, out-of-vocabulary words. The longer the sentence, the longer the CPU stays pinned at 100%.
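For reference, the per-model generation step I ran looks roughly like the sketch below. The base method name is straight from the Rejecto docs; the remaining selector arguments and the model/file names are from memory and may not match the exact headers in the current release, so treat them as illustrative:

```objc
// Sketch of how each model was generated (one per simulator run).
// Selector arguments after the array are reproduced from memory and
// may differ slightly from the shipped Rejecto headers.
LanguageModelGenerator *generator = [[LanguageModelGenerator alloc] init];
NSArray *words = [NSArray arrayWithObjects:@"WORD", @"STATEMENT", @"OTHERWORD", nil];
NSError *error = [generator generateRejectingLanguageModelFromArray:words
                                                     withFilesNamed:@"RejectoModelOne" // hypothetical name
                                             withOptionalExclusions:nil
                                                    usingVowelsOnly:FALSE
                                                         withWeight:nil
                                             forAcousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]];
if (error) {
    NSLog(@"Error generating language model: %@", error);
}
```

The resulting .languagemodel and .dic files from each run were then added to the main bundle and loaded one at a time on the device.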
Another question: I tried using

LanguageModelGenerator *languageModelGenerator = [[LanguageModelGenerator alloc] init];
[languageModelGenerator deliverRejectedSpeechInHypotheses:NO];
to eliminate the __REJ words from the hypothesis, but I still get them whether or not I use this call. The documentation indicates I should not need it, because by default rejected words are not supposed to be delivered. However, the hypothesis always contains them: with or without this call, and, when I do use it, regardless of whether the parameter is set to TRUE or FALSE.
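In the meantime I am stripping the rejected tokens out of the hypothesis myself. This is my own workaround, not anything from the docs, and it assumes the rejected words always carry the __REJ prefix in a space-separated hypothesis string:

```objc
// Workaround sketch (my own code, not from the OpenEars/Rejecto docs):
// drop any token that starts with the "__REJ" prefix, assuming the
// hypothesis is a space-separated NSString of recognized words.
- (NSString *)hypothesisWithoutRejectedWords:(NSString *)hypothesis {
    NSArray *tokens = [hypothesis componentsSeparatedByString:@" "];
    NSMutableArray *kept = [NSMutableArray arrayWithCapacity:[tokens count]];
    for (NSString *token in tokens) {
        if (![token hasPrefix:@"__REJ"]) {
            [kept addObject:token];
        }
    }
    return [kept componentsJoinedByString:@" "];
}
```

This keeps my UI clean for now, but I would obviously prefer the deliverRejectedSpeechInHypotheses setting to work as documented.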