Forum Replies Created
February 18, 2014 at 9:28 am, in reply to: How to combine wave files generated by SaveThatWave? (#1020178)
steve100, Participant
Halle,
Yes, I mean the ContinuousAudioUnit.m file. Depending on my app's flag, sometimes I don't call the memset() so that I can hear whoever is speaking. Will this be a problem? But it seems that even when I always call the memset(), I still get the problem. I'm trying to test it with the original framework, without my changes, to see what the behavior is.
Thanks,
Steve
February 18, 2014 at 5:57 am, in reply to: How to combine wave files generated by SaveThatWave? (#1020175)
steve100, Participant

Hi Halle,
I work with Thomas.
In the ContinuousAudioUnit.m file, you have this code:

memset(ioData->mBuffers[0].mData, 0, ioData->mBuffers[0].mDataByteSize); // write out silence to the buffer for no-playback times

I just added a flag to decide whether to call this buffer reset; skipping it enables audio output to the earphone. We get this problem in both scenarios, whether the flag is on or off.
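To illustrate the change being described, here is a minimal C sketch of a flag-guarded buffer reset of the kind added above. The `gPlaybackEnabled` flag, the `MockBuffer` struct, and `processOutputBuffer` are illustrative stand-ins, not OpenEars' actual types or code; the real callback operates on an AudioBufferList inside the render callback.

```c
#include <string.h>

// Minimal stand-in for the buffer the real render callback receives
// (an AudioBufferList entry with mData / mDataByteSize fields).
typedef struct {
    void *mData;
    unsigned int mDataByteSize;
} MockBuffer;

// Illustrative app-level flag (not part of OpenEars): when live
// playback is wanted, the buffer is left untouched so the hardware
// plays the microphone signal; otherwise it is zeroed for silence.
static int gPlaybackEnabled = 0;

static void processOutputBuffer(MockBuffer *buf) {
    if (!gPlaybackEnabled) {
        // write out silence to the buffer for no-playback times
        memset(buf->mData, 0, buf->mDataByteSize);
    }
}
```

Note that because the guarded memset only changes what is sent to the output, not what the recognizer receives, skipping it should not by itself affect recognition, though feeding the mic signal to the speaker can introduce feedback.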
Thanks,
Steve
February 10, 2014 at 5:34 pm, in reply to: How to combine wave files generated by SaveThatWave? (#1020101)
steve100, Participant

Hi Halle,
I called [self.saveThatWaveController startSessionDebugRecord] after calling [self.pocketsphinxController startListening], and called [self.pocketsphinxController stopListening] after speaking something. But I never got any indication that the wav file was created. It doesn't seem to work on either the device or the simulator. Here is the message I got on the simulator:
2014-02-09 22:27:37.036 hear4me[6021:a0b] Flite has finished speaking
2014-02-09 22:27:37.037 hear4me[6021:a0b] Valid setSecondsOfSilence value of 0.700000 will be used.
2014-02-09 22:27:37.037 hear4me[6021:a0b] Pocketsphinx has resumed recognition.
2014-02-09 22:27:38.728 hear4me[6021:a0b] recogStopClicked started.
2014-02-09 22:27:38.755 hear4me[6021:4f03] .raw files in caches directory are (
)
INFO: file_omitted(0): TOTAL fwdtree 0.19 CPU 0.138 xRT
INFO: file_omitted(0): TOTAL fwdtree 2.73 wall 1.995 xRT
INFO: file_omitted(0): TOTAL fwdflat 0.02 CPU 0.012 xRT
INFO: file_omitted(0): TOTAL fwdflat 0.02 wall 0.012 xRT
INFO: file_omitted(0): TOTAL bestpath 0.01 CPU 0.004 xRT
INFO: file_omitted(0): TOTAL bestpath 0.00 wall 0.004 xRT
2014-02-09 22:27:38.757 hear4me[6021:4f03] No longer listening.
2014-02-09 22:27:38.757 hear4me[6021:a0b] Pocketsphinx has stopped listening.

February 9, 2014 at 8:19 am, in reply to: How to combine wave files generated by SaveThatWave? (#1020086)
steve100, Participant

Hi Halle,
I got the new plugin and found the new method startSessionDebugRecord. I now call this method in place of where I previously called start. But when do I get the wav file? Even after I call stop, I still can't find the wav file.
Thanks,
Steve
November 5, 2013 at 8:03 pm, in reply to: How to play live audio while recognition is going on? (#1018820)
steve100, Participant

Thank you for the info. Can I also get the data from the buffer and play it from there? I also noticed there is an openAudioDevice function inside ContinuousAudioUnit.mm; that function uses a Remote I/O audio unit. Can I use that to play back the voice as well?
If I purchase SaveThatWave, do I also get the source code for it?
Thanks,
Steve
November 5, 2013 at 7:48 pm, in reply to: How to play live audio while recognition is going on? (#1018818)
steve100, Participant

Yes, I want to play back the user's own voice, or any voice around the user, just like a hearing aid.
Another question I have: how close to the device does a voice normally need to be for accurate recognition? If the user talks to another person at a distance of one meter, can the engine recognize the other person's speech fairly accurately?
Thanks,
Steve
November 5, 2013 at 7:54 am, in reply to: How to play live audio while recognition is going on? (#1018815)
steve100, Participant

Basically, I want to play back whatever audio my app receives, do the speech recognition, and save the audio to a file if the engine recognizes anything.
What is the best approach for doing all of this? Can the SaveThatWave plugin save all recognized voices into files?
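On the thread-title question of combining wave files: SaveThatWave's own documentation would be authoritative, but as a general-purpose sketch, recordings that share one format can be combined by concatenating their raw PCM payloads and writing a single WAV header in front. The C sketch below assumes mono 16-bit PCM (the 16 kHz format Pocketsphinx typically records); `write_combined_wav` and the fixed 44-byte header layout are illustrative, not part of any OpenEars API, and a little-endian host is assumed for the sample writes.

```c
#include <stdint.h>
#include <stdio.h>

// WAV headers are little-endian; emit values byte by byte so the
// sketch works regardless of host struct padding.
static void put_u32(FILE *f, uint32_t v) {
    fputc(v & 0xff, f); fputc((v >> 8) & 0xff, f);
    fputc((v >> 16) & 0xff, f); fputc((v >> 24) & 0xff, f);
}
static void put_u16(FILE *f, uint16_t v) {
    fputc(v & 0xff, f); fputc((v >> 8) & 0xff, f);
}

// Write one mono 16-bit PCM WAV file containing the concatenation of
// two sample buffers. This only produces a valid result when every
// input shares the same sample rate, channel count, and bit depth.
int write_combined_wav(const char *path,
                       const int16_t *a, size_t a_len,
                       const int16_t *b, size_t b_len,
                       uint32_t sample_rate) {
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    uint32_t data_bytes = (uint32_t)((a_len + b_len) * sizeof(int16_t));
    fwrite("RIFF", 1, 4, f);
    put_u32(f, 36 + data_bytes);   // RIFF chunk size: header rest + data
    fwrite("WAVE", 1, 4, f);
    fwrite("fmt ", 1, 4, f);
    put_u32(f, 16);                // fmt chunk size for plain PCM
    put_u16(f, 1);                 // audio format 1 = PCM
    put_u16(f, 1);                 // channels: mono
    put_u32(f, sample_rate);
    put_u32(f, sample_rate * 2);   // byte rate = rate * block align
    put_u16(f, 2);                 // block align: mono * 16-bit
    put_u16(f, 16);                // bits per sample
    fwrite("data", 1, 4, f);
    put_u32(f, data_bytes);
    fwrite(a, sizeof(int16_t), a_len, f);  // assumes little-endian host
    fwrite(b, sizeof(int16_t), b_len, f);
    fclose(f);
    return 0;
}
```

The same approach extends to any number of input files: sum all the payload lengths for the header's size fields, then append each payload in order.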
Thanks,
Steve