resumeRecognition delegate executes but resumeRecognition does not really take place
This topic has 1 reply, 2 voices, and was last updated 8 years ago by Halle Winkler.
November 17, 2015 at 11:04 am #1027326
lyc2345 (Participant)
Hello. First, thanks for your free time and attention to this question.
I use OpenEars in a voice assistant to trigger language selection and translation.
The situation I ran into is this: when I say "OKAY COMMAND", it triggers another recognizer (here I use SKRecognizer, a.k.a. Nuance Siri), and then I say "English to Chinese" so the picker view selects English and Chinese.
Here is the routine of my app:
OpenEars is listening -> it detects "OKAY COMMAND" and OpenEars is suspended -> Nuance Siri is now listening -> Nuance stopRecording runs (it selects the languages on the picker view by itself) -> OpenEars resumes recognition (the resume delegate callback runs, but recognition does not actually resume). A rough sketch of how I drive this handoff follows the short excerpt below. On resuming, the console shows warnings like:
1. AVAudioSession.mm:646: -[AVAudioSession setActive:withOptions:error:]: Deactivating an audio session that has running I/O. All I/O should be stopped or paused prior to deactivating the audio session.
2015-11-17 17:47:34.032 G-rootiOS7[503:164180] release audio session 0
2. 2015-11-17 17:47:34.147 G-rootiOS7[503:164277] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2015-11-17 17:47:34.151 G-rootiOS7[503:164277] This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is —Speaker—. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x174208250,
inputs = (
“<AVAudioSessionPortDescription: 0x1742081d0, type = MicrophoneBuiltIn; name = iPhone \U9ea5\U514b\U98a8; UID = Built-In Microphone; selectedDataSource = \U4e0b>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x174208010, type = Speaker; name = \U64f4\U97f3; UID = Speaker; selectedDataSource = (null)>”
)>.
2015-11-17 17:47:34.156 G-rootiOS7[503:164277] route change!!!
2015-11-17 17:47:34.157 G-rootiOS7[503:164277] Audio route has changed for the following reason:
2015-11-17 17:47:34.159 G-rootiOS7[503:164277] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2015-11-17 17:47:34.164 G-rootiOS7[503:164277] This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is —Speaker—. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x170217500,
inputs = (
“<AVAudioSessionPortDescription: 0x170216050, type = MicrophoneBuiltIn; name = iPhone \U9ea5\U514b\U98a8; UID = Built-In Microphone; selectedDataSource = \U4e0b>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x1702161d0, type = Receiver; name = \U63a5\U6536\U5668; UID = Built-In Receiver; selectedDataSource = (null)>”
)>.
2015-11-17 17:48:17.014 G-rootiOS7[503:164287] [NMSP_ERROR] Session idle for too long timer fired, disconnecting.
2015-11-17 17:49:21.474 G-rootiOS7[503:164245] VERBOSE: GoogleAnalytics 3.13 -[GAIRequestBuilder requestGetUrl:payload:] (GAIRequestBuilder.m:195): building URLRequest for https://ssl.google-analytics.com/collect
2015-11-17 17:49:21.475 G-rootiOS7[503:164245] VERBOSE: GoogleAnalytics 3.13 -[GAIBatchingDispatcher dispatchWithCompletionHandler:] (GAIBatchingDispatcher.m:632): Sending hit(s) GET: https://ssl.google-analytics.com/collect?av=1.3.4&cid=ecc2c11b-d980-4b1e-9a68-d8feb0ed0c4a&tid=UA-69844088-2&a=892922383&dm=iPhone6%2C2&cd=VLViewController&t=screenview&aid=com.Dayoo.G-root&ul=zh-hant&_u=.neoK9L&ds=app&sr=640×1136&v=1&_s=7&_crc=0&an=G-root&_v=mi3.1.3&ht=1447753641456&qt=120018&z=9029055560723320914
2015-11-17 17:49:21.639 G-rootiOS7[503:164180] INFO: GoogleAnalytics 3.13 -[GAIBatchingDispatcher didSendHits:response:data:error:] (GAIBatchingDispatcher.m:226): Hit(s) dispatched: HTTP status -1
2015-11-17 17:49:21.641 G-rootiOS7[503:164245] INFO: GoogleAnalytics 3.13 -[GAIBatchingDispatcher deleteHits:] (GAIBatchingDispatcher.m:529): hit(s) Successfully deleted
2015-11-17 17:49:21.645 G-rootiOS7[503:164245] INFO: GoogleAnalytics 3.13 -[GAIBatchingDispatcher didSendHits:] (GAIBatchingDispatcher.m:237): 1 hit(s) sent

Please help!
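Before the full log, here is a rough sketch of how I drive the handoff. Only the OpenEars and SpeechKit calls are real; helper names like startNuanceRecognition and selectLanguagesFromResult are placeholders for my own code, and the SpeechKit delegate signature is from the old SpeechKit 1.x headers, so treat it as approximate:

// OEEventsObserverDelegate: hearing "OKAY COMMAND" hands the mic over to Nuance.
- (void)pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis
                        recognitionScore:(NSString *)recognitionScore
                             utteranceID:(NSString *)utteranceID {
    if ([hypothesis isEqualToString:@"OKAY COMMAND"]) {
        [[OEPocketsphinxController sharedInstance] suspendRecognition]; // suspend OpenEars
        [self startNuanceRecognition];                                  // placeholder: creates and starts an SKRecognizer
    }
}

// SKRecognizerDelegate: Nuance returns "English To Chinese", the picker is set, then I resume OpenEars.
- (void)recognizer:(SKRecognizer *)recognizer didFinishWithResults:(SKRecognition *)results {
    [self selectLanguagesFromResult:results];                        // placeholder: selects English / Chinese on the picker
    [[OEPocketsphinxController sharedInstance] resumeRecognition];    // the resume delegate fires, but listening never restarts
}

The resumeRecognition call at the end is where the delegate callback fires without recognition actually resuming.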
************************************************************************************
The following is the whole log, starting from the first app launch, then speaking to OpenEars, switching to Nuance, and then resuming recognition:
2015-11-17 17:47:20.143 G-rootiOS7[503:164180] The configuration file ‘GoogleService-Info.plist’ is for another bundle identifier (‘com.Dayoo.G-rootiOS7’). Using this file the services may not be configured correctly. To continue with this configuration file, you may change your app’s bundle identifier to ‘com.Dayoo.G-rootiOS7’. Or you can download a new configuration file that matches your bundle identifier from https://developers.google.com/mobile/add and replace the current one.
2015-11-17 17:47:20.145 G-rootiOS7[503:501] <GMR/INFO> App measurement v.1201000 started
2015-11-17 17:47:20.207 G-rootiOS7[503:164180] SKPayment add observer for self
2015-11-17 17:47:20.207 G-rootiOS7[503:164180] Not Purchased: Unlimited_Translation_Package
2015-11-17 17:47:20.629 G-rootiOS7[503:164242] APPIRATER Tracking version: 1
2015-11-17 17:47:20.632 G-rootiOS7[503:164242] APPIRATER Use count: 6
2015-11-17 17:47:20.667 G-rootiOS7[503:164180] -[UIPickerView setFrame:]: invalid height value 139.0 pinned to 162.0
2015-11-17 17:47:20.669 G-rootiOS7[503:164180] -[UIPickerView setFrame:]: invalid height value 139.0 pinned to 162.0
2015-11-17 17:47:21.247 G-rootiOS7[503:164245] INFO: GoogleAnalytics 3.13 -[GAIBatchingDispatcher hitsForDispatch] (GAIBatchingDispatcher.m:368): No pending hits.
2015-11-17 17:47:21.262 G-rootiOS7[503:164180] set session Active 0
2015-11-17 17:47:21.329 G-rootiOS7[503:164180] sample rate = 44100.000000
2015-11-17 17:47:21.333 G-rootiOS7[503:164180] audio input route(iOS5 or above): MicrophoneBuiltIn
2015-11-17 17:47:21.333 G-rootiOS7[503:164180] audiosource = MicrophoneBuiltIn
2015-11-17 17:47:21.334 G-rootiOS7[503:164180] [NMSP_ERROR] check status Error: 696e6974 init -> line: 485
makeFullPathname /var/mobile/Containers/Data/Application/282AED0B-22A2-451F-8058-52E90FFB83A5/Library/Application Support/vst_dns_cache.dat
2015-11-17 17:47:21.385 G-rootiOS7[503:164180] release audio session 1
2015-11-17 17:47:21.439 G-rootiOS7[503:164180] Starting OpenEars logging for OpenEars version 2.041 on 64-bit device (or build): iPhone running iOS version: 8.100000
2015-11-17 17:47:21.441 G-rootiOS7[503:164180] Creating shared instance of OEPocketsphinxController
2015-11-17 17:47:21.457 G-rootiOS7[503:164180] Starting dynamic language model generation
2015-11-17 17:47:21.466 G-rootiOS7[503:164245] VERBOSE: GoogleAnalytics 3.13 -[GAIBatchingDispatcher persist:] (GAIBatchingDispatcher.m:517): Saved hit: {
parameters = {
“&_crc” = 0;
“&_s” = 7;
“&_u” = “.neoK9L”;
“&_v” = “mi3.1.3”;
“&a” = 892922383;
“&aid” = “com.Dayoo.G-root”;
“&an” = “G-root”;
“&av” = “1.3.4”;
“&cd” = VLViewController;
“&cid” = “ecc2c11b-d980-4b1e-9a68-d8feb0ed0c4a”;
“&dm” = “iPhone6,2”;
“&ds” = app;
“&sr” = 640×1136;
“&t” = screenview;
“&tid” = “UA-69844088-2”;
“&ul” = “zh-hant”;
“&v” = 1;
“&z” = 9029055560723320914;
gaiVersion = “3.13”;
};
timestamp = “2015-11-17 09:47:21 +0000”;
}
2015-11-17 17:47:21.467 G-rootiOS7[503:164245] VERBOSE: GoogleAnalytics 3.13 __70-[GAIBatchingDispatcher checkIAdCampaignAttributionWithHitParameters:]_block_invoke (GAIBatchingDispatcher.m:749): iAd campaign tracking disabled because the iAd framework is not linked. See http://goo.gl/426NGa for instructions.
INFO: cmd_ln.c(703): Parsing command line:
sphinx_lm_convert \
-i /var/mobile/Containers/Data/Application/282AED0B-22A2-451F-8058-52E90FFB83A5/Library/Caches/NameIWantForMyLanguageModelFiles.arpa \
-o /var/mobile/Containers/Data/Application/282AED0B-22A2-451F-8058-52E90FFB83A5/Library/Caches/NameIWantForMyLanguageModelFiles.DMP
Current configuration:
[NAME] [DEFLT] [VALUE]
-case
-debug 0
-help no no
-i /var/mobile/Containers/Data/Application/282AED0B-22A2-451F-8058-52E90FFB83A5/Library/Caches/NameIWantForMyLanguageModelFiles.arpa
-ifmt
-logbase 1.0001 1.000100e+00
-mmap no no
-o /var/mobile/Containers/Data/Application/282AED0B-22A2-451F-8058-52E90FFB83A5/Library/Caches/NameIWantForMyLanguageModelFiles.DMP
-ofmt
INFO: ngram_model_arpa.c(503): ngrams 1=7, 2=10, 3=9
INFO: ngram_model_arpa.c(135): Reading unigrams
INFO: ngram_model_arpa.c(542): 7 = #unigrams created
INFO: ngram_model_arpa.c(195): Reading bigrams
INFO: ngram_model_arpa.c(560): 10 = #bigrams created
INFO: ngram_model_arpa.c(561): 5 = #prob2 entries
INFO: ngram_model_arpa.c(569): 5 = #bo_wt2 entries
INFO: ngram_model_arpa.c(292): Reading trigrams
INFO: ngram_model_arpa.c(582): 9 = #trigrams created
INFO: ngram_model_arpa.c(583): 3 = #prob3 entries
INFO: ngram_model_dmp.c(518): Building DMP model…
INFO: ngram_model_dmp.c(548): 7 = #unigrams created
INFO: ngram_model_dmp.c(649): 10 = #bigrams created
INFO: ngram_model_dmp.c(650): 5 = #prob2 entries
INFO: ngram_model_dmp.c(657): 5 = #bo_wt2 entries
INFO: ngram_model_dmp.c(661): 9 = #trigrams created
INFO: ngram_model_dmp.c(662): 3 = #prob3 entries
2015-11-17 17:47:21.528 G-rootiOS7[503:164180] Done creating language model with CMUCLMTK in 0.070992 seconds.
2015-11-17 17:47:21.565 G-rootiOS7[503:164180] I’m done running performDictionaryLookup and it took 0.029963 seconds
2015-11-17 17:47:21.574 G-rootiOS7[503:164180] I’m done running dynamic language model generation and it took 0.123865 seconds
2015-11-17 17:47:21.575 G-rootiOS7[503:164180] Attempting to start listening session from startListeningWithLanguageModelAtPath:
2015-11-17 17:47:21.576 G-rootiOS7[503:164180] User gave mic permission for this app.
2015-11-17 17:47:21.576 G-rootiOS7[503:164180] setSecondsOfSilence wasn’t set, using default of 0.700000.
2015-11-17 17:47:21.577 G-rootiOS7[503:164180] Successfully started listening session from startListeningWithLanguageModelAtPath:
2015-11-17 17:47:21.577 G-rootiOS7[503:164196] Starting listening.
2015-11-17 17:47:21.577 G-rootiOS7[503:164196] about to set up audio session
2015-11-17 17:47:21.578 G-rootiOS7[503:164196] Creating audio session with default settings.
2015-11-17 17:47:21.609 G-rootiOS7[503:164277] route change!!!
2015-11-17 17:47:21.609 G-rootiOS7[503:164277] Audio route has changed for the following reason:
2015-11-17 17:47:21.613 G-rootiOS7[503:164277] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2015-11-17 17:47:21.843 G-rootiOS7[503:164196] done starting audio unit
INFO: cmd_ln.c(703): Parsing command line:
\
-lm /var/mobile/Containers/Data/Application/282AED0B-22A2-451F-8058-52E90FFB83A5/Library/Caches/NameIWantForMyLanguageModelFiles.DMP \
-vad_prespeech 10 \
-vad_postspeech 69 \
-vad_threshold 2.000000 \
-remove_noise yes \
-remove_silence yes \
-bestpath yes \
-lw 6.500000 \
-dict /var/mobile/Containers/Data/Application/282AED0B-22A2-451F-8058-52E90FFB83A5/Library/Caches/NameIWantForMyLanguageModelFiles.dic \
-hmm /private/var/mobile/Containers/Bundle/Application/A28710FA-2BA2-46BD-ACF9-FFAA5DD5A496/G-rootiOS7.app/AcousticModelEnglish.bundle
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-allphone
-allphone_ci no no
-alpha 0.97 9.700000e-01
-argfile
-ascale 20.0 2.000000e+01
-aw 1 1
-backtrace no no
-beam 1e-48 1.000000e-48
-bestpath yes yes
-bestpathlw 9.5 9.500000e+00
-ceplen 13 13
-cmn current current
-cmninit 8.0 8.0
-compallsen no no
-debug 0
-dict /var/mobile/Containers/Data/Application/282AED0B-22A2-451F-8058-52E90FFB83A5/Library/Caches/NameIWantForMyLanguageModelFiles.dic
-dictcase no no
-dither no no
-doublebw no no
-ds 1 1
-fdict
-feat 1s_c_d_dd 1s_c_d_dd
-featparams
-fillprob 1e-8 1.000000e-08
-frate 100 100
-fsg
-fsgusealtpron yes yes
-fsgusefiller yes yes
-fwdflat yes yes
-fwdflatbeam 1e-64 1.000000e-64
-fwdflatefwid 4 4
-fwdflatlw 8.5 8.500000e+00
-fwdflatsfwin 25 25
-fwdflatwbeam 7e-29 7.000000e-29
-fwdtree yes yes
-hmm /private/var/mobile/Containers/Bundle/Application/A28710FA-2BA2-46BD-ACF9-FFAA5DD5A496/G-rootiOS7.app/AcousticModelEnglish.bundle
-input_endian little little
-jsgf
-keyphrase
-kws
-kws_delay 10 10
-kws_plp 1e-1 1.000000e-01
-kws_threshold 1 1.000000e+00
-latsize 5000 5000
-lda
-ldadim 0 0
-lifter 0 0
-lm /var/mobile/Containers/Data/Application/282AED0B-22A2-451F-8058-52E90FFB83A5/Library/Caches/NameIWantForMyLanguageModelFiles.DMP
-lmctl
-lmname
-logbase 1.0001 1.000100e+00
-logfn
-logspec no no
-lowerf 133.33334 1.333333e+02
-lpbeam 1e-40 1.000000e-40
-lponlybeam 7e-29 7.000000e-29
-lw 6.5 6.500000e+00
-maxhmmpf 30000 30000
-maxwpf -1 -1
-mdef
-mean
-mfclogdir
-min_endfr 0 0
-mixw
-mixwfloor 0.0000001 1.000000e-07
-mllr
-mmap yes yes
-ncep 13 13
-nfft 512 512
-nfilt 40 40
-nwpen 1.0 1.000000e+00
-pbeam 1e-48 1.000000e-48
-pip 1.0 1.000000e+00
-pl_beam 1e-10 1.000000e-10
-pl_pbeam 1e-10 1.000000e-10
-pl_pip 1.0 1.000000e+00
-pl_weight 3.0 3.000000e+00
-pl_window 5 5
-rawlogdir
-remove_dc no no
-remove_noise yes yes
-remove_silence yes yes
-round_filters yes yes
-samprate 16000 1.600000e+04
-seed -1 -1
-sendump
-senlogdir
-senmgau
-silprob 0.005 5.000000e-03
-smoothspec no no
-svspec
-tmat
-tmatfloor 0.0001 1.000000e-04
-topn 4 4
-topn_beam 0 0
-toprule
-transform legacy legacy
-unit_area yes yes
-upperf 6855.4976 6.855498e+03
-uw 1.0 1.000000e+00
-vad_postspeech 50 69
-vad_prespeech 20 10
-vad_startspeech 10 10
-vad_threshold 2.0 2.000000e+00
-var
-varfloor 0.0001 1.000000e-04
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wbeam 7e-29 7.000000e-29
-wip 0.65 6.500000e-01
-wlen 0.025625 2.562500e-02
2015-11-17 17:47:21.855 G-rootiOS7[503:164277] This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is —SpeakerMicrophoneBuiltIn—. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x17420c640,
inputs = (null);
outputs = (
“<AVAudioSessionPortDescription: 0x17420c610, type = Speaker; name = \U64f4\U97f3; UID = Speaker; selectedDataSource = (null)>”
)>.
INFO: cmd_ln.c(703): Parsing command line:
\
-nfilt 25 \
-lowerf 130 \
-upperf 6800 \
-feat 1s_c_d_dd \
-svspec 0-12/13-25/26-38 \
-agc none \
-cmn current \
-varnorm no \
-transform dct \
-lifter 22 \
-cmninit 40
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-alpha 0.97 9.700000e-01
-ceplen 13 13
-cmn current current
-cmninit 8.0 40
-dither no no
-doublebw no no
-feat 1s_c_d_dd 1s_c_d_dd
-frate 100 100
-input_endian little little
-lda
-ldadim 0 0
-lifter 0 22
-logspec no no
-lowerf 133.33334 1.300000e+02
-ncep 13 13
-nfft 512 512
-nfilt 40 25
-remove_dc no no
-remove_noise yes yes
-remove_silence yes yes
-round_filters yes yes
-samprate 16000 1.600000e+04
-seed -1 -1
-smoothspec no no
-svspec 0-12/13-25/26-38
-transform legacy dct
-unit_area yes yes
-upperf 6855.4976 6.800000e+03
-vad_postspeech 50 69
-vad_prespeech 20 10
-vad_startspeech 10 10
-vad_threshold 2.0 2.000000e+00
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wlen 0.025625 2.562500e-02
INFO: acmod.c(252): Parsed model-specific feature parameters from /private/var/mobile/Containers/Bundle/Application/A28710FA-2BA2-46BD-ACF9-FFAA5DD5A496/G-rootiOS7.app/AcousticModelEnglish.bundle/feat.params
INFO: feat.c(715): Initializing feature stream to type: ‘1s_c_d_dd’, ceplen=13, CMN=’current’, VARNORM=’no’, AGC=’none’
INFO: cmn.c(143): mean[0]= 12.00, mean[1..12]= 0.0
INFO: acmod.c(171): Using subvector specification 0-12/13-25/26-38
INFO: mdef.c(518): Reading model definition: /private/var/mobile/Containers/Bundle/Application/A28710FA-2BA2-46BD-ACF9-FFAA5DD5A496/G-rootiOS7.app/AcousticModelEnglish.bundle/mdef
INFO: mdef.c(531): Found byte-order mark BMDF, assuming this is a binary mdef file
INFO: bin_mdef.c(336): Reading binary model definition: /private/var/mobile/Containers/Bundle/Application/A28710FA-2BA2-46BD-ACF9-FFAA5DD5A496/G-rootiOS7.app/AcousticModelEnglish.bundle/mdef
INFO: bin_mdef.c(516): 46 CI-phone, 168344 CD-phone, 3 emitstate/phone, 138 CI-sen, 6138 Sen, 32881 Sen-Seq
INFO: tmat.c(206): Reading HMM transition probability matrices: /private/var/mobile/Containers/Bundle/Application/A28710FA-2BA2-46BD-ACF9-FFAA5DD5A496/G-rootiOS7.app/AcousticModelEnglish.bundle/transition_matrices
INFO: acmod.c(124): Attempting to use PTM computation module
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /private/var/mobile/Containers/Bundle/Application/A28710FA-2BA2-46BD-ACF9-FFAA5DD5A496/G-rootiOS7.app/AcousticModelEnglish.bundle/means
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /private/var/mobile/Containers/Bundle/Application/A28710FA-2BA2-46BD-ACF9-FFAA5DD5A496/G-rootiOS7.app/AcousticModelEnglish.bundle/variances
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(354): 0 variance values floored
INFO: ptm_mgau.c(805): Number of codebooks doesn’t match number of ciphones, doesn’t look like PTM: 1 != 46
INFO: acmod.c(126): Attempting to use semi-continuous computation module
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /private/var/mobile/Containers/Bundle/Application/A28710FA-2BA2-46BD-ACF9-FFAA5DD5A496/G-rootiOS7.app/AcousticModelEnglish.bundle/means
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /private/var/mobile/Containers/Bundle/Application/A28710FA-2BA2-46BD-ACF9-FFAA5DD5A496/G-rootiOS7.app/AcousticModelEnglish.bundle/variances
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(354): 0 variance values floored
INFO: s2_semi_mgau.c(904): Loading senones from dump file /private/var/mobile/Containers/Bundle/Application/A28710FA-2BA2-46BD-ACF9-FFAA5DD5A496/G-rootiOS7.app/AcousticModelEnglish.bundle/sendump
INFO: s2_semi_mgau.c(928): BEGIN FILE FORMAT DESCRIPTION
INFO: s2_semi_mgau.c(991): Rows: 512, Columns: 6138
INFO: s2_semi_mgau.c(1023): Using memory-mapped I/O for senones
INFO: s2_semi_mgau.c(1294): Maximum top-N: 4 Top-N beams: 0 0 0
INFO: phone_loop_search.c(114): State beam -225 Phone exit beam -225 Insertion penalty 0
INFO: dict.c(320): Allocating 4110 * 32 bytes (128 KiB) for word entries
INFO: dict.c(333): Reading main dictionary: /var/mobile/Containers/Data/Application/282AED0B-22A2-451F-8058-52E90FFB83A5/Library/Caches/NameIWantForMyLanguageModelFiles.dic
INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
INFO: dict.c(336): 5 words read
INFO: dict.c(358): Reading filler dictionary: /private/var/mobile/Containers/Bundle/Application/A28710FA-2BA2-46BD-ACF9-FFAA5DD5A496/G-rootiOS7.app/AcousticModelEnglish.bundle/noisedict
INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
INFO: dict.c(361): 9 words read
INFO: dict2pid.c(396): Building PID tables for dictionary
INFO: dict2pid.c(406): Allocating 46^3 * 2 bytes (190 KiB) for word-initial triphones
INFO: dict2pid.c(132): Allocated 51152 bytes (49 KiB) for word-final triphones
INFO: dict2pid.c(196): Allocated 51152 bytes (49 KiB) for single-phone word triphones
INFO: ngram_model_arpa.c(77): No \data\ mark in LM file
INFO: ngram_model_dmp.c(166): Will use memory-mapped I/O for LM file
INFO: ngram_model_dmp.c(220): ngrams 1=7, 2=10, 3=9
INFO: ngram_model_dmp.c(266): 7 = LM.unigrams(+trailer) read
INFO: ngram_model_dmp.c(312): 10 = LM.bigrams(+trailer) read
INFO: ngram_model_dmp.c(338): 9 = LM.trigrams read
INFO: ngram_model_dmp.c(363): 5 = LM.prob2 entries read
INFO: ngram_model_dmp.c(383): 5 = LM.bo_wt2 entries read
INFO: ngram_model_dmp.c(403): 3 = LM.prob3 entries read
INFO: ngram_model_dmp.c(431): 1 = LM.tseg_base entries read
INFO: ngram_model_dmp.c(487): 7 = ascii word strings read
INFO: ngram_search_fwdtree.c(99): 4 unique initial diphones
INFO: ngram_search_fwdtree.c(148): 0 root, 0 non-root channels, 10 single-phone words
INFO: ngram_search_fwdtree.c(186): Creating search tree
INFO: ngram_search_fwdtree.c(192): before: 0 root, 0 non-root channels, 10 single-phone words
INFO: ngram_search_fwdtree.c(326): after: max nonroot chan increased to 137
INFO: ngram_search_fwdtree.c(339): after: 4 root, 9 non-root channels, 9 single-phone words
INFO: ngram_search_fwdflat.c(157): fwdflat: min_ef_width = 4, max_sf_win = 25
2015-11-17 17:47:21.895 G-rootiOS7[503:164196] There was no previous CMN value in the plist so we are using the fresh CMN value 42.000000.
2015-11-17 17:47:21.895 G-rootiOS7[503:164196] Listening.
2015-11-17 17:47:21.896 G-rootiOS7[503:164196] Project has these words or phrases in its dictionary:
COMMAND
OK
OKAY
SPEAK
SWITCH
2015-11-17 17:47:21.896 G-rootiOS7[503:164196] Recognition loop has started
2015-11-17 17:47:22.008 G-rootiOS7[503:164180] Pocketsphinx is now listening.
2015-11-17 17:47:22.009 G-rootiOS7[503:164180] INFO: GoogleAnalytics 3.13 -[GAIReachabilityChecker reachabilityFlagsChanged:] (GAIReachabilityChecker.m:159): Reachability flags update: 0X000002
2015-11-17 17:47:22.281 G-rootiOS7[503:164244] Speech detected…
2015-11-17 17:47:22.305 G-rootiOS7[503:164180] Pocketsphinx has detected speech.
2015-11-17 17:47:26.893 G-rootiOS7[503:164242] End of speech detected…
INFO: cmn_prior.c(131): cmn_prior_update: from < 42.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 47.62 6.24 -7.42 -2.39 -2.02 2.09 -3.27 7.43 3.23 -2.67 -1.30 -7.78 -2.65 >
2015-11-17 17:47:26.894 G-rootiOS7[503:164180] Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: ngram_search_fwdtree.c(1553): 3564 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 33843 senones evaluated (73/fr)
INFO: ngram_search_fwdtree.c(1559): 13587 channels searched (29/fr), 1797 1st, 9326 last
INFO: ngram_search_fwdtree.c(1562): 3894 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 319 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 0.41 CPU 0.090 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 4.74 wall 1.027 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 4 words
INFO: ngram_search_fwdflat.c(948): 3691 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 16860 senones evaluated (36/fr)
INFO: ngram_search_fwdflat.c(952): 9401 channels searched (20/fr)
INFO: ngram_search_fwdflat.c(954): 4350 words searched (9/fr)
INFO: ngram_search_fwdflat.c(957): 176 word transitions (0/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.06 CPU 0.014 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.07 wall 0.016 xRT
INFO: ngram_search.c(1280): lattice start node <s>.0 end node </s>.458
INFO: ngram_search.c(1306): Eliminated 6 nodes before end node
INFO: ngram_search.c(1411): Lattice has 2029 nodes, 13746 links
INFO: ps_lattice.c(1380): Bestpath score: -55658
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:458:460) = -2746878
INFO: ps_lattice.c(1441): Joint P(O,S) = -2894516 P(S|O) = -147638
INFO: ngram_search.c(899): bestpath 0.07 CPU 0.015 xRT
INFO: ngram_search.c(902): bestpath 0.07 wall 0.016 xRT
2015-11-17 17:47:27.039 G-rootiOS7[503:164242] Pocketsphinx heard “OKAY COMMAND” with a score of (-147638) and an utterance ID of 0.
2015-11-17 17:47:27.039 G-rootiOS7[503:164180] Local callback: The received hypothesis is OKAY COMMAND with a score of -147638 and an ID of 0
2015-11-17 17:47:27.065 G-rootiOS7[503:164242] set session Active 0
2015-11-17 17:47:27.066 G-rootiOS7[503:164180] Pocketsphinx has suspended recognition.
2015-11-17 17:47:27.797 G-rootiOS7[503:164277] route change!!!
2015-11-17 17:47:27.800 G-rootiOS7[503:164277] Audio route has changed for the following reason:
2015-11-17 17:47:28.108 G-rootiOS7[503:164277] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2015-11-17 17:47:28.171 G-rootiOS7[503:164277] This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is —SpeakerMicrophoneBuiltIn—. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x170217000,
inputs = (
“<AVAudioSessionPortDescription: 0x170216f60, type = MicrophoneBuiltIn; name = iPhone \U9ea5\U514b\U98a8; UID = Built-In Microphone; selectedDataSource = \U4e0b>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x170216dc0, type = Speaker; name = \U64f4\U97f3; UID = Speaker; selectedDataSource = (null)>”
)>.
2015-11-17 17:47:28.209 G-rootiOS7[503:164277] route change!!!
2015-11-17 17:47:28.210 G-rootiOS7[503:164277] Audio route has changed for the following reason:
2015-11-17 17:47:28.211 G-rootiOS7[503:164277] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2015-11-17 17:47:28.226 G-rootiOS7[503:164277] This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is —SpeakerMicrophoneBuiltIn—. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x170217210,
inputs = (null);
outputs = (
“<AVAudioSessionPortDescription: 0x170216fc0, type = Speaker; name = \U64f4\U97f3; UID = Speaker; selectedDataSource = (null)>”
)>.
2015-11-17 17:47:28.336 G-rootiOS7[503:164277] route change!!!
2015-11-17 17:47:28.338 G-rootiOS7[503:164277] Audio route has changed for the following reason:
2015-11-17 17:47:28.339 G-rootiOS7[503:164277] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2015-11-17 17:47:28.349 G-rootiOS7[503:164277] This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is —SpeakerMicrophoneBuiltIn—. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x174207a00,
inputs = (
“<AVAudioSessionPortDescription: 0x174207a60, type = MicrophoneBuiltIn; name = iPhone \U9ea5\U514b\U98a8; UID = Built-In Microphone; selectedDataSource = \U4e0b>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x174207c20, type = Receiver; name = \U63a5\U6536\U5668; UID = Built-In Receiver; selectedDataSource = (null)>”
)>.
2015-11-17 17:47:28.742 G-rootiOS7[503:164180] VoiceCommand Begin.
2015-11-17 17:47:28.742 G-rootiOS7[503:164180] Pocketsphinx has suspended recognition.
2015-11-17 17:47:31.437 G-rootiOS7[503:164180] VoiceCommand Finished.
2015-11-17 17:47:31.437 G-rootiOS7[503:164180] setSecondsOfSilence wasn’t set, using default of 0.700000.
2015-11-17 17:47:31.526 G-rootiOS7[503:164180] Pocketsphinx has resumed recognition.
INFO: cmn_prior.c(131): cmn_prior_update: from < 47.62 6.24 -7.42 -2.39 -2.02 2.09 -3.27 7.43 3.23 -2.67 -1.30 -7.78 -2.65 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 47.62 6.24 -7.42 -2.39 -2.02 2.09 -3.27 7.43 3.23 -2.67 -1.30 -7.78 -2.65 >
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 0 words
2015-11-17 17:47:32.028 G-rootiOS7[503:164195] Speech detected…
2015-11-17 17:47:34.030 G-rootiOS7[503:164180] 17:47:34.029 ERROR: [0x198de2310] AVAudioSession.mm:646: -[AVAudioSession setActive:withOptions:error:]: Deactivating an audio session that has running I/O. All I/O should be stopped or paused prior to deactivating the audio session.
2015-11-17 17:47:34.032 G-rootiOS7[503:164180] release audio session 0
2015-11-17 17:47:34.033 G-rootiOS7[503:164180] Pocketsphinx has detected speech.
2015-11-17 17:47:34.034 G-rootiOS7[503:164180] Session id [eea4a231-40a7-4115-b025-d8aadeb76e13].
2015-11-17 17:47:34.034 G-rootiOS7[503:164180] Got Result: English To Chinese
2015-11-17 17:47:34.035 G-rootiOS7[503:164180] front: English,rear: Chinese
2015-11-17 17:47:34.037 G-rootiOS7[503:164180] English, Chinese
2015-11-17 17:47:34.073 G-rootiOS7[503:164277] route change!!!
2015-11-17 17:47:34.096 G-rootiOS7[503:164277] Audio route has changed for the following reason:
2015-11-17 17:47:34.107 G-rootiOS7[503:164180]
left: en, right: zh-TW,
leftS: en_US, rightS: zh_TW
2015-11-17 17:47:34.147 G-rootiOS7[503:164277] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2015-11-17 17:47:34.151 G-rootiOS7[503:164277] This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is —Speaker—. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x174208250,
inputs = (
“<AVAudioSessionPortDescription: 0x1742081d0, type = MicrophoneBuiltIn; name = iPhone \U9ea5\U514b\U98a8; UID = Built-In Microphone; selectedDataSource = \U4e0b>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x174208010, type = Speaker; name = \U64f4\U97f3; UID = Speaker; selectedDataSource = (null)>”
)>.
2015-11-17 17:47:34.156 G-rootiOS7[503:164277] route change!!!
2015-11-17 17:47:34.157 G-rootiOS7[503:164277] Audio route has changed for the following reason:
2015-11-17 17:47:34.159 G-rootiOS7[503:164277] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2015-11-17 17:47:34.164 G-rootiOS7[503:164277] This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is —Speaker—. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x170217500,
inputs = (
“<AVAudioSessionPortDescription: 0x170216050, type = MicrophoneBuiltIn; name = iPhone \U9ea5\U514b\U98a8; UID = Built-In Microphone; selectedDataSource = \U4e0b>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x1702161d0, type = Receiver; name = \U63a5\U6536\U5668; UID = Built-In Receiver; selectedDataSource = (null)>”
)>.

November 17, 2015 at 11:14 am #1027327
Halle Winkler (Politepix)
Welcome,
Make sure that any other audio framework (including SKRecognizer) is fully stopped before stopping OEPocketsphinxController – it can’t work while other frameworks are altering its audio session settings.
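For illustration, one way to apply that ordering to the resume step in the log above could look like the following sketch. The helper and property names (selectLanguagesFromResult:, nuanceRecognizer) are placeholders, the half-second settle delay is an assumption rather than a documented requirement, and the exact SpeechKit 1.x delegate signature may differ:

// Sketch only: fully tear down SKRecognizer in its final delegate callback,
// and only then hand the audio session back to OpenEars.
- (void)recognizer:(SKRecognizer *)recognizer didFinishWithResults:(SKRecognition *)results {
    [self selectLanguagesFromResult:results];   // placeholder: use the Nuance result first

    self.nuanceRecognizer.delegate = nil;       // make sure SpeechKit is completely done with the session
    self.nuanceRecognizer = nil;

    // Assumption: give the audio session changes a moment to settle before resuming OpenEars.
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.5 * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), ^{
        [[OEPocketsphinxController sharedInstance] resumeRecognition];
    });
}

The point of the sketch is only the ordering: nothing from the other audio framework should still be running, or still changing the audio session, at the moment OpenEars is asked to resume.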