This topic has 10 replies, 2 voices, and was last updated 7 years, 9 months ago by Halle Winkler.
June 29, 2016 at 9:24 pm #1030636
bhavin (Participant)
Hi,
I am using OpenEars 2.502 and am facing a problem where none of the words I declared via generateLanguageModelFromArray are recognized. The words used are: REPLAY, REPEAT, OK, YES, NO.
Any help would be appreciated; I just cannot figure out the issue.
Thank you. The verbose log is attached:
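For reference, this is roughly how I generate the model and start listening. This is abbreviated from my actual code, so the model name and error handling here are illustrative, following the standard OpenEars 2.x pattern:

```objc
#import <OpenEars/OELanguageModelGenerator.h>
#import <OpenEars/OEPocketsphinxController.h>
#import <OpenEars/OEAcousticModel.h>

// Generate a language model and phonetic dictionary from the command words.
OELanguageModelGenerator *generator = [[OELanguageModelGenerator alloc] init];
NSArray *words = @[@"REPLAY", @"REPEAT", @"OK", @"YES", @"NO"];
NSString *name = @"OpenEarsLanguageModel";
NSError *error = [generator generateLanguageModelFromArray:words
                                            withFilesNamed:name
                                    forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];

if (error == nil) {
    NSString *lmPath  = [generator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:name];
    NSString *dicPath = [generator pathToSuccessfullyGeneratedDictionaryWithRequestedName:name];

    // Start continuous listening against the generated model.
    [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil];
    [[OEPocketsphinxController sharedInstance] startListeningWithLanguageModelAtPath:lmPath
                                                                    dictionaryAtPath:dicPath
                                                                 acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]
                                                                 languageModelIsJSGF:FALSE];
}
```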
Record Permission Granted. Starting.
INFO: pocketsphinx.c(145): Parsed model-specific feature parameters from /var/containers/Bundle/Application/E58A1D58-4D9E-4EDB-A381-43A9DF280DE4/app.app/AcousticModelEnglish.bundle/feat.params
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-allphone
-allphone_ci no no
-alpha 0.97 9.700000e-01
-ascale 20.0 2.000000e+01
-aw 1 1
-backtrace no no
-beam 1e-48 1.000000e-48
-bestpath yes yes
-bestpathlw 9.5 9.500000e+00
-ceplen 13 13
-cmn current current
-cmninit 8.0 40
-compallsen no no
-debug 0
-dict /var/mobile/Containers/Data/Application/DEFD9D47-1357-4B71-B4D9-A4466E520DB9/Library/Caches/OpenEarsLanguageModel.dic
-dictcase no no
-dither no no
-doublebw no no
-ds 1 1
-fdict /var/containers/Bundle/Application/E58A1D58-4D9E-4EDB-A381-43A9DF280DE4/app.app/AcousticModelEnglish.bundle/noisedict
-feat 1s_c_d_dd 1s_c_d_dd
-featparams /var/containers/Bundle/Application/E58A1D58-4D9E-4EDB-A381-43A9DF280DE4/app.app/AcousticModelEnglish.bundle/feat.params
-fillprob 1e-8 1.000000e-08
-frate 100 100
-fsg
-fsgusealtpron yes yes
-fsgusefiller yes yes
-fwdflat yes yes
-fwdflatbeam 1e-64 1.000000e-64
-fwdflatefwid 4 4
-fwdflatlw 8.5 8.500000e+00
-fwdflatsfwin 25 25
-fwdflatwbeam 7e-29 7.000000e-29
-fwdtree yes yes
-hmm /var/containers/Bundle/Application/E58A1D58-4D9E-4EDB-A381-43A9DF280DE4/app.app/AcousticModelEnglish.bundle
-input_endian little little
-jsgf
-keyphrase
-kws
-kws_delay 10 10
-kws_plp 1e-1 1.000000e-01
-kws_threshold 1 1.000000e+00
-latsize 5000 5000
-lda
-ldadim 0 0
-lifter 0 22
-lm /var/mobile/Containers/Data/Application/DEFD9D47-1357-4B71-B4D9-A4466E520DB9/Library/Caches/OpenEarsLanguageModel.DMP
-lmctl
-lmname
-logbase 1.0001 1.000100e+00
-logfn
-logspec no no
-lowerf 133.33334 1.300000e+02
-lpbeam 1e-40 1.000000e-40
-lponlybeam 7e-29 7.000000e-29
-lw 6.5 6.500000e+00
-maxhmmpf 30000 30000
-maxwpf -1 -1
-mdef /var/containers/Bundle/Application/E58A1D58-4D9E-4EDB-A381-43A9DF280DE4/app.app/AcousticModelEnglish.bundle/mdef
-mean /var/containers/Bundle/Application/E58A1D58-4D9E-4EDB-A381-43A9DF280DE4/app.app/AcousticModelEnglish.bundle/means
-mfclogdir
-min_endfr 0 0
-mixw
-mixwfloor 0.0000001 1.000000e-07
-mllr
-mmap yes yes
-ncep 13 13
-nfft 512 512
-nfilt 40 25
-nwpen 1.0 1.000000e+00
-pbeam 1e-48 1.000000e-48
-pip 1.0 1.000000e+00
-pl_beam 1e-10 1.000000e-10
-pl_pbeam 1e-10 1.000000e-10
-pl_pip 1.0 1.000000e+00
-pl_weight 3.0 3.000000e+00
-pl_window 5 5
-rawlogdir
-remove_dc no no
-remove_noise yes yes
-remove_silence yes yes
-round_filters yes yes
-samprate 16000 1.600000e+04
-seed -1 -1
-sendump /var/containers/Bundle/Application/E58A1D58-4D9E-4EDB-A381-43A9DF280DE4/app.app/AcousticModelEnglish.bundle/sendump
-senlogdir
-senmgau
-silprob 0.005 5.000000e-03
-smoothspec no no
-svspec 0-12/13-25/26-38
-tmat /var/containers/Bundle/Application/E58A1D58-4D9E-4EDB-A381-43A9DF280DE4/app.app/AcousticModelEnglish.bundle/transition_matrices
-tmatfloor 0.0001 1.000000e-04
-topn 4 4
-topn_beam 0 0
-toprule
-transform legacy dct
-unit_area yes yes
-upperf 6855.4976 6.800000e+03
-uw 1.0 1.000000e+00
-vad_postspeech 50 69
-vad_prespeech 20 10
-vad_startspeech 10 10
-vad_threshold 2.0 2.300000e+00
-var /var/containers/Bundle/Application/E58A1D58-4D9E-4EDB-A381-43A9DF280DE4/app.app/AcousticModelEnglish.bundle/variances
-varfloor 0.0001 1.000000e-04
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wbeam 7e-29 7.000000e-29
-wip 0.65 6.500000e-01
-wlen 0.025625 2.562500e-02
INFO: feat.c(715): Initializing feature stream to type: '1s_c_d_dd', ceplen=13, CMN='current', VARNORM='no', AGC='none'
INFO: cmn.c(143): mean[0]= 12.00, mean[1..12]= 0.0
INFO: acmod.c(164): Using subvector specification 0-12/13-25/26-38
INFO: mdef.c(518): Reading model definition: /var/containers/Bundle/Application/E58A1D58-4D9E-4EDB-A381-43A9DF280DE4/app.app/AcousticModelEnglish.bundle/mdef
INFO: mdef.c(531): Found byte-order mark BMDF, assuming this is a binary mdef file
INFO: bin_mdef.c(336): Reading binary model definition: /var/containers/Bundle/Application/E58A1D58-4D9E-4EDB-A381-43A9DF280DE4/app.app/AcousticModelEnglish.bundle/mdef
INFO: bin_mdef.c(516): 46 CI-phone, 168344 CD-phone, 3 emitstate/phone, 138 CI-sen, 6138 Sen, 32881 Sen-Seq
INFO: tmat.c(206): Reading HMM transition probability matrices: /var/containers/Bundle/Application/E58A1D58-4D9E-4EDB-A381-43A9DF280DE4/app.app/AcousticModelEnglish.bundle/transition_matrices
INFO: acmod.c(117): Attempting to use PTM computation module
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/E58A1D58-4D9E-4EDB-A381-43A9DF280DE4/app.app/AcousticModelEnglish.bundle/means
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/E58A1D58-4D9E-4EDB-A381-43A9DF280DE4/app.app/AcousticModelEnglish.bundle/variances
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(354): 0 variance values floored
INFO: ptm_mgau.c(805): Number of codebooks doesn’t match number of ciphones, doesn’t look like PTM: 1 != 46
INFO: acmod.c(119): Attempting to use semi-continuous computation module
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/E58A1D58-4D9E-4EDB-A381-43A9DF280DE4/app.app/AcousticModelEnglish.bundle/means
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/E58A1D58-4D9E-4EDB-A381-43A9DF280DE4/app.app/AcousticModelEnglish.bundle/variances
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(294): 512×13
INFO: ms_gauden.c(354): 0 variance values floored
INFO: s2_semi_mgau.c(904): Loading senones from dump file /var/containers/Bundle/Application/E58A1D58-4D9E-4EDB-A381-43A9DF280DE4/app.app/AcousticModelEnglish.bundle/sendump
INFO: s2_semi_mgau.c(928): BEGIN FILE FORMAT DESCRIPTION
INFO: s2_semi_mgau.c(991): Rows: 512, Columns: 6138
INFO: s2_semi_mgau.c(1023): Using memory-mapped I/O for senones
INFO: s2_semi_mgau.c(1294): Maximum top-N: 4 Top-N beams: 0 0 0
INFO: phone_loop_search.c(114): State beam -225 Phone exit beam -225 Insertion penalty 0
INFO: dict.c(320): Allocating 4111 * 32 bytes (128 KiB) for word entries
INFO: dict.c(333): Reading main dictionary: /var/mobile/Containers/Data/Application/DEFD9D47-1357-4B71-B4D9-A4466E520DB9/Library/Caches/OpenEarsLanguageModel.dic
INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
INFO: dict.c(336): 6 words read
INFO: dict.c(358): Reading filler dictionary: /var/containers/Bundle/Application/E58A1D58-4D9E-4EDB-A381-43A9DF280DE4/app.app/AcousticModelEnglish.bundle/noisedict
INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
INFO: dict.c(361): 9 words read
INFO: dict2pid.c(396): Building PID tables for dictionary
INFO: dict2pid.c(406): Allocating 46^3 * 2 bytes (190 KiB) for word-initial triphones
INFO: dict2pid.c(132): Allocated 51152 bytes (49 KiB) for word-final triphones
INFO: dict2pid.c(196): Allocated 51152 bytes (49 KiB) for single-phone word triphones
INFO: ngram_model_trie.c(424): Trying to read LM in bin format
INFO: ngram_model_trie.c(457): Header doesn’t match
INFO: ngram_model_trie.c(180): Trying to read LM in arpa format
INFO: ngram_model_trie.c(71): No \data\ mark in LM file
INFO: ngram_model_trie.c(537): Trying to read LM in DMP format
INFO: ngram_model_trie.c(632): ngrams 1=7, 2=10, 3=5
INFO: lm_trie.c(317): Training quantizer
INFO: lm_trie.c(323): Building LM trie
INFO: ngram_search_fwdtree.c(99): 5 unique initial diphones
INFO: ngram_search_fwdtree.c(148): 0 root, 0 non-root channels, 10 single-phone words
INFO: ngram_search_fwdtree.c(186): Creating search tree
INFO: ngram_search_fwdtree.c(192): before: 0 root, 0 non-root channels, 10 single-phone words
INFO: ngram_search_fwdtree.c(326): after: max nonroot chan increased to 138
INFO: ngram_search_fwdtree.c(339): after: 5 root, 10 non-root channels, 9 single-phone words
INFO: ngram_search_fwdflat.c(157): fwdflat: min_ef_width = 4, max_sf_win = 25
Pocketsphinx is now listening.
pocketsphinxRecognitionLoopDidStart
Pocketsphinx has detected speech.
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: cmn_prior.c(131): cmn_prior_update: from < 34.70 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 36.38 7.71 6.66 9.41 4.72 5.36 -0.34 4.59 6.15 11.65 1.91 -13.53 -9.78 >
INFO: ngram_search_fwdtree.c(1553): 644 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 5180 senones evaluated (64/fr)
INFO: ngram_search_fwdtree.c(1559): 1779 channels searched (21/fr), 385 1st, 1157 last
INFO: ngram_search_fwdtree.c(1562): 714 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 48 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 0.64 CPU 0.796 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 4.30 wall 5.314 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 3 words
INFO: ngram_search_fwdflat.c(948): 644 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 3411 senones evaluated (42/fr)
INFO: ngram_search_fwdflat.c(952): 1591 channels searched (19/fr)
INFO: ngram_search_fwdflat.c(954): 733 words searched (9/fr)
INFO: ngram_search_fwdflat.c(957): 88 word transitions (1/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.01 CPU 0.007 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.015 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.7
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 352 nodes, 15 links
INFO: ps_lattice.c(1380): Bestpath score: -43721
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:7:79) = -3057677
INFO: ps_lattice.c(1441): Joint P(O,S) = -3069072 P(S|O) = -11395
INFO: ngram_search.c(901): bestpath 0.00 CPU 0.003 xRT
INFO: ngram_search.c(904): bestpath 0.00 wall 0.005 xRT
[{
Hypothesis = “”;
Score = “-64809”;
}, {
Hypothesis = “”;
Score = “-809879”;
}, {
Hypothesis = “”;
Score = “-809899”;
}, {
Hypothesis = “”;
Score = “-809934”;
}, {
Hypothesis = “”;
Score = “-809936”;
}]
Pocketsphinx has detected speech.
INFO: cmn_prior.c(131): cmn_prior_update: from < 36.38 7.71 6.66 9.41 4.72 5.36 -0.34 4.59 6.15 11.65 1.91 -13.53 -9.78 >
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: cmn_prior.c(149): cmn_prior_update: to < 37.92 2.19 -2.86 3.62 3.16 1.43 -5.95 2.84 3.46 4.59 -3.50 -8.98 -3.48 >
INFO: ngram_search_fwdtree.c(1553): 652 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 5014 senones evaluated (60/fr)
INFO: ngram_search_fwdtree.c(1559): 1580 channels searched (19/fr), 395 1st, 974 last
INFO: ngram_search_fwdtree.c(1562): 724 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 40 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 0.98 CPU 1.175 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 5.01 wall 6.039 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 2 words
INFO: ngram_search_fwdflat.c(948): 652 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 1668 senones evaluated (20/fr)
INFO: ngram_search_fwdflat.c(952): 714 channels searched (8/fr)
INFO: ngram_search_fwdflat.c(954): 714 words searched (8/fr)
INFO: ngram_search_fwdflat.c(957): 56 word transitions (0/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.00 CPU 0.001 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.00 wall 0.003 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.4
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 382 nodes, 1 links
INFO: ps_lattice.c(1380): Bestpath score: -43226
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:4:81) = -3065590
INFO: ps_lattice.c(1441): Joint P(O,S) = -3065591 P(S|O) = -1
INFO: ngram_search.c(901): bestpath 0.00 CPU 0.003 xRT
INFO: ngram_search.c(904): bestpath 0.00 wall 0.001 xRT
[{
Hypothesis = “”;
Score = “-64314”;
}]
Pocketsphinx has detected speech.
INFO: cmn_prior.c(131): cmn_prior_update: from < 37.92 2.19 -2.86 3.62 3.16 1.43 -5.95 2.84 3.46 4.59 -3.50 -8.98 -3.48 >
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: cmn_prior.c(149): cmn_prior_update: to < 37.16 -0.57 -5.07 -0.67 -0.13 1.01 -5.50 3.13 6.79 7.10 -1.23 -12.24 -6.94 >
INFO: ngram_search_fwdtree.c(1553): 660 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 5005 senones evaluated (60/fr)
INFO: ngram_search_fwdtree.c(1559): 1543 channels searched (18/fr), 400 1st, 931 last
INFO: ngram_search_fwdtree.c(1562): 731 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 37 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 0.23 CPU 0.277 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 1.00 wall 1.190 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 2 words
INFO: ngram_search_fwdflat.c(948): 660 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 1689 senones evaluated (20/fr)
INFO: ngram_search_fwdflat.c(952): 723 channels searched (8/fr)
INFO: ngram_search_fwdflat.c(954): 723 words searched (8/fr)
INFO: ngram_search_fwdflat.c(957): 65 word transitions (0/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.00 CPU 0.001 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.00 wall 0.003 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.13
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 385 nodes, 58 links
INFO: ps_lattice.c(1380): Bestpath score: -44415
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:13:82) = -3053194
INFO: ps_lattice.c(1441): Joint P(O,S) = -3066103 P(S|O) = -12909
INFO: ngram_search.c(901): bestpath 0.00 CPU 0.001 xRT
INFO: ngram_search.c(904): bestpath 0.00 wall 0.001 xRT
[{
Hypothesis = “”;
Score = “-65600”;
}, {
Hypothesis = “”;
Score = “-810411”;
}, {
Hypothesis = “”;
Score = “-810600”;
}, {
Hypothesis = “”;
Score = “-810802”;
}, {
Hypothesis = “”;
Score = “-810860”;
}]
Pocketsphinx has detected speech.
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: cmn_prior.c(131): cmn_prior_update: from < 37.16 -0.57 -5.07 -0.67 -0.13 1.01 -5.50 3.13 6.79 7.10 -1.23 -12.24 -6.94 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 36.07 -1.21 -5.67 -1.60 -1.51 1.27 -3.78 2.47 5.58 5.97 -0.91 -10.95 -5.91 >
INFO: ngram_search_fwdtree.c(1553): 653 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 5154 senones evaluated (62/fr)
INFO: ngram_search_fwdtree.c(1559): 1685 channels searched (20/fr), 395 1st, 1075 last
INFO: ngram_search_fwdtree.c(1562): 728 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 41 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 1.49 CPU 1.799 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 7.96 wall 9.585 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 2 words
INFO: ngram_search_fwdflat.c(948): 653 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 1668 senones evaluated (20/fr)
INFO: ngram_search_fwdflat.c(952): 714 channels searched (8/fr)
INFO: ngram_search_fwdflat.c(954): 714 words searched (8/fr)
INFO: ngram_search_fwdflat.c(957): 56 word transitions (0/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.01 CPU 0.009 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.016 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.4
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 389 nodes, 1 links
INFO: ps_lattice.c(1380): Bestpath score: -43223
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:4:81) = -3059805
INFO: ps_lattice.c(1441): Joint P(O,S) = -3059806 P(S|O) = -1
INFO: ngram_search.c(901): bestpath 0.00 CPU 0.003 xRT
INFO: ngram_search.c(904): bestpath 0.00 wall 0.004 xRT
[{
Hypothesis = “”;
Score = “-64311”;
}]
Pocketsphinx has detected speech.
INFO: cmn_prior.c(131): cmn_prior_update: from < 36.07 -1.21 -5.67 -1.60 -1.51 1.27 -3.78 2.47 5.58 5.97 -0.91 -10.95 -5.91 >
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: cmn_prior.c(149): cmn_prior_update: to < 35.20 -1.54 -5.64 -1.70 -2.11 1.98 -2.39 2.45 6.07 5.67 -0.47 -10.81 -5.91 >
INFO: ngram_search_fwdtree.c(1553): 655 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 5230 senones evaluated (63/fr)
INFO: ngram_search_fwdtree.c(1559): 1741 channels searched (20/fr), 395 1st, 1125 last
INFO: ngram_search_fwdtree.c(1562): 730 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 44 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 0.25 CPU 0.299 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 1.40 wall 1.686 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 2 words
INFO: ngram_search_fwdflat.c(948): 652 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 1668 senones evaluated (20/fr)
INFO: ngram_search_fwdflat.c(952): 714 channels searched (8/fr)
INFO: ngram_search_fwdflat.c(954): 714 words searched (8/fr)
INFO: ngram_search_fwdflat.c(957): 58 word transitions (0/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.00 CPU 0.003 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.012 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.6
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 387 nodes, 15 links
INFO: ps_lattice.c(1380): Bestpath score: -43469
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:6:81) = -3057275
INFO: ps_lattice.c(1441): Joint P(O,S) = -3065796 P(S|O) = -8521
INFO: ngram_search.c(901): bestpath 0.01 CPU 0.010 xRT
INFO: ngram_search.c(904): bestpath 0.00 wall 0.004 xRT
[{
Hypothesis = “”;
Score = “-64557”;
}, {
Hypothesis = “”;
Score = “-809659”;
}, {
Hypothesis = “”;
Score = “-809741”;
}, {
Hypothesis = “”;
Score = “-809800”;
}, {
Hypothesis = “”;
Score = “-809825”;
}]
Pocketsphinx has detected speech.
INFO: cmn_prior.c(131): cmn_prior_update: from < 35.20 -1.54 -5.64 -1.70 -2.11 1.98 -2.39 2.45 6.07 5.67 -0.47 -10.81 -5.91 >
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: cmn_prior.c(149): cmn_prior_update: to < 35.02 -2.01 -6.85 -2.90 -3.60 1.88 -2.47 2.96 6.52 7.11 0.40 -11.53 -6.21 >
INFO: ngram_search_fwdtree.c(1553): 644 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 5162 senones evaluated (63/fr)
INFO: ngram_search_fwdtree.c(1559): 1717 channels searched (20/fr), 390 1st, 1116 last
INFO: ngram_search_fwdtree.c(1562): 721 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 45 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 1.01 CPU 1.232 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 4.46 wall 5.442 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 2 words
INFO: ngram_search_fwdflat.c(948): 644 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 1647 senones evaluated (20/fr)
INFO: ngram_search_fwdflat.c(952): 705 channels searched (8/fr)
INFO: ngram_search_fwdflat.c(954): 705 words searched (8/fr)
INFO: ngram_search_fwdflat.c(957): 57 word transitions (0/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.01 CPU 0.009 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.013 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.5
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 370 nodes, 1 links
INFO: ps_lattice.c(1380): Bestpath score: -43376
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:5:80) = -3057142
INFO: ps_lattice.c(1441): Joint P(O,S) = -3057143 P(S|O) = -1
INFO: ngram_search.c(901): bestpath 0.00 CPU 0.005 xRT
INFO: ngram_search.c(904): bestpath 0.00 wall 0.005 xRT
[{
Hypothesis = “”;
Score = “-64464”;
}]
Pocketsphinx has detected speech.
INFO: cmn_prior.c(131): cmn_prior_update: from < 35.02 -2.01 -6.85 -2.90 -3.60 1.88 -2.47 2.96 6.52 7.11 0.40 -11.53 -6.21 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 34.85 -2.29 -7.86 -3.16 -3.64 1.38 -3.15 2.34 5.94 5.66 -0.67 -9.98 -5.11 >
INFO: ngram_search_fwdtree.c(1553): 649 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 5289 senones evaluated (64/fr)
INFO: ngram_search_fwdtree.c(1559): 1809 channels searched (22/fr), 390 1st, 1191 last
INFO: ngram_search_fwdtree.c(1562): 724 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 46 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 0.39 CPU 0.472 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 2.00 wall 2.439 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 3 words
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: ngram_search_fwdflat.c(948): 648 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 3432 senones evaluated (42/fr)
INFO: ngram_search_fwdflat.c(952): 1600 channels searched (19/fr)
INFO: ngram_search_fwdflat.c(954): 742 words searched (9/fr)
INFO: ngram_search_fwdflat.c(957): 85 word transitions (1/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.02 CPU 0.022 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.014 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.4
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 367 nodes, 1 links
INFO: ps_lattice.c(1380): Bestpath score: -43079
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:4:80) = -3054685
INFO: ps_lattice.c(1441): Joint P(O,S) = -3054686 P(S|O) = -1
INFO: ngram_search.c(901): bestpath 0.00 CPU 0.002 xRT
INFO: ngram_search.c(904): bestpath 0.00 wall 0.004 xRT
[{
Hypothesis = “”;
Score = “-64167”;
}]
Pocketsphinx has detected speech.
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: cmn_prior.c(131): cmn_prior_update: from < 34.85 -2.29 -7.86 -3.16 -3.64 1.38 -3.15 2.34 5.94 5.66 -0.67 -9.98 -5.11 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 34.94 -2.75 -8.52 -3.87 -4.56 1.27 -3.59 2.58 6.96 6.54 -0.64 -11.64 -6.32 >
INFO: ngram_search_fwdtree.c(1553): 652 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 4908 senones evaluated (59/fr)
INFO: ngram_search_fwdtree.c(1559): 1498 channels searched (18/fr), 395 1st, 896 last
INFO: ngram_search_fwdtree.c(1562): 721 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 35 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 0.66 CPU 0.789 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 3.58 wall 4.318 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 2 words
INFO: ngram_search_fwdflat.c(948): 652 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 1668 senones evaluated (20/fr)
INFO: ngram_search_fwdflat.c(952): 714 channels searched (8/fr)
INFO: ngram_search_fwdflat.c(954): 714 words searched (8/fr)
INFO: ngram_search_fwdflat.c(957): 65 word transitions (0/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.01 CPU 0.007 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.010 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.13
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 386 nodes, 84 links
INFO: ps_lattice.c(1380): Bestpath score: -44381
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:13:81) = -3048563
INFO: ps_lattice.c(1441): Joint P(O,S) = -3062570 P(S|O) = -14007
INFO: ngram_search.c(901): bestpath 0.00 CPU 0.001 xRT
INFO: ngram_search.c(904): bestpath 0.00 wall 0.002 xRT
[{
Hypothesis = “”;
Score = “-65547”;
}, {
Hypothesis = “”;
Score = “-810377”;
}, {
Hypothesis = “”;
Score = “-810548”;
}, {
Hypothesis = “”;
Score = “-810766”;
}, {
Hypothesis = “”;
Score = “-810804”;
}]
Pocketsphinx has detected speech.
INFO: cmn_prior.c(131): cmn_prior_update: from < 34.94 -2.75 -8.52 -3.87 -4.56 1.27 -3.59 2.58 6.96 6.54 -0.64 -11.64 -6.32 >
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: cmn_prior.c(149): cmn_prior_update: to < 35.00 -2.22 -7.67 -3.04 -3.81 1.95 -3.37 3.87 8.19 7.31 -0.49 -12.30 -6.95 >
INFO: ngram_search_fwdtree.c(1553): 650 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 5194 senones evaluated (63/fr)
INFO: ngram_search_fwdtree.c(1559): 1734 channels searched (21/fr), 390 1st, 1116 last
INFO: ngram_search_fwdtree.c(1562): 723 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 46 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 2.22 CPU 2.706 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 9.16 wall 11.169 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 3 words
INFO: ngram_search_fwdflat.c(948): 649 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 3432 senones evaluated (42/fr)
INFO: ngram_search_fwdflat.c(952): 1600 channels searched (19/fr)
INFO: ngram_search_fwdflat.c(954): 742 words searched (9/fr)
INFO: ngram_search_fwdflat.c(957): 86 word transitions (1/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.01 CPU 0.010 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.012 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.5
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 363 nodes, 1 links
INFO: ps_lattice.c(1380): Bestpath score: -43299
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:5:80) = -3058576
INFO: ps_lattice.c(1441): Joint P(O,S) = -3058577 P(S|O) = -1
INFO: ngram_search.c(901): bestpath 0.00 CPU 0.004 xRT
INFO: ngram_search.c(904): bestpath 0.00 wall 0.005 xRT
[{
Hypothesis = “”;
Score = “-64387”;
}]
Pocketsphinx has detected speech.
INFO: cmn_prior.c(131): cmn_prior_update: from < 35.00 -2.22 -7.67 -3.04 -3.81 1.95 -3.37 3.87 8.19 7.31 -0.49 -12.30 -6.95 >
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: cmn_prior.c(149): cmn_prior_update: to < 40.67 -3.60 -8.77 -4.02 -4.03 1.88 -3.04 1.48 6.15 6.36 -0.67 -10.28 -6.39 >
INFO: ngram_search_fwdtree.c(1553): 1115 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 9786 senones evaluated (73/fr)
INFO: ngram_search_fwdtree.c(1559): 3716 channels searched (27/fr), 650 1st, 2601 last
INFO: ngram_search_fwdtree.c(1562): 1232 words for which last channels evaluated (9/fr)
INFO: ngram_search_fwdtree.c(1564): 114 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 0.76 CPU 0.567 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 2.34 wall 1.749 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 3 words
INFO: ngram_search_fwdflat.c(948): 1101 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 6460 senones evaluated (48/fr)
INFO: ngram_search_fwdflat.c(952): 3074 channels searched (22/fr)
INFO: ngram_search_fwdflat.c(954): 1247 words searched (9/fr)
INFO: ngram_search_fwdflat.c(957): 142 word transitions (1/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.02 CPU 0.012 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.02 wall 0.013 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.85
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 539 nodes, 2242 links
INFO: ps_lattice.c(1380): Bestpath score: -53444
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:85:132) = -3365036
INFO: ps_lattice.c(1441): Joint P(O,S) = -3411395 P(S|O) = -46359
INFO: ngram_search.c(901): bestpath 0.01 CPU 0.009 xRT
INFO: ngram_search.c(904): bestpath 0.01 wall 0.011 xRT
[{
Hypothesis = “”;
Score = “-74733”;
}, {
Hypothesis = “”;
Score = “-819440”;
}, {
Hypothesis = “”;
Score = “-819492”;
}, {
Hypothesis = “”;
Score = “-819701”;
}, {
Hypothesis = “”;
Score = “-819894”;
}]
Pocketsphinx has detected speech.
INFO: cmn_prior.c(131): cmn_prior_update: from < 40.67 -3.60 -8.77 -4.02 -4.03 1.88 -3.04 1.48 6.15 6.36 -0.67 -10.28 -6.39 >
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: cmn_prior.c(149): cmn_prior_update: to < 41.08 -4.19 -10.77 -3.79 -4.21 1.73 -2.36 2.45 5.92 5.87 -1.37 -9.78 -5.54 >
INFO: ngram_search_fwdtree.c(1553): 652 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 5112 senones evaluated (62/fr)
INFO: ngram_search_fwdtree.c(1559): 1657 channels searched (19/fr), 395 1st, 1051 last
INFO: ngram_search_fwdtree.c(1562): 727 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 43 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 0.70 CPU 0.839 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 4.04 wall 4.873 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 2 words
INFO: ngram_search_fwdflat.c(948): 652 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 1668 senones evaluated (20/fr)
INFO: ngram_search_fwdflat.c(952): 714 channels searched (8/fr)
INFO: ngram_search_fwdflat.c(954): 714 words searched (8/fr)
INFO: ngram_search_fwdflat.c(957): 56 word transitions (0/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.01 CPU 0.011 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.014 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.4
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 380 nodes, 1 links
INFO: ps_lattice.c(1380): Bestpath score: -43204
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:4:81) = -3069379
INFO: ps_lattice.c(1441): Joint P(O,S) = -3069380 P(S|O) = -1
INFO: ngram_search.c(901): bestpath 0.00 CPU 0.002 xRT
INFO: ngram_search.c(904): bestpath 0.00 wall 0.003 xRT
[{
Hypothesis = “”;
Score = “-64292”;
}]
Pocketsphinx has detected speech.
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: cmn_prior.c(131): cmn_prior_update: from < 41.08 -4.19 -10.77 -3.79 -4.21 1.73 -2.36 2.45 5.92 5.87 -1.37 -9.78 -5.54 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 40.66 -4.34 -11.17 -4.03 -4.50 1.66 -2.19 2.08 5.56 5.48 -1.01 -9.35 -5.37 >
INFO: ngram_search_fwdtree.c(1553): 655 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 5268 senones evaluated (63/fr)
INFO: ngram_search_fwdtree.c(1559): 1768 channels searched (21/fr), 395 1st, 1149 last
INFO: ngram_search_fwdtree.c(1562): 731 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 42 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 0.18 CPU 0.214 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 0.98 wall 1.185 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 2 words
INFO: ngram_search_fwdflat.c(948): 652 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 1668 senones evaluated (20/fr)
INFO: ngram_search_fwdflat.c(952): 714 channels searched (8/fr)
INFO: ngram_search_fwdflat.c(954): 714 words searched (8/fr)
INFO: ngram_search_fwdflat.c(957): 56 word transitions (0/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.00 CPU 0.000 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.00 wall 0.003 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.4
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 387 nodes, 1 links
INFO: ps_lattice.c(1380): Bestpath score: -43197
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:4:81) = -3063952
INFO: ps_lattice.c(1441): Joint P(O,S) = -3063953 P(S|O) = -1
INFO: ngram_search.c(901): bestpath -0.00 CPU -0.000 xRT
INFO: ngram_search.c(904): bestpath 0.00 wall 0.001 xRT
[{
Hypothesis = “”;
Score = “-64285”;
}]
Pocketsphinx has detected speech.
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: cmn_prior.c(131): cmn_prior_update: from < 40.66 -4.34 -11.17 -4.03 -4.50 1.66 -2.19 2.08 5.56 5.48 -1.01 -9.35 -5.37 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 39.82 -4.82 -10.83 -4.60 -5.01 0.83 -2.18 1.44 6.04 5.83 -1.12 -9.86 -5.63 >
INFO: ngram_search_fwdtree.c(1553): 1310 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 10231 senones evaluated (64/fr)
INFO: ngram_search_fwdtree.c(1559): 3343 channels searched (20/fr), 780 1st, 2080 last
INFO: ngram_search_fwdtree.c(1562): 1433 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 95 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 0.66 CPU 0.413 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 2.85 wall 1.783 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 3 words
INFO: ngram_search_fwdflat.c(948): 1307 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 6266 senones evaluated (39/fr)
INFO: ngram_search_fwdflat.c(952): 2923 channels searched (18/fr)
INFO: ngram_search_fwdflat.c(954): 1467 words searched (9/fr)
INFO: ngram_search_fwdflat.c(957): 126 word transitions (0/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.02 CPU 0.011 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.03 wall 0.016 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.80
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 765 nodes, 967 links
INFO: ps_lattice.c(1380): Bestpath score: -50681
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:80:158) = -3433494
INFO: ps_lattice.c(1441): Joint P(O,S) = -3446007 P(S|O) = -12513
INFO: ngram_search.c(901): bestpath 0.02 CPU 0.010 xRT
INFO: ngram_search.c(904): bestpath 0.01 wall 0.007 xRT
[{
Hypothesis = “”;
Score = “-71769”;
}, {
Hypothesis = “”;
Score = “-816729”;
}, {
Hypothesis = “”;
Score = “-817034”;
}, {
Hypothesis = “”;
Score = “-817092”;
}, {
Hypothesis = “”;
Score = “-817109”;
}]
Pocketsphinx has detected speech.
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: cmn_prior.c(131): cmn_prior_update: from < 39.82 -4.82 -10.83 -4.60 -5.01 0.83 -2.18 1.44 6.04 5.83 -1.12 -9.86 -5.63 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 39.55 -4.96 -10.89 -4.75 -5.40 0.66 -2.63 1.52 6.57 6.19 -0.95 -10.63 -6.14 >
INFO: ngram_search_fwdtree.c(1553): 652 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 4994 senones evaluated (60/fr)
INFO: ngram_search_fwdtree.c(1559): 1561 channels searched (18/fr), 395 1st, 955 last
INFO: ngram_search_fwdtree.c(1562): 724 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 39 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 1.91 CPU 2.307 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 9.30 wall 11.202 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 2 words
INFO: ngram_search_fwdflat.c(948): 652 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 1668 senones evaluated (20/fr)
INFO: ngram_search_fwdflat.c(952): 714 channels searched (8/fr)
INFO: ngram_search_fwdflat.c(954): 714 words searched (8/fr)
INFO: ngram_search_fwdflat.c(957): 56 word transitions (0/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.02 CPU 0.019 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.014 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.4
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 385 nodes, 1 links
INFO: ps_lattice.c(1380): Bestpath score: -43289
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:4:81) = -3060010
INFO: ps_lattice.c(1441): Joint P(O,S) = -3060010 P(S|O) = 0
INFO: ngram_search.c(901): bestpath 0.00 CPU 0.003 xRT
INFO: ngram_search.c(904): bestpath 0.00 wall 0.003 xRT
[{
Hypothesis = “”;
Score = “-64377”;
}]
Pocketsphinx has detected speech.
INFO: cmn_prior.c(131): cmn_prior_update: from < 39.55 -4.96 -10.89 -4.75 -5.40 0.66 -2.63 1.52 6.57 6.19 -0.95 -10.63 -6.14 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 39.29 -5.07 -10.92 -4.98 -5.61 0.66 -2.86 1.64 6.69 6.56 -0.65 -10.91 -6.40 >
INFO: ngram_search_fwdtree.c(1553): 656 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 5033 senones evaluated (61/fr)
INFO: ngram_search_fwdtree.c(1559): 1598 channels searched (19/fr), 395 1st, 1000 last
INFO: ngram_search_fwdtree.c(1562): 725 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 44 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 2.21 CPU 2.662 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 11.11 wall 13.390 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 2 words
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: ngram_search_fwdflat.c(948): 653 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 1668 senones evaluated (20/fr)
INFO: ngram_search_fwdflat.c(952): 714 channels searched (8/fr)
INFO: ngram_search_fwdflat.c(954): 714 words searched (8/fr)
INFO: ngram_search_fwdflat.c(957): 57 word transitions (0/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.02 CPU 0.019 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.015 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.5
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 390 nodes, 1 links
INFO: ps_lattice.c(1380): Bestpath score: -43328
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:5:81) = -3050947
INFO: ps_lattice.c(1441): Joint P(O,S) = -3050948 P(S|O) = -1
INFO: ngram_search.c(901): bestpath 0.00 CPU 0.001 xRT
INFO: ngram_search.c(904): bestpath 0.00 wall 0.003 xRT
[{
Hypothesis = “”;
Score = “-64416”;
}]
Pocketsphinx has detected speech.
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: cmn_prior.c(131): cmn_prior_update: from < 39.29 -5.07 -10.92 -4.98 -5.61 0.66 -2.86 1.64 6.69 6.56 -0.65 -10.91 -6.40 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 39.03 -5.14 -10.90 -5.16 -5.87 0.48 -2.93 1.53 6.58 6.50 -0.57 -10.99 -6.45 >
INFO: ngram_search_fwdtree.c(1553): 652 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 5044 senones evaluated (61/fr)
INFO: ngram_search_fwdtree.c(1559): 1604 channels searched (19/fr), 395 1st, 1000 last
INFO: ngram_search_fwdtree.c(1562): 725 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 42 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 0.96 CPU 1.160 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 3.94 wall 4.751 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 2 words
INFO: ngram_search_fwdflat.c(948): 652 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 1668 senones evaluated (20/fr)
INFO: ngram_search_fwdflat.c(952): 714 channels searched (8/fr)
INFO: ngram_search_fwdflat.c(954): 714 words searched (8/fr)
INFO: ngram_search_fwdflat.c(957): 56 word transitions (0/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.01 CPU 0.007 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.012 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.4
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 383 nodes, 1 links
INFO: ps_lattice.c(1380): Bestpath score: -43261
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:4:81) = -3059753
INFO: ps_lattice.c(1441): Joint P(O,S) = -3059754 P(S|O) = -1
INFO: ngram_search.c(901): bestpath 0.00 CPU 0.003 xRT
INFO: ngram_search.c(904): bestpath 0.01 wall 0.007 xRT
[{
Hypothesis = “”;
Score = “-64349”;
}]
Pocketsphinx has detected speech.
INFO: cmn_prior.c(131): cmn_prior_update: from < 39.03 -5.14 -10.90 -5.16 -5.87 0.48 -2.93 1.53 6.58 6.50 -0.57 -10.99 -6.45 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 38.74 -5.09 -10.78 -5.34 -6.26 0.21 -2.63 1.57 6.51 6.35 -0.38 -10.86 -6.29 >
INFO: ngram_search_fwdtree.c(1553): 653 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 4814 senones evaluated (58/fr)
INFO: ngram_search_fwdtree.c(1559): 1428 channels searched (17/fr), 395 1st, 844 last
INFO: ngram_search_fwdtree.c(1562): 719 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 33 candidate words for entering last phone (0/fr)
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: ngram_search_fwdtree.c(1567): fwdtree 3.41 CPU 4.106 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 13.43 wall 16.185 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 2 words
INFO: ngram_search_fwdflat.c(948): 653 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 1668 senones evaluated (20/fr)
INFO: ngram_search_fwdflat.c(952): 714 channels searched (8/fr)
INFO: ngram_search_fwdflat.c(954): 714 words searched (8/fr)
INFO: ngram_search_fwdflat.c(957): 56 word transitions (0/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.01 CPU 0.007 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.010 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.4
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 398 nodes, 1 links
INFO: ps_lattice.c(1380): Bestpath score: -43239
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:4:81) = -3048643
INFO: ps_lattice.c(1441): Joint P(O,S) = -3048644 P(S|O) = -1
INFO: ngram_search.c(901): bestpath 0.00 CPU 0.006 xRT
INFO: ngram_search.c(904): bestpath 0.00 wall 0.004 xRT
[{
Hypothesis = “”;
Score = “-64327”;
}]
Pocketsphinx has detected speech.
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: cmn_prior.c(131): cmn_prior_update: from < 38.74 -5.09 -10.78 -5.34 -6.26 0.21 -2.63 1.57 6.51 6.35 -0.38 -10.86 -6.29 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 38.18 -5.07 -10.74 -5.63 -6.44 0.13 -2.59 1.29 6.42 6.12 -0.42 -10.97 -6.23 >
INFO: ngram_search_fwdtree.c(1553): 1209 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 9762 senones evaluated (66/fr)
INFO: ngram_search_fwdtree.c(1559): 3346 channels searched (22/fr), 715 1st, 2164 last
INFO: ngram_search_fwdtree.c(1562): 1324 words for which last channels evaluated (9/fr)
INFO: ngram_search_fwdtree.c(1564): 98 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 4.97 CPU 3.382 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 21.85 wall 14.862 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 2 words
INFO: ngram_search_fwdflat.c(948): 1203 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 3012 senones evaluated (20/fr)
INFO: ngram_search_fwdflat.c(952): 1290 channels searched (8/fr)
INFO: ngram_search_fwdflat.c(954): 1290 words searched (8/fr)
INFO: ngram_search_fwdflat.c(957): 76 word transitions (0/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.01 CPU 0.010 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.02 wall 0.011 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.68
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 719 nodes, 837 links
INFO: ps_lattice.c(1380): Bestpath score: -49774
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:68:145) = -3377392
INFO: ps_lattice.c(1441): Joint P(O,S) = -3389790 P(S|O) = -12398
INFO: ngram_search.c(901): bestpath 0.01 CPU 0.007 xRT
INFO: ngram_search.c(904): bestpath 0.01 wall 0.008 xRT
[{
Hypothesis = “”;
Score = “-70862”;
}, {
Hypothesis = “”;
Score = “-815842”;
}, {
Hypothesis = “”;
Score = “-816134”;
}, {
Hypothesis = “”;
Score = “-816178”;
}, {
Hypothesis = “”;
Score = “-816185”;
}]
Pocketsphinx has detected speech.
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: cmn_prior.c(131): cmn_prior_update: from < 38.18 -5.07 -10.74 -5.63 -6.44 0.13 -2.59 1.29 6.42 6.12 -0.42 -10.97 -6.23 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 37.98 -5.11 -10.77 -5.82 -6.55 0.06 -2.81 1.42 6.64 6.46 -0.33 -11.49 -6.46 >
INFO: ngram_search_fwdtree.c(1553): 657 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 5149 senones evaluated (62/fr)
INFO: ngram_search_fwdtree.c(1559): 1677 channels searched (20/fr), 395 1st, 1050 last
INFO: ngram_search_fwdtree.c(1562): 727 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 45 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 2.57 CPU 3.097 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 8.94 wall 10.776 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 2 words
INFO: ngram_search_fwdflat.c(948): 653 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 1668 senones evaluated (20/fr)
INFO: ngram_search_fwdflat.c(952): 714 channels searched (8/fr)
INFO: ngram_search_fwdflat.c(954): 714 words searched (8/fr)
INFO: ngram_search_fwdflat.c(957): 57 word transitions (0/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.01 CPU 0.007 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.010 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.5
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 369 nodes, 1 links
INFO: ps_lattice.c(1380): Bestpath score: -43402
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:5:81) = -3070455
INFO: ps_lattice.c(1441): Joint P(O,S) = -3070456 P(S|O) = -1
INFO: ngram_search.c(901): bestpath 0.00 CPU 0.002 xRT
INFO: ngram_search.c(904): bestpath 0.00 wall 0.001 xRT
[{
Hypothesis = “”;
Score = “-64490”;
}]
Pocketsphinx has detected speech.
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: cmn_prior.c(131): cmn_prior_update: from < 37.98 -5.11 -10.77 -5.82 -6.55 0.06 -2.81 1.42 6.64 6.46 -0.33 -11.49 -6.46 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 39.48 -4.45 -10.18 -5.60 -6.34 -0.35 -2.77 1.22 6.90 6.20 -0.45 -11.17 -6.21 >
INFO: ngram_search_fwdtree.c(1553): 652 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 5675 senones evaluated (69/fr)
INFO: ngram_search_fwdtree.c(1559): 2083 channels searched (25/fr), 390 1st, 1468 last
INFO: ngram_search_fwdtree.c(1562): 736 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 51 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 1.38 CPU 1.679 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 6.99 wall 8.528 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 3 words
INFO: ngram_search_fwdflat.c(948): 649 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 3536 senones evaluated (43/fr)
INFO: ngram_search_fwdflat.c(952): 1654 channels searched (20/fr)
INFO: ngram_search_fwdflat.c(954): 744 words searched (9/fr)
INFO: ngram_search_fwdflat.c(957): 91 word transitions (1/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.01 CPU 0.011 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.014 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.7
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 378 nodes, 15 links
INFO: ps_lattice.c(1380): Bestpath score: -43770
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:7:80) = -3078404
INFO: ps_lattice.c(1441): Joint P(O,S) = -3088785 P(S|O) = -10381
INFO: ngram_search.c(901): bestpath 0.00 CPU 0.005 xRT
INFO: ngram_search.c(904): bestpath 0.00 wall 0.004 xRT
[{
Hypothesis = “”;
Score = “-64858”;
}, {
Hypothesis = “”;
Score = “-809924”;
}, {
Hypothesis = “”;
Score = “-810022”;
}, {
Hypothesis = “”;
Score = “-810025”;
}, {
Hypothesis = “”;
Score = “-810045”;
}]
Pocketsphinx has detected speech.
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: cmn_prior.c(131): cmn_prior_update: from < 39.48 -4.45 -10.18 -5.60 -6.34 -0.35 -2.77 1.22 6.90 6.20 -0.45 -11.17 -6.21 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 40.96 -3.56 -9.97 -5.82 -6.21 -0.85 -2.57 1.40 6.88 6.02 -0.60 -10.60 -5.80 >
INFO: ngram_search_fwdtree.c(1553): 675 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 5860 senones evaluated (70/fr)
INFO: ngram_search_fwdtree.c(1559): 2154 channels searched (25/fr), 400 1st, 1499 last
INFO: ngram_search_fwdtree.c(1562): 755 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 58 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 1.31 CPU 1.557 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 4.47 wall 5.322 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 3 words
INFO: ngram_search_fwdflat.c(948): 672 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 3578 senones evaluated (43/fr)
INFO: ngram_search_fwdflat.c(952): 1672 channels searched (19/fr)
INFO: ngram_search_fwdflat.c(954): 762 words searched (9/fr)
INFO: ngram_search_fwdflat.c(957): 89 word transitions (1/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.01 CPU 0.010 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.015 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.6
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 372 nodes, 15 links
INFO: ps_lattice.c(1380): Bestpath score: -43580
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:6:82) = -3087179
INFO: ps_lattice.c(1441): Joint P(O,S) = -3096669 P(S|O) = -9490
INFO: ngram_search.c(901): bestpath 0.00 CPU 0.003 xRT
INFO: ngram_search.c(904): bestpath 0.01 wall 0.006 xRT
[{
Hypothesis = “”;
Score = “-64668”;
}, {
Hypothesis = “”;
Score = “-809781”;
}, {
Hypothesis = “”;
Score = “-809844”;
}, {
Hypothesis = “”;
Score = “-809845”;
}, {
Hypothesis = “”;
Score = “-809888”;
}]
Pocketsphinx has detected speech.
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: cmn_prior.c(131): cmn_prior_update: from < 40.96 -3.56 -9.97 -5.82 -6.21 -0.85 -2.57 1.40 6.88 6.02 -0.60 -10.60 -5.80 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 41.17 -2.54 -9.17 -5.42 -6.24 -0.90 -2.79 1.18 6.45 5.72 -0.63 -10.30 -5.63 >
INFO: ngram_search_fwdtree.c(1553): 646 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 5244 senones evaluated (65/fr)
INFO: ngram_search_fwdtree.c(1559): 1803 channels searched (22/fr), 385 1st, 1182 last
INFO: ngram_search_fwdtree.c(1562): 715 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 42 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 0.43 CPU 0.529 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 2.18 wall 2.693 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 3 words
INFO: ngram_search_fwdflat.c(948): 643 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 3411 senones evaluated (42/fr)
INFO: ngram_search_fwdflat.c(952): 1591 channels searched (19/fr)
INFO: ngram_search_fwdflat.c(954): 733 words searched (9/fr)
INFO: ngram_search_fwdflat.c(957): 86 word transitions (1/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.01 CPU 0.018 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.016 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.5
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 361 nodes, 1 links
INFO: ps_lattice.c(1380): Bestpath score: -43360
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:5:79) = -3069533
INFO: ps_lattice.c(1441): Joint P(O,S) = -3069533 P(S|O) = 0
INFO: ngram_search.c(901): bestpath 0.00 CPU 0.005 xRT
INFO: ngram_search.c(904): bestpath 0.00 wall 0.002 xRT
[{
Hypothesis = “”;
Score = “-64448”;
}]
Pocketsphinx has detected speech.
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: cmn_prior.c(131): cmn_prior_update: from < 41.17 -2.54 -9.17 -5.42 -6.24 -0.90 -2.79 1.18 6.45 5.72 -0.63 -10.30 -5.63 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 41.49 -1.33 -8.32 -5.13 -6.08 -0.88 -2.81 1.03 6.10 5.27 -0.67 -10.01 -5.42 >
INFO: ngram_search_fwdtree.c(1553): 669 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 5497 senones evaluated (65/fr)
INFO: ngram_search_fwdtree.c(1559): 1904 channels searched (22/fr), 400 1st, 1260 last
INFO: ngram_search_fwdtree.c(1562): 744 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 49 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 3.04 CPU 3.614 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 9.73 wall 11.583 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 3 words
INFO: ngram_search_fwdflat.c(948): 668 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 3474 senones evaluated (41/fr)
INFO: ngram_search_fwdflat.c(952): 1618 channels searched (19/fr)
INFO: ngram_search_fwdflat.c(954): 760 words searched (9/fr)
INFO: ngram_search_fwdflat.c(957): 88 word transitions (1/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.01 CPU 0.010 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.013 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.7
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 390 nodes, 15 links
INFO: ps_lattice.c(1380): Bestpath score: -43655
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:7:82) = -3066640
INFO: ps_lattice.c(1441): Joint P(O,S) = -3076701 P(S|O) = -10061
INFO: ngram_search.c(901): bestpath 0.01 CPU 0.012 xRT
INFO: ngram_search.c(904): bestpath 0.01 wall 0.010 xRT
[{
Hypothesis = “”;
Score = “-64743”;
}, {
Hypothesis = “”;
Score = “-809820”;
}, {
Hypothesis = “”;
Score = “-809900”;
}, {
Hypothesis = “”;
Score = “-809911”;
}, {
Hypothesis = “”;
Score = “-809949”;
}]
Pocketsphinx has detected speech.
INFO: cmn_prior.c(131): cmn_prior_update: from < 41.49 -1.33 -8.32 -5.13 -6.08 -0.88 -2.81 1.03 6.10 5.27 -0.67 -10.01 -5.42 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 42.40 -1.49 -8.69 -5.08 -5.72 -0.58 -2.96 1.15 5.99 5.08 -0.75 -9.53 -5.08 >
INFO: ngram_search_fwdtree.c(1553): 672 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 5504 senones evaluated (65/fr)
INFO: ngram_search_fwdtree.c(1559): 1888 channels searched (22/fr), 405 1st, 1244 last
INFO: ngram_search_fwdtree.c(1562): 752 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 51 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 1.08 CPU 1.272 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 3.93 wall 4.627 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 2 words
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: ngram_search_fwdflat.c(948): 669 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 1710 senones evaluated (20/fr)
INFO: ngram_search_fwdflat.c(952): 732 channels searched (8/fr)
INFO: ngram_search_fwdflat.c(954): 732 words searched (8/fr)
INFO: ngram_search_fwdflat.c(957): 56 word transitions (0/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.01 CPU 0.014 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.006 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.4
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 400 nodes, 1 links
INFO: ps_lattice.c(1380): Bestpath score: -43125
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:4:83) = -3089245
INFO: ps_lattice.c(1441): Joint P(O,S) = -3089245 P(S|O) = 0
INFO: ngram_search.c(901): bestpath 0.00 CPU 0.001 xRT
INFO: ngram_search.c(904): bestpath 0.00 wall 0.001 xRT
[{
Hypothesis = “”;
Score = “-64213”;
}]
Pocketsphinx has detected speech.
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: cmn_prior.c(131): cmn_prior_update: from < 42.40 -1.49 -8.69 -5.08 -5.72 -0.58 -2.96 1.15 5.99 5.08 -0.75 -9.53 -5.08 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 42.16 -1.72 -8.83 -5.26 -5.89 -0.61 -3.01 1.27 6.30 5.30 -0.70 -10.07 -5.48 >
INFO: ngram_search_fwdtree.c(1553): 643 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 4778 senones evaluated (58/fr)
INFO: ngram_search_fwdtree.c(1559): 1426 channels searched (17/fr), 390 1st, 835 last
INFO: ngram_search_fwdtree.c(1562): 710 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 39 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 0.31 CPU 0.379 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 1.66 wall 2.023 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 2 words
INFO: ngram_search_fwdflat.c(948): 643 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 1647 senones evaluated (20/fr)
INFO: ngram_search_fwdflat.c(952): 705 channels searched (8/fr)
INFO: ngram_search_fwdflat.c(954): 705 words searched (8/fr)
INFO: ngram_search_fwdflat.c(957): 55 word transitions (0/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.01 CPU 0.010 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.012 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.3
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 389 nodes, 1 links
INFO: ps_lattice.c(1380): Bestpath score: -43035
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:3:80) = -3044496
INFO: ps_lattice.c(1441): Joint P(O,S) = -3044497 P(S|O) = -1
INFO: ngram_search.c(901): bestpath 0.00 CPU 0.003 xRT
INFO: ngram_search.c(904): bestpath 0.00 wall 0.004 xRT
[{
Hypothesis = “”;
Score = “-64123”;
}]
Pocketsphinx has detected speech.
Pocketsphinx has detected a period of silence, concluding an utterance.
INFO: cmn_prior.c(131): cmn_prior_update: from < 42.16 -1.72 -8.83 -5.26 -5.89 -0.61 -3.01 1.27 6.30 5.30 -0.70 -10.07 -5.48 >
INFO: cmn_prior.c(149): cmn_prior_update: to < 42.26 -1.68 -9.20 -5.23 -5.97 -0.66 -2.93 1.13 6.06 5.41 -0.51 -10.04 -5.44 >
INFO: ngram_search_fwdtree.c(1553): 633 words recognized (8/fr)
INFO: ngram_search_fwdtree.c(1555): 5321 senones evaluated (67/fr)
INFO: ngram_search_fwdtree.c(1559): 1885 channels searched (23/fr), 380 1st, 1272 last
INFO: ngram_search_fwdtree.c(1562): 710 words for which last channels evaluated (8/fr)
INFO: ngram_search_fwdtree.c(1564): 49 candidate words for entering last phone (0/fr)
INFO: ngram_search_fwdtree.c(1567): fwdtree 0.77 CPU 0.960 xRT
INFO: ngram_search_fwdtree.c(1570): fwdtree 3.58 wall 4.470 xRT
INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 3 words
INFO: ngram_search_fwdflat.c(948): 630 words recognized (8/fr)
INFO: ngram_search_fwdflat.c(950): 3338 senones evaluated (42/fr)
INFO: ngram_search_fwdflat.c(952): 1555 channels searched (19/fr)
INFO: ngram_search_fwdflat.c(954): 723 words searched (9/fr)
INFO: ngram_search_fwdflat.c(957): 92 word transitions (1/fr)
INFO: ngram_search_fwdflat.c(960): fwdflat 0.01 CPU 0.008 xRT
INFO: ngram_search_fwdflat.c(963): fwdflat 0.01 wall 0.015 xRT
INFO: ngram_search.c(1290): lattice start node <s>.0 end node </s>.11
INFO: ngram_search.c(1320): Eliminated 5 nodes before end node
INFO: ngram_search.c(1445): Lattice has 354 nodes, 46 links
INFO: ps_lattice.c(1380): Bestpath score: -2606
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(</s>:11:78) = 1464832
INFO: ps_lattice.c(1441): Joint P(O,S) = 1464832 P(S|O) = 0
INFO: ngram_search.c(901): bestpath 0.00 CPU 0.002 xRT
INFO: ngram_search.c(904): bestpath 0.00 wall 0.003 xRT
INFO: ngram_search_fwdtree.c(432): TOTAL fwdtree 34.50 CPU 1.489 xRT
INFO: ngram_search_fwdtree.c(435): TOTAL fwdtree 150.22 wall 6.483 xRT
INFO: ngram_search_fwdflat.c(176): TOTAL fwdflat 0.23 CPU 0.010 xRT
INFO: ngram_search_fwdflat.c(179): TOTAL fwdflat 0.28 wall 0.012 xRT
INFO: ngram_search.c(308): TOTAL bestpath 0.10 CPU 0.004 xRT
INFO: ngram_search.c(311): TOTAL bestpath 0.11 wall 0.005 xRT
recognizedSpeech
Heard: NO
[{
Hypothesis = NO;
Score = “-23694”;
}, {
Hypothesis = “”;
Score = “-65714”;
}, {
Hypothesis = “”;
Score = “-810687”;
}, {
Hypothesis = “”;
Score = “-810715”;
}, {
Hypothesis = “”;
Score = “-810753”;
}]
Pocketsphinx has stopped listening.

June 29, 2016 at 9:52 pm #1030637
Halle Winkler
Politepix

Welcome,
Why are the hypotheses in an array? Your logging has verbosePocketsphinx (or maybe verboseRapidEars) turned on, but to troubleshoot an implementation issue it is necessary to also turn on OELogging and post the entire app session output. Take a look here: https://www.politepix.com/forums/topic/install-issues-and-their-solutions/
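For anyone following along, enabling both kinds of logging before starting the listening session looks roughly like this. This is a configuration sketch assuming the standard OpenEars 2.x headers and the documented OELogging/OEPocketsphinxController calls, not a complete implementation:

```objc
#import <OpenEars/OELogging.h>
#import <OpenEars/OEPocketsphinxController.h>

// Turn on framework-level logging (OELogging) in addition to the
// decoder-level verbosePocketsphinx output, before starting the
// listening session, so the full app session can be posted.
[OELogging startOpenEarsLogging];
[OEPocketsphinxController sharedInstance].verbosePocketsphinx = TRUE;
// ...then generate the language model and call
// startListeningWithLanguageModelAtPath: as usual.
```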
June 30, 2016 at 4:05 pm #1030639
bhavin
Participant

Thank you for your quick response.
Sorry about that. I turned on OELogging and turned verbose off for now. The log shows there seems to be an issue with audio routing through the Bluetooth headset. Finally it picked up the word ‘NO’ by default, I guess. Can you please help me out?
Thank you once again.

Record Permission Granted. Starting.
2016-06-29 17:08:03.875 app[4002:1552350] Creating shared instance of OEPocketsphinxController
2016-06-29 17:08:03.878 app[4002:1552350] Attempting to start listening session from startListeningWithLanguageModelAtPath:
2016-06-29 17:08:03.880 app[4002:1552350] User gave mic permission for this app.
2016-06-29 17:08:03.881 app[4002:1552350] setSecondsOfSilence wasn’t set, using default of 0.700000.
2016-06-29 17:08:03.884 app[4002:1552458] Starting listening.
2016-06-29 17:08:03.884 app[4002:1552458] About to set up audio session
2016-06-29 17:08:04.290 app[4002:1552458] Creating audio session with default settings.
2016-06-29 17:08:04.290 app[4002:1552458] Done setting audio session category.
2016-06-29 17:08:04.291 app[4002:1552458] Sample rate is already the preferred rate of 16000.000000 so not setting it.
2016-06-29 17:08:04.293 app[4002:1552458] number of channels is already the preferred number of 1 so not setting it.
2016-06-29 17:08:04.292 app[4002:1552439] Audio route has changed for the following reason:
2016-06-29 17:08:04.298 app[4002:1552458] Done setting session’s preferred I/O buffer duration to 0.128000 – now the actual buffer duration is 0.128000
2016-06-29 17:08:04.300 app[4002:1552458] Done setting up audio session
2016-06-29 17:08:04.298 app[4002:1552439] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-06-29 17:08:04.302 app[4002:1552458] About to set up audio IO unit in a session with a sample rate of 16000.000000, a channel number of 1 and a buffer duration of 0.128000.
2016-06-29 17:08:04.316 app[4002:1552439] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “BluetoothHFP”. Output route or routes: “BluetoothHFP”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x12dfa8c30,
inputs = (
“<AVAudioSessionPortDescription: 0x130543950, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Front>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x1302dee70, type = BluetoothA2DPOutput; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tacl; selectedDataSource = (null)>”
)>”.
2016-06-29 17:08:04.320 app[4002:1552458] Done setting up audio unit
2016-06-29 17:08:04.321 app[4002:1552458] About to start audio IO unit
2016-06-29 17:08:04.321 app[4002:1552439] Audio route has changed for the following reason:
2016-06-29 17:08:04.766 app[4002:1552722] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-06-29 17:08:04.893 app[4002:1552722] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-06-29 17:08:05.021 app[4002:1552722] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-06-29 17:08:05.150 app[4002:1552722] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-06-29 17:08:05.278 app[4002:1552722] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-06-29 17:08:05.406 app[4002:1552722] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-06-29 17:08:05.533 app[4002:1552722] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-06-29 17:08:05.662 app[4002:1552722] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-06-29 17:08:05.789 app[4002:1552722] Audio Unit render error: kAudioUnitErr_CannotDoInCurrentContext
2016-06-29 17:08:05.797 app[4002:1552458] Done starting audio unit
2016-06-29 17:08:05.800 app[4002:1552439] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-06-29 17:08:05.833 app[4002:1552439] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “BluetoothHFP”. Output route or routes: “BluetoothHFP”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x12f2ec850,
inputs = (
“<AVAudioSessionPortDescription: 0x12df41640, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Bottom>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x12f231a00, type = Receiver; name = Receiver; UID = Built-In Receiver; selectedDataSource = (null)>”
)>”.
2016-06-29 17:08:05.918 app[4002:1552458] Restoring SmartCMN value of 31.968506
2016-06-29 17:08:05.918 app[4002:1552458] Listening.
2016-06-29 17:08:05.918 app[4002:1552458] Project has these words or phrases in its dictionary:
NO
OK
REPEAT
REPEAT(2)
REPLAY
YES
2016-06-29 17:08:05.919 app[4002:1552458] Recognition loop has started
2016-06-29 17:08:05.919 app[4002:1552350] Successfully started listening session from startListeningWithLanguageModelAtPath:
Pocketsphinx is now listening.
pocketsphinxRecognitionLoopDidStart
2016-06-29 17:08:08.877 app[4002:1552429] Speech detected…
Pocketsphinx has detected speech.
2016-06-29 17:08:09.582 app[4002:1552604] End of speech detected…
Pocketsphinx has detected a period of silence, concluding an utterance.
2016-06-29 17:08:09.599 app[4002:1552604] Pocketsphinx heard “” with a score of (0) and an utterance ID of 0.
2016-06-29 17:08:09.602 app[4002:1552604] Hypothesis was null so we aren’t returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController’s property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
2016-06-29 17:08:14.417 app[4002:1552604] Speech detected…
Pocketsphinx has detected speech.
2016-06-29 17:08:15.158 app[4002:1552604] End of speech detected…
Pocketsphinx has detected a period of silence, concluding an utterance.
2016-06-29 17:08:15.172 app[4002:1552604] Pocketsphinx heard “” with a score of (-9021) and an utterance ID of 1.
2016-06-29 17:08:15.172 app[4002:1552604] Hypothesis was null so we aren’t returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController’s property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
2016-06-29 17:08:16.638 app[4002:1552604] Speech detected…
Pocketsphinx has detected speech.
2016-06-29 17:08:17.373 app[4002:1552604] End of speech detected…
Pocketsphinx has detected a period of silence, concluding an utterance.
2016-06-29 17:08:17.391 app[4002:1552604] Pocketsphinx heard “” with a score of (-15018) and an utterance ID of 2.
2016-06-29 17:08:17.392 app[4002:1552604] Hypothesis was null so we aren’t returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController’s property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
2016-06-29 17:08:25.405 app[4002:1552429] Speech detected…
Pocketsphinx has detected speech.
2016-06-29 17:08:26.163 app[4002:1552604] End of speech detected…
Pocketsphinx has detected a period of silence, concluding an utterance.
2016-06-29 17:08:26.166 app[4002:1552604] Pocketsphinx heard “” with a score of (-11047) and an utterance ID of 3.
2016-06-29 17:08:26.166 app[4002:1552604] Hypothesis was null so we aren’t returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController’s property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
2016-06-29 17:08:26.442 app[4002:1552429] Speech detected…
Pocketsphinx has detected speech.
2016-06-29 17:08:27.194 app[4002:1552429] End of speech detected…
Pocketsphinx has detected a period of silence, concluding an utterance.
2016-06-29 17:08:27.208 app[4002:1552429] Pocketsphinx heard “” with a score of (-1) and an utterance ID of 4.
2016-06-29 17:08:27.208 app[4002:1552429] Hypothesis was null so we aren’t returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController’s property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
2016-06-29 17:08:29.757 app[4002:1552604] Speech detected…
Pocketsphinx has detected speech.
2016-06-29 17:08:30.562 app[4002:1552604] End of speech detected…
Pocketsphinx has detected a period of silence, concluding an utterance.
2016-06-29 17:08:30.574 app[4002:1552604] Pocketsphinx heard “” with a score of (0) and an utterance ID of 5.
2016-06-29 17:08:30.575 app[4002:1552604] Hypothesis was null so we aren’t returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController’s property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
2016-06-29 17:08:41.716 app[4002:1552429] Speech detected…
Pocketsphinx has detected speech.
2016-06-29 17:08:42.431 app[4002:1552429] End of speech detected…
Pocketsphinx has detected a period of silence, concluding an utterance.
2016-06-29 17:08:42.445 app[4002:1552429] Pocketsphinx heard “” with a score of (-1) and an utterance ID of 6.
2016-06-29 17:08:42.446 app[4002:1552429] Hypothesis was null so we aren’t returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController’s property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
2016-06-29 17:09:12.181 app[4002:1552604] Speech detected…
Pocketsphinx has detected speech.
2016-06-29 17:09:12.945 app[4002:1553034] End of speech detected…
Pocketsphinx has detected a period of silence, concluding an utterance.
2016-06-29 17:09:12.949 app[4002:1553034] Pocketsphinx heard “” with a score of (-1) and an utterance ID of 7.
2016-06-29 17:09:12.949 app[4002:1553034] Hypothesis was null so we aren’t returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController’s property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
2016-06-29 17:09:16.147 app[4002:1552873] Speech detected…
Pocketsphinx has detected speech.
2016-06-29 17:09:16.912 app[4002:1552873] End of speech detected…
Pocketsphinx has detected a period of silence, concluding an utterance.
2016-06-29 17:09:16.922 app[4002:1552873] Pocketsphinx heard “” with a score of (-1) and an utterance ID of 8.
2016-06-29 17:09:16.924 app[4002:1552873] Hypothesis was null so we aren’t returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController’s property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
2016-06-29 17:09:23.559 app[4002:1552439] Audio route has changed for the following reason:
2016-06-29 17:09:23.566 app[4002:1552439] There was a route override.
2016-06-29 17:09:23.574 app[4002:1552439] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “MicrophoneBuiltIn”. Output route or routes: “Receiver”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x130500960,
inputs = (
“<AVAudioSessionPortDescription: 0x12f2c8be0, type = BluetoothHFP; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tsco; selectedDataSource = (null)>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x12f2e9460, type = BluetoothHFP; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tsco; selectedDataSource = (null)>”
)>”.
2016-06-29 17:09:23.839 app[4002:1553016] Speech detected…
Pocketsphinx has detected speech.
2016-06-29 17:09:26.884 app[4002:1552873] End of speech detected…
Pocketsphinx has detected a period of silence, concluding an utterance.
2016-06-29 17:09:27.044 app[4002:1552873] Pocketsphinx heard “NO” with a score of (-229438) and an utterance ID of 9.
2016-06-29 17:09:27.046 app[4002:1552350] Stopping listening.
2016-06-29 17:09:27.472 app[4002:1552350] No longer listening.
recognizedSpeech
Heard: NO
Pocketsphinx has stopped listening.
June 30, 2016 at 4:06 pm #1030640bhavinParticipant
I forgot to mention: I had turned n-best hypotheses on, so the result was returned in an array.
June 30, 2016 at 4:57 pm #1030641Halle WinklerPolitepix
Hello,
OK, I’d recommend troubleshooting basic functionality issues like this without n-best, and turning it on at the end once everything else is working, and only if you discover in testing that it helps the end user in some way. It’s relatively uncommon for it to be used in shipping applications.
If you have future questions, make sure not to edit anything out of the OELogging output like in the excerpt above.
What the errors mean is that there is an incompatibility between the bluetooth device and Apple’s low-level audio API, causing a silent failure. This happens sometimes, and it is the reason that OpenEars’ bluetooth support is experimental: I don’t have any input into Apple’s API or into how hardware manufacturers implement their bluetooth devices, so I can’t offer a lot of support with those issues. The one thing I can suggest is that since OpenEars 2.052 it is possible to turn off setting the preferred sample rate, the preferred buffer size, and the preferred number of channels, and the incompatibility can be with one of those three settings. So I would try disabling them in sequence, and possibly all together, and see if that helps. More information on that is in the OEPocketsphinxController documentation. The FAQ has some info about audio issues, so it is probably worth checking out: https://www.politepix.com/openears/support
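To sketch that (the sample-rate and buffer-size property names appear later in this thread; the channel-number property name here is an assumption following the same pattern, so check the OEPocketsphinxController header for the exact spelling):

```swift
// Sketch: disable the preferred audio session settings before starting
// listening. Try them individually, in sequence, and then all together
// to find which one the bluetooth device is incompatible with.
let pocketsphinx = OEPocketsphinxController.sharedInstance()
pocketsphinx.disablePreferredSampleRate = true
pocketsphinx.disablePreferredBufferSize = true
pocketsphinx.disablePreferredChannelNumber = true // assumed name; see the docs
```

Set these before calling startListeningWithLanguageModelAtPath: so they take effect when the audio session is created.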
June 30, 2016 at 6:47 pm #1030642bhavinParticipant
Hi, thanks once again for your insights.
So I got it to work by setting:

OEPocketsphinxController.sharedInstance().disablePreferredSampleRate = true
OEPocketsphinxController.sharedInstance().disablePreferredBufferSize = true

The errors still remain, like in the previous log I copied. Please let me know if you need a more detailed log or more information.
But now the loudness increases after listening has completed. I read an existing post with the same issue. Is there anything specific I can try, such as a different audio mode or anything else?
Thank you for your help.
The loudness thing is less clear. I don’t know the reason for that, but my best guess is that it is related to the route. Playback can either be routed to the external speaker or the ear speaker and one is much louder than the other. Different audio sessions have different default routes and need to be overridden to get different results. PocketsphinxController does re-route to the louder speaker, but it is possible that something about the session mixing is causing it to not successfully do that and then it is only when the new session is created that the sound is fully routed to the speaker. It could also be due to a side effect of the mixing setting, for instance if it does some kind of active volume reduction on the assumption that there are two output streams that have to be combined without clipping. Lastly, it could just be a difference between the two different audio session settings that is more noticeable when the session is returned to ambient.
June 30, 2016 at 10:13 pm #1030643Halle WinklerPolitepix
The error still remains like the previous log i copied. If needed please let me know if you need a more detailed log or more information.
It sounds like it may be hitting the error a few times at the start of the buffer callback and then working correctly, so if there are no recognition issues I wouldn’t worry about it. As I mentioned, the incompatibility errors are silent, so there may not be a clear output result in the logs when it starts working either.
Is that part at the end of your post from you, or is it quoting my or someone else’s writing in another post? That is a bit hard to understand when it is added to the end of your own post without comment; maybe you can write the part you wanted to ask me about separately below or maybe reformat the post above with the blockquote button, thank you.
June 30, 2016 at 11:59 pm #1030644bhavinParticipant
This part is from a previous reply of yours, from a different post:
The loudness thing is less clear. I don’t know the reason for that, but my best guess is that it is related to the route. Playback can either be routed to the external speaker or the ear speaker and one is much louder than the other. Different audio sessions have different default routes and need to be overridden to get different results. PocketsphinxController does re-route to the louder speaker, but it is possible that something about the session mixing is causing it to not successfully do that and then it is only when the new session is created that the sound is fully routed to the speaker. It could also be due to a side effect of the mixing setting, for instance if it does some kind of active volume reduction on the assumption that there are two output streams that have to be combined without clipping. Lastly, it could just be a difference between the two different audio session settings that is more noticeable when the session is returned to ambient.
The function I am using to start listening is:

func startListening() {
    OEPocketsphinxController.sharedInstance().disablePreferredSampleRate = true
    OEPocketsphinxController.sharedInstance().disablePreferredBufferSize = true
    do {
        try OEPocketsphinxController.sharedInstance().setActive(true)
    } catch {
        print(error)
    }
    OEPocketsphinxController.sharedInstance().startListeningWithLanguageModelAtPath(lmPath, dictionaryAtPath: dicPath, acousticModelAtPath: OEAcousticModel.pathToModel("AcousticModelEnglish"), languageModelIsJSGF: false)
}

My question is: why does the volume of the bluetooth headset get really loud whenever I start listening? It really is very loud.
The loudness goes away when I stop the audio session (setting setActive to false) that I am using for TTS (via AVSpeechSynthesizer). The session I use for TTS is set to the AVAudioSessionCategoryPlayback category with the DuckOthers option.
But every time I use OpenEars it appears again. Any advice is highly appreciated. I am also attaching the whole log below. Hope it helps.
Thank you.
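In case it helps, here is roughly what my TTS session handling looks like (a sketch using Swift 2-era AVFoundation names, not my exact code):

```swift
import AVFoundation

// Sketch of the TTS audio session handling described above: playback
// category with ducking while speaking, then deactivate afterwards.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(AVAudioSessionCategoryPlayback, withOptions: .DuckOthers)
    try session.setActive(true)

    // ... speak with AVSpeechSynthesizer ...

    // Deactivating the session is the point at which the extra
    // loudness goes away for me.
    try session.setActive(false, withOptions: .NotifyOthersOnDeactivation)
} catch {
    print(error)
}
```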
Log:
2016-06-30 17:38:32.970 app[4598:1792249] Starting OpenEars logging for OpenEars version 2.502 on 64-bit device (or build): iPhone running iOS version: 9.300000
2016-06-30 17:38:32.988 app[4598:1792249] Starting dynamic language model generation
2016-06-30 17:38:33.051 app[4598:1792249] Done creating language model with CMUCLMTK in 0.063187 seconds.
2016-06-30 17:38:33.052 app[4598:1792249] Since there is no cached version, loading the language model lookup list for the acoustic model called AcousticModelEnglish
2016-06-30 17:38:33.107 app[4598:1792249] I’m done running performDictionaryLookup and it took 0.039200 seconds
2016-06-30 17:38:33.113 app[4598:1792249] I’m done running dynamic language model generation and it took 0.139315 seconds
2016-06-30 17:38:33.231 app[4598:1792249] Configuring the default app.
2016-06-30 17:38:33.262: <FIRInstanceID/WARNING> FIRInstanceID AppDelegate proxy enabled, will swizzle app delegate remote notification handlers. To disable add “FirebaseAppDelegateProxyEnabled” to your Info.plist and set it to NO
2016-06-30 17:38:33.272: <FIRInstanceID/WARNING> Failed to fetch APNS token Error Domain=com.firebase.iid Code=1001 “(null)”
2016-06-30 17:38:33.280: <FIRMessaging/INFO> FIRMessaging library version 1.1.0
2016-06-30 17:38:33.295: <FIRMessaging/WARNING> FIRMessaging AppDelegate proxy enabled, will swizzle app delegate remote notification receiver handlers. Add “FirebaseAppDelegateProxyEnabled” to your Info.plist and set it to NO
2016-06-30 17:38:33.380 app[4598:] <FIRAnalytics/INFO> Firebase Analytics v.3200000 started
2016-06-30 17:38:33.381 app[4598:] <FIRAnalytics/INFO> To enable debug logging set the following application argument: -FIRAnalyticsDebugEnabled (see http://goo.gl/Y0Yjwu)
2016-06-30 17:38:33.405 app[4598:] <FIRAnalytics/INFO> Successfully created Firebase Analytics App Delegate Proxy automatically. To disable the proxy, set the flag FirebaseAppDelegateProxyEnabled to NO in the Info.plist
2016-06-30 17:38:33.443 app[4598:] <FIRAnalytics/INFO> Firebase Analytics enabled
2016-06-30 17:38:33.506: <FIRInstanceID/WARNING> APNS Environment in profile: development
[app.AudioHandler.AudioEvent(speakMessageType: app.AudioHandler.SpeakMessageType.ConnectionMessage, message: “Copilot connected.”)]
Setting audio timer.
Audio Session Started.
Removing audio timer.
2016-06-30 17:39:07.877 app[4598:1792587] Building MacinTalk voice for asset: (null)
[app.AudioHandler.AudioEvent(speakMessageType: app.AudioHandler.SpeakMessageType.BatteryNotification, message: “11 and a half hours of talktime remaining.”)]
Setting audio timer.
Audio Session Stopped.
Audio Session Started.
Removing audio timer.
Audio Session Stopped.
[message: Test122., sender: Bhavin Modi, collapse_key: do_not_collapse, msgid: 312, timestamp: 1467322788447, from: 579452197266]
[app.AudioHandler.AudioEvent(speakMessageType: app.AudioHandler.SpeakMessageType.RemoteNotification, message: “New Message from Bhavin Modi. Test122.”)]
Setting audio timer.
Audio Session Started.
Removing audio timer.
Audio Session Stopped.
Audio Session Started.
Record Permission Granted. Starting.
2016-06-30 17:39:57.985 app[4598:1792249] Creating shared instance of OEPocketsphinxController
2016-06-30 17:39:57.992 app[4598:1792249] Attempting to start listening session from startListeningWithLanguageModelAtPath:
2016-06-30 17:39:58.029 app[4598:1792249] User gave mic permission for this app.
2016-06-30 17:39:58.033 app[4598:1792249] setSecondsOfSilence wasn’t set, using default of 0.700000.
2016-06-30 17:39:58.034 app[4598:1792357] Starting listening.
2016-06-30 17:39:58.036 app[4598:1792357] About to set up audio session
2016-06-30 17:40:00.228 app[4598:1792357] Creating audio session with default settings.
2016-06-30 17:40:00.228 app[4598:1792357] Done setting audio session category.
2016-06-30 17:40:00.258 app[4598:1792357] Not setting a preferred sample rate at developer request, keeping the default rate for this hardware of 16000.000000.
2016-06-30 17:40:00.266 app[4598:1792357] number of channels is already the preferred number of 1 so not setting it.
2016-06-30 17:40:00.267 app[4598:1792342] Audio route has changed for the following reason:
2016-06-30 17:40:00.271 app[4598:1792357] Not setting a preferred buffer duration at developer request, keeping the default duration for this hardware of 0.016000.
2016-06-30 17:40:00.271 app[4598:1792357] Done setting up audio session
2016-06-30 17:40:00.276 app[4598:1792357] About to set up audio IO unit in a session with a sample rate of 16000.000000, a channel number of 1 and a buffer duration of 0.016000.
2016-06-30 17:40:00.275 app[4598:1792342] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-06-30 17:40:00.320 app[4598:1792357] Done setting up audio unit
2016-06-30 17:40:00.323 app[4598:1792357] About to start audio IO unit
2016-06-30 17:40:00.321 app[4598:1792342] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “BluetoothHFP”. Output route or routes: “BluetoothHFP”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x153546cf0,
inputs = (
“<AVAudioSessionPortDescription: 0x14f8a82a0, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Front>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x14f88d500, type = BluetoothA2DPOutput; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tacl; selectedDataSource = (null)>”
)>”.
2016-06-30 17:40:00.365 app[4598:1792357] Done starting audio unit
2016-06-30 17:40:00.389 app[4598:1792342] Audio route has changed for the following reason:
2016-06-30 17:40:00.392 app[4598:1792342] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-06-30 17:40:00.395 app[4598:1792342] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “BluetoothHFP”. Output route or routes: “BluetoothHFP”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x15337fe20,
inputs = (
“<AVAudioSessionPortDescription: 0x153324510, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Bottom>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x15360b6e0, type = Receiver; name = Receiver; UID = Built-In Receiver; selectedDataSource = (null)>”
)>”.
2016-06-30 17:40:00.468 app[4598:1792357] Restoring SmartCMN value of 34.620361
2016-06-30 17:40:00.469 app[4598:1792357] Listening.
2016-06-30 17:40:00.469 app[4598:1792357] Project has these words or phrases in its dictionary:
OK
REPEAT
REPEAT(2)
REPLAY
2016-06-30 17:40:00.469 app[4598:1792357] Recognition loop has started
2016-06-30 17:40:00.470 app[4598:1792249] Successfully started listening session from startListeningWithLanguageModelAtPath:
2016-06-30 17:40:00.590 app[4598:1792357] Speech detected…
2016-06-30 17:40:01.546 app[4598:1792575] End of speech detected…
2016-06-30 17:40:01.605 app[4598:1792575] Pocketsphinx heard “” with a score of (-22356) and an utterance ID of 0.
2016-06-30 17:40:01.605 app[4598:1792575] Hypothesis was null so we aren’t returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController’s property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
2016-06-30 17:40:03.864 app[4598:1792357] Speech detected…
2016-06-30 17:40:07.403 app[4598:1792575] End of speech detected…
2016-06-30 17:40:07.596 app[4598:1792575] Pocketsphinx heard “” with a score of (-73378) and an utterance ID of 1.
2016-06-30 17:40:07.597 app[4598:1792575] Hypothesis was null so we aren’t returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController’s property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
2016-06-30 17:40:07.811 app[4598:1792334] Speech detected…
2016-06-30 17:40:08.723 app[4598:1792575] End of speech detected…
2016-06-30 17:40:08.776 app[4598:1792575] Pocketsphinx heard “OK” with a score of (-45625) and an utterance ID of 2.
2016-06-30 17:40:08.777 app[4598:1792249] Stopping listening.
2016-06-30 17:40:09.355 app[4598:1792249] No longer listening.
Audio Session Started.
2016-06-30 17:40:10.980 app[4598:1792342] Audio route has changed for the following reason:
2016-06-30 17:40:10.982 app[4598:1792342] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-06-30 17:40:10.986 app[4598:1792342] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “BluetoothHFP”. Output route or routes: “BluetoothHFP”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x153140fc0,
inputs = (
“<AVAudioSessionPortDescription: 0x1531ae1e0, type = BluetoothHFP; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tsco; selectedDataSource = (null)>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x1535432a0, type = BluetoothHFP; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tsco; selectedDataSource = (null)>”
)>”.
2016-06-30 17:40:10.998 app[4598:1792342] Audio route has changed for the following reason:
2016-06-30 17:40:11.000 app[4598:1792342] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-06-30 17:40:11.076 app[4598:1792342] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “BluetoothHFP”. Output route or routes: “BluetoothHFP”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x1533e8260,
inputs = (
“<AVAudioSessionPortDescription: 0x153321460, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Front>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x1533e6ab0, type = BluetoothA2DPOutput; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tacl; selectedDataSource = (null)>”
)>”.
Audio Session Stopped.
2016-06-30 17:40:14.279 app[4598:1792342] Audio route has changed for the following reason:
2016-06-30 17:40:14.292 app[4598:1792342] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-06-30 17:40:14.335 app[4598:1792342] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “MicrophoneBuiltIn”. Output route or routes: “BluetoothA2DPOutput”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x15350a400,
inputs = (
“<AVAudioSessionPortDescription: 0x1531b69a0, type = BluetoothHFP; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tsco; selectedDataSource = (null)>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x153136790, type = BluetoothHFP; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tsco; selectedDataSource = (null)>”
)>”.
2016-06-30 17:40:29.138 app[4598:1792342] Audio route has changed for the following reason:
2016-06-30 17:40:29.138 app[4598:1792342] An old device became unavailable
2016-06-30 17:40:29.139 app[4598:1792342] the audio input is now unavailable.
2016-06-30 17:40:29.144 app[4598:1792342] This is a case for performing a route change. At the close of this method, the new audio route will be <Input route or routes: “MicrophoneBuiltIn”. Output route or routes: “Speaker”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x14e7e74c0,
inputs = (
“<AVAudioSessionPortDescription: 0x153321460, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Front>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x1533e2c30, type = BluetoothA2DPOutput; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tacl; selectedDataSource = (null)>”
)>”.
2016-06-30 17:40:43.319 app[4598:1792342] Audio route has changed for the following reason:
2016-06-30 17:40:43.341 app[4598:1792342] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-06-30 17:40:43.348 app[4598:1792342] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “MicrophoneBuiltIn”. Output route or routes: “BluetoothA2DPOutput”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x1531d15a0,
inputs = (
“<AVAudioSessionPortDescription: 0x1531ccb30, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Front>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x14f8a9700, type = Speaker; name = Speaker; UID = Speaker; selectedDataSource = (null)>”
)>”.
[app.AudioHandler.AudioEvent(speakMessageType: app.AudioHandler.SpeakMessageType.ConnectionMessage, message: “Copilot connected.”)]
Setting audio timer.
Audio Session Started.
Removing audio timer.
2016-06-30 17:40:45.117 app[4598:1792342] Audio route has changed for the following reason:
2016-06-30 17:40:45.118 app[4598:1792342] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-06-30 17:40:45.121 app[4598:1792342] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “MicrophoneBuiltIn”. Output route or routes: “Speaker”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x153310de0,
inputs = (
“<AVAudioSessionPortDescription: 0x1533f5860, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Front>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x153310660, type = BluetoothA2DPOutput; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tacl; selectedDataSource = (null)>”
)>”.
2016-06-30 17:40:48.288 app[4598:1792342] Audio route has changed for the following reason:
2016-06-30 17:40:48.312 app[4598:1792342] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-06-30 17:40:48.319 app[4598:1792342] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “BluetoothHFP”. Output route or routes: “BluetoothHFP”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x14e5b8800,
inputs = (
“<AVAudioSessionPortDescription: 0x14e762c90, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Bottom>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x153602140, type = Speaker; name = Speaker; UID = Speaker; selectedDataSource = (null)>”
)>”.
Audio Session Stopped.
2016-06-30 17:40:50.926 app[4598:1792342] Audio route has changed for the following reason:
2016-06-30 17:40:50.927 app[4598:1792342] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-06-30 17:40:50.928 app[4598:1792342] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “MicrophoneBuiltIn”. Output route or routes: “BluetoothA2DPOutput”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x153368d90,
inputs = (
“<AVAudioSessionPortDescription: 0x153367920, type = BluetoothHFP; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tsco; selectedDataSource = (null)>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x153153d10, type = BluetoothHFP; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tsco; selectedDataSource = (null)>”
)>”.
[app.AudioHandler.AudioEvent(speakMessageType: app.AudioHandler.SpeakMessageType.BatteryNotification, message: “11 and a half hours of talktime remaining.”)]
Setting audio timer.
Audio Session Started.
Removing audio timer.
2016-06-30 17:40:53.670 app[4598:1792342] Audio route has changed for the following reason:
2016-06-30 17:40:53.678 app[4598:1792342] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-06-30 17:40:53.681 app[4598:1792342] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “BluetoothHFP”. Output route or routes: “BluetoothHFP”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x153195680,
inputs = (
“<AVAudioSessionPortDescription: 0x14f824770, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Front>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x1531d4da0, type = BluetoothA2DPOutput; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tacl; selectedDataSource = (null)>”
)>”.
[message: Test123., sender: Bhavin Modi, collapse_key: do_not_collapse, msgid: 313, timestamp: 1467322855513, from: 579452197266]
[app.AudioHandler.AudioEvent(speakMessageType: app.AudioHandler.SpeakMessageType.RemoteNotification, message: “New Message from Bhavin Modi. Test123.”)]
Setting audio timer.
Audio Session Stopped.
2016-06-30 17:41:00.807 app[4598:1792342] Audio route has changed for the following reason:
2016-06-30 17:41:00.822 app[4598:1792342] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-06-30 17:41:00.834 app[4598:1792342] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “MicrophoneBuiltIn”. Output route or routes: “BluetoothA2DPOutput”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x15352cf20,
inputs = (
“<AVAudioSessionPortDescription: 0x1531d0d30, type = BluetoothHFP; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tsco; selectedDataSource = (null)>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x153543ef0, type = BluetoothHFP; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tsco; selectedDataSource = (null)>”
)>”.
Audio Session Started.
Removing audio timer.
2016-06-30 17:41:02.749 app[4598:1792342] Audio route has changed for the following reason:
2016-06-30 17:41:02.750 app[4598:1792342] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-06-30 17:41:02.754 app[4598:1792342] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “BluetoothHFP”. Output route or routes: “BluetoothHFP”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x1533f5d90,
inputs = (
“<AVAudioSessionPortDescription: 0x153367c00, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Front>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x1533e5370, type = BluetoothA2DPOutput; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tacl; selectedDataSource = (null)>”
)>”.
Audio Session Stopped.
Audio Session Started.
Record Permission Granted. Starting.
2016-06-30 17:41:11.018 app[4598:1792249] Attempting to start listening session from startListeningWithLanguageModelAtPath:
2016-06-30 17:41:11.019 app[4598:1792249] User gave mic permission for this app.
2016-06-30 17:41:11.019 app[4598:1792249] setSecondsOfSilence wasn’t set, using default of 0.700000.
2016-06-30 17:41:11.020 app[4598:1792334] Starting listening.
2016-06-30 17:41:11.020 app[4598:1792334] About to set up audio session
2016-06-30 17:41:12.978 app[4598:1792342] Audio route has changed for the following reason:
2016-06-30 17:41:14.521 app[4598:1792342] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-06-30 17:41:14.564 app[4598:1792334] Creating audio session with default settings.
2016-06-30 17:41:14.565 app[4598:1792334] Done setting audio session category.
2016-06-30 17:41:14.566 app[4598:1792334] Not setting a preferred sample rate at developer request, keeping the default rate for this hardware of 16000.000000.
2016-06-30 17:41:14.566 app[4598:1792334] number of channels is already the preferred number of 1 so not setting it.
2016-06-30 17:41:14.567 app[4598:1792334] Not setting a preferred buffer duration at developer request, keeping the default duration for this hardware of 0.016000.
2016-06-30 17:41:14.568 app[4598:1792334] Done setting up audio session
2016-06-30 17:41:14.578 app[4598:1792342] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “BluetoothHFP”. Output route or routes: “BluetoothHFP”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x153136b30,
inputs = (
“<AVAudioSessionPortDescription: 0x14f8a2050, type = BluetoothHFP; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tsco; selectedDataSource = (null)>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x153547250, type = BluetoothHFP; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tsco; selectedDataSource = (null)>”
)>”.
2016-06-30 17:41:14.596 app[4598:1792334] About to set up audio IO unit in a session with a sample rate of 16000.000000, a channel number of 1 and a buffer duration of 0.016000.
2016-06-30 17:41:14.661 app[4598:1792334] Done setting up audio unit
2016-06-30 17:41:14.662 app[4598:1792342] Audio route has changed for the following reason:
2016-06-30 17:41:14.665 app[4598:1792334] About to start audio IO unit
2016-06-30 17:41:14.685 app[4598:1792334] Done starting audio unit
2016-06-30 17:41:14.665 app[4598:1792342] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-06-30 17:41:14.724 app[4598:1792342] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “BluetoothHFP”. Output route or routes: “BluetoothHFP”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x153155770,
inputs = (
“<AVAudioSessionPortDescription: 0x1531fa680, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Front>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x14e50c200, type = BluetoothA2DPOutput; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tacl; selectedDataSource = (null)>”
)>”.
2016-06-30 17:41:14.745 app[4598:1792334] Restoring SmartCMN value of 33.343750
2016-06-30 17:41:14.749 app[4598:1792342] Audio route has changed for the following reason:
2016-06-30 17:41:14.751 app[4598:1792342] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-06-30 17:41:14.750 app[4598:1792334] Listening.
2016-06-30 17:41:14.753 app[4598:1792334] Project has these words or phrases in its dictionary:
OK
REPEAT
REPEAT(2)
REPLAY
2016-06-30 17:41:14.754 app[4598:1792342] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “BluetoothHFP”. Output route or routes: “BluetoothHFP”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x1533f5d90,
inputs = (
“<AVAudioSessionPortDescription: 0x14e7d0260, type = BluetoothHFP; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tsco; selectedDataSource = (null)>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x15360bc50, type = BluetoothHFP; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tsco; selectedDataSource = (null)>”
)>”.
2016-06-30 17:41:14.755 app[4598:1792334] Recognition loop has started
2016-06-30 17:41:14.757 app[4598:1792249] Successfully started listening session from startListeningWithLanguageModelAtPath:
2016-06-30 17:41:14.759 app[4598:1792342] Audio route has changed for the following reason:
2016-06-30 17:41:14.761 app[4598:1792342] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-06-30 17:41:14.763 app[4598:1792342] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “BluetoothHFP”. Output route or routes: “BluetoothHFP”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x153543c40,
inputs = (
“<AVAudioSessionPortDescription: 0x153526720, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Bottom>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x1531386b0, type = Receiver; name = Receiver; UID = Built-In Receiver; selectedDataSource = (null)>”
)>”.
2016-06-30 17:41:14.925 app[4598:1793135] Speech detected…
2016-06-30 17:41:16.409 app[4598:1793284] End of speech detected…
2016-06-30 17:41:16.526 app[4598:1793284] Pocketsphinx heard “” with a score of (-22381) and an utterance ID of 3.
2016-06-30 17:41:16.529 app[4598:1793284] Hypothesis was null so we aren’t returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController’s property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
2016-06-30 17:41:18.529 app[4598:1793284] Speech detected…
2016-06-30 17:41:20.175 app[4598:1792685] End of speech detected…
2016-06-30 17:41:20.248 app[4598:1792685] Pocketsphinx heard “” with a score of (-98738) and an utterance ID of 4.
2016-06-30 17:41:20.249 app[4598:1792685] Hypothesis was null so we aren’t returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController’s property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
2016-06-30 17:41:22.346 app[4598:1793284] Speech detected…
2016-06-30 17:41:23.371 app[4598:1792685] End of speech detected…
2016-06-30 17:41:23.425 app[4598:1792685] Pocketsphinx heard “OK” with a score of (-61574) and an utterance ID of 5.
2016-06-30 17:41:23.433 app[4598:1792249] Stopping listening.
Listening can’t stop yet for the following reason or reasons: an utterance is still in progress | Trying again.
2016-06-30 17:41:23.997 app[4598:1792342] Audio route has changed for the following reason:
2016-06-30 17:41:24.004 app[4598:1792342] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-06-30 17:41:24.013 app[4598:1792342] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “MicrophoneBuiltIn”. Output route or routes: “BluetoothA2DPOutput”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x153530820,
inputs = (
“<AVAudioSessionPortDescription: 0x153135900, type = BluetoothHFP; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tsco; selectedDataSource = (null)>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x15352de80, type = BluetoothHFP; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tsco; selectedDataSource = (null)>”
)>”.
2016-06-30 17:41:24.023 app[4598:1792249] Attempting to stop an unstopped utterance so listening can stop.
2016-06-30 17:41:24.025 app[4598:1792249] No longer listening.
Audio Session Started.
2016-06-30 17:41:25.690 app[4598:1792342] Audio route has changed for the following reason:
2016-06-30 17:41:25.697 app[4598:1792342] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-06-30 17:41:25.703 app[4598:1792342] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “BluetoothHFP”. Output route or routes: “BluetoothHFP”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x1535152a0,
inputs = (
“<AVAudioSessionPortDescription: 0x1531350e0, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Front>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x153310660, type = BluetoothA2DPOutput; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tacl; selectedDataSource = (null)>”
)>”.
Audio Session Stopped.
2016-06-30 17:41:29.025 app[4598:1792342] Audio route has changed for the following reason:
2016-06-30 17:41:29.053 app[4598:1792342] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-06-30 17:41:29.087 app[4598:1792342] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “MicrophoneBuiltIn”. Output route or routes: “BluetoothA2DPOutput”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x14f8a6350,
inputs = (
“<AVAudioSessionPortDescription: 0x153536200, type = BluetoothHFP; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tsco; selectedDataSource = (null)>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x15315e2a0, type = BluetoothHFP; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:4C-tsco; selectedDataSource = (null)>”
)>”.

July 1, 2016 at 3:54 pm #1030647
Halle Winkler
Politepix

Hi,
I feel like it would be a generally good idea to set aside a few minutes to give a quick read through the docs for the OpenEars classes you are using, because the info I'm writing here about disabling the preferred settings to improve bluetooth results is in there, along with more related options that might help you troubleshoot on your own once you've seen them. A bigger issue the docs will help with is that this code is out of order, which will invisibly affect your results with the disabling properties you are calling:
OEPocketsphinxController.sharedInstance().disablePreferredSampleRate = true
OEPocketsphinxController.sharedInstance().disablePreferredBufferSize = true

do {
    try OEPocketsphinxController.sharedInstance().setActive(true)
} catch {
    print(error)
}
and it would be my preference to give more help after you've had a look and had a chance to see why, since that is also in the docs, and it's the same explanation by the same author that would otherwise be re-written in this post :).
Once you can fix that up and narrow down which of those two properties is really taking effect, and maybe after you’ve seen what your results are with the other available overrides listed in the docs that you’ll learn about, show me your new initialization code and let’s come back to the loudness issue if it is still happening.
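For reference, a minimal sketch of the ordering being described, assuming (as Halle's comment suggests) that the docs require activating the shared instance before its properties are set. This only rearranges the calls already shown in this thread; `lmPath` and `dicPath` stand in for the paths produced by the caller's language model generation, and this is not runnable outside an iOS project with the OpenEars framework:

```swift
// Sketch only, based on calls quoted earlier in this thread.
// Assumption: setActive(true) must precede property changes per the class docs.
func startListening() {
    // 1. Activate the shared instance first.
    do {
        try OEPocketsphinxController.sharedInstance().setActive(true)
    } catch {
        print(error)
    }

    // 2. Only after activation, set the disabling property under test.
    OEPocketsphinxController.sharedInstance().disablePreferredBufferSize = true

    // 3. Then start the listening session.
    OEPocketsphinxController.sharedInstance().startListeningWithLanguageModelAtPath(
        lmPath,
        dictionaryAtPath: dicPath,
        acousticModelAtPath: OEAcousticModel.pathToModel("AcousticModelEnglish"),
        languageModelIsJSGF: false)
}
```

Testing each override property in isolation this way (one property enabled per run) is what narrows down which one is actually taking effect.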
July 1, 2016 at 9:14 pm #1030650
bhavin
Participant

Halle, thank you for your suggestion. I went through the class documentation and also played around with disabling the properties.
It looks like:

OEPocketsphinxController.sharedInstance().disablePreferredBufferSize = true

is the one that is actually having an effect. I do not disable anything else now, and the loudness issue still persists. The volume increases whenever I start listening.
Function:

func startListening() {
    OEPocketsphinxController.sharedInstance().disablePreferredBufferSize = true

    do {
        try OEPocketsphinxController.sharedInstance().setActive(true)
    } catch {
        print(error)
    }

    OEPocketsphinxController.sharedInstance().startListeningWithLanguageModelAtPath(lmPath, dictionaryAtPath: dicPath, acousticModelAtPath: OEAcousticModel.pathToModel("AcousticModelEnglish"), languageModelIsJSGF: false)
}

Log:
2016-07-01 15:09:37.904 app[5044:1990481] Starting OpenEars logging for OpenEars version 2.502 on 64-bit device (or build): iPhone running iOS version: 9.300000
2016-07-01 15:09:37.922 app[5044:1990481] Starting dynamic language model generation
2016-07-01 15:09:37.996 app[5044:1990481] Done creating language model with CMUCLMTK in 0.073687 seconds.
2016-07-01 15:09:37.997 app[5044:1990481] Since there is no cached version, loading the language model lookup list for the acoustic model called AcousticModelEnglish
2016-07-01 15:09:38.044 app[5044:1990481] I’m done running performDictionaryLookup and it took 0.032958 seconds
2016-07-01 15:09:38.050 app[5044:1990481] I’m done running dynamic language model generation and it took 0.143244 seconds
[app.AudioHandler.AudioEvent(speakMessageType: app.AudioHandler.SpeakMessageType.ConnectionMessage, message: “Copilot connected.”)]
Setting audio timer.
2016-07-01 15:09:38.169 app[5044:1990481] Configuring the default app.
2016-07-01 15:09:38.196: <FIRInstanceID/WARNING> FIRInstanceID AppDelegate proxy enabled, will swizzle app delegate remote notification handlers. To disable add “FirebaseAppDelegateProxyEnabled” to your Info.plist and set it to NO
2016-07-01 15:09:38.203: <FIRInstanceID/WARNING> Failed to fetch APNS token Error Domain=com.firebase.iid Code=1001 “(null)”
2016-07-01 15:09:38.218: <FIRMessaging/INFO> FIRMessaging library version 1.1.0
2016-07-01 15:09:38.229: <FIRMessaging/WARNING> FIRMessaging AppDelegate proxy enabled, will swizzle app delegate remote notification receiver handlers. Add “FirebaseAppDelegateProxyEnabled” to your Info.plist and set it to NO
2016-07-01 15:09:38.327 app[5044:] <FIRAnalytics/INFO> Firebase Analytics v.3200000 started
2016-07-01 15:09:38.331 app[5044:] <FIRAnalytics/INFO> To enable debug logging set the following application argument: -FIRAnalyticsDebugEnabled (see http://goo.gl/Y0Yjwu)
2016-07-01 15:09:38.356 app[5044:] <FIRAnalytics/INFO> Successfully created Firebase Analytics App Delegate Proxy automatically. To disable the proxy, set the flag FirebaseAppDelegateProxyEnabled to NO in the Info.plist
2016-07-01 15:09:38.459 app[5044:] <FIRAnalytics/INFO> Firebase Analytics enabled
2016-07-01 15:09:38.562: <FIRInstanceID/WARNING> APNS Environment in profile: development
Audio Session Started.
Removing audio timer.
2016-07-01 15:09:39.240 app[5044:1990655] Building MacinTalk voice for asset: (null)
[app.AudioHandler.AudioEvent(speakMessageType: app.AudioHandler.SpeakMessageType.BatteryNotification, message: “4 and a half hours of talktime remaining.”)]
Setting audio timer.
Audio Session Stopped.
Audio Session Started.
Removing audio timer.
[message: Test142., sender: Bhavin Modi, collapse_key: do_not_collapse, msgid: 376, timestamp: 1467400187730, from: 579452197266]
[app.AudioHandler.AudioEvent(speakMessageType: app.AudioHandler.SpeakMessageType.RemoteNotification, message: “New Message from Bhavin Modi. Test142.”)]
Setting audio timer.
Audio Session Stopped.
Audio Session Started.
Removing audio timer.
Audio Session Stopped.
Audio Session Started.
Record Permission Granted. Starting.
2016-07-01 15:10:01.326 app[5044:1990481] Creating shared instance of OEPocketsphinxController
2016-07-01 15:10:01.329 app[5044:1990481] Attempting to start listening session from startListeningWithLanguageModelAtPath:
2016-07-01 15:10:01.331 app[5044:1990481] User gave mic permission for this app.
2016-07-01 15:10:01.332 app[5044:1990481] setSecondsOfSilence wasn’t set, using default of 0.700000.
2016-07-01 15:10:01.333 app[5044:1990571] Starting listening.
2016-07-01 15:10:01.334 app[5044:1990571] About to set up audio session
2016-07-01 15:10:03.222 app[5044:1990594] Audio route has changed for the following reason:
2016-07-01 15:10:03.226 app[5044:1990571] Creating audio session with default settings.
2016-07-01 15:10:03.227 app[5044:1990571] Done setting audio session category.
2016-07-01 15:10:03.230 app[5044:1990571] Sample rate is already the preferred rate of 16000.000000 so not setting it.
2016-07-01 15:10:03.231 app[5044:1990594] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-07-01 15:10:03.248 app[5044:1990571] number of channels is already the preferred number of 1 so not setting it.
2016-07-01 15:10:03.250 app[5044:1990571] Not setting a preferred buffer duration at developer request, keeping the default duration for this hardware of 0.016000.
2016-07-01 15:10:03.251 app[5044:1990571] Done setting up audio session
2016-07-01 15:10:03.256 app[5044:1990571] About to set up audio IO unit in a session with a sample rate of 16000.000000, a channel number of 1 and a buffer duration of 0.016000.
2016-07-01 15:10:03.264 app[5044:1990594] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “BluetoothHFP”. Output route or routes: “BluetoothHFP”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x13007bbb0,
inputs = (
“<AVAudioSessionPortDescription: 0x13126bee0, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Front>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x13161dbc0, type = BluetoothA2DPOutput; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:69-tacl; selectedDataSource = (null)>”
)>”.
2016-07-01 15:10:03.294 app[5044:1990571] Done setting up audio unit
2016-07-01 15:10:03.337 app[5044:1990571] About to start audio IO unit
2016-07-01 15:10:03.337 app[5044:1990594] Audio route has changed for the following reason:
2016-07-01 15:10:03.341 app[5044:1990571] Done starting audio unit
2016-07-01 15:10:03.347 app[5044:1990594] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-07-01 15:10:03.353 app[5044:1990594] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “BluetoothHFP”. Output route or routes: “BluetoothHFP”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x13140d7c0,
inputs = (
“<AVAudioSessionPortDescription: 0x1302f8b60, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Bottom>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x130222460, type = Receiver; name = Receiver; UID = Built-In Receiver; selectedDataSource = (null)>”
)>”.
2016-07-01 15:10:03.458 app[5044:1990571] Restoring SmartCMN value of 35.950439
2016-07-01 15:10:03.458 app[5044:1990571] Listening.
2016-07-01 15:10:03.459 app[5044:1990571] Project has these words or phrases in its dictionary:
OK
REPEAT
REPEAT(2)
REPLAY
2016-07-01 15:10:03.459 app[5044:1990571] Recognition loop has started
2016-07-01 15:10:03.459 app[5044:1990481] Successfully started listening session from startListeningWithLanguageModelAtPath:
2016-07-01 15:10:03.617 app[5044:1990571] Speech detected…
2016-07-01 15:10:04.597 app[5044:1990814] End of speech detected…
2016-07-01 15:10:04.613 app[5044:1990814] Pocketsphinx heard “” with a score of (-29001) and an utterance ID of 0.
2016-07-01 15:10:04.614 app[5044:1990814] Hypothesis was null so we aren’t returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController’s property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
2016-07-01 15:10:05.642 app[5044:1990814] Speech detected…
2016-07-01 15:10:07.790 app[5044:1990619] End of speech detected…
2016-07-01 15:10:07.940 app[5044:1990619] Pocketsphinx heard “” with a score of (-45240) and an utterance ID of 1.
2016-07-01 15:10:07.941 app[5044:1990619] Hypothesis was null so we aren’t returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController’s property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
2016-07-01 15:10:08.437 app[5044:1990814] Speech detected…
2016-07-01 15:10:10.359 app[5044:1990814] End of speech detected…
2016-07-01 15:10:10.463 app[5044:1990814] Pocketsphinx heard “” with a score of (-37409) and an utterance ID of 2.
2016-07-01 15:10:10.465 app[5044:1990814] Hypothesis was null so we aren’t returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController’s property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
Registered for nod callback
2016-07-01 15:10:11.968 app[5044:1990571] Speech detected…
2016-07-01 15:10:12.910 app[5044:1990814] End of speech detected…
2016-07-01 15:10:12.964 app[5044:1990814] Pocketsphinx heard “OK” with a score of (-33460) and an utterance ID of 3.
Heard: OK
2016-07-01 15:10:12.978 app[5044:1990481] Stopping listening.
Listening can’t stop yet for the following reason or reasons: an utterance is still in progress | Trying again.
2016-07-01 15:10:13.530 app[5044:1990594] Audio route has changed for the following reason:
2016-07-01 15:10:13.555 app[5044:1990594] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-07-01 15:10:13.563 app[5044:1990594] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “MicrophoneBuiltIn”. Output route or routes: “BluetoothA2DPOutput”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x131408fe0,
inputs = (
“<AVAudioSessionPortDescription: 0x12eef5cc0, type = BluetoothHFP; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:69-tsco; selectedDataSource = (null)>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x12efe6a50, type = BluetoothHFP; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:69-tsco; selectedDataSource = (null)>”
)>”.
2016-07-01 15:10:13.569 app[5044:1990481] Attempting to stop an unstopped utterance so listening can stop.
2016-07-01 15:10:13.571 app[5044:1990481] No longer listening.
Unregistering success.
Audio Session Started.
2016-07-01 15:10:13.621 app[5044:1990594] Audio route has changed for the following reason:
2016-07-01 15:10:15.080 app[5044:1990594] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-07-01 15:10:15.119 app[5044:1990594] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “BluetoothHFP”. Output route or routes: “BluetoothHFP”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x12efdaff0,
inputs = (
“<AVAudioSessionPortDescription: 0x131441f90, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Front>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x12ef1cbb0, type = BluetoothA2DPOutput; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:69-tacl; selectedDataSource = (null)>”
)>”.
Audio Session Stopped.
2016-07-01 15:10:18.410 app[5044:1990594] Audio route has changed for the following reason:
2016-07-01 15:10:18.439 app[5044:1990594] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
2016-07-01 15:10:18.452 app[5044:1990594] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: “MicrophoneBuiltIn”. Output route or routes: “BluetoothA2DPOutput”>. The previous route before changing to this route was “<AVAudioSessionRouteDescription: 0x13121e4f0,
inputs = (
“<AVAudioSessionPortDescription: 0x1312f2180, type = BluetoothHFP; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:69-tsco; selectedDataSource = (null)>”
);
outputs = (
“<AVAudioSessionPortDescription: 0x131238fc0, type = BluetoothHFP; name = Maven Co-Pilot; UID = 00:07:80:C3:A1:69-tsco; selectedDataSource = (null)>”
)>”.

July 1, 2016 at 9:38 pm #1030651
Halle Winkler
Politepix

Hi,
I'm asking you to read the class documentation so you can find out how to use OEPocketsphinxController.sharedInstance().setActive(true) correctly, and so you can see and experiment with the other documented properties that might have a connection to your issue before asking me to troubleshoot. Let me know when you've had a chance to do that, and we can troubleshoot this more once your function is designed according to the docs and you can tell me what happened when you tried the other documented override functions besides the ones I mentioned above, thanks.