anhtu

Forum Replies Created

  • in reply to: [RapidEars][Rejecto] for C++ or android platform #1026556
    anhtu
    Participant

    Thanks,

Do you know of any framework that could help me in this case?

    in reply to: RapidEars 2.04 – crash when using within Cocos2d #1026498
    anhtu
    Participant

I'm sorry, this was my mistake. I hadn't upgraded OpenEars to version 2.041. The problem is now solved. Thank you very much.

    in reply to: RapidEars 2.04 – crash when using within Cocos2d #1026494
    anhtu
    Participant

Well, that is strange. My co-worker said he had already updated RapidEars to 2.04.
I will ask him to update it again and retry, and I will post the result tomorrow.

    Thank you.

    in reply to: RapidEars 2.04 – crash when using within Cocos2d #1026492
    anhtu
    Participant

Here are the contents. The crash happened when it came to the letter D.

    .dic file

    D D IY

    .arpa file

    #############################################################################
    ## Copyright (c) 1996, Carnegie Mellon University, Cambridge University,
    ## Ronald Rosenfeld and Philip Clarkson
    ## Version 3, Copyright (c) 2006, Carnegie Mellon University 
    ## Contributors includes Wen Xu, Ananlada Chotimongkol, 
    ## David Huggins-Daines, Arthur Chan and Alan Black 
    #############################################################################
    =============================================================================
    ===============  This file was produced by the CMU-Cambridge  ===============
    ===============     Statistical Language Modeling Toolkit     ===============
    =============================================================================
    This is a 3-gram language model, based on a vocabulary of 3 words,
      which begins "</s>", "<s>", "D"...
    This is a CLOSED-vocabulary model
      (OOVs eliminated from training data and are forbidden in test data)
    Witten Bell discounting was applied.
    This file is in the ARPA-standard format introduced by Doug Paul.
    
    p(wd3|wd1,wd2)= if(trigram exists)           p_3(wd1,wd2,wd3)
                    else if(bigram w1,w2 exists) bo_wt_2(w1,w2)*p(wd3|wd2)
                    else                         p(wd3|w2)
    
    p(wd2|wd1)= if(bigram exists) p_2(wd1,wd2)
                else              bo_wt_1(wd1)*p_1(wd2)
    
    All probs and back-off weights (bo_wt) are given in log10 form.
    
    Data formats:
    
    Beginning of data mark: \data\
    ngram 1=nr            # number of 1-grams
    ngram 2=nr            # number of 2-grams
    ngram 3=nr            # number of 3-grams
    
    \1-grams:
    p_1     wd_1 bo_wt_1
    \2-grams:
    p_2     wd_1 wd_2 bo_wt_2
    \3-grams:
    p_3     wd_1 wd_2 wd_3 
    
    end of data mark: \end\
    
    \data\
    ngram 1=3
    ngram 2=2
    ngram 3=1
    
    \1-grams:
    -98.6990 </s>	0.0000
    -98.6990 <s>	-99.9990
    0.0000 D	-0.3010
    
    \2-grams:
    -0.3010 <s> D 0.0000
    -0.3010 D </s> -0.3010
    
    \3-grams:
    -0.3010 <s> D </s> 
    
    \end\
    

The log:

    2015-08-04 17:18:43.475 App_Name[50827:4272534] Starting OpenEars logging for OpenEars version 2.03 on 64-bit device (or build): iPad running iOS version: 8.300000
    2015-08-04 17:18:43.525 App_Name[50827:4272534] Starting dynamic language model generation
    
    2015-08-04 17:18:43.647 App_Name[50827:4272534] Done creating language model with CMUCLMTK in 0.122107 seconds.
    2015-08-04 17:18:43.669 App_Name[50827:4272534] I'm done running performDictionaryLookup and it took 0.000215 seconds
    2015-08-04 17:18:43.677 App_Name[50827:4272534] I'm done running dynamic language model generation and it took 0.196122 seconds
    2015-08-04 17:18:43.678 App_Name[50827:4272534] Starting OpenEars logging for OpenEars version 2.03 on 64-bit device (or build): iPad running iOS version: 8.300000
    2015-08-04 17:18:43.685 App_Name[50827:4272534] cocos2d: surface size: 2048x1536
    2015-08-04 17:18:44.690 App_Name[50827:4272534] User gave mic permission for this app.
    2015-08-04 17:18:44.691 App_Name[50827:4272534] Valid setSecondsOfSilence value of 0.200000 will be used.
    2015-08-04 17:18:44.692 App_Name[50827:4273012] Starting listening.
    2015-08-04 17:18:44.692 App_Name[50827:4273012] about to set up audio session
    2015-08-04 17:18:44.842 App_Name[50827:4272710] Audio route has changed for the following reason:
    2015-08-04 17:18:44.846 App_Name[50827:4272710] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
    2015-08-04 17:18:44.852 App_Name[50827:4272710] This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is ---SpeakerMicrophoneBuiltIn---. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x170806470,
    inputs = (null);
    outputs = (
               "<AVAudioSessionPortDescription: 0x17080b520, type = Speaker; name = Speaker; UID = Built-In Speaker; selectedDataSource = (null)>"
               )>.
    2015-08-04 17:18:44.866 App_Name[50827:4273012] done starting audio unit
    INFO: cmd_ln.c(702): Parsing command line:
    \
    -lm /var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_A_0.DMP \
    -vad_prespeech 10 \
    -vad_postspeech 20 \
    -vad_threshold 2.000000 \
    -remove_noise yes \
    -remove_silence yes \
    -bestpath yes \
    -lw 6.500000 \
    -dict /var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_A_0.dic \
    -hmm /private/var/mobile/Containers/Bundle/Application/54D4920A-C3CD-4480-94F2-8654421B4D13/App_Name.app/AcousticModelEnglish.bundle
    
    Current configuration:
    [NAME]		[DEFLT]		[VALUE]
    -agc		none		none
    -agcthresh	2.0		2.000000e+00
    -allphone
    -allphone_ci	no		no
    -alpha		0.97		9.700000e-01
    -argfile
    -ascale		20.0		2.000000e+01
    -aw		1		1
    -backtrace	no		no
    -beam		1e-48		1.000000e-48
    -bestpath	yes		yes
    -bestpathlw	9.5		9.500000e+00
    -bghist		no		no
    -ceplen		13		13
    -cmn		current		current
    -cmninit	8.0		8.0
    -compallsen	no		no
    -debug				0
    -dict				/var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_A_0.dic
    -dictcase	no		no
    -dither		no		no
    -doublebw	no		no
    -ds		1		1
    -fdict
    -feat		1s_c_d_dd	1s_c_d_dd
    -featparams
    -fillprob	1e-8		1.000000e-08
    -frate		100		100
    -fsg
    -fsgusealtpron	yes		yes
    -fsgusefiller	yes		yes
    -fwdflat	yes		yes
    -fwdflatbeam	1e-64		1.000000e-64
    -fwdflatefwid	4		4
    -fwdflatlw	8.5		8.500000e+00
    -fwdflatsfwin	25		25
    -fwdflatwbeam	7e-29		7.000000e-29
    -fwdtree	yes		yes
    -hmm				/private/var/mobile/Containers/Bundle/Application/54D4920A-C3CD-4480-94F2-8654421B4D13/App_Name.app/AcousticModelEnglish.bundle
    -input_endian	little		little
    -jsgf
    -kdmaxbbi	-1		-1
    -kdmaxdepth	0		0
    -kdtree
    -keyphrase
    -kws
    -kws_plp	1e-1		1.000000e-01
    -kws_threshold	1		1.000000e+00
    -latsize	5000		5000
    -lda
    -ldadim		0		0
    -lextreedump	0		0
    -lifter		0		0
    -lm				/var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_A_0.DMP
    -lmctl
    -lmname
    -logbase	1.0001		1.000100e+00
    -logfn
    -logspec	no		no
    -lowerf		133.33334	1.333333e+02
    -lpbeam		1e-40		1.000000e-40
    -lponlybeam	7e-29		7.000000e-29
    -lw		6.5		6.500000e+00
    -maxhmmpf	10000		10000
    -maxnewoov	20		20
    -maxwpf		-1		-1
    -mdef
    -mean
    -mfclogdir
    -min_endfr	0		0
    -mixw
    -mixwfloor	0.0000001	1.000000e-07
    -mllr
    -mmap		yes		yes
    -ncep		13		13
    -nfft		512		512
    -nfilt		40		40
    -nwpen		1.0		1.000000e+00
    -pbeam		1e-48		1.000000e-48
    -pip		1.0		1.000000e+00
    -pl_beam	1e-10		1.000000e-10
    -pl_pbeam	1e-5		1.000000e-05
    -pl_window	0		0
    -rawlogdir
    -remove_dc	no		no
    -remove_noise	yes		yes
    -remove_silence	yes		yes
    -round_filters	yes		yes
    -samprate	16000		1.600000e+04
    -seed		-1		-1
    -sendump
    -senlogdir
    -senmgau
    -silprob	0.005		5.000000e-03
    -smoothspec	no		no
    -svspec
    -tmat
    -tmatfloor	0.0001		1.000000e-04
    -topn		4		4
    -topn_beam	0		0
    -toprule
    -transform	legacy		legacy
    -unit_area	yes		yes
    -upperf		6855.4976	6.855498e+03
    -usewdphones	no		no
    -uw		1.0		1.000000e+00
    -vad_postspeech	50		20
    -vad_prespeech	10		10
    -vad_threshold	2.0		2.000000e+00
    -var
    -varfloor	0.0001		1.000000e-04
    -varnorm	no		no
    -verbose	no		no
    -warp_params
    -warp_type	inverse_linear	inverse_linear
    -wbeam		7e-29		7.000000e-29
    -wip		0.65		6.500000e-01
    -wlen		0.025625	2.562500e-02
    
    INFO: cmd_ln.c(702): Parsing command line:
    \
    -nfilt 25 \
    -lowerf 130 \
    -upperf 6800 \
    -feat 1s_c_d_dd \
    -svspec 0-12/13-25/26-38 \
    -agc none \
    -cmn current \
    -varnorm no \
    -transform dct \
    -lifter 22 \
    -cmninit 40
    
    Current configuration:
    [NAME]		[DEFLT]		[VALUE]
    -agc		none		none
    -agcthresh	2.0		2.000000e+00
    -alpha		0.97		9.700000e-01
    -ceplen		13		13
    -cmn		current		current
    -cmninit	8.0		40
    -dither		no		no
    -doublebw	no		no
    -feat		1s_c_d_dd	1s_c_d_dd
    -frate		100		100
    -input_endian	little		little
    -lda
    -ldadim		0		0
    -lifter		0		22
    -logspec	no		no
    -lowerf		133.33334	1.300000e+02
    -ncep		13		13
    -nfft		512		512
    -nfilt		40		25
    -remove_dc	no		no
    -remove_noise	yes		yes
    -remove_silence	yes		yes
    -round_filters	yes		yes
    -samprate	16000		1.600000e+04
    -seed		-1		-1
    -smoothspec	no		no
    -svspec				0-12/13-25/26-38
    -transform	legacy		dct
    -unit_area	yes		yes
    -upperf		6855.4976	6.800000e+03
    -vad_postspeech	50		20
    -vad_prespeech	10		10
    -vad_threshold	2.0		2.000000e+00
    -varnorm	no		no
    -verbose	no		no
    -warp_params
    -warp_type	inverse_linear	inverse_linear
    -wlen		0.025625	2.562500e-02
    
    INFO: acmod.c(252): Parsed model-specific feature parameters from /private/var/mobile/Containers/Bundle/Application/54D4920A-C3CD-4480-94F2-8654421B4D13/App_Name.app/AcousticModelEnglish.bundle/feat.params
    INFO: feat.c(715): Initializing feature stream to type: '1s_c_d_dd', ceplen=13, CMN='current', VARNORM='no', AGC='none'
    INFO: cmn.c(143): mean[0]= 12.00, mean[1..12]= 0.0
    INFO: acmod.c(171): Using subvector specification 0-12/13-25/26-38
    INFO: mdef.c(518): Reading model definition: /private/var/mobile/Containers/Bundle/Application/54D4920A-C3CD-4480-94F2-8654421B4D13/App_Name.app/AcousticModelEnglish.bundle/mdef
    INFO: mdef.c(531): Found byte-order mark BMDF, assuming this is a binary mdef file
    INFO: bin_mdef.c(336): Reading binary model definition: /private/var/mobile/Containers/Bundle/Application/54D4920A-C3CD-4480-94F2-8654421B4D13/App_Name.app/AcousticModelEnglish.bundle/mdef
    INFO: bin_mdef.c(516): 46 CI-phone, 168344 CD-phone, 3 emitstate/phone, 138 CI-sen, 6138 Sen, 32881 Sen-Seq
    INFO: tmat.c(206): Reading HMM transition probability matrices: /private/var/mobile/Containers/Bundle/Application/54D4920A-C3CD-4480-94F2-8654421B4D13/App_Name.app/AcousticModelEnglish.bundle/transition_matrices
    INFO: acmod.c(124): Attempting to use SCHMM computation module
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /private/var/mobile/Containers/Bundle/Application/54D4920A-C3CD-4480-94F2-8654421B4D13/App_Name.app/AcousticModelEnglish.bundle/means
    INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /private/var/mobile/Containers/Bundle/Application/54D4920A-C3CD-4480-94F2-8654421B4D13/App_Name.app/AcousticModelEnglish.bundle/variances
    INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(354): 0 variance values floored
    INFO: s2_semi_mgau.c(904): Loading senones from dump file /private/var/mobile/Containers/Bundle/Application/54D4920A-C3CD-4480-94F2-8654421B4D13/App_Name.app/AcousticModelEnglish.bundle/sendump
    INFO: s2_semi_mgau.c(928): BEGIN FILE FORMAT DESCRIPTION
    INFO: s2_semi_mgau.c(991): Rows: 512, Columns: 6138
    INFO: s2_semi_mgau.c(1023): Using memory-mapped I/O for senones
    INFO: s2_semi_mgau.c(1294): Maximum top-N: 4 Top-N beams: 0 0 0
    INFO: dict.c(320): Allocating 4107 * 32 bytes (128 KiB) for word entries
    INFO: dict.c(333): Reading main dictionary: /var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_A_0.dic
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(336): 2 words read
    INFO: dict.c(342): Reading filler dictionary: /private/var/mobile/Containers/Bundle/Application/54D4920A-C3CD-4480-94F2-8654421B4D13/App_Name.app/AcousticModelEnglish.bundle/noisedict
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(345): 9 words read
    INFO: dict2pid.c(396): Building PID tables for dictionary
    INFO: dict2pid.c(406): Allocating 46^3 * 2 bytes (190 KiB) for word-initial triphones
    INFO: dict2pid.c(132): Allocated 51152 bytes (49 KiB) for word-final triphones
    INFO: dict2pid.c(196): Allocated 51152 bytes (49 KiB) for single-phone word triphones
    INFO: ngram_model_arpa.c(79): No \data\ mark in LM file
    INFO: ngram_model_dmp.c(166): Will use memory-mapped I/O for LM file
    INFO: ngram_model_dmp.c(220): ngrams 1=3, 2=2, 3=1
    INFO: ngram_model_dmp.c(266):        3 = LM.unigrams(+trailer) read
    INFO: ngram_model_dmp.c(312):        2 = LM.bigrams(+trailer) read
    INFO: ngram_model_dmp.c(338):        1 = LM.trigrams read
    INFO: ngram_model_dmp.c(363):        2 = LM.prob2 entries read
    INFO: ngram_model_dmp.c(383):        3 = LM.bo_wt2 entries read
    INFO: ngram_model_dmp.c(403):        2 = LM.prob3 entries read
    INFO: ngram_model_dmp.c(431):        1 = LM.tseg_base entries read
    INFO: ngram_model_dmp.c(487):        3 = ascii word strings read
    INFO: ngram_search_fwdtree.c(99): 0 unique initial diphones
    INFO: ngram_search_fwdtree.c(148): 0 root, 0 non-root channels, 12 single-phone words
    INFO: ngram_search_fwdtree.c(186): Creating search tree
    INFO: ngram_search_fwdtree.c(192): before: 0 root, 0 non-root channels, 12 single-phone words
    INFO: ngram_search_fwdtree.c(326): after: max nonroot chan increased to 128
    ERROR: "ngram_search_fwdtree.c", line 336: No word from the language model has pronunciation in the dictionary
    INFO: ngram_search_fwdtree.c(339): after: 0 root, 0 non-root channels, 11 single-phone words
    INFO: ngram_search_fwdflat.c(157): fwdflat: min_ef_width = 4, max_sf_win = 25
    2015-08-04 17:18:44.981 App_Name[50827:4273012] Restoring SmartCMN value of 55.322021
    2015-08-04 17:18:44.982 App_Name[50827:4273012] Listening.
    2015-08-04 17:18:44.982 App_Name[50827:4273012] Project has these words or phrases in its dictionary:
    A
    A(2)
    2015-08-04 17:18:44.983 App_Name[50827:4273012] Recognition loop has started
    2015-08-04 17:18:44.987 App_Name[50827:4272534] Pocketsphinx is now listening.
    2015-08-04 17:18:45.589 App_Name[50827:4273012] Speech detected...
    2015-08-04 17:18:45.591 App_Name[50827:4273012] Pocketsphinx heard "" with a score of (-1841) and an utterance ID of 0.
    2015-08-04 17:18:45.673 App_Name[50827:4273012] Pocketsphinx heard "" with a score of (-4145) and an utterance ID of 1.
    2015-08-04 17:18:45.776 App_Name[50827:4273012] End of speech detected...
    INFO: cmn_prior.c(131): cmn_prior_update: from < 55.32  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 50.10 -2.86 -21.38  6.18  9.53  9.65 -6.16  1.99 -5.67 -2.94 -0.32 -7.97 -8.61 >
    INFO: ngram_search_fwdtree.c(1550):      344 words recognized (9/fr)
    INFO: ngram_search_fwdtree.c(1552):     1104 senones evaluated (30/fr)
    INFO: ngram_search_fwdtree.c(1556):      366 channels searched (9/fr), 0 1st, 366 last
    INFO: ngram_search_fwdtree.c(1559):      366 words for which last channels evaluated (9/fr)
    INFO: ngram_search_fwdtree.c(1561):        0 candidate words for entering last phone (0/fr)
    INFO: ngram_search_fwdtree.c(1564): fwdtree 0.09 CPU 0.251 xRT
    INFO: ngram_search_fwdtree.c(1567): fwdtree 0.51 wall 1.383 xRT
    INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 2 words
    INFO: ngram_search_fwdflat.c(938):      230 words recognized (6/fr)
    INFO: ngram_search_fwdflat.c(940):      720 senones evaluated (19/fr)
    INFO: ngram_search_fwdflat.c(942):      267 channels searched (7/fr)
    INFO: ngram_search_fwdflat.c(944):      267 words searched (7/fr)
    INFO: ngram_search_fwdflat.c(947):       62 word transitions (1/fr)
    INFO: ngram_search_fwdflat.c(950): fwdflat 0.00 CPU 0.010 xRT
    INFO: ngram_search_fwdflat.c(953): fwdflat 0.00 wall 0.009 xRT
    INFO: ngram_search.c(1215): </s> not found in last frame, using <sil>.35 instead
    INFO: ngram_search.c(1268): lattice start node <s>.0 end node <sil>.2
    INFO: ngram_search.c(1294): Eliminated 90 nodes before end node
    INFO: ngram_search.c(1399): Lattice has 92 nodes, 1 links
    INFO: ps_lattice.c(1368): Normalizer P(O) = alpha(<sil>:2:35) = -4823285
    INFO: ps_lattice.c(1403): Joint P(O,S) = -4823286 P(S|O) = -1
    INFO: ngram_search.c(890): bestpath 0.00 CPU 0.000 xRT
    INFO: ngram_search.c(893): bestpath 0.00 wall 0.001 xRT
    2015-08-04 17:18:45.781 App_Name[50827:4273012] Pocketsphinx heard "" with a score of (-47303) and an utterance ID of 2.
    2015-08-04 17:18:45.782 App_Name[50827:4273012] Hypothesis was null so we aren't returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController's property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
    2015-08-04 17:18:46.052 App_Name[50827:4273012] Speech detected...
    2015-08-04 17:18:46.053 App_Name[50827:4273012] Pocketsphinx heard "" with a score of (-3968) and an utterance ID of 3.
    2015-08-04 17:18:46.165 App_Name[50827:4273012] Pocketsphinx heard "A" with a score of (-5233) and an utterance ID of 4.
    2015-08-04 17:18:46.173 App_Name[50827:4272534] rapidEarsDidReceiveFinishedSpeechHypothesis: A with score: -5233
    2015-08-04 17:18:46.174 App_Name[50827:4272534] Play effect sound
    2015-08-04 17:18:49.159 App_Name[50827:4272534] Starting OpenEars logging for OpenEars version 2.03 on 64-bit device (or build): iPad running iOS version: 8.300000
    2015-08-04 17:18:49.172 App_Name[50827:4272534] Starting dynamic language model generation
    
    INFO: cmd_ln.c(702): Parsing command line:
    sphinx_lm_convert \
    -i /var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_B_1.arpa \
    -o /var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_B_1.DMP
    
    Current configuration:
    [NAME]		[DEFLT]	[VALUE]
    -case
    -debug			0
    -help		no	no
    -i			/var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_B_1.arpa
    -ienc
    -ifmt
    -logbase	1.0001	1.000100e+00
    -mmap		no	no
    -o			/var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_B_1.DMP
    -oenc		utf8	utf8
    -ofmt
    
    INFO: ngram_model_arpa.c(504): ngrams 1=3, 2=2, 3=1
    INFO: ngram_model_arpa.c(137): Reading unigrams
    INFO: ngram_model_arpa.c(543):        3 = #unigrams created
    INFO: ngram_model_arpa.c(197): Reading bigrams
    INFO: ngram_model_arpa.c(561):        2 = #bigrams created
    INFO: ngram_model_arpa.c(562):        2 = #prob2 entries
    INFO: ngram_model_arpa.c(570):        3 = #bo_wt2 entries
    INFO: ngram_model_arpa.c(294): Reading trigrams
    INFO: ngram_model_arpa.c(583):        1 = #trigrams created
    INFO: ngram_model_arpa.c(584):        2 = #prob3 entries
    INFO: ngram_model_dmp.c(518): Building DMP model...
    INFO: ngram_model_dmp.c(548):        3 = #unigrams created
    INFO: ngram_model_dmp.c(649):        2 = #bigrams created
    INFO: ngram_model_dmp.c(650):        2 = #prob2 entries
    INFO: ngram_model_dmp.c(657):        3 = #bo_wt2 entries
    INFO: ngram_model_dmp.c(661):        1 = #trigrams created
    INFO: ngram_model_dmp.c(662):        2 = #prob3 entries
    2015-08-04 17:18:49.299 App_Name[50827:4272534] Done creating language model with CMUCLMTK in 0.126771 seconds.
    2015-08-04 17:18:49.307 App_Name[50827:4272534] I'm done running performDictionaryLookup and it took 0.002471 seconds
    2015-08-04 17:18:49.317 App_Name[50827:4272534] I'm done running dynamic language model generation and it took 0.157271 seconds
    2015-08-04 17:18:49.318 App_Name[50827:4272534] Valid setSecondsOfSilence value of 0.200000 will be used.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 50.10 -2.86 -21.38  6.18  9.53  9.65 -6.16  1.99 -5.67 -2.94 -0.32 -7.97 -8.61 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 56.52  4.14 -13.40  9.27 -0.63  2.79 -13.06 -7.02 -0.07 -7.23 -1.33 -5.36 -9.11 >
    INFO: ngram_search_fwdtree.c(1550):      344 words recognized (9/fr)
    INFO: ngram_search_fwdtree.c(1552):     1131 senones evaluated (31/fr)
    INFO: ngram_search_fwdtree.c(1556):      366 channels searched (9/fr), 0 1st, 366 last
    INFO: ngram_search_fwdtree.c(1559):      366 words for which last channels evaluated (9/fr)
    INFO: ngram_search_fwdtree.c(1561):        0 candidate words for entering last phone (0/fr)
    INFO: ngram_search_fwdtree.c(1564): fwdtree 0.82 CPU 2.215 xRT
    INFO: ngram_search_fwdtree.c(1567): fwdtree 3.71 wall 10.024 xRT
    INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 4 words
    INFO: ngram_search_fwdflat.c(938):      302 words recognized (8/fr)
    INFO: ngram_search_fwdflat.c(940):     1162 senones evaluated (31/fr)
    INFO: ngram_search_fwdflat.c(942):      387 channels searched (10/fr)
    INFO: ngram_search_fwdflat.c(944):      387 words searched (10/fr)
    INFO: ngram_search_fwdflat.c(947):      134 word transitions (3/fr)
    INFO: ngram_search_fwdflat.c(950): fwdflat 0.00 CPU 0.009 xRT
    INFO: ngram_search_fwdflat.c(953): fwdflat 0.00 wall 0.010 xRT
    2015-08-04 17:18:49.495 App_Name[50827:4273056] there is a request to change to the language model file /var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_B_1.DMP
    2015-08-04 17:18:49.496 App_Name[50827:4273056] The language model ID is 1438683529
    INFO: cmd_ln.c(702): Parsing command line:
    
    Current configuration:
    [NAME]		[DEFLT]		[VALUE]
    -agc		none		none
    -agcthresh	2.0		2.000000e+00
    -allphone
    -allphone_ci	no		no
    -alpha		0.97		9.700000e-01
    -ascale		20.0		2.000000e+01
    -aw		1		1
    -backtrace	no		no
    -beam		1e-48		1.000000e-48
    -bestpath	yes		yes
    -bestpathlw	9.5		9.500000e+00
    -bghist		no		no
    -ceplen		13		13
    -cmn		current		current
    -cmninit	8.0		8.0
    -compallsen	no		no
    -debug				0
    -dict
    -dictcase	no		no
    -dither		no		no
    -doublebw	no		no
    -ds		1		1
    -fdict
    -feat		1s_c_d_dd	1s_c_d_dd
    -featparams
    -fillprob	1e-8		1.000000e-08
    -frate		100		100
    -fsg
    -fsgusealtpron	yes		yes
    -fsgusefiller	yes		yes
    -fwdflat	yes		yes
    -fwdflatbeam	1e-64		1.000000e-64
    -fwdflatefwid	4		4
    -fwdflatlw	8.5		8.500000e+00
    -fwdflatsfwin	25		25
    -fwdflatwbeam	7e-29		7.000000e-29
    -fwdtree	yes		yes
    -hmm
    -input_endian	little		little
    -jsgf
    -kdmaxbbi	-1		-1
    -kdmaxdepth	0		0
    -kdtree
    -keyphrase
    -kws
    -kws_plp	1e-1		1.000000e-01
    -kws_threshold	1		1.000000e+00
    -latsize	5000		5000
    -lda
    -ldadim		0		0
    -lextreedump	0		0
    -lifter		0		0
    -lm
    -lmctl
    -lmname
    -logbase	1.0001		1.000100e+00
    -logfn
    -logspec	no		no
    -lowerf		133.33334	1.333333e+02
    -lpbeam		1e-40		1.000000e-40
    -lponlybeam	7e-29		7.000000e-29
    -lw		6.5		6.500000e+00
    -maxhmmpf	10000		10000
    -maxnewoov	20		20
    -maxwpf		-1		-1
    -mdef
    -mean
    -mfclogdir
    -min_endfr	0		0
    -mixw
    -mixwfloor	0.0000001	1.000000e-07
    -mllr
    -mmap		yes		yes
    -ncep		13		13
    -nfft		512		512
    -nfilt		40		40
    -nwpen		1.0		1.000000e+00
    -pbeam		1e-48		1.000000e-48
    -pip		1.0		1.000000e+00
    -pl_beam	1e-10		1.000000e-10
    -pl_pbeam	1e-5		1.000000e-05
    -pl_window	0		0
    -rawlogdir
    -remove_dc	no		no
    -remove_noise	yes		yes
    -remove_silence	yes		yes
    -round_filters	yes		yes
    -samprate	16000		1.600000e+04
    -seed		-1		-1
    -sendump
    -senlogdir
    -senmgau
    -silprob	0.005		5.000000e-03
    -smoothspec	no		no
    -svspec
    -tmat
    -tmatfloor	0.0001		1.000000e-04
    -topn		4		4
    -topn_beam	0		0
    -toprule
    -transform	legacy		legacy
    -unit_area	yes		yes
    -upperf		6855.4976	6.855498e+03
    -usewdphones	no		no
    -uw		1.0		1.000000e+00
    -vad_postspeech	50		50
    -vad_prespeech	10		10
    -vad_threshold	2.0		2.000000e+00
    -var
    -varfloor	0.0001		1.000000e-04
    -varnorm	no		no
    -verbose	no		no
    -warp_params
    -warp_type	inverse_linear	inverse_linear
    -wbeam		7e-29		7.000000e-29
    -wip		0.65		6.500000e-01
    -wlen		0.025625	2.562500e-02
    
    INFO: dict.c(320): Allocating 4106 * 32 bytes (128 KiB) for word entries
    INFO: dict.c(333): Reading main dictionary: /var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_B_1.dic
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(336): 1 words read
    INFO: dict.c(342): Reading filler dictionary: /private/var/mobile/Containers/Bundle/Application/54D4920A-C3CD-4480-94F2-8654421B4D13/App_Name.app/AcousticModelEnglish.bundle/noisedict
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(345): 9 words read
    INFO: dict2pid.c(396): Building PID tables for dictionary
    INFO: dict2pid.c(406): Allocating 46^3 * 2 bytes (190 KiB) for word-initial triphones
    INFO: dict2pid.c(132): Allocated 51152 bytes (49 KiB) for word-final triphones
    INFO: dict2pid.c(196): Allocated 51152 bytes (49 KiB) for single-phone word triphones
    INFO: ngram_search_fwdtree.c(99): 1 unique initial diphones
    INFO: ngram_search_fwdtree.c(148): 0 root, 0 non-root channels, 10 single-phone words
    INFO: ngram_search_fwdtree.c(186): Creating search tree
    INFO: ngram_search_fwdtree.c(192): before: 0 root, 0 non-root channels, 10 single-phone words
    ERROR: "ngram_search_fwdtree.c", line 336: No word from the language model has pronunciation in the dictionary
    INFO: ngram_search_fwdtree.c(339): after: 0 root, 0 non-root channels, 9 single-phone words
    2015-08-04 17:18:49.515 App_Name[50827:4273056] Success loading the specified dictionary file /var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_B_1.dic.
    INFO: ngram_model_arpa.c(79): No \data\ mark in LM file
    INFO: ngram_model_dmp.c(166): Will use memory-mapped I/O for LM file
    INFO: ngram_model_dmp.c(220): ngrams 1=3, 2=2, 3=1
    INFO: ngram_model_dmp.c(266):        3 = LM.unigrams(+trailer) read
    INFO: ngram_model_dmp.c(312):        2 = LM.bigrams(+trailer) read
    INFO: ngram_model_dmp.c(338):        1 = LM.trigrams read
    INFO: ngram_model_dmp.c(363):        2 = LM.prob2 entries read
    INFO: ngram_model_dmp.c(383):        3 = LM.bo_wt2 entries read
    INFO: ngram_model_dmp.c(403):        2 = LM.prob3 entries read
    INFO: ngram_model_dmp.c(431):        1 = LM.tseg_base entries read
    INFO: ngram_model_dmp.c(487):        3 = ascii word strings read
    INFO: ngram_search_fwdtree.c(99): 1 unique initial diphones
    INFO: ngram_search_fwdtree.c(148): 0 root, 0 non-root channels, 10 single-phone words
    INFO: ngram_search_fwdtree.c(186): Creating search tree
    INFO: ngram_search_fwdtree.c(192): before: 0 root, 0 non-root channels, 10 single-phone words
    INFO: ngram_search_fwdtree.c(326): after: max nonroot chan increased to 128
    INFO: ngram_search_fwdtree.c(339): after: 1 root, 0 non-root channels, 9 single-phone words
    INFO: ngram_search_fwdflat.c(157): fwdflat: min_ef_width = 4, max_sf_win = 25
    2015-08-04 17:18:49.518 App_Name[50827:4273056] Success loading the specified language model file /var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_B_1.DMP.
    2015-08-04 17:18:49.518 App_Name[50827:4272534] Pocketsphinx is now using the following language model:
    /var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_B_1.DMP and the following dictionary: /var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_B_1.dic
    2015-08-04 17:18:49.519 App_Name[50827:4273056] Changed language model. Project has these words or phrases in its dictionary:
    B
    INFO: cmn_prior.c(131): cmn_prior_update: from < 56.52  4.14 -13.40  9.27 -0.63  2.79 -13.06 -7.02 -0.07 -7.23 -1.33 -5.36 -9.11 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 56.52  4.14 -13.40  9.27 -0.63  2.79 -13.06 -7.02 -0.07 -7.23 -1.33 -5.36 -9.11 >
    INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 0 words
    2015-08-04 17:18:49.744 App_Name[50827:4273056] Speech detected...
    2015-08-04 17:18:49.745 App_Name[50827:4273056] Pocketsphinx heard "" with a score of (-2471) and an utterance ID of 5.
    2015-08-04 17:18:49.745 App_Name[50827:4273056] Hypothesis was null so we aren't returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController's property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
    2015-08-04 17:18:49.875 App_Name[50827:4273056] Pocketsphinx heard "" with a score of (-3611) and an utterance ID of 6.
    2015-08-04 17:18:50.015 App_Name[50827:4273056] Pocketsphinx heard "" with a score of (-6506) and an utterance ID of 7.
    2015-08-04 17:18:50.134 App_Name[50827:4273056] Pocketsphinx heard "" with a score of (-7907) and an utterance ID of 8.
    2015-08-04 17:18:50.261 App_Name[50827:4273056] Pocketsphinx heard "" with a score of (-9050) and an utterance ID of 9.
    2015-08-04 17:18:50.383 App_Name[50827:4273056] End of speech detected...
    INFO: cmn_prior.c(131): cmn_prior_update: from < 56.52  4.14 -13.40  9.27 -0.63  2.79 -13.06 -7.02 -0.07 -7.23 -1.33 -5.36 -9.11 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 57.29  5.73 -11.50  8.72 -0.13  7.50 -5.11 -8.93 -2.48 -6.79 -6.31 -2.05 -8.11 >
    INFO: ngram_search_fwdtree.c(1550):      575 words recognized (8/fr)
    INFO: ngram_search_fwdtree.c(1552):     4967 senones evaluated (72/fr)
    INFO: ngram_search_fwdtree.c(1556):     2542 channels searched (36/fr), 65 1st, 2477 last
    INFO: ngram_search_fwdtree.c(1559):      617 words for which last channels evaluated (8/fr)
    INFO: ngram_search_fwdtree.c(1561):       63 candidate words for entering last phone (0/fr)
    INFO: ngram_search_fwdtree.c(1564): fwdtree 0.14 CPU 0.201 xRT
    INFO: ngram_search_fwdtree.c(1567): fwdtree 0.86 wall 1.252 xRT
    INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 3 words
    INFO: ngram_search_fwdflat.c(938):      491 words recognized (7/fr)
    INFO: ngram_search_fwdflat.c(940):     4890 senones evaluated (71/fr)
    INFO: ngram_search_fwdflat.c(942):     2482 channels searched (35/fr)
    INFO: ngram_search_fwdflat.c(944):      622 words searched (9/fr)
    INFO: ngram_search_fwdflat.c(947):      130 word transitions (1/fr)
    INFO: ngram_search_fwdflat.c(950): fwdflat 0.01 CPU 0.017 xRT
    INFO: ngram_search_fwdflat.c(953): fwdflat 0.01 wall 0.013 xRT
    INFO: ngram_search.c(1215): </s> not found in last frame, using B.67 instead
    INFO: ngram_search.c(1268): lattice start node <s>.0 end node B.55
    INFO: ngram_search.c(1294): Eliminated 4 nodes before end node
    INFO: ngram_search.c(1399): Lattice has 182 nodes, 1073 links
    INFO: ps_lattice.c(1368): Normalizer P(O) = alpha(B:55:67) = -541208
    INFO: ps_lattice.c(1403): Joint P(O,S) = -541660 P(S|O) = -452
    INFO: ngram_search.c(890): bestpath 0.00 CPU 0.003 xRT
    INFO: ngram_search.c(893): bestpath 0.00 wall 0.003 xRT
    2015-08-04 17:18:50.395 App_Name[50827:4273056] Pocketsphinx heard "B" with a score of (-10515) and an utterance ID of 10.
    2015-08-04 17:18:50.397 App_Name[50827:4272534] rapidEarsDidReceiveFinishedSpeechHypothesis: B with score: -10515
    2015-08-04 17:18:50.397 App_Name[50827:4272534] Play effect sound
    2015-08-04 17:18:52.427 App_Name[50827:4272534] Starting OpenEars logging for OpenEars version 2.03 on 64-bit device (or build): iPad running iOS version: 8.300000
    2015-08-04 17:18:52.438 App_Name[50827:4272534] Starting dynamic language model generation
    
    INFO: cmd_ln.c(702): Parsing command line:
    sphinx_lm_convert \
    -i /var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_C_2.arpa \
    -o /var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_C_2.DMP
    
    Current configuration:
    [NAME]		[DEFLT]	[VALUE]
    -case
    -debug			0
    -help		no	no
    -i			/var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_C_2.arpa
    -ienc
    -ifmt
    -logbase	1.0001	1.000100e+00
    -mmap		no	no
    -o			/var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_C_2.DMP
    -oenc		utf8	utf8
    -ofmt
    
    INFO: ngram_model_arpa.c(504): ngrams 1=3, 2=2, 3=1
    INFO: ngram_model_arpa.c(137): Reading unigrams
    INFO: ngram_model_arpa.c(543):        3 = #unigrams created
    INFO: ngram_model_arpa.c(197): Reading bigrams
    INFO: ngram_model_arpa.c(561):        2 = #bigrams created
    INFO: ngram_model_arpa.c(562):        2 = #prob2 entries
    INFO: ngram_model_arpa.c(570):        3 = #bo_wt2 entries
    INFO: ngram_model_arpa.c(294): Reading trigrams
    INFO: ngram_model_arpa.c(583):        1 = #trigrams created
    INFO: ngram_model_arpa.c(584):        2 = #prob3 entries
    INFO: ngram_model_dmp.c(518): Building DMP model...
    INFO: ngram_model_dmp.c(548):        3 = #unigrams created
    INFO: ngram_model_dmp.c(649):        2 = #bigrams created
    INFO: ngram_model_dmp.c(650):        2 = #prob2 entries
    INFO: ngram_model_dmp.c(657):        3 = #bo_wt2 entries
    INFO: ngram_model_dmp.c(661):        1 = #trigrams created
    INFO: ngram_model_dmp.c(662):        2 = #prob3 entries
    2015-08-04 17:18:52.554 App_Name[50827:4272534] Done creating language model with CMUCLMTK in 0.115380 seconds.
    2015-08-04 17:18:52.579 App_Name[50827:4272534] I'm done running performDictionaryLookup and it took 0.004318 seconds
    2015-08-04 17:18:52.587 App_Name[50827:4272534] I'm done running dynamic language model generation and it took 0.159274 seconds
    2015-08-04 17:18:52.587 App_Name[50827:4272534] Valid setSecondsOfSilence value of 0.200000 will be used.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 57.29  5.73 -11.50  8.72 -0.13  7.50 -5.11 -8.93 -2.48 -6.79 -6.31 -2.05 -8.11 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 57.29  5.73 -11.50  8.72 -0.13  7.50 -5.11 -8.93 -2.48 -6.79 -6.31 -2.05 -8.11 >
    INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 0 words
    2015-08-04 17:18:52.692 App_Name[50827:4273056] there is a request to change to the language model file /var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_C_2.DMP
    2015-08-04 17:18:52.693 App_Name[50827:4273056] The language model ID is 1438683532
    INFO: cmd_ln.c(702): Parsing command line:
    
    Current configuration:
    [NAME]		[DEFLT]		[VALUE]
    -agc		none		none
    -agcthresh	2.0		2.000000e+00
    -allphone
    -allphone_ci	no		no
    -alpha		0.97		9.700000e-01
    -ascale		20.0		2.000000e+01
    -aw		1		1
    -backtrace	no		no
    -beam		1e-48		1.000000e-48
    -bestpath	yes		yes
    -bestpathlw	9.5		9.500000e+00
    -bghist		no		no
    -ceplen		13		13
    -cmn		current		current
    -cmninit	8.0		8.0
    -compallsen	no		no
    -debug				0
    -dict
    -dictcase	no		no
    -dither		no		no
    -doublebw	no		no
    -ds		1		1
    -fdict
    -feat		1s_c_d_dd	1s_c_d_dd
    -featparams
    -fillprob	1e-8		1.000000e-08
    -frate		100		100
    -fsg
    -fsgusealtpron	yes		yes
    -fsgusefiller	yes		yes
    -fwdflat	yes		yes
    -fwdflatbeam	1e-64		1.000000e-64
    -fwdflatefwid	4		4
    -fwdflatlw	8.5		8.500000e+00
    -fwdflatsfwin	25		25
    -fwdflatwbeam	7e-29		7.000000e-29
    -fwdtree	yes		yes
    -hmm
    -input_endian	little		little
    -jsgf
    -kdmaxbbi	-1		-1
    -kdmaxdepth	0		0
    -kdtree
    -keyphrase
    -kws
    -kws_plp	1e-1		1.000000e-01
    -kws_threshold	1		1.000000e+00
    -latsize	5000		5000
    -lda
    -ldadim		0		0
    -lextreedump	0		0
    -lifter		0		0
    -lm
    -lmctl
    -lmname
    -logbase	1.0001		1.000100e+00
    -logfn
    -logspec	no		no
    -lowerf		133.33334	1.333333e+02
    -lpbeam		1e-40		1.000000e-40
    -lponlybeam	7e-29		7.000000e-29
    -lw		6.5		6.500000e+00
    -maxhmmpf	10000		10000
    -maxnewoov	20		20
    -maxwpf		-1		-1
    -mdef
    -mean
    -mfclogdir
    -min_endfr	0		0
    -mixw
    -mixwfloor	0.0000001	1.000000e-07
    -mllr
    -mmap		yes		yes
    -ncep		13		13
    -nfft		512		512
    -nfilt		40		40
    -nwpen		1.0		1.000000e+00
    -pbeam		1e-48		1.000000e-48
    -pip		1.0		1.000000e+00
    -pl_beam	1e-10		1.000000e-10
    -pl_pbeam	1e-5		1.000000e-05
    -pl_window	0		0
    -rawlogdir
    -remove_dc	no		no
    -remove_noise	yes		yes
    -remove_silence	yes		yes
    -round_filters	yes		yes
    -samprate	16000		1.600000e+04
    -seed		-1		-1
    -sendump
    -senlogdir
    -senmgau
    -silprob	0.005		5.000000e-03
    -smoothspec	no		no
    -svspec
    -tmat
    -tmatfloor	0.0001		1.000000e-04
    -topn		4		4
    -topn_beam	0		0
    -toprule
    -transform	legacy		legacy
    -unit_area	yes		yes
    -upperf		6855.4976	6.855498e+03
    -usewdphones	no		no
    -uw		1.0		1.000000e+00
    -vad_postspeech	50		50
    -vad_prespeech	10		10
    -vad_threshold	2.0		2.000000e+00
    -var
    -varfloor	0.0001		1.000000e-04
    -varnorm	no		no
    -verbose	no		no
    -warp_params
    -warp_type	inverse_linear	inverse_linear
    -wbeam		7e-29		7.000000e-29
    -wip		0.65		6.500000e-01
    -wlen		0.025625	2.562500e-02
    
    INFO: dict.c(320): Allocating 4106 * 32 bytes (128 KiB) for word entries
    INFO: dict.c(333): Reading main dictionary: /var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_C_2.dic
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(336): 1 words read
    INFO: dict.c(342): Reading filler dictionary: /private/var/mobile/Containers/Bundle/Application/54D4920A-C3CD-4480-94F2-8654421B4D13/App_Name.app/AcousticModelEnglish.bundle/noisedict
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(345): 9 words read
    INFO: dict2pid.c(396): Building PID tables for dictionary
    INFO: dict2pid.c(406): Allocating 46^3 * 2 bytes (190 KiB) for word-initial triphones
    INFO: dict2pid.c(132): Allocated 51152 bytes (49 KiB) for word-final triphones
    INFO: dict2pid.c(196): Allocated 51152 bytes (49 KiB) for single-phone word triphones
    INFO: ngram_search_fwdtree.c(99): 1 unique initial diphones
    INFO: ngram_search_fwdtree.c(148): 0 root, 0 non-root channels, 10 single-phone words
    INFO: ngram_search_fwdtree.c(186): Creating search tree
    INFO: ngram_search_fwdtree.c(192): before: 0 root, 0 non-root channels, 10 single-phone words
    ERROR: "ngram_search_fwdtree.c", line 336: No word from the language model has pronunciation in the dictionary
    INFO: ngram_search_fwdtree.c(339): after: 0 root, 0 non-root channels, 9 single-phone words
    INFO: ngram_search_fwdtree.c(99): 1 unique initial diphones
    INFO: ngram_search_fwdtree.c(148): 0 root, 0 non-root channels, 10 single-phone words
    INFO: ngram_search_fwdtree.c(186): Creating search tree
    INFO: ngram_search_fwdtree.c(192): before: 0 root, 0 non-root channels, 10 single-phone words
    ERROR: "ngram_search_fwdtree.c", line 336: No word from the language model has pronunciation in the dictionary
    INFO: ngram_search_fwdtree.c(339): after: 0 root, 0 non-root channels, 9 single-phone words
    2015-08-04 17:18:52.711 App_Name[50827:4273056] Success loading the specified dictionary file /var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_C_2.dic.
    INFO: ngram_model_arpa.c(79): No \data\ mark in LM file
    INFO: ngram_model_dmp.c(166): Will use memory-mapped I/O for LM file
    INFO: ngram_model_dmp.c(220): ngrams 1=3, 2=2, 3=1
    INFO: ngram_model_dmp.c(266):        3 = LM.unigrams(+trailer) read
    INFO: ngram_model_dmp.c(312):        2 = LM.bigrams(+trailer) read
    INFO: ngram_model_dmp.c(338):        1 = LM.trigrams read
    INFO: ngram_model_dmp.c(363):        2 = LM.prob2 entries read
    INFO: ngram_model_dmp.c(383):        3 = LM.bo_wt2 entries read
    INFO: ngram_model_dmp.c(403):        2 = LM.prob3 entries read
    INFO: ngram_model_dmp.c(431):        1 = LM.tseg_base entries read
    INFO: ngram_model_dmp.c(487):        3 = ascii word strings read
    INFO: ngram_search_fwdtree.c(99): 1 unique initial diphones
    INFO: ngram_search_fwdtree.c(148): 0 root, 0 non-root channels, 10 single-phone words
    INFO: ngram_search_fwdtree.c(186): Creating search tree
    INFO: ngram_search_fwdtree.c(192): before: 0 root, 0 non-root channels, 10 single-phone words
    INFO: ngram_search_fwdtree.c(326): after: max nonroot chan increased to 128
    INFO: ngram_search_fwdtree.c(339): after: 1 root, 0 non-root channels, 9 single-phone words
    INFO: ngram_search_fwdflat.c(157): fwdflat: min_ef_width = 4, max_sf_win = 25
    2015-08-04 17:18:52.713 App_Name[50827:4273056] Success loading the specified language model file /var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_C_2.DMP.
    2015-08-04 17:18:52.714 App_Name[50827:4273056] Changed language model. Project has these words or phrases in its dictionary:
    C
    INFO: cmn_prior.c(131): cmn_prior_update: from < 57.29  5.73 -11.50  8.72 -0.13  7.50 -5.11 -8.93 -2.48 -6.79 -6.31 -2.05 -8.11 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 57.29  5.73 -11.50  8.72 -0.13  7.50 -5.11 -8.93 -2.48 -6.79 -6.31 -2.05 -8.11 >
    INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 0 words
    2015-08-04 17:18:52.724 App_Name[50827:4272534] Pocketsphinx is now using the following language model:
    /var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_C_2.DMP and the following dictionary: /var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_C_2.dic
    2015-08-04 17:18:52.829 App_Name[50827:4273056] Speech detected...
    2015-08-04 17:18:52.830 App_Name[50827:4273056] Pocketsphinx heard "" with a score of (-3821) and an utterance ID of 11.
    2015-08-04 17:18:52.950 App_Name[50827:4273056] Pocketsphinx heard "" with a score of (-4873) and an utterance ID of 12.
    2015-08-04 17:18:53.070 App_Name[50827:4273056] End of speech detected...
    INFO: cmn_prior.c(131): cmn_prior_update: from < 57.29  5.73 -11.50  8.72 -0.13  7.50 -5.11 -8.93 -2.48 -6.79 -6.31 -2.05 -8.11 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 58.03  3.50 -9.27  6.71 -0.63  7.91 -2.92 -6.81 -2.32 -6.02 -8.06 -2.33 -7.31 >
    INFO: ngram_search_fwdtree.c(1550):      290 words recognized (8/fr)
    INFO: ngram_search_fwdtree.c(1552):     2547 senones evaluated (73/fr)
    INFO: ngram_search_fwdtree.c(1556):     1181 channels searched (33/fr), 31 1st, 1150 last
    INFO: ngram_search_fwdtree.c(1559):      310 words for which last channels evaluated (8/fr)
    INFO: ngram_search_fwdtree.c(1561):       29 candidate words for entering last phone (0/fr)
    INFO: ngram_search_fwdtree.c(1564): fwdtree 0.06 CPU 0.177 xRT
    INFO: ngram_search_fwdtree.c(1567): fwdtree 0.36 wall 1.018 xRT
    INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 3 words
    INFO: ngram_search_fwdflat.c(938):      242 words recognized (7/fr)
    INFO: ngram_search_fwdflat.c(940):     2652 senones evaluated (76/fr)
    INFO: ngram_search_fwdflat.c(942):     1228 channels searched (35/fr)
    INFO: ngram_search_fwdflat.c(944):      329 words searched (9/fr)
    INFO: ngram_search_fwdflat.c(947):       91 word transitions (2/fr)
    INFO: ngram_search_fwdflat.c(950): fwdflat 0.00 CPU 0.013 xRT
    INFO: ngram_search_fwdflat.c(953): fwdflat 0.00 wall 0.013 xRT
    INFO: ngram_search.c(1215): </s> not found in last frame, using C.33 instead
    INFO: ngram_search.c(1268): lattice start node <s>.0 end node C.26
    INFO: ngram_search.c(1294): Eliminated 9 nodes before end node
    INFO: ngram_search.c(1399): Lattice has 61 nodes, 198 links
    INFO: ps_lattice.c(1368): Normalizer P(O) = alpha(C:26:33) = -297153
    INFO: ps_lattice.c(1403): Joint P(O,S) = -297589 P(S|O) = -436
    INFO: ngram_search.c(890): bestpath 0.00 CPU 0.002 xRT
    INFO: ngram_search.c(893): bestpath 0.00 wall 0.002 xRT
    2015-08-04 17:18:53.080 App_Name[50827:4273056] Pocketsphinx heard "C" with a score of (-5748) and an utterance ID of 13.
    2015-08-04 17:18:53.090 App_Name[50827:4272534] rapidEarsDidReceiveFinishedSpeechHypothesis: C with score: -5748
    2015-08-04 17:18:53.091 App_Name[50827:4272534] Play effect sound
    2015-08-04 17:18:55.093 App_Name[50827:4272534] Starting OpenEars logging for OpenEars version 2.03 on 64-bit device (or build): iPad running iOS version: 8.300000
    2015-08-04 17:18:55.104 App_Name[50827:4272534] Starting dynamic language model generation
    
    INFO: cmd_ln.c(702): Parsing command line:
    sphinx_lm_convert \
    -i /var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_D_3.arpa \
    -o /var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_D_3.DMP
    
    Current configuration:
    [NAME]		[DEFLT]	[VALUE]
    -case
    -debug			0
    -help		no	no
    -i			/var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_D_3.arpa
    -ienc
    -ifmt
    -logbase	1.0001	1.000100e+00
    -mmap		no	no
    -o			/var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_D_3.DMP
    -oenc		utf8	utf8
    -ofmt
    
    INFO: ngram_model_arpa.c(504): ngrams 1=3, 2=2, 3=1
    INFO: ngram_model_arpa.c(137): Reading unigrams
    INFO: ngram_model_arpa.c(543):        3 = #unigrams created
    INFO: ngram_model_arpa.c(197): Reading bigrams
    INFO: ngram_model_arpa.c(561):        2 = #bigrams created
    INFO: ngram_model_arpa.c(562):        2 = #prob2 entries
    INFO: ngram_model_arpa.c(570):        3 = #bo_wt2 entries
    INFO: ngram_model_arpa.c(294): Reading trigrams
    INFO: ngram_model_arpa.c(583):        1 = #trigrams created
    INFO: ngram_model_arpa.c(584):        2 = #prob3 entries
    INFO: ngram_model_dmp.c(518): Building DMP model...
    INFO: ngram_model_dmp.c(548):        3 = #unigrams created
    INFO: ngram_model_dmp.c(649):        2 = #bigrams created
    INFO: ngram_model_dmp.c(650):        2 = #prob2 entries
    INFO: ngram_model_dmp.c(657):        3 = #bo_wt2 entries
    INFO: ngram_model_dmp.c(661):        1 = #trigrams created
    INFO: ngram_model_dmp.c(662):        2 = #prob3 entries
    2015-08-04 17:18:55.224 App_Name[50827:4272534] Done creating language model with CMUCLMTK in 0.119595 seconds.
    2015-08-04 17:18:55.236 App_Name[50827:4272534] I'm done running performDictionaryLookup and it took 0.006811 seconds
    2015-08-04 17:18:55.245 App_Name[50827:4272534] I'm done running dynamic language model generation and it took 0.150872 seconds
    2015-08-04 17:18:55.246 App_Name[50827:4272534] Valid setSecondsOfSilence value of 0.200000 will be used.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 58.03  3.50 -9.27  6.71 -0.63  7.91 -2.92 -6.81 -2.32 -6.02 -8.06 -2.33 -7.31 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 58.03  3.50 -9.27  6.71 -0.63  7.91 -2.92 -6.81 -2.32 -6.02 -8.06 -2.33 -7.31 >
    INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 0 words
    2015-08-04 17:18:55.376 App_Name[50827:4273012] there is a request to change to the language model file /var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_D_3.DMP
    2015-08-04 17:18:55.377 App_Name[50827:4273012] The language model ID is 1438683535
    INFO: cmd_ln.c(702): Parsing command line:
    
    Current configuration:
    [NAME]		[DEFLT]		[VALUE]
    -agc		none		none
    -agcthresh	2.0		2.000000e+00
    -allphone
    -allphone_ci	no		no
    -alpha		0.97		9.700000e-01
    -ascale		20.0		2.000000e+01
    -aw		1		1
    -backtrace	no		no
    -beam		1e-48		1.000000e-48
    -bestpath	yes		yes
    -bestpathlw	9.5		9.500000e+00
    -bghist		no		no
    -ceplen		13		13
    -cmn		current		current
    -cmninit	8.0		8.0
    -compallsen	no		no
    -debug				0
    -dict
    -dictcase	no		no
    -dither		no		no
    -doublebw	no		no
    -ds		1		1
    -fdict
    -feat		1s_c_d_dd	1s_c_d_dd
    -featparams
    -fillprob	1e-8		1.000000e-08
    -frate		100		100
    -fsg
    -fsgusealtpron	yes		yes
    -fsgusefiller	yes		yes
    -fwdflat	yes		yes
    -fwdflatbeam	1e-64		1.000000e-64
    -fwdflatefwid	4		4
    -fwdflatlw	8.5		8.500000e+00
    -fwdflatsfwin	25		25
    -fwdflatwbeam	7e-29		7.000000e-29
    -fwdtree	yes		yes
    -hmm
    -input_endian	little		little
    -jsgf
    -kdmaxbbi	-1		-1
    -kdmaxdepth	0		0
    -kdtree
    -keyphrase
    -kws
    -kws_plp	1e-1		1.000000e-01
    -kws_threshold	1		1.000000e+00
    -latsize	5000		5000
    -lda
    -ldadim		0		0
    -lextreedump	0		0
    -lifter		0		0
    -lm
    -lmctl
    -lmname
    -logbase	1.0001		1.000100e+00
    -logfn
    -logspec	no		no
    -lowerf		133.33334	1.333333e+02
    -lpbeam		1e-40		1.000000e-40
    -lponlybeam	7e-29		7.000000e-29
    -lw		6.5		6.500000e+00
    -maxhmmpf	10000		10000
    -maxnewoov	20		20
    -maxwpf		-1		-1
    -mdef
    -mean
    -mfclogdir
    -min_endfr	0		0
    -mixw
    -mixwfloor	0.0000001	1.000000e-07
    -mllr
    -mmap		yes		yes
    -ncep		13		13
    -nfft		512		512
    -nfilt		40		40
    -nwpen		1.0		1.000000e+00
    -pbeam		1e-48		1.000000e-48
    -pip		1.0		1.000000e+00
    -pl_beam	1e-10		1.000000e-10
    -pl_pbeam	1e-5		1.000000e-05
    -pl_window	0		0
    -rawlogdir
    -remove_dc	no		no
    -remove_noise	yes		yes
    -remove_silence	yes		yes
    -round_filters	yes		yes
    -samprate	16000		1.600000e+04
    -seed		-1		-1
    -sendump
    -senlogdir
    -senmgau
    -silprob	0.005		5.000000e-03
    -smoothspec	no		no
    -svspec
    -tmat
    -tmatfloor	0.0001		1.000000e-04
    -topn		4		4
    -topn_beam	0		0
    -toprule
    -transform	legacy		legacy
    -unit_area	yes		yes
    -upperf		6855.4976	6.855498e+03
    -usewdphones	no		no
    -uw		1.0		1.000000e+00
    -vad_postspeech	50		50
    -vad_prespeech	10		10
    -vad_threshold	2.0		2.000000e+00
    -var
    -varfloor	0.0001		1.000000e-04
    -varnorm	no		no
    -verbose	no		no
    -warp_params
    -warp_type	inverse_linear	inverse_linear
    -wbeam		7e-29		7.000000e-29
    -wip		0.65		6.500000e-01
    -wlen		0.025625	2.562500e-02
    
    INFO: dict.c(320): Allocating 4106 * 32 bytes (128 KiB) for word entries
    INFO: dict.c(333): Reading main dictionary: /var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_D_3.dic
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(336): 1 words read
    INFO: dict.c(342): Reading filler dictionary: /private/var/mobile/Containers/Bundle/Application/54D4920A-C3CD-4480-94F2-8654421B4D13/App_Name.app/AcousticModelEnglish.bundle/noisedict
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(345): 9 words read
    INFO: dict2pid.c(396): Building PID tables for dictionary
    INFO: dict2pid.c(406): Allocating 46^3 * 2 bytes (190 KiB) for word-initial triphones
    INFO: dict2pid.c(132): Allocated 51152 bytes (49 KiB) for word-final triphones
    INFO: dict2pid.c(196): Allocated 51152 bytes (49 KiB) for single-phone word triphones
    INFO: ngram_search_fwdtree.c(99): 1 unique initial diphones
    INFO: ngram_search_fwdtree.c(148): 0 root, 0 non-root channels, 10 single-phone words
    INFO: ngram_search_fwdtree.c(186): Creating search tree
    INFO: ngram_search_fwdtree.c(192): before: 0 root, 0 non-root channels, 10 single-phone words
    ERROR: "ngram_search_fwdtree.c", line 336: No word from the language model has pronunciation in the dictionary
    INFO: ngram_search_fwdtree.c(339): after: 0 root, 0 non-root channels, 9 single-phone words
    INFO: ngram_search_fwdtree.c(99): 1 unique initial diphones
    INFO: ngram_search_fwdtree.c(148): 0 root, 0 non-root channels, 10 single-phone words
    INFO: ngram_search_fwdtree.c(186): Creating search tree
    INFO: ngram_search_fwdtree.c(192): before: 0 root, 0 non-root channels, 10 single-phone words
    ERROR: "ngram_search_fwdtree.c", line 336: No word from the language model has pronunciation in the dictionary
    INFO: ngram_search_fwdtree.c(339): after: 0 root, 0 non-root channels, 9 single-phone words
    INFO: ngram_search_fwdtree.c(99): 1 unique initial diphones
    INFO: ngram_search_fwdtree.c(148): 0 root, 0 non-root channels, 10 single-phone words
    INFO: ngram_search_fwdtree.c(186): Creating search tree
    INFO: ngram_search_fwdtree.c(192): before: 0 root, 0 non-root channels, 10 single-phone words
    ERROR: "ngram_search_fwdtree.c", line 336: No word from the language model has pronunciation in the dictionary
    INFO: ngram_search_fwdtree.c(339): after: 0 root, 0 non-root channels, 9 single-phone words
    2015-08-04 17:18:55.393 App_Name[50827:4273012] Success loading the specified dictionary file /var/mobile/Containers/Data/Application/AE23615A-AB76-4AC4-98BB-0F72F026A2DB/Library/Caches/lession_vocal_D_3.dic.
    INFO: ngram_model_arpa.c(79): No \data\ mark in LM file
    INFO: ngram_model_dmp.c(166): Will use memory-mapped I/O for LM file
    INFO: ngram_model_dmp.c(220): ngrams 1=3, 2=2, 3=1
    INFO: ngram_model_dmp.c(266):        3 = LM.unigrams(+trailer) read
    INFO: ngram_model_dmp.c(312):        2 = LM.bigrams(+trailer) read
    INFO: ngram_model_dmp.c(338):        1 = LM.trigrams read
    INFO: ngram_model_dmp.c(363):        2 = LM.prob2 entries read
    INFO: ngram_model_dmp.c(383):        3 = LM.bo_wt2 entries read
    INFO: ngram_model_dmp.c(403):        2 = LM.prob3 entries read
    INFO: ngram_model_dmp.c(431):        1 = LM.tseg_base entries read
    INFO: ngram_model_dmp.c(487):        3 = ascii word strings read
    App_Name(50827,0x103588000) malloc: *** error for object 0x1702910d0: Invalid pointer dequeued from free list
    *** set a breakpoint in malloc_error_break to debug
    (lldb)
    

The call stack:

    Thread 12Queue : com.apple.root.default-qos (concurrent)
    #0	0x00000001948c7270 in __pthread_kill ()
    #1	0x0000000194965170 in pthread_kill ()
    #2	0x000000019483eb18 in abort ()
    #3	0x00000001949023e4 in nanozone_error ()
    #4	0x0000000194902550 in _nano_malloc_check_clear ()
    #5	0x00000001949010dc in nano_calloc ()
    #6	0x00000001948f595c in malloc_zone_calloc ()
    #7	0x00000001948f58bc in calloc ()
    #8	0x00000001004c7478 in __ckd_calloc_2d__ ()
    #9	0x0000000100474c40 in ngram_search_init ()
    #10	0x000000010046933c in ps_set_lm_file ()
    #11	0x0000000100479f9c in ___lldb_unnamed_function183$$StampDrill ()
    #12	0x0000000100461438 in ___lldb_unnamed_function78$$StampDrill ()
    #13	0x0000000100461ac8 in ___lldb_unnamed_function82$$StampDrill ()
    #14	0x000000010137cf94 in _dispatch_client_callout ()
    #15	0x0000000101394848 in _dispatch_source_latch_and_call ()
    #16	0x000000010137f1c0 in _dispatch_source_invoke ()
    #17	0x000000010138a5d4 in _dispatch_root_queue_drain ()
    #18	0x000000010138c248 in _dispatch_worker_thread3 ()
    #19	0x000000019496122c in _pthread_wqthread ()
    
    in reply to: RapidEars 2.04 – crash when using within Cocos2d #1026490
    anhtu
    Participant

    Sorry, I will create one again. I reset the source code from git and forgot about that.

    in reply to: RapidEars 2.04 – crash when using within Cocos2d #1026488
    anhtu
    Participant

    Here are the contents of the files you need. In this case, I tested from A to G; it crashed when it came to the letter C.

    lession_vocal_C_2.dic file

    C S IY

    lession_vocal_C_2.arpa

    #############################################################################
    ## Copyright (c) 1996, Carnegie Mellon University, Cambridge University,
    ## Ronald Rosenfeld and Philip Clarkson
    ## Version 3, Copyright (c) 2006, Carnegie Mellon University 
    ## Contributors includes Wen Xu, Ananlada Chotimongkol, 
    ## David Huggins-Daines, Arthur Chan and Alan Black 
    #############################################################################
    =============================================================================
    ===============  This file was produced by the CMU-Cambridge  ===============
    ===============     Statistical Language Modeling Toolkit     ===============
    =============================================================================
    This is a 3-gram language model, based on a vocabulary of 3 words,
      which begins "</s>", "<s>", "C"...
    This is a CLOSED-vocabulary model
      (OOVs eliminated from training data and are forbidden in test data)
    Witten Bell discounting was applied.
    This file is in the ARPA-standard format introduced by Doug Paul.
    
    p(wd3|wd1,wd2)= if(trigram exists)           p_3(wd1,wd2,wd3)
                    else if(bigram w1,w2 exists) bo_wt_2(w1,w2)*p(wd3|wd2)
                    else                         p(wd3|w2)
    
    p(wd2|wd1)= if(bigram exists) p_2(wd1,wd2)
                else              bo_wt_1(wd1)*p_1(wd2)
    
    All probs and back-off weights (bo_wt) are given in log10 form.
    
    Data formats:
    
    Beginning of data mark: \data\
    ngram 1=nr            # number of 1-grams
    ngram 2=nr            # number of 2-grams
    ngram 3=nr            # number of 3-grams
    
    \1-grams:
    p_1     wd_1 bo_wt_1
    \2-grams:
    p_2     wd_1 wd_2 bo_wt_2
    \3-grams:
    p_3     wd_1 wd_2 wd_3 
    
    end of data mark: \end\
    
    \data\
    ngram 1=3
    ngram 2=2
    ngram 3=1
    
    \1-grams:
    -98.6990 </s>	0.0000
    -98.6990 <s>	-99.9990
    0.0000 C	-0.3010
    
    \2-grams:
    -0.3010 <s> C 0.0000
    -0.3010 C </s> -0.3010
    
    \3-grams:
    -0.3010 <s> C </s> 
    
    \end\
    

    The console log:

    2015-08-04 16:37:00.758 App_Name[50514:4263641] Starting OpenEars logging for OpenEars version 2.03 on 64-bit device (or build): iPad running iOS version: 8.300000
    
    2015-08-04 16:37:00.768 App_Name[50514:4263641] Starting dynamic language model generation
    
    2015-08-04 16:37:00.892 App_Name[50514:4263641] Done creating language model with CMUCLMTK in 0.123174 seconds.
    
    2015-08-04 16:37:00.900 App_Name[50514:4263641] I'm done running performDictionaryLookup and it took 0.002335 seconds
    
    2015-08-04 16:37:00.908 App_Name[50514:4263641] I'm done running dynamic language model generation and it took 0.148800 seconds
    
    2015-08-04 16:37:00.908 App_Name[50514:4263641] Valid setSecondsOfSilence value of 0.200000 will be used.
    
    2015-08-04 16:37:01.017 App_Name[50514:4264050] there is a request to change to the language model file /var/mobile/Containers/Data/Application/A9D01313-E149-4F27-9067-EB2DA15AB639/Library/Caches/lession_vocal_B_1.DMP
    
    2015-08-04 16:37:01.017 App_Name[50514:4264050] The language model ID is 1438681021
    
    2015-08-04 16:37:01.032 App_Name[50514:4264050] Success loading the specified dictionary file /var/mobile/Containers/Data/Application/A9D01313-E149-4F27-9067-EB2DA15AB639/Library/Caches/lession_vocal_B_1.dic.
    
    2015-08-04 16:37:01.033 App_Name[50514:4264050] Success loading the specified language model file /var/mobile/Containers/Data/Application/A9D01313-E149-4F27-9067-EB2DA15AB639/Library/Caches/lession_vocal_B_1.DMP.
    
    2015-08-04 16:37:01.033 App_Name[50514:4263641] Pocketsphinx is now using the following language model: 
    
    /var/mobile/Containers/Data/Application/A9D01313-E149-4F27-9067-EB2DA15AB639/Library/Caches/lession_vocal_B_1.DMP and the following dictionary: /var/mobile/Containers/Data/Application/A9D01313-E149-4F27-9067-EB2DA15AB639/Library/Caches/lession_vocal_B_1.dic
    
    2015-08-04 16:37:01.039 App_Name[50514:4264050] Changed language model. Project has these words or phrases in its dictionary:
    
    B
    
    2015-08-04 16:37:01.392 App_Name[50514:4264050] Speech detected...
    
    2015-08-04 16:37:01.393 App_Name[50514:4264050] Pocketsphinx heard "" with a score of (-2680) and an utterance ID of 3.
    
    2015-08-04 16:37:01.394 App_Name[50514:4264050] Hypothesis was null so we aren't returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController's property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
    
    2015-08-04 16:37:01.544 App_Name[50514:4264050] Pocketsphinx heard "B" with a score of (-5403) and an utterance ID of 4.
    
    2015-08-04 16:37:01.585 App_Name[50514:4263641] rapidEarsDidReceiveFinishedSpeechHypothesis: B with score: -5403
    
    2015-08-04 16:37:01.775 App_Name[50514:4263763] End of speech detected...
    
    2015-08-04 16:37:01.779 App_Name[50514:4263641] pocketsphinxDidDetectFinishedSpeech
    
    2015-08-04 16:37:01.785 App_Name[50514:4263763] Pocketsphinx heard "B" with a score of (-8131) and an utterance ID of 5.
    
    2015-08-04 16:37:01.792 App_Name[50514:4263641] rapidEarsDidReceiveFinishedSpeechHypothesis: B with score: -8131
    
    2015-08-04 16:37:03.404 App_Name[50514:4263641] Starting OpenEars logging for OpenEars version 2.03 on 64-bit device (or build): iPad running iOS version: 8.300000
    
    2015-08-04 16:37:03.416 App_Name[50514:4263641] Starting dynamic language model generation
    
    2015-08-04 16:37:03.517 App_Name[50514:4263641] Done creating language model with CMUCLMTK in 0.100489 seconds.
    
    2015-08-04 16:37:03.528 App_Name[50514:4263641] I'm done running performDictionaryLookup and it took 0.004014 seconds
    
    2015-08-04 16:37:03.536 App_Name[50514:4263641] I'm done running dynamic language model generation and it took 0.131257 seconds
    
    2015-08-04 16:37:03.536 App_Name[50514:4263641] Valid setSecondsOfSilence value of 0.200000 will be used.
    
    2015-08-04 16:37:03.567 App_Name[50514:4264074] there is a request to change to the language model file /var/mobile/Containers/Data/Application/A9D01313-E149-4F27-9067-EB2DA15AB639/Library/Caches/lession_vocal_C_2.DMP
    
    2015-08-04 16:37:03.569 App_Name[50514:4264074] The language model ID is 1438681023
    
    App_Name(50514,0x104f10000) malloc: *** error for object 0x900000008: pointer being freed was not allocated
    
    *** set a breakpoint in malloc_error_break to debug

    The call stack:

    Thread 16 Queue : com.apple.root.default-qos (concurrent)
    #0	0x00000001948c7270 in __pthread_kill ()
    #1	0x0000000194965170 in pthread_kill ()
    #2	0x000000019483eb18 in abort ()
    #3	0x00000001948f42fc in free ()
    #4	0x0000000100473054 in ngram_model_set_map_words ()
    #5	0x0000000100476598 in ___lldb_unnamed_function156$$App_Name ()
    #6	0x00000001004693e4 in ps_load_dict ()
    #7	0x0000000100479c10 in ___lldb_unnamed_function183$$App_Name ()
    #8	0x00000001004611c8 in ___lldb_unnamed_function78$$App_Name ()
    #9	0x0000000100461858 in ___lldb_unnamed_function82$$App_Name ()
    #10	0x0000000101388f94 in _dispatch_client_callout ()
    #11	0x00000001013a0848 in _dispatch_source_latch_and_call ()
    #12	0x000000010138b1c0 in _dispatch_source_invoke ()
    #13	0x00000001013965d4 in _dispatch_root_queue_drain ()
    #14	0x0000000101398248 in _dispatch_worker_thread3 ()
    #15	0x000000019496122c in _pthread_wqthread ()

    P.S.: when I use Rejecto (demo version), this problem doesn’t appear. Here is the code I use:

     error = [_languageModelGenerator generateRejectingLanguageModelFromArray:arrWords
                                                                withFilesNamed:name
                                                        withOptionalExclusions:[GlobalData sharedInstance].optionalExclusion
                                                               usingVowelsOnly:[GlobalData sharedInstance].usingVowelsOnly
                                                                    withWeight:[NSNumber numberWithFloat: [GlobalData sharedInstance].weight_rejecto]
                                                        forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
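    For reference, here is a sketch of the same call with concrete, purely illustrative values standing in for my GlobalData settings (the exclusions, vowels-only flag and weight below are only examples, not my real values):

    // Illustrative values only -- my real exclusions, vowels-only flag and weight come from GlobalData.
    NSArray *arrWords = @[@"C"];
    NSError *rejectoError = [_languageModelGenerator generateRejectingLanguageModelFromArray:arrWords
                                                                              withFilesNamed:@"lession_vocal_C_2"
                                                                      withOptionalExclusions:nil   // no exclusions in this example
                                                                             usingVowelsOnly:FALSE
                                                                                  withWeight:[NSNumber numberWithFloat:1.0] // example weight
                                                                      forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
    if (rejectoError != nil) {
      NSLog(@"Rejecto language model generation reported error %@", [rejectoError description]);
    }
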
    in reply to: RapidEars 2.04 – crash when using within Cocos2d #1026486
    anhtu
    Participant

    The contents of ‘lession_vocal_D_3.dic’ and ‘lession_vocal_D_3.DMP’ above are from the files that were created when the crash occurred on the letter ‘D’.

    I will run more experiments and post the log, the .DMP file, and the .dic file for a crashing letter again.

    in reply to: RapidEars 2.04 – crash when using within Cocos2d #1026481
    anhtu
    Participant

    Here are the contents of the two files:
    lession_vocal_D_3.dic
    D D IY

    lession_vocal_D_3.DMP

    Darpa Trigram LMr/var/mobile/Containers/Data/Application/2E00812C-8A05-4A6B-BFDC-475501A642DB/Library/Caches/lession_vocal_D_3.DMPˇˇˇˇBEGIN FILE FORMAT DESCRIPTION?Header string length (int32) and string (including trailing 0)OOriginal LM filename string-length (int32) and filename (including trailing 0)0(int32) version number (present iff value <= 0)G(int32) original LM file modification timestamp (iff version# present)O(int32) string-length and string (including trailing 0) (iff version# present)H... previous entry continued any number of times (iff version# present)C(int32) 0 (terminating sequence of strings) (iff version# present)S(int32) log_bg_seg_sz (present iff different from default value of LOG2_BG_SEG_SZ)"(int32) lm_t.ucount (must be > 0)(int32) lm_t.bcount(int32) lm_t.tcount,lm_t.ucount+1 unigrams (including sentinel)ulm_t.bcount+1 bigrams (including sentinel 64 bits (bg_t) each if version=-1/-2, 128 bits (bg32_t) each if version=-3}lm_t.tcount trigrams (present iff lm_t.tcount > 0 32 bits (tg_t) each if version=-1/-2, 64 bits (tg32_t) each if version=-3)(int32) lm_t.n_prob2(int32) lm_t.prob2[]4(int32) lm_t.n_bo_wt2 (present iff lm_t.tcount > 0)4(int32) lm_t.bo_wt2[] (present iff lm_t.tcount > 0)3(int32) lm_t.n_prob3 (present iff lm_t.tcount > 0)3(int32) lm_t.prob3[] (present iff lm_t.tcount > 0)B(int32) (lm_t.bcount+1)/BG_SEG_SZ+1 (present iff lm_t.tcount > 0)7(int32) lm_t.tseg_base[] (present iff lm_t.tcount > 0)D(int32) Sum(all word string-lengths, including trailing 0 for each)1All word strings (including trailing 0 for each)END FILE FORMAT DESCRIPTION!!!ˇˇˇˇ„e≈¬ˇˇˇˇ„e≈¬|ˇ«¬ˇˇˇˇÚöæˇˇˇˇ˝@.«˝@.«Ø%∂«ÚöæØ%∂«ÚöæØ%∂«Úöæ</s><s>D

    in reply to: RapidEars 2.04 – crash when using within Cocos2d #1026474
    anhtu
    Participant

    I’m using the default “AcousticModelEnglish.bundle” from the OpenEars package.
    I want the app to be able to recognize the letters A–Z.
    The crash appears even if I keep “LanguageModelGeneratorLookupList.text” at its default or reduce it to just A–Z.

    NSString *name = [NSString stringWithFormat:@"lession_vocal_%@_%d",
                      [[QuestionManager sharedManager] getWordAtIndex:[index intValue]], [index intValue]];
    
    NSError *error = nil;
    NSString * curLetter = [[QuestionManager sharedManager] getWordAtIndex:[index intValue]];
    arrWords = @[curLetter];
    
    if(![GlobalData sharedInstance].isRejectoON){
    
      error = [_languageModelGenerator generateLanguageModelFromArray:arrWords
                                                       withFilesNamed:name
                                               forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
      
    }else{
      error = [_languageModelGenerator generateRejectingLanguageModelFromArray:arrWords
                                                                withFilesNamed:name
                                                        withOptionalExclusions:[GlobalData sharedInstance].optionalExclusion
                                                               usingVowelsOnly:[GlobalData sharedInstance].usingVowelsOnly
                                                                    withWeight:[NSNumber numberWithFloat: [GlobalData sharedInstance].weight_rejecto]
                                                        forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
    }
    
    lmPath = nil;
    dicPath = nil;
    
    if(error != nil) {
      NSLog(@"Dynamic language generator reported error %@", [error description]);
    } else {
      dicPath = [_languageModelGenerator pathToSuccessfullyGeneratedDictionaryWithRequestedName:name];
      lmPath = [_languageModelGenerator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:name];
    }
    
    [openEarsEventsObserver setDelegate:self];
    // Call setActive: before setting the properties below.
    [pocketsphinxController setActive:TRUE error:nil];
    [pocketsphinxController setRapidEarsToVerbose:YES];
    [pocketsphinxController setFinalizeHypothesis:NO];
    [pocketsphinxController setReturnDuplicatePartials:false];
    [pocketsphinxController setReturnNullHypotheses:false];
    [pocketsphinxController setSecondsOfSilenceToDetect:0.2];
    [OELogging startOpenEarsLogging];
    
    if(!pocketsphinxController.isListening) {
      [pocketsphinxController startRealtimeListeningWithLanguageModelAtPath:lmPath dictionaryAtPath:dicPath acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
    } else {
      [pocketsphinxController changeLanguageModelToFile:lmPath withDictionary:dicPath];
      if (pocketsphinxController.isSuspended) {
        [pocketsphinxController resumeRecognition];
      }
    }

    The crash occurs rarely and randomly, which is very strange, because it doesn’t appear if I turn Rejecto on.

    The crash also occurs with RapidEars 2.03.
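
    In case it is related to how quickly I switch models, here is a minimal sketch of how I could serialize the language model changes, assuming the OEEventsObserver delegate callback pocketsphinxDidChangeLanguageModelToFile:andDictionary: is delivered once a switch has finished (the modelChangeInFlight flag and the requestChangeToModel:dictionary: helper are hypothetical additions, not my current code):

    // Sketch only -- modelChangeInFlight is a hypothetical BOOL property I would add to the
    // class that owns pocketsphinxController (the same controller instance used above).
    
    - (void)requestChangeToModel:(NSString *)lmPath dictionary:(NSString *)dicPath {
      if (self.modelChangeInFlight) {
        // Don't request a second change while a previous one hasn't been confirmed yet.
        NSLog(@"A language model change is still pending; skipping this request.");
        return;
      }
      self.modelChangeInFlight = YES;
      [pocketsphinxController changeLanguageModelToFile:lmPath withDictionary:dicPath];
    }
    
    // OEEventsObserverDelegate callback -- assuming it is delivered once the switch has completed.
    - (void)pocketsphinxDidChangeLanguageModelToFile:(NSString *)newLanguageModelPathAsString
                                       andDictionary:(NSString *)newDictionaryPathAsString {
      self.modelChangeInFlight = NO;
    }

    That way each letter would only request a switch after the previous one has been confirmed; I haven’t verified whether this avoids the timing issue that Rejecto seems to mask.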

    in reply to: RapidEars 2.04 – crash when using within Cocos2d #1026469
    anhtu
    Participant

    And the error is “Thread 14: EXC_BAD_ACCESS (code=1, address=xxxx)”.

    in reply to: RapidEars 2.04 – crash when using within Cocos2d #1026466
    anhtu
    Participant

    Sorry, here is the log (with my own log statements removed):

    2015-08-03 19:27:13.864 APP_NAME[49252:4174534] Starting OpenEars logging for OpenEars version 2.03 on 64-bit device (or build): iPad running iOS version: 8.300000
    2015-08-03 19:27:13.920 APP_NAME[49252:4174534] Starting dynamic language model generation
    
    2015-08-03 19:27:14.062 APP_NAME[49252:4174534] Done creating language model with CMUCLMTK in 0.141510 seconds.
    2015-08-03 19:27:14.064 APP_NAME[49252:4174534] I'm done running performDictionaryLookup and it took 0.000200 seconds
    2015-08-03 19:27:14.071 APP_NAME[49252:4174534] I'm done running dynamic language model generation and it took 0.200374 seconds
    2015-08-03 19:27:14.072 APP_NAME[49252:4174534] Starting OpenEars logging for OpenEars version 2.03 on 64-bit device (or build): iPad running iOS version: 8.300000
    2015-08-03 19:27:14.080 APP_NAME[49252:4174534] cocos2d: surface size: 2048x1536
    2015-08-03 19:27:15.083 APP_NAME[49252:4174534] User gave mic permission for this app.
    2015-08-03 19:27:15.085 APP_NAME[49252:4174534] Valid setSecondsOfSilence value of 0.200000 will be used.
    2015-08-03 19:27:15.087 APP_NAME[49252:4174708] Starting listening.
    2015-08-03 19:27:15.088 APP_NAME[49252:4174708] about to set up audio session
    2015-08-03 19:27:15.246 APP_NAME[49252:4174642] Audio route has changed for the following reason:
    2015-08-03 19:27:15.249 APP_NAME[49252:4174642] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
    2015-08-03 19:27:15.256 APP_NAME[49252:4174642] This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is ---SpeakerMicrophoneBuiltIn---. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x170c118b0,
    inputs = (null);
    outputs = (
               "<AVAudioSessionPortDescription: 0x170c11810, type = Speaker; name = Speaker; UID = Built-In Speaker; selectedDataSource = (null)>"
               )>.
    2015-08-03 19:27:15.268 APP_NAME[49252:4174708] done starting audio unit
    INFO: cmd_ln.c(702): Parsing command line:
    \
    -lm /var/mobile/Containers/Data/Application/662EA17C-590D-4F09-9312-520CD035DE4B/Library/Caches/lession_vocal_A_0.DMP \
    -vad_prespeech 10 \
    -vad_postspeech 20 \
    -vad_threshold 2.000000 \
    -remove_noise yes \
    -remove_silence yes \
    -bestpath yes \
    -lw 6.500000 \
    -dict /var/mobile/Containers/Data/Application/662EA17C-590D-4F09-9312-520CD035DE4B/Library/Caches/lession_vocal_A_0.dic \
    -hmm /private/var/mobile/Containers/Bundle/Application/204102C4-D4A7-4B27-ABC4-9072CA281718/APP_NAME.app/AcousticModelEnglish.bundle
    
    Current configuration:
    [NAME]		[DEFLT]		[VALUE]
    -agc		none		none
    -agcthresh	2.0		2.000000e+00
    -allphone
    -allphone_ci	no		no
    -alpha		0.97		9.700000e-01
    -argfile
    -ascale		20.0		2.000000e+01
    -aw		1		1
    -backtrace	no		no
    -beam		1e-48		1.000000e-48
    -bestpath	yes		yes
    -bestpathlw	9.5		9.500000e+00
    -bghist		no		no
    -ceplen		13		13
    -cmn		current		current
    -cmninit	8.0		8.0
    -compallsen	no		no
    -debug				0
    -dict				/var/mobile/Containers/Data/Application/662EA17C-590D-4F09-9312-520CD035DE4B/Library/Caches/lession_vocal_A_0.dic
    -dictcase	no		no
    -dither		no		no
    -doublebw	no		no
    -ds		1		1
    -fdict
    -feat		1s_c_d_dd	1s_c_d_dd
    -featparams
    -fillprob	1e-8		1.000000e-08
    -frate		100		100
    -fsg
    -fsgusealtpron	yes		yes
    -fsgusefiller	yes		yes
    -fwdflat	yes		yes
    -fwdflatbeam	1e-64		1.000000e-64
    -fwdflatefwid	4		4
    -fwdflatlw	8.5		8.500000e+00
    -fwdflatsfwin	25		25
    -fwdflatwbeam	7e-29		7.000000e-29
    -fwdtree	yes		yes
    -hmm				/private/var/mobile/Containers/Bundle/Application/204102C4-D4A7-4B27-ABC4-9072CA281718/APP_NAME.app/AcousticModelEnglish.bundle
    -input_endian	little		little
    -jsgf
    -kdmaxbbi	-1		-1
    -kdmaxdepth	0		0
    -kdtree
    -keyphrase
    -kws
    -kws_plp	1e-1		1.000000e-01
    -kws_threshold	1		1.000000e+00
    -latsize	5000		5000
    -lda
    -ldadim		0		0
    -lextreedump	0		0
    -lifter		0		0
    -lm				/var/mobile/Containers/Data/Application/662EA17C-590D-4F09-9312-520CD035DE4B/Library/Caches/lession_vocal_A_0.DMP
    -lmctl
    -lmname
    -logbase	1.0001		1.000100e+00
    -logfn
    -logspec	no		no
    -lowerf		133.33334	1.333333e+02
    -lpbeam		1e-40		1.000000e-40
    -lponlybeam	7e-29		7.000000e-29
    -lw		6.5		6.500000e+00
    -maxhmmpf	10000		10000
    -maxnewoov	20		20
    -maxwpf		-1		-1
    -mdef
    -mean
    -mfclogdir
    -min_endfr	0		0
    -mixw
    -mixwfloor	0.0000001	1.000000e-07
    -mllr
    -mmap		yes		yes
    -ncep		13		13
    -nfft		512		512
    -nfilt		40		40
    -nwpen		1.0		1.000000e+00
    -pbeam		1e-48		1.000000e-48
    -pip		1.0		1.000000e+00
    -pl_beam	1e-10		1.000000e-10
    -pl_pbeam	1e-5		1.000000e-05
    -pl_window	0		0
    -rawlogdir
    -remove_dc	no		no
    -remove_noise	yes		yes
    -remove_silence	yes		yes
    -round_filters	yes		yes
    -samprate	16000		1.600000e+04
    -seed		-1		-1
    -sendump
    -senlogdir
    -senmgau
    -silprob	0.005		5.000000e-03
    -smoothspec	no		no
    -svspec
    -tmat
    -tmatfloor	0.0001		1.000000e-04
    -topn		4		4
    -topn_beam	0		0
    -toprule
    -transform	legacy		legacy
    -unit_area	yes		yes
    -upperf		6855.4976	6.855498e+03
    -usewdphones	no		no
    -uw		1.0		1.000000e+00
    -vad_postspeech	50		20
    -vad_prespeech	10		10
    -vad_threshold	2.0		2.000000e+00
    -var
    -varfloor	0.0001		1.000000e-04
    -varnorm	no		no
    -verbose	no		no
    -warp_params
    -warp_type	inverse_linear	inverse_linear
    -wbeam		7e-29		7.000000e-29
    -wip		0.65		6.500000e-01
    -wlen		0.025625	2.562500e-02
    
    INFO: cmd_ln.c(702): Parsing command line:
    \
    -nfilt 25 \
    -lowerf 130 \
    -upperf 6800 \
    -feat 1s_c_d_dd \
    -svspec 0-12/13-25/26-38 \
    -agc none \
    -cmn current \
    -varnorm no \
    -transform dct \
    -lifter 22 \
    -cmninit 40
    
    Current configuration:
    [NAME]		[DEFLT]		[VALUE]
    -agc		none		none
    -agcthresh	2.0		2.000000e+00
    -alpha		0.97		9.700000e-01
    -ceplen		13		13
    -cmn		current		current
    -cmninit	8.0		40
    -dither		no		no
    -doublebw	no		no
    -feat		1s_c_d_dd	1s_c_d_dd
    -frate		100		100
    -input_endian	little		little
    -lda
    -ldadim		0		0
    -lifter		0		22
    -logspec	no		no
    -lowerf		133.33334	1.300000e+02
    -ncep		13		13
    -nfft		512		512
    -nfilt		40		25
    -remove_dc	no		no
    -remove_noise	yes		yes
    -remove_silence	yes		yes
    -round_filters	yes		yes
    -samprate	16000		1.600000e+04
    -seed		-1		-1
    -smoothspec	no		no
    -svspec				0-12/13-25/26-38
    -transform	legacy		dct
    -unit_area	yes		yes
    -upperf		6855.4976	6.800000e+03
    -vad_postspeech	50		20
    -vad_prespeech	10		10
    -vad_threshold	2.0		2.000000e+00
    -varnorm	no		no
    -verbose	no		no
    -warp_params
    -warp_type	inverse_linear	inverse_linear
    -wlen		0.025625	2.562500e-02
    
    INFO: acmod.c(252): Parsed model-specific feature parameters from /private/var/mobile/Containers/Bundle/Application/204102C4-D4A7-4B27-ABC4-9072CA281718/APP_NAME.app/AcousticModelEnglish.bundle/feat.params
    INFO: feat.c(715): Initializing feature stream to type: '1s_c_d_dd', ceplen=13, CMN='current', VARNORM='no', AGC='none'
    INFO: cmn.c(143): mean[0]= 12.00, mean[1..12]= 0.0
    INFO: acmod.c(171): Using subvector specification 0-12/13-25/26-38
    INFO: mdef.c(518): Reading model definition: /private/var/mobile/Containers/Bundle/Application/204102C4-D4A7-4B27-ABC4-9072CA281718/APP_NAME.app/AcousticModelEnglish.bundle/mdef
    INFO: mdef.c(531): Found byte-order mark BMDF, assuming this is a binary mdef file
    INFO: bin_mdef.c(336): Reading binary model definition: /private/var/mobile/Containers/Bundle/Application/204102C4-D4A7-4B27-ABC4-9072CA281718/APP_NAME.app/AcousticModelEnglish.bundle/mdef
    INFO: bin_mdef.c(516): 46 CI-phone, 168344 CD-phone, 3 emitstate/phone, 138 CI-sen, 6138 Sen, 32881 Sen-Seq
    INFO: tmat.c(206): Reading HMM transition probability matrices: /private/var/mobile/Containers/Bundle/Application/204102C4-D4A7-4B27-ABC4-9072CA281718/APP_NAME.app/AcousticModelEnglish.bundle/transition_matrices
    INFO: acmod.c(124): Attempting to use SCHMM computation module
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /private/var/mobile/Containers/Bundle/Application/204102C4-D4A7-4B27-ABC4-9072CA281718/APP_NAME.app/AcousticModelEnglish.bundle/means
    INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /private/var/mobile/Containers/Bundle/Application/204102C4-D4A7-4B27-ABC4-9072CA281718/APP_NAME.app/AcousticModelEnglish.bundle/variances
    INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(294):  512x13
    INFO: ms_gauden.c(354): 0 variance values floored
    INFO: s2_semi_mgau.c(904): Loading senones from dump file /private/var/mobile/Containers/Bundle/Application/204102C4-D4A7-4B27-ABC4-9072CA281718/APP_NAME.app/AcousticModelEnglish.bundle/sendump
    INFO: s2_semi_mgau.c(928): BEGIN FILE FORMAT DESCRIPTION
    INFO: s2_semi_mgau.c(991): Rows: 512, Columns: 6138
    INFO: s2_semi_mgau.c(1023): Using memory-mapped I/O for senones
    INFO: s2_semi_mgau.c(1294): Maximum top-N: 4 Top-N beams: 0 0 0
    INFO: dict.c(320): Allocating 4110 * 32 bytes (128 KiB) for word entries
    INFO: dict.c(333): Reading main dictionary: /var/mobile/Containers/Data/Application/662EA17C-590D-4F09-9312-520CD035DE4B/Library/Caches/lession_vocal_A_0.dic
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(336): 1 words read
    INFO: dict.c(342): Reading filler dictionary: /private/var/mobile/Containers/Bundle/Application/204102C4-D4A7-4B27-ABC4-9072CA281718/APP_NAME.app/AcousticModelEnglish.bundle/noisedict
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(345): 9 words read
    INFO: dict2pid.c(396): Building PID tables for dictionary
    INFO: dict2pid.c(406): Allocating 46^3 * 2 bytes (190 KiB) for word-initial triphones
    INFO: dict2pid.c(132): Allocated 51152 bytes (49 KiB) for word-final triphones
    INFO: dict2pid.c(196): Allocated 51152 bytes (49 KiB) for single-phone word triphones
    INFO: ngram_model_arpa.c(79): No \data\ mark in LM file
    INFO: ngram_model_dmp.c(166): Will use memory-mapped I/O for LM file
    INFO: ngram_model_dmp.c(220): ngrams 1=3, 2=2, 3=1
    INFO: ngram_model_dmp.c(266):        3 = LM.unigrams(+trailer) read
    INFO: ngram_model_dmp.c(312):        2 = LM.bigrams(+trailer) read
    INFO: ngram_model_dmp.c(338):        1 = LM.trigrams read
    INFO: ngram_model_dmp.c(363):        2 = LM.prob2 entries read
    INFO: ngram_model_dmp.c(383):        3 = LM.bo_wt2 entries read
    INFO: ngram_model_dmp.c(403):        2 = LM.prob3 entries read
    INFO: ngram_model_dmp.c(431):        1 = LM.tseg_base entries read
    INFO: ngram_model_dmp.c(487):        3 = ascii word strings read
    INFO: ngram_search_fwdtree.c(99): 0 unique initial diphones
    INFO: ngram_search_fwdtree.c(148): 0 root, 0 non-root channels, 11 single-phone words
    INFO: ngram_search_fwdtree.c(186): Creating search tree
    INFO: ngram_search_fwdtree.c(192): before: 0 root, 0 non-root channels, 11 single-phone words
    INFO: ngram_search_fwdtree.c(326): after: max nonroot chan increased to 128
    ERROR: "ngram_search_fwdtree.c", line 336: No word from the language model has pronunciation in the dictionary
    INFO: ngram_search_fwdtree.c(339): after: 0 root, 0 non-root channels, 10 single-phone words
    INFO: ngram_search_fwdflat.c(157): fwdflat: min_ef_width = 4, max_sf_win = 25
    2015-08-03 19:27:15.363 APP_NAME[49252:4174708] Restoring SmartCMN value of 55.322021
    2015-08-03 19:27:15.363 APP_NAME[49252:4174708] Listening.
    2015-08-03 19:27:15.364 APP_NAME[49252:4174708] Project has these words or phrases in its dictionary:
    A
    2015-08-03 19:27:15.364 APP_NAME[49252:4174708] Recognition loop has started
    2015-08-03 19:27:15.368 APP_NAME[49252:4174534] Pocketsphinx is now listening.
    2015-08-03 19:27:20.707 APP_NAME[49252:4174708] Speech detected...
    2015-08-03 19:27:20.708 APP_NAME[49252:4174708] Pocketsphinx heard "" with a score of (-1580) and an utterance ID of 0.
    
    2015-08-03 19:27:20.802 APP_NAME[49252:4174708] Pocketsphinx heard "" with a score of (-2757) and an utterance ID of 1.
    2015-08-03 19:27:20.897 APP_NAME[49252:4174708] Pocketsphinx heard "" with a score of (-4058) and an utterance ID of 2.
    2015-08-03 19:27:21.056 APP_NAME[49252:4174708] Pocketsphinx heard "" with a score of (-6635) and an utterance ID of 3.
    2015-08-03 19:27:21.169 APP_NAME[49252:4174708] End of speech detected...
    INFO: cmn_prior.c(131): cmn_prior_update: from < 55.32  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 49.58 16.76 -1.75  9.75 -9.41  9.58  1.37  2.40 -8.62  6.90 -3.80 -2.46  6.97 >
    INFO: ngram_search_fwdtree.c(1550):      483 words recognized (9/fr)
    INFO: ngram_search_fwdtree.c(1552):     1585 senones evaluated (29/fr)
    INFO: ngram_search_fwdtree.c(1556):      503 channels searched (9/fr), 0 1st, 503 last
    INFO: ngram_search_fwdtree.c(1559):      503 words for which last channels evaluated (9/fr)
    INFO: ngram_search_fwdtree.c(1561):        0 candidate words for entering last phone (0/fr)
    INFO: ngram_search_fwdtree.c(1564): fwdtree 1.15 CPU 2.137 xRT
    INFO: ngram_search_fwdtree.c(1567): fwdtree 5.64 wall 10.438 xRT
    INFO: ngram_search_fwdflat.c(302): Utterance vocabulary contains 3 words
    INFO: ngram_search_fwdflat.c(938):      400 words recognized (7/fr)
    INFO: ngram_search_fwdflat.c(940):     1638 senones evaluated (30/fr)
    INFO: ngram_search_fwdflat.c(942):      519 channels searched (9/fr)
    INFO: ngram_search_fwdflat.c(944):      519 words searched (9/fr)
    INFO: ngram_search_fwdflat.c(947):      121 word transitions (2/fr)
    INFO: ngram_search_fwdflat.c(950): fwdflat 0.01 CPU 0.022 xRT
    INFO: ngram_search_fwdflat.c(953): fwdflat 0.01 wall 0.021 xRT
    INFO: ngram_search.c(1215): </s> not found in last frame, using A.52 instead
    INFO: ngram_search.c(1268): lattice start node <s>.0 end node A.49
    INFO: ngram_search.c(1294): Eliminated 1 nodes before end node
    INFO: ngram_search.c(1399): Lattice has 99 nodes, 600 links
    2015-08-03 19:27:21.185 APP_NAME[49252:4174534] pocketsphinxDidDetectFinishedSpeech
    (lldb) 
    

    And the call stack:

    Thread 14 Queue : com.apple.root.default-qos (concurrent)
    #0	0x00000001004ec83c in logmath_add ()
    #1	0x00000001004c9128 in ps_lattice_bestpath ()
    #2	0x00000001004b556c in ___lldb_unnamed_function157$$StampDrill ()
    #3	0x000000010049ef3c in ps_get_hyp_vnnetvisstampdrill ()
    #4	0x000000010049e69c in ___lldb_unnamed_function74$$StampDrill ()
    #5	0x00000001004a04e8 in ___lldb_unnamed_function78$$StampDrill ()
    #6	0x00000001004a0758 in ___lldb_unnamed_function82$$StampDrill ()
    #7	0x00000001013d4f94 in _dispatch_client_callout ()
    #8	0x00000001013ec848 in _dispatch_source_latch_and_call ()
    #9	0x00000001013d71c0 in _dispatch_source_invoke ()
    #10	0x00000001013e25d4 in _dispatch_root_queue_drain ()
    #11	0x00000001013e4248 in _dispatch_worker_thread3 ()
    #12	0x000000019496122c in _pthread_wqthread ()
    

    Thank you very much

Viewing 11 posts - 1 through 11 (of 11 total)