Recognize short command in non-English

    #1032298
    iKK
    Participant

    Hi,
    I am trying to recognize a short command sentence in Swiss German.
    The command consists of only 4 words. The goal is to recognize only this command (and nothing else) in my iOS app.

    I have OpenEars running doing the following:

    1. Use OpenEars’ generateGrammar method together with a “words” array and the dictionary setting [OneOfTheseWillBeSaidOnce : words].

    2. Use the AcousticModelEnglish and fill the “words” array with “fictive” words, chosen so that when they are spoken in English they sound like the Swiss German command sentence. There are approx. 20 words and approx. 10 sentences in the “words” array (for all of them: spoken in English, they sound like the Swiss German command sentence).

    3. Alternatively, the same thing with the AcousticModelSpanish (with a different “words” array).

    4. Since yesterday, I have also tried the Rejecto plugin (using its generateRejectingLanguageModel method instead of generateGrammar)…

    But with all of the above variations there is a problem: too much is recognized.
    The Swiss German command sentence is recognized 100% of the time, but unfortunately many, many other words and sentences are recognized as well (i.e. the recognition is not specific enough!).

    What could I do to improve the recognition specificity for this Swiss German command sentence?

    Any help appreciated.

    #1032299
    Halle Winkler
    Politepix

    Welcome,

    Let’s just troubleshoot one case if that’s OK – two languages and two different vocabulary structures might get a little confusing. Would you rather troubleshoot your grammar, or the Rejecto model? BTW, maybe this goal would be a better match for the German acoustic model.

    #1032300
    iKK
    Participant

    I was using only one acoustic model at a time (but have happened to try English and Spanish so far). And no – there is no reason (anymore) not to also try the German acoustic model – in fact I did just that 5 minutes ago. (We had evaluated other products in English or Spanish, hence the choice of English or Spanish so far…)

    About grammar vs. Rejecto: can you tell me which suits this kind of problem better (i.e. recognizing a short Swiss German sentence with the highest specificity)?

    #1032301
    iKK
    Participant

    About the decision – it is hard, since the documentation of both advertises what is needed in my case:

    –> generateGrammar says: “This will recognize exact phrases instead of probabilistically recognizing word combinations in any sequence.”

    –> Rejecto’s docs say: “Rejecto makes sure that your speech app does not attempt to recognize words which are not part of your vocabulary. This lets your app stick to listening for just the words it knows, and that makes your users happy.”

    Therefore both seem to be necessary somehow – which do you suggest? Or can you even use both at the same time?

    #1032302
    Halle Winkler
    Politepix

    OK, well, let’s just pick a single Acoustic Model for this troubleshooting case so that we don’t have a lot of variables – you can tell me which one we’re using. I recommend the German one.

    –> generateGrammar says: “This will recognize exact phrases instead of probabilistically recognizing word combinations in any sequence.”

    –> Rejecto’s docs say: “Rejecto makes sure that your speech app does not attempt to recognize words which are not part of your vocabulary. This lets your app stick to listening for just the words it knows, and that makes your users happy.”

    Yes, the intention of this documentation is to clarify that a grammar can listen for a multi-word phrase in exclusive terms (i.e. it won’t attempt to evaluate statistical nearness to your phrase but just try to hear it when complete, not hear it when not complete) and Rejecto will reject unknown words from a model made up of words. So if the goal is a sentence, a grammar is probably the right choice. If you were looking for one of several words by themselves, or phrases where you didn’t mind possible partial recognition of the phrase contents, Rejecto would be better.
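
    To make that distinction concrete, here is a minimal sketch in code – it only uses calls that appear later in this thread, and the phrase, words and file names are placeholders:

    let lmGenerator = OELanguageModelGenerator()
    let acousticModelPath = OEAcousticModel.path(toModel: "AcousticModelGerman")

    // Grammar: the whole phrase is one rule, listened for in exclusive terms –
    // heard when complete, not heard when incomplete.
    let grammarErr = lmGenerator.generateGrammar(
        from: [OneOfTheseWillBeSaidOnce : ["one two three four"]],
        withFilesNamed: "GrammarDemo",
        forAcousticModelAtPath: acousticModelPath)

    // Statistical model: a model made up of words, where each word can be
    // recognized by itself and in any order, so partial recognition of the
    // phrase contents is possible. Rejecto's generateRejectingLanguageModel
    // works on this kind of model, additionally rejecting unknown words.
    let lmErr = lmGenerator.generateLanguageModel(
        from: ["one", "two", "three", "four"],
        withFilesNamed: "StatisticalDemo",
        forAcousticModelAtPath: acousticModelPath)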

    #1032303
    iKK
    Participant

    The thing is that the Swiss German short sentence consists of only 4 words. And 99% of the time they are spoken so fast that it sounds like ONE long word. There is some background noise (imagine a train-station kind of background noise).

    What do you suggest for this use case (grammar vs. Rejecto)? You are the expert :)

    #1032304
    Halle Winkler
    Politepix

    Sorry, it’s difficult for me to imagine this case without an example. Can you share with me something similar enough so I can understand how a four-word phrase can sound like a single word when spoken?

    #1032305
    iKK
    Participant

    OK – it is hard to explain :) But let’s imagine the German sentence “Wir sind Wir”, which is then spoken as “MiaSanMia” :)

    #1032306
    iKK
    Participant

    It is just spoken without a break in between the words, and some letters are also omitted at the beginning of each single word (which is possible in Swiss German).

    #1032307
    iKK
    Participant

    It is not as articulated a language as German.

    #1032308
    Halle Winkler
    Politepix

    I see, the aspect where it sounds like a single word is a standard characteristic of spoken Schweizerdeutsch for a sentence this short and simple, is that correct?

    #1032309
    iKK
    Participant

    Yes!

    #1032310
    Halle Winkler
    Politepix

    Got it, thank you for explaining. OK, let’s first try and see whether the simplest thing works, and then we can explore other ways if it doesn’t. Let’s start with using a grammar with the German acoustic model and see how far we get. What are your results when you do that? Your grammar should have the whole sentence as a single string in a single array item in your dictionary, not an array of individual words.

    #1032311
    iKK
    Participant

    Is it one word as a single element in the array – or rather 4 words as a single string array element?

    #1032312
    iKK
    Participant

    And also, what is the vadThreshold value for German?

    #1032313
    Halle Winkler
    Politepix

    An array with a single element, which is a string containing four words with spaces between them.

    #1032314
    Halle Winkler
    Politepix

    And also, what is the vadThreshold value for German?

    This always has to be tested out on your end of things for your usage case (doing this will also help with your background noise issue, with luck). I think there is a note about it at the bottom of the acoustic model page with more info.
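
    For what it’s worth, a minimal sketch of one such test iteration – the candidate values below are just an assumed range to sweep through, not documented recommendations:

    // vadThreshold must be set before listening starts. Sweep a range of
    // values and pick the lowest one at which background noise stops
    // triggering speech detection without clipping real commands.
    OEPocketsphinxController.sharedInstance().vadThreshold = 3.2 // try e.g. 2.5, 3.0, 3.2, 3.6, 4.0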

    #1032315
    iKK
    Participant

    Same issue: the sentence is recognized 100% of the time – but so are many other sentences of similar length. For example, I speak the sentence with the last of the four words replaced by another word – and the grammar method unfortunately still recognizes the sentence!

    #1032316
    Halle Winkler
    Politepix

    OK, can you show me the shortest possible code excerpt (i.e. just the part where you create the grammar and start listening for it, making absolutely sure that none of your other troubleshooting attempts are still also in effect), replacing the four words with a different four words if you need to for confidentiality?

    #1032317
    iKK
    Participant

    Sure:

    func startListening() {
        if OEPocketsphinxController.sharedInstance().isListening {
            stopListening()
        }

        let acousticModelName = "AcousticModelGerman"
        let fileName = "GermanModel"

        OEPocketsphinxController.sharedInstance().vadThreshold = 3.6

        let error = lmGenerator.generateGrammar(from: [OneOfTheseWillBeSaidOnce : words], withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: acousticModelName))

        var lmPath = ""
        var dictPath = ""

        if error == nil {
            dictPath = lmGenerator.pathToSuccessfullyGeneratedDictionary(withRequestedName: fileName)
            // The grammar (.gram) path is what gets passed below, since languageModelIsJSGF is true.
            lmPath = lmGenerator.pathToSuccessfullyGeneratedGrammar(withRequestedName: fileName)
        } else {
            print("Error: \(error!.localizedDescription)")
        }

        try? OEPocketsphinxController.sharedInstance().setActive(true)
        OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: acousticModelName), languageModelIsJSGF: true)
    }

    #1032318
    Halle Winkler
    Politepix

    I would also need to see the content of the array words, even if each word has been substituted with another word.

    #1032319
    iKK
    Participant

    The words array says

    words = ["esch" "da" "no" "fey"]

    #1032320
    iKK
    Participant

    Sorry, typo: it says:

    words = ["esch", "da", "no", "fey"]

    #1032321
    Halle Winkler
    Politepix

    OK, it needs to say

    words = ["esch da no fey"]

    #1032322
    iKK
    Participant

    Sorry – yes, of course. I have too many versions going on.

    I do have it the way you just stated! But same issue…

    #1032323
    Halle Winkler
    Politepix

    OK, I’m a bit worried that there is cross-contamination with your multiple troubleshooting approaches in the code you are working with and showing me (I’ve touched on this concern a couple of times in this discussion so far because it is a common situation after the developer has tried many things), so I think I’d like to try to rule that out before we continue.

    Could you put away your existing work and zip it into an archive somewhere for the time being, and then make an entirely new project with the tutorial only using the approach we’ve chosen here (a grammar using stock OpenEars and the German acoustic model), and do it with four words from the start where you would be comfortable sharing all the vocabulary and listening initialization code with me and 100% of the logging output? Then we can continue without worrying about replacing words or old code hanging around. Thanks!

    #1032332
    iKK
    Participant

    Hi Halle,
    I completed a test example completely from scratch. Please refer to my email. Hope to hear from you soon.

    #1032333
    Halle Winkler
    Politepix

    Hello,

    Sorry, there is no email support for these kinds of issues, please keep all discussion here and using the debug tools that are possible to work with via these forums, thank you!

    #1032334
    iKK
    Participant

    Here is the link to the test project you asked for (an entirely new project with the tutorial, only using the approach we’ve chosen here: a grammar using stock OpenEars and the German acoustic model):

    Test Project OpenEars with German-Acc.Model

    Unfortunately, I still observe the very same issue as before with my other tests (i.e. too many sentences are recognized that have nothing to do with the one provided in the words array).

    Can you please help any further here?

    #1032335
    Halle Winkler
    Politepix

    Hi, sorry, I really don’t run projects locally for this kind of troubleshooting – I can only ask for your logging output and code excerpts as I have time to assist. I asked you to make a new project so we could be sure we were not mixing in your old approaches, since your old code was entering into the troubleshooting process; I apologize if this created confusion about whether I would be running the test project locally and debugging it myself.

    #1032336
    iKK
    Participant

    Yes, it was confusing.

    And also, I am not asking you for debugging. I am asking how to apply the settings OpenEars offers to get the recognition success rate (and specificity!) to a level where it is acceptable for production.

    So – are there any more settings I can apply in order to improve specificity for our one-sentence words array?

    #1032337
    Halle Winkler
    Politepix

    OK, can you now run this project and show me all of the logging output (with no edits at all) requested in this post: https://www.politepix.com/forums/topic/install-issues-and-their-solutions/ and then show me the code excerpt I asked for above which includes the generation of your language model? Thanks!
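
    (For reference, the logging requested in that post is switched on with the two calls below, which also appear in the project code later in this thread:)

    OELogging.startOpenEarsLogging() // full OpenEars logging
    OEPocketsphinxController.sharedInstance().verbosePocketSphinx = true // verbose Pocketsphinx output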

    #1032339
    Halle Winkler
    Politepix

    I’d also like to take this opportunity to remind you that even when we start discussing Rejecto- or RuleORama-related issues, the license of the plugins does not allow them to be redistributed anywhere, so make sure not to post the demos or licensed versions of those products on Github or anywhere else which enables sharing.

    #1032340
    iKK
    Participant

    Here are the logs:
    Logs

    I made two log files: the first one contains logs of correctly spoken sentences. The second one contains logs of incorrectly spoken but unfortunately still recognized sentences.

    Also, the link shows a third file, which is the ViewController.swift file containing, hopefully, all you need in terms of the required “generation of my language model”…

    #1032341
    iKK
    Participant

    The repo is closed for licensing purposes.

    Here are the two LOGs and the ViewController.swift file.

    (Please refer to the next two posts, since I want to keep the LOGs separate from each other…)

    First LOG: “5 times correctly spoken sentence”

    2018-04-19 13:59:28.021225+0200 TestOpenEars[1285:616560] Starting OpenEars logging for OpenEars version 2.506 on 64-bit device (or build): iPhone running iOS version: 11.300000
    2018-04-19 13:59:28.021943+0200 TestOpenEars[1285:616560] Creating shared instance of OEPocketsphinxController
    2018-04-19 13:59:28.033404+0200 TestOpenEars[1285:616560] Attempting to start listening session from startListeningWithLanguageModelAtPath:
    2018-04-19 13:59:28.040833+0200 TestOpenEars[1285:616560] User gave mic permission for this app.
    2018-04-19 13:59:28.041866+0200 TestOpenEars[1285:616560] setSecondsOfSilence wasn't set, using default of 0.700000.
    2018-04-19 13:59:28.042950+0200 TestOpenEars[1285:616676] Starting listening.
    2018-04-19 13:59:28.043052+0200 TestOpenEars[1285:616676] About to set up audio session
    2018-04-19 13:59:28.211559+0200 TestOpenEars[1285:616676] Creating audio session with default settings.
    2018-04-19 13:59:28.211630+0200 TestOpenEars[1285:616676] Done setting audio session category.
    2018-04-19 13:59:28.218468+0200 TestOpenEars[1285:616676] Done setting preferred sample rate to 16000.000000 – now the real sample rate is 48000.000000
    2018-04-19 13:59:28.221402+0200 TestOpenEars[1285:616676] number of channels is already the preferred number of 1 so not setting it.
    2018-04-19 13:59:28.226764+0200 TestOpenEars[1285:616676] Done setting session's preferred I/O buffer duration to 0.128000 – now the actual buffer duration is 0.085333
    2018-04-19 13:59:28.226817+0200 TestOpenEars[1285:616676] Done setting up audio session
    2018-04-19 13:59:28.227373+0200 TestOpenEars[1285:616685] Audio route has changed for the following reason:
    2018-04-19 13:59:28.231358+0200 TestOpenEars[1285:616676] About to set up audio IO unit in a session with a sample rate of 48000.000000, a channel number of 1 and a buffer duration of 0.085333.
    2018-04-19 13:59:28.231418+0200 TestOpenEars[1285:616685] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
    2018-04-19 13:59:28.337755+0200 TestOpenEars[1285:616685] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: "MicrophoneBuiltIn". Output route or routes: "Speaker">. The previous route before changing to this route was "<AVAudioSessionRouteDescription: 0x1c46028e0,
    inputs = (null);
    outputs = (
        "<AVAudioSessionPortDescription: 0x1c4602890, type = Speaker; name = Speaker; UID = Speaker; selectedDataSource = (null)>"
    )>".
    2018-04-19 13:59:28.352755+0200 TestOpenEars[1285:616685] Audio route has changed for the following reason:
    2018-04-19 13:59:28.354261+0200 TestOpenEars[1285:616685] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
    2018-04-19 13:59:28.359673+0200 TestOpenEars[1285:616676] Done setting up audio unit
    2018-04-19 13:59:28.359731+0200 TestOpenEars[1285:616676] About to start audio IO unit
    2018-04-19 13:59:28.365514+0200 TestOpenEars[1285:616685] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: "MicrophoneBuiltIn". Output route or routes: "Speaker">. The previous route before changing to this route was "<AVAudioSessionRouteDescription: 0x1c46028e0,
    inputs = (
        "<AVAudioSessionPortDescription: 0x1c46028b0, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Bottom>"
    );
    outputs = (
        "<AVAudioSessionPortDescription: 0x1c4602a30, type = Receiver; name = Receiver; UID = Built-In Receiver; selectedDataSource = (null)>"
    )>".
    2018-04-19 13:59:28.589286+0200 TestOpenEars[1285:616676] Done starting audio unit
    INFO: pocketsphinx.c(145): Parsed model-specific feature parameters from /var/containers/Bundle/Application/AF8A9931-95D6-4EED-93E3-6858584A31C9/TestOpenEars.app/AcousticModelGerman.bundle/feat.params
    Current configuration:
    [NAME]			[DEFLT]		[VALUE]
    -agc			none		none
    -agcthresh		2.0		2.000000e+00
    -allphone
    -allphone_ci		no		no
    -alpha			0.97		9.700000e-01
    -ascale			20.0		2.000000e+01
    -aw			1		1
    -backtrace		no		no
    -beam			1e-48		1.000000e-48
    -bestpath		yes		yes
    -bestpathlw		9.5		9.500000e+00
    -ceplen			13		13
    -cmn			current		current
    -cmninit		8.0		30
    -compallsen		no		no
    -debug					0
    -dict					/var/mobile/Containers/Data/Application/3C7F2CAB-D0E4-4ABF-81DE-9DD3AF3B7BEC/Library/Caches/GermanModel.dic
    -dictcase		no		no
    -dither			no		no
    -doublebw		no		no
    -ds			1		1
    -fdict					/var/containers/Bundle/Application/AF8A9931-95D6-4EED-93E3-6858584A31C9/TestOpenEars.app/AcousticModelGerman.bundle/noisedict
    -feat			1s_c_d_dd	1s_c_d_dd
    -featparams				/var/containers/Bundle/Application/AF8A9931-95D6-4EED-93E3-6858584A31C9/TestOpenEars.app/AcousticModelGerman.bundle/feat.params
    -fillprob		1e-8		1.000000e-08
    -frate			100		100
    -fsg
    -fsgusealtpron		yes		yes
    -fsgusefiller		yes		yes
    -fwdflat		yes		yes
    -fwdflatbeam		1e-64		1.000000e-64
    -fwdflatefwid		4		4
    -fwdflatlw		8.5		8.500000e+00
    -fwdflatsfwin		25		25
    -fwdflatwbeam		7e-29		7.000000e-29
    -fwdtree		yes		yes
    -hmm					/var/containers/Bundle/Application/AF8A9931-95D6-4EED-93E3-6858584A31C9/TestOpenEars.app/AcousticModelGerman.bundle
    -input_endian		little		little
    -jsgf					/var/mobile/Containers/Data/Application/3C7F2CAB-D0E4-4ABF-81DE-9DD3AF3B7BEC/Library/Caches/GermanModel.gram
    -keyphrase
    -kws
    -kws_delay		10		10
    -kws_plp		1e-1		1.000000e-01
    -kws_threshold		1		1.000000e+00
    -latsize		5000		5000
    -lda
    -ldadim			0		0
    -lifter			0		22
    -lm
    -lmctl
    -lmname
    -logbase		1.0001		1.000100e+00
    -logfn
    -logspec		no		no
    -lowerf			133.33334	1.300000e+02
    -lpbeam			1e-40		1.000000e-40
    -lponlybeam		7e-29		7.000000e-29
    -lw			6.5		1.000000e+00
    -maxhmmpf		30000		30000
    -maxwpf			-1		-1
    -mdef					/var/containers/Bundle/Application/AF8A9931-95D6-4EED-93E3-6858584A31C9/TestOpenEars.app/AcousticModelGerman.bundle/mdef
    -mean					/var/containers/Bundle/Application/AF8A9931-95D6-4EED-93E3-6858584A31C9/TestOpenEars.app/AcousticModelGerman.bundle/means
    -mfclogdir
    -min_endfr		0		0
    -mixw					/var/containers/Bundle/Application/AF8A9931-95D6-4EED-93E3-6858584A31C9/TestOpenEars.app/AcousticModelGerman.bundle/mixture_weights
    -mixwfloor		0.0000001	1.000000e-07
    -mllr
    -mmap			yes		yes
    -ncep			13		13
    -nfft			512		512
    -nfilt			40		25
    -nwpen			1.0		1.000000e+00
    -pbeam			1e-48		1.000000e-48
    -pip			1.0		1.000000e+00
    -pl_beam		1e-10		1.000000e-10
    -pl_pbeam		1e-10		1.000000e-10
    -pl_pip			1.0		1.000000e+00
    -pl_weight		3.0		3.000000e+00
    -pl_window		5		5
    -rawlogdir
    -remove_dc		no		no
    -remove_noise		yes		yes
    -remove_silence		yes		yes
    -round_filters		yes		yes
    -samprate		16000		1.600000e+04
    -seed			-1		-1
    -sendump
    -senlogdir
    -senmgau
    -silprob		0.005		5.000000e-03
    -smoothspec		no		no
    -svspec
    -tmat					/var/containers/Bundle/Application/AF8A9931-95D6-4EED-93E3-6858584A31C9/TestOpenEars.app/AcousticModelGerman.bundle/transition_matrices
    -tmatfloor		0.0001		1.000000e-04
    -topn			4		4
    -topn_beam		0		0
    -toprule
    -transform		legacy		dct
    -unit_area		yes		yes
    -upperf			6855.4976	6.800000e+03
    -uw			1.0		1.000000e+00
    -vad_postspeech		50		69
    -vad_prespeech		20		10
    -vad_startspeech	10		10
    -vad_threshold		2.0		3.200000e+00
    -var					/var/containers/Bundle/Application/AF8A9931-95D6-4EED-93E3-6858584A31C9/TestOpenEars.app/AcousticModelGerman.bundle/variances
    -varfloor		0.0001		1.000000e-04
    -varnorm		no		no
    -verbose		no		no
    -warp_params
    -warp_type		inverse_linear	inverse_linear
    -wbeam			7e-29		7.000000e-29
    -wip			0.65		6.500000e-01
    -wlen			0.025625	2.562500e-02
    
    INFO: feat.c(715): Initializing feature stream to type: '1s_c_d_dd', ceplen=13, CMN='current', VARNORM='no', AGC='none'
    INFO: cmn.c(143): mean[0]= 12.00, mean[1..12]= 0.0
    INFO: mdef.c(518): Reading model definition: /var/containers/Bundle/Application/AF8A9931-95D6-4EED-93E3-6858584A31C9/TestOpenEars.app/AcousticModelGerman.bundle/mdef
    INFO: bin_mdef.c(181): Allocating 53834 * 8 bytes (420 KiB) for CD tree
    INFO: tmat.c(206): Reading HMM transition probability matrices: /var/containers/Bundle/Application/AF8A9931-95D6-4EED-93E3-6858584A31C9/TestOpenEars.app/AcousticModelGerman.bundle/transition_matrices
    INFO: acmod.c(117): Attempting to use PTM computation module
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/AF8A9931-95D6-4EED-93E3-6858584A31C9/TestOpenEars.app/AcousticModelGerman.bundle/means
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size:
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/AF8A9931-95D6-4EED-93E3-6858584A31C9/TestOpenEars.app/AcousticModelGerman.bundle/variances
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size:
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(354): 7100 variance values floored
    INFO: ptm_mgau.c(801): Number of codebooks exceeds 256: 2129
    INFO: acmod.c(119): Attempting to use semi-continuous computation module
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/AF8A9931-95D6-4EED-93E3-6858584A31C9/TestOpenEars.app/AcousticModelGerman.bundle/means
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size:
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/AF8A9931-95D6-4EED-93E3-6858584A31C9/TestOpenEars.app/AcousticModelGerman.bundle/variances
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size:
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(354): 7100 variance values floored
    INFO: acmod.c(121): Falling back to general multi-stream GMM computation
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/AF8A9931-95D6-4EED-93E3-6858584A31C9/TestOpenEars.app/AcousticModelGerman.bundle/means
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size:
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/AF8A9931-95D6-4EED-93E3-6858584A31C9/TestOpenEars.app/AcousticModelGerman.bundle/variances
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size:
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(354): 7100 variance values floored
    INFO: ms_senone.c(149): Reading senone mixture weights: /var/containers/Bundle/Application/AF8A9931-95D6-4EED-93E3-6858584A31C9/TestOpenEars.app/AcousticModelGerman.bundle/mixture_weights
    INFO: ms_senone.c(200): Truncating senone logs3(pdf) values by 10 bits
    INFO: ms_senone.c(207): Not transposing mixture weights in memory
    INFO: ms_senone.c(268): Read mixture weights for 2129 senones: 1 features x 32 codewords
    INFO: ms_senone.c(320): Mapping senones to individual codebooks
    INFO: ms_mgau.c(141): The value of topn: 4
    INFO: phone_loop_search.c(114): State beam -225 Phone exit beam -225 Insertion penalty 0
    INFO: dict.c(320): Allocating 4104 * 32 bytes (128 KiB) for word entries
    INFO: dict.c(333): Reading main dictionary: /var/mobile/Containers/Data/Application/3C7F2CAB-D0E4-4ABF-81DE-9DD3AF3B7BEC/Library/Caches/GermanModel.dic
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(336): 4 words read
    INFO: dict.c(358): Reading filler dictionary: /var/containers/Bundle/Application/AF8A9931-95D6-4EED-93E3-6858584A31C9/TestOpenEars.app/AcousticModelGerman.bundle/noisedict
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(361): 4 words read
    INFO: dict2pid.c(396): Building PID tables for dictionary
    INFO: dict2pid.c(406): Allocating 43^3 * 2 bytes (155 KiB) for word-initial triphones
    INFO: dict2pid.c(132): Allocated 44720 bytes (43 KiB) for word-final triphones
    INFO: dict2pid.c(196): Allocated 44720 bytes (43 KiB) for single-phone word triphones
    INFO: jsgf.c(691): Defined rule: <GermanModel.g00000>
    INFO: jsgf.c(691): Defined rule: PUBLIC <GermanModel.rule_0>
    INFO: fsg_model.c(215): Computing transitive closure for null transitions
    INFO: fsg_model.c(277): 0 null transitions added
    INFO: fsg_search.c(227): FSG(beam: -1080, pbeam: -1080, wbeam: -634; wip: -5, pip: 0)
    INFO: fsg_model.c(428): Adding silence transitions for <sil> to FSG
    INFO: fsg_model.c(448): Added 5 silence word transitions
    INFO: fsg_model.c(428): Adding silence transitions for <sil> to FSG
    INFO: fsg_model.c(448): Added 5 silence word transitions
    INFO: fsg_search.c(173): Added 0 alternate word transitions
    INFO: fsg_lextree.c(110): Allocated 440 bytes (0 KiB) for left and right context phones
    INFO: fsg_lextree.c(256): 17 HMM nodes in lextree (11 leaves)
    INFO: fsg_lextree.c(259): Allocated 2448 bytes (2 KiB) for all lextree nodes
    INFO: fsg_lextree.c(262): Allocated 1584 bytes (1 KiB) for lextree leafnodes
    2018-04-19 13:59:29.327456+0200 TestOpenEars[1285:616676] There is no CMN plist so we are using the fresh CMN value 30.000000.
    2018-04-19 13:59:29.327909+0200 TestOpenEars[1285:616676] Listening.
    2018-04-19 13:59:29.328299+0200 TestOpenEars[1285:616676] Project has these words or phrases in its dictionary:
    do
    esch
    frey
    no
    2018-04-19 13:59:29.328539+0200 TestOpenEars[1285:616676] Recognition loop has started
    2018-04-19 13:59:29.329632+0200 TestOpenEars[1285:616560] Successfully started listening session from startListeningWithLanguageModelAtPath:
    Local callback: Pocketsphinx is now listening.
    Local callback: Pocketsphinx started.
    2018-04-19 13:59:29.644460+0200 TestOpenEars[1285:616678] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    2018-04-19 13:59:30.533009+0200 TestOpenEars[1285:616678] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 30.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 53.23  5.46 -14.11  7.15  0.86  4.65 -7.53  3.98  4.45 -3.50  0.60  0.63 -0.93 >
    INFO: fsg_search.c(843): 93 frames, 677 HMMs (7/fr), 1990 senones (21/fr), 269 history entries (2/fr)
    
    ERROR: "fsg_search.c", line 913: Final result does not match the grammar in frame 93
    2018-04-19 13:59:30.534037+0200 TestOpenEars[1285:616678] Pocketsphinx heard "" with a score of (0) and an utterance ID of 0.
    2018-04-19 13:59:30.534077+0200 TestOpenEars[1285:616678] Hypothesis was null so we aren't returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController's property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
    2018-04-19 13:59:30.769463+0200 TestOpenEars[1285:616678] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    2018-04-19 13:59:32.152342+0200 TestOpenEars[1285:616678] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 53.23  5.46 -14.11  7.15  0.86  4.65 -7.53  3.98  4.45 -3.50  0.60  0.63 -0.93 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 61.11 10.12 -7.20 12.76 -3.42  1.16 -3.88  0.29  4.21 -6.41  3.58 -1.38  0.87 >
    INFO: fsg_search.c(843): 143 frames, 1069 HMMs (7/fr), 2736 senones (19/fr), 413 history entries (2/fr)
    
    2018-04-19 13:59:32.155979+0200 TestOpenEars[1285:616678] Pocketsphinx heard "esch do no frey" with a score of (0) and an utterance ID of 1.
    Local callback: The received hypothesis is esch do no frey with a score of 0 and an ID of 1
    2018-04-19 13:59:32.720826+0200 TestOpenEars[1285:616678] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    2018-04-19 13:59:34.634173+0200 TestOpenEars[1285:616678] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 61.11 10.12 -7.20 12.76 -3.42  1.16 -3.88  0.29  4.21 -6.41  3.58 -1.38  0.87 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 58.26 10.58 -4.64  9.03 -2.89  2.28 -0.84  1.48  3.76 -4.75  1.97 -1.18  0.44 >
    INFO: fsg_search.c(843): 197 frames, 2927 HMMs (14/fr), 6548 senones (33/fr), 1064 history entries (5/fr)
    
    2018-04-19 13:59:34.636558+0200 TestOpenEars[1285:616678] Pocketsphinx heard "esch do no frey" with a score of (0) and an utterance ID of 2.
    Local callback: The received hypothesis is esch do no frey with a score of 0 and an ID of 2
    2018-04-19 13:59:41.285796+0200 TestOpenEars[1285:616678] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    2018-04-19 13:59:42.569498+0200 TestOpenEars[1285:616678] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 58.26 10.58 -4.64  9.03 -2.89  2.28 -0.84  1.48  3.76 -4.75  1.97 -1.18  0.44 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 60.54 10.98 -4.75 10.87 -5.02  1.78  1.04 -0.71  3.55 -4.35  2.33 -0.95  0.28 >
    INFO: fsg_search.c(843): 141 frames, 981 HMMs (6/fr), 2593 senones (18/fr), 318 history entries (2/fr)
    
    2018-04-19 13:59:42.570751+0200 TestOpenEars[1285:616678] Pocketsphinx heard "esch do no frey" with a score of (0) and an utterance ID of 3.
    Local callback: The received hypothesis is esch do no frey with a score of 0 and an ID of 3
    2018-04-19 13:59:46.632433+0200 TestOpenEars[1285:616678] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    2018-04-19 13:59:48.331594+0200 TestOpenEars[1285:616678] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 60.54 10.98 -4.75 10.87 -5.02  1.78  1.04 -0.71  3.55 -4.35  2.33 -0.95  0.28 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 62.08 12.85 -4.75 10.76 -6.66  2.06  1.72 -2.32  4.27 -4.30  2.86 -1.06  0.58 >
    INFO: fsg_search.c(843): 170 frames, 1109 HMMs (6/fr), 2679 senones (15/fr), 358 history entries (2/fr)
    
    2018-04-19 13:59:48.334816+0200 TestOpenEars[1285:616678] Pocketsphinx heard "esch do no frey" with a score of (0) and an utterance ID of 4.
    Local callback: The received hypothesis is esch do no frey with a score of 0 and an ID of 4
    2018-04-19 13:59:50.847118+0200 TestOpenEars[1285:616678] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    INFO: cmn_prior.c(99): cmn_prior_update: from < 62.08 12.85 -4.75 10.76 -6.66  2.06  1.72 -2.32  4.27 -4.30  2.86 -1.06  0.58 >
    INFO: cmn_prior.c(116): cmn_prior_update: to   < 62.62 11.77 -4.58 11.62 -7.05  2.15  2.23 -2.68  4.03 -4.19  3.05 -1.03  0.51 >
    2018-04-19 13:59:53.396359+0200 TestOpenEars[1285:616678] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 62.62 11.77 -4.58 11.62 -7.05  2.15  2.23 -2.68  4.03 -4.19  3.05 -1.03  0.51 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 60.50 12.69 -5.29 11.47 -6.15  3.94  2.48 -2.55  3.25 -4.41  2.90 -1.68  0.59 >
    INFO: fsg_search.c(843): 256 frames, 1194 HMMs (4/fr), 2965 senones (11/fr), 435 history entries (1/fr)
    
    2018-04-19 13:59:53.397731+0200 TestOpenEars[1285:616678] Pocketsphinx heard "esch do no frey" with a score of (0) and an utterance ID of 5.
    Local callback: The received hypothesis is esch do no frey with a score of 0 and an ID of 5
    2018-04-19 13:59:53.582752+0200 TestOpenEars[1285:616678] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    2018-04-19 13:59:54.333555+0200 TestOpenEars[1285:616678] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 60.50 12.69 -5.29 11.47 -6.15  3.94  2.48 -2.55  3.25 -4.41  2.90 -1.68  0.59 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 59.83 10.88 -5.47 11.03 -6.10  4.88  2.89 -2.23  2.70 -4.01  2.37 -1.37  0.47 >
    INFO: fsg_search.c(843): 79 frames, 520 HMMs (6/fr), 1624 senones (20/fr), 221 history entries (2/fr)

    #1032342
    iKK
    Participant

    Second LOG: incorrectly spoken but unfortunately still recognized sentences

    2018-04-19 14:02:29.024746+0200 TestOpenEars[1288:617841] Starting OpenEars logging for OpenEars version 2.506 on 64-bit device (or build): iPhone running iOS version: 11.300000
    2018-04-19 14:02:29.025006+0200 TestOpenEars[1288:617841] Creating shared instance of OEPocketsphinxController
    2018-04-19 14:02:29.034738+0200 TestOpenEars[1288:617841] Attempting to start listening session from startListeningWithLanguageModelAtPath:
    2018-04-19 14:02:29.037920+0200 TestOpenEars[1288:617841] User gave mic permission for this app.
    2018-04-19 14:02:29.038176+0200 TestOpenEars[1288:617841] setSecondsOfSilence wasn't set, using default of 0.700000.
    2018-04-19 14:02:29.039275+0200 TestOpenEars[1288:617894] Starting listening.
    2018-04-19 14:02:29.039506+0200 TestOpenEars[1288:617894] About to set up audio session
    2018-04-19 14:02:29.210501+0200 TestOpenEars[1288:617894] Creating audio session with default settings.
    2018-04-19 14:02:29.212404+0200 TestOpenEars[1288:617894] Done setting audio session category.
    2018-04-19 14:02:29.220219+0200 TestOpenEars[1288:617901] Audio route has changed for the following reason:
    2018-04-19 14:02:29.225080+0200 TestOpenEars[1288:617894] Done setting preferred sample rate to 16000.000000 – now the real sample rate is 48000.000000
    2018-04-19 14:02:29.225185+0200 TestOpenEars[1288:617901] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
    2018-04-19 14:02:29.226309+0200 TestOpenEars[1288:617894] number of channels is already the preferred number of 1 so not setting it.
    2018-04-19 14:02:29.228887+0200 TestOpenEars[1288:617901] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: "MicrophoneBuiltIn". Output route or routes: "Speaker">. The previous route before changing to this route was "<AVAudioSessionRouteDescription: 0x1c041e180,
    inputs = (null);
    outputs = (
        "<AVAudioSessionPortDescription: 0x1c041e110, type = Speaker; name = Speaker; UID = Speaker; selectedDataSource = (null)>"
    )>".
    2018-04-19 14:02:29.239190+0200 TestOpenEars[1288:617894] Done setting session's preferred I/O buffer duration to 0.128000 – now the actual buffer duration is 0.085333
    2018-04-19 14:02:29.279574+0200 TestOpenEars[1288:617894] Done setting up audio session
    2018-04-19 14:02:29.282389+0200 TestOpenEars[1288:617894] About to set up audio IO unit in a session with a sample rate of 48000.000000, a channel number of 1 and a buffer duration of 0.085333.
    2018-04-19 14:02:29.309608+0200 TestOpenEars[1288:617901] Audio route has changed for the following reason:
    2018-04-19 14:02:29.310813+0200 TestOpenEars[1288:617901] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
    2018-04-19 14:02:29.315959+0200 TestOpenEars[1288:617901] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: "MicrophoneBuiltIn". Output route or routes: "Speaker">. The previous route before changing to this route was "<AVAudioSessionRouteDescription: 0x1c4219460,
    inputs = (
        "<AVAudioSessionPortDescription: 0x1c42193a0, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Bottom>"
    );
    outputs = (
        "<AVAudioSessionPortDescription: 0x1c4219510, type = Receiver; name = Receiver; UID = Built-In Receiver; selectedDataSource = (null)>"
    )>".
    2018-04-19 14:02:29.341243+0200 TestOpenEars[1288:617894] Done setting up audio unit
    2018-04-19 14:02:29.341311+0200 TestOpenEars[1288:617894] About to start audio IO unit
    2018-04-19 14:02:29.560570+0200 TestOpenEars[1288:617894] Done starting audio unit
    INFO: pocketsphinx.c(145): Parsed model-specific feature parameters from /var/containers/Bundle/Application/CE15AFDC-1F3A-4E6F-88A6-60733F226865/TestOpenEars.app/AcousticModelGerman.bundle/feat.params
    Current configuration:
    [NAME]			[DEFLT]		[VALUE]
    -agc			none		none
    -agcthresh		2.0		2.000000e+00
    -allphone
    -allphone_ci		no		no
    -alpha			0.97		9.700000e-01
    -ascale			20.0		2.000000e+01
    -aw			1		1
    -backtrace		no		no
    -beam			1e-48		1.000000e-48
    -bestpath		yes		yes
    -bestpathlw		9.5		9.500000e+00
    -ceplen			13		13
    -cmn			current		current
    -cmninit		8.0		30
    -compallsen		no		no
    -debug					0
    -dict					/var/mobile/Containers/Data/Application/95C6D225-194A-49A4-907C-BB5A0B8A698B/Library/Caches/GermanModel.dic
    -dictcase		no		no
    -dither			no		no
    -doublebw		no		no
    -ds			1		1
    -fdict					/var/containers/Bundle/Application/CE15AFDC-1F3A-4E6F-88A6-60733F226865/TestOpenEars.app/AcousticModelGerman.bundle/noisedict
    -feat			1s_c_d_dd	1s_c_d_dd
    -featparams				/var/containers/Bundle/Application/CE15AFDC-1F3A-4E6F-88A6-60733F226865/TestOpenEars.app/AcousticModelGerman.bundle/feat.params
    -fillprob		1e-8		1.000000e-08
    -frate			100		100
    -fsg
    -fsgusealtpron		yes		yes
    -fsgusefiller		yes		yes
    -fwdflat		yes		yes
    -fwdflatbeam		1e-64		1.000000e-64
    -fwdflatefwid		4		4
    -fwdflatlw		8.5		8.500000e+00
    -fwdflatsfwin		25		25
    -fwdflatwbeam		7e-29		7.000000e-29
    -fwdtree		yes		yes
    -hmm					/var/containers/Bundle/Application/CE15AFDC-1F3A-4E6F-88A6-60733F226865/TestOpenEars.app/AcousticModelGerman.bundle
    -input_endian		little		little
    -jsgf					/var/mobile/Containers/Data/Application/95C6D225-194A-49A4-907C-BB5A0B8A698B/Library/Caches/GermanModel.gram
    -keyphrase
    -kws
    -kws_delay		10		10
    -kws_plp		1e-1		1.000000e-01
    -kws_threshold		1		1.000000e+00
    -latsize		5000		5000
    -lda
    -ldadim			0		0
    -lifter			0		22
    -lm
    -lmctl
    -lmname
    -logbase		1.0001		1.000100e+00
    -logfn
    -logspec		no		no
    -lowerf			133.33334	1.300000e+02
    -lpbeam			1e-40		1.000000e-40
    -lponlybeam		7e-29		7.000000e-29
    -lw			6.5		1.000000e+00
    -maxhmmpf		30000		30000
    -maxwpf			-1		-1
    -mdef					/var/containers/Bundle/Application/CE15AFDC-1F3A-4E6F-88A6-60733F226865/TestOpenEars.app/AcousticModelGerman.bundle/mdef
    -mean					/var/containers/Bundle/Application/CE15AFDC-1F3A-4E6F-88A6-60733F226865/TestOpenEars.app/AcousticModelGerman.bundle/means
    -mfclogdir
    -min_endfr		0		0
    -mixw					/var/containers/Bundle/Application/CE15AFDC-1F3A-4E6F-88A6-60733F226865/TestOpenEars.app/AcousticModelGerman.bundle/mixture_weights
    -mixwfloor		0.0000001	1.000000e-07
    -mllr
    -mmap			yes		yes
    -ncep			13		13
    -nfft			512		512
    -nfilt			40		25
    -nwpen			1.0		1.000000e+00
    -pbeam			1e-48		1.000000e-48
    -pip			1.0		1.000000e+00
    -pl_beam		1e-10		1.000000e-10
    -pl_pbeam		1e-10		1.000000e-10
    -pl_pip			1.0		1.000000e+00
    -pl_weight		3.0		3.000000e+00
    -pl_window		5		5
    -rawlogdir
    -remove_dc		no		no
    -remove_noise		yes		yes
    -remove_silence		yes		yes
    -round_filters		yes		yes
    -samprate		16000		1.600000e+04
    -seed			-1		-1
    -sendump
    -senlogdir
    -senmgau
    -silprob		0.005		5.000000e-03
    -smoothspec		no		no
    -svspec
    -tmat					/var/containers/Bundle/Application/CE15AFDC-1F3A-4E6F-88A6-60733F226865/TestOpenEars.app/AcousticModelGerman.bundle/transition_matrices
    -tmatfloor		0.0001		1.000000e-04
    -topn			4		4
    -topn_beam		0		0
    -toprule
    -transform		legacy		dct
    -unit_area		yes		yes
    -upperf			6855.4976	6.800000e+03
    -uw			1.0		1.000000e+00
    -vad_postspeech		50		69
    -vad_prespeech		20		10
    -vad_startspeech	10		10
    -vad_threshold		2.0		3.200000e+00
    -var					/var/containers/Bundle/Application/CE15AFDC-1F3A-4E6F-88A6-60733F226865/TestOpenEars.app/AcousticModelGerman.bundle/variances
    -varfloor		0.0001		1.000000e-04
    -varnorm		no		no
    -verbose		no		no
    -warp_params
    -warp_type		inverse_linear	inverse_linear
    -wbeam			7e-29		7.000000e-29
    -wip			0.65		6.500000e-01
    -wlen			0.025625	2.562500e-02
    
    INFO: feat.c(715): Initializing feature stream to type: '1s_c_d_dd', ceplen=13, CMN='current', VARNORM='no', AGC='none'
    INFO: cmn.c(143): mean[0]= 12.00, mean[1..12]= 0.0
    INFO: mdef.c(518): Reading model definition: /var/containers/Bundle/Application/CE15AFDC-1F3A-4E6F-88A6-60733F226865/TestOpenEars.app/AcousticModelGerman.bundle/mdef
    INFO: bin_mdef.c(181): Allocating 53834 * 8 bytes (420 KiB) for CD tree
    INFO: tmat.c(206): Reading HMM transition probability matrices: /var/containers/Bundle/Application/CE15AFDC-1F3A-4E6F-88A6-60733F226865/TestOpenEars.app/AcousticModelGerman.bundle/transition_matrices
    INFO: acmod.c(117): Attempting to use PTM computation module
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/CE15AFDC-1F3A-4E6F-88A6-60733F226865/TestOpenEars.app/AcousticModelGerman.bundle/means
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size:
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/CE15AFDC-1F3A-4E6F-88A6-60733F226865/TestOpenEars.app/AcousticModelGerman.bundle/variances
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size:
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(354): 7100 variance values floored
    INFO: ptm_mgau.c(801): Number of codebooks exceeds 256: 2129
    INFO: acmod.c(119): Attempting to use semi-continuous computation module
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/CE15AFDC-1F3A-4E6F-88A6-60733F226865/TestOpenEars.app/AcousticModelGerman.bundle/means
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size:
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/CE15AFDC-1F3A-4E6F-88A6-60733F226865/TestOpenEars.app/AcousticModelGerman.bundle/variances
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size:
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(354): 7100 variance values floored
    INFO: acmod.c(121): Falling back to general multi-stream GMM computation
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/CE15AFDC-1F3A-4E6F-88A6-60733F226865/TestOpenEars.app/AcousticModelGerman.bundle/means
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size:
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/CE15AFDC-1F3A-4E6F-88A6-60733F226865/TestOpenEars.app/AcousticModelGerman.bundle/variances
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size:
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(354): 7100 variance values floored
    INFO: ms_senone.c(149): Reading senone mixture weights: /var/containers/Bundle/Application/CE15AFDC-1F3A-4E6F-88A6-60733F226865/TestOpenEars.app/AcousticModelGerman.bundle/mixture_weights
    INFO: ms_senone.c(200): Truncating senone logs3(pdf) values by 10 bits
    INFO: ms_senone.c(207): Not transposing mixture weights in memory
    INFO: ms_senone.c(268): Read mixture weights for 2129 senones: 1 features x 32 codewords
    INFO: ms_senone.c(320): Mapping senones to individual codebooks
    INFO: ms_mgau.c(141): The value of topn: 4
    INFO: phone_loop_search.c(114): State beam -225 Phone exit beam -225 Insertion penalty 0
    INFO: dict.c(320): Allocating 4104 * 32 bytes (128 KiB) for word entries
    INFO: dict.c(333): Reading main dictionary: /var/mobile/Containers/Data/Application/95C6D225-194A-49A4-907C-BB5A0B8A698B/Library/Caches/GermanModel.dic
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(336): 4 words read
    INFO: dict.c(358): Reading filler dictionary: /var/containers/Bundle/Application/CE15AFDC-1F3A-4E6F-88A6-60733F226865/TestOpenEars.app/AcousticModelGerman.bundle/noisedict
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(361): 4 words read
    INFO: dict2pid.c(396): Building PID tables for dictionary
    INFO: dict2pid.c(406): Allocating 43^3 * 2 bytes (155 KiB) for word-initial triphones
    INFO: dict2pid.c(132): Allocated 44720 bytes (43 KiB) for word-final triphones
    INFO: dict2pid.c(196): Allocated 44720 bytes (43 KiB) for single-phone word triphones
    INFO: jsgf.c(691): Defined rule: <GermanModel.g00000>
    INFO: jsgf.c(691): Defined rule: PUBLIC <GermanModel.rule_0>
    INFO: fsg_model.c(215): Computing transitive closure for null transitions
    INFO: fsg_model.c(277): 0 null transitions added
    INFO: fsg_search.c(227): FSG(beam: -1080, pbeam: -1080, wbeam: -634; wip: -5, pip: 0)
    INFO: fsg_model.c(428): Adding silence transitions for <sil> to FSG
    INFO: fsg_model.c(448): Added 5 silence word transitions
    INFO: fsg_model.c(428): Adding silence transitions for <sil> to FSG
    INFO: fsg_model.c(448): Added 5 silence word transitions
    INFO: fsg_search.c(173): Added 0 alternate word transitions
    INFO: fsg_lextree.c(110): Allocated 440 bytes (0 KiB) for left and right context phones
    INFO: fsg_lextree.c(256): 17 HMM nodes in lextree (11 leaves)
    INFO: fsg_lextree.c(259): Allocated 2448 bytes (2 KiB) for all lextree nodes
    INFO: fsg_lextree.c(262): Allocated 1584 bytes (1 KiB) for lextree leafnodes
    2018-04-19 14:02:30.301919+0200 TestOpenEars[1288:617894] There is no CMN plist so we are using the fresh CMN value 30.000000.
    2018-04-19 14:02:30.302332+0200 TestOpenEars[1288:617894] Listening.
    2018-04-19 14:02:30.302757+0200 TestOpenEars[1288:617894] Project has these words or phrases in its dictionary:
    do
    esch
    frey
    no
    2018-04-19 14:02:30.302834+0200 TestOpenEars[1288:617894] Recognition loop has started
    2018-04-19 14:02:30.303171+0200 TestOpenEars[1288:617841] Successfully started listening session from startListeningWithLanguageModelAtPath:
    Local callback: Pocketsphinx is now listening.
    Local callback: Pocketsphinx started.
    2018-04-19 14:02:31.613456+0200 TestOpenEars[1288:617893] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    2018-04-19 14:02:33.406395+0200 TestOpenEars[1288:617893] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 30.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 68.49 12.84 -7.48  5.65 -7.95  0.96  3.01  0.51 -0.81 -2.76  1.39  0.98  0.65 >
    INFO: fsg_search.c(843): 182 frames, 1688 HMMs (9/fr), 4466 senones (24/fr), 548 history entries (3/fr)
    
    2018-04-19 14:02:33.407798+0200 TestOpenEars[1288:617893] Pocketsphinx heard "esch do no frey" with a score of (0) and an utterance ID of 0.
    Local callback: The received hypothesis is esch do no frey with a score of 0 and an ID of 0
    2018-04-19 14:02:34.460784+0200 TestOpenEars[1288:617893] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    2018-04-19 14:02:37.551469+0200 TestOpenEars[1288:617893] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 68.49 12.84 -7.48  5.65 -7.95  0.96  3.01  0.51 -0.81 -2.76  1.39  0.98  0.65 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 65.39 12.19 -6.64  5.08 -6.56  1.94  4.53  3.21  0.22 -2.76  0.47  1.69  0.13 >
    INFO: fsg_search.c(843): 313 frames, 2582 HMMs (8/fr), 7405 senones (23/fr), 1016 history entries (3/fr)
    
    2018-04-19 14:02:37.552776+0200 TestOpenEars[1288:617893] Pocketsphinx heard "esch do no frey" with a score of (0) and an utterance ID of 1.
    Local callback: The received hypothesis is esch do no frey with a score of 0 and an ID of 1
    2018-04-19 14:02:39.212310+0200 TestOpenEars[1288:617893] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    2018-04-19 14:02:41.103734+0200 TestOpenEars[1288:617893] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 65.39 12.19 -6.64  5.08 -6.56  1.94  4.53  3.21  0.22 -2.76  0.47  1.69  0.13 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 66.10 12.58 -6.59  5.23 -7.36  1.07  4.44  3.00 -0.00 -2.35  0.52  1.84  0.19 >
    INFO: fsg_search.c(843): 194 frames, 1744 HMMs (8/fr), 4751 senones (24/fr), 651 history entries (3/fr)
    
    2018-04-19 14:02:41.107087+0200 TestOpenEars[1288:617893] Pocketsphinx heard "esch do no frey" with a score of (0) and an utterance ID of 2.
    Local callback: The received hypothesis is esch do no frey with a score of 0 and an ID of 2
    2018-04-19 14:02:42.661713+0200 TestOpenEars[1288:617893] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    INFO: cmn_prior.c(99): cmn_prior_update: from < 66.10 12.58 -6.59  5.23 -7.36  1.07  4.44  3.00 -0.00 -2.35  0.52  1.84  0.19 >
    INFO: cmn_prior.c(116): cmn_prior_update: to   < 65.78 10.39 -5.22  5.76 -7.50  0.51  5.25  3.14  0.81 -2.29  0.88  1.77  0.13 >
    2018-04-19 14:02:44.537660+0200 TestOpenEars[1288:617893] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 65.78 10.39 -5.22  5.76 -7.50  0.51  5.25  3.14  0.81 -2.29  0.88  1.77  0.13 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 64.89 13.06 -6.23  4.89 -7.26  1.55  3.95  2.92  0.58 -1.80  0.81  1.76  0.35 >
    INFO: fsg_search.c(843): 192 frames, 1711 HMMs (8/fr), 4736 senones (24/fr), 714 history entries (3/fr)
    
    2018-04-19 14:02:44.538437+0200 TestOpenEars[1288:617893] Pocketsphinx heard "esch do no frey" with a score of (0) and an utterance ID of 3.
    Local callback: The received hypothesis is esch do no frey with a score of 0 and an ID of 3
    2018-04-19 14:02:46.248629+0200 TestOpenEars[1288:617893] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    INFO: cmn_prior.c(99): cmn_prior_update: from < 64.89 13.06 -6.23  4.89 -7.26  1.55  3.95  2.92  0.58 -1.80  0.81  1.76  0.35 >
    INFO: cmn_prior.c(116): cmn_prior_update: to   < 66.86 12.04 -7.14  5.36 -7.09  1.47  3.59  3.22  0.66 -1.95  0.66  1.82  0.27 >
    2018-04-19 14:02:48.643337+0200 TestOpenEars[1288:617893] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 66.86 12.04 -7.14  5.36 -7.09  1.47  3.59  3.22  0.66 -1.95  0.66  1.82  0.27 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 65.41 13.10 -8.44  4.59 -6.98  2.08  2.77  3.03  0.47 -1.86  0.45  1.39 -0.09 >
    INFO: fsg_search.c(843): 248 frames, 2202 HMMs (8/fr), 6079 senones (24/fr), 843 history entries (3/fr)
    
    2018-04-19 14:02:48.645033+0200 TestOpenEars[1288:617893] Pocketsphinx heard "esch do no frey" with a score of (0) and an utterance ID of 4.
    Local callback: The received hypothesis is esch do no frey with a score of 0 and an ID of 4

    #1032343
    iKK
    Participant

    Third: the ViewController.swift file with all the relevant “generation of my language model” code

    //
    //  ViewController.swift
    //  TestOpenEars
    //
    //  Created by Stephan Korner on 13.04.18.
    //  Copyright © 2018 Ideen Kaffee Korner. All rights reserved.
    //
    
    import UIKit
    
    class ViewController: UIViewController, OEEventsObserverDelegate {
        
        var openEarsEventsObserver = OEEventsObserver()
    
        override func viewDidLoad() {
            super.viewDidLoad()
            // Do any additional setup after loading the view, typically from a nib.
            
            self.openEarsEventsObserver.delegate = self
            
            let lmGenerator = OELanguageModelGenerator()
        let acousticModelName = "AcousticModelGerman"
        let fileName = "GermanModel"
        
        let words = ["esch do no frey"] // the whole phrase as a single array element
        
        // let err: Error! = lmGenerator.generateLanguageModel(from: words, withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: acousticModelName))
        
        let err: Error! = lmGenerator.generateGrammar(from: [OneOfTheseWillBeSaidOnce : words], withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: acousticModelName))
            
            var lmPath = ""
            var dictPath = ""
            
        if err != nil {
            print("Error while creating initial language model: \(err!)")
        } else {
            dictPath = lmGenerator.pathToSuccessfullyGeneratedDictionary(withRequestedName: fileName)
            // The grammar (.gram) path is what gets passed below, since languageModelIsJSGF is true.
            lmPath = lmGenerator.pathToSuccessfullyGeneratedGrammar(withRequestedName: fileName)
        }
            
            // ************* Necessary for logging **************************
        OELogging.startOpenEarsLogging() // Receive full OpenEars logging in case of any unexpected results.
            OEPocketsphinxController.sharedInstance().verbosePocketSphinx = true
            // ************* Necessary for logging **************************
            
            do {
                try OEPocketsphinxController.sharedInstance().setActive(true) // Setting the shared OEPocketsphinxController active is necessary before any of its properties are accessed.
            } catch {
                print("Error: it wasn't possible to set the shared instance to active: \"\(error)\"")
            }
            
        OEPocketsphinxController.sharedInstance().vadThreshold = 3.2
        OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: acousticModelName), languageModelIsJSGF: true)
        }
    
        override func didReceiveMemoryWarning() {
            super.didReceiveMemoryWarning()
            // Dispose of any resources that can be recreated.
        }
        
        func pocketsphinxDidReceiveHypothesis(_ hypothesis: String!, recognitionScore: String!, utteranceID: String!) { // Something was heard
            print("Local callback: The received hypothesis is \(hypothesis!) with a score of \(recognitionScore!) and an ID of \(utteranceID!)")
        }
        
        // An optional delegate method of OEEventsObserver which informs that the Pocketsphinx recognition loop has entered its actual loop.
        // This might be useful in debugging a conflict between another sound class and Pocketsphinx.
        func pocketsphinxRecognitionLoopDidStart() {
            print("Local callback: Pocketsphinx started.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is now listening for speech.
        func pocketsphinxDidStartListening() {
            print("Local callback: Pocketsphinx is now listening.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected speech and is starting to process it.
        func pocketsphinxDidDetectSpeech() {
            print("Local callback: Pocketsphinx has detected speech.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected a second of silence, indicating the end of an utterance.
        func pocketsphinxDidDetectFinishedSpeech() {
            print("Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx has exited its recognition loop, most
        // likely in response to the OEPocketsphinxController being told to stop listening via the stopListening method.
        func pocketsphinxDidStopListening() {
            print("Local callback: Pocketsphinx has stopped listening.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop but it is not
        // going to react to speech until listening is resumed.  This can happen as a result of Flite speech being
        // in progress on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
        // or as a result of the OEPocketsphinxController being told to suspend recognition via the suspendRecognition method.
        func pocketsphinxDidSuspendRecognition() {
            print("Local callback: Pocketsphinx has suspended recognition.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop and after recognition
        // having been suspended it is now resuming.  This can happen as a result of Flite speech completing
        // on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
        // or as a result of the OEPocketsphinxController being told to resume recognition via the resumeRecognition method.
        func pocketsphinxDidResumeRecognition() {
            print("Local callback: Pocketsphinx has resumed recognition.") // Log it.
        }
        
        // An optional delegate method which informs that Pocketsphinx switched over to a new language model at the given URL in the course of
        // recognition. This does not imply that it is a valid file or that recognition will be successful using the file.
        func pocketsphinxDidChangeLanguageModel(toFile newLanguageModelPathAsString: String!, andDictionary newDictionaryPathAsString: String!) {
            
            print("Local callback: Pocketsphinx is now using the following language model: \n\(newLanguageModelPathAsString!) and the following dictionary: \(newDictionaryPathAsString!)")
        }
        
        // An optional delegate method of OEEventsObserver which informs that Flite is speaking, most likely to be useful if debugging a
        // complex interaction between sound classes. You don't have to do anything yourself in order to prevent Pocketsphinx from listening to Flite talk and trying to recognize the speech.
        func fliteDidStartSpeaking() {
            print("Local callback: Flite has started speaking") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Flite is finished speaking, most likely to be useful if debugging a
        // complex interaction between sound classes.
        func fliteDidFinishSpeaking() {
            print("Local callback: Flite has finished speaking") // Log it.
        }
        
        func pocketSphinxContinuousSetupDidFail(withReason reasonForFailure: String!) { // This can let you know that something went wrong with the recognition loop startup. Turn on [OELogging startOpenEarsLogging] to learn why.
            print("Local callback: Setting up the continuous recognition loop has failed for the reason \(reasonForFailure), please turn on OELogging.startOpenEarsLogging() to learn more.") // Log it.
        }
        
        func pocketSphinxContinuousTeardownDidFail(withReason reasonForFailure: String!) { // This can let you know that something went wrong with the recognition loop startup. Turn on OELogging.startOpenEarsLogging() to learn why.
            print("Local callback: Tearing down the continuous recognition loop has failed for the reason \(reasonForFailure)") // Log it.
        }
        
        /** Pocketsphinx couldn't start because it has no mic permissions (will only be returned on iOS7 or later).*/
        func pocketsphinxFailedNoMicPermissions() {
            print("Local callback: The user has never set mic permissions or denied permission to this app's mic, so listening will not start.")
        }
        
        /** The user prompt to get mic permissions, or a check of the mic permissions, has completed with a true or a false result  (will only be returned on iOS7 or later).*/
        
        func micPermissionCheckCompleted(withResult: Bool) {
            print("Local callback: mic check completed.")
        }
    }
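    For completeness, a minimal sketch of how the app could gate on the exact command inside the hypothesis callback above (the comparison string is the single phrase from the words array; the triggered action is a placeholder):

    func pocketsphinxDidReceiveHypothesis(_ hypothesis: String!, recognitionScore: String!, utteranceID: String!) {
        // Only react to the exact target command; ignore any other hypothesis.
        if hypothesis == "esch do no frey" {
            print("Command recognized – trigger the app's action here (placeholder).")
        }
    }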
    #1032346
    Halle Winkler
    Politepix

    Super, we can test a couple of things now that we’ve level-set. First question: would it be possible for you to use RuleORama, or is Rejecto the only plugin you would like to test with? I’m asking because the easiest first step is to see whether your results are better with RuleORama, since you already have a working grammar – but if you don’t want to use it, we can skip it and try something with Rejecto.

    #1032348
    iKK
    Participant

    In principle, we can continue with RuleORama. However, since it costs three times as much, I would of course somewhat prefer Rejecto.

    I propose we run tests with both and then decide based on the better outcome, ok? As long as I can test both technologies with a free testing license, we can go ahead and compare.

    I downloaded the RuleORama test version. Give me a moment to set it up, ok? After that we’ll have both technologies to play with.

    #1032350
    iKK
    Participant

    Halle,
    can we please continue with Rejecto. I realize the RuleORama demo is again not usable after download – and I feel that I’m losing a tremendous amount of time just setting up these demo projects. Also, your manual contains ObjC code under the Swift 3 chapter – which is not pleasant either. Can you please provide me with a working RuleORama demo (Swift 4), or we continue with Rejecto. Let me know, ok?

    #1032351
    iKK
    Participant

    With RuleORama, I end up with the following error (see log below). Can you please tell me what is still wrong? And also, what does the words array now need to look like? It seems that RuleORama wants a different one.

    Here is the RuleORama log (still not sure whether I translated all the ObjC code from the manual correctly to Swift):

    2018-04-23 14:08:20.169350+0300 TestOpenEars[4109:2062039] Error: Error Domain=com.politepix.openears Code=6000 "Language model has no content." UserInfo={NSLocalizedDescription=Language model has no content.}
    2018-04-23 14:08:20.170097+0300 TestOpenEars[4109:2062039] It wasn't possible to create this grammar: {
        OneOfTheseWillBeSaidOnce =     (
            "esch do no frey"
        );
    }
    Error while creating initial language model: Optional(Error Domain=LanguageModelErrorDomain Code=10040 "It wasn't possible to generate a grammar for this dictionary, please turn on OELogging for more information" UserInfo={NSLocalizedDescription=It wasn't possible to generate a grammar for this dictionary, please turn on OELogging for more information})
    2018-04-23 14:08:20.170989+0300 TestOpenEars[4109:2062039] Starting OpenEars logging for OpenEars version 2.506 on 64-bit device (or build): iPhone running iOS version: 11.300000
    2018-04-23 14:08:20.171130+0300 TestOpenEars[4109:2062039] Creating shared instance of OEPocketsphinxController
    2018-04-23 14:08:20.177706+0300 TestOpenEars[4109:2062039] Attempting to start listening session from startListeningWithLanguageModelAtPath:
    2018-04-23 14:08:20.177741+0300 TestOpenEars[4109:2062039] Error: you have invoked the method:
    
    startListeningWithLanguageModelAtPath:(NSString *)languageModelPath dictionaryAtPath:(NSString *)dictionaryPath acousticModelAtPath:(NSString *)acousticModelPath languageModelIsJSGF:(BOOL)languageModelIsJSGF
    
    with a languageModelPath which is nil. If your call to OELanguageModelGenerator did not return an error when you generated this grammar, that means the correct path to your grammar that you should pass to this method's languageModelPath argument is as follows:
    
    NSString *correctPathToMyLanguageModelFile = [myLanguageModelGenerator pathToSuccessfullyGeneratedGrammarWithRequestedName:@"TheNameIChoseForMyVocabulary"];
    
    Feel free to copy and paste this code for your path to your grammar, but remember to replace the part that says "TheNameIChoseForMyVocabulary" with the name you actually chose for your grammar or you will get this error again (and replace myLanguageModelGenerator with the name of your OELanguageModelGenerator instance). Since this file is required, expect an exception or undocumented behavior shortly.
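    In the Swift used in this project, the grammar-path line from the error message corresponds roughly to this (a sketch; "GermanModel" is the file name chosen for the grammar here):

    let correctPathToMyLanguageModelFile = lmGenerator.pathToSuccessfullyGeneratedGrammar(withRequestedName: "GermanModel")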
    #1032352
    iKK
    Participant

    My RuleORama-VC looks like this:

    Additionally, I
    – inserted the RuleORama framework
    – set Other Linker Flags to -ObjC
    – set the Bridging Header path correctly

    Here is the Code:

    import UIKit
    
    class ViewController: UIViewController, OEEventsObserverDelegate {
        
        var openEarsEventsObserver = OEEventsObserver()
    
        override func viewDidLoad() {
            super.viewDidLoad()
            // Do any additional setup after loading the view, typically from a nib.
            
            self.openEarsEventsObserver.delegate = self
            
            let lmGenerator = OELanguageModelGenerator()
            let accusticModelName = "AcousticModelGerman"
            let fileName = "GermanModel"
            
            let words = ["esch do no frey"]
            
            // let err: Error! = lmGenerator.generateLanguageModel(from: words, withFilesNamed: name, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
            
            // let err: Error! = lmGenerator.generateGrammar(from: [OneOfTheseWillBeSaidOnce : words], withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
            
            let err: Error! = lmGenerator.generateFastGrammar(from: [OneOfTheseWillBeSaidOnce : words], withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
            
            var lmPath = ""
            var dictPath = ""
            
            if err != nil {
                print("Error while creating initial language model: \(err)")
            } else {
                dictPath = lmGenerator.pathToSuccessfullyGeneratedDictionary(withRequestedName: fileName)
                // Since a RuleORama ruleset was generated, the ruleset path is the one to pass on:
                lmPath = lmGenerator.pathToSuccessfullyGeneratedRuleORamaRuleset(withRequestedName: fileName)
            }
            
            // ************* Necessary for logging **************************
            OELogging.startOpenEarsLogging() //Uncomment to receive full OpenEars logging in case of any unexpected results.
            OEPocketsphinxController.sharedInstance().verbosePocketSphinx = true
            // ************* Necessary for logging **************************
            
            do {
                try OEPocketsphinxController.sharedInstance().setActive(true) // Setting the shared OEPocketsphinxController active is necessary before any of its properties are accessed.
            } catch {
                print("Error: it wasn't possible to set the shared instance to active: \"\(error)\"")
            }
            
            OEPocketsphinxController.sharedInstance().vadThreshold = 3.2
            OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName), languageModelIsJSGF: true)
        }
    
        override func didReceiveMemoryWarning() {
            super.didReceiveMemoryWarning()
            // Dispose of any resources that can be recreated.
        }
        
        func pocketsphinxDidReceiveHypothesis(_ hypothesis: String!, recognitionScore: String!, utteranceID: String!) { // Something was heard
            print("Local callback: The received hypothesis is \(hypothesis!) with a score of \(recognitionScore!) and an ID of \(utteranceID!)")
        }
        
        // An optional delegate method of OEEventsObserver which informs that the Pocketsphinx recognition loop has entered its actual loop.
        // This might be useful in debugging a conflict between another sound class and Pocketsphinx.
        func pocketsphinxRecognitionLoopDidStart() {
            print("Local callback: Pocketsphinx started.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is now listening for speech.
        func pocketsphinxDidStartListening() {
            print("Local callback: Pocketsphinx is now listening.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected speech and is starting to process it.
        func pocketsphinxDidDetectSpeech() {
            print("Local callback: Pocketsphinx has detected speech.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected a second of silence, indicating the end of an utterance.
        func pocketsphinxDidDetectFinishedSpeech() {
            print("Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx has exited its recognition loop, most
        // likely in response to the OEPocketsphinxController being told to stop listening via the stopListening method.
        func pocketsphinxDidStopListening() {
            print("Local callback: Pocketsphinx has stopped listening.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop but it is not
        // going to react to speech until listening is resumed.  This can happen as a result of Flite speech being
        // in progress on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
        // or as a result of the OEPocketsphinxController being told to suspend recognition via the suspendRecognition method.
        func pocketsphinxDidSuspendRecognition() {
            print("Local callback: Pocketsphinx has suspended recognition.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop and after recognition
        // having been suspended it is now resuming.  This can happen as a result of Flite speech completing
        // on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
        // or as a result of the OEPocketsphinxController being told to resume recognition via the resumeRecognition method.
        func pocketsphinxDidResumeRecognition() {
            print("Local callback: Pocketsphinx has resumed recognition.") // Log it.
        }
        
        // An optional delegate method which informs that Pocketsphinx switched over to a new language model at the given URL in the course of
        // recognition. This does not imply that it is a valid file or that recognition will be successful using the file.
        func pocketsphinxDidChangeLanguageModel(toFile newLanguageModelPathAsString: String!, andDictionary newDictionaryPathAsString: String!) {
            
            print("Local callback: Pocketsphinx is now using the following language model: \n\(newLanguageModelPathAsString!) and the following dictionary: \(newDictionaryPathAsString!)")
        }
        
        // An optional delegate method of OEEventsObserver which informs that Flite is speaking, most likely to be useful if debugging a
        // complex interaction between sound classes. You don't have to do anything yourself in order to prevent Pocketsphinx from listening to Flite talk and trying to recognize the speech.
        func fliteDidStartSpeaking() {
            print("Local callback: Flite has started speaking") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Flite is finished speaking, most likely to be useful if debugging a
        // complex interaction between sound classes.
        func fliteDidFinishSpeaking() {
            print("Local callback: Flite has finished speaking") // Log it.
        }
        
        func pocketSphinxContinuousSetupDidFail(withReason reasonForFailure: String!) { // This can let you know that something went wrong with the recognition loop startup. Turn on [OELogging startOpenEarsLogging] to learn why.
            print("Local callback: Setting up the continuous recognition loop has failed for the reason \(reasonForFailure), please turn on OELogging.startOpenEarsLogging() to learn more.") // Log it.
        }
        
        func pocketSphinxContinuousTeardownDidFail(withReason reasonForFailure: String!) { // This can let you know that something went wrong with the recognition loop startup. Turn on OELogging.startOpenEarsLogging() to learn why.
            print("Local callback: Tearing down the continuous recognition loop has failed for the reason \(reasonForFailure)") // Log it.
        }
        
        /** Pocketsphinx couldn't start because it has no mic permissions (will only be returned on iOS7 or later).*/
        func pocketsphinxFailedNoMicPermissions() {
            print("Local callback: The user has never set mic permissions or denied permission to this app's mic, so listening will not start.")
        }
        
        /** The user prompt to get mic permissions, or a check of the mic permissions, has completed with a true or a false result  (will only be returned on iOS7 or later).*/
        
        func micPermissionCheckCompleted(withResult: Bool) {
            print("Local callback: mic check completed.")
        }
    }
    #1032354
    Halle Winkler
    Politepix

    Hi,

    A few things to clarify:

    • it is of course completely fine if you don’t want to use RuleORama, which is the reason I asked first whether it was OK for you. This is not an upsell – my question was because there is no easier time to try it out than right after you have set up a grammar, and if you wanted to hear all the options, this was the most convenient moment to explore that one; any other timing will be less convenient because we will be changing from a grammar to a language model afterwards. My intention was to explain to you how to add it to your existing project if you agreed to try it out. It is fine with me either to skip it or to take the time to get it working.

    • This is too unconstructive for me while I’m working hard to give you fast and helpful support for an unsupported language, and I’d appreciate it if you’d consider that we both have stresses in this process: “I realize the RuleORama demo is again not usable after download – and I feel that I’m losing a tremendous amount of time just setting up these demo projects. Also, your manual contains ObjC code under the Swift 3 chapter – which is not pleasant either.” I disagree with you about the origin of the issues in this case, but more importantly, I just don’t want to proceed in this tone, which also seemed to come up while I was trying hard to remove all the unknown variables from our troubleshooting process. I’m likely to close the question if this is an ongoing thing, even though we’ve both invested time in it. You don’t have to agree, but I don’t want you to be surprised if I close this discussion for that reason.

    • I want to warn you here that it is possible there is no great solution because this is an unsupported language, so that you have enough info to know whether to invest more time. I am happy to help, and I have some theories about how we might be able to make this work, but not every question has a perfect answer.

    That all said, I just noticed from your RuleORama install that there is something we need to fix in both installs, which is that in both cases the logging is being called too late to log the results of generating the language model or grammar. Can you move these:

    OELogging.startOpenEarsLogging() //Uncomment to receive full OpenEars logging in case of any unexpected results.
    OEPocketsphinxController.sharedInstance().verbosePocketSphinx = true

    To run right after super.viewDidLoad() and share the logging output from both projects?
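
    In other words, the top of viewDidLoad would then start roughly like this (a sketch – the rest of the method stays unchanged):

    override func viewDidLoad() {
        super.viewDidLoad()
        
        // Moved up so that the language model/grammar generation below gets fully logged:
        OELogging.startOpenEarsLogging()
        OEPocketsphinxController.sharedInstance().verbosePocketSphinx = true
        
        self.openEarsEventsObserver.delegate = self
        // ... the rest of the existing setup code follows unchanged ...
    }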

    #1032355
    iKK
    Participant

    I am sorry about my tone during the RuleORama demo trials today – I felt a bit stressed out since things did not fit together immediately :/ I do appreciate your help!

    As requested, I moved the logging code accordingly (i.e. to right after super.viewDidLoad()).

    Please see the following two forum entries for the two logs:

    (1): Done with Rejecto
    –> I spoke 5 times; the first two utterances were spoken correctly (i.e. matching our words array), while the third, fourth and fifth were spoken incorrectly (but unfortunately still recognized by Rejecto).

    (2): Done with RuleORama
    –> There is still a bug in the code, as can be seen in the log…

    #1032356
    iKK
    Participant

    Log from the 5 Rejecto trials:

    2018-04-23 23:05:12.918271+0300 TestOpenEars[4509:2258026] Starting OpenEars logging for OpenEars version 2.506 on 64-bit device (or build): iPhone running iOS version: 11.300000
    2018-04-23 23:05:12.919509+0300 TestOpenEars[4509:2258026] Creating shared instance of OEPocketsphinxController
    2018-04-23 23:05:12.941856+0300 TestOpenEars[4509:2258026] Rejecto version 2.500000
    2018-04-23 23:05:12.943018+0300 TestOpenEars[4509:2258026] Since there is no cached version, loading the g2p model for the acoustic model called AcousticModelGerman
    2018-04-23 23:05:13.041475+0300 TestOpenEars[4509:2258026] Since there is no cached version, loading the language model lookup list for the acoustic model called AcousticModelGerman
    2018-04-23 23:05:13.049510+0300 TestOpenEars[4509:2258026] Returning a cached version of LanguageModelGeneratorLookupList.text
    2018-04-23 23:05:13.049628+0300 TestOpenEars[4509:2258026] Returning a cached version of g2p
    2018-04-23 23:05:13.054494+0300 TestOpenEars[4509:2258026] The word do was not found in the dictionary of the acoustic model /var/containers/Bundle/Application/43D01A3B-05FF-4662-87CD-082AE28DF8B2/TestOpenEars.app/AcousticModelGerman.bundle. Now using the fallback method to look it up. If this is happening more frequently than you would expect, likely causes can be that you are entering words in another language from the one you are recognizing, or that there are symbols (including numbers) that need to be spelled out or cleaned up, or you are using your own acoustic model and there is an issue with either its phonetic dictionary or it lacks a g2p file. Please get in touch at the forums for assistance with the last two possible issues.
    2018-04-23 23:05:13.054778+0300 TestOpenEars[4509:2258026] the graphemes "d oo" were created for the word do using the fallback method.
    2018-04-23 23:05:13.059873+0300 TestOpenEars[4509:2258026] The word esch was not found in the dictionary of the acoustic model /var/containers/Bundle/Application/43D01A3B-05FF-4662-87CD-082AE28DF8B2/TestOpenEars.app/AcousticModelGerman.bundle. Now using the fallback method to look it up. If this is happening more frequently than you would expect, likely causes can be that you are entering words in another language from the one you are recognizing, or that there are symbols (including numbers) that need to be spelled out or cleaned up, or you are using your own acoustic model and there is an issue with either its phonetic dictionary or it lacks a g2p file. Please get in touch at the forums for assistance with the last two possible issues.
    2018-04-23 23:05:13.060119+0300 TestOpenEars[4509:2258026] the graphemes "@ ss" were created for the word esch using the fallback method.
    2018-04-23 23:05:13.065637+0300 TestOpenEars[4509:2258026] The word frey was not found in the dictionary of the acoustic model /var/containers/Bundle/Application/43D01A3B-05FF-4662-87CD-082AE28DF8B2/TestOpenEars.app/AcousticModelGerman.bundle. Now using the fallback method to look it up. If this is happening more frequently than you would expect, likely causes can be that you are entering words in another language from the one you are recognizing, or that there are symbols (including numbers) that need to be spelled out or cleaned up, or you are using your own acoustic model and there is an issue with either its phonetic dictionary or it lacks a g2p file. Please get in touch at the forums for assistance with the last two possible issues.
    2018-04-23 23:05:13.066140+0300 TestOpenEars[4509:2258026] the graphemes "f r @ ii" were created for the word frey using the fallback method.
    2018-04-23 23:05:13.071132+0300 TestOpenEars[4509:2258026] The word no was not found in the dictionary of the acoustic model /var/containers/Bundle/Application/43D01A3B-05FF-4662-87CD-082AE28DF8B2/TestOpenEars.app/AcousticModelGerman.bundle. Now using the fallback method to look it up. If this is happening more frequently than you would expect, likely causes can be that you are entering words in another language from the one you are recognizing, or that there are symbols (including numbers) that need to be spelled out or cleaned up, or you are using your own acoustic model and there is an issue with either its phonetic dictionary or it lacks a g2p file. Please get in touch at the forums for assistance with the last two possible issues.
    2018-04-23 23:05:13.071372+0300 TestOpenEars[4509:2258026] the graphemes "n oo" were created for the word no using the fallback method.
    2018-04-23 23:05:13.071437+0300 TestOpenEars[4509:2258026] I'm done running performDictionaryLookup and it took 0.021849 seconds
    2018-04-23 23:05:13.071773+0300 TestOpenEars[4509:2258026] I'm done running performDictionaryLookup and it took 0.022536 seconds
    2018-04-23 23:05:13.077796+0300 TestOpenEars[4509:2258026] A value has been given for weight, but it is identical to the default so we are ignoring it.
    2018-04-23 23:05:13.077861+0300 TestOpenEars[4509:2258026] Starting dynamic language model generation
    
    INFO: ngram_model_arpa_legacy.c(504): ngrams 1=26, 2=45, 3=24
    INFO: ngram_model_arpa_legacy.c(136): Reading unigrams
    INFO: ngram_model_arpa_legacy.c(543):       26 = #unigrams created
    INFO: ngram_model_arpa_legacy.c(196): Reading bigrams
    INFO: ngram_model_arpa_legacy.c(561):       45 = #bigrams created
    INFO: ngram_model_arpa_legacy.c(562):        3 = #prob2 entries
    INFO: ngram_model_arpa_legacy.c(570):        3 = #bo_wt2 entries
    INFO: ngram_model_arpa_legacy.c(293): Reading trigrams
    INFO: ngram_model_arpa_legacy.c(583):       24 = #trigrams created
    INFO: ngram_model_arpa_legacy.c(584):        2 = #prob3 entries
    INFO: ngram_model_dmp_legacy.c(521): Building DMP model...
    INFO: ngram_model_dmp_legacy.c(551):       26 = #unigrams created
    INFO: ngram_model_dmp_legacy.c(652):       45 = #bigrams created
    INFO: ngram_model_dmp_legacy.c(653):        3 = #prob2 entries
    INFO: ngram_model_dmp_legacy.c(660):        3 = #bo_wt2 entries
    INFO: ngram_model_dmp_legacy.c(664):       24 = #trigrams created
    INFO: ngram_model_dmp_legacy.c(665):        2 = #prob3 entries
    2018-04-23 23:05:13.173883+0300 TestOpenEars[4509:2258026] Done creating language model with CMUCLMTK in 0.095971 seconds.
    INFO: ngram_model_arpa_legacy.c(504): ngrams 1=26, 2=45, 3=24
    INFO: ngram_model_arpa_legacy.c(136): Reading unigrams
    INFO: ngram_model_arpa_legacy.c(543):       26 = #unigrams created
    INFO: ngram_model_arpa_legacy.c(196): Reading bigrams
    INFO: ngram_model_arpa_legacy.c(561):       45 = #bigrams created
    INFO: ngram_model_arpa_legacy.c(562):        3 = #prob2 entries
    INFO: ngram_model_arpa_legacy.c(570):        3 = #bo_wt2 entries
    INFO: ngram_model_arpa_legacy.c(293): Reading trigrams
    INFO: ngram_model_arpa_legacy.c(583):       24 = #trigrams created
    INFO: ngram_model_arpa_legacy.c(584):        2 = #prob3 entries
    INFO: ngram_model_dmp_legacy.c(521): Building DMP model...
    INFO: ngram_model_dmp_legacy.c(551):       26 = #unigrams created
    INFO: ngram_model_dmp_legacy.c(652):       45 = #bigrams created
    INFO: ngram_model_dmp_legacy.c(653):        3 = #prob2 entries
    INFO: ngram_model_dmp_legacy.c(660):        3 = #bo_wt2 entries
    INFO: ngram_model_dmp_legacy.c(664):       24 = #trigrams created
    INFO: ngram_model_dmp_legacy.c(665):        2 = #prob3 entries
    2018-04-23 23:05:13.178718+0300 TestOpenEars[4509:2258026] I'm done running dynamic language model generation and it took 0.235861 seconds
    2018-04-23 23:05:13.180002+0300 TestOpenEars[4509:2258026] Attempting to start listening session from startListeningWithLanguageModelAtPath:
    2018-04-23 23:05:13.184972+0300 TestOpenEars[4509:2258026] User gave mic permission for this app.
    2018-04-23 23:05:13.185249+0300 TestOpenEars[4509:2258026] setSecondsOfSilence wasn't set, using default of 0.700000.
    2018-04-23 23:05:13.186600+0300 TestOpenEars[4509:2258157] Starting listening.
    2018-04-23 23:05:13.186840+0300 TestOpenEars[4509:2258157] About to set up audio session
    2018-04-23 23:05:13.379133+0300 TestOpenEars[4509:2258157] Creating audio session with default settings.
    2018-04-23 23:05:13.379218+0300 TestOpenEars[4509:2258157] Done setting audio session category.
    2018-04-23 23:05:13.388928+0300 TestOpenEars[4509:2258157] Done setting preferred sample rate to 16000.000000 – now the real sample rate is 48000.000000
    2018-04-23 23:05:13.390500+0300 TestOpenEars[4509:2258157] number of channels is already the preferred number of 1 so not setting it.
    2018-04-23 23:05:13.395573+0300 TestOpenEars[4509:2258157] Done setting session's preferred I/O buffer duration to 0.128000 – now the actual buffer duration is 0.085333
    2018-04-23 23:05:13.395785+0300 TestOpenEars[4509:2258157] Done setting up audio session
    2018-04-23 23:05:13.402184+0300 TestOpenEars[4509:2258166] Audio route has changed for the following reason:
    2018-04-23 23:05:13.404934+0300 TestOpenEars[4509:2258157] About to set up audio IO unit in a session with a sample rate of 48000.000000, a channel number of 1 and a buffer duration of 0.085333.
    2018-04-23 23:05:13.405005+0300 TestOpenEars[4509:2258166] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
    2018-04-23 23:05:13.543573+0300 TestOpenEars[4509:2258166] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: "MicrophoneBuiltIn". Output route or routes: "Speaker">. The previous route before changing to this route was "<AVAudioSessionRouteDescription: 0x1c021a550, 
    inputs = (null); 
    outputs = (
        "<AVAudioSessionPortDescription: 0x1c021a390, type = Speaker; name = Speaker; UID = Speaker; selectedDataSource = (null)>"
    )>".
    2018-04-23 23:05:13.546940+0300 TestOpenEars[4509:2258166] Audio route has changed for the following reason:
    2018-04-23 23:05:13.547508+0300 TestOpenEars[4509:2258166] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
    2018-04-23 23:05:13.550799+0300 TestOpenEars[4509:2258166] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: "MicrophoneBuiltIn". Output route or routes: "Speaker">. The previous route before changing to this route was "<AVAudioSessionRouteDescription: 0x1c021a4c0, 
    inputs = (
        "<AVAudioSessionPortDescription: 0x1c021a3a0, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Bottom>"
    ); 
    outputs = (
        "<AVAudioSessionPortDescription: 0x1c44061f0, type = Receiver; name = Receiver; UID = Built-In Receiver; selectedDataSource = (null)>"
    )>".
    2018-04-23 23:05:13.569655+0300 TestOpenEars[4509:2258157] Done setting up audio unit
    2018-04-23 23:05:13.570040+0300 TestOpenEars[4509:2258157] About to start audio IO unit
    2018-04-23 23:05:13.790136+0300 TestOpenEars[4509:2258157] Done starting audio unit
    INFO: pocketsphinx.c(145): Parsed model-specific feature parameters from /var/containers/Bundle/Application/43D01A3B-05FF-4662-87CD-082AE28DF8B2/TestOpenEars.app/AcousticModelGerman.bundle/feat.params
    Current configuration:
    [NAME]			[DEFLT]		[VALUE]
    -agc			none		none
    -agcthresh		2.0		2.000000e+00
    -allphone				
    -allphone_ci		no		no
    -alpha			0.97		9.700000e-01
    -ascale			20.0		2.000000e+01
    -aw			1		1
    -backtrace		no		no
    -beam			1e-48		1.000000e-48
    -bestpath		yes		yes
    -bestpathlw		9.5		9.500000e+00
    -ceplen			13		13
    -cmn			current		current
    -cmninit		8.0		30
    -compallsen		no		no
    -debug					0
    -dict					/var/mobile/Containers/Data/Application/2235C424-8991-43FD-BD60-771ABE6FEF52/Library/Caches/GermanModel.dic
    -dictcase		no		no
    -dither			no		no
    -doublebw		no		no
    -ds			1		1
    -fdict					/var/containers/Bundle/Application/43D01A3B-05FF-4662-87CD-082AE28DF8B2/TestOpenEars.app/AcousticModelGerman.bundle/noisedict
    -feat			1s_c_d_dd	1s_c_d_dd
    -featparams				/var/containers/Bundle/Application/43D01A3B-05FF-4662-87CD-082AE28DF8B2/TestOpenEars.app/AcousticModelGerman.bundle/feat.params
    -fillprob		1e-8		1.000000e-08
    -frate			100		100
    -fsg					
    -fsgusealtpron		yes		yes
    -fsgusefiller		yes		yes
    -fwdflat		yes		yes
    -fwdflatbeam		1e-64		1.000000e-64
    -fwdflatefwid		4		4
    -fwdflatlw		8.5		8.500000e+00
    -fwdflatsfwin		25		25
    -fwdflatwbeam		7e-29		7.000000e-29
    -fwdtree		yes		yes
    -hmm					/var/containers/Bundle/Application/43D01A3B-05FF-4662-87CD-082AE28DF8B2/TestOpenEars.app/AcousticModelGerman.bundle
    -input_endian		little		little
    -jsgf					/var/mobile/Containers/Data/Application/2235C424-8991-43FD-BD60-771ABE6FEF52/Library/Caches/GermanModel.gram
    -keyphrase				
    -kws					
    -kws_delay		10		10
    -kws_plp		1e-1		1.000000e-01
    -kws_threshold		1		1.000000e+00
    -latsize		5000		5000
    -lda					
    -ldadim			0		0
    -lifter			0		22
    -lm					
    -lmctl					
    -lmname					
    -logbase		1.0001		1.000100e+00
    -logfn					
    -logspec		no		no
    -lowerf			133.33334	1.300000e+02
    -lpbeam			1e-40		1.000000e-40
    -lponlybeam		7e-29		7.000000e-29
    -lw			6.5		1.000000e+00
    -maxhmmpf		30000		30000
    -maxwpf			-1		-1
    -mdef					/var/containers/Bundle/Application/43D01A3B-05FF-4662-87CD-082AE28DF8B2/TestOpenEars.app/AcousticModelGerman.bundle/mdef
    -mean					/var/containers/Bundle/Application/43D01A3B-05FF-4662-87CD-082AE28DF8B2/TestOpenEars.app/AcousticModelGerman.bundle/means
    -mfclogdir				
    -min_endfr		0		0
    -mixw					/var/containers/Bundle/Application/43D01A3B-05FF-4662-87CD-082AE28DF8B2/TestOpenEars.app/AcousticModelGerman.bundle/mixture_weights
    -mixwfloor		0.0000001	1.000000e-07
    -mllr					
    -mmap			yes		yes
    -ncep			13		13
    -nfft			512		512
    -nfilt			40		25
    -nwpen			1.0		1.000000e+00
    -pbeam			1e-48		1.000000e-48
    -pip			1.0		1.000000e+00
    -pl_beam		1e-10		1.000000e-10
    -pl_pbeam		1e-10		1.000000e-10
    -pl_pip			1.0		1.000000e+00
    -pl_weight		3.0		3.000000e+00
    -pl_window		5		5
    -rawlogdir				
    -remove_dc		no		no
    -remove_noise		yes		yes
    -remove_silence		yes		yes
    -round_filters		yes		yes
    -samprate		16000		1.600000e+04
    -seed			-1		-1
    -sendump				
    -senlogdir				
    -senmgau				
    -silprob		0.005		5.000000e-03
    -smoothspec		no		no
    -svspec					
    -tmat					/var/containers/Bundle/Application/43D01A3B-05FF-4662-87CD-082AE28DF8B2/TestOpenEars.app/AcousticModelGerman.bundle/transition_matrices
    -tmatfloor		0.0001		1.000000e-04
    -topn			4		4
    -topn_beam		0		0
    -toprule				
    -transform		legacy		dct
    -unit_area		yes		yes
    -upperf			6855.4976	6.800000e+03
    -uw			1.0		1.000000e+00
    -vad_postspeech		50		69
    -vad_prespeech		20		10
    -vad_startspeech	10		10
    -vad_threshold		2.0		3.200000e+00
    -var					/var/containers/Bundle/Application/43D01A3B-05FF-4662-87CD-082AE28DF8B2/TestOpenEars.app/AcousticModelGerman.bundle/variances
    -varfloor		0.0001		1.000000e-04
    -varnorm		no		no
    -verbose		no		no
    -warp_params				
    -warp_type		inverse_linear	inverse_linear
    -wbeam			7e-29		7.000000e-29
    -wip			0.65		6.500000e-01
    -wlen			0.025625	2.562500e-02
    
    INFO: feat.c(715): Initializing feature stream to type: '1s_c_d_dd', ceplen=13, CMN='current', VARNORM='no', AGC='none'
    INFO: cmn.c(143): mean[0]= 12.00, mean[1..12]= 0.0
    INFO: mdef.c(518): Reading model definition: /var/containers/Bundle/Application/43D01A3B-05FF-4662-87CD-082AE28DF8B2/TestOpenEars.app/AcousticModelGerman.bundle/mdef
    INFO: bin_mdef.c(181): Allocating 53834 * 8 bytes (420 KiB) for CD tree
    INFO: tmat.c(206): Reading HMM transition probability matrices: /var/containers/Bundle/Application/43D01A3B-05FF-4662-87CD-082AE28DF8B2/TestOpenEars.app/AcousticModelGerman.bundle/transition_matrices
    INFO: acmod.c(117): Attempting to use PTM computation module
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/43D01A3B-05FF-4662-87CD-082AE28DF8B2/TestOpenEars.app/AcousticModelGerman.bundle/means
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size: 
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/43D01A3B-05FF-4662-87CD-082AE28DF8B2/TestOpenEars.app/AcousticModelGerman.bundle/variances
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size: 
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(354): 7100 variance values floored
    INFO: ptm_mgau.c(801): Number of codebooks exceeds 256: 2129
    INFO: acmod.c(119): Attempting to use semi-continuous computation module
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/43D01A3B-05FF-4662-87CD-082AE28DF8B2/TestOpenEars.app/AcousticModelGerman.bundle/means
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size: 
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/43D01A3B-05FF-4662-87CD-082AE28DF8B2/TestOpenEars.app/AcousticModelGerman.bundle/variances
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size: 
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(354): 7100 variance values floored
    INFO: acmod.c(121): Falling back to general multi-stream GMM computation
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/43D01A3B-05FF-4662-87CD-082AE28DF8B2/TestOpenEars.app/AcousticModelGerman.bundle/means
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size: 
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/43D01A3B-05FF-4662-87CD-082AE28DF8B2/TestOpenEars.app/AcousticModelGerman.bundle/variances
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size: 
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(354): 7100 variance values floored
    INFO: ms_senone.c(149): Reading senone mixture weights: /var/containers/Bundle/Application/43D01A3B-05FF-4662-87CD-082AE28DF8B2/TestOpenEars.app/AcousticModelGerman.bundle/mixture_weights
    INFO: ms_senone.c(200): Truncating senone logs3(pdf) values by 10 bits
    INFO: ms_senone.c(207): Not transposing mixture weights in memory
    INFO: ms_senone.c(268): Read mixture weights for 2129 senones: 1 features x 32 codewords
    INFO: ms_senone.c(320): Mapping senones to individual codebooks
    INFO: ms_mgau.c(141): The value of topn: 4
    INFO: phone_loop_search.c(114): State beam -225 Phone exit beam -225 Insertion penalty 0
    INFO: dict.c(320): Allocating 4124 * 32 bytes (128 KiB) for word entries
    INFO: dict.c(333): Reading main dictionary: /var/mobile/Containers/Data/Application/2235C424-8991-43FD-BD60-771ABE6FEF52/Library/Caches/GermanModel.dic
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(336): 24 words read
    INFO: dict.c(358): Reading filler dictionary: /var/containers/Bundle/Application/43D01A3B-05FF-4662-87CD-082AE28DF8B2/TestOpenEars.app/AcousticModelGerman.bundle/noisedict
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(361): 4 words read
    INFO: dict2pid.c(396): Building PID tables for dictionary
    INFO: dict2pid.c(406): Allocating 43^3 * 2 bytes (155 KiB) for word-initial triphones
    INFO: dict2pid.c(132): Allocated 44720 bytes (43 KiB) for word-final triphones
    INFO: dict2pid.c(196): Allocated 44720 bytes (43 KiB) for single-phone word triphones
    INFO: jsgf.c(691): Defined rule: <GermanModel.g00000>
    INFO: jsgf.c(691): Defined rule: PUBLIC <GermanModel.rule_0>
    INFO: fsg_model.c(215): Computing transitive closure for null transitions
    INFO: fsg_model.c(277): 0 null transitions added
    INFO: fsg_search.c(227): FSG(beam: -1080, pbeam: -1080, wbeam: -634; wip: -5, pip: 0)
    INFO: fsg_model.c(428): Adding silence transitions for <sil> to FSG
    INFO: fsg_model.c(448): Added 5 silence word transitions
    INFO: fsg_model.c(428): Adding silence transitions for <sil> to FSG
    INFO: fsg_model.c(448): Added 5 silence word transitions
    INFO: fsg_search.c(173): Added 0 alternate word transitions
    INFO: fsg_lextree.c(110): Allocated 440 bytes (0 KiB) for left and right context phones
    INFO: fsg_lextree.c(256): 17 HMM nodes in lextree (11 leaves)
    INFO: fsg_lextree.c(259): Allocated 2448 bytes (2 KiB) for all lextree nodes
    INFO: fsg_lextree.c(262): Allocated 1584 bytes (1 KiB) for lextree leafnodes
    2018-04-23 23:05:14.614291+0300 TestOpenEars[4509:2258157] There is no CMN plist so we are using the fresh CMN value 30.000000.
    2018-04-23 23:05:14.614826+0300 TestOpenEars[4509:2258157] Listening.
    2018-04-23 23:05:14.615526+0300 TestOpenEars[4509:2258157] Project has these words or phrases in its dictionary:
    ___REJ_yy
    ___REJ_y:
    ___REJ_uu
    ___REJ_ui:
    ___REJ_ui
    ___REJ_u:
    ___REJ_oy
    ___REJ_oo
    ___REJ_o:
    ___REJ_ii
    ___REJ_i:
    ___REJ_ei
    ___REJ_ee:
    ___REJ_ee
    ___REJ_e:
    ___REJ_au
    ___REJ_ai
    ___REJ_aa:
    ___REJ_a
    ___REJ_@
    do
    esch
    frey
    no
    2018-04-23 23:05:14.616365+0300 TestOpenEars[4509:2258157] Recognition loop has started
    2018-04-23 23:05:14.616672+0300 TestOpenEars[4509:2258026] Successfully started listening session from startListeningWithLanguageModelAtPath:
    Local callback: Pocketsphinx is now listening.
    Local callback: Pocketsphinx started.
    2018-04-23 23:05:15.009217+0300 TestOpenEars[4509:2258263] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    2018-04-23 23:05:15.799115+0300 TestOpenEars[4509:2258263] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 30.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 44.41 30.50  8.46 18.30  6.58  0.68  6.31  2.90  2.41 -1.35  6.45  2.09 -2.91 >
    INFO: fsg_search.c(843): 91 frames, 1007 HMMs (11/fr), 2357 senones (25/fr), 460 history entries (5/fr)
    
    ERROR: "fsg_search.c", line 913: Final result does not match the grammar in frame 91
    2018-04-23 23:05:15.800405+0300 TestOpenEars[4509:2258263] Pocketsphinx heard " " with a score of (0) and an utterance ID of 0.
    2018-04-23 23:05:15.800484+0300 TestOpenEars[4509:2258263] Hypothesis was null so we aren't returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController's property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
    2018-04-23 23:05:18.838269+0300 TestOpenEars[4509:2258158] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    2018-04-23 23:05:20.650945+0300 TestOpenEars[4509:2258158] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 44.41 30.50  8.46 18.30  6.58  0.68  6.31  2.90  2.41 -1.35  6.45  2.09 -2.91 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 53.26 20.96  4.55 17.38  0.21 -1.67  4.79 -0.35 -0.18 -4.38  6.48 -1.25 -0.39 >
    INFO: fsg_search.c(843): 181 frames, 2069 HMMs (11/fr), 4811 senones (26/fr), 686 history entries (3/fr)
    
    2018-04-23 23:05:20.651764+0300 TestOpenEars[4509:2258158] Pocketsphinx heard "esch do no frey" with a score of (0) and an utterance ID of 1.
    Local callback: The received hypothesis is esch do no frey with a score of 0 and an ID of 1
    2018-04-23 23:05:23.066355+0300 TestOpenEars[4509:2258158] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    2018-04-23 23:05:24.596305+0300 TestOpenEars[4509:2258158] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 53.26 20.96  4.55 17.38  0.21 -1.67  4.79 -0.35 -0.18 -4.38  6.48 -1.25 -0.39 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 54.65 18.41  2.70 19.09 -0.71 -1.76  5.51 -0.36 -1.55 -4.42  6.28 -2.13  0.55 >
    INFO: fsg_search.c(843): 162 frames, 1278 HMMs (7/fr), 3174 senones (19/fr), 459 history entries (2/fr)
    
    2018-04-23 23:05:24.597442+0300 TestOpenEars[4509:2258158] Pocketsphinx heard "esch do no frey" with a score of (0) and an utterance ID of 2.
    Local callback: The received hypothesis is esch do no frey with a score of 0 and an ID of 2
    2018-04-23 23:05:27.788021+0300 TestOpenEars[4509:2258158] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    2018-04-23 23:05:29.850067+0300 TestOpenEars[4509:2258158] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 54.65 18.41  2.70 19.09 -0.71 -1.76  5.51 -0.36 -1.55 -4.42  6.28 -2.13  0.55 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 54.51 15.96  2.96 17.45 -1.80 -3.57  5.57  0.03 -0.98 -3.84  6.16 -2.31  0.42 >
    INFO: fsg_search.c(843): 213 frames, 2056 HMMs (9/fr), 5431 senones (25/fr), 682 history entries (3/fr)
    
    2018-04-23 23:05:29.851350+0300 TestOpenEars[4509:2258158] Pocketsphinx heard "esch do no frey" with a score of (0) and an utterance ID of 3.
    Local callback: The received hypothesis is esch do no frey with a score of 0 and an ID of 3
    2018-04-23 23:05:31.887961+0300 TestOpenEars[4509:2258158] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    INFO: cmn_prior.c(99): cmn_prior_update: from < 54.51 15.96  2.96 17.45 -1.80 -3.57  5.57  0.03 -0.98 -3.84  6.16 -2.31  0.42 >
    INFO: cmn_prior.c(116): cmn_prior_update: to   < 55.54 13.48  2.27 17.55 -2.54 -4.29  5.28 -0.03 -1.65 -3.59  6.26 -2.36  0.88 >
    2018-04-23 23:05:34.105732+0300 TestOpenEars[4509:2258158] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 55.54 13.48  2.27 17.55 -2.54 -4.29  5.28 -0.03 -1.65 -3.59  6.26 -2.36  0.88 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 52.94 14.17  2.28 16.13 -1.96 -3.92  5.22 -0.20 -1.13 -3.16  6.55 -1.67  0.61 >
    INFO: fsg_search.c(843): 224 frames, 1865 HMMs (8/fr), 5165 senones (23/fr), 563 history entries (2/fr)
    
    2018-04-23 23:05:34.107134+0300 TestOpenEars[4509:2258158] Pocketsphinx heard "esch do no frey" with a score of (0) and an utterance ID of 4.
    Local callback: The received hypothesis is esch do no frey with a score of 0 and an ID of 4
    2018-04-23 23:05:35.979058+0300 TestOpenEars[4509:2258158] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    2018-04-23 23:05:37.790857+0300 TestOpenEars[4509:2258158] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 52.94 14.17  2.28 16.13 -1.96 -3.92  5.22 -0.20 -1.13 -3.16  6.55 -1.67  0.61 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 53.41 11.98  2.53 14.97 -2.38 -4.02  5.18  0.31 -1.08 -2.87  5.96 -1.54  0.32 >
    INFO: fsg_search.c(843): 186 frames, 1338 HMMs (7/fr), 3723 senones (20/fr), 409 history entries (2/fr)
    
    2018-04-23 23:05:37.792028+0300 TestOpenEars[4509:2258158] Pocketsphinx heard "esch do no frey" with a score of (0) and an utterance ID of 5.
    Local callback: The received hypothesis is esch do no frey with a score of 0 and an ID of 5
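
    Side note on utterance 0 in the log above: it mentions OEPocketsphinxController’s returnNullHypotheses property. If null hypotheses should also reach the delegate, a one-line sketch (property name taken from the log message; set it before starting to listen):

    OEPocketsphinxController.sharedInstance().returnNullHypotheses = true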
    #1032357
    iKK
    Participant

    Log from RuleORama:

    2018-04-23 23:10:25.782460+0300 TestOpenEars[4514:2260548] Starting OpenEars logging for OpenEars version 2.506 on 64-bit device (or build): iPhone running iOS version: 11.300000
    2018-04-23 23:10:25.783510+0300 TestOpenEars[4514:2260548] Creating shared instance of OEPocketsphinxController
    2018-04-23 23:10:25.804235+0300 TestOpenEars[4514:2260548] RuleORama version 2.502000
    2018-04-23 23:10:25.821992+0300 TestOpenEars[4514:2260548] Error: Error Domain=com.politepix.openears Code=6000 "Language model has no content." UserInfo={NSLocalizedDescription=Language model has no content.}
    2018-04-23 23:10:25.822185+0300 TestOpenEars[4514:2260548] Error generating this grammar: Error Domain=com.politepix.openears Code=6000 "Language model has no content." UserInfo={NSLocalizedDescription=Language model has no content.}
    2018-04-23 23:10:25.822216+0300 TestOpenEars[4514:2260548] Generating fast grammar took 0.000543 seconds
    2018-04-23 23:10:25.823830+0300 TestOpenEars[4514:2260548] It wasn't possible to create this grammar: {
        OneOfTheseWillBeSaidOnce =     (
            "esch do no frey"
        );
    }
    Error while creating initial language model: Optional(Error Domain=LanguageModelErrorDomain Code=10040 "It wasn't possible to generate a grammar for this dictionary, please turn on OELogging for more information" UserInfo={NSLocalizedDescription=It wasn't possible to generate a grammar for this dictionary, please turn on OELogging for more information})
    2018-04-23 23:10:25.828302+0300 TestOpenEars[4514:2260548] Attempting to start listening session from startListeningWithLanguageModelAtPath:
    2018-04-23 23:10:25.828354+0300 TestOpenEars[4514:2260548] Error: you have invoked the method:
    
    startListeningWithLanguageModelAtPath:(NSString *)languageModelPath dictionaryAtPath:(NSString *)dictionaryPath acousticModelAtPath:(NSString *)acousticModelPath languageModelIsJSGF:(BOOL)languageModelIsJSGF
    
    with a languageModelPath which is nil. If your call to OELanguageModelGenerator did not return an error when you generated this grammar, that means the correct path to your grammar that you should pass to this method's languageModelPath argument is as follows:
    
    NSString *correctPathToMyLanguageModelFile = [myLanguageModelGenerator pathToSuccessfullyGeneratedGrammarWithRequestedName:@"TheNameIChoseForMyVocabulary"];
    
    Feel free to copy and paste this code for your path to your grammar, but remember to replace the part that says "TheNameIChoseForMyVocabulary" with the name you actually chose for your grammar or you will get this error again (and replace myLanguageModelGenerator with the name of your OELanguageModelGenerator instance). Since this file is required, expect an exception or undocumented behavior shortly.
    2018-04-23 23:11:00.068429+0300 TestOpenEars[4514:2260548] Status bar could not find cached time string image. Rendering in-process.
    2018-04-23 23:12:00.002388+0300 TestOpenEars[4514:2260548] Status bar could not find cached time string image. Rendering in-process.
    2018-04-23 23:13:00.003170+0300 TestOpenEars[4514:2260548] Status bar could not find cached time string image. Rendering in-process.
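
    One way to avoid the follow-on nil-path error while the grammar bug itself is being debugged – a sketch based on the view controller code above, guarding the start call with the generation result:

    if err == nil && !lmPath.isEmpty && !dictPath.isEmpty {
        OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName), languageModelIsJSGF: true)
    } else {
        print("Skipping startListening because grammar generation failed.")
    }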
    #1032358
    Halle Winkler
    Politepix

    Cool, thank you. Do you have a log for your earlier project that is just OpenEars using a grammar (without Rejecto), with the logging calls moved to the top? I thought that was the main file we started debugging with above, and then we were going to quickly try out RuleORama if you wanted. Those two grammar-using projects are the ones I’m curious about right now, because it looks like there is a flaw in the grammar and I want to know whether the same error is happening in both.

    #1032359
    Halle Winkler
    Politepix

    I’m talking about the project which uses this VC: https://www.politepix.com/forums/topic/recognize-short-command-in-nonenglish/#post-1032343

    Except with the logging calls moved high enough up that we can see any errors that happen while you’re generating the grammar.

    #1032360
    iKK
    Participant

    Sorry – I need to answer this tomorrow. I don’t think I still have this project in the state it was in… Please let me check tomorrow, ok?

    #1032361
    Halle Winkler
    Politepix

    No problem, just keep in mind that I asked for that project to have a clean slate to work from without mixing up code from multiple approaches, so we want to get back to that state of clarity and simplicity.

    #1032362
    iKK
    Participant

    Ok, I have everything ready again (thanks to Time Machine ;) – since I had deleted the OpenEars-only version in git.

    1) OpenEars only, with the German acoustic model and logging (= the version in your link, except with the logging moved to right after viewDidLoad)

    2) OpenEars with Rejecto and the German acoustic model (with logging right after viewDidLoad)

    3) OpenEars with RuleORama and the German acoustic model (with logging right after viewDidLoad, but still with a bug that I don’t know how to correct – see the log above)

    In the next forum entry I’ll post the log from the OpenEars-only version that you asked for. If more logs are needed, let me know, ok?

    #1032363
    iKK
    Participant

    OpenEars-only version with the German acoustic model and logging:
    –> I spoke the sentence correctly 3 times and incorrectly 3 times (the incorrect ones were unfortunately still recognized by the app):

    2018-04-24 15:26:59.451799+0300 TestOpenEars[5111:2385157] Starting OpenEars logging for OpenEars version 2.506 on 64-bit device (or build): iPhone running iOS version: 11.300000
    2018-04-24 15:26:59.453199+0300 TestOpenEars[5111:2385157] Creating shared instance of OEPocketsphinxController
    2018-04-24 15:26:59.492625+0300 TestOpenEars[5111:2385157] Since there is no cached version, loading the language model lookup list for the acoustic model called AcousticModelGerman
    2018-04-24 15:26:59.500315+0300 TestOpenEars[5111:2385157] Since there is no cached version, loading the g2p model for the acoustic model called AcousticModelGerman
    2018-04-24 15:26:59.560933+0300 TestOpenEars[5111:2385157] The word do was not found in the dictionary of the acoustic model /var/containers/Bundle/Application/EF8E1618-8403-456C-8666-01B9C11D392E/TestOpenEars.app/AcousticModelGerman.bundle. Now using the fallback method to look it up. If this is happening more frequently than you would expect, likely causes can be that you are entering words in another language from the one you are recognizing, or that there are symbols (including numbers) that need to be spelled out or cleaned up, or you are using your own acoustic model and there is an issue with either its phonetic dictionary or it lacks a g2p file. Please get in touch at the forums for assistance with the last two possible issues.
    2018-04-24 15:26:59.561292+0300 TestOpenEars[5111:2385157] the graphemes "d oo" were created for the word do using the fallback method.
    2018-04-24 15:26:59.566736+0300 TestOpenEars[5111:2385157] The word esch was not found in the dictionary of the acoustic model /var/containers/Bundle/Application/EF8E1618-8403-456C-8666-01B9C11D392E/TestOpenEars.app/AcousticModelGerman.bundle. Now using the fallback method to look it up. If this is happening more frequently than you would expect, likely causes can be that you are entering words in another language from the one you are recognizing, or that there are symbols (including numbers) that need to be spelled out or cleaned up, or you are using your own acoustic model and there is an issue with either its phonetic dictionary or it lacks a g2p file. Please get in touch at the forums for assistance with the last two possible issues.
    2018-04-24 15:26:59.566934+0300 TestOpenEars[5111:2385157] the graphemes "@ ss" were created for the word esch using the fallback method.
    2018-04-24 15:26:59.571940+0300 TestOpenEars[5111:2385157] The word frey was not found in the dictionary of the acoustic model /var/containers/Bundle/Application/EF8E1618-8403-456C-8666-01B9C11D392E/TestOpenEars.app/AcousticModelGerman.bundle. Now using the fallback method to look it up. If this is happening more frequently than you would expect, likely causes can be that you are entering words in another language from the one you are recognizing, or that there are symbols (including numbers) that need to be spelled out or cleaned up, or you are using your own acoustic model and there is an issue with either its phonetic dictionary or it lacks a g2p file. Please get in touch at the forums for assistance with the last two possible issues.
    2018-04-24 15:26:59.572382+0300 TestOpenEars[5111:2385157] the graphemes "f r @ ii" were created for the word frey using the fallback method.
    2018-04-24 15:26:59.577309+0300 TestOpenEars[5111:2385157] The word no was not found in the dictionary of the acoustic model /var/containers/Bundle/Application/EF8E1618-8403-456C-8666-01B9C11D392E/TestOpenEars.app/AcousticModelGerman.bundle. Now using the fallback method to look it up. If this is happening more frequently than you would expect, likely causes can be that you are entering words in another language from the one you are recognizing, or that there are symbols (including numbers) that need to be spelled out or cleaned up, or you are using your own acoustic model and there is an issue with either its phonetic dictionary or it lacks a g2p file. Please get in touch at the forums for assistance with the last two possible issues.
    2018-04-24 15:26:59.577520+0300 TestOpenEars[5111:2385157] the graphemes "n oo" were created for the word no using the fallback method.
    2018-04-24 15:26:59.577594+0300 TestOpenEars[5111:2385157] I'm done running performDictionaryLookup and it took 0.077345 seconds
    2018-04-24 15:26:59.620226+0300 TestOpenEars[5111:2385157] Attempting to start listening session from startListeningWithLanguageModelAtPath:
    2018-04-24 15:26:59.624967+0300 TestOpenEars[5111:2385157] User gave mic permission for this app.
    2018-04-24 15:26:59.625738+0300 TestOpenEars[5111:2385157] setSecondsOfSilence wasn't set, using default of 0.700000.
    2018-04-24 15:26:59.626794+0300 TestOpenEars[5111:2385361] Starting listening.
    2018-04-24 15:26:59.627043+0300 TestOpenEars[5111:2385361] About to set up audio session
    2018-04-24 15:26:59.912222+0300 TestOpenEars[5111:2385373] Audio route has changed for the following reason:
    2018-04-24 15:26:59.924468+0300 TestOpenEars[5111:2385361] Creating audio session with default settings.
    2018-04-24 15:26:59.924526+0300 TestOpenEars[5111:2385361] Done setting audio session category.
    2018-04-24 15:26:59.934688+0300 TestOpenEars[5111:2385361] Done setting preferred sample rate to 16000.000000 – now the real sample rate is 48000.000000
    2018-04-24 15:26:59.935107+0300 TestOpenEars[5111:2385361] number of channels is already the preferred number of 1 so not setting it.
    2018-04-24 15:26:59.935530+0300 TestOpenEars[5111:2385373] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
    2018-04-24 15:26:59.938452+0300 TestOpenEars[5111:2385361] Done setting session's preferred I/O buffer duration to 0.128000 – now the actual buffer duration is 0.085333
    2018-04-24 15:26:59.938717+0300 TestOpenEars[5111:2385361] Done setting up audio session
    2018-04-24 15:26:59.939075+0300 TestOpenEars[5111:2385361] About to set up audio IO unit in a session with a sample rate of 48000.000000, a channel number of 1 and a buffer duration of 0.085333.
    2018-04-24 15:26:59.939645+0300 TestOpenEars[5111:2385373] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: "MicrophoneBuiltIn". Output route or routes: "Speaker">. The previous route before changing to this route was "<AVAudioSessionRouteDescription: 0x1c421ade0, 
    inputs = (null); 
    outputs = (
        "<AVAudioSessionPortDescription: 0x1c421ada0, type = Speaker; name = Speaker; UID = Speaker; selectedDataSource = (null)>"
    )>".
    2018-04-24 15:27:00.043139+0300 TestOpenEars[5111:2385373] Audio route has changed for the following reason:
    2018-04-24 15:27:00.044041+0300 TestOpenEars[5111:2385373] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
    2018-04-24 15:27:00.048596+0300 TestOpenEars[5111:2385373] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: "MicrophoneBuiltIn". Output route or routes: "Speaker">. The previous route before changing to this route was "<AVAudioSessionRouteDescription: 0x1c421ade0, 
    inputs = (
        "<AVAudioSessionPortDescription: 0x1c421ad80, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Bottom>"
    ); 
    outputs = (
        "<AVAudioSessionPortDescription: 0x1c421aee0, type = Receiver; name = Receiver; UID = Built-In Receiver; selectedDataSource = (null)>"
    )>".
    2018-04-24 15:27:00.083178+0300 TestOpenEars[5111:2385361] Done setting up audio unit
    2018-04-24 15:27:00.083242+0300 TestOpenEars[5111:2385361] About to start audio IO unit
    2018-04-24 15:27:00.310893+0300 TestOpenEars[5111:2385361] Done starting audio unit
    INFO: pocketsphinx.c(145): Parsed model-specific feature parameters from /var/containers/Bundle/Application/EF8E1618-8403-456C-8666-01B9C11D392E/TestOpenEars.app/AcousticModelGerman.bundle/feat.params
    Current configuration:
    [NAME]			[DEFLT]		[VALUE]
    -agc			none		none
    -agcthresh		2.0		2.000000e+00
    -allphone				
    -allphone_ci		no		no
    -alpha			0.97		9.700000e-01
    -ascale			20.0		2.000000e+01
    -aw			1		1
    -backtrace		no		no
    -beam			1e-48		1.000000e-48
    -bestpath		yes		yes
    -bestpathlw		9.5		9.500000e+00
    -ceplen			13		13
    -cmn			current		current
    -cmninit		8.0		30
    -compallsen		no		no
    -debug					0
    -dict					/var/mobile/Containers/Data/Application/36B29079-C9F8-4804-BE51-0BDCE309BB18/Library/Caches/GermanModel.dic
    -dictcase		no		no
    -dither			no		no
    -doublebw		no		no
    -ds			1		1
    -fdict					/var/containers/Bundle/Application/EF8E1618-8403-456C-8666-01B9C11D392E/TestOpenEars.app/AcousticModelGerman.bundle/noisedict
    -feat			1s_c_d_dd	1s_c_d_dd
    -featparams				/var/containers/Bundle/Application/EF8E1618-8403-456C-8666-01B9C11D392E/TestOpenEars.app/AcousticModelGerman.bundle/feat.params
    -fillprob		1e-8		1.000000e-08
    -frate			100		100
    -fsg					
    -fsgusealtpron		yes		yes
    -fsgusefiller		yes		yes
    -fwdflat		yes		yes
    -fwdflatbeam		1e-64		1.000000e-64
    -fwdflatefwid		4		4
    -fwdflatlw		8.5		8.500000e+00
    -fwdflatsfwin		25		25
    -fwdflatwbeam		7e-29		7.000000e-29
    -fwdtree		yes		yes
    -hmm					/var/containers/Bundle/Application/EF8E1618-8403-456C-8666-01B9C11D392E/TestOpenEars.app/AcousticModelGerman.bundle
    -input_endian		little		little
    -jsgf					/var/mobile/Containers/Data/Application/36B29079-C9F8-4804-BE51-0BDCE309BB18/Library/Caches/GermanModel.gram
    -keyphrase				
    -kws					
    -kws_delay		10		10
    -kws_plp		1e-1		1.000000e-01
    -kws_threshold		1		1.000000e+00
    -latsize		5000		5000
    -lda					
    -ldadim			0		0
    -lifter			0		22
    -lm					
    -lmctl					
    -lmname					
    -logbase		1.0001		1.000100e+00
    -logfn					
    -logspec		no		no
    -lowerf			133.33334	1.300000e+02
    -lpbeam			1e-40		1.000000e-40
    -lponlybeam		7e-29		7.000000e-29
    -lw			6.5		1.000000e+00
    -maxhmmpf		30000		30000
    -maxwpf			-1		-1
    -mdef					/var/containers/Bundle/Application/EF8E1618-8403-456C-8666-01B9C11D392E/TestOpenEars.app/AcousticModelGerman.bundle/mdef
    -mean					/var/containers/Bundle/Application/EF8E1618-8403-456C-8666-01B9C11D392E/TestOpenEars.app/AcousticModelGerman.bundle/means
    -mfclogdir				
    -min_endfr		0		0
    -mixw					/var/containers/Bundle/Application/EF8E1618-8403-456C-8666-01B9C11D392E/TestOpenEars.app/AcousticModelGerman.bundle/mixture_weights
    -mixwfloor		0.0000001	1.000000e-07
    -mllr					
    -mmap			yes		yes
    -ncep			13		13
    -nfft			512		512
    -nfilt			40		25
    -nwpen			1.0		1.000000e+00
    -pbeam			1e-48		1.000000e-48
    -pip			1.0		1.000000e+00
    -pl_beam		1e-10		1.000000e-10
    -pl_pbeam		1e-10		1.000000e-10
    -pl_pip			1.0		1.000000e+00
    -pl_weight		3.0		3.000000e+00
    -pl_window		5		5
    -rawlogdir				
    -remove_dc		no		no
    -remove_noise		yes		yes
    -remove_silence		yes		yes
    -round_filters		yes		yes
    -samprate		16000		1.600000e+04
    -seed			-1		-1
    -sendump				
    -senlogdir				
    -senmgau				
    -silprob		0.005		5.000000e-03
    -smoothspec		no		no
    -svspec					
    -tmat					/var/containers/Bundle/Application/EF8E1618-8403-456C-8666-01B9C11D392E/TestOpenEars.app/AcousticModelGerman.bundle/transition_matrices
    -tmatfloor		0.0001		1.000000e-04
    -topn			4		4
    -topn_beam		0		0
    -toprule				
    -transform		legacy		dct
    -unit_area		yes		yes
    -upperf			6855.4976	6.800000e+03
    -uw			1.0		1.000000e+00
    -vad_postspeech		50		69
    -vad_prespeech		20		10
    -vad_startspeech	10		10
    -vad_threshold		2.0		3.200000e+00
    -var					/var/containers/Bundle/Application/EF8E1618-8403-456C-8666-01B9C11D392E/TestOpenEars.app/AcousticModelGerman.bundle/variances
    -varfloor		0.0001		1.000000e-04
    -varnorm		no		no
    -verbose		no		no
    -warp_params				
    -warp_type		inverse_linear	inverse_linear
    -wbeam			7e-29		7.000000e-29
    -wip			0.65		6.500000e-01
    -wlen			0.025625	2.562500e-02
    
    INFO: feat.c(715): Initializing feature stream to type: '1s_c_d_dd', ceplen=13, CMN='current', VARNORM='no', AGC='none'
    INFO: cmn.c(143): mean[0]= 12.00, mean[1..12]= 0.0
    INFO: mdef.c(518): Reading model definition: /var/containers/Bundle/Application/EF8E1618-8403-456C-8666-01B9C11D392E/TestOpenEars.app/AcousticModelGerman.bundle/mdef
    INFO: bin_mdef.c(181): Allocating 53834 * 8 bytes (420 KiB) for CD tree
    INFO: tmat.c(206): Reading HMM transition probability matrices: /var/containers/Bundle/Application/EF8E1618-8403-456C-8666-01B9C11D392E/TestOpenEars.app/AcousticModelGerman.bundle/transition_matrices
    INFO: acmod.c(117): Attempting to use PTM computation module
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/EF8E1618-8403-456C-8666-01B9C11D392E/TestOpenEars.app/AcousticModelGerman.bundle/means
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size: 
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/EF8E1618-8403-456C-8666-01B9C11D392E/TestOpenEars.app/AcousticModelGerman.bundle/variances
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size: 
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(354): 7100 variance values floored
    INFO: ptm_mgau.c(801): Number of codebooks exceeds 256: 2129
    INFO: acmod.c(119): Attempting to use semi-continuous computation module
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/EF8E1618-8403-456C-8666-01B9C11D392E/TestOpenEars.app/AcousticModelGerman.bundle/means
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size: 
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/EF8E1618-8403-456C-8666-01B9C11D392E/TestOpenEars.app/AcousticModelGerman.bundle/variances
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size: 
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(354): 7100 variance values floored
    INFO: acmod.c(121): Falling back to general multi-stream GMM computation
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/EF8E1618-8403-456C-8666-01B9C11D392E/TestOpenEars.app/AcousticModelGerman.bundle/means
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size: 
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/EF8E1618-8403-456C-8666-01B9C11D392E/TestOpenEars.app/AcousticModelGerman.bundle/variances
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size: 
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(354): 7100 variance values floored
    INFO: ms_senone.c(149): Reading senone mixture weights: /var/containers/Bundle/Application/EF8E1618-8403-456C-8666-01B9C11D392E/TestOpenEars.app/AcousticModelGerman.bundle/mixture_weights
    INFO: ms_senone.c(200): Truncating senone logs3(pdf) values by 10 bits
    INFO: ms_senone.c(207): Not transposing mixture weights in memory
    INFO: ms_senone.c(268): Read mixture weights for 2129 senones: 1 features x 32 codewords
    INFO: ms_senone.c(320): Mapping senones to individual codebooks
    INFO: ms_mgau.c(141): The value of topn: 4
    INFO: phone_loop_search.c(114): State beam -225 Phone exit beam -225 Insertion penalty 0
    INFO: dict.c(320): Allocating 4104 * 32 bytes (128 KiB) for word entries
    INFO: dict.c(333): Reading main dictionary: /var/mobile/Containers/Data/Application/36B29079-C9F8-4804-BE51-0BDCE309BB18/Library/Caches/GermanModel.dic
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(336): 4 words read
    INFO: dict.c(358): Reading filler dictionary: /var/containers/Bundle/Application/EF8E1618-8403-456C-8666-01B9C11D392E/TestOpenEars.app/AcousticModelGerman.bundle/noisedict
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(361): 4 words read
    INFO: dict2pid.c(396): Building PID tables for dictionary
    INFO: dict2pid.c(406): Allocating 43^3 * 2 bytes (155 KiB) for word-initial triphones
    INFO: dict2pid.c(132): Allocated 44720 bytes (43 KiB) for word-final triphones
    INFO: dict2pid.c(196): Allocated 44720 bytes (43 KiB) for single-phone word triphones
    INFO: jsgf.c(691): Defined rule: <GermanModel.g00000>
    INFO: jsgf.c(691): Defined rule: PUBLIC <GermanModel.rule_0>
    INFO: fsg_model.c(215): Computing transitive closure for null transitions
    INFO: fsg_model.c(277): 0 null transitions added
    INFO: fsg_search.c(227): FSG(beam: -1080, pbeam: -1080, wbeam: -634; wip: -5, pip: 0)
    INFO: fsg_model.c(428): Adding silence transitions for <sil> to FSG
    INFO: fsg_model.c(448): Added 5 silence word transitions
    INFO: fsg_model.c(428): Adding silence transitions for <sil> to FSG
    INFO: fsg_model.c(448): Added 5 silence word transitions
    INFO: fsg_search.c(173): Added 0 alternate word transitions
    INFO: fsg_lextree.c(110): Allocated 440 bytes (0 KiB) for left and right context phones
    INFO: fsg_lextree.c(256): 17 HMM nodes in lextree (11 leaves)
    INFO: fsg_lextree.c(259): Allocated 2448 bytes (2 KiB) for all lextree nodes
    INFO: fsg_lextree.c(262): Allocated 1584 bytes (1 KiB) for lextree leafnodes
    2018-04-24 15:27:01.081355+0300 TestOpenEars[5111:2385361] There is no CMN plist so we are using the fresh CMN value 30.000000.
    2018-04-24 15:27:01.081752+0300 TestOpenEars[5111:2385361] Listening.
    2018-04-24 15:27:01.082081+0300 TestOpenEars[5111:2385361] Project has these words or phrases in its dictionary:
    do
    esch
    frey
    no
    2018-04-24 15:27:01.082136+0300 TestOpenEars[5111:2385361] Recognition loop has started
    2018-04-24 15:27:01.082386+0300 TestOpenEars[5111:2385157] Successfully started listening session from startListeningWithLanguageModelAtPath:
    Local callback: Pocketsphinx is now listening.
    Local callback: Pocketsphinx started.
    2018-04-24 15:27:01.107309+0300 TestOpenEars[5111:2385365] Speech detected...
    2018-04-24 15:27:01.190684+0300 TestOpenEars[5111:2385157] Status bar could not find cached time string image. Rendering in-process.
    Local callback: Pocketsphinx has detected speech.
    2018-04-24 15:27:02.115525+0300 TestOpenEars[5111:2385365] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 30.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 52.05  4.29 -6.30  6.55  2.33  7.62  1.16  3.38  0.15  2.67  4.63 -0.61  7.11 >
    INFO: fsg_search.c(843): 98 frames, 653 HMMs (6/fr), 2042 senones (20/fr), 256 history entries (2/fr)
    
    ERROR: "fsg_search.c", line 913: Final result does not match the grammar in frame 98
    2018-04-24 15:27:02.117015+0300 TestOpenEars[5111:2385365] Pocketsphinx heard "" with a score of (0) and an utterance ID of 0.
    2018-04-24 15:27:02.117123+0300 TestOpenEars[5111:2385365] Hypothesis was null so we aren't returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController's property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
    2018-04-24 15:27:04.500792+0300 TestOpenEars[5111:2385365] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    2018-04-24 15:27:06.311372+0300 TestOpenEars[5111:2385365] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 52.05  4.29 -6.30  6.55  2.33  7.62  1.16  3.38  0.15  2.67  4.63 -0.61  7.11 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 58.15 13.64 -1.14 13.16 -1.76  4.38 -0.40 -0.17  2.94 -3.08  6.71 -0.77  0.66 >
    INFO: fsg_search.c(843): 182 frames, 1377 HMMs (7/fr), 3920 senones (21/fr), 578 history entries (3/fr)
    
    ERROR: "fsg_search.c", line 913: Final result does not match the grammar in frame 182
    2018-04-24 15:27:06.312908+0300 TestOpenEars[5111:2385365] Pocketsphinx heard "" with a score of (0) and an utterance ID of 1.
    2018-04-24 15:27:06.312985+0300 TestOpenEars[5111:2385365] Hypothesis was null so we aren't returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController's property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
    2018-04-24 15:27:09.497641+0300 TestOpenEars[5111:2385365] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    2018-04-24 15:27:10.950073+0300 TestOpenEars[5111:2385365] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 58.15 13.64 -1.14 13.16 -1.76  4.38 -0.40 -0.17  2.94 -3.08  6.71 -0.77  0.66 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 59.76 10.97 -1.90 16.73 -0.71  2.84 -0.63 -0.64  2.90 -4.16  8.11 -1.47  1.15 >
    INFO: fsg_search.c(843): 152 frames, 1313 HMMs (8/fr), 3563 senones (23/fr), 481 history entries (3/fr)
    
    2018-04-24 15:27:10.951331+0300 TestOpenEars[5111:2385365] Pocketsphinx heard "esch do no frey" with a score of (0) and an utterance ID of 2.
    Local callback: The received hypothesis is esch do no frey with a score of 0 and an ID of 2
    2018-04-24 15:27:14.027713+0300 TestOpenEars[5111:2385365] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    2018-04-24 15:27:15.556479+0300 TestOpenEars[5111:2385365] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 59.76 10.97 -1.90 16.73 -0.71  2.84 -0.63 -0.64  2.90 -4.16  8.11 -1.47  1.15 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 61.02 10.07 -1.94 18.70 -1.04  2.18 -0.03  0.14  2.00 -4.99  8.23 -1.17  0.73 >
    INFO: fsg_search.c(843): 149 frames, 1388 HMMs (9/fr), 3631 senones (24/fr), 529 history entries (3/fr)
    
    2018-04-24 15:27:15.557777+0300 TestOpenEars[5111:2385365] Pocketsphinx heard "esch do no frey" with a score of (0) and an utterance ID of 3.
    Local callback: The received hypothesis is esch do no frey with a score of 0 and an ID of 3
    2018-04-24 15:27:18.889630+0300 TestOpenEars[5111:2385365] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    2018-04-24 15:27:20.689538+0300 TestOpenEars[5111:2385365] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 61.02 10.07 -1.94 18.70 -1.04  2.18 -0.03  0.14  2.00 -4.99  8.23 -1.17  0.73 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 61.56  8.80  0.48 17.99 -1.61  1.57  1.08  0.47  2.06 -4.60  8.51 -1.48  0.24 >
    INFO: fsg_search.c(843): 191 frames, 1669 HMMs (8/fr), 4600 senones (24/fr), 474 history entries (2/fr)
    
    2018-04-24 15:27:20.690857+0300 TestOpenEars[5111:2385365] Pocketsphinx heard "esch do no frey" with a score of (0) and an utterance ID of 4.
    Local callback: The received hypothesis is esch do no frey with a score of 0 and an ID of 4
    INFO: cmn_prior.c(99): cmn_prior_update: from < 61.56  8.80  0.48 17.99 -1.61  1.57  1.08  0.47  2.06 -4.60  8.51 -1.48  0.24 >
    INFO: cmn_prior.c(116): cmn_prior_update: to   < 61.76  8.78  0.35 18.48 -1.57  1.41  0.91  0.39  2.02 -4.70  8.28 -1.45  0.26 >
    2018-04-24 15:27:22.731373+0300 TestOpenEars[5111:2385365] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    2018-04-24 15:27:24.515545+0300 TestOpenEars[5111:2385365] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 61.76  8.78  0.35 18.48 -1.57  1.41  0.91  0.39  2.02 -4.70  8.28 -1.45  0.26 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 61.14  6.47  2.57 17.69 -1.60  2.08  2.04  1.05  0.54 -4.29  9.29 -2.48 -0.17 >
    INFO: fsg_search.c(843): 196 frames, 1392 HMMs (7/fr), 3717 senones (18/fr), 497 history entries (2/fr)
    
    2018-04-24 15:27:24.516252+0300 TestOpenEars[5111:2385365] Pocketsphinx heard "esch do no frey" with a score of (0) and an utterance ID of 5.
    Local callback: The received hypothesis is esch do no frey with a score of 0 and an ID of 5
    2018-04-24 15:27:26.128084+0300 TestOpenEars[5111:2385365] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    INFO: cmn_prior.c(99): cmn_prior_update: from < 61.14  6.47  2.57 17.69 -1.60  2.08  2.04  1.05  0.54 -4.29  9.29 -2.48 -0.17 >
    INFO: cmn_prior.c(116): cmn_prior_update: to   < 62.24  4.73  2.29 18.02 -2.27  1.41  2.70  1.52  0.78 -4.73  9.26 -2.53  0.30 >
    2018-04-24 15:27:28.084682+0300 TestOpenEars[5111:2385365] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 62.24  4.73  2.29 18.02 -2.27  1.41  2.70  1.52  0.78 -4.73  9.26 -2.53  0.30 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 59.68  4.42  2.95 16.81 -0.93  2.99  2.93  0.78 -0.09 -3.93  9.04 -2.78  0.26 >
    INFO: fsg_search.c(843): 196 frames, 1654 HMMs (8/fr), 4578 senones (23/fr), 563 history entries (2/fr)
    
    2018-04-24 15:27:28.087721+0300 TestOpenEars[5111:2385365] Pocketsphinx heard "esch do no frey" with a score of (0) and an utterance ID of 6.
    Local callback: The received hypothesis is esch do no frey with a score of 0 and an ID of 6
    2018-04-24 15:27:29.222305+0300 TestOpenEars[5111:2385365] Speech detected...
    Local callback: Pocketsphinx has detected speech.
    2018-04-24 15:27:30.032028+0300 TestOpenEars[5111:2385365] End of speech detected...
    Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.
    INFO: cmn_prior.c(131): cmn_prior_update: from < 59.68  4.42  2.95 16.81 -0.93  2.99  2.93  0.78 -0.09 -3.93  9.04 -2.78  0.26 >
    INFO: cmn_prior.c(149): cmn_prior_update: to   < 58.59  3.00  2.81 16.73  0.59  3.92  2.47  0.95  1.43 -2.86  8.13 -2.14  0.18 >
    INFO: fsg_search.c(843): 94 frames, 701 HMMs (7/fr), 1954 senones (20/fr), 268 history entries (2/fr)
    
    ERROR: "fsg_search.c", line 913: Final result does not match the grammar in frame 94
    2018-04-24 15:27:30.033297+0300 TestOpenEars[5111:2385365] Pocketsphinx heard "" with a score of (0) and an utterance ID of 7.
    2018-04-24 15:27:30.033598+0300 TestOpenEars[5111:2385365] Hypothesis was null so we aren't returning it. If you want null hypotheses to also be returned, set OEPocketsphinxController's property returnNullHypotheses to TRUE before starting OEPocketsphinxController.
    #1032368
    Halle Winkler
    Politepix

    OK, let’s see what happens when you make the following changes to the three projects.

    For your regular grammar project and for your RuleORama project, please adjust this code:

    
            let words = ["esch do no frey"]
            
            // let err: Error! = lmGenerator.generateLanguageModel(from: words, withFilesNamed: name, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
            
            // let err: Error! = lmGenerator.generateGrammar(from: [OneOfTheseWillBeSaidOnce : words], withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
            
            let err: Error! = lmGenerator.generateFastGrammar(from: [OneOfTheseWillBeSaidOnce : words], withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))

    so that it wraps the grammar instructions in an enclosing ThisWillBeSaidOnce declaration, like so:

    
            let words = ["esch do no frey"]
            let grammar = [
    			ThisWillBeSaidOnce : [
    				[ OneOfTheseWillBeSaidOnce : words]
    			]
    		]
    
            // let err: Error! = lmGenerator.generateGrammar(from: grammar, withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
            
            let err: Error! = lmGenerator.generateFastGrammar(from: grammar, withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))

    Uncomment whichever of the generateGrammar/generateFastGrammar lines is to be used by the respective grammar project.
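    For extra clarity, here is a rough sketch of how the whole grammar flow would fit together once generation succeeds – it reuses the variable names from your VC (lmGenerator, fileName, accusticModelName), so treat those as assumptions and adjust them to your actual code:

        let words = ["esch do no frey"]
        let grammar = [
            ThisWillBeSaidOnce : [
                [ OneOfTheseWillBeSaidOnce : words]
            ]
        ]

        // generateGrammar for the plain OpenEars project,
        // generateFastGrammar for the RuleORama project.
        let err: Error! = lmGenerator.generateGrammar(from: grammar, withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))

        if err == nil {
            // A grammar is JSGF, so fetch the grammar path and start
            // listening with languageModelIsJSGF set to true.
            let lmPath = lmGenerator.pathToSuccessfullyGeneratedGrammar(withRequestedName: fileName)
            let dictPath = lmGenerator.pathToSuccessfullyGeneratedDictionary(withRequestedName: fileName)
            OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName), languageModelIsJSGF: true)
        } else {
            print("Error while creating the grammar: \(String(describing: err))")
        }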

    For your Rejecto project, please open AcousticModelGerman.bundle/LanguageModelGeneratorLookupList.text at whatever location you are really linking to it (please be ABSOLUTELY sure you are performing this change on the real acoustic model that your project links to and moves into your app bundle, wherever that is located, or our troubleshooting work on this project will be guaranteed to be unsuccessful) and look for the following two lines:

    es	ee s
    esf	ee s f

    and change them to this:

    es	ee s
    eschdonofrey	@ ss d oo n oo f r @ ii
    esf	ee s f

    and then you have to change your Rejecto language model generation code (which you have never shown me here) so that it just creates a model for the single word “eschdonofrey”. Do not make this change to your grammar projects. For contrast, you can also try changing the acoustic model entry to this instead with your Rejecto project, with slightly different phonemes:

    es	ee s
    eschdonofrey	ee ss d oo n oo f r ee ii
    esf	ee s f

    If none of these have better results, this will be the point at which we will have to stop the troubleshooting process, because it is guaranteed to get confused results if we try to further troubleshoot three different implementations in parallel which have hosted other different implementations at different times. If one of these projects has improved results, we can do a little bit more investigation of it, under the condition that the other two projects are put away and it is possible for me to rely on the fact that we are only investigating one clean project at a time moving forward. Let me know how it goes!
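    And since I haven’t seen your Rejecto generation code, here is only a rough sketch of what the single-word model generation and startup could look like – the exact generateRejectingLanguageModel parameters below are assumptions, so substitute your real call:

        let words = ["eschdonofrey"]
        // Assumed parameter values – keep whatever exclusion/vowel/weight
        // options your project actually uses.
        let err: Error! = lmGenerator.generateRejectingLanguageModel(from: words, withFilesNamed: fileName, withOptionalExclusions: nil, usingVowelsOnly: false, withWeight: nil, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))

        if err == nil {
            // Rejecto generates a probabilistic language model, not a JSGF
            // grammar, so fetch the language model path and start listening
            // with languageModelIsJSGF set to false.
            let lmPath = lmGenerator.pathToSuccessfullyGeneratedLanguageModel(withRequestedName: fileName)
            let dictPath = lmGenerator.pathToSuccessfullyGeneratedDictionary(withRequestedName: fileName)
            OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName), languageModelIsJSGF: false)
        }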

    #1032374
    iKK
    Participant

    The forum entries I submit are no longer shown (in any of my browsers…). This is unfortunate. Can you still read them?

    #1032383
    Halle Winkler
    Politepix

    They are being marked as spam due to the multiple external links. Please keep all the discussion in here so it is a useful resource to other people with the same issue. I recommend doing this without all the confusion and complexity by returning to the premise of troubleshooting exactly one case at a time. You can choose which to begin with.

    #1032387
    Halle Winkler
    Politepix

    Please put all your documentation of what is going on in this forum, thank you. The Github repo will change or disappear (it has already disappeared and then returned with different content in the course of just this discussion, so there is a previous link to it which is already out of date), and that would make this discussion useless for anyone who has an issue similar to any of the many questions you are asking about.

    #1032389
    iKK
    Participant

    Ok – I just feel that the logs and the VC’s code make the forum tremendously long, and links would be nicer somehow. I can put it all into a new GitHub repo if you prefer, or I can paste the huge logs in this forum. What do you prefer?

    #1032390
    Halle Winkler
    Politepix

    Paste the logs and VC contents in this forum, thank you. There are many other discussions here with big logs, and they provide a way for searchers to get hits for the specific errors and problems they are troubleshooting without my having to answer the same questions many times, as well as a way for me to go back and find either bugs or points of confusion. When all of that is hidden away in a repo, it will eventually disappear as the repo changes or is removed, or cause support requests to occur in that repo, and it won’t help anyone solve their problem by following up with “I got the same log result but your fix isn’t affecting my case”. It’s a very important part of there being visibility for solutions.

    #1032391
    Halle Winkler
    Politepix

    If you want to collapse the logs so that they aren’t as visually big, you can put spoiler tags around them:
    [spoiler]
    [/spoiler]
    this will make it possible to open and close them so they don’t take up vertical space if that bothers you.

    #1032392
    iKK
    Participant

    Ok – all in this forum:

    Let’s continue with Rejecto:

    (I changed AcousticModelGerman.bundle/LanguageModelGeneratorLookupList.text to what you suggested.)

    You last wrote: “so that it just creates a model for the single word ‘eschdonofrey’”.

    So I did:

    let words = ["eschdonofrey"]

    …with its language-model creation as can be seen in the next forum entry (for clarity I place it in a separate entry).

    But this leads to an error, as can be seen in its log
    (–> this log is also placed in its own forum entry to make things easier to read).

    If I change the words array back to let words = ["esch do no frey"] then there is no error – but then I feel that I did not fully follow your instructions.

    What is the correct words array for Rejecto with our new LanguageModelGeneratorLookupList.text?

    And if it is let words = ["eschdonofrey"], what is the counter-measure to its error?

    #1032394
    iKK
    Participant

    ViewController code with the language-model creation:

    import UIKit
    
    class ViewController: UIViewController, OEEventsObserverDelegate {
        
        var openEarsEventsObserver = OEEventsObserver()
    
        override func viewDidLoad() {
            super.viewDidLoad()
            // Do any additional setup after loading the view, typically from a nib.
            
            // ************* Necessary for logging **************************
            OELogging.startOpenEarsLogging() //Uncomment to receive full OpenEars logging in case of any unexpected results.
            OEPocketsphinxController.sharedInstance().verbosePocketSphinx = true
            // ************* Necessary for logging **************************
            
            self.openEarsEventsObserver.delegate = self
            
            let lmGenerator = OELanguageModelGenerator()
            let accusticModelName = "AcousticModelGerman"
            
            let fileName = "GermanModel"
            
            // let words = ["esch do no frey"]
            let words = ["eschdonofrey"]
            
            // let err: Error! = lmGenerator.generateLanguageModel(from: words, withFilesNamed: name, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
            
            let err: Error! = lmGenerator.generateRejectingLanguageModel(from: words, withFilesNamed: fileName, withOptionalExclusions: nil, usingVowelsOnly: true, withWeight: 1.0, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
            
            var lmPath = ""
            var dictPath = ""
            
            if(err != nil) {
                print("Error while creating initial language model: \(err)")
            } else {
                // lmPath = lmGenerator.pathToSuccessfullyGeneratedLanguageModel(withRequestedName: fileName)
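                // Note: the line below fetches the path to a generated JSGF grammar;
                // the commented-out line above would instead fetch the ARPA language-model path.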
                lmPath = lmGenerator.pathToSuccessfullyGeneratedGrammar(withRequestedName: fileName)
                dictPath = lmGenerator.pathToSuccessfullyGeneratedDictionary(withRequestedName: fileName)
            }
            
            do {
                try OEPocketsphinxController.sharedInstance().setActive(true) // Setting the shared OEPocketsphinxController active is necessary before any of its properties are accessed.
            } catch {
                print("Error: it wasn't possible to set the shared instance to active: \"\(error)\"")
            }
            
            OEPocketsphinxController.sharedInstance().vadThreshold = 3.2
            OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName), languageModelIsJSGF: true)
        }
    
        override func didReceiveMemoryWarning() {
            super.didReceiveMemoryWarning()
            // Dispose of any resources that can be recreated.
        }
        
        func pocketsphinxDidReceiveHypothesis(_ hypothesis: String!, recognitionScore: String!, utteranceID: String!) { // Something was heard
            print("Local callback: The received hypothesis is \(hypothesis!) with a score of \(recognitionScore!) and an ID of \(utteranceID!)")
        }
        
        // An optional delegate method of OEEventsObserver which informs that the Pocketsphinx recognition loop has entered its actual loop.
        // This might be useful in debugging a conflict between another sound class and Pocketsphinx.
        func pocketsphinxRecognitionLoopDidStart() {
            print("Local callback: Pocketsphinx started.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is now listening for speech.
        func pocketsphinxDidStartListening() {
            print("Local callback: Pocketsphinx is now listening.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected speech and is starting to process it.
        func pocketsphinxDidDetectSpeech() {
            print("Local callback: Pocketsphinx has detected speech.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected a second of silence, indicating the end of an utterance.
        func pocketsphinxDidDetectFinishedSpeech() {
            print("Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx has exited its recognition loop, most
        // likely in response to the OEPocketsphinxController being told to stop listening via the stopListening method.
        func pocketsphinxDidStopListening() {
            print("Local callback: Pocketsphinx has stopped listening.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop but it is not
        // Going to react to speech until listening is resumed.  This can happen as a result of Flite speech being
        // in progress on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
        // or as a result of the OEPocketsphinxController being told to suspend recognition via the suspendRecognition method.
        func pocketsphinxDidSuspendRecognition() {
            print("Local callback: Pocketsphinx has suspended recognition.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop and after recognition
        // having been suspended it is now resuming.  This can happen as a result of Flite speech completing
        // on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
        // or as a result of the OEPocketsphinxController being told to resume recognition via the resumeRecognition method.
        func pocketsphinxDidResumeRecognition() {
            print("Local callback: Pocketsphinx has resumed recognition.") // Log it.
        }
        
        // An optional delegate method which informs that Pocketsphinx switched over to a new language model at the given URL in the course of
        // recognition. This does not imply that it is a valid file or that recognition will be successful using the file.
        func pocketsphinxDidChangeLanguageModel(toFile newLanguageModelPathAsString: String!, andDictionary newDictionaryPathAsString: String!) {
            
            print("Local callback: Pocketsphinx is now using the following language model: \n\(newLanguageModelPathAsString!) and the following dictionary: \(newDictionaryPathAsString!)")
        }
        
        // An optional delegate method of OEEventsObserver which informs that Flite is speaking, most likely to be useful if debugging a
        // complex interaction between sound classes. You don't have to do anything yourself in order to prevent Pocketsphinx from listening to Flite talk and trying to recognize the speech.
        func fliteDidStartSpeaking() {
            print("Local callback: Flite has started speaking") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Flite is finished speaking, most likely to be useful if debugging a
        // complex interaction between sound classes.
        func fliteDidFinishSpeaking() {
            print("Local callback: Flite has finished speaking") // Log it.
        }
        
        func pocketSphinxContinuousSetupDidFail(withReason reasonForFailure: String!) { // This can let you know that something went wrong with the recognition loop startup. Turn on [OELogging startOpenEarsLogging] to learn why.
            print("Local callback: Setting up the continuous recognition loop has failed for the reason \(reasonForFailure), please turn on OELogging.startOpenEarsLogging() to learn more.") // Log it.
        }
        
        func pocketSphinxContinuousTeardownDidFail(withReason reasonForFailure: String!) { // This can let you know that something went wrong with the recognition loop startup. Turn on OELogging.startOpenEarsLogging() to learn why.
            print("Local callback: Tearing down the continuous recognition loop has failed for the reason \(reasonForFailure)") // Log it.
        }
        
        /** Pocketsphinx couldn't start because it has no mic permissions (will only be returned on iOS7 or later).*/
        func pocketsphinxFailedNoMicPermissions() {
            print("Local callback: The user has never set mic permissions or denied permission to this app's mic, so listening will not start.")
        }
        
        /** The user prompt to get mic permissions, or a check of the mic permissions, has completed with a true or a false result  (will only be returned on iOS7 or later).*/
        
        func micPermissionCheckCompleted(withResult: Bool) {
            print("Local callback: mic check completed.")
        }
    }
    #1032395
    iKK
    Participant

    And the Rejecto error log when let words = ["eschdonofrey"]:

    2018-04-26 16:12:57.394517+0200 TestOpenEars[948:251080] Starting OpenEars logging for OpenEars version 2.506 on 64-bit device (or build): iPhone running iOS version: 11.300000
    2018-04-26 16:12:57.394668+0200 TestOpenEars[948:251080] Creating shared instance of OEPocketsphinxController
    2018-04-26 16:12:57.400109+0200 TestOpenEars[948:251080] Rejecto version 2.500000
    2018-04-26 16:12:57.400656+0200 TestOpenEars[948:251080] Since there is no cached version, loading the g2p model for the acoustic model called AcousticModelGerman
    2018-04-26 16:12:57.447645+0200 TestOpenEars[948:251080] Since there is no cached version, loading the language model lookup list for the acoustic model called AcousticModelGerman
    2018-04-26 16:12:57.452509+0200 TestOpenEars[948:251080] Returning a cached version of LanguageModelGeneratorLookupList.text
    2018-04-26 16:12:57.452586+0200 TestOpenEars[948:251080] Returning a cached version of g2p
    2018-04-26 16:12:57.453390+0200 TestOpenEars[948:251080] I'm done running performDictionaryLookup and it took 0.000826 seconds
    2018-04-26 16:12:57.453572+0200 TestOpenEars[948:251080] I'm done running performDictionaryLookup and it took 0.001270 seconds
    2018-04-26 16:12:57.456686+0200 TestOpenEars[948:251080] A value has been given for weight, but it is identical to the default so we are ignoring it.
    2018-04-26 16:12:57.456719+0200 TestOpenEars[948:251080] Starting dynamic language model generation
    
    INFO: ngram_model_arpa_legacy.c(504): ngrams 1=23, 2=42, 3=21
    INFO: ngram_model_arpa_legacy.c(136): Reading unigrams
    INFO: ngram_model_arpa_legacy.c(543):       23 = #unigrams created
    INFO: ngram_model_arpa_legacy.c(196): Reading bigrams
    INFO: ngram_model_arpa_legacy.c(561):       42 = #bigrams created
    INFO: ngram_model_arpa_legacy.c(562):        3 = #prob2 entries
    INFO: ngram_model_arpa_legacy.c(570):        3 = #bo_wt2 entries
    INFO: ngram_model_arpa_legacy.c(293): Reading trigrams
    INFO: ngram_model_arpa_legacy.c(583):       21 = #trigrams created
    INFO: ngram_model_arpa_legacy.c(584):        2 = #prob3 entries
    INFO: ngram_model_dmp_legacy.c(521): Building DMP model...
    INFO: ngram_model_dmp_legacy.c(551):       23 = #unigrams created
    INFO: ngram_model_dmp_legacy.c(652):       42 = #bigrams created
    INFO: ngram_model_dmp_legacy.c(653):        3 = #prob2 entries
    INFO: ngram_model_dmp_legacy.c(660):        3 = #bo_wt2 entries
    INFO: ngram_model_dmp_legacy.c(664):       21 = #trigrams created
    INFO: ngram_model_dmp_legacy.c(665):        2 = #prob3 entries
    2018-04-26 16:12:57.480557+0200 TestOpenEars[948:251080] Done creating language model with CMUCLMTK in 0.023809 seconds.
    INFO: ngram_model_arpa_legacy.c(504): ngrams 1=23, 2=42, 3=21
    INFO: ngram_model_arpa_legacy.c(136): Reading unigrams
    INFO: ngram_model_arpa_legacy.c(543):       23 = #unigrams created
    INFO: ngram_model_arpa_legacy.c(196): Reading bigrams
    INFO: ngram_model_arpa_legacy.c(561):       42 = #bigrams created
    INFO: ngram_model_arpa_legacy.c(562):        3 = #prob2 entries
    INFO: ngram_model_arpa_legacy.c(570):        3 = #bo_wt2 entries
    INFO: ngram_model_arpa_legacy.c(293): Reading trigrams
    INFO: ngram_model_arpa_legacy.c(583):       21 = #trigrams created
    INFO: ngram_model_arpa_legacy.c(584):        2 = #prob3 entries
    INFO: ngram_model_dmp_legacy.c(521): Building DMP model...
    INFO: ngram_model_dmp_legacy.c(551):       23 = #unigrams created
    INFO: ngram_model_dmp_legacy.c(652):       42 = #bigrams created
    INFO: ngram_model_dmp_legacy.c(653):        3 = #prob2 entries
    INFO: ngram_model_dmp_legacy.c(660):        3 = #bo_wt2 entries
    INFO: ngram_model_dmp_legacy.c(664):       21 = #trigrams created
    INFO: ngram_model_dmp_legacy.c(665):        2 = #prob3 entries
    2018-04-26 16:12:57.484133+0200 TestOpenEars[948:251080] I'm done running dynamic language model generation and it took 0.083542 seconds
    2018-04-26 16:12:57.484546+0200 TestOpenEars[948:251080] Attempting to start listening session from startListeningWithLanguageModelAtPath:
    2018-04-26 16:12:57.486653+0200 TestOpenEars[948:251080] User gave mic permission for this app.
    2018-04-26 16:12:57.486778+0200 TestOpenEars[948:251080] setSecondsOfSilence wasn't set, using default of 0.700000.
    2018-04-26 16:12:57.487411+0200 TestOpenEars[948:251148] Starting listening.
    2018-04-26 16:12:57.487511+0200 TestOpenEars[948:251148] About to set up audio session
    2018-04-26 16:12:57.573019+0200 TestOpenEars[948:251148] Creating audio session with default settings.
    2018-04-26 16:12:57.573063+0200 TestOpenEars[948:251148] Done setting audio session category.
    2018-04-26 16:12:57.574976+0200 TestOpenEars[948:251148] Done setting preferred sample rate to 16000.000000 – now the real sample rate is 48000.000000
    2018-04-26 16:12:57.576946+0200 TestOpenEars[948:251148] number of channels is already the preferred number of 1 so not setting it.
    2018-04-26 16:12:57.580976+0200 TestOpenEars[948:251158] Audio route has changed for the following reason:
    2018-04-26 16:12:57.581704+0200 TestOpenEars[948:251148] Done setting session's preferred I/O buffer duration to 0.128000 – now the actual buffer duration is 0.085333
    2018-04-26 16:12:57.581730+0200 TestOpenEars[948:251148] Done setting up audio session
    2018-04-26 16:12:57.581825+0200 TestOpenEars[948:251158] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
    2018-04-26 16:12:57.585493+0200 TestOpenEars[948:251148] About to set up audio IO unit in a session with a sample rate of 48000.000000, a channel number of 1 and a buffer duration of 0.085333.
    2018-04-26 16:12:57.583983+0200 TestOpenEars[948:251158] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: "MicrophoneBuiltIn". Output route or routes: "Speaker">. The previous route before changing to this route was "<AVAudioSessionRouteDescription: 0x1c060ba30, 
    inputs = (null); 
    outputs = (
        "<AVAudioSessionPortDescription: 0x1c060ba00, type = Speaker; name = Speaker; UID = Speaker; selectedDataSource = (null)>"
    )>".
    2018-04-26 16:12:57.651365+0200 TestOpenEars[948:251158] Audio route has changed for the following reason:
    2018-04-26 16:12:57.654591+0200 TestOpenEars[948:251158] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
    2018-04-26 16:12:57.657603+0200 TestOpenEars[948:251158] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: "MicrophoneBuiltIn". Output route or routes: "Speaker">. The previous route before changing to this route was "<AVAudioSessionRouteDescription: 0x1c060ba60, 
    inputs = (
        "<AVAudioSessionPortDescription: 0x1c060b990, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Bottom>"
    ); 
    outputs = (
        "<AVAudioSessionPortDescription: 0x1c060b8e0, type = Receiver; name = Receiver; UID = Built-In Receiver; selectedDataSource = (null)>"
    )>".
    2018-04-26 16:12:57.661195+0200 TestOpenEars[948:251148] Done setting up audio unit
    2018-04-26 16:12:57.661236+0200 TestOpenEars[948:251148] About to start audio IO unit
    2018-04-26 16:12:57.869438+0200 TestOpenEars[948:251148] Done starting audio unit
    INFO: pocketsphinx.c(145): Parsed model-specific feature parameters from /var/containers/Bundle/Application/6CF2B633-750A-4DE1-8EC3-899218B63A02/TestOpenEars.app/AcousticModelGerman.bundle/feat.params
    Current configuration:
    [NAME]			[DEFLT]		[VALUE]
    -agc			none		none
    -agcthresh		2.0		2.000000e+00
    -allphone				
    -allphone_ci		no		no
    -alpha			0.97		9.700000e-01
    -ascale			20.0		2.000000e+01
    -aw			1		1
    -backtrace		no		no
    -beam			1e-48		1.000000e-48
    -bestpath		yes		yes
    -bestpathlw		9.5		9.500000e+00
    -ceplen			13		13
    -cmn			current		current
    -cmninit		8.0		30
    -compallsen		no		no
    -debug					0
    -dict					/var/mobile/Containers/Data/Application/789AE295-AD6D-4F93-B321-792800594D7E/Library/Caches/GermanModel.dic
    -dictcase		no		no
    -dither			no		no
    -doublebw		no		no
    -ds			1		1
    -fdict					/var/containers/Bundle/Application/6CF2B633-750A-4DE1-8EC3-899218B63A02/TestOpenEars.app/AcousticModelGerman.bundle/noisedict
    -feat			1s_c_d_dd	1s_c_d_dd
    -featparams				/var/containers/Bundle/Application/6CF2B633-750A-4DE1-8EC3-899218B63A02/TestOpenEars.app/AcousticModelGerman.bundle/feat.params
    -fillprob		1e-8		1.000000e-08
    -frate			100		100
    -fsg					
    -fsgusealtpron		yes		yes
    -fsgusefiller		yes		yes
    -fwdflat		yes		yes
    -fwdflatbeam		1e-64		1.000000e-64
    -fwdflatefwid		4		4
    -fwdflatlw		8.5		8.500000e+00
    -fwdflatsfwin		25		25
    -fwdflatwbeam		7e-29		7.000000e-29
    -fwdtree		yes		yes
    -hmm					/var/containers/Bundle/Application/6CF2B633-750A-4DE1-8EC3-899218B63A02/TestOpenEars.app/AcousticModelGerman.bundle
    -input_endian		little		little
    -jsgf					/var/mobile/Containers/Data/Application/789AE295-AD6D-4F93-B321-792800594D7E/Library/Caches/GermanModel.gram
    -keyphrase				
    -kws					
    -kws_delay		10		10
    -kws_plp		1e-1		1.000000e-01
    -kws_threshold		1		1.000000e+00
    -latsize		5000		5000
    -lda					
    -ldadim			0		0
    -lifter			0		22
    -lm					
    -lmctl					
    -lmname					
    -logbase		1.0001		1.000100e+00
    -logfn					
    -logspec		no		no
    -lowerf			133.33334	1.300000e+02
    -lpbeam			1e-40		1.000000e-40
    -lponlybeam		7e-29		7.000000e-29
    -lw			6.5		1.000000e+00
    -maxhmmpf		30000		30000
    -maxwpf			-1		-1
    -mdef					/var/containers/Bundle/Application/6CF2B633-750A-4DE1-8EC3-899218B63A02/TestOpenEars.app/AcousticModelGerman.bundle/mdef
    -mean					/var/containers/Bundle/Application/6CF2B633-750A-4DE1-8EC3-899218B63A02/TestOpenEars.app/AcousticModelGerman.bundle/means
    -mfclogdir				
    -min_endfr		0		0
    -mixw					/var/containers/Bundle/Application/6CF2B633-750A-4DE1-8EC3-899218B63A02/TestOpenEars.app/AcousticModelGerman.bundle/mixture_weights
    -mixwfloor		0.0000001	1.000000e-07
    -mllr					
    -mmap			yes		yes
    -ncep			13		13
    -nfft			512		512
    -nfilt			40		25
    -nwpen			1.0		1.000000e+00
    -pbeam			1e-48		1.000000e-48
    -pip			1.0		1.000000e+00
    -pl_beam		1e-10		1.000000e-10
    -pl_pbeam		1e-10		1.000000e-10
    -pl_pip			1.0		1.000000e+00
    -pl_weight		3.0		3.000000e+00
    -pl_window		5		5
    -rawlogdir				
    -remove_dc		no		no
    -remove_noise		yes		yes
    -remove_silence		yes		yes
    -round_filters		yes		yes
    -samprate		16000		1.600000e+04
    -seed			-1		-1
    -sendump				
    -senlogdir				
    -senmgau				
    -silprob		0.005		5.000000e-03
    -smoothspec		no		no
    -svspec					
    -tmat					/var/containers/Bundle/Application/6CF2B633-750A-4DE1-8EC3-899218B63A02/TestOpenEars.app/AcousticModelGerman.bundle/transition_matrices
    -tmatfloor		0.0001		1.000000e-04
    -topn			4		4
    -topn_beam		0		0
    -toprule				
    -transform		legacy		dct
    -unit_area		yes		yes
    -upperf			6855.4976	6.800000e+03
    -uw			1.0		1.000000e+00
    -vad_postspeech		50		69
    -vad_prespeech		20		10
    -vad_startspeech	10		10
    -vad_threshold		2.0		3.200000e+00
    -var					/var/containers/Bundle/Application/6CF2B633-750A-4DE1-8EC3-899218B63A02/TestOpenEars.app/AcousticModelGerman.bundle/variances
    -varfloor		0.0001		1.000000e-04
    -varnorm		no		no
    -verbose		no		no
    -warp_params				
    -warp_type		inverse_linear	inverse_linear
    -wbeam			7e-29		7.000000e-29
    -wip			0.65		6.500000e-01
    -wlen			0.025625	2.562500e-02
    
    INFO: feat.c(715): Initializing feature stream to type: '1s_c_d_dd', ceplen=13, CMN='current', VARNORM='no', AGC='none'
    INFO: cmn.c(143): mean[0]= 12.00, mean[1..12]= 0.0
    INFO: mdef.c(518): Reading model definition: /var/containers/Bundle/Application/6CF2B633-750A-4DE1-8EC3-899218B63A02/TestOpenEars.app/AcousticModelGerman.bundle/mdef
    INFO: bin_mdef.c(181): Allocating 53834 * 8 bytes (420 KiB) for CD tree
    INFO: tmat.c(206): Reading HMM transition probability matrices: /var/containers/Bundle/Application/6CF2B633-750A-4DE1-8EC3-899218B63A02/TestOpenEars.app/AcousticModelGerman.bundle/transition_matrices
    INFO: acmod.c(117): Attempting to use PTM computation module
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/6CF2B633-750A-4DE1-8EC3-899218B63A02/TestOpenEars.app/AcousticModelGerman.bundle/means
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size: 
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/6CF2B633-750A-4DE1-8EC3-899218B63A02/TestOpenEars.app/AcousticModelGerman.bundle/variances
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size: 
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(354): 7100 variance values floored
    INFO: ptm_mgau.c(801): Number of codebooks exceeds 256: 2129
    INFO: acmod.c(119): Attempting to use semi-continuous computation module
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/6CF2B633-750A-4DE1-8EC3-899218B63A02/TestOpenEars.app/AcousticModelGerman.bundle/means
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size: 
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/6CF2B633-750A-4DE1-8EC3-899218B63A02/TestOpenEars.app/AcousticModelGerman.bundle/variances
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size: 
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(354): 7100 variance values floored
    INFO: acmod.c(121): Falling back to general multi-stream GMM computation
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/6CF2B633-750A-4DE1-8EC3-899218B63A02/TestOpenEars.app/AcousticModelGerman.bundle/means
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size: 
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/6CF2B633-750A-4DE1-8EC3-899218B63A02/TestOpenEars.app/AcousticModelGerman.bundle/variances
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size: 
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(354): 7100 variance values floored
    INFO: ms_senone.c(149): Reading senone mixture weights: /var/containers/Bundle/Application/6CF2B633-750A-4DE1-8EC3-899218B63A02/TestOpenEars.app/AcousticModelGerman.bundle/mixture_weights
    INFO: ms_senone.c(200): Truncating senone logs3(pdf) values by 10 bits
    INFO: ms_senone.c(207): Not transposing mixture weights in memory
    INFO: ms_senone.c(268): Read mixture weights for 2129 senones: 1 features x 32 codewords
    INFO: ms_senone.c(320): Mapping senones to individual codebooks
    INFO: ms_mgau.c(141): The value of topn: 4
    INFO: phone_loop_search.c(114): State beam -225 Phone exit beam -225 Insertion penalty 0
    INFO: dict.c(320): Allocating 4121 * 32 bytes (128 KiB) for word entries
    INFO: dict.c(333): Reading main dictionary: /var/mobile/Containers/Data/Application/789AE295-AD6D-4F93-B321-792800594D7E/Library/Caches/GermanModel.dic
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(336): 21 words read
    INFO: dict.c(358): Reading filler dictionary: /var/containers/Bundle/Application/6CF2B633-750A-4DE1-8EC3-899218B63A02/TestOpenEars.app/AcousticModelGerman.bundle/noisedict
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(361): 4 words read
    INFO: dict2pid.c(396): Building PID tables for dictionary
    INFO: dict2pid.c(406): Allocating 43^3 * 2 bytes (155 KiB) for word-initial triphones
    INFO: dict2pid.c(132): Allocated 44720 bytes (43 KiB) for word-final triphones
    INFO: dict2pid.c(196): Allocated 44720 bytes (43 KiB) for single-phone word triphones
    INFO: jsgf.c(691): Defined rule: <GermanModel.g00000>
    INFO: jsgf.c(691): Defined rule: <GermanModel.rule_0>
    INFO: jsgf.c(691): Defined rule: <GermanModel.g00002>
    INFO: jsgf.c(691): Defined rule: PUBLIC <GermanModel.rule_1>
    INFO: fsg_model.c(215): Computing transitive closure for null transitions
    INFO: fsg_model.c(277): 0 null transitions added
    INFO: fsg_search.c(227): FSG(beam: -1080, pbeam: -1080, wbeam: -634; wip: -5, pip: 0)
    ERROR: "fsg_search.c", line 141: The word 'esch' is missing in the dictionary
    2018-04-26 16:12:58.311567+0200 TestOpenEars[948:251148] Error: it wasn't possible to initialize the pocketsphinx decoder.
    2018-04-26 16:12:58.311775+0200 TestOpenEars[948:251080] Successfully started listening session from startListeningWithLanguageModelAtPath:
    Local callback: Setting up the continuous recognition loop has failed for the reason Optional("Error: it wasn\'t possible to initialize the pocketsphinx decoder. Please turn on OELogging in order to troubleshoot this. If you need support with this issue, please turn on both OELogging and verbosePocketsphinx in order to get assistance."), please turn on OELogging.startOpenEarsLogging() to learn more.
    2018-04-26 16:13:00.037620+0200 TestOpenEars[948:251080] Status bar could not find cached time string image. Rendering in-process.
    #1032396
    Halle Winkler
    Politepix

    Hi,

    That is happening because this code is a mixed example of an earlier grammar implementation and a later Rejecto implementation. Please change this:

    
    OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName), languageModelIsJSGF: true)
    

    to this:

    
    OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName), languageModelIsJSGF: false)
    

    and please also change this vowels option, which looks like it was left over from a previous round of experimentation and will harm accuracy:

    
    let err: Error! = lmGenerator.generateRejectingLanguageModel(from: words, withFilesNamed: fileName, withOptionalExclusions: nil, usingVowelsOnly: true, withWeight: 1.0, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
    

    to this:

    
    let err: Error! = lmGenerator.generateRejectingLanguageModel(from: words, withFilesNamed: fileName, withOptionalExclusions: nil, usingVowelsOnly: false, withWeight: 1.0, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
    
    #1032399
    Halle Winkler
    Politepix

    If you want to show me more logs from this implementation, make sure to show me the now-changed code again as well.

    #1032400
    iKK
    Participant

    I made the two changes, but the result is the same – there is still an error in the Rejecto log!

    (–> see the next forum entry for its log)
    (–> see the entry after that for the language model creation code as it looks now)

    What else needs to change?

    #1032401
    iKK
    Participant

    Log from the Rejecto trial:

    2018-04-26 16:31:28.384594+0200 TestOpenEars[972:261652] Starting OpenEars logging for OpenEars version 2.506 on 64-bit device (or build): iPhone running iOS version: 11.300000
    2018-04-26 16:31:28.385406+0200 TestOpenEars[972:261652] Creating shared instance of OEPocketsphinxController
    2018-04-26 16:31:28.397524+0200 TestOpenEars[972:261652] Rejecto version 2.500000
    2018-04-26 16:31:28.398126+0200 TestOpenEars[972:261652] Since there is no cached version, loading the g2p model for the acoustic model called AcousticModelGerman
    2018-04-26 16:31:28.454973+0200 TestOpenEars[972:261652] Since there is no cached version, loading the language model lookup list for the acoustic model called AcousticModelGerman
    2018-04-26 16:31:28.459570+0200 TestOpenEars[972:261652] Returning a cached version of LanguageModelGeneratorLookupList.text
    2018-04-26 16:31:28.459643+0200 TestOpenEars[972:261652] Returning a cached version of g2p
    2018-04-26 16:31:28.460388+0200 TestOpenEars[972:261652] I'm done running performDictionaryLookup and it took 0.000772 seconds
    2018-04-26 16:31:28.460671+0200 TestOpenEars[972:261652] I'm done running performDictionaryLookup and it took 0.001317 seconds
    2018-04-26 16:31:28.463842+0200 TestOpenEars[972:261652] A value has been given for weight, but it is identical to the default so we are ignoring it.
    2018-04-26 16:31:28.463878+0200 TestOpenEars[972:261652] Starting dynamic language model generation
    
    INFO: ngram_model_arpa_legacy.c(504): ngrams 1=45, 2=86, 3=43
    INFO: ngram_model_arpa_legacy.c(136): Reading unigrams
    INFO: ngram_model_arpa_legacy.c(543):       45 = #unigrams created
    INFO: ngram_model_arpa_legacy.c(196): Reading bigrams
    INFO: ngram_model_arpa_legacy.c(561):       86 = #bigrams created
    INFO: ngram_model_arpa_legacy.c(562):        3 = #prob2 entries
    INFO: ngram_model_arpa_legacy.c(570):        3 = #bo_wt2 entries
    INFO: ngram_model_arpa_legacy.c(293): Reading trigrams
    INFO: ngram_model_arpa_legacy.c(583):       43 = #trigrams created
    INFO: ngram_model_arpa_legacy.c(584):        2 = #prob3 entries
    INFO: ngram_model_dmp_legacy.c(521): Building DMP model...
    INFO: ngram_model_dmp_legacy.c(551):       45 = #unigrams created
    INFO: ngram_model_dmp_legacy.c(652):       86 = #bigrams created
    INFO: ngram_model_dmp_legacy.c(653):        3 = #prob2 entries
    INFO: ngram_model_dmp_legacy.c(660):        3 = #bo_wt2 entries
    INFO: ngram_model_dmp_legacy.c(664):       43 = #trigrams created
    INFO: ngram_model_dmp_legacy.c(665):        2 = #prob3 entries
    2018-04-26 16:31:28.485473+0200 TestOpenEars[972:261652] Done creating language model with CMUCLMTK in 0.021533 seconds.
    INFO: ngram_model_arpa_legacy.c(504): ngrams 1=45, 2=86, 3=43
    INFO: ngram_model_arpa_legacy.c(136): Reading unigrams
    INFO: ngram_model_arpa_legacy.c(543):       45 = #unigrams created
    INFO: ngram_model_arpa_legacy.c(196): Reading bigrams
    INFO: ngram_model_arpa_legacy.c(561):       86 = #bigrams created
    INFO: ngram_model_arpa_legacy.c(562):        3 = #prob2 entries
    INFO: ngram_model_arpa_legacy.c(570):        3 = #bo_wt2 entries
    INFO: ngram_model_arpa_legacy.c(293): Reading trigrams
    INFO: ngram_model_arpa_legacy.c(583):       43 = #trigrams created
    INFO: ngram_model_arpa_legacy.c(584):        2 = #prob3 entries
    INFO: ngram_model_dmp_legacy.c(521): Building DMP model...
    INFO: ngram_model_dmp_legacy.c(551):       45 = #unigrams created
    INFO: ngram_model_dmp_legacy.c(652):       86 = #bigrams created
    INFO: ngram_model_dmp_legacy.c(653):        3 = #prob2 entries
    INFO: ngram_model_dmp_legacy.c(660):        3 = #bo_wt2 entries
    INFO: ngram_model_dmp_legacy.c(664):       43 = #trigrams created
    INFO: ngram_model_dmp_legacy.c(665):        2 = #prob3 entries
    2018-04-26 16:31:28.489123+0200 TestOpenEars[972:261652] I'm done running dynamic language model generation and it took 0.091080 seconds
    2018-04-26 16:31:28.489559+0200 TestOpenEars[972:261652] Attempting to start listening session from startListeningWithLanguageModelAtPath:
    2018-04-26 16:31:28.492174+0200 TestOpenEars[972:261652] User gave mic permission for this app.
    2018-04-26 16:31:28.492310+0200 TestOpenEars[972:261652] setSecondsOfSilence wasn't set, using default of 0.700000.
    2018-04-26 16:31:28.492653+0200 TestOpenEars[972:261771] Starting listening.
    2018-04-26 16:31:28.492840+0200 TestOpenEars[972:261771] About to set up audio session
    2018-04-26 16:31:28.576329+0200 TestOpenEars[972:261771] Creating audio session with default settings.
    2018-04-26 16:31:28.576380+0200 TestOpenEars[972:261771] Done setting audio session category.
    2018-04-26 16:31:28.582877+0200 TestOpenEars[972:261771] Done setting preferred sample rate to 16000.000000 – now the real sample rate is 48000.000000
    2018-04-26 16:31:28.583434+0200 TestOpenEars[972:261771] number of channels is already the preferred number of 1 so not setting it.
    2018-04-26 16:31:28.586772+0200 TestOpenEars[972:261771] Done setting session's preferred I/O buffer duration to 0.128000 – now the actual buffer duration is 0.085333
    2018-04-26 16:31:28.586817+0200 TestOpenEars[972:261771] Done setting up audio session
    2018-04-26 16:31:28.588052+0200 TestOpenEars[972:261785] Audio route has changed for the following reason:
    2018-04-26 16:31:28.590354+0200 TestOpenEars[972:261771] About to set up audio IO unit in a session with a sample rate of 48000.000000, a channel number of 1 and a buffer duration of 0.085333.
    2018-04-26 16:31:28.624144+0200 TestOpenEars[972:261785] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
    2018-04-26 16:31:28.674501+0200 TestOpenEars[972:261785] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: "MicrophoneBuiltIn". Output route or routes: "Speaker">. The previous route before changing to this route was "<AVAudioSessionRouteDescription: 0x1c0405c00, 
    inputs = (null); 
    outputs = (
        "<AVAudioSessionPortDescription: 0x1c0405750, type = Speaker; name = Speaker; UID = Speaker; selectedDataSource = (null)>"
    )>".
    2018-04-26 16:31:28.676633+0200 TestOpenEars[972:261785] Audio route has changed for the following reason:
    2018-04-26 16:31:28.678797+0200 TestOpenEars[972:261785] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
    2018-04-26 16:31:28.681564+0200 TestOpenEars[972:261771] Done setting up audio unit
    2018-04-26 16:31:28.681601+0200 TestOpenEars[972:261771] About to start audio IO unit
    2018-04-26 16:31:28.682725+0200 TestOpenEars[972:261785] This is not a case in which OpenEars notifies of a route change. At the close of this method, the new audio route will be <Input route or routes: "MicrophoneBuiltIn". Output route or routes: "Speaker">. The previous route before changing to this route was "<AVAudioSessionRouteDescription: 0x1c0405bf0, 
    inputs = (
        "<AVAudioSessionPortDescription: 0x1c0405c80, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Bottom>"
    ); 
    outputs = (
        "<AVAudioSessionPortDescription: 0x1c421b890, type = Receiver; name = Receiver; UID = Built-In Receiver; selectedDataSource = (null)>"
    )>".
    2018-04-26 16:31:28.888849+0200 TestOpenEars[972:261771] Done starting audio unit
    2018-04-26 16:31:28.888907+0200 TestOpenEars[972:261771] The file you've sent to the decoder appears to be a JSGF grammar based on its naming, but you have not set languageModelIsJSGF: to TRUE. If you are experiencing recognition issues, there is a good chance that this is the reason for it. This can also happen if you meant to use the method [OELanguageModelGenerator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:] to obtain a language model path but unintentionally used the method [OELanguageModelGenerator pathToSuccessfullyGeneratedGrammarWithRequestedName:] instead.
    INFO: pocketsphinx.c(145): Parsed model-specific feature parameters from /var/containers/Bundle/Application/B43039EB-3F1B-427A-9364-199FBEB79021/TestOpenEars.app/AcousticModelGerman.bundle/feat.params
    Current configuration:
    [NAME]			[DEFLT]		[VALUE]
    -agc			none		none
    -agcthresh		2.0		2.000000e+00
    -allphone				
    -allphone_ci		no		no
    -alpha			0.97		9.700000e-01
    -ascale			20.0		2.000000e+01
    -aw			1		1
    -backtrace		no		no
    -beam			1e-48		1.000000e-48
    -bestpath		yes		yes
    -bestpathlw		9.5		9.500000e+00
    -ceplen			13		13
    -cmn			current		current
    -cmninit		8.0		30
    -compallsen		no		no
    -debug					0
    -dict					/var/mobile/Containers/Data/Application/4BEF6CDF-6561-47C4-AF08-F8C54C84EBEF/Library/Caches/GermanModel.dic
    -dictcase		no		no
    -dither			no		no
    -doublebw		no		no
    -ds			1		1
    -fdict					/var/containers/Bundle/Application/B43039EB-3F1B-427A-9364-199FBEB79021/TestOpenEars.app/AcousticModelGerman.bundle/noisedict
    -feat			1s_c_d_dd	1s_c_d_dd
    -featparams				/var/containers/Bundle/Application/B43039EB-3F1B-427A-9364-199FBEB79021/TestOpenEars.app/AcousticModelGerman.bundle/feat.params
    -fillprob		1e-8		1.000000e-08
    -frate			100		100
    -fsg					
    -fsgusealtpron		yes		yes
    -fsgusefiller		yes		yes
    -fwdflat		yes		yes
    -fwdflatbeam		1e-64		1.000000e-64
    -fwdflatefwid		4		4
    -fwdflatlw		8.5		8.500000e+00
    -fwdflatsfwin		25		25
    -fwdflatwbeam		7e-29		7.000000e-29
    -fwdtree		yes		yes
    -hmm					/var/containers/Bundle/Application/B43039EB-3F1B-427A-9364-199FBEB79021/TestOpenEars.app/AcousticModelGerman.bundle
    -input_endian		little		little
    -jsgf					
    -keyphrase				
    -kws					
    -kws_delay		10		10
    -kws_plp		1e-1		1.000000e-01
    -kws_threshold		1		1.000000e+00
    -latsize		5000		5000
    -lda					
    -ldadim			0		0
    -lifter			0		22
    -lm					/var/mobile/Containers/Data/Application/4BEF6CDF-6561-47C4-AF08-F8C54C84EBEF/Library/Caches/GermanModel.gram
    -lmctl					
    -lmname					
    -logbase		1.0001		1.000100e+00
    -logfn					
    -logspec		no		no
    -lowerf			133.33334	1.300000e+02
    -lpbeam			1e-40		1.000000e-40
    -lponlybeam		7e-29		7.000000e-29
    -lw			6.5		6.500000e+00
    -maxhmmpf		30000		30000
    -maxwpf			-1		-1
    -mdef					/var/containers/Bundle/Application/B43039EB-3F1B-427A-9364-199FBEB79021/TestOpenEars.app/AcousticModelGerman.bundle/mdef
    -mean					/var/containers/Bundle/Application/B43039EB-3F1B-427A-9364-199FBEB79021/TestOpenEars.app/AcousticModelGerman.bundle/means
    -mfclogdir				
    -min_endfr		0		0
    -mixw					/var/containers/Bundle/Application/B43039EB-3F1B-427A-9364-199FBEB79021/TestOpenEars.app/AcousticModelGerman.bundle/mixture_weights
    -mixwfloor		0.0000001	1.000000e-07
    -mllr					
    -mmap			yes		yes
    -ncep			13		13
    -nfft			512		512
    -nfilt			40		25
    -nwpen			1.0		1.000000e+00
    -pbeam			1e-48		1.000000e-48
    -pip			1.0		1.000000e+00
    -pl_beam		1e-10		1.000000e-10
    -pl_pbeam		1e-10		1.000000e-10
    -pl_pip			1.0		1.000000e+00
    -pl_weight		3.0		3.000000e+00
    -pl_window		5		5
    -rawlogdir				
    -remove_dc		no		no
    -remove_noise		yes		yes
    -remove_silence		yes		yes
    -round_filters		yes		yes
    -samprate		16000		1.600000e+04
    -seed			-1		-1
    -sendump				
    -senlogdir				
    -senmgau				
    -silprob		0.005		5.000000e-03
    -smoothspec		no		no
    -svspec					
    -tmat					/var/containers/Bundle/Application/B43039EB-3F1B-427A-9364-199FBEB79021/TestOpenEars.app/AcousticModelGerman.bundle/transition_matrices
    -tmatfloor		0.0001		1.000000e-04
    -topn			4		4
    -topn_beam		0		0
    -toprule				
    -transform		legacy		dct
    -unit_area		yes		yes
    -upperf			6855.4976	6.800000e+03
    -uw			1.0		1.000000e+00
    -vad_postspeech		50		69
    -vad_prespeech		20		10
    -vad_startspeech	10		10
    -vad_threshold		2.0		3.200000e+00
    -var					/var/containers/Bundle/Application/B43039EB-3F1B-427A-9364-199FBEB79021/TestOpenEars.app/AcousticModelGerman.bundle/variances
    -varfloor		0.0001		1.000000e-04
    -varnorm		no		no
    -verbose		no		no
    -warp_params				
    -warp_type		inverse_linear	inverse_linear
    -wbeam			7e-29		7.000000e-29
    -wip			0.65		6.500000e-01
    -wlen			0.025625	2.562500e-02
    
    INFO: feat.c(715): Initializing feature stream to type: '1s_c_d_dd', ceplen=13, CMN='current', VARNORM='no', AGC='none'
    INFO: cmn.c(143): mean[0]= 12.00, mean[1..12]= 0.0
    INFO: mdef.c(518): Reading model definition: /var/containers/Bundle/Application/B43039EB-3F1B-427A-9364-199FBEB79021/TestOpenEars.app/AcousticModelGerman.bundle/mdef
    INFO: bin_mdef.c(181): Allocating 53834 * 8 bytes (420 KiB) for CD tree
    INFO: tmat.c(206): Reading HMM transition probability matrices: /var/containers/Bundle/Application/B43039EB-3F1B-427A-9364-199FBEB79021/TestOpenEars.app/AcousticModelGerman.bundle/transition_matrices
    INFO: acmod.c(117): Attempting to use PTM computation module
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/B43039EB-3F1B-427A-9364-199FBEB79021/TestOpenEars.app/AcousticModelGerman.bundle/means
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size: 
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/B43039EB-3F1B-427A-9364-199FBEB79021/TestOpenEars.app/AcousticModelGerman.bundle/variances
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size: 
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(354): 7100 variance values floored
    INFO: ptm_mgau.c(801): Number of codebooks exceeds 256: 2129
    INFO: acmod.c(119): Attempting to use semi-continuous computation module
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/B43039EB-3F1B-427A-9364-199FBEB79021/TestOpenEars.app/AcousticModelGerman.bundle/means
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size: 
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/B43039EB-3F1B-427A-9364-199FBEB79021/TestOpenEars.app/AcousticModelGerman.bundle/variances
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size: 
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(354): 7100 variance values floored
    INFO: acmod.c(121): Falling back to general multi-stream GMM computation
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/B43039EB-3F1B-427A-9364-199FBEB79021/TestOpenEars.app/AcousticModelGerman.bundle/means
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size: 
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /var/containers/Bundle/Application/B43039EB-3F1B-427A-9364-199FBEB79021/TestOpenEars.app/AcousticModelGerman.bundle/variances
    INFO: ms_gauden.c(292): 2129 codebook, 1 feature, size: 
    INFO: ms_gauden.c(294):  32x39
    INFO: ms_gauden.c(354): 7100 variance values floored
    INFO: ms_senone.c(149): Reading senone mixture weights: /var/containers/Bundle/Application/B43039EB-3F1B-427A-9364-199FBEB79021/TestOpenEars.app/AcousticModelGerman.bundle/mixture_weights
    INFO: ms_senone.c(200): Truncating senone logs3(pdf) values by 10 bits
    INFO: ms_senone.c(207): Not transposing mixture weights in memory
    INFO: ms_senone.c(268): Read mixture weights for 2129 senones: 1 features x 32 codewords
    INFO: ms_senone.c(320): Mapping senones to individual codebooks
    INFO: ms_mgau.c(141): The value of topn: 4
    INFO: phone_loop_search.c(114): State beam -225 Phone exit beam -225 Insertion penalty 0
    INFO: dict.c(320): Allocating 4143 * 32 bytes (129 KiB) for word entries
    INFO: dict.c(333): Reading main dictionary: /var/mobile/Containers/Data/Application/4BEF6CDF-6561-47C4-AF08-F8C54C84EBEF/Library/Caches/GermanModel.dic
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(336): 43 words read
    INFO: dict.c(358): Reading filler dictionary: /var/containers/Bundle/Application/B43039EB-3F1B-427A-9364-199FBEB79021/TestOpenEars.app/AcousticModelGerman.bundle/noisedict
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(361): 4 words read
    INFO: dict2pid.c(396): Building PID tables for dictionary
    INFO: dict2pid.c(406): Allocating 43^3 * 2 bytes (155 KiB) for word-initial triphones
    INFO: dict2pid.c(132): Allocated 44720 bytes (43 KiB) for word-final triphones
    INFO: dict2pid.c(196): Allocated 44720 bytes (43 KiB) for single-phone word triphones
    INFO: ngram_model_trie.c(424): Trying to read LM in bin format
    ERROR: "ngram_model_trie.c", line 447: bin file /var/mobile/Containers/Data/Application/4BEF6CDF-6561-47C4-AF08-F8C54C84EBEF/Library/Caches/GermanModel.gram not found
    INFO: ngram_model_trie.c(180): Trying to read LM in arpa format
    ERROR: "ngram_model_trie.c", line 203: arpa file /var/mobile/Containers/Data/Application/4BEF6CDF-6561-47C4-AF08-F8C54C84EBEF/Library/Caches/GermanModel.gram not found
    INFO: ngram_model_trie.c(537): Trying to read LM in DMP format
    ERROR: "ngram_model_trie.c", line 560: DMP file /var/mobile/Containers/Data/Application/4BEF6CDF-6561-47C4-AF08-F8C54C84EBEF/Library/Caches/GermanModel.gram not found
    2018-04-26 16:31:29.361202+0200 TestOpenEars[972:261771] Error: it wasn't possible to initialize the pocketsphinx decoder.
    2018-04-26 16:31:29.372364+0200 TestOpenEars[972:261652] Successfully started listening session from startListeningWithLanguageModelAtPath:
    Local callback: Setting up the continuous recognition loop has failed for the reason Optional("Error: it wasn\'t possible to initialize the pocketsphinx decoder. Please turn on OELogging in order to troubleshoot this. If you need support with this issue, please turn on both OELogging and verbosePocketsphinx in order to get assistance."), please turn on OELogging.startOpenEarsLogging() to learn more.
    #1032402
    iKK
    Participant

    Rejecto language model creation code as it looks right now – still giving an error on startup…

    import UIKit
    
    class ViewController: UIViewController, OEEventsObserverDelegate {
        
        var openEarsEventsObserver = OEEventsObserver()
    
        override func viewDidLoad() {
            super.viewDidLoad()
            // Do any additional setup after loading the view, typically from a nib.
            
            // ************* Necessary for logging **************************
            OELogging.startOpenEarsLogging() //Uncomment to receive full OpenEars logging in case of any unexpected results.
            OEPocketsphinxController.sharedInstance().verbosePocketSphinx = true
            // ************* Necessary for logging **************************
            
            self.openEarsEventsObserver.delegate = self
            
            let lmGenerator = OELanguageModelGenerator()
            let accusticModelName = "AcousticModelGerman"
            
            let fileName = "GermanModel"
            
            // let words = ["esch do no frey"]
            let words = ["eschdonofrey"]
            
            // let err: Error! = lmGenerator.generateLanguageModel(from: words, withFilesNamed: name, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
            
            let err: Error! = lmGenerator.generateRejectingLanguageModel(from: words, withFilesNamed: fileName, withOptionalExclusions: nil, usingVowelsOnly: false, withWeight: 1.0, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
            
            var lmPath = ""
            var dictPath = ""
            
            if(err != nil) {
                print("Error while creating initial language model: \(err)")
            } else {
                // lmPath = lmGenerator.pathToSuccessfullyGeneratedLanguageModel(withRequestedName: fileName)
                lmPath = lmGenerator.pathToSuccessfullyGeneratedGrammar(withRequestedName: fileName)
                dictPath = lmGenerator.pathToSuccessfullyGeneratedDictionary(withRequestedName: fileName)
            }
            
            do {
                try OEPocketsphinxController.sharedInstance().setActive(true) // Setting the shared OEPocketsphinxController active is necessary before any of its properties are accessed.
            } catch {
                print("Error: it wasn't possible to set the shared instance to active: \"\(error)\"")
            }
            
            OEPocketsphinxController.sharedInstance().vadThreshold = 3.2
            OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName), languageModelIsJSGF: false)
            
        }
    
        override func didReceiveMemoryWarning() {
            super.didReceiveMemoryWarning()
            // Dispose of any resources that can be recreated.
        }
        
        func pocketsphinxDidReceiveHypothesis(_ hypothesis: String!, recognitionScore: String!, utteranceID: String!) { // Something was heard
            print("Local callback: The received hypothesis is \(hypothesis!) with a score of \(recognitionScore!) and an ID of \(utteranceID!)")
        }
        
        // An optional delegate method of OEEventsObserver which informs that the Pocketsphinx recognition loop has entered its actual loop.
        // This might be useful in debugging a conflict between another sound class and Pocketsphinx.
        func pocketsphinxRecognitionLoopDidStart() {
            print("Local callback: Pocketsphinx started.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is now listening for speech.
        func pocketsphinxDidStartListening() {
            print("Local callback: Pocketsphinx is now listening.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected speech and is starting to process it.
        func pocketsphinxDidDetectSpeech() {
            print("Local callback: Pocketsphinx has detected speech.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected a second of silence, indicating the end of an utterance.
        func pocketsphinxDidDetectFinishedSpeech() {
            print("Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx has exited its recognition loop, most
        // likely in response to the OEPocketsphinxController being told to stop listening via the stopListening method.
        func pocketsphinxDidStopListening() {
            print("Local callback: Pocketsphinx has stopped listening.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop but it is not
        // Going to react to speech until listening is resumed.  This can happen as a result of Flite speech being
        // in progress on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
        // or as a result of the OEPocketsphinxController being told to suspend recognition via the suspendRecognition method.
        func pocketsphinxDidSuspendRecognition() {
            print("Local callback: Pocketsphinx has suspended recognition.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop and after recognition
        // having been suspended it is now resuming.  This can happen as a result of Flite speech completing
        // on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
        // or as a result of the OEPocketsphinxController being told to resume recognition via the resumeRecognition method.
        func pocketsphinxDidResumeRecognition() {
            print("Local callback: Pocketsphinx has resumed recognition.") // Log it.
        }
        
        // An optional delegate method which informs that Pocketsphinx switched over to a new language model at the given URL in the course of
        // recognition. This does not imply that it is a valid file or that recognition will be successful using the file.
        func pocketsphinxDidChangeLanguageModel(toFile newLanguageModelPathAsString: String!, andDictionary newDictionaryPathAsString: String!) {
            
            print("Local callback: Pocketsphinx is now using the following language model: \n\(newLanguageModelPathAsString!) and the following dictionary: \(newDictionaryPathAsString!)")
        }
        
        // An optional delegate method of OEEventsObserver which informs that Flite is speaking, most likely to be useful if debugging a
        // complex interaction between sound classes. You don't have to do anything yourself in order to prevent Pocketsphinx from listening to Flite talk and trying to recognize the speech.
        func fliteDidStartSpeaking() {
            print("Local callback: Flite has started speaking") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Flite is finished speaking, most likely to be useful if debugging a
        // complex interaction between sound classes.
        func fliteDidFinishSpeaking() {
            print("Local callback: Flite has finished speaking") // Log it.
        }
        
        func pocketSphinxContinuousSetupDidFail(withReason reasonForFailure: String!) { // This can let you know that something went wrong with the recognition loop startup. Turn on [OELogging startOpenEarsLogging] to learn why.
            print("Local callback: Setting up the continuous recognition loop has failed for the reason \(reasonForFailure), please turn on OELogging.startOpenEarsLogging() to learn more.") // Log it.
        }
        
        func pocketSphinxContinuousTeardownDidFail(withReason reasonForFailure: String!) { // This can let you know that something went wrong with the recognition loop startup. Turn on OELogging.startOpenEarsLogging() to learn why.
            print("Local callback: Tearing down the continuous recognition loop has failed for the reason \(reasonForFailure)") // Log it.
        }
        
        /** Pocketsphinx couldn't start because it has no mic permissions (will only be returned on iOS7 or later).*/
        func pocketsphinxFailedNoMicPermissions() {
            print("Local callback: The user has never set mic permissions or denied permission to this app's mic, so listening will not start.")
        }
        
        /** The user prompt to get mic permissions, or a check of the mic permissions, has completed with a true or a false result  (will only be returned on iOS7 or later).*/
        
        func micPermissionCheckCompleted(withResult: Bool) {
            print("Local callback: mic check completed.")
        }
    }
    #1032403
    Halle Winkler
    Politepix

    This:

    
    lmPath = lmGenerator.pathToSuccessfullyGeneratedGrammar(withRequestedName: fileName)
    

    needs to be:

    
    lmPath = lmGenerator.pathToSuccessfullyGeneratedLanguageModel(withRequestedName: fileName)
    
    #1032404
    iKK
    Participant

    This helped!

    This is the first time Rejecto seems to perform :) Thank you very much!

    Now, I tested both A) and B) and it indeed makes a slight difference.

    A)

    es	ee s
    eschdonofrey	@ ss d oo n oo f r @ ii
    esf	ee s f

    B)

    es	ee s
    eschdonofrey	ee ss d oo n oo f r ee ii
    esf	ee s f

    Variant A seems to perform slightly better than B.

    However, even with A there are many false positives! And about 1 out of 10 attempts is a false negative (which I never had in any of the previous tests).

    At least this is now something to experiment with…

    One question:
    What does the symbol “@” represent in the LookupList.text? (The double ee’s and double ii’s I can somehow interpret, but what does “@” really mean?)

    #1032405
    iKK
    Participant

    Can we now move on to the RuleORama error?

    I feel that having both Rejecto and RuleORama up and running would help to compare the two solutions and find the best approach.

    Here is the RuleORama language model creation code (see the next forum entry) and its error log (see the entry after that).

    #1032406
    iKK
    Participant

    RuleORama grammar creation code:

    import UIKit
    
    class ViewController: UIViewController, OEEventsObserverDelegate {
        
        var openEarsEventsObserver = OEEventsObserver()
    
        override func viewDidLoad() {
            super.viewDidLoad()
            // Do any additional setup after loading the view, typically from a nib.
            
            // ************* Necessary for logging **************************
            OELogging.startOpenEarsLogging() //Uncomment to receive full OpenEars logging in case of any unexpected results.
            OEPocketsphinxController.sharedInstance().verbosePocketSphinx = true
            // ************* Necessary for logging **************************
            
            
            self.openEarsEventsObserver.delegate = self
            
            let lmGenerator = OELanguageModelGenerator()
            let accusticModelName = "AcousticModelGerman"
            let fileName = "GermanModel"
            
            let words = ["esch do no frey"]
            
            let grammar = [
                ThisWillBeSaidOnce : [
                    [ OneOfTheseWillBeSaidOnce : words]
                ]
            ]
            
            // let err: Error! = lmGenerator.generateLanguageModel(from: words, withFilesNamed: name, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
            
            // let err: Error! = lmGenerator.generateGrammar(from: grammar, withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
            
            let err: Error! = lmGenerator.generateFastGrammar(from: grammar, withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
            
            var lmPath = ""
            var dictPath = ""
            
            if(err != nil) {
                print("Error while creating initial language model: \(err)")
            } else {
                // lmPath = lmGenerator.pathToSuccessfullyGeneratedLanguageModel(withRequestedName: fileName)
                lmPath = lmGenerator.pathToSuccessfullyGeneratedRuleORamaRuleset(withRequestedName: fileName)
                dictPath = lmGenerator.pathToSuccessfullyGeneratedDictionary(withRequestedName: fileName)
            }
            
            do {
                try OEPocketsphinxController.sharedInstance().setActive(true) // Setting the shared OEPocketsphinxController active is necessary before any of its properties are accessed.
            } catch {
                print("Error: it wasn't possible to set the shared instance to active: \"\(error)\"")
            }
            
            OEPocketsphinxController.sharedInstance().vadThreshold = 3.2
            OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName), languageModelIsJSGF: true)
        }
    
        override func didReceiveMemoryWarning() {
            super.didReceiveMemoryWarning()
            // Dispose of any resources that can be recreated.
        }
        
        func pocketsphinxDidReceiveHypothesis(_ hypothesis: String!, recognitionScore: String!, utteranceID: String!) { // Something was heard
            print("Local callback: The received hypothesis is \(hypothesis!) with a score of \(recognitionScore!) and an ID of \(utteranceID!)")
        }
        
        // An optional delegate method of OEEventsObserver which informs that the Pocketsphinx recognition loop has entered its actual loop.
        // This might be useful in debugging a conflict between another sound class and Pocketsphinx.
        func pocketsphinxRecognitionLoopDidStart() {
            print("Local callback: Pocketsphinx started.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is now listening for speech.
        func pocketsphinxDidStartListening() {
            print("Local callback: Pocketsphinx is now listening.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected speech and is starting to process it.
        func pocketsphinxDidDetectSpeech() {
            print("Local callback: Pocketsphinx has detected speech.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected a second of silence, indicating the end of an utterance.
        func pocketsphinxDidDetectFinishedSpeech() {
            print("Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx has exited its recognition loop, most
        // likely in response to the OEPocketsphinxController being told to stop listening via the stopListening method.
        func pocketsphinxDidStopListening() {
            print("Local callback: Pocketsphinx has stopped listening.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop but it is not
        // Going to react to speech until listening is resumed.  This can happen as a result of Flite speech being
        // in progress on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
        // or as a result of the OEPocketsphinxController being told to suspend recognition via the suspendRecognition method.
        func pocketsphinxDidSuspendRecognition() {
            print("Local callback: Pocketsphinx has suspended recognition.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop and after recognition
        // having been suspended it is now resuming.  This can happen as a result of Flite speech completing
        // on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
        // or as a result of the OEPocketsphinxController being told to resume recognition via the resumeRecognition method.
        func pocketsphinxDidResumeRecognition() {
            print("Local callback: Pocketsphinx has resumed recognition.") // Log it.
        }
        
        // An optional delegate method which informs that Pocketsphinx switched over to a new language model at the given URL in the course of
        // recognition. This does not imply that it is a valid file or that recognition will be successful using the file.
        func pocketsphinxDidChangeLanguageModel(toFile newLanguageModelPathAsString: String!, andDictionary newDictionaryPathAsString: String!) {
            
            print("Local callback: Pocketsphinx is now using the following language model: \n\(newLanguageModelPathAsString!) and the following dictionary: \(newDictionaryPathAsString!)")
        }
        
        // An optional delegate method of OEEventsObserver which informs that Flite is speaking, most likely to be useful if debugging a
        // complex interaction between sound classes. You don't have to do anything yourself in order to prevent Pocketsphinx from listening to Flite talk and trying to recognize the speech.
        func fliteDidStartSpeaking() {
            print("Local callback: Flite has started speaking") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Flite is finished speaking, most likely to be useful if debugging a
        // complex interaction between sound classes.
        func fliteDidFinishSpeaking() {
            print("Local callback: Flite has finished speaking") // Log it.
        }
        
        func pocketSphinxContinuousSetupDidFail(withReason reasonForFailure: String!) { // This can let you know that something went wrong with the recognition loop startup. Turn on [OELogging startOpenEarsLogging] to learn why.
            print("Local callback: Setting up the continuous recognition loop has failed for the reason \(reasonForFailure), please turn on OELogging.startOpenEarsLogging() to learn more.") // Log it.
        }
        
        func pocketSphinxContinuousTeardownDidFail(withReason reasonForFailure: String!) { // This can let you know that something went wrong with the recognition loop startup. Turn on OELogging.startOpenEarsLogging() to learn why.
            print("Local callback: Tearing down the continuous recognition loop has failed for the reason \(reasonForFailure)") // Log it.
        }
        
        /** Pocketsphinx couldn't start because it has no mic permissions (will only be returned on iOS7 or later).*/
        func pocketsphinxFailedNoMicPermissions() {
            print("Local callback: The user has never set mic permissions or denied permission to this app's mic, so listening will not start.")
        }
        
        /** The user prompt to get mic permissions, or a check of the mic permissions, has completed with a true or a false result  (will only be returned on iOS7 or later).*/
        
        func micPermissionCheckCompleted(withResult: Bool) {
            print("Local callback: mic check completed.")
        }
    }
    #1032407
    iKK
    Participant

    RuleORama error log:

    2018-04-26 17:02:12.467180+0200 TestOpenEars[1005:277279] Starting OpenEars logging for OpenEars version 2.506 on 64-bit device (or build): iPhone running iOS version: 11.300000
    2018-04-26 17:02:12.468228+0200 TestOpenEars[1005:277279] Creating shared instance of OEPocketsphinxController
    2018-04-26 17:02:12.484721+0200 TestOpenEars[1005:277279] RuleORama version 2.502000
    2018-04-26 17:02:12.498086+0200 TestOpenEars[1005:277279] Since there is no cached version, loading the language model lookup list for the acoustic model called AcousticModelGerman
    2018-04-26 17:02:12.502769+0200 TestOpenEars[1005:277279] Since there is no cached version, loading the g2p model for the acoustic model called AcousticModelGerman
    2018-04-26 17:02:12.535249+0200 TestOpenEars[1005:277279] The word do was not found in the dictionary of the acoustic model /var/containers/Bundle/Application/19DFCFC8-F32E-4454-87D3-F5960BF22F97/TestOpenEars.app/AcousticModelGerman.bundle. Now using the fallback method to look it up. If this is happening more frequently than you would expect, likely causes can be that you are entering words in another language from the one you are recognizing, or that there are symbols (including numbers) that need to be spelled out or cleaned up, or you are using your own acoustic model and there is an issue with either its phonetic dictionary or it lacks a g2p file. Please get in touch at the forums for assistance with the last two possible issues.
    2018-04-26 17:02:12.535454+0200 TestOpenEars[1005:277279] the graphemes "d oo" were created for the word do using the fallback method.
    2018-04-26 17:02:12.538121+0200 TestOpenEars[1005:277279] The word esch was not found in the dictionary of the acoustic model /var/containers/Bundle/Application/19DFCFC8-F32E-4454-87D3-F5960BF22F97/TestOpenEars.app/AcousticModelGerman.bundle. Now using the fallback method to look it up. If this is happening more frequently than you would expect, likely causes can be that you are entering words in another language from the one you are recognizing, or that there are symbols (including numbers) that need to be spelled out or cleaned up, or you are using your own acoustic model and there is an issue with either its phonetic dictionary or it lacks a g2p file. Please get in touch at the forums for assistance with the last two possible issues.
    2018-04-26 17:02:12.538570+0200 TestOpenEars[1005:277279] the graphemes "@ ss" were created for the word esch using the fallback method.
    2018-04-26 17:02:12.541126+0200 TestOpenEars[1005:277279] The word frey was not found in the dictionary of the acoustic model /var/containers/Bundle/Application/19DFCFC8-F32E-4454-87D3-F5960BF22F97/TestOpenEars.app/AcousticModelGerman.bundle. Now using the fallback method to look it up. If this is happening more frequently than you would expect, likely causes can be that you are entering words in another language from the one you are recognizing, or that there are symbols (including numbers) that need to be spelled out or cleaned up, or you are using your own acoustic model and there is an issue with either its phonetic dictionary or it lacks a g2p file. Please get in touch at the forums for assistance with the last two possible issues.
    2018-04-26 17:02:12.541368+0200 TestOpenEars[1005:277279] the graphemes "f r @ ii" were created for the word frey using the fallback method.
    2018-04-26 17:02:12.543943+0200 TestOpenEars[1005:277279] The word no was not found in the dictionary of the acoustic model /var/containers/Bundle/Application/19DFCFC8-F32E-4454-87D3-F5960BF22F97/TestOpenEars.app/AcousticModelGerman.bundle. Now using the fallback method to look it up. If this is happening more frequently than you would expect, likely causes can be that you are entering words in another language from the one you are recognizing, or that there are symbols (including numbers) that need to be spelled out or cleaned up, or you are using your own acoustic model and there is an issue with either its phonetic dictionary or it lacks a g2p file. Please get in touch at the forums for assistance with the last two possible issues.
    2018-04-26 17:02:12.544061+0200 TestOpenEars[1005:277279] the graphemes "n oo" were created for the word no using the fallback method.
    2018-04-26 17:02:12.544117+0200 TestOpenEars[1005:277279] I'm done running performDictionaryLookup and it took 0.041384 seconds
    2018-04-26 17:02:12.564690+0200 TestOpenEars[1005:277279] Starting dynamic language model generation
    
    INFO: ngram_model_arpa_legacy.c(504): ngrams 1=3, 2=0, 3=0
    INFO: ngram_model_arpa_legacy.c(136): Reading unigrams
    INFO: ngram_model_arpa_legacy.c(543):        3 = #unigrams created
    INFO: ngram_model_dmp_legacy.c(521): Building DMP model...
    INFO: ngram_model_dmp_legacy.c(551):        3 = #unigrams created
    2018-04-26 17:02:12.589480+0200 TestOpenEars[1005:277279] Done creating language model with CMUCLMTK in 0.024742 seconds.
    2018-04-26 17:02:12.592952+0200 TestOpenEars[1005:277279] Generating fast grammar took 0.095226 seconds
    INFO: ngram_model_trie.c(424): Trying to read LM in bin format
    INFO: ngram_model_trie.c(457): Header doesn't match
    INFO: ngram_model_trie.c(180): Trying to read LM in arpa format
    INFO: ngram_model_trie.c(218): LM of order 1
    INFO: ngram_model_trie.c(220): #1-grams: 3
    2018-04-26 17:02:12.596542+0200 TestOpenEars[1005:277279] Attempting to start listening session from startListeningWithLanguageModelAtPath:
    2018-04-26 17:02:12.596608+0200 TestOpenEars[1005:277279] Error: you have invoked the method:
    
    startListeningWithLanguageModelAtPath:(NSString *)languageModelPath dictionaryAtPath:(NSString *)dictionaryPath acousticModelAtPath:(NSString *)acousticModelPath languageModelIsJSGF:(BOOL)languageModelIsJSGF
    
    with a languageModelPath which is nil. If your call to OELanguageModelGenerator did not return an error when you generated this grammar, that means the correct path to your grammar that you should pass to this method's languageModelPath argument is as follows:
    
    NSString *correctPathToMyLanguageModelFile = [myLanguageModelGenerator pathToSuccessfullyGeneratedGrammarWithRequestedName:@"TheNameIChoseForMyVocabulary"];
    
    Feel free to copy and paste this code for your path to your grammar, but remember to replace the part that says "TheNameIChoseForMyVocabulary" with the name you actually chose for your grammar or you will get this error again (and replace myLanguageModelGenerator with the name of your OELanguageModelGenerator instance). Since this file is required, expect an exception or undocumented behavior shortly.
    #1032408
    Halle Winkler
    Politepix

    Regarding your Rejecto results: you can now pick whichever one of them is better and experiment with raising or reducing the withWeight value in this line (the lowest possible value is 0.1 and the largest possible value is 1.9):

    let err: Error! = lmGenerator.generateRejectingLanguageModel(from: words, withFilesNamed: fileName, withOptionalExclusions: nil, usingVowelsOnly: false, withWeight: 1.0, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
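
    For example, a sketch (illustrative only, not from this implementation) of the same call with a raised weight – 1.5 is an arbitrary value chosen just to show the change; whether raising or lowering helps has to be determined empirically:

    // Sketch: the same Rejecto call with a non-default weight.
    // 1.0 is the default (as the log above noted); valid values are 0.1–1.9.
    let err: Error! = lmGenerator.generateRejectingLanguageModel(
        from: words,
        withFilesNamed: fileName,
        withOptionalExclusions: nil,
        usingVowelsOnly: false,
        withWeight: 1.5, // experiment across the 0.1–1.9 range
        forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))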

    What does the symbol “@” represent in the LookupList.text? (The double ee’s and double ii’s I can somehow interpret, but what does “@” really mean?)

    It represents the Hochdeutsch phone which the IPA writes as ə (schwa). This is getting outside of the things I support here, but there should be enough info in that explanation for you to find sources outside of these forums to continue your investigation if you have further questions.

    #1032410
    Halle Winkler
    Politepix

    The first thing in this RuleORama implementation to fix is again that this:

    
    OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName), languageModelIsJSGF: true)
    

    needs to be this:

    
    OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName), languageModelIsJSGF: false)
    

    There may be other issues but let’s start there.

    #1032411
    iKK
    Participant

    Thank you for the explanation – I will investigate the withWeight parameter within “0.1 < withWeight < 1.9”, and also play with the two LookupList.text suggestions (maybe experimenting with those a little as well, to understand their effects)…

    #1032412
    iKK
    Participant

    Thank you – RuleORama is now performing as well!

    Unfortunately, it still has many false negatives.

    Also, one thing I don’t understand is that it often responds with having recognized the sentence in question several times, as can be seen in this log excerpt:

    2018-04-26 17:12:25.815209+0200 TestOpenEars[1012:281905] Pocketsphinx heard "esch do no frey esch do no frey esch do no frey esch do no frey esch do no frey" with a score of (-130134) and an utterance ID of 19.
    Local callback: The received hypothesis is esch do no frey esch do no frey esch do no frey esch do no frey esch do no frey with a score of -130134 and an ID of 19

    What could this be? I.e., why does the hypothesis contain our sentence in question this many times?

    #1032413
    iKK
    Participant

    For both solutions (Rejecto and RuleORama), my question to you: are you able to interpret the logs in order to tune one or the other solution even further? I am completely lost as to what to tune here, since the logs look very cryptic to me.

    If yes, which examples should I post? (Positive, false-positive, or false-negative ones?)

    #1032414
    Halle Winkler
    Politepix

    Yeah, that makes a certain amount of sense, because this use case is very borderline for RuleORama – it isn’t usually great with a rule that has a single entry, and the other elements of this case that push the limits of what is likely to work are probably making it worse. We can shelve the investigation of RuleORama now that we have seen a result from it.
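
    If you nevertheless want to act on such repeated hypotheses at the app level, a minimal sketch (a workaround assumption on my part, not a RuleORama feature) is to treat any hypothesis containing the phrase as a single detection inside the existing delegate callback:

    // Sketch (app-level workaround, assumption): collapse repetitions of the
    // command within one hypothesis into a single detection.
    func pocketsphinxDidReceiveHypothesis(_ hypothesis: String!, recognitionScore: String!, utteranceID: String!) {
        let command = "esch do no frey"
        if hypothesis.contains(command) { // "x x x" and "x" both count once
            print("Command detected (score \(recognitionScore!), utterance \(utteranceID!))")
        }
    }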

    #1032415
    iKK
    Participant

    OK – let’s continue with Rejecto!

    #1032416
    Halle Winkler
    Politepix

    I’ve recommended what is possible to tune for Rejecto; there is nothing else. If it isn’t doing well yet, this is likely just due to it being a different language. You can also tune vadThreshold, but I recommended doing that at the start, so I am assuming it is correct now.

    #1032417
    Halle Winkler
    Politepix

    Have we ever seen a fully-working result from your original grammar implementation without a plugin since we fixed the grammar?

    #1032418
    iKK
    Participant

    With Rejecto: what I don’t understand is why the ending of my sentence in question does not seem to matter at all. I.e., whether I speak “eschdonofrey” or “eschdonoAnything” makes no difference – the sentence is still recognized (which leads to so many false positives!).

    Do you have any idea how to improve this ending problem with my sentence in question?

    #1032419
    Halle Winkler
    Politepix

    This is because there are multiple things about this case which are a problem for ideal recognition with these tools: it has high uncertainty because it is a different language, and language models aren’t designed to work with a single word. I expect changing the weight to affect this outcome, but if it doesn’t, that is the answer as to whether this approach will work.
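
    Another app-level experiment (an assumption on my part, and note that pocketsphinx recognition scores are not calibrated confidence values) would be to discard hypotheses whose score falls below an empirically chosen threshold:

    // Sketch (assumption): ignore weak hypotheses. The threshold -120000 is
    // purely illustrative (the log above showed a score of -130134); it would
    // have to be calibrated per app by comparing scores of true and false matches.
    func pocketsphinxDidReceiveHypothesis(_ hypothesis: String!, recognitionScore: String!, utteranceID: String!) {
        guard let score = Int(recognitionScore) else { return }
        if score > -120000 {
            print("Accepted: \(hypothesis!)")
        }
    }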

    #1032420
    iKK
    Participant

    With OpenEars only (the original grammar without any plugin):

    I get false positives and false negatives. I feel that it does a tiny bit better than Rejecto with regard to false negatives, but I would have to test much more thoroughly.

    I suggest that I post more logs for each of the solutions (since, again, these logs are very cryptic to me).

    But let me first post the grammar creation code of the non-plugin OpenEars-only solution, so that everything is mentioned here.

    #1032421
    iKK
    Participant

    Grammar creation code for the non-plugin OpenEars solution:

    import UIKit
    
    class ViewController: UIViewController, OEEventsObserverDelegate {
        
        var openEarsEventsObserver = OEEventsObserver()
    
        override func viewDidLoad() {
            super.viewDidLoad()
            // Do any additional setup after loading the view, typically from a nib.
            
            // ************* Necessary for logging **************************
            OELogging.startOpenEarsLogging() // Receive full OpenEars logging in case of any unexpected results.
            OEPocketsphinxController.sharedInstance().verbosePocketSphinx = true
            // ************* Necessary for logging **************************
            
            self.openEarsEventsObserver.delegate = self
            
            let lmGenerator = OELanguageModelGenerator()
            let acousticModelName = "AcousticModelGerman"
            let fileName = "GermanModel"
            
            let words = ["esch do no frey"]
            
            let grammar = [
                ThisWillBeSaidOnce : [
                    [ OneOfTheseWillBeSaidOnce : words]
                ]
            ]
            
            // let err: Error! = lmGenerator.generateLanguageModel(from: words, withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: acousticModelName))
            
            let err: Error! = lmGenerator.generateGrammar(from: grammar, withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: acousticModelName))
            
            var lmPath = ""
            var dictPath = ""
            
            if err != nil {
                print("Error while creating initial language model: \(err)")
                return // Without a generated grammar and dictionary there is nothing to listen with.
            } else {
                lmPath = lmGenerator.pathToSuccessfullyGeneratedGrammar(withRequestedName: fileName)
                dictPath = lmGenerator.pathToSuccessfullyGeneratedDictionary(withRequestedName: fileName)
            }
            
            do {
                try OEPocketsphinxController.sharedInstance().setActive(true) // Setting the shared OEPocketsphinxController active is necessary before any of its properties are accessed.
            } catch {
                print("Error: it wasn't possible to set the shared instance to active: \"\(error)\"")
            }
            
            OEPocketsphinxController.sharedInstance().vadThreshold = 3.2
            OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: acousticModelName), languageModelIsJSGF: true)
        }
    
        override func didReceiveMemoryWarning() {
            super.didReceiveMemoryWarning()
            // Dispose of any resources that can be recreated.
        }
        
        func pocketsphinxDidReceiveHypothesis(_ hypothesis: String!, recognitionScore: String!, utteranceID: String!) { // Something was heard
            print("Local callback: The received hypothesis is \(hypothesis!) with a score of \(recognitionScore!) and an ID of \(utteranceID!)")
        }
        
        // An optional delegate method of OEEventsObserver which informs that the Pocketsphinx recognition loop has entered its actual loop.
        // This might be useful in debugging a conflict between another sound class and Pocketsphinx.
        func pocketsphinxRecognitionLoopDidStart() {
            print("Local callback: Pocketsphinx started.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is now listening for speech.
        func pocketsphinxDidStartListening() {
            print("Local callback: Pocketsphinx is now listening.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected speech and is starting to process it.
        func pocketsphinxDidDetectSpeech() {
            print("Local callback: Pocketsphinx has detected speech.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx detected a second of silence, indicating the end of an utterance.
        func pocketsphinxDidDetectFinishedSpeech() {
            print("Local callback: Pocketsphinx has detected a second of silence, concluding an utterance.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx has exited its recognition loop, most
        // likely in response to the OEPocketsphinxController being told to stop listening via the stopListening method.
        func pocketsphinxDidStopListening() {
            print("Local callback: Pocketsphinx has stopped listening.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop but it is not
        // Going to react to speech until listening is resumed.  This can happen as a result of Flite speech being
        // in progress on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
        // or as a result of the OEPocketsphinxController being told to suspend recognition via the suspendRecognition method.
        func pocketsphinxDidSuspendRecognition() {
            print("Local callback: Pocketsphinx has suspended recognition.") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Pocketsphinx is still in its listening loop and after recognition
        // having been suspended it is now resuming.  This can happen as a result of Flite speech completing
        // on an audio route that doesn't support simultaneous Flite speech and Pocketsphinx recognition,
        // or as a result of the OEPocketsphinxController being told to resume recognition via the resumeRecognition method.
        func pocketsphinxDidResumeRecognition() {
            print("Local callback: Pocketsphinx has resumed recognition.") // Log it.
        }
        
        // An optional delegate method which informs that Pocketsphinx switched over to a new language model at the given URL in the course of
        // recognition. This does not imply that it is a valid file or that recognition will be successful using the file.
        func pocketsphinxDidChangeLanguageModel(toFile newLanguageModelPathAsString: String!, andDictionary newDictionaryPathAsString: String!) {
            
            print("Local callback: Pocketsphinx is now using the following language model: \n\(newLanguageModelPathAsString!) and the following dictionary: \(newDictionaryPathAsString!)")
        }
        
        // An optional delegate method of OEEventsObserver which informs that Flite is speaking, most likely to be useful if debugging a
        // complex interaction between sound classes. You don't have to do anything yourself in order to prevent Pocketsphinx from listening to Flite talk and trying to recognize the speech.
        func fliteDidStartSpeaking() {
            print("Local callback: Flite has started speaking") // Log it.
        }
        
        // An optional delegate method of OEEventsObserver which informs that Flite is finished speaking, most likely to be useful if debugging a
        // complex interaction between sound classes.
        func fliteDidFinishSpeaking() {
            print("Local callback: Flite has finished speaking") // Log it.
        }
        
        func pocketSphinxContinuousSetupDidFail(withReason reasonForFailure: String!) { // This can let you know that something went wrong with the recognition loop startup. Turn on OELogging.startOpenEarsLogging() to learn why.
            print("Local callback: Setting up the continuous recognition loop has failed for the reason \(reasonForFailure!), please turn on OELogging.startOpenEarsLogging() to learn more.") // Log it.
        }
        
        func pocketSphinxContinuousTeardownDidFail(withReason reasonForFailure: String!) { // This can let you know that something went wrong with the recognition loop startup. Turn on OELogging.startOpenEarsLogging() to learn why.
            print("Local callback: Tearing down the continuous recognition loop has failed for the reason \(reasonForFailure)") // Log it.
        }
        
        /** Pocketsphinx couldn't start because it has no mic permissions (will only be returned on iOS7 or later).*/
        func pocketsphinxFailedNoMicPermissions() {
            print("Local callback: The user has never set mic permissions or denied permission to this app's mic, so listening will not start.")
        }
        
        /** The user prompt to get mic permissions, or a check of the mic permissions, has completed with a true or a false result  (will only be returned on iOS7 or later).*/
        
        func micPermissionCheckCompleted(withResult result: Bool) {
            print("Local callback: mic check completed with result \(result).")
        }
    }
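
    A variation a reader might experiment with on the grammar above – a decoy-phrase sketch, not something recommended in this thread: give OneOfTheseWillBeSaidOnce a contrasting alternative, so that an utterance with a different ending has a competing path instead of being forced onto the only phrase in the grammar:

    // Sketch only: a decoy entry next to the real command. The decoy text is
    // invented for illustration and would itself need testing.
    let words = ["esch do no frey", "esch do no"]
    let grammar = [
        ThisWillBeSaidOnce : [
            [ OneOfTheseWillBeSaidOnce : words]
        ]
    ]
    // In pocketsphinxDidReceiveHypothesis, react only to the exact hypothesis
    // "esch do no frey" and silently ignore the decoy.
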
    #1032422
    Halle Winkler
    Politepix

    I think that if your original grammar implementation doesn’t raise any errors and is returning output that you can evaluate, we have explored everything that is within the realm of supportable troubleshooting here. So I am going to close this as answered: we have explored all of the topics which have come up here at substantial length, and I think there should be enough for you to examine further outside of an ongoing troubleshooting process with me.

    If you have very specific questions later on (I mean questions about a single aspect of a single implementation with a single acoustic model), it’s OK to start very focused new topics. Just please create a fresh implementation that you are comfortable sharing things about and that you are sure doesn’t have accumulated code from different implementations, and remember to share the info here without prompting from me so the questions don’t get closed. Thanks and good luck!

    • This reply was modified 5 years, 12 months ago by Halle Winkler.
  • The topic ‘Recognize short Command in nonEnglish’ is closed to new replies.