morchella

Forum Replies Created


morchella
    Participant

    ################

    DEVICE: Powerbeats 2 Wireless (no model #)
    APPLE DEVICE: iPhone 5s
    iOS VERSION: 10.0
    OPEN EARS VERSION: 2.5

    BEHAVIOR: OE does not accept input from device.

    FIX: Set disablePreferredBufferSize = YES.

    ################

    DEVICE: Beats Solo 2 Wireless (no model #)
    APPLE DEVICE: iPhone 5s
    iOS VERSION: 10.0
    OPEN EARS VERSION: 2.5

    BEHAVIOR: OE does not accept input from device.

    FIX: Set disablePreferredBufferSize = YES.

    ################

    DEVICE: Jabra Classic (Model OTE15)
    APPLE DEVICE: iPhone 5s
    iOS VERSION: 10.0
    OPEN EARS VERSION: 2.5

    BEHAVIOR: OE does not accept input from device.

    FIX: Set disablePreferredBufferSize = YES.
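
    In code, the workaround is one line, set on the shared controller before listening starts (a minimal sketch against the OpenEars 2.5 API, per the FIX lines above):

    // Let the audio session keep its own I/O buffer duration instead of the
    // one OpenEars would otherwise request; this restored Bluetooth input on
    // all three headsets above.
    [OEPocketsphinxController sharedInstance].disablePreferredBufferSize = YES;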

    morchella
    Participant

Halle, thanks for those pointers. I’ve now got Bluetooth working (in the sample app) on all three devices that I own: Jabra Classic, Powerbeats, and Beats Solo. All that was needed was to disable the preferred buffer size. (I haven’t done this in my own app, because I’m not yet on 2.5.)

This leads to various thoughts about how best to adapt to Bluetooth when it is present. To do that well, it would be great if you could share your experience: which headsets have you tested, and what were the results? From reading the FAQ, it sounds like disabling the preferred buffer size isn’t always what does the trick?

I’d also be very interested to know just how much impact disabling these settings has on OE performance.

Does Politepix have any way to host a wiki where the community could share device test results? Perhaps collectively we could do what isn’t reasonable for you to do alone.

I totally understand the impossibility of offering full Bluetooth support, given the diversity of devices out in the wild. But at the same time, from my users’ point of view, when the app fails to work with their headset, it’s the app’s fault. They say, “My headset works with everything else; this app is just too buggy. Delete.” That’s why I believe we have to make a best effort to support Bluetooth, even if it will never be 100% and even if you, quite understandably, can’t devote much time to it.
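
    To make that adaptation concrete, the sort of check I have in mind looks like this (a sketch using plain AVAudioSession rather than any OpenEars API; BluetoothInputIsAvailable is a hypothetical helper name):

    #import <AVFoundation/AVFoundation.h>

    // Returns YES when a Bluetooth HFP mic is among the available inputs.
    // HFP is the profile that carries mic audio; A2DP is output-only.
    static BOOL BluetoothInputIsAvailable(void) {
        for (AVAudioSessionPortDescription *port in [[AVAudioSession sharedInstance] availableInputs]) {
            if ([port.portType isEqualToString:AVAudioSessionPortBluetoothHFP]) {
                return YES;
            }
        }
        return NO;
    }

    An app could use a check like this to decide whether to apply the buffer-size workaround before starting listening.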

    in reply to: OEContinuousModel unrecognized selector #1027478
    morchella
    Participant

    Okay, great! I upgraded to 2.04 and the problem seems fixed. Thanks, Halle!

    in reply to: OEContinuousModel unrecognized selector #1027476
    morchella
    Participant

    I’m on RapidEars 2.0, which is the last version that I received a download link for. (Dec 2014)

    in reply to: Open Ears/Rapid Ears 2.0 + Bluetooth #1023353
    morchella
    Participant

    Halle, thanks for these thoughtful and excellent suggestions! I have to focus on other code for a bit, but will be revisiting the bluetooth issue as time permits. I’ll keep you posted as I learn more.

    in reply to: Open Ears/Rapid Ears 2.0 + Bluetooth #1023341
    morchella
    Participant

I did a fresh install of the sample app. I uncommented the two logging lines, but otherwise ran it as is. (For some reason, in the sample app, the logging of the current route is truncated? In my app, it prints out full port descriptions, but here it shows only ---BluetoothHFPBluetoothHFP---.) Full logs are below, after a sketch of how I print the route in my own app.
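
    This is roughly how I dump the route (a minimal sketch against plain AVAudioSession; LogCurrentAudioRoute is my own hypothetical helper, not part of the sample app):

    #import <AVFoundation/AVFoundation.h>

    // Print every input and output port of the current audio route,
    // instead of the concatenated route name.
    static void LogCurrentAudioRoute(void) {
        AVAudioSessionRouteDescription *route = [[AVAudioSession sharedInstance] currentRoute];
        for (AVAudioSessionPortDescription *port in route.inputs) {
            NSLog(@"Input: type = %@; name = %@; UID = %@", port.portType, port.portName, port.UID);
        }
        for (AVAudioSessionPortDescription *port in route.outputs) {
            NSLog(@"Output: type = %@; name = %@; UID = %@", port.portType, port.portName, port.UID);
        }
    }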

    2014-12-09 13:05:34.389 OpenEarsSampleApp[451:85137] Starting OpenEars logging for OpenEars version 2.0 on 32-bit device (or build): iPhone running iOS version: 8.100000
    2014-12-09 13:05:34.392 OpenEarsSampleApp[451:85137] Creating shared instance of OEPocketsphinxController
    2014-12-09 13:05:34.431 OpenEarsSampleApp[451:85137] Starting dynamic language model generation

    INFO: cmd_ln.c(702): Parsing command line:
    sphinx_lm_convert \
    -i /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/FirstOpenEarsDynamicLanguageModel.arpa \
    -o /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/FirstOpenEarsDynamicLanguageModel.DMP

    Current configuration:
    [NAME] [DEFLT] [VALUE]
    -case
    -debug 0
    -help no no
    -i /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/FirstOpenEarsDynamicLanguageModel.arpa
    -ienc
    -ifmt
    -logbase 1.0001 1.000100e+00
    -mmap no no
    -o /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/FirstOpenEarsDynamicLanguageModel.DMP
    -oenc utf8 utf8
    -ofmt

    INFO: ngram_model_arpa.c(504): ngrams 1=10, 2=16, 3=8
    INFO: ngram_model_arpa.c(137): Reading unigrams
    INFO: ngram_model_arpa.c(543): 10 = #unigrams created
    INFO: ngram_model_arpa.c(197): Reading bigrams
    INFO: ngram_model_arpa.c(561): 16 = #bigrams created
    INFO: ngram_model_arpa.c(562): 3 = #prob2 entries
    INFO: ngram_model_arpa.c(570): 3 = #bo_wt2 entries
    INFO: ngram_model_arpa.c(294): Reading trigrams
    INFO: ngram_model_arpa.c(583): 8 = #trigrams created
    INFO: ngram_model_arpa.c(584): 2 = #prob3 entries
INFO: ngram_model_dmp.c(518): Building DMP model...
    INFO: ngram_model_dmp.c(548): 10 = #unigrams created
    INFO: ngram_model_dmp.c(649): 16 = #bigrams created
    INFO: ngram_model_dmp.c(650): 3 = #prob2 entries
    INFO: ngram_model_dmp.c(657): 3 = #bo_wt2 entries
    INFO: ngram_model_dmp.c(661): 8 = #trigrams created
    INFO: ngram_model_dmp.c(662): 2 = #prob3 entries
    2014-12-09 13:05:34.498 OpenEarsSampleApp[451:85137] Done creating language model with CMUCLMTK in 0.066862 seconds.
2014-12-09 13:05:34.602 OpenEarsSampleApp[451:85137] I'm done running performDictionaryLookup and it took 0.075391 seconds
2014-12-09 13:05:34.609 OpenEarsSampleApp[451:85137] I'm done running dynamic language model generation and it took 0.210020 seconds
    2014-12-09 13:05:34.615 OpenEarsSampleApp[451:85137] Starting dynamic language model generation

    INFO: cmd_ln.c(702): Parsing command line:
    sphinx_lm_convert \
    -i /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/SecondOpenEarsDynamicLanguageModel.arpa \
    -o /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/SecondOpenEarsDynamicLanguageModel.DMP

    Current configuration:
    [NAME] [DEFLT] [VALUE]
    -case
    -debug 0
    -help no no
    -i /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/SecondOpenEarsDynamicLanguageModel.arpa
    -ienc
    -ifmt
    -logbase 1.0001 1.000100e+00
    -mmap no no
    -o /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/SecondOpenEarsDynamicLanguageModel.DMP
    -oenc utf8 utf8
    -ofmt

    INFO: ngram_model_arpa.c(504): ngrams 1=12, 2=19, 3=10
    INFO: ngram_model_arpa.c(137): Reading unigrams
    INFO: ngram_model_arpa.c(543): 12 = #unigrams created
    INFO: ngram_model_arpa.c(197): Reading bigrams
    INFO: ngram_model_arpa.c(561): 19 = #bigrams created
    INFO: ngram_model_arpa.c(562): 3 = #prob2 entries
    INFO: ngram_model_arpa.c(570): 3 = #bo_wt2 entries
    INFO: ngram_model_arpa.c(294): Reading trigrams
    INFO: ngram_model_arpa.c(583): 10 = #trigrams created
    INFO: ngram_model_arpa.c(584): 2 = #prob3 entries
INFO: ngram_model_dmp.c(518): Building DMP model...
    INFO: ngram_model_dmp.c(548): 12 = #unigrams created
    INFO: ngram_model_dmp.c(649): 19 = #bigrams created
    INFO: ngram_model_dmp.c(650): 3 = #prob2 entries
    INFO: ngram_model_dmp.c(657): 3 = #bo_wt2 entries
    INFO: ngram_model_dmp.c(661): 10 = #trigrams created
    INFO: ngram_model_dmp.c(662): 2 = #prob3 entries
    2014-12-09 13:05:34.682 OpenEarsSampleApp[451:85137] Done creating language model with CMUCLMTK in 0.066150 seconds.
    2014-12-09 13:05:34.764 OpenEarsSampleApp[451:85137] The word QUIDNUNC was not found in the dictionary /private/var/mobile/Containers/Bundle/Application/63660436-E1F7-4E9E-965F-E724ADF24D5A/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/LanguageModelGeneratorLookupList.text/LanguageModelGeneratorLookupList.text.
    2014-12-09 13:05:34.765 OpenEarsSampleApp[451:85137] Now using the fallback method to look up the word QUIDNUNC
2014-12-09 13:05:34.765 OpenEarsSampleApp[451:85137] If this is happening more frequently than you would expect, the most likely cause for it is since you are using the English phonetic lookup dictionary is that your words are not in English or aren't dictionary words, or that you are submitting the words in lowercase when they need to be entirely written in uppercase.
2014-12-09 13:05:34.766 OpenEarsSampleApp[451:85137] Using convertGraphemes for the word or phrase QUIDNUNC which doesn't appear in the dictionary
2014-12-09 13:05:34.814 OpenEarsSampleApp[451:85137] I'm done running performDictionaryLookup and it took 0.121312 seconds
2014-12-09 13:05:34.822 OpenEarsSampleApp[451:85137] I'm done running dynamic language model generation and it took 0.212430 seconds
    2014-12-09 13:05:34.823 OpenEarsSampleApp[451:85137]

    Welcome to the OpenEars sample project. This project understands the words:
    BACKWARD,
    CHANGE,
    FORWARD,
    GO,
    LEFT,
    MODEL,
    RIGHT,
    TURN,
and if you say "CHANGE MODEL" it will switch to its dynamically-generated model which understands the words:
    CHANGE,
    MODEL,
    MONDAY,
    TUESDAY,
    WEDNESDAY,
    THURSDAY,
    FRIDAY,
    SATURDAY,
    SUNDAY,
    QUIDNUNC
    2014-12-09 13:05:34.824 OpenEarsSampleApp[451:85137] Attempting to start listening session from startListeningWithLanguageModelAtPath:
    2014-12-09 13:05:34.832 OpenEarsSampleApp[451:85137] User gave mic permission for this app.
2014-12-09 13:05:34.833 OpenEarsSampleApp[451:85137] setSecondsOfSilence wasn't set, using default of 0.700000.
    2014-12-09 13:05:34.834 OpenEarsSampleApp[451:85137] Successfully started listening session from startListeningWithLanguageModelAtPath:
    2014-12-09 13:05:34.835 OpenEarsSampleApp[451:85152] Starting listening.
    2014-12-09 13:05:34.835 OpenEarsSampleApp[451:85152] about to set up audio session
    2014-12-09 13:05:34.884 OpenEarsSampleApp[451:85165] Audio route has changed for the following reason:
    2014-12-09 13:05:34.889 OpenEarsSampleApp[451:85165] There was a category change. The new category is AVAudioSessionCategoryPlayAndRecord
    2014-12-09 13:05:36.248 OpenEarsSampleApp[451:85152] done starting audio unit
    INFO: cmd_ln.c(702): Parsing command line:
    \
    -lm /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/FirstOpenEarsDynamicLanguageModel.DMP \
    -vad_threshold 1.500000 \
    -remove_noise yes \
    -remove_silence yes \
    -bestpath yes \
    -lw 6.500000 \
    -dict /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic \
    -hmm /private/var/mobile/Containers/Bundle/Application/63660436-E1F7-4E9E-965F-E724ADF24D5A/OpenEarsSampleApp.app/AcousticModelEnglish.bundle

    Current configuration:
    [NAME] [DEFLT] [VALUE]
    -agc none none
    -agcthresh 2.0 2.000000e+00
    -allphone
    -allphone_ci no no
    -alpha 0.97 9.700000e-01
    -argfile
    -ascale 20.0 2.000000e+01
    -aw 1 1
    -backtrace no no
    -beam 1e-48 1.000000e-48
    -bestpath yes yes
    -bestpathlw 9.5 9.500000e+00
    -bghist no no
    -ceplen 13 13
    -cmn current current
    -cmninit 8.0 8.0
    -compallsen no no
    -debug 0
    -dict /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic
    -dictcase no no
    -dither no no
    -doublebw no no
    -ds 1 1
    -fdict
    -feat 1s_c_d_dd 1s_c_d_dd
    -featparams
    -fillprob 1e-8 1.000000e-08
    -frate 100 100
    -fsg
    -fsgusealtpron yes yes
    -fsgusefiller yes yes
    -fwdflat yes yes
    -fwdflatbeam 1e-64 1.000000e-64
    -fwdflatefwid 4 4
    -fwdflatlw 8.5 8.500000e+00
    -fwdflatsfwin 25 25
    -fwdflatwbeam 7e-29 7.000000e-29
    -fwdtree yes yes
    -hmm /private/var/mobile/Containers/Bundle/Application/63660436-E1F7-4E9E-965F-E724ADF24D5A/OpenEarsSampleApp.app/AcousticModelEnglish.bundle
    -input_endian little little
    -jsgf
    -kdmaxbbi -1 -1
    -kdmaxdepth 0 0
    -kdtree
    -keyphrase
    -kws
    -kws_plp 1e-1 1.000000e-01
    -kws_threshold 1 1.000000e+00
    -latsize 5000 5000
    -lda
    -ldadim 0 0
    -lextreedump 0 0
    -lifter 0 0
-lm /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/FirstOpenEarsDynamicLanguageModel.DMP
2014-12-09 13:05:36.268 OpenEarsSampleApp[451:85165] This is not a case in which OpenEars notifies of a route change. At the close of this function, the new audio route is ---BluetoothHFPBluetoothHFP---. The previous route before changing to this route was <AVAudioSessionRouteDescription: 0x146a82f0,
inputs = (null);
outputs = (
"<AVAudioSessionPortDescription: 0x146a81f0, type = BluetoothA2DPOutput; name = Powerbeats Wireless; UID = 04:88:E2:37:55:15-tacl; selectedDataSource = (null)>"
)>.
    -lmctl
    -lmname
    -logbase 1.0001 1.000100e+00
    -logfn
    -logspec no no
    -lowerf 133.33334 1.333333e+02
    -lpbeam 1e-40 1.000000e-40
    -lponlybeam 7e-29 7.000000e-29
    -lw 6.5 6.500000e+00
    -maxhmmpf 10000 10000
    -maxnewoov 20 20
    -maxwpf -1 -1
    -mdef
    -mean
    -mfclogdir
    -min_endfr 0 0
    -mixw
    -mixwfloor 0.0000001 1.000000e-07
    -mllr
    -mmap yes yes
    -ncep 13 13
    -nfft 512 512
    -nfilt 40 40
    -nwpen 1.0 1.000000e+00
    -pbeam 1e-48 1.000000e-48
    -pip 1.0 1.000000e+00
    -pl_beam 1e-10 1.000000e-10
    -pl_pbeam 1e-5 1.000000e-05
    -pl_window 0 0
    -rawlogdir
    -remove_dc no no
    -remove_noise yes yes
    -remove_silence yes yes
    -round_filters yes yes
    -samprate 16000 1.600000e+04
    -seed -1 -1
    -sendump
    -senlogdir
    -senmgau
    -silprob 0.005 5.000000e-03
    -smoothspec no no
    -svspec
    -tmat
    -tmatfloor 0.0001 1.000000e-04
    -topn 4 4
    -topn_beam 0 0
    -toprule
    -transform legacy legacy
    -unit_area yes yes
    -upperf 6855.4976 6.855498e+03
    -usewdphones no no
    -uw 1.0 1.000000e+00
    -vad_postspeech 50 50
    -vad_prespeech 10 10
    -vad_threshold 2.0 1.500000e+00
    -var
    -varfloor 0.0001 1.000000e-04
    -varnorm no no
    -verbose no no
    -warp_params
    -warp_type inverse_linear inverse_linear
    -wbeam 7e-29 7.000000e-29
    -wip 0.65 6.500000e-01
    -wlen 0.025625 2.562500e-02

    INFO: cmd_ln.c(702): Parsing command line:
    \
    -nfilt 25 \
    -lowerf 130 \
    -upperf 6800 \
    -feat 1s_c_d_dd \
    -svspec 0-12/13-25/26-38 \
    -agc none \
    -cmn current \
    -varnorm no \
    -transform dct \
    -lifter 22 \
    -cmninit 40

    Current configuration:
    [NAME] [DEFLT] [VALUE]
    -agc none none
    -agcthresh 2.0 2.000000e+00
    -alpha 0.97 9.700000e-01
    -ceplen 13 13
    -cmn current current
    -cmninit 8.0 40
    -dither no no
    -doublebw no no
    -feat 1s_c_d_dd 1s_c_d_dd
    -frate 100 100
    -input_endian little little
    -lda
    -ldadim 0 0
    -lifter 0 22
    -logspec no no
    -lowerf 133.33334 1.300000e+02
    -ncep 13 13
    -nfft 512 512
    -nfilt 40 25
    -remove_dc no no
    -remove_noise yes yes
    -remove_silence yes yes
    -round_filters yes yes
    -samprate 16000 1.600000e+04
    -seed -1 -1
    -smoothspec no no
    -svspec 0-12/13-25/26-38
    -transform legacy dct
    -unit_area yes yes
    -upperf 6855.4976 6.800000e+03
    -vad_postspeech 50 50
    -vad_prespeech 10 10
    -vad_threshold 2.0 1.500000e+00
    -varnorm no no
    -verbose no no
    -warp_params
    -warp_type inverse_linear inverse_linear
    -wlen 0.025625 2.562500e-02

    INFO: acmod.c(252): Parsed model-specific feature parameters from /private/var/mobile/Containers/Bundle/Application/63660436-E1F7-4E9E-965F-E724ADF24D5A/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/feat.params
INFO: feat.c(715): Initializing feature stream to type: '1s_c_d_dd', ceplen=13, CMN='current', VARNORM='no', AGC='none'
    INFO: cmn.c(143): mean[0]= 12.00, mean[1..12]= 0.0
    INFO: acmod.c(171): Using subvector specification 0-12/13-25/26-38
    INFO: mdef.c(518): Reading model definition: /private/var/mobile/Containers/Bundle/Application/63660436-E1F7-4E9E-965F-E724ADF24D5A/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/mdef
    INFO: mdef.c(531): Found byte-order mark BMDF, assuming this is a binary mdef file
    INFO: bin_mdef.c(336): Reading binary model definition: /private/var/mobile/Containers/Bundle/Application/63660436-E1F7-4E9E-965F-E724ADF24D5A/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/mdef
    INFO: bin_mdef.c(516): 46 CI-phone, 168344 CD-phone, 3 emitstate/phone, 138 CI-sen, 6138 Sen, 32881 Sen-Seq
    INFO: tmat.c(206): Reading HMM transition probability matrices: /private/var/mobile/Containers/Bundle/Application/63660436-E1F7-4E9E-965F-E724ADF24D5A/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/transition_matrices
    INFO: acmod.c(124): Attempting to use SCHMM computation module
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /private/var/mobile/Containers/Bundle/Application/63660436-E1F7-4E9E-965F-E724ADF24D5A/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/means
    INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512x13
INFO: ms_gauden.c(294): 512x13
INFO: ms_gauden.c(294): 512x13
    INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /private/var/mobile/Containers/Bundle/Application/63660436-E1F7-4E9E-965F-E724ADF24D5A/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/variances
    INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 512x13
INFO: ms_gauden.c(294): 512x13
INFO: ms_gauden.c(294): 512x13
    INFO: ms_gauden.c(354): 0 variance values floored
    INFO: s2_semi_mgau.c(904): Loading senones from dump file /private/var/mobile/Containers/Bundle/Application/63660436-E1F7-4E9E-965F-E724ADF24D5A/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/sendump
    INFO: s2_semi_mgau.c(928): BEGIN FILE FORMAT DESCRIPTION
    INFO: s2_semi_mgau.c(991): Rows: 512, Columns: 6138
    INFO: s2_semi_mgau.c(1023): Using memory-mapped I/O for senones
    INFO: s2_semi_mgau.c(1294): Maximum top-N: 4 Top-N beams: 0 0 0
    INFO: dict.c(320): Allocating 4113 * 20 bytes (80 KiB) for word entries
    INFO: dict.c(333): Reading main dictionary: /var/mobile/Containers/Data/Application/CFB72ABF-044A-4318-A993-730FB47BF497/Library/Caches/FirstOpenEarsDynamicLanguageModel.dic
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(336): 8 words read
    INFO: dict.c(342): Reading filler dictionary: /private/var/mobile/Containers/Bundle/Application/63660436-E1F7-4E9E-965F-E724ADF24D5A/OpenEarsSampleApp.app/AcousticModelEnglish.bundle/noisedict
    INFO: dict.c(213): Allocated 0 KiB for strings, 0 KiB for phones
    INFO: dict.c(345): 9 words read
    INFO: dict2pid.c(396): Building PID tables for dictionary
    INFO: dict2pid.c(406): Allocating 46^3 * 2 bytes (190 KiB) for word-initial triphones
    INFO: dict2pid.c(132): Allocated 25576 bytes (24 KiB) for word-final triphones
    INFO: dict2pid.c(196): Allocated 25576 bytes (24 KiB) for single-phone word triphones
    INFO: ngram_model_arpa.c(79): No \data\ mark in LM file
    INFO: ngram_model_dmp.c(166): Will use memory-mapped I/O for LM file
    INFO: ngram_model_dmp.c(220): ngrams 1=10, 2=16, 3=8
    INFO: ngram_model_dmp.c(266): 10 = LM.unigrams(+trailer) read
    INFO: ngram_model_dmp.c(312): 16 = LM.bigrams(+trailer) read
    INFO: ngram_model_dmp.c(338): 8 = LM.trigrams read
    INFO: ngram_model_dmp.c(363): 3 = LM.prob2 entries read
    INFO: ngram_model_dmp.c(383): 3 = LM.bo_wt2 entries read
    INFO: ngram_model_dmp.c(403): 2 = LM.prob3 entries read
    INFO: ngram_model_dmp.c(431): 1 = LM.tseg_base entries read
    INFO: ngram_model_dmp.c(487): 10 = ascii word strings read
    INFO: ngram_search_fwdtree.c(99): 8 unique initial diphones
    INFO: ngram_search_fwdtree.c(148): 0 root, 0 non-root channels, 10 single-phone words
    INFO: ngram_search_fwdtree.c(186): Creating search tree
    INFO: ngram_search_fwdtree.c(192): before: 0 root, 0 non-root channels, 10 single-phone words
    INFO: ngram_search_fwdtree.c(326): after: max nonroot chan increased to 145
    INFO: ngram_search_fwdtree.c(339): after: 8 root, 17 non-root channels, 9 single-phone words
    INFO: ngram_search_fwdflat.c(157): fwdflat: min_ef_width = 4, max_sf_win = 25
    2014-12-09 13:05:36.431 OpenEarsSampleApp[451:85152] Restoring SmartCMN value of 18.854980
    2014-12-09 13:05:36.433 OpenEarsSampleApp[451:85152] Listening.
    2014-12-09 13:05:36.435 OpenEarsSampleApp[451:85152] Project has these words or phrases in its dictionary:
    BACKWARD
    CHANGE
    FORWARD
    GO
    LEFT
    MODEL
    RIGHT
    TURN
    2014-12-09 13:05:36.435 OpenEarsSampleApp[451:85152] Recognition loop has started
    2014-12-09 13:05:36.465 OpenEarsSampleApp[451:85137] Local callback: Pocketsphinx is now listening.
    2014-12-09 13:05:36.469 OpenEarsSampleApp[451:85137] Local callback: Pocketsphinx started.

    in reply to: Open Ears/Rapid Ears 2.0 + Bluetooth #1023340
    morchella
    Participant

    Are you absolutely positive that there’s nothing special about the headset in this interaction (low on battery, far away, muted, being overridden by a different, nearby bluetooth device that you aren’t interacting with, something else similar)?

    I don’t think so, but I will keep looking.

    When the sample app isn’t working, what does the decibel label read, is it moving or fixed? Can you try a different bluetooth device with the sample app?

In a separate post, I’ll give you logs for the latest sample run. Short answer: the decibel label doesn’t move at all.

It’s a good suggestion; I’ll have to get my hands on some other Bluetooth devices.

    in reply to: Open Ears/Rapid Ears 2.0 + Bluetooth #1023333
    morchella
    Participant

    What is causing the repeated suspend/resume in the timeframe in which you’re expecting speech?

    That’s expected. My app has a call-and-response UI, so it’s constantly suspending (when it plays audio) and resuming (when it needs to listen).
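
    The suspend/resume itself is just the stock OpenEars calls; the pattern is roughly this (a sketch; playResponseAudio is a hypothetical stand-in for my app’s playback code):

    // Stop recognition so OpenEars doesn't hear the app's own prompt.
    [[OEPocketsphinxController sharedInstance] suspendRecognition];
    [self playResponseAudio]; // hypothetical app-specific playback

    // When playback finishes, listen for the user's reply again.
    [[OEPocketsphinxController sharedInstance] resumeRecognition];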

For debugging, I set up a separate view controller in my app that lets me interactively enable OpenEars and play sounds from button presses. In that context as well, I’m getting neither input nor output over Bluetooth. (It works fine with the phone’s built-in audio or wired earbuds.)

    Can you use your bluetooth device as input either with a tutorial app or the sample app?

No. I’ve got the sample app running (both with and without RapidEars), and it works fine with the phone’s built-in audio or wired earbuds, but not with Bluetooth.

    So if the headset follows standards and is able to use its mic input outside of calls (not every bluetooth headset can do that), it ought to work. Does it work with other 3rd-party apps which can use bluetooth mic input?

This is a well-known, fairly high-end headset. Sound quality is excellent, and it works fine, for both input and output, with a variety of Apple and third-party apps that I’ve tested.

    I would love to hear that this is just something stupid I’m doing :)

    in reply to: [Resolved] Small bug when running on iOS 8 #1022793
    morchella
    Participant

As another data point, I’m seeing the same behavior with a Beats Powerbeats Bluetooth headset. I see the cont_ad_calib failed message both in the sample app and in my own. I’m on 1.71 (and using RapidEars in my own app).

    in reply to: RapidEars ignoring secondsOfSilenceToDetect #1020764
    morchella
    Participant

    Okay, thanks Halle, I appreciate the advice!

    in reply to: RapidEars ignoring secondsOfSilenceToDetect #1020745
    morchella
    Participant

    Yes, but even when setFinalizeHypothesis = FALSE, the end of speech is still detected and reported. It’s just that the config option (secondsOfSilenceToDetect) is now ignored.

    I can work around it, but it seems like incorrect behavior.

    in reply to: RapidEars ignoring secondsOfSilenceToDetect #1020743
    morchella
    Participant

Sorry if I was unclear. There is a rapidEarsDidDetectEndOfSpeech delegate method, which appears always to be called 50-300 ms after pocketsphinxDidDetectFinishedSpeech. I have observed that these end-of-speech callbacks are sensitive to secondsOfSilenceToDetect, but only when setFinalizeHypothesis = TRUE.

    Is this the intended behavior?
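
    For reference, my configuration is roughly this (a paraphrased sketch, not my exact code; 0.5 is just an example value):

    // How much trailing silence should end an utterance.
    [OEPocketsphinxController sharedInstance].secondsOfSilenceToDetect = 0.5;

    // With FALSE here, rapidEarsDidDetectEndOfSpeech still fires, but in my
    // tests it no longer honors secondsOfSilenceToDetect; with TRUE it does.
    [[OEPocketsphinxController sharedInstance] setFinalizeHypothesis:FALSE];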

    in reply to: Use length of utterance for rejection? #1018902
    morchella
    Participant

    Halle, thanks! You are awesome for taking the time to respond in detail. I will play around with this some more as you suggest.

    Personally, I would never design an app to fail after “average” use time was exceeded, but then I would also never rip off another software developer. :)
