oe_assertions start to fail on long speeches

    #1022453
    omgbobbyg
    Participant

    Halle,

    We’re using RapidEars/Rejecto/OpenEars to do live speech recognition on prepared text a user is reading. It works fine for 4-5 minute speeches, but we’ve noticed that after some time of speaking (users have reported anywhere between 6 and 15 minutes) OpenEars stops working and a whole bunch of oe_assertions() start to fail, like these:

    failed oe_assertion `hmm_frame(&hmm_re_combelleautechnologiespromptsmart->hmm_re_combelleautechnologiespromptsmart) >= frame_idx_re_combelleautechnologiespromptsmart'
    failed oe_assertion `hmm_frame(&hmm_re_combelleautechnologiespromptsmart->hmm_re_combelleautechnologiespromptsmart) == frame_idx_re_combelleautechnologiespromptsmart'
    failed oe_assertion `frame_idx_re_combelleautechnologiespromptsmart == bpe.frame_re_combelleautechnologiespromptsmart'

    One thing I suspect might be the culprit is that whenever these assertions start to fail, the hypothesis being delivered to our RapidEars recognition delegate is very long.

    I noticed that it happens more often when there are no pauses in the speech (which I guess explains why the hypothesis is so long).
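
    For context, our integration is roughly the sketch below (the class name, word list, model name, and paths are illustrative rather than our exact code, and the Rejecto parameters are just defaults):

    #import <OpenEars/OELanguageModelGenerator.h>
    #import <OpenEars/OEAcousticModel.h>
    #import <OpenEars/OEPocketsphinxController.h>
    #import <OpenEars/OEEventsObserver.h>
    #import <RejectoDemo/OELanguageModelGenerator+Rejecto.h>
    #import <RapidEarsDemo/OEPocketsphinxController+RapidEars.h>
    #import <RapidEarsDemo/OEEventsObserver+RapidEars.h>

    @interface SpeechSession : NSObject <OEEventsObserverDelegate>
    @property (strong, nonatomic) OEEventsObserver *openEarsEventsObserver;
    @end

    @implementation SpeechSession

    - (void)startListeningToPreparedText:(NSArray *)words { // words of the prepared speech
        // Generate a Rejecto language model/dictionary pair from the prepared text.
        OELanguageModelGenerator *generator = [[OELanguageModelGenerator alloc] init];
        NSError *error = [generator generateRejectingLanguageModelFromArray:words
                                                              withFilesNamed:@"PromptModel"
                                                      withOptionalExclusions:nil
                                                             usingVowelsOnly:FALSE
                                                                  withWeight:nil
                                                      forAcousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
        if (error) { NSLog(@"Language model generation error: %@", error); return; }

        NSString *lmPath  = [generator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:@"PromptModel"];
        NSString *dicPath = [generator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"PromptModel"];

        self.openEarsEventsObserver = [[OEEventsObserver alloc] init];
        [self.openEarsEventsObserver setDelegate:self];

        // Start RapidEars live (real-time) recognition.
        [[OEPocketsphinxController sharedInstance] setActive:TRUE error:nil];
        [[OEPocketsphinxController sharedInstance] startRealtimeListeningWithLanguageModelAtPath:lmPath
                                                                                dictionaryAtPath:dicPath
                                                                             acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
    }

    // RapidEars delegate callback delivering the in-progress hypothesis.
    - (void)rapidEarsDidReceiveLiveSpeechHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore {
        // During a long pause-free reading this string keeps growing; the assertion
        // failures above start appearing once it has become very long.
        NSLog(@"Live hypothesis: %@", hypothesis);
    }

    @end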

    i.) Any idea what might be causing the assertions to start to fail?

    ii.) Are there options in RapidEars or OpenEars to tell it not to keep generating such a long hypothesis when the speaker doesn’t pause?
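
    To illustrate what we mean by that, here is a purely application-level sketch of the kind of thing we could do ourselves (this is not a RapidEars feature; the 80-word threshold and the restartPending/lmPath/dicPath properties are hypothetical): watch the length of the live hypothesis and cycle the listening session once it grows past a limit, waiting for the stop callback before restarting.

    // Hypothetical workaround: cap hypothesis growth by cycling the listening session.
    - (void)rapidEarsDidReceiveLiveSpeechHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore {
        NSArray *hypothesisWords = [hypothesis componentsSeparatedByString:@" "];
        if (hypothesisWords.count > 80 && !self.restartPending) { // arbitrary threshold
            self.restartPending = YES;
            [[OEPocketsphinxController sharedInstance] stopListening]; // restart below, once fully stopped
        }
    }

    // Standard OEEventsObserver callback; restart only after the previous session has stopped.
    - (void)pocketsphinxDidStopListening {
        if (self.restartPending) {
            self.restartPending = NO;
            [[OEPocketsphinxController sharedInstance] startRealtimeListeningWithLanguageModelAtPath:self.lmPath
                                                                                    dictionaryAtPath:self.dicPath
                                                                                 acousticModelAtPath:[OEAcousticModel pathToModel:@"AcousticModelEnglish"]];
        }
    }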

    #1022454
    Halle Winkler
    Politepix

    Yes, there are some practical limits to how long a single hypothesis can be for live recognition in RapidEars – an unbounded hypothesis length in live recognition wasn’t a design goal and isn’t currently possible. I’m currently examining these issues for an upcoming version update, so if you’d like to send me a recording and a dictionary/language model pair that replicates the exceptions, I can add them to the testbed and see if a better result is possible in the next version.
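
    If it helps when putting those files together, one way to locate the exact pair your app generates at runtime is to log the paths OELanguageModelGenerator reports (a sketch; "PromptModel" stands in for whatever name you pass to withFilesNamed:, and generator is your OELanguageModelGenerator instance):

    NSString *lmPath  = [generator pathToSuccessfullyGeneratedLanguageModelWithRequestedName:@"PromptModel"];
    NSString *dicPath = [generator pathToSuccessfullyGeneratedDictionaryWithRequestedName:@"PromptModel"];
    NSLog(@"Language model and dictionary to send along with the recording:\n%@\n%@", lmPath, dicPath);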
