doogie001

Forum Replies Created


  • doogie001
    Participant

    Thank you for your quick and detailed reply.

    doogie001
    Participant

    After the user speaks and OpenEars processes the audio, I want to be able to get at what the user actually said. This would work like Siri, in the respect that it writes out or speaks back exactly what the user said before it continues to process the user's request. Looking at the code and at several logs, though, it seems that it will only understand what you have placed in the language model generator.

    So unless I already have the “phrase” or words that the user will speak, I don’t think I can get at everything the user has said. This may be the “nature of the beast” for offline speech recognition, but I just want to make sure that this is the case.

    For example: I want to say “This is a test, can you understand what I am saying”, but my language model generator is initialized with only the words @"cat", @"mouse", @"dog", so I don’t think I’ll be able to get at what the user initially said. Is this a correct assessment, or am I missing something here?
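    For reference, this is roughly how the vocabulary gets built on my side (a minimal sketch, assuming the OpenEars LanguageModelGenerator API as I understand it; the exact class, method, and header names may differ between versions, and the file name @"MyVocabulary" is just a placeholder):

        #import <OpenEars/LanguageModelGenerator.h>
        #import <OpenEars/AcousticModel.h>

        // Build a small vocabulary; the decoder can only return words from this list.
        LanguageModelGenerator *generator = [[LanguageModelGenerator alloc] init];
        NSArray *words = @[@"cat", @"mouse", @"dog"];
        NSError *error = [generator generateLanguageModelFromArray:words
                                                     withFilesNamed:@"MyVocabulary"
                                             forAcousticModelAtPath:[AcousticModel pathToModel:@"AcousticModelEnglish"]];
        if ([error code] != noErr) {
            NSLog(@"Language model generation error: %@", [error localizedDescription]);
        }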

    In my testing I was looking at the contents of the words array in the delegate method -rapidEarsDidDetectFinishedSpeechAsWordArray:andScoreArray:.
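    For what it's worth, here is roughly what that callback looks like in my test code (a minimal sketch; the string joining and the logging are just my own way of getting at the words, assuming the words array holds NSStrings and the scores array holds NSNumbers):

        // RapidEars delegate callback, fired when a finished utterance has been decoded.
        - (void)rapidEarsDidDetectFinishedSpeechAsWordArray:(NSArray *)words andScoreArray:(NSArray *)scores {
            // Reassemble the recognized words into one string, Siri-style.
            NSString *hypothesis = [words componentsJoinedByString:@" "];
            NSLog(@"Heard: \"%@\"", hypothesis);

            // Log each recognized word alongside its score.
            [words enumerateObjectsUsingBlock:^(NSString *word, NSUInteger index, BOOL *stop) {
                NSLog(@"word: %@  score: %@", word, scores[index]);
            }];
        }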
    Thanks for your help.
