Halle Winkler

Forum Replies Created

Viewing 100 posts - 1 through 100 (of 2,165 total)

  • Halle Winkler
    Politepix

    Welcome,

This is not really an intended/supported feature of OpenEars, so I don’t really recommend it and can’t offer support assistance, but it is possible using the testing APIs found in OEPocketsphinxController. If you take a look in the docs or header at the APIs beginning with runRecognitionOnWavFileAtPath and pathToTestFile, you can learn more about how this would work. This isn’t going to give identical results to your Android code. Good luck!
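For orientation, here is a minimal Swift sketch of the two testing approaches mentioned above. The file paths and acoustic model name are placeholders, and the bridged Swift method names should be confirmed against the OEPocketsphinxController header:

```swift
// Sketch only: assumes OpenEars is linked and a language model and
// dictionary have already been generated. All paths are placeholders.
let lmPath = "/path/to/MyModel.lm"       // placeholder
let dictPath = "/path/to/MyModel.dic"    // placeholder
let wavPath = "/path/to/test.wav"        // 16 kHz mono WAV expected

// Approach 1: run recognition once over a WAV file (testing API).
OEPocketsphinxController.sharedInstance().runRecognitionOnWavFile(
    atPath: wavPath,
    usingLanguageModelAtPath: lmPath,
    dictionaryAtPath: dictPath,
    acousticModelAtPath: OEAcousticModel.path(toModel: "AcousticModelEnglish"),
    languageModelIsJSGF: false)

// Approach 2: set pathToTestFile before startListening so the file is
// used as the audio input instead of the mic.
OEPocketsphinxController.sharedInstance().pathToTestFile = wavPath
```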

    in reply to: OpenEars on iOS 13+ #1033153
    Halle Winkler
    Politepix

    Hello,

    I haven’t heard of this issue before, but if you can let me know via the contact form (https://politepix.com/contact) which license for Rejecto this issue is appearing with, I can take a look.

    in reply to: Open ears when screen is locked #1033139
    Halle Winkler
    Politepix

    Welcome Marcus,

    Sorry, no, only foregrounded app usage is supported by this project.

    Halle Winkler
    Politepix

    I’m sorry, your simultaneous recording requirement just isn’t supported by this project. It is the reason you are encountering the issue, and as a consequence, I don’t have a method for you to avoid the situation you are encountering. There is more info in the FAQ about issues with other audio-using frameworks.

    Halle Winkler
    Politepix

    Welcome,

    Sorry, OpenEars doesn’t coexist with multiple audio sessions, which this would require.

    in reply to: Proper way to start/stop OE? #1033123
    Halle Winkler
    Politepix

    👍🏻👍🏻

    in reply to: Special case use of OpenEars #1033122
    Halle Winkler
    Politepix

    Hi Anton,

    Sorry, this is probably not a great match for OpenEars as a use case, but I recommend taking a look at the FAQ since it has some info about rejecting wrong hypotheses: https://www.politepix.com/openears/support. Unfortunately there isn’t really bandwidth at the moment to moderate a larger discussion about Sphinx, so I will close this up, but I recommend searching previous forum topics for “OOV” and “out of vocabulary” and “keyword” and/or “wake word” recognition to learn more about the options here – there should be lots of existing info about similar applications once you know the phrases to search for.

    in reply to: Proper way to start/stop OE? #1033119
    Halle Winkler
    Politepix

    Hi Ming,

    Sorry, I don’t have a suggested approach for this, since I’m not familiar with Xamarin and unfortunately don’t support it.

    in reply to: Error building sample app #1033117
    Halle Winkler
    Politepix

    Super, happy to hear that worked for you.

    in reply to: Proper way to start/stop OE? #1033115
    Halle Winkler
    Politepix

    Hi Ming,

    Check out the post: https://www.politepix.com/forums/topic/install-issues-and-their-solutions so you can see how to turn on OELogging, which will have the information about the audio session.

    in reply to: Error building sample app #1033114
    Halle Winkler
    Politepix

    Welcome Anton,

You don’t need to add the framework to the sample app – it will build right out of the box after you change the app ID to use your developer account. I recommend getting rid of this modified distribution version and downloading a fresh one to try this out.

    in reply to: Proper way to start/stop OE? #1033108
    Halle Winkler
    Politepix

    Hi Ming,

Sorry I didn’t see this when you first posted it. Do you have OpenEarsLogging on, or just verbosePocketSphinx there? That looks a bit like an OS bug, but it is hard to say without the audio routing information that OELogging would show.

    in reply to: Swift 5 support #1032994
    Halle Winkler
    Politepix

    Hello,

    Yes, it supports current Swift versions.

    in reply to: How to give Credit ? #1032987
    Halle Winkler
    Politepix

    This is in the license that is contained in the downloaded distribution, so please give it a read (it’s a good idea to read the license for any project you link to).

    in reply to: Mimic Pocketsphinx's handling of background noise #1032979
    Halle Winkler
    Politepix

Hi Amit, this is covered in the FAQ; it will help you to give it a read, thanks: https://www.politepix.com/openears/support

    in reply to: How to stop listening and still receive hypothesis #1032966
    Halle Winkler
    Politepix

    Welcome,

    Genuinely sorry, but I don’t support this user interface flow (there are a few older forum posts here about it IIRC).

    in reply to: Listening any words of background and giving wrong result #1032946
    Halle Winkler
    Politepix

    Hello,

    There can be a couple of different reasons for this, and they are discussed in detail in the FAQ: https://www.politepix.com/openears/support/

    in reply to: OpenEars iOS version and integration in app #1032942
    Halle Winkler
    Politepix

    Hello,

    OpenEars always supports three versions back, so currently that means iOS 10-12. Please make sure to read the license included with the distribution and the FAQ (https://www.politepix.com/openears/support/) to answer questions about the licensing of OpenEars.

    in reply to: Can I have Hindi Support for the Open Ears OEAcousticModel ? #1032933
    Halle Winkler
    Politepix

    Welcome,

    No, sorry, it isn’t supported.

    in reply to: Acoustic model adaptation #1032920
    Halle Winkler
    Politepix

    You’re welcome!

    in reply to: Acoustic model adaptation #1032918
    Halle Winkler
    Politepix

    Hi Mihai,

    I’m really sorry, but I don’t give support for sphinx or acoustic model adaptation or generation here; I have to stick to the given scope of offering support for the OpenEars API.

    Your initial issue could be sphinx version related, but as a heads-up, there will probably be a secondary issue, in that OpenEars’ acoustic model bundle format is custom to Politepix, so it is not necessarily the case that your sphinx adaptation results will be OpenEars-compatible, and that is going to unfortunately be outside the scope of the support I give here to troubleshoot. I wish you the best of success with this advanced topic in local recognition.

    in reply to: one additional follow-up NBest hypothesis #1032916
    Halle Winkler
    Politepix

    Fantastic!

    in reply to: one additional follow-up NBest hypothesis #1032913
    Halle Winkler
    Politepix

I would first start with the unmodified sample app from the distribution, and make sure basic recognition works for you with some utterance which is included in the language model, i.e. ruling out that the issue is with recognition in general rather than with nbest specifically. Then turn on nbest and set a valid nbest number, and turn on null hypotheses so the callback is also invoked when there isn’t a match with the language model, and see if that helps. Print the actual array in the callback so you can see if null hyps are being returned.

    The most powerful tool you can apply to your own troubleshooting is turning on all the applicable forms of logging documented in the docs and headers – these will show you what the engine is perceiving and whether there is a difference between that and what is ending up on the callback, as well as telling you if your nbest settings are passing validation.

    Good luck!
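The steps above look roughly like this in Swift. Property and delegate names are per my reading of the OEPocketsphinxController and OEEventsObserver headers; confirm them locally before relying on this sketch:

```swift
// Sketch: configure n-best before calling startListening.
let controller = OEPocketsphinxController.sharedInstance()
controller.returnNbest = true           // enable n-best results
controller.nBestNumber = 5              // a valid n-best count
controller.returnNullHypotheses = true  // also fire when nothing matches

// In your OEEventsObserver delegate class, print the raw array so you
// can see whether null hypotheses are being returned:
func pocketsphinxDidReceiveNBestHypothesisArray(_ hypothesisArray: [Any]!) {
    print("n-best: \(String(describing: hypothesisArray))")
}
```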

    in reply to: OEEventsObserver: N best hypothesis #1032910
    Halle Winkler
    Politepix

    Welcome,

    Sorry, there isn’t existing sample code for this and I unfortunately don’t have time at the moment to provide an example. However, it is known to work according to the instructions in the documentation and headers, so I recommend just taking some time and giving it a careful read, particularly relating to the details of how to get specific callbacks from OEEventsObserver (for instance, n-best) by changing settings in OEPocketsphinxController.

    in reply to: Pocketsphinx features unimplemented availability #1032883
    Halle Winkler
    Politepix

    Hi Ketera,

    Sorry I overlooked the second part of your question. No, there isn’t a single location where this can be configured.

    in reply to: Pocketsphinx feature not available : keywords search #1032881
    Halle Winkler
    Politepix

    Welcome,

    Sorry, it isn’t planned.

    in reply to: Adaption #1032866
    Halle Winkler
    Politepix

    Welcome,

    Sorry, there is no mechanism for local on-device adaptation in OpenEars.

    in reply to: Text to Speech demo #1032863
    Halle Winkler
    Politepix

    Welcome,

    Sorry, there is no demo other than the demo framework.

    in reply to: App is Crashing after adding a new OpenEars framework #1032857
    Halle Winkler
    Politepix

    The best way to verify this is to quickly create a totally new test app (in a different location) which only has one app target, where you set the bundle ID of the app to the new framework bundle ID. If the framework can run RapidEars methods with this app, the issue is that the old app is either not loading the scheme/target you are expecting, or that the target that is being called by the scheme doesn’t really have the bundle ID you are expecting.

    If a brand-new app that only has one target with the expected bundle ID doesn’t work either, you can send it to me and I will figure out what is going on.

    in reply to: App is Crashing after adding a new OpenEars framework #1032855
    Halle Winkler
    Politepix

    Hello,

    This will be because the bundle ID in the new framework doesn’t match the bundle ID in the app. Please double-check what the app target you are building and running has as a bundle ID, and whether it corresponds to the bundle ID that the license was created with.

    in reply to: adding new bundle id #1032850
    Halle Winkler
    Politepix

    Greetings,

    Sorry, there is no mechanism for adding an ID to a purchase – the purchase is for a specific bundle ID. A license for a different bundle ID is a different license, so it is a different purchase. This is mentioned during the purchase process, so you can also see it mentioned on your invoice for that purchase. Sorry I can’t help out with this issue.

    in reply to: Does OpenEars require internet access to when using? #1032847
    Halle Winkler
    Politepix

    Greetings,

    Sorry, it isn’t possible to change a bundle ID after purchase.

    in reply to: Timing information for runRecognitionOnWavFile #1032827
    Halle Winkler
    Politepix

    Welcome,

Sorry, runRecognitionOnWavFile is an OpenEars-only method, not a RapidEars method, so it shouldn’t invoke a RapidEars callback. Have you tried it out with pathToTestFile? I am doubtful that there is a way to do exactly what you’re trying for here, but pathToTestFile will invoke RapidEars hypothesis callbacks.

    in reply to: how can I recognize a huge number of words? #1032822
    Halle Winkler
    Politepix

    Welcome,

    This is just not the use case that this framework is conceived for, sorry. You will probably need to use a networked service to do this.

    in reply to: NeatSpeech Demo #1032810
    Halle Winkler
    Politepix

    Welcome,

    Sorry, no, there are no samples or Xamarin support, so it may not be the right solution for your needs.

    in reply to: audioPlayerBeginInterruption #1032764
    Halle Winkler
    Politepix

    OK, in that case I’m afraid I can’t give you more assistance with this at this time since I can’t replicate that this issue persists with the build system changed, and the error shared doesn’t have information about the specific build error in the logs.

    However, I have made a new release today which doesn’t demonstrate this issue based on my earlier replication of your issue, so you’re welcome to download it and see if it helps you: https://www.politepix.com/wp-content/uploads/OpenEarsDistribution.tar.bz2

    in reply to: audioPlayerBeginInterruption #1032762
    Halle Winkler
    Politepix

    I was able to replicate the issue, but it was fixed by that change. Can you describe to me how you changed the project to the legacy build system?

    in reply to: audioPlayerBeginInterruption #1032760
    Halle Winkler
    Politepix

    Please set your project to the legacy build system.

    in reply to: audioPlayerBeginInterruption #1032757
    Halle Winkler
    Politepix

    Hi Robert,

    Are you finished with your question about building the framework?

    in reply to: audioPlayerBeginInterruption #1032755
    Halle Winkler
    Politepix

BTW, it shouldn’t be necessary for you to build the framework if you are just testing out the sample app for the first time – the cases in which a rebuild of the framework is needed/suggested aren’t that common.

    in reply to: audioPlayerBeginInterruption #1032754
    Halle Winkler
    Politepix

    OK, I’m not sure it’s constructive to troubleshoot the sample app’s recognition of your voice right now if you would like assistance building the framework. Have you given my suggestion above a try?

    in reply to: audioPlayerBeginInterruption #1032752
    Halle Winkler
    Politepix

    Hmm, looks like it is possible for Xcode to override the following build setting differently for different setups, unfortunately. I will fix this in a future version, but for now, please set the framework project’s build setting as follows:

    Screenshot

    in reply to: audioPlayerBeginInterruption #1032751
    Halle Winkler
    Politepix

    Are you talking about the framework or one of the sample apps?

    in reply to: audioPlayerBeginInterruption #1032748
    Halle Winkler
    Politepix

    Welcome,

Can you be more specific about what part of the project you can’t build? Please copy and paste the results you receive, and let me know which Xcode you’re using and which targets are involved. A deprecation warning shouldn’t affect building.

    in reply to: Mixing with other audio. #1032742
    Halle Winkler
    Politepix

    Sure, it’s in the API definition – it is for use while recognition isn’t in progress, if PocketsphinxController is usually doing something undesired to your session while recognition isn’t in progress. While recognition is in progress, it is expected that PocketsphinxController always normalizes to the session settings it is designed around.

    in reply to: Mixing with other audio. #1032740
    Halle Winkler
    Politepix

    OK, thank you for the elaboration. This is the current expected behavior, sorry. The recognition is intended to be performed on a single audio input which is only mic speech, and PocketsphinxController performs its own audio session management to achieve this.

    in reply to: Mixing with other audio. #1032736
    Halle Winkler
    Politepix

    Welcome,

    Can you clarify the result you are seeking and what is actually happening?

    in reply to: Possibilities of OpenEars #1032715
    Halle Winkler
    Politepix

    Welcome Cupcaker,

    1) Can you elaborate more on this? Functionality within an app is the main purpose of the framework so I think I’m not yet following the specific question.

    2) Yes, there is TTS in OpenEars and with the NeatSpeech plugin, but I would recommend first using the native Apple TTS API and seeing if it covers your requirements.

    3) No, are you seeing an incompatibility with a Swift version?

    in reply to: combining words + numbers #1032714
    Halle Winkler
    Politepix

    Hi,

    Sorry, it isn’t really a bug in that sense, but a limitation of the models and their size/domain for offline use.

    in reply to: combining words + numbers #1032711
    Halle Winkler
    Politepix

    Welcome,

    Sorry, yes, this is actually a known issue – if you search for “number” in these forums you can see several previous discussions about it.

    in reply to: Can OpenEars bundle a pretrained acoustic model? #1032697
    Halle Winkler
    Politepix

    Hi Steve,

    Sorry, it isn’t as simple as just bundling it – the bundles contain more than CMU Sphinx files and it isn’t trivial to create the parts which are contributed from this end. Sorry I can’t help with this.

    in reply to: How to create a bundle for a custom acoustic model #1032695
    Halle Winkler
    Politepix

    Welcome Steve,

    Sorry, it isn’t possible to create an OpenEars-compatible bundle for arbitrary acoustic models.

    in reply to: Extract MFCC from audio file #1032689
    Halle Winkler
    Politepix

    Welcome,

    Sorry, that isn’t a feature.

    in reply to: Switching between runRecognitionOnWavFile & standard setup #1032687
    Halle Winkler
    Politepix

    Ah, I would not really expect this to work during suspension, to be honest. Starting and stopping should be very quick, have you tried that and timed it?

    in reply to: Switching between runRecognitionOnWavFile & standard setup #1032685
    Halle Winkler
    Politepix

    Hi, can you show me the code where you switch between the recognition methods?

    in reply to: Switching between runRecognitionOnWavFile & standard setup #1032678
    Halle Winkler
    Politepix

    Welcome,

OK, there should actually be a lot more logging information that it is possible to share, so can you take a look at the post “Please read before you post – how to troubleshoot and provide logging info” to see how much logging is needed to troubleshoot this kind of issue, and also share the backtrace (type ‘bt’ in lldb at the crash) in order to make further investigation possible? Thanks.

    in reply to: Unable to obtain correct path to language model #1032674
    Halle Winkler
    Politepix

    Good fix! Happy New Year to you as well.

    Halle Winkler
    Politepix

    Welcome,

    No, there is no way to stream audio which isn’t on the iPhone.

    in reply to: Unable to obtain correct path to language model #1032668
    Halle Winkler
    Politepix

    Welcome Mike,

    If you check out the Swift tutorial tool (using the first switch for OpenEars alone), it will show you how to use the functions which give you the path to the generated language models:

    https://www.politepix.com/openearsswift-tutorial/

    Since lmPath is storing a returned string, it seems unlikely that lmPath.text is what you want. Another working example of obtaining this path in Swift can be seen in the Swift sample app that is in the OpenEars distribution folder.

    If the issue is due to something else, it is necessary to turn on OpenEarsLogging and show all of the logging from the beginning to the end of the app run so I can help. It is possible that the language model generation isn’t working (it looks like this model is being generated out of a document, so I can imagine some complications there), so after the failed generation there is nothing to pass to the string, and the full logging output will indicate this if so.
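A sketch of the tutorial pattern for obtaining the paths – the word list and model name here are illustrative, not taken from the original question:

```swift
// Sketch: generate a small model and retrieve its paths as strings.
let words = ["HELLO", "GOODBYE"]          // illustrative word list
let name = "MyWordsModel"                 // illustrative model name
let lmGenerator = OELanguageModelGenerator()
let err = lmGenerator.generateLanguageModel(
    from: words,
    withFilesNamed: name,
    forAcousticModelAtPath: OEAcousticModel.path(toModel: "AcousticModelEnglish"))

if err == nil {
    // These functions return plain strings – no .text property involved.
    let lmPath = lmGenerator.pathToSuccessfullyGeneratedLanguageModel(withRequestedName: name)
    let dictPath = lmGenerator.pathToSuccessfullyGeneratedDictionary(withRequestedName: name)
    print(lmPath ?? "", dictPath ?? "")
} else {
    // If generation failed there is no path to retrieve; see OELogging.
    print("Model generation failed: \(String(describing: err))")
}
```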

    in reply to: Changing LanguageModel on the Fly #1032659
    Halle Winkler
    Politepix

    Can you take another look at the link to the RapidEars callbacks workaround and make sure that you also made the described modifications to RapidEars.framework? I’m talking about the part at the beginning that begins “In the RapidEars framework’s header file OEEventsObserver+RapidEars.h”:

    https://www.politepix.com/forums/topic/rapidearsdidreceivelivespeechhypothesis-not-firing/#post-1032229

    in reply to: Changing LanguageModel on the Fly #1032658
    Halle Winkler
    Politepix

    Hi,

    Language model switching is performed by OpenEars.framework rather than by RapidEars.framework or RapidEarsDemo.framework, so it is kind of unlikely that the issue is related to switching from the RapidEars demo to the licensed version despite it appearing at that time (I won’t say it’s impossible, but I am not personally aware of a way that it could occur due to the fact that RapidEars doesn’t perform that function).

    RapidEars only adds changes to speech processing, and callbacks, so those would be the only things I could offer advice with specifically regarding your new RapidEars framework, but it sounds like you are confident that you are receiving all RapidEars and OpenEars callbacks, is that correct?

    This is a bit of a tricky one because there also isn’t any difference in speech code between the demo and the licensed version; there is just the timeout after a few minutes in one and the license checking in the other as differences. It would be unusual to get any kind of a behavioral change in the actual engine as a result of changing over to the licensed version since there isn’t different engine code.

    If you are definitely receiving all expected callbacks from OpenEars and from RapidEars, I would recommend beginning by taking standard troubleshooting steps to see what the issue might be. Step one would be to turn on OpenEarsLogging and verbosePocketsphinx and verboseRapidEars, and see if there is anything in the logging to let you know what is happening when you switch models. Please feel free to show me the logs (it’s fine to do a search/replace for private info in there as long as you show me all of the OpenEars logging from the beginning to the end of the app session without removing any of the OpenEars logging in between).
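The logging switches mentioned above look roughly like this in Swift (verboseRapidEars is set via the RapidEars additions to OEPocketsphinxController; check the RapidEars header for the exact setter name):

```swift
// Sketch: enable logging before startListening so the whole session
// is captured from the beginning.
OELogging.startOpenEarsLogging()  // OpenEarsLogging output
OEPocketsphinxController.sharedInstance().verbosePocketSphinx = true
// verboseRapidEars: see the RapidEars category header for the setter.
```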

    in reply to: Changing LanguageModel on the Fly #1032655
    Halle Winkler
    Politepix

    Hi Saatyagi,

    I think you may be experiencing this issue with callbacks:

    https://www.politepix.com/forums/topic/rapidearsdidreceivelivespeechhypothesis-not-firing/#post-1032229

    Please give the fix I describe in the linked reply a try, and let me know if it works for you.

    in reply to: Cannot install OpenEars (pch module not found?) #1032626
    Halle Winkler
    Politepix

    Super, I’m happy to hear it!

    in reply to: Cannot install OpenEars (pch module not found?) #1032624
    Halle Winkler
    Politepix

    There is no need to import the xcodeproj file into anything else. Please just start with a new distribution and open and run the OpenEarsSampleApp project itself with no changes, thanks.

    in reply to: Cannot install OpenEars (pch module not found?) #1032621
    Halle Winkler
    Politepix

    OK, there should be no issue running the Swift app or following the tutorial with Xcode 9, so I would start by establishing why you can’t compile the sample app, which should be possible to compile and run right after downloading. I’m going to suggest the theory that something has been unintentionally changed in the downloaded distribution, and recommend that you remove all of your work (back it up first) and download an entirely new OpenEars distribution from https://www.politepix.com/openears to a new location, and start by compiling the sample app. If this works, start again with the tutorial with a new app, taking extra time with the first part of step 1. I don’t recommend continuing trying to troubleshoot the existing tutorial implementation app since the issue with the sample app makes it sound like an accidental issue of some kind and not something direct to troubleshoot, and it may also be the case that there have been further changes in the troubleshooting process.

    in reply to: Cannot install OpenEars (pch module not found?) #1032619
    Halle Winkler
    Politepix

    Welcome,

    Can you let me know which Xcode/Swift version this is with, and what the target is (simulator or a specific device)?

    Are you able to compile the Swift sample app?

    in reply to: micPermissionCheckCompleted is never called #1032615
    Halle Winkler
    Politepix

    Greetings,

    You’re welcome! Hmm, yes, I think that is a known bug (or was a known bug at the time that I did the sample app) which I have unfortunately not reexamined in a while. It would need some time set aside to create a few new installs in order to investigate why the callback isn’t received as expected (or possibly, why the sample app just doesn’t do the right thing, which is the second possibility). It could be a while before I have the opportunity to look into it – are you able to use the workaround for the time being?

    Halle Winkler
    Politepix

    Should be working now.

    Halle Winkler
    Politepix

    Ah, that makes sense. Thank you for updating me on it.

    Halle Winkler
    Politepix

    What is the demo order email that was sent to you a few minutes ago related to? Was one of your tests successful?

    in reply to: programatically start and stop openEars/RapidEars swift #1032598
    Halle Winkler
    Politepix

    Hi,

    There is a sample app written in Swift which shows a working example of this and many other operations. It is in OpenEarsDistribution/OpenEarsSampleAppSwift/OpenEarsSampleAppSwift.xcodeproj

    in reply to: programatically start and stop openEars/RapidEars swift #1032596
    Halle Winkler
    Politepix

    You’re welcome! Switching dictionaries is also in the sample app, so I recommend opening it up and checking out its code.

    in reply to: programatically start and stop openEars/RapidEars swift #1032593
    Halle Winkler
    Politepix

    Hi Saatyagi,

    I wouldn’t set the instance false first or call stop on an optional instance that has been set false. There are a couple of examples of working stopListening() calls in the Swift sample app that is part of the OpenEarsDistribution directory in the file ViewController.swift that can help with this, as well as other complexities you may be seeing such as avoiding calling stop on an already-stopped instance.
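A minimal sketch of the guarded stop described above, assuming isListening reflects the current session state as used in the sample app:

```swift
// Sketch: only stop if a listening session is actually in progress,
// and never nil out the instance before stopping it.
if OEPocketsphinxController.sharedInstance().isListening {
    OEPocketsphinxController.sharedInstance().stopListening()
}
```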

    Halle Winkler
    Politepix

    Hmm, just tried it out in another browser with no existing cookies or logins and it worked as expected. Weird question, but could there be anything going on at the network level for you which might prevent a first-party cookie from working on the site?

    Halle Winkler
    Politepix

    Hi Joseph,

    I’m not seeing this issue, unfortunately – which browser is this with?

    in reply to: Bitcode enabled Rejecto #1032565
    Halle Winkler
    Politepix

    Hi Sean,

    I haven’t made a decision about this, sorry. We’re in year three of bitcode not being required in the iPhone target app despite the note you linked to, so this isn’t a current concern.

    Halle Winkler
    Politepix

    Welcome,

    Thanks for this information. Unfortunately, I won’t be able to look into the issue until after the 15th; I apologize for the delay but it is currently unavoidable.

In the meantime, to make it possible to fix quickly once I am able to look into it, please check out the post “Please read before you post – how to troubleshoot and provide logging info” so you can see how to turn on and share the logging that provides troubleshooting information for this kind of issue. There will be a complete error in your build log which you can share, and you can also provide important missing information such as which OpenEars plugin this question is about, which version it is, which version of OpenEars you are using, etc.

If you want to attempt to troubleshoot the error yourself in the meantime, you can look at the full error you receive in the build log, and then most likely link to a current and supported C++ library in your build settings, since my best guess without being able to look into it is that you have carried forward an older unsupported C++ library that Xcode 10 no longer supports. Good luck if you want to give that a try, and I will check into this as soon as it is possible to do so; thanks for your patience.

    in reply to: Italian language model #1032526
    Halle Winkler
    Politepix

    That’s right, there is lots of info about how to do this using OpenEars’ APIs in the tutorial and documentation.

    in reply to: Italian language model #1032524
    Halle Winkler
    Politepix

    Hello,

    OpenEars works with smaller dynamically-generated models via its own API rather than large pre-existing models.

    in reply to: Italian language model #1032522
    Halle Winkler
    Politepix

    Welcome,

    OpenEars-supported acoustic models are found here: https://www.politepix.com/languages/

    in reply to: rapidEarsDidReceiveLiveSpeechHypothesis not firing #1032509
    Halle Winkler
    Politepix

    Glad it worked for you! Thanks for the feedback on the instructions.

    in reply to: Install Rapid Ears #1032498
    Halle Winkler
    Politepix

    Sorry, it is currently necessary to create all of the protocol methods.

    in reply to: Install Rapid Ears #1032496
    Halle Winkler
    Politepix

    You’re welcome, very glad it helped!

    in reply to: Install Rapid Ears #1032494
    Halle Winkler
    Politepix

    Sorry, I don’t have a suggestion for this situation, other than considering creating your listening session on demand when it is needed rather than instantiating it in situations where speech isn’t used, since startup on modern devices is very fast.

    in reply to: OpenEars with Xamarin #1032488
    Halle Winkler
    Politepix

    Welcome,

    Sorry, I don’t support it.

    in reply to: rapidEarsDidReceiveLiveSpeechHypothesis not firing #1032487
    Halle Winkler
    Politepix

    Hi,

    This sounds like it might be a different issue – if this is about a licensed framework, are you sure that the app you have linked to has the same bundle ID as the one you registered when purchasing? Showing the logs before the XPC issue may help.

    in reply to: Any noise cancellation support? #1032470
    Halle Winkler
    Politepix

    Welcome,

    You can use different audio modes with OpenEars, take a look at OEPocketsphinxController’s audioMode property. The modes correspond to documented Apple modes. Use of non-default modes isn’t supported by me, but this should get you started with experimenting.
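As a hedged example (the mode string here is my own assumption for illustration; the property maps onto Apple’s documented AVAudioSession modes, and should be set before startListening):

```swift
// Sketch: experiment with a non-default audio mode. Unsupported, but
// useful for testing how different Apple session modes behave.
OEPocketsphinxController.sharedInstance().audioMode = "VoiceChat"
```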

    in reply to: Error setting audio session active to 0! ‘!act’ #1032465
    Halle Winkler
    Politepix

    Glad it’s working for you!

    in reply to: Error setting audio session active to 0! ‘!act’ #1032463
    Halle Winkler
    Politepix

    No problem, glad you have a known-working reference point you can check things from. I don’t support Corona, so getting it working well in that context has to be done without my assistance unfortunately, but I would recommend just taking your time and assuming it’s something minor which will turn up when retracing your steps if you use the sample app as the guideline.

    in reply to: Error setting audio session active to 0! ‘!act’ #1032461
    Halle Winkler
    Politepix

    Does the sample app get the same error on the same device?

    in reply to: Error setting audio session active to 0! ‘!act’ #1032457
    Halle Winkler
    Politepix

Welcome, please check out the post “Please read before you post – how to troubleshoot and provide logging info” so you can see how to turn on and share the logging that provides troubleshooting information for this kind of issue.

    in reply to: Recognition Score is Always Zero (0) #1032430
    Halle Winkler
    Politepix

    Greetings,

    That’s normal for a grammar. Generally, scoring isn’t a useful/usable piece of info for your implementation.

    in reply to: Recognize short Command in nonEnglish #1032422
    Halle Winkler
    Politepix

I think that if your original grammar implementation doesn’t raise any errors and is returning output that you can evaluate, we have explored everything that is within the realm of supportable troubleshooting here. I am going to close this as answered, because we have covered all of the topics which have come up at substantial length, and there should be enough for you to examine further outside of an ongoing troubleshooting process with me.

    If you have very specific questions later on (I mean, questions about a single aspect of a single implementation with a single acoustic model) it’s OK to start very focused new topics, just please create a fresh implementation you are comfortable sharing things about and you are sure doesn’t have accumulated code from different implementations, and remember to share the info in here without prompting from me so the questions don’t get closed, thanks and good luck!

    in reply to: Recognize short Command in nonEnglish #1032419
    Halle Winkler
    Politepix

    This is because there are multiple things about this which are a problem for ideal recognition with these tools: it has high uncertainty because it is a different language, and language models aren’t designed to work with a single word. I expect changing the weight to affect this outcome, but if it doesn’t, that is the answer on whether this approach will work.

    in reply to: Recognize short Command in nonEnglish #1032417
    Halle Winkler
    Politepix

    Have we ever seen a fully-working result from your original grammar implementation without a plugin since we fixed the grammar?

    in reply to: Recognize short Command in nonEnglish #1032416
    Halle Winkler
    Politepix

    I’ve recommended what is possible to tune for Rejecto, there is nothing else. If it isn’t doing well yet, this is likely to just be due to it being a different language. You can also tune vadThreshold but I recommended doing that at the start so I am assuming it is correct now.

    in reply to: Recognize short Command in nonEnglish #1032414
    Halle Winkler
    Politepix

    Yeah, that makes a certain amount of sense because this use case is very borderline for RuleORama – it isn’t usually great with a rule that has a single entry and the other elements of this which are pushing the limits of what is likely to work are probably making it worse. We can shelve the investigation of RuleORama now that we have seen a result from it.

    in reply to: Recognize short Command in nonEnglish #1032410
    Halle Winkler
    Politepix

    The first thing in this RuleORama implementation to fix is again that this:

    
    OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName), languageModelIsJSGF: true)
    

    needs to be this:

    
    OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName), languageModelIsJSGF: false)
    

    There may be other issues but let’s start there.

    in reply to: Recognize short Command in nonEnglish #1032408
    Halle Winkler
    Politepix

Regarding your Rejecto results: you can now pick whichever one of them is better and experiment with raising or lowering the withWeight value in this line (the lowest possible value is 0.1 and the largest possible value is 1.9):

    let err: Error! = lmGenerator.generateRejectingLanguageModel(from: words, withFilesNamed: fileName, withOptionalExclusions: nil, usingVowelsOnly: false, withWeight: 1.0, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
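Since withWeight only accepts values between 0.1 and 1.9, a tiny helper (purely illustrative, not part of the Rejecto API) can keep experimental values inside that range:

```swift
// Illustrative helper, not a Rejecto API: clamp experimental weights
// to the documented 0.1...1.9 range before passing them to withWeight.
func clampedRejectoWeight(_ value: Double) -> Double {
    return min(max(value, 0.1), 1.9)
}

print(clampedRejectoWeight(2.5))  // 1.9
print(clampedRejectoWeight(0.05)) // 0.1
```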

“What does the symbol ‘@’ represent in the LookupList.text? (The double-ee’s and double-ii’s I can somehow interpret, but what does ‘@’ really mean?)”

    It represents the phone sound in Hochdeutsch which is represented by the IPA ə. This is getting outside of the things I support here but there should be enough info in that explanation for you to find sources outside of these forums to continue your investigation if you continue to have questions.

    • This reply was modified 2 years, 7 months ago by Halle Winkler.