Halle Winkler

Forum Replies Created

Viewing 100 posts - 1 through 100 (of 2,153 total)

in reply to: Swift 5 support #1032994
    Halle Winkler
    Politepix

    Hello,

    Yes, it supports current Swift versions.

    in reply to: How to give Credit ? #1032987
    Halle Winkler
    Politepix

    This is in the license that is contained in the downloaded distribution, so please give it a read (it’s a good idea to read the license for any project you link to).

    in reply to: Mimic Pocketsphinx's handling of background noise #1032979
    Halle Winkler
    Politepix

    Hi Amit, this is in the FAQ, it will help you to give it a read, thanks: https://www.politepix.com/openears/support

    in reply to: How to stop listening and still receive hypothesis #1032966
    Halle Winkler
    Politepix

    Welcome,

    Genuinely sorry, but I don’t support this user interface flow (there are a few older forum posts here about it IIRC).

    in reply to: Listening any words of background and giving wrong result #1032946
    Halle Winkler
    Politepix

    Hello,

    There can be a couple of different reasons for this, and they are discussed in detail in the FAQ: https://www.politepix.com/openears/support/

    in reply to: OpenEars iOS version and integration in app #1032942
    Halle Winkler
    Politepix

    Hello,

    OpenEars always supports three versions back, so currently that means iOS 10-12. Please make sure to read the license included with the distribution and the FAQ (https://www.politepix.com/openears/support/) to answer questions about the licensing of OpenEars.

    in reply to: Can I have Hindi Support for the Open Ears OEAcousticModel ? #1032933
    Halle Winkler
    Politepix

    Welcome,

    No, sorry, it isn’t supported.

    in reply to: Acoustic model adaptation #1032920
    Halle Winkler
    Politepix

    You’re welcome!

    in reply to: Acoustic model adaptation #1032918
    Halle Winkler
    Politepix

    Hi Mihai,

    I’m really sorry, but I don’t give support for sphinx or acoustic model adaptation or generation here; I have to stick to the given scope of offering support for the OpenEars API.

Your initial issue could be sphinx version related, but as a heads-up, there will probably be a secondary issue: OpenEars’ acoustic model bundle format is custom to Politepix, so it is not necessarily the case that your sphinx adaptation results will be OpenEars-compatible, and troubleshooting that is unfortunately outside the scope of the support I give here. I wish you the best of success with this advanced topic in local recognition.

    in reply to: one additional follow-up NBest hypothesis #1032916
    Halle Winkler
    Politepix

    Fantastic!

    in reply to: one additional follow-up NBest hypothesis #1032913
    Halle Winkler
    Politepix

I would first start with the unmodified sample app from the distribution, and make sure basic recognition works for you with some utterance which is included in the language model, i.e. ruling out that the issue is with recognition in general rather than nbest specifically. Then turn on nbest and set a valid nbest number, and turn on null hypotheses so the callback is also invoked even if there isn’t a match with the language model, and see if that helps. Print the actual array in the callback so you can see whether null hyps are being returned.

    The most powerful tool you can apply to your own troubleshooting is turning on all the applicable forms of logging documented in the docs and headers – these will show you what the engine is perceiving and whether there is a difference between that and what is ending up on the callback, as well as telling you if your nbest settings are passing validation.
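As a concrete starting point, that configuration looks roughly like this (a sketch only – returnNbest, nBestNumber, and returnNullHypotheses are my reading of the OEPocketsphinxController header, so verify the names against the headers in your distribution):

OELogging.startOpenEarsLogging() // Full OpenEars logging, including whether the nbest settings pass validation.
OEPocketsphinxController.sharedInstance().verbosePocketSphinx = true // Engine-level logging of what is being perceived.
OEPocketsphinxController.sharedInstance().returnNbest = true
OEPocketsphinxController.sharedInstance().nBestNumber = 4 // Any valid nbest count.
OEPocketsphinxController.sharedInstance().returnNullHypotheses = true // Deliver the callback even without a language model match.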

    Good luck!

    in reply to: OEEventsObserver: N best hypothesis #1032910
    Halle Winkler
    Politepix

    Welcome,

    Sorry, there isn’t existing sample code for this and I unfortunately don’t have time at the moment to provide an example. However, it is known to work according to the instructions in the documentation and headers, so I recommend just taking some time and giving it a careful read, particularly relating to the details of how to get specific callbacks from OEEventsObserver (for instance, n-best) by changing settings in OEPocketsphinxController.
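In outline, though, the pattern looks roughly like this (a minimal sketch only – verify the property and delegate method names against OEPocketsphinxController.h and OEEventsObserver.h in your distribution):

import UIKit
// OpenEars classes are assumed to be visible via the bridging header set up in the Swift tutorial.

class ViewController: UIViewController, OEEventsObserverDelegate {

    let openEarsEventsObserver = OEEventsObserver() // Must be retained as a property, or its callbacks will stop arriving.

    override func viewDidLoad() {
        super.viewDidLoad()
        openEarsEventsObserver.delegate = self
        OEPocketsphinxController.sharedInstance().returnNbest = true // Enables the n-best callback below.
        OEPocketsphinxController.sharedInstance().nBestNumber = 3
    }

    // Swift bridging of the n-best delegate method; check the exact signature in OEEventsObserver.h.
    func pocketsphinxDidReceiveNBestHypothesisArray(_ hypothesisArray: [Any]!) {
        print("n-best hypotheses: \(String(describing: hypothesisArray))")
    }
}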

    in reply to: Pocketsphinx features unimplemented availability #1032883
    Halle Winkler
    Politepix

    Hi Ketera,

    Sorry I overlooked the second part of your question. No, there isn’t a single location where this can be configured.

    in reply to: Pocketsphinx feature not available : keywords search #1032881
    Halle Winkler
    Politepix

    Welcome,

    Sorry, it isn’t planned.

    in reply to: Adaption #1032866
    Halle Winkler
    Politepix

    Welcome,

    Sorry, there is no mechanism for local on-device adaptation in OpenEars.

    in reply to: Text to Speech demo #1032863
    Halle Winkler
    Politepix

    Welcome,

    Sorry, there is no demo other than the demo framework.

    in reply to: App is Crashing after adding a new OpenEars framework #1032857
    Halle Winkler
    Politepix

    The best way to verify this is to quickly create a totally new test app (in a different location) which only has one app target, where you set the bundle ID of the app to the new framework bundle ID. If the framework can run RapidEars methods with this app, the issue is that the old app is either not loading the scheme/target you are expecting, or that the target that is being called by the scheme doesn’t really have the bundle ID you are expecting.

    If a brand-new app that only has one target with the expected bundle ID doesn’t work either, you can send it to me and I will figure out what is going on.

    in reply to: App is Crashing after adding a new OpenEars framework #1032855
    Halle Winkler
    Politepix

    Hello,

    This will be because the bundle ID in the new framework doesn’t match the bundle ID in the app. Please double-check what the app target you are building and running has as a bundle ID, and whether it corresponds to the bundle ID that the license was created with.

    in reply to: adding new bundle id #1032850
    Halle Winkler
    Politepix

    Greetings,

    Sorry, there is no mechanism for adding an ID to a purchase – the purchase is for a specific bundle ID. A license for a different bundle ID is a different license, so it is a different purchase. This is mentioned during the purchase process, so you can also see it mentioned on your invoice for that purchase. Sorry I can’t help out with this issue.

    in reply to: Does OpenEars require internet access to when using? #1032847
    Halle Winkler
    Politepix

    Greetings,

    Sorry, it isn’t possible to change a bundle ID after purchase.

    in reply to: Timing information for runRecognitionOnWavFile #1032827
    Halle Winkler
    Politepix

    Welcome,

Sorry, runRecognitionOnWavFile is an OpenEars-only method, not a RapidEars method, so it shouldn’t invoke a RapidEars callback. Have you tried it out with pathToTestFile? I am doubtful that there is a way to do exactly what you’re trying for here, but pathToTestFile will invoke RapidEars hypothesis callbacks.
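If you try pathToTestFile, the shape is just setting the property before starting listening (a sketch – “recording.wav” is a placeholder for your own test file):

OEPocketsphinxController.sharedInstance().pathToTestFile = Bundle.main.path(forResource: "recording", ofType: "wav") // Placeholder file; recognition then runs over the wav as if it were live mic input, so RapidEars hypothesis callbacks fire.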

    in reply to: how can I recognize a huge number of words? #1032822
    Halle Winkler
    Politepix

    Welcome,

    This is just not the use case that this framework is conceived for, sorry. You will probably need to use a networked service to do this.

    in reply to: NeatSpeech Demo #1032810
    Halle Winkler
    Politepix

    Welcome,

    Sorry, no, there are no samples or Xamarin support, so it may not be the right solution for your needs.

    in reply to: audioPlayerBeginInterruption #1032764
    Halle Winkler
    Politepix

    OK, in that case I’m afraid I can’t give you more assistance with this at this time since I can’t replicate that this issue persists with the build system changed, and the error shared doesn’t have information about the specific build error in the logs.

    However, I have made a new release today which doesn’t demonstrate this issue based on my earlier replication of your issue, so you’re welcome to download it and see if it helps you: https://www.politepix.com/wp-content/uploads/OpenEarsDistribution.tar.bz2

    in reply to: audioPlayerBeginInterruption #1032762
    Halle Winkler
    Politepix

    I was able to replicate the issue, but it was fixed by that change. Can you describe to me how you changed the project to the legacy build system?

    in reply to: audioPlayerBeginInterruption #1032760
    Halle Winkler
    Politepix

    Please set your project to the legacy build system.

    in reply to: audioPlayerBeginInterruption #1032757
    Halle Winkler
    Politepix

    Hi Robert,

    Are you finished with your question about building the framework?

    in reply to: audioPlayerBeginInterruption #1032755
    Halle Winkler
    Politepix

BTW, it shouldn’t be necessary for you to build the framework if you are just testing out the sample app for the first time – the cases in which a rebuild of the framework is needed or suggested aren’t that common.

    in reply to: audioPlayerBeginInterruption #1032754
    Halle Winkler
    Politepix

    OK, I’m not sure it’s constructive to troubleshoot the sample app’s recognition of your voice right now if you would like assistance building the framework. Have you given my suggestion above a try?

    in reply to: audioPlayerBeginInterruption #1032752
    Halle Winkler
    Politepix

    Hmm, looks like it is possible for Xcode to override the following build setting differently for different setups, unfortunately. I will fix this in a future version, but for now, please set the framework project’s build setting as follows:

[Screenshot of the framework project build setting]

    in reply to: audioPlayerBeginInterruption #1032751
    Halle Winkler
    Politepix

    Are you talking about the framework or one of the sample apps?

    in reply to: audioPlayerBeginInterruption #1032748
    Halle Winkler
    Politepix

    Welcome,

Can you be more specific about which part of the project you can’t build, and copy and paste the results you receive, along with which Xcode you’re using and which targets? A deprecation warning shouldn’t affect building.

    in reply to: Mixing with other audio. #1032742
    Halle Winkler
    Politepix

Sure, it’s in the API definition – it is for use while recognition isn’t in progress, for cases where PocketsphinxController is doing something undesired to your session during those periods. While recognition is in progress, it is expected that PocketsphinxController always normalizes to the session settings it is designed around.

    in reply to: Mixing with other audio. #1032740
    Halle Winkler
    Politepix

    OK, thank you for the elaboration. This is the current expected behavior, sorry. The recognition is intended to be performed on a single audio input which is only mic speech, and PocketsphinxController performs its own audio session management to achieve this.

    in reply to: Mixing with other audio. #1032736
    Halle Winkler
    Politepix

    Welcome,

    Can you clarify the result you are seeking and what is actually happening?

    in reply to: Possibilities of OpenEars #1032715
    Halle Winkler
    Politepix

    Welcome Cupcaker,

    1) Can you elaborate more on this? Functionality within an app is the main purpose of the framework so I think I’m not yet following the specific question.

    2) Yes, there is TTS in OpenEars and with the NeatSpeech plugin, but I would recommend first using the native Apple TTS API and seeing if it covers your requirements.

    3) No, are you seeing an incompatibility with a Swift version?

    in reply to: combining words + numbers #1032714
    Halle Winkler
    Politepix

    Hi,

    Sorry, it isn’t really a bug in that sense, but a limitation of the models and their size/domain for offline use.

    in reply to: combining words + numbers #1032711
    Halle Winkler
    Politepix

    Welcome,

    Sorry, yes, this is actually a known issue – if you search for “number” in these forums you can see several previous discussions about it.

    in reply to: Can OpenEars bundle a pretrained acoustic model? #1032697
    Halle Winkler
    Politepix

    Hi Steve,

    Sorry, it isn’t as simple as just bundling it – the bundles contain more than CMU Sphinx files and it isn’t trivial to create the parts which are contributed from this end. Sorry I can’t help with this.

    in reply to: How to create a bundle for a custom acoustic model #1032695
    Halle Winkler
    Politepix

    Welcome Steve,

    Sorry, it isn’t possible to create an OpenEars-compatible bundle for arbitrary acoustic models.

    in reply to: Extract MFCC from audio file #1032689
    Halle Winkler
    Politepix

    Welcome,

    Sorry, that isn’t a feature.

    in reply to: Switching between runRecognitionOnWavFile & standard setup #1032687
    Halle Winkler
    Politepix

    Ah, I would not really expect this to work during suspension, to be honest. Starting and stopping should be very quick, have you tried that and timed it?

    in reply to: Switching between runRecognitionOnWavFile & standard setup #1032685
    Halle Winkler
    Politepix

    Hi, can you show me the code where you switch between the recognition methods?

    in reply to: Switching between runRecognitionOnWavFile & standard setup #1032678
    Halle Winkler
    Politepix

    Welcome,

OK, there should actually be a lot more logging information that is possible to share, so can you take a look at the post “Please read before you post – how to troubleshoot and provide logging info here” to see how much logging is needed to troubleshoot this kind of issue, and also share the backtrace (type ‘bt’ in lldb at the crash) to make further investigation possible? Thanks.

    in reply to: Unable to obtain correct path to language model #1032674
    Halle Winkler
    Politepix

    Good fix! Happy New Year to you as well.

    Halle Winkler
    Politepix

    Welcome,

    No, there is no way to stream audio which isn’t on the iPhone.

    in reply to: Unable to obtain correct path to language model #1032668
    Halle Winkler
    Politepix

    Welcome Mike,

    If you check out the Swift tutorial tool (using the first switch for OpenEars alone), it will show you how to use the functions which give you the path to the generated language models:

    https://www.politepix.com/openearsswift-tutorial/

    Since lmPath is storing a returned string, it seems unlikely that lmPath.text is what you want. Another working example of obtaining this path in Swift can be seen in the Swift sample app that is in the OpenEars distribution folder.

    If the issue is due to something else, it is necessary to turn on OpenEarsLogging and show all of the logging from the beginning to the end of the app run so I can help. It is possible that the language model generation isn’t working (it looks like this model is being generated out of a document, so I can imagine some complications there), so after the failed generation there is nothing to pass to the string, and the full logging output will indicate this if so.
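For reference, the overall pattern from the tutorial looks roughly like this (a sketch with placeholder vocabulary, file name, and acoustic model; generateLanguageModel and pathToSuccessfullyGeneratedLanguageModel are the calls shown in the tutorial):

let lmGenerator = OELanguageModelGenerator()
let words = ["HELLO", "WORLD"] // Placeholder vocabulary.
let name = "MyModel" // Placeholder file name.
let err: Error! = lmGenerator.generateLanguageModel(from: words, withFilesNamed: name, forAcousticModelAtPath: OEAcousticModel.path(toModel: "AcousticModelEnglish"))

if err == nil {
    // These return the generated paths as Strings – lmPath should store this value rather than a text field’s contents.
    let lmPath = lmGenerator.pathToSuccessfullyGeneratedLanguageModel(withRequestedName: name)
    let dicPath = lmGenerator.pathToSuccessfullyGeneratedDictionary(withRequestedName: name)
    print("Model: \(String(describing: lmPath)), dictionary: \(String(describing: dicPath))")
} else {
    print("Language model generation failed: \(err.localizedDescription)") // OpenEarsLogging will show the details.
}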

    in reply to: Changing LanguageModel on the Fly #1032659
    Halle Winkler
    Politepix

    Can you take another look at the link to the RapidEars callbacks workaround and make sure that you also made the described modifications to RapidEars.framework? I’m talking about the part at the beginning that begins “In the RapidEars framework’s header file OEEventsObserver+RapidEars.h”:

    https://www.politepix.com/forums/topic/rapidearsdidreceivelivespeechhypothesis-not-firing/#post-1032229

    in reply to: Changing LanguageModel on the Fly #1032658
    Halle Winkler
    Politepix

    Hi,

    Language model switching is performed by OpenEars.framework rather than by RapidEars.framework or RapidEarsDemo.framework, so it is kind of unlikely that the issue is related to switching from the RapidEars demo to the licensed version despite it appearing at that time (I won’t say it’s impossible, but I am not personally aware of a way that it could occur due to the fact that RapidEars doesn’t perform that function).

    RapidEars only adds changes to speech processing, and callbacks, so those would be the only things I could offer advice with specifically regarding your new RapidEars framework, but it sounds like you are confident that you are receiving all RapidEars and OpenEars callbacks, is that correct?

    This is a bit of a tricky one because there also isn’t any difference in speech code between the demo and the licensed version; there is just the timeout after a few minutes in one and the license checking in the other as differences. It would be unusual to get any kind of a behavioral change in the actual engine as a result of changing over to the licensed version since there isn’t different engine code.

    If you are definitely receiving all expected callbacks from OpenEars and from RapidEars, I would recommend beginning by taking standard troubleshooting steps to see what the issue might be. Step one would be to turn on OpenEarsLogging and verbosePocketsphinx and verboseRapidEars, and see if there is anything in the logging to let you know what is happening when you switch models. Please feel free to show me the logs (it’s fine to do a search/replace for private info in there as long as you show me all of the OpenEars logging from the beginning to the end of the app session without removing any of the OpenEars logging in between).

    in reply to: Changing LanguageModel on the Fly #1032655
    Halle Winkler
    Politepix

    Hi Saatyagi,

    I think you may be experiencing this issue with callbacks:

    https://www.politepix.com/forums/topic/rapidearsdidreceivelivespeechhypothesis-not-firing/#post-1032229

    Please give the fix I describe in the linked reply a try, and let me know if it works for you.

    in reply to: Cannot install OpenEars (pch module not found?) #1032626
    Halle Winkler
    Politepix

    Super, I’m happy to hear it!

    in reply to: Cannot install OpenEars (pch module not found?) #1032624
    Halle Winkler
    Politepix

    There is no need to import the xcodeproj file into anything else. Please just start with a new distribution and open and run the OpenEarsSampleApp project itself with no changes, thanks.

    in reply to: Cannot install OpenEars (pch module not found?) #1032621
    Halle Winkler
    Politepix

OK, there should be no issue running the Swift app or following the tutorial with Xcode 9, so I would start by establishing why you can’t compile the sample app, which should be possible to compile and run right after downloading. I’m going to suggest the theory that something has been unintentionally changed in the downloaded distribution, and recommend that you remove all of your work (back it up first), download an entirely new OpenEars distribution from https://www.politepix.com/openears to a new location, and start by compiling the sample app. If this works, start again with the tutorial and a new app, taking extra time with the first part of step 1. I don’t recommend continuing to troubleshoot the existing tutorial implementation, since the issue with the sample app suggests an accidental change of some kind rather than something direct to troubleshoot, and further changes may also have accumulated during the troubleshooting process.

    in reply to: Cannot install OpenEars (pch module not found?) #1032619
    Halle Winkler
    Politepix

    Welcome,

    Can you let me know which Xcode/Swift version this is with, and what the target is (simulator or a specific device)?

    Are you able to compile the Swift sample app?

    in reply to: micPermissionCheckCompleted is never called #1032615
    Halle Winkler
    Politepix

    Greetings,

You’re welcome! Hmm, yes, I think that is a known bug (or was a known bug at the time that I did the sample app) which I have unfortunately not reexamined in a while. It would need some time set aside to create a few new installs in order to investigate why the callback isn’t received as expected (or, the other possibility, why the sample app just doesn’t do the right thing). It could be a while before I have the opportunity to look into it – are you able to use the workaround for the time being?

    Halle Winkler
    Politepix

    Should be working now.

    Halle Winkler
    Politepix

    Ah, that makes sense. Thank you for updating me on it.

    Halle Winkler
    Politepix

    What is the demo order email that was sent to you a few minutes ago related to? Was one of your tests successful?

    in reply to: programatically start and stop openEars/RapidEars swift #1032598
    Halle Winkler
    Politepix

    Hi,

    There is a sample app written in Swift which shows a working example of this and many other operations. It is in OpenEarsDistribution/OpenEarsSampleAppSwift/OpenEarsSampleAppSwift.xcodeproj

    in reply to: programatically start and stop openEars/RapidEars swift #1032596
    Halle Winkler
    Politepix

    You’re welcome! Switching dictionaries is also in the sample app, so I recommend opening it up and checking out its code.

    in reply to: programatically start and stop openEars/RapidEars swift #1032593
    Halle Winkler
    Politepix

    Hi Saatyagi,

    I wouldn’t set the instance false first or call stop on an optional instance that has been set false. There are a couple of examples of working stopListening() calls in the Swift sample app that is part of the OpenEarsDistribution directory in the file ViewController.swift that can help with this, as well as other complexities you may be seeing such as avoiding calling stop on an already-stopped instance.
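The shape of those examples is roughly the following (a sketch from the same pattern – verify the names and the return type of stopListening() against ViewController.swift in the Swift sample app):

if OEPocketsphinxController.sharedInstance().isListening {
    let stopError: Error! = OEPocketsphinxController.sharedInstance().stopListening() // Only stop an instance that is actually listening.
    if stopError != nil {
        print("Error while stopping listening: \(stopError.localizedDescription)")
    }
}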

    Halle Winkler
    Politepix

    Hmm, just tried it out in another browser with no existing cookies or logins and it worked as expected. Weird question, but could there be anything going on at the network level for you which might prevent a first-party cookie from working on the site?

    Halle Winkler
    Politepix

    Hi Joseph,

    I’m not seeing this issue, unfortunately – which browser is this with?

    in reply to: Bitcode enabled Rejecto #1032565
    Halle Winkler
    Politepix

    Hi Sean,

    I haven’t made a decision about this, sorry. We’re in year three of bitcode not being required in the iPhone target app despite the note you linked to, so this isn’t a current concern.

    Halle Winkler
    Politepix

    Welcome,

    Thanks for this information. Unfortunately, I won’t be able to look into the issue until after the 15th; I apologize for the delay but it is currently unavoidable.

In the meantime, to make it possible to fix quickly once I am able to look into it, please check out the post “Please read before you post – how to troubleshoot and provide logging info here” so you can see how to turn on and share the logging that provides troubleshooting information for this kind of issue. There will be a complete error in your build log which you can share, and you can also provide important missing information such as which OpenEars plugin this question is about, which version it is, which version of OpenEars you are using, etc.

If you want to attempt to troubleshoot the error yourself in the meantime, you can look at the full error you receive in the build log, and then most likely link to a current and supported C++ library in your build settings, since my best guess without being able to look into it is that you have carried forward an older unsupported C++ library that Xcode 10 no longer supports. Good luck if you want to give that a try, and I will check into this as soon as it is possible to do so; thanks for your patience.

    in reply to: Italian language model #1032526
    Halle Winkler
    Politepix

    That’s right, there is lots of info about how to do this using OpenEars’ APIs in the tutorial and documentation.

    in reply to: Italian language model #1032524
    Halle Winkler
    Politepix

    Hello,

    OpenEars works with smaller dynamically-generated models via its own API rather than large pre-existing models.

    in reply to: Italian language model #1032522
    Halle Winkler
    Politepix

    Welcome,

    OpenEars-supported acoustic models are found here: https://www.politepix.com/languages/

    in reply to: rapidEarsDidReceiveLiveSpeechHypothesis not firing #1032509
    Halle Winkler
    Politepix

    Glad it worked for you! Thanks for the feedback on the instructions.

    in reply to: Install Rapid Ears #1032498
    Halle Winkler
    Politepix

    Sorry, it is currently necessary to create all of the protocol methods.

    in reply to: Install Rapid Ears #1032496
    Halle Winkler
    Politepix

    You’re welcome, very glad it helped!

    in reply to: Install Rapid Ears #1032494
    Halle Winkler
    Politepix

    Sorry, I don’t have a suggestion for this situation, other than considering creating your listening session on demand when it is needed rather than instantiating it in situations where speech isn’t used, since startup on modern devices is very fast.

    in reply to: OpenEars with Xamarin #1032488
    Halle Winkler
    Politepix

    Welcome,

    Sorry, I don’t support it.

    in reply to: rapidEarsDidReceiveLiveSpeechHypothesis not firing #1032487
    Halle Winkler
    Politepix

    Hi,

    This sounds like it might be a different issue – if this is about a licensed framework, are you sure that the app you have linked to has the same bundle ID as the one you registered when purchasing? Showing the logs before the XPC issue may help.

    in reply to: Any noise cancellation support? #1032470
    Halle Winkler
    Politepix

    Welcome,

    You can use different audio modes with OpenEars, take a look at OEPocketsphinxController’s audioMode property. The modes correspond to documented Apple modes. Use of non-default modes isn’t supported by me, but this should get you started with experimenting.
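For example, before starting listening (a sketch – “VoiceChat” stands in for whichever documented Apple audio session mode you want to experiment with):

OEPocketsphinxController.sharedInstance().audioMode = "VoiceChat" // Placeholder mode name corresponding to a documented AVAudioSession mode; unsupported experimentation.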

    in reply to: Error setting audio session active to 0! ‘!act’ #1032465
    Halle Winkler
    Politepix

    Glad it’s working for you!

    in reply to: Error setting audio session active to 0! ‘!act’ #1032463
    Halle Winkler
    Politepix

    No problem, glad you have a known-working reference point you can check things from. I don’t support Corona, so getting it working well in that context has to be done without my assistance unfortunately, but I would recommend just taking your time and assuming it’s something minor which will turn up when retracing your steps if you use the sample app as the guideline.

    in reply to: Error setting audio session active to 0! ‘!act’ #1032461
    Halle Winkler
    Politepix

    Does the sample app get the same error on the same device?

    in reply to: Error setting audio session active to 0! ‘!act’ #1032457
    Halle Winkler
    Politepix

Welcome, please check out the post “Please read before you post – how to troubleshoot and provide logging info here” so you can see how to turn on and share the logging that provides troubleshooting information for this kind of issue.

    in reply to: Recognition Score is Always Zero (0) #1032430
    Halle Winkler
    Politepix

    Greetings,

    That’s normal for a grammar. Generally, scoring isn’t a useful/usable piece of info for your implementation.

    in reply to: Recognize short Command in nonEnglish #1032422
    Halle Winkler
    Politepix

I think that if your original grammar implementation doesn’t raise any errors and returns output that you can evaluate, we have explored everything within the realm of supportable troubleshooting here, so I am going to close this as answered; we have covered all of the topics which have come up at substantial length, and there should be enough for you to examine further outside of an ongoing troubleshooting process with me.

If you have very specific questions later on (I mean, questions about a single aspect of a single implementation with a single acoustic model), it’s OK to start very focused new topics. Just please create a fresh implementation that you are comfortable sharing, one you are sure doesn’t have accumulated code from different implementations, and remember to share the info here without prompting from me so the questions don’t get closed. Thanks and good luck!

    in reply to: Recognize short Command in nonEnglish #1032419
    Halle Winkler
    Politepix

This is because there are multiple things about this which are a problem for ideal recognition with these tools: it has high uncertainty because it is a different language, and language models aren’t designed to work with a single word. I expect changing the weight to affect this outcome, but if it doesn’t, that is the answer as to whether this approach will work.

    in reply to: Recognize short Command in nonEnglish #1032417
    Halle Winkler
    Politepix

    Have we ever seen a fully-working result from your original grammar implementation without a plugin since we fixed the grammar?

    in reply to: Recognize short Command in nonEnglish #1032416
    Halle Winkler
    Politepix

    I’ve recommended what is possible to tune for Rejecto, there is nothing else. If it isn’t doing well yet, this is likely to just be due to it being a different language. You can also tune vadThreshold but I recommended doing that at the start so I am assuming it is correct now.

    in reply to: Recognize short Command in nonEnglish #1032414
    Halle Winkler
    Politepix

    Yeah, that makes a certain amount of sense because this use case is very borderline for RuleORama – it isn’t usually great with a rule that has a single entry and the other elements of this which are pushing the limits of what is likely to work are probably making it worse. We can shelve the investigation of RuleORama now that we have seen a result from it.

    in reply to: Recognize short Command in nonEnglish #1032410
    Halle Winkler
    Politepix

    The first thing in this RuleORama implementation to fix is again that this:

    
    OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName), languageModelIsJSGF: true)
    

    needs to be this:

    
    OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName), languageModelIsJSGF: false)
    

    There may be other issues but let’s start there.

    in reply to: Recognize short Command in nonEnglish #1032408
    Halle Winkler
    Politepix

    Regarding your Rejecto results: you can now pick whichever one of them is better and experiment with raising or reducing the value withWeight in this line (lowest possible value is 0.1 and largest possible value is 1.9):

    let err: Error! = lmGenerator.generateRejectingLanguageModel(from: words, withFilesNamed: fileName, withOptionalExclusions: nil, usingVowelsOnly: false, withWeight: 1.0, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))

What does the symbol “@” represent in the LookupList.text? (The double-ee’s and double-ii’s I can somehow interpret, but what does “@” really mean?)

It represents the phone in Hochdeutsch which is written in the IPA as ə. This is getting outside of the things I support here, but there should be enough info in that explanation for you to find sources outside of these forums to continue your investigation if you continue to have questions.

    in reply to: Recognize short Command in nonEnglish #1032403
    Halle Winkler
    Politepix

    This:

    
    lmPath = lmGenerator.pathToSuccessfullyGeneratedGrammar(withRequestedName: fileName)
    

    needs to be:

    
    lmPath = lmGenerator.pathToSuccessfullyGeneratedLanguageModel(withRequestedName: fileName)
    
    in reply to: Recognize short Command in nonEnglish #1032399
    Halle Winkler
    Politepix

    If you want to show me more logs from this implementation, make sure to show me the now-changed code again as well.

    in reply to: Recognize short Command in nonEnglish #1032396
    Halle Winkler
    Politepix

    Hi,

    That is happening because this code is a mixed example of an earlier grammar implementation and a later Rejecto implementation. Please change this:

    
    OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName), languageModelIsJSGF: true)
    

    to this:

    
    OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName), languageModelIsJSGF: false)
    

    and also please change this vowels option which looks like it must be left over from some previous round of experimentation and will harm accuracy:

    
    let err: Error! = lmGenerator.generateRejectingLanguageModel(from: words, withFilesNamed: fileName, withOptionalExclusions: nil, usingVowelsOnly: true, withWeight: 1.0, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
    

    to this:

    
    let err: Error! = lmGenerator.generateRejectingLanguageModel(from: words, withFilesNamed: fileName, withOptionalExclusions: nil, usingVowelsOnly: false, withWeight: 1.0, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
    
    in reply to: Recognize short Command in nonEnglish #1032391
    Halle Winkler
    Politepix

    If you want to collapse the logs so that they aren’t as visually big, you can put spoiler tags around them:
    [spoiler]
    [/spoiler]
    this will make them possible to open and close so they don’t take up vertical space if it bothers you.

    in reply to: Recognize short Command in nonEnglish #1032390
    Halle Winkler
    Politepix

Paste the logs and VC contents in this forum, thank you. There are many other discussions here with big logs; they let searchers get hits for the specific errors and problems they are troubleshooting without my having to answer the same questions many times, and they let me go back and find either bugs or points of confusion. When all of that is hidden away in a repo, it will eventually disappear as the repo changes or is removed, or cause support requests to occur in that repo, and it won’t help anyone solve their problems or follow up with “I got the same log result but your fix isn’t affecting my case”. It’s a very important part of there being visibility for solutions.

    in reply to: Recognize short Command in nonEnglish #1032387
    Halle Winkler
    Politepix

    Please put all your documentation of what is going on in this forum, thank you. The Github repo will change or disappear (it has already disappeared and then returned with different content in the course of just this discussion, so there is a previous link to it which is already out of date) and as a consequence make this discussion no use for anyone who has a similar issue to any of the many questions you are asking for information about.

    in reply to: Recognize short Command in nonEnglish #1032383
    Halle Winkler
    Politepix

    They are being marked as spam due to the multiple external links. Please keep all the discussion in here so it is a useful resource to other people with the same issue. I recommend doing this without all the confusion and complexity by returning to the premise of troubleshooting exactly one case at a time. You can choose which to begin with.

    in reply to: Recognize short Command in nonEnglish #1032368
    Halle Winkler
    Politepix

    OK, let’s see what happens when you make the following changes to the three projects.

    For your regular grammar project and for your RuleORama project, please adjust this code:

    
            let words = ["esch do no frey"]
            
            // let err: Error! = lmGenerator.generateLanguageModel(from: words, withFilesNamed: name, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
            
            // let err: Error! = lmGenerator.generateGrammar(from: [OneOfTheseWillBeSaidOnce : words], withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
            
            let err: Error! = lmGenerator.generateFastGrammar(from: [OneOfTheseWillBeSaidOnce : words], withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))

    so it matches the grammar instructions with the enclosing ThisWillBeSaidOnce declaration like so:

    
            let words = ["esch do no frey"]
            let grammar = [
    			ThisWillBeSaidOnce : [
    				[ OneOfTheseWillBeSaidOnce : words]
    			]
    		]
    
            // let err: Error! = lmGenerator.generateGrammar(from: grammar, withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
            
            let err: Error! = lmGenerator.generateFastGrammar(from: grammar, withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))

Uncomment whichever of the generateGrammar/generateFastGrammar lines is to be used by the respective grammar project.

    For your Rejecto project, please open AcousticModelGerman.bundle/LanguageModelGeneratorLookupList.text at whatever location you are really linking to it (please be ABSOLUTELY sure you are performing this change on the real acoustic model that your project links to and moves into your app bundle, wherever that is located, or our troubleshooting work on this project will be guaranteed to be unsuccessful) and look for the following two lines:

    es	ee s
    esf	ee s f

    and change them to this:

    es	ee s
    eschdonofrey	@ ss d oo n oo f r @ ii
    esf	ee s f

    and then you have to change your Rejecto language model generation code (which you have never shown me here) so that it just creates a model for the single word “eschdonofrey”. Do not make this change to your grammar projects. For contrast, you can also try changing the acoustic model entry to this instead with your Rejecto project, with slightly different phonemes:

    es	ee s
    eschdonofrey	ee ss d oo n oo f r ee ii
    esf	ee s f

    If none of these have better results, this will be the point at which we will have to stop the troubleshooting process, because it is guaranteed to get confused results if we try to further troubleshoot three different implementations in parallel which have hosted other different implementations at different times. If one of these projects has improved results, we can do a little bit more investigation of it, under the condition that the other two projects are put away and it is possible for me to rely on the fact that we are only investigating one clean project at a time moving forward. Let me know how it goes!

    in reply to: Recognize short Command in nonEnglish #1032361
    Halle Winkler
    Politepix

    No problem, just keep in mind that I asked for that project to have a clean slate to work from without mixing up code from multiple approaches, so we want to get back to that state of clarity and simplicity.

    in reply to: Recognize short Command in nonEnglish #1032359
    Halle Winkler
    Politepix

    I’m talking about the project which uses this VC: https://www.politepix.com/forums/topic/recognize-short-command-in-nonenglish/#post-1032343

    Except moving the logging calls high enough up so that we can see any errors that happen while you’re generating the grammar.

    in reply to: Recognize short Command in nonEnglish #1032358
    Halle Winkler
    Politepix

    Cool, thank you. Do you have a log for your earlier project that is just OpenEars using a grammar (without Rejecto), with the logging calls moved to the top? I thought that was the main file we had starting with debugging above, and then we were going to quickly try out RuleORama if you wanted. Those two grammar-using projects are the ones I’m curious about right now, because it looks like there is a flaw in the grammar and I want to know if the same error is happening in both.

    in reply to: Recognize short Command in nonEnglish #1032354
    Halle Winkler
    Politepix

    Hi,

    A few things to clarify:

• it is of course completely fine if you don’t want to use RuleORama, which is the reason I asked first if it was OK for you. This is not an upsell – my question was because there is no easier time to try it out than right after you have set up a grammar, and if you wanted to hear all the options, this was the most convenient moment to explore that one; any other timing will be less convenient because we will be changing from a grammar to a language model afterwards. My intention was to explain to you how to add it to your existing project if you agreed to try it out. It is fine with me either to skip it or to take time to get it working.

    • This is too unconstructive for me while I’m working hard to give you fast and helpful support for an unsupported language, and I’d appreciate it if you’d consider that we both have stresses in this process: “I relaize the RuleORama-demo is again not useful after download – and I feel that I loose trememdeous amount of time just to set up these demo-projects. Also, your manual contains ObjC-Code under the Swift3 chapter – which is not something pleasant either.” I disagree with you about the origin of the issues in this case, but more importantly, I just don’t want to proceed with this tone, which also seemed to come up due to my trying hard to remove all the unknown variables from our troubleshooting process, and I’m likely to choose to close the question if it is an ongoing thing even though we’ve both invested time in it. You don’t have to agree, but I don’t want you to be surprised if I close this discussion for this reason.

    • I want to warn you here that it is possible there is no great solution because this is an unsupported language, so that you have enough info to know whether to invest more time. I am happy to help, and I have some theories about how we might be able to make this work, but not every question has a perfect answer.

    That all said, I just noticed from your RuleORama install that there is something we need to fix in both installs, which is that in both cases the logging is being called too late to log the results of generating the language model or grammar. Can you move these:

    OELogging.startOpenEarsLogging() //Uncomment to receive full OpenEars logging in case of any unexpected results.
    OEPocketsphinxController.sharedInstance().verbosePocketSphinx = true

    To run right after super.viewDidLoad() and share the logging output from both projects?
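In other words, the top of viewDidLoad in both projects should look like this:

override func viewDidLoad() {
    super.viewDidLoad()
    OELogging.startOpenEarsLogging() // Started first, so the grammar/language model generation results are logged.
    OEPocketsphinxController.sharedInstance().verbosePocketSphinx = true
    // ...grammar or language model generation and startListening follow below...
}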
