Forum Replies Created
Sorry, no, only foregrounded app usage is supported by this project.
April 6, 2020 at 10:01 am in reply to: Disappears audio on video after restart OpenEars listening #1033132
I’m sorry, your simultaneous recording requirement just isn’t supported by this project. It is the reason for the issue you are encountering, and as a consequence, I don’t have a method for you to avoid it. There is more info in the FAQ about issues with other audio-using frameworks.
April 4, 2020 at 11:37 am in reply to: Disappears audio on video after restart OpenEars listening #1033130
Sorry, OpenEars doesn’t coexist with multiple audio sessions, which this would require.
Sorry, this is probably not a great match for OpenEars as a use case, but I recommend taking a look at the FAQ since it has some info about rejecting wrong hypotheses: https://www.politepix.com/openears/support. Unfortunately there isn’t really bandwidth at the moment to moderate a larger discussion about Sphinx, so I will close this up, but I recommend searching previous forum topics for “OOV” and “out of vocabulary” and “keyword” and/or “wake word” recognition to learn more about the options here – there should be lots of existing info about similar applications once you know the phrases to search for.
Sorry, I don’t have a suggested approach for this, since I’m not familiar with Xamarin and unfortunately don’t support it.
Super, happy to hear that worked for you.
Check out the post: https://www.politepix.com/forums/topic/install-issues-and-their-solutions so you can see how to turn on OELogging, which will have the information about the audio session.
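As a hedged sketch of what turning on that logging looks like in Swift (check the headers in your distribution for the current calls; verbosePocketSphinx is optional engine-level logging):

```swift
// Enable verbose OpenEars logging before starting listening, so audio
// session and routing information appears in the console output.
OELogging.startOpenEarsLogging()

// Optionally also turn on engine-level logging:
OEPocketsphinxController.sharedInstance().verbosePocketSphinx = true
```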
You don’t need to add the framework to the sample app – it will build right out of the box after you change the app ID to use your developer account. I recommend getting rid of this modified distribution version and downloading a fresh one to try this out.
Sorry I didn’t see this when you first posted it. Do you have OpenEarsLogging on or just verbosePocketSphinx there? That looks a bit like an OS bug and it is hard to say without knowing about the audio routing that would appear with OELogging info.
Yes, it supports current Swift versions.
This is in the license that is contained in the downloaded distribution, so please give it a read (it’s a good idea to read the license for any project you link to).
Hi Amit, this is in the FAQ, it will help you to give it a read, thanks: https://www.politepix.com/openears/support
October 3, 2019 at 2:18 pm in reply to: How to stop listening and still receive hypothesis #1032966
Genuinely sorry, but I don’t support this user interface flow (there are a few older forum posts here about it IIRC).
September 25, 2019 at 9:14 am in reply to: Listening any words of background and giving wrong result #1032946
There can be a couple of different reasons for this, and they are discussed in detail in the FAQ: https://www.politepix.com/openears/support/
OpenEars always supports three versions back, so currently that means iOS 10-12. Please make sure to read the license included with the distribution and the FAQ (https://www.politepix.com/openears/support/) to answer questions about the licensing of OpenEars.
September 14, 2019 at 10:38 am in reply to: Can I have Hindi Support for the Open Ears OEAcousticModel ? #1032933
No, sorry, it isn’t supported.
I’m really sorry, but I don’t give support for sphinx or acoustic model adaptation or generation here; I have to stick to the given scope of offering support for the OpenEars API.
Your initial issue could be sphinx version related, but as a heads-up, there will probably be a secondary issue, in that OpenEars’ acoustic model bundle format is custom to Politepix, so it is not necessarily the case that your sphinx adaptation results will be OpenEars-compatible, and that is going to unfortunately be outside the scope of the support I give here to troubleshoot. I wish you the best of success with this advanced topic in local recognition.
I would first start with the unmodified sample app from the distribution, and make sure basic recognition works for you for some utterance which is included in the language model, i.e. ruling out that the issue is with recognition in general rather than nbest specifically. Then turn on nbest and set a valid nbest number, and turn on null hypotheses so it is also called even if there isn’t a match with the language model, and see if that helps. Print the actual array in the callback so you can see if null hyps are being returned.
The most powerful tool you can apply to your own troubleshooting is turning on all the applicable forms of logging documented in the docs and headers – these will show you what the engine is perceiving and whether there is a difference between that and what is ending up on the callback, as well as telling you if your nbest settings are passing validation.
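As a hedged sketch of the n-best settings described above (property names assume the OEPocketsphinxController API and should be verified against the headers in your distribution):

```swift
// Enable n-best hypotheses and null-hypothesis callbacks before starting listening.
OEPocketsphinxController.sharedInstance().returnNbest = true          // deliver an array of hypotheses
OEPocketsphinxController.sharedInstance().nBestNumber = 5             // a valid n-best count
OEPocketsphinxController.sharedInstance().returnNullHypotheses = true // call back even when nothing matched
```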
Sorry, there isn’t existing sample code for this and I unfortunately don’t have time at the moment to provide an example. However, it is known to work according to the instructions in the documentation and headers, so I recommend just taking some time and giving it a careful read, particularly relating to the details of how to get specific callbacks from OEEventsObserver (for instance, n-best) by changing settings in OEPocketsphinxController.
- This reply was modified 8 months, 4 weeks ago by Halle Winkler.
Sorry I overlooked the second part of your question. No, there isn’t a single location where this can be configured.
August 19, 2019 at 11:37 am in reply to: Pocketsphinx feature not available : keywords search #1032881
Sorry, it isn’t planned.
Sorry, there is no mechanism for local on-device adaptation in OpenEars.
Sorry, there is no demo other than the demo framework.
June 25, 2019 at 9:24 am in reply to: App is Crashing after adding a new OpenEars framework #1032857
The best way to verify this is to quickly create a totally new test app (in a different location) which only has one app target, where you set the bundle ID of the app to the new framework bundle ID. If the framework can run RapidEars methods with this app, the issue is that the old app is either not loading the scheme/target you are expecting, or that the target that is being called by the scheme doesn’t really have the bundle ID you are expecting.
If a brand-new app that only has one target with the expected bundle ID doesn’t work either, you can send it to me and I will figure out what is going on.
June 25, 2019 at 8:51 am in reply to: App is Crashing after adding a new OpenEars framework #1032855
This will be because the bundle ID in the new framework doesn’t match the bundle ID in the app. Please double-check what the app target you are building and running has as a bundle ID, and whether it corresponds to the bundle ID that the license was created with.
Sorry, there is no mechanism for adding an ID to a purchase – the purchase is for a specific bundle ID. A license for a different bundle ID is a different license, so it is a different purchase. This is mentioned during the purchase process, so you can also see it mentioned on your invoice for that purchase. Sorry I can’t help out with this issue.
June 18, 2019 at 8:37 am in reply to: Does OpenEars require internet access to when using? #1032847
Sorry, it isn’t possible to change a bundle ID after purchase.
Sorry, runRecognitionOnWaveFile is an OpenEars-only method, not a RapidEars method, so it shouldn’t invoke a RapidEars callback. Have you tried it out with pathToTestFile? I am doubtful that there is a way to do exactly what you’re trying for here, but pathToTestFile will invoke RapidEars hypothesis callbacks.
This is just not the use case that this framework is conceived for, sorry. You will probably need to use a networked service to do this.
Sorry, no, there are no samples or Xamarin support, so it may not be the right solution for your needs.
OK, in that case I’m afraid I can’t give you more assistance with this at this time since I can’t replicate that this issue persists with the build system changed, and the error shared doesn’t have information about the specific build error in the logs.
However, I have made a new release today which doesn’t demonstrate this issue based on my earlier replication of your issue, so you’re welcome to download it and see if it helps you: https://www.politepix.com/wp-content/uploads/OpenEarsDistribution.tar.bz2
I was able to replicate the issue, but it was fixed by that change. Can you describe to me how you changed the project to the legacy build system?
Please set your project to the legacy build system.
Are you finished with your question about building the framework?
BTW, it shouldn’t be necessary for you to build the framework if you are just testing out the sample app for the first time – the cases under which a rebuild of the framework are needed/suggested aren’t that common.
OK, I’m not sure it’s constructive to troubleshoot the sample app’s recognition of your voice right now if you would like assistance building the framework. Have you given my suggestion above a try?
Hmm, looks like it is possible for Xcode to override the following build setting differently for different setups, unfortunately. I will fix this in a future version, but for now, please set the framework project’s build setting as follows:
Are you talking about the framework or one of the sample apps?
Can you be more specific about what part of the project you can’t build, and copy and paste the results you receive, which Xcode you’re using, and which targets you’re building? A deprecation warning shouldn’t affect building.
Sure, it’s in the API definition – it is for use while recognition isn’t in progress, if PocketsphinxController is usually doing something undesired to your session while recognition isn’t in progress. While recognition is in progress, it is expected that PocketsphinxController always normalizes to the session settings it is designed around.
OK, thank you for the elaboration. This is the current expected behavior, sorry. The recognition is intended to be performed on a single audio input which is only mic speech, and PocketsphinxController performs its own audio session management to achieve this.
Can you clarify the result you are seeking and what is actually happening?
1) Can you elaborate more on this? Functionality within an app is the main purpose of the framework so I think I’m not yet following the specific question.
2) Yes, there is TTS in OpenEars and with the NeatSpeech plugin, but I would recommend first using the native Apple TTS API and seeing if it covers your requirements.
3) No, are you seeing an incompatibility with a Swift version?
Sorry, it isn’t really a bug in that sense, but a limitation of the models and their size/domain for offline use.
Sorry, yes, this is actually a known issue – if you search for “number” in these forums you can see several previous discussions about it.
Sorry, it isn’t as simple as just bundling it – the bundles contain more than CMU Sphinx files and it isn’t trivial to create the parts which are contributed from this end. Sorry I can’t help with this.
January 24, 2019 at 7:49 pm in reply to: How to create a bundle for a custom acoustic model #1032695
Sorry, it isn’t possible to create an OpenEars-compatible bundle for arbitrary acoustic models.
Sorry, that isn’t a feature.
January 15, 2019 at 2:40 pm in reply to: Switching between runRecognitionOnWavFile & standard setup #1032687
Ah, I would not really expect this to work during suspension, to be honest. Starting and stopping should be very quick, have you tried that and timed it?
January 15, 2019 at 12:16 pm in reply to: Switching between runRecognitionOnWavFile & standard setup #1032685
Hi, can you show me the code where you switch between the recognition methods?
January 7, 2019 at 10:18 am in reply to: Switching between runRecognitionOnWavFile & standard setup #1032678
OK, there should actually be a lot more logging information that is possible to share, so can you take a look at the post “Please read before you post – how to troubleshoot and provide logging info here” to see how much logging is needed to troubleshoot this kind of issue, and also share the backtrace (type ‘bt’ in lldb at the crash) in order to make further investigation possible? Thanks.
Good fix! Happy New Year to you as well.
December 28, 2018 at 9:30 am in reply to: Can I feed continues stream of audio data to the recognizer ? #1032671
No, there is no way to stream audio which isn’t on the iPhone.
If you check out the Swift tutorial tool (using the first switch for OpenEars alone), it will show you how to use the functions which give you the path to the generated language models:
Since lmPath is storing a returned string, it seems unlikely that lmPath.text is what you want. Another working example of obtaining this path in Swift can be seen in the Swift sample app that is in the OpenEars distribution folder.
If the issue is due to something else, it is necessary to turn on OpenEarsLogging and show all of the logging from the beginning to the end of the app run so I can help. It is possible that the language model generation isn’t working (it looks like this model is being generated out of a document, so I can imagine some complications there), so after the failed generation there is nothing to pass to the string, and the full logging output will indicate this if so.
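For comparison, a minimal sketch of generating a model and retrieving the returned paths in Swift (method names follow the API quoted elsewhere in these forums; the vocabulary, requested name, and acoustic model name here are examples):

```swift
// Generate a small language model and retrieve the String paths it returns.
let lmGenerator = OELanguageModelGenerator()
let words = ["HELLO", "WORLD"]    // example vocabulary
let fileName = "MyLanguageModel"  // example requested name

let err: Error! = lmGenerator.generateLanguageModel(from: words, withFilesNamed: fileName, forAcousticModelAtPath: OEAcousticModel.path(toModel: "AcousticModelEnglish"))

if err == nil {
    // These return String paths directly – not UI text properties:
    let lmPath = lmGenerator.pathToSuccessfullyGeneratedLanguageModel(withRequestedName: fileName)
    let dictPath = lmGenerator.pathToSuccessfullyGeneratedDictionary(withRequestedName: fileName)
    print(lmPath, dictPath)
} else {
    // A generation failure here means there is nothing to pass to startListening.
    print("Model generation failed: \(err.localizedDescription)")
}
```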
Can you take another look at the link to the RapidEars callbacks workaround and make sure that you also made the described modifications to RapidEars.framework? I’m talking about the part at the beginning that begins “In the RapidEars framework’s header file OEEventsObserver+RapidEars.h”:
Language model switching is performed by OpenEars.framework rather than by RapidEars.framework or RapidEarsDemo.framework, so it is kind of unlikely that the issue is related to switching from the RapidEars demo to the licensed version despite it appearing at that time (I won’t say it’s impossible, but I am not personally aware of a way that it could occur due to the fact that RapidEars doesn’t perform that function).
RapidEars only adds changes to speech processing, and callbacks, so those would be the only things I could offer advice with specifically regarding your new RapidEars framework, but it sounds like you are confident that you are receiving all RapidEars and OpenEars callbacks, is that correct?
This is a bit of a tricky one because there also isn’t any difference in speech code between the demo and the licensed version; there is just the timeout after a few minutes in one and the license checking in the other as differences. It would be unusual to get any kind of a behavioral change in the actual engine as a result of changing over to the licensed version since there isn’t different engine code.
If you are definitely receiving all expected callbacks from OpenEars and from RapidEars, I would recommend beginning by taking standard troubleshooting steps to see what the issue might be. Step one would be to turn on OpenEarsLogging and verbosePocketsphinx and verboseRapidEars, and see if there is anything in the logging to let you know what is happening when you switch models. Please feel free to show me the logs (it’s fine to do a search/replace for private info in there as long as you show me all of the OpenEars logging from the beginning to the end of the app session without removing any of the OpenEars logging in between).
I think you may be experiencing this issue with callbacks:
Please give the fix I describe in the linked reply a try, and let me know if it works for you.
Super, I’m happy to hear it!
There is no need to import the xcodeproj file into anything else. Please just start with a new distribution and open and run the OpenEarsSampleApp project itself with no changes, thanks.
OK, there should be no issue running the Swift app or following the tutorial with Xcode 9, so I would start by establishing why you can’t compile the sample app, which should be possible to compile and run right after downloading. I’m going to suggest the theory that something has been unintentionally changed in the downloaded distribution, and recommend that you remove all of your work (back it up first) and download an entirely new OpenEars distribution from https://www.politepix.com/openears to a new location, and start by compiling the sample app. If this works, start again with the tutorial with a new app, taking extra time with the first part of step 1. I don’t recommend continuing trying to troubleshoot the existing tutorial implementation app since the issue with the sample app makes it sound like an accidental issue of some kind and not something direct to troubleshoot, and it may also be the case that there have been further changes in the troubleshooting process.
Can you let me know which Xcode/Swift version this is with, and what the target is (simulator or a specific device)?
Are you able to compile the Swift sample app?
You’re welcome! Hmm, yes, I think that is a known bug (or was a known bug at the time that I did the sample app) which I have unfortunately not reexamined in a while. It would need some time set aside to create a few new installs in order to investigate why the callback isn’t received as expected (or possibly, why the sample app just doesn’t do the right thing, which is the second possibility). It could be a while before I have the opportunity to look into it – are you able to use the workaround for the time being?
November 20, 2018 at 5:58 pm in reply to: Are the RapidEars, Rejecto, and RuleORama demos still available? #1032608
Should be working now.
November 20, 2018 at 5:54 pm in reply to: Are the RapidEars, Rejecto, and RuleORama demos still available? #1032606
Ah, that makes sense. Thank you for updating me on it.
November 20, 2018 at 5:47 pm in reply to: Are the RapidEars, Rejecto, and RuleORama demos still available? #1032603
What is the demo order email that was sent to you a few minutes ago related to? Was one of your tests successful?
November 20, 2018 at 1:00 pm in reply to: programatically start and stop openEars/RapidEars swift #1032598
There is a sample app written in Swift which shows a working example of this and many other operations. It is in OpenEarsDistribution/OpenEarsSampleAppSwift/OpenEarsSampleAppSwift.xcodeproj
November 20, 2018 at 10:03 am in reply to: programatically start and stop openEars/RapidEars swift #1032596
You’re welcome! Switching dictionaries is also in the sample app, so I recommend opening it up and checking out its code.
November 20, 2018 at 9:38 am in reply to: programatically start and stop openEars/RapidEars swift #1032593
I wouldn’t set the instance false first or call stop on an optional instance that has been set false. There are a couple of examples of working stopListening() calls in the Swift sample app that is part of the OpenEarsDistribution directory in the file ViewController.swift that can help with this, as well as other complexities you may be seeing such as avoiding calling stop on an already-stopped instance.
November 19, 2018 at 10:35 pm in reply to: Are the RapidEars, Rejecto, and RuleORama demos still available? #1032591
Hmm, just tried it out in another browser with no existing cookies or logins and it worked as expected. Weird question, but could there be anything going on at the network level for you which might prevent a first-party cookie from working on the site?
November 19, 2018 at 4:45 pm in reply to: Are the RapidEars, Rejecto, and RuleORama demos still available? #1032588
I’m not seeing this issue, unfortunately – which browser is this with?
I haven’t made a decision about this, sorry. We’re in year three of bitcode not being required in the iPhone target app despite the note you linked to, so this isn’t a current concern.
October 5, 2018 at 1:05 pm in reply to: I can’t get app worked when I set .mm for ViewController on XCode 10.0. #1032557
Thanks for this information. Unfortunately, I won’t be able to look into the issue until after the 15th; I apologize for the delay but it is currently unavoidable.
In the meantime, to make it possible to fix quickly once I am able to look into it, please check out the post Please read before you post – how to troubleshoot and provide logging info here so you can see how to turn on and share the logging that provides troubleshooting information for this kind of issue. There will be a complete error in your build log which you can share, and you can also provide important missing information such as which OpenEars plugin this question is about, which version it is, which version of OpenEars you are using, etc.
If you want to attempt to troubleshoot the error yourself in the meantime, you can look at the full error you receive in the build log, and then most likely link to a current and supported c++ library in your build settings, since my best guess without being able to look into it is that you have carried forward an older unsupported c++ library that Xcode 10 no longer supports. Good luck if you want to give that a try, and I will check into this as soon as it is possible to do so; thanks for your patience.
That’s right, there is lots of info about how to do this using OpenEars’ APIs in the tutorial and documentation.
OpenEars works with smaller dynamically-generated models via its own API rather than large pre-existing models.
OpenEars-supported acoustic models are found here: https://www.politepix.com/languages/
August 11, 2018 at 6:27 pm in reply to: rapidEarsDidReceiveLiveSpeechHypothesis not firing #1032509
Glad it worked for you! Thanks for the feedback on the instructions.
Sorry, it is currently necessary to create all of the protocol methods.
You’re welcome, very glad it helped!
Welcome! Did you try this:
July 17, 2018 at 9:11 pm in reply to: Bluetooth Audio Playback and disableSessionResetsWhileStopped Issue #1032489
Sorry, I don’t have a suggestion for this situation, other than considering creating your listening session on demand when it is needed rather than instantiating it in situations where speech isn’t used, since startup on modern devices is very fast.
Sorry, I don’t support it.
July 17, 2018 at 9:08 pm in reply to: rapidEarsDidReceiveLiveSpeechHypothesis not firing #1032487
This sounds like it might be a different issue – if this is about a licensed framework, are you sure that the app you have linked to has the same bundle ID as the one you registered when purchasing? Showing the logs before the XPC issue may help.
You can use different audio modes with OpenEars, take a look at OEPocketsphinxController’s audioMode property. The modes correspond to documented Apple modes. Use of non-default modes isn’t supported by me, but this should get you started with experimenting.
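As a hedged illustration of the audioMode property mentioned above (the mode string here is an example; use a value corresponding to the documented Apple audio session modes, and note that non-default modes are unsupported):

```swift
// Set a non-default audio mode before starting listening.
// "VoiceChat" is an example value for experimentation only.
OEPocketsphinxController.sharedInstance().audioMode = "VoiceChat"
```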
Glad it’s working for you!
No problem, glad you have a known-working reference point you can check things from. I don’t support Corona, so getting it working well in that context has to be done without my assistance unfortunately, but I would recommend just taking your time and assuming it’s something minor which will turn up when retracing your steps if you use the sample app as the guideline.
Does the sample app get the same error on the same device?
Welcome, please check out the post Please read before you post – how to troubleshoot and provide logging info here so you can see how to turn on and share the logging that provides troubleshooting information for this kind of issue.
That’s normal for a grammar. Generally, scoring isn’t a useful/usable piece of info for your implementation.
If your original grammar implementation doesn’t raise any errors and returns output that you can evaluate, we have explored everything within the realm of supportable troubleshooting here, so I am going to close this as answered; we have covered the topics which came up at substantial length, and there should be enough for you to examine further outside of an ongoing troubleshooting process with me.
If you have very specific questions later on (I mean, questions about a single aspect of a single implementation with a single acoustic model) it’s OK to start very focused new topics, just please create a fresh implementation you are comfortable sharing things about and you are sure doesn’t have accumulated code from different implementations, and remember to share the info in here without prompting from me so the questions don’t get closed, thanks and good luck!
This is because there are multiple things about this which are a problem for ideal recognition with these tools: it has high uncertainty because it is a different language, and language models aren’t designed to work with a single word. I expect changing the weight to affect this outcome, but if it doesn’t, that is the answer on whether this approach will work.
Have we ever seen a fully-working result from your original grammar implementation without a plugin since we fixed the grammar?
I’ve recommended what is possible to tune for Rejecto; there is nothing else. If it isn’t doing well yet, this is likely to just be due to it being a different language. You can also tune vadThreshold, but I recommended doing that at the start, so I am assuming it is correct now.
Yeah, that makes a certain amount of sense because this use case is very borderline for RuleORama – it isn’t usually great with a rule that has a single entry and the other elements of this which are pushing the limits of what is likely to work are probably making it worse. We can shelve the investigation of RuleORama now that we have seen a result from it.
The first thing in this RuleORama implementation to fix is again that this:
OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName), languageModelIsJSGF: true)
needs to be this:
OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath, dictionaryAtPath: dictPath, acousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName), languageModelIsJSGF: false)
There may be other issues but let’s start there.
Regarding your Rejecto results: you can now pick whichever one of them is better and experiment with raising or lowering the withWeight value in this line (lowest possible value is 0.1 and largest possible value is 1.9):
let err: Error! = lmGenerator.generateRejectingLanguageModel(from: words, withFilesNamed: fileName, withOptionalExclusions: nil, usingVowelsOnly: false, withWeight: 1.0, forAcousticModelAtPath: OEAcousticModel.path(toModel: accusticModelName))
What does the symbol “@” represent in the LookupList.text? (the double-ee’s and double-ii’s I can somehow interpret, but what does “@” really mean?)
It represents the phone sound in Hochdeutsch which is represented by the IPA ə. This is getting outside of the things I support here but there should be enough info in that explanation for you to find sources outside of these forums to continue your investigation if you continue to have questions.
lmPath = lmGenerator.pathToSuccessfullyGeneratedGrammar(withRequestedName: fileName)
needs to be:
lmPath = lmGenerator.pathToSuccessfullyGeneratedLanguageModel(withRequestedName: fileName)
If you want to show me more logs from this implementation, make sure to show me the now-changed code again as well.