I'm trying to create a custom keyboard extension with speech recognition backed by our own server. In some older threads I read about a restriction that you cannot use the microphone from an app extension, but I can't find any hint about that in the current developer documentation. There is only a bullet point mentioning that you cannot access the microphone when RequestsOpenAccess is set to false in the Info.plist, which made me think that I can use the microphone from inside the keyboard.
So I tried it myself: I set RequestsOpenAccess to true and also set the "Microphone Usage Description". The permission prompt for the microphone showed up on the first try, but the recording did not start - there was no hint about it in the console and no errors popped up. I had tried the recording code before, directly inside the app, and there it worked perfectly, so I don't think it has anything to do with this code.
In addition, I tried the keyboard in the Simulator, where the recording also worked as intended, so I think I may be missing some permission or entitlement.
Does anybody know anything about this and can help me figure out my issue?
The workaround is to have the keyboard extension open the containing app, which does the actual mic recording and communicates with the extension via shared user defaults (an App Group). The containing app switches back to the host app after it starts recording.
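A rough sketch of both halves of that round trip - the custom URL scheme and the App Group name below are hypothetical placeholders:

    import UIKit

    // Sketch: inside the keyboard's UIInputViewController subclass.
    class KeyboardViewController: UIInputViewController {

        // UIApplication.shared is unavailable in extensions, so walk the
        // responder chain and invoke openURL: via a selector to reach the
        // containing app.
        func openContainingApp() {
            guard let url = URL(string: "mykeyboard://start-recording") else { return }
            let selector = NSSelectorFromString("openURL:")
            var responder: UIResponder? = self
            while let r = responder {
                if r.responds(to: selector) {
                    r.perform(selector, with: url)
                    return
                }
                responder = r.next
            }
        }

        // Read back whatever the containing app wrote after recording; both
        // targets need the same App Group entitlement for this to work.
        func latestTranscription() -> String? {
            UserDefaults(suiteName: "group.com.example.mykeyboard")?
                .string(forKey: "latestTranscription")
        }
    }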
I am using flutter_local_notifications 9.4.1 in my Flutter mobile application, and I am deploying successfully to Android and iOS 14.2.
My use case is simple - to provide a local notification with a sound when a specific task has completed.
Android - everything works fine - my mp3 sound plays when the notification appears just as I want. Perfect.
iOS - I have everything working except for sound. I have added a custom sound (a .caf file converted from an .aif file using the Apple-recommended procedure ... afconvert). I added the sound file to the Flutter project the correct way - by using Xcode to include it in the top-level folder with the target membership ticked. In my Flutter code I have already made the modification to AppDelegate.swift, and in the IOSNotificationDetails I have ensured that presentSound is true and presentAlert is true. I am also specifying sound: 'mysound.caf'. In fact, even if I don't specify a custom sound, the documentation says the default sound will be used. But no sound occurs, default or otherwise.
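In case it helps with comparison, the AppDelegate.swift modification the plugin's README describes looks like this on a standard Flutter template (a sketch, not necessarily your exact file):

    import UIKit
    import Flutter
    import UserNotifications

    @UIApplicationMain
    @objc class AppDelegate: FlutterAppDelegate {
        override func application(
            _ application: UIApplication,
            didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?
        ) -> Bool {
            // Lets the plugin present notifications (and play their sounds)
            // while the app is in the foreground on iOS 10+.
            if #available(iOS 10.0, *) {
                UNUserNotificationCenter.current().delegate = self as? UNUserNotificationCenterDelegate
            }
            GeneratedPluginRegistrant.register(with: self)
            return super.application(application, didFinishLaunchingWithOptions: launchOptions)
        }
    }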
On the iPhone device, I have checked that silent mode is off and I have checked the app permissions - everything is set as it should be: sound is enabled and notifications are enabled. I have even copied the .caf file manually to the iOS device to be sure it plays and that I can hear it! I have no build errors or run-time errors. So I must be missing something.
I don't know of a way to check that the sound file is actually deployed to the device - is this possible?
Is there something in the setup or deployment process I have missed? I have not posted code because I have spent hours comparing my code to the documentation and found no differences. I am hoping someone has experienced this and found an undocumented step.
I'm working on an iOS Swift package/framework, and we have a simple microphone input class that assigns the microphone input level to a variable to control visuals.
The class works fine when I use it in a traditional app. The Simulator properly asks for permission, as does an actual iPhone. It does this through the Info.plist file, which is properly configured with the microphone privacy setting and explanation string.
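For context, the class is essentially a level meter along these lines (a minimal sketch using AVAudioEngine; the names are illustrative, not the actual package code):

    import AVFoundation

    // Publishes the current microphone input level for driving visuals.
    final class MicLevelMonitor {
        private let engine = AVAudioEngine()
        // Most recent RMS level of the input buffer.
        private(set) var level: Float = 0

        func start() throws {
            let input = engine.inputNode
            let format = input.outputFormat(forBus: 0)
            input.installTap(onBus: 0, bufferSize: 1024, format: format) { [weak self] buffer, _ in
                guard let samples = buffer.floatChannelData?[0] else { return }
                let n = Int(buffer.frameLength)
                // Root-mean-square of the buffer as a rough loudness measure.
                var sum: Float = 0
                for i in 0..<n { sum += samples[i] * samples[i] }
                self?.level = sqrt(sum / Float(max(n, 1)))
            }
            try engine.start()
        }

        func stop() {
            engine.inputNode.removeTap(onBus: 0)
            engine.stop()
        }
    }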
As part of the framework, we are also including Xcode playgrounds within the package as a set of tutorials, and I am trying to create a microphone example. Because the playgrounds live inside a Swift package/framework, I'm unsure whether it's possible to attach an Info.plist file, or even whether that is the proper workflow.
My question is this:
Is there any way to get Xcode/the playground to ask for permission to use the mic from within a Swift package/framework? Is there a place I could put an Info.plist to make it work?
In my iOS AU host application, I am using the AVAudioUnitComponentManager.components method to retrieve the list of available Audio Units. It works as expected most of the time; however, it sometimes returns only the Audio Units created by Apple and none of the third-party Audio Units installed on the device. The interesting thing is that if, after I encounter this issue in my app, I go to GarageBand and open the Audio Unit list there, then when I return to my app all the third-party AUs are present. So I am wondering whether some other initialization should be done before calling AVAudioUnitComponentManager.components - something GarageBand is doing that I should also do in my app.
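For reference, the query is essentially this (a minimal sketch; zeroed description fields act as wildcards, so this should list everything):

    import AVFoundation

    // List every available Audio Unit component on the device.
    let description = AudioComponentDescription(
        componentType: 0,
        componentSubType: 0,
        componentManufacturer: 0,
        componentFlags: 0,
        componentFlagsMask: 0
    )

    let components = AVAudioUnitComponentManager.shared()
        .components(matching: description)
    for component in components {
        print(component.manufacturerName, component.name)
    }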
Any suggestions?
As it turned out, the problem was happening because my app did not have an entitlements file with the "Inter-App Audio" capability key. After adding this capability to my project target in Xcode, the problem was fixed.
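For reference, enabling that capability adds the following key to the target's .entitlements file:

    <key>inter-app-audio</key>
    <true/>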
I am making a simple SpriteKit game that needs audio input from the microphone for voice commands. I have already created the game itself, and I have also created a separate app that detects a spoken word and displays it on the screen, using cmu-sphinx/pocketsphinx (http://cmusphinx.sourceforge.net/).
I was using Novocaine (https://github.com/alexbw/novocaine) because I have used this library numerous times with great success in other projects, but whenever I initialize the Novocaine object in the game, my app crashes with an exception.
So I figured that since I was using a wrapper library, the crash must have come from some deprecated functions that were no longer supported, and I did some more research. I came across Apple's demo app aurioTouch (https://developer.apple.com/library/ios/samplecode/aurioTouch/Introduction/Intro.html). After playing around with that sample app, I was able to narrow down the classes I needed to get the raw input values, primarily the AudioController class. So I imported the AudioController class and its necessary components and was able to build. Fortunately, the example had proper exception handling, so I could see where it was failing. The exception was caused by this line:
AudioUnitInitialize(_rioUnit)
I began to suspect that trying to access the microphone in a SpriteKit/SceneKit environment was causing the issue. I tried the same steps mentioned above in brand-new SpriteKit and SceneKit projects, which did not resolve the issue either. So I am wondering whether it is at all possible to get real-time microphone input within a SpriteKit/SceneKit project.
There is a setting in the iOS Settings app under Privacy > Microphone where you have to switch on permission for an app to use the device's microphone. The AVAudioSession requestRecordPermission API can check and request this permission.
There are also AVAudioSession categories that must be set and activated to use the microphone. The use of SpriteKit and SceneKit does not interfere with these settings.
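A minimal sketch of that sequence - request permission, then configure an input-capable session before touching the RemoteIO unit:

    import AVFoundation

    let session = AVAudioSession.sharedInstance()
    session.requestRecordPermission { granted in
        guard granted else {
            print("Microphone permission denied; AudioUnitInitialize will fail.")
            return
        }
        do {
            // A category that enables input must be set and activated.
            try session.setCategory(.playAndRecord, mode: .default, options: [])
            try session.setActive(true)
            // Safe to set up and initialize the RemoteIO unit from here.
        } catch {
            print("Audio session setup failed: \(error)")
        }
    }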
In iOS 10 Apple extended the scope of privacy controls: you have to declare access to any private data in the Info.plist file.
Go to the Info.plist file and add the privacy key according to your requirement.
Microphone:
Key: Privacy - Microphone Usage Description
Value: $(PRODUCT_NAME) uses the microphone to capture audio input for voice commands.
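In the raw plist source, that entry corresponds to the NSMicrophoneUsageDescription key:

    <key>NSMicrophoneUsageDescription</key>
    <string>$(PRODUCT_NAME) uses the microphone to capture audio input for voice commands.</string>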
I want to make an app that won't be released on the App Store. I want this app to open Siri through private APIs - basically, inject a Home button press-and-hold into the event queue. I have tried using GSEvent (GSSendEvent - Inject Touch Event iOS), but it no longer works after iOS 7 (it silently fails). I believe it is possible through SBUIController, but I can't figure out how to use SBUIController in iOS 8. To be clear, I want to do this on a non-jailbroken phone.
How can I go about doing this in iOS 8?
Thanks
You should check out the runtime headers of all the private/public APIs here.
I found a method hidden in Accessibility which could possibly work in your case. Have a look at it here: http://git.io/frK6Sw . The method is named -(void)openSiri, which suggests that it might open Siri; I haven't tried it, though.
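If you want to experiment, invoking such a private selector at runtime generally looks like this ("SomePrivateClass" is purely a placeholder - take the real class name from the linked header; and since this is private API, it may break on any iOS update, App Store rules aside):

    import Foundation

    // Hypothetical: resolve a private class at runtime and call -openSiri on it.
    if let cls = NSClassFromString("SomePrivateClass") as? NSObject.Type {
        let instance = cls.init()
        let selector = NSSelectorFromString("openSiri")
        if instance.responds(to: selector) {
            instance.perform(selector)
        }
    }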