AVAudioEngine warning: "deprecated Carbon Component Manager for hosting Audio Units"

I'm writing my first audio app for Mac, which loads an external audio unit and uses it to play sound through an instance of AVAudioEngine, and I've been seeing this warning:
WARNING: 140: This application, or a library it uses, is using the
deprecated Carbon Component Manager for hosting Audio Units. Support
for this will be removed in a future release. Also, this makes the
host incompatible with version 3 audio units. Please transition to the
API's in AudioComponent.h.
I've already transitioned from using AVAudioUnitComponents to AudioComponents (now accessed via this API), which I expected would solve this issue, but I'm still seeing the warning when I call start() on my engine.
Any ideas what's going wrong here? As far as I can tell, I'm no longer using deprecated APIs. Is it possible that AVAudioEngine is using deprecated APIs under the hood?
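For context, this is roughly how I look components up with the AudioComponent.h API (a simplified sketch; the search criteria here are just an example):

import AudioToolbox

// Enumerate the installed instrument (music device) components and collect their descriptions.
var searchDescription = AudioComponentDescription(
    componentType: kAudioUnitType_MusicDevice,
    componentSubType: 0,
    componentManufacturer: 0,
    componentFlags: 0,
    componentFlagsMask: 0
)

var descriptions: [AudioComponentDescription] = []
var component = AudioComponentFindNext(nil, &searchDescription)
while let found = component {
    var description = AudioComponentDescription()
    if AudioComponentGetDescription(found, &description) == noErr {
        descriptions.append(description)
    }
    component = AudioComponentFindNext(found, &searchDescription)
}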
Here's a snippet from the code I'm working with. I'm calling selectInstrument with a description I've retrieved using the AudioComponents API.
public func selectInstrument(withDescription description: AudioComponentDescription, callback: @escaping SelectInstrumentCallback) {
    AVAudioUnit.instantiate(with: description, options: []) { avAudioUnit, error in
        guard let unit = avAudioUnit else {
            callback(nil)
            return
        }
        self.disconnectCurrent()
        self.connect(unit: unit)
        unit.auAudioUnit.requestViewController { viewController in
            callback(viewController)
        }
    }
}
private func disconnectCurrent() {
    guard let current = currentInstrument else { return }
    self.engine.disconnectNodeInput(engine.mainMixerNode)
    self.engine.detach(current)
    self.currentInstrument = nil
    self.engine.stop()
}
private func connect(unit: AVAudioUnit) {
    let hardwareFormat = self.engine.outputNode.outputFormat(forBus: 0)
    self.engine.connect(self.engine.mainMixerNode, to: self.engine.outputNode, format: hardwareFormat)
    self.engine.attach(unit)
    do {
        // Try a stereo connection first; fall back to mono if the unit rejects it.
        try ExceptionCatcher.catchException {
            let stereoFormat = AVAudioFormat(standardFormatWithSampleRate: hardwareFormat.sampleRate, channels: 2)
            self.engine.connect(unit, to: self.engine.mainMixerNode, format: stereoFormat)
        }
    } catch {
        let monoFormat = AVAudioFormat(standardFormatWithSampleRate: hardwareFormat.sampleRate, channels: 1)
        self.engine.connect(unit, to: self.engine.mainMixerNode, format: monoFormat)
    }
    unit.auAudioUnit.contextName = "Running in AU host demo app"
    self.currentInstrument = unit
    do {
        // Carbon Component Manager warning issued here:
        try self.engine.start()
    } catch {
        print("Failed to start engine")
    }
}
Thanks for your help!

Related

How to deactivate on demand connect VPN from network extension?

I have configured an always-on VPN with an NEOnDemandRuleConnect. I retrieve some user data from a backend, such as the expiration date of the user's subscription. If it expires, I'd like to deactivate the VPN without opening the main app, doing it from the Network Extension instead. I fetch the data from the backend with a daily timer and then check whether the subscription has expired. Then I have a function that loads the VPN manager from the system settings, deactivates it, and finally saves it. If I don't deactivate the manager, the device will be left without connectivity, since the VPN is configured to always connect via the NEOnDemandRule. The function is more or less this one:
func stopProtection(completion: @escaping (Result<Void>) -> Void) {
    NSLog("Called stopProtection")
    NETunnelProviderManager.loadAllFromPreferences { (managers, error) in
        if let error = error {
            NSLog("[SUBS] ERROR \(error)")
        }
        if let managers = managers {
            if managers.count > 0 {
                let index = managers.firstIndex(where: { $0.localizedDescription == Constants.vpnBundleId })
                guard let index = index else {
                    completion(.error(ProtectionServiceError.noKidsVpnInstalled))
                    return
                }
                let myManager = managers[index]
                myManager.loadFromPreferences(completionHandler: { (error) in
                    guard error == nil else {
                        completion(.error(ProtectionServiceError.errorStoppingTunnel))
                        return
                    }
                    // Deactivate the VPN and save it
                    myManager.isEnabled = false
                    myManager.saveToPreferences(completionHandler: { (error) in
                        guard error == nil else {
                            completion(.error(ProtectionServiceError.errorStoppingTunnel))
                            return
                        }
                        completion(.success(()))
                    })
                })
            } else {
                completion(.error(ProtectionServiceError.errorStoppingTunnel))
            }
        }
    }
}
All of this code and logic runs in the extension, with all the limitations that implies. With the function above I only get the first NSLog saying "Called stopProtection"; it never loads any manager. Called from the main target, it works. I don't know whether I can load and modify the manager from the extension at all, or whether there is another way to do it.
Okay, I have debugged the network extension by attaching to the process and looking at the device Console, and this error pops up:
NETunnelProviderManager objects cannot be instantiated from NEProvider processes
So nope, there's the answer!
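For completeness, the only thing the provider itself can generally do is tear down its own running session. A rough sketch (my own assumption, not part of the original answer; note that an on-demand rule may simply restart the tunnel afterwards):

// Inside the NEPacketTunnelProvider subclass running in the extension.
// This stops the current tunnel session only; it cannot flip isEnabled or
// remove the on-demand rules, so the system may reconnect on demand.
func stopTunnelForExpiredSubscription() {
    cancelTunnelWithError(nil)
}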

Error while running multiple SFSpeechRecognitionTask in background

As per the requirements of the app I am developing, I have to pass multiple audio files to SFSpeechRecognizer and get the transcriptions in return.
I did it in two ways.
First Method - Using Recursion (runs correctly, you can skip it if you want)
I first completed this task by getting the transcriptions one by one, i.e. when an SFSpeechRecognitionTask completes, the result gets saved and the process runs again through a recursive call.
class Transcription {

    let url = [URL(fileURLWithPath: "sad")]
    var fileCount = 3
    let totalFiles = 4

    func getTranscriptionRecursive() {
        getTranscriptionOfAudioFile(atURL: url[fileCount], fileCount: fileCount, totalFiles: totalFiles) { (result) in
            if self.fileCount <= self.totalFiles {
                self.fileCount = self.fileCount + 1
                self.getTranscriptionRecursive()
            }
        }
    }

    func getTranscriptionOfAudioFile(atURL url: URL, fileCount: Int, totalFiles: Int, completion: @escaping ((SFSpeechRecognitionResult?) -> Void)) {
        let request = SFSpeechURLRecognitionRequest(url: url)
        request.shouldReportPartialResults = false
        let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
        if (recognizer?.isAvailable)! {
            recognizer?.recognitionTask(with: request) { result, error in
                // error handling
                completion(result)
            }
        }
    }
}
This method worked, but it takes far too long because each SFSpeechRecognizer request takes time to complete.
Second Method - Using a Loop and a Background Thread
(This one has the issue.)
I tried to create multiple requests and execute them in the background at once. For that, I created a for loop over the count of audio files, and in that loop I called the function that creates the SFSpeechRecognizer request and task.
for index in 0..<urls.count {
    DispatchQueue.global(qos: .background).async {
        self.getTranscriptionOfAudio(atURL: self.urls[index]) { (result, myError, message) in
            // error handling
            // process results
        }
    }
}
whereas the function that gets the speech recognition results is:
func getTranscriptionOfAudio(atURL audioURL: URL?, completion: @escaping ((SFSpeechRecognitionResult?, Error?, String?) -> Void)) {
    let request = SFSpeechURLRecognitionRequest(url: audioURL!)
    request.shouldReportPartialResults = false
    let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    if (recognizer?.isAvailable)! {
        recognizer?.recognitionTask(with: request) { result, error in
            // error handling
            completion(result, nil, nil)
        }
    } else {
        completion(nil, nil, "Recognizer could not be initialized")
    }
}
When I run this code, only one task executes and the other tasks give this error:
+[AFAggregator logDictationFailedWithError:] Error
Domain=kAFAssistantErrorDomain Code=209 "(null)"
I searched for this error on the internet, but there is no documentation that covers its details.
It might be due to running multiple SFSpeechRecognitionTasks in parallel, but the official Apple documentation doesn't forbid doing so: we can create each SFSpeechRecognitionRequest object separately, and we're not using a singleton SFSpeechRecognizer.
Let me know if anyone has any idea what is going on and what you'd suggest I do.

IBM Watson Speech To Text : Not able to transcribe the text using the Swift SDK

I am using the IBM Watson Speech to Text iOS SDK to transcribe real-time audio. I installed it through CocoaPods. I am stuck on an authentication issue while transcribing the audio to text.
The installed STT SDK version is 0.38.1.
I have configured everything, created the service and credentials correctly, and made sure SpeechToText is instantiated with the proper API key and URL. Whenever I call the startStreaming method, the STT SDK prints an error log that seems related to an authentication challenge.
Here is the code snippet.
let speechToText = SpeechToText(apiKey: Credentials.SpeechToTextAPIKey, iamUrl: Credentials.SpeechToTextURL)
var accumulator = SpeechRecognitionResultsAccumulator()

func startStreaming() {
    var settings = RecognitionSettings(contentType: "audio/ogg;codecs=opus")
    settings.interimResults = true
    let failure = { (error: Error) in print(error) }
    speechToText.recognizeMicrophone(settings: settings, failure: failure) { results in
        accumulator.add(results: results)
        print(accumulator.bestTranscript)
    }
}
Error Logs
CredStore - performQuery - Error copying matching creds. Error=-25300,
query={
class = inet;
"m_Limit" = "m_LimitAll";
ptcl = htps;
"r_Attributes" = 1;
sdmn = "IBM Watson Gateway(Log-in)";
srvr = "gateway-syd.watsonplatform.net";
sync = syna;
}
I have dug into the IBM Watson SDK documentation and even googled around this issue, but did not find any relevant answer.
A new version 1.0.0 of the Swift SDK has been released with SpeechToTextV1 changes, and the code below works for me with the Speech to Text service API key.
You don't have to explicitly pass the URL unless the service was created in a region other than Dallas. Check the URLs here.
import SpeechToTextV1 // If the SDK is installed using Carthage
import SpeechToText   // If the SDK is installed as a pod

let apiKey = "your-api-key"
let speechToText = SpeechToText(apiKey: apiKey)
var accumulator = SpeechRecognitionResultsAccumulator()

func startStreaming() {
    var settings = RecognitionSettings(contentType: "audio/ogg;codecs=opus")
    settings.interimResults = true
    speechToText.recognizeMicrophone(settings: settings) { response, error in
        if let error = error {
            print(error)
        }
        guard let results = response?.result else {
            print("Failed to recognize the audio")
            return
        }
        accumulator.add(results: results)
        print(accumulator.bestTranscript)
    }
}

func stopStreaming() {
    speechToText.stopRecognizeMicrophone()
}
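If your service instance does live in another region, my understanding is that the endpoint can be overridden on the client after initialization. A small sketch (the Sydney URL below is only an example; use the URL for your region):

let speechToText = SpeechToText(apiKey: apiKey)
// Assumption: serviceURL replaces the default Dallas endpoint.
speechToText.serviceURL = "https://gateway-syd.watsonplatform.net/speech-to-text/api"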
You can find more examples here
Hope this helps!!

How to Stream screen without Broadcast Extension in iOS

I want to stream my app to Twitch, YouTube, or a similar streaming service without any other application such as Mobcrush.
According to Apple, by using a Broadcast Extension I can stream my application's screen.
The Broadcast Extension delivers video data as CMSampleBuffers, which I should then send to an RTMP server like YouTube, Twitch, etc.
I figure that if I can get the video data myself, I can stream without using a Broadcast Extension in my app. So I tried to send RPScreenRecorder data to an RTMP server, but it doesn't work.
Here is the code I wrote. I use the HaishinKit open-source framework for RTMP communication.
(https://github.com/shogo4405/HaishinKit.swift/tree/master/Examples/iOS/Screencast)
let rpScreenRecorder: RPScreenRecorder = RPScreenRecorder.shared()
private var broadcaster: RTMPBroadcaster = RTMPBroadcaster()

rpScreenRecorder.startCapture(handler: { (cmSampleBuffer, rpSampleBufferType, error) in
    if error != nil {
        print("Error occurred \(error.debugDescription)")
    } else {
        if let description: CMVideoFormatDescription = CMSampleBufferGetFormatDescription(cmSampleBuffer) {
            let dimensions: CMVideoDimensions = CMVideoFormatDescriptionGetDimensions(description)
            self.broadcaster.stream.videoSettings = [
                "width": dimensions.width,
                "height": dimensions.height,
                "profileLevel": kVTProfileLevel_H264_Baseline_AutoLevel
            ]
        }
        self.broadcaster.appendSampleBuffer(cmSampleBuffer, withType: .video)
    }
}) { (error) in
    if error != nil {
        print("Error occurred \(error.debugDescription)")
    } else {
        print("Success")
    }
}
If you have any solution, please answer me :)
I've tried a similar setup and it is possible to achieve what you'd like; you just need to adjust it a little.
I don't see it in your example, but make sure the broadcaster's endpoint is set up correctly. For example:
let endpointURL: String = "rtmps://live-api-s.facebook.com:443/rtmp/"
let streamName: String = "..."
self.broadcaster.streamName = streamName
self.broadcaster.connect(endpointURL, arguments: nil)
Then, in startCapture's handler block, you need to filter by buffer type so that you send the correct data to the stream. In this case you're only sending video, so we can ignore audio. (You can also find HaishinKit examples that send audio too.) For example:
RPScreenRecorder.shared().startCapture(handler: { (sampleBuffer, type, error) in
    if type == .video, broadcaster.connected {
        if let description: CMVideoFormatDescription = CMSampleBufferGetFormatDescription(sampleBuffer) {
            let dimensions: CMVideoDimensions = CMVideoFormatDescriptionGetDimensions(description)
            broadcaster.stream.videoSettings = [
                .width: dimensions.width,
                .height: dimensions.height,
                .profileLevel: kVTProfileLevel_H264_Baseline_AutoLevel
            ]
        }
        broadcaster.appendSampleBuffer(sampleBuffer, withType: .video)
    }
}) { (error) in }
Also make sure the screen is actually updating during streaming. I've noticed that if you're recording a static window with RPScreenRecorder, it only calls the handler when there is new video data to send. For testing I added a simple UISlider, which updates the feed when you move it around (see the small sketch below).
I've tested it with Facebook Live and I think it should work with other RTMP services too.
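For reference, a minimal sketch of the kind of test UI I mean (any continuously changing view should work; this controller is just an illustration):

import UIKit

final class StreamTestViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Dragging the slider redraws the screen, so RPScreenRecorder keeps
        // delivering fresh video sample buffers to the capture handler.
        let slider = UISlider(frame: CGRect(x: 20, y: 100, width: 280, height: 40))
        view.addSubview(slider)
    }
}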

How to write data to a file on iOS in the background

I have an app that runs continuously in the background and needs to write data to a file. Occasionally I find a partial record written to the file, so I have added some additional code to try to ensure that, even if the app is backgrounded, it still has a chance of completing any writes.
Here is the code so far. It seems to work, but I am still not sure whether the APIs I am using are the best ones for this job.
For example, is there a better way of opening the file and keeping it open, so as not to have to seek to the end of the file each time?
Is the approach of marking the task as a background task correct to ensure that iOS will allow the task to complete? It executes approximately once every second.
/// We wrap this in a background task to ensure that the task will complete
/// even if the app is switched to the background by the OS
func asyncWriteFullData(dataString: String, completion: (() -> Void)?) {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), {
        let taskID = self.beginBackgroundUpdateTask()
        self.writeFullData(dataString)
        self.endBackgroundUpdateTask(taskID)
        if (completion != nil) {
            completion!()
        }
    })
}

func beginBackgroundUpdateTask() -> UIBackgroundTaskIdentifier {
    return UIApplication.sharedApplication().beginBackgroundTaskWithExpirationHandler({})
}

func endBackgroundUpdateTask(taskID: UIBackgroundTaskIdentifier) {
    UIApplication.sharedApplication().endBackgroundTask(taskID)
}

/// Write the record out to file. If the file does not exist then create it.
private func writeFullData(dataString: String) {
    let filemgr = NSFileManager.defaultManager()
    if let filePath = self.fullDataFilePath {
        if filemgr.fileExistsAtPath(filePath) {
            if filemgr.isWritableFileAtPath(filePath) {
                let file: NSFileHandle? = NSFileHandle(forUpdatingAtPath: filePath)
                if file == nil {
                    // This is a major problem so best notify the User
                    // How are we going to handle this type of error ?
                    DebugLog("File open failed for \(self.fullDataFilename)")
                    AlertManager.sendActionNotification("We have a problem scanning data, please contact support.")
                } else {
                    let data = (dataString as NSString).dataUsingEncoding(NSUTF8StringEncoding)
                    file?.seekToEndOfFile()
                    file?.writeData(data!)
                    file?.closeFile()
                }
            } else {
                // print("File is read-only")
            }
        } else {
            // print("File not found")
            if createFullDataFile() {
                // Now write the data we were asked to write
                writeFullData(dataString)
            } else {
                DebugLog("Error unable to write Full Data record")
            }
        }
    }
}
I suggest you use NSOutputStream. In addition, GCD I/O (dispatch I/O channels) can also handle this kind of work; see the sketch after the link below for the stream-based idea.
Techniques for Reading and Writing Files Without File Coordinators
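For illustration, here is a small sketch of the stream-based approach in current Swift (the helper name and path parameter are just placeholders, and error handling is omitted):

import Foundation

// Hypothetical helper: appends a UTF-8 record to the file at `path`.
func appendRecord(_ dataString: String, toFileAtPath path: String) {
    // Opening with append: true removes the need to seek to the end each time.
    guard let stream = OutputStream(toFileAtPath: path, append: true) else { return }
    stream.open()
    defer { stream.close() }

    let bytes = Array(dataString.utf8)
    bytes.withUnsafeBufferPointer { buffer in
        guard let base = buffer.baseAddress else { return }
        // write(_:maxLength:) returns the number of bytes written, or -1 on error.
        _ = stream.write(base, maxLength: buffer.count)
    }
}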
