I'm currently building an app with video and audio recording (foreground and background).
Some of my clients report that recordings are failing. I've checked their logs, and it seems that on some devices the encoding settings are not supported.
I have searched all over the web for a decent source of information on how to configure the recording objects AVAssetWriterInput and AVAudioRecorder. These are my current settings for both objects:
let recorderSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVNumberOfChannelsKey: 2,
    AVSampleRateKey: 44100,
    AVEncoderBitRateKey: 64000,
    AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
]
Any help would be appreciated, thanks
BTW, this is the error I'm getting:
"AssetWrtiting finished with
Optional(Error Domain=AVFoundationErrorDomain Code=-11861
"Cannot Encode Media"
UserInfo={NSLocalizedFailureReason=The encoding parameters are not supported.
NSLocalizedDescription=Cannot Encode Media,
NSUnderlyingError=0x2839b7750 {Error Domain=NSOSStatusErrorDomain Code=-12651 "(null)"}})
Just an FYI, if you are stumbling upon this question: the AVCaptureSession outputs AVCaptureVideoDataOutput and AVCaptureAudioDataOutput each have a method that returns the recommended settings for recording. For the audio output it is:
func recommendedAudioSettingsForAssetWriter(writingTo outputFileType: AVFileType) -> [AnyHashable : Any]?
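For illustration, here is a rough sketch of feeding those recommended settings into the writer inputs. It assumes audioOutput and videoOutput are data outputs already attached to your capture session, and the cast covers the older [AnyHashable : Any]? return type shown above:
import AVFoundation

// audioOutput / videoOutput are assumed to be the AVCaptureAudioDataOutput and
// AVCaptureVideoDataOutput instances already attached to the AVCaptureSession.
func makeWriterInputs(audioOutput: AVCaptureAudioDataOutput,
                      videoOutput: AVCaptureVideoDataOutput) -> (audio: AVAssetWriterInput, video: AVAssetWriterInput) {
    // Ask each output which settings the current device can actually encode.
    let audioSettings = audioOutput.recommendedAudioSettingsForAssetWriter(writingTo: .mp4) as? [String: Any]
    let videoSettings = videoOutput.recommendedVideoSettingsForAssetWriter(writingTo: .mp4)

    let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)
    let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)

    // Needed when appending buffers straight from the capture callbacks.
    audioInput.expectsMediaDataInRealTime = true
    videoInput.expectsMediaDataInRealTime = true
    return (audioInput, videoInput)
}
Using these device-recommended dictionaries instead of hard-coded values avoids the "encoding parameters are not supported" failure on hardware that can't handle the fixed settings.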
I am trying to record voice with AVAudioRecorder. It works fine if screen sharing is not enabled, but when I share my device screen with Zoom or any other app, the AVAudioSession does not become active.
Here is the code I added for audio recording:
UIApplication.shared.beginReceivingRemoteControlEvents()
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, options: .defaultToSpeaker)
    try session.setActive(true)
    let settings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 44100,
        AVNumberOfChannelsKey: 2,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    audioRecorder = try AVAudioRecorder(url: getFileUrl(), settings: settings)
    audioRecorder.delegate = self
    audioRecorder.isMeteringEnabled = true
    audioRecorder.prepareToRecord()
    self.nextBtn.isHidden = true
} catch let error {
    print("Error \(error)")
}
When I hit the record button, it shows me the error NSOSStatusErrorDomain Code=561017449 "Session activation failed".
Here is an attached video:
https://share.icloud.com/photos/0a09o5DCNip6Rx_GnTpht7K3A
I don't have the reputation to comment or I would (almost there, lol!). Have you tried AVAudioSession.CategoryOptions.overrideMutedMicrophoneInterruption?
Edit
The more I looked into this, the more it seems that if Zoom is using the hardware, the iPhone won't be able to record that stream. I think that's the idea behind AVAudioSession.sharedInstance() being a singleton.
From the docs:
Type Property
overrideMutedMicrophoneInterruption: An option that indicates whether the system interrupts the audio session when it mutes the built-in microphone.
Declaration
static var overrideMutedMicrophoneInterruption: AVAudioSession.CategoryOptions { get }
Discussion
Some devices include a privacy feature that mutes the built-in
microphone at the hardware level in certain conditions, such as when
you close the Smart Folio cover of an iPad. When this occurs, the
system interrupts the audio session that’s capturing input from the
microphone. Attempting to start audio input after the system mutes the
microphone results in an error. If your app uses an audio session
category that supports input and output, such as playAndRecord, you
can set this option to disable the default behavior and continue using
the session. Disabling the default behavior may be useful to allow
your app to continue playback when recording or monitoring a muted
microphone doesn’t lead to a poor user experience. When you set this
option, playback continues as normal, and the microphone hardware
produces sample buffers, but with values of 0.
Important
Attempting to use this option with a session category that doesn’t
support audio input results in an error.
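If it helps, here is a minimal sketch of what trying that option could look like (the helper name is just for illustration; it assumes a category that supports input, such as .playAndRecord, and the option is only available on iOS 14.5 and later):
import AVFoundation

func configureSessionForRecording() throws {
    let session = AVAudioSession.sharedInstance()
    if #available(iOS 14.5, *) {
        // Keep the session active even if the hardware mutes the built-in mic;
        // input buffers will simply contain zeros while it is muted.
        try session.setCategory(.playAndRecord,
                                options: [.defaultToSpeaker, .overrideMutedMicrophoneInterruption])
    } else {
        try session.setCategory(.playAndRecord, options: .defaultToSpeaker)
    }
    try session.setActive(true)
}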
I'm currently trying to use AVSpeechSynthesizer to speak text from within an iOS Safari extension:
let synthesizer = AVSpeechSynthesizer()
...
let utterance = AVSpeechUtterance(string: self.text)
utterance.rate = 0.55;
self.synthesizer.speak(utterance)
On a simulator this works fine. However, on a physical device, I get the following error (even when the device is unmuted/volume-up):
NSURLConnection finished with error - code -1002
NSURLConnection finished with error - code -1002
NSURLConnection finished with error - code -1002
[AXTTSCommon] Failure starting audio queue alp!
[AXTTSCommon] Run loop timed out waiting for free audio buffer
[AXTTSCommon] _BeginSpeaking: speech cancelled error: Error Domain=TTSErrorDomain Code=-4001 "(null)"
[AXTTSCommon] _BeginSpeaking: couldn't begin playback
I have looked through quite a few SO and Apple Dev Forums threads and have tried many of the proposed solutions with no luck. Here are the things I've tried:
Linking AVFAudio.framework and AVFoundation.framework to the extension.
Starting an AVAudioSession prior to playing the utterance:
do {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .default, options: [.mixWithOthers, .allowAirPlay])
    try session.setActive(true, options: .notifyOthersOnDeactivation)
} catch let error {
    print("Error starting audio: \(error.localizedDescription)")
}
This actually results in another error being thrown right before the same errors above:
Error starting audio: The operation couldn’t be completed. (OSStatus error 2003329396.)
Playing a plain mp3 audio file:
guard let url = Bundle.main.url(forResource: "sample", withExtension: "mp3") else {
    print("Couldn't find file")
    return
}
do {
    self.player = try AVAudioPlayer(contentsOf: url)
    self.player.play()
    print("**playing sound")
} catch let error as NSError {
    print("Error playing sound: \(error.localizedDescription)")
}
This prints the following:
**playing sound
[aqsrv] AQServer.cpp:72 Exception caught in AudioQueueInternalNotifyRunning - error -66671
Enabling Audio, AirPlay, and Picture in Picture in Background Modes for the main target app (not available for the extension).
Any help would be appreciated.
EDIT:
The solution below gets rejected due to a validation error when submitting to App Store Connect.
I filed a Technical Support Incident with Apple, and this was their response:
Safari extensions are very short-lived, hence not fit for audio playback or speech synthesis. Not being able to validate an app extension in Xcode with a manually-added plist entry for background audio is the designed behavior. The general recommendation is to synthesize speech using JavaScript in conjunction with the Web Speech API.
TLDR: Use the Web Speech API for text-to-speech in Safari extensions, not AVSpeechSynthesizer.
Original answer:
Adding the following to the extension's Info.plist allowed the audio to play as expected:
<dict>
...
<key>UIBackgroundModes</key>
<array>
<string>audio</string>
</array>
...
</dict>
Interestingly, it actually shows the same errors in the console as before, but it does play the audio.
I tried iOS 13.0 and iOS 13.1 and it is still not working. I tried both AVAggregateAssetDownloadTask and AVAssetDownloadURLSession, but neither of them works. No delegate was called to report an error or completion, and the downloaded cache was only 25 KB, which was not the right size.
The error is:
Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedDescription=The operation could not be completed, _NSURLErrorFailingURLSessionTaskErrorKey=BackgroundAVAssetDownloadTask <AFDCA3CC-FA49-488B-AB16-C74425345EE4>.<1>, _NSURLErrorRelatedURLSessionTaskErrorKey=(
"BackgroundAVAssetDownloadTask <AFDCA3CC-FA49-488B-AB16-C74425345EE4>.<1>"
), NSLocalizedFailureReason=An unknown error occurred (-16654)}
I found out that on iOS 13+, AVAssetDownloadURLSession can only download HLS streams that use a master playlist structure whose #EXT-X-STREAM-INF entries contain the CODECS attribute.
I have no idea if this is a bug or an intentional restriction.
(An m3u8 playlist without a CODECS attribute can still be played with AVFoundation, but it can't be downloaded with AVAssetDownloadURLSession.)
Anyway, the solution is:
If you have an HLS master playlist:
Add the CODECS attribute to your #EXT-X-STREAM-INF line in the m3u8, e.g.:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:BANDWIDTH=63701,CODECS="mp4a.40.34"
playlist.m3u8
If you don't have an HLS master playlist yet:
You have to make a master playlist even if you're not supporting adaptive streaming.
The master playlist is the only m3u8 that can contain #EXT-X-STREAM-INF, and hence the CODECS attribute.
So, I found out that AVAssetDownloadTask had a bug in calling its delegates on iOS 13 (13.1, 13.2, 13.3). In iOS 13.4.1, Apple has fixed this, and the delegates are now called after setting the delegate and starting the task. Below is what I used to start downloading the m3u8 file from the server and save it as an asset to play offline later.
func downloadVideo(_ url: URL) {
    let configuration = URLSessionConfiguration.background(withIdentifier: currentFileName)
    let downloadSession = AVAssetDownloadURLSession(configuration: configuration,
                                                    assetDownloadDelegate: self,
                                                    delegateQueue: OperationQueue.main)
    // HLS asset URL
    let asset = AVURLAsset(url: url)
    // Create a new AVAssetDownloadTask for the desired asset
    let downloadTask = downloadSession.makeAssetDownloadTask(asset: asset,
                                                             assetTitle: currentFileName,
                                                             assetArtworkData: nil,
                                                             options: nil)
    // Start the task and begin the download
    downloadTask?.resume()
}
I tried this on iOS 12 and iOS 13.4.1, and it works as expected. This was also discussed on the Apple Developer Forums here. Hope this helps someone.
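For reference, here is a rough sketch of the delegate callbacks that should fire once the task runs. DownloadManager is a placeholder for whatever class you pass as the assetDownloadDelegate above, and storing the relative path in UserDefaults is just one way to keep track of the downloaded .movpkg:
import AVFoundation

extension DownloadManager: AVAssetDownloadDelegate {

    // Called when the asset finishes downloading; `location` is a relative path
    // to the .movpkg bundle inside the app's home directory. Persist it so the
    // asset can be played back offline later.
    func urlSession(_ session: URLSession,
                    assetDownloadTask: AVAssetDownloadTask,
                    didFinishDownloadingTo location: URL) {
        UserDefaults.standard.set(location.relativePath, forKey: currentFileName)
    }

    // Called for both success and failure; a non-nil error means the download
    // did not complete and any partial data should be cleaned up.
    func urlSession(_ session: URLSession,
                    task: URLSessionTask,
                    didCompleteWithError error: Error?) {
        if let error = error {
            print("Download failed: \(error)")
        }
    }
}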
My app has 360-degree video playback, and I am using GoogleVR's GVRRendererView class for it. I am trying to play a high-quality 360-degree video from the server, but video streaming is very slow and I get the error message below in the Xcode console.
<AppName> [Symptoms] {
"transportType" : "HTTP Progressive Download",
"mediaType" : "HTTP Progressive Download",
"BundleID" : "AppID",
"name" : "MEDIA_PLAYBACK_STALL",
"interfaceType" : "WiredEthernet"
}
How to resolve it?
I have been trying to use the Microsoft SpeechSDK Speech Recognition backend to work with WAV files created using AVAudioRecorder and noticed that the DataRecognitionClient doesn't seem to return any errors or partial/final responses.
If, however, I export that same WAV file from Audacity as WAV (Microsoft) signed 16-bit PCM, it works fine.
Repro:
On an Apple device, use AVAudioRecorder to create an audio.wav file (with less than 2 minutes of conversation) using the following format settings:
let recordSettings: [String: AnyObject] = [
    AVFormatIDKey: NSNumber(int: Int32(kAudioFormatLinearPCM)),
    AVNumberOfChannelsKey: NSNumber(int: 1),
    AVSampleRateKey: NSNumber(float: 16000.0),
    AVLinearPCMBitDepthKey: NSNumber(int: 16),
    AVLinearPCMIsFloatKey: false,
    AVLinearPCMIsBigEndianKey: false]
Download and open the https://github.com/microsoft/cognitive-speech-stt-ios example project.
Open the SpeechRecognitionServerExample project and add the previously recorded audio.wav file (in step 1) into the SpeechRecognitionServerExample/Assets group.
Open ViewController.mm and go to the longWaveFile function and replace the file name with #"audio.wav"
Run the example and notice how no error is returned and nothing is recognized either.
Analysis:
The only apparent difference from the sample WAV files provided in the SpeechSDK project (batman.wav and whatstheweatherlike.wav) is that the WAV files created by AVAudioRecorder add a "FLLR" sub-chunk, used for page alignment, between the "fmt" and "data" sub-chunks in the file header.
(Image: RIFF WAV header layout, Apple vs. Microsoft)
While this is non-standard, it is still specification-compliant, and it seems it might not be accounted for, preventing speech recognition from occurring. Are there any suggested workarounds for this?
Update:
So I went ahead and created a new audio recording class which uses Audio Queues and does exactly the same thing as AVAudioRecorder, except that it removes the "FLLR" sub-chunk. This can be done upon the creation of the audio file by setting the AudioFileFlags.DontPageAlignAudioData flag.
AudioFileCreateWithURL(
    filePathUrl,
    kAudioFileWAVEType,
    &dataFormat,
    [.DontPageAlignAudioData, .EraseFile],
    &audioFile)
Doing this causes speech recognition to start working. Does anyone know if there is a way to tell AVAudioRecorder not to page-align the audio data? I read through the Apple documentation and couldn't find any such setting or option. I really don't want to maintain something that duplicates existing functionality just because of this.