Can I get AudioKit appendAsynchronously to create a file with a name other than '.caf', and to output in .mp4 format?

I have the following AudioKit usage scenario under iOS:
I record a sound from the microphone.
I save the sound asynchronously to the tmp directory as an .mp4 file.
I record a second sound from the microphone.
I attempt to (a) retrieve the previously saved .mp4 file, and (b) asynchronously append to it the second microphone sound.
My problem is that while this process does succeed, the appended file is saved with the filename '.caf' (literally just that, with nothing before the extension).
Ideally, I would like some way of forcing the appended file to have a more manageable name. It would also be nice if the appended file could be in an .mp4 format.
The following code suffices to reproduce the issue:
import SwiftUI
import AudioKit
struct ContentView: View {
@State private var mic: AKMicrophone!
@State private var micBooster: AKBooster!
@State private var micRecorder: AKNodeRecorder!
var body: some View {
Button(action: {
self.recordFirstSound()
}) {
Text("Start")
}
}
func recordFirstSound() {
AKAudioFile.cleanTempDirectory()
AKSettings.audioInputEnabled = true
AKSettings.defaultToSpeaker = true
do {
try AKSettings.setSession(category: .multiRoute)
} catch {
print("Failed to set session category to .playAndRecord");
}
AKSettings.bufferLength = .medium
mic = AKMicrophone()
micBooster = AKBooster()
mic >>> micBooster
micBooster.gain = 0.0
do {
micRecorder = try AKNodeRecorder(node: mic)
} catch {
print("Failed to initialise micRecorder")
}
AudioKit.output = micBooster
do {
try AudioKit.start()
} catch {
print("Failed to start AudioKit")
}
do {
try micRecorder.record()
} catch {
print("Failed to start micRecorder")
}
DispatchQueue.main.asyncAfter(deadline: .now() + 5, execute: {
self.writeRecordingToTmpDir()
})
}
func writeRecordingToTmpDir() {
micRecorder.stop()
if let recorderAudioFile = micRecorder.audioFile {
recorderAudioFile.exportAsynchronously(name: "recording",
baseDir: .temp,
exportFormat: .mp4, callback: callbackAfterInitialExport)
} else {
print("Problem accessing micRecorder audioFile")
}
DispatchQueue.main.asyncAfter(deadline: .now() + 5, execute: {
self.recordSecondSound()
})
}
func recordSecondSound() {
do {
try micRecorder.reset()
} catch {
print("Failed to reset micRecorder")
}
do {
try micRecorder.record()
} catch {
print("Failed to re-start micRecorder")
}
DispatchQueue.main.asyncAfter(deadline: .now() + 5, execute: {
self.appendSecondSoundFileToFirstAsynchronously()
})
}
func appendSecondSoundFileToFirstAsynchronously() {
micRecorder.stop()
do {
let existingMp4File = try AKAudioFile(readFileName: "recording.mp4", baseDir: .temp)
if let micRecorderAudioFile = micRecorder.audioFile {
existingMp4File.appendAsynchronously(file: micRecorderAudioFile, completionHandler: callbackAfterAsynchronousAppend)
}
} catch {
print("Failed to read or append recording.mp4")
}
}
func callbackAfterInitialExport(processedFile: AKAudioFile?, error: NSError?) {
if let file = processedFile {
print("Asynchronous export of \(file.fileNamePlusExtension) succeeded")
print("Exported file duration: \(file.duration) seconds")
} else {
print("Asynchronous export failed")
}
}
func callbackAfterAsynchronousAppend(processedFile: AKAudioFile?, error: NSError?) {
if let file = processedFile {
print("Asynchronous append succeeded. New file is \(file.fileNamePlusExtension)")
print("Duration of new file: \(file.duration) seconds")
} else {
print("Asynchronous append failed")
}
}
}

I wasn't populating the appropriate parameters in the appendAsynchronously call. The call should be:
if let micRecorderAudioFile = micRecorder.audioFile {
existingMp4File.appendAsynchronously(file: micRecorderAudioFile, baseDir: .temp, name: "recording", completionHandler: callbackAfterAsynchronousAppend)
}
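For the .mp4 part (a sketch of my own, not part of the fix above): appendAsynchronously still writes a CAF, but the appended result can be re-exported with the same exportAsynchronously call used for the initial export. The names "appended" and "combined" below are arbitrary:

func appendSecondSoundFileToFirstAsynchronously() {
    micRecorder.stop()
    do {
        let existingMp4File = try AKAudioFile(readFileName: "recording.mp4", baseDir: .temp)
        if let micRecorderAudioFile = micRecorder.audioFile {
            existingMp4File.appendAsynchronously(file: micRecorderAudioFile,
                                                 baseDir: .temp,
                                                 name: "appended") { appendedFile, _ in
                // Convert the appended CAF to .mp4 using the export API from the question.
                appendedFile?.exportAsynchronously(name: "combined",
                                                   baseDir: .temp,
                                                   exportFormat: .mp4,
                                                   callback: self.callbackAfterAsynchronousAppend)
            }
        }
    } catch {
        print("Failed to read or append recording.mp4")
    }
}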

Related

iOS Video recording with AVFoundation

I need to open the camera and record continuously. I also have a timer, and on each timer interval I should save a separate video without stopping the recording process.
So I use AVFoundation, and in the timer action I call two functions (stopRecording, startRecording).
The timer interval is 4 seconds.
When I call the stopRecording method, the didFinishRecordingToOutputFileAtURL delegate method does not return the recorded file immediately; it returns about 3 seconds later, so I lose every second recording.
Is there any other way to organize this kind of process, or how can I fix this issue?
Thanks
func start(completion: (Error?, Bool) -> ()) {
setupSession { success in
if !success {
print("Error!")
return
}
setupPreview()
startSession()
let timeInterval: TimeInterval = 4
timer = Timer.scheduledTimer(timeInterval: timeInterval, target: self, selector: #selector(timerAction), userInfo: nil, repeats: true)
}
}
func setupSession(completion: (Bool) -> ()) {
captureSession.beginConfiguration()
guard let camera = AVCaptureDevice.default(for: .video) else {
completion(false)
return
}
guard let mic = AVCaptureDevice.default(for: .audio) else {
completion(false)
return
}
do {
let videoInput = try AVCaptureDeviceInput(device: camera)
let audioInput = try AVCaptureDeviceInput(device: mic)
for input in [videoInput, audioInput] {
if captureSession.canAddInput(input) {
captureSession.addInput(input)
}
}
activeInput = videoInput
} catch {
print("Error setting device input: \(error)")
completion(false)
return
}
captureSession.addOutput(movieOutput)
captureSession.commitConfiguration()
completion(true)
}
func setupPreview() {
previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
previewLayer.frame = containerView.bounds
previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
containerView.layer.addSublayer(previewLayer)
}
func startSession() {
if !captureSession.isRunning {
DispatchQueue.global(qos: .default).async { [weak self] in
self?.captureSession.startRunning()
}
}
}
func stopSession() {
if captureSession.isRunning {
DispatchQueue.global(qos: .default).async() { [weak self] in
self?.captureSession.stopRunning()
}
}
}
public func captureMovie() {
guard let connection = movieOutput.connection(with: .video) else {
return
}
if connection.isVideoStabilizationSupported {
connection.preferredVideoStabilizationMode = .auto
}
let device = activeInput.device
if device.isSmoothAutoFocusEnabled {
do {
try device.lockForConfiguration()
device.isSmoothAutoFocusEnabled = true
device.unlockForConfiguration()
} catch {
print("error: \(error)")
}
}
guard let outUrl = tempURL else { return }
movieOutput.startRecording(to: outUrl, recordingDelegate: self)
}
public func stopRecording() {
if movieOutput.isRecording {
movieOutput.stopRecording()
}
}
func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) {
print(Date.now, " ", "file")
if let error = error {
print("error: \(error.localizedDescription)")
} else {
// Save the source
}
}
@objc private func timerAction() {
print(Date.now, " timerAction")
stopRecording()
captureMovie()
}
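One way around the delegate latency (a sketch under assumptions, not a drop-in fix): avoid AVCaptureMovieFileOutput's stop/start cycle altogether, capture raw frames with AVCaptureVideoDataOutput, and rotate AVAssetWriter files yourself, so the segment boundary never drops frames. Session setup and the video-data-output wiring are omitted, and the class name and video settings below are made up:

import AVFoundation

final class SegmentedRecorder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private var writer: AVAssetWriter?
    private var writerInput: AVAssetWriterInput?
    private var segmentStart = CMTime.invalid
    private let segmentDuration = CMTime(seconds: 4, preferredTimescale: 600)

    // Called for every captured frame; nothing is lost at segment boundaries.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        if writer == nil { startSegment(at: pts) }
        if segmentStart.isValid, pts - segmentStart >= segmentDuration {
            finishSegment()          // close the old file in the background...
            startSegment(at: pts)    // ...and start the new one immediately
        }
        if let input = writerInput, input.isReadyForMoreMediaData, !input.append(sampleBuffer) {
            print("append failed:", writer?.error ?? "unknown error")
        }
    }

    private func startSegment(at time: CMTime) {
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString + ".mp4")
        guard let newWriter = try? AVAssetWriter(outputURL: url, fileType: .mp4) else { return }
        let settings: [String: Any] = [AVVideoCodecKey: AVVideoCodecType.h264,
                                       AVVideoWidthKey: 1280,   // made-up dimensions
                                       AVVideoHeightKey: 720]
        let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
        input.expectsMediaDataInRealTime = true
        newWriter.add(input)
        guard newWriter.startWriting() else { return }
        newWriter.startSession(atSourceTime: time)
        (writer, writerInput, segmentStart) = (newWriter, input, time)
    }

    private func finishSegment() {
        writerInput?.markAsFinished()
        writer?.finishWriting { /* the finished segment file is ready to save here */ }
        (writer, writerInput) = (nil, nil)
    }
}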

Swift iOS - How can I obtain permission to upload a document when testing on a device?

I am integrating a document picker into my iOS app. Selected files will be uploaded using Firebase Storage. I am getting this error when attempting to upload the file while testing the app on my iPhone:
Error uploading file: The file “file.pdf” couldn’t be opened because you don’t have permission to view it.
On the other hand, I am not getting this or any other error when testing in the simulator, and the error occurs whether I select a file from iCloud or from local storage.
Here is the code for picker:
struct DocumentPicker: UIViewControllerRepresentable {
@Binding var filePath: URL?
func makeCoordinator() -> DocumentPicker.Coordinator {
return DocumentPicker.Coordinator(parent1: self)
}
func makeUIViewController(context: UIViewControllerRepresentableContext<DocumentPicker>) -> UIDocumentPickerViewController {
let picker = UIDocumentPickerViewController(documentTypes: ["public.item"], in: .open)
picker.allowsMultipleSelection = false
picker.delegate = context.coordinator
return picker
}
func updateUIViewController(_ uiViewController: DocumentPicker.UIViewControllerType, context: UIViewControllerRepresentableContext<DocumentPicker>) {
}
class Coordinator: NSObject, UIDocumentPickerDelegate {
var parent: DocumentPicker
init(parent1: DocumentPicker){
parent = parent1
}
func documentPicker(_ controller: UIDocumentPickerViewController, didPickDocumentsAt urls: [URL]) {
// Here is where I get the path for the file to be uploaded
parent.filePath = urls[0]
print(urls[0].absoluteString)
}
}
}
Here is the upload function, and where the error is caught:
do {
let fileName = (PickedDocument?.lastPathComponent)!
let fileData = try Data(contentsOf: PickedDocument!)
let StorageRef = Storage.storage().reference().child(uid + "/" + doc.documentID + "/" + fileName)
StorageRef.putData(fileData, metadata: nil) { (metadata, error) in
guard let metadata = metadata else {
return
}
StorageRef.downloadURL { (url, error) in
guard let urlStr = url else{
completion(nil)
return
}
let urlFinal = (urlStr.absoluteString)
ShortenUrl(from: urlFinal) { result in
if (result != "") {
print("Short URL:")
print(result)
completion(result)
}
else {
completion(nil)
return
}
}
}
}
}
catch {
print("Error uploading file: " + error.localizedDescription)
self.IsLoading = false
return
}
}
Obviously there is some sort of permission that I need to request in order to be able to access and upload files on a physical iPhone, but I am not sure how to request it. I have tried adding the following to Info.plist, but it still didn't work.
<key>NSDocumentsFolderUsageDescription</key>
<string>To access device files for upload upon user request</string>
I figured it out based on an answer I found here.
What I was doing: I was storing a reference to the local file, which is a security-scoped resource; hence the permission error was thrown.
What I did to get around this: at the moment the user picks the file, I use url.startAccessingSecurityScopedResource() to start accessing its content, and I make a copy of the file's data instead of holding a reference to it.
Here is the updated code for the picker didPickDocumentsAt:
func documentPicker(_ controller: UIDocumentPickerViewController, didPickDocumentsAt urls: [URL]) {
guard controller.documentPickerMode == .open, let url = urls.first, url.startAccessingSecurityScopedResource() else { return }
defer {
DispatchQueue.main.async {
url.stopAccessingSecurityScopedResource()
}
}
do {
let document = try Data(contentsOf: url.absoluteURL)
parent.file = document
parent.fileName = url.lastPathComponent
print("File Selected: " + url.path)
}
catch {
print("Error selecting file: " + error.localizedDescription)
}
}
And then my Storage upload function is:
func UploadFile(doc: DocumentReference, completion: @escaping (Bool?, String?) -> ()) {
do {
let StorageReference = Storage.storage().reference().child(self.User + "/" + doc.documentID + "/" + fileName!)
StorageReference.putData(file!, metadata: nil) { (metadata, error) in
if let error = error {
self.alertMessage = error.localizedDescription
self.showingAlert = true
completion(false, nil)
}
else {
StorageReference.downloadURL { (url, error) in
if let error = error {
self.alertMessage = error.localizedDescription
self.showingAlert = true
completion(false, nil)
}
else {
ShortenUrl(from: url!.absoluteString) { result in
if (result != "") {
completion(true, result)
}
else {
completion(false, nil)
}
}
}
}
}
}
}
catch {
self.alertMessage = "Error uploading file: " + error.localizedDescription
self.showingAlert = true
completion(false, nil)
}
}
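A variant of the same fix, sketched here as my own illustration (not part of the answer above): instead of holding the file's bytes in memory, copy the file into the app's temporary directory while the security-scoped access is still active, and upload from that local copy. This assumes the original picker's URL binding (parent.filePath):

func documentPicker(_ controller: UIDocumentPickerViewController, didPickDocumentsAt urls: [URL]) {
    guard let url = urls.first, url.startAccessingSecurityScopedResource() else { return }
    defer { url.stopAccessingSecurityScopedResource() }
    let localCopy = FileManager.default.temporaryDirectory
        .appendingPathComponent(url.lastPathComponent)
    do {
        // Replace any stale copy from an earlier pick, then copy while access is granted.
        try? FileManager.default.removeItem(at: localCopy)
        try FileManager.default.copyItem(at: url, to: localCopy)
        parent.filePath = localCopy
    } catch {
        print("Error copying file: " + error.localizedDescription)
    }
}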

Is it possible to run multiple instances of SFSpeechRecognizer?

I've implemented Apple's SFSpeechRecognizer to convert speech to text. I have multiple audio recordings, so I'm creating multiple SFSpeechRecognizer instances so that all of them are converted in parallel, and I've also used a DispatchGroup so that I get a completion callback when the last one finishes. But I keep getting the error kAFAssistantErrorDomain error 209.
private var dispatchGroup = DispatchGroup()
allURLs.forEach { (url) in
DispatchQueue.main.async {
thisSelf.dispatchGroup.enter()
let request = SFSpeechURLRecognitionRequest(url: url)
guard let recognizer = SFSpeechRecognizer() else {
thisSelf.dispatchGroup.leave()
completion(.failure(thisSelf.speechReconInitError))
return
}
request.shouldReportPartialResults = false
if !recognizer.isAvailable {
thisSelf.dispatchGroup.leave()
return
}
recognizer.recognitionTask(with: request) { [weak thisSelf] (result, error) in
guard let reconSelf = thisSelf else { return }
if let error = error {
completion(.failure(error))
if let nsError = error as NSError? {
print("Error while transcripting audio: \(url.path), Code, Domain, Description: \(nsError.code), \(nsError.domain), \(nsError.localizedDescription)")
} else {
print("Error while transcripting audio: \(url.path), Error: \(error.localizedDescription)")
}
reconSelf.dispatchGroup.leave()
} else if let transcriptionResult = result, transcriptionResult.isFinal {
transcribedText += transcriptionResult.bestTranscription.formattedString
reconSelf.dispatchGroup.leave()
}
}
thisSelf.dispatchGroup.notify(queue: .main) {
if !transcribedText.isEmpty {
completion(.success(transcribedText))
}
}
}
}
If I transcribe only one audio file at a time, I don't get any error.
TIA
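One workaround worth trying (a sketch under the assumption that concurrent recognition tasks are what triggers error 209; Apple does not document the limit): run the recognitions sequentially, starting the next file only once the previous task has delivered its final result or an error:

import Speech

// Transcribe the URLs one at a time, accumulating the text.
func transcribeSequentially(_ urls: [URL],
                            accumulated: String = "",
                            completion: @escaping (String) -> Void) {
    guard let url = urls.first else {
        completion(accumulated)  // all files processed
        return
    }
    let remaining = Array(urls.dropFirst())
    guard let recognizer = SFSpeechRecognizer(), recognizer.isAvailable else {
        // Skip this file if no recognizer is available, and keep going.
        transcribeSequentially(remaining, accumulated: accumulated, completion: completion)
        return
    }
    let request = SFSpeechURLRecognitionRequest(url: url)
    request.shouldReportPartialResults = false
    recognizer.recognitionTask(with: request) { result, error in
        var text = accumulated
        if let result = result, result.isFinal {
            text += result.bestTranscription.formattedString
        }
        // Move on when this task is finished, whether it succeeded or errored.
        if result?.isFinal == true || error != nil {
            transcribeSequentially(remaining, accumulated: text, completion: completion)
        }
    }
}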

GPUImage3 MovieOutput outputs broken frames

I just saved the output video to Photos.
I made some revisions to the SimpleVideoRecorder example; you can just copy/paste the following code:
import UIKit
import GPUImage
import AVFoundation
import Photos
class ViewController: UIViewController {
@IBOutlet weak var renderView: RenderView!
var camera:Camera!
var filter:SaturationAdjustment!
var isRecording = false
var movieOutput:MovieOutput? = nil
var fileURL: URL!
override func viewDidLoad() {
super.viewDidLoad()
do {
camera = try Camera(sessionPreset:.vga640x480)
camera.runBenchmark = true
filter = SaturationAdjustment()
camera --> filter --> renderView
camera.startCapture()
} catch {
fatalError("Could not initialize rendering pipeline: \(error)")
}
}
override func viewDidLayoutSubviews() {
super.viewDidLayoutSubviews()
}
@IBAction func capture(_ sender: AnyObject) {
if (!isRecording) {
do {
self.isRecording = true
let documentsDir = try FileManager.default.url(for:.documentDirectory, in:.userDomainMask, appropriateFor:nil, create:true)
fileURL = URL(string:"test.mp4", relativeTo:documentsDir)!
do {
try FileManager.default.removeItem(at:fileURL)
} catch {
}
movieOutput = try MovieOutput(URL:fileURL, size:Size(width:480, height:640), liveVideo:true)
// camera.audioEncodingTarget = movieOutput
filter --> movieOutput!
movieOutput!.startRecording()
DispatchQueue.main.async {
// Label not updating on the main thread, for some reason, so dispatching slightly after this
(sender as! UIButton).titleLabel!.text = "Stop"
}
} catch {
fatalError("Couldn't initialize movie, error: \(error)")
}
} else {
movieOutput?.finishRecording{
self.isRecording = false
DispatchQueue.main.async {
(sender as! UIButton).titleLabel!.text = "Record"
}
// self.camera.audioEncodingTarget = nil
self.movieOutput = nil
PHPhotoLibrary.shared().performChanges({() -> Void in
PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: self.fileURL)
}, completionHandler: { _, error -> Void in
do {
// delete the caches.
try FileManager.default.removeItem(at: self.fileURL)
} catch {
print(error)
}
})
}
}
}
}
Then you can easily tell that almost every single frame is broken. This happens on my iPhone 6S with iOS 13.3.1, and also when I cast the texture from the filter to a CVPixelBuffer.
I guess there may be something wrong in BasicOperation. I have also posted the same issue on BradLarson's GitHub.

AudioKit 4.3: record audio, render it offline, then play it

I'm trying to record audio, then save offline with AudioKit.renderToFile, then use AKPlayer to play the original recorded audio file.
import UIKit
import AudioKit
class ViewController: UIViewController {
private var recordUrl:URL!
private var isRecording:Bool = false
public var player:AKPlayer!
private let format = AVAudioFormat(commonFormat: .pcmFormatFloat64, sampleRate: 44100, channels: 2, interleaved: true)!
private var amplitudeTracker:AKAmplitudeTracker!
private var boostedMic:AKBooster!
private var mic:AKMicrophone!
private var micMixer:AKMixer!
private var silence:AKBooster!
public var recorder: AKNodeRecorder!
@IBOutlet weak var recordButton: UIButton!
override func viewDidLoad() {
super.viewDidLoad()
//self.recordUrl = Bundle.main.url(forResource: "sound", withExtension: "caf")
//self.startAudioPlayback(url: self.recordUrl!)
self.recordUrl = self.urlForDocument("record.caf")
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
// Dispose of any resources that can be recreated.
}
func requestMic(completion: @escaping () -> Void) {
AVAudioSession.sharedInstance().requestRecordPermission({ (granted: Bool) in
if granted { completion()}
})
}
public func switchToMicrophone() {
stopEngine()
do {
try AKSettings.setSession(category: .playAndRecord, with: .allowBluetoothA2DP)
} catch {
AKLog("Could not set session category.")
}
mic = AKMicrophone()
micMixer = AKMixer(mic)
boostedMic = AKBooster(micMixer, gain: 5)
amplitudeTracker = AKAmplitudeTracker(boostedMic)
silence = AKBooster(amplitudeTracker, gain: 0)
AudioKit.output = silence
startEngine()
}
@IBAction func startStopRecording(_ sender: Any) {
self.isRecording = !self.isRecording
if self.isRecording {
self.startRecording()
self.recordButton.setTitle("Stop Recording", for: .normal)
} else {
self.stopRecording()
self.recordButton.setTitle("Start Recording", for: .normal)
}
}
func startRecording() {
self.requestMic() {
self.switchToMicrophone()
if let url = self.recordUrl {
do {
let audioFile = try AKAudioFile(forWriting: url, settings: self.format.settings, commonFormat: .pcmFormatFloat64, interleaved: true)
self.recorder = try AKNodeRecorder(node: self.micMixer, file: audioFile)
try self.recorder.reset()
try self.recorder.record()
} catch {
print("error setting up recording", error)
}
}
}
}
func stopRecording() {
recorder.stop()
startAudioPlayback(url: self.recordUrl)
}
@IBAction func saveToDisk(_ sender: Any) {
if let source = self.player, let saveUrl = self.urlForDocument("pitchAudio.caf") {
do {
source.stop()
let audioFile = try AKAudioFile(forWriting: saveUrl, settings: self.format.settings, commonFormat: .pcmFormatFloat64, interleaved: true)
try AudioKit.renderToFile(audioFile, duration: source.duration, prerender: {
source.play()
})
print("audio file rendered")
} catch {
print("error rendering", error)
}
// PROBLEM STARTS HERE //
self.startAudioPlayback(url: self.recordUrl)
}
}
public func startAudioPlayback(url:URL) {
print("loading playback audio", url)
self.stopEngine()
do {
try AKSettings.setSession(category: .playback)
player = AKPlayer.init()
try player.load(url: url)
}
catch {
print("error setting up audio playback", error)
return
}
player.prepare()
player.isLooping = true
self.setPitch(pitch: self.getPitch(), saveValue: false)
AudioKit.output = player
startEngine()
startPlayer()
}
public func startPlayer() {
if AudioKit.engine.isRunning { self.player.play() }
else { print("audio engine not running, can't play") }
}
public func startEngine() {
if !AudioKit.engine.isRunning {
print("starting engine")
do { try AudioKit.start() }
catch {
print("error starting audio", error)
}
}
}
public func stopEngine(){
if AudioKit.engine.isRunning {
print("stopping engine")
do {
try AudioKit.stop()
}
catch {
print("error stopping audio", error)
}
}
//playback doesn't work without this?
mic = nil
}
@IBAction func changePitch(_ sender: UISlider) {
self.setPitch(pitch:Double(sender.value))
}
public func getPitch() -> Double {
return UserDefaults.standard.double(forKey: "pitchFactor")
}
public func setPitch(pitch:Double, saveValue:Bool = true) {
player.pitch = pitch * 1000.0
if saveValue {
UserDefaults.standard.set(pitch, forKey: "pitchFactor")
UserDefaults.standard.synchronize()
}
}
func urlForDocument(_ named:String) -> URL? {
let path = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0] as String
let url = NSURL(fileURLWithPath: path)
if let pathComponent = url.appendingPathComponent(named) {
return pathComponent
}
return nil
}
}
The order of calls is switchToMicrophone, startRecording, stopRecording, startAudioPlayback, saveToDisk, and then startAudioPlayback again.
Please see the github repo for full code in ViewController.swift
After the renderToFile function, when restarting AudioKit for the player, the following errors occur:
[mcmx] 338: input bus 0 sample rate is 0
[avae] AVAEInternal.h:103:_AVAE_CheckNoErr: [AVAudioEngineGraph.mm:1265:Initialize: (err = AUGraphParser::InitializeActiveNodesInOutputChain(ThisGraph, kOutputChainOptimizedTraversal, *GetOutputNode(), isOutputChainActive)): error -10875
[avae] AVAudioEngine.mm:149:-[AVAudioEngine prepare]: Engine#0x1c4008ae0: could not initialize, error = -10875
[mcmx] 338: input bus 0 sample rate is 0
[avae] AVAEInternal.h:103:_AVAE_CheckNoErr: [AVAudioEngineGraph.mm:1265:Initialize: (err = AUGraphParser::InitializeActiveNodesInOutputChain(ThisGraph, kOutputChainOptimizedTraversal, *GetOutputNode(), isOutputChainActive)): error -10875
error starting audio Error Domain=com.apple.coreaudio.avfaudio Code=-10875 "(null)" UserInfo={failed call=err = AUGraphParser::InitializeActiveNodesInOutputChain(ThisGraph, kOutputChainOptimizedTraversal, *GetOutputNode(), isOutputChainActive)} ***
This all works flawlessly if I take the recording piece out, or the offline render out, but not with both included.
It might be that the issue is with your order of execution: try swapping startAudioPlayback and saveToDisk, so that it first does saveToDisk, then reads the file back and plays it with startAudioPlayback.
EDIT: So far, by playing around with it, I believe I have identified the issue. Once you save the file, the other temp file (the recording) disappears for some reason. Why that happens needs to be narrowed down.
Or perhaps play around with sending the whole saveToDisk method to a background thread, so it doesn't interrupt the currently playing file.
In my spare time I'll try to tweak it a little more and let you know.
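To illustrate that background-thread suggestion concretely (a sketch only; whether AudioKit.renderToFile is safe to call off the main thread is an assumption worth verifying), the render work could be dispatched like this, where renderCurrentPlayerToFile is a hypothetical helper wrapping the renderToFile block from saveToDisk:

@IBAction func saveToDisk(_ sender: Any) {
    // Run the blocking offline render off the main thread...
    DispatchQueue.global(qos: .userInitiated).async {
        self.renderCurrentPlayerToFile()  // hypothetical: the AudioKit.renderToFile block above
        // ...then hop back to the main thread before touching the player/UI.
        DispatchQueue.main.async {
            self.startAudioPlayback(url: self.recordUrl)
        }
    }
}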
EDIT 2:
Check this: https://stackoverflow.com/a/48133092/9497657
If you get nowhere, try posting your issue here:
https://github.com/audiokit/AudioKit/issues/
Also check out this tutorial:
https://www.raywenderlich.com/145770/audiokit-tutorial-getting-started
It might also be useful to message Aurelius Prochazka, as he is a developer of AudioKit who could help.
I was able to get it working by combining the recording and playback into a single pipeline:
mixer = AKMixer(mic)
boostedMic = AKBooster(mixer, gain: 5)
amplitudeTracker = AKAmplitudeTracker(boostedMic)
micBooster = AKBooster(amplitudeTracker, gain: 0)
player = AKPlayer()
try? player.load(url: self.recordUrl)
player.prepare()
player.gain = 2.0
outputMixer = AKMixer(micBooster, player)
AudioKit.output = outputMixer
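To sketch how that single pipeline is used across a record/playback cycle (my own illustration based on the question's AKNodeRecorder/AKPlayer code, not a verified AudioKit 4.3 snippet): because both the mic branch and the player feed outputMixer, the engine is started once and never stopped between recording and playback:

// Illustration: record for 5 seconds, then play back, with no AudioKit.stop()/start().
func recordThenPlay() {
    do {
        recorder = try AKNodeRecorder(node: mixer)   // record the mic branch of the pipeline
        try recorder.record()
        DispatchQueue.main.asyncAfter(deadline: .now() + 5) {
            self.recorder.stop()
            try? self.player.load(url: self.recordUrl)  // reload the finished take
            self.player.play()
        }
    } catch {
        print("record/play error", error)
    }
}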
