I am trying to develop an audio streaming system. One client streams audio from its microphone to the server in real time while recording, and other clients subscribe to the stream, like a podcast.
As far as I understand, an AVCaptureSession routes data from an AVCaptureDeviceInput to an AVCaptureAudioDataOutput:
let captureSession = AVCaptureSession()
captureSession.beginConfiguration() // must be paired with commitConfiguration()
guard let audioDevice = AVCaptureDevice.default(for: .audio) else { return }
let audioInput = try AVCaptureDeviceInput(device: audioDevice)
if captureSession.canAddInput(audioInput) {
    captureSession.addInput(audioInput)
}
let audioOutput = AVCaptureAudioDataOutput()
if captureSession.canAddOutput(audioOutput) {
    captureSession.addOutput(audioOutput)
}
captureSession.commitConfiguration()
I come from a JavaScript background, where the MediaRecorder API emits a chunk of audio data every X milliseconds.
So, is it possible to do something similar in iOS (Swift)? Is there any other way to stream audio in real time?
Edit 1: I need a stream, or a chunk of data emitted every X milliseconds. This question mentions recording, but there the audio is saved to a file.
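One possible approach (a sketch, not from the thread): AVCaptureAudioDataOutput already delivers CMSampleBuffers continuously via its delegate, typically every few milliseconds, which plays the same role as MediaRecorder's chunks. `sendToServer(_:)` below is a hypothetical placeholder for whatever network layer you use:

```swift
import AVFoundation

final class AudioChunkStreamer: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {

    // Called for every captured audio buffer; each call is one "chunk".
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Copy the raw audio bytes out of the sample buffer.
        guard let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { return }
        var length = 0
        var dataPointer: UnsafeMutablePointer<CChar>?
        CMBlockBufferGetDataPointer(blockBuffer,
                                    atOffset: 0,
                                    lengthAtOffsetOut: nil,
                                    totalLengthOut: &length,
                                    dataPointerOut: &dataPointer)
        if let dataPointer = dataPointer {
            let chunk = Data(bytes: dataPointer, count: length)
            sendToServer(chunk) // hypothetical upload function (e.g. a WebSocket send)
        }
    }

    private func sendToServer(_ chunk: Data) { /* network code goes here */ }
}
```

You would attach it to the session from the question with `audioOutput.setSampleBufferDelegate(streamer, queue: DispatchQueue(label: "audio.capture"))`. Note the buffers are raw PCM, so for podcast-style streaming you would normally encode them (e.g. to AAC) before sending.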
I'm developing an app for sound effects. I use AudioKit and got the result I needed for recording voice. Now I want to apply a sound effect to the voice while video is recording. I can record video with the unprocessed voice, but I don't know how to apply an effect to the audio track of the video.
I use AVCaptureSession for video recording, and I get the audio stream like this:
self.audioDevice = AVCaptureDevice.default(for: AVMediaType.audio)
if let audioDevice = self.audioDevice {
    self.audioInput = try AVCaptureDeviceInput(device: audioDevice)
    if captureSession.canAddInput(self.audioInput!) {
        captureSession.addInput(self.audioInput!)
    } else {
        throw CameraControllerError.inputsAreInvalid
    }
}
But I want to give the captureSession an AVAudioNode as its input, because that is what AudioKit works with.
I would be glad for any help. Thanks.
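AVCaptureSession cannot take an AVAudioNode as an input directly. One workaround (a sketch, untested on a device) is to convert each CMSampleBuffer coming out of an AVCaptureAudioDataOutput into an AVAudioPCMBuffer and hand that to the AVAudioEngine/AudioKit effect chain:

```swift
import AVFoundation

// Convert a captured CMSampleBuffer into an AVAudioPCMBuffer so it can be
// fed into an AVAudioEngine graph (which is what AudioKit wraps).
func pcmBuffer(from sampleBuffer: CMSampleBuffer) -> AVAudioPCMBuffer? {
    guard let formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer),
          let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(formatDesc),
          let format = AVAudioFormat(streamDescription: asbd) else { return nil }

    let frameCount = AVAudioFrameCount(CMSampleBufferGetNumSamples(sampleBuffer))
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                        frameCapacity: frameCount) else { return nil }
    buffer.frameLength = frameCount

    // Copy the PCM samples into the buffer's audio buffer list.
    CMSampleBufferCopyPCMDataIntoAudioBufferList(sampleBuffer,
                                                 at: 0,
                                                 frameCount: Int32(frameCount),
                                                 into: buffer.mutableAudioBufferList)
    return buffer
}
```

The resulting buffers could then be scheduled on a player node inside the effects graph, while the processed output is written with an AVAssetWriter alongside the video. This keeps AVCaptureSession and the audio engine as two separate pipelines connected only by buffer copies.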
I'm having a problem recording audio from the microphone of my test device to a .caf file in Swift, Xcode 9.4.1, using the latest version of AudioKit. In a simple test where I send the audio straight from the microphone to the output via an AKBooster, it works just fine and I can hear the mic input coming out of the speakers. I'm more or less following this example, although again using a booster node instead of an oscillator.
The following is my code:
class MicrophoneHandler
{
    var microphone : AKMicrophone!
    var booster : AKBooster!
    var mixer : AKMixer!
    var recorder : AKNodeRecorder!
    var file : AKAudioFile!
    var player : AKAudioPlayer!

    init()
    {
        setupMicrophone()

        microphone = AKMicrophone()
        booster = AKBooster(microphone) // Stereo amplifier for microphone
        mixer = AKMixer(booster)
        file = try! AKAudioFile() // File to store recorder output
        player = try? AKAudioPlayer(file: file) // Player to play back recorded audio file
        //player.looping = true
        recorder = try? AKNodeRecorder(node: mixer, file: file)

        try? recorder.record()
        sleep(5)

        let dur = String(format: "%0.3f seconds", recorder.recordedDuration)
        print("Stopped. (\(dur) recorded)")
        recorder.stop()

        //file.exportAsynchronously(name: "Test", baseDir: .documents, exportFormat: .caf){ [weak self] _, _ in
        //}

        //player.play()
        //AudioKit.output = player!
        //try? AudioKit.start()
    }

    func setupMicrophone()
    {
        // Function to initialise microphone settings
        // Adapted from AudioKit example code found here:
        // https://audiokit.io/examples/MicrophoneAnalysis
        AKSettings.bufferLength = .medium
        AKSettings.ioBufferDuration = 0.002 // TODO: experiment with this to control latency

        do
        {
            try AKSettings.setSession(category: .playAndRecord, with: .allowBluetoothA2DP) // Set session type & allow streaming to Bluetooth devices
        } catch
        {
            AKLog("Could not set session category.")
        }
        AKSettings.defaultToSpeaker = true // Output to speaker when audio input is enabled
    }
}
I have commented out the export code as the problem doesn't appear to be here. The console displays the following:
AKMicrophone.swift:init():45:Mixer inputs 8
AKAudioPlayer.swift:updatePCMBuffer():533:AKAudioPlayer Warning: "BF848EC0-94F8-4E39-A211-784B001CED72.caf" is an empty file
2018-11-16 17:49:16.936169+0000 VoxBox[2258:6984570] Audio files cannot be non-interleaved. Ignoring setting AVLinearPCMIsNonInterleaved YES.
AKNodeRecorder.swift:record():104:AKNodeRecorder: recording
Stopped. (0.000 seconds recorded)
As you can see, the recorder appears not to be recording to file for some reason. To my mind, my code should:
1. Initialise the microphone (including settings)
2. Route the microphone input through a booster followed by a mixer (mixing with an FX bank will happen later)
3. Create an empty .caf audio file to be written to
4. Set up a player to play this file when the time comes
5. Set up a recorder to record the output of the mixer node to the audio file
6. Record 5 seconds of microphone input to the audio file
Yet for some reason nothing is being recorded. Clearly I am missing something or have misunderstood how the AKNodeRecorder works in this regard. I have read as many StackOverflow questions on similar topics as I can, had a dig through the AudioKit documentation and read a couple of examples from the AudioKit site, but nothing seems to address my particular problem.
Any help would be much appreciated.
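A hedged guess at the missing piece (AudioKit 4.x API, as used in the question): AKNodeRecorder taps a node in a running engine, so AudioKit needs an output set and `AudioKit.start()` called *before* `record()`, and the five-second wait should not block the thread with `sleep(5)`. A minimal sketch:

```swift
import AudioKit

// Sketch only: engine must be configured and started before recording.
let microphone = AKMicrophone()
let booster = AKBooster(microphone)
let mixer = AKMixer(booster)
let recorder = try AKNodeRecorder(node: mixer)

AudioKit.output = mixer   // the engine needs an output before it can start
try AudioKit.start()      // start the engine first...
try recorder.record()     // ...then begin recording the tap

// Stop after 5 seconds without blocking the thread the engine runs on.
DispatchQueue.main.asyncAfter(deadline: .now() + 5) {
    recorder.stop()
    print("Stopped. (\(recorder.recordedDuration) seconds recorded)")
}
```

In the original code the engine-start lines are commented out at the end of `init()`, after the recording has already finished, which would explain the 0.000 seconds recorded.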
I apologise in advance for the "dumb" question, but I feel I have exhausted all other resources. I have little to no experience with Swift and coding in general, but I understand a lot from past experience with object-based environments such as Max/MSP.
I am attempting to develop a camera/microphone capture iOS app to feed the macOS QuickTime Player recording function (answering my own need for raw camera access, as I literally could not find the right thing out there!).
Having implemented AVCaptureSession video output successfully, I have tried many methods of sending audio to QuickTime (including AVAudioSessionPortUSBAudio) to no avail. This was before I realised that QuickTime automatically captures the iOS system audio output.
So my presumption was that I should be able to preview audio under AVCaptureSession easily; not so! It seems AVCaptureAudioPreviewOutput is not available in Swift 4, or I am simply missing some basics. I have seen articles on Stack Overflow mentioning the need to stop audio processing, so I'm hopeful it is easy to preview/monitor it.
Could any of you point me to a method of previewing audio in an AVCaptureSession? I still have an instantiated AVAudioSession (my original attempt), and I have also just managed (I hope) to successfully connect the mic to the AVCaptureSession. However, I am not sure what else to use. My aim is just to hear the mic input on the system's audio output; the QuickTime connection should (hopefully) handle capturing from the USB port (music played on the phone goes over USB when the iOS device is selected as the microphone).
let audioDevice = AVCaptureDevice.default(for: AVMediaType.audio)
do {
    let audioInput = try AVCaptureDeviceInput(device: audioDevice!)
    self.captureSession.addInput(audioInput)
} catch {
    print("Unable to add Audio Device")
}
I have also attempted other things, which I am becoming lost on:
captureSession.automaticallyConfiguresApplicationAudioSession = true
func showAudioPreview() -> Bool { return true }
Perhaps it is possible to use AVAudioSession alongside the capture session? However, my basic knowledge suggests there are problems running a capture session and an audio session together.
Any help would be sincerely appreciated; I am sure many of you will roll your eyes and be able to easily point out my mistakes!
Thanks,
Iwan
AVCaptureAudioPreviewOutput is only available on macOS, but you can instead use AVSampleBufferAudioRenderer. You have to manually enqueue audio CMSampleBuffers to it, which an AVCaptureAudioDataOutput can provide:
import UIKit
import AVFoundation

class ViewController: UIViewController, AVCaptureAudioDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    let bufferRenderSyncer = AVSampleBufferRenderSynchronizer()
    let bufferRenderer = AVSampleBufferAudioRenderer()

    override func viewDidLoad() {
        super.viewDidLoad()

        bufferRenderSyncer.addRenderer(bufferRenderer)

        let audioDevice = AVCaptureDevice.default(for: .audio)!
        let captureInput = try! AVCaptureDeviceInput(device: audioDevice)
        let audioOutput = AVCaptureAudioDataOutput()
        audioOutput.setSampleBufferDelegate(self, queue: DispatchQueue.main) // or some other dispatch queue

        session.addInput(captureInput)
        session.addOutput(audioOutput)
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        bufferRenderer.enqueue(sampleBuffer)
        if bufferRenderSyncer.rate == 0 {
            bufferRenderSyncer.setRate(1, time: sampleBuffer.presentationTimeStamp)
        }
    }
}
I am setting up a microphone on an AVCaptureSession and I need a mute switch for the mic. How should I proceed with this?
Do I really need to call captureSession?.removeInput(microphone), or is there an easier way?
let microphone = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeAudio)
do {
    let micInput = try AVCaptureDeviceInput(device: microphone)
    if captureSession.canAddInput(micInput) {
        captureSession.addInput(micInput)
    }
} catch {
    print("Error setting device audio input: \(error)")
    return false
}
You can always just leave the mic input attached and then use your switch to decide what to do with the audio buffer: if the switch is off, simply don't process the audio data. I found an objc.io article that talks about how to set up separate audio and video buffers before writing the data with an AVAssetWriter.
By default, all AVCaptureAudioChannel objects exposed by a connection are enabled. You may set enabled to false to stop the flow of data for a particular channel.
https://developer.apple.com/documentation/avfoundation/avcaptureaudiochannel/1388574-isenabled
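A minimal sketch of the "leave the input attached" approach: keep a single boolean flag and drop sample buffers in the data-output delegate while the switch is off. `writeAudio(_:)` is a hypothetical stand-in for your AVAssetWriter code:

```swift
import AVFoundation

final class MicGate: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {
    var isMicOn = true // flip this from your UI switch

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard isMicOn else { return } // switch is off: discard the audio
        writeAudio(sampleBuffer)      // switch is on: pass it to the writer
    }

    private func writeAudio(_ buffer: CMSampleBuffer) { /* append to AVAssetWriterInput */ }
}
```

This avoids reconfiguring the session at all; note, however, that "muting" this way only skips your processing, while the mic itself keeps capturing (so the recording-in-progress indicator stays on).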
I'm developing a musical instrument for iOS with two audio samples (a high and a low pitch) that are played by touching views. The first sample is very short (half a second) and the other is a little longer (two seconds). When I play the low-pitched sound repeatedly and quickly, there is an audible click/pop. There is no problem playing the high-pitched sound.
Both audio samples have a fade-in and fade-out at their start and end, and there is no clipping problem in the files themselves.
I'm using this code to load the audio files (simplified here):
engine = AVAudioEngine()
mixer = engine.mainMixerNode
let player = AVAudioPlayerNode()
do {
    let audioFile = try AVAudioFile(forReading: instrumentURL)
    let audioFormat = audioFile.processingFormat
    let audioFrameCount = UInt32(audioFile.length)
    let audioFileBuffer = AVAudioPCMBuffer(pcmFormat: audioFormat, frameCapacity: audioFrameCount)!
    try audioFile.read(into: audioFileBuffer)
    engine.attach(player)
    engine.connect(player, to: mixer, format: audioFileBuffer.format)
    try engine.start()
} catch {
    print("Init Error!")
}
and this code to play the samples:
player.play()
player.scheduleBuffer(audioFileBuffer, at: nil, options: option, completionHandler: nil)
I'm using similar functionality on Android with the same audio samples, without any click/pop problem.
Is this click/pop an implementation error?
How can I fix this problem?
Update 1
I just tried another approach with AVAudioPlayer, and I got the same pop/click problem.
Update 2
I think the problem is restarting the audio file before it has reached its end: the sound stops abruptly, and that discontinuity in the waveform produces the click.
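One common way to avoid the pop from cutting a buffer off mid-sample (a sketch, not from the thread): rotate through a small pool of AVAudioPlayerNodes, so each new trigger starts on a free node while the previous playback finishes, and fades out, naturally:

```swift
import AVFoundation

// Round-robin voice pool: re-triggering never interrupts a playing buffer,
// so the waveform is never cut off abruptly.
final class SamplePool {
    private let engine = AVAudioEngine()
    private let buffer: AVAudioPCMBuffer
    private var players: [AVAudioPlayerNode] = []
    private var next = 0

    init(buffer: AVAudioPCMBuffer, voices: Int = 4) throws {
        self.buffer = buffer
        for _ in 0..<voices {
            let p = AVAudioPlayerNode()
            engine.attach(p)
            engine.connect(p, to: engine.mainMixerNode, format: buffer.format)
            players.append(p)
        }
        try engine.start()
    }

    func trigger() {
        let p = players[next]
        next = (next + 1) % players.count
        p.scheduleBuffer(buffer, at: nil, options: [], completionHandler: nil)
        p.play()
    }
}
```

With a single player node the alternative is to pass `.interrupts` in the schedule options, but that cuts the old buffer exactly as described in Update 2, so the pool approach is usually what removes the click.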