I apologise in advance for the "dumb" question, but I feel I have exhausted all resources. I have little to no experience with Swift and coding in general, but I understand a lot based on past experience with object-based programming environments such as Max/MSP.
I am attempting to develop a camera/microphone capture iOS app to use with the macOS QuickTime Player recording function (answering my own need for RAW camera access, as I literally could not find the right thing out there!).
Having implemented AVCaptureSession video output successfully, I have tried many methods of sending audio to QuickTime (including AVAudioSessionPortUSBAudio), to no avail. This was before I realised that QuickTime automatically captures the iOS system audio output.
So my presumption was that I should be able to preview audio under AVCaptureSession easily; not so! It seems AVCaptureAudioPreviewOutput is not available in Swift 4, or I am simply missing some basics. I have seen articles on Stack Overflow mentioning the need to stop audio processing, so I'm hopeful it is easy to preview/monitor it.
Could any of you point me to a method of previewing audio in AVCaptureSession? I still have an instantiated AVAudioSession (my original attempt), and I have also just managed (I hope) to successfully connect the mic to the AVCaptureSession. However, I am not sure what else to use! My aim is just to hear the mic input on the system's audio output; the QuickTime connection should (hopefully) handle capturing from the USB port (music played on the phone already goes over USB when the iOS device is selected as the microphone).
let audioDevice = AVCaptureDevice.default(for: AVMediaType.audio)
do {
    let audioInput = try AVCaptureDeviceInput(device: audioDevice!)
    self.captureSession.addInput(audioInput)
} catch {
    print("Unable to add Audio Device")
}
I have also attempted other things, which I am becoming lost on:
captureSession.automaticallyConfiguresApplicationAudioSession = true
func showAudioPreview() -> Bool { return true }
Perhaps it is possible to use an AVAudioSession alongside the capture session? However, my basic knowledge suggests there are problems running a capture session and an audio session together.
Any help would be sincerely appreciated; I am sure many of you will roll your eyes and be able to easily point out my mistakes!
Thanks,
Iwan
AVCaptureAudioPreviewOutput is only available on macOS, but you could instead use AVSampleBufferAudioRenderer. You have to manually enqueue audio CMSampleBuffers to it, which an AVCaptureAudioDataOutput can provide:
import UIKit
import AVFoundation

class ViewController: UIViewController, AVCaptureAudioDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    let bufferRenderSyncer = AVSampleBufferRenderSynchronizer()
    let bufferRenderer = AVSampleBufferAudioRenderer()

    override func viewDidLoad() {
        super.viewDidLoad()

        bufferRenderSyncer.addRenderer(bufferRenderer)

        let audioDevice = AVCaptureDevice.default(for: .audio)!
        let captureInput = try! AVCaptureDeviceInput(device: audioDevice)
        let audioOutput = AVCaptureAudioDataOutput()
        audioOutput.setSampleBufferDelegate(self, queue: DispatchQueue.main) // or some other dispatch queue

        session.addInput(captureInput)
        session.addOutput(audioOutput)
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        bufferRenderer.enqueue(sampleBuffer)

        if bufferRenderSyncer.rate == 0 {
            bufferRenderSyncer.setRate(1, time: sampleBuffer.presentationTimeStamp)
        }
    }
}
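One caveat worth adding: depending on how the capture session manages the shared audio session, you may also need to configure AVAudioSession for simultaneous capture and playback yourself. A minimal sketch, assuming you want output through the built-in speaker; the category and options here are assumptions to adapt, not a required configuration:

// Sketch: take over audio session configuration from the capture session,
// then allow recording and playback at the same time.
session.automaticallyConfiguresApplicationAudioSession = false
do {
    try AVAudioSession.sharedInstance().setCategory(.playAndRecord,
                                                    mode: .default,
                                                    options: [.defaultToSpeaker])
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print("Failed to configure audio session: \(error)")
}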
Related
I'm developing an app for sound effects. I use AudioKit, and I got the result I needed for recording voice. Now I want to add a sound effect to the voice while video is recording. I can record video with the unprocessed voice, but I don't know how to add an effect to the voice in the video.
I use AVCaptureSession for video recording, and I get the audio stream like this:
self.audioDevice = AVCaptureDevice.default(for: AVMediaType.audio)
if let audioDevice = self.audioDevice {
    self.audioInput = try AVCaptureDeviceInput(device: audioDevice)
    if captureSession.canAddInput(self.audioInput!) {
        captureSession.addInput(self.audioInput!)
    } else {
        throw CameraControllerError.inputsAreInvalid
    }
}
But I want to feed the captureSession an AVAudioNode as its input, because that is what AudioKit works with.
I would be glad for any help. Thanks.
I'm trying to stream a CMSampleBuffer video/audio combo using WebRTC on iOS, but I'm running into trouble trying to capture audio. Video works just fine:
guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
    print("couldn't get image from buffer :~(")
    return
}

let rtcPixelBuffer = RTCCVPixelBuffer(pixelBuffer: pixelBuffer)
let rtcVideoFrame = RTCVideoFrame(buffer: rtcPixelBuffer, rotation: ._0, timeStampNs: timeStampNs)

videoSource.capturer(videoCapturer, didCapture: rtcVideoFrame)
When it comes to audio, I can't see any method on the RTCAudioSource class for capturing audio; any help would be appreciated!
I found a fork of the WebRTC codebase which solves this issue by adding a way for audio samples to be captured by an RTCAudioDeviceModule:
https://github.com/pixiv/webrtc/blob/87.0.4280.142-pixiv0/README.pixiv.en.md
I have an iPhone app that plays back prerecorded video clips. The audio sounds fine from the phone speaker or AirPods, but when I listen through non-Apple Bluetooth headphones or speakers it sounds terribly distorted. I have tried to use AVAudioSession to fix the problem, but no luck. This is the code I tried (from a similar Stack Overflow answer):
override func viewDidLoad() {
    super.viewDidLoad()

    // so we can play the audio undistorted through bluetooth headphones:
    do {
        try AVAudioSession.sharedInstance().setCategory(.playAndRecord,
                                                        mode: .default,
                                                        options: [.mixWithOthers, .allowAirPlay,
                                                                  .allowBluetoothA2DP, .defaultToSpeaker])
        try AVAudioSession.sharedInstance().setActive(true)
    } catch {
        print("Failed to set audio session category. Error: \(error)")
    }
}
I am a first-time developer, so I need things explained very simply and from the basics up. Thank you so much.
It was an embarrassing error. I hope no one else makes it.
AVPlayer's volume must be in the range 0.0 to 1.0. If you set the volume above this (I had player.volume = 7 instead of player.volume = 0.7), you will get distortion on non-Apple Bluetooth headphones for some reason (Apple earphones and the internal speaker accommodate this error).
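If you want to guard against this class of mistake, you can clamp the value before assigning it; a tiny sketch (setClampedVolume is a hypothetical helper, not an AVFoundation API):

import AVFoundation

// Hypothetical helper: clamp the requested volume into AVPlayer's
// valid 0.0...1.0 range before assigning it.
func setClampedVolume(_ volume: Float, on player: AVPlayer) {
    player.volume = min(max(volume, 0.0), 1.0)
}

// setClampedVolume(7, on: player) now yields full volume rather than distortion.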
This question already has answers here:
iOS: Sample code for simultaneous record and playback
(3 answers)
Closed 3 years ago.
Is there a way I can play the file that is BEING RECORDED during the recording? That is, when the user plugs in earphones with a mic, they can hear their own voice through the earphones as they speak into the mic, which means there is no feedback loop to worry about.
P.S. If AVAudioRecorder cannot achieve this, is there any way of doing this with AudioKit? If so, please tell me how.
My guess is you would have to use an AVCaptureSession to do this.
You can grab input from an AVCaptureDeviceInput.
And then you can use AVCaptureAudioDataOutput, which provides access to audio sample buffers as they are recorded.
extension RecordingViewController: AVCaptureAudioDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Play each captured buffer back so the user hears the mic live.
        // audioRenderer / renderSynchronizer are properties of the view
        // controller (see the setup sketch below).
        audioRenderer.enqueue(sampleBuffer)
        if renderSynchronizer.rate == 0 {
            renderSynchronizer.setRate(1, time: sampleBuffer.presentationTimeStamp)
        }
    }
}
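For completeness, here is a minimal sketch of the session setup that feeds the delegate above, reusing the AVSampleBufferAudioRenderer idea from the first answer; the property and method names are my own assumptions, not a definitive implementation:

import UIKit
import AVFoundation

final class RecordingViewController: UIViewController {
    let captureSession = AVCaptureSession()
    let renderSynchronizer = AVSampleBufferRenderSynchronizer()
    let audioRenderer = AVSampleBufferAudioRenderer()

    // Attach the mic and an audio data output, then start the session.
    func startMonitoring() throws {
        renderSynchronizer.addRenderer(audioRenderer)

        guard let mic = AVCaptureDevice.default(for: .audio) else { return }
        let micInput = try AVCaptureDeviceInput(device: mic)

        let audioOutput = AVCaptureAudioDataOutput()
        audioOutput.setSampleBufferDelegate(self, queue: .main)

        if captureSession.canAddInput(micInput) { captureSession.addInput(micInput) }
        if captureSession.canAddOutput(audioOutput) { captureSession.addOutput(audioOutput) }

        captureSession.startRunning()
    }
}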
Edit: It might be simpler and cleaner to implement this with AudioKit, however. Code would be something like this:
let microphone = AKMicrophone()
let mixer = AKMixer(microphone)
// AKBooster's gain is an amplitude multiplier: 0 silences the output,
// so raise it toward 1 when you actually want to hear the mic.
let booster = AKBooster(mixer, gain: 0)
AudioKit.output = booster
microphone.start()

do {
    try AudioKit.start()
} catch {
    print("AudioKit boot failed.")
}
I am setting up a microphone on an AVCaptureSession and I need an on/off switch for the mic. How should I proceed with this?
Do I really need to call captureSession?.removeInput(microphone), or is there an easier way?
let microphone = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeAudio)

do {
    let micInput = try AVCaptureDeviceInput(device: microphone)
    if captureSession.canAddInput(micInput) {
        captureSession.addInput(micInput)
    }
} catch {
    print("Error setting device audio input: \(error)")
    return false
}
You can always just leave the mic input attached and then use your switch to decide what to do with the audio buffer: if the switch is off, simply don't process the audio data (see the sketch after the documentation link below). I found an objc.io article that talks about how to set up the separate audio and video buffers before writing the data with an AVAssetWriter.
By default, all AVCaptureAudioChannel objects exposed by a connection are enabled. You may set enabled to false to stop the flow of data for a particular channel.
https://developer.apple.com/documentation/avfoundation/avcaptureaudiochannel/1388574-isenabled
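Putting the first suggestion into code: a minimal sketch of gating the buffers in the data-output delegate. The names are placeholders, and note that, as far as I can tell, the isEnabled setter on AVCaptureAudioChannel is macOS-only, so dropping buffers is the portable route on iOS:

var micEnabled = true // flipped by your on/off switch

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    guard micEnabled else { return } // switch off: drop the audio data
    // Switch on: hand the buffer to your AVAssetWriterInput as usual, e.g.
    // assetWriterAudioInput.append(sampleBuffer)
}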