Capturing a CMSampleBuffer using an RTCAudioSource on iOS - ios

I'm trying to stream a CMSampleBuffer video / audio combo using WebRTC on iOS, but I'm running into trouble trying to capture audio. Video works just fine:
guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
    print("couldn't get image from buffer :~(")
    return
}
let rtcPixelBuffer = RTCCVPixelBuffer(pixelBuffer: pixelBuffer)
let rtcVideoFrame = RTCVideoFrame(buffer: rtcPixelBuffer, rotation: ._0, timeStampNs: timeStampNs)
videoSource.capturer(videoCapturer, didCapture: rtcVideoFrame)
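(Assumption, for context: timeStampNs isn't defined in the snippet above; it is typically derived from the sample buffer's presentation timestamp, e.g.:)
let timeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
let timeStampNs = Int64(CMTimeGetSeconds(timeStamp) * 1_000_000_000)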
When it comes to audio, I can't see any method on the RTCAudioSource class for capturing audio samples; any help would be appreciated!

I found a fork of the WebRTC codebase which solves this issue by adding a way for audio samples to be captured by an RTCAudioDeviceModule:
https://github.com/pixiv/webrtc/blob/87.0.4280.142-pixiv0/README.pixiv.en.md
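Whichever build you end up using, the common piece is pulling the raw PCM out of the audio CMSampleBuffer so it can be handed to the capture hook. A minimal sketch, assuming a custom audio device module like the fork's (the deliver call at the end is a hypothetical placeholder, not a stock WebRTC API):
import AVFoundation

func handleAudioSampleBuffer(_ sampleBuffer: CMSampleBuffer) {
    guard let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer),
          let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription)?.pointee else {
        return
    }

    // Copy the sample buffer's audio into an AudioBufferList we can read directly.
    var audioBufferList = AudioBufferList()
    var blockBuffer: CMBlockBuffer?
    let status = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
        sampleBuffer,
        bufferListSizeNeededOut: nil,
        bufferListOut: &audioBufferList,
        bufferListSize: MemoryLayout<AudioBufferList>.size,
        blockBufferAllocator: nil,
        blockBufferMemoryAllocator: nil,
        flags: 0,
        blockBufferOut: &blockBuffer
    )
    guard status == noErr, let rawPCM = audioBufferList.mBuffers.mData else { return }

    let frameCount = CMSampleBufferGetNumSamples(sampleBuffer)
    // Hand rawPCM, frameCount, asbd.mSampleRate and asbd.mChannelsPerFrame to whatever
    // capture entry point your WebRTC build exposes (placeholder, not a real API):
    // audioDeviceModule.deliverRecordedData(rawPCM, frames: frameCount, ...)
    _ = (rawPCM, frameCount, asbd)
}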

Related

How to stream audio while recording in swift?

I am trying to develop an audio streaming system. The client streams audio from the microphone to a server while recording in real time, and other clients subscribe to it like a podcast.
As far as I understand, AVCaptureSession routes the AVCaptureDeviceInput to an AVCaptureAudioDataOutput.
let captureSession = AVCaptureSession()
guard let audioDevice = AVCaptureDevice.default(for: .audio) else { return }
let audioInput = try AVCaptureDeviceInput(device: audioDevice)
if captureSession.canAddInput(audioInput) {
    captureSession.addInput(audioInput)
}
let audioOutput = AVCaptureAudioDataOutput()
if captureSession.canAddOutput(audioOutput) {
    captureSession.addOutput(audioOutput)
}
captureSession.commitConfiguration()
Coming from a JavaScript background, I have used the MediaRecorder API, where the recorder emits a chunk of audio data every X milliseconds.
So, is it possible to do something similar in iOS (Swift)? Is there any other way to stream the audio in real time?
Edit 1: I need a stream, or a chunk of data that is emitted every X milliseconds. The questions I found mention recording, but they save the audio to a file.
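A rough sketch of the closest iOS analogue to MediaRecorder's timeslice, assuming the capture session set up above: install a sample buffer delegate on the AVCaptureAudioDataOutput and treat each CMSampleBuffer callback as one chunk (the send call is a placeholder for whatever transport you use):
// In a type conforming to AVCaptureAudioDataOutputSampleBufferDelegate:
audioOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "audio.capture.queue"))

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    // Each callback carries a short run of PCM samples; copy it out as Data.
    var lengthAtOffset = 0
    var totalLength = 0
    var dataPointer: UnsafeMutablePointer<Int8>?
    guard let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer),
          CMBlockBufferGetDataPointer(blockBuffer,
                                      atOffset: 0,
                                      lengthAtOffsetOut: &lengthAtOffset,
                                      totalLengthOut: &totalLength,
                                      dataPointerOut: &dataPointer) == kCMBlockBufferNoErr,
          let dataPointer = dataPointer else { return }
    let chunk = Data(bytes: dataPointer, count: totalLength)
    send(chunk) // placeholder: encode and push the chunk to your server
}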

Use AudioKit Node to AVCaptureSession as audio inputs

I'm developing an app for sound effects. I use AudioKit and got the result I needed for recording voice. Now I want to apply a sound effect to the voice while video is being recorded. I can record video with the unprocessed voice, but I don't know how to add an effect to the video's audio.
I use AVCaptureSession for video recording and get the audio stream like this:
self.audioDevice = AVCaptureDevice.default(for: AVMediaType.audio)
if let audioDevice = self.audioDevice {
    self.audioInput = try AVCaptureDeviceInput(device: audioDevice)
    if captureSession.canAddInput(self.audioInput!) {
        captureSession.addInput(self.audioInput!)
    } else {
        throw CameraControllerError.inputsAreInvalid
    }
}
But I want to feed the captureSession an AVAudioNode as its audio input, because that is what AudioKit works with.
I will be glad for any help. Thanks
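One common workaround, sketched under the assumption that you write the movie yourself with AVAssetWriter: AVCaptureSession cannot take an AVAudioNode as an input, so instead tap the processed AudioKit/AVAudioEngine output and append those buffers to the writer's audio input while the capture session supplies only video. Names such as assetWriterAudioInput are assumptions, and the PCM-buffer-to-CMSampleBuffer conversion is omitted:
import AVFoundation

func installProcessedAudioTap(on engine: AVAudioEngine,
                              assetWriterAudioInput: AVAssetWriterInput) {
    let format = engine.mainMixerNode.outputFormat(forBus: 0)
    engine.mainMixerNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, when in
        // `buffer` is the effect-processed audio. Wrap it in a CMSampleBuffer
        // (conversion helper not shown) and append it to the asset writer:
        // if assetWriterAudioInput.isReadyForMoreMediaData {
        //     assetWriterAudioInput.append(sampleBuffer)
        // }
        _ = (buffer, when)
    }
}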

AVCaptureSession Audio preview

I apologise in advance for the "dumb" question, but I feel I have exhausted all resources. I have little to no experience with Swift and coding in general, but I understand a lot based on past experience with object-based environments such as Max/MSP.
I am attempting to develop a camera/microphone capture iOS app for the macOS QuickTime Player recording function (answering my own need for RAW camera access as I literally could not find the right thing out there!).
Having implemented AVCaptureSession video output successfully, I have tried many methods of sending audio to Quicktime (including AVAudioSessionPortUSBAudio) to no avail. This was before I realised that QuickTime automatically captures the iOS system audio output.
So my presumption was that I should be able to preview audio under AVCaptureSession easily; not so! It seems AVCaptureAudioPreviewOutput is "not available" in Swift 4, or I am simply missing some basics. I have seen posts on Stack Overflow mentioning the need to STOP audio processing, so I'm hopeful it is easy to preview/monitor it.
Could any of you point me to a method of previewing audio in AVCaptureSession? I have an instantiated AVAudioSession still (my original attempt), and have also just managed (I hope) to successfully connect the mic to the AVCaptureSession. However, I am not sure what else to use! My aim: just to hear the Mic input on the system's audio output: the Quicktime connection should (hopefully) handle capturing from the USB port (music played on the phone goes over the usb when the iOS device is selected as the microphone).
let audioDevice = AVCaptureDevice.default(for: AVMediaType.audio)
do {
    let audioInput = try AVCaptureDeviceInput(device: audioDevice!)
    self.captureSession.addInput(audioInput)
} catch {
    print("Unable to add Audio Device")
}
I have also attempted other things, which I am becoming lost with:
captureSession.automaticallyConfiguresApplicationAudioSession = true
func showAudioPreview() -> Bool { return true }
Perhaps it is possible to use AVAudioSession alongside the capture? However, my basic knowledge points to the fact that there are problems running Capture and Audio Sessions together.
Any help would be sincerely appreciated, I am sure many of you will roll your eyes and be able to easily point out my mistakes!
Thanks,
Iwan
AVCaptureAudioPreviewOutput is only available on macOS, but you could instead use AVSampleBufferAudioRenderer. You have to manually enqueue audio CMSampleBuffers to it, which an AVCaptureAudioDataOutput can provide:
import UIKit
import AVFoundation
class ViewController: UIViewController, AVCaptureAudioDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    let bufferRenderSyncer = AVSampleBufferRenderSynchronizer()
    let bufferRenderer = AVSampleBufferAudioRenderer()

    override func viewDidLoad() {
        super.viewDidLoad()
        bufferRenderSyncer.addRenderer(bufferRenderer)
        let audioDevice = AVCaptureDevice.default(for: .audio)!
        let captureInput = try! AVCaptureDeviceInput(device: audioDevice)
        let audioOutput = AVCaptureAudioDataOutput()
        audioOutput.setSampleBufferDelegate(self, queue: DispatchQueue.main) // or some other dispatch queue
        session.addInput(captureInput)
        session.addOutput(audioOutput)
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // Feed each captured audio buffer straight to the renderer.
        bufferRenderer.enqueue(sampleBuffer)
        if bufferRenderSyncer.rate == 0 {
            // Start playback once the first buffer has arrived.
            bufferRenderSyncer.setRate(1, time: sampleBuffer.presentationTimeStamp)
        }
    }
}
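A likely additional requirement on a real device (not shown in the original answer, so treat it as an assumption): activate a play-and-record audio session and request microphone permission before starting the capture session, otherwise the renderer stays silent:
try? AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker])
try? AVAudioSession.sharedInstance().setActive(true)
AVAudioSession.sharedInstance().requestRecordPermission { granted in
    // Only start the capture session once permission has been granted.
}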

Audible glitches on buffer playback via AVAudioPlayerNode in iOS (Swift) *working in simulator, but not on device

When using an AVAudioPlayerNode to schedule a short buffer to play immediately on a touch event ("Touch Up Inside"), I've noticed audible glitches / artifacts on playback while testing. The audio does not glitch at all in the iOS simulator; however, there is audible distortion on playback when I run the app on an actual iOS device. The audible distortion occurs randomly (the triggered sound will sometimes sound great, while other times it sounds distorted).
I've tried using different audio files, file formats, and preparing the buffer for playback using the prepareWithFrameCount method, but unfortunately the result is always the same, and I'm stuck wondering what could be going wrong.
I've stripped the code down to globals for clarity and simplicity. Any help or insight would be greatly appreciated. This is my first attempt at developing an iOS app and my first question posted on Stack Overflow.
let filePath = NSBundle.mainBundle().pathForResource("BD_withSilence", ofType: "caf")!
let fileURL: NSURL = NSURL(fileURLWithPath: filePath)!
var error: NSError?
let file = AVAudioFile(forReading: fileURL, error: &error)
let fileFormat = file.processingFormat
let frameCount = UInt32(file.length)
let buffer = AVAudioPCMBuffer(PCMFormat: fileFormat, frameCapacity: frameCount)
let audioEngine = AVAudioEngine()
let playerNode = AVAudioPlayerNode()
func startEngine() {
    var error: NSError?
    file.readIntoBuffer(buffer, error: &error)
    audioEngine.attachNode(playerNode)
    audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: buffer.format)
    audioEngine.prepare()

    func start() {
        var error: NSError?
        audioEngine.startAndReturnError(&error)
    }
    start()
}
startEngine()
let frameCapacity = AVAudioFramePosition(buffer.frameCapacity)
let frameLength = buffer.frameLength
let sampleRate: Double = 44100.0
func play() {
    func scheduleBuffer() {
        playerNode.scheduleBuffer(buffer, atTime: nil, options: AVAudioPlayerNodeBufferOptions.Interrupts, completionHandler: nil)
        playerNode.prepareWithFrameCount(frameLength)
    }
    if playerNode.playing == false {
        scheduleBuffer()
        let time = AVAudioTime(sampleTime: frameCapacity, atRate: sampleRate)
        playerNode.playAtTime(time)
    } else {
        scheduleBuffer()
    }
}
// triggered by a "Touch Up Inside" event on a UIButton in my ViewController
@IBAction func triggerPlay(sender: AnyObject) {
    play()
}
Update:
Ok, I think I've identified the source of the distortion: the volume of the node(s) is too great at output and causes clipping. After adding these two lines to my startEngine function, the distortion no longer occurred:
playerNode.volume = 0.8
audioEngine.mainMixerNode.volume = 0.8
However, I still don't know why I need to lower the output; my audio file itself does not clip. I'm guessing that it might be a result of the way that AVAudioPlayerNodeBufferOptions.Interrupts is implemented. When a buffer interrupts another buffer, could there be an increase in output volume as a result of the interruption, causing output clipping? I'm still looking for a solid understanding as to why this occurs. If anyone is willing/able to provide any clarification about this, that would be fantastic!
Not sure if this is the problem you experienced in 2015; it may be the same issue that @suthar experienced in 2018.
I experienced a very similar problem, and it was due to the fact that the sample rate on the device is different from the simulator's. On macOS it is 44100, and on iOS devices (late-model ones) it is 48000.
So when you fill your buffer with 44100 samples on a 48000 device, you get 3900 samples of silence. When played back it doesn't sound like silence; it sounds like a glitch.
I used the mainMixer format when connecting my playerNode and also when creating my pcmBuffer. Don't refer to 48000 or 44100 anywhere in the code.
audioEngine.attach(playerNode)
audioEngine.connect(playerNode, to: mixerNode, format: mixerNode.outputFormat(forBus: 0))
let pcmBuffer = AVAudioPCMBuffer(pcmFormat: SynthEngine.shared.audioEngine.mainMixerNode.outputFormat(forBus: 0),
                                 frameCapacity: AVAudioFrameCount(bufferSize))

Take screenshot from HLS video stream with iOS device

I'm developing an application which plays an HLS video and renders it in a UIView.
At a certain time I want to save a picture of the currently displayed video frame. For this I begin an image graphics context, draw the UIView hierarchy into the context, and save it in a UIImage with the UIGraphicsGetImageFromCurrentImageContext method.
This works fine in the iOS simulator; the rendered image is perfect. But on a device the rendered image is totally white.
Anyone knows why it doesn't work on device ?
Or, is there a working way to take a screenshot of an HLS video on device ?
Thanks for any help.
I was able to find a way to save a screenshot of an HLS live stream, by adding an AVPlayerItemVideoOutput object to the AVPlayerItem.
In initialisation:
self.output = AVPlayerItemVideoOutput(pixelBufferAttributes: Dictionary<String, AnyObject>())
playerItem.addOutput(output!)
To save screenshot:
guard let time = self.player?.currentTime() else { return }
guard let pixelBuffer = self.output?.copyPixelBufferForItemTime(time,
                                                                itemTimeForDisplay: nil) else { return }
let ciImage = CIImage(CVPixelBuffer: pixelBuffer)
let temporaryContext = CIContext(options: nil)
let rect = CGRectMake(0, 0,
                      CGFloat(CVPixelBufferGetWidth(pixelBuffer)),
                      CGFloat(CVPixelBufferGetHeight(pixelBuffer)))
let videoImage = temporaryContext.createCGImage(ciImage, fromRect: rect)
let image = UIImage(CGImage: videoImage)
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
This seems not to work in the simulator, but works fine on a device. Code is in Swift 2 but should be straightforward to convert to obj-c or Swift 1.x.
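For reference, a sketch of a modern-Swift equivalent of the snippet above (same approach, current API names; the hasNewPixelBuffer check is an addition):
import AVFoundation
import CoreImage
import UIKit

func snapshotCurrentFrame(player: AVPlayer, output: AVPlayerItemVideoOutput) -> UIImage? {
    let time = player.currentTime()
    guard output.hasNewPixelBuffer(forItemTime: time),
          let pixelBuffer = output.copyPixelBuffer(forItemTime: time, itemTimeForDisplay: nil) else {
        return nil
    }
    // Render the video pixel buffer into a CGImage via Core Image.
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let context = CIContext(options: nil)
    let rect = CGRect(x: 0, y: 0,
                      width: CVPixelBufferGetWidth(pixelBuffer),
                      height: CVPixelBufferGetHeight(pixelBuffer))
    guard let cgImage = context.createCGImage(ciImage, from: rect) else { return nil }
    return UIImage(cgImage: cgImage)
}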
People have tried, and failed (like me), apparently because of the nature of HLS. See: http://blog.denivip.ru/index.php/2012/12/screen-capture-in-ios/?lang=en
