How to use CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer in swift - ios

I was wondering how exactly I can call the Core Media function CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer in Swift.
I'm following this tutorial, which uses the method to get a list of AudioBuffers from a CMSampleBuffer.
I've tried over and over, but the compiler keeps giving me the generic error
"Cannot invoke 'CMSampleBuffer...Buffer' with an argument list of type '...'"
which isn't very helpful.
I've already seen
this StackOverflow question, but the only answer there seems to throw the exact same error I've been getting.
Basically, I just want someone to show me how to get this method to compile without errors in Swift.

I assume you have access to a sampleBuffer in a delegate method such as captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!)
I've already struggled with the CMSampleBuffer methods, and I agree it is not obvious how to make them compile.
var sizeOut = UnsafeMutablePointer<Int>.alloc(1)
var listOut = UnsafeMutablePointer<AudioBufferList>.alloc(1)
let listSize: Int = 10
var blockBufferOut: Unmanaged<CMBlockBuffer>?
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, sizeOut, listOut, listSize, kCFAllocatorDefault, kCFAllocatorDefault, UInt32(2), &blockBufferOut)
You will then need to call takeRetainedValue() on the block buffer out parameter, and handle your pointers so you can release them manually.
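For reference, a minimal modern-Swift sketch of the same call (Swift 4+ labeled signature, mirroring the Swift 5 answer further down; sampleBuffer is assumed to be the CMSampleBuffer from your delegate callback):

// Hedged sketch: fetch the AudioBufferList plus a retained CMBlockBuffer
// that keeps the audio data alive while you read it.
var audioBufferList = AudioBufferList(
    mNumberBuffers: 1,
    mBuffers: AudioBuffer(mNumberChannels: 0, mDataByteSize: 0, mData: nil))
var blockBuffer: CMBlockBuffer?

let status = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
    sampleBuffer,
    bufferListSizeNeededOut: nil,
    bufferListOut: &audioBufferList,
    bufferListSize: MemoryLayout<AudioBufferList>.size,
    blockBufferAllocator: nil,
    blockBufferMemoryAllocator: nil,
    flags: UInt32(kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment),
    blockBufferOut: &blockBuffer)

// status is an OSStatus; noErr (0) means audioBufferList is now valid.
// With CMBlockBuffer? the retained block buffer is memory-managed for you,
// so no manual takeRetainedValue() is needed in modern Swift.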

It works for me, try it:
let musicUrl: NSURL = mediaItemCollection.items[0].valueForProperty(MPMediaItemPropertyAssetURL) as! NSURL
let asset: AVURLAsset = AVURLAsset(URL: musicUrl, options: nil)
let assetOutput = AVAssetReaderTrackOutput(track: asset.tracks[0] as! AVAssetTrack, outputSettings: nil)
var error : NSError?
let assetReader: AVAssetReader = AVAssetReader(asset: asset, error: &error)
if error != nil {
    print("Error asset Reader: \(error?.localizedDescription)")
}
assetReader.addOutput(assetOutput)
assetReader.startReading()
let sampleBuffer: CMSampleBufferRef = assetOutput.copyNextSampleBuffer()
var audioBufferList = AudioBufferList(mNumberBuffers: 1, mBuffers: AudioBuffer(mNumberChannels: 0, mDataByteSize: 0, mData: nil))
var blockBuffer: Unmanaged<CMBlockBuffer>? = nil
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
    sampleBuffer,
    nil,
    &audioBufferList,
    sizeof(audioBufferList.dynamicType), // instead of UInt(sizeof(audioBufferList.dynamicType))
    nil,
    nil,
    UInt32(kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment),
    &blockBuffer
)
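Once that call succeeds you still have to walk the filled-in AudioBufferList to reach the sample bytes. A rough modern-Swift sketch (illustrative only, not a drop-in for the Swift 1.x snippet above):

// Hedged sketch: collect the raw PCM bytes from every AudioBuffer in the list.
var data = Data()
for buffer in UnsafeMutableAudioBufferListPointer(&audioBufferList) {
    if let bytes = buffer.mData?.assumingMemoryBound(to: UInt8.self) {
        data.append(bytes, count: Int(buffer.mDataByteSize))
    }
}
// data now holds this sample buffer's interleaved samples; keep the block buffer
// alive until you are done reading.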

Related

Decoding frames with VTDecompressionSessionDecodeFrame fails with 12909 error

I'm trying to decode CMSampleBuffers so I can analyze their pixel data.
I keep getting error 12909 when I call VTDecompressionSessionDecodeFrame. This is all very new to me - any ideas where the problem might be?
Here's my code:
func decode(sampleBuffer: CMSampleBuffer) {
    let imageBufferAttributes: CFDictionary? = [kCVPixelBufferOpenGLESCompatibilityKey: true,
                                                kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA] as CFDictionary
    let formatDes = sampleBuffer.formatDescription!
    VTDecompressionSessionCreate(allocator: kCFAllocatorDefault,
                                 formatDescription: formatDes,
                                 decoderSpecification: nil,
                                 imageBufferAttributes: imageBufferAttributes,
                                 outputCallback: nil,
                                 decompressionSessionOut: &session)
    let flags: VTDecodeFrameFlags = []
    var flagOut: VTDecodeInfoFlags = []
    let canAccept = VTDecompressionSessionCanAcceptFormatDescription(session!,
                                                                     formatDescription: formatDes)
    print("Can accept: \(canAccept)") // true
    VTDecompressionSessionDecodeFrame(session!,
                                      sampleBuffer: sampleBuffer,
                                      flags: flags,
                                      infoFlagsOut: &flagOut) { status, infoFlags, imageBuffer, _, _ in
        guard let imageBuffer = imageBuffer else {
            print("Error decoding. No image buffer. \(status)") // 12909
            return
        }
    }
}
In my application I created the CMSampleBuffer from a CMBlockBuffer.
The CMBlockBuffer was created from a byte array extracted from an H.264 stream.
When creating the CMBlockBuffer I had written the wrong length into the NALU header.
That was causing the -12909 (bad data) error in my case.
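For anyone hitting the same thing, here is a rough sketch of writing that length correctly when converting an Annex B NAL unit (4-byte start code 0x00000001) to the AVCC length-prefixed form that CMBlockBuffer/CMSampleBuffer expect. The function name and the fixed 4-byte start code are assumptions about your stream, not code from the answer above:

import Foundation

// Hedged sketch: replace a 4-byte Annex B start code with the big-endian
// NAL unit length that the AVCC/CMBlockBuffer path expects.
func avccNALUnit(fromAnnexB naluData: Data) -> Data {
    var avccData = naluData
    let payloadLength = UInt32(naluData.count - 4)           // length without the start code
    var bigEndianLength = CFSwapInt32HostToBig(payloadLength)
    withUnsafeBytes(of: &bigEndianLength) { lengthBytes in
        avccData.replaceSubrange(0..<4, with: lengthBytes)
    }
    return avccData
}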

Playing audio from microphone

Goal: To stream audio/video from one device to another.
Problem: I managed to get both audio and video but the audio won't play on the other side.
Details:
I have created an app that will transmit A/V data from one device to another over the network. To not go into too much detail I will show you where I am stuck. I managed to listen to the output delegate, where I extract the audio information, convert it into Data and pass it on to a delegate that I've created.
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    // VIDEO | code excluded for simplicity of this question as this part works
    // AUDIO | only deliver the frames if you are allowed to
    if self.produceAudioFrames == true {
        // process the audio buffer
        let _audioFrame = self.audioFromSampleBuffer(sampleBuffer)
        // process in async
        DispatchQueue.main.async {
            // pass the audio frame to the delegate
            self.delegate?.audioFrame(data: _audioFrame)
        }
    }
}
The helper func that converts the sample buffer (not my code; I can't find the source, but I know I found it here on SO):
func audioFromSampleBuffer(_ sampleBuffer: CMSampleBuffer) -> Data {
    var audioBufferList = AudioBufferList()
    var data = Data()
    var blockBuffer: CMBlockBuffer?
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer,
                                                            nil,
                                                            &audioBufferList,
                                                            MemoryLayout<AudioBufferList>.size,
                                                            nil,
                                                            nil,
                                                            0,
                                                            &blockBuffer)
    let buffers = UnsafeBufferPointer<AudioBuffer>(start: &audioBufferList.mBuffers,
                                                   count: Int(audioBufferList.mNumberBuffers))
    for audioBuffer in buffers {
        let frame = audioBuffer.mData?.assumingMemoryBound(to: UInt8.self)
        data.append(frame!, count: Int(audioBuffer.mDataByteSize))
    }
    // dev
    //print("audio buffer count: \(buffers.count)") | this returns 2048
    // give the raw data back to the caller
    return data
}
Note: Before sending over the network, I convert the data returned from the helper func like so: let payload = Array(data)
That is the host's side.
On the client side I am receiving the payload as [UInt8], and this is where I am stuck. I tried multiple things but none worked.
func processIncomingAudioPayloadFromFrame(_ ID: String, _ _Data: [UInt8]) {
    let readableData = Data(bytes: _Data) // back from array to the data before we sent it over the network
    print(readableData.count) // still 2048 even after receiving from the network, so I am guessing the data is still intact
    let x = self.bytesToAudioBuffer(_Data) // option two: convert into an AVAudioPCMBuffer
    print(x) // prints | <AVAudioPCMBuffer#0x600000201e80: 2048/2048 bytes> | I am guessing it works
    // option one | play using AVAudioPlayer
    do {
        let player = try AVAudioPlayer(data: readableData)
        try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback)
        try AVAudioSession.sharedInstance().setActive(true)
        player.prepareToPlay()
        player.play()
        print(player.volume) // doing this to see if this line is reached
    } catch {
        print(error) // gets error | Error Domain=NSOSStatusErrorDomain Code=1954115647 "(null)"
    }
}
Here is the helper func that converts [UInt8] into AVAudioPCMBuffer:
func bytesToAudioBuffer(_ buf: [UInt8]) -> AVAudioPCMBuffer {
    // format assumption! make this part of your protocol?
    let fmt = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100,
                            channels: 1, interleaved: true)
    let frameLength = UInt32(buf.count) / fmt.streamDescription.pointee.mBytesPerFrame
    let audioBuffer = AVAudioPCMBuffer(pcmFormat: fmt, frameCapacity: frameLength)
    audioBuffer.frameLength = frameLength
    let dstLeft = audioBuffer.floatChannelData![0]
    // for stereo
    // let dstRight = audioBuffer.floatChannelData![1]
    buf.withUnsafeBufferPointer {
        let src = UnsafeRawPointer($0.baseAddress!)
            .bindMemory(to: Float.self, capacity: Int(frameLength))
        dstLeft.initialize(from: src, count: Int(frameLength))
    }
    return audioBuffer
}
Questions:
Is it possible to even play directly from [UInt8]?
How can I play the AVAudioPCMBuffer payload using the AudioEngine?
Is it possible?
How can I play the audio on the client side?
Footnote: The comments in the code should give you some hints about the output, I hope. Also, I don't want to save to a file or do anything file-related; I just want to amplify the mic for real-time listening, so I have no interest in saving the data.
I have used the same code for playing an audio file during a carrier call.
Please try it and let me know the results.
Objective-C code:
NSString *soundFilePath = [[NSBundle mainBundle] pathForResource:self.bgAudioFileName ofType:@"mp3"];
NSURL *fileURL = [[NSURL alloc] initFileURLWithPath:soundFilePath];
myAudioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:fileURL error:nil];
myAudioPlayer.numberOfLoops = -1;
NSError *sessionError = nil;
// Change the default output audio route
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
// get your audio session somehow
[audioSession setCategory:AVAudioSessionCategoryMultiRoute error:&sessionError];
BOOL success = [audioSession overrideOutputAudioPort:AVAudioSessionPortOverrideNone error:&sessionError];
[audioSession setActive:YES error:&sessionError];
if (!success) {
    NSLog(@"error doing outputaudioportoverride - %@", [sessionError localizedDescription]);
}
[myAudioPlayer setVolume:1.0f];
[myAudioPlayer play];
Swift version:
var soundFilePath: String? = Bundle.main.path(forResource: bgAudioFileName, ofType: "mp3")
var fileURL = URL(fileURLWithPath: soundFilePath ?? "")
myAudioPlayer = try? AVAudioPlayer(contentsOf: fileURL)
myAudioPlayer.numberOfLoops = -1
var sessionError: Error? = nil
// Change the default output audio route
let audioSession = AVAudioSession.sharedInstance()
// get your audio session somehow
try? audioSession.setCategory(AVAudioSessionCategoryMultiRoute)
var success = true
do {
    try audioSession.overrideOutputAudioPort(.none)
} catch {
    sessionError = error
    success = false
}
try? audioSession.setActive(true)
if !success {
    print("error doing outputaudioportoverride - \(sessionError?.localizedDescription ?? "unknown")")
}
myAudioPlayer.volume = 1.0
myAudioPlayer.play()
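On the second question (playing the reconstructed AVAudioPCMBuffer through the audio engine), a minimal sketch looks like this; pcmBuffer is assumed to be the buffer returned by bytesToAudioBuffer, and the names are placeholders rather than a drop-in implementation:

import AVFoundation

// Hedged sketch: play reconstructed AVAudioPCMBuffers through AVAudioEngine.
// Keep engine and playerNode alive (e.g. as properties) for as long as audio should play.
let engine = AVAudioEngine()
let playerNode = AVAudioPlayerNode()

func play(_ pcmBuffer: AVAudioPCMBuffer) {
    if !engine.isRunning {
        engine.attach(playerNode)
        // Connect with the buffer's own format so no conversion is needed.
        engine.connect(playerNode, to: engine.mainMixerNode, format: pcmBuffer.format)
        try! engine.start()
        playerNode.play()
    }
    // Buffers scheduled back to back play in order, which suits streaming from the network.
    playerNode.scheduleBuffer(pcmBuffer, completionHandler: nil)
}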

Can't play audio recorded from voice using AVCaptureAudioDataOutputSampleDelegate

I have been googling and researching for days but I can't seem to get this to work and I can't find any solution to it on the internet.
I am trying to capture my voice using the microphone and then playing it through the speakers.
Here is my code:
class ViewController: UIViewController, AVAudioRecorderDelegate, AVCaptureAudioDataOutputSampleBufferDelegate {
    var recordingSession: AVAudioSession!
    var audioRecorder: AVAudioRecorder!
    var captureSession: AVCaptureSession!
    var microphone: AVCaptureDevice!
    var inputDevice: AVCaptureDeviceInput!
    var outputDevice: AVCaptureAudioDataOutput!

    override func viewDidLoad() {
        super.viewDidLoad()
        recordingSession = AVAudioSession.sharedInstance()
        do {
            try recordingSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
            try recordingSession.setMode(AVAudioSessionModeVoiceChat)
            try recordingSession.setPreferredSampleRate(44000.00)
            try recordingSession.setPreferredIOBufferDuration(0.2)
            try recordingSession.setActive(true)
            recordingSession.requestRecordPermission() { [unowned self] (allowed: Bool) -> Void in
                DispatchQueue.main.async {
                    if allowed {
                        do {
                            self.microphone = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeAudio)
                            try self.inputDevice = AVCaptureDeviceInput.init(device: self.microphone)
                            self.outputDevice = AVCaptureAudioDataOutput()
                            self.outputDevice.setSampleBufferDelegate(self, queue: DispatchQueue.main)
                            self.captureSession = AVCaptureSession()
                            self.captureSession.addInput(self.inputDevice)
                            self.captureSession.addOutput(self.outputDevice)
                            self.captureSession.startRunning()
                        } catch let error {
                            print(error.localizedDescription)
                        }
                    }
                }
            }
        } catch let error {
            print(error.localizedDescription)
        }
    }
And the callback function:
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    var audioBufferList = AudioBufferList(
        mNumberBuffers: 1,
        mBuffers: AudioBuffer(mNumberChannels: 0,
                              mDataByteSize: 0,
                              mData: nil)
    )
    var blockBuffer: CMBlockBuffer?
    var osStatus = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
        sampleBuffer,
        nil,
        &audioBufferList,
        MemoryLayout<AudioBufferList>.size,
        nil,
        nil,
        UInt32(kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment),
        &blockBuffer
    )
    do {
        var data: NSMutableData = NSMutableData.init()
        for i in 0..<audioBufferList.mNumberBuffers {
            var audioBuffer = AudioBuffer(
                mNumberChannels: audioBufferList.mBuffers.mNumberChannels,
                mDataByteSize: audioBufferList.mBuffers.mDataByteSize,
                mData: audioBufferList.mBuffers.mData
            )
            let frame = audioBuffer.mData?.load(as: Float32.self)
            data.append(audioBuffer.mData!, length: Int(audioBuffer.mDataByteSize))
        }
        var dataFromNsData = Data.init(referencing: data)
        var avAudioPlayer: AVAudioPlayer = try AVAudioPlayer.init(data: dataFromNsData)
        avAudioPlayer.prepareToPlay()
        avAudioPlayer.play()
    } catch let error {
        print(error.localizedDescription)
        // prints out: The operation couldn't be completed. (OSStatus error 1954115647.)
    }
}
Any help with this would be amazing, and it would probably help a lot of other people as well, since lots of incomplete Swift versions of this are out there.
Thank you.
You were very close! You were capturing audio in the didOutputSampleBuffer callback, but that's a high-frequency callback, so you were creating a lot of AVAudioPlayers and passing them raw LPCM data; AVAudioPlayer only knows how to parse Core Audio file types, and the players were going out of scope anyway.
You can very easily play the buffers you're capturing with AVCaptureSession using AVAudioEngine's AVAudioPlayerNode, but at that point you may as well use AVAudioEngine to record from the microphone too:
import UIKit
import AVFoundation

class ViewController: UIViewController {
    var engine = AVAudioEngine()

    override func viewDidLoad() {
        super.viewDidLoad()

        let input = engine.inputNode!
        let player = AVAudioPlayerNode()
        engine.attach(player)

        let bus = 0
        let inputFormat = input.inputFormat(forBus: bus)
        engine.connect(player, to: engine.mainMixerNode, format: inputFormat)

        input.installTap(onBus: bus, bufferSize: 512, format: inputFormat) { (buffer, time) -> Void in
            player.scheduleBuffer(buffer)
        }

        try! engine.start()
        player.play()
    }
}

Capturing volume levels with AVCaptureAudioDataOutputSampleBufferDelegate in swift

I'm trying to get live volume levels using AVCaptureDevice, etc. It compiles and runs, but the values just seem to be random and I keep getting overflow errors as well.
EDIT:
also, is it normal for the RMS range to be 0 to about 20000?
if let audioCaptureDevice : AVCaptureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeAudio){
    try audioCaptureDevice.lockForConfiguration()
    let audioInput = try AVCaptureDeviceInput(device: audioCaptureDevice)
    audioCaptureDevice.unlockForConfiguration()

    if(captureSession.canAddInput(audioInput)){
        captureSession.addInput(audioInput)
        print("added input")
    }

    let audioOutput = AVCaptureAudioDataOutput()
    audioOutput.setSampleBufferDelegate(self, queue: GlobalUserInitiatedQueue)
    if(captureSession.canAddOutput(audioOutput)){
        captureSession.addOutput(audioOutput)
        print("added output")
    }

    //supposed to start session not on UI queue coz it takes a while
    dispatch_async(GlobalUserInitiatedQueue) {
        print("starting captureSession")
        self.captureSession.startRunning()
    }
}
...
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    // Needs to be initialized somehow, even if we take only the address
    var audioBufferList = AudioBufferList(mNumberBuffers: 1,
        mBuffers: AudioBuffer(mNumberChannels: 1, mDataByteSize: 0, mData: nil))
    //this needs to be in method otherwise only runs 125 times?
    var blockBuffer: CMBlockBuffer?

    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
        sampleBuffer,
        nil,
        &audioBufferList,
        sizeof(audioBufferList.dynamicType),
        nil,
        nil,
        UInt32(kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment),
        &blockBuffer
    )

    let abl = UnsafeMutableAudioBufferListPointer(&audioBufferList)
    for buffer in abl {
        let samples = UnsafeMutableBufferPointer<Int16>(start: UnsafeMutablePointer(buffer.mData),
            count: Int(buffer.mDataByteSize)/sizeof(Int16))
        var sum: Int = 0
        for sample in samples {
            sum = sum + Int(sample*sample)
        }
        let rms = sqrt(Double(sum)/Double(samples.count))
    }
}
Use AVCaptureAudioDataOutputSampleBufferDelegate's method
captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!)
to get the AVCaptureConnection from the last parameter.
Then get an AVCaptureAudioChannel from connection.audioChannels.
Then you can get volume levels from it:
audioChannel.averagePowerLevel
audioChannel.peakHoldLevel
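A minimal sketch of that approach, using the modern delegate signature (standard AVFoundation API; the print is just illustrative):

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    // Each AVCaptureAudioChannel reports its levels in decibels (0 dB is full scale).
    for channel in connection.audioChannels {
        print("average: \(channel.averagePowerLevel) dB, peak hold: \(channel.peakHoldLevel) dB")
    }
}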
Hey I don't understand much of it but here is a working Swift 5 version:
func captureOutput(_ output : AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection : AVCaptureConnection) {
    var buffer: CMBlockBuffer? = nil

    // Needs to be initialized somehow, even if we take only the address
    let convenienceBuffer = AudioBuffer(mNumberChannels: 1, mDataByteSize: 0, mData: nil)
    var audioBufferList = AudioBufferList(mNumberBuffers: 1,
                                          mBuffers: convenienceBuffer)

    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
        sampleBuffer,
        bufferListSizeNeededOut: nil,
        bufferListOut: &audioBufferList,
        bufferListSize: MemoryLayout<AudioBufferList>.size(ofValue: audioBufferList),
        blockBufferAllocator: nil,
        blockBufferMemoryAllocator: nil,
        flags: UInt32(kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment),
        blockBufferOut: &buffer
    )

    let abl = UnsafeMutableAudioBufferListPointer(&audioBufferList)

    for buffer in abl {
        let originRawPtr = buffer.mData
        let ptrDataSize = Int(buffer.mDataByteSize)

        // From raw pointer to typed Int16 pointer
        let buffPtrInt16 = originRawPtr?.bindMemory(to: Int16.self, capacity: ptrDataSize)

        // Number of Int16 samples in this buffer
        // (divide the byte size by the size of Int16, not its bit width)
        let sampleCount = ptrDataSize / MemoryLayout<Int16>.size
        let samples = UnsafeMutableBufferPointer<Int16>(start: buffPtrInt16,
                                                        count: sampleCount)

        // Average of each sample squared, then square root (RMS)
        let sumOfSquaredSamples = samples.map(Float.init).reduce(0) { $0 + $1*$1 }
        let rms = sqrt(sumOfSquaredSamples / Float(samples.count))

        DispatchQueue.main.async {
            print("RMS: \(String(rms))")
        }
    }
}
It appears I have it working. I cast each sample to an Int64 before doing any manipulations.
for buffer in abl {
    let samples = UnsafeMutableBufferPointer<Int16>(start: UnsafeMutablePointer(buffer.mData),
        count: Int(buffer.mDataByteSize)/sizeof(Int16))
    var sum: Int64 = 0
    for sample in samples {
        let s = Int64(sample)
        sum += s*s
    }
    dispatch_async(dispatch_get_main_queue()) {
        self.volLevel.text = String(sqrt(Float(sum/Int64(samples.count))))
    }
}
I've played with your example. This is a full working Swift 2 code snippet:
// also define a variable in class scope, otherwise captureOutput will not be called
var session : AVCaptureSession!

func startCapture() {
    if let device : AVCaptureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeAudio){
        do {
            self.session = AVCaptureSession()
            try device.lockForConfiguration()
            let audioInput = try AVCaptureDeviceInput(device: device)
            device.unlockForConfiguration()

            if(self.session.canAddInput(audioInput)){
                self.session.addInput(audioInput)
                print("added input")
            }

            let audioOutput = AVCaptureAudioDataOutput()
            audioOutput.setSampleBufferDelegate(self, queue: dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0))
            if(self.session.canAddOutput(audioOutput)){
                self.session.addOutput(audioOutput)
                print("added output")
            }

            //supposed to start session not on UI queue coz it takes a while
            dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
                print("starting captureSession")
                self.session.startRunning()
            }
        } catch {
        }
    }
}
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    var buffer: CMBlockBuffer? = nil

    // Needs to be initialized somehow, even if we take only the address
    var audioBufferList = AudioBufferList(mNumberBuffers: 1,
        mBuffers: AudioBuffer(mNumberChannels: 1, mDataByteSize: 0, mData: nil))

    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
        sampleBuffer,
        nil,
        &audioBufferList,
        sizeof(audioBufferList.dynamicType),
        nil,
        nil,
        UInt32(kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment),
        &buffer
    )

    let abl = UnsafeMutableAudioBufferListPointer(&audioBufferList)

    for buffer in abl {
        let samples = UnsafeMutableBufferPointer<Int16>(start: UnsafeMutablePointer(buffer.mData),
            count: Int(buffer.mDataByteSize)/sizeof(Int16))
        var sum: Int64 = 0
        for sample in samples {
            let s = Int64(sample)
            sum = (sum + s*s)
        }
        dispatch_async(dispatch_get_main_queue()) {
            print(String(sqrt(Float(sum/Int64(samples.count)))))
        }
    }
}

Swift - Realtime images from cam to server

I hope someone can help me!
I am making an app that sends frames from the camera to a server, and the server does some processing. The app sends 5-8 images per second (in NSData format).
I have tried different ways to do that; the two methods work but have different problems.
I will explain those situations, and maybe someone can help me.
The first situation I tried is using AVCaptureVideoDataOutput mode.
Code below:
let captureSession = AVCaptureSession()
captureSession.sessionPreset = AVCaptureSessionPresetiFrame960x540
captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &error))

let output = AVCaptureVideoDataOutput()
output.videoSettings = [kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA]
let cameraQueue = dispatch_queue_create("cameraQueue", DISPATCH_QUEUE_SERIAL)
output.setSampleBufferDelegate(self, queue: cameraQueue)
captureSession.addOutput(output)

videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
videoPreviewLayer?.frame = view.layer.bounds
viewPreview?.layer.addSublayer(videoPreviewLayer)
captureSession.startRunning()
This view conforms to the delegate protocols:
AVCaptureMetadataOutputObjectsDelegate
AVCaptureVideoDataOutputSampleBufferDelegate
and implements the delegate method:
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBufferRef!, fromConnection connection: AVCaptureConnection!)
{
    let imagen: UIImage = imageFromSampleBuffer(sampleBuffer)
    let dataImg: NSData = UIImageJPEGRepresentation(imagen, 1.0)
    //Here I send the NSData to server correctly.
}
This method calls imageFromSampleBuffer, which converts the sample buffer to a UIImage.
func imageFromSampleBuffer(sampleBuffer: CMSampleBufferRef) -> UIImage {
    let imageBuffer: CVImageBufferRef = CMSampleBufferGetImageBuffer(sampleBuffer)
    CVPixelBufferLockBaseAddress(imageBuffer, 0)
    let baseAddress: UnsafeMutablePointer<Void> = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, Int(0))
    let bytesPerRow: Int = CVPixelBufferGetBytesPerRow(imageBuffer)
    let width: Int = CVPixelBufferGetWidth(imageBuffer)
    let height: Int = CVPixelBufferGetHeight(imageBuffer)
    let colorSpace: CGColorSpaceRef = CGColorSpaceCreateDeviceRGB()
    let bitsPerCompornent: Int = 8
    var bitmapInfo = CGBitmapInfo((CGBitmapInfo.ByteOrder32Little.rawValue | CGImageAlphaInfo.PremultipliedFirst.rawValue) as UInt32)
    let newContext: CGContextRef = CGBitmapContextCreate(baseAddress, width, height, bitsPerCompornent, bytesPerRow, colorSpace, bitmapInfo) as CGContextRef
    let imageRef: CGImageRef = CGBitmapContextCreateImage(newContext)
    let resultImage = UIImage(CGImage: imageRef, scale: 1.0, orientation: UIImageOrientation.Right)!
    return resultImage
}
Here ends the first method. The problem is unbounded memory use, and the app crashes after about 2 minutes.
I debugged it, and the problem is in the UIImageJPEGRepresentation(imagen, 1.0) call. Is there any way to release memory after using that method?
The second (and I think best) way I found to do that is using AVCaptureStillImageOutput.
Code below:
var stillImageOutput: AVCaptureStillImageOutput = AVCaptureStillImageOutput()
if session.canAddOutput(stillImageOutput) {
    stillImageOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
    session.addOutput(stillImageOutput)
    self.stillImageOutput = stillImageOutput
}

var timer = NSTimer.scheduledTimerWithTimeInterval(0.2, target: self, selector: Selector("methodToBeCalled"), userInfo: nil, repeats: true)

func methodToBeCalled(){
    dispatch_async(self.sessionQueue!, {
        // Update the orientation on the still image output video connection before capturing.
        let videoOrientation = (self.previewView.layer as! AVCaptureVideoPreviewLayer).connection.videoOrientation
        self.stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo).videoOrientation = videoOrientation

        self.stillImageOutput!.captureStillImageAsynchronouslyFromConnection(self.stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo), completionHandler: {
            (imageDataSampleBuffer: CMSampleBuffer!, error: NSError!) in
            if error == nil {
                let dataImg: NSData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer)
                //Here I send the NSData to server correctly.
            } else {
                println(error)
            }
        })
    })
}
This works perfectly and without memory leaks, but when the app captures a still image, the phone makes the typical camera-shutter sound, and I can't allow that. Is there any way to do it without making the sound?
If someone needs the code I can share the links where I found it.
Thanks a lot!
Did you ever manage to solve this problem yourself?
I stumbled upon this question because I am converting an Objective-C project with an AVCaptureSession to Swift. What my code does differently is discard late frames in the AVCaptureVideoDataOutput; perhaps this is what's causing your memory problem.
output.alwaysDiscardsLateVideoFrames = true
Insert this line right after you define the video data output and before you create the queue:
let output = AVCaptureVideoDataOutput()
output.videoSettings = [kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA]
output.alwaysDiscardsLateVideoFrames = true
let cameraQueue = dispatch_queue_create("cameraQueue", DISPATCH_QUEUE_SERIAL)
I am, of course, referring to the first of your two solutions.
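As for the memory growth around UIImageJPEGRepresentation itself, one mitigation worth trying (not mentioned in the answers here, so treat it as a hedged suggestion) is wrapping the per-frame conversion in an explicit autoreleasepool, so the temporary UIImage and NSData are drained every frame instead of piling up on the capture queue:

func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBufferRef!, fromConnection connection: AVCaptureConnection!)
{
    autoreleasepool {
        // Temporaries created here are released at the end of each frame.
        let imagen: UIImage = imageFromSampleBuffer(sampleBuffer)
        let dataImg: NSData = UIImageJPEGRepresentation(imagen, 1.0)
        //Here I send the NSData to server correctly.
    }
}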
There is no way to disallow users from taking a screenshot. Not even Snapchat can do that.
