Creating copy of CMSampleBuffer in Swift returns OSStatus -12743 (Invalid Media Format) - ios

I am attempting to perform a deep clone of a CMSampleBuffer to store the output of an AVCaptureSession. I am receiving the error kCMSampleBufferError_InvalidMediaFormat (OSStatus -12743) when I run the function CMSampleBufferCreateForImageBuffer. I don't see how I've mismatched the CVImageBuffer and the CMSampleBuffer format description. Anyone know where I've gone wrong? Here is my test code.
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    let allocator: CFAllocator = CFAllocatorGetDefault().takeRetainedValue()

    func cloneImageBuffer(imageBuffer: CVImageBuffer!) -> CVImageBuffer? {
        CVPixelBufferLockBaseAddress(imageBuffer, 0)
        let bytesPerRow: size_t = CVPixelBufferGetBytesPerRow(imageBuffer)
        let width: size_t = CVPixelBufferGetWidth(imageBuffer)
        let height: size_t = CVPixelBufferGetHeight(imageBuffer)
        let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
        let pixelFormatType = CVPixelBufferGetPixelFormatType(imageBuffer)
        let data = NSMutableData(bytes: baseAddress, length: bytesPerRow * height)
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0)

        var clonedImageBuffer: CVPixelBuffer?
        let refCon = NSMutableData()
        if CVPixelBufferCreateWithBytes(allocator, width, height, pixelFormatType, data.mutableBytes, bytesPerRow, nil, refCon.mutableBytes, nil, &clonedImageBuffer) == noErr {
            return clonedImageBuffer
        } else {
            return nil
        }
    }

    if let oldImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        if let newImageBuffer = cloneImageBuffer(oldImageBuffer) {
            if let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer) {
                let dataIsReady = CMSampleBufferDataIsReady(sampleBuffer)
                let refCon = NSMutableData()
                var timingInfo: CMSampleTimingInfo = kCMTimingInfoInvalid
                let timingInfoSuccess = CMSampleBufferGetSampleTimingInfo(sampleBuffer, 0, &timingInfo)
                if timingInfoSuccess == noErr {
                    var newSampleBuffer: CMSampleBuffer?
                    let success = CMSampleBufferCreateForImageBuffer(allocator, newImageBuffer, dataIsReady, nil, refCon.mutableBytes, formatDescription, &timingInfo, &newSampleBuffer)
                    if success == noErr {
                        bufferArray.append(newSampleBuffer!)
                    } else {
                        NSLog("Failed to create new image buffer. Error: \(success)")
                    }
                } else {
                    NSLog("Failed to get timing info. Error: \(timingInfoSuccess)")
                }
            }
        }
    }
}

I was able to fix the problem by creating a format description off the newly created image buffer and using it instead of the format description off the original sample buffer. Unfortunately, while that fixes the problem here, the two format descriptions no longer match, and that causes problems further down.
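For reference, a minimal sketch of that workaround, reusing the allocator and newImageBuffer names from the code above (this is the Swift 3-era unlabeled call; newer SDKs spell it CMVideoFormatDescriptionCreateForImageBuffer(allocator:imageBuffer:formatDescriptionOut:)):

var newFormatDescription: CMVideoFormatDescription?
let descStatus = CMVideoFormatDescriptionCreateForImageBuffer(allocator, newImageBuffer, &newFormatDescription)
if descStatus == noErr, let clonedDescription = newFormatDescription {
    // Pass clonedDescription to CMSampleBufferCreateForImageBuffer
    // instead of the description pulled off the original sample buffer.
}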

I recently came across the same issue. After a bit of investigation, the CMVideoFormatDescriptionMatchesImageBuffer() function documentation gave a bit of insight.
This function uses the keys returned by CMVideoFormatDescriptionGetExtensionKeysCommonWithImageBuffers to compare the extensions of the given format description to the attachments of the given image buffer (if an attachment is absent in either it must be absent in both). It also checks kCMFormatDescriptionExtension_BytesPerRow against CVPixelBufferGetBytesPerRow, if applicable.
In my case, I didn't copy over some of the format description extensions as CVBuffer attachments of the copied pixel buffer. Running this bit of code after creating the new CVPixelBufferRef resolved the issue for me (Objective-C, but shouldn't be hard to convert to Swift)
NSSet *commonKeys = [NSSet setWithArray:(NSArray *)CMVideoFormatDescriptionGetExtensionKeysCommonWithImageBuffers()];

NSDictionary *attachments = (NSDictionary *)CVBufferGetAttachments(originalPixelBuffer, kCVAttachmentMode_ShouldPropagate);
[attachments enumerateKeysAndObjectsUsingBlock:^(id key, id obj, BOOL *stop)
{
    if ([commonKeys containsObject:key])
    {
        CVBufferSetAttachment(pixelBufferCopy, (__bridge CFStringRef)(key), (__bridge CFTypeRef)(obj), kCVAttachmentMode_ShouldPropagate);
    }
}];

attachments = (NSDictionary *)CVBufferGetAttachments(originalPixelBuffer, kCVAttachmentMode_ShouldNotPropagate);
[attachments enumerateKeysAndObjectsUsingBlock:^(id key, id obj, BOOL *stop)
{
    if ([commonKeys containsObject:key])
    {
        CVBufferSetAttachment(pixelBufferCopy, (__bridge CFStringRef)(key), (__bridge CFTypeRef)(obj), kCVAttachmentMode_ShouldNotPropagate);
    }
}];

Here is the Swift version of Raymanman's answer:
let commonKeys = NSSet(array: CMVideoFormatDescriptionGetExtensionKeysCommonWithImageBuffers() as! [Any])

let propagatedAttachments = NSDictionary(dictionary: CVBufferGetAttachments(pixelBuffer, .shouldPropagate)!)
propagatedAttachments.enumerateKeysAndObjects { key, obj, stop in
    if commonKeys.contains(key) {
        CVBufferSetAttachment(outputPixelBuffer, key as! CFString, obj as AnyObject, .shouldPropagate)
    }
}

let nonPropagatedAttachments = NSDictionary(dictionary: CVBufferGetAttachments(pixelBuffer, .shouldNotPropagate)!)
nonPropagatedAttachments.enumerateKeysAndObjects { key, obj, stop in
    if commonKeys.contains(key) {
        CVBufferSetAttachment(outputPixelBuffer, key as! CFString, obj as AnyObject, .shouldNotPropagate)
    }
}
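After copying those attachments, a quick sanity check (assuming the same sampleBuffer and outputPixelBuffer names as above; argument labels vary slightly between SDK versions) is to ask CoreMedia directly whether the description now matches the buffer:

if let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer) {
    // Should be true once the common extension keys have been copied over.
    let matches = CMVideoFormatDescriptionMatchesImageBuffer(formatDescription, imageBuffer: outputPixelBuffer)
    print("format description matches buffer: \(matches)")
}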

Related

Audio CMSampleBuffer volume change in Swift

I am trying to record a video with AVAssetWriter and now want to control the volume of my final output video file. Any help?
Solutions I have tried:
1. Setting preferredVolume on the audio writer input:
self.avAssetInputAudio?.preferredVolume = 0.2 // The value for this property should typically be in the range of 0.0 to 1.0 (which is equivalent to a “normal” volume level)
https://developer.apple.com/documentation/avfoundation/avassetwriterinput/1389949-preferredvolume
Output: No change in the volume level of the output file.
2. Processing audio with CMSampleBuffer:
func processSampleBuffer(scale: Float, sampleBuffer: CMSampleBuffer, writerInput: AVAssetWriterInput) -> Bool {
    guard let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else {
        return false
    }

    let length = CMBlockBufferGetDataLength(blockBuffer)
    var sampleBytes = UnsafeMutablePointer<Int16>.allocate(capacity: length)
    defer { sampleBytes.deallocate(capacity: length) }

    guard checkStatus(CMBlockBufferCopyDataBytes(blockBuffer, 0, length, sampleBytes), message: "Copying block buffer") else {
        return false
    }

    (0..<length).forEach { index in
        let ptr = sampleBytes + index
        let scaledValue = Float(ptr.pointee) * scale
        let processedValue = Int16(max(min(scaledValue, Float(Int16.max)), Float(Int16.min)))
        ptr.pointee = processedValue
    }

    guard checkStatus(CMBlockBufferReplaceDataBytes(sampleBytes, blockBuffer, 0, length), message: "Replacing data bytes in block buffer") else { return false }

    assert(CMSampleBufferIsValid(sampleBuffer))
    return writerInput.append(sampleBuffer)
}

func checkStatus(_ status: OSStatus, message: String) -> Bool {
    assert(kCMBlockBufferNoErr == noErr)
    if status != noErr {
        debugPrint("Error: \(message) [\(status)]")
    }
    return status == noErr
}
Output: The final audio is choppy and noisy.
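One detail worth checking in the loop above: CMBlockBufferGetDataLength returns a byte count, while the pointer arithmetic steps in Int16 samples, so iterating 0..<length walks past the audio that was actually copied. A hedged sketch of the same loop bounded by the sample count instead, assuming 16-bit interleaved PCM:

let sampleCount = length / MemoryLayout<Int16>.size
(0..<sampleCount).forEach { index in
    let ptr = sampleBytes + index
    let scaledValue = Float(ptr.pointee) * scale
    ptr.pointee = Int16(max(min(scaledValue, Float(Int16.max)), Float(Int16.min)))
}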

AVCaptureVideoDataOutputSampleBufferDelegate drop frames using CIFilters for video filtering

I have a very strange case where AVCaptureVideoDataOutputSampleBufferDelegate drops frames if I use 13 different filter chains. Let me explain:
I have a CameraController set up, nothing special. Here is my delegate method:
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if !paused {
        if connection.output?.connection(with: .audio) == nil {
            //capture video
            // my try to avoid "Out of buffers error", no luck ;(
            lastCapturedBuffer = nil
            let err = CMSampleBufferCreateCopy(allocator: kCFAllocatorDefault, sampleBuffer: sampleBuffer, sampleBufferOut: &lastCapturedBuffer)
            if err == noErr {
            }
            connection.videoOrientation = .portrait

            // getting image
            let pixelBuffer = CMSampleBufferGetImageBuffer(lastCapturedBuffer!)
            // remove if any
            CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
            // captured - is just ciimage property
            captured = CIImage(cvPixelBuffer: pixelBuffer!)
            //remove if any
            CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
            //CVPixelBufferUnlockBaseAddress(pixelBuffer!, .readOnly)

            // transform image to targer resolution
            let srcWidth = CGFloat(captured.extent.width)
            let srcHeight = CGFloat(captured.extent.height)
            let dstWidth: CGFloat = ConstantsManager.shared.k_video_width
            let dstHeight: CGFloat = ConstantsManager.shared.k_video_height
            let scaleX = dstWidth / srcWidth
            let scaleY = dstHeight / srcHeight
            var transform = CGAffineTransform.init(scaleX: scaleX, y: scaleY)
            captured = captured.transformed(by: transform).cropped(to: CGRect(x: 0, y: 0, width: dstWidth, height: dstHeight))

            // mirror for front camera
            if front {
                var t = CGAffineTransform.init(scaleX: -1, y: 1)
                t = t.translatedBy(x: -ConstantsManager.shared.k_video_width, y: 0)
                captured = captured.transformed(by: t)
            }

            // video capture logic
            let writable = canWrite()
            if writable,
                sessionAtSourceTime == nil {
                sessionAtSourceTime = CMSampleBufferGetPresentationTimeStamp(lastCapturedBuffer!)
                videoWriter.startSession(atSourceTime: sessionAtSourceTime!)
            }
            if writable, (videoWriterInput.isReadyForMoreMediaData) {
                videoWriterInput.append(lastCapturedBuffer!)
            }

            // apply effect in realtime <- here is problem. If I comment next line, it will be fixed but effect will n't be applied
            captured = FilterManager.shared.applyFilterForCamera(inputImage: captured)

            // current frame in case user wants to save image as photo
            self.capturedPhoto = captured
            // sent frame to Camcoder view controller
            self.delegate?.didCapturedFrame(frame: captured)
        } else {
            // capture sound
            let writable = canWrite()
            if writable, (audioWriterInput.isReadyForMoreMediaData) {
                //print("write audio buffer")
                audioWriterInput?.append(lastCapturedBuffer!)
            }
        }
    } else {
        // paused
    }
}
I also implemented the didDrop delegate method; here is how I figured out why it drops frames:
func captureOutput(_ output: AVCaptureOutput, didDrop sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    print("did drop")
    var mode: CMAttachmentMode = 0
    let reason = CMGetAttachment(sampleBuffer, key: kCMSampleBufferAttachmentKey_DroppedFrameReason, attachmentModeOut: &mode)
    print("reason \(String(describing: reason))") // Optional(OutOfBuffers)
}
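For what it's worth, the attachment comes back as a CFString, so it can also be compared against the specific reason constants, e.g.:

if let reason = reason, CFEqual(reason, kCMSampleBufferDroppedFrameReason_OutOfBuffers) {
    // the delegate (or something downstream) is holding on to too many buffers
}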
So I did it like a pro and just commented out parts of the code to find where the problem is. It's here:
captured = FilterManager.shared.applyFilterForCamera(inputImage: captured)
FilterManager is a singleton; here is the func being called:
func applyFilterForCamera(inputImage: CIImage) -> CIImage {
    return currentVsFilter!.apply(sourceImage: inputImage)
}
currentVsFilter is an object of VSFilter type; here is an example of one:
import Foundation
import AVKit

class TestFilter: CustomFilter {

    let _name = "Тестовый Фильтр"
    let _displayName = "Test Filter"

    var tempImage: CIImage?
    var final: CGImage?

    override func name() -> String {
        return _name
    }

    override func displayName() -> String {
        return _displayName
    }

    override init() {
        super.init()
        print("Test Filter init")
        // setup my custom kernel filter
        self.noise.type = GlitchFilter.GlitchType.allCases[2]
    }

    // this returns composition for playback using AVPlayer
    override func composition(asset: AVAsset) -> AVMutableVideoComposition {
        let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
            let inputImage = request.sourceImage.cropped(to: request.sourceImage.extent)
            DispatchQueue.global(qos: .userInitiated).async {
                let output = self.apply(sourceImage: inputImage, forComposition: true)
                request.finish(with: output, context: nil)
            }
        })
        let size = FilterManager.shared.cropRectForOrientation().size
        composition.renderSize = size
        return composition
    }

    // this returns actual filtered CIImage, used for both AVPlayer composition and realtime camera
    override func apply(sourceImage: CIImage, forComposition: Bool = false) -> CIImage {
        // rendered text
        tempImage = FilterManager.shared.textRenderedImage()
        // some filters chained one by one
        self.screenBlend?.setValue(tempImage, forKey: kCIInputImageKey)
        self.screenBlend?.setValue(sourceImage, forKey: kCIInputBackgroundImageKey)
        self.noise.inputImage = self.screenBlend?.outputImage
        self.noise.inputAmount = CGFloat.random(in: 1.0...3.0)
        // result
        tempImage = self.noise.outputImage
        // correct crop
        let rect = forComposition ? FilterManager.shared.cropRectForOrientation() : FilterManager.shared.cropRect
        final = self.context.createCGImage(tempImage!, from: rect!)
        return CIImage(cgImage: final!)
    }
}
And now the strangest thing: I have 30 VSFilters, and when I got to the 13th (switching one by one via a UIButton) I got an "Out of Buffers" error, this one:
kCMSampleBufferDroppedFrameReason_OutOfBuffers
What I tested:
I changed the vsFilters order in the filters array inside the FilterManager singleton: same result.
I tried switching from the first filter to the 12th one by one, then going back: it works, but after I switched to the 13th (of 30) the bug appeared.
It looks like it can handle only 12 VSFilter objects, as if it retains them somehow, or maybe it's related to threading; I don't know.
This app is made for iOS devices, tested on an iPhone X running iOS 13.3.1.
This is a video editor app that applies different effects to both the live stream from the camera and video files from the camera roll.
Maybe someone has experience with this?
Have a great day
Best, Victor
Edit 1: If I re-init the cameraController (AVCaptureSession, input/output devices) it works, but this is an ugly option and it adds lag when switching filters.
OK, so I finally won this battle. In case someone else gets this "OutOfBuffers" problem, here is my solution.
As I figured out, CIFilter grabs the CVPixelBuffer and doesn't release it while filtering images. It kind of creates one huge buffer, I guess. The strange thing: it doesn't create a memory leak, so I guess it doesn't grab a particular buffer but creates a strong reference to it. As rumors (me) say, it can handle only 12 such references.
So my approach was to copy the CVPixelBuffer and then work with it instead of the buffer I got from the AVCaptureVideoDataOutputSampleBufferDelegate didOutput func.
Here is my new code:
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if !paused {
        //print("camera controller \(id) got frame")
        if connection.output?.connection(with: .audio) == nil {
            //capture video
            connection.videoOrientation = .portrait

            // getting image
            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
            // this works!
            let copyBuffer = pixelBuffer.copy()
            // captured - is just ciimage property
            captured = CIImage(cvPixelBuffer: copyBuffer)
            //remove if any

            // transform image to targer resolution
            let srcWidth = CGFloat(captured.extent.width)
            let srcHeight = CGFloat(captured.extent.height)
            let dstWidth: CGFloat = ConstantsManager.shared.k_video_width
            let dstHeight: CGFloat = ConstantsManager.shared.k_video_height
            let scaleX = dstWidth / srcWidth
            let scaleY = dstHeight / srcHeight
            var transform = CGAffineTransform.init(scaleX: scaleX, y: scaleY)
            captured = captured.transformed(by: transform).cropped(to: CGRect(x: 0, y: 0, width: dstWidth, height: dstHeight))

            // mirror for front camera
            if front {
                var t = CGAffineTransform.init(scaleX: -1, y: 1)
                t = t.translatedBy(x: -ConstantsManager.shared.k_video_width, y: 0)
                captured = captured.transformed(by: t)
            }

            // video capture logic
            let writable = canWrite()
            if writable,
                sessionAtSourceTime == nil {
                sessionAtSourceTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
                videoWriter.startSession(atSourceTime: sessionAtSourceTime!)
            }
            if writable, (videoWriterInput.isReadyForMoreMediaData) {
                videoWriterInput.append(sampleBuffer)
            }

            self.captured = FilterManager.shared.applyFilterForCamera(inputImage: self.captured)
            // current frame in case user wants to save image as photo
            self.capturedPhoto = captured
            // sent frame to Camcoder view controller
            self.delegate?.didCapturedFrame(frame: captured)
        } else {
            // capture sound
            let writable = canWrite()
            if writable, (audioWriterInput.isReadyForMoreMediaData) {
                //print("write audio buffer")
                audioWriterInput?.append(sampleBuffer)
            }
        }
    } else {
        // paused
        //print("paused camera controller \(id)")
    }
}
And here is the func to copy the buffer:
extension CVPixelBuffer {
    func copy() -> CVPixelBuffer {
        precondition(CFGetTypeID(self) == CVPixelBufferGetTypeID(), "copy() cannot be called on a non-CVPixelBuffer")

        var _copy: CVPixelBuffer?
        CVPixelBufferCreate(
            kCFAllocatorDefault,
            CVPixelBufferGetWidth(self),
            CVPixelBufferGetHeight(self),
            CVPixelBufferGetPixelFormatType(self),
            nil,
            &_copy)
        guard let copy = _copy else { fatalError() }

        CVPixelBufferLockBaseAddress(self, CVPixelBufferLockFlags.readOnly)
        CVPixelBufferLockBaseAddress(copy, CVPixelBufferLockFlags(rawValue: 0))

        let copyBaseAddress = CVPixelBufferGetBaseAddress(copy)
        let currBaseAddress = CVPixelBufferGetBaseAddress(self)

        print("copy data size: \(CVPixelBufferGetDataSize(copy))")
        print("self data size: \(CVPixelBufferGetDataSize(self))")

        memcpy(copyBaseAddress, currBaseAddress, CVPixelBufferGetDataSize(copy))
        //memcpy(copyBaseAddress, currBaseAddress, CVPixelBufferGetDataSize(self) * 2)

        CVPixelBufferUnlockBaseAddress(copy, CVPixelBufferLockFlags(rawValue: 0))
        CVPixelBufferUnlockBaseAddress(self, CVPixelBufferLockFlags.readOnly)
        return copy
    }
}
I used it as an extension on CVPixelBuffer.
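If whatever consumes the copy also cares about the buffer's attachments (see the format-description answer earlier on this page), they can be carried over inside copy() before returning; a small sketch, assuming CVBufferSetAttachments is available on your deployment target:

// Inside copy(), after the memcpy: carry over propagatable attachments.
if let attachments = CVBufferGetAttachments(self, .shouldPropagate) {
    CVBufferSetAttachments(copy, attachments, .shouldPropagate)
}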
I hope this helps anyone with a similar problem.
Best, Victor

Obtain an AudioTimeStamp for an Audio Queue Buffer

I'm attempting to create a continuous FIFO audio recorder in Swift. I'm running into an issue while trying to create the audioQueueCallback.
From the docs AudioTimeStamp has this init method:
AudioTimeStamp(mSampleTime: Float64, mHostTime: UInt64, mRateScalar: Float64, mWordClockTime: UInt64, mSMPTETime: SMPTETime, mFlags: AudioTimeStampFlags, mReserved: UInt32)
And I have no idea how to use it.
It seems to me like the device should have a reliable internal clock to manage audio queues off of, but I haven't been able to find any documentation for it.
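For what it's worth, the mHostTime field of AudioTimeStamp is expressed in mach absolute time, so a "now" timestamp can be built from mach_absolute_time(); a minimal sketch:

import AudioToolbox
import Darwin

var now = AudioTimeStamp()
now.mHostTime = mach_absolute_time()
now.mFlags = .hostTimeValid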
Here's my attempt at creating a BufferQueue:
typealias WYNDRInputQueueCallback = ((Data) -> Void)

class WYNDRInputQueue {

    class WYNDRInputQueueUserData {
        let callback: WYNDRInputQueueCallback
        let bufferStub: NSData

        init(callback: @escaping WYNDRInputQueueCallback, bufferStub: NSData) {
            self.callback = callback
            self.bufferStub = bufferStub
        }
    }

    private var audioQueueRef: AudioQueueRef?
    private let userData: WYNDRInputQueueUserData

    public init(asbd: inout AudioStreamBasicDescription, callback: @escaping WYNDRInputQueueCallback, buffersCount: UInt32 = 3, bufferSize: UInt32 = 9600) throws {
        self.userData = WYNDRInputQueueUserData(callback: callback, bufferStub: NSMutableData(length: Int(bufferSize))!)
        let userDataUnsafe = UnsafeMutableRawPointer(Unmanaged.passRetained(self.userData).toOpaque())

        let input = AudioQueueNewInput(&asbd,
                                       audioQueueInputCallback,
                                       userDataUnsafe,
                                       .none,
                                       .none,
                                       0,
                                       &audioQueueRef)
        if input != noErr {
            throw InputQueueError.genericError(input)
        }
        assert(audioQueueRef != nil)

        for _ in 0..<buffersCount {
            var bufferRef: AudioQueueBufferRef?
            let bufferInput = AudioQueueAllocateBuffer(audioQueueRef!, bufferSize, &bufferRef)
            if bufferInput != noErr {
                throw InputQueueError.genericError(bufferInput)
            }
            assert(bufferRef != nil)

            // Here's where I'm using the audioTimeStamp:
            audioQueueInputCallback(userDataUnsafe, audioQueueRef!, bufferRef!, <#T##UnsafePointer<AudioTimeStamp>#>, 0, nil)
        }
    }

    private let audioQueueInputCallback: AudioQueueInputCallback = { (inUserData, inAQ, inBuffer, inStartTime, inNumberPacketDescriptions, inPacketDescs) in
        let userData = Unmanaged<WYNDRInputQueueUserData>.fromOpaque(inUserData!).takeUnretainedValue()

        let dataSize = Int(inBuffer.pointee.mAudioDataByteSize)
        let inputData = Data(bytes: inBuffer.pointee.mAudioData, count: dataSize)

        userData.callback(inputData)

        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, nil)
    }
}
Any advice here would be greatly appreciated!
I'm not sure how the timestamp is going to be used or who is going to use it, but if in doubt, why not use the number of samples you've recorded as the timestamp?
var timestamp = AudioTimeStamp()
timestamp.mSampleTime = numberOfSamplesRecorded
timestamp.mFlags = .sampleTimeValid
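To keep that sample time monotonic across callbacks, one option is to bump a running total by the number of frames in each buffer; a rough sketch, assuming 16-bit mono PCM and the dataSize computed in the callback above:

numberOfSamplesRecorded += Float64(dataSize / MemoryLayout<Int16>.size)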

Decrypt Media Files in chunks and play via AVPlayer

I have an mp4 video file which I am encrypting to save and decrypting to play via AVPlayer, using the CryptoSwift library for encrypting/decrypting.
It works fine when I decrypt the whole file at once, but my file is quite big, so that takes 100% CPU and a lot of memory. I need to decrypt the encrypted file in chunks.
I tried to decrypt the file in chunks, but it doesn't play the video, as AVPlayer does not recognize the decrypted chunk data; maybe the data is not stored sequentially while encrypting the file. I have tried ChaCha20, AES, AES.CTR & AES.CBC to encrypt and decrypt, but to no avail.
extension PlayerController: AVAssetResourceLoaderDelegate {
    func resourceLoader(resourceLoader: AVAssetResourceLoader, shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        let request = loadingRequest.request
        guard let path = request.URL?.path where request.URL?.scheme == Constants.customVideoScheme else { return true }

        if let contentRequest = loadingRequest.contentInformationRequest {
            do {
                let fileAttributes = try NSFileManager.defaultManager().attributesOfItemAtPath(path)
                if let fileSizeNumber = fileAttributes[NSFileSize] {
                    contentRequest.contentLength = fileSizeNumber.longLongValue
                }
            } catch { }

            if fileHandle == nil {
                fileHandle = NSFileHandle(forReadingAtPath: (request.URL?.path)!)!
            }

            contentRequest.contentType = "video/mp4"
            contentRequest.byteRangeAccessSupported = true
        }

        if let data = decryptData(loadingRequest, path: path), dataRequest = loadingRequest.dataRequest {
            dataRequest.respondWithData(data)
            loadingRequest.finishLoading()
            return true
        }
        return true
    }

    func decryptData(loadingRequest: AVAssetResourceLoadingRequest, path: String) -> NSData? {
        print("Current OFFSET: \(loadingRequest.dataRequest?.currentOffset)")
        print("requested OFFSET: \(loadingRequest.dataRequest?.requestedOffset)")
        print("Current Length: \(loadingRequest.dataRequest?.requestedLength)")

        if loadingRequest.contentInformationRequest != nil {
            var data = fileHandle!.readDataOfLength((loadingRequest.dataRequest?.requestedLength)!)
            fileHandle!.seekToFileOffset(0)
            data = decodeVideoData(data)!
            return data
        } else {
            fileHandle?.seekToFileOffset(UInt64((loadingRequest.dataRequest?.currentOffset)!))
            let data = fileHandle!.readDataOfLength((loadingRequest.dataRequest?.requestedLength)!)
            // let data = fileHandle!.readDataOfLength(length!) ** When I use this its not playing video but play fine when try with requestedLength **
            return decodeVideoData(data)
        }
    }
}
Here is the decode function used to decrypt the NSData:
func decodeVideoData(data: NSData) -> NSData? {
    if let cha = ChaCha20(key: Constants.Encryption.SecretKey, iv: Constants.Encryption.IvKey) {
        let decrypted: NSData = try! data.decrypt(cha)
        return decrypted
    }
    return nil
}
I need help regarding this issue; kindly guide me to the right way to achieve this.
For an in-depth and more complete CommonCrypto wrapper, check out my CommonCrypto wrapper; I've extracted bits and pieces of it for this answer.
First of all, we need to define some functions that will do the encryption/decryption. I'm assuming, for now, that you use AES-256 CBC with PKCS#7 padding. Summarising the snippet below: we have an update function that can be called repeatedly to consume the chunks, and a final function that wraps up any leftovers (it usually deals with padding).
import CommonCrypto
import Foundation

enum CryptoError: Error {
    case generic(CCCryptorStatus)
}

func getOutputLength(_ reference: CCCryptorRef?, inputLength: Int, final: Bool) -> Int {
    CCCryptorGetOutputLength(reference, inputLength, final)
}

func update(_ reference: CCCryptorRef?, data: Data) throws -> Data {
    var output = [UInt8](repeating: 0, count: getOutputLength(reference, inputLength: data.count, final: false))
    let status = data.withUnsafeBytes { dataPointer -> CCCryptorStatus in
        CCCryptorUpdate(reference, dataPointer.baseAddress, data.count, &output, output.count, nil)
    }
    guard status == kCCSuccess else {
        throw CryptoError.generic(status)
    }
    return Data(output)
}

func final(_ reference: CCCryptorRef?) throws -> Data {
    var output = [UInt8](repeating: 0, count: getOutputLength(reference, inputLength: 0, final: true))
    var moved = 0

    let status = CCCryptorFinal(reference, &output, output.count, &moved)
    guard status == kCCSuccess else {
        throw CryptoError.generic(status)
    }

    output.removeSubrange(moved...)
    return Data(output)
}
Next up, for the purpose of demonstration, the encryption.
let key = Data(repeating: 0x0a, count: kCCKeySizeAES256)
let iv = Data(repeating: 0, count: kCCBlockSizeAES128)
let bigFile = (0 ..< 0xffff).map { _ in
    return Data(repeating: UInt8.random(in: 0 ... UInt8.max), count: kCCBlockSizeAES128)
}.reduce(Data(), +)

var encryptor: CCCryptorRef?
CCCryptorCreate(CCOperation(kCCEncrypt), CCAlgorithm(kCCAlgorithmAES), CCOptions(kCCOptionPKCS7Padding), Array(key), key.count, Array(iv), &encryptor)

do {
    let ciphertext = try update(encryptor, data: bigFile) + final(encryptor)
    print(ciphertext) // 1048576 bytes
} catch {
    print(error)
}
That appears to me to be quite a large file. Now decrypting would be done in a similar fashion:
var decryptor: CCCryptorRef?
CCCryptorCreate(CCOperation(kCCDecrypt), CCAlgorithm(kCCAlgorithmAES), CCOptions(kCCOptionPKCS7Padding), Array(key), key.count, Array(iv), &decryptor)

do {
    var plaintext = Data()
    for i in 0 ..< 0xffff {
        plaintext += try update(decryptor, data: ciphertext[i * kCCBlockSizeAES128 ..< i * kCCBlockSizeAES128 + kCCBlockSizeAES128])
    }
    plaintext += try final(decryptor)
    print(plaintext == bigFile, plaintext) // true 1048560 bytes
} catch {
    print(error)
}
The encryptor can be altered for different modes and should also be released once it's done, and I'm not too sure how arbitrary output on the update function will behave, but this should be enough to give you an idea of how it can be done using CommonCrypto.
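On the release point, the cryptor contexts created with CCCryptorCreate are freed with CCCryptorRelease once the last chunk has gone through final, e.g.:

// After final(encryptor) / final(decryptor) have been called:
CCCryptorRelease(encryptor)
CCCryptorRelease(decryptor)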

AudioFileReadBytes from a memory block, not a file

I'd like to cache CAF files before converting them to PCM whenever they play.
For example,
char *mybuffer = malloc(mysoundsize);
FILE *f = fopen("mysound.caf", "rb");
fread(mybuffer, mysoundsize, 1, f);
fclose(f);
char *pcmBuffer = malloc(pcmsoundsize);
// Convert to PCM for playing
AudioFileReadBytes(mybuffer, false, 0, mysoundsize, &numbytes, pcmBuffer);
This way, whenever the sound plays, the compressed CAF file is already loaded into memory, avoiding disk access. How can I open a block of memory with an 'AudioFileID' to make AudioFileReadBytes happy? Is there another method I can use?
I have not done it myself, but from the documentation I would think that you have to use AudioFileOpenWithCallbacks and implement callback functions that read from your memory buffer.
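To make that concrete, here is a rough Swift sketch of that callback approach. The names InMemoryAudioFile and openInMemoryCAF are made up for illustration, and the exact optionality of the callback parameters can differ between SDK versions, so treat it as a starting point rather than drop-in code:

import AudioToolbox
import Foundation

// Keeps the raw CAF bytes alive for as long as the AudioFileID is open.
final class InMemoryAudioFile {
    let data: Data
    init(data: Data) { self.data = data }
}

// Read callback: hand back the requested byte range from the in-memory buffer.
private let readProc: AudioFile_ReadProc = { clientData, position, requestCount, buffer, actualCount in
    let holder = Unmanaged<InMemoryAudioFile>.fromOpaque(clientData).takeUnretainedValue()
    let available = holder.data.count - Int(position)
    let toRead = max(0, min(Int(requestCount), available))
    if toRead > 0 {
        let range = Int(position) ..< Int(position) + toRead
        holder.data.copyBytes(to: buffer.assumingMemoryBound(to: UInt8.self), from: range)
    }
    actualCount.pointee = UInt32(toRead)
    return noErr
}

// Size callback: report the total length of the in-memory buffer.
private let getSizeProc: AudioFile_GetSizeProc = { clientData in
    Int64(Unmanaged<InMemoryAudioFile>.fromOpaque(clientData).takeUnretainedValue().data.count)
}

func openInMemoryCAF(_ data: Data) -> AudioFileID? {
    let holder = InMemoryAudioFile(data: data)
    // Retained here; balance with a release when the file is closed.
    let clientData = Unmanaged.passRetained(holder).toOpaque()
    var fileID: AudioFileID?
    // Write and set-size callbacks are nil because the file is read-only.
    let status = AudioFileOpenWithCallbacks(clientData, readProc, nil, getSizeProc, nil,
                                            kAudioFileCAFType, &fileID)
    guard status == noErr, let audioFile = fileID else {
        Unmanaged<InMemoryAudioFile>.fromOpaque(clientData).release()
        return nil
    }
    return audioFile
}

The returned AudioFileID can then be passed to AudioFileReadBytes (or AudioFileReadPacketData) exactly as if the file had been opened from disk.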
You can also do this with AudioFileStreamOpen:
fileprivate var streamID: AudioFileStreamID?

public func parse(data: Data) throws {
    let streamID = self.streamID!
    let count = data.count
    _ = try data.withUnsafeBytes { (bytes: UnsafePointer<UInt8>) in
        let result = AudioFileStreamParseBytes(streamID, UInt32(count), bytes, [])
        guard result == noErr else {
            throw ParserError.failedToParseBytes(result)
        }
    }
}
You can store the data in memory within the callback:
func ParserPacketCallback(_ context: UnsafeMutableRawPointer, _ byteCount: UInt32, _ packetCount: UInt32, _ data: UnsafeRawPointer, _ packetDescriptions: Optional<UnsafeMutablePointer<AudioStreamPacketDescription>>) {
    let parser = Unmanaged<Parser>.fromOpaque(context).takeUnretainedValue()

    /// At this point we should definitely have a data format
    guard let dataFormat = parser.dataFormatD else {
        return
    }

    let format = dataFormat.streamDescription.pointee
    let bytesPerPacket = Int(format.mBytesPerPacket)
    for i in 0 ..< Int(packetCount) {
        let packetStart = i * bytesPerPacket
        let packetSize = bytesPerPacket
        let packetData = Data(bytes: data.advanced(by: packetStart), count: packetSize)
        parser.packetsX.append(packetData)
    }
}
Full code is in the GitHub repo.
