Here's example code showing how I decompress an LZ4-compressed data object:
extension Data {
    var calculatedResult: Data? {
        var result: Data?
        let size = 15_000_000
        let buffer = UnsafeMutablePointer<UInt8>.allocate(capacity: size)
        // Write data to buffer
        let resultLength = ... // calculate the length of the result data
        result = Data(bytes: buffer, count: resultLength)
        buffer.deallocate()
        return result
    }
}
However, recently I've been getting crashes, and the logs say "Could not allocate memory".
As I understand it, this is caused by insufficient RAM when creating the buffer. Is there any way I can check whether enough memory is available before calling UnsafeMutablePointer<UInt8>.allocate()?
Thanks in advance guys.
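One way to approach this (a sketch, not from the original thread): on iOS 13 and later, os_proc_available_memory() from the os framework (declared in <os/proc.h>) reports how many bytes the app can still allocate before hitting its memory limit, so you can bail out before attempting a huge allocation:

import os

// Sketch: check available headroom before a large allocation (iOS 13+).
// os_proc_available_memory() returns the number of bytes the process
// may still allocate before hitting its limit.
func canAllocate(_ bytes: Int) -> Bool {
    return os_proc_available_memory() > bytes
}

// In the property above, before calling allocate(capacity:):
// guard canAllocate(size) else { return nil }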
I'm trying to compress data to save space, but I'm not sure whether I'm compressing it incorrectly or measuring the size incorrectly.
I tried the following in a Playground.
import Foundation
import Compression
// Example data
struct MyData: Encodable {
    let property = "Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum."
}
// I tried using MemoryLayout to measure the size of the uncompressed data
let size = MemoryLayout<MyData>.size
print("myData type size", size) // 16
let myData = MyData()
let myDataSize = MemoryLayout.size(ofValue: myData)
print("myData instance size", myDataSize) // 16
func run() {
    // 1. This shows the size of the encoded data
    guard let encoded = try? JSONEncoder().encode(myData) else { return }
    print("myData encoded size", encoded) // 589 bytes

    /// 2. This shows the size after using a first compression method
    guard let compressed = try? (encoded as NSData).compressed(using: .lzfse) else { return }
    let firstCompression = Data(compressed)
    print("firstCompression", firstCompression) // 491 bytes

    /// 3. Second compression method (just wanted to try a different compression method)
    let secondCompression = compress(encoded)
    print("secondCompression", secondCompression) // 491 bytes

    /// 4. Wanted to compare the size difference between compressed and uncompressed for a bigger data so here is the array of uncompressed data.
    var myDataArray = [MyData]()
    for _ in 0 ... 100 {
        myDataArray.append(MyData())
    }
    guard let encodedArray = try? JSONEncoder().encode(myDataArray) else { return }
    print("myData encodedArray size", encodedArray) // 59591 bytes
    print("memory layout", MemoryLayout.size(ofValue: encodedArray)) // 16

    /// 5. Compressed array
    var compressedArray = [Data]()
    for _ in 0 ... 100 {
        guard let compressed = try? (encoded as NSData).compressed(using: .lzfse) else { return }
        let data = Data(compressed)
        compressedArray.append(data)
    }
    guard let encodedCompressedArray = try? JSONEncoder().encode(compressedArray) else { return }
    print("myData compressed array size", encodedCompressedArray) // 66661 bytes
    print("memory layout", MemoryLayout.size(ofValue: encodedCompressedArray)) // 16

    /// 6. Compression using lzma
    var differentCompressionArray = [Data]()
    for _ in 0 ... 100 {
        guard let compressed = try? (encoded as NSData).compressed(using: .lzma) else { return }
        let data = Data(compressed)
        differentCompressionArray.append(data)
    }
    guard let encodedCompressedArray2 = try? JSONEncoder().encode(differentCompressionArray) else { return }
    print("myData compressed array size", encodedCompressedArray2) // 60702 bytes
    print("memory layout", MemoryLayout.size(ofValue: encodedCompressedArray2)) // 16
}
run()
// The implementation for the second compression method
func compress(_ sourceData: Data) -> Data {
    let pageSize = 128
    var compressedData = Data()
    do {
        let outputFilter = try OutputFilter(.compress, using: .lzfse) { (data: Data?) -> Void in
            if let data = data {
                compressedData.append(data)
            }
        }
        var index = 0
        let bufferSize = sourceData.count
        while true {
            let rangeLength = min(pageSize, bufferSize - index)
            let subdata = sourceData.subdata(in: index ..< index + rangeLength)
            index += rangeLength
            try outputFilter.write(subdata)
            // The final zero-length write signals the end of the stream
            if rangeLength == 0 {
                break
            }
        }
    } catch {
        fatalError("Error occurred during encoding: \(error.localizedDescription).")
    }
    return compressedData
}
The MemoryLayout object doesn't seem to be helpful in measuring the size of encoded arrays, whether or not they're compressed. I'm not sure how to measure a struct or an array of structs without encoding them with JSONEncoder, which already compresses the data.
The before/after compression for the single instance of MyData (#1, #2, and #3) seems to show that the data is being properly compressed, going from 589 bytes to 491 bytes. However, the comparison between an array of uncompressed data and an array of compressed data (#4, #5) seems to show that the size increased from 59591 to 66661 after compression.
Finally, I tried a different compression algorithm, lzma (#6). It reduced the size to 60702, which is lower than the previous compression, but still not smaller than the uncompressed data.
To get a bit of confusion out of the way first: MemoryLayout gives you information about the size and structure of the layout of a type at compile time, but can't be used to determine the amount of storage an Array value needs at runtime because the size of the Array structure itself does not depend on how much data it contains.
Highly simplified, the layout of an Array value looks like this:
┌─────────────────────┐
│ Array │
├──────────┬──────────┤ ┌──────────────────┐
│ length │ buffer ─┼───▶│ storage │
└──────────┴──────────┘ └──────────────────┘
1 word / 1 word /
8 bytes 8 bytes
└─────────┬─────────┘
└─▶ MemoryLayout<Array<UInt8>>.size
An Array value stores its length, or count (mixed in with some flags, but we don't need to worry about that) and a pointer to the actual space where the items it contains are stored. Those items aren't stored as part of the Array value itself, but separately in allocated memory which the Array points to. Whether the Array "contains" 10 values or 100000 values, the size of the Array structure remains the same: 1 word (or 8 bytes on a 64-bit system) for the length, and 1 word for the pointer to the actual underlying storage. (The size of the storage buffer, however, is exactly determined by the number of elements it is able to contain, at runtime.)
In practice, Array is significantly more complicated than this for bridging and other reasons, but this is the basic gist; this is why you only ever see MemoryLayout.size(ofValue:) return the same number every time. [And incidentally, the size of String is the same as Array for similar reasons, which is why MemoryLayout<MyData>.size also reports 16.]
In order to know how many bytes an Array or a Data effectively take up, it's sufficient to ask them for their .count: Array<UInt8> and Data are both collections of UInt8 values (bytes), and their .count will reflect the amount of data effectively stored in their underlying storage.
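For example, with the encoded Data from step 1 of the question (a quick illustration; the 589-byte figure is the question's own measurement):

let encoded = try! JSONEncoder().encode(MyData())
print(MemoryLayout.size(ofValue: encoded)) // 16 — the size of the Data struct itself
print(encoded.count) // 589 — the number of bytes actually held in its storage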
As for the size increase between steps (4) and (5), note that
Step 4 takes 100 copies of your MyData and joins them together before converting them to JSON, while
Step 5 takes 100 copies of individually compressed MyData instances, joins those together, and then re-converts them to JSON
Step 5 has a few issues compared to step 4:
Compression benefits heavily from repetition in data: a bit of data compressed and repeated 100 times won't be nearly as compact as a bit of data repeated 100 times, then compressed, because each round of compression can't benefit from knowing that there's another copy of the data that came before it. As a simple example:
Let's say we wanted to use a form of run-length encoding to compress the string Hello: there isn't a lot we can do, except maybe turn it into Hel{2}o (where {2} indicates a repetition of the last character 2 times)
If we compress Hello and join it 3 times, we might get Hel{2}oHel{2}oHel{2}o,
But if we first joined Hello 3 times and then compressed, we could get {Hel{2}o}{3}, which is much more compact
Compression also typically needs to insert some information about how the data was compressed in order to be able to recognize and decompress the data later. By compressing MyData 100 times and joining all of those instances, you're repeating that metadata 100 times
Even after compressing your MyData instances, re-representing them as JSON decreases how compressed they are because it can't represent the binary data exactly. Instead, it has to convert each Data blob into a Base64-encoded string, which causes it to grow again
Between these issues, it's not terribly surprising that your data is growing. What you actually want is a modification to step 4, which is compressing the joined data:
guard let encodedArray = try? JSONEncoder().encode(myDataArray) else { fatalError() }
guard let compressedEncodedArray = try? (encodedArray as NSData).compressed(using: .lzma) else { fatalError() }
print(compressedEncodedArray.count) // => 520
This is significantly better than
guard let encodedCompressedArray = try? JSONEncoder().encode(compressedArray) else { fatalError() }
print(encodedCompressedArray.count) // => 66661
As an aside: it seems unlikely that you're actually using JSONEncoder in practice to join data in this way, and this was just for measurement here — but if you actually are, consider other mechanisms for doing this. Converting binary data to JSON in this way is very inefficient storage-wise, and with a bit more information about what you might actually need in practice, we might be able to recommend a more effective way to do this.
If what you're actually doing in practice is encoding an Encodable object tree and then compressing that the one time, that's totally fine.
I work with the new iPad Pro and a LiDAR app and am kinda new to Swift 5 (normally I work on Cordova apps with minimal native coding needed).
I want to dump the CVPixelBuffer I get for a frame to a .bin file.
I get the buffer like this: let depthMap = frame.sceneDepth!.depthMap
It returns a DepthFloat32 buffer.
After that I lock the address and fetch it:
CVPixelBufferLockBaseAddress(depthMap, CVPixelBufferLockFlags(rawValue: 0))
var addr = CVPixelBufferGetBaseAddress(depthMap)
How can I save these values to a file on my iPad? Would be thankful for any help.
I solved it on my own. Here is my solution in case somebody needs it.
Gather all the necessary values:
let addr = CVPixelBufferGetBaseAddress(depthMap)
let height = CVPixelBufferGetHeight(depthMap)
let bpr = CVPixelBufferGetBytesPerRow(depthMap)
Then I hand the buffer address to a Data initializer to create a byte buffer in memory:
let data = Data(bytes: addr!, count: (bpr * height))
do {
    let filename = getDirectory().appendingPathComponent(timestamp + "_depthbuffer.bin")
    try data.write(to: filename)
} catch {
    // handle errors
}
getDirectory() is a custom function that finds the Documents directory. I can retrieve the created files from the app container.
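In case it helps, a minimal sketch of what getDirectory() might look like, assuming it simply resolves the app's Documents directory (only the name comes from the post; the body is an assumption):

func getDirectory() -> URL {
    // The app's Documents directory, visible in the app container
    return FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
}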
Don't forget to lock and unlock the buffer's base address.
I'm sending a video via OutputStream.write(_:maxLength:), but the write method doesn't send all the data bytes, only a fixed amount each time.
The total data count is videoData.count = 7357450, but the bytes written (returned by outputStream.write) are only 131768.
This is the method for writing to output stream.
extension OutputStream {
    func write(data: Data) -> Int {
        return data.withUnsafeBytes { write($0, maxLength: data.count) }
    }
}
Is there something wrong with the code?
Is there a way to increase the .write capacity?
Note: this is not related to the question Writing Data to an NSOutputStream in Swift 3. That question asks how to write, while mine is about the limits on how much data gets written.
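For reference, write(_:maxLength:) returns the number of bytes the stream actually accepted, which can be less than requested when its internal buffer is full, so the usual pattern is to loop until everything is written. A sketch (not from the original post):

import Foundation

extension OutputStream {
    // Writes all of `data`, retrying until every byte is accepted.
    // Returns the total bytes written, or write's error/no-space result.
    func writeAll(_ data: Data) -> Int {
        return data.withUnsafeBytes { (raw: UnsafeRawBufferPointer) -> Int in
            guard let base = raw.bindMemory(to: UInt8.self).baseAddress else { return 0 }
            var total = 0
            while total < data.count {
                let written = self.write(base + total, maxLength: data.count - total)
                if written <= 0 { return written } // -1 on error, 0 when no space is available
                total += written
            }
            return total
        }
    }
}

With a non-blocking stream you would also want to wait for hasSpaceAvailable (or the delegate's .hasSpaceAvailable event) between attempts rather than spinning.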
I am trying to save depth images from the iPhone X TrueDepth camera. Using the AVCamPhotoFilter sample code, I am able to view the depth, converted to grayscale format, on the screen of the phone in real time. I cannot figure out how to save the sequence of depth images in their raw (16 bits or more) format.
I have depthData which is an instance of AVDepthData. One of its members is depthDataMap which is an instance of CVPixelBuffer and image format type kCVPixelFormatType_DisparityFloat16. Is there a way to save it to the phone to transfer for offline manipulation?
There's no standard video format for "raw" depth/disparity maps, which might have something to do with AVCapture not really offering a way to record it.
You have a couple of options worth investigating here:
Convert depth maps to grayscale textures (which you can do using the code in the AVCamPhotoFilter sample code), then pass those textures to AVAssetWriter to produce a grayscale video. Depending on the video format and grayscale conversion method you choose, other software you write for reading the video might be able to recover depth/disparity info with sufficient precision for your purposes from the grayscale frames.
Anytime you have a CVPixelBuffer, you can get at the data yourself and do whatever you want with it. Use CVPixelBufferLockBaseAddress (with the readOnly flag) to make sure the content won't change while you read it, then copy data from the pointer CVPixelBufferGetBaseAddress provides to wherever you want. (Use other pixel buffer functions to see how many bytes to copy, and unlock the buffer when you're done.)
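A minimal sketch of that second option, assuming a non-planar buffer (such as kCVPixelFormatType_DisparityFloat16) and that you just want the raw bytes copied out:

import CoreVideo
import Foundation

func copyPixelData(from buffer: CVPixelBuffer) -> Data {
    CVPixelBufferLockBaseAddress(buffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }
    let base = CVPixelBufferGetBaseAddress(buffer)!
    let byteCount = CVPixelBufferGetBytesPerRow(buffer) * CVPixelBufferGetHeight(buffer)
    return Data(bytes: base, count: byteCount) // copies, so the buffer can be recycled
}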
Watch out, though: if you spend too much time copying from buffers, or otherwise retain them, they won't get deallocated as new buffers come in from the capture system, and your capture session will hang. (All told, it's unclear without testing whether a device has the memory & I/O bandwidth for much recording this way.)
You can use the Compression library to create a zip-like file with the raw CVPixelBuffer data.
There are a few problems with this solution:
It's a lot of data, and zip is not a good compression for it (the compressed file is 20 times bigger than a 32-bits-per-frame video with the same number of frames).
Apple's Compression library creates a file that a standard zip program doesn't open. I use zlib in C code to read it, calling inflateInit2(&strm, -15); to make it work (the -15 window-bits argument tells zlib to expect a raw deflate stream with no header, which is what COMPRESSION_ZLIB produces).
You'll need to do some work to export the file out of your application.
Here is my code (which I limited to 250 frames since it holds everything in RAM, but you can flush to disk if you need more frames):
// DepthCapture.swift
// AVCamPhotoFilter
//
// Created by Eyal Fink on 07/04/2018.
// Copyright © 2018 Resonai. All rights reserved.
//
// Capture the depth pixelBuffer into a compressed file.
// This is very hacky and there are lots of TODOs; eventually it should be
// replaced with much better compression (video compression).
import AVFoundation
import Foundation
import Compression
class DepthCapture {
    let kErrorDomain = "DepthCapture"
    let maxNumberOfFrame = 250
    lazy var bufferSize = 640 * 480 * 2 * maxNumberOfFrame // maxNumberOfFrame frames
    var dstBuffer: UnsafeMutablePointer<UInt8>?
    var frameCount: Int64 = 0
    var outputURL: URL?
    var compresserPtr: UnsafeMutablePointer<compression_stream>?
    var file: FileHandle?

    // All operations handling the compressor objects are done on the
    // processingQ so they will happen sequentially
    var processingQ = DispatchQueue(label: "compression",
                                    qos: .userInteractive)

    func reset() {
        frameCount = 0
        outputURL = nil
        if self.compresserPtr != nil {
            //free(compresserPtr!.pointee.dst_ptr)
            compression_stream_destroy(self.compresserPtr!)
            self.compresserPtr = nil
        }
        if self.file != nil {
            self.file!.closeFile()
            self.file = nil
        }
    }

    func prepareForRecording() {
        reset()
        // Create the output zip file, remove old one if exists
        let documentsPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0] as NSString
        self.outputURL = URL(fileURLWithPath: documentsPath.appendingPathComponent("Depth"))
        FileManager.default.createFile(atPath: self.outputURL!.path, contents: nil, attributes: nil)
        self.file = FileHandle(forUpdatingAtPath: self.outputURL!.path)
        if self.file == nil {
            NSLog("Cannot create file at: \(self.outputURL!.path)")
            return
        }
        // Init the compression object
        compresserPtr = UnsafeMutablePointer<compression_stream>.allocate(capacity: 1)
        compression_stream_init(compresserPtr!, COMPRESSION_STREAM_ENCODE, COMPRESSION_ZLIB)
        dstBuffer = UnsafeMutablePointer<UInt8>.allocate(capacity: bufferSize)
        compresserPtr!.pointee.dst_ptr = dstBuffer!
        //defer { free(bufferPtr) }
        compresserPtr!.pointee.dst_size = bufferSize
    }

    func flush() {
        //let data = Data(bytesNoCopy: compresserPtr!.pointee.dst_ptr, count: bufferSize, deallocator: .none)
        let nBytes = bufferSize - compresserPtr!.pointee.dst_size
        print("Writing \(nBytes)")
        let data = Data(bytesNoCopy: dstBuffer!, count: nBytes, deallocator: .none)
        self.file?.write(data)
    }

    func startRecording() throws {
        processingQ.async {
            self.prepareForRecording()
        }
    }

    func addPixelBuffers(pixelBuffer: CVPixelBuffer) {
        processingQ.async {
            if self.frameCount >= self.maxNumberOfFrame {
                // TODO now!! flush when needed!!!
                print("MAXED OUT")
                return
            }
            CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
            // defer ensures the buffer is unlocked even on the early error returns below
            defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }
            let add: UnsafeMutableRawPointer = CVPixelBufferGetBaseAddress(pixelBuffer)!
            self.compresserPtr!.pointee.src_ptr = UnsafePointer<UInt8>(add.assumingMemoryBound(to: UInt8.self))
            let height = CVPixelBufferGetHeight(pixelBuffer)
            self.compresserPtr!.pointee.src_size = CVPixelBufferGetBytesPerRow(pixelBuffer) * height
            let flags = Int32(0)
            let compression_status = compression_stream_process(self.compresserPtr!, flags)
            if compression_status != COMPRESSION_STATUS_OK {
                NSLog("Buffer compression returned: \(compression_status)")
                return
            }
            if self.compresserPtr!.pointee.src_size != 0 {
                NSLog("Compression lib didn't eat all data: \(compression_status)")
                return
            }
            // TODO(eyal): flush when needed!!!
            self.frameCount += 1
            print("handled \(self.frameCount) buffers")
        }
    }

    func finishRecording(success: @escaping ((URL) -> Void)) throws {
        processingQ.async {
            let flags = Int32(COMPRESSION_STREAM_FINALIZE.rawValue)
            self.compresserPtr!.pointee.src_size = 0
            //compresserPtr!.pointee.src_ptr = UnsafePointer<UInt8>(0)
            let compression_status = compression_stream_process(self.compresserPtr!, flags)
            if compression_status != COMPRESSION_STATUS_END {
                NSLog("ERROR: Finish failed. compression returned: \(compression_status)")
                return
            }
            self.flush()
            DispatchQueue.main.sync {
                success(self.outputURL!)
            }
            self.reset()
        }
    }
}
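A rough usage sketch (the call sites are assumptions, not from the original answer):

let depthCapture = DepthCapture()
try depthCapture.startRecording()

// From your AVCaptureDepthDataOutputDelegate callback (hypothetical):
// depthCapture.addPixelBuffers(pixelBuffer: depthData.depthDataMap)

try depthCapture.finishRecording { url in
    print("Compressed depth written to \(url)")
}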
I've been stuck on this problem for days and have looked through nearly every related Stack Overflow page. Through this, I now have a much greater understanding of what FFT is and how it works. Despite this, I'm having extreme difficulty implementing it in my application.
In short, what I am trying to do is make a spectrum visualizer for my application (Similar to this). From what I've gathered, I'm pretty sure I need to use the magnitudes of the sound as the heights of my bars. So with all this in mind, currently I am able to analyze an entire .caf file all at once. To do this, I am using the following code:
let audioFile = try! AVAudioFile(forReading: soundURL!)
let frameCount = UInt32(audioFile.length)
let buffer = AVAudioPCMBuffer(PCMFormat: audioFile.processingFormat, frameCapacity: frameCount)
do {
    try audioFile.readIntoBuffer(buffer, frameCount: frameCount)
} catch {
}
let log2n = UInt(round(log2(Double(frameCount))))
let bufferSize = Int(1 << log2n)
let fftSetup = vDSP_create_fftsetup(log2n, Int32(kFFTRadix2))
var realp = [Float](count: bufferSize/2, repeatedValue: 0)
var imagp = [Float](count: bufferSize/2, repeatedValue: 0)
var output = DSPSplitComplex(realp: &realp, imagp: &imagp)
vDSP_ctoz(UnsafePointer<DSPComplex>(buffer.floatChannelData.memory), 2, &output, 1, UInt(bufferSize / 2))
vDSP_fft_zrip(fftSetup, &output, 1, log2n, Int32(FFT_FORWARD))
var fft = [Float](count:Int(bufferSize / 2), repeatedValue:0.0)
let bufferOver2: vDSP_Length = vDSP_Length(bufferSize / 2)
vDSP_zvmags(&output, 1, &fft, 1, bufferOver2)
This works fine and outputs a long array of data. However, the problem with this code is it analyzes the entire audio file at once. What I need is to be analyzing the audio file as it is playing, very similar to this video: Spectrum visualizer.
So I guess my question is this: How do you perform FFT analysis while the audio is playing?
Also, on top of this, how do I go about converting the output of an FFT analysis to actual heights for a bar? One of the outputs I received for an audio file using the FFT analysis code from above was this: http://pastebin.com/RBLTuGx7. The only reason for the pastebin is due to how long it is. I'm assuming I average all these numbers together and use those values instead? (Just for reference, I got that array by printing out the 'fft' variable in the code above)
I've attempted to read through the EZAudio code, but I am unable to find how they read in audio samples in real time. Any help is greatly appreciated.
Here's how it is done in AudioKit, using EZAudio's FFT tools:
Create a class for your FFT that will hold the data:
@objc public class AKFFT: NSObject, EZAudioFFTDelegate {
    internal let bufferSize: UInt32 = 512
    internal var fft: EZAudioFFT?

    /// Array of FFT data
    public var fftData = [Double](count: 512, repeatedValue: 0.0)
    ...
}
Initialize the class and set up the FFT. Also install the tap on the appropriate node.
public init(_ input: AKNode) {
    super.init()
    fft = EZAudioFFT.fftWithMaximumBufferSize(vDSP_Length(bufferSize), sampleRate: 44100.0, delegate: self)
    input.avAudioNode.installTapOnBus(0, bufferSize: bufferSize, format: AKManager.format) { [weak self] (buffer, time) -> Void in
        if let strongSelf = self {
            buffer.frameLength = strongSelf.bufferSize
            let offset: Int = Int(buffer.frameCapacity - buffer.frameLength)
            let tail = buffer.floatChannelData[0]
            strongSelf.fft!.computeFFTWithBuffer(&tail[offset], withBufferSize: strongSelf.bufferSize)
        }
    }
}
Then implement the callback to load your internal fftData array:
@objc public func fft(fft: EZAudioFFT!, updatedWithFFTData fftData: UnsafeMutablePointer<Float>, bufferSize: vDSP_Length) {
    dispatch_async(dispatch_get_main_queue()) { () -> Void in
        for i in 0...511 {
            self.fftData[i] = Double(fftData[i])
        }
    }
}
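To address the other part of the question (turning FFT output into bar heights), a common approach (a sketch, not part of the AudioKit answer; the bar count and dB range are assumptions) is to average groups of bins and map them onto a logarithmic scale:

import Foundation

// Collapse 512 FFT magnitudes into `barCount` bar heights in [0, maxHeight].
func barHeights(from fftData: [Double], barCount: Int = 20, maxHeight: Double = 100) -> [Double] {
    let binsPerBar = fftData.count / barCount
    return (0..<barCount).map { bar in
        let bins = fftData[bar * binsPerBar ..< (bar + 1) * binsPerBar]
        let average = bins.reduce(0, +) / Double(binsPerBar)
        let decibels = 10 * log10(max(average, 1e-12)) // log scale matches perceived loudness
        let normalized = (decibels + 120) / 120 // map roughly [-120, 0] dB onto [0, 1]
        return min(max(normalized, 0), 1) * maxHeight
        // (any leftover bins beyond barCount * binsPerBar are ignored)
    }
}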
AudioKit's implementation may change, so you should check https://github.com/audiokit/AudioKit/ to see whether any improvements have been made. EZAudio is at https://github.com/syedhali/EZAudio