Save depth images from TrueDepth camera - ios

I am trying to save depth images from the iPhoneX TrueDepth camera. Using the AVCamPhotoFilter sample code, I am able to view the depth, converted to grayscale format, on the screen of the phone in real-time. I cannot figure out how to save the sequence of depth images in the raw (16 bits or more) format.
I have depthData, which is an instance of AVDepthData. One of its members is depthDataMap, an instance of CVPixelBuffer with image format type kCVPixelFormatType_DisparityFloat16. Is there a way to save it to the phone to transfer for offline manipulation?

There's no standard video format for "raw" depth/disparity maps, which might have something to do with AVCapture not really offering a way to record it.
You have a couple of options worth investigating here:
Convert depth maps to grayscale textures (which you can do using the code in the AVCamPhotoFilter sample code), then pass those textures to AVAssetWriter to produce a grayscale video. Depending on the video format and grayscale conversion method you choose, other software you write for reading the video might be able to recover depth/disparity info with sufficient precision for your purposes from the grayscale frames.
Anytime you have a CVPixelBuffer, you can get at the data yourself and do whatever you want with it. Use CVPixelBufferLockBaseAddress (with the readOnly flag) to make sure the content won't change while you read it, then copy data from the pointer CVPixelBufferGetBaseAddress provides to wherever you want. (Use other pixel buffer functions to see how many bytes to copy, and unlock the buffer when you're done; see the sketch after the caveat below.)
Watch out, though: if you spend too much time copying from buffers, or otherwise retain them, they won't get deallocated as new buffers come in from the capture system, and your capture session will hang. (All told, it's unclear without testing whether a device has the memory & I/O bandwidth for much recording this way.)
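A minimal sketch of that second option, assuming you just want the raw disparity bytes in a Data object that you can then write to a file or upload yourself (the function name is only illustrative):

import AVFoundation

func copyDepthData(from pixelBuffer: CVPixelBuffer) -> Data {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }
    // For kCVPixelFormatType_DisparityFloat16 every pixel is 2 bytes;
    // bytesPerRow may include row padding, which this copy keeps.
    let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer)!
    return Data(bytes: baseAddress, count: bytesPerRow * height)
}

Copying into a Data like this keeps the work inside the lock short, which helps with the buffer-retention caveat above.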

You can use Compression library to create a zip file with the raw CVPixelBuffer data.
A few problems with this solution:
It's a lot of data, and zip is not a good compression for it (the compressed file is about 20 times bigger than a 32-bits-per-frame video with the same number of frames).
Apple's Compression library creates a file which a standard zip program doesn't open. I use zlib in C code to read it, with inflateInit2(&strm, -15); to make it work.
You'll need to do some work to export the file out of your application.
Here is my code (which I limited to 250 frames since it holds everything in RAM, but you can flush to disk if you need more frames):
//  DepthCapture.swift
//  AVCamPhotoFilter
//
//  Created by Eyal Fink on 07/04/2018.
//  Copyright © 2018 Resonai. All rights reserved.
//
//  Capture the depth pixelBuffer into a compressed file.
//  This is very hacky and there are lots of TODOs, but instead we need to replace
//  it with a much better compression (video compression)....

import AVFoundation
import Foundation
import Compression

class DepthCapture {
    let kErrorDomain = "DepthCapture"
    let maxNumberOfFrame = 250
    lazy var bufferSize = 640 * 480 * 2 * maxNumberOfFrame  // maxNumberOfFrame frames
    var dstBuffer: UnsafeMutablePointer<UInt8>?
    var frameCount: Int64 = 0
    var outputURL: URL?
    var compresserPtr: UnsafeMutablePointer<compression_stream>?
    var file: FileHandle?

    // All operations handling the compresser objects are done on the
    // processingQ so they will happen sequentially.
    var processingQ = DispatchQueue(label: "compression",
                                    qos: .userInteractive)

    func reset() {
        frameCount = 0
        outputURL = nil
        if self.compresserPtr != nil {
            //free(compresserPtr!.pointee.dst_ptr)
            compression_stream_destroy(self.compresserPtr!)
            self.compresserPtr = nil
        }
        if self.file != nil {
            self.file!.closeFile()
            self.file = nil
        }
    }

    func prepareForRecording() {
        reset()
        // Create the output file, removing the old one if it exists.
        let documentsPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0] as NSString
        self.outputURL = URL(fileURLWithPath: documentsPath.appendingPathComponent("Depth"))
        FileManager.default.createFile(atPath: self.outputURL!.path, contents: nil, attributes: nil)
        self.file = FileHandle(forUpdatingAtPath: self.outputURL!.path)
        if self.file == nil {
            NSLog("Cannot create file at: \(self.outputURL!.path)")
            return
        }

        // Init the compression object.
        compresserPtr = UnsafeMutablePointer<compression_stream>.allocate(capacity: 1)
        compression_stream_init(compresserPtr!, COMPRESSION_STREAM_ENCODE, COMPRESSION_ZLIB)
        dstBuffer = UnsafeMutablePointer<UInt8>.allocate(capacity: bufferSize)
        compresserPtr!.pointee.dst_ptr = dstBuffer!
        //defer { free(bufferPtr) }
        compresserPtr!.pointee.dst_size = bufferSize
    }

    func flush() {
        //let data = Data(bytesNoCopy: compresserPtr!.pointee.dst_ptr, count: bufferSize, deallocator: .none)
        let nBytes = bufferSize - compresserPtr!.pointee.dst_size
        print("Writing \(nBytes)")
        let data = Data(bytesNoCopy: dstBuffer!, count: nBytes, deallocator: .none)
        self.file?.write(data)
    }

    func startRecording() throws {
        processingQ.async {
            self.prepareForRecording()
        }
    }

    func addPixelBuffers(pixelBuffer: CVPixelBuffer) {
        processingQ.async {
            if self.frameCount >= Int64(self.maxNumberOfFrame) {
                // TODO now!! flush when needed!!!
                print("MAXED OUT")
                return
            }

            CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
            // Unlock on every exit path so the capture pipeline can recycle the buffer.
            defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

            let add: UnsafeMutableRawPointer = CVPixelBufferGetBaseAddress(pixelBuffer)!
            self.compresserPtr!.pointee.src_ptr = UnsafePointer<UInt8>(add.assumingMemoryBound(to: UInt8.self))
            let height = CVPixelBufferGetHeight(pixelBuffer)
            self.compresserPtr!.pointee.src_size = CVPixelBufferGetBytesPerRow(pixelBuffer) * height

            let flags = Int32(0)
            let compression_status = compression_stream_process(self.compresserPtr!, flags)
            if compression_status != COMPRESSION_STATUS_OK {
                NSLog("Buffer compression returned: \(compression_status)")
                return
            }
            if self.compresserPtr!.pointee.src_size != 0 {
                NSLog("Compression lib didn't eat all data: \(compression_status)")
                return
            }
            // TODO(eyal): flush when needed!!!
            self.frameCount += 1
            print("handled \(self.frameCount) buffers")
        }
    }

    func finishRecording(success: @escaping ((URL) -> Void)) throws {
        processingQ.async {
            let flags = Int32(COMPRESSION_STREAM_FINALIZE.rawValue)
            self.compresserPtr!.pointee.src_size = 0
            //compresserPtr!.pointee.src_ptr = UnsafePointer<UInt8>(0)
            let compression_status = compression_stream_process(self.compresserPtr!, flags)
            if compression_status != COMPRESSION_STATUS_END {
                NSLog("ERROR: Finish failed. compression returned: \(compression_status)")
                return
            }
            self.flush()
            DispatchQueue.main.sync {
                success(self.outputURL!)
            }
            self.reset()
        }
    }
}
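A hedged usage sketch: the depthCapture property and the view controller hosting it are assumptions on top of the AVCamPhotoFilter sample, not part of it. Each depth frame from the AVCaptureDepthDataOutputDelegate callback is fed into the class above:

func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                     didOutput depthData: AVDepthData,
                     timestamp: CMTime,
                     connection: AVCaptureConnection) {
    // Assumed property on the view controller: let depthCapture = DepthCapture()
    depthCapture.addPixelBuffers(pixelBuffer: depthData.depthDataMap)
}

Call startRecording() before depth frames start arriving and finishRecording(success:) when you want the URL of the compressed file.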

Related

Using DJI video feed with Vision Framework

I'm working on an app that uses the video feed from the DJI Mavic 2 and runs it through a machine learning model to identify objects.
I managed to get my app to preview the feed from the drone using this sample DJI project, but I'm having a lot of trouble trying to get the video data into a format that's usable by the Vision framework.
I used this example from Apple as a guide to create my model (which is working!), but it looks like I need to create a VNImageRequestHandler object, which is initialized with a CVPixelBuffer (obtained from a CMSampleBuffer in Apple's example), in order to use Vision.
Any idea how to make this conversion? Is there a better way to do this?
class DJICameraViewController: UIViewController, DJIVideoFeedListener, DJISDKManagerDelegate, DJICameraDelegate, VideoFrameProcessor {

    // ...

    func videoFeed(_ videoFeed: DJIVideoFeed, didUpdateVideoData rawData: Data) {
        let videoData = rawData as NSData
        let videoBuffer = UnsafeMutablePointer<UInt8>.allocate(capacity: videoData.length)
        videoData.getBytes(videoBuffer, length: videoData.length)
        DJIVideoPreviewer.instance().push(videoBuffer, length: Int32(videoData.length))
    }

    // MARK: VideoFrameProcessor Protocol Implementation

    func videoProcessorEnabled() -> Bool {
        // This is never called
        return true
    }

    func videoProcessFrame(_ frame: UnsafeMutablePointer<VideoFrameYUV>!) {
        // This is never called
        let pixelBuffer = frame.pointee.cv_pixelbuffer_fastupload as! CVPixelBuffer
        let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: exifOrientationFromDeviceOrientation(), options: [:])
        do {
            try imageRequestHandler.perform(self.requests)
        } catch {
            print(error)
        }
    }
} // End of DJICameraViewController class
EDIT: from what I've gathered from DJI's (spotty) documentation, it looks like the video feed is compressed H264. They claim the DJIWidget includes helper methods for decompression, but I haven't had success in understanding how to use them correctly because there is no documentation surrounding its use.
EDIT 2: Here's the issue I created on GitHub for the DJIWidget framework
EDIT 3: Updated code snippet with additional methods for VideoFrameProcessor, removing old code from videoFeed method
EDIT 4: Details about how to extract the pixel buffer successfully and utilize it can be found in this comment from GitHub
The steps:
Call DJIVideoPreviewer’s push:length: method and input the rawData. Inside DJIVideoPreviewer, if you have used VideoPreviewerSDKAdapter please skip this. (H.264 parsing and decoding steps will be performed once you do this.)
Conform to the VideoFrameProcessor protocol and call DJIVideoPreviewer.registFrameProcessor to register the VideoFrameProcessor protocol object.
VideoFrameProcessor protocol’s videoProcessFrame: method will output the VideoFrameYUV data.
Get the CVPixelBuffer data. The VideoFrameYUV struct has a cv_pixelbuffer_fastupload field; this data is actually of type CVPixelBuffer when hardware decoding is turned on. If you are using software decoding, you will need to create a CVPixelBuffer yourself and copy the data from the VideoFrameYUV's luma, chromaB and chromaR fields.
Code:
VideoFrameYUV *yuvFrame; // the VideoFrameProcessor output
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                      yuvFrame->width,
                                      yuvFrame->height,
                                      kCVPixelFormatType_420YpCbCr8Planar,
                                      NULL,
                                      &pixelBuffer);
if (kCVReturnSuccess != CVPixelBufferLockBaseAddress(pixelBuffer, 0) || pixelBuffer == NULL) {
    return;
}
long yPlaneWidth = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0);
long yPlaneHeight = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);
long uPlaneWidth = CVPixelBufferGetWidthOfPlane(pixelBuffer, 1);
long uPlaneHeight = CVPixelBufferGetHeightOfPlane(pixelBuffer, 1);
long vPlaneWidth = CVPixelBufferGetWidthOfPlane(pixelBuffer, 2);
long vPlaneHeight = CVPixelBufferGetHeightOfPlane(pixelBuffer, 2);
uint8_t *yDestination = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
memcpy(yDestination, yuvFrame->luma, yPlaneWidth * yPlaneHeight);
uint8_t *uDestination = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
memcpy(uDestination, yuvFrame->chromaB, uPlaneWidth * uPlaneHeight);
uint8_t *vDestination = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 2);
memcpy(vDestination, yuvFrame->chromaR, vPlaneWidth * vPlaneHeight);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
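One more hedged Swift sketch for step 2, since in the question's code videoProcessFrame(_:) never fires: the usual missing piece is registering the view controller as the frame processor. The method name comes from the steps above; where exactly to call it (for example, right after DJIVideoPreviewer is set up in the view controller) is an assumption.

// In DJICameraViewController, once the previewer is running:
DJIVideoPreviewer.instance().registFrameProcessor(self)

With the processor registered and hardware decoding on (step 4 above), cv_pixelbuffer_fastupload should carry the CVPixelBuffer that the VNImageRequestHandler in the question needs.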

AudioKit - How to get Real Time floatChannelData from Microphone?

I'm new to Audiokit and I'm trying to do some real-time digital signal processing on input audio from the microphone.
I know the data I want is in AKAudioFile's FloatChannelData, but what if I want to obtain this in real-time? I'm currently using AKMicrophone, AKFrequencyTracker, AKNodeOutputPlot, AKBooster and I'm plotting the tracker's amplitude data. However, that data is not the same as the audio signal (as you know, it's the RMS). Is there any way I can obtain the signal's Float data from the mic? Or even from the AKNodeOutputPlot? I just need read-access.
AKSettings.audioInputEnabled = true
mic = AKMicrophone()
plot = AKNodeOutputPlot(mic, frame: audioInputPlot.bounds)
tracker = AKFrequencyTracker.init(mic)
silence = AKBooster(tracker,gain:0)
AudioKit.output = silence
AudioKit.start()
The creator of AudioKit recommends here:
AKNodeOutputPlot works; it's one short file. You're basically just tapping the node and grabbing the data.
How would this work in my viewController if I have an instance of plot (AKNodeOutputPlot) and mic (AKMicrophone) and want to output those values to a label?
Use a tap on whichever node you want to get the data from. I used AKNodeOutputPlot in my quote above because it is fairly straightforward, just using that data as input for a plot, but you could take the data and do whatever you want with it. In this code (from AKNodeOutputPlot):
internal func setupNode(_ input: AKNode?) {
    if !isConnected {
        input?.avAudioNode.installTap(
            onBus: 0,
            bufferSize: bufferSize,
            format: nil) { [weak self] (buffer, _) in
                guard let strongSelf = self else {
                    AKLog("Unable to create strong reference to self")
                    return
                }
                buffer.frameLength = strongSelf.bufferSize
                let offset = Int(buffer.frameCapacity - buffer.frameLength)
                if let tail = buffer.floatChannelData?[0] {
                    strongSelf.updateBuffer(&tail[offset], withBufferSize: strongSelf.bufferSize)
                }
        }
    }
    isConnected = true
}
You get the buffer data in real time. Here we just send it to "updateBuffer" where it gets plotted, but instead of plotting you'd do something else.
To complete Aurelius Prochazka's answer:
To record the audio flowing through a node, you need to attach a tap to it. A tap is just a closure which gets called each time a buffer is available.
Here is a sample code you can reuse in your own class:
var mic = AKMicrophone()

func initMicrophone() {
    // Optional: set the sampling rate of the microphone
    AKSettings.sampleRate = 44100
    // Link the microphone node to the output of AudioKit with a volume of 0.
    AudioKit.output = AKBooster(mic, gain: 0)
    // Start the AudioKit engine
    try! AudioKit.start()
    // Add a tap to the microphone
    mic?.avAudioNode.installTap(
        onBus: audioBus, bufferSize: 4096, format: nil // I chose a buffer size of 4096
    ) { [weak self] (buffer, _) in // self is now a weak reference, to prevent retain cycles
        // We try to create a strong reference to self, and name it strongSelf
        guard let strongSelf = self else {
            print("Recorder: Unable to create strong reference to self #1")
            return
        }
        // We check whether the buffer contains data
        buffer.frameLength = strongSelf.bufferSize
        let offset = Int(buffer.frameCapacity - buffer.frameLength)
        if let tail = buffer.floatChannelData?[0] {
            // We convert the content of the buffer to a Swift array
            let samples = Array(UnsafeBufferPointer(start: &tail[offset], count: 4096))
            strongSelf.myFunctionHandlingData(samples)
        }
    }
}

func myFunctionHandlingData(data: [Float]) {
    // ...
}
Be careful to use DispatchQueue or another synchronization mechanism if you need to access this data from different threads.
In my case I use:
DispatchQueue.main.async { [weak self] in
    guard let strongSelf = self else {
        print("Recorder: Unable to create strong reference to self #2")
        return
    }
    strongSelf.myFunctionHandlingData(samples)
}
so that my function runs on the main thread.

IOS record audio and split it into files in real time

I am making an app which needs to stream audio to a server. What I want to do is to divide the recorded audio into chunks and upload them while recording.
I used two recorders to do that, but it didn't work well; I can hear the seams between the chunks (the audio stops for a couple of milliseconds).
How can I do this?
Your problem can be broken into two pieces: recording and chunking (and uploading, but who cares).
For recording from the microphone and writing to the file, you can get started quickly with AVAudioEngine and AVAudioFile. See below for a sample, which records chunks at the device's default input sampling rate (you will probably want to rate convert that).
When you talk about the "difference between the chunks" you are referring to the ability to divide your audio data into pieces in such a way that when you concatenate them you don't hear discontinuities. e.g. LPCM audio data can be divided into chunks at the sample level, but the LPCM bitrate is high, so you're more likely to use a packetised format like adpcm (called ima4 on iOS?), or mp3 or aac. These formats can only be divided on packet boundaries, e.g. 64, 576 or 1024 samples, say. If your chunks are written without a header (usual for mp3 and aac, not sure about ima4), then concatenation is trivial: simply lay the chunks end to end, exactly as the cat command line tool would. Sadly, on iOS there is no mp3 encoder, so that leaves aac as a likely format for you, but that depends on your playback requirements. iOS devices and macs can definitely play it back.
import AVFoundation

class ViewController: UIViewController {
    let engine = AVAudioEngine()

    struct K {
        static let secondsPerChunk: Float64 = 10
    }

    var chunkFile: AVAudioFile! = nil
    var outputFramesPerSecond: Float64 = 0  // aka input sample rate
    var chunkFrames: AVAudioFrameCount = 0
    var chunkFileNumber: Int = 0

    func writeBuffer(_ buffer: AVAudioPCMBuffer) {
        let samplesPerSecond = buffer.format.sampleRate
        if chunkFile == nil {
            createNewChunkFile(numChannels: buffer.format.channelCount, samplesPerSecond: samplesPerSecond)
        }
        try! chunkFile.write(from: buffer)
        chunkFrames += buffer.frameLength
        if chunkFrames > AVAudioFrameCount(K.secondsPerChunk * samplesPerSecond) {
            chunkFile = nil // close file
        }
    }

    func createNewChunkFile(numChannels: AVAudioChannelCount, samplesPerSecond: Float64) {
        let fileUrl = NSURL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("chunk-\(chunkFileNumber).aac")!
        print("writing chunk to \(fileUrl)")
        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVEncoderBitRateKey: 64000,
            AVNumberOfChannelsKey: numChannels,
            AVSampleRateKey: samplesPerSecond
        ]
        chunkFile = try! AVAudioFile(forWriting: fileUrl, settings: settings)
        chunkFileNumber += 1
        chunkFrames = 0
    }

    override func viewDidLoad() {
        super.viewDidLoad()

        let input = engine.inputNode!
        let bus = 0
        let inputFormat = input.inputFormat(forBus: bus)

        input.installTap(onBus: bus, bufferSize: 512, format: inputFormat) { (buffer, time) -> Void in
            DispatchQueue.main.async {
                self.writeBuffer(buffer)
            }
        }

        try! engine.start()
    }
}
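If the chunks really are a headerless packet stream (as described above for ADTS-style AAC), reassembling the original recording is a byte-for-byte concatenation. A minimal sketch, with the file list and output URL as assumptions:

import Foundation

// Lay the recorded chunk files end to end, exactly as `cat` would.
// Only valid for formats whose chunks carry no per-file header.
func concatenateChunks(_ chunkURLs: [URL], into outputURL: URL) throws {
    var combined = Data()
    for url in chunkURLs {
        combined.append(try Data(contentsOf: url))
    }
    try combined.write(to: outputURL)
}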

Perform Audio Analysis with FFT

I've been stuck on this problem for days now and have looked through nearly every related StackOverflow page. Through this, I now have a much greater understanding of what FFT is and how it works. Despite this, I'm having extreme difficulties implementing it into my application.
In short, what I am trying to do is make a spectrum visualizer for my application (Similar to this). From what I've gathered, I'm pretty sure I need to use the magnitudes of the sound as the heights of my bars. So with all this in mind, currently I am able to analyze an entire .caf file all at once. To do this, I am using the following code:
let audioFile = try! AVAudioFile(forReading: soundURL!)
let frameCount = UInt32(audioFile.length)
let buffer = AVAudioPCMBuffer(PCMFormat: audioFile.processingFormat, frameCapacity: frameCount)
do {
    try audioFile.readIntoBuffer(buffer, frameCount: frameCount)
} catch {
}
let log2n = UInt(round(log2(Double(frameCount))))
let bufferSize = Int(1 << log2n)
let fftSetup = vDSP_create_fftsetup(log2n, Int32(kFFTRadix2))
var realp = [Float](count: bufferSize/2, repeatedValue: 0)
var imagp = [Float](count: bufferSize/2, repeatedValue: 0)
var output = DSPSplitComplex(realp: &realp, imagp: &imagp)
vDSP_ctoz(UnsafePointer<DSPComplex>(buffer.floatChannelData.memory), 2, &output, 1, UInt(bufferSize / 2))
vDSP_fft_zrip(fftSetup, &output, 1, log2n, Int32(FFT_FORWARD))
var fft = [Float](count:Int(bufferSize / 2), repeatedValue:0.0)
let bufferOver2: vDSP_Length = vDSP_Length(bufferSize / 2)
vDSP_zvmags(&output, 1, &fft, 1, bufferOver2)
This works fine and outputs a long array of data. However, the problem with this code is it analyzes the entire audio file at once. What I need is to be analyzing the audio file as it is playing, very similar to this video: Spectrum visualizer.
So I guess my question is this: How do you perform FFT analysis while the audio is playing?
Also, on top of this, how do I go about converting the output of an FFT analysis to actual heights for a bar? One of the outputs I received for an audio file using the FFT analysis code from above was this: http://pastebin.com/RBLTuGx7. The only reason for the pastebin is due to how long it is. I'm assuming I average all these numbers together and use those values instead? (Just for reference, I got that array by printing out the 'fft' variable in the code above)
I've attempted reading through the EZAudio code, however I am unable to find how they are reading in samples of audio in live time. Any help is greatly appreciated.
Here's how it is done in AudioKit, using EZAudio's FFT tools:
Create a class for your FFT that will hold the data:
@objc public class AKFFT: NSObject, EZAudioFFTDelegate {

    internal let bufferSize: UInt32 = 512
    internal var fft: EZAudioFFT?

    /// Array of FFT data
    public var fftData = [Double](count: 512, repeatedValue: 0.0)

    ...
}
Initialize the class and setup the FFT. Also install the tap on the appropriate node.
public init(_ input: AKNode) {
    super.init()
    fft = EZAudioFFT.fftWithMaximumBufferSize(vDSP_Length(bufferSize), sampleRate: 44100.0, delegate: self)
    input.avAudioNode.installTapOnBus(0, bufferSize: bufferSize, format: AKManager.format) { [weak self] (buffer, time) -> Void in
        if let strongSelf = self {
            buffer.frameLength = strongSelf.bufferSize
            let offset: Int = Int(buffer.frameCapacity - buffer.frameLength)
            let tail = buffer.floatChannelData[0]
            strongSelf.fft!.computeFFTWithBuffer(&tail[offset], withBufferSize: strongSelf.bufferSize)
        }
    }
}
Then implement the callback to load your internal fftData array:
@objc public func fft(fft: EZAudioFFT!, updatedWithFFTData fftData: UnsafeMutablePointer<Float>, bufferSize: vDSP_Length) {
    dispatch_async(dispatch_get_main_queue()) { () -> Void in
        for i in 0...511 {
            self.fftData[i] = Double(fftData[i])
        }
    }
}
AudioKit's implementation may change so you should check https://github.com/audiokit/AudioKit/ to see if any improvements were made. EZAudio is at https://github.com/syedhali/EZAudio
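As for turning FFT output into bar heights (the second part of the question), a common approach, sketched below as a hypothetical helper rather than anything from AudioKit or EZAudio, is to group the magnitude bins into one band per bar, average each band, convert the average to decibels, and normalize that dB value into a 0...1 drawing height:

import Foundation

// Reduce an array of FFT magnitudes (e.g. the `fft` array from the question,
// or `fftData` above converted to Float) to `barCount` heights in 0...1.
// Band boundaries, the dB range and any smoothing are all tunable assumptions.
func barHeights(fromMagnitudes magnitudes: [Float],
                barCount: Int,
                minDb: Float = -80,
                maxDb: Float = 0) -> [Float] {
    guard barCount > 0, !magnitudes.isEmpty else { return [] }
    let binsPerBar = max(1, magnitudes.count / barCount)
    var bars = [Float](repeating: 0, count: barCount)
    for bar in 0..<barCount {
        let start = bar * binsPerBar
        let end = min(start + binsPerBar, magnitudes.count)
        guard start < end else { break }
        // Average the magnitudes that fall into this bar's band
        let mean = magnitudes[start..<end].reduce(0, +) / Float(end - start)
        // vDSP_zvmags produces squared magnitudes, so 10 * log10 gives power in dB
        let db = 10 * log10(mean + 1e-12)
        // Map the dB value into 0...1 for drawing
        bars[bar] = max(0, min(1, (db - minDb) / (maxDb - minDb)))
    }
    return bars
}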

Memory warning on iOS 8 App - but usage is low

I hope someone can help me out; I already searched Stack Overflow and Google but couldn't find the right solution.
I have a very simple app which takes a photo (using the standard iOS camera through UIImagePickerController) and then saves it to the file system at a very low JPEG quality - let thumbNailData = UIImageJPEGRepresentation(image, 0.02) - after that I display the images in a collection view using Core Data. I only save the filename in Core Data, not the image; the image lives only in the filesystem.
However, when I run the app it shows a memory usage of no more than 15 MB and CPU usage around 1-2%. Everything runs fine, but after adding 6-7 photos I get strange errors like the memory warning, a lost connection to my iPhone, and this one:
Communications error: <OS_xpc_error: <error: 0x198adfa80> { count = 1, contents =
"XPCErrorDescription" => <string: 0x198adfe78> { length = 22, contents = "Connection interrupted"
So I am really stuck, as I thought I had made it very lightweight, and then I get these errors....
I already submitted a note-taking app to the App Store which has much more functionality than this one, and that app runs very stably...
Any ideas?
Here is some of my code:
// Here I load the picture names into an array to display in the collection view
func loadColl() {
    let appDelegate = (UIApplication.sharedApplication().delegate as AppDelegate)
    let context: NSManagedObjectContext = appDelegate.managedObjectContext!
    let fetchRequest: NSFetchRequest = NSFetchRequest(entityName: "PictureNote")
    var error: NSError?
    var result = context.executeFetchRequest(fetchRequest, error: &error) as [PictureNote]
    for res in result {
        println(res.latitude)
        println(res.longitude)
        println(res.longDate)
        println(res.month)
        println(res.year)
        println(res.text)
        pictures.append(res.thumbnail)
    }
}

// Here is the code to display in collection view
func collectionView(collectionView: UICollectionView, cellForItemAtIndexPath indexPath: NSIndexPath) -> UICollectionViewCell {
    let myImageCell: ImageCell = myCollectionView.dequeueReusableCellWithReuseIdentifier("imageCell", forIndexPath: indexPath) as ImageCell
    myImageCell.imageCell.image = self.loadImageFromPath(fileInDocumentsDirectory(self.pictures[indexPath.row]))
    return myImageCell
}

// Here is the code to load the pictures from disk
func loadImageFromPath(path: String) -> UIImage {
    let image = UIImage(contentsOfFile: path)
    if image == nil {
        self.newAlert("Error loading your image, try again", title: "Notice")
    }
    return image!
}

// Here is my saving code
func saveNewEntry() {
    var unique = NSUUID().UUIDString
    var imageTitle = "Picture\(unique).jpg"
    var image = self.pickedImageView.image
    var path = fileInDocumentsDirectory(imageTitle)
    var thumbImageTitle = "ThumbPicture\(unique).jpg"
    var thumbPath = fileInDocumentsDirectory(thumbImageTitle)
    if self.saveImage(image!, path: path) == true && self.saveThumbImage(image!, thumbPath: thumbPath) == true {
        // Create the saving context
        let context = (UIApplication.sharedApplication().delegate as AppDelegate).managedObjectContext!
        let entityOne = NSEntityDescription.entityForName("PictureNote", inManagedObjectContext: context)
        let thisTask = PictureNote(entity: entityOne!, insertIntoManagedObjectContext: context)
        // Get all the values here
        self.getMyLocation()
        var theDate = NSDate()
        thisTask.month = Date.toString(date: theDate, format: "MM")
        thisTask.year = Date.toString(date: theDate, format: "yyyy")
        thisTask.longDate = theDate
        thisTask.longitude = getMyLocation()[1]
        thisTask.latitude = getMyLocation()[0]
        thisTask.text = self.noteTextView.text
        thisTask.imageURL = imageTitle
        thisTask.thumbnail = thumbImageTitle
        thisTask.id = unique
        // Saving to CoreData
        if context.save(nil) {
            self.newAlert("Saving your note was successful!", title: "Notice")
            self.noteTextView.text = ""
        } else {
            self.newAlert("Error saving your note, try again", title: "Notice")
        }
    } else {
        self.newAlert("Error saving your image, try again", title: "Notice")
    }
    self.pickedImageView.image = UIImage(named: "P1000342.jpg")
}
I am really thankful for every suggestion....if you need more code, just let me know...
I notice that you are using a dramatically reduced quality factor in conjunction with UIImageJPEGRepresentation. If this is an attempt to reduce the memory involved, all that does is reduce the size of the resulting NSData you write to persistent storage; loading that image into the image view will still require something on the order of (4 × width × height) bytes (note, that's the dimensions of the image, not the image view). Thus the 3264 × 2448 image from an iPhone takes up about 30 MB per image (3264 × 2448 × 4 bytes ≈ 30 MB), regardless of the quality factor employed by UIImageJPEGRepresentation.
Usually I will make sure my collection/table view uses a thumbnail representation (either the thumbnail property of the ALAsset, or I resize the default representation myself). If I'm caching this thumbnail anywhere (such as the persistent storage suggested by your question), I can then use a high-quality representation of the thumbnail (I use PNG because it's lossless with compression, but JPEG with a 0.7-0.9 quality factor is a reasonably faithful version, too).
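To make the resizing concrete, here is a minimal sketch in current Swift/UIKit (the 200-point maximum dimension and the function name are illustrative, not from the question): scale the picked image down once, write that small image to disk, and let the collection view only ever load the small files.

import UIKit

// Hypothetical helper: produce a genuinely small bitmap instead of a
// full-resolution image saved with a low JPEG quality factor.
func makeThumbnail(from image: UIImage, maxDimension: CGFloat = 200) -> UIImage {
    let scale = min(maxDimension / image.size.width,
                    maxDimension / image.size.height,
                    1)
    let targetSize = CGSize(width: image.size.width * scale,
                            height: image.size.height * scale)
    let renderer = UIGraphicsImageRenderer(size: targetSize)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}

// Usage (assumed): write the thumbnail's data instead of the low-quality full-size JPEG.
// let thumbData = makeThumbnail(from: image).jpegData(compressionQuality: 0.8)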
