iOS App Crashes from Memory Issue when Generating Images from PHAssets

I'm trying to get some photos (PHAssets) from a user's Photos library and convert them to UIImages.
Here's the code for fetching the assets from the library:
func fetchPhotos() {
    let fetchOptions = PHFetchOptions()
    // Fetch only screenshots
    fetchOptions.predicate = NSPredicate(format: "(mediaSubtype & %d) != 0", PHAssetMediaSubtype.photoScreenshot.rawValue)
    fetchedResult = PHAsset.fetchAssets(with: .image, options: fetchOptions)
    let f: FetchResult = FetchResult(fetchedResult!)
    var fetchCount = 0
    for i in f {
        fetchCount += 1
        print("Fetch count \(fetchCount)")
        let img = i.getAssetThumbnail()
        testImages.append(img)
    }
}
Below is the code for getting a UIImage out of a PHAsset:
extension PHAsset {
    func getAssetThumbnail() -> UIImage {
        let manager = PHImageManager.default()
        let option = PHImageRequestOptions()
        var thumbnail = UIImage()
        option.isSynchronous = true
        manager.requestImage(for: self,
                             targetSize: CGSize(width: self.pixelWidth, height: self.pixelHeight),
                             contentMode: .aspectFit,
                             options: option,
                             resultHandler: { (result, info) -> Void in
            thumbnail = result!
        })
        return thumbnail
    }
}
I'm currently getting this crash output when trying to run the code:
Details
The app “AppClassificationDemo” on Nick’s iPhone quit unexpectedly.
Domain: IDEDebugSessionErrorDomain
Code: 11
Failure Reason: Message from debugger: Terminated due to memory issue
User Info: {
DVTErrorCreationDateKey = "2022-11-14 22:19:19 +0000";
IDERunOperationFailingWorker = DBGLLDBLauncher;
}
--
Analytics Event: com.apple.dt.IDERunOperationWorkerFinished : {
"device_model" = "iPhone14,2";
"device_osBuild" = "16.1.1 (20B101)";
"device_platform" = "com.apple.platform.iphoneos";
"launchSession_schemeCommand" = Run;
"launchSession_state" = 2;
"launchSession_targetArch" = arm64;
"operation_duration_ms" = 6369;
"operation_errorCode" = 11;
"operation_errorDomain" = IDEDebugSessionErrorDomain;
"operation_errorWorker" = DBGLLDBLauncher;
"operation_name" = IDEiPhoneRunOperationWorkerGroup;
"param_consoleMode" = 0;
"param_debugger_attachToExtensions" = 0;
"param_debugger_attachToXPC" = 1;
"param_debugger_type" = 5;
"param_destination_isProxy" = 0;
"param_destination_platform" = "com.apple.platform.iphoneos";
"param_diag_MainThreadChecker_stopOnIssue" = 0;
"param_diag_MallocStackLogging_enableDuringAttach" = 0;
"param_diag_MallocStackLogging_enableForXPC" = 1;
"param_diag_allowLocationSimulation" = 1;
"param_diag_checker_tpc_enable" = 1;
"param_diag_gpu_frameCapture_enable" = 0;
"param_diag_gpu_shaderValidation_enable" = 0;
"param_diag_gpu_validation_enable" = 0;
"param_diag_memoryGraphOnResourceException" = 0;
"param_diag_queueDebugging_enable" = 1;
"param_diag_runtimeProfile_generate" = 0;
"param_diag_sanitizer_asan_enable" = 0;
"param_diag_sanitizer_tsan_enable" = 0;
"param_diag_sanitizer_tsan_stopOnIssue" = 0;
"param_diag_sanitizer_ubsan_stopOnIssue" = 0;
"param_diag_showNonLocalizedStrings" = 0;
"param_diag_viewDebugging_enabled" = 1;
"param_diag_viewDebugging_insertDylibOnLaunch" = 1;
"param_install_style" = 0;
"param_launcher_UID" = 2;
"param_launcher_allowDeviceSensorReplayData" = 0;
"param_launcher_kind" = 0;
"param_launcher_style" = 0;
"param_launcher_substyle" = 0;
"param_runnable_appExtensionHostRunMode" = 0;
"param_runnable_productType" = "com.apple.product-type.application";
"param_runnable_type" = 2;
"param_testing_launchedForTesting" = 0;
"param_testing_suppressSimulatorApp" = 0;
"param_testing_usingCLI" = 0;
"sdk_canonicalName" = "iphoneos16.1";
"sdk_osVersion" = "16.1";
"sdk_variant" = iphoneos;
}
--
System Information
macOS Version 13.0.1 (Build 22A400)
Xcode 14.1 (21534.1) (Build 14B47b)
Timestamp: 2022-11-14T14:19:19-08:00
I'm fairly certain it's due to the size of the image thumbnails being generated; when I set the width and height to 100, the processing works as expected.
When I change them to self.pixelWidth and self.pixelHeight, the app crashes.

Don't convert every PHAsset to a UIImage and store them all in an array. Each UIImage holds the full decoded bitmap in memory (a 12-megapixel photo decodes to roughly 4000 × 3000 × 4 bytes ≈ 48 MB), so accumulating many of them in an array will quickly exhaust memory.
Instead, fetch the required PHAsset on demand and request its UIImage only when needed. Please see the example below.
func loadImageAt(_ index: Int) -> UIImage? {
    let asset = fetchResult.object(at: index)
    let manager = PHImageManager.default()
    let option = PHImageRequestOptions()
    var thumbnail: UIImage?
    option.isSynchronous = true
    manager.requestImage(for: asset,
                         targetSize: CGSize(width: asset.pixelWidth, height: asset.pixelHeight),
                         contentMode: .aspectFit,
                         options: option,
                         resultHandler: { (result, info) -> Void in
        thumbnail = result
    })
    return thumbnail
}
I'd also suggest using an asynchronous image request to avoid UI performance issues.
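As a rough sketch of that suggestion (the 200-point target size, the loadThumbnailAt name, and the completion-handler shape are my own additions, not from the original answer; fetchResult is assumed to be the same PHFetchResult as above):

import Photos
import UIKit

func loadThumbnailAt(_ index: Int, completion: @escaping (UIImage?) -> Void) {
    let asset = fetchResult.object(at: index)
    let options = PHImageRequestOptions()
    options.isSynchronous = false          // deliver results via the handler instead of blocking
    options.deliveryMode = .opportunistic  // fast preview first, higher quality later
    // Request only as many pixels as the UI will actually display.
    let scale = UIScreen.main.scale
    let targetSize = CGSize(width: 200 * scale, height: 200 * scale)
    PHImageManager.default().requestImage(for: asset,
                                          targetSize: targetSize,
                                          contentMode: .aspectFit,
                                          options: options) { image, _ in
        // With .opportunistic delivery this handler can fire more than once.
        completion(image)
    }
}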
Hope this helps!

Related

How to add/use GCKMediaQueue in Swift?

So I have managed to play a video on Chromecast, but only one at a time. I've been trying to figure out how to programmatically add to the queue; the idea is to keep playing videos all day. In the code below, "playthisvideo()" randomly returns a string that contains an http://.....mp4 URL. I've looked at Google's documentation, but it's either too vague or I just don't understand it, and I can't seem to find any examples that would lead the way for me to follow.
func castthevideo() {
    let metadata = GCKMediaMetadata()
    metadata.setString("Los Simpsons", forKey: kGCKMetadataKeyTitle)
    metadata.setString("Barista: ¿Cómo tomas tu café? " +
                       " Yo: Muy, muy en serio.",
                       forKey: kGCKMetadataKeySubtitle)
    metadata.addImage(GCKImage(url: URL(string: "https://m.media-amazon.com/images/M/MV5BYjFkMTlkYWUtZWFhNy00M2FmLThiOTYtYTRiYjVlZWYxNmJkXkEyXkFqcGdeQXVyNTAyODkwOQ##._V1_.jpg")!,
                               width: 480,
                               height: 360))
    let PTV = playthisvideo()
    let url = URL(string: PTV)
    print("****** ", PTV)
    guard let mediaURL = url else {
        print("****** invalid mediaURL")
        return
    }
    let mediaInfoBuilder = GCKMediaInformationBuilder(contentURL: mediaURL)
    mediaInfoBuilder.streamType = GCKMediaStreamType.none
    mediaInfoBuilder.contentType = "video/mp4"
    mediaInfoBuilder.metadata = metadata
    let mediaInformation = mediaInfoBuilder.build()
    if let request = sessionManager.currentSession?.remoteMediaClient?.loadMedia(mediaInformation) {
        request.delegate = self
    }
    GCKCastContext.sharedInstance().presentDefaultExpandedMediaControls()
}
func castanthor(byAppending appending: Bool) {
    let PTV = playthisvideo()
    let url = URL(string: PTV)
    guard let mediaURL = url else {
        print("invalid mediaURL")
        return
    }
    myNSNumber = (1 as NSNumber)
    if let remoteMediaClient = GCKCastContext.sharedInstance().sessionManager.currentCastSession?.remoteMediaClient {
        let builder = GCKMediaQueueItemBuilder()
        builder.mediaInformation = selectedItem.mediaInfo
        builder.autoplay = true
        builder.preloadTime = 3
        let item = builder.build()
        if remoteMediaClient.mediaStatus != nil, appending {
            let request = remoteMediaClient.queueInsert(item, beforeItemWithID: kGCKMediaQueueInvalidItemID)
            request.delegate = self
        } else {
            let options = GCKMediaQueueLoadOptions()
            options.repeatMode = remoteMediaClient.mediaStatus?.queueRepeatMode ?? .off
            let request = remoteMediaClient.queueLoad([item], with: options)
            request.delegate = self
        }
    }
}

var mediaItems = [GCKMediaQueueItem]()
var urls = // Array of only audio and videos
for index in 0..<urls.count {
    let builder = GCKMediaQueueItemBuilder()
    let mediaInfoBuilder = GCKMediaInformationBuilder(contentURL: urls[index])
    mediaInfoBuilder.streamType = GCKMediaStreamType.none
    mediaInfoBuilder.contentType = "video/mp4"
    mediaInfoBuilder.metadata = metadata
    let mediaInformation = mediaInfoBuilder.build()
    builder.mediaInformation = mediaInformation
    builder.autoplay = true
    builder.preloadTime = 3
    let item = builder.build()
    mediaItems.append(item)
}
if let remoteMediaClient = GCKCastContext.sharedInstance().sessionManager.currentCastSession?.remoteMediaClient {
    let loadOptions = GCKMediaQueueLoadOptions()
    loadOptions.repeatMode = .all
    loadOptions.startPosition = 0
    remoteMediaClient.queueLoadItems(mediaItems, withOptions: loadOptions)
}

How to read EXIF data from UIImage in Swift 4?

I have an image with a lot of EXIF information, but when I try to read the EXIF information with Swift, it shows only a limited number of fields.
I have tried the following code:
let data = UIImageJPEGRepresentation(image, 1.0)
let source = CGImageSourceCreateWithData(data! as CFData, nil)
let metadata = (CGImageSourceCopyPropertiesAtIndex(source!, 0, nil))
debugPrint(metadata ?? "nil")
And it prints the following result:
{
    ColorModel = RGB;
    Depth = 8;
    Orientation = 6;
    PixelHeight = 2448;
    PixelWidth = 3264;
    ProfileName = "sRGB IEC61966-2.1";
    "{Exif}" = {
        ColorSpace = 1;
        PixelXDimension = 3264;
        PixelYDimension = 2448;
    };
    "{JFIF}" = {
        DensityUnit = 0;
        JFIFVersion = (
            1,
            0,
            1
        );
        XDensity = 72;
        YDensity = 72;
    };
    "{TIFF}" = {
        Orientation = 6;
    };
}
How can I read all the exif information from UIImage?
If your image is captured using an AVCaptureSession, the following code extracts the EXIF data:
photoFileOutput?.captureStillImageAsynchronously(from: videoConnection, completionHandler: { (sampleBuffer, error) in
    if sampleBuffer != nil {
        let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer!)
        let image = self.processPhoto(imageData!)
        let source: CGImageSource = CGImageSourceCreateWithData(imageData! as CFData, nil)!
        let metadata = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as! [String: Any]
        print("exif data = \(metadata[kCGImagePropertyExifDictionary as String] as? [String: AnyObject])")
        completionHandler(true)
    } else {
        completionHandler(false)
    }
})
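Note that AVCaptureStillImageOutput has been deprecated since iOS 10. If you're using AVCapturePhotoOutput instead, the capture metadata comes with the AVCapturePhoto directly, so no CGImageSource round-trip is needed; a minimal sketch (the surrounding delegate wiring is assumed, not shown):

import AVFoundation
import ImageIO

// In your AVCapturePhotoCaptureDelegate:
func photoOutput(_ output: AVCapturePhotoOutput,
                 didFinishProcessingPhoto photo: AVCapturePhoto,
                 error: Error?) {
    guard error == nil else { return }
    // photo.metadata already carries the capture metadata, EXIF included.
    if let exif = photo.metadata[kCGImagePropertyExifDictionary as String] as? [String: Any] {
        print("EXIF data: \(exif)")
    }
}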
My suspicion is that the UIImageJPEGRepresentation function is the culprit, as it converts from HEIC to JPEG (assuming you're pulling images from the Photos app). A lot of valuable EXIF tags, including things like geo-location, seem to get lost during this conversion.
If you have the image Data, you can create a CIImage with it and read its properties; you'll find the EXIF data there. I tried with a UIImage, getting the JPEG data and reading the EXIF from there, but I only got the same values you printed in your post. I think some of the EXIF data is stripped out in the JPEG conversion. By using CIImage I'm able to get the lens model, the ISO, the exposure time, etc.
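As a minimal sketch of that approach (imageData here is a hypothetical Data value holding the original image file bytes, not a UIImageJPEGRepresentation round-trip):

import CoreImage

if let ciImage = CIImage(data: imageData) {
    // properties exposes the same metadata dictionaries as CGImageSource.
    if let exif = ciImage.properties["{Exif}"] as? [String: Any] {
        print("ISO: \(exif["ISOSpeedRatings"] ?? "n/a")")
        print("Exposure time: \(exif["ExposureTime"] ?? "n/a")")
    }
}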
Here is an example with PHImageManager where I read all the images and print their EXIF data:
private func getPhotos() {
    let manager = PHImageManager.default()
    let requestOptions = PHImageRequestOptions()
    requestOptions.isSynchronous = true
    requestOptions.deliveryMode = .fastFormat
    let fetchOptions = PHFetchOptions()
    fetchOptions.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: false)]
    let results: PHFetchResult = PHAsset.fetchAssets(with: .image, options: fetchOptions)
    if results.count > 0 {
        for i in 0..<results.count {
            let asset = results.object(at: i)
            manager.requestImageDataAndOrientation(for: asset, options: requestOptions) { (data, fileName, orientation, info) in
                if let data = data,
                   let cImage = CIImage(data: data) {
                    let exif = cImage.properties["{Exif}"]
                    print("EXIF Data: \(exif)")
                }
            }
        }
    }
}

How to add Brightness to more than 4 videos using GPUImage in iOS Swift?

Below is my code to apply brightness to multiple videos. It works fine for 3 videos, but for more than 4 videos GPUImage crashes the application.
// arrVideoDetail -> contains the video data
// isPortrait     -> the video orientation
func addBrightNessToVideo(arrVideoDetail: [SelectedAssestData]?, isPortrait: Bool, completion: ((_ updatedVideos: [SelectedAssestData]) -> Void)?) {
    SVProgressHUD.show()
    let imageDataGroup: DispatchGroup? = DispatchGroup()
    var updatedVideoDetail = [SelectedAssestData]()
    var arrForRemoveVideosPath = [String]()
    for videoDict in (arrVideoDetail)! {
        let videoDetail = videoDict
        let videoUrl = URL(fileURLWithPath: (videoDetail.DocumentLocalAssetsPath?.path)!)
        let brightNessValue = videoDetail.lightingPercent ?? 0.0
        if brightNessValue == 0 {
            updatedVideoDetail.append(videoDetail)
        } else {
            arrForRemoveVideosPath.append(videoUrl.path)
            imageDataGroup?.enter()
            let movie = GPUImageMovie(url: videoUrl)
            movie?.runBenchmark = true
            movie?.playAtActualSpeed = true
            let brightnessFilter = GPUImageBrightnessFilter()
            // Need to check this value with different videos
            brightnessFilter.brightness = brightNessValue // applying the brightness value
            movie?.addTarget(brightnessFilter)
            let anAsset = AVAsset(url: videoUrl)
            let tracks = anAsset.tracks(withMediaType: AVMediaTypeVideo)
            if tracks.count > 0 {
                let videoAssetTrack = anAsset.tracks(withMediaType: AVMediaTypeVideo)[0]
                var naturalSize = CGSize()
                naturalSize = videoAssetTrack.naturalSize // fetching the natural size of the video
                var videoWidth: CGFloat!
                var videoHeight: CGFloat!
                if isPortrait {
                    videoWidth = 1080
                    videoHeight = 1920
                } else {
                    videoWidth = 1920
                    videoHeight = 1080
                }
                // New path where the movie is created after the filter is applied
                let pathToMovie = NSTemporaryDirectory().appending("\(String(NSDate().timeIntervalSince1970)).mov")
                print(pathToMovie)
                let filemgr = FileManager.default
                do {
                    if filemgr.fileExists(atPath: pathToMovie) {
                        try filemgr.removeItem(atPath: pathToMovie)
                    } else {
                        print("\(pathToMovie) not found on applyEffect()")
                    }
                } catch _ {
                    print("FAIL REMOVE \(pathToMovie) on applyEffect()")
                }
                videoDetail.DocumentLocalAssetsPath = URL(fileURLWithPath: pathToMovie)
                unlink(pathToMovie)
                //videoDetail["mediaUrl"] = pathToMovie as AnyObject
                updatedVideoDetail.append(videoDetail)
                let movieWriter = GPUImageMovieWriter(movieURL: URL(fileURLWithPath: pathToMovie), size: CGSize(width: videoWidth, height: videoHeight))
                let input = brightnessFilter as GPUImageOutput
                input.addTarget(movieWriter)
                movieWriter?.shouldPassthroughAudio = true
                let orientation = orientationForAsset(anAsset)
                let gpuOrientation = imageRotationMode(forUIInterfaceOrientation: orientation)
                movieWriter?.setInputRotation(gpuOrientation!, at: 0)
                movieWriter?.enableSynchronizationCallbacks()
                // Add the audio encoding target if audio is available
                if anAsset.tracks(withMediaType: AVMediaTypeAudio).count > 0 {
                    movie?.audioEncodingTarget = movieWriter
                } else {
                    movie?.audioEncodingTarget = nil
                }
                print(movieWriter?.assetWriter.status.rawValue)
                if movieWriter?.assetWriter.status != AVAssetWriterStatus.writing {
                    movieWriter?.startRecording()
                    movie?.startProcessing()
                }
                movieWriter?.completionBlock = {
                    print("complete video editing")
                    DispatchQueue.main.async {
                        input.removeTarget(movieWriter)
                        movieWriter?.finishRecording()
                        imageDataGroup?.leave()
                    }
                }
            } else {
                imageDataGroup?.leave()
            }
        }
    }
}
I'm getting the crash below:
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: '*** -[AVAssetWriter startWriting] Cannot call method when status is 3'

Get all the frames from video

I'm trying to get all the frames from a video, but instead of getting 255 different frames I'm getting about 8 different frames, each repeated 30 times.
My code is (the problem is with the imagesForVideo array):
imagesForVideo = []
imagesForVideoCGI = []
var timesArray: [NSValue] = []
let generator: AVAssetImageGenerator = AVAssetImageGenerator(asset: sourceAsset)
for var i = 0; i < numberOfFrames - 1; i++ {
    var actualTime: CMTime = CMTimeMake(0, 0)
    let duration: CMTime = CMTimeMake(Int64(i), Int32(30))
    let frameRef: CGImageRef = try! generator.copyCGImageAtTime(duration, actualTime: &actualTime)
    let tempImage: UIImage = UIImage(CGImage: frameRef)
    let nsDuration = NSValue.init(CMTime: duration)
    timesArray.append(nsDuration)
    imagesForVideoCGI.append(frameRef)
    imagesForVideo.append(tempImage)
}
generator.generateCGImagesAsynchronouslyForTimes(timesArray, completionHandler: { (_, im: CGImage?, _, _, e: NSError?) in self.addingImages(im) })
and
func addingImages(im: CGImage?) {
    if let img = im {
        imagesForVideoCGI.append(img)
        let justImage = UIImage(CGImage: img)
        imagesForVideo.append(justImage)
    } else {
        print("Fail")
    }
}
What did I do wrong here?
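For what it's worth, a likely culprit (a guess on my part, not confirmed in the thread): by default AVAssetImageGenerator is allowed to snap each requested time to the nearest keyframe, so long runs of requested times resolve to the same image. Zeroing the time tolerances forces exact-frame decoding; a minimal sketch in current Swift, with sourceAsset and numberOfFrames as in the question:

import AVFoundation
import UIKit

let generator = AVAssetImageGenerator(asset: sourceAsset)
// Without these, the generator may return the nearest keyframe,
// which makes many requested times map to the same image.
generator.requestedTimeToleranceBefore = .zero
generator.requestedTimeToleranceAfter = .zero

let times: [NSValue] = (0..<numberOfFrames).map {
    NSValue(time: CMTime(value: CMTimeValue($0), timescale: 30))
}
generator.generateCGImagesAsynchronously(forTimes: times) { _, cgImage, _, result, _ in
    if result == .succeeded, let cgImage = cgImage {
        // Exact-time decoding is slower than keyframe snapping,
        // so expect a performance cost for 255 frames.
        let frame = UIImage(cgImage: cgImage)
        _ = frame // collect frames here, e.g. on a serial queue
    }
}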

I want to release the CVPixelBufferRef in Swift

I want to create a video from images, and I used the link below as a reference:
Link: CVPixelBufferPool Error (kCVReturnInvalidArgument/-6661)
func writeAnimationToMovie(path: String, size: CGSize, animation: Animation) -> Bool {
    var error: NSError?
    let writer = AVAssetWriter(URL: NSURL(fileURLWithPath: path), fileType: AVFileTypeQuickTimeMovie, error: &error)
    let videoSettings = [AVVideoCodecKey: AVVideoCodecH264, AVVideoWidthKey: size.width, AVVideoHeightKey: size.height]
    let input = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
    let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input, sourcePixelBufferAttributes: nil)
    input.expectsMediaDataInRealTime = true
    writer.addInput(input)
    writer.startWriting()
    writer.startSessionAtSourceTime(kCMTimeZero)
    var buffer: CVPixelBufferRef
    var frameCount = 0
    for frame in animation.frames {
        let rect = CGRectMake(0, 0, size.width, size.height)
        let rectPtr = UnsafeMutablePointer<CGRect>.alloc(1)
        rectPtr.memory = rect
        buffer = pixelBufferFromCGImage(frame.image.CGImageForProposedRect(rectPtr, context: nil, hints: nil).takeUnretainedValue(), size)
        var appendOk = false
        var j = 0
        while !appendOk && j < 30 {
            if pixelBufferAdaptor.assetWriterInput.readyForMoreMediaData {
                let frameTime = CMTimeMake(Int64(frameCount), 10)
                appendOk = pixelBufferAdaptor.appendPixelBuffer(buffer, withPresentationTime: frameTime)
                // appendOk will always be false
                NSThread.sleepForTimeInterval(0.05)
            } else {
                NSThread.sleepForTimeInterval(0.1)
            }
            j++
        }
        if !appendOk {
            println("Doh, frame \(frame) at offset \(frameCount) failed to append")
        }
    }
    input.markAsFinished()
    writer.finishWritingWithCompletionHandler({
        if writer.status == AVAssetWriterStatus.Failed {
            println("oh noes, an error: \(writer.error.description)")
        } else {
            println("hrmmm, there should be a movie?")
        }
    })
    return true
}
func pixelBufferFromCGImage(image: CGImageRef, size: CGSize) -> CVPixelBufferRef {
    let options = [
        kCVPixelBufferCGImageCompatibilityKey: true,
        kCVPixelBufferCGBitmapContextCompatibilityKey: true]
    var pixBufferPointer = UnsafeMutablePointer<Unmanaged<CVPixelBuffer>?>.alloc(1)
    let status = CVPixelBufferCreate(
        nil,
        UInt(size.width), UInt(size.height),
        OSType(kCVPixelFormatType_32ARGB),
        options,
        pixBufferPointer)
    CVPixelBufferLockBaseAddress(pixBufferPointer.memory?.takeUnretainedValue(), 0)
    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapinfo = CGBitmapInfo.fromRaw(CGImageAlphaInfo.NoneSkipFirst.toRaw())
    var pixBufferData: UnsafeMutablePointer<Void> = CVPixelBufferGetBaseAddress(pixBufferPointer.memory?.takeUnretainedValue())
    let context = CGBitmapContextCreate(
        pixBufferData,
        UInt(size.width), UInt(size.height),
        8, UInt(4 * size.width),
        rgbColorSpace, bitmapinfo!)
    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0))
    CGContextDrawImage(
        context,
        CGRectMake(0, 0, CGFloat(CGImageGetWidth(image)), CGFloat(CGImageGetHeight(image))),
        image)
    CVPixelBufferUnlockBaseAddress(pixBufferPointer.memory?.takeUnretainedValue(), 0)
    return pixBufferPointer.memory!.takeUnretainedValue()
}
Even after the movie has finished writing, memory remains in use, as shown in the image below. I believe what's left over is the pixel buffers.
In Objective-C I could call CVPixelBufferRelease(buffer) to release a pixel buffer, but I can no longer use that in Swift. How do I release the pixel buffers?
If anyone could help, I'd really appreciate it.
When using CVPixelBufferCreate, the UnsafeMutablePointer has to be destroyed after retrieving its memory.
When I create a CVPixelBuffer, I do it like this:
func allocPixelBuffer() -> CVPixelBuffer {
    let pixelBufferAttributes: CFDictionary = [...]
    let pixelBufferOut = UnsafeMutablePointer<CVPixelBuffer?>.alloc(1)
    _ = CVPixelBufferCreate(kCFAllocatorDefault,
                            Int(Width),
                            Int(Height),
                            OSType(kCVPixelFormatType_32ARGB),
                            pixelBufferAttributes,
                            pixelBufferOut)
    let pixelBuffer = pixelBufferOut.memory!
    pixelBufferOut.destroy()
    return pixelBuffer
}
I had the same problem, but I solved it.
Use autoreleasepool:
var boolWhile = true
while boolWhile {
    autoreleasepool({ () -> () in
        if input.readyForMoreMediaData {
            presentTime = CMTimeMake(Int64(ii), fps)
            if ii >= arrayImages.count {
                ...
Try changing
return pixBufferPointer.memory!.takeUnretainedValue()
to
return pixBufferPointer.memory!.takeRetainedValue()
to avoid leaking CVPixelBuffers
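One more note (mine, not from the answers above): in current Swift, CoreVideo objects such as CVPixelBuffer are managed by ARC, so there is no CVPixelBufferRelease to call and no Unmanaged bookkeeping needed; creating the buffer through an inout optional is enough. A minimal sketch:

import CoreVideo

// ARC releases the buffer when the last reference goes away;
// the attribute keys mirror the question's options dictionary.
func makePixelBuffer(width: Int, height: Int) -> CVPixelBuffer? {
    let attrs: [CFString: Any] = [
        kCVPixelBufferCGImageCompatibilityKey: true,
        kCVPixelBufferCGBitmapContextCompatibilityKey: true
    ]
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     width, height,
                                     kCVPixelFormatType_32ARGB,
                                     attrs as CFDictionary,
                                     &pixelBuffer)
    return status == kCVReturnSuccess ? pixelBuffer : nil
}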
