I'm trying to get all photos from the photo library along with each image's metadata. It works fine for 10-20 images, but when there are 50+ images it occupies too much memory, which causes the app to crash.
Why do I need all the images in an array?
Answer: to send the images to a server app. I'm using GCDAsyncSocket to send data to the receiver socket/port, and I don't have enough waiting time to request images from PHAsset while sending images on the socket/port.
My code:
+ (void)getPhotosDataFromCamera:(void (^)(NSMutableArray *arrImageData))completionHandler
{
    [PhotosManager checkPhotosPermission:^(bool granted)
    {
        if (granted)
        {
            NSMutableArray *arrImageData = [NSMutableArray new];
            PHFetchResult *result = [PHAsset fetchAssetsWithMediaType:PHAssetMediaTypeImage options:nil];
            NSLog(@"%d", (int)result.count);

            NSArray *arrImages = [result copy];

            //--- If no images.
            if (arrImages.count <= 0)
            {
                completionHandler(nil);
                return;
            }

            __block int index = 1;
            __block BOOL isDone = false;

            for (PHAsset *asset in arrImages)
            {
                [PhotosManager requestMetadata:asset withCompletionBlock:^(UIImage *image, NSDictionary *metadata)
                {
                    @autoreleasepool
                    {
                        NSData *imageData = metadata ? [PhotosManager addExif:image metaData:metadata] : UIImageJPEGRepresentation(image, 1.0f);
                        if (imageData != nil)
                        {
                            [arrImageData addObject:imageData];
                            NSLog(@"Adding images: %i", index);

                            //--- Done adding all images.
                            if (index == arrImages.count)
                            {
                                isDone = true;
                                NSLog(@"Done adding all images with info!!");
                                completionHandler(arrImageData);
                            }
                            index++;
                        }
                    }
                }];
            }
        }
        else
        {
            completionHandler(nil);
        }
    }];
}
typedef void (^PHAssetMetadataBlock)(UIImage *image, NSDictionary *metadata);

+ (void)requestMetadata:(PHAsset *)asset withCompletionBlock:(PHAssetMetadataBlock)completionBlock
{
    PHContentEditingInputRequestOptions *editOptions = [[PHContentEditingInputRequestOptions alloc] init];
    editOptions.networkAccessAllowed = YES;

    [asset requestContentEditingInputWithOptions:editOptions completionHandler:^(PHContentEditingInput *contentEditingInput, NSDictionary *info)
    {
        CIImage *ciImage = [CIImage imageWithContentsOfURL:contentEditingInput.fullSizeImageURL];
        UIImage *image = contentEditingInput.displaySizeImage;
        dispatch_async(dispatch_get_main_queue(), ^{
            completionBlock(image, ciImage.properties);
        });
    }];
}
+ (NSData *)addExif:(UIImage *)toImage metaData:(NSDictionary *)container
{
    NSData *imageData = UIImageJPEGRepresentation(toImage, 1.0f);

    // Create an image source ref.
    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);

    // This is the type of image (e.g., public.jpeg).
    CFStringRef UTI = CGImageSourceGetType(source);

    // Create a new data object and write the new image into it.
    NSMutableData *dest_data = [[NSMutableData alloc] initWithLength:imageData.length + 2000];
    CGImageDestinationRef destination = CGImageDestinationCreateWithData((__bridge CFMutableDataRef)dest_data, UTI, 1, NULL);
    if (!destination) {
        NSLog(@"Error: Could not create image destination");
        CFRelease(source);
        return nil;
    }

    // Add the image contained in the image source to the destination,
    // overriding the old metadata with our modified metadata.
    CGImageDestinationAddImageFromSource(destination, source, 0, (__bridge CFDictionaryRef)container);

    BOOL success = CGImageDestinationFinalize(destination);
    if (!success) {
        NSLog(@"Error: Could not create data from image destination");
    }

    CFRelease(destination);
    CFRelease(source);
    return dest_data;
}
Well, it's no surprise that you end up in this situation, since each of your images consumes memory and you instantiate and keep all of them in memory. This is not really a correct design approach.
In the end it depends on what you want to do with those images.
What I would suggest is that you keep just the array of your PHAsset objects and request the image only on demand. For example, if you want to show those images in a tableView/collectionView, perform the call to
[PhotosManager requestMetadata:asset withCompletionBlock:^(UIImage *image, NSDictionary *metadata)
directly in the relevant delegate method, as in the sketch below. This way you won't drain the device memory.
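A minimal sketch of that idea, reusing the requestMetadata: method from the question (the "PhotoCell" identifier and the self.assets property holding the PHFetchResult are hypothetical names, and cell-reuse races are ignored for brevity):

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"PhotoCell" forIndexPath:indexPath];
    PHAsset *asset = self.assets[indexPath.row]; // only lightweight PHAssets are kept in memory
    [PhotosManager requestMetadata:asset withCompletionBlock:^(UIImage *image, NSDictionary *metadata) {
        // The image lives only as long as the cell displays it; nothing accumulates in an array.
        cell.imageView.image = image;
    }];
    return cell;
}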
There simply is not enough memory on the phone to load all of the images in the photo library into memory at the same time.
If you want to display the images, then fetch only the images that you need for immediate display. For the rest, keep just the PHAsset. Make sure to discard the images when you don't need them any more.
If you need thumbnails, then fetch only the thumbnails that you need.
If you want to do something with all of the images, like adding a watermark or processing them in some way, then process each image one at a time in a queue, as in the sketch below.
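A minimal sketch of the one-at-a-time approach, assuming an assets array of PHAssets and a hypothetical processImage: hook; the synchronous request on a background queue plus the autorelease pool keeps only one full image alive at a time:

dispatch_queue_t queue = dispatch_queue_create("com.example.imageProcessing", DISPATCH_QUEUE_SERIAL);
dispatch_async(queue, ^{
    for (PHAsset *asset in assets) {
        @autoreleasepool {
            PHImageRequestOptions *options = [PHImageRequestOptions new];
            options.synchronous = YES;          // block this background queue until the image arrives
            options.networkAccessAllowed = YES; // allow iCloud downloads
            [[PHImageManager defaultManager] requestImageForAsset:asset
                                                       targetSize:PHImageManagerMaximumSize
                                                      contentMode:PHImageContentModeDefault
                                                          options:options
                                                    resultHandler:^(UIImage *image, NSDictionary *info) {
                // Process (watermark, encode, send, ...); the pool reclaims the image afterwards.
                [self processImage:image];
            }];
        }
    }
});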
I cannot advise further as your question doesn't state why you need all of the images.
I am working on an app which creates frames out of the recorded video:
var videoFrames: [UIImage] = [UIImage]()

func loadImages() {
    let generator = AVAssetImageGenerator(asset: asset)
    generator.generateCGImagesAsynchronously(forTimes: capturedFrames, completionHandler: { requestedTime, image, actualTime, result, error in
        DispatchQueue.main.async {
            if let image = image {
                self.videoFrames.append(UIImage(cgImage: image))
            }
        }
    })
}
The code works fine for up to roughly 300 images loaded.
When there are more, the app is terminated due to a memory issue. I am fairly new to Swift; how can I debug this further?
Is there a better way to store so many images? Would splitting them into a couple of arrays fix the issue?
My goal is to store thousands of photos (up to 1920x1080) efficiently. Maybe you can recommend a better method?
Write each image to disk and maintain a database with the image name and path.
if let image = image {
    let uiImage = UIImage(cgImage: image)
    let fileURL = URL(fileURLWithPath: ("__file_path__" + "\(actualTime).png"))
    try? uiImage.pngData()?.write(to: fileURL)
    // write the file path and image name to the database
}
I'm adding some code of mine from an old, never published app. It's in Objective-C, but the concepts are still valid. The main difference from the other code posted is that the handler also takes into consideration the orientation of the captured video; of course you must give a value to the orientation variable.
__block int i = 0;
AVAssetImageGeneratorCompletionHandler handler = ^(CMTime requestedTime, CGImageRef im, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error) {
    if (result == AVAssetImageGeneratorSucceeded) {
        NSMutableDictionary *metadata = @{}.mutableCopy;
        [metadata setObject:@(recordingOrientation) forKey:(NSString *)kCGImagePropertyOrientation];
        NSString *path = [mainPath stringByAppendingPathComponent:[NSString stringWithFormat:@"Image_%.5ld.jpg", (long)i]];
        CFURLRef url = (__bridge_retained CFURLRef)[NSURL fileURLWithPath:path];
        CFMutableDictionaryRef metadataImage = (__bridge_retained CFMutableDictionaryRef)metadata;
        CGImageDestinationRef destination = CGImageDestinationCreateWithURL(url, kUTTypeJPEG, 1, NULL);
        CGImageDestinationAddImage(destination, im, metadataImage);
        if (!CGImageDestinationFinalize(destination)) {
            DLog(@"Failed to write image to %@", path);
        }
        else {
            DLog(@"Writing image to %@", path);
        }
        // Balance the Create call and the __bridge_retained casts so nothing leaks per frame.
        CFRelease(destination);
        CFRelease(metadataImage);
        CFRelease(url);
    }
    if (result == AVAssetImageGeneratorFailed) {
        DLog(@"Failed with error: %@ code %ld for CMTime requested %@ and CMTime actual %@", [error localizedDescription], (long)error.code, CFAutorelease(CMTimeCopyDescription(kCFAllocatorDefault, requestedTime)), CFAutorelease(CMTimeCopyDescription(kCFAllocatorDefault, actualTime)));
        DLog(@"Asset %@", videoAsset);
    }
    if (result == AVAssetImageGeneratorCancelled) {
        NSLog(@"Canceled");
    }
    ++i;
};
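To drive that handler, a minimal sketch along these lines should work, assuming videoAsset from the logging above and an NSArray of NSValue-wrapped CMTimes (called times here):

AVAssetImageGenerator *generator = [AVAssetImageGenerator assetImageGeneratorWithAsset:videoAsset];
generator.appliesPreferredTrackTransform = YES;       // bake the track's preferred rotation into the frames
generator.requestedTimeToleranceBefore = kCMTimeZero; // exact-frame extraction; slower but precise
generator.requestedTimeToleranceAfter = kCMTimeZero;
[generator generateCGImagesAsynchronouslyForTimes:times completionHandler:handler];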
I'm creating a photo slideshow app.
My app flow:
the user selects photo assets > (over 100+)
load images from the assets > (at display image size)
set the imageView image, or apply a CIFilter to the image, and slide
Question:
When I create a CIImage object from a CGImage, memory grows very fast, and if the image count is over 100 the app crashes.
Strangely, if I remove the creation code, the app works fine.
Can anybody help me?
More code:
- (void)loadDisplayImagesWithCompletion:(void (^)(NSArray *images))completion {
    dispatch_async(dispatch_queue_create("PhotosEditViewController_loadImageQueue", nil), ^{
        __block NSMutableArray *images = [NSMutableArray array];
        __block int handleCount = 0;
        __weak PhotosEditViewController *weakSelf = self;
        for (PHAsset *asset in self.photoAssets) {
            [KPPhotoManager requestImageForAsset:asset targetSize:self.contentView.frame.size completeBlock:^(UIImage *image) {
                [images addObject:image];
                @autoreleasepool {
                    CIImage *ciImage = [CIImage imageWithCGImage:[image CGImage]]; // memory growing up
                }
            }];
        }
    });
}

// KPPhotoManager
+ (void)requestImageForAsset:(PHAsset *)asset targetSize:(CGSize)size completeBlock:(void (^)(UIImage *image))completeBlock
{
    [[PHImageManager defaultManager] requestImageForAsset:asset targetSize:size contentMode:PHImageContentModeAspectFill options:[self createImageRequestOptions] resultHandler:^(UIImage * _Nullable result, NSDictionary * _Nullable info) {
        completeBlock(result);
    }];
}
I created a new project copying your code. I increased the number of calls to 10,000 and, inspecting the memory, all looks well. It does jump a bit but does not inflate.
I also tried to force the image load using:
for (int i = 0; i < 10000; i++) {
    @autoreleasepool {
        UIImage *image = [UIImage imageWithContentsOfFile:[[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"image2.png"]];
        [self createFilterImageWithImage:image];
    }
}
and it still does not inflate the memory.
How are you diagnosing your memory leak? Are you looking at the memory consumption? Are there any specifics in your project or file, such as ARC being disabled?
More specifically, how can you know whether a PHAsset's current version of the underlying asset differs from the original?
My user should only need to choose between the current or original asset when necessary, and then I need their answer for PHImageRequestOptions.version.
As of iOS 16, PHAsset has a hasAdjustments property which indicates if the asset has been edited.
For previous releases, you can get an array of data resources for a given asset via PHAssetResource API - it will have an adjustment data resource if that asset has been edited.
let isEdited = PHAssetResource.assetResources(for: asset).contains(where: { $0.type == .adjustmentData })
Note that if you want to actually work with a resource file, you have to fetch its data using a PHAssetResourceManager API. Also note that this method returns right away - there's no waiting for an async network request, unlike other answers here.
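If you actually need the adjustment data's contents, a sketch of fetching it with PHAssetResourceManager might look like this (Objective-C here to match the rest of the thread; the asset variable is assumed):

PHAssetResource *adjustment = nil;
for (PHAssetResource *resource in [PHAssetResource assetResourcesForAsset:asset]) {
    if (resource.type == PHAssetResourceTypeAdjustmentData) {
        adjustment = resource;
        break;
    }
}
if (adjustment) {
    NSMutableData *data = [NSMutableData data];
    [[PHAssetResourceManager defaultManager] requestDataForAssetResource:adjustment
                                                                 options:nil
                                                     dataReceivedHandler:^(NSData *chunk) {
        [data appendData:chunk]; // data is delivered incrementally
    } completionHandler:^(NSError *error) {
        if (!error) {
            // data now holds the serialized adjustment payload
        }
    }];
}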
I have found two ways of checking a PHAsset for modifications.
- (void)tb_checkForModificationsWithEditingInputMethodCompletion:(void (^)(BOOL))completion {
    PHContentEditingInputRequestOptions *options = [PHContentEditingInputRequestOptions new];
    options.canHandleAdjustmentData = ^BOOL(PHAdjustmentData *adjustmentData) { return YES; };
    [self requestContentEditingInputWithOptions:options completionHandler:^(PHContentEditingInput *contentEditingInput, NSDictionary *info) {
        if (completion) completion(contentEditingInput.adjustmentData != nil);
    }];
}

- (void)tb_checkForModificationsWithAssetPathMethodCompletion:(void (^)(BOOL))completion {
    PHVideoRequestOptions *options = [PHVideoRequestOptions new];
    options.deliveryMode = PHVideoRequestOptionsDeliveryModeFastFormat;
    [[PHImageManager defaultManager] requestAVAssetForVideo:self options:options resultHandler:^(AVAsset *asset, AVAudioMix *audioMix, NSDictionary *info) {
        if (completion) completion([[asset description] containsString:@"/Mutations/"]);
    }];
}
EDIT: I reached the point where I needed the same functionality for a PHAsset with an image. I used this:
- (void)tb_checkForModificationsWithAssetPathMethodCompletion:(void (^)(BOOL))completion {
    [self requestContentEditingInputWithOptions:nil completionHandler:^(PHContentEditingInput *contentEditingInput, NSDictionary *info) {
        NSString *path = (contentEditingInput.avAsset) ? [contentEditingInput.avAsset description] : contentEditingInput.fullSizeImageURL.path;
        completion([path containsString:@"/Mutations/"]);
    }];
}
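Since these are category methods on PHAsset, calling them is straightforward; for example, with some PHAsset in hand:

[asset tb_checkForModificationsWithEditingInputMethodCompletion:^(BOOL modified) {
    NSLog(@"Asset %@ been modified", modified ? @"has" : @"has not");
}];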
Take a look at PHImageRequestOptionsVersion:
PHImageRequestOptionsVersionCurrent
Request the most recent version of the image asset (the one that reflects all edits). The resulting image is the rendered output from all previously made adjustments.
PHImageRequestOptionsVersionUnadjusted
Request a version of the image asset without adjustments. If the asset has been edited, the resulting image reflects the state of the asset before any edits were performed.
PHImageRequestOptionsVersionOriginal
Request the original, highest-fidelity version of the image asset. The resulting image is the originally captured or imported version of the asset, regardless of any edits made.
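A small sketch of feeding the chosen constant into a request (the asset variable is assumed):

PHImageRequestOptions *options = [PHImageRequestOptions new];
options.version = PHImageRequestOptionsVersionUnadjusted; // or ...Current / ...Original, per the user's choice
[[PHImageManager defaultManager] requestImageDataForAsset:asset
                                                  options:options
                                            resultHandler:^(NSData *imageData, NSString *dataUTI, UIImageOrientation orientation, NSDictionary *info) {
    // imageData reflects the requested version of the asset
}];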
If you ask the user before retrieving assets, you know which version the user specified. If you get a PHAsset from elsewhere, you can use revertAssetContentToOriginal to get the original asset back. PHAsset also has modificationDate and creationDate properties; you can compare them to tell whether a PHAsset has been modified.
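For reference, a hedged sketch of that revert call; note it is a destructive photo-library change (it discards the edits) and may prompt the user for permission:

[[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
    // Reverting is modeled as a change request inside a photo library change block.
    PHAssetChangeRequest *request = [PHAssetChangeRequest changeRequestForAsset:asset];
    [request revertAssetContentToOriginal];
} completionHandler:^(BOOL success, NSError *error) {
    if (!success) NSLog(@"Revert failed: %@", error);
}];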
I found this to be the only code working for now, and it handles most of the edge cases. It may not be the fastest, but it works well for most image types. It takes the smallest possible original and modified images and compares their data content.
@implementation PHAsset (Utilities)

- (void)checkEditingHistoryCompletion:(void (^)(BOOL edited))completion
{
    PHImageManager *manager = [PHImageManager defaultManager];
    CGSize compareSize = CGSizeMake(64, 48);

    PHImageRequestOptions *requestOptions = [PHImageRequestOptions new];
    requestOptions.synchronous = YES;
    requestOptions.deliveryMode = PHImageRequestOptionsDeliveryModeFastFormat;
    requestOptions.version = PHImageRequestOptionsVersionOriginal;

    [manager requestImageForAsset:self
                       targetSize:compareSize
                      contentMode:PHImageContentModeAspectFit
                          options:requestOptions
                    resultHandler:^(UIImage *originalResult, NSDictionary *info) {
        UIImage *currentImage = originalResult;
        requestOptions.version = PHImageRequestOptionsVersionCurrent;
        [manager requestImageForAsset:self
                           targetSize:currentImage.size
                          contentMode:PHImageContentModeAspectFit
                              options:requestOptions
                        resultHandler:^(UIImage *currentResult, NSDictionary *info) {
            NSData *currData = UIImageJPEGRepresentation(currentResult, 0.1);
            NSData *orgData = UIImageJPEGRepresentation(currentImage, 0.1);
            if (completion) {
                // Handle the case when neither image can be retrieved; that also means no edits.
                if ((currData == nil) && (orgData == nil)) {
                    completion(NO);
                    return;
                }
                completion(([currData isEqualToData:orgData] == NO));
            }
        }];
    }];
}

@end
I am trying to create GIF animated images, for which I pass an array of images.
Let's say I have a 4-second video and I extract about 120 frames from it. Regardless of the resulting GIF size, I create a GIF from all 120 frames. The problem is, when I open the GIF on the iPhone (by attaching it to MailViewComposer or iMessage) it runs fine, but if I email it or import it to a computer, it runs too fast. Can anyone suggest what is wrong here?
I am using HJImagesToGIF for GIF creation. The dictionary for the GIF properties is as below:
NSDictionary *prep = [NSDictionary dictionaryWithObject:[NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.03f]
                                                                                    forKey:(NSString *)kCGImagePropertyGIFDelayTime]
                                                 forKey:(NSString *)kCGImagePropertyGIFDictionary];

NSDictionary *fileProperties = @{
    (__bridge id)kCGImagePropertyGIFDictionary: @{
        (__bridge id)kCGImagePropertyGIFLoopCount: @0, // 0 means loop forever
    }
};
Creation of the GIF:
CFURLRef url = (__bridge CFURLRef)[NSURL fileURLWithPath:path];
CGImageDestinationRef dst = CGImageDestinationCreateWithURL(url, kUTTypeGIF, [images count], nil);
CGImageDestinationSetProperties(dst, (__bridge CFDictionaryRef)fileProperties);

for (int i = 0; i < [images count]; i++)
{
    // Load anImage from the array.
    UIImage *anImage = [images objectAtIndex:i];
    CGImageDestinationAddImage(dst, anImage.CGImage, (__bridge CFDictionaryRef)prep);
}

bool fileSave = CGImageDestinationFinalize(dst);
CFRelease(dst);

if (fileSave) {
    NSLog(@"animated GIF file created at %@", path);
} else {
    NSLog(@"error: no animated GIF file created at %@", path);
}
To save the GIF, I'm using:
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
NSData *data = [NSData dataWithContentsOfFile:tempPath];
[library writeImageDataToSavedPhotosAlbum:data metadata:nil completionBlock:^(NSURL *assetURL, NSError *error) {
    if (error) {
        NSLog(@"Error Saving GIF to Photo Album: %@", error);
    } else {
        // TODO: success handling
        NSLog(@"GIF Saved to %@", assetURL);
    }
}];
Thanks everyone.
High level, here is what I am trying to do in my iOS app:
Allow the user to select an asset from their Photo Library
Share a URL which entities external to my app can use to request that same asset
My app runs a webserver (CocoaHTTPServer), and it serves that data up to the requesting entity
I intentionally implemented this using ALAssetRepresentation getBytes:fromOffset:length:error:, thinking that I could avoid the following headaches:
writing a local copy of the image before serving it up (and then having to manage the local copies)
putting all of the image data into memory
potentially slow performance from waiting for the whole image before serving any of it
It works at a high level, with one problem: the orientation of the image is not right in some cases.
This is a well known issue, but the solutions generally seem to require creating a CGImage or UIImage from the ALAssetRepresentation. As soon as I do that, I lose the ability to use the handy ALAssetRepresentation getBytes:fromOffset:length:error: method.
It would be great if there were a way to keep using this method but with the corrected orientation. If not, I would appreciate any recommendations on a next-best approach. Thanks!
Here are a couple of relevant methods:
- (id)initWithAssetURL:(NSURL *)assetURL forConnection:(MyHTTPConnection *)parent
{
    if ((self = [super init]))
    {
        HTTPLogTrace();

        // Need this now so we can use it throughout the class.
        self.connection = parent;
        self.assetURL = assetURL;
        offset = 0;

        // Grab a pointer to the ALAssetsLibrary, which we can use persistently.
        self.lib = [[ALAssetsLibrary alloc] init];

        // We really need to know the size of the asset, and we also set a property for the
        // ALAssetRepresentation to use later; otherwise we would have to asynchronously
        // look up the asset again via assetForURL.
        [self.lib assetForURL:assetURL
                  resultBlock:^(ALAsset *asset) {
                      self.assetRepresentation = [asset defaultRepresentation];
                      self.assetSize = [self.assetRepresentation size];
                      // Even though we don't really have any data yet, this will enable the response headers to be sent.
                      // It will call our delayResponseHeaders method, which will now return NO, since we've set self.repr.
                      [self.connection responseHasAvailableData:self];
                  }
                 failureBlock:^(NSError *error) {
                     // Recover from the error, then:
                     NSLog(@"error in the failureBlock: %@", [error localizedDescription]);
                 }];
    }
    return self;
}
- (NSData *)readDataOfLength:(NSUInteger)lengthParameter
{
    HTTPLogTrace2(@"%@[%p]: readDataOfLength:%lu", THIS_FILE, self, (unsigned long)lengthParameter);

    NSUInteger remaining = self.assetSize - offset;
    NSUInteger length = lengthParameter < remaining ? lengthParameter : remaining;

    Byte *buffer = (Byte *)malloc(length);

    NSError *error = nil;
    NSUInteger buffered = [self.assetRepresentation getBytes:buffer fromOffset:offset length:length error:&error];
    if (error) {
        NSLog(@"Error: %@", [error localizedDescription]);
    }
    NSData *data = [NSData dataWithBytesNoCopy:buffer length:buffered freeWhenDone:YES];

    // Now that we've read data of this length, update the offset for the next invocation.
    offset += length;
    return data;
}