Storing images in NSMutableArray using GCD - ios

I am trying to store images from a remote location to an NSMutableArray using a GCD block. The following code is being called in viewDidLoad, and the images are to be populated in a UICollectionView:
dispatch_apply(self.count, dispatch_get_global_queue(0, 0), ^(size_t i){
    NSString *strURL = [NSString stringWithFormat:@"%@%zu%@", @"http://theURL.com/popular/", i, @".jpg"];
    NSURL *imageURL = [NSURL URLWithString:strURL];
    NSData *imageData = [NSData dataWithContentsOfURL:imageURL];
    UIImage *oneImage = [UIImage imageWithData:imageData];
    if (oneImage != nil) {
        [self.imageArray addObject:oneImage];
    }
});
PROBLEM: the images are not stored in order.
e.g. [self.imageArray objectAtIndex:2] is not 2.jpg.
Even though the first and last images come out correct, the rest are all jumbled up.
Another way to do this (what I basically need, minus the time and memory overhead):
for (int i = 0; i < [TMAPopularImageManager sharedInstance].numberOfImages; i++) {
    NSString *strURL = [NSString stringWithFormat:@"%@%d%@", @"http://theURL.com/popular/", i, @".jpg"];
    NSURL *imageURL = [NSURL URLWithString:strURL];
    NSData *imageData = [NSData dataWithContentsOfURL:imageURL];
    UIImage *oneImage = [UIImage imageWithData:imageData];
    if (oneImage != nil) {
        [self.imageArray addObject:oneImage];
    }
}
Is there a better way to implement the GCD block in this case? I need the images stored in the array in sequential filename order.

I ran this about 10 times and it did not print in order once:
NSInteger iterations = 10;
dispatch_apply(iterations, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(size_t index) {
    NSLog(@"%zu", index);
});
My suggestion, and what I have done in the past, is to run a plain for loop inside a block dispatched to a background queue:
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(queue, ^{
    int iterations = 0; // set this to however many iterations you need
    for (int x = 0; x < iterations; ++x) {
        // Do stuff here
    }
    dispatch_async(dispatch_get_main_queue(), ^{
        // Hand content generated on the background thread to the main thread
    });
});
When using background threads, it is important to make sure that either the objects you initialized on the main thread are thread-safe, or that you initialize objects on the background thread and then assign them to the main thread's objects, as in the example above. This is especially true with Core Data.
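For instance, with Core Data the same shape might look like the following sketch, using a private-queue context (a sketch only: the "Photo" entity, its "filename" attribute, and the persistentStoreCoordinator property are all hypothetical; requires CoreData.framework):
NSManagedObjectContext *background =
    [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
background.persistentStoreCoordinator = self.persistentStoreCoordinator; // hypothetical property
[background performBlock:^{
    // objects created here are confined to the background context's queue
    NSManagedObject *photo = [NSEntityDescription insertNewObjectForEntityForName:@"Photo"
                                                           inManagedObjectContext:background];
    [photo setValue:@"0.jpg" forKey:@"filename"];
    NSError *error = nil;
    if (![background save:&error]) {
        NSLog(@"save failed: %@", error);
    }
    // a main-queue context can then merge the save notification or refetch
}];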
EDIT:
It seems the dispatch_apply iterations run concurrently, so they will probably execute out of order whenever they do anything meaningful. If you run these two, you will see that printf always runs in order but NSLog does not:
NSInteger iterations = 10;
dispatch_apply(iterations, the_queue, ^(size_t idx) {
    printf("%zu\n", idx);
});
dispatch_apply(iterations, the_queue, ^(size_t idx) {
    NSLog(@"%zu\n", idx);
});
In my opinion, it would be best to run a standard for statement in a background thread rather than dispatch_apply if the order is important.
EDIT 2:
This would be your implementation:
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(queue, ^{
    // use your real count here, per the code in the question
    int iterations = (int)[TMAPopularImageManager sharedInstance].numberOfImages;
    NSMutableArray *backgroundThreadImages = [NSMutableArray array];
    for (int i = 0; i < iterations; ++i) {
        NSString *strURL = [NSString stringWithFormat:@"%@%i%@", @"http://theURL.com/popular/", i, @".jpg"];
        NSURL *imageURL = [NSURL URLWithString:strURL];
        NSData *imageData = [NSData dataWithContentsOfURL:imageURL];
        UIImage *oneImage = [UIImage imageWithData:imageData];
        if (oneImage != nil) {
            [backgroundThreadImages addObject:oneImage];
        }
    }
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageArray = backgroundThreadImages;
    });
});

Not sure if applicable to your situation, but if the images are going to be displayed in a UICollectionView, it is better to load them when required. That is, when the delegate methods (cellFor...) are invoked.
That way you only load what is needed, when it's needed.
If you need to have the images in an array for some other purpose, ignore this answer.
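For reference, a minimal sketch of that load-on-demand approach (the reuse identifier, the image view tag, and the URL pattern are assumptions based on the question):
- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView
                  cellForItemAtIndexPath:(NSIndexPath *)indexPath {
    UICollectionViewCell *cell =
        [collectionView dequeueReusableCellWithReuseIdentifier:@"ImageCell"
                                                  forIndexPath:indexPath];
    UIImageView *imageView = (UIImageView *)[cell viewWithTag:1]; // hypothetical tag
    imageView.image = nil; // clear any recycled image
    NSString *strURL = [NSString stringWithFormat:@"http://theURL.com/popular/%ld.jpg",
                        (long)indexPath.item];
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:strURL]];
        UIImage *image = data ? [UIImage imageWithData:data] : nil;
        dispatch_async(dispatch_get_main_queue(), ^{
            // only set the image if the cell is still showing this index path
            if (image && [[collectionView indexPathForCell:cell] isEqual:indexPath]) {
                imageView.image = image;
            }
        });
    });
    return cell;
}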

Two ways: sort the array afterwards, or pre-fill it with placeholders and assign by index:
// assume self.imageArray is empty
for (int i = 0; i < self.count; ++i) // fill the array with NSNull placeholders
    [self.imageArray addObject:[NSNull null]];
dispatch_apply(self.count, dispatch_get_global_queue(0, 0), ^(size_t i){
    NSString *strURL = [NSString stringWithFormat:@"%@%zu%@", @"http://theURL.com/popular/", i, @".jpg"];
    NSURL *imageURL = [NSURL URLWithString:strURL];
    NSData *imageData = [NSData dataWithContentsOfURL:imageURL];
    UIImage *oneImage = [UIImage imageWithData:imageData];
    if (oneImage != nil) {
        self.imageArray[i] = oneImage;
    }
});
I am not sure it is thread-safe to mutate an NSMutableArray this way; you may need
@synchronized(self.imageArray) {
    self.imageArray[i] = oneImage;
}
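Putting the two pieces together, a rough sketch (run it off the main thread, since dispatch_apply blocks until all iterations finish; the cleanup of failed downloads at the end is an addition):
// assumes self.count and self.imageArray as in the question
NSMutableArray *images = [NSMutableArray arrayWithCapacity:self.count];
for (NSUInteger i = 0; i < self.count; i++) {
    [images addObject:[NSNull null]]; // placeholder reserves slot i
}
dispatch_apply(self.count, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(size_t i) {
    NSString *strURL = [NSString stringWithFormat:@"http://theURL.com/popular/%zu.jpg", i];
    NSData *imageData = [NSData dataWithContentsOfURL:[NSURL URLWithString:strURL]];
    UIImage *oneImage = imageData ? [UIImage imageWithData:imageData] : nil;
    if (oneImage != nil) {
        @synchronized(images) {   // NSMutableArray is not documented as thread-safe
            images[i] = oneImage; // each block writes a distinct index
        }
    }
});
// dispatch_apply is synchronous: every block has finished by this point
[images removeObjectIdenticalTo:[NSNull null]]; // drop slots whose download failed
self.imageArray = images;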

Related

UI Gets Blocked When Using CGImageSourceCreateWithURL and CGImageSourceCopyPropertiesAtIndex

I am checking the dimensions of images via their URLs stored in an array called destination_urls. I then calculate the height of images if they were stretched to the width of the screen while maintaining the aspect ratio. All of this is done in a for loop.
When the code is run, the app UI gets stuck during the for loop. How can I revise the code to make sure the app doesn't freeze?
__block int t = 0; // __block so the counter can be mutated inside the block
int arraycount = [_destination_urls count];
dispatch_async(dispatch_get_main_queue(), ^{
    for (int i = 0; i < arraycount; i++) {
        NSURL *imageURL = [NSURL URLWithString:[_destination_urls[i] stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding]];
        CGImageSourceRef imgSource = CGImageSourceCreateWithURL((__bridge CFURLRef)imageURL, NULL);
        NSDictionary *imageProps = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(imgSource, 0, NULL);
        NSString *imageHeighttoDisplay = [imageProps objectForKey:@"PixelHeight"];
        originalimageHeight = [imageHeighttoDisplay intValue];
        NSString *imageWidthtoDisplay = [imageProps objectForKey:@"PixelWidth"];
        originalimageWidth = [imageWidthtoDisplay intValue];
        if (imgSource) {
            _revisedImageURLHeight[t] = [NSNumber numberWithInt:(screenWidth)*(originalimageHeight/originalimageWidth)];
            t = t + 1;
            CFRelease(imgSource);
        }
    }
});
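The usual remedy, sketched here under the assumption that _revisedImageURLHeight is an NSMutableArray, is to run the loop on a background queue and hop back to the main queue only to publish the results. This also avoids the integer division in the original height calculation, which truncates the aspect ratio:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSMutableArray *heights = [NSMutableArray array];
    for (NSString *urlString in _destination_urls) {
        NSURL *imageURL = [NSURL URLWithString:
            [urlString stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding]];
        CGImageSourceRef imgSource = CGImageSourceCreateWithURL((__bridge CFURLRef)imageURL, NULL);
        if (!imgSource) continue; // check before using the source
        NSDictionary *imageProps = (__bridge_transfer NSDictionary *)
            CGImageSourceCopyPropertiesAtIndex(imgSource, 0, NULL);
        CFRelease(imgSource);
        CGFloat height = [imageProps[@"PixelHeight"] floatValue];
        CGFloat width  = [imageProps[@"PixelWidth"] floatValue];
        if (width > 0) {
            // floating-point math avoids truncating the aspect ratio to 0 or 1
            [heights addObject:@(screenWidth * height / width)];
        }
    }
    dispatch_async(dispatch_get_main_queue(), ^{
        // publish results on the main thread
        [_revisedImageURLHeight setArray:heights];
    });
});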

I'm getting a memory leak when I fetch an image's EXIF dictionary from an asset object using a static method.

Below are the method and the call tree that cause the leak:
// get the EXIF info of the image assets in the background
@autoreleasepool {
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    self.dataSource = [NSMutableArray new];
    __weak typeof(self) weakSelf = self;
    dispatch_async(queue, ^{
        for (int i = 0; i < self.selectImageList.count; i++) {
            PFALAssetImageItemData *dataEntity = weakSelf.selectImageList[i];
            // getting the object using an image asset
            ImageExifInfoEntity *imageExifEntity = [ImageExifInfoEntity getAlbumImageFromAsset:dataEntity.imageAsset imageOryder:i];
            LOG(@"%@", imageExifEntity.description);
            [weakSelf.dataSource addObject:imageExifEntity];
        }
        // back on the main thread, update views
        dispatch_async(dispatch_get_main_queue(), ^{
            [self.collectionView reloadData];
            [self hideHud];
        });
    });
}
In this code I want to create an ImageExifInfoEntity from an asset, on a background thread, using a static method:
[ImageExifInfoEntity getAlbumImageFromAsset:dataEntity.imageAsset imageOryder:i];
That method creates a new object of the ImageExifInfoEntity type and gets the EXIF dictionary using another static method:
+ (ImageExifInfoEntity *)getAlbumImageFromAsset:(ALAsset *)asset category:(NSString *)category imageOryder:(NSInteger)imageOrder {
    ImageExifInfoEntity *albumImage = [ImageExifInfoEntity new];
    ..........
    albumImage.imageSize = [UIImage imageSizeWithAlasset:asset];
    albumImage.exifDic = [ImageExifInfoEntity getExifInfoFromAsset:asset] == nil ? @{} : [ImageExifInfoEntity getExifInfoFromAsset:asset];
    ..........
}
Finally, I get the EXIF dictionary using this method, which is where the leak happens:
+ (NSDictionary *)getExifInfoFromAsset:(ALAsset *)asset {
    NSDictionary *_imageProperty;
    __weak ALAsset *tempAsset = asset;
    ALAssetRepresentation *representation = tempAsset.defaultRepresentation;
    uint8_t *buffer = (uint8_t *)malloc(representation.size);
    NSError *error;
    NSUInteger length = [representation getBytes:buffer fromOffset:0 length:representation.size error:&error];
    NSData *data = [NSData dataWithBytes:buffer length:length];
    CGImageSourceRef cImageSource = CGImageSourceCreateWithData((__bridge CFDataRef)data, NULL);
    CFDictionaryRef imageProperties = CGImageSourceCopyPropertiesAtIndex(cImageSource, 0, NULL);
    _imageProperty = (__bridge_transfer NSDictionary *)imageProperties;
    free(buffer);
    NSLog(@"image property: %@", _imageProperty);
    return _imageProperty;
}
(Instruments screenshots omitted: the call tree, and the method flagged as the source of the leak.)
CGImageSourceCreateWithData() is a creation function.
CGImageSourceRef cImageSource = CGImageSourceCreateWithData((__bridge CFDataRef)data, NULL);
According to the CF memory rules, you have to release it with CFRelease().
Alternatively, replace __bridge with __bridge_transfer so that ARC will take charge of freeing the memory.
I think you should call CFRelease(cImageSource); to release the image source. See document: https://developer.apple.com/library/ios/documentation/GraphicsImaging/Reference/CGImageSource/#//apple_ref/c/func/CGImageSourceCreateWithData
An image source. You are responsible for releasing this object using CFRelease.
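Putting those suggestions together, a patched version of the method might look like this sketch (same logic as the original; dataWithBytesNoCopy: hands buffer ownership to the NSData, which removes the separate free() call):
+ (NSDictionary *)getExifInfoFromAsset:(ALAsset *)asset {
    ALAssetRepresentation *representation = asset.defaultRepresentation;
    uint8_t *buffer = (uint8_t *)malloc((size_t)representation.size);
    if (buffer == NULL) {
        return nil;
    }
    NSError *error = nil;
    NSUInteger length = [representation getBytes:buffer
                                      fromOffset:0
                                          length:(NSUInteger)representation.size
                                           error:&error];
    // NSData takes ownership of buffer and frees it on deallocation
    NSData *data = [NSData dataWithBytesNoCopy:buffer length:length freeWhenDone:YES];
    CGImageSourceRef imageSource = CGImageSourceCreateWithData((__bridge CFDataRef)data, NULL);
    if (imageSource == NULL) {
        return nil;
    }
    NSDictionary *imageProperty = (__bridge_transfer NSDictionary *)
        CGImageSourceCopyPropertiesAtIndex(imageSource, 0, NULL);
    CFRelease(imageSource); // balance the Create function's +1 reference
    return imageProperty;
}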

Prevent loop from looping until inner loop writes to disk

I have a nested for loop in which I call a getSnapshotData method many times and write the data to disk. I've noticed too much memory builds up doing this and I run out of memory, so I thought this would be a good use case for a dispatch semaphore.
I'm still running out of memory, so I'm not sure I'm using the semaphore properly. Essentially I want each loop iteration to wait until the prior iteration's data is written to disk, as I think that will free the memory. But I could be wrong. Thank you for your help.
Code:
dispatch_semaphore_t sema = dispatch_semaphore_create(0);
for (NSDictionary *sub in self.array)
{
    NSArray *lastArray = [sub objectForKey:@"LastArray"];
    for (NSDictionary *dict in lastArray)
    {
        currentIndex++;
        NSData *frame = [NSData dataWithData:[self getSnapshotData]];
        savePath = [NSString stringWithFormat:@"%@/%lu.png", frameSourcePath, (unsigned long)currentIndex];
        BOOL nextLoop = [frame writeToFile:savePath options:0 error:nil];
        frame = nil;
        if (nextLoop)
        {
            dispatch_semaphore_signal(sema);
        }
        dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);
    }
}
- (NSData *)getSnapshotData
{
    UIGraphicsBeginImageContextWithOptions(self.containerView.bounds.size, NO, 0.0);
    [self.containerView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapShot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return [NSData dataWithData:UIImagePNGRepresentation(snapShot)];
}
You have too many autoreleased objects. The semaphore, as written, accomplishes nothing: each iteration signals and then immediately waits, consuming its own signal, so the wait never blocks. Add an autorelease pool instead to improve the situation:
for (NSDictionary *sub in self.array)
{
    NSArray *lastArray = [sub objectForKey:@"LastArray"];
    for (NSDictionary *dict in lastArray)
    {
        @autoreleasepool {
            currentIndex++;
            NSData *frame = [NSData dataWithData:[self getSnapshotData]];
            savePath = [NSString stringWithFormat:@"%@/%lu.png", frameSourcePath, (unsigned long)currentIndex];
            BOOL nextLoop = [frame writeToFile:savePath options:0 error:nil];
        }
    }
}

Load images asynchronously into an array using GCD

I'm trying to use the panoramaGL framework to show a panorama. The server side returns several tile images for each side of a cube panorama, so I need to load these images asynchronously into an array, and afterwards build one big side texture from them. The code for the first part of this task is:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
    int columnCount;
    int rowCount;
    NSString *side;
    if ([level isEqual:@"3"]) {
        columnCount = 1;
        rowCount = 1;
    } else if ([level isEqual:@"2"]) {
        columnCount = 2;
        rowCount = 2;
    ... here I prepare the url string parameters
    if (face == PLCubeFaceOrientationFront)
        side = @"F";
    else if (face == PLCubeFaceOrientationBack)
        side = @"B";
    ... here I prepare the url string parameters
    self.tileArray = [[NSMutableArray alloc] initWithCapacity:columnCount];
    for (int columnIndex = 0; columnIndex < columnCount; columnIndex++) {
        for (int rowIndex = 0; rowIndex < rowCount; rowIndex++) {
            NSString *tileUrl = [NSString stringWithFormat:@"%d_%d", columnIndex, rowIndex];
            NSMutableArray *tileUrlComponents = [NSMutableArray array];
            [tileUrlComponents addObject:panoramaData[@"id"]];
            [tileUrlComponents addObject:side];
            [tileUrlComponents addObject:level];
            [tileUrlComponents addObject:tileUrl];
            NSString *tileIdString = [tileUrlComponents componentsJoinedByString:@"/"];
            NSString *panoramaIdUrlString = [NSString stringWithFormat:@"http://SOMEURL=%@", tileIdString];
            NSURL *url = [NSURL URLWithString:panoramaIdUrlString];
            NSURLRequest *req = [NSURLRequest requestWithURL:url];
            [NSURLConnection sendAsynchronousRequest:req queue:[NSOperationQueue currentQueue] completionHandler:^(NSURLResponse *res, NSData *data, NSError *err) {
                UIImage *image = [UIImage imageWithData:data];
                dispatch_async(dispatch_get_main_queue(), ^{
                    NSMutableArray *array = [self.tileArray objectAtIndex:columnIndex];
                    [array replaceObjectAtIndex:rowIndex withObject:image];
                    [self.tileArray addObject:array];
                });
            }];
            NSLog(@"The URL for the %@ tile is %@", tileUrl, tileIdString);
        }
    }
});
The main question: how can I tell that the array has finished loading, so that I can start working with the images in it? The second problem is that right now, unfortunately, I get an empty array. Any help?
You can do something like this
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0ul);
dispatch_async(queue, ^{
    dispatch_sync(dispatch_get_main_queue(), ^{
        NSData *data = [[NSData alloc] initWithContentsOfURL:ImageURL];
        UIImage *image = [[UIImage alloc] initWithData:data];
        [UIImageJPEGRepresentation(image, 100) writeToFile:imagePath atomically:YES];
    });
});
where imagePath is a path in your application's directory. You can make a temporary directory and store the images there, and when retrieving them you can just use the simple method:
UIImage *image = [UIImage imageWithContentsOfFile:imagePath];
It worked for me, just give it a try.
I am not sure whether you considered using operations. This is how I would do it:
Create an NSOperation for each request (make sure in your code the operation stays alive until it has finished loading).
Add each operation to an NSOperationQueue (by calling -operations you can see how many operations are in progress).
If you run the NSOperationQueue on a background thread, you can use -waitUntilAllOperationsAreFinished (this method blocks the current thread until all your images have been downloaded) and then execute your code to display the images. You can alternatively collect all the NSOperations in an NSArray and enqueue them with -(void)addOperations:(NSArray *)ops waitUntilFinished:(BOOL)wait.
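Another option the answers do not mention is a dispatch group, which answers the "how do I know the array is loaded" part directly. A rough sketch, assuming the per-tile URLs are collected into a tileURLs array built as in the question (buildTextureFromTiles: is a hypothetical method):
dispatch_group_t group = dispatch_group_create();
NSMutableDictionary *tiles = [NSMutableDictionary dictionary]; // keyed by URL string
for (NSURL *url in tileURLs) { // tileURLs built as in the question
    dispatch_group_enter(group);
    [NSURLConnection sendAsynchronousRequest:[NSURLRequest requestWithURL:url]
                                       queue:[NSOperationQueue mainQueue]
                           completionHandler:^(NSURLResponse *res, NSData *data, NSError *err) {
        UIImage *image = data ? [UIImage imageWithData:data] : nil;
        if (image) {
            tiles[url.absoluteString] = image; // main queue: no locking needed
        }
        dispatch_group_leave(group);
    }];
}
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    // every completion handler has run; all tiles have loaded (or failed)
    [self buildTextureFromTiles:tiles];
});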

Calling imageWithData:UIImageJPEGRepresentation() multiple times only compresses image the first time

In order to prevent lagging in my app, I'm trying to compress images larger than 1 MB (mostly pics taken with the iPhone's normal camera).
UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
NSData *imageSize = UIImageJPEGRepresentation(image, 1);
NSLog(@"original size %u", [imageSize length]);
UIImage *image2 = [UIImage imageWithData:UIImageJPEGRepresentation(image, 0)];
NSData *newImageSize = UIImageJPEGRepresentation(image2, 1);
NSLog(@"new size %u", [newImageSize length]);
UIImage *image3 = [UIImage imageWithData:UIImageJPEGRepresentation(image2, 0)];
NSData *newImageSize2 = UIImageJPEGRepresentation(image3, 1);
NSLog(@"new size %u", [newImageSize2 length]);
picView = [[UIImageView alloc] initWithImage:image3];
However, the NSLog I get outputs something along the lines of
original size 3649058
new size 1835251
new size 1834884
The difference between the first and second compressions is almost negligible. My goal is to get the image size below 1 MB. Did I overlook something, or is there an alternative approach to achieve this?
EDIT: I want to avoid scaling the image's height and width, if possible.
A couple of thoughts:
The UIImageJPEGRepresentation function does not return the "original" image. For example, if you employ a compressionQuality of 1.0, it does not, technically, return the "original" image, but rather it returns a JPEG rendition of the image with compressionQuality at its maximum value. This can actually yield an object that is larger than the original asset (at least if the original image is a JPEG). You're also discarding all of the metadata (information about where the image was taken, the camera settings, etc.) in the process.
If you want the original asset, you should use PHImageManager:
NSURL *url = [info objectForKey:UIImagePickerControllerReferenceURL];
PHFetchResult *result = [PHAsset fetchAssetsWithALAssetURLs:@[url] options:nil];
PHAsset *asset = [result firstObject];
PHImageManager *manager = [PHImageManager defaultManager];
[manager requestImageDataForAsset:asset options:nil resultHandler:^(NSData *imageData, NSString *dataUTI, UIImageOrientation orientation, NSDictionary *info) {
    NSString *filename = [(NSURL *)info[@"PHImageFileURLKey"] lastPathComponent];
    // do what you want with the `imageData`
}];
In iOS versions prior to 8, you'd have to use assetForURL of the ALAssetsLibrary class:
NSURL *url = [info objectForKey:UIImagePickerControllerReferenceURL];
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library assetForURL:url resultBlock:^(ALAsset *asset) {
    ALAssetRepresentation *representation = [asset defaultRepresentation];
    NSLog(@"size of original asset %llu", [representation size]);

    // I generally would write directly to a `NSOutputStream`, but if you want it in a
    // NSData, it would be something like:
    NSMutableData *data = [NSMutableData data];

    // now loop, reading data into buffer and writing that to our data stream
    NSError *error;
    long long bufferOffset = 0ll;
    NSInteger bufferSize = 10000;
    long long bytesRemaining = [representation size];
    uint8_t buffer[bufferSize];
    NSUInteger bytesRead;
    while (bytesRemaining > 0) {
        bytesRead = [representation getBytes:buffer fromOffset:bufferOffset length:bufferSize error:&error];
        if (bytesRead == 0) {
            NSLog(@"error reading asset representation: %@", error);
            return;
        }
        bytesRemaining -= bytesRead;
        bufferOffset += bytesRead;
        [data appendBytes:buffer length:bytesRead];
    }

    // ok, successfully read original asset;
    // do whatever you want with it here
} failureBlock:^(NSError *error) {
    NSLog(@"error=%@", error);
}];
Please note that this assetForURL runs asynchronously.
If you want a NSData with compression, you can use UIImageJPEGRepresentation with a compressionQuality less than 1.0. Your code actually does this with a compressionQuality of 0.0, which should offer maximum compression. But you don't save that NSData, but rather use it to create a UIImage and you then get a new UIImageJPEGRepresentation with a compressionQuality of 1.0, thus losing much of the compression you originally achieved.
Consider the following code:
// a UIImage of the original asset (discarding metadata)
UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];

// this may well be larger than the original asset
NSData *jpgDataHighestCompressionQuality = UIImageJPEGRepresentation(image, 1.0);
[jpgDataHighestCompressionQuality writeToFile:[docsPath stringByAppendingPathComponent:@"imageDataFromJpeg.jpg"] atomically:YES];
NSLog(@"compressionQuality = 1.0; length = %u", [jpgDataHighestCompressionQuality length]);

// this will be smaller, but with some loss of data
NSData *jpgDataLowestCompressionQuality = UIImageJPEGRepresentation(image, 0.0);
NSLog(@"compressionQuality = 0.0; length = %u", [jpgDataLowestCompressionQuality length]);

UIImage *image2 = [UIImage imageWithData:jpgDataLowestCompressionQuality];

// ironically, this will be larger than jpgDataLowestCompressionQuality
NSData *newImageSize = UIImageJPEGRepresentation(image2, 1.0);
NSLog(@"new size %u", [newImageSize length]);
In addition to the JPEG compression quality outlined in the prior point, you could also just resize the image. You can marry this with the JPEG compressionQuality, too.
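A rough sketch of marrying resizing with recompression (the 1 MB target, the 0.7 quality, and the scale steps are arbitrary choices; since the question hoped to avoid scaling, treat this as a fallback):
UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
NSData *jpegData = UIImageJPEGRepresentation(image, 0.7);
CGFloat scale = 0.9;
while ([jpegData length] > 1024 * 1024 && scale > 0.1) { // target: under 1 MB
    CGSize newSize = CGSizeMake(image.size.width * scale, image.size.height * scale);
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0); // 1.0: one pixel per point
    [image drawInRect:CGRectMake(0.0, 0.0, newSize.width, newSize.height)];
    UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    jpegData = UIImageJPEGRepresentation(resized, 0.7);
    scale -= 0.1; // shrink further if still over the target
}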
You cannot effectively compress the image again and again; if you could, everything could be compressed down to nothing. One way to make your image smaller is to change its size, for example from 640x960 to 320x480, but you will lose quality. I would first apply UIImageJPEGRepresentation(image, 0.75), and then change the size, perhaps to two-thirds or half of the width and height.
