UI Gets Blocked When Using CGImageSourceCreateWithURL and CGImageSourceCopyPropertiesAtIndex - ios

I am checking the dimensions of images via their URLs, which are stored in an array called destination_urls. I then calculate the height each image would have if it were stretched to the screen width while keeping its aspect ratio. All of this is done in a for loop.
When the code runs, the app's UI freezes for the duration of the for loop. How can I revise the code so the app doesn't freeze?
int t = 0;
int arraycount = [_destination_urls count];
dispatch_async(dispatch_get_main_queue(), ^{
    for (int i = 0; i < arraycount; i++) {
        NSURL *imageURL = [NSURL URLWithString:[_destination_urls[i] stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding]];
        CGImageSourceRef imgSource = CGImageSourceCreateWithURL((__bridge CFURLRef)imageURL, NULL);
        NSDictionary *imageProps = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(imgSource, 0, NULL);
        NSString *imageHeighttoDisplay = [imageProps objectForKey:@"PixelHeight"];
        originalimageHeight = [imageHeighttoDisplay intValue];
        NSString *imageWidthtoDisplay = [imageProps objectForKey:@"PixelWidth"];
        originalimageWidth = [imageWidthtoDisplay intValue];
        if (imgSource) {
            _revisedImageURLHeight[t] = [NSNumber numberWithInt:(screenWidth)*(originalimageHeight/originalimageWidth)];
            t = t + 1;
            CFRelease(imgSource);
        }
    }
});
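One common fix is to do the CGImageSource work on a background queue and hop back to the main queue only to store the results and touch the UI. A rough sketch, reusing the names from the code above and assuming _revisedImageURLHeight is an NSMutableArray (adjust to your actual types):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Runs off the main thread, so the synchronous CGImageSource network access no longer blocks the UI.
    NSMutableArray *heights = [NSMutableArray arrayWithCapacity:arraycount];
    for (int i = 0; i < arraycount; i++) {
        NSURL *imageURL = [NSURL URLWithString:[_destination_urls[i] stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding]];
        CGImageSourceRef imgSource = CGImageSourceCreateWithURL((__bridge CFURLRef)imageURL, NULL);
        if (imgSource) {
            NSDictionary *imageProps = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(imgSource, 0, NULL);
            float width  = [imageProps[(__bridge NSString *)kCGImagePropertyPixelWidth] floatValue];
            float height = [imageProps[(__bridge NSString *)kCGImagePropertyPixelHeight] floatValue];
            if (width > 0) {
                [heights addObject:@(screenWidth * (height / width))];
            }
            CFRelease(imgSource);
        }
    }
    dispatch_async(dispatch_get_main_queue(), ^{
        // Back on the main queue: safe to update instance state and reload the UI here.
        [_revisedImageURLHeight setArray:heights];
    });
});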

Related

iOS function leaking memory

I am creating a client socket using POSIX sockets. Since this process needs an NSArray to work with the data, I wrote the following two loops to convert an unsigned char array to an NSArray and vice versa. I watched the memory usage and noticed that it increases slowly when running through these functions many times. Can anyone spot any potential memory leaks?
//read code run in background
NSMutableArray *arr = [[NSMutableArray alloc] init];
NSUInteger i = 0;
NSNumber *aUChar = 0;
for (;;)
{
    n = read(sockfd, buffer, SOCKET_SIZE);
    if (n > 0)
    {
        [arr removeAllObjects];
        for (i = 0; i < n; i++)
        {
            aUChar = [NSNumber numberWithUnsignedChar:buffer[i]];
            [arr addObject:aUChar];
        }
        //do sth with arr
    }
}
//write code run in main thread when function called
int n = 0;
unsigned char buffer[SOCKET_SIZE];
NSUInteger i = 0;
NSNumber *tmpnum;
NSArray *data_arr = [command.arguments objectAtIndex:0];
for (i = 0; i < SOCKET_SIZE; i++)
{
    tmpnum = [data_arr objectAtIndex:i];
    buffer[i] = [tmpnum charValue];
}
n = write(sockfd,buffer,TCP_SOCKET_SIZE);
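One likely cause, assuming the read loop above really does run on a plain background thread: the temporary NSNumber objects are autoreleased, and a thread that loops forever never drains its autorelease pool, so they accumulate even though nothing is leaked in the strict sense. A minimal sketch of the usual fix is to wrap each pass of the read loop in @autoreleasepool (sockfd, buffer, and SOCKET_SIZE are assumed to exist as in the original code):

//read code run in background, draining an autorelease pool on every pass
NSMutableArray *arr = [[NSMutableArray alloc] init];
for (;;)
{
    @autoreleasepool
    {
        ssize_t n = read(sockfd, buffer, SOCKET_SIZE);
        if (n > 0)
        {
            [arr removeAllObjects];
            for (ssize_t i = 0; i < n; i++)
            {
                [arr addObject:[NSNumber numberWithUnsignedChar:buffer[i]]];
            }
            //do sth with arr
        }
    } // temporaries created during this pass are released here
}

The same reasoning applies to the write path if it is called many times in a tight loop, although on the main thread the pool is normally drained at the end of each run-loop cycle.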

Load images asynchronously into an array using GCD

I am trying to use the panoramaGL framework to show a panorama. The server returns several tile images for every side of the cube panorama, so I need to load these images asynchronously into an array, and afterwards build one big texture for each side from them. The code for the first part of this task is:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
    int columnCount;
    int rowCount;
    NSString *side;
    if ([level isEqual: @"3"]) {
        columnCount = 1;
        rowCount = 1;
    } else if ([level isEqual: @"2"]) {
        columnCount = 2;
        rowCount = 2;
    ... here I prepare the url string parameters
    if (face == PLCubeFaceOrientationFront)
        side = @"F";
    else if (face == PLCubeFaceOrientationBack)
        side = @"B";
    ... here I prepare the url string parameters
    self.tileArray = [[NSMutableArray alloc] initWithCapacity:columnCount];
    for (int columnIndex = 0; columnIndex < columnCount; columnIndex++) {
        for (int rowIndex = 0; rowIndex < rowCount; rowIndex++) {
            NSString *tileUrl = [NSString stringWithFormat:@"%d_%d", columnIndex, rowIndex];
            NSMutableArray *tileUrlComponents = [NSMutableArray array];
            [tileUrlComponents addObject: panoramaData[@"id"]];
            [tileUrlComponents addObject: side];
            [tileUrlComponents addObject: level];
            [tileUrlComponents addObject: tileUrl];
            NSString *tileIdString = [tileUrlComponents componentsJoinedByString:@"/"];
            NSString *panoramaIdUrlString = [NSString stringWithFormat:@"http://SOMEURL=%@", tileIdString];
            NSURL *url = [NSURL URLWithString:panoramaIdUrlString];
            NSURLRequest *req = [NSURLRequest requestWithURL:url];
            [NSURLConnection sendAsynchronousRequest:req queue:[NSOperationQueue currentQueue] completionHandler:^(NSURLResponse *res, NSData *data, NSError *err) {
                UIImage *image = [UIImage imageWithData:data];
                dispatch_async(dispatch_get_main_queue(), ^{
                    NSMutableArray *array = [self.tileArray objectAtIndex:columnIndex];
                    [array replaceObjectAtIndex:rowIndex withObject:image];
                    [self.tileArray addObject:array];
                });
            }];
            NSLog(@"The URL for the %@ tile is %@", tileUrl, tileIdString);
        }
    }
});
The main question: how can I tell that the array is fully loaded, so that I can start working with the images in it? The second problem is that right now I unfortunately get an empty array. Any help?
You can do something like this
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0ul);
dispatch_async(queue, ^{
    dispatch_sync(dispatch_get_main_queue(), ^{
        NSData *data = [[NSData alloc] initWithContentsOfURL: ImageURL];
        UIImage *image = [[UIImage alloc] initWithData: data];
        [UIImageJPEGRepresentation(image, 100) writeToFile: imagePath atomically: YES];
    });
});
where imagePath is a path in your application's directory. You can make a temporary directory and store the images there, and when retrieving them you can just use the simple method:
UIImage *image = [UIImage imageWithContentsOfFile:imagePath];
It worked for me, just give it a try.
I am not sure whether you have considered using NSOperations.
This is how I would do it:
Create an NSOperation for each request (make sure in your code the operation stays alive until it has finished loading).
Add each operation to an NSOperationQueue. (By calling -operations you can see how many operations are in progress.)
If you run the NSOperationQueue on a background thread you can use -waitUntilAllOperationsAreFinished (this method blocks the current thread until all your images have been downloaded) and then execute your code to display the images. Alternatively, you can add all the NSOperations to an NSArray and add them to the queue with -(void)addOperations:(NSArray *)ops waitUntilFinished:(BOOL)wait. A rough sketch of this approach follows.
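A minimal sketch of that idea, assuming tileURLs is an NSArray of NSURLs built the same way as in the question and self.tileArray has already been filled with one placeholder per tile (the queue width and variable names are illustrative, not from the original code):

NSOperationQueue *downloadQueue = [[NSOperationQueue alloc] init];
downloadQueue.maxConcurrentOperationCount = 4;
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    for (NSUInteger index = 0; index < tileURLs.count; index++) {
        NSURL *tileURL = tileURLs[index];
        [downloadQueue addOperationWithBlock:^{
            // A synchronous load is acceptable here because we are on one of the queue's worker threads.
            NSData *data = [NSData dataWithContentsOfURL:tileURL];
            UIImage *image = data ? [UIImage imageWithData:data] : nil;
            if (image) {
                @synchronized (self.tileArray) {
                    self.tileArray[index] = image;
                }
            }
        }];
    }
    // Blocks this background thread (not the UI) until every tile has been processed.
    [downloadQueue waitUntilAllOperationsAreFinished];
    dispatch_async(dispatch_get_main_queue(), ^{
        // All tiles are in self.tileArray; safe to build the cube-face texture here.
    });
});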

Storing images in NSMutableArray using GCD

I am trying to store images from a remote location to an NSMutableArray using a GCD block. The following code is being called in viewDidLoad, and the images are to be populated in a UICollectionView:
dispatch_apply(self.count, dispatch_get_global_queue(0, 0), ^(size_t i){
    NSString *strURL = [NSString stringWithFormat:@"%@%zu%@", @"http://theURL.com/popular/", i, @".jpg"];
    NSURL *imageURL = [NSURL URLWithString:strURL];
    NSData *imageData = [NSData dataWithContentsOfURL: imageURL];
    UIImage *oneImage = [UIImage imageWithData:imageData];
    if (oneImage != nil) {
        [self.imageArray addObject:oneImage];
    }
});
PROBLEM: the images are not stored in order.
e.g. [self.imageArray objectAtIndex:2] is not 2.jpg
Even though the first and last images end up in the right places, the rest are all jumbled up.
Another way to do this (what I basically need, minus the time consumed and memory overhead):
for (int i = 0; i <= [TMAPopularImageManager sharedInstance].numberOfImages - 1; i++) {
    NSString *strURL = [NSString stringWithFormat:@"%@%d%@", @"http://theURL.com/popular/", i, @".jpg"];
    NSURL *imageURL = [NSURL URLWithString:strURL];
    NSData *imageData = [NSData dataWithContentsOfURL: imageURL];
    UIImage *oneImage = [UIImage imageWithData:imageData];
    if (oneImage != nil) {
        [self.imageArray addObject:oneImage];
    }
}
Is there a better way to implement the GCD block in this case? I need the images stored in the array in the order of their file names.
I ran this about 10 times and it did not print in order once:
NSInteger iterations = 10;
dispatch_apply(iterations, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, (unsigned long) NULL), ^(size_t index) {
    NSLog(@"%zu", index);
});
My suggestion, and what I have done in the past, is to just run a for loop inside the block of a background thread:
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, (unsigned long) NULL);
dispatch_async(queue, ^{
    int iterations = 0;
    for (int x = 0; x < iterations; ++x) {
        // Do stuff here
    }
    dispatch_async(dispatch_get_main_queue(), ^{
        // Set content generated on background thread to main thread
    });
});
When using background threads, it is important to make sure either that the objects you initialized on the main thread are thread-safe, or that you initialize objects in the background and then hand them over to the main thread's objects, as in the example above. This is especially true with Core Data.
EDIT:
It seems that dispatch_apply submits its iterations concurrently, so they will probably execute out of order once they do anything meaningful. If you run these two, you will see that printf always prints in order but NSLog does not:
NSInteger iterations = 10;
dispatch_apply(iterations, the_queue, ^(size_t idx) {
    printf("%zu\n", idx);
});
dispatch_apply(iterations, the_queue, ^(size_t idx) {
    NSLog(@"%zu\n", idx);
});
In my opinion, it would be best to run a standard for statement in a background thread rather than dispatch_apply if the order is important.
EDIT 2:
This would be your implementation:
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, (unsigned long) NULL);
dispatch_async(queue, ^{
    int iterations = 0;
    NSMutableArray *backgroundThreadImages = [NSMutableArray array];
    for (int i = 0; i < iterations; ++i) {
        NSString *strURL = [NSString stringWithFormat:@"%@%i%@", @"http://theURL.com/popular/", i, @".jpg"];
        NSURL *imageURL = [NSURL URLWithString:strURL];
        NSData *imageData = [NSData dataWithContentsOfURL: imageURL];
        UIImage *oneImage = [UIImage imageWithData:imageData];
        if (oneImage != nil) {
            [backgroundThreadImages addObject:oneImage];
        }
    }
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageArray = backgroundThreadImages;
    });
});
Not sure if applicable to your situation, but if the images are going to be displayed in a UICollectionView, it is better to load them when required. That is, when the delegate methods (cellFor...) are invoked.
That way you only load what is needed, when it's needed.
If you need to have the images in an array for some other purpose, ignore this answer.
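A rough sketch of that per-cell approach, assuming a cell subclass with an imageView property and an imageURLs array on the controller (MyImageCell, the reuse identifier, and imageURLs are made-up names for illustration):

- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath {
    MyImageCell *cell = [collectionView dequeueReusableCellWithReuseIdentifier:@"ImageCell" forIndexPath:indexPath];
    cell.imageView.image = nil; // clear any recycled content
    NSURL *imageURL = self.imageURLs[indexPath.item];
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSData *data = [NSData dataWithContentsOfURL:imageURL];
        UIImage *image = data ? [UIImage imageWithData:data] : nil;
        dispatch_async(dispatch_get_main_queue(), ^{
            // The cell may have been reused for another item by now, so re-fetch it before setting the image.
            MyImageCell *visibleCell = (MyImageCell *)[collectionView cellForItemAtIndexPath:indexPath];
            if (image && visibleCell) {
                visibleCell.imageView.image = image;
            }
        });
    });
    return cell;
}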
Two ways: sort the array afterwards, or:
// assume self.imageArray is empty
for (int i = 0; i < self.count; ++i) // fill array with NSNull placeholders
    [self.imageArray addObject:[NSNull null]];

dispatch_apply(self.count, dispatch_get_global_queue(0, 0), ^(size_t i){
    NSString *strURL = [NSString stringWithFormat:@"%@%zu%@", @"http://theURL.com/popular/", i, @".jpg"];
    NSURL *imageURL = [NSURL URLWithString:strURL];
    NSData *imageData = [NSData dataWithContentsOfURL: imageURL];
    UIImage *oneImage = [UIImage imageWithData:imageData];
    if (oneImage != nil) {
        self.imageArray[i] = oneImage;
    }
});
I am not sure it is thread-safe to mutate an NSMutableArray this way (concurrent writes to NSMutableArray are not documented as thread-safe), so you may need:
@synchronized (self.imageArray) {
    self.imageArray[i] = oneImage;
}

CGDataProviderCopyData builds up in memory causing crash

Okay, so I'm downloading a bunch of largish images (~5 MB each) from a server in pieces, then stitching the pieces together and rendering the total image from a byte array. However, I've realized that the data for each piece is not being released, so it builds up and causes a memory warning and a crash of my app. I thought that the explicit (__bridge_transfer NSData *) cast meant ARC would take care of releasing the object, but it's still proving to be a problem. In Instruments, objects called "CGDataProviderCopyData" of ~1 MB each build up and are not discarded for each file that is stitched into the whole image. Any ideas, or can anyone steer me in the right direction? Much obliged.
// Create array to add all files into total image
NSMutableArray *byteArray = [[NSMutableArray alloc] initWithCapacity:(imageHeight * imageWidth)];
// Iterate through each file in files array
for (NSString *file in array)
{
    // Set baseURL for individual file path
    NSString *baseURL = [NSString stringWithFormat:@"http://xx.225.xxx.xxx%@", [imageInfo objectForKey:@"BaseURL"]];
    // Specify imagePath by appending baseURL to file name
    NSString *imagePath = [NSString stringWithFormat:@"%@%@", baseURL, file];
    // Change NSString --> NSURL --> NSData
    NSURL *imageUrl = [NSURL URLWithString:imagePath];
    NSData *imageData = [NSData dataWithContentsOfURL:imageUrl];
    // Create image from imageData
    UIImage *image = [UIImage imageWithData:imageData];
    CGImageRef cgimage = image.CGImage;
    size_t width = CGImageGetWidth(cgimage);
    size_t height = CGImageGetHeight(cgimage);
    size_t bpr = CGImageGetBytesPerRow(cgimage);
    size_t bpp = CGImageGetBitsPerPixel(cgimage);
    size_t bpc = CGImageGetBitsPerComponent(cgimage);
    size_t bytes_per_pixel = bpp / bpc;
    // Get CGDataProviderRef from cgimage
    CGDataProviderRef provider = CGImageGetDataProvider(cgimage);
    // This is the object that is not being released
    NSData *data = (__bridge_transfer NSData *)CGDataProviderCopyData(provider); // Using (__bridge_transfer NSData *) casts the provider to type NSData and gives ownership to ARC, but still not discarded
    const UInt8 *bytes = (Byte *)[data bytes];
    // Log which file is currently being iterated through
    NSLog(@"---Stitching png file to total image: %@", file);
    // Populate byte array with channel data from each pixel
    for (size_t row = 0; row < height; row++)
    {
        for (size_t col = 0; col < width; col++)
        {
            const UInt8 *pixel = &bytes[row * bpr + col * bytes_per_pixel];
            for (unsigned short i = 0; i < 4; i += 4)
            {
                __unused unsigned short red = pixel[i];     // red channel - unused
                unsigned short green = pixel[i+1];          // green channel
                unsigned short blue = pixel[i+2];           // blue channel
                __unused unsigned short alpha = pixel[i+3]; // alpha channel - unused
                // Create dicom intensity value from intensity = [(g *250) + b]
                unsigned short dicomInt = ((green * 256) + blue);
                // Convert unsigned short intensity value to NSNumber so can store in array as object
                NSNumber *DICOMvalue = [NSNumber numberWithInt:dicomInt];
                // Add to image array (total image)
                [byteArray addObject:DICOMvalue];
            }
        }
    }
    data = nil;
}
return byteArray;
Running "Analyze" through Xcode doesn't show any apparent leaks either.
I took this code, almost verbatim, and did some more investigation. With the CFDataRef/NSData, I was able to see the problem you were seeing with the NSData objects not going away, and I was able to solve it by wrapping the portion of the code that uses the NSData in an @autoreleasepool scope, like this:
// Create array to add all files into total image
NSMutableArray *byteArray = [[NSMutableArray alloc] initWithCapacity:(imageHeight * imageWidth)];
// Iterate through each file in files array
for (NSString *file in array)
{
    // Set baseURL for individual file path
    NSString *baseURL = [NSString stringWithFormat:@"http://xx.225.xxx.xxx%@", [imageInfo objectForKey:@"BaseURL"]];
    // Specify imagePath by appending baseURL to file name
    NSString *imagePath = [NSString stringWithFormat:@"%@%@", baseURL, file];
    // Change NSString --> NSURL --> NSData
    NSURL *imageUrl = [NSURL URLWithString:imagePath];
    NSData *imageData = [NSData dataWithContentsOfURL:imageUrl];
    // Create image from imageData
    UIImage *image = [UIImage imageWithData:imageData];
    CGImageRef cgimage = image.CGImage;
    size_t width = CGImageGetWidth(cgimage);
    size_t height = CGImageGetHeight(cgimage);
    size_t bpr = CGImageGetBytesPerRow(cgimage);
    size_t bpp = CGImageGetBitsPerPixel(cgimage);
    size_t bpc = CGImageGetBitsPerComponent(cgimage);
    size_t bytes_per_pixel = bpp / bpc;
    // Get CGDataProviderRef from cgimage
    CGDataProviderRef provider = CGImageGetDataProvider(cgimage);
    @autoreleasepool
    {
        // This is the object that is not being released
        NSData *data = (__bridge_transfer NSData *)CGDataProviderCopyData(provider); // Using (__bridge_transfer NSData *) casts the provider to type NSData and gives ownership to ARC, but still not discarded
        const UInt8 *bytes = (Byte *)[data bytes];
        // Log which file is currently being iterated through
        NSLog(@"---Stitching png file to total image: %@", file);
        // Populate byte array with channel data from each pixel
        for (size_t row = 0; row < height; row++)
        {
            for (size_t col = 0; col < width; col++)
            {
                const UInt8 *pixel = &bytes[row * bpr + col * bytes_per_pixel];
                for (unsigned short i = 0; i < 4; i += 4)
                {
                    __unused unsigned short red = pixel[i];     // red channel - unused
                    unsigned short green = pixel[i+1];          // green channel
                    unsigned short blue = pixel[i+2];           // blue channel
                    __unused unsigned short alpha = pixel[i+3]; // alpha channel - unused
                    // Create dicom intensity value from intensity = [(g *250) + b]
                    unsigned short dicomInt = ((green * 256) + blue);
                    // Convert unsigned short intensity value to NSNumber so can store in array as object
                    NSNumber *DICOMvalue = [NSNumber numberWithInt:dicomInt];
                    // Add to image array (total image)
                    [byteArray addObject:DICOMvalue];
                }
            }
        }
        data = nil;
    }
}
return byteArray;
After adding that @autoreleasepool, I commented out the part where you create NSNumbers and put them in the array, and I was able to see in the Allocations template of Instruments that the CFData objects were indeed being released with each turn of the loop.
The reason I commented out the part where you create NSNumbers and put them in the array is that, with that code in there, you end up adding width * height * 4 NSNumbers to byteArray. This means that even if the NSData were being released properly, your heap use would still grow by width * height * 4 * <at least 4 bytes, maybe more> no matter what. Maybe that's what you need to do, but it sure made it harder for me to see what was going on with the NSData objects, because their size was dwarfed by the array of NSNumbers.
Hope that helps.
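A side note, not part of the answer above: if the boxed NSNumbers themselves turn out to dominate memory, one possible change to the original approach is to collect the 16-bit intensities in a flat NSMutableData buffer instead of an NSMutableArray of NSNumber objects. A sketch only, reusing width, height, bpr, bytes_per_pixel, and bytes from the code above:

// Two bytes per pixel instead of one NSNumber object per pixel.
NSMutableData *intensities = [NSMutableData dataWithCapacity:width * height * sizeof(uint16_t)];
for (size_t row = 0; row < height; row++) {
    for (size_t col = 0; col < width; col++) {
        const UInt8 *pixel = &bytes[row * bpr + col * bytes_per_pixel];
        uint16_t dicomInt = (uint16_t)((pixel[1] * 256) + pixel[2]); // green and blue channels
        [intensities appendBytes:&dicomInt length:sizeof(dicomInt)];
    }
}
// Reading a value back out later:
// const uint16_t *values = (const uint16_t *)intensities.bytes;
// uint16_t value = values[row * width + col];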

How to use ProgressMonitor function of ImageMagick on iOS?

I am using ImageMagick on iOS, and I have converted some command lines used on my server into a ConvertImageCommand call. Everything works well.
I have added -monitor as an argument to the command, so I can see every small step of progress on my image (loading, resizing, cropping, etc.).
However, I would like to display a progress bar to inform the user of the progress of the image processing.
I am looking for a very simple example of how to use the progress monitor functions...
SetImageProgressMonitor(Image *,const MagickProgressMonitor,void *),
SetImageInfoProgressMonitor(ImageInfo *,const MagickProgressMonitor,void *);
Can somebody help me?
Here is the code:
- (void)ExecuteCommand {
    /*
     command is an array which contains all the elements of the ImageMagick command.
     example:
     command (
         convert,
         imgSource.jpg,
         "-blur",
         "0x2.5",
         "-paint",
         5,
         imgSaved.jpg
     )
     */
    ImageInfo *imageInfo = AcquireImageInfo();
    int nbArgs = command.count;
    char **argv = (char **)malloc((nbArgs + 1) * sizeof(char *));
    for (unsigned i = 0; i < nbArgs; i++)
    {
        NSString *argString = [command objectAtIndex:i];
        argv[i] = strdup([argString UTF8String]);
    }
    argv[nbArgs] = NULL;
    progress_monitor_method = SetImageInfoProgressMonitor(imageInfo, &MonitorProgress, self);
    ConvertImageCommand(imageInfo, nbArgs, argv, NULL, AcquireExceptionInfo());
    if (argv != NULL)
    {
        for (unsigned index = 0; argv[index] != NULL; index++) {
            free(argv[index]);
        }
        free(argv);
    }
}

MagickBooleanType MonitorProgress(const char *text, const MagickOffsetType offset, const MagickSizeType extent, void *client_data) {
    IM_TestViewController *IMVC = client_data;
    float prog = offset;
    float tot = extent;
    NSNumber *value = [NSNumber numberWithFloat:prog / tot];
    [IMVC performSelectorInBackground:@selector(updateProgressBar:) withObject:value];
    NSLog(@"Action : %@ %lld on %lld", [NSString stringWithCString:text encoding:NSUTF8StringEncoding], offset, extent);
    return MagickTrue;
}

- (void)updateProgressBar:(NSNumber *)value {
    self.progressBar.progress = [value floatValue];
}
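One caveat, not from the original post: UIProgressView should only be updated on the main thread, and the ImageMagick progress callback is unlikely to be running there, so dispatching the update to the main queue is safer than performSelectorInBackground:. A minimal sketch of the change inside MonitorProgress:

// Inside MonitorProgress, after computing `value`:
dispatch_async(dispatch_get_main_queue(), ^{
    [IMVC updateProgressBar:value];
});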
