Speed up UIImage creation from SpriteSheet - iOS

I'm not sure I got the title exactly right, because I'm not sure where my problem is exactly.
I need to load an array of UIImages from a spritesheet, which I'll then use as an animation in a UIImageView.
The spritesheet is generated with TexturePacker, which produces a huge atlas (2048x2048) and a JSON file describing the sprites.
Until now I've had it working without issues, even loading 100 frames in just 0.5-0.8 seconds, which I was really happy with.
The problem is that I now need to load the spritesheets from the Documents folder (they are downloaded, so they can't be bundled with the app), which means I have to use UIImage imageWithContentsOfFile: instead of UIImage imageNamed:.
This makes my loading time increase to 5-15 seconds depending on the number of frames in the animation.
I'm guessing the problem is that, because the image isn't being cached, it is being loaded again for every frame, but I don't see why.
This is the code:
// With this line, perfect
//UIImage *atlas = [UIImage imageNamed:@"media/spritesheets/default-pro7@2x.png"];
// But with this one, incredibly looooong loading times
UIImage *atlas = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"media/spritesheets/default-pro7@2x" ofType:@"png"]];
CGImageRef atlasCGI = [atlas CGImage];

for (NSDictionary *dict in frames) {
    NSDictionary *frame = [dict objectForKey:@"frame"];
    int x = [[frame objectForKey:@"x"] intValue];
    int y = [[frame objectForKey:@"y"] intValue];
    int width = [[frame objectForKey:@"w"] intValue];
    int height = [[frame objectForKey:@"h"] intValue];

    NSDictionary *spriteSize = [dict objectForKey:@"spriteSourceSize"];
    int realx = [[spriteSize objectForKey:@"x"] intValue];

    NSDictionary *size = [dict objectForKey:@"sourceSize"];
    int realWidth = [[size objectForKey:@"w"] intValue];
    int realHeight = [[size objectForKey:@"h"] intValue];

    // Initialize the canvas size
    canvasSize = CGSizeMake(realWidth, realHeight);
    bitmapBytesPerRow = (canvasSize.width * 4);
    bitmapByteCount = (bitmapBytesPerRow * canvasSize.height);
    bitmapData = malloc(bitmapByteCount);

    // Create the context
    context = CGBitmapContextCreate(bitmapData, canvasSize.width, canvasSize.height, 8, bitmapBytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);

    // Clear background
    CGContextClearRect(context, CGRectMake(0.0f, 0.0f, canvasSize.width, canvasSize.height));

    // Get spriteRect
    CGRect cropRect = CGRectMake(x, y, width, height);
    CGImageRef imageRef = CGImageCreateWithImageInRect(atlasCGI, cropRect);

    // Draw the image into the bitmap context
    CGContextDrawImage(context, CGRectMake(realx, 0, width, height), imageRef);

    // Get the result image
    resultImage = CGBitmapContextCreateImage(context);
    UIImage *result = [UIImage imageWithCGImage:resultImage];
    [images addObject:result];

    // Cleanup (the context also needs releasing, otherwise it leaks every iteration)
    CGContextRelease(context);
    free(bitmapData);
    CGImageRelease(resultImage);
    CGImageRelease(imageRef);
}
I can even try to upload some Time Profiler sessions that show where all that loading time is going.

Well, I finally figured out where the problem was. After looking more carefully with the Instruments Time Profiler, I noticed a lot of time was being spent in copyImageBlockSetPNG, which is what first made me believe I was loading the image from disk for every frame. It turns out somebody already had this problem (Does CGContextDrawImage decompress PNG on the fly?), and it is true that the image was being decoded for every frame, only not from disk: it was sitting in memory as compressed PNG data and being decompressed again for every frame.
In that post there's a solution, which consists of drawing the image into a bitmap context and getting back an uncompressed copy of the image from it.
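A minimal sketch of that approach (the helper name ForceDecompressedImage is mine, not from that post; it assumes ARC and a non-nil input image) could look something like this:

// Force-decompress a UIImage by drawing it once into a bitmap context.
static UIImage *ForceDecompressedImage(UIImage *image) {
    CGImageRef cgImage = image.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (!ctx) return image; // fall back to the original if the context can't be created

    // Drawing here is what triggers the one-time PNG decompression.
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    CGImageRef decompressed = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);

    UIImage *result = [UIImage imageWithCGImage:decompressed];
    CGImageRelease(decompressed);
    return result;
}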
That works, but I found another way which I believe is a little more efficient:
NSDictionary *dict = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                                  forKey:(id)kCGImageSourceShouldCache];
NSData *imageData = [NSData dataWithContentsOfFile:@"path/to/image.png"];
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)(imageData), NULL);
CGImageRef atlasCGI = CGImageSourceCreateImageAtIndex(source, 0, (__bridge CFDictionaryRef)dict);
CFRelease(source);
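Note that atlasCGI obtained this way follows Core Foundation's Create rule, so it is owned by the caller and should be released with CGImageRelease(atlasCGI) once all the frames have been cut out of it.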
Hope it helps!

Related

Pass a list of colors from Objective-C to React Native

I am trying to get a list of all the colors in an image in Objective-C. Note, I am COMPLETELY new to Objective-C - I've done some Swift work in the past, but not really any Objective-C.
I pulled in a library that, as part of its code, is more or less supposed to pull all the colors from an image. I've modified it to look like this (the callback at the end is from React Native; the path argument is just a string with the path):
RCT_EXPORT_METHOD(getColors:(NSString *)path options:(NSDictionary *)options callback:(RCTResponseSenderBlock)callback)
{
    UIImage *originalImage = [UIImage imageWithContentsOfFile:path];
    UIImage *image = [UIImage imageWithCGImage:[originalImage CGImage]
                                         scale:0.5
                                   orientation:UIImageOrientationUp];
    CGImageRef cgImage = [image CGImage];
    NSUInteger width = CGImageGetWidth(cgImage);
    NSUInteger height = CGImageGetHeight(cgImage);

    // Allocate storage for the pixel data
    unsigned char *rawData = (unsigned char *)malloc(height * width * 4);

    // Create the color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Set some metrics
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;

    // Create context using the storage
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

    // Release the color space
    CGColorSpaceRelease(colorSpace);

    // Draw the image into the storage
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);

    // We are done with the context
    CGContextRelease(context);

    // Determine the colours in the image
    NSMutableArray *colours = [NSMutableArray new];
    float x = 0;
    float y = 0;
    for (int n = 0; n < (width * height); n++) {
        int index = (bytesPerRow * y) + x * bytesPerPixel;
        int red   = rawData[index];
        int green = rawData[index + 1];
        int blue  = rawData[index + 2];
        int alpha = rawData[index + 3];
        NSArray *a = [NSArray arrayWithObjects:[NSString stringWithFormat:@"%i", red],
                                               [NSString stringWithFormat:@"%i", green],
                                               [NSString stringWithFormat:@"%i", blue],
                                               [NSString stringWithFormat:@"%i", alpha], nil];
        [colours addObject:a];
        y++;
        if (y == height) {
            y = 0;
            x++;
        }
    }
    free(rawData);
    callback(@[[NSNull null], colours]);
}
Now, this script seems fairly simple - it should iterate over each pixel and add each color to an array, which is then returned to React Native via the callback.
However, the response to the call is always an empty array.
I'm not sure why that is. Could it be due to where the images are located (they're at AWS, on S3), or something in the algorithm? The code looks right to me, but it's entirely possible that I'm missing something just due to unfamiliarity with Objective-C.
I ran your code in an empty project and it performs as expected using an image loaded from the assets library. Is it possible that the UIImage *originalImage = [UIImage imageWithContentsOfFile:path]; call uses an invalid path? You can easily validate that by simply logging the value of the read image:
UIImage *originalImage = [UIImage imageWithContentsOfFile:path];
NSLog(@"image read from file %@", originalImage);
If the image was not read properly from the file, you will get an empty colours array: the width and height will be zero, so there will be nothing to loop over.
Also, to avoid the array being modified after your function has returned, it is generally good practice to return a copy of the mutable object or an immutable object (i.e. an NSArray instead of an NSMutableArray):
callback(@[[NSNull null], [colours copy]]);
Hope this helps!
The issue was ultimately that the image download method was returning null - not sure why.
So I took this:
UIImage *originalImage = [UIImage imageWithContentsOfFile:path ];
I changed it to this:
NSData * imageData = [[NSData alloc] initWithContentsOfURL: [NSURL URLWithString: path]];
UIImage *originalImage = [UIImage imageWithData: imageData];
And now my image downloads just fine and the rest of the script works great.
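One thing worth noting: initWithContentsOfURL: performs a synchronous download and will block whichever thread it runs on. As a rough sketch (my own suggestion, not part of the original module), the same fetch could be done asynchronously with NSURLSession, moving the pixel-reading code into the completion handler:

NSURL *imageURL = [NSURL URLWithString:path];
NSURLSessionDataTask *task =
    [[NSURLSession sharedSession] dataTaskWithURL:imageURL
                                completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        if (error != nil || data == nil) {
            // Report the failure to React Native instead of silently returning an empty array.
            callback(@[error.localizedDescription ?: @"download failed", [NSNull null]]);
            return;
        }
        UIImage *originalImage = [UIImage imageWithData:data];
        // ... continue with the pixel-reading code above ...
    }];
[task resume];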

How to convert BGRA bytes to UIImage for saving?

I want to capture raw pixel data for manipulation using the GPUImage framework. I capture the data like this:
CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(imageSampleBuffer);
CVPixelBufferLockBaseAddress(cameraFrame, 0);
GLubyte *rawImageBytes = CVPixelBufferGetBaseAddress(cameraFrame);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
NSData *dataForRawBytes = [NSData dataWithBytes:rawImageBytes length:bytesPerRow * CVPixelBufferGetHeight(cameraFrame)];
// raw values
UInt32 *values = (UInt32 *)[dataForRawBytes bytes]; //, cnt = [dataForRawBytes length]/sizeof(int);
// test out dropbox upload here
[self uploadDropbox:dataForRawBytes];
// end of dropbox upload
// Do whatever with your bytes
// [self processImages:dataForRawBytes];
CVPixelBufferUnlockBaseAddress(cameraFrame, 0); }];
I am using the following settings for the camera:
NSDictionary *settings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey,[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil];
For testing purposes I want to save the image I capture to Dropbox; to do that I first need to save it to a tmp directory. How would I save dataForRawBytes?
Any help would be much appreciated!
So I was able to figure out how to get a UIImage from the raw data; here is my modified code:
CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(imageSampleBuffer);
CVPixelBufferLockBaseAddress(cameraFrame, 0);
Byte *rawImageBytes = CVPixelBufferGetBaseAddress(cameraFrame);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
size_t width = CVPixelBufferGetWidth(cameraFrame);
size_t height = CVPixelBufferGetHeight(cameraFrame);
NSData *dataForRawBytes = [NSData dataWithBytes:rawImageBytes length:bytesPerRow * CVPixelBufferGetHeight(cameraFrame)];
// Do whatever with your bytes

// Create a suitable color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a suitable context (suitable for the camera output setting kCVPixelFormatType_32BGRA)
CGContextRef newContext = CGBitmapContextCreate(rawImageBytes, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
// Release the color space
CGColorSpaceRelease(colorSpace);
// Create a CGImageRef from the bitmap context
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
// Release the context now that the CGImage has been created
CGContextRelease(newContext);
UIImage *FinalImage = [[UIImage alloc] initWithCGImage:newImage];
// The UIImage retains what it needs, so the CGImage can be released
CGImageRelease(newImage);
// This is the image captured; now we can test saving it.
I needed to create things such as a color space, generate a CGContextRef, and work with those to finally get a UIImage, and when debugging I can properly see the image I captured.
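To finish answering the original question (getting the capture into a tmp directory so it can be uploaded), a minimal sketch of writing FinalImage out as a PNG could look like the following; the file name is arbitrary, just an example:

// Encode the captured UIImage as PNG and write it to the temporary directory.
NSData *pngData = UIImagePNGRepresentation(FinalImage);
NSString *tmpPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"capturedFrame.png"];
NSError *writeError = nil;
if (![pngData writeToFile:tmpPath options:NSDataWritingAtomic error:&writeError]) {
    NSLog(@"Failed to write image: %@", writeError);
}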

Error decoding animated webp iOS

I've been struggling for two days to display an animated webp image in a UIImageView with no success whatsoever.
Mainly the problem is in the decoding step of the file which gives this error: VP8_STATUS_UNSUPPORTED_FEATURE.
I tried
https://github.com/seanooi/iOS-WebP
https://github.com/mattt/WebPImageSerialization
These projects provide code for creating a UIImage from webp files, and they work fine with images with no animation, but they both fail with the same error as above when attempting to decode images with animation.
My device is jailbroken, and checking the filesystem I saw that Facebook's Messenger app has some of its stickers in .webp format; its license also mentions Google's "webp" library, so I'm sure it's somehow possible.
I managed to decode the animated .webp using the code snippet at the top of this header, which also contains explanations of the data structures used.
static NSDictionary* DecodeWebPURL(NSURL *url) {
    NSMutableDictionary *info = [NSMutableDictionary dictionary];
    NSMutableArray *images = [NSMutableArray array];
    NSData *imgData = [NSData dataWithContentsOfURL:url];

    WebPData data;
    WebPDataInit(&data);
    data.bytes = (const uint8_t *)[imgData bytes];
    data.size = [imgData length];

    WebPDemuxer* demux = WebPDemux(&data);
    int width = WebPDemuxGetI(demux, WEBP_FF_CANVAS_WIDTH);
    int height = WebPDemuxGetI(demux, WEBP_FF_CANVAS_HEIGHT);
    uint32_t flags = WebPDemuxGetI(demux, WEBP_FF_FORMAT_FLAGS);

    if (flags & ANIMATION_FLAG) {
        WebPIterator iter;
        if (WebPDemuxGetFrame(demux, 1, &iter)) {
            WebPDecoderConfig config;
            WebPInitDecoderConfig(&config);
            config.input.height = height;
            config.input.width = width;
            config.input.has_alpha = iter.has_alpha;
            config.input.has_animation = 1;
            config.options.no_fancy_upsampling = 1;
            config.options.bypass_filtering = 1;
            config.options.use_threads = 1;
            config.output.colorspace = MODE_RGBA;

            [info setObject:[NSNumber numberWithInt:iter.duration] forKey:@"duration"];

            do {
                WebPData frame = iter.fragment;
                VP8StatusCode status = WebPDecode(frame.bytes, frame.size, &config);
                if (status != VP8_STATUS_OK) {
                    NSLog(@"Error decoding frame");
                }
                uint8_t *data = WebPDecodeRGBA(frame.bytes, frame.size, &width, &height);
                CGDataProviderRef provider = CGDataProviderCreateWithData(&config, data, config.options.scaled_width * config.options.scaled_height * 4, NULL);
                CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
                CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaLast;
                CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
                CGImageRef imageRef = CGImageCreate(width, height, 8, 32, 4 * width, colorSpaceRef, bitmapInfo, provider, NULL, YES, renderingIntent);

                [images addObject:[UIImage imageWithCGImage:imageRef]];

                CGImageRelease(imageRef);
                CGColorSpaceRelease(colorSpaceRef);
                CGDataProviderRelease(provider);
            } while (WebPDemuxNextFrame(&iter));
            WebPDemuxReleaseIterator(&iter);
        }
    }

    WebPDemuxDelete(demux);
    [info setObject:images forKey:@"frames"];
    return info;
}
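For completeness, a hedged example of how the returned dictionary might be fed to a UIImageView (assuming an existing imageView; the key names match the ones set above, and this treats the stored per-frame duration, which WebP expresses in milliseconds, as applying to every frame):

NSDictionary *info = DecodeWebPURL([NSURL fileURLWithPath:@"path/to/animation.webp"]);
NSArray *frames = [info objectForKey:@"frames"];
// Total animation duration = per-frame duration (ms) converted to seconds, times the frame count.
NSTimeInterval frameDuration = [[info objectForKey:@"duration"] intValue] / 1000.0;
imageView.image = [UIImage animatedImageWithImages:frames
                                          duration:frameDuration * frames.count];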

iOS easier way to resize image

I'm trying to resize an image on a background thread and the app always crashes after a few low memory warnings. How can I rewrite the code below to fix this?
float max = 1024*1024;
NSData *pngData = UIImagePNGRepresentation(setImage);
while ([pngData length] > max) {
    pngData = nil;
    CGSize newSize = CGSizeMake(setImage.size.width*.9, setImage.size.height*.9);
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    NSLog(@"scale: %f", (1024.0*1024.0)/((float)[pngData length]));
    pngData = UIImagePNGRepresentation(image);
}
NSLog(@"image length: %i", [pngData length]);
[pngData writeToFile:imageLocation atomically:YES];
I have already tried doing this by calculating the scale and replacing the .9 in the code with a scale value:
float scale = (max)/((float)[pngData length]);
CGSize newSize = CGSizeMake(setImage.size.width*scale, setImage.size.height *scale);
This made the image too small.
The end goal is to take an image from the camera and save it to disk. I originally had to resize the image because I was getting a "Low Memory warning" when loading the image.
Your code causes an infinite loop and creates images until you run out of memory. Try something like this to fix the infinite loop:
float max = 1024*1024;
NSData *pngData = UIImagePNGRepresentation(setImage);
CGSize newSize = setImage.size;
while ([pngData length] > max) {
    newSize = CGSizeMake(newSize.width * 0.9, newSize.height * 0.9);
    UIGraphicsBeginImageContext(newSize);
    [setImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    NSLog(@"scale: %f", (1024.0*1024.0)/((float)[pngData length]));
    pngData = UIImagePNGRepresentation(image);
    image = nil;
}
NSLog(@"image length: %lu", (unsigned long)[pngData length]);
[pngData writeToFile:imageLocation atomically:YES];
Is there any reason why you need to do this by hand? If you use a UIImageView and set the image either with initWithImage: or through the image property, and then change the UIImageView's frame, the image will resize accordingly.
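As a side note on why the single calculated scale made the image too small: PNG data size grows roughly with the pixel count, i.e. with the square of the linear dimensions, so a better first guess (my own suggestion, not from the answer above) is the square root of the size ratio, keeping the while loop as a safety net since the actual compression ratio depends on the image content:

// Rough single-pass resize: PNG byte size scales roughly with area, so use sqrt of the ratio.
float maxBytes = 1024 * 1024;
NSData *pngData = UIImagePNGRepresentation(setImage);
if ([pngData length] > maxBytes) {
    float scale = sqrtf(maxBytes / (float)[pngData length]);
    CGSize newSize = CGSizeMake(setImage.size.width * scale, setImage.size.height * scale);
    UIGraphicsBeginImageContext(newSize);
    [setImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    pngData = UIImagePNGRepresentation(resized);
}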

What is the proper way to handle retina images from resized nsdata?

When a user selects an image from their photo library, I'm resizing it and then uploading it to my server, and then at some other point in the app the user can view all their photos. (I'm simplifying the workflow.)
The UIImageView on the "detail" screen is 320 x 320. Based upon the method below, should I be using:
UIImage *image = [UIImage imageWithCGImage:img];
or
UIImage *image = [UIImage imageWithCGImage:img scale:[UIScreen mainScreen].scale orientation:img.imageOrientation];
Part B would be: when I download the image data (NSData), should I use imageWithCGImage: or imageWithCGImage:scale:orientation:?
- (UIImage *)resizedImageForUpload:(UIImage *)originalImage {
    static CGSize __maxSize = {640, 640};

    NSMutableData *data = [[NSMutableData alloc] initWithData:UIImageJPEGRepresentation(originalImage, 1.0)];
    CFMutableDataRef dataRef = (__bridge CFMutableDataRef)data;
    CGImageSourceRef imgSrc = CGImageSourceCreateWithData(dataRef, NULL);

    CGFloat width = [originalImage maxDimensionForConstraintSize:__maxSize];
    NSNumber *maxWidth = [NSNumber numberWithFloat:width];
    NSDictionary *options = @{
        (__bridge NSString *)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
        (__bridge NSString *)kCGImageSourceCreateThumbnailWithTransform : @YES,
        (__bridge NSString *)kCGImageSourceThumbnailMaxPixelSize : maxWidth
    };
    CFDictionaryRef cfOptions = (__bridge CFDictionaryRef)options;
    CGImageRef img = CGImageSourceCreateThumbnailAtIndex(imgSrc, 0, cfOptions);

    CFStringRef type = CGImageSourceGetType(imgSrc);
    CGImageDestinationRef imgDest = CGImageDestinationCreateWithData(dataRef, type, 1, NULL);
    CGImageDestinationAddImage(imgDest, img, NULL);
    CGImageDestinationFinalize(imgDest);

    UIImage *image = [UIImage imageWithCGImage:img];

    CFRelease(imgSrc);
    CGImageRelease(img);
    CFRelease(imgDest);

    return image;
}
It appears I've found the answer to my own question. The resizedImageForUpload: method shouldn't try to scale based upon the device. Since I'm defining a max size of 640x640 (the retina size for my 320x320 UIImageView), no other manipulation is necessary. I've added some caching for the images, and I'm handling the scaling at that point:
UIImage *image = [UIImage imageWithData:imgData scale:[UIScreen mainScreen].scale];
and then return it. The reason I thought I had messed something up was that I was trying to scale an already-scaled image. Lessons learned.
