convert jpg UIImage to bitmap UIImage? - ios

I'm trying to let a user pan/zoom through a static image with a selection rectangle on the main image, and a separate UIView for the "magnified" image.
The "magnified" UIView implements drawRect:
// Rotate selectionRect if the image isn't portrait internally
CGRect tmpRect = selectionRect;
if (image.imageOrientation == UIImageOrientationLeft ||
    image.imageOrientation == UIImageOrientationLeftMirrored ||
    image.imageOrientation == UIImageOrientationRight ||
    image.imageOrientation == UIImageOrientationRightMirrored) {
    tmpRect = CGRectMake(selectionRect.origin.y,
                         image.size.width - selectionRect.origin.x - selectionRect.size.width,
                         selectionRect.size.height,
                         selectionRect.size.width);
}
// Crop and draw
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], tmpRect);
[[UIImage imageWithCGImage:imageRef scale:image.scale orientation:image.imageOrientation] drawInRect:rect];
CGImageRelease(imageRef);
The performance of this is atrocious: it spends 92% of its time in [UIImage drawInRect].
Drilling deeper, that's 84.5% in ripc_AcquireImage and 7.5% in ripc_RenderImage.
ripc_AcquireImage in turn spends 51% of its time decoding the JPEG and 30% upsampling.
So... I guess my question is: what's the best way to avoid this? One option is not to take in a JPEG to start with, and that is a real solution for some cases (à la captureStillImageAsynchronouslyFromConnection without a JPEG intermediary). But if I'm getting a UIImage off the camera roll, say, is there a clean way to convert the JPEG-backed UIImage? Is converting it even the right approach (is there essentially a "cacheAsBitmap" flag somewhere that would do that)?

Apparently this isn't just a JPEG issue; it's anything that isn't backed by an "ideal native representation", I think.
I'm having good success (significant performance improvements) with just the following:
UIGraphicsBeginImageContext(image.size);
[image drawAtPoint:CGPointZero];
image = [UIGraphicsGetImageFromCurrentImageContext() retain];
UIGraphicsEndImageContext();
This works for images taken from both the camera roll and the camera.
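
If the source image has a non-1.0 scale (a Retina asset), a minor variation of the same trick preserves it; the options call below is my assumption, not part of the answer above:

UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
[image drawAtPoint:CGPointZero]; // forces the JPEG to be decoded into the bitmap context
UIImage *decodedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();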

Related

iOS: renderInContext and Landscape orientation issue

I'm trying to save the currently shown views on my iOS device for a certain app, and this is working properly. But I've got a problem as soon as I try to save a UIImageView in landscape orientation.
I'm using Auto Layout for this app, and it runs on both iPhone and iPad. It seems the image view is always saved as if in portrait mode, and I'm a little bit stuck right now.
This is the code I use:
CGSize frameSize = self.view.frame.size;
if (UIInterfaceOrientationIsLandscape(self.interfaceOrientation)) {
    frameSize = CGSizeMake(self.view.frame.size.height, self.view.frame.size.width);
}
UIGraphicsBeginImageContextWithOptions(frameSize, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGFloat scale = CGRectGetWidth(self.view.frame) / CGRectGetWidth(self.view.bounds);
CGContextScaleCTM(ctx, scale, scale);
[self.view.layer renderInContext:ctx];
[self.delegate photoSaved:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
Looking forward to your help!
I still have no idea what your exact issue is, but using your screenshot code produces a slightly strange image (not rotated or anything, just too small). Can you try this code instead, please?
+ (UIImage *)imageFromView:(UIView *)view {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, .0f);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
Other than that, you must understand there is a big difference between UIImage and CGImage: the UIImage includes the orientation, while the CGImage does not. Image transformations usually work on the CGImage, and querying its width or height discards the orientation, so a CGImage will have swapped dimensions when its source image's orientation is not up (UIImageOrientationUp). Usually you create a CGImage from the context and then use [UIImage imageWithCGImage:ref scale:1.0f orientation:originalOrientation]. Only if you wish to explicitly rotate the image so it has no orientation (that is, UIImageOrientationUp) do you need to rotate and translate the image and draw it into the context.
Anyway, these orientation issues are largely fixed by now: UIImagePNGRepresentation respects the orientation, and the image constructor from a CGImage written above is what used to be missing in the past, if I remember correctly.
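As a rough sketch of that last point (illustrative only, not part of the original answer): redrawing a UIImage, which honors its orientation, into a fresh context produces a copy whose orientation is UIImageOrientationUp.

// Illustrative sketch: bake the orientation into the pixels so the copy reports UIImageOrientationUp.
UIImage *NormalizedImage(UIImage *source) {
    if (source.imageOrientation == UIImageOrientationUp) {
        return source; // already upright, nothing to do
    }
    UIGraphicsBeginImageContextWithOptions(source.size, NO, source.scale);
    [source drawInRect:CGRectMake(0, 0, source.size.width, source.size.height)];
    UIImage *upright = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return upright;
}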

UIImagePNGRepresentation returns nil after CIFilter

I'm running into a problem getting a PNG representation for a UIImage after rotating it with CIAffineTransform. First, I have a category on UIImage that rotates an image 90 degrees clockwise. It seems to work correctly when I display the rotated image in a UIImageView.
-(UIImage *)cwRotatedRepresentation
{
    // Not very precise, stop yelling at me.
    CGAffineTransform xfrm = CGAffineTransformMakeRotation(-(6.28 / 4.0));
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *inputImage = [CIImage imageWithCGImage:self.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CIAffineTransform"];
    [filter setValue:inputImage forKey:@"inputImage"];
    [filter setValue:[NSValue valueWithBytes:&xfrm objCType:@encode(CGAffineTransform)] forKey:@"inputTransform"];
    CIImage *result = [filter valueForKey:@"outputImage"];
    CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
    return [[UIImage alloc] initWithCIImage:result];
}
However, when I try to actually get a PNG for the newly rotated image, UIImagePNGRepresentation returns nil.
-(NSData *)getPNG
{
    UIImage *myImg = [UIImage imageNamed:@"canada"];
    myImg = [myImg cwRotatedRepresentation];
    NSData *d = UIImagePNGRepresentation(myImg);
    // d == nil :(
    return d;
}
Is Core Image overwriting the PNG headers or something? Is there a way around this behavior, or a better means of achieving the desired result: a PNG representation of a UIImage rotated 90 degrees clockwise?
Not yelling, but -M_PI_2 will give you the constant you want with maximum precision :)
The only other thing I see is that you probably want to be using [result extent] instead of [inputImage extent], unless your image is known to be square.
I'm not sure how that would cause UIImagePNGRepresentation to fail, though. One other thought: you create a CGImage but then use the CIImage in the UIImage; perhaps using initWithCGImage: would give better results.
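Putting those suggestions together, the category might look roughly like this; a sketch assuming ARC, not the asker's final code:

- (UIImage *)cwRotatedRepresentation
{
    CGAffineTransform xfrm = CGAffineTransformMakeRotation(-M_PI_2);
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *inputImage = [CIImage imageWithCGImage:self.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CIAffineTransform"];
    [filter setValue:inputImage forKey:@"inputImage"];
    [filter setValue:[NSValue valueWithBytes:&xfrm objCType:@encode(CGAffineTransform)]
              forKey:@"inputTransform"];
    CIImage *result = [filter valueForKey:@"outputImage"];
    // Render the CIImage to a CGImage and wrap *that*, so UIImagePNGRepresentation
    // has real bitmap data to encode instead of a CIImage-backed UIImage.
    CGImageRef cgImage = [context createCGImage:result fromRect:[result extent]];
    UIImage *rotated = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return rotated;
}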

Can I store a decompressed image in Core Data

I'm working on a carousel with a large number of big images, and I'm running some tests to improve the performance of loading them. Right now, even though I'm already decompressing the JPEGs on a different queue, it still takes a little while, especially compared with the Photos app included in iOS. Furthermore, if I swipe through the images very fast, I can trigger memory warnings.
So what I'm trying to do is store the CGImageRef (or the already-decompressed UIImage: the raw data) in Core Data. But all the answers and options I've found use UIImageJPEGRepresentation, and doing that would compress the image again, wouldn't it?
Does anybody know if there is a way? Am I approaching the problem wrongly?
Yes, you can convert the image to NSData and store that. Example:
Entity *testEntity = [NSEntityDescription insertNewObjectForEntityForName:@"Entity"
                                                   inManagedObjectContext:__managedObjectContext];
NSString *photoPath = [[NSBundle mainBundle] pathForResource:@"photo" ofType:@"png"];
if ([[NSFileManager defaultManager] fileExistsAtPath:photoPath]) {
    NSData *data = [NSData dataWithContentsOfFile:photoPath];
    [testEntity setPhoto:data];
}
This stores the image as BLOB data in the sqlite file.
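If you really do want the decompressed pixels rather than the encoded file (which is what the question asks about), one way, sketched here as an illustration rather than a recommendation, is to copy the raw bitmap bytes out of the CGImage; note the resulting NSData will be far larger than the JPEG, and this assumes ARC:

CGImageRef imageRef = image.CGImage; // 'image' is the already-decoded UIImage
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
NSData *rawData = (__bridge_transfer NSData *)pixelData;
// To rebuild the image later you would also need to store the width, height,
// bits per component, bytes per row and color space alongside the bytes.
[testEntity setPhoto:rawData];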
Ideally you never keep a large number of UIImage objects for large images in memory; they will give you memory warnings.
If the images are local files, you can use a background thread to scale the big images down to a size that is ideal for the carousel. Save these thumbnails and map them to the original images.
Load the thumbnails for the carousel, and use the original image file for detailed viewing. The thumbnails should be PNG for maximum performance: JPEG decoding is not native to iOS and requires more CPU than PNG decoding. You don't have to keep the thumbnail data in Core Data; a .png file does a nice job in my experience. You can use the following code to load the image:
UIImage * image = [UIImage imageWithContentsOfFile:filePath];
Here is the code to resize an image:
- (UIImage *)resizeImage:(UIImage*)image newSize:(CGSize)newSize {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGImageRef imageRef = image.CGImage;

    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Set the quality level to use when rescaling
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
    CGContextConcatCTM(context, flipVertical);

    // Draw into the context; this scales the image
    CGContextDrawImage(context, newRect, imageRef);

    // Get the resized image from the context and wrap it in a UIImage
    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

    CGImageRelease(newImageRef);
    UIGraphicsEndImageContext();

    return newImage;
}
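A possible way to tie the two pieces together on a background queue; the paths, image view, and size here are made-up placeholders:

// Hypothetical usage: scale down in the background, cache as PNG,
// then load the small file for the carousel.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *fullImage = [UIImage imageWithContentsOfFile:originalPath];
    UIImage *thumb = [self resizeImage:fullImage newSize:CGSizeMake(200, 200)];
    [UIImagePNGRepresentation(thumb) writeToFile:thumbPath atomically:YES];
    dispatch_async(dispatch_get_main_queue(), ^{
        carouselImageView.image = [UIImage imageWithContentsOfFile:thumbPath];
    });
});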

Optimizing Image Drawing for iPad 3

I am trying to find the most optimized way to draw images in iOS on the iPad 3. I am generating a reflection for a third-party version of Cover Flow that I am implementing in my app. The reflection is created using NSOperationQueue and then added via UIImageView on the main thread. Because the Cover Flow part is already using resources for the animations as you scroll through the images, with each new image that is added there is a bit of a "pop" in the scrolling, and it makes the app feel laggy and glitchy. When testing on iPad 1 and 2, the animation is perfectly smooth and looks great.
How can I further optimize the drawing to avoid this? Any ideas are appreciated. I have been looking into "tiling" the reflection so that it presents a little of the reflection at a time, but I'm not sure what the best approach is.
Here is the drawing code:
UIImage *mask = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"3.0-Carousel-Ref-Mask.jpg" ofType:nil]];
UIImage *image = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:self.name ofType:nil]];

UIGraphicsBeginImageContextWithOptions(mask.size, NO, [[UIScreen mainScreen] scale]);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ctx, 0.0, mask.size.height);
CGContextScaleCTM(ctx, 1.f, -1.f);
[image drawInRect:CGRectMake(0.f, -mask.size.height, image.size.width, image.size.height)];
UIImage *flippedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

CGImageRef maskRef = mask.CGImage;
CGImageRef maskCreate = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                          CGImageGetHeight(maskRef),
                                          CGImageGetBitsPerComponent(maskRef),
                                          CGImageGetBitsPerPixel(maskRef),
                                          CGImageGetBytesPerRow(maskRef),
                                          CGImageGetDataProvider(maskRef), NULL, false);
CGImageRef masked = CGImageCreateWithMask([flippedImage CGImage], maskCreate);
CGImageRelease(maskCreate);
UIImage *maskedImage = [UIImage imageWithCGImage:masked scale:[[UIScreen mainScreen] scale] orientation:UIImageOrientationUp];
CGImageRelease(masked);

if (maskedImage) {
    [mainView performSelectorOnMainThread:@selector(imageDidLoad:)
                               withObject:[NSArray arrayWithObjects:maskedImage, endView, nil]
                            waitUntilDone:YES];
} else {
    NSLog(@"Unable to find sample image: %@", self.name);
}
The mask is just a gradient PNG that I am using to mask the image. Also, if I just draw this offscreen but don't add it, there is hardly any lag. The lag comes from actually adding it on the main thread.
So, after spending a great deal of time researching this issue and trying out different approaches (and spending a good while with the Time Profiler in Instruments), I found that the lag came from decoding the image on the main thread when it was displayed. By decoding in the background with pure Core Graphics calls I was able to cut the time in half. That still wasn't good enough.
I further found that the reflection created by my code was taking a long time to display because of its transparent (alpha) pixels. I therefore drew it into a context that I filled with solid black, and then made the view itself transparent instead of the image. This reduced the time spent on the main thread by 83%. Mission accomplished.
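For reference, the general shape of that fix is something like the following; a sketch with placeholder names (reflectionImage, reflectionSize, screenScale), not the asker's exact code, and the masking details are omitted:

// Decode off the main thread into an *opaque*, black-filled context so the
// main thread neither decodes image data nor blends per-pixel alpha.
UIGraphicsBeginImageContextWithOptions(reflectionSize, YES, screenScale);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(ctx, [UIColor blackColor].CGColor);
CGContextFillRect(ctx, CGRectMake(0, 0, reflectionSize.width, reflectionSize.height));
[reflectionImage drawInRect:CGRectMake(0, 0, reflectionSize.width, reflectionSize.height)];
UIImage *opaqueReflection = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// ...then lower the alpha of the containing view instead of relying on
// transparent pixels in the image itself.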

UIImage face detection

I am trying to write a routine that takes a UIImage and returns a new UIImage that contains just the face. This would seem to be very straightforward, but my brain is having problems getting around the Core Image vs. UIImage coordinate spaces.
Here are the basics:
- (UIImage *)imageFromImage:(UIImage *)image inRect:(CGRect)rect {
    CGImageRef sourceImageRef = [image CGImage];
    CGImageRef newImageRef = CGImageCreateWithImageInRect(sourceImageRef, rect);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    return newImage;
}
-(UIImage *)getFaceImage:(UIImage *)picture {
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];
    CIImage *ciImage = [CIImage imageWithCGImage:[picture CGImage]];
    NSArray *features = [detector featuresInImage:ciImage];

    // For simplicity, I'm grabbing the first one in this code sample,
    // and we can all pretend that the photo has one face for sure. :-)
    CIFaceFeature *faceFeature = [features objectAtIndex:0];
    return [self imageFromImage:picture inRect:faceFeature.bounds];
}
The image that is returned is cropped from the vertically flipped image. I've tried adjusting faceFeature.bounds using something like this:
CGAffineTransform t = CGAffineTransformMakeScale(1.0f, -1.0f);
CGRect newRect = CGRectApplyAffineTransform(faceFeature.bounds, t);
... but that gives me results outside the image.
I'm sure there's something simple to fix this, but short of calculating the offset from the bottom and then creating a new rect using that as the origin, is there a "proper" way to do this?
Thanks!
It's much easier and less messy to just use CIContext to crop your face from the image. Something like this:
CGImageRef cgImage = [_ciContext createCGImage:[CIImage imageWithCGImage:inputImage.CGImage] fromRect:faceFeature.bounds];
UIImage *croppedFace = [UIImage imageWithCGImage:cgImage];
Here inputImage is your UIImage object and faceFeature is a CIFaceFeature object that you get from the [CIDetector featuresInImage:] method.
Since there doesn't seem to be a simple way to do this, I just wrote some code to do it:
CGRect newBounds = CGRectMake(faceFeature.bounds.origin.x,
                              _picture.size.height - faceFeature.bounds.origin.y - faceFeature.bounds.size.height,
                              faceFeature.bounds.size.width,
                              faceFeature.bounds.size.height);
This worked like a charm.
There is no simple way to achieve this. The problem is that images from the iPhone camera are always stored in portrait orientation internally, with metadata used to display them correctly. You will also get better accuracy from your face-detection call if you tell it the rotation of the image beforehand. Just to make things complicated, you have to pass it the image orientation in EXIF format.
Fortunately there is an Apple sample project that covers all of this, called SquareCam; I suggest you check it for the details.
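For what it's worth, the detector can be told the orientation through the options dictionary of featuresInImage:options:; a rough sketch, with the EXIF mapping simplified to a hard-coded value:

// Pass the EXIF orientation so CIDetector searches the image the right way up.
// 6 is the EXIF value for a typical portrait iPhone photo; a real implementation
// would map picture.imageOrientation to the proper EXIF value.
NSDictionary *opts = @{ CIDetectorImageOrientation : @6 };
NSArray *features = [detector featuresInImage:ciImage options:opts];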
