UIImage Distortion from UIGraphicsBeginImageContext with larger files (pixel formats, codecs?) - ios

I'm cropping UIImages with a UIBezierPath using UIGraphicsContext:
CGSize thumbnailSize = CGSizeMake(54.0f, 45.0f); // dimensions of UIBezierPath
UIGraphicsBeginImageContextWithOptions(thumbnailSize, NO, 0);
[path addClip];
[originalImage drawInRect:CGRectMake(0, originalImage.size.height/-3, thumbnailSize.width, originalImage.size.height)];
UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But for some reason my images are getting stretched vertically (everything looks slightly long and skinny), and the effect is stronger the bigger my originalImage is. I'm sure the originalImages are perfectly fine before I do these operations (I've checked).
My images are all 9:16 (say 72px wide by 128px tall) if that matters.
I've seen that UIGraphics creates a bitmap with an "ARGB 32-bit integer pixel format using host-byte order". I'll admit a bit of ignorance when it comes to pixel formats, but I felt this MAY be relevant because I'm not sure whether that's the same pixel format I use to encode the picture data.
No idea how relevant this is but here is the FULL processing pipeline:
I'm capturing using AVFoundation and I set my photoSettings as
NSDictionary *photoSettings = @{AVVideoCodecKey : AVVideoCodecH264};
I capture using captureStillImageAsynchronouslyFromConnection:..., turn the result into NSData using [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer], then downsize it into a thumbnail by creating a data provider with CGDataProviderCreateWithCFData, converting to a CGImageRef using CGImageSourceCreateThumbnailAtIndex, and getting a UIImage from that.
Later, I once again turn it into NSData using UIImageJPEGRepresentation(thumbnail, 0.7) so I can store it. And finally, when I'm ready to display it, I call my own method detailed above, [self maskImage:[UIImage imageWithData:imageData] toPath:_thumbnailPath], display the result in a UIImageView, and set contentMode = UIViewContentModeScaleAspectFit.
If the method I'm using to mask the UIImage with the UIBezierPath is fine, I may end up explicitly setting the photoOutput settings with [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil], and then I can probably use something like "how to convert a CVImageBufferRef to UIImage" and change a lot of my code... but I'd really rather not do that unless it's completely necessary since, as I've mentioned, I really don't know much about video encoding or all these low-level graphics objects.

This line:
[originalImage drawInRect:CGRectMake(0, originalImage.size.height/-3, thumbnailSize.width, originalImage.size.height)];
is the problem. You are drawing originalImage, but you give it a width of thumbnailSize.width and a height of originalImage.size.height. Those two values don't come from the same source, so the drawn image's aspect ratio no longer matches the original's.
You need a width and a height derived from the same image size: pick one dimension and scale the other to keep the proper aspect ratio, as in the sketch below.
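For example, a minimal sketch (variable names match the snippet above; the vertical offset shown is just one choice for centering) that derives the drawn height from the image's own aspect ratio, so both dimensions are scaled by the same factor:
CGSize thumbnailSize = CGSizeMake(54.0f, 45.0f);
UIGraphicsBeginImageContextWithOptions(thumbnailSize, NO, 0);
[path addClip];
// Scale both dimensions by the same factor, based on the target width.
CGFloat scaleFactor = thumbnailSize.width / originalImage.size.width;
CGFloat drawHeight  = originalImage.size.height * scaleFactor;
// Center the (taller) drawn image vertically inside the thumbnail.
CGFloat offsetY = (thumbnailSize.height - drawHeight) / 2.0f;
[originalImage drawInRect:CGRectMake(0, offsetY, thumbnailSize.width, drawHeight)];
UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();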

Related

iOS Redrawing image to prevent deferred decompression resulting in a bigger image

I've noticed some people redraw images on a CGContext to prevent deferred decompression and this has caused a bug in our app.
The bug is that the size of the image professes to remain the same but the CGImageDataProvider data has extra bytes appended to it.
For example, we have a 797x500 PNG image downloaded from the Internet, and AsyncImageView redraws it and returns the redrawn image.
Here is the code:
UIImage *image = [[UIImage alloc] initWithData:data];
if (image)
{
    // Log to compare size and data length...
    NSLog(@"BEFORE: %f %f", image.size.width, image.size.height);
    NSLog(@"LEN %ld", CFDataGetLength(CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))));

    // Original code from AsyncImageView
    // redraw to prevent deferred decompression
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawAtPoint:CGPointZero];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Log to compare size and data length...
    NSLog(@"AFTER: %f %f", image.size.width, image.size.height);
    NSLog(@"LEN %ld", CFDataGetLength(CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))));

    // Some other code...
}
The log shows as follows:
BEFORE: 797.000000 500.000000
LEN 1594000
AFTER: 797.000000 500.000000
LEN 1600000
I decided to print each byte one by one, and sure enough there were twelve 0s appended for each row.
Basically, the redrawing was causing the image data to be that of an 800x500 image. Because of this, our app was looking at the wrong pixel when it wanted to look at the (797 * row + column)th pixel.
We're not using any big images so deferred decompression doesn't pose any problems, but should I decide to use this method to redraw images, there's a chance I might introduce a subtle bug.
Does anyone have a solution to this? Or is this a bug introduced by Apple and we can't really do anything?
As you've discovered, rows are padded out to a convenient size. This is generally done to make vector algorithms more efficient. You just need to adapt to that layout if you're going to use CGImage this way: call CGImageGetBytesPerRow to find out the actual number of bytes allocated per row, and compute your offsets from that (bytesPerRow * row + bytesPerPixel * column).
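A minimal sketch of that indexing, assuming the pixel data is read straight from the CGImage's data provider (row and column here are just example coordinates):
CGImageRef cgImage = image.CGImage;
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
const UInt8 *bytes = CFDataGetBytePtr(pixelData);
size_t bytesPerRow   = CGImageGetBytesPerRow(cgImage);      // includes any row padding
size_t bytesPerPixel = CGImageGetBitsPerPixel(cgImage) / 8;
size_t row = 10, column = 20;                               // example coordinates
const UInt8 *pixel = bytes + bytesPerRow * row + bytesPerPixel * column;
// pixel[0..bytesPerPixel-1] now holds the channel values for that pixel.
CFRelease(pixelData);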
That's probably best for you, but if you need to get rid of the padding, you can do that by creating your own CGBitmapContext and rendering into it. That's a heavily covered topic on Stack Overflow if you're not familiar with it. For example: How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?
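If you do want tightly packed rows, a hedged sketch of that CGBitmapContext approach looks roughly like this (check CGBitmapContextGetBytesPerRow on the result if you depend on the exact stride):
CGImageRef cgImage = image.CGImage;
size_t width  = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Request bytesPerRow = width * 4 so no padding is added.
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4, colorSpace,
                                         kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
unsigned char *pixels = CGBitmapContextGetData(ctx);   // tightly packed RGBA
// ... read pixels[4 * (width * row + column)] here ...
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);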

Image loses quality when scaled

I know that when scaling down an image you have to expect some loss of quality, but when I assign an image to a UIButton of size (75,75) it has great quality.
When I scale the image to size (75,75) for copy/paste using UIPasteboard it has really bad quality.
Background: My app is a keyboard extension, so I have buttons with assigned images and when they are clicked, I get the image from the button, scale it to be the right size, copy it to UIPasteboard, then paste.
Code:
Here is my code for detecting a button click and copying an image:
- (IBAction)clickedImage:(id)sender {
    UIButton *btn = sender;
    UIImage *scaledImage = btn.imageView.image;
    UIImage *newImage = [scaledImage imageWithImage:scaledImage andSize:CGSizeMake(75, 75)];
    NSData *imgData = UIImagePNGRepresentation(newImage);
    UIPasteboard *pasteboard = [UIPasteboard generalPasteboard];
    [pasteboard setData:imgData forPasteboardType:[UIPasteboardTypeListImage objectAtIndex:0]];
}
And I have a UIImage category with the imageWithImage:andSize: method for scaling the image. This is the scaling method:
- (UIImage *)imageWithImage:(UIImage *)image andSize:(CGSize)newSize {
    // Create a bitmap context.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
What doesn't make sense is that when I put the image in the UIButton it is scaled down to the exact same size as when I scale using code, but the quality is way better for the UIButton than when I return the scaled image. Is there something wrong with my scaling code? Does anyone know why there is such a drop in quality between the two images?
A better way to do this is to use ImageIO to resize your images. It takes a little bit longer, but it is far better for scaling images than redrawing into a graphics context.
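A hedged sketch of the ImageIO route, assuming you first get the image bytes (here via UIImagePNGRepresentation of the button's image, reusing names from the question):
#import <ImageIO/ImageIO.h>

NSData *sourceData = UIImagePNGRepresentation(btn.imageView.image);
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)sourceData, NULL);
NSDictionary *options = @{
    (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
    (id)kCGImageSourceThumbnailMaxPixelSize          : @75,
    (id)kCGImageSourceCreateThumbnailWithTransform   : @YES   // respect orientation
};
CGImageRef scaledRef = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
UIImage *scaled = [UIImage imageWithCGImage:scaledRef];
CGImageRelease(scaledRef);
CFRelease(source);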
Did you try this https://github.com/mbcharbonneau/UIImage-Categories ?
There is an interesting method in the Resize category
- (UIImage *)resizedImage:(CGSize)newSize
interpolationQuality:(CGInterpolationQuality)quality;
Setting quality to kCGInterpolationHigh seems to give a good result (it is a little bit slower).
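Assuming that category is imported, usage in the button handler above might look like:
UIImage *newImage = [scaledImage resizedImage:CGSizeMake(75, 75)
                         interpolationQuality:kCGInterpolationHigh];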

iOS: renderInContext and Landscape orientation issue

I'm trying to save the currently shown views on my iOS device for a certain app, and this is working properly. But I've got a problem as soon as I try to save a UIImageView in landscape orientation.
See the following image that describes my problem:
I'm using Auto layout for this app, and it runs on both iPhone and iPad. It seems like the ImageView is always saved as shown in portrait mode, and I'm a little bit stuck right now.
This is the code I use:
CGSize frameSize = self.view.frame.size;
if (UIInterfaceOrientationIsLandscape(self.interfaceOrientation)) {
    frameSize = CGSizeMake(self.view.frame.size.height, self.view.frame.size.width);
}

UIGraphicsBeginImageContextWithOptions(frameSize, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGFloat scale = CGRectGetWidth(self.view.frame) / CGRectGetWidth(self.view.bounds);
CGContextScaleCTM(ctx, scale, scale);
[self.view.layer renderInContext:ctx];
[self.delegate photoSaved:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
Looking forward to your help!
I still have no idea what your exact issue is, but using your screenshot code produces a slightly strange image (not rotated or anything, just too small). Can you try this code instead, please?
+ (UIImage *)imageFromView:(UIView *)view {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, .0f);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
Other than that, you must understand there is a big difference between UIImage and CGImage: the UIImage includes the orientation while the CGImage does not. Image transformations usually work on the CGImage, and getting its width or height will disregard the orientation. That means a CGImage will have flipped dimensions when its orientation is not up (UIImageOrientationUp). Usually, when dealing with such images, you create a CGImage from the context and then use [UIImage imageWithCGImage:ref scale:1.0f orientation:originalOrientation]. Only if you wish to explicitly rotate the image so it has no orientation (i.e. UIImageOrientationUp) do you need to rotate and translate the image and draw it into the context.
Anyway, these orientation issues are mostly fixed by now: UIImagePNGRepresentation respects the orientation, and the UIImage constructor that takes a CGImage plus an orientation (shown above) covers what used to be missing, if I remember correctly.
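For illustration, a minimal sketch (ctx and originalImage are hypothetical names) of re-attaching the original orientation when wrapping a CGImage produced from a bitmap context:
CGImageRef rendered = CGBitmapContextCreateImage(ctx);
UIImage *result = [UIImage imageWithCGImage:rendered
                                      scale:1.0f
                                orientation:originalImage.imageOrientation];
CGImageRelease(rendered);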

Get picture from square AVCaptureVideoPreviewLayer

So what I am doing is creating a custom image picker, and I have a 320 x 320 AVCaptureVideoPreviewLayer that I am using. When I take a picture, I want to get a UIImage of what is actually seen in the preview layer, but what I get when I take a photo with captureStillImageAsynchronouslyFromConnection:completionHandler: is a normal image of size 2448 x 3264. What would be the best way to make this image into a 320 x 320 square image, like what is seen in the preview layer, without messing it up? Is there a Right Way™ to do this? Also, I am using AVLayerVideoGravityResizeAspectFill for the videoGravity property of the AVCaptureVideoPreviewLayer, if that is relevant.
Have you tried to transform the image?
Try using CGAffineTransformMakeScale(sx, sy) to scale the image down. Transforms can do magic! If you have ever taken linear algebra, you should recall the standard transformation matrices. I have not used this on images, so I am not sure whether it would work well at the pixel level.
you could also try
// grab the original image
UIImage *originalImage = [UIImage imageNamed:@"myImage.png"];
// scaling set to 2.0 makes the image 1/2 the size.
UIImage *scaledImage =
    [UIImage imageWithCGImage:[originalImage CGImage]
                        scale:(originalImage.scale * 2.0)
                  orientation:(originalImage.imageOrientation)];
where you can change the scale factor
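If the goal is specifically the square region that AVLayerVideoGravityResizeAspectFill shows, one common approach (a hedged sketch, not part of the answer above) is to crop a centered square first and then scale it down:
- (UIImage *)squareImageFromImage:(UIImage *)image sideLength:(CGFloat)side {
    // Crop the largest centered square out of the underlying CGImage. Working in
    // CGImage pixel space sidesteps point/pixel scale; because the crop is centered,
    // it is unaffected by the EXIF orientation, which is re-attached below.
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t shorter = MIN(width, height);
    CGRect cropRect = CGRectMake((width  - shorter) / 2.0,
                                 (height - shorter) / 2.0,
                                 shorter, shorter);
    CGImageRef croppedRef = CGImageCreateWithImageInRect(cgImage, cropRect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                           scale:image.scale
                                     orientation:image.imageOrientation];
    CGImageRelease(croppedRef);

    // Scale the square down to the requested side length (e.g. 320).
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(side, side), NO, 0);
    [cropped drawInRect:CGRectMake(0, 0, side, side)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}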

Crop UIImage from a transformed UIImageView

I am letting the user capture an image from the camera or picking one from the library.
This image I display in an UIImageView.
The user can now scale and position the image within a bounding box, exactly like you would do using the UIImagePickerController when allowsEditing is set to YES.
When the user is satisfied with the result and taps Done I would like to produce a cropped UIImage.
The problem arises when using CGImageCreateWithImageInRect as this does not take the scaling into account. The transform is applied to the imageView like this:
CGAffineTransform transform = CGAffineTransformScale(self.imageView.transform, newScale, newScale);
[self.imageView setTransform:transform];
Using a gestureRecognizer.
I assume what is happening is this: the UIImageView is scaled and moved, it then applies UIViewContentModeScaleAspectFit to the UIImage it holds, and when I ask it to crop the image, it does exactly that, with no regard to the scaling or positioning. The reason I think this is that if I don't scale or move the image but just tap Done straight away, the cropping works.
I crop the image like this:
- (UIImage *)cropImage:(UIImage *)img toRect:(CGRect)rect {
    CGFloat scale = [[UIScreen mainScreen] scale];
    if (scale > 1.0) {
        rect = CGRectMake(rect.origin.x * scale, rect.origin.y * scale, rect.size.width * scale, rect.size.height * scale);
    }
    CGImageRef imageRef = CGImageCreateWithImageInRect([img CGImage], rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef scale:self.imageView.image.scale orientation:self.imageView.image.imageOrientation];
    // UIImage *result = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return result;
}
Passing in a cropRect from a view that is a subView of my main view (the square overlay box, like in UIImagePickerController). Main UIView has a UIImageView that gets scaled and a UIView that displays the crop rectangle.
How can I get "what you see is what you get" cropping, and which factors must I take into account? Or perhaps suggest whether I should implement the hierarchy or scaling differently.
Try a simple trick. Apple has samples on its site that show how to zoom into a photo using code. Once you are done zooming, use a graphics context to capture the frame of the bounding view and take the image from that. For example, a UIView contains a scroll view which holds the zoomed image. The scroll view zooms, and so does your image; now take the frame size of your bounding UIView, create an image context from it, and save that as a new image. Tell me if that makes sense.
Cheers :)
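A hedged sketch of that suggestion (cropContainerView is a hypothetical name for the bounding view that frames the zoomed scroll view):
UIView *boundingView = self.cropContainerView;   // the view framing the visible crop area
UIGraphicsBeginImageContextWithOptions(boundingView.bounds.size, NO, 0);
[boundingView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();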

Resources