iOS: render image in context not working as expected in landscape orientation

The code below works well to render an image from a view at retina resolution. The problem is that it does not work while my device is in landscape orientation: it returns an image rotated 90 degrees.
Do I need to add options for image or context orientation?
CGSize panelRect = CGSizeMake(selectedPanelView.frame.size.width, selectedPanelView.frame.size.height);
UIGraphicsBeginImageContextWithOptions(panelRect, YES, 2.0); // opaque = YES, scale = 2 (retina)
[[selectedPanelView layer] renderInContext:UIGraphicsGetCurrentContext()];
renderedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

The problem seems to be fixed by using UIImage imageWithCGImage:scale:orientation:, as below (0 is UIImageOrientationUp, so the named constant reads better):
UIImage *renderedImageWithOrientation = [UIImage imageWithCGImage:renderedImage.CGImage scale:1 orientation:UIImageOrientationUp];
I run this after rendering the image, then I add the image to the imageView.
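As a side note, if you can target iOS 10 or later, UIGraphicsImageRenderer manages the context and scale for you; a minimal sketch of the same render, assuming the same selectedPanelView:
UIGraphicsImageRenderer *renderer =
    [[UIGraphicsImageRenderer alloc] initWithSize:selectedPanelView.bounds.size];
// The renderer defaults to the main screen's scale and returns an upright
// UIImage, so no manual begin/end context calls are needed.
UIImage *renderedImage = [renderer imageWithActions:^(UIGraphicsImageRendererContext *ctx) {
    [selectedPanelView.layer renderInContext:ctx.CGContext];
}];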

Related

How can I use "CSS sprites" in iOS?

I have an image with multiple icons, and I have the position and the size of the icon that I want to show.
The question is, how can I show just part of an image in a UIImageView so I can show only the icon that I want to?
Is it possible to show the icon correctly in 1x, 2x, and 3x, even if the image gets a bit pixelated?
You can crop a part of the image and create a new UIImage from it with CGImageCreateWithImageInRect:
CGRect cropRect = CGRectMake(x, y, width, height); // calculate the rect you'd like to show
CGImageRef imageRef = CGImageCreateWithImageInRect(originalImage.CGImage, cropRect);
UIImage* outImage = [UIImage imageWithCGImage:imageRef scale:originalImage.scale orientation:originalImage.imageOrientation];
CGImageRelease(imageRef);
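One caveat: CGImageCreateWithImageInRect works in pixel coordinates, so if the sprite sheet is a @2x or @3x asset, multiply the rect by the image's scale first. A sketch (the helper name is mine):
// Crop in point coordinates by converting to pixels first.
UIImage *iconFromSpriteSheet(UIImage *sheet, CGRect pointRect) {
    CGFloat s = sheet.scale;
    CGRect pixelRect = CGRectMake(pointRect.origin.x * s, pointRect.origin.y * s,
                                  pointRect.size.width * s, pointRect.size.height * s);
    CGImageRef ref = CGImageCreateWithImageInRect(sheet.CGImage, pixelRect);
    // Passing the sheet's scale back keeps the icon crisp at 2x/3x.
    UIImage *icon = [UIImage imageWithCGImage:ref scale:s orientation:sheet.imageOrientation];
    CGImageRelease(ref);
    return icon;
}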

iOS: renderInContext and Landscape orientation issue

I'm trying to save the currently shown views on my iOS device for a certain app, and this works properly. But I have a problem as soon as I try to save a UIImageView in landscape orientation.
I'm using Auto layout for this app, and it runs on both iPhone and iPad. It seems like the ImageView is always saved as shown in portrait mode, and I'm a little bit stuck right now.
This is the code I use:
CGSize frameSize = self.view.frame.size;
if (UIInterfaceOrientationIsLandscape(self.interfaceOrientation)) {
    frameSize = CGSizeMake(self.view.frame.size.height, self.view.frame.size.width);
}
UIGraphicsBeginImageContextWithOptions(frameSize, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGFloat scale = CGRectGetWidth(self.view.frame) / CGRectGetWidth(self.view.bounds);
CGContextScaleCTM(ctx, scale, scale);
[self.view.layer renderInContext:ctx];
[self.delegate photoSaved:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
Looking forward to your help!
I still have no idea what your exact issue is, but running your screenshot code produces a somewhat strange image (not rotated or anything, just too small). Can you try this code instead?
+ (UIImage *)imageFromView:(UIView *)view {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0f);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
Other than that, you must understand there is a huge difference between UIImage and CGImage: the UIImage includes the orientation while the CGImage does not. Image transformations usually operate on the CGImage, and reading its width or height discards the orientation, which means a CGImage will have flipped dimensions when its orientation is not up (UIImageOrientationUp). Usually, when dealing with such images, you create a CGImage from the context and then use [UIImage imageWithCGImage:ref scale:1.0f orientation:originalOrientation]. Only if you wish to explicitly rotate the image so it has no orientation (being UIImageOrientationUp) do you need to rotate and translate the image and draw it onto the context.
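For completeness, a minimal sketch of "baking in" the orientation without doing the CTM math by hand: -drawInRect: applies the orientation transform for you, so redrawing the image into a fresh context yields a UIImageOrientationUp result. (The helper name is mine.)
UIImage *normalizedImage(UIImage *image) {
    if (image.imageOrientation == UIImageOrientationUp) return image;
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:(CGRect){CGPointZero, image.size}]; // applies the orientation transform
    UIImage *up = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return up;
}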
Anyway, these orientation issues are largely fixed by now: UIImagePNGRepresentation respects the orientation, and the image constructor from the CGImage is already written above, which is what used to be missing, if I remember correctly.

How to save UIView content (screenshot) without reducing its quality

I have a UIView and I want to save its content to an image. I did that successfully using UIGraphicsBeginImageContext and UIGraphicsGetImageFromCurrentImageContext(), but the problem is that the image quality is reduced. Is there a way to take a screenshot / save a UIView's content to an image without reducing its quality?
Here's a snippet of my code:
UIGraphicsBeginImageContext(self.myView.frame.size);
[self.myView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Try the following code:
UIGraphicsBeginImageContextWithOptions(YourView.bounds.size, NO, 0);
[YourView drawViewHierarchyInRect:YourView.bounds afterScreenUpdates:YES];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This code works great!
Mital Solanki’s answer is a fine solution, but the root of the problem is that your view is on a retina screen (so has a scale factor of 2) and you are creating a graphics context with a scale factor of 1. The documentation for UIGraphicsBeginImageContext states:
This function is equivalent to calling the UIGraphicsBeginImageContextWithOptions function with the opaque parameter set to NO and a scale factor of 1.0.
Instead use UIGraphicsBeginImageContextWithOptions with a scale of 0, which is equivalent to passing a scale of [[UIScreen mainScreen] scale].
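Applied to the snippet from the question, that would look something like this (assuming the same self.myView):
// A scale of 0 means "use the main screen's scale", so retina detail is kept.
UIGraphicsBeginImageContextWithOptions(self.myView.bounds.size, NO, 0.0);
[self.myView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();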

Crop image to a square according to the size of a UIView/CGRect

I have an implementation of AVCaptureSession and my goal is for the user to take a photo and only save the part of the image within the red square border, as shown below:
AVCaptureSession's previewLayer (the camera) spans from (0,0) (top left) to the bottom of my camera controls bar (the bar just above the view that contains the shutter). My navigation bar and controls bar are semi-transparent, so the camera can show through.
I'm using [captureSession setSessionPreset:AVCaptureSessionPresetPhoto]; to ensure that the original image being saved to the camera roll is like Apple's camera.
The user will be able to take the photo in portrait, landscape left and right, so the cropping method must take this into account.
So far, I've tried to crop the original image using this code:
DDLogVerbose(@"%@: Image crop rect: (%f, %f, %f, %f)", THIS_FILE, self.imageCropRect.origin.x, self.imageCropRect.origin.y, self.imageCropRect.size.width, self.imageCropRect.size.height);
// Create new image context (retina safe)
UIGraphicsBeginImageContextWithOptions(CGSizeMake(self.imageCropRect.size.width, self.imageCropRect.size.width), NO, 0.0);
// Create rect for image
CGRect rect = self.imageCropRect;
// Draw the image into the rect
[self.captureManager.stillImage drawInRect:rect];
// Saving the image, ending image context
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
However, when I look at the cropped image in the camera roll, it seems that it has just squashed the original image, and not discarded the top and bottom parts of the image like I'd like. It also results in 53 pixels of white space at the top of the "cropped" image, likely because of the y position of my CGRect.
This is my logging output for the CGRect:
Image crop rect: (0.000000, 53.000000, 320.000000, 322.000000)
This also describes the frame of the red bordered view in the superview.
Is there something crucial I'm overlooking?
P.S. The original image size (taken with a camera in portrait mode) is:
Original image size: (2448.000000, 3264.000000)
You can crop images with CGImageCreateWithImageInRect:
CGImageRef imageRef = CGImageCreateWithImageInRect([uncroppedImage CGImage], bounds);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
Don't forget to add the scale parameter, otherwise you will get a low-resolution image:
CGImageRef imageRef = CGImageCreateWithImageInRect([uncroppedImage CGImage], CGRectMake(0, 0, 30, 120));
[imageView setImage:[UIImage imageWithCGImage:imageRef scale:[[UIScreen mainScreen] scale] orientation:UIImageOrientationUp]];
CGImageRelease(imageRef);
Swift 3:
let imageRef: CGImage = uncroppedImage.cgImage!.cropping(to: bounds)!
let croppedImage: UIImage = UIImage(cgImage: imageRef)
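To address the squashing in the original question: -drawInRect: scales the whole image to fit the rect rather than cropping it. What is needed is to convert the on-screen crop rect (in points) into the full-resolution photo's pixel coordinates and crop with CGImageCreateWithImageInRect. A rough sketch using the question's numbers (portrait photo, 320-point-wide preview); landscape photos would additionally need the rect transformed for the image orientation:
// Map the preview-space crop rect to photo pixels. Names are illustrative.
CGFloat previewWidth = 320.0; // width of the on-screen preview, in points
CGFloat pixelsPerPoint = originalImage.size.width * originalImage.scale / previewWidth;
CGRect pixelRect = CGRectMake(self.imageCropRect.origin.x * pixelsPerPoint,
                              self.imageCropRect.origin.y * pixelsPerPoint,
                              self.imageCropRect.size.width * pixelsPerPoint,
                              self.imageCropRect.size.height * pixelsPerPoint);
CGImageRef cgCrop = CGImageCreateWithImageInRect(originalImage.CGImage, pixelRect);
UIImage *croppedImage = [UIImage imageWithCGImage:cgCrop
                                            scale:originalImage.scale
                                      orientation:originalImage.imageOrientation];
CGImageRelease(cgCrop);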

UIImagePickerController image Scaling and maintain its quality

An image captured using the camera is returned at a size of 720×960.
The captured image is displayed in a UIImageView of 320×436, like this:
UIImageView *imgView = [[UIImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, 320.0, 436.0)];
imgView.image = img; // image received from the camera
[self.view addSubview:imgView];
This works fine: the 720×960 image is scaled to 320×436 and displayed.
Now the actual problem starts. I have another image of size 72×72. This image is overlaid on the image received from the camera at some arbitrary coordinates:
CGRectMake(0.0, 0.0, 72.0, 72.0);
I am not able to find a good way to handle scaling and applying an overlay of another image at the same time while maintaining quality.
The image needs to be sent to a server.
Use the following code to scale images:
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
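To also composite the 72×72 overlay while scaling, both drawInRect: calls can share one context. A sketch along the same lines (overlayImage and overlayFrame are stand-ins for your own values; the WithOptions variant with scale 0 avoids the quality loss discussed in the previous answer):
- (UIImage *)imageWithImage:(UIImage *)image
                    overlay:(UIImage *)overlayImage
               overlayFrame:(CGRect)overlayFrame
               scaledToSize:(CGSize)newSize
{
    // Scale 0 uses the screen's scale factor, so quality is not reduced.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    [overlayImage drawInRect:overlayFrame]; // frame in the scaled image's coordinates
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}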
