Place a full-sized image to fill the entire screen in a CALayer - iOS

I have an image (PNG) which must fill the entire screen of my app. I'm using CALayers and doing everything programmatically. This sounds like something that should be trivial, but I can't get it to work. I have two versions of the image: a retina version (2048px x 1536px) and a non-retina version (1024px x 768px). The image is listed as a universal image in the asset catalogue.
The code is simple enough, I think:
// CREATE FULL SCREEN CALAYER
CALayer *myLayer = [[CALayer alloc] init];
CGRect bounds = self.view.bounds; // 'bounds' was undefined in the original snippet; assume the view's bounds
[myLayer setBounds:CGRectMake(0, 0, bounds.size.width, bounds.size.height)];
[myLayer setPosition:CGPointMake(bounds.size.width/2, bounds.size.height/2)];
[self.view.layer addSublayer:myLayer];
// LOAD THE IMAGE INTO THE LAYER —— AM EXPECTING IT TO FILL THE LAYER
UIImage *layerImage = [UIImage imageNamed:@"infoScreen"];
CGImageRef image = [layerImage CGImage];
[myLayer setContents:(__bridge id)image];
[myLayer setContentsGravity:kCAGravityCenter]; /* IT WORKS FINE IF I USE setContentsGravity:kCAGravityResizeAspectFill */
This code works fine on non-iPad retina devices. However, on the retina iPad the image is always loaded at twice its actual size (so it appears zoomed in). I'm using the Simulator and iOS 8. What am I doing wrong?

Begin your image processing with:
func UIGraphicsBeginImageContextWithOptions(size: CGSize, opaque: Bool, scale: CGFloat)
The last parameter in the above function determines the scale of the graphics. You can set this value by retrieving the scale property of the main screen. In Swift I would do it this way:
let screen = UIScreen.mainScreen()
let scale = screen.scale
Hope it helps.
Edit: here is code for doing this in Swift; you can modify it to suit your needs.
UIGraphicsBeginImageContextWithOptions(rect.size, true, 0.0) // 0.0 uses the screen's native scale
var ctx : CGContextRef = UIGraphicsGetCurrentContext() // not strictly needed for drawInRect
image.drawInRect(rect) // 'image' is your UIImage instance
let result = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
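Since the question itself is in Objective-C, a rough equivalent would be the following sketch (untested; rect, layerImage, and myLayer stand in for the question's own geometry, image, and layer):
// Redraw the image at the screen's native scale before handing it to the layer.
// Passing 0.0 as the scale parameter uses [UIScreen mainScreen].scale.
UIGraphicsBeginImageContextWithOptions(rect.size, YES, 0.0);
[layerImage drawInRect:rect];
UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[myLayer setContents:(__bridge id)[scaledImage CGImage]];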

I had this same problem; it was solved by setting the contentsScale value on the CALayer. For some reason the default scale on CALayers is always 1.0, even on retina devices.
i.e.
layer.contentsScale = [UIScreen mainScreen].scale;
Also, if you're drawing a shape using CAShapeLayer and wondering why its edges look a little jagged on retina devices, try:
shapeLayer.rasterizationScale = [UIScreen mainScreen].scale;
shapeLayer.shouldRasterize = YES;
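Applied to the code from the question, the fix would look something like this (a minimal sketch; only the contentsScale line is new):
CALayer *myLayer = [[CALayer alloc] init];
[myLayer setBounds:self.view.bounds];
[myLayer setPosition:CGPointMake(CGRectGetMidX(self.view.bounds), CGRectGetMidY(self.view.bounds))];
[myLayer setContentsScale:[UIScreen mainScreen].scale]; // match the screen instead of the 1.0 default
[myLayer setContents:(__bridge id)[[UIImage imageNamed:@"infoScreen"] CGImage]];
[myLayer setContentsGravity:kCAGravityCenter];
[self.view.layer addSublayer:myLayer];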

Related

UIImageView image aspect ratio is messed up after redrawing it to create a round mask

My app sends a GET request to Google to obtain certain user information. One piece of crucial returned data is a user's picture, which is placed inside a UIImageView that is always exactly 100 x 100 points and then redrawn to create a round mask for this image view. These pictures come from different sources and thus always have different aspect ratios: some have a smaller width compared to their height, and sometimes it's vice versa. This results in the image looking compressed. I've tried things such as the following (none of them worked):
_personImage.layer.masksToBounds = YES;
_personImage.layer.borderWidth = 0;
_personImage.contentMode = UIViewContentModeScaleAspectFit;
_personImage.clipsToBounds = YES;
Here is the code I use to redraw my images (it comes from user fnc12's answer, the third one in Making a UIImage to a circle form):
/** Returns a redrawn image that had a circular mask created for the inputted image. */
-(UIImage *)roundedRectImageFromImage:(UIImage *)image size:(CGSize)imageSize withCornerRadius:(float)cornerRadius
{
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0); //<== Notice 0.0 as third scale parameter. It is important because default draw scale ≠ 1.0. Try 1.0 - it will draw an ugly image...
CGRect bounds = (CGRect){CGPointZero, imageSize};
[[UIBezierPath bezierPathWithRoundedRect:bounds cornerRadius:cornerRadius] addClip];
[image drawInRect:bounds];
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return finalImage;
}
This method is always called like so:
[_personImage setImage:[self roundedRectImageFromImage:image size:CGSizeMake(_personImage.frame.size.width, _personImage.frame.size.height) withCornerRadius:_personImage.frame.size.width/2]];
So I end up having a perfectly round image, but the image itself isn't right aspect-wise. Please help.
P.S. Here's how images look when their width is roughly 70% of their height, before the redrawing of the image to create a round mask:
Hello dear friend there!
Here is my version that works:
Code in ViewController:
[self.profilePhotoImageView setContentMode:UIViewContentModeCenter];
[self.profilePhotoImageView setContentMode:UIViewContentModeScaleAspectFill]; // this second call wins: AspectFill preserves the aspect ratio while filling the frame
[CALayer roundView:self.profilePhotoImageView];
The roundView method in my CALayer+Additions category:
+(void)roundView:(UIView*)view{
CALayer *viewLayer = view.layer;
[viewLayer setCornerRadius:view.frame.size.width/2];
[viewLayer setBorderWidth:0];
[viewLayer setMasksToBounds:YES];
}
Maybe you should try changing the way you create the rounded image view: use my version, which rounds the image view by modifying its layer directly instead of redrawing the image. Hope it helps.
To maintain the aspect ratio of the UIImageView, use the following line of code after setting the image:
[_personImage setContentMode:UIViewContentModeScaleAspectFill];
For a detailed description, see the reference documentation:
https://developer.apple.com/library/ios/documentation/UIKit/Reference/UIImageView_Class/

iOS: renderInContext and Landscape orientation issue

I'm trying to save the currently shown views on my iOS device for a certain app, and this works properly. But I've got a problem as soon as I try to save a UIImageView in landscape orientation.
See the following image that describes my problem:
I'm using Auto Layout for this app, and it runs on both iPhone and iPad. It seems like the image view is always saved as shown in portrait mode, and I'm a little bit stuck right now.
This is the code I use:
CGSize frameSize = self.view.frame.size;
if (UIInterfaceOrientationIsLandscape(self.interfaceOrientation)) {
frameSize = CGSizeMake(self.view.frame.size.height, self.view.frame.size.width);
}
UIGraphicsBeginImageContextWithOptions(frameSize, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGFloat scale = CGRectGetWidth(self.view.frame) / CGRectGetWidth(self.view.bounds);
CGContextScaleCTM(ctx, scale, scale);
[self.view.layer renderInContext:ctx];
[self.delegate photoSaved:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
Looking forward to your help!
I still have no idea what your exact issue is, but using your screenshot code produces a slightly strange image (not rotated or anything though, just too small). Can you try this code instead, please:
+ (UIImage *)imageFromView:(UIView *)view {
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, .0f);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
Other than that, you must understand there is a huge difference between UIImage and CGImage: the UIImage includes the orientation while the CGImage does not. Image transformations usually work on the CGImage, and getting its width or height discards the orientation; that means a CGImage will have flipped dimensions when its orientation is not up (UIImageOrientationUp). Usually, when dealing with such images, you create a CGImage from the context and then use [UIImage imageWithCGImage:ref scale:1.0f orientation:originalOrientation]. Only if you wish to explicitly rotate the image so it has no orientation (being UIImageOrientationUp) do you need to rotate and translate the image and draw it onto the context.
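To illustrate that last point, here is a hedged sketch (not from the original answer) of normalizing an image to UIImageOrientationUp by redrawing it; -drawInRect: applies the orientation transform for you:
// Returns a copy of 'image' whose orientation is Up; drawing through
// UIKit bakes the orientation into the bitmap itself.
UIImage *NormalizedImage(UIImage *image) {
    if (image.imageOrientation == UIImageOrientationUp) return image;
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalized;
}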
Anyway, these orientation issues are mostly fixed by now: UIImagePNGRepresentation respects the orientation, and the image constructor from the CGImage is already written above, which is what used to be missing in the past, if I remember correctly.

Getting black (empty) image from UIView drawViewHierarchyInRect:afterScreenUpdates:

After successfully using UIView’s new drawViewHierarchyInRect:afterScreenUpdates: method (introduced in iOS 7) to obtain an image representation (via UIGraphicsGetImageFromCurrentImageContext()) for blurring, my app also needed to obtain just a portion of a view. I managed to get it in the following manner:
UIImage *image;
CGSize blurredImageSize = [_blurImageView frame].size;
UIGraphicsBeginImageContextWithOptions(blurredImageSize, YES, .0f);
[aView drawViewHierarchyInRect: [aView bounds] afterScreenUpdates: YES];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This lets me retrieve aView’s content following _blurImageView’s frame.
Now, however, I would need to obtain a portion of aView, but this time this portion would be “inside”. Below is an image representing what I would like to achieve.
I have already tried creating a new graphics context, setting its size to the portion's size (red box), and asking aView to draw in the rect that represents the red box's frame (with its superview's frame equal to aView's, of course), but the image obtained is all black (empty).
After a lot of tweaking I managed to find something that did the job, however I heavily doubt this is the way to go.
Here’s my [edited-for-Stack Overflow] code that works:
- (UIImage *) imageOfPortionOfABiggerView
{
UIView *bigViewToExtractFrom;   // assumed to be set to the big source view
UIImage *image;
UIImage *wholeImage;
CGImageRef _image;
CGRect imageToExtractFrame;     // assumed to be set to the portion's frame, in points
CGFloat screenScale = [[UIScreen mainScreen] scale];
// have to scale the rect due to (I suppose) the screen's scale for Core Graphics.
imageToExtractFrame = CGRectApplyAffineTransform(imageToExtractFrame, CGAffineTransformMakeScale(screenScale, screenScale));
UIGraphicsBeginImageContextWithOptions([bigViewToExtractFrom bounds].size, YES, screenScale);
[bigViewToExtractFrom drawViewHierarchyInRect: [bigViewToExtractFrom bounds] afterScreenUpdates: NO];
wholeImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Obtain a CGImage[Ref] from another CGImage; this lets me specify the rect to extract.
// However, since the image comes from a UIView rendered at 2x (retina) scale, CGImage
// will not take the screen's scale into consideration and will treat a rect given in
// points as pixels. You'd end up with an image from the wrong rect and at half the size.
_image = CGImageCreateWithImageInRect([wholeImage CGImage], imageToExtractFrame);
wholeImage = nil;
// have to specify the image's scale due to CGImage not taking the screen's scale into consideration.
image = [UIImage imageWithCGImage: _image scale: screenScale orientation: UIImageOrientationUp];
CGImageRelease(_image);
return image;
}
I hope this will help anyone who stumbles upon my issue. Feel free to improve my snippet.
Thanks
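One possible simplification (an untested sketch, not the snippet above): make the context the size of the portion and offset the draw rect by the portion's negative origin, so the desired region lands at the context's origin and no CGImage cropping is needed.
// 'portionRect' is the rect to capture, in bigView's coordinate space (points).
- (UIImage *)imageOfPortion:(CGRect)portionRect ofView:(UIView *)bigView
{
    UIGraphicsBeginImageContextWithOptions(portionRect.size, YES, 0.0);
    CGRect drawRect = CGRectMake(-portionRect.origin.x, -portionRect.origin.y,
                                 bigView.bounds.size.width, bigView.bounds.size.height);
    [bigView drawViewHierarchyInRect:drawRect afterScreenUpdates:NO];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}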

Mask arbitrarily sized UIImageView with resizable UIImage mask

Current code:
self.backgroundImageView.image = [self.message imageOfSize:self.message.size]; // Random image, random size
UIImage *rightBubbleBackground = [[UIImage imageNamed:@"BubbleRight"]
resizableImageWithCapInsets:BubbleRightCapInsets
resizingMode:UIImageResizingModeStretch];
CALayer *mask = [CALayer layer];
mask.contents = (id)[rightBubbleBackground CGImage];
mask.frame = self.backgroundImageView.layer.frame;
self.backgroundImageView.layer.mask = mask;
self.backgroundImageView.layer.masksToBounds = YES;
This does not work properly. Though the mask is applied, the rightBubbleBackground does not resize correctly to fit self.backgroundImageView, even though it has resizing cap insets (BubbleRightCapInsets) set.
Original Image:
Mask image (rightBubbleBackground):
Result:
I found this answer, but it only works for symmetrical images. Maybe I could modify that answer for my use.
I was wrong: that answer can be modified to work for asymmetrical images. I worked on it a bit and solved my own problem.
The following code made my cap insets work for the mask layer. (contentsCenter is given in the image's unit coordinate space, so each inset has to be divided by the corresponding image dimension; the 1.0/size terms leave a one-point stretchable region in the middle.)
mask.contentsCenter =
CGRectMake(BubbleRightCapInsets.left/rightBubbleBackground.size.width,
BubbleRightCapInsets.top/rightBubbleBackground.size.height,
1.0/rightBubbleBackground.size.width,
1.0/rightBubbleBackground.size.height);
Result:
I had (part of) the same problem, i.e. the pixelated layer contents. For me it was solved by setting the contentsScale value on the CALayer; for some reason the default scale on CALayers is always 1.0, even on retina devices.
i.e.
layer.contentsScale = [UIScreen mainScreen].scale;
Also, if you're drawing a shape using CAShapeLayer and wondering why its edges look a little jagged on retina devices, try:
shapeLayer.rasterizationScale = [UIScreen mainScreen].scale;
shapeLayer.shouldRasterize = YES;

bezierPathWithRoundedRect: gives bad result on retina screen

I used a method to get rounded pictures in my iOS app which works perfectly fine on the iPhone 3. My problem is that as soon as I try it on an iPhone 4 or above, the pictures come out at bad quality.
Is there any way I can turn my code around to get high-res rounded pictures?
-(void)setRoundedView:(UIImageView *)imageView picture:(UIImage *)picture toDiameter:(float)newSize {
UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, NO, 1.0);
[[UIBezierPath bezierPathWithRoundedRect:imageView.bounds
cornerRadius:100.0] addClip];
CGRect frame=imageView.bounds;
frame.size.width=newSize;
frame.size.height=newSize;
[picture drawInRect:frame];
imageView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
Many thanks for your help!
The issue is how you're defining your image context: you are specifying a 1.0 scale, but on retina screens it should be 2.0.
Instead, you can pass 0.0 to default to the screen's native scale:
UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, NO, 0.0);
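Put in context, the method from the question with only that parameter changed would look like this (everything except the scale argument is the original code):
-(void)setRoundedView:(UIImageView *)imageView picture:(UIImage *)picture toDiameter:(float)newSize {
    // 0.0 lets UIKit pick the screen's native scale (2.0 on retina)
    UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, NO, 0.0);
    [[UIBezierPath bezierPathWithRoundedRect:imageView.bounds cornerRadius:100.0] addClip];
    CGRect frame = imageView.bounds;
    frame.size.width = newSize;
    frame.size.height = newSize;
    [picture drawInRect:frame];
    imageView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}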
