iOS 7 image losing quality when rendering UIView [duplicate]

UIImageView *cellimage = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0 , 107, 70)];
I am sure the above statement will produce appropriately sized views on both standard and retina devices, that is, a frame of 107 x 70 pixels on standard screens and 214 x 140 on retina.
What I want to know is whether UIGraphicsGetImageFromCurrentImageContext below does the same: will the image be 67 x 67 pixels on standard devices and 134 x 134 on retina?
CGSize imagesize = CGSizeMake(67, 67);
UIGraphicsBeginImageContext(imagesize);
NSLog(@" Converting ");
[image drawInRect:CGRectMake(0,0,imagesize.width,imagesize.height)];
newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
If not, can anyone tell me how to differentiate between models?
Thanks

You need to use UIGraphicsBeginImageContextWithOptions instead of UIGraphicsBeginImageContext, so that you can specify the scale factor of the image. This will use the scale factor of the device's main screen:
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
This will use the scale factor of the screen containing cellImage, if cellImage is on a screen:
UIGraphicsBeginImageContextWithOptions(imageSize, NO, cellImage.window.screen.scale);
This will hardcode the scale factor:
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 2);
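Put together with the snippet from the question, a minimal sketch using the main screen's scale (variable names as in the question):
CGSize imagesize = CGSizeMake(67, 67);
// Passing 0 for the scale uses the main screen's scale factor, so the
// bitmap is 67 x 67 pixels on standard screens and 134 x 134 on retina.
UIGraphicsBeginImageContextWithOptions(imagesize, NO, 0);
[image drawInRect:CGRectMake(0, 0, imagesize.width, imagesize.height)];
newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();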

Related

How to get image without resizing?

First of all, I need to describe what I am going to do:
I need to display a header image at full screen width on all devices.
Therefore I should provide multiple pictures in different sizes, not just @2x and @3x.
So I would have these images:
header_width_1242.png for iPhone 6 Plus.
header_width_1125.png for iPhone 6 Plus with display zoom.
header_width_640.png for iPhone 5 and 6.
...
So I shouldn't choose the image according to the scale; rather, I should choose it according to the width:
let image_name = "header_width_" + String(UIScreen.mainScreen().scale * UIScreen.mainScreen().bounds.width)
let image = UIImage(named:image_name)
The problem is that iOS scales the image automatically again, so on a device with 2x scale it returns the image at twice the size.
E.g.: for the iPhone 5, which has width 320 and scale 2, I need header_width_640.png, but it seems that the system scales the image to 1280 (640 * 2).
How can I tell the system to return UIImage(named: image_name) without scaling? Thanks.
You could declare a screenSize property on your view controller.
// Determine the screen size in pixels (points multiplied by the screen scale)
let screenScale = UIScreen.mainScreen().scale
let screenSize = CGSize(width: UIScreen.mainScreen().bounds.width * screenScale,
                        height: UIScreen.mainScreen().bounds.height * screenScale)
Then, when you need to set an image, you could do the following:
if screenSize.width < 641 {
// set your imageView to the image for 640 screen width
} else if screenSize.width < 1126 {
// set your imageView to the image for 1125 screen width
} else if screenSize.width < 1243 {
// set your imageView to the image for 1242 screen width
}
You should not constrain assets by screen size; it is better to use size classes if available. The image asset catalog makes this possible in the inspector panel, by choosing the size classes for each image.
I do understand that it is sometimes necessary, but try to think in relative terms. If the image is something like a logo aligned left or right, you can use slicing to create a stretchable end/beginning of the image.
If the image is centered on a solid color or something drawable in code, you can draw it at run time.
Here is a snippet in Objective-C that I use in an app of mine; you can easily convert it to Swift:
-(UIImage*) createNavBarBackgroundWithImage:(UIImage*) image {
CGSize screenSize = ((CGRect)[[UIScreen mainScreen] bounds]).size;
CGFloat width = screenSize.width;
CGFloat height = 64.0;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), NO, 0.0);
//Draw background
UIColor * backGroundColor = [UIColor colorWithRed:27.f/255.f green:142.f/255.f blue:138.f/255.f alpha:1.0];
[backGroundColor setFill];
UIRectFill(CGRectMake(0, 0, width, height));
//Draw the image at the center
CGPoint point = CGPointMake(width/2 - image.size.width/2, 0);
[image drawAtPoint:point];
UIImage *newBGImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newBGImage;
}
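If the result is meant as a navigation bar background, a usage sketch might look like this (the navLogo asset name and the navigation controller setup are assumptions, not from the original code):
UIImage *logo = [UIImage imageNamed:@"navLogo"]; // hypothetical asset name
UIImage *background = [self createNavBarBackgroundWithImage:logo];
// Apply the generated image as the navigation bar's background
[self.navigationController.navigationBar setBackgroundImage:background
                                              forBarMetrics:UIBarMetricsDefault];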
Use Image Asset
https://developer.apple.com/library/ios/recipes/xcode_help-image_catalog-1.0/chapters/AddingImageSets.html
header_width_1242.png for iPhone 6 Plus: name it header_width_414.png and drag it to the 3x slot.
header_width_1125.png: name it header_width_375.png and drag it to the 3x slot.
header_width_640.png: name it header_width_320.png and drag it to the 2x slot.
let image_name = "header_width_" + String(Int(UIScreen.mainScreen().bounds.width))
let image = UIImage(named:image_name)

Trying to crop my UIImage to a 1:1 aspect ratio (square) but it keeps enlarging the image causing it to be blurry. Why?

Given a UIImage, I'm trying to make it into a square. Just chop some of the largest dimension off to make it 1:1 in aspect ratio.
UIImage *pic = [UIImage imageNamed:@"pic"];
CGFloat originalWidth = pic.size.width;
CGFloat originalHeight = pic.size.height;
float smallestDimension = fminf(originalWidth, originalHeight);
CGRect square = CGRectMake(0, 0, smallestDimension, smallestDimension);
CGImageRef imageRef = CGImageCreateWithImageInRect([pic CGImage], square);
UIImage *squareImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
UIImageView *imageView = [[UIImageView alloc] initWithImage:squareImage];
imageView.frame = CGRectMake(100, 100, imageView.bounds.size.width, imageView.bounds.size.height);
[self.view addSubview:imageView];
But the result is a blurry, enlarged crop of only part of the picture, when it should look like the original picture, just a little narrower.
Why is this? The images are pic (150x114) / pic@2x (300x228).
The problem is you're mixing up logical and pixel sizes. On non retina devices these two are the same, but on retina devices (like in your case) the pixel size is actually double the logical size.
Usually, when designing your GUI, you can always just think in logical sizes and coordinates, and iOS (or OS X) will make sure that everything is doubled on retina screens. However, in some cases, especially when creating images yourself, you have to specify explicitly which size you mean.
UIImage's size method returns the logical size, i.e. the resolution on non-retina screens. This is why CGImageCreateWithImageInRect will only create a new image from the upper-left part of the original.
Multiply your logical size with the scale of the image (1 on non-retina devices, 2 on retina devices):
CGFloat originalWidth = pic.size.width * pic.scale;
CGFloat originalHeight = pic.size.height * pic.scale;
This will make sure that the new image is created from the full height (or width) of the original image. Now, one remaining problem is that when you create a new UIImage using
UIImage *squareImage = [UIImage imageWithCGImage:imageRef];
iOS will think this is a regular, non-retina image and will display it twice as large as you would expect. To fix this, you have to specify the scale when you create the UIImage:
UIImage *squareImage = [UIImage imageWithCGImage:imageRef
scale:pic.scale
orientation:pic.imageOrientation];
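Putting both fixes together, a sketch of the corrected cropping code from the question:
UIImage *pic = [UIImage imageNamed:@"pic"];
// Work in pixels by multiplying the logical size by the image's scale
CGFloat originalWidth = pic.size.width * pic.scale;
CGFloat originalHeight = pic.size.height * pic.scale;
CGFloat smallestDimension = fminf(originalWidth, originalHeight);
CGRect square = CGRectMake(0, 0, smallestDimension, smallestDimension);
CGImageRef imageRef = CGImageCreateWithImageInRect(pic.CGImage, square);
// Preserve the scale and orientation so iOS displays the crop at the expected size
UIImage *squareImage = [UIImage imageWithCGImage:imageRef
                                           scale:pic.scale
                                     orientation:pic.imageOrientation];
CGImageRelease(imageRef);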

How do I create a UIImage bigger than the device screen?

I'm working on an iPhone app that can create pictures and post them to Facebook and Instagram.
The correct size for Facebook photos seems to be 350x350, and indeed this code creates a 350x350 image exactly how I want:
-(UIImage *)createImage {
UIImageView *v = [[UIImageView alloc] initWithFrame:CGRectMake(0, screenHeight/2-349, 349, 349)];
v.image = [UIImage imageNamed:@"backgroundForFacebook.png"]; //"backgroundForFacebook.png" is 349x349.
//This code adds some text to the image.
CGSize dimensions = CGSizeMake(screenWidth, screenHeight);
CGSize imageSize = [self.ghhaiku.text sizeWithFont:[UIFont fontWithName:@"Georgia"
size:mediumFontSize]
constrainedToSize:dimensions lineBreakMode:0];
int textHeight = imageSize.height+16;
UITextView *tv = [self createTextViewForDisplay:self.ghhaiku.text];
tv.frame = CGRectMake((screenWidth/2)-(self.textWidth/2), screenHeight/3.5,
self.textWidth/2 + screenWidth/2, textHeight*2);
[v addSubview:tv];
//End of text-adding code
CGRect newRect = CGRectMake(0, screenHeight/2-349, 349, 349);
UIGraphicsBeginImageContext(newRect.size);
[[v layer] renderInContext:UIGraphicsGetCurrentContext()];
UIImage *myImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[v removeFromSuperview];
return myImage;
}
But when I use the same code to create an Instagram image, which needs to be 612x612, I get the text only, no background image:
-(UIImage *)createImageForInstagram {
UIImageView *v = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 612, 612)];
v.image = [UIImage imageNamed:@"backgroundForInstagram.png"]; //"backgroundForInstagram.png" is 612x612.
//...text-adding code...
CGRect newRect = CGRectMake(0, 0, 612, 612);
UIGraphicsBeginImageContext(newRect.size);
[[v layer] renderInContext:UIGraphicsGetCurrentContext()];
UIImage *myImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[v removeFromSuperview];
return myImage;
}
What am I doing wrong, and how do I fix it?
(While I'm at it, I'll also say that I'm very new to using graphic contexts, so if there's any awkwardness in the code I'd appreciate your pointing it out.)
EDIT: Now I've reduced the two methods to one, and this time I don't even get the text. Argh!
-(UIImage *)addTextToImage:(UIImage *)myImage withFontSize:(int)sz {
NSString *string=self.displayHaikuTextView.text;
NSString *myWatermarkText = [string stringByAppendingString:@"\n\n\t--haiku.com"];
NSDictionary *attrs = [NSDictionary dictionaryWithObjectsAndKeys:[UIFont fontWithName:@"Georgia"
size:sz],
NSFontAttributeName,
nil];
NSAttributedString *attString = [[NSAttributedString alloc] initWithString:myWatermarkText attributes:attrs];
UIGraphicsBeginImageContextWithOptions(myImage.size,NO,1.0);
[myImage drawAtPoint: CGPointZero];
NSString *longestLine = ghv.listOfLines[1];
CGSize sizeOfLongestLine = [longestLine sizeWithFont:[UIFont fontWithName:@"Georgia" size:sz]];
CGSize siz = CGSizeMake(sizeOfLongestLine.width, sizeOfLongestLine.height*5);
[attString drawAtPoint: CGPointMake(myImage.size.width/2 - siz.width/2, myImage.size.height/2-siz.height/2)];
myImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return myImage;
}
When I pass the arguments [UIImage imageNamed:@"backgroundForFacebook.png"] (an image 349x349) and 12, everything is fine. I get the picture. When I pass the arguments [UIImage imageNamed:@"backgroundForInstagram.png"] (an image 612x612) and 24, nothing doing.
Right now I'm just putting the text on the smaller image (@"backgroundForFacebook.png") and then resizing it, but that makes the text blurry, which I don't like.
EDIT: Just to cover the basics, here are images of 1) the method in which I call this method (to check the spelling) and 2) the Supporting Files and the Build Phases (to show the image is actually there). I also tried assigning longestLine a non-variable NSString. No luck. :(
FURTHER EDIT: Okay, logging the size and scale of the images as I go during addTextToImage: above, here's what I get for the smaller image, the one that's working:
2013-02-04 22:24:09.588 GayHaikuTabbed[38144:c07] 349.000000, 349.000000, 1.000000
And here's what I get for the larger image--it's a doozy.
Feb 4 22:20:36 Joels-MacBook-Air.local GayHaikuTabbed[38007] <Error>: CGContextGetFontRenderingStyle: invalid context 0x0
Feb 4 22:20:36 Joels-MacBook-Air.local GayHaikuTabbed[38007] <Error>: CGContextSetFillColorWithColor: invalid context 0x0
//About thirty more of these.
Feb 4 22:20:36 Joels-MacBook-Air.local GayHaikuTabbed[38007] <Error>: CGBitmapContextCreate: unsupported parameter combination: 0 integer bits/component; 0 bits/pixel; 0-component color space; kCGImageAlphaNoneSkipLast; 2448 bytes/row.
Feb 4 22:20:36 Joels-MacBook-Air.local GayHaikuTabbed[38007] <Error>: CGContextDrawImage: invalid context 0x0
Feb 4 22:20:36 Joels-MacBook-Air.local GayHaikuTabbed[38007] <Error>: CGBitmapContextCreateImage: invalid context 0x0
Step through the code. After you create myImage, go into the console and look at myImage.size and myImage.scale. Multiply the size numbers by the scale.
If your background image is Retina-quality, your image is actually 1224 x 1224.
From the UIImage docs:
You should avoid creating UIImage objects that are greater than 1024 x
1024 in size. Besides the large amount of memory such an image would
consume, you may run into problems when using the image as a texture
in OpenGL ES or when drawing the image to a view or layer. This size
restriction does not apply if you are performing code-based
manipulations, such as resizing an image larger than 1024 x 1024
pixels by drawing it to a bitmap-backed graphics context. In fact, you
may need to resize an image in this manner (or break it into several
smaller images) in order to draw it to one of your views.
If your image is actually 612 pixels (not points) but your code is rendering it as 1224 pixels, you can just change the scale property to 1.0.
If your image is actually 1224 pixels, you'll need to do something else, like
putting your code in a bitmap-backed graphics context (i.e., calling UIGraphicsBeginImageContext around the offending code)
displaying a smaller version to the user
However, if your image is for Instagram, it should not be 1224 x 1224 :-)
Update: I noticed your app is haiku-related, so here is the answer in haiku format:
Big UIImage?
Bitmap-backed graphics context
Or shrink to 612
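For the bitmap-backed option, a minimal sketch assuming myImage is the 612 x 612 background from the question; drawing into a context with an explicit scale of 1.0 forces the output to exactly 612 x 612 pixels:
// Redraw into a 612 x 612 point context at scale 1.0, so the resulting
// bitmap is exactly 612 x 612 pixels regardless of the source image's scale.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(612, 612), NO, 1.0);
[myImage drawInRect:CGRectMake(0, 0, 612, 612)];
UIImage *instagramImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();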
I always go back to the obvious questions:
Is your image actually properly called/spelled backgroundForInstagram.png?
Have you properly added it to your project?
When added, was it copied to the device in the copy step of the build phases?
What's in ghv at the time of the call in the edited code?
What's in item [1] of ghv.lines at the time of that rendering?
These are the things I would look at when debugging this code.
Try This
-(UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
//UIGraphicsBeginImageContext(newSize);
UIGraphicsBeginImageContextWithOptions(newSize, NO, 10.0); // a scale of 10.0 makes the bitmap 10 times larger than newSize
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
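A usage sketch for this helper (the image name comes from the question; note that the third parameter of UIGraphicsBeginImageContextWithOptions is the bitmap scale, so 0.0 would use the device's screen scale instead of a fixed 10x):
UIImage *source = [UIImage imageNamed:@"backgroundForInstagram.png"];
// Render the source image into a 612 x 612 target using the helper above
UIImage *resized = [self imageWithImage:source scaledToSize:CGSizeMake(612, 612)];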

Using one PNG image file for retina and normal screens in UIImageView

Say there is a cool.png file with dimensions 200 x 100 pixels, and I'd like to use it for both retina and normal devices.
I need the UIImageView to be 100 x 50 points.
I tried to decrease the size of the UIImageView according to the image dimensions, and visually I don't see any difference whether I prepare two files with and without the @2x scale modifier or let the UIImageView scale it via the contentMode property.
BOOL retina = [[UIScreen mainScreen] scale] == 2.0 ? YES : NO;
UIImage *img = [UIImage imageNamed:@"cool.png"];
UIImageView *imgView = [[UIImageView alloc] initWithImage:img];
imgView.contentMode = UIViewContentModeScaleToFill;
CGFloat width = img.size.width;
CGFloat height = img.size.height;
if (!retina) {
width = width/2.0;
height = height/2.0;
}
imgView.frame = CGRectMake (somePoint.x, somePoint.y, width, height);
Is there something wrong in the approach?
You are taking the wrong approach here.
CGRect, CGPoint and CGSize aren't measured in pixels; they are measured in points. Points are iOS's coordinate system and are scaled for the device. So, for example, if you make a UIView 320 points wide, it will fill the width of both a retina and a non-retina iPhone.
So if you want your cool.png to display at 100 x 50 points on all devices, you can simply set the frame to 100 x 50, set the image to cool.png, and then set imgView.contentMode to UIViewContentModeScaleAspectFit. This will rescale the 200 x 100 image to fit if it's a non-retina device. If it's a retina device, the image will be used at its full resolution (200 x 100) within the 100 x 50 points.
However, the @2x system was made for a reason: having to scale images down at run time increases loading time. But if you can't use @2x images, you can still do the above.
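A minimal sketch of that approach, assuming the 200 x 100 pixel cool.png and the somePoint origin from the question:
UIImage *img = [UIImage imageNamed:@"cool.png"]; // 200 x 100 pixel file, no @2x variant
UIImageView *imgView = [[UIImageView alloc] initWithFrame:CGRectMake(somePoint.x, somePoint.y, 100, 50)];
imgView.contentMode = UIViewContentModeScaleAspectFit; // UIKit downscales on non-retina screens
imgView.image = img;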

iOS: Reduce image size without reducing image quality

I am displaying an image in a table view cell (the image name is saved in a plist). Before setting it on the cell, I am resizing the image to
imageSize = CGSizeMake(32, 32);
But after resizing, the image quality is degraded on retina displays.
I have both versions of the image added to the project (i.e., 1x and @2x).
This is how I am reducing the image size to 32x32.
+ (UIImage *)scale:(UIImage *)image toSize:(CGSize)size
{
UIGraphicsBeginImageContext(size);
[image drawInRect:CGRectMake(0, 0, size.width, size.height)];
UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return scaledImage;
}
Any pointers on this are very much appreciated.
Thanks
Try this: instead of UIGraphicsBeginImageContext(size); use UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
From what I understand, what you're doing there is resizing the image to 32 x 32 (in points) no matter what the resolution. UIGraphicsBeginImageContextWithOptions scales the context to the device's screen scale, so you get the image resized to 32 x 32 points but the resolution is kept for retina displays.
(Note that this is what I understood from Apple's UIKit reference; it may not be exactly so, but it should be.)
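Applied to the scale:toSize: method from the question, a minimal sketch:
+ (UIImage *)scale:(UIImage *)image toSize:(CGSize)size
{
    // 0.0 uses the screen's scale, so a 32 x 32 point bitmap keeps
    // 64 x 64 pixels of detail on a 2x retina display.
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}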
