Apologies for describing rather than showing the images; as a new user I need more reputation before I can include pictures in my post. :)
I am trying to resize an image without scaling its corners. I tried using resizableImageWithCapInsets as well as slicing through an asset catalog (although the latter requires a deployment target of iOS 7+, which rules it out as an option for me). I am using an image named "ViewHeaderTest.png" which is 44x100 pixels. The caps/insets should be a 16x16 pixel square in each corner.
This is the code:
UIImageView *headerTest = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"ViewHeaderTest.png"]];
headerTest.image = [headerTest.image resizableImageWithCapInsets:UIEdgeInsetsMake(8, 8, 8, 8) resizingMode:UIImageResizingModeStretch];
headerTest.frame = CGRectMake(0, 0, 22, 50);
The code as well as the asset catalog slicing produce the same result for me, which, oddly, does not appear to be mentioned anywhere else on Stack Overflow: the caps/insets work fine, but they are scaled to double their original size. Basically, it appears that resizableImageWithCapInsets returns an image at twice its original proportions.
Any takers? :)
Solution found:
As suspected, this is a retina issue. To keep Xcode from confusing retina and non-retina images when using cap insets:
1) Specify the image name without the file extension, i.e. @"FileName" instead of @"FileName.png".
2) Have both the retina AND the "normal" version of the file, in the corresponding resolutions, in your project. I.e. have FileName.png (100 x 100 px) AND FileName@2x.png (200 x 200 px) in the project.
I'm happy that it works, so I have not checked if either of these is redundant :)
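For reference, here is a minimal sketch of the working setup (assuming, per the steps above, that both ViewHeaderTest.png at 22x50 px and ViewHeaderTest@2x.png at 44x100 px are in the project):

// Omit the extension so UIKit picks the @2x variant on retina screens.
UIImage *headerImage = [UIImage imageNamed:@"ViewHeaderTest"];
// Cap insets are given in points; UIKit maps them to pixels via the image's scale.
UIImage *stretchable = [headerImage resizableImageWithCapInsets:UIEdgeInsetsMake(8, 8, 8, 8)
                                                   resizingMode:UIImageResizingModeStretch];
UIImageView *headerView = [[UIImageView alloc] initWithImage:stretchable];
headerView.frame = CGRectMake(0, 0, 22, 50);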
I have been using the code from here to add some text to my UIImage, but the image quality gets really bad. I have narrowed the code down a lot, to this:
public UIImage EditImage(UIImage myImage)
{
    using (CGBitmapContext ctx = new CGBitmapContext(IntPtr.Zero, 75, 75, 8, 4 * 75, CGColorSpace.CreateDeviceRGB(), CGImageAlphaInfo.PremultipliedFirst))
    {
        ctx.DrawImage(new CGRect(0, 0, 75, 75), myImage.CGImage);
        return UIImage.FromImage(ctx.ToImage());
    }
}
myImage is a PNG image. I have added the image files like this:
MyImage.png (100x100 pixels)
MyImage@2x.png (150x150 pixels)
MyImage@3x.png (200x200 pixels)
I am not really sure which one of them is used, but when I inspect the image at runtime, its size is 75x75 (as nint, native int, values). That's why I set the CGBitmapContext width and height to 75.
My problem is that after running myImage through this function, the quality gets very poor. If I skip this step and just use myImage directly, the quality is excellent; however, I need the CGBitmapContext to add some text to the image.
Does anyone have any idea what is causing it to be bad quality, and how I can fix it? For the record, I am testing on an iPhone 6S.
You have multiple misunderstandings.
Let's start with the image sizes. If your base image is 100x100, the @2x version should be 200x200, because it has double the resolution. Following this scheme, your @3x should be 300x300. The operating system will choose the right one depending on the capabilities of your device; on a 6S it should be the @2x version, depending on how you load the image.
Let's move on to the way you are creating the CGBitmapContext. The source of your code contains exactly what you need, but you changed it to fixed values and now wonder why your image is 75x75. The bad quality comes right from that: the 75x75 size you see at runtime is in points, so the backing bitmap is actually 150x150 pixels on an @2x device, and the system has to downscale it into your 75x75-pixel context.
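A minimal sketch of the fix, keeping the CGBitmapContext approach: size the context to the image's pixel dimensions (taken from its CGImage) instead of hard-coded point values, and pass the scale back when wrapping the result. (The structure mirrors the question's method; the text-drawing step is left as a comment.)

public UIImage EditImage(UIImage myImage)
{
    // Pixel dimensions of the backing bitmap (e.g. 150x150 for a 75x75-point @2x image).
    nint width = myImage.CGImage.Width;
    nint height = myImage.CGImage.Height;

    using (CGColorSpace colorSpace = CGColorSpace.CreateDeviceRGB())
    using (CGBitmapContext ctx = new CGBitmapContext(IntPtr.Zero, width, height, 8, 4 * width,
                                                     colorSpace, CGImageAlphaInfo.PremultipliedFirst))
    {
        // Draw at full pixel resolution; nothing gets downscaled here.
        ctx.DrawImage(new CGRect(0, 0, width, height), myImage.CGImage);
        // ... draw your text here, in pixel coordinates ...
        // Preserve the original scale so the result is still 75x75 points.
        return UIImage.FromImage(ctx.ToImage(), myImage.CurrentScale, UIImageOrientation.Up);
    }
}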
So I've done lots of reading on how to achieve perfect image quality for a UIButton's .imageView property in iOS. Here's my example:
I've got a UIButton of 24x24 points, as per the following lines:
myButton = [UIButton buttonWithType:UIButtonTypeCustom];
myButton.frame = CGRectMake(82, 8, 24, 24);
myButton.contentMode = UIViewContentModeCenter;
buttonImage = [UIImage imageNamed:@"myImage.png"];
ogImage = [buttonImage imageWithRenderingMode:UIImageRenderingModeAlwaysOriginal];
I then have the original image sized to 46x46 pixels (twice 23x23; 24x24 on the button for an even size to prevent iOS auto-aliasing), and the image@2x at 92x92 pixels. I'm testing on an iPhone 6s (obviously a retina display) and am still seeing some jaggedness on my UIButton's image. What am I doing wrong here? Am I still not understanding how to achieve perfect retina quality?
Here's an image; I'm hoping it displays well as an example:
Not sure if this is going to help, but this is my personal preference. If, for example, I have a UIButton of size 24x24, I always generate 24x24, 48x48, and 72x72 images. I rely on Adobe Illustrator to create pixel-perfect images. Always check your images in pixel preview mode and make sure edges are aligned with the pixels. If they are not, it can produce the artifacts that you see in Xcode.
If it's what you want in Illustrator, then it's what you get in Xcode.
Why are there jagged edges on some of the texts and images in the app I am developing?
I have gone through the frames, and I have not used a division to set a frame (so fractional values like 1.134234 are not the issue), and I have tried different antialiasing methods.
Does anybody have an idea?
See attached for an example.
EDIT:
The images become jagged when downscaled. So either resize them to the target size directly in the actual file, OR via code as suggested in other Stack Overflow questions.
Now trying to figure out how to fix the text also! :)
EDIT 2:
The answer will be posted tomorrow (after 24 hours).
Solved it :D
1) Image problem: Make sure the actual image size is close to the size at which you are using it (e.g. 100 points backed by an @1x image of 100 pixels, an @2x of 200, and an @3x of 300, where 100, 200, and 300 are the actual image file pixels). Or resize in code, in the correct way, to match.
When iOS downscales (as well as upscales) an image, the pixels get distorted.
The problem in my case was using too big an image.
2) As for the button, I don't know exactly why, but it got solved by using an attributed title instead of the usual title text. The method used is:
[button setAttributedTitle:forState:];
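For instance (a minimal sketch; the font, color, and title string are placeholders):

// Build an attributed title; rendering the text this way fixed the jagged edges here.
NSDictionary *attributes = @{ NSFontAttributeName: [UIFont systemFontOfSize:15.0],
                              NSForegroundColorAttributeName: [UIColor whiteColor] };
NSAttributedString *title = [[NSAttributedString alloc] initWithString:@"My Button"
                                                            attributes:attributes];
[button setAttributedTitle:title forState:UIControlStateNormal];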
I understand that the Retina display has 2x as many pixels as the non-retina displays, but what is the difference between using the @2x version and taking the 512x512 image and constraining it via the size of the frame?
To clarify:
If I have a button that is 72x72, the proper way to display that on an iPhone is to have:
image.png = 72x72
image@2x.png = 144x144 <--- Fixed :) TY
But why not just use 1 image:
image.png = 512x512
and do something like this:
UIImageView *myImage = [[UIImageView alloc] init];
[myImage setImage:[UIImage imageNamed:@"image.png"]];
[myImage setFrame:CGRectMake(50, 50, 72, 72)];
I am sure there is a good reason; I just don't know what it is, other than possibly a smaller app file size.
Thanks for the education!
There are several good reasons for sizing your images correctly, but the main one would have to be image clarity: when resizing images, you often end up with artifacts that make a picture look muddy or pixelated. By creating the images at the correct size, you'll know exactly what the end user will see on his or her screen.
Another reason would simply be to cut down on the overall file size of your binary: a 16x16 icon takes up orders of magnitude fewer bytes than a 512x512 image.
And if you need a third reason: convenience methods such as [UIImage imageNamed:@"xxxx"] produce images at their actual size and usually do not need additional frame/bounds code to go along with them. If you know the size, you can save yourself a lot of headache.
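For instance (a small sketch; the asset name is illustrative), an image view created from a correctly sized image needs no extra frame math:

// imageNamed: returns the image at its natural point size (72x72 here),
// picking image.png or image@2x.png automatically based on the screen scale.
UIImageView *iconView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"image"]];
// initWithImage: already sized the frame to 72x72 points; just position it.
CGRect frame = iconView.frame;
frame.origin = CGPointMake(50.0f, 50.0f);
iconView.frame = frame;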
Because images may not display correctly when resized, and because larger images use more memory. But if neither of these is an issue for you, you can use one image for both retina and non-retina displays.
Because large images consume a lot of memory and CPU/GPU cycles. Another reason is that scaling down an image causes pixel-level quality issues.
Besides the extra memory and CPU, downsampling an image is inherently lossy. Nice crisply rendered lines turn to crud.
The @2x naming convention exists so that the source image can be exactly the same pixel size as the displayed image. That way you can have a 57x57 app icon for non-retina iPhones and a 114x114 app icon for retina-display iPhones.
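To illustrate (a quick sketch; it assumes Icon.png at 57x57 px and Icon@2x.png at 114x114 px are both in the bundle):

// On a retina device, imageNamed: loads Icon@2x.png.
UIImage *icon = [UIImage imageNamed:@"Icon"];
// size is reported in points, so both variants report 57x57...
NSLog(@"size: %@", NSStringFromCGSize(icon.size));
// ...while scale tells them apart: 1.0 for Icon.png, 2.0 for Icon@2x.png.
NSLog(@"scale: %.1f", icon.scale);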
The main advantage of using two images is that both pictures can be handcrafted by the designers, so everything looks right and no up- or downscaling code is needed, which costs energy, slows performance, and may contain bugs.
From what I've read elsewhere, Apple recommends multiple versions of every graphical asset, so quality will be retained between pre-iPhone 4, iPhone 4 (with the retina display), and the iPad. But I'm using a technique that only requires one asset for all three cases.
I make each graphic the size I need for the iPhone 4 and the iPad, say a cat at 500x500 pixels. I name it myCat@2x.png. When I read it in for the iPhone:
CGRect catFrame = CGRectMake(0.0f, 0.0f, 250.0f, 250.0f);
UIImageView *theCat = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"myCat"]];
theCat.frame = catFrame;
[self.view addSubview:theCat];
[theCat release];
for the iPad, I do exactly the same thing, except for:
CGRect catFrame = CGRectMake(0.0f, 0.0f, 500.0f, 500.0f);
This seems to work fine in all three cases, and greatly reduces the number (and size) of graphic files. Is there anything wrong with this technique?
Check this: http://vimeo.com/30667638
We are releasing it soon. If you are interested in beta testing it, drop me a line.
This question has been a long time in circulation, so I will "answer" it based on my experience with the last couple of apps I've worked on.
I see no reason to have a separate image asset for retina display and non-retina display iPhones. The technique I outlined above seems to work just fine.
Probably, one will want a separate asset ("resource file") for the iPad, mostly for the change in aspect ratio of the screen. In my case (sprites for children's games) I was able to use many of the iPhone images on the iPad. The "look" was a little different, but I saved a lot of file space.
Whether this works for you will, of course, depend on the unique properties of your project.
Not all images scale well, even at 50%. Dithering or patterns might get distorted. In general, scaling by factors of 1/2, 1/4, etc. (repeatedly dividing by two) gives the best of the naive results, but scaling down with an advanced resampling algorithm, like the one Photoshop uses, will produce better results still.
So, in most cases, this can produce acceptable results.
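To illustrate the in-code option (a minimal sketch, not from the original posts; source stands for the full-size UIImage): CoreGraphics lets you request higher-quality resampling when you must scale down at runtime:

// Downscale source to 72x72 points with high-quality interpolation.
CGSize targetSize = CGSizeMake(72.0f, 72.0f);
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0f); // 0.0 = use screen scale
CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(), kCGInterpolationHigh);
[source drawInRect:CGRectMake(0.0f, 0.0f, targetSize.width, targetSize.height)];
UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();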