Ideal image size for iOS

If I want to display an icon with 24x24 in an app, what sizes should I provide in the app?
For example, is it sufficient to save the image as 128x128 in the app?
Or are @1x, @2x and @3x sizes required?
It is a simple icon (width == height), example:

Different sizes are not strictly required, but they can help reduce binary size.
For an image that you want to show at 24x24 points, you should provide a 24x24 (@1x), a 48x48 (@2x) and a 72x72 (@3x) image.
For simple graphics like this, it's usually better to create a vector asset.
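As a minimal usage sketch (the asset name "icon" is a placeholder, not from the question): once the 24/48/72 px bitmaps, or a single-scale vector PDF, are in the asset catalog, loading the image by name lets the system pick the variant that matches the device's scale.

import UIKit

// "icon" is a hypothetical image set name in Assets.xcassets.
let iconView = UIImageView(image: UIImage(named: "icon"))
iconView.frame = CGRect(x: 0, y: 0, width: 24, height: 24) // 24x24 points
iconView.contentMode = .scaleAspectFit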

Related

Xcode @2x image suffix not showing as Retina in iOS

I am having difficulties with retina images.
The screenshot below shows the UICollectionView with a UIImageView contained within each UICollectionViewCell.
Within the app I have a large image 512x512 pixels called travel.png.
The green circle shows what is displayed in the app when I name this file travel.png. The blue circle shows what I see when I rename the image to travel@2x.png (i.e. retina naming).
I was hoping that, due to the large size of the image (512x512), simply adding the @2x suffix would be enough to make it display at twice the definition (i.e. retina), but as you can see from the two screenshots, both versions show as non-retina.
How can I update the image so that it will display in retina?
travel.png:
travel@2x.png:
* Updated *
Following a request in the comments below:
I load this image by calling the following function:
// Note - when this method is called: contentMode is set to .scaleAspectFit & imageName is "travel"
public func setImageName(imageName: String, contentMode: ContentMode) {
    self.contentMode = contentMode
    if let image = UIImage(named: imageName) {
        self.image = image
    }
}
Here is how the image appears in Xcode before the app renders it (as you can see it is high enough definition):
The reason you see a low-quality image is anti-aliasing. When you provide an image bigger than the actual frame of the UIImageView (in scaleAspectFit mode), the system automatically downscales it. During scaling, anti-aliasing artifacts can appear along curved shapes. To avoid this effect, provide the image at exactly the size you want to display on screen.
To detect whether a UIImageView is scaling its image, you can enable Debug -> Color Misaligned Images in the Simulator menu:
Now all scaled images will be highlighted in yellow in the Simulator. Each highlighted image may show anti-aliasing artifacts and costs extra CPU for the scaling algorithms:
To resolve the issue, use exact sizes so the system can draw the images directly, without any additional calculations. For example, if your button is 80x80 points, you should add three images to the asset catalog with the following sizes: 80x80px (72 dpi) for @1x, 160x160px (144 dpi) for @2x and 240x240px (216 dpi) for @3x:
Now the image will be drawn on screen without downscaling, with much better visual quality:
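If you prefer to catch this in code rather than in the Simulator, here is a small hypothetical helper (not part of the original answer) that logs an image view whose image pixel size does not match the view's bounds at the screen scale, i.e. an image that will be rescaled:

import UIKit

func logIfRescaled(_ imageView: UIImageView) {
    guard let image = imageView.image else { return }
    let screenScale = UIScreen.main.scale
    // Pixel size the view needs vs. pixel size the image actually has.
    let viewPixels = CGSize(width: imageView.bounds.width * screenScale,
                            height: imageView.bounds.height * screenScale)
    let imagePixels = CGSize(width: image.size.width * image.scale,
                             height: image.size.height * image.scale)
    if viewPixels != imagePixels {
        print("Image will be rescaled: \(imagePixels) px drawn into \(viewPixels) px")
    }
}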
If your intention is to have just one image for all the sizes, I would suggest keeping it in Assets.xcassets. It is easy to create folder structures and manage media assets there.
Steps
On clicking the + icon, you will be shown a list of actions. Choose New Folder.
With the new folder selected, click the + icon again and choose New Image Set.
Select the image set and open the Attributes inspector.
Under Scales, select Single Scale.
Drag and drop the image.
Rename the image and the folder as you wish.
Now you can use this image, by its image set name, for all screen sizes.
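A brief usage note (the names below are placeholders): you load the image by the image set's name regardless of the folder, unless the folder's "Provides Namespace" attribute is checked, in which case the folder name becomes a prefix.

import UIKit

// "Icons" and "logo" are hypothetical folder / image set names.
let plain = UIImage(named: "logo")            // folder without "Provides Namespace"
let namespaced = UIImage(named: "Icons/logo") // folder with "Provides Namespace" checked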
TL;DR:
Change the view layer's minificationFilter to .trilinear
imageView.layer.minificationFilter = .trilinear
as illustrated by the device screenshot below
As Anton's answer correctly pointed out, the aliasing effect you observe is caused by the large difference in dimensions between the source image and the image view it's displayed in. Adding the @2x suffix won't change anything if you do not change the dimensions of the source image itself.
That said, there is an easy way to improve the situation without resizing the original image: CALayer offers some control over the method the graphics back-end uses to resize images: minificationFilter and magnificationFilter. The first one is relevant in your case, since the image size is being reduced. The default value is CALayerContentsFilter.linear; switch to .trilinear for a much better result (see Wikipedia for more on these filters). Note that this requires more GPU power (and thus battery), especially if you apply it to many images.
You should really consider resizing the images before displaying them, either statically or at run time (and perhaps cache the resized versions). Besides the poor visual quality, using such large images in quantity in your UI will hurt performance and waste a lot of memory, potentially leading to other issues.
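A hedged sketch of that run-time approach, assuming iOS 10+ (the helper and cache names are illustrative, not from the answer): render the image once at the target size with UIGraphicsImageRenderer and cache the result so the cost is paid only once.

import UIKit

let resizedCache = NSCache<NSString, UIImage>()

func resizedImage(named name: String, fitting size: CGSize) -> UIImage? {
    let key = "\(name)-\(Int(size.width))x\(Int(size.height))" as NSString
    if let cached = resizedCache.object(forKey: key) { return cached }
    guard let original = UIImage(named: name) else { return nil }
    // Draw the original into a context of exactly the target size.
    let resized = UIGraphicsImageRenderer(size: size).image { _ in
        original.draw(in: CGRect(origin: .zero, size: size))
    }
    resizedCache.setObject(resized, forKey: key)
    return resized
}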
I have fixed @DarshanKunjadiya's issue.
If you are already using assets, make sure:
The images are not unassigned.
You use the images in the storyboard or in code without extensions (e.g. "image", NOT "image.png").
If you are not using images from assets, move them into assets.
Demo Projects
Hope it helps.
Let me know your feedback.
I think images without the @2x and @3x suffixes are rendered for devices with low-resolution screens (like the iPhone 3GS and earlier).
The solution, I think, is to always use the .xcassets catalog, or to add the @2x or @3x suffix to the names of your images.
In iOS, content is placed on the screen based on the iOS coordinate system, which is measured in points. For a standard-resolution display with a 1:1 pixel density, you supply the image at @1x resolution. Higher-resolution displays have a pixel-density scale factor of 2.0 or 3.0, referred to in iOS as @2x and @3x respectively. In other words, high-resolution displays demand images with higher pixel density.
For example, if you want to display an image at 128x128 points, you also have to supply the @2x and @3x versions, i.e. a 256x256 image for @2x and a 384x384 image for @3x.
In the following screenshot, I have supplied a 256x256 image as the @2x version to display a 128x128 point image on an iPhone 6s, which renders images at @2x. Using all three versions (1x, 2x and 3x) in an asset catalogue will resolve your issue: the device automatically picks the correctly sized image for its screen resolution.
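As a small sketch of how this maps to points at run time (assuming "travel" is a placeholder image set with a 256x256 px @2x variant): UIImage reports its size in points and its scale separately.

import UIKit

if let image = UIImage(named: "travel") {
    // On a @2x device, a 256x256 px @2x asset reports:
    print(image.size)  // (128.0, 128.0) points
    print(image.scale) // 2.0
}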

button image gets pixelated

I have designed a lock icon in Sketch to add to a button in my application:
I exported it both as a PDF and as PNGs (2x, 3x) to add to the Xcode assets. The problem is that when I run the app on an iPhone (SE), heavy pixelation can be seen around the edges of the icon:
I've tried both the PDF and the PNG formats, but the result stays the same. Am I missing any settings that need to be applied for the image to look sharp on screen?
Bigger is not necessarily better for a UIButton's image. Try to export your icon at more or less the same size at which it will be used. (Note that this also saves memory compared to a much bigger image.)
To adapt to different screen resolutions, you should provide up to three images (@1x, @2x, @3x). You should read Apple's excellent documentation on Image Size and Resolution. It explains exactly how big the images you provide in Xcode should be.
It also has a good explanation of which format you should use according to the purpose of the image.
EDIT:
You can also use vector resources (.pdf files, for instance) that will render perfectly at any resolution. You can read this article about how to implement it in your Xcode project. (If you do so, be careful in the asset's attributes to check Preserve Vector Data and set Scales to Single Scale, otherwise it may not render well.)
This will happen if the image sizes are not correct.
Check the size of the images. The 1x, 2x and 3x sizes should be as follows:
1x = 24x24 px
2x = 48x48 px
3x = 72x72 px
If the image is much bigger than the UIImageView, pixelation will happen.
Hope this helps.
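A minimal usage sketch under the assumptions above ("lock" is a placeholder image set name containing 24/48/72 px variants, or a PDF with Preserve Vector Data checked): the button then receives an image that already matches its point size.

import UIKit

let lockButton = UIButton(type: .system)
lockButton.frame = CGRect(x: 0, y: 0, width: 24, height: 24)
lockButton.setImage(UIImage(named: "lock"), for: .normal)
lockButton.imageView?.contentMode = .scaleAspectFit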

What is the correct procedure to create images/sprite for iPhone/iPad apps/games?

I am new in developing games with Xcode for iPhone/iPad. Thus I need some help with the correct procedure to create images/sprites for the game.
So far I have created my sprites in Illustrator and exported them as PDF files. In Xcode I created a single-scale asset and put the PDF in it.
If I understand the documentation correctly, Xcode automatically generates image files at @1x, @2x and @3x from the PDF. Does it generate PNG files?
Then I create an SKSpriteNode and set its size like this: abc.size = CGSize(width: 123, height: 123). Instead of 123, I fill in the width and height corresponding to the frame/image size I set up in Illustrator. Is this correct? I think so, because this is the @1x version?!
But if I need the same image for iPhone and iPad at different sizes, I can't simply resize it, because the @1x image version isn't a vector anymore and is bound to the frame size I chose in Illustrator? What do I do then? Do I have to resize my image in Illustrator and export it at a different size?
What is the correct procedure? Do I have to draw a sketch with a pencil on paper at the very beginning and then measure it with a ruler? Would I then go to Illustrator and set the frame width and height to what I measured manually?
So many questions. I am very confused by these image sizes, resolutions and the @1x, @2x and @3x versions. I am not sure why I should use vector files if I still can't resize the images during development as I would like to, because they are still bound to the frame size I chose in Illustrator.
Is there no way to set ratios between all my images and then just use the vector PDF file? How should I set up my Illustrator?
I hope somebody can bring some light into the dark. Thank you.
Your PDF should be sized in points at @1x (not pixels). The points should be the same physical size on the phone and the iPad, but if you want the images smaller on the phone you need a second set of images; the asset catalog lets you swap out images based on iPhone/iPad. Xcode renders your PDF to PNGs at @1x, @2x and @3x, and your app picks the correct PNG based on the resolution of the device. You are correct that these are no longer vector assets and that scaling them up could leave you with blurry/pixelated images. You have a couple of choices:
1) Include a scaled-up version of your image at its maximum scale in the app, and use this version only when you need to scale up (otherwise it's a waste of memory and processing if you are always rendering a much smaller image). This is probably the easiest solution.
2) Leave your assets as vectors and load them as vectors. You can still render them to images at a constant scale or range of scales for performance, but you can always re-render them at any scale if needed. Most likely you want to use an SVG library for this.
3) Directly import your assets as code using a program such as PaintCode. There used to be similar plugins for Illustrator, but I haven't seen one for Swift 3 / Illustrator CC. This is obviously faster than #2, since there is no need to decode the vector file. If your file has a lot of overdraw, you may still want to rasterize to images for performance.
Here's what I've found from my experience:
1) Xcode does not generate @2x and @3x from .png files. It can't, really; you need to manually supply the @1x, @2x and @3x sizes.
2) Whatever size you use for CGSize(...), that should be your @1x image; then generate the @2x and @3x versions from that (see the sketch after this list). I started by designing the size of a level in the scene editor, then made a generic SKSpriteNode shape just to get the size I wanted, then started making the image at the size I found looks good.
3) Xcode supports vector-based graphics (SVG, PDF), but you can't use them as part of a texture atlas, which makes them much less useful in my opinion.
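A short sketch of the sizing rule from point 2 above ("player" is a placeholder image set name): the explicit size given to the sprite should be the @1x point size the artwork was designed at; the asset catalog then supplies the matching @1x/@2x/@3x bitmap.

import SpriteKit

let sprite = SKSpriteNode(imageNamed: "player")
// 123x123 is the @1x design size in points, matching the Illustrator frame.
sprite.size = CGSize(width: 123, height: 123)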

Is it OK to scale down UIImages?

I've been told not to scale down images, but always to use them in their original resolution.
I have a custom UITableViewCell that has a 'Respond Action' on the cell. The UIImageView in the Storyboard is 16x16.
Originally I was using a 256x256 image, and my UIImageView has an Aspect Ratio constraint and a height constraint of 16.
Then it was suggested to me to never scale down images, but to use an image that fits the exact size I need. So a 16x16 image was designed for me, and I used that. It looks awfully blurry, though.
The results are here (the font next to it is 11.0 points, to give you an idea of its size):
What is the correct way to go about this? Should you not scale down images? What is the reason? The scaled-down 256x256 looks much better than the 16x16.
Scale your image down before adding it to your project...
Take your 256x256 png image and scale it to 1x, 2x, and 3x size.
1x should equal the size that you need your image to display in the view.
2x and 3x will support retina and retina HD displays.
ex:
Name: image.png | Size: 16x16
Name: image@2x.png | Size: 32x32
Name: image@3x.png | Size: 48x48
Drop these images into your Images.xcassets
see: Specifying High-Resolution Images in iOS
also: Icon and Image Sizes
You don't need to scale down images, but it's a good idea. Cramming a 256x256 image into a 16x16 view means the user is downloading ~65 KB instead of <1 KB of data. And if it's in your bundle, your app is that much 'heavier'.
But it depends on the screen resolution you're targeting. Even an iPhone 4 uses Retina, which means your 16x16 image view should contain a 32x32 image. You should also supply a 48x48 image for iPhone 6+ screens. This will make your images look great at each screen scale.
Use Images.xcassets to manage your images.
Ideally you shouldn't scale down the images, but you also need to consider the screen resolution. Old devices are 1x, Retina devices are 2x, and the latest iPhone 6 Plus uses 3x images, which means you should have 16x16, 32x32 and 48x48 images to be used by each device (preferably managed in an image asset). What you're seeing at the moment is a 1x image being scaled up to 2x or 3x, so it's blurry. Previously you had a large image scaled down, which is bad for memory usage but looks sharp.
If you are using an image from your asset catalog that you provide yourself during development, you don't want to scale down images at runtime, as it adds unnecessary overhead to your app. Rather, have the assets sized to the required dimensions and add them to your asset catalog for 1x, 2x and 3x displays. Think optimisation in terms of memory and bundle size: what's the minimum size required to make it work and look good? If your image is displayed in a 20x20 square, the optimal sizes will always be 20x20, 40x40 and 60x60 for 1x, 2x and 3x displays.
Fetching images from the network is an entirely different ballgame. You will find that sometimes you need to scale down the image, but only when the image you fetch is bigger than what you need to display, like when you're trying to fit a 2000x2000px image into a 36x26px square. You want to scale down for three reasons:
it will reduce virtual memory usage.
it will reduce storage memory if you persist the image.
you will save yourself from image downsampling artifacts that appear when the system tries to render the image at runtime.
Best thing to do here is to scale down the image as soon as you download from the network and before using it anywhere within your app.
Hope this helps!
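One way to do that downscaling, sketched under the assumption that you have the raw downloaded Data in hand (the function and parameter names are illustrative, not from the answer): ImageIO can create a thumbnail capped at the target pixel size without ever decoding the full-resolution bitmap, which addresses the memory points above.

import UIKit
import ImageIO

func downsampledImage(from data: Data, to pointSize: CGSize, scale: CGFloat) -> UIImage? {
    // Don't decode the full image up front.
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let source = CGImageSourceCreateWithData(data as CFData, sourceOptions) else { return nil }

    let maxPixelSize = max(pointSize.width, pointSize.height) * scale
    let thumbnailOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceShouldCacheImmediately: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
    ] as CFDictionary

    guard let thumbnail = CGImageSourceCreateThumbnailAtIndex(source, 0, thumbnailOptions) else { return nil }
    return UIImage(cgImage: thumbnail, scale: scale, orientation: .up)
}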
There is a purist answer for this and then the answer which saves you from spending your life converting images all day.
The following is a discussion on the topic of scaling.
iOS - Different images based on device OR scaling the same image?
If you want to avoid creating too many images, then using @2x images at twice the expected point size is a reasonable compromise, as that covers the majority of current devices. You would then only be scaling down for the iPhone 3GS and up for the iPhone 6+. You only need images with large dimensions if you expect to use large images; again, I would go for @2x resolution for those. It is often worth having an iPhone and an iPad version of an image in case your expected usage on iPad is a bigger point size.
As noted in the link, the impact of scaling depends on the image. I would only worry about the images you think look bad when scaled. So far I have not found scaling to be an issue, and it is much easier than creating lots of bespoke PNGs.
Scale it down before setting it:
Swift extension:
import UIKit

extension UIImage {
    // Returns a scaled version of the image.
    func imageScaledToSize(_ size: CGSize, isOpaque: Bool) -> UIImage {
        // Begin a context of the desired size (a scale of 0.0 uses the screen's scale).
        UIGraphicsBeginImageContextWithOptions(size, isOpaque, 0.0)
        // Draw the image in a rect with zero origin and the size of the context.
        let imageRect = CGRect(origin: .zero, size: size)
        draw(in: imageRect)
        // Get the scaled image, close the context and return the image.
        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return scaledImage ?? self
    }
}
Example:
aUIImageView.image = aUIImage.imageScaledToSize(aUIImageView.bounds.size, isOpaque: false)
Set isOpaque to true if the image has no alpha: drawing will have better performance.

iOS SDK retina images vs using bigger images for smaller placeholders

Is it the same if I use a big image, i.e. a 40x40 image in a 20x20 placeholder, or a @2x image for Retina?
I mean, I have two alternatives:
- use a 20x20 image.png and a 40x40 image@2x.png
- use a 40x40 image.png
Is it the same?
Thanks.
Using only Retina images and leaving the downscaling on non-Retina devices up to the system is possible, but not recommended in all cases.
It really depends on the content of your graphics. If, for example, you are using vector-based graphics as your source (sharp lines etc.), then offering only the Retina images will result in washed-out, blurry images on non-Retina displays.
Again, it is possible and entirely fine if your content still looks good enough.