WatchKit Force Menu Custom Icons Blurry - ios

I'm creating custom icons for the WatchKit Force Touch menu. The documentation says to use an 80 x 80 image with a drawing area of 54px square. All of that works fine, but my image, when displayed in the button, looks very blurry compared to the built-in button images.
I'm creating them in Illustrator at 80px square and saving as a .png, as the documentation says. The sizing is correct when saved at 72 dpi; anything higher makes the image in the button too large, and I cannot find a way to scale the image.
Has anyone run into this? It seems like I would want to use a higher-resolution image here, or vector graphics.

You need to save the file with @2x in the filename to support Retina displays.
So if your filename is myicon.png, rename it to myicon@2x.png. In code you just use myicon for the name; the system automatically picks the correct scale.
For the iPhone 6 Plus, an @3x version is also required...
I would recommend using the asset catalog (Images.xcassets) in Xcode for maintaining all images. There are placeholders for all needed scales (@1x, @2x, @3x, ...). Create the icons at these resolutions and drag the files from Finder onto the placeholders. Later, in your code, you simply use the name of the image set.
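For the original WatchKit question, the call site ends up looking roughly like this (a minimal sketch; the controller class, menu title, and selector names are illustrative, not from the post):

import WatchKit

class MenuDemoInterfaceController: WKInterfaceController {
    override func awake(withContext context: Any?) {
        super.awake(withContext: context)
        // Pass only the base name; WatchKit resolves the @2x suffix itself.
        addMenuItem(withImageNamed: "myicon",
                    title: "Refresh",
                    action: #selector(refreshTapped))
    }

    @objc func refreshTapped() {
        // handle the menu action here
    }
}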

Related

Is adding an @2x or @1x image required

If I have an @3x image and all I need to do is downscale it to create the @2x and @1x versions, why am I required to add them? I see some UIImage functions just downscale the @3x image on the device, whereas other UIImage functions fail if the @2x image is not there. I find it very time-consuming to downscale so many images and add them to the project every time. Is there a workaround?
Edit: To make it clearer, my question is: UIImage imageWithContentsOfFile returns nil if I only supply @3x without supplying @2x, but UIImage imageNamed works. Is that normal, or am I doing something wrong?
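To make the two calls in the question concrete, here is roughly how they differ ("icon" is a placeholder name, not from the post):

import UIKit

// UIImage(named:) goes through the bundle/asset lookup, which tries the
// scale-suffixed variants for you, so a lone @3x file can still be found:
let byName = UIImage(named: "icon")

// UIImage(contentsOfFile:) only opens the exact path you give it. Asking for
// "icon.png" when only "icon@3x.png" exists therefore returns nil; you would
// have to point it at the suffixed file yourself:
let path = Bundle.main.path(forResource: "icon@3x", ofType: "png")
let byPath = path.flatMap { UIImage(contentsOfFile: $0) }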
Not sure exactly what you need to do or how you are using the scaled images... or even if they are photos or icons.
If you are using these images as icons, add them to an asset folder. Then the assets can be referenced by name within a control on a storyboard. Easy.
For creating buttons, let's say a 40x40 button, I create the icon, 40 x 40, in Inkpad and save the result as a PDF file. Then I put that in an image asset and select the option for only one image. Since it is a vector drawing, it scales up nicely.
If you are using the same images in multiple apps, I would recommend creating an assets folder called something like Common_Images and just copying that folder from one app to another. Deliriously quick & easy.
I wrote a quick and simple app that lets the user pick a photo, type in the @1x width and height, press a button, and done! Doing this in the Simulator lets me quickly drag and drop the newly scaled pics right into an image asset within Xcode. Quick and simple.
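The scaling step itself only takes a few lines of UIKit; here is a rough sketch of the idea (the function name and parameters are mine, not the poster's):

import UIKit

// Given a large source image and the desired @1x point size, render @1x/@2x/@3x PNGs.
func scaledPNGs(from source: UIImage, baseSize: CGSize) -> [Int: Data] {
    var pngs: [Int: Data] = [:]
    for scale in 1...3 {
        let pixelSize = CGSize(width: baseSize.width * CGFloat(scale),
                               height: baseSize.height * CGFloat(scale))
        let format = UIGraphicsImageRendererFormat()
        format.scale = 1   // we want the raw pixel dimensions in the output file
        let renderer = UIGraphicsImageRenderer(size: pixelSize, format: format)
        let rendered = renderer.image { _ in
            source.draw(in: CGRect(origin: .zero, size: pixelSize))
        }
        if let data = rendered.pngData() {
            pngs[scale] = data
        }
    }
    return pngs
}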

Xcode @2x image suffix not showing as Retina in iOS

I am having difficulties with retina images.
The screenshot below shows the UICollectionView with a UIImageView contained within each UICollectionViewCell.
Within the app I have a large image 512x512 pixels called travel.png.
The green circle shows what is displayed in the app when I name this file travel.png. The blue circle shows what I see when I update the image name to travel@2x.png (i.e. retina naming).
I was hoping that, due to the large size of the image (512x512), simply adding the @2x suffix would be enough to give it twice the definition (i.e. retina), but as you can see from the two screenshots, both versions of the image show as non-retina.
How can I update the image so that it will display in retina?
travel.png:
travel@2x.png:
* Updated *
Following request in comments below:
I load this image by calling the following function:
// Note - when this method is called: contentMode is set to .scaleAspectFit & imageName is "travel"
public func setImageName(imageName: String, contentMode: ContentMode) {
    self.contentMode = contentMode
    if let image = UIImage(named: imageName) {
        self.image = image
    }
}
Here is how the image appears in Xcode before the app renders it (as you can see it is high enough definition):
The reason you see a low-quality image is anti-aliasing. When you provide an image bigger than the actual frame of the UIImageView (in scaleAspectFit mode), the system automatically downscales it, and during scaling anti-aliasing artifacts can appear along curved shapes. To avoid the effect you should provide an image at the exact size you want to display on screen.
To detect whether the UIImageView is scaling the image, you can switch on Debug -> Color Misaligned Images in the Simulator menu:
All scaled images will then be highlighted in yellow in the Simulator. Each highlighted image may show anti-aliasing artifacts and costs extra CPU time for the scaling algorithms:
To resolve the issue you should use exact sizes, so the system can use the images directly without any additional calculations. For example, if your button is 80x80 points, you should add three images to the asset catalog with the following sizes: 80x80 px (@1x), 160x160 px (@2x) and 240x240 px (@3x):
Now the image will be drawn on screen without downscaling, with much better visual quality:
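A quick way to confirm the mismatch in code is to compare pixel counts; anything unequal means the GPU is scaling (here, image and imageView are assumed to be the UIImage and UIImageView from the question's snippet above):

let imagePixels = CGSize(width: image.size.width * image.scale,
                         height: image.size.height * image.scale)
let viewPixels = CGSize(width: imageView.bounds.width * UIScreen.main.scale,
                        height: imageView.bounds.height * UIScreen.main.scale)
print("image:", imagePixels, "view:", viewPixels)   // unequal => scaling, possible artifacts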
If your intention is to have just one image for all sizes, I would suggest keeping it in Assets.xcassets. It is easy to create folder structures and manage media assets there.
Steps
Click the + icon and you will be shown a list of actions. Choose New Folder.
With the new folder selected, click the + icon again and choose New Image Set.
Select the image set and open the Attributes inspector.
Under Scales, select Single Scale.
Drag and drop the image.
Rename the image and folder as you wish.
Now you can use this image by its name for all screen sizes, as in the one-liner below.
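Loading it is then the usual call (assuming an imageView outlet and the "travel" image set from the question):

imageView.image = UIImage(named: "travel")   // no file extension, no @2x/@3x suffix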
TL;DR:
Change the view layer's minificationFilter to .trilinear
imageView.layer.minificationFilter = .trilinear
as illustrated by the device screenshot below
As Anton's answer correctly pointed out, the aliasing effect you observe is caused by the large difference in dimensions between the source image and the image view it's displayed in. Adding the @2x suffix won't change anything if you do not change the dimensions of the source image itself.
That said, there is an easy way to improve the situation without resizing the original image: CALayer offers some control over the method the graphics back-end uses to resize images: minificationFilter and magnificationFilter. The first one is relevant in your case since the image size is being reduced. The default value is CALayerContentsFilter.linear; just switch to .trilinear for a much better result (the Wikipedia articles on linear and trilinear filtering have more detail). This requires more GPU power (and thus battery), especially if you apply it to many images.
You should really consider resizing the images before displaying them, either statically or at run-time (and perhaps cache the resized versions). Besides the poor visual quality, using many such large images in your UI will hurt performance and waste a lot of memory, potentially leading to other issues.
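If you go the run-time route, a sketch of that "resize and cache" idea might look like this (the cache and helper names are illustrative, not a specific library API):

import UIKit

let resizedImageCache = NSCache<NSString, UIImage>()

// Downscale `image` to `size` (in points, at the screen scale) and cache the result.
// Aspect-ratio handling is omitted for brevity.
func resizedImage(_ image: UIImage, to size: CGSize, cacheKey: String) -> UIImage {
    if let cached = resizedImageCache.object(forKey: cacheKey as NSString) {
        return cached
    }
    let renderer = UIGraphicsImageRenderer(size: size)   // default format uses the screen scale
    let resized = renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
    resizedImageCache.setObject(resized, forKey: cacheKey as NSString)
    return resized
}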
I have fixed @DarshanKunjadiya's issue.
Make sure (if you are already using assets):
Make sure the images are not unassigned.
Now use the images in the storyboard or in code without the file extension (e.g. "image", NOT "image.png").
If you are not using images from assets, move them to assets.
Demo Projects
Hope it helps.
Let me know of your feedback.
I think images without the @2x and @3x suffixes are treated as @1x and meant for low-resolution devices (like the iPhone 3GS and earlier).
The solution, I think, is to always use the .xcassets file or to add @2x or @3x to the names of your images.
In iOS, content is placed on the screen based on the iOS coordinate system. For a standard-resolution display with 1:1 pixel density, you supply the image at @1x. Higher-resolution displays have scale factors of 2.0 and 3.0, referred to in iOS as @2x and @3x respectively; that is, high-resolution displays demand higher-density images.
For example, if you want to display an image at 128x128 points, you also have to supply @2x and @3x versions of it, i.e. a 256x256 image for @2x and a 384x384 image for @3x.
In the following screenshot, I have supplied a 256x256 image as the @2x version to display a 128x128 point image on an iPhone 6s, which renders at @2x. Providing all three versions (@1x, @2x and @3x) in the asset catalogue will resolve your issue, as the iPhone then automatically picks the correctly sized image for its screen resolution.
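The arithmetic behind those numbers is just point size times screen scale, e.g.:

let pointSize: CGFloat = 128
let scale = UIScreen.main.scale        // 2.0 on an iPhone 6s, 3.0 on Plus/X-class devices
let pixelsPerSide = pointSize * scale  // 256 px on a @2x device, 384 px on @3x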

How to handle iphone screen sizes/resolution for background images

After the iPhone X, I am really struggling with the image sizes and naming conventions that support all devices. Is there any way to use @3x images for the 4.7", 5.5" and 5.8" screens? What are the exact dimensions I should use for a full-screen image view?
You can use an image in .pdf format. Then you only need to manage a single-scale image at @1x. It supports all screen sizes, so you don't need to manage @1x/@2x/@3x PNGs: a single image adapts to all devices and also reduces the app size.
You can follow these simple steps to implement this method:
Step 1: Take your @1x background image in .png format and convert it to a .pdf file.
Step 2: Add that .pdf image to your project's Assets.xcassets and make sure you select "Single Scale" in the asset's attributes, otherwise you can't add a .pdf image.
Step 3: Set that image on your image view and check it on different devices (see the sketch after these steps).
Feel free to ask anything. :-)
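Step 3 in code could look roughly like this (a sketch only; the image set is assumed to be called "Background"):

import UIKit

class BackgroundViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let background = UIImageView(frame: view.bounds)
        background.image = UIImage(named: "Background")        // the single-scale PDF image set
        background.contentMode = .scaleAspectFill              // fill every screen size, cropping as needed
        background.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.insertSubview(background, at: 0)
    }
}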

button image gets pixelated

I have designed a lock icon in Sketch to add to a button in my application:
I exported it both as PDF and as PNG (@2x, @3x) to add to the Xcode asset catalog. The problem is that when I run the app on an iPhone (SE), heavy pixelation can be seen around the edges of the icon:
I've tried both the PDF and PNG formats, but the result stays the same. Am I missing any settings that need to be applied for the image to look sharp on screen?
Bigger is not necessarily better for a UIButton's image. Try to export your icon at more or less the same size at which it will be used. (This also saves memory compared to a much bigger image.)
To adapt to different screen resolutions, you should provide up to three images (@1x, @2x, @3x). You should read Apple's excellent documentation on Image Size and Resolution. It explains perfectly how big the images you provide in Xcode should be.
They also have a good explanation on which format you should use according to the purpose of the image.
EDIT:
You can also use vector resources (.pdf files, for instance) that will render perfectly at any resolution. You can read this article about how to implement it in your Xcode project. (If you do so, be careful in the asset's attributes to check Preserve Vector Data and set Scales to Single Scale, otherwise it may not render well.)
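Usage then stays the same as for any other asset; a minimal sketch, assuming the image set is called "lock":

let lockButton = UIButton(type: .custom)
lockButton.setImage(UIImage(named: "lock"), for: .normal)   // vector-backed asset, crisp at any scale
lockButton.imageView?.contentMode = .scaleAspectFit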
This will happen if the image sizes are not correct.
Check the size of the images. The @1x, @2x and @3x sizes should be as follows (for example, for a 24x24 pt icon):
@1x = 24x24 px
@2x = 48x48 px
@3x = 72x72 px
If the images are much bigger than the image view, pixelation will happen.
Hope this helps.

What is the correct procedure to create images/sprite for iPhone/iPad apps/games?

I am new in developing games with Xcode for iPhone/iPad. Thus I need some help with the correct procedure to create images/sprites for the game.
By now I have created my sprites with Illustrator and I exported them as PDF files. In Xcode I created this single scale asset and put the PDF in it.
If I understand the documentation correctly, Xcode automatically generates image files at @1x, @2x and @3x from the PDF. Does it generate PNG files?
Then I create an SKSpriteNode and set its size like this: abc.size = CGSize(width: 123, height: 123). Instead of 123, I fill in the width and height corresponding to the frame/image size I set up in Illustrator. Is this correct? I think so, because this is the @1x version?!
But if I need the same image for iPhone and iPad in different sizes, I can't simply resize it, because the @1x version isn't a vector anymore and is bound to the frame size I chose in Illustrator. What should I do then? Do I have to resize my image in Illustrator and export it at a different size?
What is the correct procedure? Do I have to start by drawing a sketch with pencil on paper and measuring it with a ruler, then go to Illustrator and set the frame width/height to what I measured manually?
So many questions. I am very confused by the image sizes, resolutions and @1x, @2x and @3x versions. I am not sure why I should use vector files if I still can't resize the images during development as I would like, because they are still bound to the frame size I chose in Illustrator.
Is there no possibility to set ratios between all my images and then just use the vector PDF file? How should I set up my Illustrator?
I hope somebody can bring some light into the dark. Thank you.
Your PDF should be sized in points at @1x (not pixels). Points are the same physical size on the phone and the iPad, but if you want the art smaller on the phone you need a second set of images; the asset catalog lets you swap out images based on iPhone/iPad. Xcode renders your PDF to PNGs at @1x, @2x and @3x, and your app picks the correct PNG based on the resolution of the device. You are correct that these are no longer vector assets and that scaling them up can leave you with blurry/pixelated images. You have a couple of choices:
1) Include a scaled-up version of your image at its maximum scale in the app and use this version only when you need to scale up (otherwise it's a waste of memory and processing if you are always rendering a much smaller image). This is probably the easiest solution.
2) Leave your assets as vectors and load them as vectors. You can still render them to images for performance at a constant scale or a range of scales, and you can always re-render them at any scale if needed. Most likely you would want to use an SVG library for this.
3) Directly import your assets as code using a program such as PaintCode. There used to be similar plugins for Illustrator, but I haven't seen one for Swift 3/Illustrator CC. This is obviously faster than #2 since there is no need to decode the vector file. If your file has a lot of overdraw you may still want to rasterize to images for performance.
Here's what I've found from my experience:
1) Xcode does not generate @2x and @3x versions from .png files. It can't, really - you need to manually supply the @1x, @2x, and @3x sizes.
2) Whatever size you use for the CGSize(...), that should be your @1x image; then generate the @2x and @3x versions from that. I started by designing the size of a level in the scene editor, made a generic SKSpriteNode shape just to get the size I wanted, and then created the image at the size that looked good (a short sketch follows below).
3) Xcode supports vector-based graphics (SVG, PDF), but you can't use them as part of a texture atlas, which makes them much less useful in my opinion.
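For completeness, a minimal SpriteKit sketch, assuming an image set named "Player" whose PDF was exported at its @1x point size (the scene class and sizes are illustrative):

import SpriteKit

class GameScene: SKScene {
    override func didMove(to view: SKView) {
        let player = SKSpriteNode(imageNamed: "Player")
        // The node's default size is the image's point size; set it explicitly
        // only if you want something different (points, not pixels).
        player.size = CGSize(width: 64, height: 64)
        player.position = CGPoint(x: frame.midX, y: frame.midY)
        addChild(player)
    }
}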
