iOS: Cost of asset format

At previous jobs I used to supply only @3x PNG assets, skipping @2x;
at my new place folks use PDFs to dodge the (soft?) requirement of supplying multiple assets per image.
My understanding is that a single @3x PNG is still cheaper than a PDF (exported from Figma, in case that matters).
What are the pros and cons of each method of supplying assets?
1. Lazy approach: @3x PNG only
2. @2x & @3x (is texture interpolation better? faster?)
3. PDF
Bonus question: it would be interesting to know whether SVG is mapped into CoreGraphics/CoreAnimation, and what the cost of THAT is too.
Thanks!

This is far too complex an issue for Stack Overflow -- but to give you a couple of things to think about...
If your SVG is complex - say it has 1,000 layers with various gradients and alpha values - and will only ever be displayed in your app at 300 x 200 points (that is, 600 x 400 and 900 x 600 pixels), you'd almost certainly want to render @2x and @3x PNGs.
If your SVG is a simple-ish line drawing, and may be displayed at various scales and/or ratios, then SVG will give you better results.
If it takes your art department 10 hours to produce @2x and @3x PNGs, but 40 hours to produce an SVG, well?
If your image is a photograph?
If your image is a tab-bar icon?
And so on.
No idea what your app is about, but would 25% smaller storage matter? Would a 50ms vs 75ms load time matter?
Best to do some research - there are many, many articles about this out there.
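If you want actual numbers for your own assets, here is a minimal timing sketch (UIKit assumed; "hero" is a hypothetical asset name). UIImage decodes lazily, so the sketch forces a draw to measure the full decode cost:

```swift
import UIKit
import QuartzCore

// A minimal sketch: time how long an asset takes to load and fully decode.
// "hero" is a hypothetical asset name.
func decodeTime(assetNamed name: String) -> CFTimeInterval {
    let start = CACurrentMediaTime()
    guard let image = UIImage(named: name) else { return 0 }
    // Drawing forces the actual decode, which UIImage otherwise defers.
    let renderer = UIGraphicsImageRenderer(size: image.size)
    _ = renderer.image { _ in image.draw(at: .zero) }
    return CACurrentMediaTime() - start
}

print("decode took \(decodeTime(assetNamed: "hero") * 1000) ms")
```

Run it against the same artwork supplied as a @3x PNG and as a PDF-backed asset to see whether the difference matters for your app.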

Related

What is the correct procedure to create images/sprite for iPhone/iPad apps/games?

I am new to developing games with Xcode for iPhone/iPad, so I need some help with the correct procedure for creating images/sprites for a game.
So far I have created my sprites in Illustrator and exported them as PDF files. In Xcode I created a single-scale asset and put the PDF in it.
If I understand the documentation correctly, Xcode automatically generates image files at @1x, @2x and @3x from the PDF. Does it generate PNG files?
Then I create an SKSpriteNode and set the size like this: abc.size = CGSize(width: 123, height: 123). Instead of 123, I fill in the width and height corresponding to the frame/image size I set up in Illustrator. Is this correct? I think so, because this is the @1x version?!
But if I need the same image in different sizes for iPhone and iPad, I can't simply resize it, because the @1x image version isn't a vector anymore and is bound to the frame size I chose in Illustrator? What should I do then? Do I have to resize my image in Illustrator and export it at a different size?
What is the correct procedure? Do I have to draw a pencil sketch on paper at the very beginning and then measure it with a ruler? Then I would go to Illustrator and set the frame width/height to what I measured manually?
So many questions. I am very confused by these image sizes, resolutions and @1x, @2x and @3x versions. I am not sure why I should use vector files if I still can't resize the images during development as I would like, because they are bound to the frame size I chose in Illustrator.
Is there no way to set ratios between all my images and then just use the vector PDF file? How should I set up my Illustrator?
I hope somebody can bring some light into the dark. Thank you.
Your PDF should be sized in points at @1x (not pixels). A point should be the same physical size on the phone and the iPad, but if you want the images smaller on the phone you need a second set of images; the asset catalog lets you swap out images per device (iPhone/iPad). Xcode renders your PDF to PNGs at @1x, @2x and @3x, and your app picks the correct PNG based on the resolution of the device. You are correct that these are no longer vector assets and that scaling them up can leave you with blurry/pixelated images. You have a couple of choices:
1) Include a scaled-up version of your image at its maximum in-app scale, and use this version only when you need to scale up (otherwise it's a waste of memory and processing if you are always rendering a much smaller image). This is probably the easiest solution.
2) Leave your assets as vectors and load them as vectors. You can still render them to images at a constant scale or range of scales for performance, but you can always re-render them at any scale if needed. Most likely you want to use an SVG library for this; for PDF assets, CoreGraphics alone is enough (see the sketch after this list).
3) You can directly import your assets as code using a program such as PaintCode. There used to be similar plugins for Illustrator, but I haven't seen one for Swift 3/Illustrator CC. This is obviously faster than option 2 since there is no need to decode the vector file. If your file has a lot of overdraw, you may still want to rasterize to images for performance.
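Since the assets in question are PDFs, here is a minimal CoreGraphics sketch of option 2: it re-renders page 1 of a PDF at an arbitrary scale, so nothing is ever upscaled (the function and its parameters are illustrative, not a definitive implementation):

```swift
import UIKit

// A minimal sketch: re-render a vector PDF at any scale with CoreGraphics.
func renderPDF(at url: URL, scale: CGFloat) -> UIImage? {
    guard let document = CGPDFDocument(url as CFURL),
          let page = document.page(at: 1) else { return nil }

    let box = page.getBoxRect(.mediaBox)
    let size = CGSize(width: box.width * scale, height: box.height * scale)

    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { ctx in
        // PDF coordinates are flipped relative to UIKit, so flip the context.
        ctx.cgContext.translateBy(x: 0, y: size.height)
        ctx.cgContext.scaleBy(x: scale, y: -scale)
        ctx.cgContext.translateBy(x: -box.minX, y: -box.minY)
        ctx.cgContext.drawPDFPage(page)
    }
}
```

Because the page is rasterized on every call, you would typically cache the result for the scales you actually display.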
Here's what I've found from my experience:
1) Xcode does not generate @2x and @3x from .png files. It can't, really - you need to manually supply @1x, @2x, and @3x sizes.
2) Whatever size you use for the CGSize(...) should be your @1x image; then generate @2x and @3x from that. I started by designing the size of a level in the scene editor, made a generic SKSpriteNode shape just to find the size I wanted, and then created the image at the size that looked good (see the sketch below).
3) Xcode supports vector-based graphics (SVG, PDF), but you can't use them as part of a texture atlas, which makes them much less useful in my opinion.
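A minimal SpriteKit sketch of point 2, sizing the node in points at the @1x design size and letting the asset catalog supply the matching raster ("player" and the 123 x 123 size are hypothetical, taken from the question):

```swift
import SpriteKit

// A minimal sketch: size the node in points at the @1x design size.
// SpriteKit picks the @2x/@3x raster for the device automatically.
func addPlayer(to scene: SKScene) {
    let player = SKSpriteNode(imageNamed: "player") // hypothetical asset name
    player.size = CGSize(width: 123, height: 123)   // the @1x frame size from Illustrator
    player.position = CGPoint(x: scene.frame.midX, y: scene.frame.midY)
    scene.addChild(player)
}
```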

Why do I need #1x, #2x and #3x iOS images?

Why do we need these 3 particular image types?
If I have a button in my app with a background image of, say, 50 x 50 pixels, why do I need 3 versions of this image? What's stopping me from just making one image at a much higher resolution, say 700 x 700, so that when it shrinks down on any iPhone it won't fall below the maximum resolution the device would want?
The only thing I can think of is that it takes up more space, but for simple apps / a simple button it seems like it wouldn't cause any issues. I've tried this method on a few devices in the simulator and see no difference between them. However, as I dive deeper into apps I'm sure there is substance behind this technique.
If you don't have the exact size, one of two things can happen:
Upscaling
A @2x or @3x image can be upscaled from @1x, but the visual result is usually blurry, with thick lines, and doesn't look good. Upscaling @3x from @2x can be even worse because subpixels must be used.
Downscaling
In general, the results are much better than with upscaling; however, that doesn't apply to all images. If you have a 1px border on a @3x image, after downscaling it to @1x the border won't be visible (0.33px). The same applies to any small objects in the image. Downscaling destroys all details.
In general - for an image to look perfect, you want to avoid both downscaling and upscaling. You can always go with only @2x or @3x images and add other scales only if you see visual problems. Using a higher resolution won't improve downscaling. High resolutions are used only to avoid upscaling. Downscaling from a high scale (e.g. @100x) to @1x won't create better results than downscaling from @3x.
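You can check at runtime which variant the asset catalog actually resolved - a quick UIKit sketch, assuming a hypothetical asset named "icon" with @2x and @3x variants:

```swift
import UIKit

// A minimal sketch: UIImage(named:) picks the variant matching the screen
// scale; the loaded image reports which one via its `scale` property.
if let icon = UIImage(named: "icon") {
    print("screen scale:", UIScreen.main.scale) // 2.0 or 3.0 on Retina devices
    print("image scale:", icon.scale)           // matches the chosen variant
    print("point size:", icon.size)             // pixel size divided by scale
}
```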
You need 3 kinds of images in your asset catalog because, in terms of scaling (pixels per point), there are 3 kinds of Apple devices (iPhone and iPad):
Standard devices, where 1 point = 1 pixel (@1x; older iPhones and iPads)
Retina devices, where 1 point = 2 x 2 = 4 pixels (@2x; iPhone 4 and later)
Retina HD devices, where 1 point = 3 x 3 = 9 pixels (@3x; iPhone 6 Plus and later)
By providing the same image at all 3 scales, you let iOS decide which image to show on which device. Hope this helps you understand.
EDIT
It is because providing just one high-resolution graphic would waste space on users' devices. Thanks to app slicing, the device downloads from the App Store only the parts that actually fit the device (so a Retina device won't download non-Retina graphics). This is why Apple created asset catalogs and these rules to follow; they describe it in their sessions.
In short, it is to decrease memory/disk usage, so it is all about increasing performance and user experience.
First of all, you need to understand points vs. pixels. On non-Retina devices the ratio is 1 point = 1 pixel. On Retina devices there are two ratios, 1 point = 2 x 2 pixels or 1 point = 3 x 3 pixels, depending on the screen, because the pixel density is quadrupled (or more) compared to non-Retina. That's why you need these 3 types of images: so the image is shown at each device's highest resolution.
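In code, the conversion this answer describes is just a multiplication by the screen's scale factor (a small UIKit sketch):

```swift
import UIKit

// Points -> pixels: multiply by the device's scale factor.
let points: CGFloat = 50
let pixels = points * UIScreen.main.scale // 50pt -> 100px on @2x, 150px on @3x
```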
Complementing what Sulthan said:
Because you didn't provide proper images for a specific device, it has to downscale or upscale. These processes use up memory and processing power, which may decrease performance, depending on how many images you are processing at a time and their size.
If you provide only one big image you encounter several problems:
Downscaling leads to the loss of quality (even if it is not huge)
It takes more computational power to downscale the image than to display the already pre-rendered image
The size of your binary increases, and you are not able to benefit from app thinning, which was introduced with iOS 9.
As you can see, producing only one image will hurt the performance and quality of your app, and it will disproportionately hit users with older devices. This is because:
They need to downscale more. Also, their devices don't perform as well as newer ones, so they are much more likely to notice lag in your app.
They do not have as much storage space, so you really want app thinning to help them.
The loss of quality will be highest for them, and considering that the resolution of their devices is low, they will notice it.
Because of this, users are likely to be unhappy, and that is bad for you: from my experience, unhappy users are 10 times more likely to rate your app than happy users. You don't want that, do you? :)

jpg or png for user profile pictures?

My app requires each user to have a profile picture of around 140 x 140 px. Right now I am using JPGs; I am wondering whether, performance-wise, it would be better to use PNGs. I have read that PNGs are good for small UI elements and images, and JPG for large images with detail such as photos. Obviously my profile pics are photos, but they are small. Would it make much difference switching to PNG? Thanks
JPEG is best for small file sizes of photos, even at low resolutions.
PNG makes sense when there are many pixels of exactly the same color next to each other, which is not the case with photos.
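A quick way to check for your own images is to encode the same picture both ways and compare byte counts (a UIKit sketch; "avatar" is a hypothetical 140 x 140 profile photo):

```swift
import UIKit

// A minimal sketch: encode the same photo as JPEG and PNG and compare sizes.
if let avatar = UIImage(named: "avatar"), // hypothetical asset name
   let jpeg = avatar.jpegData(compressionQuality: 0.8),
   let png = avatar.pngData() {
    print("JPEG: \(jpeg.count) bytes, PNG: \(png.count) bytes")
    // For photographic content the JPEG is typically several times smaller.
}
```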
These should be helpful for you.
When to use PNG or JPG in iPhone development?
PNG vs. GIF vs. JPEG vs. SVG - When best to use?
Apple optimizes PNG images that are included in your iPhone app bundle. In fact, the iPhone uses a special encoding in which the color bytes are optimized for the hardware, and Xcode applies this encoding for you when you build your project. So you do see additional benefits to using PNGs on an iPhone beyond their size. For this reason it is definitely recommended to use PNGs for any images that appear as part of the interface (in a table view, labels, etc.).
As for displaying a full-screen image such as a photograph, you may still reap benefits with PNGs since they are lossless: the visual quality should be better than a JPG, not to mention the resource usage of decoding the image. You may need to decrease the quality of your JPGs to see a real benefit in file size, but then you are displaying non-optimal images.
File size is certainly a factor but there are other considerations at play as well when choosing an image format.

What's the best way to use big textures (2048*1536) in Unity3d with NGUI on iOS?

I'm using Unity3d (4.3.1) and NGUI to create a 2D iOS (iPad) app. I also need to use a lot of full-screen images (about 100 images at 2048x1536), for a gallery, for example.
Right now I import them with texture type GUI, an iPhone override with max size 2048, and compression quality: normal, and I use a UITexture with the Unlit/Transparent shader to show them.
However, after about 40 images the app is terminated due to a memory error in Xcode. So the question is: what type of images do I need, and with which settings, to make this work?
I'm using an iPad 3 as a test device with Xcode 5.1.1. I'll be thankful for any help!
I also need to use a lot of full-screen images (about 100 images at 2048x1536), for a gallery, for example.
I think your 2048x2048 textures use a very large amount of memory. A 2048x2048 truecolor texture uses 16 MB of memory, so this case needs about 1600 MB! A normal application shouldn't go over about 200 MB.
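The arithmetic behind those numbers (a Swift sketch of the math; Unity itself is C#, so this is just the calculation):

```swift
// Uncompressed truecolor (RGBA, 4 bytes per pixel):
let bytesPerTexture = 2048 * 2048 * 4                // 16 MiB per full-screen texture
let totalMiB = 100 * bytesPerTexture / (1024 * 1024) // = 1600 MiB for 100 images
print(totalMiB)
```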
So, I think you need to reduce memory usage:
Remember that this texture is going to be expanded to 2048x2048 by Unity ( http://www.opengl.org/wiki/NPOT_Texture ). So even if you reduce the file to 1500x1000, your application will still use a 2048x2048 texture; but if you can reduce it to 1024x1024, do it - a 1024 texture uses just 4 MB of memory.
If you can use texture compression, use it. PVRTC 4-bit compression ( https://docs.unity3d.com/Documentation/Manual/ReducingFilesize.html ) makes the file size 1/8 of truecolor, and memory usage will also go down (maybe to half).
If your application doesn't display all images at once, load images dynamically and use thumbnails.
Good luck :D
If you want to make a gallery-like app to render photos, maybe you can try a different approach:
Create two large editable textures and fill their texels with image data (they must be editable, otherwise you will not have access to write image data directly into them).
If you still have memory issues, or if you want to use less memory, you can use several smaller textures as tiles and render the image parts into each smaller texture. Remember to configure the texture borders correctly, or do not use border texels, to avoid wrapping problems.
The best way is to use a smaller texture. On an iPad you would need a magnifying glass to really appreciate the difference between 1024x1024 and larger textures. Remember an iPad screen is smaller (7"-10") than a computer's, and with filtering enabled it is really hard to tell the difference.
If you still need to manage such a large texture for some other reason (zooming or similar), I recommend one of the following approaches:
Split the texture into layers with an alpha channel (transparency): backgrounds can usually be rendered at lower resolutions.
Also split the texture into blocks: most textures have repeating patterns.
Use compression.
Always avoid using such large textures if possible.

Maximum image dimensions in a browser/CSS spec?

I want to display a page containing about 6000 tiny image thumbnails (40x40 each). To avoid making 6000 HTTP requests, I am exploring CSS sprites, i.e. concatenating all these thumbnails into one long strip and using CSS to crop the required images out. Unfortunately, I have discovered that JPEG files cannot be larger than 65,500 pixels in any one dimension. Wary of further limits in the web stack, I am wondering: are any of the following unable to cope with an image of dimensions 40x240000?
Internet Explorer
Opera
WebKit
Any CSS spec
Any HTML spec
The PNG spec
Edit: the purpose of this is simply to display an entire image collection at once, requiring that the user at most has to scroll. I want the "micro-thumbnails" to flow into an existing CSS layout, so I can't just use a big rectangular image. I don't want the user to have to click through multiple pages to see everything. The total number of pixels is not that great - only twice what would fit on a 2560x1600 display. The total file size of all the micro-thumbnails is only a couple of megabytes. Assuming every image is manipulated uncompressed in the browser's memory, taking 8 bytes of storage per pixel (RGBA plus 100% overhead fudge factor), we are talking RAM usage in the low hundreds of megabytes; not unreasonable for a specialized application in the year 2010. The only unreasonable thing is the volume of HTTP requests that would be generated if all micro-thumbnails were sent individually.
Well, Safari/iOS lists these limits:
The maximum size for decoded GIF, PNG, and TIFF images is 3 megapixels.
That is, ensure that width * height ≤ 3 * 1024 * 1024. Note that the decoded size is far larger than the encoded size of an image.
The maximum decoded image size for JPEG is 32 megapixels using subsampling.
JPEG images can be up to 32 megapixels due to subsampling, which allows JPEG images to decode to a size that has one sixteenth the number of pixels. JPEG images larger than 2 megapixels are subsampled—that is, decoded to a reduced size. JPEG subsampling allows the user to view images from the latest digital cameras.
Individual resource files must be less than 10 MB.
This limit applies to HTML, CSS, JavaScript, or nonstreamed media.
http://developer.apple.com/library/safari/#documentation/AppleApplications/Reference/SafariWebContent/CreatingContentforSafarioniPhone/CreatingContentforSafarioniPhone.html
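Expressed as a quick check (a sketch of the decoded-size limits quoted above, in Swift for consistency with the rest of this page):

```swift
// A minimal sketch of the Safari-on-iOS decode limits quoted above.
func fitsSafariDecodeLimits(width: Int, height: Int, isJPEG: Bool) -> Bool {
    let pixels = width * height
    let limit = isJPEG ? 32 * 1024 * 1024 : 3 * 1024 * 1024
    return pixels <= limit
}

// The 40 x 240000 strip from the question is 9.6 megapixels:
print(fitsSafariDecodeLimits(width: 40, height: 240_000, isJPEG: false)) // false: over the 3 MP limit
```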
Based on your update, I'd still really recommend not using this approach. Don't you think there's a reason that Google's image search doesn't work like this?
As such, I'd recommend simply loading images as required via Ajax (i.e., when the user scrolls below the currently visible set of images). Whilst this will use more connections, it means you can have sensibly sized thumbnails, and as a general approach it is much more manageable than having to regenerate pre-generated thumbnail "sheets" on the back end whenever a new image is added, etc.
