On-the-fly PVRTC compression on iOS

I'm writing an iPhone app that is quite heavy on GPU memory. Some of the textures are generated procedurally by the app at runtime, which means I cannot pre-compress them to PVRTC ahead of time to reduce their size (and thus memory use) on the GPU.
Does anyone know of a library that does this?
The closest I found was PVRTexLib (http://www.imgtec.com/powervr/insider/powervr-pvrtexlib.asp), but that is for Mac OS X, not iOS.
I found a similar question, Convert .png to PVRTC *on* the iPhone, but it took a different direction (why you shouldn't want to do this in the first place).
However, my app uses OpenGL, so I would benefit greatly from being able to use PVRTC.
Does anyone know of such a library?

PVRTexTool from Imagination includes an SDK and a pre-compiled library for a number of platforms, including ARM, so it should be linkable into an iPhone executable.
https://community.imgtec.com/developers/powervr/tools/pvrtextool/
However, and maybe more interestingly, can you write shaders that re-map the colors as needed? You don't need to re-compress the texture, or change its bits at all, if you can apply the function you need in a shader.
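For completeness, once you do have a PVRTC blob in memory (however it was produced), handing it to OpenGL ES is a single call. Below is a minimal Objective-C sketch, assuming OpenGL ES 2.0 and a square, power-of-two, 4-bpp RGB texture with no mipmaps; the function name and variables are mine, not from any particular library:

    #import <OpenGLES/ES2/gl.h>
    #import <OpenGLES/ES2/glext.h>

    // Upload an already-compressed PVRTC 4bpp RGB payload as a GL texture.
    // `pvrtcData` is assumed to point at the raw compressed bytes.
    GLuint UploadPVRTC4bppRGB(const void *pvrtcData, GLsizei width, GLsizei height)
    {
        // PVRTC 4bpp packs two pixels per byte, with a 32-byte minimum per level.
        GLsizei dataLength = (width * height) / 2;
        if (dataLength < 32) dataLength = 32;

        GLuint texture = 0;
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                               GL_COMPRESSED_RGB_PVRTC_4BPPV1_IMG,
                               width, height, 0,
                               dataLength, pvrtcData);
        return texture;
    }

The expensive part is still producing the compressed bytes from raw RGBA pixels at run time, which is what you would link PVRTexLib (or similar) for; PVRTC compression is slow, so it is worth profiling whether compressing each procedurally generated texture once is acceptable for your frame budget.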

Related

Best way to compress textures in Unity for iOS (game is way too big)?

I have looked everywhere, including at "Unity iOS Build size is way big", and after building my game for iOS it is far too large at 170 MB. I checked my player size statistics in the editor log and 96% of the size is textures. That's because I went into the platform override box for each texture and maxed everything out (I did not know what I was doing at the time).
Now I need to go back and reduce my asset sizes. I don't know much about texture compression, so I need to know: for iOS (and Android later), what are the optimal settings in this override box for compressing textures to the minimum size?
Is there a way to do this to multiple images at once? What should the compressor quality be?
This is a great time to learn about AssetPostprocessor! Specifically, AssetPostprocessor.OnPreprocessTexture will let you handle the import settings of your project's textures automatically. It modifies their meta files as they're imported, and Right-click > Reimport will force it to run again (you can do this on a whole directory, even the entire Assets directory).
As far as which settings you should use, that's entirely dependent on your project. A few thoughts:
Make sure your Max Size is appropriate for the textures. Icons don't need to be 4k and background images shouldn't be scaled down to 256.
Compressor Quality "Normal" is fine in most cases. Any compression is far better than no compression.
We use "Compressed ETC2 8 bits" on Android and "Compressed PVRTC 4 bits" on iOS.
Whether you use "RGB" or "RGBA" is important to consider: textures that don't need an alpha channel waste memory if you choose "RGBA", and textures that do need one won't render properly if you choose "RGB".

SpriteKit Vector graphics performance

For my SpriteKit game I am using a single PDF vector graphic for each SKSpriteNode, rather than many PNG sprites covering all device resolutions for every entity in the game. Not having to worry too much about the game's graphics has helped dramatically, but my question is simple: would using vector graphics be a bad idea performance-wise?
Wait, you are NOT actually using a PDF as the texture for the SKSpriteNode.
Instead, you are probably adding a PDF to the Xcode asset catalog, right?
In that case, first of all, this is a really good idea, and it does NOT impact the performance of your game.
In fact, when you load a PDF image into the Xcode asset catalog (and set Scale Factors to Single Vector), you are not using the PDF in your app or game at all.
As soon as you compile the app, Xcode automatically generates the PNG versions for the various resolutions.
E.g. if your game supports any device running iOS 9, then Xcode automatically generates the 1x, 2x and 3x PNG (bitmap) images from your original PDF (vector).
Another great benefit of this approach: if in the future Apple releases a device with a 4x pixel density, Xcode will likely be updated to support it, and you'll just need to recompile your app to automatically generate the 4x images.
Answer
So the answer to your question is: NO. The PDF image will not impact your game's performance, simply because the game is not using the PDF; it's using the generated PNGs instead (even if you never see them).
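Put differently, at run time you load the image exactly as you would any bitmap asset; the PDF never leaves the asset catalog. A minimal sketch, with an illustrative asset name:

    #import <SpriteKit/SpriteKit.h>

    // "Rocket" is a Single Vector PDF in the asset catalog. At run time this
    // loads the PNG that Xcode rasterized for the current device's scale factor.
    SKSpriteNode *rocket = [SKSpriteNode spriteNodeWithImageNamed:@"Rocket"];
    rocket.position = CGPointMake(200.0, 300.0);
    [scene addChild:rocket];   // `scene` is assumed to be your current SKScene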

SVGKit performance and should it be preferred over PNG?

I have been looking at SVGKit and I am finding conflicting opinions. Some say it's slower than PNG and others say it is fast.
I was hoping to get a recommendation on which route I should take. Since I am already exporting my vector graphics to PNG for display, would it not make sense to use an SVG instead?
Of course, this has the added benefit that it remains a vector.
Or is it still recommended to export everything to PNG?
You might consider the middle way introduced in Xcode 7: add your assets to the project as vector images (PDF), and at build time Xcode automatically generates the PNGs in all needed sizes (1x, 2x, 3x).
Personally, I only use SVGs when necessary, for example if I need to be able to change the color of (parts of) the image. I believe there can be a performance hit when resizing vector images at run time, although Android uses vectors by default, so it might be insignificant.
SVG is the most resource-intensive option and is best used when you need to display something that can be zoomed in and out, while PNG should be preferred for most UI graphics (logos, icons, etc.), as it is crisp yet remains lightweight and fast to display. In terms of raw performance there is really no comparing SVG with PNG.
If you are after crystal-clear images, you can use PDF-based graphics, which are supported by Xcode (see Using Vector Images in Xcode).
If you still need to use SVGKit, I always suggest running the SVGs through a tool (like SVGCleaner) to clean and simplify them in order to improve performance.
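If you do end up using SVGKit, typical usage looks roughly like the sketch below. Treat the class and property names as assumptions to verify against the SVGKit release you actually link (its API has shifted between versions), and the file name is illustrative:

    #import <SVGKit/SVGKit.h>

    // Parse the SVG once (the expensive step) and keep the result around.
    SVGKImage *icon = [SVGKImage imageNamed:@"menu-icon.svg"];

    // Either hand it to a view that SVGKit manages...
    SVGKFastImageView *iconView = [[SVGKFastImageView alloc] initWithSVGKImage:icon];

    // ...or rasterize it to a plain UIImage at the size you need, then reuse
    // that bitmap like any PNG so the vector is not re-rendered every frame.
    icon.size = CGSizeMake(44.0, 44.0);
    UIImage *bitmap = icon.UIImage;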

Xcode's built-in PNG compression effects

I have a question about PNG-8 vs. PNG-24 usage, and about Xcode's built-in "image compressor".
Some images converted to PNG-8 are just fine saved like that, because the difference from the PNG-24 version is hard to notice. But some images have to be stored as PNG-24 so that the quality stays high... The same image is about 3 times smaller when saved as PNG-8, so I assume there are some benefits in memory consumption when using PNG-8 instead of PNG-24. But what I am not sure about is:
Does iOS "like" PNG-24 better?
Are there any problems with using PNG-8 instead of PNG-24 on iOS, and which is the preferred choice?
What is the benefit of optimizing an image in Photoshop (or a program like TexturePacker) when COMPRESS_PNG_FILES in Xcode is set to YES? I suppose Xcode in some way overwrites the optimization done in Photoshop.
What does Xcode actually do when optimizing images?
I know that just letting Xcode do what it's supposed to do is probably more than enough, especially for newer devices with plenty of memory and CPU power, but I am curious what's happening "under the hood", and whether doing the optimization in Photoshop is a waste of time.
Does iOS "likes" more png-24?
iOS certainly likes its images to be close to its own hardware format (see below). However, it may not presume a certain format, or convert images at will. This would mean that the default postprocessing could convert images from palettized (8-bit) to true-color images, and that would be destructive if the application expects its images to contain a palette. There are many good & proper uses of palettized images.
Are there any problems with using PNG-8 instead of PNG-24 on iOS, and which is the preferred choice?
Color depth: higher is better for some kinds of images (but not all). Size: smaller is better (and deciding when is up to you). Contrary to what Sangony states, the PNG specification is generous enough to allow more than a single bit of alpha even in indexed mode; that is, the usual RGB palette may also be an RGBA palette, including alpha. I am not aware of any "problems" with the more common PNG formats, or even the uncommon ones.
What is the benefit of optimizing an image in Photoshop (or a program like TexturePacker) when COMPRESS_PNG_FILES in Xcode is set to YES? I suppose Xcode in some way overwrites the optimization done in Photoshop.
Photoshop is not extremely good at optimizing PNGs, but then again it's certainly not one of the worst. pngcrush (the original) is written specifically to try and squeeze the very last byte out of a PNG, but at its highest setting it can take a long while to do so. I may have used Apple's modified pngcrush unknowingly, since it is "on" by default; I have not noticed such a huge delay when compiling, so Apple's default may not be the highest possible setting. This suggests that manually running pngcrush could be worth the time, in which case you definitely do not want Xcode to undo it.
What does Xcode actually do when optimizing images?
The most visible 'optimizations' are switching the storage order from RGB to BGR and pre-multiplying the alpha channel into the color channels. See also my earlier answer.
The storage order change is, presumably, optimal for the default target devices (iPads, iPhones). Premultiplying alpha is a common optimization because it takes fewer calculations to composite the images in real time. (There are some disadvantages to it as well.)
Without any exact measurements, one can only speculate whether these optimizations really matter on modern hardware. All internal conversions to the 'display' format may very well be cached as quickly as possible.
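To make the premultiplication concrete: each color channel is scaled by the alpha value, so a half-transparent pure red pixel (R, G, B, A) = (255, 0, 0, 128) is stored as roughly (128, 0, 0, 128). A tiny sketch of the per-channel arithmetic, purely my own illustration rather than Apple's actual code:

    #include <stdint.h>

    // Premultiply one 8-bit color channel by an 8-bit alpha, with simple rounding.
    static inline uint8_t premultiply(uint8_t channel, uint8_t alpha)
    {
        return (uint8_t)((channel * alpha + 127) / 255);
    }

    // Example: premultiply(255, 128) == 128, so the RGBA pixel
    // (255, 0, 0, 128) becomes (128, 0, 0, 128) once premultiplied.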
Xcode uses pngcrush behind the scenes to optimize .png files. Here is also a good blog post that can answer your questions.
Aside from the available colors, the main difference between PNG-8 and PNG-24 is transparency.
PNG-8 alpha can sometimes look somewhat jagged, whereas PNG-24 alpha is much smoother. If alpha is not a concern for you and the image looks good enough, then PNG-8 is probably the way to go.
(Comparison screenshots: PNG-8 alpha vs. PNG-24 alpha.)

iOS: Updating old application to Retina graphics (TweeJump)

I'm new to the site and to iOS development as well, but I have experience with other development platforms.
I have been studying and playing around with the TweeJump source code and I want to update it to Retina graphics. I have made my own graphics, but I'm not sure how to implement them properly. Does doing this cut off support for non-Retina iPhones?
Some of the images are in a sprite, which I'm not familiar with.
If I just change all the images to high resolution ones, what problems will arise and how can I resolve them?
Please excuse my beginner knowledge. I will really appreciate ANY help you can offer.
Regards.
Updating your game to support the Retina display is easy if you have the original non-rasterized (i.e. vector) graphics for the images. Just export the graphics at twice the size of the SD images and append -hd (or -ipadhd for iPad) to the filename, before the extension. Then make sure your app delegate calls [[CCDirector sharedDirector] enableRetinaDisplay:YES] and you are good to go.
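For reference, in cocos2d-iphone 1.x that call returns a BOOL, so you can also detect and log when the device cannot do Retina; a minimal sketch for the app delegate:

    // In -applicationDidFinishLaunching:, after the director and GL view are set up.
    if (![[CCDirector sharedDirector] enableRetinaDisplay:YES]) {
        CCLOG(@"Retina display not available on this device; falling back to SD assets");
    }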
Does doing this cut off support for non-retina iPhones?
Absolutely not. As long as you retain the SD images.
Some of the images are in a sprite, which I'm not familiar with.
You mean "spritesheet"? This is one of the situations where you need to have access to the original vector graphics for each individual sprite. Plus, you need to use a spritesheet editor in order to generate the HD version of the spritesheet. I recommend TexturePacker.
If I just change all the images to high resolution ones, what problems will arise and how can I resolve them?
Make sure to retain the SD versions as well. One problem that may arise is if you have a spritesheet larger than 1024 x 1024 for the SD version: the HD version will then be larger than 2048 x 2048, which OpenGL ES 1.1 cannot handle. You would need to break the spritesheet into more pieces or convert to OpenGL ES 2.0 (i.e. convert to cocos2d-iphone 2.x).
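If you would rather check the limit at run time than assume it, you can ask OpenGL directly once a GL context is current (e.g. after cocos2d has created its EAGLView); a minimal sketch:

    #import <Foundation/Foundation.h>
    #import <OpenGLES/ES1/gl.h>

    // Older ES 1.1 devices typically report 1024 or 2048 here.
    GLint maxTextureSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
    NSLog(@"Max texture size on this device: %d", (int)maxTextureSize);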
