I'm testing different compression formats for my spritesheets for a game on iOS. Surprisingly, I get higher memory (RAM) usage with PVR 2-bit with alpha than with PNG 32 (RGBA 4444). The consumption is roughly 25% higher with PVR 2-bit than with PNG 32 once the spritesheets are loaded into memory. I'm using Instruments with Xcode to check the memory use on the physical device (iPad Air 2).
I'm using TexturePacker to generate my spritesheets.
I've read everywhere that PVR 2-bit or 4-bit consumes much less memory than PNG 32. How is this possible?
Edit:
This is strange because, according to my observations, PVRTC 4-bit RGBA uses a lot more memory (RAM) than PNG 32, nearly 3 times more according to Instruments in Xcode. PVRTC 2-bit RGBA is 25% higher than PNG 32 RGBA 4444. I'm talking about live RAM consumption, not disk size, which has nothing to do with this and is not a problem. It seems iOS manages PVR differently than it's supposed to, especially when loading it into RAM.
Edit2:
My textures are 2048x2048, so they are POT and square. Everything works fine, except that the RAM consumption is much higher than it should be. I run all my tests on a physical iPad Air 2 connected to my Mac with a USB cable. I use Instruments inside Xcode to watch the RAM consumption live. I solved the RAM consumption problem by switching to an 8-bit (indexed) PNG format with the texture dimensions divided by 2 (1024x1024). I apply a x2 scale in the code to recover a normal-size texture. The RAM consumption dropped to 240 MB (PNG 8-bit indexed) from 950 MB (PVR 2-bit RGBA). My game is a video puzzle (with 8-second video loops at 15 fps) and uses a lot of sprites (43 spritesheets per puzzle, generated by TexturePacker, with around 130 sprites in each spritesheet).
Related
I'm doing some tests regarding loading of POT vs NPOT textures on OpenGL ES 2.0 iOS devices.
Surprisingly, NPOT textures (smaller in size) seem to take more memory than the next biggest POT texture. Can anybody explain why?
My test consists of a bare-bones App in which I load a really big texture (I'm using cocos2d, so this could be a bug in this engine). Then I output memory usage using this method. (I'm looking for a better way of reporting texture memory, see here).
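(For context, here is a minimal sketch of one common way to read an app's resident memory on iOS, via the Mach task_info() call. This is only an assumption about the kind of measurement involved, not necessarily the method linked above, and it reports process-resident memory rather than texture memory specifically.)

    #include <mach/mach.h>
    #include <stdio.h>

    // Print the resident memory size of the current process.
    static void print_resident_size(void)
    {
        struct task_basic_info info;
        mach_msg_type_number_t count = TASK_BASIC_INFO_COUNT;

        // Query basic accounting info for the current task (the running app).
        kern_return_t kr = task_info(mach_task_self(), TASK_BASIC_INFO,
                                     (task_info_t)&info, &count);
        if (kr == KERN_SUCCESS) {
            printf("resident size: %lu bytes\n", (unsigned long)info.resident_size);
        } else {
            printf("task_info failed: %d\n", kr);
        }
    }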
The NPOT texture is 1010x1708 (3399 kB at RGBA4444). The equivalent POT texture is 1024 x 2048 (4096 kB at RGBA4444).
App memory usage with the POT texture stabilizes at a little over 16,000,000 bytes (I did three runs, with these values: 16,261,120, 16,232,448 and 16,240,640). The NPOT memory usage stabilizes at around 19,000,000 bytes (19,173,376, 19,038,208 and 19,140,608). Nothing else changes between runs, only the texture.
Why, oh, why? :-)
Note: I did these tests on iOS 6.1 (iOS 5 was known to have a bug which caused POT textures to take 33% more memory than NPOT ones).
I am loading a Cocos2d scene that contains almost 700 PNG images, and even when I run this scene directly from Xcode I receive a memory warning along with a long list of some of my image names in the console.
I am properly deallocating them in dealloc, but when I come back to this scene my game crashes after loading about half of the images.
Is this a problem of loading so many textures at once, or is it problematic code?
How should I handle loading so many images and do proper memory management to avoid this crash?
700 PNG images? Hmmm. OK, I like those games.
Let's assume each image is "only" 128x128 pixels. Each texture consumes 64 KB (128 times 128 times 4 Bytes). Total of 45 MB memory used for 700 such textures.
If your textures are twice that or even more, KA-BOOM!
Keep in mind that the file size has nothing to do with texture memory. The files may total a few megabytes in the file system. But that's because they're compressed. Textures created from PNG files however are not compressed.
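As a rough sanity check, the in-memory footprint of an uncompressed texture is just width x height x bytes per pixel. A small sketch of that arithmetic (plain C, purely illustrative):

    #include <stdio.h>

    // Rough in-memory size of an uncompressed texture: width * height * bytes per pixel.
    // (4 bytes for RGBA8888, 2 bytes for 16-bit formats such as RGBA4444.)
    static unsigned long texture_bytes(unsigned long width, unsigned long height,
                                       unsigned long bytes_per_pixel)
    {
        return width * height * bytes_per_pixel;
    }

    int main(void)
    {
        // 700 textures of 128x128 at 4 bytes/pixel: ~45 MB, as estimated above.
        unsigned long total = 700UL * texture_bytes(128, 128, 4);
        printf("700 x 128x128 RGBA8888: %lu bytes (~%lu MB)\n",
               total, total / (1000 * 1000));
        return 0;
    }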
What you can do:
use texture atlases
reduce color depth of textures to 16 Bit
use compressed PVR format
TexturePacker will help you with these tasks.
Apples docs state:
You should avoid creating UIImage objects that are greater than 1024 x 1024 in size. Besides the large amount of memory such an image would consume, you may run into problems when using the image as a texture in OpenGL ES or when drawing the image to a view or layer. This size restriction does not apply if you are performing code-based manipulations, such as resizing an image larger than 1024 x 1024 pixels by drawing it to a bitmap-backed graphics context. In fact, you may need to resize an image in this manner (or break it into several smaller images) in order to draw it to one of your views.
I assume this means that if we are working with non-square images, we should break them into smaller images? Is there any specific documentation or explanation of this, or does anyone have any tips from experience?
Thanks for reading.
On the pre-A5 iOS devices, the maximum OpenGL ES texture size was 2048x2048 (Apple's documentation is incorrect in this regard by saying it's 1024x1024). What that means is that you can't have an image larger than that in either dimension. The newer iOS devices (iPhone 4S, iPad 2, iPad 3) have a maximum texture size of 4096x4096.
It does not mean that you have to have square images, just that an image must not have its width or height exceed 2048 (again, 4096 on newer devices). If you try to do so, I believe your image will just render as black.
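Rather than hard-coding these limits, you can ask the driver at runtime. A minimal sketch (a standard OpenGL ES 2.0 call; assumes a current GL context):

    #include <OpenGLES/ES2/gl.h>  // iOS OpenGL ES 2.0 header

    // Query the largest texture dimension the current device supports
    // (2048 on pre-A5 devices, 4096 on newer ones). Requires a current GL context.
    static GLint max_texture_size(void)
    {
        GLint size = 0;
        glGetIntegerv(GL_MAX_TEXTURE_SIZE, &size);
        return size;
    }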
This used to be a limitation for all UIViews not backed by a CATiledLayer, but I believe they now do tiling on large enough views automatically. If you need to work with an image larger than 2048x2048, you'll need to host it in a CATiledLayer or the like.
The memory cautions are worth paying attention to, though. Images are stored in their uncompressed form in memory, no matter their source, so you're looking at 16,777,216 bytes per 2048x2048 image (4 bytes per pixel for RGBA). That can add up pretty quickly, if you're not careful.
I am loading an RGBA texture which is 1024 x 1024. I expected the in-memory texture size would be 1024 x 1024 x 4 => 4 MB. But when I print the memory consumption, I can see that the texture is taking around 7-8 MB, almost double. I was wondering whether the iPad is converting every channel from byte to half-float.
So is there any way to specify that every pixel should take 4 bytes and not 8 bytes?
The easiest way to specify it is using a sized internal format (like GL_RGBA8 instead of GL_RGBA), although I'm not sure if these are supported in ES. But I would be surprised if an ES device would store a standard RGBA texture with more than 8 bits per channel.
How do you determine the GPU memory consumption? I would rather guess the additional memory is due to other GPU resources, like VBOs, and not to forget the framebuffer itself (the memory you render into), which takes a considerable amount of memory. And remember, when using mipmaps, these additionally require around 33% of the base texture's memory.
And if you're talking about the size of the CPU data you create the texture from, then this doesn't have anything to do with the texture's size anyway and only depends on the size of your own data.
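To put a number on the mipmap point: each level is a quarter of the size of the previous one, so the full chain adds roughly a third on top of the base level (1/4 + 1/16 + ... -> 1/3). A quick sketch of that arithmetic:

    #include <stdio.h>

    // Total bytes for a square RGBA8888 texture including its full mipmap chain.
    static unsigned long texture_bytes_with_mips(unsigned long size)
    {
        unsigned long total = 0;
        for (unsigned long s = size; s >= 1; s /= 2)
            total += s * s * 4;           // 4 bytes per pixel (RGBA8888)
        return total;
    }

    int main(void)
    {
        // 1024x1024 RGBA: 4 MiB base level, ~5.33 MiB with mipmaps.
        printf("%lu bytes\n", texture_bytes_with_mips(1024));
        return 0;
    }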
You have to specify the type and internal format of your texture when you create it using glTexImage2D.
Yours is probably set to GL_FLOAT or something similar.
Look up the documentation here: http://www.opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml
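For illustration, this is roughly what that looks like in OpenGL ES 2.0. GL_UNSIGNED_BYTE gives 8 bits per channel (4 bytes per pixel), while GL_UNSIGNED_SHORT_4_4_4_4 gives 16 bits per pixel; the client data must already be packed to match the chosen type. Just a sketch, not your exact code:

    #include <OpenGLES/ES2/gl.h>

    // Upload a width x height RGBA texture. In ES 2.0 the internal format must
    // match the format; the per-pixel storage is chosen via the 'type' parameter.
    // Assumes a texture object is already bound to GL_TEXTURE_2D.
    static void upload_rgba_texture(GLsizei width, GLsizei height, const void *pixels)
    {
        // 8 bits per channel, 4 bytes per pixel:
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        // Alternatively, 16 bits per pixel (RGBA4444); 'pixels' must then be
        // packed as one unsigned short per pixel:
        // glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
        //              GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, pixels);
    }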
For iPhone game development, I switched from the PNG format to the PVRTC format for the sake of performance. But PVRTC compression is creating files that are much bigger than the PNG files. A PNG of 140 KB (1024x1024) gets bloated to 512 KB or more in the PVRTC format. I read somewhere that a PNG file of 50 KB got compressed down to some 10 KB; in my case, it's the other way around.
Any reason why it happens this way, and how can I avoid this? If PVRTC compression is blindly doing a 4 bpp conversion (1024 x 1024 x 0.5) irrespective of the transparency in the PNG, then what compression are we achieving here?
I have hundreds of these 1024x1024 images in my game, as there are numerous characters each doing some complex animations. At this rate of 512 KB per image, my app would exceed 50 MB, which is unacceptable for my customer (with PNG, I could have kept my app around 10 MB).
In general, uncompressed image data is either 24 bpp (RGB) or 32 bpp (RGBA) flat rate. PVRTC is 4 bpp (or 2 bpp) flat rate, so there is a compression factor of 6x or 8x (12x or 16x for 2 bpp) compared to this.
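To make the flat-rate point concrete, the PVRTC payload size depends only on the dimensions and the bit rate. A sketch of that calculation (the minimum-size clamps reflect my reading of the IMG extension, so treat them as an assumption):

    // Flat-rate PVRTC data size: only the dimensions and the bits per pixel matter
    // (minimum dimensions per the IMG extension: 8x8 for 4 bpp, 16x8 for 2 bpp).
    static unsigned long pvrtc_bytes(unsigned long width, unsigned long height,
                                     unsigned long bpp /* 2 or 4 */)
    {
        unsigned long min_w = (bpp == 2) ? 16 : 8;
        unsigned long w = width  < min_w ? min_w : width;
        unsigned long h = height < 8     ? 8     : height;
        return (w * h * bpp + 7) / 8;
    }

    // 1024x1024 at 4 bpp -> 524,288 bytes (512 KB); at 2 bpp -> 262,144 bytes (256 KB);
    // versus 4 MB for uncompressed RGBA8888: the 8x / 16x ratios mentioned above.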
A requirement for graphics hardware to use textures natively is that the format of the texture must be random accessible for the hardware. PVRTC is this kind of format, PNG is not and this is why PNG can achieve greater compression ratios. PVRTC is a runtime, deployment format; PNG is a storage format.
PVRTC compression is carried out on 4x4 blocks of pixels at a time and at a flat bit rate so it is easy to calculate where in memory to retrieve the data required to derive a particular texel's value from and there is only one access to memory required. There is dedicated circuitry in the graphics core which will decode this 4x4 block and give the texel value to your shader/texture combiner etc.
PNG compression does not work at a flat bitrate and is more complicated to retrieve specific values from; memory needs to be accessed from multiple locations in order to retrieve a single colour value and far more memory and processing would be required every single time a texture read occurs. So it's not suitable for use as a native texture format and this is why your textures must be decompressed before the graphics hardware will use them. This increases bandwidth use when compared to PVRTC, which requires no decompression for use.
So for offline storage (the size of your application on disk), PNG is smaller than PVRTC which is smaller than completely uncompressed. For runtime memory footprint and performance, PVRTC is smaller and faster than PNG which, because it must be decompressed, is just as large and slow as uncompressed textures. You might gain some advantage with PNG at initialisation for disk access, but then you'd lose time for decompression.
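For completeness, uploading already-compressed PVRTC data looks roughly like this. GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG comes from the GL_IMG_texture_compression_pvrtc extension; reading the raw payload out of a .pvr file (header parsing and so on) is omitted here:

    #include <OpenGLES/ES2/gl.h>
    #include <OpenGLES/ES2/glext.h>   // GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG

    // Upload 4 bpp PVRTC data as-is; the GPU decodes the 4x4 blocks on the fly,
    // so there is no CPU-side decompression and no 32-bit expansion in memory.
    // Assumes a texture object is bound and 'data'/'data_size' hold the raw
    // PVRTC payload (e.g. read from a .pvr file with the header stripped).
    static void upload_pvrtc4(GLsizei width, GLsizei height,
                              const void *data, GLsizei data_size)
    {
        glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                               GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG,
                               width, height, 0, data_size, data);
    }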
If you want to reduce the storage footprint of PVRTC you could try zip-style compression on the texture files and expand these when you load from disk.
PVRTC (PowerVR Texture Compression) is a texture compression format. On devices using PowerVR GPUs (e.g. most higher-end mobile phones, including the iPhone, and other ARM-based gadgets like the iPod) it is very fast to draw, since decoding is hardware accelerated. It also uses much less memory, since images are kept in their compressed form and decoded at each draw, whereas a PNG needs to be decompressed before being drawn.
PNG is lossless compression.
PVRTC is lossy compression meaning it approximates the image. It has a completely different design criteria.
PVRTC will 'compress' (by approximating) any type of artwork, including photographic images, at a fixed number of bits per texel.
PNG does not approximate the image, so if the image contains little redundancy it will hardly compress at all. On the other hand, a uniform image e.g. an illustration will compress best with PNG.
It's apples and oranges.
Place more than one frame tiled onto a single image and blit the subrectangles of the texture. This will dramatically reduce your memory consumption.
If your images are, say, 64x64, then you can place 256 of them on a 1024x1024 texture in a 16x16 arrangement.
With a little effort, images do not need to be all the same size, just so long as you keep track in the code of the rectangle in the texture that each image is at.
This is how iPhone game developers do it.
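A small sketch of the bookkeeping for the fixed-grid case described above (64x64 frames packed 16x16 into a 1024x1024 atlas); the struct and function names are just for illustration:

    // Normalized texture coordinates of frame 'index' in a 1024x1024 atlas that
    // holds 64x64 frames in a 16x16 grid (256 frames total), row-major order.
    typedef struct { float u0, v0, u1, v1; } UVRect;

    static UVRect atlas_frame_uv(int index)
    {
        const int   frames_per_row = 16;
        const float frame_size     = 64.0f / 1024.0f;   // one frame in UV space

        int col = index % frames_per_row;
        int row = index / frames_per_row;

        UVRect r;
        r.u0 = col * frame_size;
        r.v0 = row * frame_size;
        r.u1 = r.u0 + frame_size;
        r.v1 = r.v0 + frame_size;
        return r;
    }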
I agree with Will. There is no real question here. I read the question three times, but I still don't know what Sankar wants to know. It's just a complaint, not a question.
The only thing I can advise: don't use PVRTC if you don't want to. It offers a performance gain and saves VRAM, but it won't help you in this case, because what you want is simply to reduce the game's size, not to weigh the trade-off between performance and quality.