Decompressing a PVRTC compressed image format? - ios

I'm trying to open some textures from an iPhone game that I believe are using the PVRTC format (Pictured below)
PVRTC image format?
However, everything I've tried in regards to opening it has failed. PVRTexTool won't decompress it: the program only opens files with the .PVR extension and doesn't recognise this one. I've also tried TexturePacker, but it doesn't recognise it either. It's been baffling me for a few days; any help towards decompressing the file would be appreciated, thanks.

I can only offer some suggestions.
iOS restricts PVRTC textures to square, power-of-two sizes, and they will be either 2bpp or, more likely, 4bpp. If we initially assume no MIP mapping, there are therefore only a few possible sizes for the raw data, and from that you might be able to deduce the size of any header data and strip it off. If you can get at that raw data, I think the PowerVR SDK from Imagination Tech provides decoder source code in C (or at least it did last time I checked, though admittedly that was a few years ago). Also note that the data might be in Morton order.
If MIP mapping is used, then I think you'll need to include the entire MIP map chain in your size calculation, and note that the smallest levels are each rounded up to at least 8 bytes.
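If it helps, here is a minimal sketch of that deduction (my own illustration, not from the PowerVR SDK; the function name and the 1 KB header cut-off are arbitrary assumptions, and it ignores mipmaps and the small-texture padding):

function guessPvrtcLayouts(fileSizeInBytes) {
  // PVRTC raw data for a square power-of-two texture is width*height*bpp/8 bytes.
  const guesses = [];
  for (let size = 8; size <= 4096; size *= 2) {
    for (const bpp of [2, 4]) {
      const dataBytes = (size * size * bpp) / 8;
      const headerBytes = fileSizeInBytes - dataBytes;
      // A plausible match leaves a small, non-negative remainder for the header.
      if (headerBytes >= 0 && headerBytes < 1024) {
        guesses.push({ size: size, bpp: bpp, headerBytes: headerBytes });
      }
    }
  }
  return guesses;
}

Each candidate tells you how many leading bytes to strip before handing the rest to a PVRTC decoder.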

Related

WebGL: upload texture data to the GPU without a draw call

I'm using webgl to do YUV to RGB conversions on a custom video codec.
The video has to play at 30 fps. In order to make this happen I'm doing all my math every other requestAnimationFrame.
This works great, but I noticed when profiling that uploading the textures to the gpu takes the longest amount of time.
So I uploaded the "Y" texture and the "UV" texture separately.
Now the first "requestAnimationFrame" will upload the "Y" texture like this:
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, yTextureRef);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE, textureWidth, textureHeight, 0, gl.LUMINANCE, gl.UNSIGNED_BYTE, yData);
The second "requestAnimationFrame" will upload the "UV" texture in the same way, and make a draw call to the fragment shader doing the math between them.
But this doesn't change anything in the profiler. I still show nearly 0 GPU time on the frame that uploads the "Y" texture, and the same amount of time as before on the frame that uploads the "UV" texture.
However, if I add a draw call to my "Y" texture upload function, then the profiler shows the expected results: every frame has nearly half the GPU time.
From this I'm guessing the Y texture isn't really uploaded to the GPU by the texImage2D call alone.
However, I don't really want to draw the Y texture on the screen, as it doesn't have the corresponding UV texture to do anything with until a frame later. So is there any way to force the GPU to upload this texture without performing a draw call?
Update
I misunderstood the question
It really depends on the driver. The problem is OpenGL/OpenGL ES/WebGL's texture API really sucks. Sucks is a technical term for 'has unintended consequences'.
The issue is that the driver can't really fully upload the data until you draw, because it doesn't know what you're going to change. You could redefine any mip level, in any order and at any size, and then fix them all up afterwards, so until you draw it has no idea which other functions you're going to call to manipulate the texture.
Consider you create a 4x4 level 0 mip
gl.texImage2D(
    gl.TEXTURE_2D,
    0,        // mip level
    gl.RGBA,
    4,        // width
    4,        // height
    ...);
What memory should it allocate? 4 (width) * 4 (height) * 4 (RGBA)? But what if you then call gl.generateMipmap? Now it needs 4*4*4 + 2*2*4 + 1*1*4. OK, but now you allocate an 8x8 mip on level 3. You intend to then fill in levels 0 to 2 with 64x64, 32x32 and 16x16 respectively, but you did level 3 first. What should the driver do when you define level 3 before the larger levels? You then add level 4 as 4x4, level 5 as 2x2, and level 6 as 1x1.
As you can see, the API lets you change mips in any order. In fact I could allocate level 7 as 723x234 and then fix it later. The API is designed not to care until draw time, when all the mips must be the correct size; only at that point can the driver finally allocate memory on the GPU and copy the mips in.
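To make that concrete, here is a sketch of a perfectly legal but out-of-order sequence (null data just to keep it short):

// Define level 3 first, at a size that doesn't yet match anything else...
gl.texImage2D(gl.TEXTURE_2D, 3, gl.RGBA, 8, 8, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
// ...then fill in the larger levels afterwards.
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 64, 64, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texImage2D(gl.TEXTURE_2D, 1, gl.RGBA, 32, 32, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texImage2D(gl.TEXTURE_2D, 2, gl.RGBA, 16, 16, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
// Only at draw time does the driver know whether the chain is complete and
// consistent, so only then can it commit GPU memory and copy everything in.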
You can see a demonstration and test of this issue here. The test uploads mips out of order to verify that WebGL implementations correctly fail when they are not all the correct size, and correctly start working once they are.
You can see this was arguably a bad API design.
They added gl.texStorage2D to fix it, but gl.texStorage2D is only available in WebGL2, not WebGL1. And gl.texStorage2D has new issues of its own :(
TLDR; textures get uploaded to the driver when you call gl.texImage2D but the driver can't upload to the GPU until draw time.
Possible solution: use gl.texSubImage2D. Since it does not allocate memory, it's possible the driver could upload sooner. I suspect most drivers don't, because you can still use gl.texSubImage2D before the first draw, but it's worth a try.
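A sketch of what I mean (untested; whether it actually helps is entirely up to the driver):

// One-time setup: allocate the storage with a null upload.
gl.bindTexture(gl.TEXTURE_2D, yTextureRef);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE, textureWidth, textureHeight, 0, gl.LUMINANCE, gl.UNSIGNED_BYTE, null);

// Every frame: only replace the contents. No reallocation is implied,
// so a driver could in principle start the transfer right away.
gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, textureWidth, textureHeight, gl.LUMINANCE, gl.UNSIGNED_BYTE, yData);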
Let me also add that gl.LUMINANCE might be a bottleneck as well. IIRC DirectX doesn't have a corresponding format, and neither does OpenGL Core Profile. Both support a RED-only format, but WebGL1 does not. So LUMINANCE has to be emulated by expanding the data on upload.
Old Answer
Unfortunately there is no way to upload video to WebGL except via texImage2D and texSubImage2D
Some browsers try to make that happen faster. I notice you're using gl.LUMINANCE. You might try using gl.RGB or gl.RGBA and see if things speed up. It's possible browsers only optimize for the more common case. On the other hand it's possible they don't optimize at all.
Two extensions that would allow using video without a copy have been proposed, but AFAIK no browser has ever implemented them:
WEBGL_video_texture
WEBGL_texture_source_iframe
It's actually a much harder problem than it sounds like.
Video data can be in various formats. You mentioned YUV but there are others. Should the browser tell the app the format or should the browser convert to a standard format?
The problem with telling the app is that lots of devs will get it wrong, and then a user will provide a video in a format they don't support.
The WEBGL_video_texture extension converts to a standard format by rewriting your shaders. You tell it uniform samplerVideoWEBGL video and then it knows it can rewrite your color = texture2D(video, uv) to color = convertFromVideoFormatToRGB(texture(video, uv)). It also means browsers would have to rewrite shaders on the fly if you play videos in different formats.
Synchronization
It sounds great to get the video data into WebGL, but now you have the issue that by the time you get the data and render it to the screen you've added a few frames of latency, so the audio is no longer in sync.
How to deal with that is out of the scope of WebGL, as WebGL doesn't have anything to do with audio, but it does show that it's not as simple as just giving WebGL the data. Once you make the data available, people will ask for more APIs to get the audio and more info so they can delay one or both and keep them in sync.
TLDR; there is no way to upload video to WebGL except via texImage2D and texSubImage2D

Sprite Animation file sizes in SpriteKit

I looked into inverse kinematics as a way of doing animation, but overall thought I might want to proceed with sprite texture atlases to create animation instead. The only thing is I'm concerned about size.
I wanted to ask for some help with the "overall global solution":
I will have 100 monsters. Each has 25 frames of animation for an attack, idle, and spawning animation. Thus 75 frames in total per monster.
I'd imagine I want to do 3x, 2x and 1x animations, so that means even more frames (75 x 3 images per monster), unless I use PDF vectors, in which case it's just one size.
Is this approach just too much in terms of size? 25 frames of animation alone was 4MB on disk, but I'm not sure what happens in terms of compression when you load that into Xcode and a texture atlas.
Does anyone know if the approach I'm embarking on will take up a lot of space and potentially be a poor decision long term if I want even more monsters? Right now I only have a few monsters and other images and I'm already up to ~150MB when I check the app's storage on the phone, so it's hard to tell what would happen with many more monsters, but I feel like it would be prohibitively large, like 4GB+.
To me this sounds like the wrong approach, and yet everywhere I read, they encourage using sprites and atlases. What am I doing wrong? Too many frames of animation? Too many monsters?
Thanks!
So, you are correct that you will run into a problem. In general, the tutorials you find online simply ignore the issues of download size and memory use on device. When building a real game you will need to consider the total download size and the amount of memory on the actual device when rendering multiple animations at the same time on screen. There are three approaches: just store everything as PNG, make use of an animation format that compresses better than PNG, or encode things as H264. Each of these approaches has issues. If you would like to take a look at my solution to the memory use issue at runtime, have a peek at the SpriteKitFireAnimation link at this question. If you want to roll your own approach with H264, you can get lots of compression, but you will have issues with alpha channel support. The lazy thing to do is use PNGs: it will work and it supports the alpha channel, but PNGs will bloat your app and runtime memory use is heavy.
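To put rough numbers on the figures in the question (a back-of-the-envelope sketch only; real PNG and atlas sizes will vary, and 2x/3x frames are actually larger per frame than 1x):

const mbPerFrame = 4 / 25;       // ~160 KB per frame, from "25 frames was 4MB"
const framesPerMonster = 75;     // attack + idle + spawn
const scaleVariants = 3;         // 1x, 2x, 3x
const monsters = 100;
const totalMB = mbPerFrame * framesPerMonster * scaleVariants * monsters;
console.log(totalMB);            // ~3600 MB, i.e. the feared multi-GB range

That is why the better-compressing options above (or fewer frames, fewer scale variants, or vectors) matter so much here.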

Perceptual Image Comparison

I'm trying to do image comparison to detect changes in a video processing application. These are two images that look identical to me, but are different according to both
http://pdiff.sourceforge.net/
and http://www.itec.uni-klu.ac.at/lire/nightly/api/net/semanticmetadata/lire/imageanalysis/LireFeature.html
Can anyone explain the difference? Eventually I need to find a library that can detect differences that doesn't have any false positives.
The two images are different.
I used GIMP (open source) to stack the two images one on top of the other and set the top layer's mode to Difference. It showed a very faint black image, i.e. very little difference. I then used Curves to raise the tones, and it revealed what seem to be JPEG artifacts, even though the files given are PNG. I recommend GIMP; sometimes I use it instead of Photoshop.
Using GIMP to do a blink comparison between layers at 400% view, I would guess that the first image is closer to the original. The second may be a copy saved from the first, or from the original but at a lower quality setting.
It seems that the metadata has been stripped from both images (I haven't done a definitive look), so no clues there.
There was a program called Unique Filer that I used for years. It is tunable and rather good. But any comparator is likely to generate a number of false positives if you tune it well enough to make sure it doesn't miss duplicates. If you only want to catch images that are very similar like this pair, then you can tune it very tightly. It is old and may not work on Windows 7 or later.
I would like to find good image checkers / comparators too. I've considered writing my own program.
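If you do write your own, the simplest starting point is the same idea as the GIMP difference layer: a per-pixel difference with a tolerance, so that artifact-level noise is ignored. A rough sketch (my own, operating on two same-sized ImageData objects taken from a canvas; not a perceptual metric like pdiff):

function fractionOfDifferingPixels(imageA, imageB, tolerance) {
  const a = imageA.data, b = imageB.data;
  let differing = 0;
  for (let i = 0; i < a.length; i += 4) {
    const delta = Math.max(
      Math.abs(a[i] - b[i]),         // R
      Math.abs(a[i + 1] - b[i + 1]), // G
      Math.abs(a[i + 2] - b[i + 2])  // B
    );
    if (delta > tolerance) differing++;
  }
  return differing / (a.length / 4);
}

Tuning the tolerance is exactly the trade-off described above: too tight and you get false positives, too loose and you start missing real differences.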

Besides standard/progressive, the 3rd kind of JPEG compression: load by channel?

This question might look like an "open question" and many of you might be eager to close it, but please don't. Let me explain.
As we all know, JPEG has two kinds of compression (at least in the Photoshop save dialog):
optimized, where the image loads roughly line by line
progressive, where the image loads first as a coarse mosaic, then gets progressively better up to the original resolution
I have read a lot of PNG/JPEG optimization articles before, but now I've encountered this awesome third kind of compression, from a random Google Image search. The JPEG in question is this:
http://storage.googleapis.com/marc-pres/boston-event-1012/images/google-data-center.jpg
Try loading the link in Chrome/Firefox (in IE/Safari the image is only displayed once it has fully loaded). You can observe:
the image loads first in black & white
then what looks like the Red channel loads
next the Green channel loads
last the Blue channel loads
I tried loading it again over an emulated very slow connection and observed that the JPEG not only loads channel by channel, but in a progressive way as well. So the first thing loaded is a black-and-white mosaic, then a green-ish mosaic, then gradually a full-colour mosaic, and finally the full-resolution, full-colour image.
This is amazing technology. Suppose you are building an e-magazine where each page has a lot of pictures and you want the user to flip quickly through pages; this kind of image is exactly what works best. For a fast preview, load the black-and-white thumbnail; if the user stays, fully load the original image.
So my question is: How could I generate such image using Python Pillow or ImageMagick, or any kind of open source software?
If you think this question is inappropriate please comment, don't just close it.
Update 1:
It turns out Google used this technology in all of its JPEG pictures 1, 2 e.g. this
Update 2: I found another clue
The image data in a JPEG file can be sliced up in many different ways, and the slices (or "scans" as they're usually called) can be stored in the file in many different orders.
In most JPEG files, the first scan in the file contains all of the image's color components, interleaved together if it is a color image. In a non-progressive JPEG, the file will contain just that one scan. In a progressive JPEG, other scans will follow, each of which may contain one component or multiple components.
But there's nothing that requires it to be done that way. If the first scan in the file does not contain all the color components, we might call such a file "non-interleaved".
Your example files are non-interleaved, and they are also progressive. Progressive non-interleaved JPEGs seem to be more widely supported than non-progressive non-interleaved JPEGs.
The standard IJG libjpeg software is capable of creating non-interleaved files. It's not exactly easy, but you can use its cjpeg utility with the -scans option, which is documented in the wizard.txt file.
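As a hedged illustration only (this is my untested reading of wizard.txt, and the file names are arbitrary): each line of a scans script is "component-list: Ss Se Ah Al;", so listing one component per scan keeps every scan non-interleaved, and splitting the DC coefficients from the AC bands makes the file progressive:

# scans.txt - one component per scan: DC coefficients first, then full AC bands
0: 0 0 0 0;
1: 0 0 0 0;
2: 0 0 0 0;
0: 1 63 0 0;
1: 1 63 0 0;
2: 1 63 0 0;

cjpeg -scans scans.txt -outfile out.jpg in.ppm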

iOS: playing a frame-by-frame greyscale animation in a custom colour

I have a 32 frame greyscale animation of a diamond exploding into pieces (i.e. 32 PNG images @ 1024x1024).
My game consists of 12 separate colours, so I need to perform the animation in any desired colour.
This, I believe, rules out any Apple frameworks; it also rules out a lot of public code for frame-by-frame animation in iOS.
What are my potential solution paths?
These are the best SO links I have found:
Faster iPhone PNG Animations
frame by frame animation
Is it possible using video as texture for GL in iOS?
That last one just shows it may be possible to load an image into a GL texture each frame (he is doing it from the camera, so if I have everything stored in memory, that should be even faster).
I can see these options (listed laziest first, most optimised last):
option A
Each frame (courtesy of CADisplayLink), load the relevant image from file into a texture, and display that texture.
I'm pretty sure this is stupid, so on to option B.
option B
Preload all images into memory,
then, as per above, only we load from memory rather than from file.
I think this is going to be the ideal solution; can anyone give it a thumbs up or thumbs down?
option C
Preload all of my PNGs into a single GL texture of the maximum size, creating a texture atlas. Each frame, set the texture coordinates to the rectangle in the atlas for that frame.
While this is potentially a perfect balance between coding efficiency and performance efficiency, the main problem here is losing resolution: on older iOS devices the maximum texture size is 1024x1024. If we are cramming 32 frames into this (really this is the same as cramming 64), we would be at 128x128 for each frame. If the resulting animation is close to full screen on the iPad, this isn't going to hack it.
option D
Instead of loading into a single GL texture, load into a bunch of textures.
Moreover, we can squeeze 4 images into a single texture using all four channels.
I baulk at the sheer amount of fiddly coding required here. My RSI starts to tingle even thinking about this approach.
I think I have answered my own question here, but if anyone has actually done this or can see the way through, please answer!
If something higher performance than (B) is needed, it looks like the key is glTexSubImage2D http://www.opengl.org/sdk/docs/man/xhtml/glTexSubImage2D.xml
Rather than pull across one frame at a time from memory, we could arrange, say, 16 512x512 8-bit greyscale frames contiguously in memory, send this across to GL as a single 1024x1024 32-bit RGBA texture, and then split it up within GL using the above function.
This would mean that we are performing one [RAM->VRAM] transfer per 16 frames rather than per one frame.
Of course, for more modern devices we could get 64 instead of 16, since more recent iOS devices can handle 2048x2048 textures.
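The byte counts do line up, which is a quick sanity check on that packing idea (just arithmetic):

const greyscaleBytes = 16 * 512 * 512;     // sixteen 512x512 8-bit frames
const rgbaTextureBytes = 1024 * 1024 * 4;  // one 1024x1024 32-bit RGBA texture
console.log(greyscaleBytes === rgbaTextureBytes); // true; a 2048x2048 RGBA texture likewise holds 64 such frames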
I will first try technique (B) and leave it at that if it works (I don't want to over-code), and look at this if needed.
I still can't find any way to query how many GL textures it is possible to hold on the graphics chip. I have been told that when you try to allocate memory for a texture, GL just returns 0 when it has run out of memory. However, to implement this properly I would want to make sure that I am not sailing close to the wind re: resources; I don't want my animation to use up so much VRAM that the rest of my rendering fails.
You would be able to get this working just fine with the CoreGraphics APIs; there is no reason to dive deep into OpenGL for a simple 2D problem like this. For the general approach you should take to creating colored frames from a grayscale frame, see colorizing-image-ignores-alpha-channel-why-and-how-to-fix. Basically, you need to use CGContextClipToMask() and then render a specific color, so that what is left is the diamond colored in with the specific color you have selected. You could do this at runtime, or you could do it offline and create one video for each of the colors you want to support. It would be easier on your CPU to do the operation N times and save the results into files, but modern iOS hardware is much faster than it used to be. Beware of memory usage issues when writing video processing code; see video-and-memory-usage-on-ios-devices for a primer that describes the problem space. You could code it all up with texture atlases and complex OpenGL stuff, but an approach that makes use of videos would be a lot easier to deal with, and you would not need to worry so much about resource usage; see my library linked in the memory post for more info if you are interested in saving time on the implementation.
