I just crashed against the fact that cramming 60 960×640 PNG files into a UIImageView is a terrible mistake.
That said, I'm still required to show this animation, and since it's supposed to have a transparent background, I can't go for an MPMoviePlayer or something like that (or can I?). Besides, even if I could separate the different elements of the animation (which I can't without going back to the guy who gave it to me), most of them would still be quite large.
I'm at a complete loss for ideas. Do you have any?
Sounds like you might be better off playing a video, but I'm not sure about the transparent background. Using separate PNGs, you have to load each one into memory, which is (640 × 960) pixels × 32 bits/pixel (RGBA, since you need alpha) × 60 images = a lot of memory.
That said, how fast do you need to play the animation? Instead of using UIImageView animations, you could use a timer and manually manage loading and unloading the images, keeping only one or two in memory at a time.
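To put numbers on that, a rough sketch (assuming frames decode to 32-bit RGBA, which is typical for PNGs with an alpha channel on iOS):

```python
# Back-of-envelope memory estimate for a decoded PNG frame sequence.
# Assumes frames decode to 32-bit RGBA (4 bytes/pixel).
def sequence_memory_bytes(width, height, frames, bytes_per_pixel=4):
    return width * height * bytes_per_pixel * frames

total = sequence_memory_bytes(640, 960, 60)
print(total / (1024 ** 2))  # ~140.6 MB of decoded bitmaps
```

Keeping only one or two frames resident cuts that to a few megabytes, at the cost of decoding each frame just before it is shown.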
Related
I looked into inverse kinematics as a way of animating, but overall I think I'd rather proceed with sprite texture atlases for animation instead. The only thing is I'm concerned about size.
I wanted to ask for some help in the "overall global solution":
I will have 100 monsters. Each has 25 frames of animation for an attack, idle, and spawning animation. Thus 75 frames in total per monster.
I'd imagine I want to do 3x, 2x, and 1x animations, so that means even more frames (75 × 3 images per monster). Unless I use PDF vectors, in which case it's just one size.
Is this approach just too much in terms of size? 25 frames of animation alone was 4 MB on disk, but I'm not sure what happens in terms of compression when you load that into Xcode and a texture atlas.
Does anyone know whether the approach I'm embarking on will take up a lot of space and be a poor decision long term if I want even more monsters? Right now I only have a few monsters and other images, and I'm already up to ~150 MB when I check the app's storage on the phone, so it's hard to tell what would happen with way more monsters, but I suspect it would be prohibitively large (4 GB+).
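Putting that worry into rough numbers (assuming every animation compresses like the one measured, 4 MB per 25 frames, and naively counting the 2x/3x variants as the same size):

```python
# Crude app-size extrapolation from the figures in the question.
mb_per_frame = 4 / 25       # ~0.16 MB per PNG frame on disk
frames_per_monster = 75     # attack + idle + spawn, 25 frames each
monsters = 100
scales = 3                  # 1x, 2x, 3x variants, naively counted equal

total_mb = mb_per_frame * frames_per_monster * monsters * scales
print(total_mb)  # ~3600 MB, i.e. ~3.5 GB
```

In reality 2x and 3x assets are larger per frame, not equal, so this is if anything an underestimate.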
To me, this sounds like the wrong approach, and yet everywhere I read, they encourage using sprites and atlases. What am I doing wrong? Too many frames of animation? Too many monsters?
Thanks!
So, you are correct that you will run into a problem. In general, the tutorials you find online simply ignore the issues of download size and on-device memory use. When building a real game, you will need to consider the total download size and the amount of memory on the actual device when rendering multiple animations on screen at the same time. There are three approaches: store everything as PNG, use an animation format that compresses better than PNG, or encode things as H.264. Each of these approaches has issues. If you would like to look at my solution to the runtime memory-use issue, have a peek at the SpriteKitFireAnimation link at this question. If you want to roll your own approach with H.264, you can get a lot of compression, but you will have issues with alpha-channel support. The lazy thing to do is use PNGs; it will work and supports the alpha channel, but PNGs will bloat your app, and runtime memory use is heavy.
I've run into an optimization problem. I have to display a lot of images on the screen at the same time (up to 150). The images are laid out as if on a sphere that can roll.
So far it runs at 60 fps on a new device, but there are some micro-lags and I don't know their cause. The profiler reports 59-60 fps, but the missing frames are visible and feel like micro-stutter.
If I drastically reduce the number of image views, the problem disappears.
It doesn't seem to be GPU-limited, because I use about 30% of the GPU at most, but the CPU never goes above 50% either.
What could cause this, and how can I track down the source of the micro-lag? (The images are initialized with imageWithContentsOfFile:.)
What is the best solution to improve performance, given that at least half of the images are not on screen (they're on the far side of the sphere)?
-Recycle UIImageViews
-Set image views that aren't on screen to hidden (this is what I'm doing right now)
-Use a rasterization layer (but since the size changes every time, it will re-render, right?)
-Another magic solution that I'm not aware of.
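For the hiding/recycling options, the core of the idea is just back-half culling: only views on the hemisphere facing the camera need to exist. A language-neutral sketch (hypothetical layout of views around the sphere's equator, camera looking down the -z axis):

```python
# Back-half culling sketch: a view whose surface normal has z > 0 faces
# the viewer; the rest can be hidden or recycled.
import math

def visible_indices(count, rotation=0.0):
    """Place `count` points evenly around the equator and return the
    indices currently on the front half."""
    front = []
    for i in range(count):
        angle = rotation + 2 * math.pi * i / count
        z = math.cos(angle)  # z-component of the point's outward normal
        if z > 0:
            front.append(i)
    return front

print(len(visible_indices(150)))  # 75 of the 150 face the camera
```

Recycling the hidden views into the slots that become visible as the sphere rolls keeps the live view count at roughly half, like a table view's cell reuse.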
If you want to try the app:
https://itunes.apple.com/us/app/oolala-instant-hangout-app/id972713282?mt=8
1) The way those shapes are created also matters. The most performant way that I'm aware of is to pre-render images with an alpha channel.
2) There is also the "classic" JPEG decompression problem.
3) We used CALayer directly to render ~300 images in a UIScrollView on an iPhone 4S screen without dropped frames. Might be worth a shot.
_layer = [CALayer new];
_layer.contents = (id)_image.CGImage;               // hand the decoded bitmap straight to the layer
_layer.opaque = _opaque;                            // opaque layers skip alpha blending
_layer.contentsScale = [UIScreen mainScreen].scale; // render at the screen's native (Retina) scale
It was a long time ago and was used on iOS 5. You should probably disregard this suggestion unless you have a fast way to test it.
To clarify, I know that a texture atlas improves performance when using multiple distinct images. But I'm interested in how things are done when you are not doing this.
I tried doing some frame-by-frame animation manually in custom OpenGL, where each frame I bind a new texture and draw it on the same point sprite. It works, but it is very slow compared to what UIImageView achieves for the same task. I load all the textures up front, but the rebinding is done each frame. By comparison, UIImageView accepts the individual images, not a texture atlas, so I'd imagine it is doing something similar.
These are 76 images loaded individually, not as a texture atlas, and each is about 200 px square. In OpenGL, I suspect the bottleneck is the requirement to rebind a texture every frame. But how is UIImageView doing this, as I'd expect a similar bottleneck? Is UIImageView somehow creating an atlas behind the scenes so that no rebinding of textures is necessary? Since UIKit ultimately has OpenGL running beneath it, I'm curious how this must be working.
If there is a more efficient means of animating multiple textures than swapping out the bound texture each frame in OpenGL, I'd like to know, as it might hint at what Apple is doing in their framework.
If I did in fact get a new frame for each of 60 frames in a second, it would take about 1.27 seconds to animate through my 76 frames. Indeed that is what I get with UIImageView, but the OpenGL version takes about 3-4 seconds.
I would say your bottleneck is somewhere else. OpenGL is more than capable of doing an animation the way you are doing it. Since all the textures are loaded and you just bind another one each frame, there is no loading time or anything else. For comparison, I have an application that can generate or delete textures at runtime and can at some point have a great number of textures loaded on the GPU. I have to bind all of those textures every frame (not one per frame), while using the depth buffer, a stencil buffer, multiple FBOs, heavy user input, and about 5 threads bottlenecked into 1 to process all the GL code, and I have no trouble with the FPS at all.
Since you are working on iOS, I suggest you run some profilers to see what code is responsible for the overhead. And even if for some reason the time profiler tells you that the line with glBindTexture is taking too long, I would still say the problem is somewhere else.
So to answer your question: it is normal and great that UIImageView does its work so smoothly, and there should be no problem achieving the same performance with OpenGL. Though there are a few things to consider at this point. How can you be sure the image view does not skip images? You might be setting a pointer to a different image 60 times per second, but the image view might only redraw itself 30 times per second, using whatever image is assigned at that moment. With your GL code, on the other hand, you are forcing the application to redraw at 60 FPS regardless of whether it is capable of doing so.
Taking all of this into consideration, there is a thing called a display link (CADisplayLink) that Apple created for exactly what you want to do. The display link tells you how much time has elapsed between frames, and from that you should decide which texture to bind, rather than trying to force them all into a time frame that might be too short.
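The frame-selection idea a display link enables can be sketched language-neutrally (hypothetical names; on iOS the elapsed time would come from the CADisplayLink callback):

```python
# Derive the frame index from elapsed time instead of advancing one frame
# per callback, so slow callbacks skip frames rather than stretch the
# animation.
def frame_for_time(elapsed_seconds, fps, frame_count, loop=True):
    index = int(elapsed_seconds * fps)
    return index % frame_count if loop else min(index, frame_count - 1)

# 76 frames at 60 fps span ~1.27 s
print(frame_for_time(0.0, 60, 76))   # 0
print(frame_for_time(1.25, 60, 76))  # 75 (last frame)
```

If a callback arrives late, the computed index simply jumps ahead, which keeps the animation's wall-clock duration correct.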
And another thing: I have seen that if you try to present the render buffer at 100 FPS on most iOS devices (it might be all of them), you will only get 60 FPS, because the method that presents the render buffer will pause your thread if it has been called within less than 1/60 s. That said, it is practically impossible to display anything above 60 FPS on iOS devices, and anything running at 30+ FPS is considered good.
"Not as a texture atlas" is the sentence that is a red flag for me.
Using a texture atlas is a good thing: the texture is loaded into memory once, and then you just move the rectangle position to play the animation. It's fast because it's already all in memory. Any operation that involves constantly loading and reloading new image frames is going to be slower than that.
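To illustrate the rectangle-moving idea, a minimal sketch (hypothetical grid-packed atlas; the names are made up):

```python
# Atlas lookup: each animation frame is just a different source rectangle
# into one already-loaded sheet, so nothing is reloaded per frame.
def frame_rect(index, frame_w, frame_h, columns):
    """Return (x, y, w, h) of frame `index` in a left-to-right,
    top-to-bottom grid-packed atlas."""
    col = index % columns
    row = index // columns
    return (col * frame_w, row * frame_h, frame_w, frame_h)

print(frame_rect(0, 200, 200, 8))  # (0, 0, 200, 200)
print(frame_rect(9, 200, 200, 8))  # (200, 200, 200, 200)
```

Playing the animation is then just stepping `index` each tick and drawing that sub-rectangle of the single bound texture.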
You'd have to post source code to get any more exact an answer than that.
I have a list of PNG images that I want to show one after another as an animation. In most cases I use a UIImageView with animationImages and it works fine. But in a couple of cases my PNGs are 1280×768 (full-screen iPad) animations with 100+ frames. Using UIImageView is quite slow on the simulator (it takes too long to load the first time), and I believe it will be even slower on the device.
Is there any alternative that can show an image sequence smoothly? Maybe Core Animation? Is there a working example I can look at?
Core Animation can be used for vector/keyframe-based animation, not image sequences. Loading over a hundred full-screen PNGs on an iPad is a really bad idea; you'll almost certainly get a memory warning, if not outright termination.
You should be using a video to display this kind of animation. Performance will be considerably better. Is there any reason why you couldn't use an H.264 video for your animation?
Make a video of your pictures. It is the simplest and probably most reasonable approach.
If you want really good performance and full control over your animation, you can convert the pictures to the PVRTC4 texture format and draw them as billboards (textured sprites) with OpenGL. This can be a lot of work if you don't know how to do it.
Look at the second example at http://www.modejong.com/iPhone/
Extracts from that page:
There is also the UIImageView.animationImages API, but it quickly sucks up all the system memory when using more than a couple of decent-sized images.
I wanted to show a full screen animation that lasts 2 seconds, at 15 FPS that is a total of 30 PNG images of size 480x320. This example implements an animation oriented view controller that simply waits to read the PNG image data for a frame until it is needed.
Instead of allocating many megabytes, this class runs in about half a meg of memory with about 5-10% CPU utilization on a 2nd-gen iPhone. This example has also been updated to include the ability to optionally play an audio file via AVAudioPlayer while the animation is displayed.
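The memory figures quoted above are easy to check (assuming frames decode to 4-byte RGBA pixels):

```python
# One decoded 480x320 RGBA frame versus all 30 preloaded at once.
frame_bytes = 480 * 320 * 4    # bytes for a single decoded frame
all_frames = frame_bytes * 30  # bytes if the whole sequence is resident

print(frame_bytes / (1024 ** 2))  # ~0.59 MB, the "half a meg" quoted
print(all_frames / (1024 ** 2))   # ~17.6 MB if everything is preloaded
```

Reading each frame's data only when it is needed is what keeps the resident footprint near one frame's worth.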
I have a three-second PNG sequence (a logo animation) that I'd like to display right after my iOS app launches. Since this is the only animated sequence in the app, I'd prefer not to use Cocos2D.
But with UIImageView's animationImages, the app runs out of memory on iPod Touch devices.
Is there a more memory-conscious/efficient way to show this animation? Perhaps a sprite-sheet class that doesn't involve Cocos2D? Or something else?
If this is an animated splash screen or similar, note that the HIG frowns on such behavior (outside of fullscreen games, at least).
If you're undeterred by such arguments (or making a game), you might consider saving your animation as an MPEG-4 video and using MPMoviePlayerController to present it. With a good compressor, it should be possible to get the size and memory usage down quite a lot and still have a good quality logo animation.
I doubt you're going to find much improvement any other way. A sprite sheet, for example, is still going to be doing the same kind of work as a sequence of PNGs. The problem is that in most animations, a lot of the pixels are untouched from frame to frame; if you present it just as a series of images, you're wasting a lot of time and space on temporally duplicated pixels. This is why we have video codecs.
You could try manually loading/unloading the PNG images as needed; I don't know what your frame-rate requirements are. Also, consider a decent-quality JPG or an animated GIF. And you can always make the image smaller so it doesn't take up the whole screen. Just a few thoughts.