Cocos2d spritesheets and animation - iOS

I have a bit of a problem that I need help with. I have a sprite I need animated, but the sprite is 600x600 with 25 frames, so this becomes quite a large spritesheet, in fact bigger than the iPhone will allow. What would be a way to get around this limitation? Animate in Flash and export for cocos2d? Is that even possible?

Shrink it down. Why would you even need such a big sprite?
Or add the frames individually. (Less efficient)
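If you do add the frames individually, a rough sketch of that route (cocos2d 2.x API; the file names, frame count, and the sprite variable are placeholders):

    NSMutableArray *frames = [NSMutableArray array];
    for (int i = 1; i <= 25; i++) {
        // Each 600x600 frame is loaded as its own texture rather than one oversized sheet.
        NSString *name = [NSString stringWithFormat:@"explode_%02d.png", i];
        CCTexture2D *tex = [[CCTextureCache sharedTextureCache] addImage:name];
        CGRect rect = CGRectMake(0, 0, tex.contentSize.width, tex.contentSize.height);
        [frames addObject:[CCSpriteFrame frameWithTexture:tex rect:rect]];
    }
    CCAnimation *anim = [CCAnimation animationWithSpriteFrames:frames delay:1.0f/25.0f];
    [sprite runAction:[CCAnimate actionWithAnimation:anim]];

This avoids a single texture that exceeds the device limit, at the cost of more texture binds at runtime.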

For how to use animations and sprite sheets in cocos2d, read this; it is the best documentation.
Any image displayed by cocos2d will have to fit within the device's maximum texture dimensions, as cocos2d uses OpenGL and all images must be loaded into texture memory.
I’m not sure if some other API gives you a way to load larger images.
For more information read this discussion.
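If you want to check the limit on a given device at runtime, one way (assuming an active OpenGL ES context) is to query it directly:

    // Ask GL for the largest texture dimension this device supports.
    GLint maxTextureSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
    NSLog(@"max texture size on this device: %d x %d", maxTextureSize, maxTextureSize);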

When you make a sprite sheet using TexturePacker, output a .pvr.ccz file instead of a .png; it will reduce the file size. You can also load this texture in RGBA4444 format.
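For example, with a TexturePacker-generated .pvr.ccz atlas you can tell cocos2d (2.x API shown; the plist and frame names are placeholders) to decode it into a 16-bit format before loading:

    // Use 16-bit RGBA4444 instead of the default RGBA8888 to halve texture memory.
    [CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA4444];
    // The plist references the .pvr.ccz texture produced by TexturePacker.
    [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"sprites.plist"];
    CCSprite *sprite = [CCSprite spriteWithSpriteFrameName:@"frame_01.png"];
    // Restore the default for other textures that need full quality.
    [CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA8888];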

Related

Sprite Animation file sizes in SpriteKit

I looked into inverse kinematics as a way of doing animation, but overall thought I might want to proceed with using sprite texture atlases to create animation instead. The only thing is, I'm concerned about size.
I wanted to ask for some help in the "overall global solution":
I will have 100 monsters. Each has 25 frames of animation for an attack, idle, and spawning animation. Thus 75 frames in total per monster.
I'd imagine I want to do 3x, 2x and 1x animations, so that means even more frames (75 x 3 images per monster), unless I use PDF vectors, in which case it's just one size.
Is this approach just too much in terms of size? 25 frames of animation alone was 4MB on the hard disk, but I'm not sure what happens in terms of compression when you load that into Xcode and a texture atlas.
Does anyone know if this approach I'm embarking on will take up a lot of space and potentially be a poor decision long term if I want even more monsters? Right now I only have a few monsters and other images and I'm already up to ~150MB when I look at the app's storage on the phone, so it's hard to tell what would happen in the long term with way more monsters, but I feel like it would be prohibitively large, like 4GB+.
To me, this sounds like the wrong approach, and yet everywhere I read, they encourage using sprites and atlases accordingly. What am I doing wrong? Too many frames of animation? Too many monsters?
Thanks!
So, you are correct that you will run into a problem. In general, the tutorials you find online simply ignore this issue of download size and memory use on device. When building a real game you will need to consider total download size and the amount of memory on the actual device when rendering multiple animations at the same time on screen. There are three approaches: store everything as PNG, make use of an animation format that compresses better than PNG, or encode things as H.264. Each of these approaches has issues. If you would like to take a look at my solution to the memory use issue at runtime, have a peek at the SpriteKitFireAnimation link at this question. If you want to roll your own approach with H.264, you can get lots of compression, but you will have issues with alpha channel support. The lazy thing to do is use PNGs; it will work and support the alpha channel, but PNGs will bloat your app and runtime memory use is heavy.
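For reference, if you do go with the plain texture-atlas/PNG route in SpriteKit, the basic loading pattern is just this (atlas name, frame names, frame count, and the monster node are hypothetical):

    SKTextureAtlas *atlas = [SKTextureAtlas atlasNamed:@"MonsterAttack"];
    NSMutableArray *frames = [NSMutableArray array];
    for (NSUInteger i = 1; i <= 25; i++) {
        [frames addObject:[atlas textureNamed:[NSString stringWithFormat:@"attack_%02lu", (unsigned long)i]]];
    }
    // 25 frames at 1/25 s per frame = a one-second attack animation.
    SKAction *attack = [SKAction animateWithTextures:frames timePerFrame:1.0/25.0];
    [monster runAction:attack];

Memory-wise this keeps every decoded frame resident, which is exactly the cost the answer above is warning about.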

How does UIImageView animate so smoothly? (Or: how to get OpenGL to efficiently animate frame by frame textures)

To clarify, I know that a texture atlas improves performance when using multiple distinct images. But I'm interested in how things are done when you are not doing this.
I tried doing some frame-by-frame animation manually in custom OpenGL where each frame I bind a new texture and draw it on the same point sprite. It works, but it is very slow compared to UIImageView's ability to do the same thing. I load all the textures up front, but the rebinding is done each frame. By comparison, UIImageView accepts the individual images, not a texture atlas, so I'd imagine it is doing something similar.
These are 76 images loaded individually, not as a texture atlas, and each is about 200px square. In OpenGL, I suspect the bottleneck is the requirement to rebind a texture at every frame. But how is UIImageView doing this, as I'd expect a similar bottleneck? Is UIImageView somehow creating an atlas behind the scenes so no rebinding of textures is necessary? Since UIKit ultimately has OpenGL running beneath it, I'm curious how this must be working.
If there is a more efficient means to animate multiple textures, rather than swapping out different bound textures each frame in OpenGL, I'd like to know, as it might hint at what Apple is doing in their framework.
If I did in fact get a new frame for each of 60 frames in a second, then it would take about 1.25 seconds to animate through my 76 frames. Indeed I get that with UIImageView, but OpenGL is taking about 3-4 seconds.
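For reference, the UIImageView path being compared against is essentially the stock animationImages API; a minimal sketch (image names are hypothetical):

    NSMutableArray *frames = [NSMutableArray arrayWithCapacity:76];
    for (NSUInteger i = 0; i < 76; i++) {
        [frames addObject:[UIImage imageNamed:[NSString stringWithFormat:@"frame_%02lu", (unsigned long)i]]];
    }
    UIImageView *view = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 200, 200)];
    view.animationImages = frames;
    view.animationDuration = 76.0 / 60.0;   // ~1.27 s, i.e. one image per display refresh
    view.animationRepeatCount = 1;
    [view startAnimating];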
I would say your bottleneck is somewhere else. OpenGL is more than capable of doing an animation the way you are doing it. Since all the textures are loaded and you just bind a different one each frame, there is no loading time or anything else. For comparison, I have an application that can generate or delete textures at runtime and can at some point have a great number of textures loaded on the GPU. I have to bind all those textures every frame (not one per frame), while using a depth buffer, a stencil buffer, multiple FBOs, heavy user input, and about five threads bottlenecked into one to process all the GL code, and I have no trouble with the FPS at all.
Since you are working with iOS, I suggest you run some profilers to see what code is responsible for the overhead. And if for some reason your time profiler tells you that the line with glBindTexture is taking too long, I would still say that the problem is somewhere else.
So to answer your question, it is normal and great that UIImageView does its work so smoothly, and there should be no problem achieving the same performance with OpenGL. THOUGH, there are a few things to consider at this point. How can you say that the image view does not skip images? You might be setting a pointer to a different image 60 times per second, but the image view might only redraw itself 30 times per second and, when it does, simply use whatever image is currently assigned to it. On the other hand, with your GL code you are forcing the application to redraw at 60 FPS regardless of whether it is capable of doing so.
Taking all of this into consideration, there is a thing called a display link that Apple's developers created for you. I believe it is meant for exactly what you want to do. The display link will tell you how much time has elapsed between frames, and from that you should decide which texture to bind, rather than trying to force them all into a time frame that might be too short.
And another thing: I have seen that if you try to present the render buffer at 100 FPS on most iOS devices (might be all), you will only get 60 FPS, as the method to present the render buffer will pause your thread if it has been called within less than 1/60 s. That being said, it is essentially impossible to display anything at more than 60 FPS on iOS devices, and anything running at 30+ FPS is considered good.
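A minimal sketch of the display-link-driven idea above (the elapsed/textureIDs properties, the 60 fps source rate, and the render helper are hypothetical placeholders):

    // Set up once: fire a callback in step with the display.
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self selector:@selector(step:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];

    - (void)step:(CADisplayLink *)link {
        // Advance by real elapsed time, then pick the frame that time falls on,
        // instead of assuming every callback is exactly one new animation frame.
        self.elapsed += link.duration;
        NSUInteger index = (NSUInteger)(self.elapsed * 60.0) % self.textureIDs.count;
        glBindTexture(GL_TEXTURE_2D, [self.textureIDs[index] unsignedIntValue]);
        [self drawQuadAndPresentRenderbuffer];   // hypothetical render/present helper
    }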
"not as a texture atlas" is the sentence that is a red flag for me.
Using a texture atlas is a good thing: the texture is loaded into memory once, and then you just move the rectangle position to play the animation. It's fast because it's already all in memory. Any operation which involves constantly loading and reloading new image frames is going to be slower than that.
You'd have to post source code to get any more exact an answer than that.
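To make the "move the rectangle" idea concrete: with an atlas, the texture stays bound and only the texture coordinates change per frame. A rough sketch (the frame index and grid layout values are hypothetical):

    // Frame i lives at (col, row) in a grid atlas; compute its UV rectangle.
    int col = i % columns;
    int row = i / columns;
    GLfloat u0 = (col * frameW) / (GLfloat)atlasW;
    GLfloat v0 = (row * frameH) / (GLfloat)atlasH;
    GLfloat u1 = u0 + frameW / (GLfloat)atlasW;
    GLfloat v1 = v0 + frameH / (GLfloat)atlasH;
    // One quad in triangle-strip order; no glBindTexture call per frame.
    GLfloat texCoords[] = { u0, v1,   u1, v1,   u0, v0,   u1, v0 };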

iOS Animated Logo -- Low memory alternatives

I have a three-second PNG sequence (a logo animation) that I'd like to display right after my iOS app launches. Since this is the only animated sequence in the app, I'd prefer not to use Cocos2D.
But with UIImageView's animationImages, the app runs out of memory on iPod Touch devices.
Is there a more memory-conscious/efficient way to show this animation? Perhaps a sprite sheet class that doesn't involve Cocos2D? Or something else?
If this is an animated splash screen or similar, note that the HIG frowns on such behavior (outside of fullscreen games, at least).
If you're undeterred by such arguments (or making a game), you might consider saving your animation as an MPEG-4 video and using MPMoviePlayerController to present it. With a good compressor, it should be possible to get the size and memory usage down quite a lot and still have a good quality logo animation.
I doubt you're going to find much improvement any other way -- a sprite sheet, for example, is still going to be doing the same kind of work as a sequence of PNGs. The problem is that for most animations, a lot of the pixels are untouched from frame to frame... if you're presenting it just as a series of images, you're wasting a lot of time and space on temporally duplicated pixels. This is why we have video codecs.
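A rough sketch of that movie-player route (the file name is hypothetical; MPMoviePlayerController was the standard player API at the time):

    NSURL *url = [[NSBundle mainBundle] URLForResource:@"logo_intro" withExtension:@"m4v"];
    // Keep a strong reference to the player (e.g. a property), or playback will stop.
    MPMoviePlayerController *player = [[MPMoviePlayerController alloc] initWithContentURL:url];
    player.controlStyle = MPMovieControlStyleNone;   // no transport controls over the logo
    player.view.frame = self.view.bounds;
    [self.view addSubview:player.view];
    [player play];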
You could try manually loading/unloading the png images as needed. I don't know what your frame rate requirements are. Also, consider a decent-quality jpg or animated gif. And you can always make the image smaller so it doesn't take up the whole screen. Just a few thoughts.

AS3: Is it possible to generate animated sprite sheets at runtime from vector?

I would like to use Bitmaps in my Actionscript games.
For me this represents a large change in my workflow as I have always used Vector but Bitmaps are really so much faster to render in certain circumstances. As far as I can see, 90% of all my game assets can be bitmaps.
Firstly, are there any good tools for working with Vector to BitmapData? Libraries or OpenSource utilities?
I know you can just draw to a BitmapData, and I do that, but what about Animations? What about a MovieClip of a laughing cow? How can I render that MovieClip at runtime to some kind of Bitmap version?
But more complex than that... What about situations where you do not have the MovieClip in a raw form?
Imagine 10000 cogs turning at the same rate which is generated with code. This is hard work for the processor, so drawing it to a Bitmap for the duration of 1 revolution, would replace 10000 cogs with a SpriteSheet. I could destroy the cogs, and keep the SpriteSheet.
Can anyone offer me any resources or Google keywords I can search for? I'm not sure of the technique, but it seems to make sense, especially with Starling... my vectors are going to have to become sprite sheets at some point.
Thanks.
The basic process of converting a movie clip to a sprite sheet is this:
1. Choose a MovieClip.
2. Get the bounds of the movie clip. You need to get the width and height of the widest and tallest frame of the animation.
3. Get the number of frames of the movie clip.
4. Make a new BitmapData object that is as wide as the number of frames times the width of a frame, and as high as one frame.
5. Loop through each frame of the clip, and call bitmapData.draw() on each frame. Be sure to offset the matrix of the draw command on each frame by the width of one sprite frame.
The end result will be a single BitmapData object with each frame rendered to it.
From there you can follow this tutorial on blitting.
http://www.8bitrocket.com/2008/07/02/tutorial-as3-the-basics-of-tile-sheet-animation-or-blitting/
Converting MovieClips to sprite sheets at runtime is not exactly a trivial task, and you may be better served to build your sprite sheets before compilation and use a framework with a blitting engine, such as Flixel or FlashPunk (I'm not very familiar with Starling, but that would work too, I presume). There are a couple of decent MovieClip/SWF to PNG converters:
Zoe
SWFSheet
TexturePacker
SWFSpriteSheet
However, if you are intent on making spritesheets at runtime, you can probably repurpose some of the code from Zoe (it is open source). Take a look at the CaptureSWF class, particularly capture() and handleVariableCaptureFrames(). These methods are the meat of converting individual frames of a MC to BitmapData, which can then be used to build spritesheets.

iOS: playing a frame-by-frame greyscale animation in a custom colour

I have a 32-frame greyscale animation of a diamond exploding into pieces (i.e. 32 PNG images @ 1024x1024).
My game consists of 12 separate colours, so I need to perform the animation in any desired colour.
This, I believe, rules out any Apple frameworks; it also rules out a lot of public code for animating frame by frame in iOS.
What are my potential solution paths?
These are the best SO links I have found:
Faster iPhone PNG Animations
frame by frame animation
Is it possible using video as texture for GL in iOS?
That last one just shows it may be possible to load an image into a GL texture each frame (he is doing it from the camera, so if I have everything stored in memory, that should be even faster).
I can see these options (listed laziest first, most optimised last):
option A
Each frame (courtesy of CADisplayLink), load the relevant image from file into a texture, and display that texture.
I'm pretty sure this is stupid, so onto option B
option B
Preload all images into memory.
Then, as per above, except we load from memory rather than from file.
I think this is going to be the ideal solution, can anyone give it the thumbs up or thumbs down?
option C
Preload all of my PNGs into a single GL texture of the maximum size, creating a texture atlas. Each frame, set the texture coordinates to the rectangle in the atlas for that frame.
While this is potentially a perfect balance between coding efficiency and performance efficiency, the main problem here is losing resolution; on older iOS devices the maximum texture size is 1024x1024. If we are cramming 32 frames into this (really this is the same as cramming 64), we would be at 128x128 for each frame. If the resulting animation is close to full screen on the iPad, this isn't going to hack it.
option D
Instead of loading into a single GL texture, load into a bunch of textures.
Moreover, we can squeeze 4 images into a single texture using all four channels.
I baulk at the sheer amount of fiddly coding required here. My RSI starts to tingle even thinking about this approach
I think I have answered my own question here, but if anyone has actually done this or can see the way through, please answer!
If something higher performance than (B) is needed, it looks like the key is glTexSubImage2D http://www.opengl.org/sdk/docs/man/xhtml/glTexSubImage2D.xml
Rather than pull across one frame at a time from memory, we could arrange say 16 512x512x8-bit greyscale frames contiguously in memory, send this across to GL as a single 1024x1024x32bit RGBA texture, and then split it within GL using the above function.
This would mean that we are performing one [RAM->VRAM] transfer per 16 frames rather than per one frame.
Of course, for more modern devices we could get 64 instead of 16, since more recent iOS devices can handle 2048x2048 textures.
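A minimal sketch of the glTexSubImage2D part of this idea (the texture ID and the pointer to the packed frame bytes are hypothetical; the texture is allocated once at full size and then overwritten in place):

    // Allocate the full-size texture once, up front (data may be NULL).
    glBindTexture(GL_TEXTURE_2D, animTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 1024, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    // Per update: push the next packed block of frames into the existing texture
    // instead of creating and binding a brand-new texture each time.
    glBindTexture(GL_TEXTURE_2D, animTexture);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1024, 1024,
                    GL_RGBA, GL_UNSIGNED_BYTE, packedFrameBytes);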
I will first try technique (B) and leave it at that if it works ( I don't want to over code ), and look at this if needed.
I still can't find any way to query how many GL textures it is possible to hold on the graphics chip. I have been told that when you try to allocate memory for a texture, GL just returns 0 when it has run out of memory. However, to implement this properly I would want to make sure that I am not sailing close to the wind re: resources... I don't want my animation to use up so much VRAM that the rest of my rendering fails...
You would be able to get this working just fine with CoreGraphics APIs; there is no reason to dive deep into OpenGL for a simple 2D problem like this. For the general approach you should take to creating colored frames from a grayscale frame, see colorizing-image-ignores-alpha-channel-why-and-how-to-fix. Basically, you need to use CGContextClipToMask() and then render a specific color so that what is left is the diamond colored in with the specific color you have selected. You could do this at runtime, or you could do it offline and create one video for each of the colors you want to support. It would be easier on your CPU if you do the operation N times and save the results into files, but modern iOS hardware is much faster than it used to be. Beware of memory usage issues when writing video processing code; see video-and-memory-usage-on-ios-devices for a primer that describes the problem space. You could code it all up with texture atlases and complex OpenGL stuff, but an approach that makes use of videos would be a lot easier to deal with and you would not need to worry so much about resource usage; see my library linked in the memory post for more info if you are interested in saving time on the implementation.
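For the colouring step itself, a sketch of the mask-and-fill idea with CoreGraphics (assuming 'grey' is one UIImage frame whose alpha carries the diamond shape, and 'tint' is the chosen UIColor):

    UIGraphicsBeginImageContextWithOptions(grey.size, NO, grey.scale);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGRect rect = CGRectMake(0.0, 0.0, grey.size.width, grey.size.height);
    // Flip the context so the CGImage mask is not applied upside down.
    CGContextTranslateCTM(ctx, 0.0, rect.size.height);
    CGContextScaleCTM(ctx, 1.0, -1.0);
    // Everything drawn from here on is clipped to the frame's mask...
    CGContextClipToMask(ctx, rect, grey.CGImage);
    // ...so filling with the tint colour leaves a coloured diamond on a clear background.
    [tint setFill];
    CGContextFillRect(ctx, rect);
    // 'colored' is this frame rendered in the chosen colour.
    UIImage *colored = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();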
