I am creating an app that represents the pages of a book, with animation and interactive areas. There is one character who is constant throughout, but each page represents them in a different look, so I cannot re-use the frames very easily. This character has wings, legs and eyes, which all need to move differently. What I am wondering is: what is the best way to take them from the PSD into the app? The two approaches I can think of are:
1. Create a separate PNG for each frame of the animation and then cycle through them (these would be combined into a single sprite atlas).
2. Split the character into their parts and then position, rotate, scale and move them in the app manually.
The main reason I am considering point 2 is that if I do point 1, I will need to create a lot of frames of animation for each page, and also create them all twice to cater for normal and retina displays.
Please let me know what the correct approach for this may be and if there is anything I should keep in mind.
Thanks
Option 1 sounds much more feasible. 300 frames is a bit too much, but you don't have to load all of them into memory at the same time. Divide your frames into multiple spritesheets of 1024×1024 and make sure all the frames of the same animation are on a single spritesheet. That way, at any given moment, only a single texture needs to be loaded in memory, which I guess is the minimum anyway.
You can also optimize a bit further by creating separate animations for things that behave the same in different poses. For example, if the eyes blink exactly the same way in different poses, you can stop creating separate frames for each pose just for blinking. Just take out the eyes (ouch!), create a separate animation for them, and place it over your character's animation node.
Going with option 2 would create unnecessary complications, both for you and the poor device.
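If it helps, here is a minimal cocos2d-style sketch of the atlas animation plus the separate eye overlay; the plist/frame names, frame count and offset are all made up for the example:

    // Load the atlases (e.g. exported from TexturePacker) into the frame cache.
    [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"body.plist"];
    [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"eyes.plist"];

    // Build the body animation from its frames.
    NSMutableArray *bodyFrames = [NSMutableArray array];
    for (int i = 1; i <= 12; i++) {
        NSString *name = [NSString stringWithFormat:@"body_%02d.png", i];
        [bodyFrames addObject:
            [[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:name]];
    }
    CCSprite *body = [CCSprite spriteWithSpriteFrameName:@"body_01.png"];
    CCAnimation *bodyAnim = [CCAnimation animationWithSpriteFrames:bodyFrames
                                                             delay:1.0f/24.0f];
    [body runAction:[CCRepeatForever actionWithAction:
                        [CCAnimate actionWithAnimation:bodyAnim]]];

    // The blink is its own, much shorter animation layered on top, so each
    // pose does not need a full set of body frames just for blinking.
    CCSprite *eyes = [CCSprite spriteWithSpriteFrameName:@"eyes_open.png"];
    eyes.position = ccp(0, 40); // offset relative to the body, made up here
    [body addChild:eyes];
    // ...then run the blink animation on `eyes` the same way as on `body`.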
I looked into inverse kinematics as a way of doing animation, but overall I think I want to proceed with sprite texture atlases instead. The only thing is, I'm concerned about size.
I wanted to ask for some help in the "overall global solution":
I will have 100 monsters. Each has 25 frames of animation apiece for its attack, idle, and spawning animations, so 75 frames in total per monster.
I'd imagine I want 3x, 2x and 1x variants, so that means even more images (75 × 3 per monster). Unless I use PDF vectors, in which case it's just one size.
Is this approach just too much in terms of size? 25 frames of animation alone was 4 MB on the hard disk, but I'm not sure what happens in terms of compression when you load that into Xcode and a texture atlas.
Does anyone know whether the approach I'm embarking on will take up a lot of space and potentially be a poor decision long term if I want even more monsters? Right now I only have a few monsters plus other images, and I'm already up to ~150 MB when I check the app's storage on the phone, so it's hard to tell what would happen with many more monsters, but I suspect it would be prohibitively large, like 4 GB+.
To me, this sounds like the wrong approach, and yet everywhere I read, they encourage using sprites and atlases. What am I doing wrong? Too many frames of animation? Too many monsters?
Thanks!
So, you are correct that you will run into a problem. In general, the tutorials you find online simply ignore the issues of download size and memory use on device. When building a real game you will need to consider the total download size and the amount of memory on the actual device when rendering multiple animations at the same time on screen.
There are three approaches: store everything as PNG, use an animation format that compresses better than PNG, or encode things as H264. Each of these approaches has issues. If you would like to take a look at my solution to the runtime memory use issue, have a peek at the SpriteKitFireAnimation link at this question. If you want to roll your own approach with H264, you can get lots of compression, but you will have issues with alpha channel support. The lazy thing to do is use PNGs; it will work and supports the alpha channel, but PNGs will bloat your app and runtime memory use will be heavy.
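For reference, the lazy PNG route in SpriteKit looks roughly like this; a sketch assuming one texture atlas per animation (names like monster01_idle are made up), loaded on demand so only the animations in use occupy memory:

    // Load only the atlas for the animation that is about to play.
    SKTextureAtlas *atlas = [SKTextureAtlas atlasNamed:@"monster01_idle"];
    NSMutableArray *frames = [NSMutableArray array];
    for (int i = 1; i <= 25; i++) {
        [frames addObject:
            [atlas textureNamed:[NSString stringWithFormat:@"idle_%02d", i]]];
    }
    SKSpriteNode *monster = [SKSpriteNode spriteNodeWithTexture:frames[0]];
    [monster runAction:[SKAction repeatActionForever:
        [SKAction animateWithTextures:frames timePerFrame:1.0/15.0]]];

    // When the monster goes away, drop all references to `atlas` and
    // `frames` so the textures can be evicted from memory.

Even with on-demand loading, the download-size problem remains; that is where the better-compressing formats or H264 come in.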
I am taking a stab at John Conway's Game of Life [wiki] & [demo]. I have developed a small program in C to calculate the next state, using a 1D array (but with 2D array logic).
I am hoping to make a small iOS app out of this (porting it to Objective-C!), and am wondering the best and fastest way to render a grid like the one seen in the video. Note, it would have to render every fraction of a second, and would use an array of 1s and 0s to determine each "block's" respective colour.
Edit: I'm probably looking at around 10 frames/sec, but a very large grid. It'd be rendering hundreds of thousands of squares. Of course, if this isn't physically possible with iPhone/iPad technology then I'll reduce the grid size. The size is variable without issue; it just looks more 'epic' on a grand scale.
Any suggestions will help; I've never touched anything of this nature before.
The best way depends on your criteria. Fastest would probably be to use OpenGL. You might even be able to write a shader to do the entire simulation. However, OpenGL is hard. Really hard.
I suspect that using Core Graphics and implementing code in a view's drawRect method that renders the array of cells onto the screen would be fast enough. It depends on how many cells you have and how many frames/second you want to draw.
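A sketch of what that drawRect approach could look like, assuming the view exposes the 1D cell array and grid dimensions (cells, rows and columns are made-up names):

    // In a UIView subclass; `cells` holds the 0/1 states, row-major.
    - (void)drawRect:(CGRect)rect {
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGFloat cellSize = self.bounds.size.width / self.columns;
        for (NSInteger row = 0; row < self.rows; row++) {
            for (NSInteger col = 0; col < self.columns; col++) {
                BOOL alive = self.cells[row * self.columns + col] != 0;
                CGContextSetFillColorWithColor(ctx, alive
                    ? [UIColor blackColor].CGColor
                    : [UIColor whiteColor].CGColor);
                CGContextFillRect(ctx, CGRectMake(col * cellSize,
                    row * cellSize, cellSize, cellSize));
            }
        }
    }
    // After computing each generation, call -setNeedsDisplay to redraw.

At hundreds of thousands of cells, this per-rect filling may be too slow; batching same-colored cells or rendering into a bitmap first are the usual next steps before reaching for OpenGL.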
I have been using plist and PNG atlases for the game I am developing, and I've noticed a slight performance speed-up, keeping the 60 fps; as a side note, my app has not crashed so far.
The thing is, I noticed I have been using SpriteFrameCache with a plist to do CCActions and animations for my characters (sprites). However, for some of the characters I've been using SpriteBatchNode, but that was by accident; since I am relatively new to in-depth game development, I didn't notice the difference before. They both work, but I feel like they are the same, just that one is easier to implement than the other. I was thinking that perhaps one was developed in an earlier version...
So my question is: is there a difference between the two? Will my game benefit from using SpriteFrameCache over SpriteBatchNode?
Thanks for the help.
FYI: this isn't slowing down my development; it's just a question, because when my game is finished I may want to optimize its performance.
Batch nodes draw all child sprites in one draw call.
Sprite frames hold a reference to a texture and define a region in the texture to draw from. The cache allows you to access sprite frames by name.
The two are different concepts; they are not replacements for each other. You use sprite frames to create sprites or sprite-frame animations. Separately, sprite batching speeds up rendering of two or more sprites that share the same texture.
Note that if you use a batch node with only a single child sprite, this will be no different from rendering the sprite without the batch node: both result in a single draw call, so there is no positive effect in using the batch node.
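To make the distinction concrete, a cocos2d 2.x-style sketch that uses both together (the atlas and frame names are made up):

    // The frame cache maps names to regions of the atlas texture.
    [[CCSpriteFrameCache sharedSpriteFrameCache]
        addSpriteFramesWithFile:@"characters.plist"];

    // The batch node draws all of its children in one draw call, provided
    // they all use the same texture (characters.png here).
    CCSpriteBatchNode *batch =
        [CCSpriteBatchNode batchNodeWithFile:@"characters.png"];
    [self addChild:batch];

    CCSprite *hero  = [CCSprite spriteWithSpriteFrameName:@"hero_idle_01.png"];
    CCSprite *enemy = [CCSprite spriteWithSpriteFrameName:@"enemy_idle_01.png"];
    [batch addChild:hero];
    [batch addChild:enemy];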
Let's say I want to display a customizable (2D, cartoon-like) character, where some properties, e.g. eye color, hair style, clothing etc., can be chosen from a predefined set of options. Now I want to animate the character. What's the best way to deal with the customization?
1) For example, I could make a sprite sheet for each combination of properties. That's not very memory efficient and not very flexible, but probably gives the best performance.
2) I could compose the character from various layers, where each property only affects one layer. Thus, I could make a sprite-sheet for the body, a collection of sprite-sheets for the eyes (one for each eye color) etc.
2a) In that case, I could merge the selected sprite-sheets in order to generate a single sprite-sheet containing the animation of the customized character.
2b) Alternatively, I could keep the sprite-sheets separate and try to animate them simultaneously as layers. I fear that this might become a problem performance-wise.
3) I could try to modify the layers programmatically, e.g. use a sprite-sheet for the eyes as a mask and map some texture onto it before merging it down to a single sprite-sheet. I would think this is a very flexible approach when it comes to simple properties like eye color, but it might become difficult for things like hair style. I am aware that this depends very much on the character, and a general answer is probably difficult.
I assume that my problem is not new, so there is probably a standard approach to it.
Concerning the platform, I'm particularly interested in iOS and try to avoid OpenGL (well, I'm open-minded). Maybe there is a nice framework that can help me here?
Thanks!
Depending on what you're working on, you might want to create part or all of the animations in another tool, such as Flash. It is much easier to work in a visual environment.
Then there are tools that take SWF files and create sprite sheets that you would then animate in cocos2d.
That is a common game creation workflow.
You probably want to take a look at how to create sprites in cocos2d.
Cocos2d offers abstractions for animating single parts and composing them (like CCSpriteBatchNode or CCNode), and there are companion tools that help you pack sprites into sprite sheets (e.g. TexturePacker) and build levels (e.g. LevelHelper).
Cocos2d is an open-source framework and it is widely used. There is also cocos3d, but I have never used it :).
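For option 2b from the question, a rough cocos2d sketch of composing a character from layered parts; it assumes each property has its own sprite sheet and that all layers share frame timing (all names are made up):

    // One parent node for the character; each customizable property is a
    // child sprite animated from its own sprite sheet.
    CCNode *character = [CCNode node];

    CCSprite *bodyLayer = [CCSprite spriteWithSpriteFrameName:@"body_walk_01.png"];
    CCSprite *eyesLayer = [CCSprite spriteWithSpriteFrameName:@"eyes_blue_walk_01.png"];
    CCSprite *hairLayer = [CCSprite spriteWithSpriteFrameName:@"hair_short_walk_01.png"];

    [character addChild:bodyLayer z:0];
    [character addChild:eyesLayer z:1];
    [character addChild:hairLayer z:2];

    // Run CCAnimate actions of identical length on every layer to keep them
    // in sync; swapping eyesLayer's frames changes eye color without
    // touching the other layers.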
I am wondering if what I'm attempting is just a bad idea. I'm currently working in MonoTouch. Is it possible to draw a screen-sized buffer (on my iPhone 4 it's about 320×460) onto a UIView of equal size fast enough that animated changes to that buffer look smooth to the end user (it needs to be around 20 ms per draw)?
I've attempted many different implementations. The best one so far seems to be using an in-memory CGLayer and calling context.DrawLayer() to apply it to the view inside of Draw(). But even that takes 30–40 ms per DrawLayer call.
I'm writing my own tile-image control, and aside from performance, the idea is working well. I just can't figure out how to get the buffer onto the UIView fast enough.
Any ideas?
I've been dealing with custom views a lot lately, and I've had a bunch of performance problems, too.
All of these performance issues could be solved by determining the elements that need to be redrawn and, more importantly, the elements that do not need to be redrawn.
Then, split the contents of the layer into individual sublayers and only redraw them when necessary. The good thing is, animations and so on are very smooth for those individual layers. (Their content is just a simple bitmap and does not change until you tell it to.)
The only limitation I've come across is that you cannot use CG blend modes (e.g. multiply) for the sublayers; as far as I know, that is not possible. You can only use those blend modes inside the CG code used to draw the contents of the sublayers, but after that they are all composited in "normal" mode.
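A minimal sketch of that sublayer split in Core Animation terms (MonoTouch exposes the same CALayer API through its CoreAnimation bindings; the two layer roles here are made up for the example):

    // Static and frequently-changing content go into separate layers,
    // each backed by its own bitmap.
    CALayer *staticLayer = [CALayer layer];
    staticLayer.frame = self.view.bounds;
    staticLayer.delegate = self;        // self implements drawLayer:inContext:

    CALayer *dynamicLayer = [CALayer layer];
    dynamicLayer.frame = self.view.bounds;
    dynamicLayer.delegate = self;

    [self.view.layer addSublayer:staticLayer];
    [self.view.layer addSublayer:dynamicLayer];

    [staticLayer setNeedsDisplay];      // drawn once, then cached
    [dynamicLayer setNeedsDisplay];     // re-invalidate only this one when
                                        // the dynamic content changes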
It really depends on what you are drawing.
If you are just drawing a solid filled color, that should not be a problem. The question is how much of the surface you are changing, and how you are changing it.
Again, it depends on what you are drawing and whether you could offload some of the work to the GPU. For example if you have static parts of your interface that will remain the same, or are animated/updated independently, you could use a different layer for those areas and let the GPU compose those.
Layers have the advantage that they are composited by the GPU, and they are backed by their own bitmaps. Once you draw into the surface of the layer, the OS will cache the result in the GPU and compose all of your layers at the same time.
Then you can determine which parts of your application actually need to be redrawn and only redraw those sections on each frame.
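The usual pattern for that is to invalidate only the dirty rectangle; a sketch where dirtyRect, tiles and drawInContext: are hypothetical names:

    // In the update path: invalidate only the region that changed,
    // not the whole view.
    [self setNeedsDisplayInRect:dirtyRect];

    // drawRect: then receives (at most) the union of the invalidated
    // regions, so anything outside `rect` can be skipped.
    - (void)drawRect:(CGRect)rect {
        for (Tile *tile in self.tiles) {
            if (CGRectIntersectsRect(tile.frame, rect)) {
                [tile drawInContext:UIGraphicsGetCurrentContext()];
            }
        }
    }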
But again, it really will depend a lot on what you are trying to do.