I am working on the animation of some planets. Each planet has a different fog.
Each fog corresponds to a sequence of 3–4 images that are the same size as the planet (e.g. 100 × 100 pixels).
What I want to achieve is a smooth animation where the fog gradually expands and then reverses back to its original size (e.g. fogA appears, fogA expands, fogA disappears while fogB fades in, fogB expands, fogB disappears while fogC fades in, etc.).
The problem is that the only way to do this seems to be to have a sprite child for each fog frame (e.g. a child for fogA, a child for fogB, etc.). I can then apply a ScaleTo action and CCFadeOut/CCFadeIn to each child, but there is no way to put those into a CCSequence of actions, as CCSequence doesn't accept other CCSequence objects as "finite animations". I guess that's because CCSequence is not a finite animation.
Would anyone have a good solution for this?
Here is a representation of a "fog" made of three different images. The idea is for the first sprite to gradually expand and then be replaced by the second, which likewise gradually expands, and finally be replaced by a third sprite, which expands and then starts to reverse the cycle (it shrinks, sprite B shrinks back to its initial size and then fades out while sprite A fades in at its original size, and the cycle repeats forever).
Could you not use a particle emitter to do this?
You can change the size and shape of the emitter to increase/decrease the amount of fog.
Change the size of the particles and lifespan/birthrate to increase/decrease the intensity of the fog, etc...
You wouldn't need much movement of the particles, just a slight movement and a fade over time.
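For what it's worth, here's a minimal sketch of that emitter setup. The question is about Cocos2d, where the analogous class is CCParticleSystem; this sketch uses SpriteKit's SKEmitterNode purely to illustrate which knobs to turn, and the texture name and planetNode are assumptions.

```swift
import SpriteKit

// Sketch: a fog emitter sized to the planet, with slow expansion and fading.
let fog = SKEmitterNode()
fog.particleTexture = SKTexture(imageNamed: "fogPuff") // assumed soft-edged puff image
fog.particleBirthRate = 5                              // raise for denser fog
fog.particleLifetime = 4                               // longer life = thicker fog
fog.particlePositionRange = CGVector(dx: 100, dy: 100) // emitter area ~ planet size
fog.particleScaleSpeed = 0.2                           // puffs gradually expand
fog.particleAlpha = 0.6
fog.particleAlphaSpeed = -0.15                         // fade out over the lifetime
fog.particleSpeed = 5                                  // just a slight drift
planetNode.addChild(fog)                               // `planetNode` is assumed
```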
I have an infinite terrain generator, and I wanted to make a water sprite (moving water) to add some detail to the map. The terrain is divided into chunks, and when I load new water chunks, the new water sprites are not synced with the old chunks.
So my question is: how can I make sure all my water sprites are synced (on the same frame and changing to a new frame at the same time), even when I load new ones?
You can change them all yourself by adding an enterFrame listener to the Runtime. Each time the listener function is called (once per frame), you can use the sprite's setFrame() to update the appearance of each sprite. This approach should not necessarily be more costly than Corona's convenience methods for playing sequences of frames.
It is important that all those water sprites use the same image sheet for their frames, to save space in texture memory. Also, since your map is huge/infinite, you really only need to update the frame of **on-screen** sprites. As the player moves around the map and different grid squares come into view, you set the frame on those sprites to whatever it needs to be to fit with what was already on screen at the end of the previous frame.
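In case it helps, here's the general shared-clock idea sketched in code. The question is about Corona (where you'd use Runtime:addEventListener("enterFrame", ...) and sprite:setFrame()); this sketch expresses the same technique in SpriteKit, and the texture names are assumptions. Because the frame index is derived from absolute time, a sprite added later lands on the same frame as the old ones automatically.

```swift
import SpriteKit

class WaterScene: SKScene {
    // Assumed frame images "water1"..."water4", shared by every water sprite.
    let waterFrames: [SKTexture] = (1...4).map { SKTexture(imageNamed: "water\($0)") }
    var waterSprites: [SKSpriteNode] = []   // every chunk's water sprite, old and new
    let frameDuration: TimeInterval = 0.25  // how long each frame stays on screen

    override func update(_ currentTime: TimeInterval) {
        // One global frame index derived from the clock: all sprites stay in sync.
        let index = Int(currentTime / frameDuration) % waterFrames.count
        for sprite in waterSprites {
            sprite.texture = waterFrames[index]
        }
    }
}
```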
I am animating some frames of a monster jumping and swinging a sword, and the frames are such that the width gets bigger or smaller as he swings the sword (the monster standing is 500 wide, but his sword, fully extended to the left, adds another 200, so he varies from 500 to 700 or more in width).
I originally took each frame, which is on a transparent background, and used the Photoshop magic wand tool to select just the monster. I then saved the frames like that, and when I used them to animate, the monster warped and changed size (it looked bad).
The original frames had a large 1000 × 1000 transparent background surrounding him, which always kept him "bound" so that he never warped.
My question is what is a good way to create frames of animation where the sprite inside might change size or width as he's moving so that there is no warping?
If I have to use a large border of transparent pixels, is that the recommended approach? I'm noticing that each monster's animation takes up about 3–5 MB. I plan on potentially having a lot of these characters, so I'm wondering if this is the best approach (using large 900 × 900 images all the time, plus more for the 2x and 1x versions). All of this seems like it could spiral out of control to 4 or 5 GB.
What are other people doing when making animations that require different poses and positions? Just padding the frames with borders that are as small as possible?
Thanks!
You should probably change the approach to animation and use inverse kinematics instead. Take a look at this and Ray's tutorial.
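To make the suggestion concrete, here's a minimal sketch of IK in SpriteKit (assuming that's the framework in use; every node and texture name here is hypothetical). Instead of baking the sword swing into extra-wide bitmap frames, you build the monster from jointed parts and let a reach action rotate the arm:

```swift
import SpriteKit

// Hypothetical body parts; the parent-child chain forms the "skeleton".
let torso = SKSpriteNode(imageNamed: "monster-torso")
let upperArm = SKSpriteNode(imageNamed: "monster-upper-arm")
let sword = SKSpriteNode(imageNamed: "monster-sword")
torso.addChild(upperArm)
upperArm.addChild(sword)

// Limit how far the arm joint may rotate during the swing.
upperArm.reachConstraints = SKReachConstraints(lowerAngleLimit: -.pi / 2,
                                               upperAngleLimit: .pi / 2)

// Swing: rotate the chain up to the arm so the sword reaches a point to the left.
let swing = SKAction.reach(to: CGPoint(x: -200, y: 50),
                           rootNode: upperArm,
                           duration: 0.3)
sword.run(swing)
```

This way each part's texture stays tightly cropped, so memory no longer scales with the widest pose.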
I have 2 SKSpriteNode objects, and I want to crossfade them.
One of the easiest ways is to create a fadeOut SKAction and a fadeIn SKAction and apply them to the respective SKSpriteNode objects. But the problem with this approach is that during the action both of them have an alpha under 1.0, which doesn't look good.
For example, I'd like to dissolve the red square SKNode into a green round SKNode.
If I just fade out the red square and fade in the green circle, during the action it'll look like this:
You can see the background through the two objects. I'd like to dissolve them like this instead:
In UIKit I can use UIView.transitionWithView, but in SpriteKit I've only found a similar method for presenting scenes (SKViewObject.presentScene:transition:). So is there any way to make a dissolve transition for SKNodes?
There's little you can do about the background coming through when adjusting the alpha settings on both nodes. A possible hack is to temporarily insert a solid node in front of your background but behind the other two nodes until the action is done, at which point you remove it.
The node should be in the shape of the two nodes' cross-section (pardon the sloppy artwork on my part):
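A rough sketch of that hack, assuming redNode, greenNode, and the scene are already in place, and "crossSection" is a pre-drawn image of the two shapes' overlap (all of these names are placeholders):

```swift
// Temporary opaque cover: in front of the background, behind both fading nodes.
let cover = SKSpriteNode(imageNamed: "crossSection")
cover.position = redNode.position
cover.zPosition = redNode.zPosition - 1   // assumes the background sits below this
scene.addChild(cover)

// Remove the cover once the 0.5 s crossfade has finished.
cover.run(.sequence([.wait(forDuration: 0.5), .removeFromParent()]))
```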
I searched a lot for this question and had nearly decided to use a shader to do this kind of dissolve (since you can directly edit the pixels in a shader), but then I found that there's an unusual way to solve this problem. It may not be useful in every situation, but if your background doesn't scroll or zoom, this approach may be the easiest. In simple terms: take a screenshot of the current screen and display it on top, then change your node to the sprite you need, and finally fade out the screenshot you took.
First, make sure all the nodes are in the correct node tree. Then use SKViewObject.textureFromNode(rootNode) to create the screenshot, make a sprite node from this texture, and add it to your screen. Then you can create a fadeOut SKAction to fade out this screenshot sprite, and remove the sprite when the action ends.
Using this approach, during the fade out the screen will just look like this:
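Here's a minimal sketch of that screenshot approach; the function and node names are mine, and it assumes skView is the SKView presenting the scene:

```swift
import SpriteKit

func dissolve(from redNode: SKSpriteNode, to greenNode: SKSpriteNode,
              in scene: SKScene, view skView: SKView) {
    // 1. Snapshot the whole scene as it currently looks.
    guard let snapshotTexture = skView.texture(from: scene) else { return }
    let snapshot = SKSpriteNode(texture: snapshotTexture)
    snapshot.position = CGPoint(x: scene.size.width / 2, y: scene.size.height / 2)
    snapshot.zPosition = 1000   // keep the snapshot in front of everything
    scene.addChild(snapshot)

    // 2. Swap the nodes instantly while the snapshot hides the change.
    redNode.alpha = 0
    greenNode.alpha = 1

    // 3. Fade the snapshot out to reveal the new state underneath.
    snapshot.run(.sequence([.fadeOut(withDuration: 0.5), .removeFromParent()]))
}
```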
I'm trying out SpriteKit with the following setup:
- An SKScene with two child nodes used merely for grouping other nodes: foreground and background.
- background is really empty as of now, but would eventually hold some type of background sprite / layers.
- foreground is an SKEffectNode; whenever the user taps on the screen, a new instance of an SKNode subclass representing a game element is added as a child of it.

This SKNode subclass basically creates three SKShapeNodes (an outer circumference, an inner circumference, and an inner quarter circumference) and two labels. The inner quarter circumference has an SKAction that makes it rotate forever about its origin / center.
Now here's the issue: as long as foreground doesn't have any CIFilter, or has shouldEnableEffects = NO, everything is fine. That is, I can tap on the screen and my game elements are instantiated and added to the main scene. But the minute I add a CIGaussianBlur or CIBloom to foreground, I notice two things:
1. The framerate drops to about 2 fps. Mind you, this happens even with as few as 6 nodes alive in the scene.
2. The effect seems to be constantly cropping its contents or adjusting its frame. That is, if I have one node, the "full screen" effect seems to constantly crop or adjust its bounds to the minimum area required to hold all nodes. This is for one node:

And this is for 2 nodes:
In OpenGL ES 2, one would do a post-process blur / bloom by rendering the whole framebuffer (all objects) to a texture, then doing at least one more pass to blur, etc., on that texture, and then either presenting that in the framebuffer attached to the display or compositing it with the original render back into the framebuffer. I'd expect SKEffectNode to work in a similar way. However, the cropping and the poor performance make me think I might be using the effect node the wrong way. Any thoughts?
It seems to be a bug with SKEffectNode trying to apply a filter to child SKShapeNodes, as far as I can tell. I played around with this and reproduced your results, but when I switched the SKShapeNodes out for SKSpriteNodes (using a simple PNG of a circle), the cropping no longer appeared. Specifically, SKEffectNode doesn't handle the stroke of an SKShapeNode very well: if you remove the stroke (lineWidth = 0) and give the shape a fill color, you'll see that there is no cropping.
As for the frame rate, SKShapeNodes simply perform poorly. Making the switch to SKSpriteNodes mentioned above boosted my fps from 40 to 50 with 35 nodes on screen (iPhone 5) and the filter applied.
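A small sketch of both workarounds, using a blurred effect node like the asker's foreground (the "circle" texture is an assumed asset):

```swift
import SpriteKit
import CoreImage

let foreground = SKEffectNode()
foreground.filter = CIFilter(name: "CIGaussianBlur",
                             parameters: ["inputRadius": 10.0])

// Workaround 1: keep the SKShapeNode, but drop the stroke and fill it instead.
let filledCircle = SKShapeNode(circleOfRadius: 40)
filledCircle.lineWidth = 0        // no stroke, so the cropping disappears
filledCircle.fillColor = .white
foreground.addChild(filledCircle)

// Workaround 2 (also better for the frame rate): use a textured sprite.
let spriteCircle = SKSpriteNode(imageNamed: "circle")
foreground.addChild(spriteCircle)
```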
Is it true that games like Angry Birds or Cut the Rope draw the whole frame (the whole view) off screen first, and then paint that complete frame onto the screen, making the animation smooth?
That's because if we animate a metal ball of size 20 × 20 pixels by erasing the ball first and then drawing it at a new location, there might be some flickering, very subtle but noticeable.
The same might apply if it is animated via drawRect, which erases the whole screen and then draws everything at its new location; that might flicker even more than the above?
Going back to the draw-the-whole-frame method: if a ball was at coordinate (100, 100), and the ball is now painted on top of the whole screenshot (with the new background exposed) at coordinate (103, 100), the change is barely noticeable (no disappearing and then reappearing at all).
How can smooth animation be achieved that looks like Angry Birds or Cut the Rope game?
They make use of OpenGL, which is a lot faster than any of the Quartz methods (i.e. drawRect), since it uses the GPU instead of the CPU for rendering. Using Quartz can be hundreds or thousands of times slower depending on what exactly you are doing.
If you do not want to resort to OpenGL, you can put the object inside a UIView and then animate it. As long as the contents of the view are static, this is plenty fast for most applications. For example, by making the background one view and the metal ball another view, you can move the ball's view around and achieve very smooth animation without problems.
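For instance, a minimal sketch of that view-based approach (ballView is an assumed small UIImageView with static contents):

```swift
import UIKit

// Core Animation composites layers on the GPU, so moving the view
// doesn't redraw its contents and stays smooth.
UIView.animate(withDuration: 0.5) {
    ballView.center = CGPoint(x: 103, y: 100)   // the (103, 100) from the question
}
```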
Use CALayers. They are more lightweight than views.
If an app uses OpenGL, the answer is yes: it does its rendering before the frame buffer is presented to the screen. I think the other ways of drawing to the screen use the same technique of drawing to an off-screen buffer before transferring the completed image to the screen, but I'm not so sure about that.