I have an infinite terrain generator and wanted to make a water sprite (animated moving water) to add some detail to the map. The terrain is divided into chunks, and when I load new water chunks, their water sprites are not synced with those in the old chunks.
So, my question is: how can I make sure all my water sprites stay synced (showing the same frame and advancing to the next frame at the same time), even when I load new ones?
You can change them all yourself by adding an enterFrame listener to the Runtime. Each time the listener function is called, you can use the sprite's setFrame() to update the appearance of each sprite. This approach should not necessarily be more costly than Corona's convenience methods for playing sequences of frames.
It is important that all those water sprites use the same image sheet for their frames, to save space in texture memory. Also, since your map is huge/infinite, you really only need to update the frame of the **on-screen** sprites. As the player moves around the map and different grid squares come into view, you set the frame on those sprites to whatever it needs to be to match what was already on screen at the end of the previous frame. A sketch of this shared-clock idea follows.
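Corona's API here is Lua (`Runtime:addEventListener("enterFrame", ...)` plus `sprite:setFrame()`), so purely as an illustration, here is the same shared-clock idea sketched in Swift with SpriteKit types. The `WaterAnimator` name and `waterFrames` texture array are hypothetical, not part of any framework; the point is that every sprite derives its frame from one absolute clock, so newly loaded sprites can never drift.

```swift
import SpriteKit

// One shared clock drives every water sprite. The frame index is derived
// from absolute time, so sprites created later land on exactly the same
// frame as sprites created earlier.
final class WaterAnimator {
    private let waterFrames: [SKTexture]   // cut from one shared image sheet
    private let frameDuration: TimeInterval
    private(set) var sprites: [SKSpriteNode] = []

    init(frames: [SKTexture], frameDuration: TimeInterval = 0.15) {
        self.waterFrames = frames
        self.frameDuration = frameDuration
    }

    /// Call when a new chunk loads: the sprite is snapped to whatever frame
    /// the on-screen water is already showing, so it can never drift.
    func register(_ sprite: SKSpriteNode, at currentTime: TimeInterval) {
        sprite.texture = waterFrames[frameIndex(at: currentTime)]
        sprites.append(sprite)
    }

    /// Call once per frame (the analogue of Corona's enterFrame listener).
    func update(currentTime: TimeInterval) {
        let texture = waterFrames[frameIndex(at: currentTime)]
        for sprite in sprites {
            sprite.texture = texture
        }
    }

    private func frameIndex(at time: TimeInterval) -> Int {
        Int(time / frameDuration) % waterFrames.count
    }
}
```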
In my game, the size of the level can be larger than the screen of the phone, and the camera follows the player around the level, so there can be a decent amount of content (such as SKEmitterNodes) in the scene that is not visible at any given time. I've been reading through some of the SpriteKit documentation and found this quote in the SKEmitterNode section:
"Consider removing a particle emitter from the scene when it is not
visible onscreen. Add it just before it becomes visible."
Is this something that can be done in my type of game design? I don't want the nodes to be completely removed, since they will eventually be put on the screen, but is there a good way for me to add/remove the SKEmitterNodes (or other SpriteNodes) that are a certain distance from the screen, and is this a good idea at all? I'm looking to improve my frame rate and don't want costly nodes like SKEmitterNodes working while they're not even being displayed, but will adding/removing them as the player moves around itself reduce performance?
Here is the idea I currently have: create a rectangle that extends a certain distance around the screen and detect when a node comes into that rectangle, and if it's not already added to the scene, go ahead and add it. Thank you for any suggestions.
SKNodes really aren't a problem, because when they are off screen they are not being rendered anyway, just evaluated. So the main thing to worry about with plain SKNodes is any physics bodies attached to them.
SKEmitterNodes, however, do require some processing power, and that is why Apple recommends not having them emit when they are not on screen. I would subclass my SKScene class and run checks only on the SKEmitterNodes for whether or not they are in frame, and emit based on that.
So I would throw all your SKEmitterNodes into a container like an array, and have a loop function run a CGRectIntersectsRect check on each node against your camera location and viewable screen size. If they intersect, add the node to the scene; if not, remove it from the scene. The array keeps a strong reference, so you do not have to worry about a node being deinitialized on you. A sketch of this loop follows.
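A minimal sketch of that loop in Swift, assuming the scene owns the camera position and the emitters' positions are stored in scene coordinates; the `emitters` array, `cullMargin` border, and `visibleRect(around:)` helper are illustrative names, not SpriteKit API:

```swift
import SpriteKit

final class GameScene: SKScene {
    /// The array holds strong references, so removed emitters stay alive.
    var emitters: [SKEmitterNode] = []
    var cameraNode = SKCameraNode()
    /// Extra border so emitters are added just before they scroll into view,
    /// matching the questioner's "rectangle around the screen" idea.
    let cullMargin: CGFloat = 200

    override func update(_ currentTime: TimeInterval) {
        let visible = visibleRect(around: cameraNode.position)
        for emitter in emitters {
            // The answer suggests CGRectIntersectsRect; point containment is
            // an even simpler stand-in for this sketch.
            let onScreen = visible.contains(emitter.position)
            if onScreen && emitter.parent == nil {
                addChild(emitter)           // about to become visible
            } else if !onScreen && emitter.parent != nil {
                emitter.removeFromParent()  // far off screen: stop paying for it
            }
        }
    }

    private func visibleRect(around center: CGPoint) -> CGRect {
        CGRect(x: center.x - size.width / 2 - cullMargin,
               y: center.y - size.height / 2 - cullMargin,
               width: size.width + cullMargin * 2,
               height: size.height + cullMargin * 2)
    }
}
```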
I have an object that is moving very fast (max velocity 900). When it reaches max speed it starts to create trailing objects or motion blur.
But I want only the object itself moving fast, without the trail. I am running at 60 fps.
I like the speed of the object, but I don't like how it's getting rendered (the motion blur). How do I handle this?
The object bounces all around the screen with a restitution of 1.02; I set the restitution above 1 deliberately, because I want it to pick up speed as it keeps bouncing.
The motion blur may simply be due to the LCD display having an "afterglow". So the position the object was in the previous frame is still a little brighter in the next frame because it takes some time for the crystals inside the LCD to return to the unlit state.
This causes "motion blur" on any moving object on the screen, and is of course more noticeable the faster the object moves. You may even be able to make out multiple versions of the same objects at different light levels trailing behind the object's position.
This effect may also depend somewhat on the device and model, and is often called 'ghosting'.
Regardless, there's nothing you can do about the "motion blur" caused by the LCD screen's afterglow effect. Here's a good article explaining the effects and their causes.
Hmm... you'll have trouble getting it to render smoothly.
At that speed (900 points per second) it will move 15 points every frame at 60 fps. That's a significant distance to cover in such a short time; in about a third of a second it will cross the entire screen.
I'm guessing you're approaching the limits of the hardware: the processor, the screen, and your actual eyes. I imagine you'll also hit physics errors, with the object possibly escaping through walls, etc. The numbers, and the usual guard against that tunneling, look like this:
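As a hedged aside, here is the arithmetic above in Swift, together with SKPhysicsBody's `usesPreciseCollisionDetection`, the SpriteKit switch that exists for exactly this fast-body case; the texture name and radius are made-up values:

```swift
import SpriteKit

// Back-of-the-envelope check of the numbers above, plus the SpriteKit
// switch that guards fast bodies against tunneling through thin walls.
let maxVelocity: CGFloat = 900          // points per second
let framesPerSecond: CGFloat = 60
let pointsPerFrame = maxVelocity / framesPerSecond
print("The ball moves \(pointsPerFrame) points per frame")  // 15.0

let ball = SKSpriteNode(imageNamed: "ball")
ball.physicsBody = SKPhysicsBody(circleOfRadius: 10)
ball.physicsBody?.restitution = 1.02    // > 1, so each bounce adds energy
// Without this, a body skipping 15 points per physics step can pass clean
// through a thin static wall between two ticks.
ball.physicsBody?.usesPreciseCollisionDetection = true
```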
Can you show a video of how it is currently behaving?
We are developing a game that has 2d elements displayed with UIViews over an OpenGL ES view (specifically, we're using GLKit's GLKView) and are having problems keeping the positions perfectly in sync.
In the parent view's layoutSubviews, we're projecting 3d positions in the world onto the screen, and using those as locations for several UIView "markers" in the game. The whole game only updates in response to the user moving the camera, and the camera tells the view setNeedsLayout each time it moves.
Everything's working fine, except that the markers seem to be roughly one frame out of sync with the 3d rendering. I say roughly because (1) it's an estimate, and (2) I'm wondering whether there's a multithreading issue at play: doesn't GLKView sync to a special screen-refresh callback or something?
Is there some way of hooking a view's layoutSubviews so that it syncs to the 3d view update?
Update: Weirdly, calling layoutIfNeeded immediately after setNeedsLayout makes the problem worse! Possibly 2 or more frames out. Really don't understand that!
What's triggering your call to layoutSubviews?
It all depends where in the RunLoop your call is triggered vs. where your GLK update call is triggered.
In general, for what you're doing, I'd aim to do your layout as a side-effect of the GLK update - i.e. don't wait for layoutSubviews to change your position.
(if you're using OpenGL, then the whole "layout" system isn't much use to you: GLK is running in its own little world of variable frame rate, and you want to make that your reference point)
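A sketch of what "layout as a side-effect of the GLK update" could look like, assuming a GLKViewController subclass that acts as its own GLKViewControllerDelegate; `markers` and `projectToScreen(_:)` are hypothetical stand-ins for the questioner's marker views and projection code:

```swift
import GLKit
import UIKit

final class GameViewController: GLKViewController, GLKViewControllerDelegate {
    var markers: [(view: UIView, worldPosition: GLKVector3)] = []

    override func viewDidLoad() {
        super.viewDidLoad()
        delegate = self
    }

    // Called once per frame, immediately before glkView(_:drawIn:), so the
    // marker views and the 3d render are updated in the same frame instead
    // of waiting for a separate layoutSubviews pass.
    func glkViewControllerUpdate(_ controller: GLKViewController) {
        // ...advance the camera here...
        for marker in markers {
            marker.view.center = projectToScreen(marker.worldPosition)
        }
    }

    private func projectToScreen(_ world: GLKVector3) -> CGPoint {
        // Placeholder for the existing world-to-screen projection.
        CGPoint(x: CGFloat(world.x), y: CGFloat(world.y))
    }
}
```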
This is impossible to do correctly without drawing the frames of the video with OpenGL (drawing in the same context, so that you are always sure one frame contains the same moment of video and animation). Everything else you do, framerate compensation, lag prediction, depends on chance and will always be a little bit unsynchronized.
I'm not familiar with UIView, but if there is any way to let it play the audio while you copy the frames to a texture, do that. Lag in the audio is much easier to compensate for, and much less noticeable to humans, than lag in the video.
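For what it's worth, a hedged sketch of that "play the audio, copy the frames to a texture" route on iOS, using AVPlayerItemVideoOutput; the video URL is a placeholder:

```swift
import AVFoundation
import CoreVideo

// AVPlayerItemVideoOutput hands you the pixel buffer that is correct for a
// given host time, so you can upload it and draw it in the same GL frame as
// your animation while AVPlayer keeps playing the audio track normally.
let videoURL = URL(fileURLWithPath: "/path/to/clip.mp4")  // placeholder
let item = AVPlayerItem(url: videoURL)
let output = AVPlayerItemVideoOutput(pixelBufferAttributes: [
    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
])
item.add(output)
let player = AVPlayer(playerItem: item)
player.play()

// Call once per render frame, e.g. with a CADisplayLink timestamp.
func currentVideoFrame(hostTime: CFTimeInterval) -> CVPixelBuffer? {
    let itemTime = output.itemTime(forHostTime: hostTime)
    guard output.hasNewPixelBuffer(forItemTime: itemTime) else { return nil }
    // Upload this buffer to a texture inside the same draw pass as the animation.
    return output.copyPixelBuffer(forItemTime: itemTime, itemTimeForDisplay: nil)
}
```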
I am working on the animation of some planets. Each planet has a different fog.
Each fog corresponds to a sequence of 3-4 images that have the same size as the planet (e.g. 100×100 pixels).
What I want to achieve is a smooth animation where the fog gradually expands and then reverses to the original size (e.g. fogA, fogA expands, fogA disappears while fogB fades in, fogB expands, fogB disappears while fogC fades in, etc.).
The problem is that it seems the only way to do so is to have a sprite child for each fog frame (e.g. a child for fogA, a child for fogB, etc.). Then, yes, I can apply the CCScaleTo action and CCFadeOut/CCFadeIn to each child, but there is no way to put those into a CCSequence of actions, as CCSequence doesn't accept other CCSequence objects as "finite animations". I guess that's because CCSequence is not a finite animation.
Would anyone have a good solution for this?
Here is a representation of a "fog" made of three different images. The idea would be for the first sprite to gradually expand and then be replaced by the second, which likewise gradually expands, and finally by a third sprite which expands and then starts to reverse the cycle (it shrinks, then sprite B shrinks back to its initial size and fades out while sprite A fades in at its original size, and the cycle repeats forever).
Could you not use a particle emitter to do this?
You can change the size and shape of the emitter to increase/decrease the amount of fog.
Change the size of the particles and lifespan/birthrate to increase/decrease the intensity of the fog, etc...
You wouldn't need much movement of the particles, just a slight movement and a fade over time.
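Something like this, sketched with SpriteKit's SKEmitterNode for illustration since the question targets cocos2d (whose CCParticleSystem exposes analogous knobs); the texture name and all the numbers are starting guesses to tune by eye:

```swift
import SpriteKit

// "fog.png" is a hypothetical soft, blurry particle texture.
func makeFogEmitter() -> SKEmitterNode {
    let fog = SKEmitterNode()
    fog.particleTexture = SKTexture(imageNamed: "fog.png")
    fog.particlePositionRange = CGVector(dx: 100, dy: 100) // emitter "size/shape"
    fog.particleBirthRate = 12          // raise/lower for denser/thinner fog
    fog.particleLifetime = 4.0          // longer life = thicker buildup
    fog.particleScale = 0.8
    fog.particleScaleSpeed = 0.1        // particles slowly grow: the fog "expands"
    fog.particleSpeed = 5               // just a slight drift
    fog.particleAlpha = 0.5
    fog.particleAlphaSpeed = -0.12      // fade over the particle's lifetime
    return fog
}
```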
Is it true that for Angry Birds or Cut the Rope, they draw the whole frame off screen first (the whole view), and then paint that completed frame onto the screen, making the animation smooth?
I ask because if we animate a metal ball of size 20 × 20 pixels by first erasing it and then drawing it at its new location, there can be some flickering, subtle but noticeable.
The same might happen if it is animated via drawRect, which erases the whole screen and then draws everything at its new location; that might flicker even more than the above.
Going back to the whole-frame method: if a ball was at coordinate (100, 100) and is now painted on top of the whole screenshot (with the newly exposed background) at coordinate (103, 100), then the change is very hard to notice; there is no disappearing and reappearing at all.
How can smooth animation be achieved that looks like Angry Birds or Cut the Rope game?
They make use of OpenGL, which is a lot faster than any of the Quartz methods (i.e. drawRect), since it uses the GPU instead of the CPU for rendering. Using Quartz can be hundreds or thousands of times slower, depending on exactly what you are doing.
If you do not want to resort to OpenGL, you can put the object inside a UIView and then animate the view. As long as the contents of the view are static, this is plenty fast for most applications. For example, by making the background one view and the metal ball another, you can move the ball's view around and achieve very smooth animation without problems.
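A minimal sketch of that view-per-object approach, with hypothetical image assets; UIKit composites the static view contents on the GPU, so moving the ball's view never erases and redraws anything via drawRect:

```swift
import UIKit

// The ball is its own 20 x 20 view layered over a static background view.
let background = UIImageView(image: UIImage(named: "background"))
let ball = UIImageView(image: UIImage(named: "metalBall"))
ball.frame = CGRect(x: 100, y: 100, width: 20, height: 20)
background.addSubview(ball)

// Core Animation interpolates the movement on the GPU, so the ball slides
// smoothly from (100, 100) toward the new center with no flicker.
UIView.animate(withDuration: 0.5) {
    ball.center = CGPoint(x: 200, y: 300)
}
```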
Use CALayers. They are more lightweight than views.
If an app uses OpenGL, the answer is yes: it does its rendering to an off-screen buffer before the frame buffer is presented to the screen. I think the other ways of drawing to the screen use the same technique of drawing to an off-screen buffer before transferring the completed image to the screen, but I'm not so sure about that.