I'm developing a game using LevelHelper (SpriteKit + Box2D) and I've run into a problem.
Box2D's simulation speed depends on the frame rate: at 60 FPS a body moves at, say, 10 meters per second, but at 30 FPS it moves at only 5 meters per second. I need body speeds to stay constant regardless of the frame rate.
Is there any way to decouple Box2D from the frame rate?
As far as simulated time goes, Box2D depends only on what you set the world step's time delta to. If the time delta is 1/60th of a second, the simulation matches up with a 60 FPS display refresh, but the delta can be other values.
Generally speaking, the simulation gets more accurate as the time delta gets smaller. So if, instead of using world steps that simulate 1/60th of a second, you used steps that simulate 1/120th of a second, you'd have a more accurate simulation. A smaller time delta also allows bodies' maximum speeds (in distance traveled per simulated second) to be higher.
It's up to the Box2D library user to figure out how to coordinate the world steps with the display refresh. Just know that varying the world step time delta -- for example, by using the real time elapsed between calls to the world step method -- may cause unrealistic physics effects. So while varying the time delta at run-time during a simulation is possible, I wouldn't recommend it.
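The usual pattern is a fixed-timestep accumulator: step the world with a constant delta as many times as the accumulated frame time allows. A minimal sketch in C#, assuming a Box2D port (such as Box2DX) whose World.Step(dt, velocityIterations, positionIterations) mirrors the C++ API:

const float FixedDt = 1f / 60f;   // simulated seconds per world step
float accumulator = 0f;

void StepPhysics(World world, float frameDeltaSeconds)
{
    // Clamp huge deltas (after a pause or hitch) to avoid a death spiral.
    accumulator += System.Math.Min(frameDeltaSeconds, 0.25f);

    // Consume the accumulated real time in constant-sized simulated steps.
    while (accumulator >= FixedDt)
    {
        world.Step(FixedDt, 8, 3);   // 8 velocity / 3 position iterations
        accumulator -= FixedDt;
    }
}

With this, a body moving at 10 m/s covers 10 meters of simulated distance per simulated second whether the display runs at 30 or 60 FPS; the frame rate only changes how many steps happen per frame.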
Related
Is there a formula to determine the maximum flash rate countable by a video camera? I am thinking that any flash rate greater than the number of fps is not practical. I get hung up on the fact that the shutter is open for only a fraction of the time required to produce a frame. At 30 fps a frame is roughly 33.33 ms; if the shutter is set to, say, 1/125 s, that is about 8 ms, or roughly 25% of the frame time. Does the shutter speed matter? I am thinking that unless they are synced, the shutter could open at any point in the lamp's flash, ultimately making counting very difficult.
The application is just a general one. With today's high-speed cameras (60 fps or 120 fps), can one reliably determine the flash rate of a lamp? Think alarm panels, breathing monitors, heart rate monitors, or the case of trying to determine a duty cycle by visual means.
What you describe is related to the sampling problem.
You can relate your problem to the Nyquist-Shannon sampling theorem.
Given a certain acquisition frequency (number of FPS), you can be sure of your count (in every case, regardless of synchronization) if
"# of FPS" >= 2 * flashing light frequency (in Hz)
Of course this is a general theoretical rule; things can work out quite differently in practice (I am answering only with regard to the number of FPS in the general case).
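As a quick sanity check, here's that criterion as a function (a trivial C# sketch; the function name is mine):

// Sampling-rate criterion only: shutter timing, exposure and lighting
// are ignored.
static bool CanCountFlash(double cameraFps, double flashHz)
{
    return cameraFps >= 2.0 * flashHz;
}

// CanCountFlash(120, 60) -> true; CanCountFlash(30, 25) -> false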
In iOS, it is easy to access linear acceleration, which equals the raw acceleration minus gravity.
I am trying to estimate position by double-integrating the linear acceleration. To test this, I recorded data while keeping the phone steady on a table.
Then I did the double integration in Matlab using cumtrapz, but when I plot the position it grows with time.
What am I doing wrong? I was expecting the position to stay at 0.
From what I've read, this is too error-prone to be useful. Accelerometer-based position calculations are subject to small drift errors, which accumulate over time. (If the phone is traveling at a constant 100 kph when your app first launches, you can't tell.) All you can measure is acceleration.
There will always be bias errors from the sensors, and they grow over time as you integrate. Can you measure the sensor's drift while the device is at rest? Then take the mean of that drift and subtract it from the input so that the sensor reads 0 at rest, and try the double integration again.
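A minimal sketch of that idea, written in C# rather than Matlab (the placeholder loaders and the 100 Hz sample rate are assumptions): estimate the bias from a rest recording, subtract it, then double-integrate with the trapezoid rule.

using System;
using System.Linq;

class DriftSketch
{
    static void Main()
    {
        const float dt = 1f / 100f;            // assumed sample interval (100 Hz)
        float[] restSamples = LoadRest();      // acceleration at rest, m/s^2
        float[] samples = LoadMotion();        // acceleration to integrate, m/s^2

        float bias = restSamples.Average();    // mean sensor drift at rest

        float velocity = 0f, position = 0f;
        float prevAccel = samples[0] - bias;
        for (int i = 1; i < samples.Length; i++)
        {
            float accel = samples[i] - bias;
            float prevVelocity = velocity;
            // Trapezoid rule, applied twice: accel -> velocity -> position.
            velocity += 0.5f * (prevAccel + accel) * dt;
            position += 0.5f * (prevVelocity + velocity) * dt;
            prevAccel = accel;
        }

        Console.WriteLine("final position ~ " + position + " m");
    }

    // Placeholders for your recordings.
    static float[] LoadRest() { return new float[200]; }
    static float[] LoadMotion() { return new float[1000]; }
}

Note that even with the mean bias removed, the remaining noise still integrates into a random walk, so the position will wander; subtracting the bias only removes the systematic quadratic growth.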
For example in cocos2D:
- (void)update:(ccTime)delta
Can someone explain what these time deltas or timestamps are used for? How are they relevant to how the game world is updated? Is it because we do not know the fps reliably and should not just rely on incremental property updates based on -update calls?
They are important for making movement frame-rate independent. Typically you take the time since the last update call into consideration for any character movement.
This is to ensure that your game behaves the same across devices of various performance. If you move a character by 1 pixel every frame then on a device that runs at 60fps the character will move twice as fast as on a device that gets 30fps.
By scaling all movement code by the delta time, for example, you ensure that all devices behave the same.
It is simple to make movement frame-rate independent: something like multiplying a movement vector by deltaTime will achieve this, as in the sketch below.
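A minimal C# sketch of that idea (the names are illustrative): express speed in units per second and scale each frame's displacement by the elapsed time.

struct Vec2 { public float X, Y; }

class Player
{
    public Vec2 Position;
    public Vec2 Velocity = new Vec2 { X = 120f, Y = 0f };  // pixels per second

    public void Update(float deltaSeconds)
    {
        // At 60 fps deltaSeconds is ~1/60, at 30 fps ~1/30; either way the
        // character covers 120 pixels per real second.
        Position.X += Velocity.X * deltaSeconds;
        Position.Y += Velocity.Y * deltaSeconds;
    }
}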
Does anyone have any good ideas of how to accomplish a slow-motion effect in Sprite Kit for iOS? This would make all nodes, including particle nodes, move at 1/2 speed, and also make the particles themselves move at 1/2 speed.
I can think of how to do this manually, but I wanted to get some more ideas before I start implementing.
I believe you can do:
self.physicsWorld.speed = 0.5;
The docs reference:
speed
The rate at which the simulation executes.
@property(nonatomic) CGFloat speed
Discussion
The default value is 1.0, which means the simulation runs at normal speed. A value other than the default changes the rate at which time passes in the physics simulation. For example, a speed value of 2.0 indicates that time in the physics simulation passes twice as fast as the scene’s simulation time. A value of 0.0 pauses the physics simulation.
Availability
Available in iOS 7.0 and later.
Declared In
SKPhysicsWorld.h
In your update method, wherever you calculate movement, multiply the result by some variable once the calculations are done. Have it be 1 by default, but when you need slow motion, set it to 0.5.
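A minimal sketch of that variable in C# (the names are mine):

class GameClock
{
    // 1.0 = normal speed, 0.5 = slow motion at half speed.
    public float TimeScale = 1.0f;

    // Every movement/animation system uses the scaled delta
    // instead of the raw frame delta.
    public float ScaledDelta(float realDeltaSeconds)
    {
        return realDeltaSeconds * TimeScale;
    }
}

Scaling the delta in one place keeps movement, animation, and particle code consistent, instead of sprinkling the multiplier throughout your update code.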
Time for another XNA question. This time it is purely from a technical design standpoint, though.
My situation is this: I've created a particle engine based on GPU calculations. It's far from complete, but it works. My GPU easily handles 10k particles without breaking a sweat, and I wouldn't be surprised if I could add a bunch more.
My problem: whenever a lot of particles are created at the same time, my frame rate hates me. Why? A lot of CPU usage, even though I have minimized it to almost only memory operations.
Creation of particles is still done by CPU calls, such as:
A method wants to create a particle and makes a call.
A quad is created in the form of vertices and stored in a buffer.
The buffer is handed to the GPU, and my CPU can focus on other things.
When I have about 4 emitters creating one particle per frame, my FPS drops (sure, only by 4 frames per second, but 15 emitters drop my FPS to 25).
Creation of a particle:
//### As you can see, not a lot of action here. ###
// Build the six vertices (two triangles) for this particle's quad...
ParticleVertex[] tmpVertices = ParticleQuad.Vertices(Position, Velocity, this.TimeAlive);
// ...copy them into the CPU-side vertex array at this particle's slot...
particleVertices[i] = tmpVertices[0];
particleVertices[i + 1] = tmpVertices[1];
particleVertices[i + 2] = tmpVertices[2];
particleVertices[i + 3] = tmpVertices[3];
particleVertices[i + 4] = tmpVertices[4];
particleVertices[i + 5] = tmpVertices[5];
// ...and re-upload the entire array to the GPU.
particleVertexBuffer.SetData(particleVertices);
My thoughts are that maybe I shouldn't create particles that often, maybe there is a way to let the GPU create everything, or maybe I just don't know how you do this stuff. ;)
Edit: If I weren't to create particles that often, what is the workaround for still making it look good?
So I am posting here in the hope that you know how a good particle engine should be designed, and whether I maybe took a wrong route somewhere.
There is no way to have the GPU create everything (short of using geometry shaders, which require SM4.0).
If I were creating a particle system for maximum CPU efficiency, I would pre-create (just to pick a number for sake of example) 100 particles in a vertex and index buffer like this:
Make a vertex buffer containing quads (four vertices per particle, not six as you have)
Use a custom vertex format which can store a "time offset" value, as well as an "initial velocity" value (similar to the XNA Particle 3D Sample)
Set the time value such that each particle has a time offset of 1/100th less than the last one (so offsets range from 1.0 to 0.01 through the buffer).
Set the initial velocity randomly.
Use an index buffer that gives you the two triangles you need using the four vertices for each particle.
And the cool thing is that you only need to do this once: you can reuse the same vertex buffer and index buffer for all your particle systems (provided they are big enough for your largest particle system).
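Here's a sketch of that one-time setup in XNA 4.0-style C# (ParticleVertex's layout and the RandomVelocity helper are assumptions for illustration):

const int ParticleCount = 100;

ParticleVertex[] vertices = new ParticleVertex[ParticleCount * 4];
short[] indices = new short[ParticleCount * 6];
Random random = new Random();

for (int p = 0; p < ParticleCount; p++)
{
    // Stagger birth times: offsets run from 1.0 down to 0.01.
    float timeOffset = 1f - p / (float)ParticleCount;
    Vector3 velocity = RandomVelocity(random);   // assumed helper

    // Four corners of the quad; the corner index lets the shader
    // push each vertex out to the right screen position.
    for (int corner = 0; corner < 4; corner++)
        vertices[p * 4 + corner] = new ParticleVertex(timeOffset, velocity, corner);

    // Two triangles per quad: 0-1-2 and 0-2-3.
    int v = p * 4, n = p * 6;
    indices[n + 0] = (short)(v + 0);
    indices[n + 1] = (short)(v + 1);
    indices[n + 2] = (short)(v + 2);
    indices[n + 3] = (short)(v + 0);
    indices[n + 4] = (short)(v + 2);
    indices[n + 5] = (short)(v + 3);
}

vertexBuffer = new VertexBuffer(GraphicsDevice, typeof(ParticleVertex),
                                vertices.Length, BufferUsage.WriteOnly);
vertexBuffer.SetData(vertices);

indexBuffer = new IndexBuffer(GraphicsDevice, IndexElementSize.SixteenBits,
                              indices.Length, BufferUsage.WriteOnly);
indexBuffer.SetData(indices);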
Then I would have a vertex shader that would take the following input:
Per-Vertex:
Time offset
Initial velocity
Shader Parameters:
Current time
Particle lifetime (which is also the particle time wrap-around value, and the fraction of particles in the buffer being used)
Particle system position/rotation/scale (the world matrix)
Any other interesting inputs you like, such as: particle size, gravity, wind, etc
A time scale (to get a real time, so velocity and other physics calculations make sense)
That vertex shader (again, like the XNA Particle 3D Sample) could then determine the position of a particle's vertex based on its initial velocity and the time that particle has been in the simulation.
The time for each particle would be (pseudo code):
time = (currentTime + timeOffset) % particleLifetime;
In other words, as time advances, particles will be released at a constant rate (due to the offset). And whenever a particle dies at time = particleLifetime (or is it at 1.0? floating-point modulus is confusing), time loops back around to time = 0.0 so that the particle re-enters the animation.
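That math, written out in C#/XNA form for clarity (simple constant-acceleration kinematics; the function names are mine):

// Where in its lifetime this vertex's particle currently is
// (assuming nonnegative times).
static float ParticleTime(float currentTime, float timeOffset, float lifetime)
{
    // The modulus makes the particle loop back to 0 when it dies.
    return (currentTime + timeOffset) % lifetime;
}

// Position under initial velocity plus constant acceleration (gravity).
static Vector3 ParticlePosition(Vector3 origin, Vector3 initialVelocity,
                                Vector3 gravity, float t)
{
    // p = p0 + v0*t + 0.5*g*t^2
    return origin + initialVelocity * t + 0.5f * t * t * gravity;
}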
Then, when it came time to draw my particles, I would have my buffers, shader and shader parameters set, and call DrawIndexedPrimitives. Now here's the clever bit: I would set startIndex and primitiveCount such that no particle starts out mid-animation. When the particle system first starts I'd draw 1 particle (2 primitives), and by the time that particle is about to die, I'd be drawing all 100 particles, the 100th of which would just be starting.
Then, a moment later, the 1st particle's timer would loop around and make it the 101st particle.
(If I only wanted 50 particles in my system, I'd just set my particle lifetime to 0.5 and only ever draw the first 50 of the 100 particles in the vertex/index buffer.)
And when it came time to turn off the particle system - simply do the same in reverse - set the startIndex and primitiveCount such that particles stop being drawn after they die.
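A sketch of that draw call in XNA 4.0 style, assuming the emitter tracks which contiguous run of particles is live (firstLive and liveCount are names I've made up):

GraphicsDevice.SetVertexBuffer(vertexBuffer);
GraphicsDevice.Indices = indexBuffer;

int startIndex = firstLive * 6;        // 6 indices per particle quad
int primitiveCount = liveCount * 2;    // 2 triangles per particle

GraphicsDevice.DrawIndexedPrimitives(
    PrimitiveType.TriangleList,
    0,                                 // baseVertex
    firstLive * 4,                     // minVertexIndex: first vertex used
    liveCount * 4,                     // numVertices referenced by this call
    startIndex,
    primitiveCount);

If the live window wraps past the end of the buffer, this becomes two such calls, one per contiguous run, which is the two-draw-call case described below.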
Now I must admit that I've glossed over the maths involved and some details about using quads for particles - but it should not be too hard to figure out. The basic principle to understand is that you're treating your vertex/index buffer as a circular buffer of particles.
One downside of a circular buffer is that, when you stop emitting particles, unless you stop when the current time is a multiple of the particle lifetime, you will end up with the active set of particles straddling the ends of the buffer with a gap in the middle - thus requiring two draw calls (a bit slower). To avoid this you could wait until the time is right before stopping - for most systems this should be fine, but it might look weird for some (e.g. a "slow" particle system that needs to stop instantly).
Another downside to this method is that particles must be released at a constant rate - although that is usually pretty typical for particle systems (obviously this is per-system and the rate is adjustable). With a little tweaking an explosion effect (all particles released at once) should be possible.
All that being said: If possible, it may be worthwhile using an existing particle library.