Drawing particles with CPU instead of GPU (XNA)

I'm trying out modifications to the following particle system.
http://create.msdn.com/en-US/education/catalog/sample/particle_3d
I have a function such that when I press Space, all the particles have their positions and velocities set to 0.
for (int i = 0; i < particles.Length; i++)
{
    particles[i].Position = Vector3.Zero;
    particles[i].Velocity = Vector3.Zero;
}
However, when I press space, the particles are still moving. If I go to FireParticleSystem.cs I can turn settings.Gravity to 0 and the particles stop moving, but the particles are still not being shifted to (0,0,0).
As I understand it, the problem lies in the fact that the GPU is processing all the particle positions, and it's calculating where the particles should be based on their initial position, their initial velocity and multiplying by their age. Therefore, all I've been able to do is change the initial position and velocity of particles, but I'm unable to do it on the fly since the GPU is handling everything.
I want the CPU to calculate the positions of the particles individually. This is because I will be later implementing some sort of wind to push the particles around. How do I stop the GPU from taking over? I think it's something to do with VertexBuffers and the draw function, but I don't know how to modify it to make it work.

The sample you downloaded is not capable of doing what you ask. You are correct in your diagnosis of the problem: the particle system is entirely maintained by the GPU, and so your changes to the position and velocity only change the start values, not the actual real-time particle values. To make a particle system that is changeable by the CPU, you need to make a particle engine class and do it yourself. There are many other samples out there that do this.
Riemers XNA tutorials are very useful. Try out this link:
http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2D/Particle_engine.php
It teaches you how to make a 2D particle system. This can be easily converted into 3D.
Or, if you want to just download an existing engine, try the Mercury particle engine:
http://mpe.codeplex.com/

This is quite simple ... all you have to do is perform the position/velocity calculations on the CPU rather than offloading them to a shader. I of course can't see your code, so I can't really offer any more specific guidance ... but whether you animate your particles with a physics engine like Farseer or integrate the basic equations of motion yourself, the update will happen on the CPU.
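To make that concrete, here is a minimal sketch of a CPU-side particle update (in Python for brevity; the same loop translates directly to C#). It uses basic semi-implicit Euler integration with a wind term like the one the asker wants to add. The `Particle` class and the function names are illustrative, not from the XNA sample.

```python
# Minimal CPU-side particle update: semi-implicit Euler integration.
# All names here (Particle, update, reset) are illustrative.

class Particle:
    def __init__(self, position, velocity):
        self.position = list(position)  # [x, y, z]
        self.velocity = list(velocity)

def update(particles, dt, gravity=(0.0, -9.8, 0.0), wind=(0.0, 0.0, 0.0)):
    """Advance every particle on the CPU; forces can change per frame."""
    for p in particles:
        for axis in range(3):
            p.velocity[axis] += (gravity[axis] + wind[axis]) * dt
            p.position[axis] += p.velocity[axis] * dt

# Because the state lives in CPU memory, "reset everything" is trivial:
def reset(particles):
    for p in particles:
        p.position = [0.0, 0.0, 0.0]
        p.velocity = [0.0, 0.0, 0.0]
```

After each CPU update you would rewrite the vertex buffer (e.g. with SetData) and leave the GPU to do only the rasterization.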

I would recommend DPSF (Dynamic Particle System Framework) for this. It does the calculations on the CPU, is fully customizable and very flexible, has great help docs and tutorials, and it even provides the full source code for the FireParticleSystem from the Particle3D sample that you are using. You should be able to have the particle system integrated into your game and accomplishing what you want within a matter of minutes.


A way to apply zRotation so that it affects the SKPhysicsBody accordingly?

Let's say I have an SKSpriteNode that represents a car wheel, with a circular SKPhysicsBody successfully attached to it. The genre would be a sideways-scrolling 2D car driving game/simulator, obviously.
What I want is to calculate all the in-motor physics myself without resorting to the SpriteKit physics engine quite yet. I'd like to keep full control on how the mechanics of the motor, clutch, transmission etc. are calculated, all the way to the RPM of the drive wheel.
From that point onwards, though, I'd happily give control to the SpriteKit physics engine, which would then calculate what happens when the revolving drive wheel touches the surface of the road. The car advances, slows down, accelerates and/or the wheels slip, whatever the case might be.
Calculating the mechanics to get the wheel RPM is no problem. I'm just not sure on how to go on from there.
I'm able to rotate the wheel simply by applying zRotation in the update: method, like this:
self.rearWheelInstance.zRotation += (self.theCar.wheelRPS/6.283 * timeSinceLastUpdate); // revolutions / 2pi = radians
This way I'm able to apply the exact RPM I've calculated earlier. The obvious downside is, SpriteKit's physics engine is totally oblivious about this rotation. For all that it knows, the wheel teleports from one phase to the next, so it doesn't create friction with the road surface or any other interaction with other SpriteKit physicsBodies, for that matter.
On the other hand, I can apply torque to the wheel:
[self.rearWheelInstance.physicsBody applyTorque: someTorque];
or angular impulse:
[self.rearWheelInstance.physicsBody applyAngularImpulse: someAngularImpulse];
This does revolve the wheel in a fashion that SpriteKit physics engine understands, thus making it interact with its surroundings correctly.
But unless I'm missing something obvious, this considers the wheel as a 'free rolling object' independent of crankshaft, transmission or drive axle RPM. In reality, though, the wheel doesn't have the 'choice' to roll at any RPM other than what is transmitted through the drivetrain to the axle (unless the transmission is in neutral, the clutch pedal is down or the clutch is slipping, but those are whole other stories).
So:
1) Am I able to somehow manipulate zRotation in a way that the SpriteKit physics engine 'understands' as revolving movement?
or
2) Do I have a clear flaw in my logic that indicates that this isn't what I'm supposed to be trying in the first place? If so, could you be so kind as to point me to the flaw(s) so that I could adopt a better practice instead?
Simple answer: mixing 2D scene-graph properties like position and zRotation with a dynamic physics system isn't going to give the results you want, as you noticed. As you state, you'll need to use the pieces of the physics simulation, like impulses and angular momentum.
The two pieces of the puzzle that may also help you are:
Physics Joints - these can do things like connect a wheel to an axle so that it can rotate freely, set limits on rotation, but still impart rotational forces on it.
Dynamically Adjusting Physics Properties - like increasing friction, angular damping, or adding negative acceleration to the axle as the user presses the brakes.
After quite a few dead ends I noticed there is in fact a way to directly manipulate the rotation of the wheel (as opposed to applying torque or impact) in a way that affects the physics engine accordingly.
The trick is to manipulate the angularVelocity property of the physicsBody of the wheel, like so:
self.rearWheelInstance.physicsBody.angularVelocity = -self.theCar.wheelRadPS;
// Wheel's angular velocity, radians per second
// *-1 just to flip ccw rotation to cw rotation
This way I'm in direct control of the drive wheels' revolving speed without losing their ability to interact with other bodies in the SpriteKit physics simulation. This helped me over this particular obstacle, I hope it helps someone else, too.

How to create sprite surface like in "cham cham"

My question may be a bit too broad, but I am going for the concept. How can I create a surface like the one in the "Cham Cham" app?
https://itunes.apple.com/il/app/cham-cham/id760567889?mt=8
I have most of the app done, but the surface that changes with the user's touch is quite different: you can change its altitude, and it grows and shrinks. How can this be done using Sprite Kit? What is the concept behind it? Can anyone explain it a bit?
Thanks
Here comes the answer from Cham Cham developers :)
Let me split the explanation into different parts:
Note: As the project started quite a while ago, it is implemented using pure OpenGL. The SpriteKit implementation might differ, but you just need to map the idea over to it.
Defining the ground
The ground is represented by a set of points, which are interpolated using a Hermite spline. Basically, the game uses a bunch of control points defining the surface, and a set of intermediate points between each pair of control points, like below:
The red dots are control points, and everything in between is computed using the mentioned Hermite interpolation. The green points in the middle have nothing to do with it, but make the whole thing look like boobs :)
You can choose an arbitrary number of steps to make your boobs look as smooth as possible, but this is more a matter of performance.
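A sketch of that interpolation step (Python for illustration; the real game may pick tangents differently - here they are estimated with Catmull-Rom-style finite differences, which is an assumption on my part):

```python
# Cubic Hermite interpolation over a polyline of 2D control points.

def hermite(p0, p1, m0, m1, t):
    """Evaluate one Hermite segment at t in [0, 1]."""
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return tuple(h00*a + h10*ta + h01*b + h11*tb
                 for a, ta, b, tb in zip(p0, m0, p1, m1))

def interpolate_surface(points, steps=8):
    """Return the dense polyline passing through the control points."""
    n = len(points)
    # Finite-difference tangents, clamped at the ends.
    tangents = []
    for i in range(n):
        prev_p = points[max(i - 1, 0)]
        next_p = points[min(i + 1, n - 1)]
        tangents.append(tuple((b - a) * 0.5 for a, b in zip(prev_p, next_p)))
    curve = []
    for i in range(n - 1):
        for s in range(steps):
            curve.append(hermite(points[i], points[i + 1],
                                 tangents[i], tangents[i + 1], s / steps))
    curve.append(points[-1])
    return curve
```

The `steps` parameter is exactly the smoothness/performance trade-off mentioned above.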
Controlling the shape
All you need to do is allow the user to move the control points (or some of them, as in Cham Cham; you can define the range each point can move in, etc.). Recomputing the interpolated values will yield a changed shape, which remains smooth at all times (given that you have picked enough intermediate points).
Texturing the thing
Again, it is up to you how you apply the texture. In Cham Cham, we use one big texture to hold the background image and recompute the texture coordinates at every shape change. You could try a more sophisticated algorithm, like squeezing the texture or whatever you find appropriate.
As for the surface texture (the one that covers the ground – grass, ice, sand etc) – you can just use triangle strips, with "bottom" vertices sitting at every interpolated point of the surface and "top" vertices raised above them (by offsetting them from the "bottom" ones in the direction of the normal at that point).
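The strip-building step can be sketched like this (Python for illustration; the normal is taken as the perpendicular of the local tangent, and all names are made up for the example):

```python
import math

def strip_vertices(curve, height):
    """For each point of the polyline, emit a (bottom, top) vertex pair
    forming a triangle strip; top = bottom + normal * height."""
    verts = []
    n = len(curve)
    for i, (x, y) in enumerate(curve):
        # Local tangent from neighbouring points (clamped at the ends).
        x0, y0 = curve[max(i - 1, 0)]
        x1, y1 = curve[min(i + 1, n - 1)]
        tx, ty = x1 - x0, y1 - y0
        length = math.hypot(tx, ty) or 1.0
        # 2D normal: rotate the unit tangent 90 degrees counter-clockwise.
        nx, ny = -ty / length, tx / length
        verts.append((x, y))                              # bottom
        verts.append((x + nx * height, y + ny * height))  # top
    return verts
```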
Rendering it
The easiest way is to utilize some tessellation library, like libtess. What it will do is convert your boundary line (composed of interpolated points) into a set of triangles. It will preserve texture coordinates, so you can just feed these triangles to the renderer.
SpriteKit note
Unfortunately, I am not really familiar with the SpriteKit engine, so I cannot guarantee you will be able to copy the idea over one-to-one, but please feel free to comment on the challenging aspects of the implementation and I will try to help.

Algorithm for real-time tracking of several simple objects

I'm trying to write a program to track relative position of certain objects while I'm playing the popular game, League of Legends. Specifically, I want to track the x,y screen coordinates of any "minions" currently on the screen (The "minions" are the little guys in the center of the picture with little red and green bars over their heads).
I'm currently using the Java Robot class to send screen captures to my program while I'm playing, and am trying to figure out the best algorithm for locating the minions and tracking them as long as they stay on the screen.
My current thinking is to use a convolutional neural network to identify and locate the minions by the colored bars over their heads. However, I'd have to re-identify and locate the minions on every new frame, and this seems like it'd be computationally expensive if I want to do this in real time (~10-60 fps).
These sorts of computer vision algorithms aren't really my specialization, but it seems reasonable that algorithms exist that exploit the fact objects in videos move in a continuous manner (i.e. they don't jump around from frame to frame).
So, is there an easily implementable algorithm for accomplishing this task?
Since this is a computer game, the color of the bars should be constant. The only way that might not hold is if dynamic illumination affects the health bars, which is highly unlikely.
Thus, just find all of the pixels with these specific colors. Then do some morphological operations and segment the image into blobs. By selecting only the blobs that fit some criteria, you can find the locations of the units.
I know that my answer does not involve video, but the operations should be so simple, that it should be very quick.
As for the tracking, just find, for each point, the closest point in the next frame.
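That closest-point rule can be sketched as a simple greedy nearest-neighbour matcher (Python for illustration; the distance gate is an assumption I added - it rejects matches that jump too far, which also handles units leaving the screen):

```python
def track(prev_points, next_points, max_dist=50.0):
    """Greedily match each previous detection to its nearest detection
    in the new frame. Returns {index in prev_points: index in next_points}.
    Detections farther than max_dist are treated as lost."""
    matches = {}
    taken = set()
    for i, (px, py) in enumerate(prev_points):
        best, best_d2 = None, max_dist * max_dist
        for j, (qx, qy) in enumerate(next_points):
            if j in taken:
                continue
            d2 = (px - qx) ** 2 + (py - qy) ** 2
            if d2 < best_d2:
                best, best_d2 = j, d2
        if best is not None:
            matches[i] = best
            taken.add(best)
    return matches
```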
Since the HUD location is constant, there should be no problem removing it.
Here is my quick and not-so-robust implementation in Matlab, which has a few limitations:
Units must be quite healthy (At least 40 pixels wide)
The bars do not overlap.
function FindUnits()
    x = double(imread('c:\1.jpg'));
    % Reference color of the green health bar (RGB)
    green = cat(3,149,194,151);
    % Per-pixel mean absolute distance from the reference color
    diff = abs(x - repmat(green,[size(x,1) size(x,2)]));
    diff = mean(diff,3);
    % Threshold: keep only pixels close to the bar color
    diff = logical(diff < 30);
    % Morphological opening to clean up noise
    diff = imopen(diff,strel('square',1));
    % Segment into blobs and measure their shape
    rp = regionprops(diff,'Centroid','MajorAxisLength','MinorAxisLength','Orientation');
    % Keep only long, thin blobs (the health bars)
    long = [rp.MajorAxisLength]./[rp.MinorAxisLength];
    rp(long < 20) = [];
    xy = [rp.Centroid];
    x = xy(1:2:end);
    y = xy(2:2:end);
    figure; imshow('c:\1.jpg'); hold on; scatter(x,y,'g');
end
And the results:
You should use a model which includes a dynamic structure in it. For your object-tracking purpose, Hidden Markov Models (HMMs), or more generally Dynamic Bayesian Networks, are well suited; you can find a lot of resources on HMMs online.
The issues you are going to face, however, depend on your system model. If your system dynamics can easily be represented as a linear Gauss-Markov model, then a simple Kalman filter will do fine. However, in the case of nonlinear, non-Gaussian dynamics you should use particle filtering, which is a sequential Monte Carlo method. Both the Kalman filter and the particle filter are sequential methods, so you use the results you have at the current step to produce a result at the next time step. I suggest you check some online tutorials and papers on multiple object tracking via particle filters.
The main difficulty you will have, as far as I am concerned, is the number of objects you want to track: you won't know it in advance, an object you are tracking can simply disappear (you may kill those little guys, or they may just leave the screen), and some other guy can just enter the screen. Hope this helps.
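For the linear Gauss-Markov case mentioned above, here is a minimal constant-velocity Kalman filter for a single coordinate (pure Python, no libraries; run one instance each for x and y). The state is [position, velocity], the measurement is position only, and all noise parameters are made-up examples to tune:

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one coordinate.
    State: [position, velocity]; measurement: position only."""

    def __init__(self, pos, q=1.0, r=4.0):
        self.x = [pos, 0.0]                   # state estimate
        self.P = [[10.0, 0.0], [0.0, 10.0]]   # estimate covariance
        self.q = q                            # process noise (per state)
        self.r = r                            # measurement noise

    def predict(self, dt=1.0):
        # x = F x, with F = [[1, dt], [0, 1]]
        self.x = [self.x[0] + dt * self.x[1], self.x[1]]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        # P = F P F^T + Q (Q simplified to q*I)
        self.P = [[p00 + dt*(p10 + p01) + dt*dt*p11 + self.q, p01 + dt*p11],
                  [p10 + dt*p11, p11 + self.q]]

    def update(self, z):
        # Measurement matrix H = [1, 0]: innovation and Kalman gain
        y = z - self.x[0]
        s = self.P[0][0] + self.r
        k0 = self.P[0][0] / s
        k1 = self.P[1][0] / s
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        # P = (I - K H) P
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
```

Feeding it one minion's measured x-coordinate per frame yields a smoothed position plus a velocity estimate, which is exactly what makes frame-to-frame association cheaper.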

XNA - Creating a lot of particles at the same time

time for another XNA question. This time it is purely from a technical design standpoint though.
My situation is this: I've created a particle-engine based on GPU-calculations, far from complete but it works. My GPU easily handles 10k particles without breaking a sweat and I wouldn't be surprised if I could add a bunch more.
My problem: Whenever I have a lot of particles created at the same time, my frame rate hates me. Why? A lot of CPU-usage, even though I have minimized it to contain almost only memory operations.
Creation of particles is still done by CPU-calls such as:
Method wants to create particle and makes a call.
Quad is created in form of vertices and stored in a buffer
Buffer is inserted into GPU and my CPU can focus on other things
When I have about 4 emitters creating one particle per frame, my FPS drops (sure, by only 4 frames per second, but 15 emitters drop my FPS to 25).
Creation of a particle:
//### As you can see, not a lot of action here. ###
ParticleVertex []tmpVertices = ParticleQuad.Vertices(Position,Velocity,this.TimeAlive);
particleVertices[i] = tmpVertices[0];
particleVertices[i + 1] = tmpVertices[1];
particleVertices[i + 2] = tmpVertices[2];
particleVertices[i + 3] = tmpVertices[3];
particleVertices[i + 4] = tmpVertices[4];
particleVertices[i + 5] = tmpVertices[5];
particleVertexBuffer.SetData(particleVertices);
My thoughts are that maybe I shouldn't create particles that often, maybe there is a way to let the GPU create everything, or maybe I just don't know how you do these stuff. ;)
Edit: If I weren't to create particles that often, what is the workaround for still making it look good?
So I am posting here in hope that you know how a good particle-engine should be designed and if maybe I took the wrong route somewhere.
There is no way to have the GPU create everything (short of using Geometry Shaders which requires SM4.0).
If I were creating a particle system for maximum CPU efficiency, I would pre-create (just to pick a number for sake of example) 100 particles in a vertex and index buffer like this:
Make a vertex buffer containing quads (four vertices per particle, not six as you have)
Use a custom vertex format which can store a "time offset" value, as well as a "initial velocity" value (similar to the XNA Particle 3D Sample)
Set the time value such that each particle has a time offset of 1/100th less than the last one (so offsets range from 1.0 to 0.01 through the buffer).
Set the initial velocity randomly.
Use an index buffer that gives you the two triangles you need using the four vertices for each particle.
And the cool thing is that you only need to do this once - you can reuse the same vertex buffer and index buffer for all your particle systems (providing they are big enough for your largest particle system).
Then I would have a vertex shader that would take the following input:
Per-Vertex:
Time offset
Initial velocity
Shader Parameters:
Current time
Particle lifetime (which is also the particle time wrap-around value, and the fraction of particles in the buffer being used)
Particle system position/rotation/scale (the world matrix)
Any other interesting inputs you like, such as: particle size, gravity, wind, etc
A time scale (to get a real time, so velocity and other physics calculations make sense)
That vertex shader (again like the XNA Particle 3D Sample) could then determine the position of a particle's vertex based on its initial velocity and the time that that particle had been in the simulation.
The time for each particle would be (pseudo code):
time = (currentTime + timeOffset) % particleLifetime;
In other words, as time advances, particles will be released at a constant rate (due to the offset). And whenever a particle dies at time = particleLifetime (or is it at 1.0? floating-point modulus is confusing), time loops back around to time = 0.0 so that the particle re-enters the animation.
Then, when it came time to draw my particles, I would have my buffers, shader and shader parameters set, and call DrawIndexedPrimitives. Now here's the clever bit: I would set startIndex and primitiveCount such that no particle starts out mid-animation. When the particle system first starts I'd draw 1 particle (2 primitives), and by the time that particle is about to die, I'd be drawing all 100 particles, the 100th of which would just be starting.
Then, a moment later, the 1st particle's timer would loop around and make it the 101st particle.
(If I only wanted 50 particles in my system, I'd just set my particle lifetime to 0.5 and only ever draw the first 50 of the 100 particles in the vertex/index buffer.)
And when it came time to turn off the particle system - simply do the same in reverse - set the startIndex and primitiveCount such that particles stop being drawn after they die.
Now I must admit that I've glossed over the maths involved and some details about using quads for particles - but it should not be too hard to figure out. The basic principle to understand is that you're treating your vertex/index buffer as a circular buffer of particles.
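The circular-buffer bookkeeping described above can be sketched on the CPU side like this (Python for illustration; the parameters and the exact release schedule are example assumptions). It computes which slice of the pre-built buffer to draw, including the two-draw-call case where the active particles straddle the ends of the buffer:

```python
def draw_range(elapsed, lifetime, max_particles, emit_end=None):
    """Which slice of the circular particle buffer to draw.
    One particle is released per (lifetime / max_particles) seconds,
    starting at t=0 and stopping at emit_end (None = still emitting);
    each particle lives exactly `lifetime` seconds.
    Returns a list of (start_index, count) draw ranges."""
    step = lifetime / max_particles
    emit_time = elapsed if emit_end is None else min(elapsed, emit_end)
    released = int(emit_time / step)                 # emitted so far
    died = max(0, int((elapsed - lifetime) / step))  # already expired
    alive = min(released - died, max_particles)
    if alive <= 0:
        return []
    if alive >= max_particles:
        return [(0, max_particles)]                  # whole buffer, one call
    first = died % max_particles                     # oldest living particle
    if first + alive <= max_particles:
        return [(first, alive)]                      # contiguous slice
    tail = max_particles - first
    return [(first, tail), (0, alive - tail)]        # wrapped: two draw calls
```

Each (start, count) pair maps directly to the startIndex/primitiveCount arguments of DrawIndexedPrimitives (with count scaled by indices per particle).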
One downside of a circular buffer is that, when you stop emitting particles, unless you stop when the current time is a multiple of the particle lifetime, you will end up with the active set of particles straddling the ends of the buffer with a gap in the middle - thus requiring two draw calls (a bit slower). To avoid this you could wait until the time is right before stopping - for most systems this should be ok, but might look weird for some (eg: a "slow" particle system that needs to stop instantly).
Another downside to this method is that particles must be released at a constant rate - although that is usually pretty typical for particle systems (obviously this is per-system and the rate is adjustable). With a little tweaking an explosion effect (all particles released at once) should be possible.
All that being said: If possible, it may be worthwhile using an existing particle library.

Surface Detection in 2d Game?

I'm working on a 2D Platform game, and I was wondering what's the best (performance-wise) way to implement Surface (Collision) Detection.
So far I'm thinking of constructing a list of level objects, each built from a list of lines, and drawing tiles along the lines.
(image: http://img375.imageshack.us/img375/1704/lines.png)
I'm thinking every object holds the ID of the surface that he walks on, in order to easily manipulate his y position while walking up/downhill.
Something like this:
// Player/MovableObject class
MoveLeft()
{
    this.Position.Y = Helper.GetSurfaceById(this.SurfaceId).GetYWhenXIs(this.Position.X);
}
So the logic I use to detect "dropping/walking on a surface" is a simple point (player's lower legs) touches line (surface) check, with some safety approximation - let's say 1-2 pixels over the line.
Is this approach OK?
I've been having difficulty trying to find reading material for this problem, so feel free to drop links/advice.
Having worked with polygon-based 2D platformers for a long time, let me give you some advice:
Make a tile-based platformer.
Now, to directly answer your question about collision-detection:
You need to make your world geometry "solid" (you can get away with making your player object a point, but making it solid is better). By "solid" I mean - you need to detect if the player object is intersecting your world geometry.
I've tried "does the player cross the edge of this world geometry" and in practice it doesn't work (even though it might seem to work on paper - floating-point precision issues will not be your only problem).
There are lots of instructions online on how to do intersection tests between various shapes. If you're just starting out I recommend using Axis-Aligned Bounding Boxes (AABBs).
It is much, much, much, much, much easier to make a tile-based platformer than one with arbitrary geometry. So start with tiles, detect intersections with AABBs, and then once you get that working you can add other shapes (such as slopes).
Once you detect an intersection, you have to perform collision response. Again, a tile-based platformer is easiest: just move the player to just outside the tile that was collided with (do you move above it, or to the side? It will depend on the collision; I will leave how to do this as an exercise).
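The intersect-then-push-out step can be sketched like so (Python for illustration; boxes are (x, y, w, h) tuples with y growing downward, and the least-penetration rule is one common way to decide "above or to the side", not necessarily what the answerer had in mind):

```python
def intersects(a, b):
    """Axis-aligned bounding box overlap test; boxes are (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def resolve(player, tile):
    """Push the player box out of the tile along the axis of least
    penetration; returns the corrected (x, y, w, h)."""
    if not intersects(player, tile):
        return player
    px, py, pw, ph = player
    tx, ty, tw, th = tile
    # Penetration depth on each side of the tile
    left = px + pw - tx          # push player left by this much
    right = tx + tw - px         # push player right
    up = py + ph - ty            # push player up (y grows downward)
    down = ty + th - py          # push player down
    m = min(left, right, up, down)
    if m == left:
        px -= left
    elif m == right:
        px += right
    elif m == up:
        py -= up
    else:
        py += down
    return (px, py, pw, ph)
```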
(PS: you can get terrific results with just square tiles - look at Knytt Stories, for example.)
Check out how it is done in XNA's Platformer Starter Kit project. Basically, the tiles have an enum that determines whether the tile is passable, impassable, etc.; then in your level you get the bounds of the tiles, check for intersections with the player, and determine what to do.
I've had wonderful fun times dealing with 2D collision detection. What seems like a simple problem can easily become a nightmare if you do not plan it out in advance.
The best way to do this in an OO sense would be to make a generic object, e.g. class MapObject. This has a position coordinate and a slope. From this, you can extend it to include other shapes, etc.
From that, let's work with collisions with a Solid object. Assuming just a block, say 32x32, you can hit it from the left, right, top and bottom. Or, depending on how you code, hit it from the top and from the left at the same time. So how do you determine which way the character should go? For instance, if the character hits the block from the top, to stand on, coded incorrectly you might inadvertently push the character off to the side instead.
So, what should you do? For my 2D game, I looked at the character's prior position before deciding how to react to the collision. If the character's Y position + height is above the top of the block and it is moving west, then I would check for the top collision first and then the left collision. However, if the character's Y position + height is below the top of the block, I would check the left collision.
Now let's say you have a block that has incline. The block is 32 pixels wide, 32 pixels tall at x=32, 0 pixels tall at x=0. With this, you MUST assume that the character can only hit and collide with this block from the top to stand on. With this block, you can return a FALSE collision if it is a left/right/bottom collision, but if it is a collision from the top, you can state that if the character is at X=0, return collision point Y=0. If X=16, Y=16 etc.
Of course, this is all relative. You'll be checking against multiple blocks, so what you should do is store all of the possible changes into the character's direction into a temporary variable. So, if the character overlaps a block by 5 in the X direction, subtract 5 from that variable. Accumulate all of the possible changes in the X and Y direction, apply them to the character's current position, and reset them to 0 for the next frame.
Good luck. I could provide more samples later, but I'm on my Mac (my code is on a Windows PC). This is the same type of collision detection used in classic Mega Man games, IIRC. Here's a video of this in action, too: http://www.youtube.com/watch?v=uKQM8vCNUTM
You can try one of the physics engines, like Box2D or Chipmunk. They have their own advanced collision detection systems and a lot of different bonuses. Of course they won't speed up your game, but they are suitable for most games on any modern device.
It is not that easy to create your own collision detection algorithm. One easy example of a difficulty: what if your character is moving at a high enough velocity that between two frames it travels from one side of a line to the other? Then your algorithm won't have had a chance to run in between, and the collision will never be detected.
I would agree with Tiendil: use a library!
I'd recommend Farseer Physics. It's a great and powerful physics engine that should be able to take care of anything you need!
I would do it this way:
Strictly no lines for collision. Only solid shapes (boxes and triangles, maybe spheres)
2D BSP or 2D space partitioning to store all level shapes, OR the "sweep and prune" algorithm. Each of these will be very powerful. Sweep and prune, combined with insertion sort, can easily handle thousands of potentially colliding objects (if not hundreds of thousands), and 2D space partitioning will allow you to quickly get all nearby potentially colliding shapes on demand.
The easiest way to make objects walk on surfaces is to make them fall down a few pixels every frame, then get the list of surfaces the object collides with, and move the object in the direction of the surface normal (in 2D, the perpendicular). This approach will cause objects to slide down non-horizontal surfaces, but you can fix that by altering the normal slightly.
Also, you'll have to run the collision detection and "push objects away" routine several times per frame, not just once. This handles situations where objects are in a heap or are in contact with multiple surfaces.
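The fall-then-push-out step for a point object on one surface segment can be sketched like this (Python for illustration; this assumes y grows upward and the surface faces up, which are my assumptions, not the answerer's):

```python
import math

def push_out(point, surface_a, surface_b):
    """If `point` has sunk below the line through the segment
    (surface_a, surface_b), project it back onto that line along
    the surface normal."""
    px, py = point
    ax, ay = surface_a
    bx, by = surface_b
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    # Unit normal pointing "up" from the surface
    nx, ny = -dy / length, dx / length
    # Signed distance of the point from the surface line
    dist = (px - ax) * nx + (py - ay) * ny
    if dist >= 0:
        return point                         # still above the surface
    return (px - dist * nx, py - dist * ny)  # move back along the normal
```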
I have used a limited collision detection approach that worked on a very different basis, so I'll throw it out here in case it helps:
Use a secondary image that's black and white, where impassable pixels are white. Construct a mask of the character that is simply any pixels currently set. To evaluate a prospective move, read the pixels of that mask from the secondary image and see if a white one comes back.
To detect collisions with other objects, use the same sort of approach, but instead of booleans use enough bit depth to cover all possible objects. Draw each object to the secondary image entirely in the "color" of its object number. When you read through the mask and get a non-zero pixel, the "color" is the object number you hit.
This resolves all possible collisions in O(n) time rather than the O(n^2) of calculating interactions.
