XNA - Creating a lot of particles at the same time

Time for another XNA question; this time it is purely from a technical design standpoint.
My situation is this: I've created a particle-engine based on GPU-calculations, far from complete but it works. My GPU easily handles 10k particles without breaking a sweat and I wouldn't be surprised if I could add a bunch more.
My problem: whenever a lot of particles are created at the same time, my frame rate hates me. Why? Heavy CPU usage, even though I have minimized particle creation to little more than memory operations.
Creation of particles is still done by CPU-calls such as:
A method wants to create a particle and makes a call.
A quad is created in the form of vertices and stored in a buffer.
The buffer is uploaded to the GPU and my CPU can focus on other things.
When I have about 4 emitters creating one particle per frame, my FPS drops (sure, only by about 4 frames per second, but 15 emitters drop my FPS to 25).
Creation of a particle:
//### As you can see, not a lot of action here. ###
ParticleVertex[] tmpVertices = ParticleQuad.Vertices(Position, Velocity, this.TimeAlive);
particleVertices[i] = tmpVertices[0];
particleVertices[i + 1] = tmpVertices[1];
particleVertices[i + 2] = tmpVertices[2];
particleVertices[i + 3] = tmpVertices[3];
particleVertices[i + 4] = tmpVertices[4];
particleVertices[i + 5] = tmpVertices[5];
// Note: this re-uploads the entire vertex array to the GPU every time a particle is created.
particleVertexBuffer.SetData(particleVertices);
My thoughts are that maybe I shouldn't create particles that often, maybe there is a way to let the GPU create everything, or maybe I just don't know how this stuff is done. ;)
Edit: If I weren't to create particles that often, what is the workaround for still making it look good?
So I am posting here in the hope that you know how a good particle engine should be designed, and whether I took a wrong route somewhere.

There is no way to have the GPU create everything (short of using Geometry Shaders which requires SM4.0).
If I were creating a particle system for maximum CPU efficiency, I would pre-create (just to pick a number for sake of example) 100 particles in a vertex and index buffer like this:
Make a vertex buffer containing quads (four vertices per particle, not six as you have)
Use a custom vertex format which can store a "time offset" value, as well as an "initial velocity" value (similar to the XNA Particle 3D Sample)
Set the time value such that each particle has a time offset of 1/100th less than the last one (so offsets range from 1.0 to 0.01 through the buffer).
Set the initial velocity randomly.
Use an index buffer that gives you the two triangles you need using the four vertices for each particle.
And the cool thing is that you only need to do this once - you can reuse the same vertex buffer and index buffer for all your particle systems (providing they are big enough for your largest particle system).
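To make that concrete, here is a rough sketch of the one-time setup in XNA 4.0-style code (assumptions: it runs somewhere GraphicsDevice is available, and names such as ParticleQuadVertex, MaxParticles, QuadCorner and RandomVelocity are illustrative helpers, not anything from the sample):
struct ParticleQuadVertex : IVertexType
{
    public Vector2 Corner;          // which corner of the quad this vertex represents
    public Vector3 InitialVelocity; // random start velocity, shared by the quad's four vertices
    public float TimeOffset;        // staggers this particle within the lifetime cycle

    public static readonly VertexDeclaration Declaration = new VertexDeclaration(
        new VertexElement(0, VertexElementFormat.Vector2, VertexElementUsage.Position, 0),
        new VertexElement(8, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
        new VertexElement(20, VertexElementFormat.Single, VertexElementUsage.TextureCoordinate, 0));

    VertexDeclaration IVertexType.VertexDeclaration { get { return Declaration; } }
}

const int MaxParticles = 100;
var vertices = new ParticleQuadVertex[MaxParticles * 4];
var indices = new short[MaxParticles * 6];
var random = new Random();

for (int p = 0; p < MaxParticles; p++)
{
    float timeOffset = 1.0f - p / (float)MaxParticles;  // 1.0 down to 0.01 through the buffer
    Vector3 velocity = RandomVelocity(random);           // hypothetical helper

    for (int c = 0; c < 4; c++)
    {
        vertices[p * 4 + c].Corner = QuadCorner(c);       // hypothetical helper: (-1,-1), (1,-1), (1,1), (-1,1)
        vertices[p * 4 + c].InitialVelocity = velocity;
        vertices[p * 4 + c].TimeOffset = timeOffset;
    }

    // Two triangles per particle, indexing the four shared vertices.
    indices[p * 6 + 0] = (short)(p * 4 + 0);
    indices[p * 6 + 1] = (short)(p * 4 + 1);
    indices[p * 6 + 2] = (short)(p * 4 + 2);
    indices[p * 6 + 3] = (short)(p * 4 + 0);
    indices[p * 6 + 4] = (short)(p * 4 + 2);
    indices[p * 6 + 5] = (short)(p * 4 + 3);
}

// Upload once; the same buffers are reused for every particle system afterwards.
var vertexBuffer = new VertexBuffer(GraphicsDevice, ParticleQuadVertex.Declaration, vertices.Length, BufferUsage.WriteOnly);
vertexBuffer.SetData(vertices);
var indexBuffer = new IndexBuffer(GraphicsDevice, IndexElementSize.SixteenBits, indices.Length, BufferUsage.WriteOnly);
indexBuffer.SetData(indices);
The key point is that SetData runs once at load time; per-frame particle creation never touches the buffer again.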
Then I would have a vertex shader that would take the following input:
Per-Vertex:
Time offset
Initial velocity
Shader Parameters:
Current time
Particle lifetime (which is also the particle time wrap-around value, and the fraction of particles in the buffer being used)
Particle system position/rotation/scale (the world matrix)
Any other interesting inputs you like, such as: particle size, gravity, wind, etc
A time scale (to get a real time, so velocity and other physics calculations make sense)
That vertex shader (again like the XNA Particle 3D Sample) could then determine the position of a particle's vertex based on its initial velocity and the time that that particle had been in the simulation.
The time for each particle would be (pseudo code):
time = (currentTime + timeOffset) % particleLifetime;
In other words, as time advances, particles will be released at a constant rate (due to the offset). And whenever a particle dies at time = particleLifetime (or is it at 1.0? floating-point modulus is confusing), time loops back around to time = 0.0 so that the particle re-enters the animation.
Then, when it came time to draw my particles, I would have my buffers, shader and shader parameters set, and call DrawIndexedPrimitives. Now here's the clever bit: I would set startIndex and primitiveCount such that no particle starts out mid-animation. When the particle system first starts I'd draw 1 particle (2 primitives), and by the time that particle is about to die, I'd be drawing all 100 particles, the 100th of which would just be starting.
Then, a moment later, the 1st particle's timer would loop around and make it the 101st particle.
(If I only wanted 50 particles in my system, I'd just set my particle lifetime to 0.5 and only ever draw the first 50 of the 100 particles in the vertex/index buffer.)
And when it came time to turn off the particle system - simply do the same in reverse - set the startIndex and primitiveCount such that particles stop being drawn after they die.
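A minimal sketch of that draw-window bookkeeping might look like this (again XNA 4.0-style; currentTime, emitterStartTime, particleLifetime, the buffer fields and the "CurrentTime" effect parameter are illustrative names, and the shut-down path described above is omitted):
// How far the emitter is into its current cycle, measured in lifetimes.
float emitterAge = (currentTime - emitterStartTime) / particleLifetime;

// Ramp up: one more particle becomes active for every 1/MaxParticles of a lifetime,
// until the whole circular buffer is in play.
int activeParticles = Math.Min((int)(emitterAge * MaxParticles) + 1, MaxParticles);

GraphicsDevice.SetVertexBuffer(vertexBuffer);
GraphicsDevice.Indices = indexBuffer;
particleEffect.Parameters["CurrentTime"].SetValue(currentTime);

foreach (EffectPass pass in particleEffect.CurrentTechnique.Passes)
{
    pass.Apply();
    GraphicsDevice.DrawIndexedPrimitives(
        PrimitiveType.TriangleList,
        0,                      // baseVertex
        0,                      // minVertexIndex
        activeParticles * 4,    // numVertices
        0,                      // startIndex
        activeParticles * 2);   // primitiveCount: two triangles per particle
}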
Now I must admit that I've glossed over the maths involved and some details about using quads for particles - but it should not be too hard to figure out. The basic principle to understand is that you're treating your vertex/index buffer as a circular buffer of particles.
One downside of a circular buffer is that, when you stop emitting particles, unless you stop when the current time is a multiple of the particle lifetime, you will end up with the active set of particles straddling the ends of the buffer with a gap in the middle - thus requiring two draw calls (a bit slower). To avoid this you could wait until the time is right before stopping - for most systems this should be ok, but might look weird for some (eg: a "slow" particle system that needs to stop instantly).
Another downside to this method is that particles must be released at a constant rate - although that is usually pretty typical for particle systems (obviously this is per-system and the rate is adjustable). With a little tweaking an explosion effect (all particles released at once) should be possible.
All that being said: If possible, it may be worthwhile using an existing particle library.

Related

For batch rendering multiple similar objects, which is more performant: drawArrays(TRIANGLE_STRIP) with "degenerate triangles" or drawArraysInstanced?

MDN states that:
Fewer, larger draw operations will generally improve performance. If you have 1000 sprites to paint, try to do it as a single drawArrays() or drawElements() call.
It's common to use "degenerate triangles" if you need to draw discontinuous objects as a single drawArrays(TRIANGLE_STRIP) call. Degenerate triangles are triangles with no area, therefore any triangle where more than one point is in the same exact location. These triangles are effectively skipped, which lets you start a new triangle strip unattached to your previous one, without having to split into multiple draw calls.
However, it is also commonly recommended that for multiple similar objects one should use instanced rendering: for WebGL2 something like drawArraysInstanced(), or for WebGL1 drawArrays with the ANGLE_instanced_arrays extension enabled.
For my purposes I need to render a large number of rectangles of the same width but varying heights in a 2D plane (a WebGL-powered charting application), so any recommendation particular to my use case is valuable.
Degenerate triangles are generally faster than drawArraysInstanced but there's arguably no reason to use degenerate triangles when you can just make quads with no degenerate triangles.
While it's probably true that degenerate triangles are slightly faster than quads, you're unlikely to notice that difference. In fact I suspect it would be difficult to create an example in WebGL that would show the difference.
To be clear, I'm suggesting manually instanced quads: if you want to draw 1000 quads, put 1000 quads in a single vertex buffer and draw them all with one draw call using either drawElements or drawArrays.
On the other hand, instanced quads using drawArraysInstanced might be the most convenient way depending on what you are trying to do.
If it were me though, I'd first test without optimization, drawing 1 quad per draw call, unless I already knew I was going to draw > 1000 quads. Then I'd find some low-end hardware and see if it's too slow. Most GPU apps get fillrate bound (drawing pixels) before they get vertex bound, so even on a slow machine drawing lots of quads may be slow in a way that optimizing vertex submission won't fix.
You can take as a given that the performance of rendering has been optimized by the compiler and the OpenGL core.
Static buffers
If your buffers are static then there is generally an insignificant performance difference between the techniques mentioned. Different hardware (GPUs) will favor one technique over another, but there is no way to know what type of GPU you are running on.
Dynamic buffers
If however the buffers are dynamic then you need to consider the transfer of data from CPU RAM to GPU RAM. This transfer is a slow point, and on most GPUs rendering stalls while the data is moved (losing the advantages of concurrent rendering).
On average anything that can be done to reduce the size of the buffers moved will improve the performance.
2D sprites: TRIANGLES vs TRIANGLE_STRIP
At the most basic, 2 floats per vertex (x,y for 2D sprites), you need to modify and transfer a total of 6 verts per quad for gl.TRIANGLES (6 * 2 * b = 48 bytes per quad, where b is bytes per float (4)). If you use gl.TRIANGLE_STRIP you need to move only 4 verts for a single quad, but for more than one quad you need to create the degenerate triangles, each of which requires an additional 2 verts in front and 2 verts behind. So the size per quad is 8 * 2 * 4 = 64 bytes (in practice you can drop the 2-vert lead-in and 2-vert lead-out at the start and end of the buffer).
Thus for 1000 sprites there are 12,000 doubles (64-bit JS numbers) to convert to floats (32-bit), and the transfer is 48,000 bytes for gl.TRIANGLES. For gl.TRIANGLE_STRIP there are 16,000 doubles, for a total of 64,000 bytes transferred.
There is a clear advantage to using TRIANGLES over TRIANGLE_STRIP in this case, and it is compounded if you include additional per-vertex data (e.g. texture coords, color data, etc.).
drawArrays vs drawElements
The situation changes when you use drawElements rather than drawArrays, as the verts used when drawing elements are located via the index buffer (a static buffer). In this case you need only modify 4 verts per quad (for 1000 quads, modify 8,000 doubles and transfer 32,000 bytes).
Instanced vs modifying verts
Using elements we are down to 4 verts per quad (modify 8 doubles, transfer 32 bytes per quad).
Now suppose each quad has a uniform scale, a rotation, and a position (x,y). With instanced rendering each quad then needs only 4 doubles per instance - position (x,y), scale, and rotation - with the actual vertex positioning done by the vertex shader.
In this case we have reduced the workload down to (for 1000 quads) modifying 4,000 doubles and transferring 16,000 bytes.
Thus instanced quads are the clear winner in terms of alleviating the transfer and JavaScript bottlenecks.
Instancing can go further: in the case where only a position is needed, and that position stays within the screen, you can position a quad using only 2 shorts (16-bit ints), reducing the workload to modifying 2,000 ints (the 32-bit JS number to short conversion is much quicker than the double to float conversion) and transferring only 4,000 bytes.
Conclusion
It is clear that in the best case instanced elements offer up to 16 times less work setting up and transferring quads to the GPU.
This advantage does not always hold true: it is a balance between the minimal data required per quad (instanced) and the minimal data required per vertex times the 4 verts per quad (non-instanced).
Adding additional capabilities per quad will alter the balance, as will how often you modify the buffers (e.g. with texture coords you may only need to set the coords once when not using instancing, but with instancing you need to transfer all the data for a quad each time anything about that quad changes; note that clever interleaving of instance data can help).
There is also the hardware to consider. Modern GPUs are much better at state changes (transfer speeds); in these cases it is the JavaScript code where you can gain any significant performance increase. Low-end GPUs are notoriously bad at state changes; optimal JS code is always important, but reducing the data per quad is where the significant performance gain lies when dealing with low-end devices.

Can I use the output one GLSL shader frame as input to the next?

I'd like to use a single bitmap to store the state of a 2D dynamical system which evolves over time according to some rules. Think of it as a grayscale image where the brightness of each pixel represents the temperature of each point on a rectangular plate at time t, call this T(t). And I have a function that will give me the temperature at each point at time t + 1 as a function of this; so T(t+1) = f(T(t)). Or if you prefer the pixels represent the height of the surface of water at each point and the function is calculating how waves on the water evolve over time.
Anyway my thinking is that it should be possible to do all of this in a GLSL shader, as long as the results of the previous shader pass at time t are available to the shader on the next frame, i.e. at time t + 1.
So far pretty much every basic example I can find (and I'm a real beginner to shaders, so maybe I just need to find some not-so-basic examples) seems to be kind of a one-way street; the shader's output (color at each point) can be a function of time and of the existing texture, but not of its own previous output.
Am I missing something?
I'm specifically trying to implement something on iOS using SpriteKit which allows for GLSL shaders but I'd imagine that pointers to something like this in other languages/platforms would still be useful.

What is the time delta or timestamp used for in game loop update methods?

For example in cocos2D:
- (void)update:(ccTime)delta
can someone explain what these time deltas or timestamps are used for? How are they relevant for how the game world is updated? Is it because we do not know the fps reliably and should not just rely on incremental property updates based on -update calls?
It is important for making frame-rate-independent movement. Typically, for any character movement you take into account the time since the last update call.
This is to ensure that your game behaves the same across devices of various performance. If you move a character by 1 pixel every frame then on a device that runs at 60fps the character will move twice as fast as on a device that gets 30fps.
By scaling all movement code by the delta time, you ensure that all devices will behave the same.
It is simple to make movement frame-rate independent: something like multiplying a movement vector by deltaTime will achieve this.
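For example, an illustrative XNA-style Update (the same idea applies in cocos2D's update: with its delta; playerPosition and playerVelocity are assumed Vector2 fields, not anything from cocos2D):
// Speed is expressed in units per second, so the result is the same
// whether the game runs at 30 fps or 60 fps.
protected override void Update(GameTime gameTime)
{
    float delta = (float)gameTime.ElapsedGameTime.TotalSeconds;

    playerPosition += playerVelocity * delta;   // e.g. moving at 120 units per second

    base.Update(gameTime);
}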

Which is faster: creating a detailed mesh before execution or tessellating?

For simplicity of the problem let's consider spheres. Let's say I have a sphere, and before execution I know the radius, the position and the triangle count. Let's also say the triangle count is sufficiently large (e.g. ~50k triangles).
Would it be faster generally to create this sphere mesh before hand and stream all 50k triangles to the graphics card, or would it be faster to send a single point (representing the centre of the sphere) and use tessellation and geometry shaders to build the sphere on the GPU?
Would it still be faster if I had 100 of these spheres in different positions? Can I use hull/geometry shaders to create something which I can then combine with instancing?
Tessellation is certainly valuable, especially when combined with displacement from a heightmap, but the isolated scenario described in your question is unlikely to produce a complete answer on its own.
Before using tessellation you would need to know that you will become CPU poly/triangle bound and therefore need to start utilizing the GPU to increase the overall triangle count of your game/scene. Calculations are very fast on the GPU, so yes, using multiple tessellation subdivision levels is advisable if you are going to do it... though sometimes I've been happy with just subdividing 3-4 times from a 200-tri plane.
Mainly tessellation is used for environmental/static mesh scene objects so that you can spend your tri's on characters and other moving/animated models without becoming CPU bound.
Check out engines like Unity3D and CryEngine for tessellation examples to help the learning curve.
I just so happen to be working with this at the same time.
In terms of FPS, the pre-computed method would be faster in this situation, since you can dump one giant 50K-triangle sphere payload (like any other model) and draw it in multiple places from there.
The tessellation method would be slower, since all the triangles would be generated from a formula, multiple times per frame.

Drawing particles with CPU instead of GPU (XNA)

I'm trying out modifications to the following particle system.
http://create.msdn.com/en-US/education/catalog/sample/particle_3d
I have a function such that when I press Space, all the particles have their positions and velocities set to 0.
for (int i = 0; i < particles.GetLength(0); i++)
{
    particles[i].Position = Vector3.Zero;
    particles[i].Velocity = Vector3.Zero;
}
However, when I press Space, the particles are still moving. If I go to FireParticleSystem.cs I can set settings.Gravity to zero and the particles stop moving, but they are still not being shifted to (0,0,0).
As I understand it, the problem lies in the fact that the GPU is processing all the particle positions: it calculates where each particle should be from its initial position plus its initial velocity multiplied by its age. Therefore, all I've been able to do is change the initial position and velocity of particles, but I'm unable to do it on the fly since the GPU is handling everything.
I want the CPU to calculate the positions of the particles individually. This is because I will be later implementing some sort of wind to push the particles around. How do I stop the GPU from taking over? I think it's something to do with VertexBuffers and the draw function, but I don't know how to modify it to make it work.
The sample you downloaded is not capable of doing what you ask. You are correct in your diagnosis of the problem: the particle system is entirely maintained by the GPU, and so your changes to the position and velocity only change the start values, not the actual real-time particle values. To make a particle system that is changeable by the CPU, you need to make a particle engine class and do it yourself. There are many other samples out there that do this.
Riemers XNA tutorials are very useful. Try out this link:
http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2D/Particle_engine.php
It teaches you how to make a 2D particle system. This can be easily converted into 3D.
Or, if you want to just download an existing engine, try the Mercury particle engine:
http://mpe.codeplex.com/
This is quite simple... all you have to do is do the position/velocity calculations on the CPU rather than offloading them to a shader. I of course can't see your code, so I can't offer more specific guidance... but whether you animate your particles with a physics engine like Farseer or just apply the basic equations of motion yourself, it will happen on the CPU.
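As a minimal sketch of what that CPU-side update could look like (illustrative only: it assumes your particle array keeps mutable Position/Velocity fields, the wind value is made up, and particleVertices/particleVertexBuffer stand in for however you store and upload the vertex data):
float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;
Vector3 gravity = new Vector3(0, -9.8f, 0);
Vector3 wind = new Vector3(2, 0, 0);   // hypothetical external force

for (int i = 0; i < particles.GetLength(0); i++)
{
    // Basic equations of motion, integrated on the CPU each frame.
    particles[i].Velocity += (gravity + wind) * dt;
    particles[i].Position += particles[i].Velocity * dt;
    // Copy the updated position into this particle's quad vertices here.
}

// Re-upload the vertex data; the shader now only transforms the particles,
// it no longer animates them from their initial values.
particleVertexBuffer.SetData(particleVertices);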
I would recommend DPSF (Dynamic Particle System Framework) for this. It does the calculations on the CPU, is fully customizable and very flexible, has great help docs and tutorials, and it even provides the full source code for the FireParticleSystem from the Particle3D sample that you are using. You should be able to have the particle system integrated into your game and accomplishing what you want within a matter of minutes.
