Are depth buffers mandatory? - DirectX

I am just trying to better understand the DirectX pipeline. I'm curious whether depth buffers are mandatory in order to get things to work, or whether a depth buffer is just something you need if you want objects to appear behind one another.

The depth buffer is not mandatory. In a 2D game, for example, there is usually no need for it.
You need a depth buffer if you want objects to appear behind each other, but still want to be able to draw them in arbitrary order.
If you draw all triangles from back to front, and none of them intersect, then you could do without the depth buffer. However, it's generally easier to do away with depth sorting and just use the depth buffer anyway.

Depth buffers are not mandatory. They simply solve the following problem: suppose you have an object near the camera which is drawn first. Then, after that is already drawn, you want to draw an object which is far away, but at the same position on-screen as the nearby object. Without a depth buffer, it gets drawn on top, which looks wrong. With a depth buffer, it is obscured, because the GPU figures out it's behind something else that has already been drawn.
You can turn them off and deal with that yourself, e.g. by drawing back-to-front (though this has other problems that depth buffering solves), which is easy in 2D games. Alternatively, you might actually want that overdraw as some kind of effect. But a depth buffer is by no means necessary for basic rendering.
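For concreteness, here is a minimal sketch of toggling the depth test, assuming Direct3D 11 (the question doesn't say which DirectX version); `device` and `context` stand for the usual device and immediate context you already have:

```cpp
// Minimal sketch (Direct3D 11 assumed): two depth-stencil states, one with the
// depth test enabled and one without. Stencil is left disabled in both.
#include <d3d11.h>

ID3D11DepthStencilState* depthOn  = nullptr;
ID3D11DepthStencilState* depthOff = nullptr;

void CreateDepthStates(ID3D11Device* device)
{
    D3D11_DEPTH_STENCIL_DESC desc = {};        // StencilEnable stays FALSE
    desc.DepthEnable    = TRUE;                        // test incoming fragments
    desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;  // and record their depth
    desc.DepthFunc      = D3D11_COMPARISON_LESS;
    device->CreateDepthStencilState(&desc, &depthOn);

    desc.DepthEnable    = FALSE;                       // no test: last draw wins
    desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;
    device->CreateDepthStencilState(&desc, &depthOff);
}

void Draw(ID3D11DeviceContext* context)
{
    // 3D pass: arbitrary draw order, the depth buffer sorts out occlusion.
    context->OMSetDepthStencilState(depthOn, 0);
    // ... draw scene ...

    // 2D / back-to-front pass: no depth buffer needed at all.
    context->OMSetDepthStencilState(depthOff, 0);
    // ... draw sprites ...
}
```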

Related

Making GL_POINTS look like a 3D Rectangle

Suppose one has an array of GL_POINTS and wants to make each appear to have a distinct "height" or "depth", so instead of appearing like a flat scatter of squares they appear to be a scatter of 3D rectangles / right rectangular prisms.
Is there a technique in WebGL that will allow one to achieve this effect? One could of course use vertices that actually articulate those 3D rectangles, but my goal is to optimize for performance as I have ~100,000 of these rectangles to render, and I thought points would be the best primitive to use.
Right now I am thinking one could probably use a series of point sprites each with varying depth, then assign each point the sprite that corresponds most closely with the desired depth (effectively quantizing the depth data field). But is there a way to keep the depth field continuous?
Any pointers on this question would be greatly appreciated!
In my experience POINTS are not faster than making your own vertices. Also, if you use instanced drawing you can get away with almost the same amount of data: you need one quad, and then a position, width, and height for each rectangle. I'm not sure instancing is as fast as just making all the vertices, though; it might depend on the GPU/driver.
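If it helps, here is a minimal sketch of that instanced-quad idea. It is written against desktop OpenGL 3.3, but WebGL2 (or the ANGLE_instanced_arrays extension in WebGL1) exposes equivalent calls; the buffer names and the Instance layout are made up for illustration:

```cpp
// Sketch of instanced quads (OpenGL 3.3 shown; WebGL2 has gl.vertexAttribDivisor
// and gl.drawElementsInstanced). One shared unit quad; each instance supplies
// x, y, width, height. quadVBO/quadIBO (unit quad, 6 indices) are assumed to exist.
struct Instance { float x, y, w, h; };   // hypothetical per-rectangle data

// Upload per-instance data.
glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
glBufferData(GL_ARRAY_BUFFER, instanceCount * sizeof(Instance),
             instances, GL_DYNAMIC_DRAW);

// Attribute 1 = per-instance rect, advanced once per instance, not per vertex.
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(Instance), (void*)0);
glVertexAttribDivisor(1, 1);

// One draw call for all ~100,000 rectangles; the vertex shader scales the shared
// unit quad by (w, h) and offsets it by (x, y) for each instance.
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, quadIBO);
glDrawElementsInstanced(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, nullptr, instanceCount);
```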
As pointed out in many other Q&As on points 😄, the maximum point size is GPU/driver specific and is allowed to be as low as 1 pixel. There are plenty of GPUs whose maximum point size is only 256 pixels (no idea why), and a few where it is only 64. Yet another reason not to use POINTS.
Otherwise though, POINTS always draw a square so you'd have to draw a square large enough that contains your rectangle and then in the fragment shader, discard the pixels outside of the rectangle.
That's unlikely to be good for speed, though. Every pixel of the square still has to be evaluated by the fragment shader, which is slower than drawing a rectangle with vertices, since then the pixels outside the rectangle are never even considered. Further, using discard in a shader is often slower than not using it. Take depth-buffer updates, for example: if there is no discard, nothing needs to be checked and the depth buffer can be updated unconditionally, separately from the shader. With discard, the depth buffer can't be updated until the GPU knows whether the shader kept or discarded the fragment.
As for making them appear 3D I'm not sure what you mean. Effectively points are just like drawing a square quad so you can put anything you want on that square. The majority of shaders on shadertoy can be adapted to draw themselves on points. I wouldn't recommend it as it would likely be slow but just pointing out that it's just a quad. Draw a texture on them, draw a procedural texture on them, draw a solid color on them, draw a procedural snail on them.
Another possible solution is to apply a normal map to the quad and then do lighting calculations on those normals, so each quad will have the correct lighting for its position relative to your light(s).

How to acquire mapbox-gl-js z-buffer

I'm developing a WebGL application in which I draw detailed buildings on top of mapbox-gl-js.
Everything works fine except for one detail: I don't know how to acquire the depth buffer of each drawn frame.
In some cases my overlay is drawn over buildings extruded by the mapbox-gl-js style, even though it should be behind them.
I see only one way to do this correctly: acquire the depth buffer from mapbox-gl-js, pass it into my shader as a texture, and compare it against my own depth values,
as in a deferred rendering technique.
Is there any way to do that?
You may be better off using a Custom Layer.

HLSL How to properly outline a flat-shaded model

I have a question and hope you can help me with it. I've been busy making a game with XNA and have recently started getting into shaders (HLSL). There's this shader that I like, use, and would like to improve.
The shader creates an outline by drawing the back faces of a model (in black) and translating each vertex along its normal. Now, for a smooth-shaded model, this is fine. I, however, am using flat-shaded models (I'm posting this from my phone, but if anyone is confused about what that means, I can upload a picture later). This means that each vertex is translated along the normal of its corresponding face, resulting in visible gaps between the faces.
Now, the question is: is there a way to calculate (either in the shader or in XNA) how I should translate each vertex, or is the only viable option to make a copy of the 3D model, but with smooth shading?
Thanks in advance, hope you can educate me!
EDIT: Alternatively, I could load only a smooth-shaded model and try to flat-shade it in the shader. That would, however, mean that I have to be able to find the normals of all vertices of the corresponding face, add them together, and normalize the result. Is there a way to do this?
EDIT2: So far, I've found a few options that don't seem to work in the end: setting "shademode" in HLSL is now deprecated, and setting the fill mode to wireframe would be neat (while culling front faces) if only I were able to set the line thickness.
I'm working on a new idea here. I could maybe iterate through the vertices, find their positions on the screen, and draw 2D lines between those points using something like the RoundLine library. I'm going to try this and see if it works.
Ok, I've been working on this problem for a while and found something that works quite nicely.
Instead of doing a lot of complex mathematics to draw 2D lines at the right depth, I simply did the following:
Set a rasterizer state that culls front faces, draws in wireframe, and has a slightly negative depth bias.
Now draw the model in all black (I modified my shader for this)
Set a rasterizer state that culls back faces, draws in FillMode.Solid, and has a zero depth bias.
Now draw the model normally.
Since we can't change the thickness of wireframe lines, we're left with a very slim outline. For my purposes, this was actually not too bad.
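The steps above are in XNA terms; purely as a sketch, here is what those two rasterizer states look like in raw Direct3D 11 (XNA's RasterizerState exposes the matching CullMode, FillMode, and DepthBias members; the bias value is only illustrative):

```cpp
// Two-pass outline, sketched with Direct3D 11 rasterizer states.
D3D11_RASTERIZER_DESC rd = {};
rd.DepthClipEnable = TRUE;

// Pass 1: wireframe, front faces culled, pulled slightly toward the camera.
rd.FillMode  = D3D11_FILL_WIREFRAME;
rd.CullMode  = D3D11_CULL_FRONT;
rd.DepthBias = -100;                  // illustrative value, tune per scene
ID3D11RasterizerState* outlinePass = nullptr;
device->CreateRasterizerState(&rd, &outlinePass);

// Pass 2: solid, back faces culled, no bias (the normal render).
rd.FillMode  = D3D11_FILL_SOLID;
rd.CullMode  = D3D11_CULL_BACK;
rd.DepthBias = 0;
ID3D11RasterizerState* normalPass = nullptr;
device->CreateRasterizerState(&rd, &normalPass);

// Per frame:
context->RSSetState(outlinePass);     // draw the model in all black
// ... draw ...
context->RSSetState(normalPass);      // draw the model normally
// ... draw ...
```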
I hope this information is useful to somebody later on.

How to create sprite surface like in "cham cham"

My question may be a bit too broad, but I am going for the concept. How can I create a surface like they did in the "Cham Cham" app?
https://itunes.apple.com/il/app/cham-cham/id760567889?mt=8.
I got most of the stuff done in the app, but the surface that changes with the user's touch is quite different: you can change its altitude, and it grows and shrinks. How can this be done using SpriteKit? What is the concept behind it? Can anyone explain it a bit?
Thanks
Here comes the answer from Cham Cham developers :)
Let me split the explanation into different parts:
Note: As the project started quite a while ago, it is implemented in pure OpenGL. The SpriteKit implementation might differ, but you just need to map the idea over to it.
Defining the ground
The ground is represented by a set of points, which are interpolated using a Hermite spline. Basically, the game uses a bunch of control points defining the surface, plus a set of intermediate points between each pair of control points, like below:
The red dots are control points, and everything in between is computed using the mentioned Hermite interpolation. The green points in the middle have nothing to do with it, but make the whole thing look like boobs :)
You can choose an arbitrary number of intermediate steps to make your boobs look as smooth as possible, but this is mostly a performance trade-off.
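As a minimal sketch of that interpolation step (the Catmull-Rom-style tangents are an assumption on my part; Cham Cham may pick tangents differently):

```cpp
#include <vector>

struct Vec2 { float x, y; };

// Cubic Hermite between p0 and p1 with tangents m0, m1, for t in [0, 1].
Vec2 hermite(Vec2 p0, Vec2 p1, Vec2 m0, Vec2 m1, float t)
{
    float t2 = t * t, t3 = t2 * t;
    float h00 =  2*t3 - 3*t2 + 1;    // weight of p0
    float h10 =      t3 - 2*t2 + t;  // weight of m0
    float h01 = -2*t3 + 3*t2;        // weight of p1
    float h11 =      t3 -   t2;      // weight of m1
    return { h00*p0.x + h10*m0.x + h01*p1.x + h11*m1.x,
             h00*p0.y + h10*m0.y + h01*p1.y + h11*m1.y };
}

// Sample `steps` points between every pair of control points.
// Tangents are Catmull-Rom style: m_i = (p_{i+1} - p_{i-1}) / 2 (an assumption).
std::vector<Vec2> sampleGround(const std::vector<Vec2>& cp, int steps)
{
    if (cp.size() < 2) return cp;
    std::vector<Vec2> out;
    for (size_t i = 0; i + 1 < cp.size(); ++i) {
        Vec2 prev = cp[i == 0 ? 0 : i - 1];
        Vec2 next = cp[i + 2 < cp.size() ? i + 2 : i + 1];
        Vec2 m0 = { (cp[i + 1].x - prev.x) * 0.5f, (cp[i + 1].y - prev.y) * 0.5f };
        Vec2 m1 = { (next.x - cp[i].x) * 0.5f,     (next.y - cp[i].y) * 0.5f };
        for (int s = 0; s < steps; ++s)
            out.push_back(hermite(cp[i], cp[i + 1], m0, m1, s / float(steps)));
    }
    out.push_back(cp.back());
    return out;
}
```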
Controlling the shape
All you need to do is allow the user to move the control points (or some of them, as in Cham Cham; you can define the range each point is allowed to move in, etc.). Recomputing the interpolated values will yield a changed shape, which remains smooth at all times (given that you have picked enough intermediate points).
Texturing the thing
Again, it is up to you how you apply the texture. In Cham Cham, we use one big texture to hold the background image and recompute the texture coordinates on every shape change. You could try a more sophisticated algorithm, like squeezing the texture or whatever you find appropriate.
As for the surface texture (the one that covers the ground – grass, ice, sand etc.) – you can just use triangle strips, with "bottom" vertices sitting at every interpolated point of the surface and "top" vertices raised above them (by offsetting them from the "bottom" ones in the direction of the normal at that point).
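A small sketch of that strip layout; grassHeight is a made-up thickness parameter and the texture coordinates are just one simple choice:

```cpp
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };                 // same as in the interpolation sketch
struct SurfaceVertex { float x, y, u, v; };  // position + texture coordinates

// For every interpolated ground point (at least two assumed), emit a "bottom"
// vertex on the curve and a "top" vertex pushed out along the curve normal.
std::vector<SurfaceVertex> buildStrip(const std::vector<Vec2>& pts, float grassHeight)
{
    std::vector<SurfaceVertex> strip;
    for (size_t i = 0; i < pts.size(); ++i) {
        // Tangent from the neighbouring points; normal = tangent rotated 90 degrees.
        Vec2 a = pts[i == 0 ? 0 : i - 1];
        Vec2 b = pts[i + 1 < pts.size() ? i + 1 : i];
        float tx = b.x - a.x, ty = b.y - a.y;
        float len = std::sqrt(tx * tx + ty * ty);
        float nx = -ty / len, ny = tx / len;

        float u = i / float(pts.size() - 1);   // simple left-to-right mapping
        strip.push_back({ pts[i].x, pts[i].y, u, 1.0f });                  // bottom
        strip.push_back({ pts[i].x + nx * grassHeight,
                          pts[i].y + ny * grassHeight, u, 0.0f });         // top
    }
    return strip;   // draw as a triangle strip (GL_TRIANGLE_STRIP or equivalent)
}
```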
Rendering it
The easiest way is to use a tessellation library, like libtess. What it will do is convert your boundary line (composed of the interpolated points) into a set of triangles. It will preserve texture coordinates, so you can just feed these triangles to the renderer.
SpriteKit note
Unfortunately, I am not really familiar with the SpriteKit engine, so I cannot guarantee you will be able to copy the idea over one-to-one, but please feel free to comment on the challenging aspects of the implementation and I will try to help.

How should I optimize drawing a large, dynamic number of collections of vertices?

...or am I insane to even try?
As a novice to using bare vertices for 3D graphics, I haven't ever worked with vertex buffers and the like before. I am guessing that I should use a dynamic buffer because my game deals with manipulating, adding, and deleting primitives. But how would I go about doing that?
So far I have stored my indices in a Triangle.cs class. Triangles are stored in Quads (which contain the vertices that correspond to their indices), quads are stored in blocks. In my draw method, I iterate through each block, each quad in each block, and finally each triangle, apply the appropriate texture to my effect, then call DrawUserIndexedPrimitives to draw the vertices stored in the triangle.
I'd like to use a vertex buffer because this method cannot support the scale I am going for. I am assuming it to be dynamic. Since my vertices and indices are stored in a collection of separate classes, though, can I still effectively use a buffer? Is using separate buffers for each quad silly (I'm guessing it is)? Is it feasible and effective for me to dump vertices into the buffer the first time a quad is drawn and then store where those vertices were so that I can apply that offset to that triangle's indices for successive draws? Is there a feasible way to handle removing vertices from the buffer in this scenario (perhaps event-based shifting of index offsets in triangles)?
I apologize that these questions may be either far too novice or too confusing/vague. I'd be happy to provide clarification. But as I've said, I'm new to this and I may not even know what I'm talking about...
I can't exactly tell what you're trying to do, but using a separate buffer for every quad is very silly.
The golden rule in graphics programming is batch, batch, batch. This means packing as much as possible into a single DrawUserIndexedPrimitives call; your graphics card will love you for it.
In your case, put all of your vertices and indices into one vertex buffer and one index buffer (you might need more; I have no idea how many vertices we're talking about). Whenever the user changes one of the primitives, regenerate the entire buffer. If you really have a lot of primitives, split them up into multiple buffers and only regenerate the ones you need when the user changes something.
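As a sketch of what that packing might look like (plain C++ containers for illustration only; the vertex layout is made up, and in XNA the same arrays would back a DynamicVertexBuffer/DynamicIndexBuffer or be handed to a single DrawUserIndexedPrimitives call):

```cpp
// Illustrative packing: every quad appends 4 vertices and 6 indices to shared
// arrays, so the whole scene goes out in one draw call instead of one per triangle.
#include <cstdint>
#include <vector>

struct Vertex { float x, y, z, u, v; };   // made-up layout

struct Batch {
    std::vector<Vertex>   vertices;
    std::vector<uint32_t> indices;

    void addQuad(const Vertex q[4])
    {
        uint32_t base = static_cast<uint32_t>(vertices.size());
        vertices.insert(vertices.end(), q, q + 4);
        // Two triangles per quad; indices are relative to where its vertices landed.
        uint32_t idx[6] = { base, base + 1, base + 2, base, base + 2, base + 3 };
        indices.insert(indices.end(), idx, idx + 6);
    }

    void clear() { vertices.clear(); indices.clear(); }
};

// Per frame (or only when something changed): rebuild or patch the batch, then
// issue ONE indexed draw over batch.vertices / batch.indices.
```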
The most important thing is to minimize the number of DrawUserIndexedPrimitives calls; those calls have a lot of overhead, and you could easily make your game on the order of 20x faster.
Graphics cards are pipelines: they like being given a big chunk of data to eat away at. Giving them one triangle at a time is like forcing a large-scale car factory to make only one car at a time, where they can't start building the next car before the last one is finished.
Anyway good luck, and feel free to ask any questions.
