Best way to draw a cube with solid-coloured faces - DirectX

I'm completely new to DirectX (11) so this question will be extremely basic. Sorry about that.
I'd like to draw a cube on screen that has solid-coloured faces. All of the examples that I've seen have 8 vertices, with a colour defined at each vertex (red, green, blue). The pixel shader then interpolates between these vertices to give a spectrum of colours. This looks nice, but isn't what I'm trying to achieve. I'd just like a cube with six, coloured faces.
Two ideas come to mind:
use 24 vertices, and have each vertex referenced only a single time, i.e. no sharing. This way I can define three different colours at each 3D position, one for each face.
use a texture for each face that 'stretches' to give the face the correct colour. I'm not very familiar with textures right now, so not all that sure about this idea.
What's the typical/canonical way to achieve this effect? I'm sure this 'problem' has been solved many, many times before.

For your particular problem, vertex coloring might be the easiest and best solution. But the more complex your models become, the more complicated it gets to create a proper vertex coloring, because you don't always want your imagination limited by the underlying geometry.
In general, 3D objects are colored with one or more textures. For that you create a UV mapping (wiki), which unwraps your three-dimensional surface onto a 2D plane: the texture. Now you can freely paint colors onto your object at any resolution you want, which gives you the most freedom to make the model look the way you want.
Of course each application has its own characteristics, so some projects will choose another approach, but I think this is the most popular way to colorize models.
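For the specific cube case, the texture route can be as small as a six-texel strip: one solid-color texel per face, with every vertex of face i pointing its UVs at the center of texel i. A minimal sketch with XNA/MonoGame types (illustrative only; the question is Direct3D 11, where the same idea is a 6x1 texture plus per-face UVs in the vertex buffer; the colors are placeholders):

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    static class FaceColorTexture
    {
        // Builds a 6x1 texture holding one solid color per cube face.
        public static Texture2D Create(GraphicsDevice device)
        {
            var tex = new Texture2D(device, 6, 1);
            tex.SetData(new[]
            {
                Color.Red, Color.Green, Color.Blue,
                Color.Yellow, Color.Cyan, Color.Magenta,
            });
            return tex;
        }

        // UV for every vertex of face 'faceIndex' (0..5): the center of
        // texel faceIndex, so the whole face samples one flat color.
        public static Vector2 FaceUV(int faceIndex)
            => new Vector2((faceIndex + 0.5f) / 6f, 0.5f);
    }

Sample it with point/clamp filtering (e.g. SamplerState.PointClamp) so neighbouring texels can't bleed into each other.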

Option 1 is the way to go if:
You want zero color bleed between faces
You want zero texture bleed between faces
You later want to use the color as a lighting scheme à la Minecraft
Caveats:
Could use more memory, as more vertices are being used. (There are some techniques around this, depending on how large your object is and its spatial resolution, e.g. using 1 byte for x/y/z instead of a float.)
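For concreteness, here is what option 1 can look like: 24 vertices and 36 indices, with each face owning its own four corners. Sketched with XNA-style types since the question is API-agnostic data layout (a D3D11 version feeds the same data through your own vertex struct and input layout; winding order and face colours here are assumptions):

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    static class SolidColourCube
    {
        // One entry per face: four corner positions of a unit cube.
        static readonly Vector3[][] Faces =
        {
            new[] { new Vector3(-1,-1, 1), new Vector3( 1,-1, 1), new Vector3( 1, 1, 1), new Vector3(-1, 1, 1) }, // +Z
            new[] { new Vector3( 1,-1,-1), new Vector3(-1,-1,-1), new Vector3(-1, 1,-1), new Vector3( 1, 1,-1) }, // -Z
            new[] { new Vector3( 1,-1, 1), new Vector3( 1,-1,-1), new Vector3( 1, 1,-1), new Vector3( 1, 1, 1) }, // +X
            new[] { new Vector3(-1,-1,-1), new Vector3(-1,-1, 1), new Vector3(-1, 1, 1), new Vector3(-1, 1,-1) }, // -X
            new[] { new Vector3(-1, 1, 1), new Vector3( 1, 1, 1), new Vector3( 1, 1,-1), new Vector3(-1, 1,-1) }, // +Y
            new[] { new Vector3(-1,-1,-1), new Vector3( 1,-1,-1), new Vector3( 1,-1, 1), new Vector3(-1,-1, 1) }, // -Y
        };

        // The six face colours are arbitrary placeholders.
        static readonly Color[] FaceColours =
            { Color.Red, Color.Green, Color.Blue, Color.Yellow, Color.Cyan, Color.Magenta };

        // 24 vertices: each face owns its own four corners, so one 3D
        // position can carry a different colour for each adjoining face.
        public static VertexPositionColor[] BuildVertices()
        {
            var verts = new VertexPositionColor[24];
            for (int f = 0; f < 6; f++)
                for (int v = 0; v < 4; v++)
                    verts[f * 4 + v] = new VertexPositionColor(Faces[f][v], FaceColours[f]);
            return verts;
        }

        // Two triangles per face, each indexing only that face's vertices.
        public static short[] BuildIndices()
        {
            var indices = new short[36];
            for (int f = 0; f < 6; f++)
            {
                int b = f * 4, i = f * 6;
                indices[i + 0] = (short)(b + 0);
                indices[i + 1] = (short)(b + 1);
                indices[i + 2] = (short)(b + 2);
                indices[i + 3] = (short)(b + 0);
                indices[i + 4] = (short)(b + 2);
                indices[i + 5] = (short)(b + 3);
            }
            return indices;
        }
    }

Note the memory caveat above: this is 24 vertices instead of 8, but for a cube the difference is negligible.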

Related

MonoGame - how to have draw layers while on SpriteSortMode.Texture

I have a problem in which in my game I have to use SpriteSortMode.Texture because I have a lot of objects with few textures, so I cannot afford to use SpriteSortMode.BackToFront.
The thing is this means I cannot draw by layers, unless I do SpriteBatch.Begin with the exact same settings, which is what I'm currently doing.
I only have 3 draw layers I need - a Tileset surface, Objects like rocks or characters on the surface, and UI.
Other solutions I've found are using textured quads (which supposedly also improves tileset drawing performance) and going 3D with an orthographic view, which I haven't researched yet.
I'm hoping there's a better way to make this work.
Why would having a lot of objects with few textures mean you have to use SpriteSortMode.Texture?
"This can improve performance when drawing non-overlapping sprites of uniform depth." says the MSDN page, and this is clearly not what you are doing.
Just use the default SpriteSortMode.Deferred and draw things back to front in order.
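In sketch form (the three Draw* helpers are hypothetical stand-ins for your tileset, object, and UI layers):

    using Microsoft.Xna.Framework.Graphics;

    // One Begin/End per frame: Deferred submits sprites in call order,
    // so drawing tiles, then objects, then UI yields correct layering.
    void DrawScene(SpriteBatch spriteBatch)
    {
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
        DrawTiles(spriteBatch);   // tileset surface first
        DrawObjects(spriteBatch); // rocks, characters on top
        DrawUI(spriteBatch);      // UI last
        spriteBatch.End();
    }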

Why use triangulation for flat terrain?

I've seen many terrains in wireframe mode and all of them used triangles. I get it if you use them for different heights, BUT why do people use so many triangles for flat areas in their terrain? If there is a large flat area, wouldn't it be wise to create one big square, or at least one big triangle (as big as possible), instead of using so many small ones?
So my question is: is there a reason to do this (maybe for textures)? I know tessellation does something like this, but it still leaves too many triangles from my point of view.
Possible reasons:
They don't have terrain optimization routines.
They use vertex lighting. Unless terrain is densely triangulated, it'll look horrible.
The shader does not work well with huge triangles. Interpolating large values (like light direction) across a triangle can cause precision problems.
Physics engine does not work with huge triangles.
Huge triangles cause artifacts (I think there's a hardware-dependent limit on number of texture repeats).
Multiple materials (more than 8) across terrain. That'll go above multitexturing limits on certain cards, so it'll be necessary to split terrain.
Multiple different materials or regions across terrain, streamed zones. Different materials might require different texture coordinates, etc, and if there's some kind of 2nd set of coordinates on top of the terrain (optimized unwrapped lightmap), you won't be able to use one big flat triangle.
Per-pixel lighting with multiple sources. If you have several light sources and want to use them all at once, you might have to use additive alpha-blending. With triangulated terrain you can pick out the small region affected by a particular light source and redraw it with that light's specular added. If you simply cut a big triangle with clip planes, you'll see z-fighting; and if you don't select the small region the light affects, you'll have to redraw the entire terrain, which is very likely to cause a performance drop (fillrate/shader performance, because many pixels are redrawn).

How to draw thousands of sprites with different transparency?

Hi, I'm using Firemonkey because of its cross-platform capabilities. I want to render a particle system. Right now I'm using a TMesh, which works well enough to display the particles fast. Each particle is represented in the mesh by two textured triangles. Using different texture coordinates, I can show many different particle types with the same mesh. The problem is that every particle can have its own transparency/opacity, and with my current approach I cannot set the transparency individually for each triangle (or even vertex). What can I do?
I realized that there are some other properties in TMesh.Data.VertexBuffer, like Diffuse or other sets of texture coordinates (TexCoord1-3), but these properties are not used (not even initialized) in TMesh. It also seems it isn't easy to change this behavior by inheriting from TMesh; it looks like one has to inherit from a lower-level control to initialize the VertexBuffer with more properties. Before I try that, I'd like to ask whether it would even be possible to control the transparency of a triangle that way. E.g. can I set a transparent color (Diffuse) or use a transparent texture (TexCoord1)? Or is there a better way to draw the particles in Firemonkey?
I admit that I don't know much about that particular framework, but you generally can't change transparency through a 3D model's vertex positions alone; the points are just x, y, z coordinates. The vertex data does affect how the sprites are lit if you are using a lighting system, though, and you can also use per-vertex information to apply different transparency effects.
Now, there are probably a dozen different ways to do this. Usually you have a texture with varying alpha values that can be set at runtime. Graphics APIs usually have some filtering function that can quickly apply values to sprites/textures, and a good one will use your graphics chip if available.
If you can use an effect, that's usually better, since the nuclear option is to make a bunch of separate copies of a sprite and then apply effects to them individually. If you are using Gouraud shading, it gets easier, since Gouraud fills in the surface color in code.
Now, are you using light particles? Some graphics APIs actually have code that makes light particles.
Edit: I just remembered vertex shaders, which could also do this.
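Setting Firemonkey's specifics aside, the general technique the Diffuse channel points at is per-vertex color with an alpha component. A sketch with XNA-style types, since that's what the rest of this page leans on (illustrative only, not Firemonkey API):

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    static class ParticleQuads
    {
        // One particle = two textured triangles (six vertices). All six
        // share the same tint, so each particle's opacity is independent
        // while the whole system still renders in a single draw call.
        public static VertexPositionColorTexture[] BuildQuad(
            Vector3 centre, float size, Vector2 uvMin, Vector2 uvMax, float opacity)
        {
            Color tint = Color.White * opacity; // premultiplied-alpha fade
            float h = size * 0.5f;
            var uvTR = new Vector2(uvMax.X, uvMin.Y);
            var uvBL = new Vector2(uvMin.X, uvMax.Y);
            return new[]
            {
                new VertexPositionColorTexture(centre + new Vector3(-h, -h, 0), tint, uvMin),
                new VertexPositionColorTexture(centre + new Vector3( h, -h, 0), tint, uvTR),
                new VertexPositionColorTexture(centre + new Vector3( h,  h, 0), tint, uvMax),
                new VertexPositionColorTexture(centre + new Vector3(-h, -h, 0), tint, uvMin),
                new VertexPositionColorTexture(centre + new Vector3( h,  h, 0), tint, uvMax),
                new VertexPositionColorTexture(centre + new Vector3(-h,  h, 0), tint, uvBL),
            };
        }
    }

If Firemonkey's Diffuse channel works the same way, filling it per vertex with a color whose alpha is the particle's opacity should give the effect you're after.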

XNA Adding Craters (via GPU) with a "Burn" Effect

I am currently working on a 2D "Worms" clone in XNA, and one of the features is "deformable" terrain (e.g. when a rocket hits the terrain, there is an explosion and a chunk of the terrain disappears).
How I am currently doing this is by using a texture that has a progressively higher red value as it approaches the center. I cycle through every pixel of that "deform" texture, and if the current pixel overlaps a terrain pixel and has a high enough red value, I set the corresponding entry in the color array representing the terrain to transparent. If the current pixel does NOT have a high enough red value, I blacken the terrain color (it gets blacker the closer the red value is to the threshold). At the end of this operation I use SetData to update my terrain texture.
I realize this is not a good way to do it, not only because I have read about pipeline stalls and such, but also because it can become quite laggy if lots of craters are being added at the same time. I want to redo my crater generation on the GPU instead, using render targets that "ping-pong" between being the target and the texture to modify. That isn't the problem; I know how to do that. The problem is that I don't know how to keep my burn effect using this method.
Here's how the burn effect looks right now:
Does anybody have an idea how I would create a similar burn effect (darkening the edges around the formed crater)? I am completely unfamiliar with Shaders but if it requires it I would be really thankful if someone walked me through on how to do it. If there are any other ways that'd be great too.
Sounds like you're on the right track, but you're doing a lot of things by hand that can also be done by just drawing sprites and applying the right formulas.
For example:
Suppose your terrain is saved into a giant texture in the alpha channel of the texture. 1 is terrain, 0 is nothing.
An explosion happens and the terrain has to be deformed. Update your texture easily by just drawing a black, fully transparent sphere (or explosion shape) onto it: the terrain is gone because the sphere's alpha value is 0. Your texture is now up to date, everything was done by the SpriteBatch, and nothing had to be checked pixel by pixel.
I don't know if you wanted a solution for this as well, but now you have one.
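One caveat worth sketching: with the default alpha-blend state, drawing an alpha-0 sprite leaves the target unchanged, so carving the hole needs a blend state that scales the destination alpha down instead. Something like this XNA 4-style state (names and usage illustrative):

    using Microsoft.Xna.Framework.Graphics;

    static class CraterBlend
    {
        // Leaves RGB untouched and multiplies destination alpha by
        // (1 - source alpha): an opaque crater sprite punches a hole.
        public static readonly BlendState EraseAlpha = new BlendState
        {
            ColorSourceBlend = Blend.Zero,
            ColorDestinationBlend = Blend.One,
            AlphaSourceBlend = Blend.Zero,
            AlphaDestinationBlend = Blend.InverseSourceAlpha,
        };

        // Usage (terrain render target bound, craterSprite hypothetical):
        // spriteBatch.Begin(SpriteSortMode.Deferred, EraseAlpha);
        // spriteBatch.Draw(craterSprite, craterPosition, Color.White);
        // spriteBatch.End();
    }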
For the burn effect
Now that we have our terrain in a texture, we can do a post effect on the drawing by using a shader (just like you said). The shader obtains the texture's alpha channel and can now do different things to get our burn effect.
The first option is to do edge detection. Check a few pixels in all 4 directions and see if the pixel is at the edge. If so, it needs to do a burn by, for example, multiplying it with the distance to the edge (or any other function you like)
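In sketch form, the per-pixel logic of that first option looks like this. It is written over a plain Color[] here for readability, but the same math is what the pixel shader would run; burnRange (how far the scorch reaches inward) is an assumed parameter:

    using Microsoft.Xna.Framework;

    static class BurnSketch
    {
        // Darkens terrain pixels near a hole. Out-of-bounds neighbours
        // count as edges in this sketch.
        public static Color BurnPixel(Color[] terrain, int w, int h,
                                      int x, int y, int burnRange)
        {
            if (terrain[y * w + x].A == 0)
                return Color.Transparent; // no terrain here

            // Distance (in pixels) to the nearest empty pixel,
            // checked in the four cardinal directions.
            int dist = burnRange;
            for (int i = 1; i < burnRange; i++)
            {
                if (x - i < 0 || terrain[y * w + (x - i)].A == 0 ||
                    x + i >= w || terrain[y * w + (x + i)].A == 0 ||
                    y - i < 0 || terrain[(y - i) * w + x].A == 0 ||
                    y + i >= h || terrain[(y + i) * w + x].A == 0)
                {
                    dist = i;
                    break;
                }
            }

            // Darker near the rim, unchanged burnRange pixels inward.
            float t = dist / (float)burnRange;
            Color c = terrain[y * w + x];
            return new Color((int)(c.R * t), (int)(c.G * t), (int)(c.B * t), c.A);
        }
    }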
Another way is quite similar to the first one, but does it in two steps. First you do the same kind of edge detection, but you save the edges into a separate texture. Then, when you draw your terrain texture, you overlay the edges on top, so the result is much the same as drawing the ground in one pass.
The main difference with the second option is that you can also choose to just draw the normal ground; you are not adjusting the pixels of the ground texture at render time.
I know this is a long story, but it is a nice technique. Have a look at toon shaders, they do edge detection as well, even though it is 3D.
Keywords: Toon shading, HLSL, Post effects, edge detection, image processing.
Recommended reading: http://rbwhitaker.wikidot.com/xna-tutorials

Sprite pixel parsing to determine Vector

Given an image that can contain any variety of solid color images, what is the best method for parsing the image at a given point and then determining the slope (or Vector if you prefer) of that area?
Being new to XNA development, I feel there must be an established method for doing this sort of thing, but I have been Googling this issue for a while now without finding one.
By way of example, I have mocked up a quick image to demonstrate what I am trying to do. The white portion of the image (where the labels are shown) would be transparent pixels. The "ground" would be a RenderTarget2D or Texture2D object that will provide the Color array of pixels.
(Example image)
What you are looking for is the tangent, which is 90 degrees to the normal (which is more commonly used). These two terms should assist you in your searching.
This is trivial if you've got the polygon outline data. If all you have is an image, then you have to come up with a way to convert it into a polygon.
It may not be entirely suitable for your problem, but the first place I would go is the Farseer Physics Engine, which has a "texture to polygon" feature you could possibly reuse.
If you are using the terrain as some kind of "ground", you can possibly cheat a bit by looking at the adjacent column of pixels and using that to determine the ground slope at that exact point. Kind of like what Lemmings and Worms do.
If you make that determination at the boundary between each pixel, you can get rise:run gradients between two horizontally adjacent pixels. Usually you just break it into categories: flat (1:1), 45 degrees (2:1), or too steep (>3:1). With a more complicated algorithm that looks outward across more columns, you can get better resolution.
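A sketch of that column-scanning idea; the Color[] comes from GetData on the terrain texture, and span (how far to look sideways) is an assumed parameter:

    using System;
    using Microsoft.Xna.Framework;

    static class SlopeSketch
    {
        // Direction of the ground surface at column x, estimated from
        // the surface heights of two nearby columns. Note y grows
        // downward in image space, so a positive Y component in the
        // result means the ground slopes down to the right.
        public static Vector2 GroundTangent(Color[] pixels, int width, int height,
                                            int x, int span)
        {
            int hLeft = SurfaceHeight(pixels, width, height, Math.Max(x - span, 0));
            int hRight = SurfaceHeight(pixels, width, height, Math.Min(x + span, width - 1));
            return Vector2.Normalize(new Vector2(2 * span, hRight - hLeft));
        }

        // Y coordinate of the first non-transparent pixel from the top
        // of column x, i.e. where the ground surface starts.
        static int SurfaceHeight(Color[] pixels, int width, int height, int x)
        {
            for (int y = 0; y < height; y++)
                if (pixels[y * width + x].A != 0)
                    return y;
            return height; // column contains no ground
        }
    }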
