Multi-Texturing - Interpolation between two layers of a 3D texture - DirectX

I'm trying to achieve terrain texturing using a 3D texture that consists of several layers of material, and to make the blending between materials smooth.
Maybe my illustration will explain it better:
Just imagine that each color is a cool terrain texture, like grass, stone, etc.
I want to get them properly blended, but with my current approach I get all of the textures that lie between the requested ones instead of only the textures I want to appear (which seems logical because, as I've read, a 3D texture is treated as a three-dimensional array rather than a stack of independent texture layers).
My current (and obviously naive) approach is as simple as pie (the 'current' result is rendered using point interpolation, the desired result is hand-painted):
Vertexes:
Vertex 1: Position = Vector3.Zero, UVW = Vector3.Zero
Vertex 2: Position = Vector3(0, 1, 0), UVW = Vector3(0, 1, 0.75f)
Vertex 3: Position = Vector3(0, 0, 1), UVW = Vector3(1, 0, 1)
As you can see, the first vertex of the triangle uses the first material (the red one), the second vertex uses the third material (the blue one) and the third vertex uses the last, fourth material (the yellow one).
This is how it's done in the pixel shader (UVW is passed through unchanged):
float3 texColor = tex3D(ColorTextureSampler, input.UVW);
return float4(texColor, 1);
The reason for this choice is my terrain structure. The terrain is generated from voxels (each voxel holds a material ID) using marching cubes. The vertices are 'welded' because the meshes are pretty big and I don't want to make every triangle individual (but I can still do that if there is no way to solve my problem with connected vertices).
I recently came up with the idea of storing, in each vertex, the material IDs of the other two vertices of the triangle together with their blend factors (I would have a float2 UV pair, a float3 for the material IDs and a float3 for the blend factor of each material ID), but I don't see any way to accomplish this without breaking my mesh into individual triangles.
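Just to illustrate that idea, here is a rough sketch of how the pixel-shader side might look, assuming the material IDs and blend weights reach the pixel shader somehow (LayerCount, MaterialIds and BlendWeights are made-up names used only for illustration):

// Hypothetical sketch of the per-vertex material-ID idea described above.
// ColorTextureSampler is the sampler from the question; LayerCount,
// MaterialIds and BlendWeights are assumptions for illustration only.
sampler3D ColorTextureSampler;
float LayerCount;                // number of material layers in the 3D texture

struct PS_INPUT
{
    float2 UV           : TEXCOORD0;
    float3 MaterialIds  : TEXCOORD1; // IDs of the three materials used by this triangle
    float3 BlendWeights : TEXCOORD2; // interpolated per-vertex blend weights
};

float4 PS(PS_INPUT input) : COLOR0
{
    // Sample the center of each requested layer so neighbouring layers don't bleed in.
    float3 w = (input.MaterialIds + 0.5) / LayerCount;
    float3 c0 = tex3D(ColorTextureSampler, float3(input.UV, w.x)).rgb;
    float3 c1 = tex3D(ColorTextureSampler, float3(input.UV, w.y)).rgb;
    float3 c2 = tex3D(ColorTextureSampler, float3(input.UV, w.z)).rgb;

    // Weighted blend of the three picked layers.
    float3 b = input.BlendWeights / dot(input.BlendWeights, float3(1, 1, 1));
    return float4(c0 * b.x + c1 * b.y + c2 * b.z, 1);
}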
Any help would be greatly appreciated. I'm targeting SlimDX with C# and the Direct3D 9 API. Thanks for reading.
P.S.: I'm sorry if I made some mistakes in this text, English is not my native language.

Your ColorTextureSampler is probably using point filtering (D3DTEXF_POINT). Use either D3DTEXF_LINEAR or D3DTEXF_ANISOTROPIC to achieve the desired interpolation effect.
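For example, if the sampler is declared in a D3D9 effect file, the filtering could be set right there; a minimal sketch (the ColorTexture parameter name is an assumption):

// Sketch: declare the volume sampler with linear filtering in an .fx file.
// ColorTexture is an assumed texture parameter name.
texture ColorTexture;

sampler3D ColorTextureSampler = sampler_state
{
    Texture   = <ColorTexture>;
    MinFilter = LINEAR;   // instead of POINT
    MagFilter = LINEAR;
    MipFilter = LINEAR;
};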
I'm not very familiar with SlimDX 9, but you get the idea.
BTW, nice illustration =)
Update 1
The result in your comment below looks consistent with your code.
It seems that to get the desired effect you will have to change your overall approach.
This is not a complete solution for your case, but here is how we do it for plain 3D terrains:
Every vertex has one pair (u, v) of texture coordinates.
You have n textures to sample from (T1, T2, T3, ..., Tn) that represent different layers of terrain: sand, grass, rock, etc.
You have one or more mask textures (n channels in total) that store the blending coefficients for each texture T in their channels: the R channel holds the alpha for T1, the G channel for T2, B for T3, and so on.
In the pixel shader you sample your layer textures as usual and get color values float4 val1, val2, val3, ...
Then you sample the mask texture(s) for the corresponding blend coefficients and get float blend1, blend2, blend3, ...
Then you apply some kind of blending algorithm, for example simple linear interpolation (a complete pixel-shader sketch follows after this snippet):
float4 terrainColor = lerp( val1, val2, blend1 );
terrainColor = lerp( terrainColor, val3, blend2);
terrainColor = lerp( terrainColor, ..., blendN );
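A minimal sketch of such a pixel shader, assuming three layers and a single RGB mask; all sampler names below are made up for illustration:

// Sketch: texture splatting with three layers and one mask texture (D3D9-style HLSL).
// Sampler names are assumptions, not taken from the original code.
sampler2D Layer1Sampler;   // T1
sampler2D Layer2Sampler;   // T2
sampler2D Layer3Sampler;   // T3
sampler2D MaskSampler;     // blend coefficients packed into its channels

float4 PS(float2 uv : TEXCOORD0) : COLOR0
{
    float4 val1 = tex2D(Layer1Sampler, uv);
    float4 val2 = tex2D(Layer2Sampler, uv);
    float4 val3 = tex2D(Layer3Sampler, uv);

    // The mask usually stretches over the whole terrain and may use its own UV set;
    // here it simply reuses the layer UVs to keep the sketch short.
    float4 mask = tex2D(MaskSampler, uv);

    // Same lerp chain as above, with blend1 = mask.r and blend2 = mask.g.
    float4 terrainColor = lerp(val1, val2, mask.r);
    terrainColor = lerp(terrainColor, val3, mask.g);
    return terrainColor;
}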
For example, if your T1 is grass and you have a big grass field in the middle of your map, the mask will have a big red area in the middle.
This approach is a bit slow because of all the texture sampling, but it is simple to implement, gives good visual results and is very flexible. You can use not only a mask as the source of blend coefficients, but any value: for example height (more snow on mountain peaks, rock in the mountains, dirt on low ground), slope (rock on steep faces, grass on flat areas), even fixed values, and so on; or mix all of that (a small height-based sketch follows a bit further down). Also, you can vary the blending itself: use the built-in lerp or something more complicated (warning! this example is stupid):
float4 terrainColor = val1 * val2 * blend1 + val2 * val3 * blend2;
terrainColor = saturate(terrainColor);
Playing with the blend algorithm is the most interesting part of this approach, and you can find many, many techniques on Google.
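For instance, a height-based blend factor can be computed directly in the shader instead of being read from a mask; a rough sketch (the sampler names and the SnowStart/SnowFull parameters are made up):

// Sketch: blend a snow layer in above a certain world-space height.
// RockSampler, SnowSampler, SnowStart and SnowFull are illustrative names.
sampler2D RockSampler;
sampler2D SnowSampler;
float SnowStart;   // height where snow starts to appear
float SnowFull;    // height where the surface is fully covered

float4 PS(float2 uv : TEXCOORD0, float worldHeight : TEXCOORD1) : COLOR0
{
    float4 rock = tex2D(RockSampler, uv);
    float4 snow = tex2D(SnowSampler, uv);

    // 0 below SnowStart, 1 above SnowFull, smooth ramp in between.
    float snowBlend = saturate((worldHeight - SnowStart) / (SnowFull - SnowStart));
    return lerp(rock, snow, snowBlend);
}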
Not sure, but hope it helps!
Happy coding! =)

Related

Vertex Shader: compute the leftmost vertex

Target: OpenGL ES >= 3.0
My app:
1) creates several complicated Meshes
2) for each Mesh, renders it:
a) runs a vertex shader which distorts the Mesh's vertices in nontrivial ways
b) nothing special in fragment shader
3) Again for each Mesh:
a) postprocess the area taken by it
Now, in order for postprocessing to be efficient, I call glScissor and make only the smallest rectangle containing the Mesh pass the Scissor test. In order to do that, I need to know the bounding rectangle, and to compute that, I need to know the Mesh's
a) leftmost
b) rightmost
c) topmost
d) bottom-most
vertices in window coordinates. It wouldn't be such a big problem if not for the vertex shader, which distorts the Mesh's vertices (step 2a above).
I deal with that by setting up Transform Feedback so that after step 2 I have the transformed vertices on the CPU. I then compute the leftmost one (and the other three) with a simple loop through all of them.
There are hundreds of thousands of vertices, though, and I was wondering whether this job could be done by the Vertex Shader itself.
Question: can a Vertex Shader, one which modifies the vertex positions, figure out the leftmost vertex and pass back only that one (and the 3 other 'extreme' vertices)?

HLSL vertex shader

I've been studying shaders in HLSL for an XNA project (so no DX10-DX11), but almost all the resources I found were tutorials for effects where most of the work is done in the pixel shader. For instance, for lighting the vertex shader is only used to feed the pixel shader normals and other data like that.
I'd like to make some effects based on the vertex shader rather than the pixel shader, like deformation for instance. Could someone suggest a book or a website? Even a bare effect name would be useful, since then I could google it.
A lot of lighting, etc. is done in the pixel shader because the resulting image quality will be much better.
Imagine a sphere that is created by subdividing a cube or icosahedron. If lighting calculations are done in the vertex shader, the resulting values will be interpolated between face edges, which can lead to a flat or faceted appearance.
Things like blending and morphing are done in the vertex shader because that's where you can manipulate the vertices.
For example:
matrix World;
matrix View;
matrix Projection;
float WindStrength;
float3 WindDirection;
VertexPositionColor VS(VertexPositionColor input)
{
    VertexPositionColor output;

    // Move the vertex into world space first, so the wind offset is applied there.
    float4 worldPosition = mul(input.Position, World);

    // Push vertices along the wind direction; higher vertices (larger Y) move further.
    worldPosition.xyz += WindDirection * WindStrength * worldPosition.y;

    output.Position = mul(mul(worldPosition, View), Projection);
    output.Color = input.Color;
    return output;
}
(Pseudo-ish code since I'm writing this in the SO post editor.)
In this case, I'm offsetting vertices that are "high" on the Y axis with a wind direction and strength. If I use this when rendering grass, for instance, the tops of the blades will lean in the direction of the wind, while the vertices that are closer to the ground (ideally with a Y of zero) will not move at all. The math here should be tweaked a bit to account for really tall things that would otherwise get unacceptably large offsets, and the wind should not be applied uniformly to all blades, but it should be clear that here the vertex shader is modifying the mesh in a non-uniform way to get an interesting effect.
No matter the effect you are trying to achieve - morphing, billboards (so the item you're drawing always faces the camera), etc., you're going to wind up passing some parameters into the VS that are then selectively applied to vertices as they pass through the pipeline.
A fairly trivial example would be "inflating" a model into a sphere, based on some parameter.
Pseudocode again,
matrix World;
matrix View;
matrix Projection;
float LerpFactor;
VertexPositionColor VS(VertexPositionColor input)
{
    // The direction from the object's center doubles as a point on the unit sphere.
    float3 normal = normalize(input.Position.xyz);
    float3 position = lerp(input.Position.xyz, normal, LerpFactor);

    matrix wvp = mul(mul(World, View), Projection);
    float4 outputPosition = mul(float4(position, 1.0), wvp);
    ....
By stepping the uniform LerpFactor from 0 to 1 across a number of frames, your mesh (ideally a convex polyhedron) will gradually morph from its original shape to a sphere. Of course, you could include more explicit morph targets in your vertex declaration and morph between two model shapes, collapse a model to a less complex version of it, open the lid on a box (or completely unfold it), etc. The possibilities are endless.
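A hedged sketch of what morphing between two baked shapes could look like, assuming the vertex declaration carries both positions (the PositionA/PositionB fields and the MorphFactor parameter are made-up names):

// Sketch: blend between two morph targets stored in the same vertex.
// The input layout and parameter names are assumptions for illustration.
matrix WorldViewProjection;
float MorphFactor;   // 0 = shape A, 1 = shape B

struct MorphInput
{
    float4 PositionA : POSITION0;
    float4 PositionB : POSITION1;
    float4 Color     : COLOR0;
};

struct MorphOutput
{
    float4 Position : POSITION0;
    float4 Color    : COLOR0;
};

MorphOutput MorphVS(MorphInput input)
{
    MorphOutput output;
    // Interpolate between the two target shapes, then project as usual.
    float4 morphed = lerp(input.PositionA, input.PositionB, MorphFactor);
    output.Position = mul(morphed, WorldViewProjection);
    output.Color = input.Color;
    return output;
}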
For more information, this page has some sample code on generating and using morph targets on the GPU.
If you need some good search terms, look for "xna bones," "blendweight" and "morph targets."

Drawing Multiple 2d shapes in DirectX

I completed a tutorial on rendering 2D triangles in DirectX. Now I want to use my knowledge of rendering a single triangle to render multiple triangles, or for that matter multiple objects, on screen.
Should I create a list/stack/vector of vertexbuffers and input layouts and then draw each object? Or is there a better approach to this?
My process would be:
Setup directx, including vertex and pixel shaders
Create vertex buffers for each shape that has to be drawn on the screen and store them in an array.
Draw them to the render target (each frame)
Present the render target (each frame)
Please assume very rudimentary knowledge of DirectX and graphics programming in general when answering.
You don't need to create a vertex buffer for each shape; you can just create one that stores all the vertices of all the triangles, then create an index buffer that stores all the indices of all the shapes, and finally draw them using the index buffer.
I am not familiar with DX11, so I'll just list the links for D3D9 for your reference; I think the concepts are the same, just with some API changes.
Index Buffers(Direct3D 9)
Rendering from Vertex and Index buffers
If the triangles have the same shape, just with different positions or colors, you can consider using geometry instancing; it's a powerful way to render multiple copies of the same geometry.
Geometry Instancing
Efficiently Drawing Multiple Instances of Geometry(D3D9)
I don't know much about DirectX, but the general rule in GPU rendering is to use separate vertex and index buffers for every mesh.
That said, there is nothing stopping you from using a single vertex buffer with many index buffers; in fact, you may get some performance gains that way, especially for small meshes...
You'll need just one vertex buffer to do this, and then batch the shapes.
Here is what you can do: make an array/vector holding the triangle information, let's say (pseudo-code):
struct TriangleInfo{
..... texture;
vect2 pos;
vect2 dimension;
float rot;
}
then in your draw method:
for (int i = 0; i < vector.size(); i++) {
    TriangleInfo tInfo = vector[i];
    matrix worldMatrix = Transpose(matrix(tInfo.dimension) * matrix(tInfo.rot) * matrix(tInfo.pos));
    shaderParameters.worldMatrix = worldMatrix; // upload to the constant buffer
    ..
    ..
    dctx->PSSetShaderResources(0, 1, &tInfo.texture);
    dctx->Draw(4, 0); // 4 vertices: one quad drawn as a triangle strip
}
then in your vertex shader:
cbuffer cbParameters : register( b0 ) {
float4x4 worldMatrix;
};
VOut main(float4 position : POSITION, float4 texCoord : TEXCOORD)
{
    ....
    output.position = mul(position, worldMatrix);
    ...
}
Remember, this is all pseudo-code, but it should give you the idea. There is a problem, though, if you are planning to draw a lot of triangles, say 1000: this approach is probably not the best option. You should use DrawIndexed and modify the vertex positions of each triangle, or you can use DrawInstanced, which is simpler, so that all the information is sent in just one draw call, because calling Draw once per triangle is very heavy for large amounts (a rough instancing sketch follows below).
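A hedged sketch of the instanced path, assuming the per-instance world matrix comes in through a second, per-instance vertex stream (the struct, semantics and buffer layout below are illustrative only):

// Sketch: vertex shader reading a per-instance world matrix supplied as
// four float4 rows in an instance buffer. Names are assumptions.
cbuffer cbPerFrame : register(b0)
{
    float4x4 viewProjection;
};

struct VSInput
{
    float4 position  : POSITION;
    float2 texCoord  : TEXCOORD0;
    // per-instance data (second vertex stream, one set per instance)
    float4 worldRow0 : TEXCOORD1;
    float4 worldRow1 : TEXCOORD2;
    float4 worldRow2 : TEXCOORD3;
    float4 worldRow3 : TEXCOORD4;
};

struct VSOutput
{
    float4 position : SV_POSITION;
    float2 texCoord : TEXCOORD0;
};

VSOutput main(VSInput input)
{
    VSOutput output;
    float4x4 world = float4x4(input.worldRow0, input.worldRow1,
                              input.worldRow2, input.worldRow3);
    output.position = mul(mul(input.position, world), viewProjection);
    output.texCoord = input.texCoord;
    return output;
}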

OpenGL ES 2 (iOS) Morph / Animate between two sets of vertexes

I have two sets of vertexes used as a line strip:
Vertexes1
Vertexes2
It's important to know that these vertexes have previously unknown values, as they are dynamic.
I want to make an animated transition (morph) between these two. I have come up with two different ways of doing this:
Option 1:
Set a Time uniform in the vertex shader that goes from 0 to 1, where I can do something like this:
// Inside main() in the vertex shader
float originX = Position.x;
float destinationX = DestinationVertexPosition.x;
float interpolatedX = originX + (destinationX - originX) * Time;
gl_Position.x = interpolatedX;
As you probably see, this has one problem: How do I get the "DestinationVertexPosition" in there?
Option 2:
Make the interpolation calculation outside the vertex shader, where I loop through each vertex and create a third vertex set for the interpolated values, and use that to render:
// Pre render
// Use this vertex set to render
InterpolatedVertexes
for (unsigned int i = 0; i < vertexCount; i++) {
float originX = Vertexes1[i].x;
float destinationX = Vertexes2[i].x;
float interpolatedX = originX + (destinationX - originX) * Time;
InterpolatedVertexes[i].x = interpolatedX;
}
I have highly simplified these two code snippets, just to make the idea clear.
Now, from the two options, I feel like the first one is definitely better in terms of performance, given stuff happens at the shader level, AND I don't have to create a new set of vertexes each time the "Time" is updated.
So, now that the introduction to the problem has been covered, I would appreciate any of the following three things:
A discussion of better ways of achieving the desired results in OpenGL ES 2 (iOS).
A discussion about how Option 1 could be implemented properly, either by providing the "DestinationVertexPosition" or by modifying the idea somehow, to better achieve the same result.
A discussion about how Option 2 could be implemented.
In ES 2 you specify whatever attributes you like, so there's no problem with specifying attributes for both the origin and destination positions and doing the linear interpolation between them in the vertex shader. However, you really shouldn't do it component by component as your code suggests, because GPUs are vector processors and the mix GLSL function will do the linear blend you want. So e.g. (with obvious inefficiencies and assumptions):
int sourceAttribute = glGetAttribLocation(shader, "sourceVertex");
glVertexAttribPointer(sourceAttribute, 3, GL_FLOAT, GL_FALSE, 0, sourceLocations);
int destAttribute = glGetAttribLocation(shader, "destVertex");
glVertexAttribPointer(destAttribute, 3, GL_FLOAT, GL_FALSE, 0, destLocations);
And:
gl_Position = vec4(mix(sourceVertex, destVertex, Time), 1.0);
Your two options here have a trade off: supply twice the geometry once and interpolate between that, or supply only one set of geometry, but do so for each frame. You have to weigh geometry size vs. upload bandwidth.
Given my experience with iOS devices, I'd highly recommend option 1. Uploading new geometry on every frame can be extremely expensive on these devices.
If the vertices are constant, you can upload them once into one or two vertex buffer objects (VBOs) with the GL_STATIC_DRAW flag set. The PowerVR SGX series has hardware optimizations for dealing with static VBOs, so they are very fast to work with after the initial upload.
As far as how to upload two sets of vertices for use in a single shader, geometry is just another input attribute for your shader. You could have one, two, or more sets of vertices fed into a single vertex shader. You just define the attributes using code like
attribute vec3 startingPosition;
attribute vec3 endingPosition;
and interpolate between them using code like
vec3 finalPosition = startingPosition * (1.0 - fractionalProgress) + endingPosition * fractionalProgress;
Edit: Tommy points out the mix() operation, which I'd forgotten about and is a better way to do the above vertex interpolation.
In order to inform your shader program as to where to get the second set of vertices, you'd use pretty much the same glVertexAttribPointer() call for the second set of geometry as the first, only pointing to that VBO and attribute.
Note that you can perform this calculation as a vector, rather than breaking out all three components individually. This doesn't get you much with a highp default precision on current PowerVR SGX chips, but could be faster on future ones than doing this one component at a time.
You might also want to look into other techniques used for vertex skinning, because there might be other ways of animating vertices that don't require two full sets of vertices to be uploaded.
The one case that I've heard where option 2 (uploading new geometry on each frame) might be preferable is in specific cases where using the Accelerate framework to do vector manipulation of the geometry ends up being faster than doing the skinning on-GPU. I remember the Unity folks were talking about this once, but I can't remember if it was for really small or really large sets of geometry. Option 1 has been faster in all the cases I've worked with myself.

DirectX vertex rendering: unable to get texture to display correctly for trapezoids

I'm trying to create a 3D effect using vertex and index buffers in 2D (the z-coordinate is 0) using DirectX 7.
It's easier to explain with a picture:
The problem is that the lines are broken. They should be straight. To render this image it gets broken up into triangles and rendered using DrawIndexedPrimitiveVB. Obviously each of the triangles is skewed a little differently and I don't see why.
Am I missing something trivial here?
I'm not sure if this will help, but the source and destination quads are as follow:
SPoint4:= pBounds4(1, 1, W - 2, H - 2);
DPoint4:= Point4(
  ProjTo2dX(i, FlyDist + DeepDist, W), ProjTo2dY(0, FlyDist + DeepDist, H),
  ProjTo2dX(W - i, FlyDist, W),        ProjTo2dY(0, FlyDist, H),
  ProjTo2dX(W - i, FlyDist, W),        ProjTo2dY(H, FlyDist, H),
  ProjTo2dX(i, FlyDist + DeepDist, W), ProjTo2dY(H, FlyDist + DeepDist, H));
One way to map a square/rectangular texture to an arbitrary quad is projective interpolation. I've written an article showing how to do this (using vertex/pixel shaders).
The short version: you interpolate UVs across the quad in a way analogous to how GPUs do it for perspective-correct rendering (which, as you may have noticed, does not produce a visible seam between the two triangles). To do this, you need to calculate a false "depth" value for each vertex of the quad, and do the interpolation using homogeneous coordinates based on this "depth". Full details are in the article linked above.
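A minimal sketch of that idea in shader terms, assuming the per-vertex "depth" factor q has already been computed on the CPU and the UVs are passed pre-multiplied by it (the names are illustrative):

// Sketch: projective (perspective-correct) UV interpolation across a quad.
// Each vertex supplies uvq = (u * q, v * q, q), where q is the per-vertex
// "depth" factor computed on the CPU. Names are assumptions for illustration.
sampler2D QuadTextureSampler;

float4 PS(float3 uvq : TEXCOORD0) : COLOR0
{
    // Dividing by the interpolated q restores perspective-correct UVs,
    // which removes the visible kink along the quad's diagonal.
    float2 uv = uvq.xy / uvq.z;
    return tex2D(QuadTextureSampler, uv);
}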
You need to provide some perspective information to get proper texture coordinate interpolation on a trapezoid; see:
Problems with texture deformation in OpenGL ES 1.1 on quad made out of triangle strips
I found a solution, or at least a workaround. Instead of breaking the image up into 2 triangles, I break it up into many (several horizontal strips, each consisting of 2 triangles). In this case the image looks OK.
In this case the image is split into 10 strips (20 triangles).
I'll be happy for any comments or other solutions. Thank you.
