I've been studying shaders in HLSL for an XNA project (so no DX10/DX11), but almost all the resources I found were tutorials for effects where most of the work is done in the pixel shader. For instance, in lighting the vertex shader is only used to feed the pixel shader normals and other data like that.
I'd like to make some effects based on the vertex shader rather than the pixel shader, like deformation for instance. Could someone suggest a book or a website? Even the bare name of an effect would be useful, since then I could google it.
A lot of lighting, etc. is done in the pixel shader because the resulting image quality will be much better.
Imagine a sphere that is created by subdividing a cube or icosahedron. If lighting calculations are done in the vertex shader, the resulting values will be interpolated between face edges, which can lead to a flat or faceted appearance.
Things like blending and morphing are done in the vertex shader because that's where you can manipulate the vertices.
For example:
matrix World;
matrix View;
matrix Projection;
float WindStrength;
float3 WindDirection;
VertexPositionColor VS(VertexPositionColor input)
{
    VertexPositionColor output;

    // Move the vertex into world space first.
    float4 worldPosition = mul(input.Position, World);

    // Push the vertex in the wind direction, scaled by its height, so points
    // near the ground barely move while the tops lean over.
    worldPosition.xyz += WindDirection * WindStrength * worldPosition.y;

    // Then apply view and projection.
    output.Position = mul(worldPosition, mul(View, Projection));
    output.Color = input.Color;

    return output;
}
(Pseudo-ish code since I'm writing this in the SO post editor.)
In this case, I'm offsetting vertices that are "high" on the Y axis by a wind direction and strength. If I use this when rendering grass, for instance, the tops of the blades will lean in the direction of the wind, while vertices closer to the ground (ideally with a Y of zero) will not move at all. The math here should be tweaked a bit to account for really tall things that would otherwise get unacceptably large offsets, and the wind should not be applied uniformly to all blades, but it should be clear that the vertex shader is modifying the mesh in a non-uniform way to get an interesting effect.
No matter the effect you are trying to achieve - morphing, billboards (so the item you're drawing always faces the camera), etc., you're going to wind up passing some parameters into the VS that are then selectively applied to vertices as they pass through the pipeline.
A fairly trivial example would be "inflating" a model into a sphere, based on some parameter.
Pseudocode again,
matrix World;
matrix View;
matrix Projection;
float LerpFactor;
VertexPositionColor VS(VertexPositionColor input)
{
    VertexPositionColor output;

    // Normalizing the position projects the vertex onto the unit sphere;
    // lerp between the original shape and that sphere.
    float3 normal = normalize(input.Position.xyz);
    float3 position = lerp(input.Position.xyz, normal, LerpFactor);

    matrix wvp = mul(mul(World, View), Projection);
    output.Position = mul(float4(position, 1.0f), wvp);
    output.Color = input.Color;

    return output;
}
By stepping the uniform LerpFactor from 0 to 1 across a number of frames, your mesh (ideally a convex polyhedron) will gradually morph from its original shape to a sphere. Of course, you could include more explicit morph targets in your vertex declaration and morph between two model shapes, collapse it to a less complex version of a model, open the lid on a box (or completely unfold it), etc. The possibilities are endless.
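As a hedged sketch of the morph-target idea (the second position stream, the semantics, and the MorphWeight name are illustrative, not from any particular sample):
float4x4 WorldViewProjection;
float MorphWeight;   // 0 = base shape, 1 = target shape

struct VSInput
{
    float4 BasePosition   : POSITION0;   // base mesh
    float4 TargetPosition : POSITION1;   // morph target, same vertex order
    float4 Color          : COLOR0;
};

struct VSOutput
{
    float4 Position : POSITION;
    float4 Color    : COLOR0;
};

VSOutput MorphVS(VSInput input)
{
    VSOutput output;

    // Blend the two shapes per vertex, then project as usual.
    float4 morphed = lerp(input.BasePosition, input.TargetPosition, MorphWeight);
    output.Position = mul(morphed, WorldViewProjection);
    output.Color = input.Color;

    return output;
}
Stepping MorphWeight from 0 to 1 over a number of frames animates the transition, exactly as with the sphere example above.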
For more information, this page has some sample code on generating and using morph targets on the GPU.
If you need some good search terms, look for "xna bones," "blendweight" and "morph targets."
The Metal Shading Language includes a lot of mathematical functions, but it seems most of the code in the official Metal documentation just uses it to map vertices from pixel space to clip space, like:
RasterizerData out;
out.clipSpacePosition = vector_float4(0.0, 0.0, 0.0, 1.0);
float2 pixelSpacePosition = vertices[vertexID].position.xy;
vector_float2 viewportSize = vector_float2(*viewportSizePointer);
out.clipSpacePosition.xy = pixelSpacePosition / (viewportSize / 2.0);
out.color = vertices[vertexID].color;
return out;
Aside from GPGPU, where kernel functions are used for parallel computation, what can a vertex function do? Some examples would help. In a game, if all vertex positions are calculated by the CPU, why does the GPU still matter? What does a vertex function usually do?
Vertex shaders compute properties for vertices. That's their point. In addition to vertex positions, they also calculate lighting normals at each vertex. And potentially texture coordinates. And various material properties used by lighting and shading routines. Then, in the fragment processing stage, those values are interpolated and sent to the fragment shader for each fragment.
In general, you don't modify vertices on the CPU. In a game, you'd usually load them from a file into main memory, put them into a buffer, and send them to the GPU. Once they're on the GPU, you pass them to the vertex shader each frame along with the model, view, and projection matrices. A single buffer containing the vertices of, say, a tree or a car's wheel might be used multiple times; each time, all the CPU sends is the model, view, and projection matrices. The model matrix is used in the vertex shader to position and scale the vertices in world space. The view matrix then moves and rotates the world so that the virtual camera is at the origin, facing the appropriate way. Finally, the projection matrix transforms the vertices into clip space.
There are other things a vertex shader can do, too. You can pass in vertices that are in a grid in the x-y plane, for example. Then in your vertex shader you can sample a texture and use that to generate the z-value. This gives you a way to change the geometry using a height map.
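A minimal sketch of that idea, written in HLSL-style syntax for brevity (the Metal version is analogous, and every name here is made up): the vertex shader reads a height map and displaces each grid vertex.
sampler2D HeightMapSampler;
float4x4 WorldViewProjection;
float HeightScale;

struct VSInput
{
    float3 Position : POSITION0;   // flat grid in the x-y plane
    float2 UV       : TEXCOORD0;
};

float4 TerrainVS(VSInput input) : POSITION
{
    // Vertex texture fetch: read the height map at this grid point
    // (the mip level must be chosen explicitly in a vertex shader).
    float height = tex2Dlod(HeightMapSampler, float4(input.UV, 0, 0)).r;

    float4 displaced = float4(input.Position.xy, height * HeightScale, 1.0f);
    return mul(displaced, WorldViewProjection);
}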
On older hardware (and some lower-end mobile hardware) it was expensive to do calculations on a texture coordinate before using it to sample from a texture because you lose some cache coherency. For example, if you wanted to sample several pixels in a column, you might loop over them adding an offset to the current texture coordinate and then sampling with the result. One trick was to do the calculation on the texture coordinates in the vertex shader and have them automatically interpolated before being sent to the fragment shader, then doing a normal look-up in the fragment shader. (I don't think this is an optimization on modern hardware, but it was a big win on some older models.)
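A hedged sketch of that trick, again in HLSL-style syntax with made-up names: the vertex shader emits the offset coordinates for a three-tap vertical filter, they are interpolated for free, and the pixel shader only does plain, non-dependent lookups.
sampler2D SourceSampler;
float2 TexelSize;   // (1 / texture width, 1 / texture height)

struct BlurVSOutput
{
    float4 Position  : POSITION;
    float2 TapAbove  : TEXCOORD0;
    float2 TapCenter : TEXCOORD1;
    float2 TapBelow  : TEXCOORD2;
};

// Precompute the three taps per vertex so the pixel shader does no
// texture-coordinate math at all.
BlurVSOutput BlurVS(float4 position : POSITION0, float2 uv : TEXCOORD0)
{
    BlurVSOutput output;
    output.Position  = position;   // assume the quad is already in clip space
    output.TapAbove  = uv - float2(0.0f, TexelSize.y);
    output.TapCenter = uv;
    output.TapBelow  = uv + float2(0.0f, TexelSize.y);
    return output;
}

float4 BlurPS(float2 tapAbove  : TEXCOORD0,
              float2 tapCenter : TEXCOORD1,
              float2 tapBelow  : TEXCOORD2) : COLOR
{
    // Plain lookups with coordinates that arrive fully interpolated.
    return (tex2D(SourceSampler, tapAbove) +
            tex2D(SourceSampler, tapCenter) +
            tex2D(SourceSampler, tapBelow)) / 3.0f;
}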
First, I'll address this statement:
In a game, if all vertex positions are calculated by the CPU, why does the GPU still matter? What does a vertex function usually do?
I don't believe I've seen anyone calculating vertex positions on the CPU for meshes that will later be rendered on the GPU. It's slow: you would need to get all that data from the CPU to the GPU (which means copying it across a bus if you have a dedicated GPU), and it's just not that flexible. There is much more than vertex positions required to produce any meaningful image, and calculating all of it on the CPU is wasteful, since the CPU doesn't care about most of this data anyway.
The sole purpose of the vertex shader is to provide the rasterizer with primitives in clip space. But there are some other uses, mostly tricks based on different GPU features.
For example, vertex shaders can write data out to buffers, so you can stream out transformed geometry if you don't want to transform it again in a later vertex stage, for example when multi-pass rendering uses the same geometry in more than one pass.
You can also use a vertex shader to output just one triangle that covers the whole screen, so that the fragment shader gets called once per pixel for the whole screen (though, honestly, you are better off using compute (kernel) shaders for this).
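For instance, a hedged sketch of the full-screen-triangle trick in HLSL-style syntax (Metal would key off the vertex ID the same way; no vertex buffer is bound, and you issue a non-indexed draw of three vertices):
// One oversized triangle generated purely from the vertex ID.
float4 FullscreenVS(uint id : SV_VertexID) : SV_Position
{
    // ids 0, 1, 2 map to uv (0,0), (2,0), (0,2), which covers the screen.
    float2 uv = float2((id << 1) & 2, id & 2);
    return float4(uv * float2(2.0f, -2.0f) + float2(-1.0f, 1.0f), 0.0f, 1.0f);
}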
You can also write data out from the vertex shader without generating any primitives, by emitting degenerate triangles. You can use this to generate bounding boxes: using atomic operations you update min/max positions and read them back at a later stage. This is useful for light culling, frustum culling, tile-based processing and many other things.
But, and it's a BIG BUT, you can do most of this in a compute shader without making the GPU run the whole vertex-assembly pipeline. That means you can do full-screen effects using just a compute shader (instead of a vertex and fragment shader and the many pipeline stages in between, such as the rasterizer, primitive culling, depth testing and output merging). You can calculate bounding boxes and do light or frustum culling in a compute shader as well.
There are reasons to fire up the whole rendering pipeline instead of just running a compute shader, for example if you will actually use the triangles output by the vertex shader, or if you aren't sure how primitives are laid out in memory and need the input assembler to do the heavy lifting of assembling them. But, getting back to your point, almost all of the reasonable uses of a vertex shader involve outputting primitives in clip space. If you aren't using the resulting primitives, it's probably best to stick to compute shaders.
I wrote two simple WebGL demos which use a 512x512 image as a texture, but the result is not what I want. I know the solution is to use projective texture mapping (or is there another solution?), but I have no idea how to implement it in my simple demos. Can anyone help?
The results are as follows (both of them are incorrect):
Codes of demos are here: https://github.com/jiazheng/WebGL-Learning/tree/master/texture
Note: neither the model nor the texture can be modified in my case.
In order to get perspective-correct texture mapping, you must actually be using perspective. That is, instead of narrowing the top of your polygon along the x axis, move it backwards along the z axis, and apply a standard perspective projection matrix.
I'm a little hazy on the details myself, but my understanding is that the way the perspective matrix maps the z coordinate into the w coordinate is the key to getting the GPU to interpolate along the surface “correctly”.
If you have already-perspective-warped 2D geometry, then you will have to implement some method of restoring it to 3D data, computing appropriate z values. There is no way in WebGL to get a perspective quadrilateral, because the primitives are triangles and there is not enough information in three points to define the texture mapping you're looking for unambiguously — your code must use the four points to work out the corresponding depths. Unfortunately, I don't have enough grasp of the math to advise you on the details.
You must specify vec4 texture coordinates, not vec2. The fourth component of each vec4 is the homogeneous w that, when divided into x and y, produces your desired coordinate. This in turn lets the hardware's perspective-correct division give you a non-affine mapping within the triangle, provided your numbers are correct. Now, if you use a projection matrix to transform a vec4 with w = 1 in your vertex shader, you should get the correct vec4 values, ready for perspective correction, going into setup and rasterization for your fragment shader. If this is unclear, you should seek out tutorials on projective texture transformation and homogeneous coordinates in projection.
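As a hedged illustration in HLSL-style syntax (GLSL's texture2DProj is the WebGL-era analogue, and the names here are made up): the interpolated 4-component coordinate is divided by its last component at each pixel, which is what makes the mapping non-affine.
sampler2D DiffuseSampler;

// The vertex shader outputs a full 4-component texture coordinate whose
// last component carries the projective (homogeneous) term.
float4 ProjectivePS(float4 texCoord : TEXCOORD0) : COLOR
{
    // Equivalent to tex2D(DiffuseSampler, texCoord.xy / texCoord.w).
    return tex2Dproj(DiffuseSampler, texCoord);
}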
I'm trying to achieve terrain texturing using a 3D texture that consists of several layers of material, and to make the blending between materials smooth.
Maybe my illustration will explain it better:
Just imagine that each color is a cool terrain texture, like grass, stone, etc.
I want to get them properly blended, but with the current approach I get all the textures in between the requested ones in addition to the textures I actually want to appear (which seems logical because, as I've read, a 3D texture is treated as a three-dimensional array rather than as pillars of textures).
The current (and obviously foolish) approach is as simple as pie (the 'current' result is rendered using point interpolation; the desired result is hand-painted):
Vertices:
Vertex 1: Position = Vector3.Zero, UVW = Vector3.Zero
Vertex 2: Position = Vector3(0, 1, 0), UVW = Vector3(0, 1, 0.75f)
Vertex 3: Position = Vector3(0, 0, 1), UVW = Vector3(1, 0, 1)
As you can see, the first vertex of the triangle uses the first material (the red one), the second vertex uses the third material (the blue one), and the third vertex uses the fourth and last material (the yellow one).
This is how it's done in the pixel shader (UVW is passed through without changes):
float3 texColor = tex3D(ColorTextureSampler, input.UVW);
return float4(texColor, 1);
The reason for this choice is my terrain structure. The terrain is generated from voxels (each voxel holds a material ID) using marching cubes. The vertices are 'welded' because the meshes are pretty big and I don't want to make every triangle individual (though I can still do that if there is no way to solve this with shared vertices).
I recently had the idea of storing, in each vertex, the material IDs of the other two vertices of the triangle along with their blend factors (I would have a float2 UV pair, a float3 for the material IDs, and a float3 for the blend factor of each material ID), but I don't see any way to accomplish this without breaking my mesh into individual triangles.
Any help would be greatly appreciated. I'm targeting SlimDX with C# and the Direct3D 9 API. Thanks for reading.
P.S.: I'm sorry if I made some mistakes in this text, English is not my native language.
Your ColorTextureSampler is probably using point filtering (D3DTEXF_POINT). Use either D3DTEXF_LINEAR or D3DTEXF_ANISOTROPIC to achieve the desired interpolation effect.
I'm not very familiar with SlimDX 9, but you've got the idea.
BTW, nice illustration =)
Update 1
The result in your comment below looks consistent with your code.
It looks like, to get the desired effect, you will have to change your overall approach.
It is not a complete solution for you, but here is how we do it for plain 3D terrains:
Every vertex has one pair (u, v) of texture coordinates.
You have n textures to sample from (T1, T2, T3, ..., Tn) that represent different layers of terrain: sand, grass, rock, etc.
You have mask texture(s) with n channels in total that store a blending coefficient for each texture T: the R channel holds the alpha for T1, the G channel for T2, B for T3, and so on.
In the pixel shader you sample your layer textures as usual and get color values float4 val1, val2, val3, ...
Then you sample the mask texture(s) for the corresponding blend coefficients and get float blend1, blend2, blend3, ...
Then you apply some kind of blending algorithm, for example simple linear interpolation (a combined sketch follows after this snippet):
float4 terrainColor = lerp( val1, val2, blend1 );
terrainColor = lerp( terrainColor, val3, blend2);
terrainColor = lerp( terrainColor, ..., blendN );
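Putting the steps above together, here is a minimal HLSL pixel-shader sketch (the sampler names, the four layers, and the single RGBA mask are assumptions for illustration; how you assign mask channels to layers is a convention you choose):
sampler2D MaskSampler;     // blend weights: R blends layer 2 in, G layer 3, B layer 4
sampler2D LayerSampler1;   // e.g. sand
sampler2D LayerSampler2;   // e.g. grass
sampler2D LayerSampler3;   // e.g. rock
sampler2D LayerSampler4;   // e.g. snow

float4 TerrainPS(float2 uv : TEXCOORD0) : COLOR
{
    // Sample every layer and the mask at the same coordinates.
    float4 val1  = tex2D(LayerSampler1, uv);
    float4 val2  = tex2D(LayerSampler2, uv);
    float4 val3  = tex2D(LayerSampler3, uv);
    float4 val4  = tex2D(LayerSampler4, uv);
    float3 blend = tex2D(MaskSampler, uv).rgb;

    // Start from the base layer and lerp each additional layer in.
    float4 terrainColor = lerp(val1, val2, blend.r);
    terrainColor = lerp(terrainColor, val3, blend.g);
    terrainColor = lerp(terrainColor, val4, blend.b);
    return terrainColor;
}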
For example, if your T1 is grass and you have a big grass field in the middle of your map, you will have a big red area in the middle of the mask.
This approach is a bit slow because of all the texture sampling, but it is simple to implement, gives good visual results, and is the most flexible. You can use not only a mask as blend coefficients, but any values: for example height (more snow on mountain peaks, rock in the mountains, dirt on low ground), slope (rock on steep slopes, grass on flat areas), even fixed values, etc. Or mix all of that together. You can also vary the blending: use the built-in lerp or something more complicated (warning: this example is silly):
float4 terrainColor = val1 * val2 * blend1 + val2 * val3 * blend2;
terrainColor = saturate(terrainColor);
Playing with the blend algorithm is the most interesting part of this approach, and you can find many, many techniques on Google.
Not sure, but hope it helps!
Happy coding! =)
I completed a tutorial on rendering 2D triangles in DirectX. Now I want to use my knowledge of rendering a single triangle to render multiple triangles, or for that matter, multiple objects on screen.
Should I create a list/stack/vector of vertexbuffers and input layouts and then draw each object? Or is there a better approach to this?
My process would be:
Setup directx, including vertex and pixel shaders
Create vertex buffers for each shape that has to be drawn on the screen and store them in an array.
Draw them to the render target (each frame)
Present the render target (each frame)
Please assume very rudimentary knowledge of DirectX and graphics programming in general when answering.
You don't need to create a vertex buffer for each shape; you can create one that stores all the vertices of all the triangles, then create an index buffer that stores all the indices of all the shapes, and finally draw them with the index buffer.
I am not familiar with DX11, so I'll just list the links for D3D9 for your reference; I think the concepts are the same, just with some API changes.
Index Buffers (Direct3D 9)
Rendering from Vertex and Index buffers
If the triangles have the same shape, just with different positions or colors, you can consider using geometry instancing; it's a powerful way to render multiple copies of the same geometry (see the sketch after the links below).
Geometry Instancing
Efficiently Drawing Multiple Instances of Geometry (D3D9)
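As a hedged sketch of the shader side of instancing (D3D9-style HLSL with a per-instance world matrix delivered through a second vertex stream; every name here is illustrative):
float4x4 ViewProjection;

struct VSInput
{
    float3 Position  : POSITION0;   // per-vertex data
    float4 Color     : COLOR0;
    float4 WorldRow0 : TEXCOORD1;   // per-instance world matrix, one row per
    float4 WorldRow1 : TEXCOORD2;   // TEXCOORD, advanced once per instance
    float4 WorldRow2 : TEXCOORD3;
    float4 WorldRow3 : TEXCOORD4;
};

struct VSOutput
{
    float4 Position : POSITION;
    float4 Color    : COLOR0;
};

VSOutput InstancedVS(VSInput input)
{
    VSOutput output;

    // Rebuild this instance's world matrix and transform as usual.
    float4x4 world = float4x4(input.WorldRow0, input.WorldRow1,
                              input.WorldRow2, input.WorldRow3);
    float4 worldPos = mul(float4(input.Position, 1.0f), world);

    output.Position = mul(worldPos, ViewProjection);
    output.Color = input.Color;
    return output;
}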
I don't know much about DirectX, but the general rule when rendering on a GPU is to use separate vertex and index buffers for every mesh.
That said, there is nothing stopping you from using a single vertex buffer with many index buffers; in fact, you may get some performance gains from it, especially for small meshes...
You'll need just one vertex buffer to do this, and then batch the triangles.
Here is what you can do: make an array/vector holding the triangle information, say (pseudocode):
struct TriangleInfo {
    .....  texture;      // e.g. a shader resource view, used by PSSetShaderResources below
    vect2  pos;
    vect2  dimension;
    float  rot;
};
Then in your draw method:
for (int i = 0; i < vector.size(); i++) {
    TriangleInfo tInfo = vector[i];

    // Build this triangle's world matrix from its stored transform.
    matrix worldMatrix = Transpose(matrix(tInfo.dimension) * matrix(tInfo.rot) * matrix(tInfo.pos));
    shaderParameters.worldMatrix = worldMatrix;   // upload to the constant buffer
    ..
    ..
    dctx->PSSetShaderResources(0, 1, &tInfo.texture);
    dctx->Draw(4, 0);   // vertex count, start vertex
}
Then in your vertex shader:
cbuffer cbParameters : register(b0)
{
    float4x4 worldMatrix;
};

VOut main(float4 position : POSITION, float4 texCoord : TEXCOORD)
{
    ....
    output.position = mul(position, worldMatrix);
    ...
}
Remember, this is all pseudocode, but it should give you the idea. There is a problem, though, if you are planning to draw a lot of triangles, say 1000: this may not be the best option. You should use DrawIndexed and modify the vertex positions of each triangle, or you can use DrawInstanced, which is simpler, to send all the information in just one draw call, because calling Draw once per triangle is very heavy for large amounts.
We have an iOS drawing app. Currently, the drawing is implemented with OpenGL ES 1.1. We use some algorithms to smooth the lines, such as Bezier curves. So, when touch events occur, we derive a set of points from the touch-event points (based on those algorithms) and draw them. We also use a brush texture for the points to get a more natural look.
I wonder if it's possible to implement these algorithms in OpenGL ES 2.0 shaders. Something like calling an OpenGL function to draw lines made of touch points, and getting a smoothed, brush-textured curve rendered as output.
Points P0, P1, ..., P4 here are touch events, and the points on the red curve are the output points, with the step for T chosen so that the distance between two neighboring points on the curve is not greater than 1 pixel.
And here is the link with Bezier algorithm explanation:
Bézier curve - Wikipedia, the free encyclopedia
Any help is much appreciated.
Thanks.
You cannot generate new vertices inside the vertex shader (you can do that in the geometry shader, which ES doesn't have). The number of output vertices is always the same as the number of input vertices; you can only change their positions (and other attributes, of course).
So you would have to draw a line strip made of enough vertices to guarantee a smooth enough curve. What you can do is always submit the same line strip, using the curve parameter values T as 1D vertex positions. In the shader you then use this input position (the parameter value) to compute the actual 2D/3D position on the curve using the De Casteljau algorithm (or whatever you like), with the points P0 to P4 passed into the shader as constants (uniform variables in GLSL terms).
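A hedged sketch of that approach, written in HLSL-style syntax for brevity (the GLSL ES version is structurally the same, and every name here is made up):
float2 ControlPoints[5];          // P0..P4, uploaded as shader constants
float4x4 ModelViewProjection;

// The only per-vertex input is the curve parameter t in [0, 1].
float4 BezierVS(float t : POSITION0) : POSITION
{
    // Copy the control points so we can collapse them in place.
    float2 p[5];
    for (int i = 0; i < 5; ++i)
        p[i] = ControlPoints[i];

    // De Casteljau: repeatedly lerp neighbouring points until one remains.
    for (int level = 4; level > 0; --level)
        for (int j = 0; j < level; ++j)
            p[j] = lerp(p[j], p[j + 1], t);

    return mul(float4(p[0], 0.0f, 1.0f), ModelViewProjection);
}
Each vertex of the static line strip carries only its parameter t, so changing the curve is just a matter of updating the five uniform control points.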
But I'm not sure this would really buy you anything over just computing those points on the CPU and putting them into a dynamic VBO. What you save is copying the curve points from the CPU to the GPU and the computation on the CPU, but on the other hand your vertex shader becomes much more complex. You would have to evaluate which approach is better. If you need to compute the curve points each frame (because the control points change each frame) and the curve is fairly high-detail, it might not be a bad idea. Otherwise I don't think it really pays off. And your shader won't adapt that easily to a changing number of control points or curve degree at runtime.
But once again, you cannot put in 5 control points and generate N curve points on the GPU. The vertex shader always works on a single vertex and produces a single vertex, just as the fragment shader always works on a single fragment (call it a pixel, though it isn't one yet) and produces a single (or no) fragment.