I am trying to do normal mapping on a flat surface, but I can't get any noticeable result :(
My shader
http://pastebin.com/raEvBY92
To my eye the shader looks fine, but it doesn't render the desired result ( https://dl.dropbox.com/u/47585151/sss/final.png ).
All values are passed. Normals, tangents and binormals are computed correctly when I create the grid; I have checked that!
Here are screenshots of the ambient, diffuse, specular and bump map passes.
https://dl.dropbox.com/u/47585151/sss/ambient.png
https://dl.dropbox.com/u/47585151/sss/bumpMap.png
https://dl.dropbox.com/u/47585151/sss/diffuse.png
https://dl.dropbox.com/u/47585151/sss/specular.png
They seem to be legit...
The bump map, which is the result of bump = normalize(mul(bump, input.WorldToTangentSpace)), definitely looks correct, but it doesn't have any impact on the end result.
Maybe I don't understand the idea of the different spaces, or I got the order of the matrix multiplication wrong. By world matrix I mean the position and orientation of the grid, which never changes and is the identity matrix. Only the view matrix changes; it represents the camera's position and orientation.
Where is my mistake?
First of all, when you're having a problem like this, it's a good idea to comment out everything that isn't related to it. The whole lighting computation with ambient, specular or even the diffuse texture isn't interesting at this moment. With
output.color = float4(diffuse, 1);
you can focus on your problem and see clearly what changes when you change something in your code.
If your quad lies in the xy-plane with z = 0, you should change your light vector; as it is, it won't work. For testing purposes I generally use a diagonal vector (like normalize(1, -1, 1)) to avoid a direction parallel to my object.
When I look over your code, it seems to me that you didn't quite get the idea of the different spaces ;) The basic idea of normal mapping is to give additional information about the surface with additional normals. They are stored in a normal map, i.e. encoded to RGB, where B is usually the up vector. Now you must fit them into your 3D world, because they aren't in world space but in tangent space (tangent space = the surface space of the triangle). Because this transformation is more complex, the computation usually goes the other way round: using the tangent, binormal and normal as a matrix, you transform your light vector and view vector from world space into tangent space (you are mapping the world-space xyz axes onto tangent, binormal and normal; the order can be wrong, I usually swap them until it works ;) ). With your line
bump = normalize(mul(bump, input.WorldToTangentSpace));
you try to transform your normal, which is already in tangent space, into tangent space again. Change this so that you transform the view vector and the light vector into tangent space in the vertex shader and pass the transformed vectors to the pixel shader. There you can do the lighting computation in tangent space. Maybe read an additional normal mapping tutorial, then you will get this working! :)
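Something along these lines should do it (only a rough sketch: the matrix names World, View and Projection, the CameraPosition constant, the sampler name and the semantics are placeholders for whatever your own shader uses, and it assumes the normal map stores tangent-space normals encoded into the 0..1 range):

float4x4 World;
float4x4 View;
float4x4 Projection;
float3   LightDirection;   // direction the light shines in (world space)
float3   CameraPosition;   // camera position in world space
sampler  NormalSampler;    // sampler for the normal map

struct VS_OUTPUT
{
    float4 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
    float3 LightTS  : TEXCOORD1;   // light direction in tangent space
    float3 ViewTS   : TEXCOORD2;   // view direction in tangent space
};

VS_OUTPUT VS(float4 position : POSITION0, float2 texCoord : TEXCOORD0,
             float3 normal : NORMAL0, float3 tangent : TANGENT0, float3 binormal : BINORMAL0)
{
    VS_OUTPUT output;

    float4 worldPos = mul(position, World);
    output.Position = mul(mul(worldPos, View), Projection);
    output.TexCoord = texCoord;

    // Rows are the tangent-space axes; this matrix takes world-space vectors into tangent space.
    // (World is identity in your case, otherwise transform normal/tangent/binormal by it first.)
    float3x3 worldToTangent = float3x3(tangent, binormal, normal);

    output.LightTS = mul(worldToTangent, -LightDirection);
    output.ViewTS  = mul(worldToTangent, CameraPosition - worldPos.xyz);

    return output;
}

float4 PS(VS_OUTPUT input) : COLOR0
{
    // Decode the tangent-space normal from the normal map (0..1 -> -1..1).
    float3 bump = normalize(tex2D(NormalSampler, input.TexCoord).rgb * 2.0f - 1.0f);

    float3 L = normalize(input.LightTS);
    float diffuse = saturate(dot(bump, L));

    // Only the diffuse term for now, as suggested above.
    return float4(diffuse, diffuse, diffuse, 1.0f);
}

Note that the float3x3 built from tangent, binormal and normal has those vectors as rows, so mul(worldToTangent, v) maps a world-space vector v into tangent space; if the lighting still looks wrong, that row order (and the sign of LightDirection) is the first thing to experiment with, as mentioned above.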
PS: Once you're finished with the basic lighting, your specular computation seems to have some errors, too.
float3 reflect = normalize(2*diffuse*bump-LightDirection);
This line looks like it's meant to compute the halfway vector, but for that you need the view vector, and you shouldn't use a lighting strength like diffuse in it. A tutorial can explain this in more detail than I can here.
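Roughly, using the tangent-space light and view vectors from the sketch above (SpecularPower being an assumed material constant):

float3 N = bump;                      // tangent-space normal from the normal map
float3 L = normalize(input.LightTS);  // direction towards the light (tangent space)
float3 V = normalize(input.ViewTS);   // direction towards the camera (tangent space)

float3 H = normalize(L + V);                               // halfway vector
float specular = pow(saturate(dot(N, H)), SpecularPower);  // Blinn-Phong specular term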
I am really having a problem with this.
I have a polygon (a quad) which can be any shape. When my mouse is inside the polygon I need to find the x,y values of where my mouse is (inside the quad) as though the polygon were a perfect square. Further explanation: I have a 32x32 texture applied to the polygon and I need to know the x,y of the texture pixel that the mouse is over.
I have some code that works for most shapes, but it breaks if, for instance, TR.y is less than TL.y.
I have some pretty simple code that tests whether the cursor is inside the polygon (via two triangle tests), but I cannot figure out how to use this to generate an x,y of a virtual square projection.
This problem is killing me. What is the name of the operation I am trying to perform? Does anyone know of an explanation where the equations are presented in code form (any kind of code) rather than just mathematical notation? Any kind of help would be so appreciated.
I am on the verge of doing a second render with a specially formatted texture (each pixel having a unique value) so that I can just colour-test to get an approximate x,y match (precision is something that can be compromised here without causing too much trouble), but then I will have to work around the DX lib's attempt to blend and smooth the special texture as it is warped to fill the quad.
Edit: Code that works for many quad shapes
It depends on the method, i.e. how the texture is drawn onto this quad.
If it uses a perspective transform Square => Quad, you have to use the matrix of the inverse transform Quad => Square. Short article
For the linear interpolation approach, see this page.
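In the (bi)linear case the mapping can be inverted directly: with the quad corners a, b, c, d given in order around the quad (a maps to texture coordinate (0,0), b to (1,0), c to (1,1), d to (0,1)) and the mouse point p, you solve a quadratic for v and then recover u. A rough sketch, written here in HLSL-style code to match the rest of the page; the same math drops straight into C# or C++:

float cross2d(float2 a, float2 b) { return a.x * b.y - a.y * b.x; }

// Maps p inside the quad a-b-c-d back to (u, v) in [0,1]^2; returns (-1,-1) if p is outside.
float2 invBilinear(float2 p, float2 a, float2 b, float2 c, float2 d)
{
    float2 e = b - a;
    float2 f = d - a;
    float2 g = a - b + c - d;
    float2 h = p - a;

    // Coefficients of k2*v^2 + k1*v + k0 = 0.
    float k2 = cross2d(g, f);
    float k1 = cross2d(e, f) + cross2d(h, g);
    float k0 = cross2d(h, e);

    float v;
    if (abs(k2) < 1e-6)                       // opposite edges parallel: linear in v
    {
        v = -k0 / k1;
    }
    else                                      // general case: pick the root in [0,1]
    {
        float w = k1 * k1 - 4.0 * k0 * k2;
        if (w < 0.0) return float2(-1.0, -1.0);
        w = sqrt(w);
        v = (-k1 - w) / (2.0 * k2);
        if (v < 0.0 || v > 1.0) v = (-k1 + w) / (2.0 * k2);
    }

    float u = (h.x - f.x * v) / (e.x + g.x * v);   // use the y components instead if this denominator is ~0

    if (u < 0.0 || u > 1.0 || v < 0.0 || v > 1.0) return float2(-1.0, -1.0);
    return float2(u, v);
}

For your 32x32 texture, the texel under the mouse is then floor(32 * invBilinear(mouse, TL, TR, BR, BL)), with the corners passed in the same order you use for the texture coordinates.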
I'm trying out D3D11 and struggling to render a model correctly.
Here's my problem: while my world and view transformations seem right, my perspective transformation seems to be wrong.
When I first rendered a model, something felt wrong, so I tried rotating the model to see what it was.
Then I noticed that parts of the model closer to the camera appear smaller, and farther parts appear larger.
If it's relevant, I'm using assimp to load my model, and here's how I do it.
mScene = aiImportFile(filename.c_str(), aiProcessPreset_TargetRealtime_MaxQuality | aiProcess_GenSmoothNormals | aiProcess_ConvertToLeftHanded | aiProcess_TransformUVCoords);
And here's how I build my projection matrix.
mProjection = XMMatrixPerspectiveFovLH(XMConvertToRadians(45.0f), 800.0f / 600.0f, -1.0f, 1.0f);
I fiddled with the nearZ and farZ arguments of XMMatrixPerspectiveFovLH.
I tried increasing farZ gradually every frame, and then realized that as the value increases, the far clipping plane comes closer and closer to the camera, which is exactly the opposite of what I thought would happen.
In the vertex shader, here's what I'm doing with vertex positions. It's pretty basic.
Out.Position = mul(mul(mul(position, World), CameraView), CameraProjection);
The model renders correctly in terms of position, scaling, rotation, and view-position.
So I'm assuming that world and view transforms are fine, and the problem is about the projection matrix.
To summarize, I'm thinking that Z values of projected vertices are, somehow, "flipped".
I've Googled many, many times, to no avail.
If someone could point out what I could be doing wrong, it would be very much appreciated.
If you need to see some of my code to help, please tell me.
Your near and far plane distances should be positive.
Use something like:
mProjection = XMMatrixPerspectiveFovLH(XMConvertToRadians(45.0f), 800.0f / 600.0f, 0.1f, 10.0f);
I'll make a note to consider adding assert(NearZ > 0.f); assert(NearZ < FarZ); to those DirectXMath functions and to make sure that's explicit in the docs: distance means a positive number here.
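As a rough back-of-the-envelope check of why the original values misbehave: the standard left-handed perspective projection that XMMatrixPerspectiveFovLH builds maps a view-space depth z to
z' = f / (f - n) * (1 - n / z)
With n = 0.1 and f = 10 this rises monotonically from 0 at the near plane to 1 at the far plane, as expected. With n = -1 and f = 1 it becomes z' = 0.5 * (1 + 1/z), which decreases as z grows, so geometry farther from the camera gets smaller depth values and wins the depth test. That is exactly the "flipped" Z behaviour described above.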
PS: You should take a look at DirectX Tool Kit
How do I make this curve a straight line of the same length (basically by unbending it)? I guess I need to apply some kind of non-linear transformation, but I am not sure which transformation will work best here.
Please note that if I try taking its projection on a straight line, I will end up with a shorter line.
Please provide your suggestions.
I think you can do connected-component analysis to get every point (pixel) of the curve and then measure its length: summing the distances between consecutive pixels along the curve (counting diagonal steps as sqrt(2) rather than 1) gives a good approximation of the arc length, and that is the length of the straightened line. Its orientation can be taken from the line connecting the curve's two endpoints.
I hope someone can point me to how I can solve my issue. I have 6000 X-rays where I have to measure the angle between bones.
My strategy is the following: if I can somehow draw a line1 through the long axis of bone1 and a line2 through the long axis of bone2, then I can simply measure the angle between the two lines.
So how can I find the axis in the first place? Is it possible to do it this way?
(It is an X-ray picture.) Let's say 1 cm from the top of the picture, we scan that row for the first pixel that turns white (the first edge of the bone); here we have a dot A1. Then we continue scanning until we find the first pixel that turns black (the second edge of the bone); this is dot A2. We draw a line Y1 between (A1, A2).
We do the same procedure further down, let's say 10 cm from the top, and we then have another line Y2 between (B1, B2). A line that goes from the middle of Y1 to the middle of Y2 will be the axis of the bone.
I have already managed to play with the threshold and extract an edge, to make it easier to draw the lines.
Does it make sense?
Please, can it be done? Any idea how?
Any help will be appreciated, thank you!
Here's an idea:
Maybe if you downsample the images to get fewer artifacts and/or apply some mathematical morphology (http://en.wikipedia.org/wiki/Mathematical_morphology) to reduce the noise, you can turn the bones into more line-shaped, separated figures.
Apply some threshold so you have black/white binary pictures. Use math to find a point in each of the two shapes and then try to fit each of them to a rectangle or an oval. These will give you the axes you are looking for, and then you can measure the angle.
This is too general a question. Images would always be appreciated! I guess you have 6000 X-rays, each producing a grayscale image of the bones. In this case the general idea would be to:
1. Find a good binary segmentation of the bones in 3D.
2. Find a good skeletonization of the two bones (also look at this).
3. Replace the main skeleton of each bone by the line segment that best approximates it and measure the angle (in 3D) between them; see the snippet right after this list.
4. If these are two bones in the body, there is usually a limit to the degrees of freedom of two connected bones, so it would be good to validate the result with respect to this reference.
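For the angle measurement in step 3: once each skeleton has been reduced to a direction vector, say d1 and d2 (for example from a least-squares line fit; these names are just placeholders), the angle between the bones is the angle between those directions. In the same HLSL-style notation used elsewhere on this page:

float angleBetweenBones = acos(abs(dot(normalize(d1), normalize(d2))));   // in radians

The abs() makes the result independent of which way along its bone each fitted direction happens to point.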
Tracing the line in real time might not be the best approach in terms of accuracy, but I guess this is obvious.
This could give an idea for the full human pose.
If I have the vertex normals of a normal scene showing up as colours in a texture in world space, is there a way to calculate edges efficiently, or is it mathematically impossible? I know it's possible to calculate edges if you have the normals in view space, but I'm not sure whether it is possible if you have the normals in world space (I've been trying to figure out a way for the past hour...).
I'm using DirectX with HLSL.
if (dot(normalA, normalB) < cos(maxAngleDiff))
then you have an edge, i.e. the two neighbouring normals differ by more than maxAngleDiff. It won't be perfect, but it will definitely find edges that other methods won't.
Or am I misunderstanding the problem?
Edit: how about simply high-pass filtering the image?
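A rough post-process sketch of the normal-comparison idea above, assuming the world-space normals are available as a screen-sized texture; the sampler name, TexelSize and the 0..1 normal encoding are assumptions to adapt to your own setup:

sampler NormalMap;                        // screen-sized texture holding world-space normals
float2 TexelSize;                         // 1.0 / render-target resolution
static const float MaxAngleDiff = 0.35f;  // about 20 degrees, tune to taste

float4 EdgePS(float2 uv : TEXCOORD0) : COLOR0
{
    // Decode this pixel's normal and its right/down neighbours (0..1 -> -1..1).
    float3 n  = normalize(tex2D(NormalMap, uv).xyz * 2.0f - 1.0f);
    float3 nx = normalize(tex2D(NormalMap, uv + float2(TexelSize.x, 0)).xyz * 2.0f - 1.0f);
    float3 ny = normalize(tex2D(NormalMap, uv + float2(0, TexelSize.y)).xyz * 2.0f - 1.0f);

    // Mark an edge wherever a neighbouring normal differs by more than MaxAngleDiff.
    float threshold = cos(MaxAngleDiff);
    float edge = (dot(n, nx) < threshold || dot(n, ny) < threshold) ? 1.0f : 0.0f;

    return float4(edge, edge, edge, 1.0f);
}

Note that this works regardless of whether the normals are stored in world or view space, since only the relative angle between neighbouring normals matters.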
I assume you are trying to make cartoon-style edges for a cel shader?
If so, simply take the dot product of the world-space normal with the world-space pixel position minus the camera position. As long as your operands are all in the same space, you should be OK.
float edgy = dot(world_space_normal, pixel_world_pos - camera_world_pos);
If edgy is near 0, it's an edge.
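A toon-outline sketch built on that idea; threshold, baseColor and camera_world_pos are assumed inputs you would wire up yourself:

float3 camera_world_pos;   // camera position in world space

// Hypothetical pixel shader inputs; adapt the semantics to your own vertex output.
float4 ToonEdgePS(float3 world_space_normal : TEXCOORD0,
                  float3 pixel_world_pos    : TEXCOORD1,
                  float4 baseColor          : COLOR0) : COLOR0
{
    float3 viewDir = normalize(pixel_world_pos - camera_world_pos);
    float  edgy    = dot(normalize(world_space_normal), viewDir);

    // Silhouette pixels face away from the view direction, so |edgy| is near 0 there.
    float threshold = 0.2f;                                         // tune per scene
    float outline   = 1.0f - smoothstep(0.0f, threshold, abs(edgy));

    // Darken the shaded colour where the outline factor is high.
    return float4(baseColor.rgb * (1.0f - outline), baseColor.a);
}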
If you want a screen-space-sized edge, you will need to render additional object ID information to another surface and post-process the differences into the color surface.
It will depend on how many colors your image contains and how they merge: sharp edges, dithered, blended, ...
Since you say you have the vertex normals, I am assuming that you can access the color information on a single plane.
I have used two techniques with varying success:
I searched the image for local areas of the same color (RGB) and then used the highest of R, G or B to find the 'edge', that is, where the selected R, G or B is no longer the highest value;
the second method I used was to reduce the image to 16 colors internally, which makes it easy to find the outlines.
Constructing vectors from those outlines would then depend on how fine you want the granularity of your 'wireframe' image to be.