Displacement Map UV Mapping? - webgl

Summary
I'm trying to apply a displacement map (height map) to a fairly simple object (a hexagonal plane) and I'm getting unexpected results. The map is grayscale, so I was under the impression it should only affect the Z values of my mesh. However, the displacement map I've created stretches the mesh along the X and Y axes as well. Furthermore, it doesn't seem to use the UV mapping I've created, which all other textures are applied with successfully.
Model and UV Map
Here are reference images of my hexagonal mesh and its corresponding UV map in Blender.
Diffuse and Displacement Textures
These are the diffuse and displacement map textures I am applying to my mesh through Three.js.
Renders
When I render the plane without a displacement map, you can see that the hexagonal plane stays within the lines. However, when I add the displacement map it clearly affects the X and Y positions of the vertices rather than affecting only the Z, expanding the plane well over the lines.
Code
Here's the relevant Three.js code:
// Textures
var diffuseTexture = THREE.ImageUtils.loadTexture('diffuse.png', null, loaded);
var displacementTexture = THREE.ImageUtils.loadTexture('displacement.png', null, loaded);

// Terrain Uniforms
var terrainShader = THREE.ShaderTerrain["terrain"];
var uniformsTerrain = THREE.UniformsUtils.clone(terrainShader.uniforms);
//uniformsTerrain["tNormal"].value = null;
//uniformsTerrain["uNormalScale"].value = 1;
uniformsTerrain["tDisplacement"].value = displacementTexture;
uniformsTerrain["uDisplacementScale"].value = 1;
uniformsTerrain["tDiffuse1"].value = diffuseTexture;
//uniformsTerrain["tDetail"].value = null;
uniformsTerrain["enableDiffuse1"].value = true;
//uniformsTerrain["enableDiffuse2"].value = true;
//uniformsTerrain["enableSpecular"].value = true;
//uniformsTerrain["uDiffuseColor"].value.setHex(0xcccccc);
//uniformsTerrain["uSpecularColor"].value.setHex(0xff0000);
//uniformsTerrain["uAmbientColor"].value.setHex(0x0000cc);
//uniformsTerrain["uShininess"].value = 3;
//uniformsTerrain["uRepeatOverlay"].value.set(6, 6);

// Terrain Material
var material = new THREE.ShaderMaterial({
    uniforms: uniformsTerrain,
    vertexShader: terrainShader.vertexShader,
    fragmentShader: terrainShader.fragmentShader,
    lights: true,
    fog: true
});

// Load Tile
var loader = new THREE.JSONLoader();
loader.load('models/hextile.js', function(g) {
    //g.computeFaceNormals();
    //g.computeVertexNormals();
    g.computeTangents();
    g.materials[0] = material;

    tile = new THREE.Mesh(g, new THREE.MeshFaceMaterial());
    scene.add(tile);
});
Hypothesis
I'm currently juggling three possibilities as to why this could be going wrong:
The UV map is not applying to my displacement map.
I've made the displacement map incorrectly.
I've missed a crucial step in the process that would lock the displacement to Z-only.
And of course, there's secret option #4: none of the above, and I just really have no idea what I'm doing. Or any mixture of the aforementioned.
Live Example
You can view a live example here.
If anybody with more knowledge on the subject could guide me I'd be very grateful!
Edit 1: As per suggestion, I've commented out computeFaceNormals() and computeVertexNormals(). While it did make a slight improvement, the mesh is still being warped.

In your terrain material, set wireframe = true, and you will be able to see what is happening.
Your code and textures are basically fine. The problem occurs when you compute vertex normals in the loader callback function.
The computed vertex normals for the outer ring of your geometry point somewhat outward. This is most likely because computeVertexNormals() computes each vertex normal by averaging the face normals of the neighboring faces, so the face normals of the "sides" of your model (the black part) get averaged into the vertex normals of the vertices that make up the outer ring of the "cap".
As a result, the outer ring of the "cap" expands outward under the displacement map.
EDIT: Sure enough, straight from your model, the vertex normals of the outer ring point outward. The vertex normals for the inner rings are all parallel. Perhaps Blender is using the same logic to generate vertex normals as computeVertexNormals() does.
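In case it helps, here is a minimal sketch of both suggestions (the wireframe check plus overriding the normals so the displacement can only act vertically). It assumes the old Geometry/Face3 API used in the question and that +Z is "up" in the tile's object space; adjust the axis if your model differs:

// Debugging aid: render the terrain material as a wireframe to watch the displaced vertices.
material.wireframe = true;

loader.load('models/hextile.js', function(g) {
    // Force every face and vertex normal to point straight up so the terrain
    // shader's displacement cannot push vertices sideways.
    // (+Z as "up" is an assumption about the model's object space.)
    var up = new THREE.Vector3(0, 0, 1);
    g.faces.forEach(function(face) {
        face.normal.copy(up);
        for (var i = 0; i < face.vertexNormals.length; i++) {
            face.vertexNormals[i].copy(up);
        }
    });

    g.computeTangents();
    g.materials[0] = material;

    tile = new THREE.Mesh(g, new THREE.MeshFaceMaterial());
    scene.add(tile);
});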

The problem is how your object is constructed, because the displacement happens along the normal vector.
The relevant shader code is here:
https://github.com/mrdoob/three.js/blob/master/examples/js/ShaderTerrain.js#L348-350
"vec3 dv = texture2D( tDisplacement, uvBase ).xyz;",
This takes the RGB vector of the displacement texture.
"float df = uDisplacementScale * dv.x + uDisplacementBias;",
This uses only the red channel of that vector, because uDisplacementScale is normally 1.0 and uDisplacementBias is 0.0.
"vec3 displacedPosition = normal * df + position;",
This displaces the position along the normal vector.
So to solve it, you either update the normals or update the shader.
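For example, one quick (untested) way to hard-code the displacement direction instead of using the vertex normal is to patch the shader source before building the material; the vec3( 0.0, 0.0, 1.0 ) below is an assumption about which axis is "up" for your tile:

// Patch ShaderTerrain's vertex shader so displacement always happens along +Z
// in object space instead of along the (outward-pointing) vertex normal.
var patchedVertexShader = terrainShader.vertexShader.replace(
    "vec3 displacedPosition = normal * df + position;",
    "vec3 displacedPosition = vec3( 0.0, 0.0, 1.0 ) * df + position;"
);

var material = new THREE.ShaderMaterial({
    uniforms: uniformsTerrain,
    vertexShader: patchedVertexShader,
    fragmentShader: terrainShader.fragmentShader,
    lights: true,
    fog: true
});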

Related

(DX12 Shadow Mapping) Depth buffer is always filled with 1

I'm really new to graphics programming in general, so please bear with me. I am trying to add shadow mapping from a distant light (orthographic projection) to my scene, but when I follow the (very incomplete) steps from Frank Luna's DX12 book, I find that my SRV for the shadow map is just filled with depths of 1.
If it helps, here is my SRV definition:
D3D12_TEX2D_SRV texDesc = {
    0,      // MostDetailedMip
    -1,     // MipLevels
    0,      // PlaneSlice
    0.0f    // ResourceMinLODClamp
};
D3D12_SHADER_RESOURCE_VIEW_DESC srvDesc = {
    DXGI_FORMAT_R32_TYPELESS,
    D3D12_SRV_DIMENSION_TEXTURE2D,
    D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING,
};
srvDesc.Texture2D = texDesc;
m_device->CreateShaderResourceView(m_lightDepthTexture.Get(), &srvDesc, m_cbvHeap->GetCPUDescriptorHandleForHeapStart());
and here are my DSV heap and descriptor definitions:
D3D12_DESCRIPTOR_HEAP_DESC dsvHeapDesc = {};
dsvHeapDesc.NumDescriptors = 2;
dsvHeapDesc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_DSV;
dsvHeapDesc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;
ThrowIfFailed(m_device->CreateDescriptorHeap(&dsvHeapDesc, IID_PPV_ARGS(&m_dsvHeap)));
D3D12_DEPTH_STENCIL_VIEW_DESC depthStencilDesc = {};
depthStencilDesc.Format = DXGI_FORMAT_D32_FLOAT;
depthStencilDesc.ViewDimension = D3D12_DSV_DIMENSION_TEXTURE2D;
depthStencilDesc.Flags = D3D12_DSV_FLAG_NONE;
CD3DX12_HEAP_PROPERTIES heapProps = CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_DEFAULT);
CD3DX12_RESOURCE_DESC resourceDesc = CD3DX12_RESOURCE_DESC::Tex2D(DXGI_FORMAT_R32_TYPELESS, m_width, m_height, 1, 0, 1, 0, D3D12_RESOURCE_FLAG_ALLOW_DEPTH_STENCIL);
D3D12_CLEAR_VALUE depthOptimizedClearValue = {};
depthOptimizedClearValue.Format = DXGI_FORMAT_D32_FLOAT;
depthOptimizedClearValue.DepthStencil.Depth = 1.0f;
depthOptimizedClearValue.DepthStencil.Stencil = 0;
ThrowIfFailed(m_device->CreateCommittedResource(
    &heapProps,
    D3D12_HEAP_FLAG_NONE,
    &resourceDesc,
    D3D12_RESOURCE_STATE_DEPTH_WRITE,
    &depthOptimizedClearValue,
    IID_PPV_ARGS(&m_dsvBuffer)
));
D3D12_RESOURCE_DESC texDesc;
ZeroMemory(&texDesc, sizeof(D3D12_RESOURCE_DESC));
texDesc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
texDesc.Alignment = 0;
texDesc.Width = m_width;
texDesc.Height = m_height;
texDesc.DepthOrArraySize = 1;
texDesc.MipLevels = 1;
texDesc.Format = DXGI_FORMAT_R32_TYPELESS;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Layout = D3D12_TEXTURE_LAYOUT_UNKNOWN;
texDesc.Flags = D3D12_RESOURCE_FLAG_ALLOW_DEPTH_STENCIL;
ThrowIfFailed(m_device->CreateCommittedResource(
    &heapProps,
    D3D12_HEAP_FLAG_NONE,
    &texDesc,
    D3D12_RESOURCE_STATE_GENERIC_READ,
    &depthOptimizedClearValue,
    IID_PPV_ARGS(&m_lightDepthTexture)
));
CD3DX12_CPU_DESCRIPTOR_HANDLE dsv(m_dsvHeap->GetCPUDescriptorHandleForHeapStart());
m_device->CreateDepthStencilView(m_dsvBuffer.Get(), &depthStencilDesc, dsv);
dsv.Offset(1, m_device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_DSV));
m_device->CreateDepthStencilView(m_lightDepthTexture.Get(), &depthStencilDesc, dsv);
I then created a basic vertex shader that just transforms the vertices for my shadow map (from Frank Luna's book, pages 648 and 650). Since I bound m_lightDepthTexture via ID3D12GraphicsCommandList::OMSetRenderTargets, I assumed that the depth values would be written to m_lightDepthTexture. But simply sampling this texture in my main pass shows that the values are all 1.0f, so nothing actually happened during my shadow pass!
I really have no idea what to ask, but if anyone has a sample DX12 shadow map I could see (Google comes up with DX11 or less, or much too complicated samples), or if there's a good source to learn about this, please let me know!
EDIT: I should say that I changed the format from DXGI_FORMAT_D24_UNORM_S8_UINT, as I think the extra 8 bits for stencil are irrelevant to my case. I changed back to the book's format and nothing changed, so I think this format should be fine.
If you remove the unnecessary return ret; from your shadow vertex shader, the problem then seems to be in the winding order of your sphere's vertices. You can easily verify this by setting the cull mode to D3D12_CULL_MODE_NONE for your shadow PSO.
You can easily correct your sphere's winding order by swapping any two vertices of every triangle, so wherever you have p1, p2, p3 you just write it, for example, as p1, p3, p2.
You will also need to check the matrix multiplication order in your vertex shaders; I didn't check it in detail, but it's inconsistent, and I believe it's the reason the sphere will appear black once you fix the above issue. You also seem to be missing the division by w for your light coordinates in the lighting vertex shader.

What's wrong with my HLSL depth shader?

I'm trying to render a depth texture in XNA 4.0. I've read a few different tutorials several times and really cannot understand what I'm doing wrong.
Depth shader:
float4x4 WVPMatrix;

struct VertexShaderOutput
{
    float4 Position : position0;
    float Depth : texcoord0;
};

VertexShaderOutput VertexShader1(float4 pPosition : position0)
{
    VertexShaderOutput output;
    output.Position = mul(pPosition, WVPMatrix);
    output.Depth.x = 1 - (output.Position.z / output.Position.w);
    return output;
}

float4 PixelShader1(VertexShaderOutput pOutput) : color0
{
    return float4(pOutput.Depth.x, 0, 0, 1);
}

technique Technique1
{
    pass Pass1
    {
        AlphaBlendEnable = false;
        ZEnable = true;
        ZWriteEnable = true;

        VertexShader = compile vs_2_0 VertexShader1();
        PixelShader = compile ps_2_0 PixelShader1();
    }
}
Drawing:
this.depthRenderTarget = new RenderTarget2D(
    this.graphicsDevice,
    this.graphicsDevice.PresentationParameters.BackBufferWidth,
    this.graphicsDevice.PresentationParameters.BackBufferHeight);

...

public void Draw(GameTime pGameTime, Camera pCamera, Effect pDepthEffect, Effect pOpaqueEffect, Effect pNotOpaqueEffect)
{
    this.graphicsDevice.SetRenderTarget(this.depthRenderTarget);
    this.graphicsDevice.Clear(Color.CornflowerBlue);

    this.DrawChunksDepth(pGameTime, pCamera, pDepthEffect);

    this.graphicsDevice.SetRenderTarget(null);

    this.spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, SamplerState.PointClamp, null, null);
    this.spriteBatch.Draw(this.depthRenderTarget, Vector2.Zero, Color.White);
    this.spriteBatch.End();
}

private void DrawChunksDepth(GameTime pGameTime, Camera pCamera, Effect pDepthEffect)
{
    // ...
    this.graphicsDevice.RasterizerState = RasterizerState.CullClockwise;
    this.graphicsDevice.DepthStencilState = DepthStencilState.Default;
    // draw mesh with pDepthEffect
}
Result:
As far as I can see, output.Position.z always equals output.Position.w, but why?
There are several depth definitions that might be useful. Here are some of them.
The easiest is the z-coordinate in camera space (i.e. after applying world and view transform). It usually has the same units as the world coordinate system and it is linear. However, it is always measured in parallel to the view direction. This means that moving left/right and up/down does not change the distance because you stay at the same plane (parallel to the znear/zfar clipping planes). A slight variation is z/far which just scales the values to the [0, 1] interval.
If real distances (in the Euclidean metric) are needed, you have to calculate them in the shader. If you just need coarse values, the vertex shader is enough. If the values should be accurate, do this in the pixel shader. Basically, you need to calculate the length of the position vector after applying world and view transforms. Units are equal to world space units.
Depth buffer depth is non-linear and optimized for depth buffering. This is the depth that comes out of the projection transform (and the following divide by w). The near clipping plane is mapped to a depth of 0, the far clipping plane to a depth of 1. If you move a very near pixel along the view direction, its depth value changes much more than that of a far pixel moved by the same amount. This is because display errors (due to floating-point imprecision) at near pixels are much more visible than at far pixels. This depth is also measured parallel to the view direction.
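For reference, with a standard D3D-style perspective projection (view-space depth z mapped to the [0, 1] range; other conventions differ), the stored value is roughly depth = far / (far - near) * (1 - near / z), which is 0 at the near plane, 1 at the far plane, and strongly non-linear in between, so most of the precision sits close to the near plane.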

How to convert TangoXyzIjData into a matrix of z-values

I am currently using a Project Tango tablet for robotic obstacle avoidance. I want to create a matrix of z-values as they would appear on the Tango screen, so that I can use OpenCV to process the matrix. When I say z-values, I mean the distance each point is from the Tango. However, I don't know how to extract the z-values from the TangoXyzIjData and organize the values into a matrix. This is the code I have so far:
public void action(TangoPoseData poseData, TangoXyzIjData depthData) {
    byte[] buffer = new byte[depthData.xyzCount * 3 * 4];
    FileInputStream fileStream = new FileInputStream(
            depthData.xyzParcelFileDescriptor.getFileDescriptor());
    try {
        fileStream.read(buffer, depthData.xyzParcelFileDescriptorOffset, buffer.length);
        fileStream.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    Mat m = new Mat(depthData.ijRows, depthData.ijCols, CvType.CV_8UC1);
    m.put(0, 0, buffer);
}
Does anyone know how to do this? I would really appreciate help.
The short answer is it can't be done, at least not simply. The XYZij struct in the Tango API does not work completely yet. There is no "ij" data. Your retrieval of buffer will work as you have it coded. The contents are a set of X, Y, Z values for measured depth points, roughly 10000+ each callback. Each X, Y, and Z value is of type float, so not CV_8UC1. The problem is that the points are not ordered in any way, so they do not correspond to an "image" or xy raster. They are a random list of depth points. There are ways to get them into some xy order, but it is not straightforward. I have done both of these:
render them to an image, with the depth encoded as color, and pull out the image as pixels
use the model/view/perspective from OpenGL and multiply out the locations of each point and then figure out their screen space location (like OpenGL would during rendering). Sort the points by their xy screen space. Instead of the calculated screen-space depth just keep the Z value from the original buffer.
or
wait until (if) the XYZij struct is fixed so that it returns ij values.
I too wish to use Tango for object avoidance for robotics. I've had some success by simplifying the use case to be only interested in the distance of any object located at the center view of the Tango device.
In Java:
private Double centerCoordinateMax = 0.020;
private TangoXyzIjData xyzIjData;

final FloatBuffer xyz = xyzIjData.xyz;
double cumulativeZ = 0.0;
int numberOfPoints = 0;

// xyz holds 3 floats (x, y, z) per point, so the buffer contains xyzCount * 3 values
for (int i = 0; i < xyzIjData.xyzCount * 3; i += 3) {
    float x = xyz.get(i);
    float y = xyz.get(i + 1);
    if (Math.abs(x) < centerCoordinateMax &&
            Math.abs(y) < centerCoordinateMax) {
        float z = xyz.get(i + 2);
        cumulativeZ += z;
        numberOfPoints++;
    }
}

Double distanceInMeters;
if (numberOfPoints > 0) {
    distanceInMeters = cumulativeZ / numberOfPoints;
} else {
    distanceInMeters = null;
}
Said simply, this code takes the average distance over a small square centered at the origin of the x and y axes.
centerCoordinateMax = 0.020 was determined to work based on observation and testing. The square typically contains 50 points in ideal conditions and fewer when held close to the floor.
I've tested this using version 2 of my tango-caminada application, and the depth measuring seems quite accurate. Standing half a meter from a doorway, I slid towards the open door and the distance changed from 0.5 meters to 2.5 meters, which is the wall at the end of the hallway.
Simulating a robot being navigated I moved the device towards a trash can in the path until 0.5 meters separation and then rotated left until the distance was more than 0.5 meters and proceeded forward. An oversimplified simulation, but the basis for object avoidance using Tango depth perception.
You can do this by using the camera intrinsics to convert XY coordinates to normalized values -- see this post, Google Tango: Aligning Depth and Color Frames -- it's talking about texture coordinates, but it's exactly the same problem.
Once normalized, move to screen space (e.g. 1280 x 720), and then the Z coordinate can be used to generate a pixel value for OpenCV to chew on. You'll need to decide on your own how to color the pixels that don't correspond to depth points, and advisedly before you use the depth information to further colorize pixels.
The main thing to remember is that the raw coordinates returned already use the basis vectors you want, i.e. you do not want to apply the pose attitude or location.
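As a rough sketch of that mapping (assuming the usual pinhole model behind the Tango camera intrinsics, with focal lengths fx, fy and principal point cx, cy): u = fx * (X / Z) + cx and v = fy * (Y / Z) + cy gives the screen-space pixel (u, v) at which to store the depth Z for each point (X, Y, Z) in the cloud.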

How to pass from a Farseer vertices list to VertexPositionColorTexture vertex data

My issue started when I was doing the texture-to-vertices example (https://gamedev.stackexchange.com/questions/30050/building-a-shape-out-of-an-texture-with-farseer). Then I wondered whether it's possible to convert these "Farseer vertices" into vertex data that can be used with DrawUserIndexedPrimitives, so the vertices are ready for modification on alpha textures.
Example:
You draw your texture (with transparency in some places) over the triangle-strip vertex data so you can manipulate the points in order to distort the image, like this:
http://www.tutsps.com/images/Water_Design_avec_Photoshop/Water_Design_avec_Photoshop_20.jpg
As you can see, the letter A was just a normal image in a PNG file, but after the conversion I'm looking for, it can be used to distort the image.
Please, if you have any solution, give some code or a link to a tutorial that can help me figure this out...
Thanks all!!
P.S. I think the main issue is how to build the index data and the texture coordinates from just the vertices that PolygonTools.CreatePolygon produces.
TexturedFixture polygon = fixture.UserData as TexturedFixture;
effect.Texture = polygon.Texture;
effect.CurrentTechnique.Passes[0].Apply();
VertexPositionColorTexture[] points;
int vertexCount;
int[] indices;
int triangleCount;
polygon.Polygon.GetTriangleList(fixture.Body.Position, fixture.Body.Rotation, out points, out vertexCount, out indices, out triangleCount);
GraphicsDevice.SamplerStates[0] = SamplerState.AnisotropicClamp;
GraphicsDevice.RasterizerState = new RasterizerState() { FillMode = FillMode.Solid, CullMode = CullMode.None, };
GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionColorTexture>(PrimitiveType.TriangleList, points, 0, vertexCount, indices, 0, triangleCount);
This will do the trick

Punching alpha-filled holes into render-to-textures in Three.js

I am using render-to-texture to do postprocessing and then blending several 2D layers together.
Currently I am using a stencil mask to make "holes" in the render-to-texture targets, leaving some of the areas transparent. However, this is a little cumbersome in my case. I'd rather skip the stencil mask and just use normal polygon-fill operations to draw the holes.
What kinds of methods exist for rendering "fill to alpha 0.0" areas in the scene? I.e. the existing render-to-texture destination alpha value would be ignored and simply replaced with 0.0. I assume you can set OpenGL mode bits so that this can be done (how?) without needing a custom fragment shader.
I already know how to set the depth mask to ignore mode, so I can redraw over the top of the existing polygons.
You just have to use the THREE.NoBlending blending mode in the material used for the polygons you draw to make the holes. The material should be a ShaderMaterial so you can write the desired alpha, like here:
var r = 0.5;
var g = 0;
var b = 0;
var a = 0.8;
var material = new THREE.ShaderMaterial( {
    uniforms: {
        col: { type: "v4", value: new THREE.Vector4( r, g, b, a ) }
    },
    fragmentShader: "uniform vec4 col; void main() {\n\tgl_FragColor = col;\n}",
    side: THREE.DoubleSide
} );
material.transparent = true;
material.blending = THREE.NoBlending;
(Note that the DoubleSide parameter is not related to the problem but it is useful sometimes.)
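To illustrate, here is a hypothetical usage sketch (the rtScene, rtCamera, holeGeometry and renderTarget names are placeholders for your own render-to-texture setup, not from the question): add a mesh with this material to the scene that gets rendered into the target, and wherever it is drawn the target's color and alpha are overwritten with the material's values, because THREE.NoBlending replaces instead of blending.

// Placeholders: rtScene, rtCamera, holeGeometry and renderTarget come from your own setup.
var hole = new THREE.Mesh(holeGeometry, material); // the NoBlending ShaderMaterial above
rtScene.add(hole);

// Pixels covered by the hole mesh end up with the material's exact rgba;
// set a = 0.0 above to punch a fully transparent hole.
renderer.render(rtScene, rtCamera, renderTarget);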
