z value is not available in fragment shader in WebGL? - opencv

I'm trying to apply OpenCV's fisheye projection model to this demo (source code is here) showing 3D cube using three.js developed by Giliam de Carpentier.
The demo above already applies a fisheye-like effect, but I need a real fisheye projection driven by a real camera calibration.
I started implementing a new fragment shader for it, but I noticed that the third component (z) of the vertex position is always 0.
So it's not possible to compute a = x/z, b = y/z as the projection model's formula requires.
Since I'm very new to three.js and WebGL, I could not figure out what's wrong.
Could you tell me how I can get the z value of a vertex position?
Thank you in advance.
Update: to reproduce the problem, replace the vertex shader in lensdistortion-webgl.html with the following code.
vertexShader: [
  "void main() {",
  "  vec3 vposition = position;",
  "  if (position[2] == 0.0) {", // check whether the z value is zero
  "    vposition[1] = 0.5 * vposition[1];", // if so, squash the shape along the y-axis
  "  }",
  "  gl_Position = projectionMatrix * (modelViewMatrix * vec4(vposition, 1.0));",
  "}"
].join("\n"),
You will see that every box is squashed, which shows the z coordinate is always zero.
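For reference, a minimal sketch of one way to get a usable depth: the position attribute is in model space (where z can legitimately be 0), so transform into view space first and use the view-space z there. This assumes three.js's built-in modelViewMatrix and projectionMatrix uniforms; the names a and b are only illustrative.
vertexShader: [
  "void main() {",
  "  // transform the model-space position into view (camera) space",
  "  vec4 vpos = modelViewMatrix * vec4(position, 1.0);",
  "  // the camera looks down -z in view space, so -vpos.z is the depth",
  "  float a = vpos.x / -vpos.z;",
  "  float b = vpos.y / -vpos.z;",
  "  // a and b can feed the fisheye model; the standard projection is shown here",
  "  gl_Position = projectionMatrix * vpos;",
  "}"
].join("\n"),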

Related

How can I make my WebGL Coordinate System "Top Left" Oriented?

For computational efficiency, I use a fragment shader to implement a simple 2D metaballs algorithm. The data for the circles to render is top-left oriented.
I have everything working, except that the origin of WebGL's coordinate system (bottom-left) is giving me a hard time: Obviously, the rendered output is mirrored along the horizontal axis.
Following https://webglfundamentals.org/webgl/lessons/webgl-2d-rotation.html (and others), I tried to rotate things using a vertex shader. Without any success unfortunately.
What is the most simple way of achieving the reorientation of WebGL's coordinate system?
I'd appreciate any hints and pointers, thanks! :)
Please find a working (not working ;) ) example here:
https://codesandbox.io/s/gracious-fermat-znbsw?file=/src/index.js
Since you are using gl_FragCoord in your fragment shader, you can't fix it from the vertex shader, because gl_FragCoord holds canvas coordinates, but upside down. You could easily invert it in JavaScript when you pass the values through to WebGL:
gl.uniform3fv(gl.getUniformLocation(program, `u_circles[${i}]`), [
  circles[i].x,
  canvas.height - circles[i].y - 1,
  circles[i].r
]);
If you want to do it in the shader and keep using gl_FragCoord, then pass the height of the canvas to the shader as a uniform and do the y conversion there, with something like:
vec2 screenSpace = vec2(gl_FragCoord.x, canvasHeight - gl_FragCoord.y - 1.0);
The -1 is because the coordinates start at 0.
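Putting it together, a minimal sketch of the shader-side variant; the uniform name canvasHeight is illustrative and would be set from JavaScript with gl.uniform1f:
precision mediump float;
uniform float canvasHeight; // e.g. gl.uniform1f(gl.getUniformLocation(program, "canvasHeight"), canvas.height)

void main() {
  // flip y so coordinates behave as if the origin were the top-left corner
  vec2 screenSpace = vec2(gl_FragCoord.x, canvasHeight - gl_FragCoord.y - 1.0);
  // ... metaballs computation using screenSpace goes here ...
  gl_FragColor = vec4(vec3(screenSpace.y / canvasHeight), 1.0); // placeholder output
}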

Computing x,y coordinate (3D) from image point won't work

I am trying to reproduce this code in Python to get real-world coordinates, but the results don't coincide.
My code :
uvPoint = np.matrix('222.84; 275.05; 1')
rots = cv2.Rodrigues(_rvecs[18])[0]
rot = np.linalg.inv(rots)
cam = np.linalg.inv(camera_matrix)
leftSideMat = rot * cam * uvPoint
rightSideMat = rot * _tvecs[18]
s = np.true_divide((4.52 + rightSideMat), leftSideMat)
rot * (s * cam * uvPoint - _tvecs[18])
My camera matrix:
array([[613.87755242, 0. , 359.6984484 ],
[ 0. , 609.35282925, 242.55955439],
[ 0. , 0. , 1. ]])
Rotation matrix:
array([[ 0.73824258, 0.03167042, 0.67379142],
[ 0.13296486, 0.97246553, -0.19139263],
[-0.66130042, 0.23088477, 0.71370441]])
And translation vector:
array([[-243.00462163],
[ -95.97464544],
[ 935.8852482 ]])
I don't know what Zconst is, but whatever I try for the z constant, I can't even get close to the real-world coordinates, which are (36, 144). What am I doing wrong here?
Based on your comment, I think what you want is pose estimation with a known camera projection matrix.
You should check this link for a Python implementation of what you want: https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_calib3d/py_pose/py_pose.html
Edit
From your code, it seems your construction of the left- and right-side matrices is correct. But are you sure the input rotation and translation are correct? Can you try to plot a box on the original image, following this tutorial, using that rotation and translation, and check whether the box coincides with the 2D projected pattern in your data?
If possible, please post the original image that you are using.
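A minimal sketch of that check using cv2.projectPoints; dist_coeffs and the file name are assumptions (pass np.zeros(5) for dist_coeffs if you calibrated without distortion):
import cv2
import numpy as np

# project the axes of the calibration frame into the image with the same pose
axis = np.float32([[0, 0, 0], [5, 0, 0], [0, 5, 0], [0, 0, -5]])  # in calibration units
imgpts, _ = cv2.projectPoints(axis, _rvecs[18], _tvecs[18], camera_matrix, dist_coeffs)
imgpts = imgpts.reshape(-1, 2).astype(int)

img = cv2.imread('original.png')  # hypothetical file name
origin = tuple(imgpts[0])
for point, color in zip(imgpts[1:], [(0, 0, 255), (0, 255, 0), (255, 0, 0)]):
    cv2.line(img, origin, tuple(point), color, 3)  # x red, y green, z blue (BGR)
cv2.imwrite('pose_check.png', img)
If the drawn axes land on the calibration pattern, the pose is fine and the problem is elsewhere.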
Edit
Please check the original post, where the scale s is calculated from a single element, the one in the 3rd position of the output vector: rightSideMat.at(2,0) / leftSideMat.at(2,0).
You are instead dividing the whole output vectors: (4.52 + rightSideMat) / leftSideMat.
Try to do it with the same single-element operation. By rights, the scale s should be a float, not a matrix.
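In NumPy terms, a sketch of that fix applied to the code above (index [2, 0] selects the third component; 4.52 stands in for Zconst as in the question):
# compute the scale from the third (z) components only, so s is a scalar float
s = (4.52 + rightSideMat[2, 0]) / leftSideMat[2, 0]
# with np.matrix operands, '*' is matrix multiplication
world_point = rot * (s * cam * uvPoint - _tvecs[18])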

Accessing z-coordinate in world space in shader modifier SceneKit

Looking for
I’m having trouble accessing the z-coordinate of the rendered pixel in world space. In SceneKit, I want a 3D plane whose rendered color is directly related to the z-coordinate of the rendered point.
Situation
I’m working with SpriteKit, and I’m using an SK3DNode to embed a SceneKit scene inside my SpriteKit scene. For the SceneKit scene, I’m using a .dae Collada file exported from Blender. It contains a plane mesh and a light.
I’m applying shader modifiers to modify the geometry and the lighting model.
self.waterGeometry.shaderModifiers = @{
    SCNShaderModifierEntryPointGeometry : self.geomModifier,
    SCNShaderModifierEntryPointSurface : self.cellShadingModifier
};
The geometry modifier code (self.geomModifier):
// Waves Modifier
float Amplitude = 0.02;
float Frequency = 15.0;
vec2 nrm = _geometry.position.xz;
float len = length(nrm)+0.0001; // for robustness
nrm /= len;
float a = len + Amplitude*sin(Frequency * _geometry.position.z + u_time * 1.6);
_geometry.position.xz = nrm * a;
The geometry modifier applies a sine transformation to _geometry.position to simulate waves. In the image below, the sketched sprites are SpriteKit sprites which have a higher zPosition and do not interfere with the SK3DNode. Notice the subtle waves (z displacement) produced by the geometry modifier.
As the next step, I want the output color to be computed from the point’s z-coordinate in world space. This could be either _surface.diffuse or _output.color; that doesn't matter much to me (it would imply a different point of insertion for the shader modifier, but that's not an issue).
I have tried
The following code in the surface modifier (self.cellShadingModifier).
vec4 geometry = u_inverseViewTransform * vec4(_surface.position, 1.0);
if (geometry.y < 0.0) {
    _surface.diffuse.rgb *= vec3(0.4);
}
_surface.position is in view space, and I hoped to transform it to world space by using u_inverseViewTransform. The Apple docs say:
Geometric fields (such as position and normal) are expressed in view space. You can use SceneKit’s uniforms (such as u_inverseViewTransform) to operate in a different coordinate space, [...]
As you can see, it is flickering and does not appear to be based on the _geometry.position I just modified. I have tested this both in the simulator and on device (iPad Air). I believe I am making a simple error, as I'm probably confusing the _surface and _geometry properties.
Can anyone tell me where I can get the z-coordinate (world space) of the currently shaded point of the mesh, so I can use it in my rendering method?
Note
I have also tried to access _geometry inside the surface shader modifier, but I get the error Use of undeclared identifier '_geometry', which is strange, because the Apple documentation says:
You can use the structures defined by earlier entry points in later entry points. For example, a snippet associated with the SCNShaderModifierEntryPointFragment entry point can read from the _surface structure defined by the SCNShaderModifierEntryPointSurface entry point.
Note 2
I could have the LightingModel shader calculate off of the generated sine wave (and avoid the search for the z-coordinate), but in the future I may be adding additional waves and using the z-coordinate would be more maintainable, not to mention elegant.
I've also been learning how to use the shader modifiers. I have a solution to this which works for me, using both the inverse model transform and the inverse view transform.
The following code will paint the right-hand side of the model at the centre of the scene with a red tint. You should be able to check the other position element (y, I think) to get the result you want.
vec4 orig = _surface.diffuse;
vec4 transformed_position = u_inverseModelTransform * u_inverseViewTransform * vec4(_surface.position, 1.0);
if (transformed_position.z < 0.0) {
    _surface.diffuse = mix(vec4(1.0, 0.0, 0.0, 1.0), orig, 0.5);
}
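Note that chaining both inverses takes _surface.position all the way back to model space. If you strictly want world space, as the question asks, a sketch using only the inverse view transform (per the Apple docs quoted above) would be:
vec4 world_position = u_inverseViewTransform * vec4(_surface.position, 1.0);
if (world_position.z < 0.0) {
    _surface.diffuse.rgb *= vec3(0.4); // darken everything below world z = 0
}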

DirectX11 - Matrix Translation

I am creating a 3D scene and I have just inserted a cube object into it. It renders fine at the origin, but when I try to rotate it and then translate it, I get a huge deformed cube. Here is the problem area in my code:
D3DXMATRIX cubeROT, cubeMOVE;
D3DXMatrixRotationY(&cubeROT, D3DXToRadian(45.0f));
D3DXMatrixTranslation(&cubeMOVE, 10.0f, 2.0f, 1.0f);
D3DXMatrixTranspose(&worldMatrix, &(cubeROT * cubeMOVE));
// Put the model vertex and index buffers on the graphics pipeline to prepare them for drawing.
m_Model->Render(m_Direct3D->GetDeviceContext());
// Render the model using the light shader.
result = m_LightShader->Render(m_Direct3D->GetDeviceContext(), m_Model->GetIndexCount(), worldMatrix, viewMatrix, projectionMatrix,
m_Model->GetTexture(), m_Light->GetDirection(), m_Light->GetDiffuseColor());
// Reset the world matrix.
m_Direct3D->GetWorldMatrix(worldMatrix);
I have discovered that it's the cubeMOVE part of the transpose that is giving me the problem but I have no idea why.
This rotates the cube properly:
D3DXMatrixTranspose(&worldMatrix, &cubeROT);
This translates the cube properly:
D3DXMatrixTranslation(&worldMatrix, 10.0f, 2.0f, 1.0f);
But this creates the deformed mesh:
D3DXMatrixTranspose(&worldMatrix, &cubeMOVE);
I'm quite new to DirectX so any help would be very much appreciated.
I don't think transpose does what you think it does. To combine transformation matrices, you just multiply them -- no need to transpose. I guess it should be simply:
worldMatrix = cubeROT * cubeMOVE;
Edit
The reason "transpose" seems to work for rotation but not translation, is that transpose flips the non-diagonal parts of the matrix. But for an axis-rotation matrix, that leaves the matrix nearly unchanged. (It does change a couple of signs, but that would only affect the direction of the rotation.) For a translation matrix, applying a transpose would completely deform it -- hence the result you see.
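To see concretely why transposing a translation matrix deforms the result, here is a sketch in D3DX's row-vector convention:
$$T = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ t_x & t_y & t_z & 1 \end{pmatrix}, \qquad T^\mathsf{T} = \begin{pmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
A row vector $(x, y, z, 1)$ times $T$ gives $(x + t_x,\; y + t_y,\; z + t_z,\; 1)$, a clean translation; times $T^\mathsf{T}$ it gives $(x,\; y,\; z,\; x t_x + y t_y + z t_z + 1)$, so the offsets leak into the w component and the mesh is deformed after the perspective divide.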

Matrix mult order in Direct3D

I've received two conflicting answers about the order in which to multiply matrices in Direct3D to achieve a given result. Tutorials state that you multiply from left to right, and that's fine, but it's not how I would visualize it.
Here's an example:
OpenGL (reading from top to bottom):
glRotatef(90.0f, 0.0f, 0.0f, 1.0f);
glTranslatef(20.0f, 0.0f, 0.0f);
So you visualize the world axes rotating 90 degrees. Then you translate by 20.0 along the now-rotated x-axis, so it looks like you are moving up along the world y-axis.
In Direct3D, doing:
wm = rotatem * translatem;
is different. It looks like the object was just rotated at the origin and then translated along the world's x-axis, so it goes to the right and not up. It only works once I reverse the order and read from right to left.
Also, for example, in Frank Luna's book on DX10, he explains how to do mirror reflections. I get all of that, but when he does, for example:
reflection_matrix = world_m * reflection_m;
around the xy plane, do I interpret this as first doing the world positioning and then the reflection, or the opposite?
The problem is that the order in which you multiply the matrices to get the composite transform matrix is reversed from what it should be. You are doing wm = rotatem * translatem, which follows the order of operations you use for OpenGL, but for DirectX the matrix should have been wm = translatem * rotatem.
The fundamental difference between OpenGL and DirectX arises from the fact that OpenGL treats matrices in column-major order, while DirectX treats matrices in row-major order.
To go from column major to row major you need to find the transpose ( swap the rows and the columns ) of the OpenGL matrix.
So, if you write wm = rotatem * translatem in OpenGL, then you want the transpose of that for DirectX, which is:
wm^T = (rotatem * translatem)^T = translatem^T * rotatem^T
which explains why the order of the matrix multiply has to be reversed in DirectX.
See this answer. In OpenGL, each subsequent operation is a pre-multiplication of all the operations before it, not a post-multiplication. You can see a matrix multiplication of a vector as a function evaluation.
If what you want is to first rotate a vector and then translate your rotated vector, which you in OpenGL would have solved by first calling glRotatef and then calling glTranslatef, you could express that using function calls as
myNewVector = translate(rotate(myOldVector))
The rotate function does this
rotate(anyVector) = rotationMatrix * anyVector
and the translate function does this
translate(anyOtherVector) = translationMatrix * anyOtherVector
so your equivalent expression using matrix multiplications would look like
myNewVector = translationMatrix * rotationMatrix * myOldVector
That is, your combined transformation matrix would be translationMatrix * rotationMatrix.
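As a worked equation in the column-vector convention this answer uses: rotating first and translating second is $v' = T(Rv) = (TR)v$, so the transform applied first sits closest to the vector and the combined matrix is $TR$.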
