I'm trying to develop a deferred renderer in my own WebGL engine.
I have everything ready to calculate the lighting, but I can't reconstruct the position from the depth buffer properly...
Right now I'm doing the calculations outside the shader, on the CPU with JS; if I can get the correct position there, I'll translate it to the shader.
This is the JS code:
var model = mat4.create();
var view = mat4.create();
// gl-matrix's mat4.perspective expects the vertical FOV in radians
var projection = mat4.perspective(mat4.create(), 45, gl.canvas.width/gl.canvas.height, 0.1, 100);
var pos = [0,0,-1,1];
var viewport = gl.getViewport();
var modelview = mat4.multiply(mat4.create(), view, model);
var viewprojection = mat4.multiply(mat4.create(), projection, view);
var mvp = mat4.multiply(mat4.create(), projection, modelview);
// clip-space coordinates of pos
var posScreenCoords = vec4.transformMat4(vec4.create(), pos, mvp);
var vec3unproj = vec3.create();
vec3.unproject(vec3unproj, [posScreenCoords[0],posScreenCoords[1],posScreenCoords[2]], viewprojection, viewport);
I know some of these calculations are redundant, like multiplying model and view when both are the identity, but I left them in to show the process step by step.
With everything at the identity, the camera faces [0,0,-1] in world coordinates, so when I transform those coordinates with the MVP matrix to get screen coordinates, the result should be a point at the center of the screen, [0,0,Z], and that is what I'm getting.
Everything is OK up to this point. Here is where I'm stuck: I now have the same data I would have in the shader, an XY in screen coordinates and a Z, and I have to unproject these coordinates back to world coordinates.
I'm using the gl-matrix library, but it doesn't include vec3.unproject, so I took one from an external source; however, I'm not getting the desired result [0,0,-1] from it:
var unprojectMat = mat4.create();
var unprojectVec = vec4.create();

vec3.unproject = function (out, vec, viewprojection, viewport) {
    var m = unprojectMat;
    var v = unprojectVec;

    // window coordinates -> normalized device coordinates [-1, 1]
    v[0] = (vec[0] - viewport[0]) * 2.0 / viewport[2] - 1.0;
    v[1] = (vec[1] - viewport[1]) * 2.0 / viewport[3] - 1.0;
    v[2] = 2.0 * vec[2] - 1.0;
    v[3] = 1.0;

    // back through the inverse view-projection
    if (!mat4.invert(m, viewprojection))
        return null;

    vec4.transformMat4(v, v, m);
    if (v[3] === 0.0)
        return null;

    // perspective divide
    out[0] = v[0] / v[3];
    out[1] = v[1] / v[3];
    out[2] = v[2] / v[3];
    return out;
};
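For reference, this unproject expects vec in window coordinates (pixel XY plus a depth value in [0,1]), while posScreenCoords coming out of the MVP multiplication is still in homogeneous clip space. Assuming the usual GL conventions, the conversion between the two would be:
ndc = clip.xyz / clip.w
win.x = (ndc.x * 0.5 + 0.5) * viewport[2] + viewport[0]
win.y = (ndc.y * 0.5 + 0.5) * viewport[3] + viewport[1]
win.z = ndc.z * 0.5 + 0.5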
You can see the test here. Any ideas? Thanks
I have a disparity image created with a calibrated stereo camera pair and OpenCV. It looks good, and my calibration data is good.
I need to calculate the real-world distance at a pixel.
From other questions on Stack Overflow, I see that the approach is:
depth = baseline * focal / disparity
Using the function:
setMouseCallback("disparity", onMouse, &disp);

static void onMouse(int event, int x, int y, int flags, void* param)
{
    cv::Mat &xyz = *((cv::Mat*)param); // cast and deref the param
    if (event == cv::EVENT_LBUTTONDOWN)
    {
        unsigned int val = xyz.at<uchar>(y, x);
        double depth = (camera_matrixL.at<float>(0, 0) * T.at<float>(0, 0)) / val;
        cout << "x= " << x << " y= " << y << " val= " << val << " distance: " << depth << endl;
    }
}
I click on a point that I have measured to be 3 meters away from the stereo camera.
What I get is:
val= 31 distance: 0.590693
The depth mat values are between 0 and 255; the depth mat is of type 0, i.e. CV_8UC1.
The stereo baseline is 0.0643654 (in meters).
The focal length is 284.493.
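(For reference, plugging these numbers directly into the formula reproduces the printed value, so the raw 8-bit pixel value is being used as the disparity: 0.0643654 * 284.493 / 31 ≈ 0.5907 m.)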
I have also tried:
(from OpenCV - compute real distance from disparity map)
float fMaxDistance = static_cast<float>((1. / T.at<float>(0, 0) * camera_matrixL.at<float>(0, 0)));
//outputDisparityValue is single 16-bit value from disparityMap
float fDisparity = val / (float)cv::StereoMatcher::DISP_SCALE;
float fDistance = fMaxDistance / fDisparity;
which gives me a distance (closer to the truth, if we assume mm units) of val= 31 distance: 2281.27
But it is still incorrect.
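(As a cross-check of this second attempt: fMaxDistance = 284.493 / 0.0643654 ≈ 4420, and 4420 / (31 / 16) ≈ 2281, which matches the printed output.)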
Which of these approaches is correct? And where am I going wrong?
Left, Right, Depth map. (EDIT: this depth map is from a different pair of images)
EDIT: Based on an answer, I am trying this:
std::vector<Eigen::Vector4d> pointcloud;
float fx = 284.492615;
float fy = 285.683197;
float cx = 424; // 425.807709;
float cy = 400; // 395.494293;
cv::Mat Q = cv::Mat(4, 4, CV_32F);
Q.at<float>(0, 0) = 1.0;
Q.at<float>(0, 1) = 0.0;
Q.at<float>(0, 2) = 0.0;
Q.at<float>(0, 3) = -cx; // cx
Q.at<float>(1, 0) = 0.0;
Q.at<float>(1, 1) = 1.0;
Q.at<float>(1, 2) = 0.0;
Q.at<float>(1, 3) = -cy; // cy
Q.at<float>(2, 0) = 0.0;
Q.at<float>(2, 1) = 0.0;
Q.at<float>(2, 2) = 0.0;
Q.at<float>(2, 3) = -fx; // focal
Q.at<float>(3, 0) = 0.0;
Q.at<float>(3, 1) = 0.0;
Q.at<float>(3, 2) = -1.0 / 6; // 1.0 / baseline
Q.at<float>(3, 3) = 0.0;      // cx - cx'

cv::Mat XYZcv(depth_image.size(), CV_32FC3);
reprojectImageTo3D(depth_image, XYZcv, Q, false, CV_32F);

for (int y = 0; y < XYZcv.rows; y++)
{
    for (int x = 0; x < XYZcv.cols; x++)
    {
        cv::Point3f pointOcv = XYZcv.at<cv::Point3f>(y, x);
        Eigen::Vector4d pointEigen(0, 0, 0, left.at<uchar>(y, x) / 255.0);
        pointEigen[0] = pointOcv.x;
        pointEigen[1] = pointOcv.y;
        pointEigen[2] = pointOcv.z;
        pointcloud.push_back(pointEigen);
    }
}
And that gives me a cloud.
I would recommend using reprojectImageTo3D from OpenCV to reconstruct the distance from the disparity. Note that when using this function you do have to divide the output of StereoSGBM by 16. You should already have all the parameters: f, cx, cy, and Tx. Take care to give f and Tx in the same units; cx and cy are in pixels.
Since the problem is that you need the Q matrix, I think this link or this one should help you build it. If you don't want to use reprojectImageTo3D, I strongly recommend the first link!
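For what it's worth, here is a minimal sketch of that approach (my own illustration, not your exact calibration; the function name and parameters are placeholders, the Q layout follows the stereoRectify documentation, and the sign of the 1/Tx term depends on your rectification):
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>

// disp16S: raw StereoSGBM/StereoBM output (CV_16S, fixed-point, scaled by 16)
// f, cx, cy in pixels; Tx = baseline in the units you want the depth in (e.g. metres)
cv::Mat disparityToXYZ(const cv::Mat& disp16S, float f, float cx, float cy, float Tx)
{
    cv::Mat dispF;
    disp16S.convertTo(dispF, CV_32F, 1.0 / 16.0);   // undo the x16 fixed-point scale

    cv::Mat Q = (cv::Mat_<float>(4, 4) <<
        1, 0, 0,          -cx,
        0, 1, 0,          -cy,
        0, 0, 0,            f,
        0, 0, -1.0f / Tx,    0);  // last entry is (cx - cx')/Tx, 0 if the principal points match

    cv::Mat xyz;                                    // CV_32FC3, same units as Tx
    cv::reprojectImageTo3D(dispF, xyz, Q, true);
    return xyz;                                     // xyz.at<cv::Point3f>(y, x).z is the depth
}
Reading xyz at the clicked pixel should then give a metric 3D point directly (in the units of Tx), rather than going through the 8-bit visualisation values.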
I hope this helps!
To find the point-based depth of an object from the camera, use the following formula:
Depth = (Baseline x Focal length) / disparity
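(With the numbers from the question, Baseline x Focal length = 0.0643654 * 284.493 ≈ 18.31, so the measured 3 m would correspond to a true disparity of roughly 6.1 pixels.)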
I hope you are using it correctly, as per your question.
Try the Nerian calculator below for the theoretical error:
https://nerian.com/support/resources/calculator/
Also, use sub-pixel interpolation in your code.
Make sure the object you are measuring the depth of has good texture.
The most common problems with depth maps are:
Untextured surfaces (plain objects)
Bad calibration results
What is the RMS value of your calibration, your camera resolution, and your lens type (focal length)? This information is important for giving much better guidance for your program.
I'm writing an iOS app that renders a point cloud in SceneKit using a custom geometry. This post was super helpful in getting me there (though I translated it to Objective-C), as was David Rönnqvist's book 3D Graphics with SceneKit (see the chapter on custom geometries). The code works fine, but I'd like to render the points at a larger point size - at the moment the points are super tiny.
According to the OpenGL docs, you can do this by calling glPointSize(). From what I understand, SceneKit is built on top of OpenGL, so I'm hoping there is a way to access this function or do the equivalent using SceneKit. Any suggestions would be much appreciated!
My code is below. I've also posted a small example app on Bitbucket, accessible here.
// set the number of points
NSUInteger numPoints = 10000;
// set the max distance points
int randomPosUL = 2;
int scaleFactor = 10000; // because I want decimal points
// but am getting random values using arc4random_uniform
PointcloudVertex pointcloudVertices[numPoints];
for (NSUInteger i = 0; i < numPoints; i++) {
    PointcloudVertex vertex;
    float x = (float)(arc4random_uniform(randomPosUL * 2 * scaleFactor));
    float y = (float)(arc4random_uniform(randomPosUL * 2 * scaleFactor));
    float z = (float)(arc4random_uniform(randomPosUL * 2 * scaleFactor));
    vertex.x = (x - randomPosUL * scaleFactor) / scaleFactor;
    vertex.y = (y - randomPosUL * scaleFactor) / scaleFactor;
    vertex.z = (z - randomPosUL * scaleFactor) / scaleFactor;
    vertex.r = arc4random_uniform(255) / 255.0;
    vertex.g = arc4random_uniform(255) / 255.0;
    vertex.b = arc4random_uniform(255) / 255.0;
    pointcloudVertices[i] = vertex;
    // NSLog(@"adding vertex #%lu with position - x: %.3f y: %.3f z: %.3f | color - r: %.3f g: %.3f b: %.3f",
    //       (long unsigned)i,
    //       vertex.x,
    //       vertex.y,
    //       vertex.z,
    //       vertex.r,
    //       vertex.g,
    //       vertex.b);
}
// convert array to point cloud data (position and color)
NSData *pointcloudData = [NSData dataWithBytes:&pointcloudVertices length:sizeof(pointcloudVertices)];
// create vertex source
SCNGeometrySource *vertexSource = [SCNGeometrySource geometrySourceWithData:pointcloudData
semantic:SCNGeometrySourceSemanticVertex
vectorCount:numPoints
floatComponents:YES
componentsPerVector:3
bytesPerComponent:sizeof(float)
dataOffset:0
dataStride:sizeof(PointcloudVertex)];
// create color source
SCNGeometrySource *colorSource = [SCNGeometrySource geometrySourceWithData:pointcloudData
semantic:SCNGeometrySourceSemanticColor
vectorCount:numPoints
floatComponents:YES
componentsPerVector:3
bytesPerComponent:sizeof(float)
dataOffset:sizeof(float) * 3
dataStride:sizeof(PointcloudVertex)];
// create element
SCNGeometryElement *element = [SCNGeometryElement geometryElementWithData:nil
primitiveType:SCNGeometryPrimitiveTypePoint
primitiveCount:numPoints
bytesPerIndex:sizeof(int)];
// create geometry
SCNGeometry *pointcloudGeometry = [SCNGeometry geometryWithSources:@[ vertexSource, colorSource ] elements:@[ element ]];
// add pointcloud to scene
SCNNode *pointcloudNode = [SCNNode nodeWithGeometry:pointcloudGeometry];
[self.myView.scene.rootNode addChildNode:pointcloudNode];
I was looking into rendering point clouds in iOS myself and found a solution on Twitter, by "vade", and figured I'd post it here for others:
ProTip: SceneKit shader modifiers are useful:
mat.shaderModifiers = @{ SCNShaderModifierEntryPointGeometry : @"gl_PointSize = 16.0;" };
I am able to detect and identify markers and initialise OpenGL objects on screen. The issue I'm having is overlaying them on top of the marker's position in the camera world. My camera is calibrated as best I can using this method: Iphone 6 camera calibration for OpenCV. I feel there is an issue with my camera's projection matrix, which I create as follows:
-(void)buildProjectionMatrix:
(Matrix33)cameraMatrix:
(int)screen_width:
(int)screen_height:
(Matrix44&) projectionMatrix
{
float near = 0.01; // Near clipping distance
float far = 100; // Far clipping distance
// Camera parameters
float f_x = cameraMatrix.data[0]; // Focal length in x axis
float f_y = cameraMatrix.data[4]; // Focal length in y axis
float c_x = cameraMatrix.data[2]; // Camera primary point x
float c_y = cameraMatrix.data[5]; // Camera primary point y
std::cout<<"fx "<<f_x<<" fy "<<f_y<<" cx "<<c_x<<" cy "<<c_y<<std::endl;
std::cout<<"width "<<screen_width<<" height "<<screen_height<<std::endl;
projectionMatrix.data[0] = - 2.0 * f_x / screen_width;
projectionMatrix.data[1] = 0.0;
projectionMatrix.data[2] = 0.0;
projectionMatrix.data[3] = 0.0;
projectionMatrix.data[4] = 0.0;
projectionMatrix.data[5] = 2.0 * f_y / screen_height;
projectionMatrix.data[6] = 0.0;
projectionMatrix.data[7] = 0.0;
projectionMatrix.data[8] = 2.0 * c_x / screen_width - 1.0;
projectionMatrix.data[9] = 2.0 * c_y / screen_height - 1.0;
projectionMatrix.data[10] = -( far+near ) / ( far - near );
projectionMatrix.data[11] = -1.0;
projectionMatrix.data[12] = 0.0;
projectionMatrix.data[13] = 0.0;
projectionMatrix.data[14] = -2.0 * far * near / ( far - near );
projectionMatrix.data[15] = 0.0;
}
This is the method to estimate the position of the marker:
void MarkerDetector::estimatePosition(std::vector<Marker>& detectedMarkers)
{
for (size_t i=0; i<detectedMarkers.size(); i++)
{
Marker& m = detectedMarkers[i];
cv::Mat Rvec;
cv::Mat_<float> Tvec;
cv::Mat raux,taux;
cv::solvePnP(m_markerCorners3d, m.points, camMatrix, distCoeff,raux,taux);
raux.convertTo(Rvec,CV_32F);
taux.convertTo(Tvec ,CV_32F);
cv::Mat_<float> rotMat(3,3);
cv::Rodrigues(Rvec, rotMat);
// Copy to transformation matrix
for (int col=0; col<3; col++)
{
for (int row=0; row<3; row++)
{
m.transformation.r().mat[row][col] = rotMat(row,col); // Copy rotation component
}
m.transformation.t().data[col] = Tvec(col); // Copy translation component
}
// Since solvePnP finds camera location, w.r.t to marker pose, to get marker pose w.r.t to the camera we invert it.
m.transformation = m.transformation.getInverted();
}
}
The OpenGL shape tracks and accounts for size and rotation, but something is going wrong with the translation. If the camera is turned 90 degrees, the OpenGL shape swings around 90 degrees about the centre of the marker. It's almost as if I am translating before rotating, but I am not.
See video for issue:
https://vid.me/fLvX
I guess you may have a problem with projecting the 3D model points. Essentially, solvePnP gives a transformation that brings points from the model coordinate system to the camera coordinate system, and this is composed of a rotation and a translation vector (the outputs of solvePnP):
cv::Mat rvec, tvec;
cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
At this point you are able to project the model points onto the image plane:
std::vector<cv::Vec2d> imagePointsRP; // Reprojected image points
cv::projectPoints(objectPoints, rvec, tvec, cameraMatrix, distCoeffs, imagePointsRP);
Now, you only need to draw the points of imagePointsRP over the incoming image; if the pose estimation was correct, you'll see the reprojected corners on top of the corners of the marker.
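For example, something like this (just an illustrative sketch; frame here stands for whatever cv::Mat you are drawing on):
for (const auto& p : imagePointsRP)
    cv::circle(frame, cv::Point(cvRound(p[0]), cvRound(p[1])), 4, cv::Scalar(0, 255, 0), -1);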
Anyway, the matrices for the model-to-camera and camera-to-model directions can be composed as below:
cv::Mat rmat;
cv::Rodrigues(rvec, rmat); // rmat is 3x3
cv::Mat modelToCam = cv::Mat::eye(4, 4, CV_64FC1);
modelToCam(cv::Range(0, 3), cv::Range(0, 3)) = rmat * 1.0;
modelToCam(cv::Range(0, 3), cv::Range(3, 4)) = tvec * 1.0;
cv::Mat camToModel = cv::Mat::eye(4, 4, CV_64FC1);
cv::Mat rmatinv = rmat.t(); // rotation of inverse
cv::Mat tvecinv = -rmatinv * tvec; // translation of inverse
camToModel(cv::Range(0, 3), cv::Range(0, 3)) = rmatinv * 1.0;
camToModel(cv::Range(0, 3), cv::Range(3, 4)) = tvecinv * 1.0;
In any case, it's also useful to estimate the reprojection error and discard poses with a high error (remember, the PnP problem only has a unique solution if n = 4 and the points are coplanar):
double totalErr = 0.0;
for (size_t i = 0; i < imagePoints.size(); i++)
{
    double err = cv::norm(cv::Mat(imagePoints[i]), cv::Mat(imagePointsRP[i]), cv::NORM_L2);
    totalErr += err * err;
}
totalErr = std::sqrt(totalErr / imagePoints.size());
I am trying to map the depth from the Kinect v2 into the RGB space of a DSLR camera, and I am stuck with a weird pixel mapping.
I am working in Processing, using OpenCV and Nicolas Burrus' method, where:
P3D.x = (x_d - cx_d) * depth(x_d,y_d) / fx_d
P3D.y = (y_d - cy_d) * depth(x_d,y_d) / fy_d
P3D.z = depth(x_d,y_d)
P3D' = R.P3D + T
P2D_rgb.x = (P3D'.x * fx_rgb / P3D'.z) + cx_rgb
P2D_rgb.y = (P3D'.y * fy_rgb / P3D'.z) + cy_rgb
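As a plain transcription, those equations amount to the following per-pixel mapping (this is only my own reference sketch; the function name and parameter list are just for illustration):
// Back-project a depth pixel to 3D, apply the extrinsics R|T, then project into the RGB camera.
void depthPixelToRgbPixel(float x_d, float y_d, float depth,
                          const float R[9], const float T[3],
                          float fx_d, float fy_d, float cx_d, float cy_d,
                          float fx_rgb, float fy_rgb, float cx_rgb, float cy_rgb,
                          float& x_rgb, float& y_rgb)
{
    // P3D: depth pixel -> 3D point in the depth camera frame
    float X = (x_d - cx_d) * depth / fx_d;
    float Y = (y_d - cy_d) * depth / fy_d;
    float Z = depth;

    // P3D' = R.P3D + T: into the RGB camera frame
    float Xp = R[0] * X + R[1] * Y + R[2] * Z + T[0];
    float Yp = R[3] * X + R[4] * Y + R[5] * Z + T[1];
    float Zp = R[6] * X + R[7] * Y + R[8] * Z + T[2];

    // P2D_rgb: project with the RGB intrinsics
    x_rgb = Xp * fx_rgb / Zp + cx_rgb;
    y_rgb = Yp * fy_rgb / Zp + cy_rgb;
}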
Unfortunately, I have a problem when I reproject the 3D points into the RGB world space. To check whether the problem came from my OpenCV calibration, I used MRPT's Kinect & Stereo Calibration to get the intrinsics and distortion coefficients of the cameras and the roto-translation (the relative transformation between the two cameras).
Data from the MRPT stereo calibration:
Here are my data:
depth c_x = 262.573912;
depth c_y = 216.804166;
depth f_y = 462.676558;
depth f_x = 384.377033;
depthDistCoeff = {
1.975280e-001, -6.939150e-002, 0.000000e+000, -5.830770e-002, 0.000000e+000
};
DSLR c_x_R = 538.134412;
DSLR c_y_R = 359.760525;
DSLR f_y_R = 968.431461;
DSLR f_x_R = 648.480385;
rgbDistCoeff = {
2.785566e-001, -1.540991e+000, 0.000000e+000, -9.482198e-002, 0.000000e+000
};
R = {
8.4263457190597e-001, -8.9789363922252e-002, 5.3094712387890e-001,
4.4166517232817e-002, 9.9420220953803e-001, 9.8037162878270e-002,
-5.3667149820385e-001, -5.9159417476295e-002, 8.4171483671105e-001
};
T = {-4.740111e-001, 3.618596e-002, -4.443195e-002};
Then I use these data in Processing to compute the mapping, using:
PVector pixelDepthCoord = new PVector(i * offset_, j * offset_);
int index = (int) pixelDepthCoord.x + (int) pixelDepthCoord.y * depthWidth;
int depth = 0;
if (rawData[index] != 255)
{
    // 2D depth coord
    depth = rawDataDepth[index];
}

// 3D depth coord - back-projecting the pixel depth coord to a 3D depth coord
float bppx = (pixelDepthCoord.x - c_x) * depth / f_x;
float bppy = (pixelDepthCoord.y - c_y) * depth / f_y;
float bppz = -depth;

// transform the 3D depth coord to the 3D color coord
float x_ = (bppx * R[0] + bppy * R[1] + bppz * R[2]) + T[0];
float y_ = (bppx * R[3] + bppy * R[4] + bppz * R[5]) + T[1];
float z_ = (bppx * R[6] + bppy * R[7] + bppz * R[8]) + T[2];

// project the 3D color coord to a 2D color coord
float pcx = (x_ * f_x_R / z_) + c_x_R;
float pcy = (y_ * f_y_R / z_) + c_y_R;
Then I get the following transformations:
Weird mapping behavior
I think I have a problem in my method, but I can't find it. Does anyone have any ideas or clues? I have been racking my brain over this problem for many days ;)
Thanks
I implemented LoD with diameter following the NVidia TerrainTessellation white paper. In the chapter "Hull Shader: Tessellation LOD" on page 7 there is a very good explanation of LoD with diameter. Here is a good quote:
For each patch edge, the shader computes the edge length and then conceptually fits a sphere around it. The sphere is projected into screen space and its screen space diameter is used to compute the tessellation factor for the edge.
Here is my hull shader:
// Globals
cbuffer TessellationBuffer // buffer need to be aligned to 16!!
{
float4 cameraPosition;
float tessellatedTriSize;
float3 padding;
matrix worldMatrix;
matrix projectionMatrix;
};
// Typedefs
struct HullInputType
{
float4 position : SV_POSITION;
float2 tex : TEXCOORD0;
float3 normal : NORMAL;
};
struct ConstantOutputType
{
float edges[3] : SV_TessFactor;
float inside : SV_InsideTessFactor;
};
struct HullOutputType
{
float4 position : SV_POSITION;
float2 tex : TEXCOORD0;
float3 normal : NORMAL;
};
// Rounding function
float roundTo2Decimals(float value)
{
value *= 100;
value = round(value);
value *= 0.01;
return value;
}
float calculateLOD(float4 patch_zero_pos, float4 patch_one_pos)//1,3,1,1; 3,3,0,1
{
float diameter = 0.0f;
float4 radiusPos;
float4 patchDirection;
// Calculates the distance between the patches and fits a sphere around.
diameter = distance(patch_zero_pos, patch_one_pos); // 2.23607
float radius = diameter/2; // 1.118035
patchDirection = normalize(patch_one_pos - patch_zero_pos); // 0.894,0,-0.447,0 direction from base edge_zero
// Calculate the position of the radiusPos (center of sphere) in the world.
radiusPos = patch_zero_pos + (patchDirection * radius);//2,3,0.5,1
radiusPos = mul(radiusPos, worldMatrix);
// Get the rectangular points of the sphere to the camera.
float4 camDirection;
// Direction from camera to the sphere center.
camDirection = normalize(radiusPos - cameraPosition); // 0.128,0,0.99,0
// Calculates the orthonormal basis (sUp,sDown) of a vector camDirection.
// Find the smallest component of camDirection and set it to 0. swap the two remaining
// components and negate one of them to find sUp_ which can be used to find sDown.
float4 sUp_;
float4 sUp;
float4 sDown;
float4 sDownAbs;
sDownAbs = abs(camDirection);//0.128, 0 ,0.99, 0
if(sDownAbs.y < sDownAbs.x && sDownAbs.y < sDownAbs.z) { //0.99, 0, 0.128
sUp_.x = -camDirection.z;
sUp_.y = 0.0f;
sUp_.z = camDirection.x;
sUp_.w = camDirection.w;
} else if(sDownAbs.z < sDownAbs.x && sDownAbs.z < sDownAbs.y){
sUp_.x = -camDirection.y;
sUp_.y = camDirection.x;
sUp_.z = 0.0f;
sUp_.w = camDirection.w;
}else{
sUp_.x = 0.0f;
sUp_.y = -camDirection.z;
sUp_.z = camDirection.y;
sUp_.w = camDirection.w;
}
// simple version
// sUp_.x = -camDirection.y;
// sUp_.y = camDirection.x;
// sUp_.z = camDirection.z;
// sUp_.w = camDirection.w;
sUp = sUp_ / length(sUp_); // =(0.99, 0, 0.128,0)/0.99824 = 0.991748,0,0.128226,0
sDown = radiusPos - (sUp * radius); // 0.891191,3,0.356639,1 = (2,3,0.5,1) - (0.991748,0,0.128226,0)*1.118035
sUp = radiusPos + (sUp * radius); // = (3.10881,3,0.643361,1)
// Projects sphere in projection space (2d).
float4 projectionUp = mul(sUp, projectionMatrix);
float4 projectionDown = mul(sDown, projectionMatrix);
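// NOTE (see the answer at the end): sUp and sDown are still in world space here,
// since radiusPos was only multiplied by worldMatrix. They also need the camera's
// view transform (or a combined view * projection matrix) before this projection,
// otherwise the measured screen-space diameter does not follow the camera or the
// object's rotation.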
// Calculate tessellation factor for this edge according to the diameter on the screen.
float2 sUp_2;
sUp_2.x = projectionUp.x;
sUp_2.y = projectionUp.y;
float2 sDown_2;
sDown_2.x = projectionDown.x;
sDown_2.y = projectionDown.y;
// Distance between the 2 points in 2D
float projSphereDiam = distance(sUp_2, sDown_2);
//Debug
//return tessellatedTriSize;
//if(projSphereDiam < 2.0f)
// return 1.0f;
//else if(projSphereDiam < 10.0f)
// return 2.0f;
//else
// return 10.0f;
return projSphereDiam*tessellatedTriSize;
}
// Patch Constant Function
// set/calculate any data constant to entire patch.
// is invoked once per patch
// direction vector w = 0 ; position vector w = 1
// receives as input a patch with 3 control points and each control point is represented by the structure of HullInputType
// patch control point should be displaced vertically, this can significantly affect the distance of the camera
// patchId is an identifier number of the patch generated by the Input Assembler
ConstantOutputType ColorPatchConstantFunction(InputPatch<HullInputType, 3> inputPatch, uint patchId : SV_PrimitiveID)
{
ConstantOutputType output;
////ret distance(x, y) Returns a distance scalar between two vectors.
float ret, retinside;
retinside = 0.0f;
float4 patch_zero_pos;//1,3,1,1
patch_zero_pos = float4(inputPatch[0].position.xyz, 1.0f);
float4 patch_one_pos;//3,3,0,1
patch_one_pos = float4(inputPatch[1].position.xyz, 1.0f);
float4 patch_two_pos;
patch_two_pos = float4(inputPatch[2].position.xyz, 1.0f);
// calculate LOD by diametersize of the edges
ret = calculateLOD(patch_zero_pos, patch_one_pos);
ret = roundTo2Decimals(ret);// rounding
output.edges[0] = ret;
retinside += ret;
ret = calculateLOD(patch_one_pos, patch_two_pos);
ret = roundTo2Decimals(ret);// rounding
output.edges[1] = ret;
retinside += ret;
ret = calculateLOD(patch_two_pos, patch_zero_pos);
ret = roundTo2Decimals(ret);// rounding
output.edges[2] = ret;
retinside += ret;
// Set the tessellation factor for tessallating inside the triangle.
// see image tessellationOuterInner
retinside *= 0.333;
// rounding
retinside = roundTo2Decimals(retinside);
output.inside = retinside;
return output;
}
// Hull Shader
// The hull shader is called for each output control point.
// Trivial pass through
[domain("tri")]
[partitioning("fractional_odd")] //fractional_odd
[outputtopology("triangle_cw")]
[outputcontrolpoints(3)]
[patchconstantfunc("ColorPatchConstantFunction")]
HullOutputType ColorHullShader(InputPatch<HullInputType, 3> patch, uint pointId : SV_OutputControlPointID, uint patchId : SV_PrimitiveID)
{
HullOutputType output;
// Set the position for this control point as the output position.
output.position = patch[pointId].position;
// Set the input color as the output color.
output.tex = patch[pointId].tex;
output.normal = patch[pointId].normal;
return output;
}
Some graphical explanation of the code:
First, find the center between the two vertices.
Then find an orthonormal basis (perpendicular to the camera direction) from the camera on the "circle".
Project sUp and sDown into projection space and use the resulting length to calculate the tessellation factor.
The Problem
The tessellation works fine. But for testing I let the object rotate, so I can see whether the tessellation follows the rotation as well. Somehow I think it is not 100% correct. Look at the plane: it is rotated by (1.0f, 2.0f, 0.0f), and the lighter red shows higher tessellation factors compared to the darker red; the green areas are factors of 1.0. It should be more detailed at the top of the plane than at the bottom.
What am I missing?
Some test cases
If I remove the rotation it looks like this:
If I remove the rotation and include this simple version of the orthonormal basis calculation:
// simple version
sUp_.x = -camDirection.y;
sUp_.y = camDirection.x;
sUp_.z = camDirection.z;
sUp_.w = camDirection.w;
it looks like this:
Could it be a problem that I'm not using a lookUp vector?
How are you doing LoD? I'm open to trying something else...
What IDE are you using? If you're using Visual Studio, you should try the Visual Studio Graphics Debugger or PIX, depending on the version of VS that you have.
http://msdn.microsoft.com/en-us/library/windows/desktop/bb943994(v=vs.85).aspx
I used the world matrix instead of the view matrix. Always use the matrix the camera is using for rotations or other transformations.