I'm trying to apply view frustum culling to my scene.
I basically have this setup:
Rendering one quad, displayed as a quadtree
For example if depth is 1 then the quad gets split into 4 smaller quads
Each quad has a texture
I have a first person camera, which means I can only look around but not move (the origin of my camera is always at (0,0,0))
The Quad at depth 1 looks like this:
1---2
3---4
Where quad 1 represents the NW quadrant of the quadtree, quad 2 the NE quadrant, and so on.
My goal (for example at depth 1) is that whenever I move my camera so that, say, quads 1 and 3 are no longer visible, they should no longer be rendered.
However, my current implementation is not correct. I'm extracting the frustum planes like this, and I think this is where the error is.
createFrustumPlanes(vp) {
    let leftPlane = vec4.fromValues(vp[0] + vp[3], vp[4] + vp[7], vp[8] + vp[11], vp[12] + vp[15]);
    vec4.normalize(leftPlane, leftPlane);
    let rightPlane = vec4.fromValues(-vp[0] + vp[3], -vp[4] + vp[7], -vp[8] + vp[11], -vp[12] + vp[15]);
    vec4.normalize(rightPlane, rightPlane);
    let bottomPlane = vec4.fromValues(vp[1] + vp[3], vp[5] + vp[7], vp[9] + vp[11], vp[13] + vp[15]);
    vec4.normalize(bottomPlane, bottomPlane);
    let topPlane = vec4.fromValues(-vp[1] + vp[3], -vp[5] + vp[7], -vp[9] + vp[11], -vp[13] + vp[15]);
    vec4.normalize(topPlane, topPlane);
    let nearPlane = vec4.fromValues(vp[2] + vp[3], vp[6] + vp[7], vp[10] + vp[11], vp[14] + vp[15]);
    vec4.normalize(nearPlane, nearPlane);
    let farPlane = vec4.fromValues(-vp[2] + vp[3], -vp[6] + vp[7], -vp[10] + vp[11], -vp[14] + vp[15]);
    vec4.normalize(farPlane, farPlane);
    return [
        leftPlane,
        rightPlane,
        bottomPlane,
        topPlane,
        nearPlane,
        farPlane
    ];
}
Where vp is this._vpMatrix, which is built as follows:
this._lookAtMatrix = mat4.create();
mat4.lookAt(this._lookAtMatrix, vec3.fromValues(0, 0, 0), this._lookAtVector, vec3.fromValues(0, 1, 0));
this._projectionMatrix = mat4.create();
mat4.perspective(this._projectionMatrix, this._FOV * Math.PI / 180.0, this._width / this._height, 0.001, 50);
this._vpMatrix = mat4.create();
mat4.multiply(this._vpMatrix, this._projectionMatrix, this._lookAtMatrix);
How are you extracting frustum planes and from which matrix are the planes created?
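For reference, the index combinations above appear to follow the standard Gribb/Hartmann extraction for a column-major (gl-matrix style) view-projection matrix. As a plain numpy sketch, only an illustration and not the gl-matrix code above, the extraction plus a simple point-in-frustum test could look like this (the matrix is indexed as vp[row, col]):

import numpy as np

def extract_frustum_planes(vp):
    """Gribb/Hartmann extraction; vp is a 4x4 array indexed vp[row, col]
    (i.e. the transpose of a flat column-major gl-matrix array)."""
    planes = [
        vp[3] + vp[0],  # left
        vp[3] - vp[0],  # right
        vp[3] + vp[1],  # bottom
        vp[3] - vp[1],  # top
        vp[3] + vp[2],  # near (GL clip space: -w <= z <= w)
        vp[3] - vp[2],  # far
    ]
    # normalize by the length of the xyz normal so plane[3] is a signed distance
    return [p / np.linalg.norm(p[:3]) for p in planes]

def point_in_frustum(planes, point):
    """A point is inside if it lies on the positive side of all six planes."""
    p = np.append(point, 1.0)
    return all(np.dot(plane, p) >= 0.0 for plane in planes)

A quad would then be culled when all of its corners fail the test against at least one of the planes.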
How can I change the colour of my turtle and of the Koch star it draws? I don't want it to be black; I want the shape coloured in, maybe with a blue outline and a light blue fill.
import math
import turtle

def koch_segment(trtl, length, currentdepth):
    """
    trtl : turtle object
        The turtle window object to be drawn to
    length : float
        The length of the line the Koch segment is drawn along
    currentdepth : integer
        The current iteration depth of the Koch curve - 1 is a straight line of the given length
    """
    if currentdepth == depth:  # depth is assumed to be defined globally
        trtl.forward(length)
    else:
        currentlength = length / 3.0
        koch_segment(trtl, currentlength, currentdepth + 1)
        trtl.left(60)
        koch_segment(trtl, currentlength, currentdepth + 1)
        trtl.right(120)
        koch_segment(trtl, currentlength, currentdepth + 1)
        trtl.left(60)
        koch_segment(trtl, currentlength, currentdepth + 1)
def setup_turtle(depth):
    wn = turtle.Screen()
    wx = wn.window_width() * .5
    wh = wn.window_height() * .5
    base_lgth = 2.0 / math.sqrt(3.0) * wh  # the base length is dependent on the screen size
    myturtle = turtle.Turtle()
    myturtle.speed(0.5 * (depth + 1))  # value between 1 and 10 (fast)
    myturtle.penup()
    myturtle.setposition(-wx / 2, -wh / 2)  # start in the lower left quadrant middle point
    myturtle.pendown()
    myturtle.left(60)
    return myturtle, base_lgth
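In case it helps, one common way to get a coloured outline and fill with the standard turtle API is to set a pen colour and a fill colour and wrap the drawing calls in begin_fill()/end_fill(). The sketch below is only an illustration building on the code above; the wrapper function and the idea of closing the snowflake with three sides are assumptions, not part of the original code:

def draw_filled_koch(trtl, length):
    """Hypothetical wrapper: blue outline, light-blue fill."""
    trtl.color("blue", "#ADD8E6")   # pen colour, fill colour (light blue)
    trtl.begin_fill()
    for _ in range(3):              # three Koch sides close the snowflake so the fill works
        koch_segment(trtl, length, 1)
        trtl.right(120)
    trtl.end_fill()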
I have a rendered image. I want to apply the radial and tangential distortion coefficients, which I got from OpenCV, to my image. Even though there is an undistort function, there is no distort function. How can I distort my images with the distortion coefficients?
I was also looking for the same type of functionality. I couldn't find it, so I implemented it myself. Here is the C++ code.
First, you need to normalize the image point using the focal lengths and centers:
rpt(0) = (pt_x - cx) / fx
rpt(1) = (pt_y - cy) / fy
then distort the normalized image point
double x = rpt(0), y = rpt(1);
//determining the radial distortion
double r2 = x*x + y*y;
double icdist = 1 / (1 - ((D.at<double>(4) * r2 + D.at<double>(1))*r2 + D.at<double>(0))*r2);
//determining the tangential distortion
double deltaX = 2 * D.at<double>(2) * x*y + D.at<double>(3) * (r2 + 2 * x*x);
double deltaY = D.at<double>(2) * (r2 + 2 * y*y) + 2 * D.at<double>(3) * x*y;
x = (x + deltaX)*icdist;
y = (y + deltaY)*icdist;
then you can translate and scale the point using the center of projection and focal length:
x = x * fx + cx
y = y * fy + cy
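Putting the three steps together, the whole mapping for one pixel looks roughly like this (a Python/numpy-style sketch with hypothetical names; D is assumed to hold [k1, k2, p1, p2, k3] in OpenCV's order, which matches the indices used above):

def distort_point(pt_x, pt_y, fx, fy, cx, cy, D):
    """Apply the radial/tangential model above to a single pixel.
    D is assumed to be [k1, k2, p1, p2, k3] as in OpenCV."""
    k1, k2, p1, p2, k3 = D
    # 1. normalize using the focal lengths and the principal point
    x = (pt_x - cx) / fx
    y = (pt_y - cy) / fy
    # 2. distort the normalized point
    r2 = x * x + y * y
    icdist = 1.0 / (1.0 - ((k3 * r2 + k2) * r2 + k1) * r2)
    dx = 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    dy = p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    x = (x + dx) * icdist
    y = (y + dy) * icdist
    # 3. scale back to pixel coordinates
    return x * fx + cx, y * fy + cy

Looping this over all pixel coordinates tells you, for each input pixel, where it ends up in the distorted image.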
I'm using the OpenCV method
triangulatePoints(P1,P2,x1,x2)
to get the 3D coordinates of a point by its image points x1/x2 in the left/right image and the projection matrices P1/P2.
I've already studied epipolar geometry and know most of the maths behind it, but how does this algorithm mathematically obtain the 3D coordinates?
Here are just some ideas that, to the best of my knowledge, should at least work theoretically.
Using the camera equation ax = PX, we can express the two image point correspondences as
ap = PX
bq = QX
where p = [p1 p2 1]' and q = [q1 q2 1]' are the matching image points to the 3D point X = [X Y Z 1]' and P and Q are the two projection matrices.
We can expand these two equations and rearrange the terms to form an Ax = b system as shown below
p11.X + p12.Y + p13.Z - a.p1 + b.0 = -p14
p21.X + p22.Y + p23.Z - a.p2 + b.0 = -p24
p31.X + p32.Y + p33.Z - a.1 + b.0 = -p34
q11.X + q12.Y + q13.Z + a.0 - b.q1 = -q14
q21.X + q22.Y + q23.Z + a.0 - b.q2 = -q24
q31.X + q32.Y + q33.Z + a.0 - b.1 = -q34
from which we get
A = [p11 p12 p13 -p1   0;
     p21 p22 p23 -p2   0;
     p31 p32 p33  -1   0;
     q11 q12 q13   0 -q1;
     q21 q22 q23   0 -q2;
     q31 q32 q33   0  -1],
x = [X Y Z a b]' and b = -[p14 p24 p34 q14 q24 q34]'. Now we can solve for x to find the 3D coordinates.
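As a rough illustration of this first approach, a numpy sketch with hypothetical names (P and Q are 3x4 projection matrices, p and q the 2D image points):

import numpy as np

def triangulate_ax_b(P, Q, p, q):
    """Solve the 6x5 system A u = rhs for u = [X, Y, Z, a, b] by least squares."""
    ph = np.array([p[0], p[1], 1.0])   # homogeneous image point in view 1
    qh = np.array([q[0], q[1], 1.0])   # homogeneous image point in view 2
    A = np.zeros((6, 5))
    rhs = np.zeros(6)
    A[0:3, 0:3] = P[:, 0:3]
    A[0:3, 3] = -ph                    # the -a.p column
    rhs[0:3] = -P[:, 3]
    A[3:6, 0:3] = Q[:, 0:3]
    A[3:6, 4] = -qh                    # the -b.q column
    rhs[3:6] = -Q[:, 3]
    u, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return u[:3]                       # X, Y, Z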
Another approach is to use the fact, from the camera equation ax = PX, that x and PX are parallel. Therefore, their cross product must be the zero vector. So, using
p x PX = 0
q x QX = 0
we can construct a system of the form Ax = 0 and solve for x.
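And a corresponding sketch of this second (cross-product) approach, again with hypothetical names; each view contributes two rows to A, and x is taken as the null vector of A from the SVD:

import numpy as np

def triangulate_dlt(P, Q, p, q):
    """Stack two rows per view from p x PX = 0 and q x QX = 0, then take the null vector of A."""
    A = np.vstack([
        p[0] * P[2] - P[0],
        p[1] * P[2] - P[1],
        q[0] * Q[2] - Q[0],
        q[1] * Q[2] - Q[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenize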
I am trying to map the depth from the Kinect v2 into the RGB space of a DSLR camera, and I am stuck with a weird pixel mapping.
I am working in Processing, using OpenCV and Nicolas Burrus' method, where:
P3D.x = (x_d - cx_d) * depth(x_d,y_d) / fx_d
P3D.y = (y_d - cy_d) * depth(x_d,y_d) / fy_d
P3D.z = depth(x_d,y_d)
P3D' = R.P3D + T
P2D_rgb.x = (P3D'.x * fx_rgb / P3D'.z) + cx_rgb
P2D_rgb.y = (P3D'.y * fy_rgb / P3D'.z) + cy_rgb
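Written out for a single depth pixel, the above boils down to the following (a small numpy sketch with placeholder names, just to restate the method):

import numpy as np

def depth_pixel_to_rgb(x_d, y_d, depth, K_depth, K_rgb, R, T):
    """Sketch of the mapping above. K_depth / K_rgb are (fx, fy, cx, cy) tuples;
    R is a 3x3 rotation, T a 3-vector translation."""
    fx_d, fy_d, cx_d, cy_d = K_depth
    fx_rgb, fy_rgb, cx_rgb, cy_rgb = K_rgb
    # back-project the depth pixel to a 3D point in the depth camera frame
    P3D = np.array([
        (x_d - cx_d) * depth / fx_d,
        (y_d - cy_d) * depth / fy_d,
        depth,
    ])
    # transform into the RGB camera frame
    P3D_rgb = R @ P3D + T
    # project into the RGB image
    u = P3D_rgb[0] * fx_rgb / P3D_rgb[2] + cx_rgb
    v = P3D_rgb[1] * fy_rgb / P3D_rgb[2] + cy_rgb
    return u, v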
Unfortunately, I have a problem when I reproject the 3D points to RGB world space. In order to check whether the problem came from my OpenCV calibration, I used MRPT's Kinect & Stereo Calibration to get the intrinsics and distortion coefficients of the cameras and the relative roto-translation between the two cameras.
Here are my data:
depth c_x = 262.573912;
depth c_y = 216.804166;
depth f_y = 462.676558;
depth f_x = 384.377033;
depthDistCoeff = {
1.975280e-001, -6.939150e-002, 0.000000e+000, -5.830770e-002, 0.000000e+000
};
DSLR c_x_R = 538.134412;
DSLR c_y_R = 359.760525;
DSLR f_y_R = 968.431461;
DSLR f_x_R = 648.480385;
rgbDistCoeff = {
2.785566e-001, -1.540991e+000, 0.000000e+000, -9.482198e-002, 0.000000e+000
};
R = {
8.4263457190597e-001, -8.9789363922252e-002, 5.3094712387890e-001,
4.4166517232817e-002, 9.9420220953803e-001, 9.8037162878270e-002,
-5.3667149820385e-001, -5.9159417476295e-002, 8.4171483671105e-001
};
T = {-4.740111e-001, 3.618596e-002, -4.443195e-002};
Then I use the data in Processing to compute the mapping:
PVector pixelDepthCoord = new PVector(i * offset_, j * offset_);
int index = (int) pixelDepthCoord.x + (int) pixelDepthCoord.y * depthWidth;
int depth = 0;

if (rawData[index] != 255)
{
    // 2D depth coord
    depth = rawDataDepth[index];
} else
{
}

// 3D depth coord - back-projecting the pixel depth coord to a 3D depth coord
float bppx = (pixelDepthCoord.x - c_x) * depth / f_x;
float bppy = (pixelDepthCoord.y - c_y) * depth / f_y;
float bppz = -depth;

// Transform the 3D depth coord to a 3D color coord
float x_ = (bppx * R[0] + bppy * R[1] + bppz * R[2]) + T[0];
float y_ = (bppx * R[3] + bppy * R[4] + bppz * R[5]) + T[1];
float z_ = (bppx * R[6] + bppy * R[7] + bppz * R[8]) + T[2];

// Project the 3D color coord to a 2D color coord
float pcx = (x_ * f_x_R / z_) + c_x_R;
float pcy = (y_ * f_y_R / z_) + c_y_R;
Then I get the following transformations:
(Image: weird mapping behavior)
I think I have a problem in my method, but I can't find it. Does anyone have any ideas or clues? I have been racking my brain over this problem for days ;)
Thanks
My problem is this:
Is it possible, with a Photoshop script (I use CS5.1), to measure the exact (x, y) of the center of the graphic (as shown in the image), relative to the upper left corner of the canvas (0,0)? What approach should I follow? Does anyone have an idea? (The graphic is on its own layer, and I want to take this measurement for each graphic, layer by layer, in order to build the layout in Corona.)
Yes, in Photoshop, click on "Image" in the navigation menu, then choose Image Size. Take the width and divide by 2, take the height and divide by 2.
To find the coordinates of the centre of the image you need to find the layer bounds, which will tell you the left, top, right and bottom values of the image. From this we can work out the width and height of the image and the centre (measured from the top left of the Photoshop image).
//pref pixels
app.preferences.rulerUnits = Units.PIXELS;
// call the source document
var srcDoc = app.activeDocument;
// get current width values
var W = srcDoc.width.value;
var H = srcDoc.height.value;
// layer bounds: [left, top, right, bottom]
var X = srcDoc.activeLayer.bounds[0];
var Y = srcDoc.activeLayer.bounds[1];
var X1 = srcDoc.activeLayer.bounds[2];
var Y1 = srcDoc.activeLayer.bounds[3];
var selW = parseFloat((X1-X));
var selH = parseFloat((Y1-Y));
var posX = Math.floor(parseFloat((X+X1)/2));
var posY = Math.floor(parseFloat((Y+Y1)/2));
alert(X + ", " + Y + ", " + X1 + ", " + Y1 + "\n" + "W: " + selW + ", H: " + selH + "\nPosition " + posX + "," + posY);