I'm building a ray tracer as an assignment. I'm trying to get refraction working for spheres and I have it half-working. The problem is that I can't get rid of the black dot in the centre of the sphere.
This is the code for the intersection:
double a = rayDirection.DotProduct(rayDirection);
double b = rayOrigin.VectAdd(sphereCenter.Negative()).VectMult(2).DotProduct(rayDirection);
double c = rayOrigin.VectAdd(sphereCenter.Negative()).DotProduct(rayOrigin.VectAdd(sphereCenter.Negative())) - (radius * radius);
double discriminant = b * b - 4 * a * c;
if (discriminant >= 0)
{
    // the ray intersects the sphere
    // the first root
    double root1 = ((-1 * b - sqrt(discriminant)) / 2.0 * a) - 0.000001;
    double root2 = ((-1 * b + sqrt(discriminant)) / 2.0 * a) - 0.000001;
    if (root1 > 0.00001)
    {
        // the first root is the smallest positive root
        return root1;
    }
    else
    {
        // the second root is the smallest positive root
        return root2;
    }
}
else
{
    // the ray missed the sphere
    return -1;
}
This is the code responsible for computing the direction of the new refracted ray:
double n1 = refractionRay.GetRefractiveIndex();
double n2 = sceneObjects.at(indexOfWinningObject)->GetMaterial().GetRefractiveIndex();
if (n1 == n2)
{
    // ray inside the same material, means that it is going to be refracted outside
    n2 = 1.000293;
}
double n = n1 / n2;
Vect I = refractionRay.GetRayDirection();
Vect N = sceneObjects.at(indexOfWinningObject)->GetNormalAt(intersectionPosition);
double cosTheta1 = -N.DotProduct(I);
// we need the normal pointing towards the side the ray is coming from
if (cosTheta1 < 0)
{
    N = N.Negative();
    cosTheta1 = -N.DotProduct(I);
}
double cosTheta2 = sqrt(1 - (n * n) * (1 - (cosTheta1 * cosTheta1)));
Vect refractionDirection = I.VectMult(n).VectAdd(N.VectMult(n * cosTheta1 - cosTheta2));
Ray newRefractionRay(intersectionPosition.VectAdd(refractionDirection.VectMult(0.001)), refractionDirection, n2, refractionRay.GetRemainingIntersections());
When creating the new refracted ray, I tried adding the direction times a small value to the intersection position, so that the origin of the new ray lies inside the sphere. The size of the black dot changes if I change that small value, and if I make it too big the margins of the sphere start turning black as well.
If I add colour to the object it looks like this:
And if I make that small constant bigger (0.1) this happens:
Is there a special condition I should take into account? Thank you!
You should remove the epsilon factors that you subtract when you calculate the two roots:
double root1 = ((-1 * b - sqrt(discriminant)) / 2.0 * a);
double root2 = ((-1 * b + sqrt(discriminant)) / 2.0 * a);
In my experience the only place you need a comparison against epsilon is when checking whether the found root is along the path of the ray and not at its origin, per your:
if (root1 > 0.00001)
NB: you could eke out a little more performance by only doing the square root calculation once, and also by only calculating root2 if root1 <= epsilon
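For example, a minimal sketch of that optimisation (not your exact code; it also groups the division as / (2.0 * a), the usual quadratic formula, which makes no difference if your ray directions are normalised so that a is 1):

double discriminant = b * b - 4 * a * c;
if (discriminant < 0)
{
    // the ray missed the sphere
    return -1;
}
double sqrtDisc = sqrt(discriminant);   // computed once
double root1 = (-b - sqrtDisc) / (2.0 * a);
if (root1 > 0.00001)
{
    return root1;   // nearest intersection in front of the ray origin
}
// only compute root2 when root1 is behind (or at) the origin
double root2 = (-b + sqrtDisc) / (2.0 * a);
if (root2 > 0.00001)
{
    return root2;
}
return -1;          // both roots are behind the origin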
Firstly, I am not using Three.js in my Orbits app because I encountered a number of limitations, including (but not limited to) issues with texture resolution and my requirement for complex lighting equations. However, I would like to implement something like Three.js' raycaster to allow me to detect the object clicked by the user.
I'm new to WebGL, but an "old hand" in software development so I'm looking for some hints about where to start.
The approach is as follows:
You render your scene twice: once normally, which is displayed, and a second time with each object drawn in a unique solid colour, which is not displayed. Then you use gl.readPixels on the second render at the clicked position from the first, and decode the colour to identify the object.
Now I have to implement it myself.
Picking spheres
When picking spheres, or objects that are separated (not one inside another), you can use a simple distance-from-ray test to very quickly get the closest object.
Example
The function below returns a function that does the calculation. As you are only interested in the closest object, the distances can remain squared. The distance from the camera is held as a unit distance along the ray.
function distanceFromRay() {
    var dSqr, ox, oy, oz, vx, vy, vz;
    function distanceSqr(px, py, pz) {
        const ax = px - ox, ay = py - oy, az = pz - oz;
        const u = (ax * vx + ay * vy + az * vz) / dSqr;
        distanceSqr.unit = u;
        if (u > 0) { // is past origin
            const bx = ox + vx * u - px, by = oy + vy * u - py, bz = oz + vz * u - pz;
            return bx * bx + by * by + bz * bz; // dist sqr to closest point on ray
        }
        return Infinity;
    }
    distanceSqr.unit = 0;
    distanceSqr.setRay = function (x, y, z, xx, yy, zz) { // ray from origin x, y, z,
                                                          // infinite length along xx, yy, zz
        ox = x; oy = y; oz = z;
        vx = xx; vy = yy; vz = zz;
        dSqr = vx * vx + vy * vy + vz * vz;
    };
    return distanceSqr;
}
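In vector form, with ray origin o, ray direction v, and query point p, the function computes

$$u = \frac{(p - o)\cdot v}{\lVert v \rVert^{2}}, \qquad d^{2} = \lVert o + u\,v - p \rVert^{2},$$

returning d^2 when u > 0 and Infinity otherwise.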
Usage
There is a one-time setup call:
// setup
const distToRay = distanceFromRay();
At the start of a frame that requires a pick, calculate the pick ray and set it. Also set the min distance from ray and eye.
// at start of frame set pick ray
distToRay.setRay(eye.x, eye.y, eye.z, pointer.ray.x, pointer.ray.y, pointer.ray.z);
var minDist = maxObjRadius * maxObjRadius;
var nearestObj = undefined;
var eyeDist = Infinity;
Then, for each pickable object, get the distance by passing the object's center, and compare it against the object's (squared) radius, the smallest distance found so far this frame, and the distance from the eye.
// per object
const dis = distToRay(obj.pos.x, obj.pos.y, obj.pos.z);
if (dis < obj.radius * obj.radius && dis < minDist && distToRay.unit > 0 && distToRay.unit < eyeDist) {
    minDist = dis;
    eyeDist = distToRay.unit;
    nearestObj = obj;
}
At the end of the frame if nearestObj is not undefined it will hold a reference to the picked object.
// end of frame
if (nearestObj) {
// you have the closest object
}
Demo almost (?) working example: https://ellie-app.com/4h9F8FNcRPya1/1
For the demo: click to draw a ray, and rotate the camera left and right to see the ray. (As the ray originates at the camera, you can't see it from the position where it is created.)
Context
I am working on an elm & elm-webgl project where I would like to know if the mouse is over an object when clicked. To do this I tried to implement a simple ray cast. What I need is two things:
1) The coordinate of the camera (This one is easy)
2) The coordinate/direction in 3D space of the point that was clicked
Problem
The steps to get from 2D view space to 3D world space as I understand are:
a) Map the coordinates to the range -1 to 1 relative to the viewport
b) Invert the projection matrix and the perspective matrix
c) Multiply the inverted matrices together
d) Create Vector4 from normalised mouse coordinates
e) Multiply combined matrices with Vector4
f) Normalise result
Try so far
I have made a function to transform a Mouse.Position to a coordinate to draw a line to:
getClickPosition : Model -> Mouse.Position -> Vec3
getClickPosition model pos =
    let
        x =
            toFloat pos.x
        y =
            toFloat pos.y
        normalizedPosition =
            ( (x * 2) / 1000 - 1, (1 - y / 1000 * 2) )
        homogeneousClipCoordinates =
            Vec4.vec4
                (Tuple.first normalizedPosition)
                (Tuple.second normalizedPosition)
                -1
                1
        inversedProjectionMatrix =
            Maybe.withDefault Mat4.identity (Mat4.inverse (camera model))
        inversedPerspectiveMatrix =
            Maybe.withDefault Mat4.identity (Mat4.inverse perspective)
        inversedMatrix2 =
            Mat4.mul inversedProjectionMatrix inversedPerspectiveMatrix
        to =
            Vec4.vec4
                (Tuple.first normalizedPosition)
                (Tuple.second normalizedPosition)
                1
                1
        toInversed =
            mulVector inversedMatrix2 to
        toNorm =
            Vec4.normalize toInversed
        toVec3 =
            vec3 (Vec4.getX toNorm) (Vec4.getY toNorm) (Vec4.getZ toNorm)
    in
        toVec3
Result
The result of this function is that the rays end up too close to the center compared to where I click. I added a screenshot where I clicked at four points on the top face of the cube. If I click in the center of the viewport the ray is positioned correctly.
It feels close, but not quite there yet and I can't figure out what I am doing wrong!
After trying other approaches I found a solution:
getClickPosition : Model -> Mouse.Position -> Vec3
getClickPosition model pos =
    let
        x =
            toFloat pos.x
        y =
            toFloat pos.y
        normalizedPosition =
            ( (x * 2) / 1000 - 1, (1 - y / 1000 * 2) )
        homogeneousClipCoordinates =
            Vec4.vec4
                (Tuple.first normalizedPosition)
                (Tuple.second normalizedPosition)
                -1
                1
        inversedViewMatrix =
            Maybe.withDefault Mat4.identity (Mat4.inverse (camera model))
        inversedProjectionMatrix =
            Maybe.withDefault Mat4.identity (Mat4.inverse perspective)
        vec4CameraCoordinates = mulVector inversedProjectionMatrix homogeneousClipCoordinates
        direction = Vec4.vec4 (Vec4.getX vec4CameraCoordinates) (Vec4.getY vec4CameraCoordinates) -1 0
        vec4WorldCoordinates = mulVector inversedViewMatrix direction
        vec3WorldCoordinates = vec3 (Vec4.getX vec4WorldCoordinates) (Vec4.getY vec4WorldCoordinates) (Vec4.getZ vec4WorldCoordinates)
        normalizedVec3WorldCoordinates = Vec3.normalize vec3WorldCoordinates
        origin = model.cameraPos
        scaledDirection = Vec3.scale 20 normalizedVec3WorldCoordinates
        destination = Vec3.add origin scaledDirection
    in
        destination
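In matrix form, the computation above is roughly (a sketch of the same steps; P is the perspective/projection matrix and V is the camera/view matrix):

$$\mathbf{d}_{\text{cam}} = P^{-1}\begin{bmatrix} x_{\text{ndc}} \\ y_{\text{ndc}} \\ -1 \\ 1 \end{bmatrix},\qquad
\mathbf{d}_{\text{world}} = \operatorname{normalize}\!\Big( \big( V^{-1}\,[\,d_{\text{cam},x},\; d_{\text{cam},y},\; -1,\; 0\,]^{\mathsf T} \big)_{xyz} \Big),$$

and the returned point is the camera position plus 20 times that world-space direction.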
I left it as verbose as possible; if someone finds I use incorrect terminology, please leave a comment and I will update the answer.
I am sure there are lots of optimisations possible (multiplying the matrices before inverting, or combining some of the steps).
Updated the ellie app here: https://ellie-app.com/4hZ9s8S92PSa1/0
I receive sinusoidal data from a sensor, which is of the form (A + B(sin(n/N + a))), where N is the total number of samples, plus some small noise. I know that in N samples (1000 samples) the sine wave will complete one revolution. The signal looks like this:
I want to extract the variable amplitude "B" and phase "a", using as little data as possible. In other words, I want to predict the sensor's signal as soon as possible using DSP. I have already tried "correlation" but it didn't work.
I am using a TMS320C000 with an FPU and TMU unit.
First note that if your sine wave is periodic with period N, it would actually be of the form A+B*sin(2*pi*n/N + a).
For a signal with no noise and a known frequency, the unknown parameters A, B and a could be obtained with as few as 3 samples. This can be done by first solving the following linear equation system to obtain B and a:
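Concretely, with K = 2*pi/N (as set up in the code below), that system is

$$\begin{bmatrix} \sin K - \sin 0 & \cos K - \cos 0 \\ \sin 2K - \sin K & \cos 2K - \cos K \end{bmatrix}
\begin{bmatrix} B\cos a \\ B\sin a \end{bmatrix} =
\begin{bmatrix} x[1]-x[0] \\ x[2]-x[1] \end{bmatrix},$$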
then using back substitution to obtain A = x[0] - B*sin(a). This solution can be implemented with the following code:
double K = 2*PI/N;
// setup matrix (K = 2*pi/N)
// [sin(K)-sin(0)      cos(K)-cos(0)  ]
// [sin(2*K)-sin(K)    cos(2*K)-cos(K)]
double a11 = sin(K);
double a12 = cos(K)-1;
double a21 = sin(2*K)-sin(K);
double a22 = cos(2*K)-cos(K);
// Compute 2x2 matrix inverse, and multiply by delta x vector
// see https://www.wolframalpha.com/input/?i=inverse+%7B%7Ba,+b%7D,+%7Bc,+d%7D%7D
double d = 1.0/(a11*a22-a12*a21);
double y0 = d*(a22*(x[1]-x[0])-a12*(x[2]-x[1])); // B*cos(a)
double y1 = d*(a11*(x[2]-x[1])-a21*(x[1]-x[0])); // B*sin(a)
// Solve for parameters
double B = sqrt(y0*y0+y1*y1);
double a = atan2(y1, y0);
double A = x[0] - B*sin(a);
Since your signal includes some noise, you would get better results using a least-squares approximate solution to an over-determined system that makes use of more samples.
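Written out, with the same K = 2*pi/N and M samples x[0], ..., x[M-1] (a sketch that matches what the code below accumulates), the system and its least-squares solution are:

$$A\,y = \Delta x,\qquad
A = \begin{bmatrix}
\sin K - \sin 0 & \cos K - \cos 0 \\
\sin 2K - \sin K & \cos 2K - \cos K \\
\vdots & \vdots \\
\sin (M-1)K - \sin (M-2)K & \cos (M-1)K - \cos (M-2)K
\end{bmatrix},$$

$$y = \begin{bmatrix} B\cos a \\ B\sin a \end{bmatrix},\qquad
\Delta x = \begin{bmatrix} x[1]-x[0] \\ x[2]-x[1] \\ \vdots \\ x[M-1]-x[M-2] \end{bmatrix},\qquad
y = (A^{\mathsf T}A)^{-1}A^{\mathsf T}\,\Delta x.$$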
You then have a tradeoff between the solution's accuracy and the number of samples used. For a solution using M samples, this can be implemented using the following C-like pseudo-code:
// Setup initial conditions
double K = 2*PI/N;
double s = sin(K);
double c = cos(K);
double sp = s;
double cp = c;
double sn = s*cp + c*sp;
double cn = c*cp - s*sp;
double t1 = s;
double t2 = c-1;
double b11 = 0.0;
double b12 = 0.0;
double b21 = 0.0;
double b22 = 0.0;
double z1 = 0.0;
double z2 = 0.0;
double dx = 0.0;
// Iterative portion
for (int i = 0; i < M-1; i++)
{
    // B_{i,j} = (A^T A)_{i,j} = sum_{k} A_{k,i} A_{k,j}
    // For a 2x2 matrix B and a given "k",
    // we have two values t1 & t2 which represent A_{k,1} and A_{k,2}
    b11 += t1*t1;
    b12 += t1*t2;
    b21 += t2*t1;
    b22 += t2*t2;
    // Update z_i = (A^T \Delta x)_i = \sum_k A_{k,i} (\Delta x)_k
    dx = x[i+1]-x[i];
    z1 += t1 * dx;
    z2 += t2 * dx;
    // Update t1 & t2 for next term
    t1 = sn-sp;
    t2 = cn-cp;
    // Iteratively compute sin(2*pi*k/N) & cos(2*pi*k/N) using trig identities:
    // sin(x+2pi/N) = sin(2pi/N)*cos(x) + cos(2pi/N)*sin(x)
    // cos(x+2pi/N) = cos(2pi/N)*cos(x) - sin(2pi/N)*sin(x)
    sp = sn;
    cp = cn;
    sn = s*cp + c*sp;
    cn = c*cp - s*sp;
}
// Finalization
// Compute inverse of B
double dinv = 1.0/(b11*b22-b12*b21);
double binv11 = b22*dinv;
double binv12 = -b21*dinv;
double binv21 = -b12*dinv;
double binv22 = b11*dinv;
// Compute inv(B)*A^T \Delta x which gives us the 2D vector [B*cos(a) B*sin(a)]
double y1 = binv11*z1 + binv12*z2; // B*cos(a)
double y2 = binv21*z1 + binv22*z2; // B*sin(a)
// Solve for "B", "a" and "A" parameters
double B = sqrt(y1*y1+y2*y2);
double a = atan2(y2, y1);
double A = x[0] - B*sin(a);
You may find it convenient to execute one iteration of the loop for each new sample as you receive it. Also, if you want continuous updates of your A, B and a estimates, you simply need to execute the finalization part (the part of the code after the loop) on every iteration.
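For example, a minimal sketch of that streaming arrangement (a hypothetical SineFitter struct that just repackages the loop body and finalization above so they can run per received sample):

#include <cmath>

const double PI = 3.14159265358979323846;

// Hypothetical incremental estimator wrapping the pseudo-code above.
struct SineFitter
{
    double s, c, sp, cp, sn, cn, t1, t2;
    double b11 = 0, b12 = 0, b22 = 0, z1 = 0, z2 = 0;
    double x0 = 0;      // first sample, needed for A = x[0] - B*sin(a)
    double xPrev = 0;   // previous sample
    bool hasPrev = false;

    explicit SineFitter(double N)
    {
        double K = 2 * PI / N;
        s = std::sin(K); c = std::cos(K);
        sp = s; cp = c;
        sn = s * cp + c * sp;
        cn = c * cp - s * sp;
        t1 = s; t2 = c - 1;
    }

    // Call once per received sample (one loop iteration of the code above).
    void addSample(double x)
    {
        if (!hasPrev) { x0 = xPrev = x; hasPrev = true; return; }
        double dx = x - xPrev;
        xPrev = x;
        b11 += t1 * t1; b12 += t1 * t2; b22 += t2 * t2;   // accumulate A^T A (symmetric)
        z1 += t1 * dx;  z2 += t2 * dx;                    // accumulate A^T dx
        t1 = sn - sp; t2 = cn - cp;
        sp = sn; cp = cn;
        sn = s * cp + c * sp;
        cn = c * cp - s * sp;
    }

    // Finalization: current estimate of A, B and a (valid once at least 3 samples are in).
    void estimate(double &A, double &B, double &a) const
    {
        double dinv = 1.0 / (b11 * b22 - b12 * b12);
        double y1 = dinv * ( b22 * z1 - b12 * z2);   // B*cos(a)
        double y2 = dinv * (-b12 * z1 + b11 * z2);   // B*sin(a)
        B = std::sqrt(y1 * y1 + y2 * y2);
        a = std::atan2(y2, y1);
        A = x0 - B * std::sin(a);
    }
};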
Finally, for a little bit of extra robustness to input spikes, you could skip updates on b11, b12, b21, b22, z1 and z2 for large dx:
dx = x[i+1]-x[i];
// either use fixed threshold (requires manual tuning) for simplicity
// or update threshold dynamically as a fraction of B once a reasonable estimate of
// B is obtained.
if (abs(dx) < threshold)
{
    b11 += t1*t1;
    b12 += t1*t2;
    b21 += t2*t1;
    b22 += t2*t2;
    z1 += t1 * dx;
    z2 += t2 * dx;
}
// always update t1, t2, sp, cp, sn, cn
...
I'm trying to find the orientation of a binary image (where orientation is defined to be the axis of least moment of inertia, i.e. least second moment of area). I'm using Dr. Horn's book (MIT) on Robot Vision which can be found here as reference.
Using OpenCV, here is my function, where a, b, and c are the second moments of area as found on page 15 of the pdf above (page 60 of the text):
Point3d findCenterAndOrientation(const Mat& src)
{
    Moments m = cv::moments(src, true);
    double cen_x = m.m10/m.m00; // Centers are right
    double cen_y = m.m01/m.m00;
    double a = m.m20-m.m00*cen_x*cen_x;
    double b = 2*m.m11-m.m00*(cen_x*cen_x+cen_y*cen_y);
    double c = m.m02-m.m00*cen_y*cen_y;
    double theta = a==c?0:atan2(b, a-c)/2.0;
    return Point3d(cen_x, cen_y, theta);
}
OpenCV calculates the second moments around the origin (0,0), so I have to use the parallel axis theorem to move the axis to the center of the shape (the mr^2 correction).
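For reference, the standard central second moments given by the parallel axis theorem (with (x̄, ȳ) the centroid) are:

$$\mu_{20} = m_{20} - m_{00}\,\bar{x}^{2},\qquad
\mu_{11} = m_{11} - m_{00}\,\bar{x}\,\bar{y},\qquad
\mu_{02} = m_{02} - m_{00}\,\bar{y}^{2},$$

and the orientation of the axis of least second moment is then

$$\theta = \tfrac{1}{2}\,\operatorname{atan2}\!\left(2\mu_{11},\; \mu_{20}-\mu_{02}\right).$$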
The center looks right when I call
Point3d p = findCenterAndOrientation(src);
rectangle(src, Point(p.x-1,p.y-1), Point(p.x+1, p.y+1), Scalar(0.25), 1);
But when I try to draw the axis with lowest moment of inertia, using this function, it looks completely wrong:
line(src, (Point(p.x,p.y)-Point(100*cos(p.z), 100*sin(p.z))), (Point(p.x, p.y)+Point(100*cos(p.z), 100*sin(p.z))), Scalar(0.5), 1);
Here are some examples of input and output:
(I'd expect it to be vertical)
(I'd expect it to be horizontal)
I worked with orientation some time back and coded the following. It returns the exact orientation of the object; largest_contour is the shape that was detected.
CvMoments moments1,cenmoments1;
double M00, M01, M10;
cvMoments(largest_contour,&moments1);
M00 = cvGetSpatialMoment(&moments1,0,0);
M10 = cvGetSpatialMoment(&moments1,1,0);
M01 = cvGetSpatialMoment(&moments1,0,1);
posX_Yellow = (int)(M10/M00);
posY_Yellow = (int)(M01/M00);
double theta = 0.5 * atan(
(2 * cvGetCentralMoment(&moments1, 1, 1)) /
(cvGetCentralMoment(&moments1, 2, 0) - cvGetCentralMoment(&moments1, 0, 2)));
theta = (theta / PI) * 180;
// fit an ellipse (and draw it)
if (largest_contour->total >= 6) // can only do an ellipse fit
                                 // if we have >= 6 points
{
    CvBox2D box = cvFitEllipse2(largest_contour);
    if ((box.size.width < imgYellowThresh->width) && (box.size.height < imgYellowThresh->height))
    {
        cvEllipseBox(imgYellowThresh, box, CV_RGB(255, 255, 255), 3, 8, 0);
    }
}
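For what it's worth, here is a sketch of the same computation with the C++ API, using the central moments that cv::moments already provides (mu20, mu11, mu02) and atan2 for full quadrant handling; treat it as an illustration rather than a drop-in replacement:

#include <cmath>
#include <opencv2/imgproc.hpp>

// Returns (centroid x, centroid y, orientation in radians) for a binary image.
cv::Point3d centerAndOrientation(const cv::Mat& binary)
{
    cv::Moments m = cv::moments(binary, true);
    double cx = m.m10 / m.m00;
    double cy = m.m01 / m.m00;
    // mu20, mu11, mu02 are the central second moments
    // (parallel axis theorem already applied by OpenCV).
    double theta = 0.5 * std::atan2(2.0 * m.mu11, m.mu20 - m.mu02);
    return cv::Point3d(cx, cy, theta);
}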
Well the question says it all,
I know the function Line(), which draws line segment between two points.
I need to draw a line, NOT a line segment, using the same two points that define the line segment.
[EN: Edit from what was previously posted as an answer for the question]
I used your solution and it produced good results for horizontal lines, but I still have problems with vertical lines.
For example, below is an example using the points [306,411] and [304,8] (purple) and the drawn line (red), on an image of 600x600 pixels. Do you have any tips?
I see this is a pretty old question. I had exactly the same problem and I used this simple code:
double Slope(int x0, int y0, int x1, int y1){
    return (double)(y1-y0)/(x1-x0);
}
void fullLine(cv::Mat *img, cv::Point a, cv::Point b, cv::Scalar color){
    double slope = Slope(a.x, a.y, b.x, b.y);
    Point p(0,0), q(img->cols,img->rows);
    p.y = -(a.x - p.x) * slope + a.y;
    q.y = -(b.x - q.x) * slope + b.y;
    line(*img,p,q,color,1,8,0);
}
First I calculate the slope of the line segment and then I "extend" the line segment to the image's borders. I calculate the new points of the line, which lie at x = 0 and x = image.width. A point itself can be outside the image, which is a kind of nasty trick, but the solution is very simple.
You will need to write a function to do that for yourself. I suggest you put your line in ax+by+c=0 form and then intersect it with the 4 edges of your image. Remember if you have a line in the form [a b c] finding its intersection with another line is simply the cross product of the two. The edges of your image would be
top_horizontal = [0 1 0];
left_vertical = [1 0 0];
bottom_horizontal = [0 1 -image.rows];
right_vertical = [1 0 -image.cols];
Also, if you know something about the distance between your points you could also just pick points very far along the line in each direction; I don't think the points handed to Line() need to be on the image.
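As a sketch of that idea (hypothetical helper names; the homogeneous [a b c] representation is as described above, and cv::line clips the drawn segment to the image anyway):

#include <cmath>
#include <opencv2/imgproc.hpp>

// Cross product of two homogeneous 3-vectors: the line through two points,
// or the intersection point of two lines.
static cv::Vec3d cross3(const cv::Vec3d& u, const cv::Vec3d& v)
{
    return cv::Vec3d(u[1]*v[2] - u[2]*v[1],
                     u[2]*v[0] - u[0]*v[2],
                     u[0]*v[1] - u[1]*v[0]);
}

void drawInfiniteLine(cv::Mat& img, cv::Point2d p1, cv::Point2d p2, cv::Scalar color)
{
    // Line through the two points, in [a b c] form (ax + by + c = 0).
    cv::Vec3d L = cross3(cv::Vec3d(p1.x, p1.y, 1.0), cv::Vec3d(p2.x, p2.y, 1.0));
    // Image borders as homogeneous lines.
    cv::Vec3d left  (1, 0, 0);                  // x = 0
    cv::Vec3d right (1, 0, -(double)img.cols);  // x = cols
    cv::Vec3d top   (0, 1, 0);                  // y = 0
    cv::Vec3d bottom(0, 1, -(double)img.rows);  // y = rows
    // Intersect with the vertical borders, unless the line is (nearly) vertical,
    // in which case use the horizontal borders instead.
    bool nearlyVertical = std::abs(p2.x - p1.x) < std::abs(p2.y - p1.y);
    cv::Vec3d a = cross3(L, nearlyVertical ? top    : left);
    cv::Vec3d b = cross3(L, nearlyVertical ? bottom : right);
    if (a[2] == 0 || b[2] == 0) return;         // degenerate (identical points)
    cv::Point pa(cvRound(a[0] / a[2]), cvRound(a[1] / a[2]));
    cv::Point pb(cvRound(b[0] / b[2]), cvRound(b[1] / b[2]));
    cv::line(img, pa, pb, color, 1);
}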
I had the same problem and found out that it is a known bug in OpenCV 2.4.X, already fixed in newer versions.
For the 2.4.X versions, the solution is to clip the line before plotting it, using cv::clipLine().
Here is a function I wrote for myself that works fine on OpenCV 2.4.13:
void Detector::drawFullImageLine(cv::Mat& img, const std::pair<cv::Point, cv::Point>& points, cv::Scalar color)
{
    // points of the line segment
    cv::Point p1 = points.first;
    cv::Point p2 = points.second;
    // points of the line which extend the segment P1-P2 to
    // the image borders.
    cv::Point p, q;
    // test if the line is vertical, otherwise compute the line equation
    // y = ax + b
    if (p2.x == p1.x)
    {
        p = cv::Point(p1.x, 0);
        q = cv::Point(p1.x, img.rows);
    }
    else
    {
        double a = (double)(p2.y - p1.y) / (double)(p2.x - p1.x);
        double b = p1.y - a * p1.x;
        p = cv::Point(0, b);
        q = cv::Point(img.cols, a * img.cols + b);
        // clip the line to the image borders. It prevents a known bug on OpenCV
        // versions 2.4.X when drawing
        cv::clipLine(cv::Size(img.cols, img.rows), p, q);
    }
    cv::line(img, p, q, color, 2);
}
This answer is forked from pajus_cz's answer, but a little improved.
We have two points, and we need the line equation y = mx + b to be able to draw the straight line.
There are two variables we need to find:
1- The slope (m)
2- b, which can be retrieved from the line equation using either of the two points we already have, once the slope is calculated: b = y - mx.
void drawStraightLine(cv::Mat *img, cv::Point2f p1, cv::Point2f p2, cv::Scalar color)
{
    Point2f p, q;
    // Check if the line is a vertical line because vertical lines don't have slope
    if (p1.x != p2.x)
    {
        p.x = 0;
        q.x = img->cols;
        // Slope equation (y1 - y2) / (x1 - x2)
        float m = (p1.y - p2.y) / (p1.x - p2.x);
        // Line equation: y = mx + b
        float b = p1.y - (m * p1.x);
        p.y = m * p.x + b;
        q.y = m * q.x + b;
    }
    else
    {
        p.x = q.x = p2.x;
        p.y = 0;
        q.y = img->rows;
    }
    cv::line(*img, p, q, color, 1);
}
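A minimal usage sketch (an assumed standalone test, not part of the original answer, reusing the two points from the vertical-line example above):

#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    cv::Mat canvas(600, 600, CV_8UC3, cv::Scalar::all(0));
    // The two nearly vertical points from the question's example, drawn in red.
    drawStraightLine(&canvas, cv::Point2f(306, 411), cv::Point2f(304, 8), cv::Scalar(0, 0, 255));
    cv::imshow("full line", canvas);
    cv::waitKey(0);
    return 0;
}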