I was wondering if someone could help me understand how to convert the top image to the bottom image.
The images are available at the following link. The top image is in Cartesian coordinates; the bottom image is the converted image in polar coordinates.
This is a basic rectangular-to-polar coordinate transform. To do the conversion, scan across the output image, treating y as r and x as theta, and use those values to look up the corresponding pixel in the input image. So something like this:
int x, y;

for (y = 0; y < outputHeight; y++)
{
    Pixel* outputPixel = outputRowStart (y); // <- get a pointer to the start of the output row

    for (x = 0; x < outputWidth; x++)
    {
        float r = y;
        float theta = 2.0 * M_PI * x / outputWidth;
        float newX = r * cos (theta);
        float newY = r * sin (theta);

        *outputPixel = getInputPixel ( newX, newY ); // <- Should probably do at least bilinear resampling in this function

        outputPixel++;
    }
}
Note that you may want to handle wrapping depending on what you're trying to achieve. The theta value wraps at 2pi.
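Also note that newX and newY as computed above can be negative (r * cos(theta) sweeps through all four quadrants), so in practice you would offset them by the center of the input image before sampling. For reference, here is a minimal sketch of what a bilinear getInputPixel might look like; the Pixel struct with float r, g, b members and the inputPixelAt(ix, iy) accessor (which clamps to the image bounds) are placeholders, not part of the original code.

Pixel getInputPixel(float x, float y)
{
    int ix = (int)floorf(x);
    int iy = (int)floorf(y);
    float fx = x - ix; // fractional offsets within the source pixel
    float fy = y - iy;

    Pixel p00 = inputPixelAt(ix,     iy);
    Pixel p10 = inputPixelAt(ix + 1, iy);
    Pixel p01 = inputPixelAt(ix,     iy + 1);
    Pixel p11 = inputPixelAt(ix + 1, iy + 1);

    // blend horizontally, then vertically
    Pixel out;
    out.r = (p00.r * (1 - fx) + p10.r * fx) * (1 - fy) + (p01.r * (1 - fx) + p11.r * fx) * fy;
    out.g = (p00.g * (1 - fx) + p10.g * fx) * (1 - fy) + (p01.g * (1 - fx) + p11.g * fx) * fy;
    out.b = (p00.b * (1 - fx) + p10.b * fx) * (1 - fy) + (p01.b * (1 - fx) + p11.b * fx) * fy;
    return out;
}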
I'm trying to create a ray-casting camera in DirectX11 using XMVector3Unproject(). From my understanding, I will be passing in the (Vector3) position of the pixel on the near plane and, in a separate call, a corresponding position on the far plane. Then I would subtract these vectors to get the direction of the ray. The origin would then be the unprojected coordinate on the near plane. My problem is calculating the origin of the ray to be passed in.
Example
// assuming screenHeight and screenWidth are the number of pixels.
const uint32_t screenHeight = 768;
const uint32_t screenWidth = 1024;

struct Ray
{
    XMFLOAT3 origin;
    XMFLOAT3 direction;
};

Ray rays[screenWidth * screenHeight];

for (uint32_t i = 0; i < screenHeight; ++i)
{
    for (uint32_t j = 0; j < screenWidth; ++j)
    {
        // 1. ***calculate and store the current pixel position on the near plane***
        // 2. ***calculate the corresponding point on the far plane***
        // 3. ***pass both positions separately into XMVector3Unproject() (2 total calls to the function)***
        // 4. ***store the returned vectors' difference into rays[i * screenWidth + j].direction***
        // 5. ***store the near plane pixel position's returned vector into rays[i * screenWidth + j].origin***
    }
}
Hopefully I'm understanding this correctly. Any help in determining the ray origins, or corrections would be greatly appreciated.
According to the documentation, the XMVector3Unproject function takes coordinates in screen (viewport) space and gives you back their coordinates in object space, given your projection, view, and world matrices.
To generate your camera rays, you consider your camera as a pinhole (all the light passes through one point, which is your camera at (0, 0, 0)), then you choose your ray direction. Let's say you want to generate W*H camera rays; your loop might look like this:
Vector3 ray_origin = Vector3(0, 0, 0);

for (float x = -1.f; x <= 1.f; x += 2.f / W) {
    for (float y = -1.f; y <= 1.f; y += 2.f / H) {
        Vector3 ray_direction = Normalize(Vector3(x, y, -1.f)) - ray_origin;

        Vector3 ray_in_model = Unproject(ray_direction, 0.f, 0.f,
                                         width, height, znear, zfar,
                                         proj, view, model);
    }
}
You might also want to have a look at this link, which sounds interesting.
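For completeness, the two-unproject approach described in the question could look roughly like the sketch below (an illustration, not tested code). It fills in the loop body from the question, assuming proj, view and world XMMATRIX values and the screenWidth/screenHeight constants already exist.

// inside the i/j loop from the question
XMVECTOR pixel = XMVectorSet((float)j, (float)i, 0.0f, 0.0f); // pixel on the near plane (viewport z = 0)
XMVECTOR nearPoint = XMVector3Unproject(pixel,
                                        0.0f, 0.0f, (float)screenWidth, (float)screenHeight,
                                        0.0f, 1.0f, proj, view, world);

pixel = XMVectorSetZ(pixel, 1.0f); // the same pixel on the far plane (viewport z = 1)
XMVECTOR farPoint = XMVector3Unproject(pixel,
                                       0.0f, 0.0f, (float)screenWidth, (float)screenHeight,
                                       0.0f, 1.0f, proj, view, world);

Ray& ray = rays[i * screenWidth + j];
XMStoreFloat3(&ray.origin, nearPoint);                                       // origin = unprojected near-plane point
XMStoreFloat3(&ray.direction,
              XMVector3Normalize(XMVectorSubtract(farPoint, nearPoint)));    // direction = far - near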
I am trying to implement an oilify (oil-painting) filter using OpenCV, and I came across this code.
The code uses the gd2 library, but as my application already uses OpenCV for image processing, it is not desirable to pull in another library.
I couldn't understand what the following code does:
for (y = 0; y < maskHeight; y++)
{
    for (x = 0; x < maskWidth; x++)
    {
        index = y * maskWidth + x;

        rTable[index] = (double) gdImageRed(imageptr, gdImageGetPixel(imageptr, w + x - maskWidth / 2, h + y - maskHeight / 2));
        gTable[index] = (double) gdImageGreen(imageptr, gdImageGetPixel(imageptr, w + x - maskWidth / 2, h + y - maskHeight / 2));
        bTable[index] = (double) gdImageBlue(imageptr, gdImageGetPixel(imageptr, w + x - maskWidth / 2, h + y - maskHeight / 2));
    }
}
Can someone help me understand the oilify algorithm or tell me how to convert this code to OpenCV?
Any OpenCV code for the oilify effect would be of much help.
Check out this link -
https://libgd.github.io/manuals/2.2.3/files/gd-c.html#gdImageGetPixel
gdImageGetPixel returns the color value at that pixel as an integer that packs the RGB components together. The pixel is identified by the gdImagePtr object followed by the x and y coordinates of the pixel in the image.
gdImageRed extracts the red intensity from that color value; gdImageGreen and gdImageBlue do the same for green and blue.
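If it helps, a rough OpenCV (C++) equivalent of that neighborhood gather might look like this, assuming an 8-bit BGR cv::Mat named image and the same w, h, maskWidth and maskHeight variables as in the original code (border handling is ignored):

#include <opencv2/core.hpp>
#include <vector>

std::vector<double> rTable(maskWidth * maskHeight);
std::vector<double> gTable(maskWidth * maskHeight);
std::vector<double> bTable(maskWidth * maskHeight);

for (int y = 0; y < maskHeight; y++)
{
    for (int x = 0; x < maskWidth; x++)
    {
        int index = y * maskWidth + x;

        // at<Vec3b>(row, col): the row is the y coordinate, the column is the x coordinate
        cv::Vec3b px = image.at<cv::Vec3b>(h + y - maskHeight / 2,
                                           w + x - maskWidth / 2);

        bTable[index] = px[0]; // OpenCV stores channels as B, G, R
        gTable[index] = px[1];
        rTable[index] = px[2];
    }
}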
I have this function which returns an x and y position; by just adding up degrees, it makes objects move around in a circular path, like a satellite around a planet.
In my case it moves in an ellipse because I added +30 to dist.
-(CGPoint)circularMovement:(float)degrees moonDistance:(CGFloat)dist
{
    if (degrees >= 360) degrees = 0;

    float x = _moon.position.x + (dist + 30 + _moon.size.height/2) * cos(degrees);
    float y = _moon.position.y + (dist + _moon.size.height/2) * sin(degrees);

    CGPoint position = CGPointMake(x, y);
    return position;
}
What I would like is to reverse this function, giving the x and y position of an object and getting back the dist value.
Is this possible?
If so, how would I go about achieving it?
If you have an origin and a target, where the origin has the coordinates (x1, y1) and the target has the coordinates (x2, y2), the distance between them is found using the Pythagorean theorem.
The distance between the points is the square root of the sum of the squared differences: sqrt((x2 - x1)^2 + (y2 - y1)^2).
In most languages this would look something like this:
x = x2 - x1;
y = y2 - y1;
distance = Math.SquareRoot(x * x + y * y);
Where Math is your language's math library.
float x = _moon.position.x + (dist+30 + _moon.size.height/2) *cos(degrees);
float y = _moon.position.y + (dist + _moon.size.height/2) *sin(degrees);
is the way you have originally calculated the values, so the inverse formula would be:
dist = ((y - _moon.position.y) / (sin(degrees))) - _moon.size.height/2
You could calculate it from x as well, but there is no point; it is simpler from y (the x-based formula would also have to undo the extra +30 offset).
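A small C++ sketch of that inversion, with names mirroring the Objective-C code above (the function names are mine, and the angle is assumed to be in radians, which is what cos/sin actually expect):

#include <cmath>

// Recover dist from a known y position, inverting the y formula above.
float distFromY(float y, float moonY, float moonHeight, float angle)
{
    return (y - moonY) / std::sin(angle) - moonHeight / 2.0f;
}

// The x-based version also has to undo the extra +30 offset.
float distFromX(float x, float moonX, float moonHeight, float angle)
{
    return (x - moonX) / std::cos(angle) - moonHeight / 2.0f - 30.0f;
}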
I have used the Canny method to get the edges of an image. Then I applied the approxPolyDP method to approximate the edges, and got a set of polylines (not polygons) and line segments. Each polyline is formed from line segments. My goal is to get the coordinates of each line segment's end points in the Cartesian coordinate system (2D plane) and the corresponding parameters (rho, theta) in polar coordinates. Any ideas? Thanks!
BTW: I know that we can use the HoughLines method to find lines (not line segments) and get the parameters (rho, theta) in polar coordinates, or we can use the HoughLinesP method to find line segments and the coordinates of their end points. But neither method gives both the end-point coordinates of a line segment and the corresponding parameters (rho, theta) at the same time.
Here's some sample C++ code for interpreting a line from OpenCV HoughLines. If you want the slope, just find two points and compute rise over run.
The important formula, which unfortunately is not as obvious as it should be in the documentation, is: rho = x*cos(theta) + y*sin(theta)
float yForHoughLine(float x, const Vec2f houghLine) {
    float rho = houghLine[0];
    float theta = houghLine[1];
    if (theta == 0)
        return NAN;
    return (rho - (x * cos(theta))) / sin(theta);
}

float xForHoughLine(float y, const Vec2f houghLine) {
    float rho = houghLine[0];
    float theta = houghLine[1];
    float cosTheta = cos(theta);
    if (cosTheta == 0)
        return NAN;
    return (rho - (y * sin(theta))) / cosTheta;
}
Here's the equivalent Python:
import numpy as np

def y_for_line(x, r, theta):
    if theta == 0:
        return np.nan
    return (r - (x * np.cos(theta))) / np.sin(theta)

def x_for_line(y, r, theta):
    cos_theta = np.cos(theta)
    if cos_theta == 0:
        return np.nan
    return (r - (y * np.sin(theta))) / cos_theta
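Going the other way, from a segment's end points to (rho, theta), is what the original question asks for. A minimal sketch using the same rho = x*cos(theta) + y*sin(theta) convention could look like this (the helper name is mine):

#include <cmath>
#include <utility>

// Returns (rho, theta) for the infinite line through (x1, y1) and (x2, y2).
std::pair<float, float> rhoThetaFromSegment(float x1, float y1, float x2, float y2)
{
    // theta is the direction of the line's normal, perpendicular to (x2 - x1, y2 - y1)
    float theta = std::atan2(x2 - x1, -(y2 - y1));
    float rho = x1 * std::cos(theta) + y1 * std::sin(theta);

    if (rho < 0.0f)
    {
        // (rho, theta) and (-rho, theta + pi) describe the same line;
        // normalize so that rho is non-negative
        rho = -rho;
        theta += (float)M_PI;
    }
    return { rho, theta };
}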
I need the angular velocity expressed as a quaternion for updating the quaternion every frame with the following expression in OpenCV:
q(k)=q(k-1)*qwt;
My angular velocity is
Mat w; //1x3
I would like to obtain the quaternion form of this angular velocity
Mat qwt; //1x4
I couldn't find information about this, any ideas?
If I understand correctly, you want to go from this axis-angle form to a quaternion.
As shown in the link, first you need to calculate the magnitude of the angular velocity (multiplied by the delta(t) between frames), and then apply the formulas.
A sample function for this would be
// w is equal to angular_velocity*time_between_frames
void quatFromAngularVelocity(Mat& qwt, const Mat& w)
{
    const float x = w.at<float>(0);
    const float y = w.at<float>(1);
    const float z = w.at<float>(2);
    const float angle = sqrt(x*x + y*y + z*z); // magnitude of angular velocity

    if (angle > 0.0) // the formulas from the link
    {
        qwt.at<float>(0) = x * sin(angle/2.0f) / angle;
        qwt.at<float>(1) = y * sin(angle/2.0f) / angle;
        qwt.at<float>(2) = z * sin(angle/2.0f) / angle;
        qwt.at<float>(3) = cos(angle/2.0f);
    }
    else // avoid division by zero: return the identity quaternion
    {
        qwt.at<float>(0) = qwt.at<float>(1) = qwt.at<float>(2) = 0.0f;
        qwt.at<float>(3) = 1.0f;
    }
}
Almost every transformation regarding quaternions, 3D space, etc. is gathered at this website.
You will also find time derivatives for quaternions there.
I find the explanation of the physical meaning of a quaternion useful: it can be seen as an axis-angle rotation, where
a = angle of rotation
x,y,z = axis of rotation.
Then the conversion uses:
q = cos(a/2) + i ( x * sin(a/2)) + j (y * sin(a/2)) + k ( z * sin(a/2))
It is explained thoroughly here.
Hope this helped to make it clearer.
Here is one little trick to go with this and get rid of those cos and sin functions. The time derivative of a quaternion q(t) is:
dq(t)/dt = 0.5 * x(t) * q(t)
Here, if the angular velocity is {w0, w1, w2}, then x(t) is the quaternion {0, w0, w1, w2}. See section 10.5 of David H. Eberly's book for a proof.
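As a minimal sketch of using that derivative to step the quaternion each frame (plain C++; the {x, y, z, w} storage order in a float array and the dt time step are assumptions on my part):

#include <cmath>

// dq/dt = 0.5 * {0, w0, w1, w2} * q, integrated with a simple Euler step.
void integrateAngularVelocity(float q[4], float w0, float w1, float w2, float dt)
{
    // quaternion product {0, (w0, w1, w2)} * {q[3]; (q[0], q[1], q[2])}, times 0.5
    float dx = 0.5f * ( w0 * q[3] + w1 * q[2] - w2 * q[1]);
    float dy = 0.5f * (-w0 * q[2] + w1 * q[3] + w2 * q[0]);
    float dz = 0.5f * ( w0 * q[1] - w1 * q[0] + w2 * q[3]);
    float dw = 0.5f * (-w0 * q[0] - w1 * q[1] - w2 * q[2]);

    q[0] += dx * dt;
    q[1] += dy * dt;
    q[2] += dz * dt;
    q[3] += dw * dt;

    // renormalize so q stays a unit quaternion despite integration error
    float n = std::sqrt(q[0]*q[0] + q[1]*q[1] + q[2]*q[2] + q[3]*q[3]);
    q[0] /= n;  q[1] /= n;  q[2] /= n;  q[3] /= n;
}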