Oilfy in OpenCV - iOS

I am trying to implement an oilfy filter using OpenCV, and I came across this code.
The code uses the gd2 library, but as my application already uses OpenCV for image processing, it is not desirable to pull in another library.
I couldn't understand what the following code does:
for (y = 0; y < maskHeight; y++)
{
    for (x = 0; x < maskWidth; x++)
    {
        index = y * maskWidth + x;
        // Sample the pixel at (w + x - maskWidth/2, h + y - maskHeight/2),
        // i.e. the mask neighbourhood centred on (w, h), and store each
        // colour channel in its own table.
        rTable[index] = (double) gdImageRed(imageptr, gdImageGetPixel(imageptr, w + x - maskWidth / 2, h + y - maskHeight / 2));
        gTable[index] = (double) gdImageGreen(imageptr, gdImageGetPixel(imageptr, w + x - maskWidth / 2, h + y - maskHeight / 2));
        bTable[index] = (double) gdImageBlue(imageptr, gdImageGetPixel(imageptr, w + x - maskWidth / 2, h + y - maskHeight / 2));
    }
}
Can someone help me understand the oilfy algorithm, or tell me how to convert this code to OpenCV?
Any OpenCV code for an oilfy effect would be of much help.

Check out this link:
https://libgd.github.io/manuals/2.2.3/files/gd-c.html#gdImageGetPixel
gdImageGetPixel returns the color value at a pixel as a single integer that packs the RGB components together. The pixel is identified by the gdImagePtr object followed by the x and y co-ordinates of the pixel in the image.
gdImageRed extracts the red intensity from that packed color; gdImageGreen and gdImageBlue do the same for green and blue. So the loop above simply gathers the red, green and blue values of every pixel in the mask neighbourhood centred on (w, h) into three tables.
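To convert this to OpenCV, the same neighbourhood gathering maps naturally onto cv::Mat. Below is a minimal sketch of the classic oil-painting ("oilfy") algorithm: bin each neighbourhood pixel by quantized intensity, then output the mean colour of the most populous bin. The function name oilify and the radius/levels defaults are my own choices, not from the gd2 original:

#include <algorithm>
#include <opencv2/opencv.hpp>

cv::Mat oilify(const cv::Mat &src, int radius = 3, int levels = 20)
{
    CV_Assert(src.type() == CV_8UC3 && levels > 0 && levels <= 256);
    cv::Mat dst = src.clone();
    for (int y = 0; y < src.rows; y++) {
        for (int x = 0; x < src.cols; x++) {
            int count[256] = {0};
            int sumB[256] = {0}, sumG[256] = {0}, sumR[256] = {0};
            // Scan the mask neighbourhood, exactly like the gd loop above,
            // clamping at the image borders instead of reading out of bounds.
            for (int dy = -radius; dy <= radius; dy++) {
                for (int dx = -radius; dx <= radius; dx++) {
                    int yy = std::min(std::max(y + dy, 0), src.rows - 1);
                    int xx = std::min(std::max(x + dx, 0), src.cols - 1);
                    cv::Vec3b p = src.at<cv::Vec3b>(yy, xx);
                    // Quantize the pixel's intensity into one of `levels` bins.
                    int bin = (p[0] + p[1] + p[2]) / 3 * levels / 256;
                    count[bin]++;
                    sumB[bin] += p[0];
                    sumG[bin] += p[1];
                    sumR[bin] += p[2];
                }
            }
            // The output pixel is the mean colour of the most common bin.
            int best = 0;
            for (int i = 1; i < levels; i++)
                if (count[i] > count[best]) best = i;
            dst.at<cv::Vec3b>(y, x) = cv::Vec3b(
                (uchar)(sumB[best] / count[best]),
                (uchar)(sumG[best] / count[best]),
                (uchar)(sumR[best] / count[best]));
        }
    }
    return dst;
}

The rTable/gTable/bTable gathering in the question corresponds to the inner double loop here; the rest is the part of the algorithm the gd2 snippet left out.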

Related

OpenCV How to apply camera distortion to an image

I have a rendered image. I want to apply the radial and tangential distortion coefficients that I got from OpenCV to my image. Even though there is an undistort function, there is no distort function. How can I distort my images with the distortion coefficients?
I was also looking for the same type of functionality. I couldn't find it, so I implemented it myself. Here is the C++ code.
First, you need to normalize the image point using the focal lengths and optical centers:
rpt(0) = (pt_x - cx) / fx
rpt(1) = (pt_y - cy) / fy
then distort the normalized image point:
double x = rpt(0), y = rpt(1);
//determining the radial distortion
//(1/(1 - k1*r2 - k2*r2^2 - k3*r2^3) approximates the usual forward factor 1 + k1*r2 + k2*r2^2 + k3*r2^3 for small distortion)
double r2 = x*x + y*y;
double icdist = 1 / (1 - ((D.at<double>(4) * r2 + D.at<double>(1))*r2 + D.at<double>(0))*r2);
//determining the tangential distortion
double deltaX = 2 * D.at<double>(2) * x*y + D.at<double>(3) * (r2 + 2 * x*x);
double deltaY = D.at<double>(2) * (r2 + 2 * y*y) + 2 * D.at<double>(3) * x*y;
x = (x + deltaX)*icdist;
y = (y + deltaY)*icdist;
then you can translate and scale the point using the center of projection and focal length:
x = x * fx + cx
y = y * fy + cy
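Putting the three steps together, here is a self-contained sketch (the function name distortPoint is mine; K is assumed to be the 3x3 camera matrix and D a 1x5 cv::Mat holding k1, k2, p1, p2, k3 in OpenCV's usual order):

#include <opencv2/opencv.hpp>

cv::Point2d distortPoint(const cv::Point2d &pt, const cv::Mat &K, const cv::Mat &D)
{
    double fx = K.at<double>(0, 0), fy = K.at<double>(1, 1);
    double cx = K.at<double>(0, 2), cy = K.at<double>(1, 2);

    // 1. Normalize using the focal lengths and optical center.
    double x = (pt.x - cx) / fx;
    double y = (pt.y - cy) / fy;

    // 2. Apply the radial and tangential distortion, as above.
    double r2 = x * x + y * y;
    double icdist = 1 / (1 - ((D.at<double>(4) * r2 + D.at<double>(1)) * r2 + D.at<double>(0)) * r2);
    double deltaX = 2 * D.at<double>(2) * x * y + D.at<double>(3) * (r2 + 2 * x * x);
    double deltaY = D.at<double>(2) * (r2 + 2 * y * y) + 2 * D.at<double>(3) * x * y;
    x = (x + deltaX) * icdist;
    y = (y + deltaY) * icdist;

    // 3. De-normalize back to pixel co-ordinates.
    return cv::Point2d(x * fx + cx, y * fy + cy);
}

To distort a whole image you can build per-pixel maps for cv::remap from this; note that remap pulls from the source image, so the maps must describe the mapping from output pixels back to input pixels.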

Accurate Image resizing

I need to resize an image using bilinear interpolation and create an image pyramid. I will detect corners at the different levels of the pyramid and scale the pixel co-ordinates so that they are relative to the dimensions of the largest image.
If a corner of an object is detected as a corner/keypoint/feature in all the levels, then after scaling the corresponding pixel co-ordinates from the different levels so that they fall on the largest image, ideally I would like them to have the same value. So when resizing the images, I am trying to be as accurate as possible.
Let's assume I am resizing an image L_n_minus_1 to create a smaller image L_n. My scale factor is "ratio" (ratio > 1).
*I cannot use any library.
I can resize using the pseudocode below (which is what I generally find when I search online for resizing algorithms):
int offset = 0;
for (int i = 0; i < height_of_L_n; i++){
    for (int j = 0; j < width_of_L_n; j++){
        //********* This part will differ in the later version provided below
        int xSrcInt = (int)(ratio * j);
        float xDiff = ratio * j - xSrcInt;
        int ySrcInt = (int)(ratio * i);
        float yDiff = ratio * i - ySrcInt;
        // The above code will differ in the later version provided below
        index = (ySrcInt * width_of_L_n_minus_1 + xSrcInt);
        //Get the 4 pixel values to interpolate
        a = L_n_minus_1[index];
        b = L_n_minus_1[index + 1];
        c = L_n_minus_1[index + width_of_L_n_minus_1];
        d = L_n_minus_1[index + width_of_L_n_minus_1 + 1];
        //Calculate the coefficients for interpolation
        float c0 = (1 - xDiff)*(1 - yDiff);
        float c1 = (xDiff)*(1 - yDiff);
        float c2 = (yDiff)*(1 - xDiff);
        float c3 = (xDiff*yDiff);
        //half is added for rounding the pixel intensity.
        int intensity = (a*c0) + (b*c1) + (c*c2) + (d*c3) + 0.5;
        if (intensity > 255)
            intensity = 255;
        L_n[offset++] = intensity;
    }
}
Or I could use this modified piece of code below :
int offset = 0;
for (int i = 0; i < height_of_L_n; i++){
    for (int j = 0; j < width_of_L_n; j++){
        // Here the code differs from the first piece of code.
        // Assume pixel centers start from (0.5, 0.5): the top-left pixel has co-ordinate (0.5, 0.5).
        // 0.5 is added to move into the co-ordinate system where the top-left pixel is at (0.5, 0.5),
        // and 0.5 is subtracted to return to the usual system where the top-left pixel is at (0, 0),
        // i.e. to map the new co-ordinates to array indices.
        int xSrcInt = int((ratio * (j + 0.5)) - 0.5);
        float xDiff = (ratio * (j + 0.5)) - 0.5 - xSrcInt;
        int ySrcInt = int((ratio * (i + 0.5)) - 0.5);
        float yDiff = (ratio * (i + 0.5)) - 0.5 - ySrcInt;
        // Difference with the previous code ends here
        index = (ySrcInt * width_of_L_n_minus_1 + xSrcInt);
        //Get the 4 pixel values to interpolate
        a = L_n_minus_1[index];
        b = L_n_minus_1[index + 1];
        c = L_n_minus_1[index + width_of_L_n_minus_1];
        d = L_n_minus_1[index + width_of_L_n_minus_1 + 1];
        //Calculate the coefficients for interpolation
        float c0 = (1 - xDiff)*(1 - yDiff);
        float c1 = (xDiff)*(1 - yDiff);
        float c2 = (yDiff)*(1 - xDiff);
        float c3 = (xDiff*yDiff);
        //half is added for rounding the pixel intensity.
        int intensity = (a*c0) + (b*c1) + (c*c2) + (d*c3) + 0.5;
        if (intensity > 255)
            intensity = 255;
        L_n[offset++] = intensity;
    }
}
The second piece of code was developed assuming pixel centers lie at half-integer co-ordinates, as they do in textures.
This way the top-left pixel has co-ordinate (0.5, 0.5).
Let us assume:
a 2 by 2 destination image is being resized from a 4 by 4 source image.
In the first piece of code, it is assumed that the first pixel has co-ordinates (0, 0). With ratio = 2, for example:
xSrcInt = (int)(0*2); // 0
ySrcInt = (int)(0*2); // 0
xDiff = (0*2) - 0; // 0
yDiff = (0*2) - 0; // 0
Thus effectively I will just be copying the first pixel value from the source, as c0 will be 1 and c1, c2 and c3 will be 0.
But in the second piece of code I will get:
xSrcInt = (int)((0.5*2) - 0.5); // 0;
ySrcInt = (int)((0.5*2) - 0.5); // 0;
xDiff = ((0.5*2) - 0.5) - 0; // 0.5;
yDiff = ((0.5*2) - 0.5) - 0; // 0.5;
In this case c0, c1, c2 and c3 will all be equal to 0.25, so I will be averaging the 4 pixels at the top left.
Please let me know what you think and whether there is any bug in my second piece of code. As far as visual results go, both are working perfectly.
But I do seem to notice better alignment of keypoints with the second piece of code. Then again, maybe that's because I am judging with prejudice :-).
Thanks in advance.
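One caveat that applies to both versions above: when xSrcInt lands on the last source column (or ySrcInt on the last row), index + 1 and index + width_of_L_n_minus_1 read past the end of the source buffer. A minimal guard is to clamp the four sample positions (assuming <algorithm> is included; height_of_L_n_minus_1 is my name, following the existing naming):

int x0 = xSrcInt;
int y0 = ySrcInt;
int x1 = std::min(x0 + 1, width_of_L_n_minus_1 - 1);
int y1 = std::min(y0 + 1, height_of_L_n_minus_1 - 1);
//Get the 4 pixel values to interpolate, never reading out of bounds
a = L_n_minus_1[y0 * width_of_L_n_minus_1 + x0];
b = L_n_minus_1[y0 * width_of_L_n_minus_1 + x1];
c = L_n_minus_1[y1 * width_of_L_n_minus_1 + x0];
d = L_n_minus_1[y1 * width_of_L_n_minus_1 + x1];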

Get Rho and Theta from Hough-Transform opencvsharp?

I have the Hough transform implemented using OpenCvSharp (OpenCV), and I get the lines detected in my image in a console/Windows Forms application:
lines = edgeImg.HoughLines2(storage, HoughLinesMethod.Probabilistic, 1, Math.PI / 180, 60, 100, 100);
for (int i = 0; i < lines.Total; i++)
{
    CvLineSegmentPoint segP = lines.GetSeqElem<CvLineSegmentPoint>(i).Value;
    double angle = Math.Atan2((segP.P2.Y) - (segP.P1.Y), (segP.P2.X) - (segP.P1.X)) * 180 / Math.PI;
    if (Math.Abs(angle) <= 60)
        continue;
    if (segP.P1.Y > segP.P2.Y + 20 || segP.P1.Y < segP.P2.Y - 20)
        src.Line(segP.P1, segP.P2, CvColor.Blue, 2, LineType.AntiAlias, 0);
}
I have tried different methods for visualizing the rho-theta space. Since HoughLines2 does the transformation internally, I have tried to recover these values from the endpoint co-ordinates in reverse:
double angle = Math.Atan2(dy, dx) * 180 / Math.PI;
double theta = 90 - angle;
var thetaRad = theta*Math.PI/180;
double rho = (x1 * Math.Cos(thetaRad) + y1 * Math.Sin(thetaRad));
My first question is whether I need two rho/theta pairs, one from (x1, y1) and one from (x2, y2), or whether calculating a single rho/theta is the right intercept.
Second, how can I visualize them in the right format? (What I currently see on my output image is some random white dots at the top-left corner.)
Third, is it rational to recover rho/theta values this way, or would you suggest performing the Hough transform myself to reduce the complexity? (I used the OpenCvSharp function for better, more efficient performance!)
Thanks!
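For what it's worth on the first question: both endpoints lie on the same infinite line, so they produce an identical (rho, theta), and computing it once from either point is enough. A minimal C++ sketch (my own helper, not an OpenCvSharp API) of that reverse computation:

#include <cmath>

// Recovers (rho, theta) for the infinite line through the segment
// endpoints (x1, y1) and (x2, y2). Every point on the line yields the
// same pair, so one computation per segment suffices.
void lineToRhoTheta(double x1, double y1, double x2, double y2,
                    double &rho, double &theta)
{
    const double PI = 3.14159265358979323846;
    // The line's normal is perpendicular to its direction (dx, dy).
    double dx = x2 - x1, dy = y2 - y1;
    theta = std::atan2(dy, dx) + PI / 2.0;   // angle of the normal
    rho = x1 * std::cos(theta) + y1 * std::sin(theta);
    if (rho < 0) {                           // keep rho non-negative,
        rho = -rho;                          // matching OpenCV's convention
        theta -= PI;
    }
}

On the visualization question: rho can be as large as the image diagonal, so scale the (theta, rho) pairs into your accumulator image's dimensions before plotting; drawing the raw values is exactly what produces a cluster of dots near the top-left corner.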

Image conversion from Cartesian coordinate to polar coordinate

I was wondering if someone could help me understand how to convert the top image to the bottom image.
The images are available in the following link. The top image is in Cartesian co-ordinates; the bottom image is the converted image in polar co-ordinates.
This is a basic rectangular-to-polar coordinate transform. To do the conversion, scan across the output image and treat y as r and x as theta, then use them to look up the corresponding pixel in the input image. So something like this:
int x, y;
for (y = 0; y < outputHeight; y++)
{
    Pixel* outputPixel = outputRowStart (y); // <- get a pointer to the start of the output row
    for (x = 0; x < outputWidth; x++)
    {
        float r = y;
        float theta = 2.0 * M_PI * x / outputWidth;
        float newX = r * cos (theta);
        float newY = r * sin (theta);
        *outputPixel = getInputPixel ( newX, newY ); // <- Should probably do at least bilinear resampling in this function
        outputPixel++;
    }
}
Note that you may want to handle wrapping depending on what you're trying to achieve. The theta value wraps at 2pi.
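If OpenCV is on hand (as in the other questions here), recent versions ship this transform directly. A minimal sketch using cv::warpPolar (OpenCV 3.4+; the file names are placeholders). Note that warpPolar puts radius along the output's x axis and angle along y, i.e. the transpose of the loop above:

#include <algorithm>
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src = cv::imread("input.png");
    if (src.empty()) return 1;

    cv::Point2f center(src.cols / 2.0f, src.rows / 2.0f);
    double maxRadius = std::min(center.x, center.y);

    // Cartesian -> polar; add cv::WARP_INVERSE_MAP to the flags to go back.
    cv::Mat polar;
    cv::warpPolar(src, polar, src.size(), center, maxRadius,
                  cv::INTER_LINEAR + cv::WARP_POLAR_LINEAR);
    cv::imwrite("polar.png", polar);
    return 0;
}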

Get points on ellipse arc

How do I get points along the arc at identical intervals?
This code works for a circle, where theta increments by a fixed value:
for (theta = 0 -> 360 degrees)
{
    r = ellipse_equation(theta);
    x = r*cos(theta) + h;
    y = r*sin(theta) + k;
}
But if the increment is fixed, an ellipse produces non-identical intervals.
This doesn't look right to me:
x = r*cos(theta) + h;
y = r*sin(theta) + k;
Shouldn't that actually be
x = cos(theta) * h;
y = sin(theta) * k;
?
And could you please clarify what you mean by "identical intervals"?
Edit:
I don't think there is an 'easy' way to get what you want. Unlike a circle's, an ellipse's circumference cannot be trivially calculated: http://en.wikipedia.org/wiki/Ellipse#Circumference, or http://en.wikipedia.org/wiki/Elliptic_integral
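A common workaround (my own sketch, not from those links) is to step the parameter finely, accumulate arc length numerically, and emit a point every time the running length crosses a fixed interval. Here (h, k) is assumed to be the centre and a, b the semi-axes, so the arc-length integrand is sqrt((a*sin t)^2 + (b*cos t)^2):

#include <cmath>
#include <utility>
#include <vector>

// Returns n points spaced at (approximately) equal arc lengths around
// the ellipse x = a*cos(t) + h, y = b*sin(t) + k.
std::vector<std::pair<double, double>>
equalArcPoints(double a, double b, double h, double k, int n)
{
    const double PI = 3.14159265358979323846;
    const int steps = 100000;              // fine integration grid
    const double dt = 2 * PI / steps;

    // First pass: total circumference by numeric integration.
    double total = 0;
    for (int i = 0; i < steps; i++) {
        double t = i * dt;
        total += std::hypot(a * std::sin(t), b * std::cos(t)) * dt;
    }
    double interval = total / n;

    // Second pass: emit a point whenever the running arc length
    // crosses the next multiple of the interval.
    std::vector<std::pair<double, double>> pts;
    double accumulated = 0, next = 0;
    for (int i = 0; i < steps && (int)pts.size() < n; i++) {
        double t = i * dt;
        if (accumulated >= next) {
            pts.emplace_back(a * std::cos(t) + h, b * std::sin(t) + k);
            next += interval;
        }
        accumulated += std::hypot(a * std::sin(t), b * std::cos(t)) * dt;
    }
    return pts;
}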
