Unexpected roll and pitch extracted from GLKQuaternion - iOS

I use a quaternion from CMAttitude to rotate the SceneKit camera.
I also need the Y rotation angle extracted from that quaternion.
I expected it to be roll, but the extracted Y rotation angle corresponds to pitch, which has a -90:90 range.
How can I convert this range to 0:180 or 0:360?
- (SCNQuaternion)SCNQuaternionFromCMQuaternion:(CMQuaternion)q {
    GLKQuaternion Q = GLKQuaternionMake(q.x, q.y, q.z, q.w);
    GLKQuaternion xRotQ = GLKQuaternionMakeWithAngleAndAxis(-M_PI_2, 1, 0, 0);
    Q = GLKQuaternionMultiply(xRotQ, Q);
    double roll = atan2(2.0 * (Q.y * Q.z - Q.w * Q.x), 1.0 - 2.0 * (Q.x * Q.x + Q.y * Q.y)); // 0:180 but around X
    double pitch = RADIANS_TO_DEGREES(asin(-2.0f * (Q.x * Q.z + Q.w * Q.y))); // 0:90 around Y
    NSLog(@"%f", pitch);
    // ...
    CMQuaternion rq = {.x = Q.x, .y = Q.y, .z = Q.z, .w = Q.w};
    return SCNVector4Make(rq.x, rq.y, rq.z, rq.w);
}

I found this way:
- (SCNQuaternion)SCNQuaternionFromCMQuaternion:(CMQuaternion)q {
    GLKQuaternion Q = GLKQuaternionMake(q.x, q.y, q.z, q.w);
    GLKQuaternion xRotQ = GLKQuaternionMakeWithAngleAndAxis(-M_PI_2, 1, 0, 0);
    Q = GLKQuaternionMultiply(xRotQ, Q);
    double gx = 2.0 * (Q.y * Q.w - Q.x * Q.z);
    //double gy = 2.0 * (Q.x * Q.y + Q.z * Q.w);
    double gz = Q.x * Q.x - Q.y * Q.y - Q.z * Q.z + Q.w * Q.w;
    double pitch = RADIANS_TO_DEGREES(-asin(-2.0 * (Q.y * Q.w - Q.x * Q.z)));
    // quadrant correction using the signs of gx and gz
    if (gx >= 0 && gz < 0)
        pitch = 180 - pitch;
    else if (gx < 0 && gz < 0)
        pitch = 180 - pitch;
    else if (gx < 0 && gz >= 0)
        pitch = 360 + pitch;
    NSLog(@"%f", pitch); // now it has 0-360 range
    CMQuaternion rq = {.x = Q.x, .y = Q.y, .z = Q.z, .w = Q.w};
    return SCNVector4Make(rq.x, rq.y, rq.z, rq.w);
}
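The three quadrant branches also collapse into a small standalone helper; here is a C-style sketch of the same logic (the function name is mine), with gx and gz computed as in the code above:
#include <math.h>

// asin(gx) gives -90:90; the signs of gx and gz pick the right quadrant.
double pitchDegrees0to360(double gx, double gz)
{
    double pitch = asin(gx) * 180.0 / M_PI;  // same as -asin(-gx)
    if (gz < 0)
        pitch = 180.0 - pitch;               // rear hemisphere, both gx signs
    else if (gx < 0)
        pitch = 360.0 + pitch;               // wrap negatives into 270:360
    return pitch;
}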

Related

How to get XY value from ct in Philips Hue?

How can I get the XY value from ct?
For example, for ct = 217 I want to get x = 0.3127569, y = 0.32908.
I'm able to convert an XY value into a ct value using the code below.
float R1 = [hue[0] floatValue];
float S1 = [hue[1] floatValue];
float result = ((R1 - 0.332) / (S1 - 0.1858));
NSString *ctString = [NSString stringWithFormat:@"%f", ((-449 * result * result * result) + (3525 * result * result) - (6823.3 * result) + (5520.33))];
float micro2 = (float) (1 / [ctString floatValue] * 1000000);
NSString *ctValue = [NSString stringWithFormat:@"%f", micro2];
ctValue = [NSString stringWithFormat:@"%d", [ctValue intValue]];
if ([ctValue integerValue] < 153) {
    ctValue = [NSString stringWithFormat:@"%d", 153];
}
Now I want the reverse: from ct to XY.
On Philips Hue, 2000K maps to 500 and 6500K maps to 153. The value is given as ct (color temperature) but can be thought of as actually being Mired.
Mired means micro reciprocal degree (see Wikipedia).
ct is possibly used because it is not 100% Mired. Philips quite surely uses a lookup table, as a lot of CIE algorithms do, because there are just 347 indexes in this range from 153 to 500.
The following is not a solution; it's just the simple concept of a lookup table.
And as the CIE 1931 xy to CCT formula by McCamy (found here) suggests, it is possible to use a lookup table to find x and y as well.
A table can be found here, but I am not sure if that is the right lookup table.
Reminder: the following is not a solution, but the code may help in finding a reverse algorithm.
typedef int Kelvin;
typedef float Mired;

Mired linearMiredByKelvin(Kelvin k) {
    if (k == 0) return 0;
    return 1000000.0 / k;
}

-(void)mired {
    Mired miredMin = 2000.0 / 13.0; // 153.84 = reciprocal 6500K
    Mired miredMax = 500.0;         // 500.00 = reciprocal 2000K
    Mired lookupMiredByKelvin[6501]; // max 6500 Kelvin + 1 safe index
    //Kelvin lookupKelvinByMired[501]; // max 500 Mired + 1 safe index
    // dummy stuff, empty unused table space
    for (Kelvin k = 0; k < 2000; k++) {
        lookupMiredByKelvin[k] = 0;
    }
    //for (Mired m = 0.0; m < 154.0; m++) {
    //    lookupKelvinByMired[(int)m] = 0;
    //}
    for (Kelvin k = 2000; k < 6501; k++) {
        Mired linearMired = linearMiredByKelvin(k);
        float dimm = (linearMired - miredMin) / (miredMax - miredMin);
        Kelvin ct = (Kelvin)(1000000.0 / (dimm * miredMax - dimm * miredMin + miredMin));
        lookupMiredByKelvin[k] = linearMiredByKelvin(ct);
        if (k==2000 || k==2250 || k==2500 || k==2750 ||
            k==3000 || k==3250 || k==3500 || k==3750 ||
            k==4000 || k==4250 || k==4500 || k==4750 ||
            k==5000 || k==5250 || k==5500 || k==5750 ||
            k==6000 || k==6250 || k==6500 || k==6501)
            fprintf(stderr, "%d %f %f\n", ct, dimm, lookupMiredByKelvin[k]);
    }
}
At least this is proof that x and y will not sit on a simple vector.
CCT means correlated colour temperature and, as the implementation in the question shows, can be calculated via n = (x-0.3320)/(0.1858-y); CCT = 437*n^3 + 3601*n^2 + 6861*n + 5517 (after McCamy).
But cct = 217 is out of range of the lookup table linked above.
Following the idea in this git-repo from colour-science, ported to C, it could look like this:
void CCT_to_xy_CIE_D(float cct) {
    //if (CCT < 4000 || CCT > 25000) fprintf(stderr, "Correlated colour temperature must be in domain, unpredictable results may occur!\n");
    float x = calculateXviaCCT(cct);
    float y = calculateYviaX(x);
    NSLog(@"cct=%f x%f y%f", cct, x, y);
}

float calculateXviaCCT(float cct) {
    float cct_3 = pow(cct, 3); //(cct*cct*cct);
    float cct_2 = pow(cct, 2); //(cct*cct);
    if (cct <= 7000)
        return -4.607 * pow(10, 9) / cct_3 + 2.9678 * pow(10, 6) / cct_2 + 0.09911 * pow(10, 3) / cct + 0.244063;
    return -2.0064 * pow(10, 9) / cct_3 + 1.9018 * pow(10, 6) / cct_2 + 0.24748 * pow(10, 3) / cct + 0.23704;
}

float calculateYviaX(float x) {
    return -3.000 * pow(x, 2) + 2.870 * x - 0.275;
}
CCT_to_xy_CIE_D(6504.38938305); //proof of concept
//cct=6504.389160 x0.312708 y0.329113
CCT_to_xy_CIE_D(217.0);
//cct=217.000000 x-387.131073 y-450722.750000
// so for sure Philips Hue temperature given in ct between 153-500 is not a good starting point
//but
CCT_to_xy_CIE_D(2000.0);
//cct=2000.000000 x0.459693 y0.410366
This seems to work fine with CCT between 2000 and 25000; what may be confusing is that CCT is given in Kelvin here.
EDIT
This has been through so many revisions and ideas. To keep it simple I edited most of that out and just give you the final result.
This fits your function perfectly except for a region in the middle (temp from 256 to 316) where it deviates a bit.
The problem with your function is that it has approximately infinite solutions, so to solve it nicely you need more constraints, but what? Ol Sen's reference https://www.waveformlighting.com/tech/calculate-color-temperature-cct-from-cie-1931-xy-coordinates discusses it in some detail and then mentions that you want Duv to be zero. It also gives a way to calculate Duv, so I added that to my optimiser and voila!
Nice and smooth. The optimiser now solves for x and y such that both satisfy your function and also minimise Duv.
To get it to work nicely I had to scale Duv quite a bit. That page mentions that Duv should be very small, so I think this is a good thing. Also, as the temp increases the scaling should increase to help the optimiser.
Below prints from 153 to 500.
#import <Foundation/Foundation.h>
// Function taken from your code
// Simplified a bit
int ctFuncI ( float x, float y )
{
// float R1 = [hue[0] floatValue];
// float S1 = [hue[1] floatValue];
float result = (x-0.332)/(y-0.1858);
float cubic = - 449 * result * result * result + 3525 * result * result - 6823.3 * result + 5520.33;
float micro2 = 1 / cubic * 1000000;
int ct = ( int )( micro2 + 0.5 );
if ( ct < 153 )
{
ct = 153;
}
return ct;
}
// Need this
// Float version of your code
float ctFuncF ( float x, float y )
{
// float R1 = [hue[0] floatValue];
// float S1 = [hue[1] floatValue];
float result = (x-0.332)/(y-0.1858);
float cubic = - 449 * result * result * result + 3525 * result * result - 6823.3 * result + 5520.33;
return 1000000 / cubic;
}
// We need an additional constraint
// https://www.waveformlighting.com/tech/calculate-duv-from-cie-1931-xy-coordinates
// Given x, y calculate Duv
// We want this to be 0
float duv ( float x, float y )
{
float f = 1 / ( - 2 * x + 12 * y + 3 );
float u = 4 * x * f;
float v = 6 * y * f;
// I'm typing float but my heart yells double
float k6 = -0.00616793;
float k5 = 0.0893944;
float k4 = -0.5179722;
float k3 = 1.5317403;
float k2 = -2.4243787;
float k1 = 1.925865;
float k0 = -0.471106;
float du = u - 0.292;
float dv = v - 0.24;
float Lfp = sqrt ( du * du + dv * dv );
float a = acos( du / Lfp );
float Lbb = k6 * pow ( a, 6 ) + k5 * pow( a, 5 ) + k4 * pow( a, 4 ) + k3 * pow( a, 3 ) + k2 * pow(a,2) + k1 * a + k0;
return Lfp - Lbb;
}
// Solver!
// Returns iterations
int ctSolve ( int ct, float * x, float * y )
{
int iter = 0;
float dx = 0.001;
float dy = 0.001;
// Error
// Note we scale duv a bit
// Seems the higher the temp, the higher scale we require
// Also note the jump at 255 ...
float s = 1000 * ( ct > 255 ? 10 : 1 );
float d = fabs( ctFuncF ( * x, * y ) - ct ) + s * fabs( duv ( * x, * y ) );
// Approx
while ( d > 0.5 && iter < 250 )
{
iter ++;
dx *= fabs( ctFuncF ( * x + dx, * y ) - ct ) + s * fabs( duv ( * x + dx, * y ) ) < d ? 1.2 : - 0.5;
dy *= fabs( ctFuncF ( * x, * y + dy ) - ct ) + s * fabs( duv ( * x, * y + dy ) ) < d ? 1.2 : - 0.5;
* x += dx;
* y += dy;
d = fabs( ctFuncF ( * x, * y ) - ct ) + s * fabs( duv ( * x, * y ) );
}
return iter;
}
// Tester
int main(int argc, const char * argv[]) {
@autoreleasepool
{
// insert code here...
NSLog(#"Hello, World!");
float x, y;
int sume = 0;
int sumi = 0;
for ( int ct = 153; ct <= 500; ct ++ )
{
// Initial guess
x = 0.4;
y = 0.4;
// Approx
int iter = ctSolve ( ct, & x, & y );
// CT and error
int ctEst = ctFuncI ( x, y );
int e = ct - ctEst;
// Diagnostics
sume += abs ( e );
sumi += iter;
// Print out results
NSLog ( #"want ct = %d x = %f y = %f got ct %d in %d iter error %d", ct, x, y, ctEst, iter, e );
}
NSLog ( #"Sum of abs errors %d iterations %d", sume, sumi );
}
return 0;
}
To use it, do as below.
// To call it, init x and y to some guess
float x = 0.4;
float y = 0.4;
// Then call solver with your temp
int ct = 217;
ctSolve( ct, & x, & y ); // Note you pass references to x and y
// Done, answer now in x and y
A bit more compact answer and functions to convert back and forth.
Beware: there are rounding issues, because McCamy's formula relies on mathematical assumptions, and so the backward calculation does too.
If you want to find more results, search directly for "n= (x-0.3320)/(0.1858-y); CCT = 437*n^3 + 3601*n^2 + 6861*n + 5517"; there are plenty of different methods to convert back and forth.
So here: Philips Hue @[x, y] to Philips ct, Philips ct to CCT, CCT to x,y.
void CCT_to_xy_CIE_D(float cct) {
    //if (CCT < 4000 || CCT > 25000) fprintf(stderr, "Correlated colour temperature must be in domain, unpredictable results may occur!\n");
    float x = calculateXviaCCT(cct);
    float y = calculateYviaX(x);
    fprintf(stderr, "cct=%f x%f y%f", cct, x, y);
}

float calculateXviaCCT(float cct) {
    float cct_3 = pow(cct, 3); //(cct*cct*cct);
    float cct_2 = pow(cct, 2); //(cct*cct);
    if (cct <= 7000.0)
        return -4.607 * pow(10, 9) / cct_3 + 2.9678 * pow(10, 6) / cct_2 + 0.09911 * pow(10, 3) / cct + 0.244063;
    return -2.0064 * pow(10, 9) / cct_3 + 1.9018 * pow(10, 6) / cct_2 + 0.24748 * pow(10, 3) / cct + 0.23704;
}

float calculateYviaX(float x) {
    return -3.000 * x*x + 2.870 * x - 0.275;
}
int calculate_PhillipsHueCT_withCCT(float cct) {
    if (cct > 6500.0) return 2000.0 / 13.0;
    if (cct < 2000.0) return 500.0;
    //return (float) (1 / cct * 1000000); // same as..
    return 1000000 / cct;
}

float calculate_CCT_withPhillipsHueCT(float ct) {
    if (ct == 0.0) return 0.0;
    return 1000000 / ct;
}

float calculate_CCT_withHueXY(NSArray *hue) {
    float x = [hue[0] floatValue]; //R1
    float y = [hue[1] floatValue]; //S1
    //x = 0.312708; y = 0.329113;
    float n = (x - 0.3320) / (0.1858 - y);
    float cct = 437.0*n*n*n + 3601.0*n*n + 6861.0*n + 5517.0;
    return cct;
}
// McCamy formula: n = (x-0.3320)/(0.1858-y); cct = 437*n^3 + 3601*n^2 + 6861*n + 5517;
-(void)testPhillipsHueCt_backAndForth {
    NSArray *hue = @[@(0.312708), @(0.329113)];
    float cct = calculate_CCT_withHueXY(hue);
    float ct = calculate_PhillipsHueCT_withCCT(cct);
    NSLog(@"ct %f", ct);
    CCT_to_xy_CIE_D(cct); // check
    CCT_to_xy_CIE_D(6504.38938305); // proof of concept
    CCT_to_xy_CIE_D(2000.0);
    CCT_to_xy_CIE_D(calculate_CCT_withPhillipsHueCT(217.0));
}

Convert cv::Vec4f line to cv::Vec2f

I have a pair of Cartesian coordinates that represent a line in an image. I would like to convert this line to polar form and draw it over the image.
e.g.
cv::Vec4f line {10,20,60,70};
float x1 = line[0];
float y1 = line[1];
float x2 = line[2];
float y2 = line[3];
I want this line to be represented in cv::Vec2f form (rho, theta), taking care of rho and theta for all possible slopes.
Given are the image dimensions w and h:
w = image.cols
h = image.rows
How can I achieve this?
N.B.: We can also assume that the line can be an extended one running across the image.
for (size_t i = 0; i < lines.size(); i++)
{
    int x1 = lines[i][0];
    int y1 = lines[i][1];
    int x2 = lines[i][2];
    int y2 = lines[i][3];
    float d = sqrt(((y1 - y2) * (y1 - y2)) + ((x2 - x1) * (x2 - x1)));
    float rho = (y1 * x2 - y2 * x1) / d;
    float theta = atan2(x2 - x1, y1 - y2);
    if (rho < 0)
    {
        theta *= -1;
        rho *= -1;
    }
    linv2f.push_back(cv::Vec2f(rho, theta));
}
The above approach doesn't give me correct results: when I plot the lines, I don't get lines overlapping their original Vec4f form.
I use this to convert Vec2f back to Vec4f for testing:
cv::Vec4f cvtVec2fLine(const cv::Vec2f& data, const cv::Mat& img)
{
    float const rho = data[0];
    float const theta = data[1];
    cv::Point pt1, pt2;
    if ((theta < CV_PI/4. || theta > 3. * CV_PI/4.)) {
        pt1 = cv::Point(rho / std::cos(theta), 0);
        pt2 = cv::Point((rho - img.rows * std::sin(theta)) / std::cos(theta), img.rows);
    } else {
        pt1 = cv::Point(0, rho / std::sin(theta));
        pt2 = cv::Point(img.cols, (rho - img.cols * std::cos(theta)) / std::sin(theta));
    }
    cv::Vec4f l;
    l[0] = pt1.x;
    l[1] = pt1.y;
    l[2] = pt2.x;
    l[3] = pt2.y;
    return l;
}
The rho-theta equation has the form
x * Cos(Theta) + y * Sin(Theta) - Rho = 0
We want to transform the 'two-point' equation into rho-theta form (page 92 in the pdf here). If we have
x * A + y * B - C = 0
and need the coefficients in trigonometric form, we can divide the whole equation by the magnitude of the (A,B) coefficient vector:
D = Length(A,B) = Math.Hypot(A,B)
x * A/D + y * B/D - C/D = 0
Note that (A/D)^2 + (B/D)^2 = 1 (the basic trigonometric identity), so we can consider A/D and B/D as the cosine and sine of some angle theta.
Your line equation is
(y-y1) * (x2-x1) - (x-x1) * (y2-y1) = 0
or
x * (y1-y2) + y * (x2-x1) - (y1 * x2 - y2 * x1) = 0
Let
D = Sqrt((y1-y2)^2 + (x2-x1)^2)
so
Theta = ArcTan2(x2-x1, y1-y2)
Rho = (y1 * x2 - y2 * x1) / D
Edited:
If Rho is negative, change the sign of Rho and shift Theta by Pi.
Example:
x1=1, y1=0, x2=0, y2=1
Theta = atan2(-1, -1) = -3*Pi/4
D = Sqrt(2)
Rho = -Sqrt(2)/2, which is negative, so
Rho = Sqrt(2)/2
Theta = Pi/4
Back substitution - find the points of intersection with the axes:
0 * Sqrt(2)/2 + y0 * Sqrt(2)/2 - Sqrt(2)/2 = 0 gives x=0, y=1
x0 * Sqrt(2)/2 + 0 * Sqrt(2)/2 - Sqrt(2)/2 = 0 gives x=1, y=0
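Putting the pieces together, a minimal C++ sketch of the corrected conversion (the helper name is mine; note that the negative-rho case shifts theta by Pi rather than negating it, which is where the code in the question goes wrong):
#include <cmath>
#include <opencv2/core.hpp>

// Convert a two-point line to (rho, theta) as derived above.
cv::Vec2f lineToRhoTheta(const cv::Vec4f &l)
{
    float x1 = l[0], y1 = l[1], x2 = l[2], y2 = l[3];
    float d = std::sqrt((y1 - y2) * (y1 - y2) + (x2 - x1) * (x2 - x1));
    float rho = (y1 * x2 - y2 * x1) / d;
    float theta = std::atan2(x2 - x1, y1 - y2);
    if (rho < 0) // normalise: flip rho, shift theta by Pi
    {
        rho = -rho;
        theta += CV_PI;
    }
    return cv::Vec2f(rho, theta);
}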

How to get buffer polygon coordinates (latitude and longitude) in iOS?

I have created a polygon on the map with some set of coordinates.
I need help with making a buffered polygon at some given distance outside of the original polygon's border.
So what I need is a method implementing such an algorithm: I pass a set of coordinates as input and get the buffered set of coordinates as output.
I tried to achieve this using the ArcGIS library for iOS with the bufferGeometry method of AGSGeometryEngine, but the problem is that it is tightly coupled and only works with their GIS map, while I am using Mapbox, which is a different map. So I want a generic method that can solve my problem independent of the map.
The solution of @Ravikant Paudel, though comprehensive, didn't work for me, so I have implemented the approach myself.
Also, I implemented the approach in Kotlin and am adding it here so that someone else facing a similar problem will find it helpful.
Approach:
Find the angle of the angle bisector theta for every vertex of the polygon.
Draw a circle with radius = bufferedDistance / sin(angleBisectorTheta).
Find the intersections of the circle and the angle bisector.
Of the 2 intersection points, the one inside the polygon gives you the buffered vertex for the shrunk polygon, and the outside point gives the one for the buffered polygon.
This approach does not account for the corner cases in which both points somehow fall inside or outside the polygon, in which case the buffered polygon formed will be malformed.
Code:
private fun computeAngleBisectorTheta(
prevLatLng: LatLng,
currLatLng: LatLng,
nextLatLng: LatLng
): Double {
var phiBisector = 0.0
try {
val aPrime = getDeltaPrimeVector(prevLatLng, currLatLng)
val cPrime = getDeltaPrimeVector(nextLatLng, currLatLng)
val thetaA = atan2(aPrime[1], aPrime[0])
val thetaC = atan2(cPrime[1], cPrime[0])
phiBisector = (thetaA + thetaC) / 2.0
} catch (e: Exception) {
logger.error("[Error] in computeAngleBisectorSlope: $e")
}
return phiBisector
}
private fun getDeltaPrimeVector(
aLatLng: LatLng,
bLatLng: LatLng
): ArrayList<Double> {
val arrayList: ArrayList<Double> = ArrayList<Double>(2)
try {
val aX = convertToXY(aLatLng.latitude)
val aY = convertToXY(aLatLng.longitude)
val bX = convertToXY(bLatLng.latitude)
val bY = convertToXY(bLatLng.longitude)
arrayList.add((aX - bX))
arrayList.add((aY - bY))
} catch (e: Exception) {
logger.error("[Error] in getDeltaPrimeVector: $e")
}
return arrayList
}
private fun convertToXY(coordinate: Double) =
EARTH_RADIUS * toRad(coordinate)
private fun convertToLatLngfromXY(coordinate: Double) =
toDegrees(coordinate / EARTH_RADIUS)
private fun computeBufferedVertices(
angle: Double, bufDis: Int,
centerLatLng: LatLng
): ArrayList<LatLng> {
var results = ArrayList<LatLng>()
try {
val distance = bufDis / sin(angle)
var slope = tan(angle)
var inverseSlopeSquare = sqrt(1 + slope * slope * 1.0)
var distanceByInverseSlopeSquare = distance / inverseSlopeSquare
var slopeIntoDistanceByInverseSlopeSquare = slope * distanceByInverseSlopeSquare
var p1X: Double = convertToXY(centerLatLng.latitude) + distanceByInverseSlopeSquare
var p1Y: Double =
convertToXY(centerLatLng.longitude) + slopeIntoDistanceByInverseSlopeSquare
var p2X: Double = convertToXY(centerLatLng.latitude) - distanceByInverseSlopeSquare
var p2Y: Double =
convertToXY(centerLatLng.longitude) - slopeIntoDistanceByInverseSlopeSquare
val tempLatLng1 = LatLng(convertToLatLngfromXY(p1X), convertToLatLngfromXY(p1Y))
results.add(tempLatLng1)
val tempLatLng2 = LatLng(convertToLatLngfromXY(p2X), convertToLatLngfromXY(p2Y))
results.add(tempLatLng2)
} catch (e: Exception) {
logger.error("[Error] in computeBufferedVertices: $e")
}
return results
}
private fun getVerticesOutsidePolygon(
verticesArray: ArrayList<LatLng>,
polygon: ArrayList<LatLng>
): LatLng {
if (isPointInPolygon(
verticesArray[0].latitude,
verticesArray[0].longitude,
polygon
)
) {
if (isPointInPolygon(
verticesArray[1].latitude,
verticesArray[1].longitude,
polygon
)
) {
logger.error("[ERROR] Malformed polygon! Both Vertices are inside the polygon! $verticesArray")
} else {
return verticesArray[1]
}
} else {
if (PolygonGeofenceHelper.isPointInPolygon(
verticesArray[1].latitude,
verticesArray[1].longitude,
polygon
)
) {
return verticesArray[0]
} else {
logger.error("[ERROR] Malformed polygon! Both Vertices are outside the polygon!: $verticesArray")
}
}
//returning a vertex anyway because there is no fallback policy designed if both vertices are inside or outside the polygon
return verticesArray[0]
}
private fun toRad(angle: Double): Double {
return angle * Math.PI / 180
}
private fun toDegrees(radians: Double): Double {
return radians * 180 / Math.PI
}
private fun getVerticesInsidePolygon(
verticesArray: ArrayList<LatLng>,
polygon: ArrayList<LatLng>
): LatLng {
if (isPointInPolygon(
verticesArray[0].latitude,
verticesArray[0].longitude,
polygon
)
) {
if (isPointInPolygon(
verticesArray[1].latitude,
verticesArray[1].longitude,
polygon
)
) {
logger.error("[ERROR] Malformed polygon! Both Vertices are inside the polygon! $verticesArray")
} else {
return verticesArray[0]
}
} else {
if (PolygonGeofenceHelper.isPointInPolygon(
verticesArray[1].latitude,
verticesArray[1].longitude,
polygon
)
) {
return verticesArray[1]
} else {
logger.error("[ERROR] Malformed polygon! Both Vertices are outside the polygon!: $verticesArray")
}
}
//returning a vertex anyway because there is no fallback policy designed if both vertices are inside or outside the polygon
return LatLng(0.0, 0.0)
}
fun getBufferedPolygon(
polygon: ArrayList<LatLng>,
bufferDistance: Int,
isOutside: Boolean
): ArrayList<LatLng> {
var bufferedPolygon = ArrayList<LatLng>()
var isBufferedPolygonMalformed = false
try {
for (i in 0 until polygon.size) {
val prevLatLng: LatLng = polygon[if (i - 1 < 0) polygon.size - 1 else i - 1]
val centerLatLng: LatLng = polygon[i]
val nextLatLng: LatLng = polygon[if (i + 1 == polygon.size) 0 else i + 1]
val computedVertices =
computeBufferedVertices(
computeAngleBisectorTheta(
prevLatLng, centerLatLng, nextLatLng
), bufferDistance, centerLatLng
)
val latLng = if (isOutside) {
getVerticesOutsidePolygon(
computedVertices,
polygon
)
} else {
getVerticesInsidePolygon(
computedVertices,
polygon
)
}
if (latLng.latitude == 0.0 && latLng.longitude == 0.0) {
isBufferedPolygonMalformed = true
break
}
bufferedPolygon.add(latLng)
}
if (isBufferedPolygonMalformed) {
bufferedPolygon = polygon
logger.error("[Error] Polygon generated is malformed returning the same polygon: $polygon , $bufferDistance, $isOutside")
}
} catch (e: Exception) {
logger.error("[Error] in getBufferedPolygon: $e")
}
return bufferedPolygon
}
You'll need to pass the array of the polygon's points and the buffer distance; the third param selects the outside or the inside buffer. (Note: I am assuming that the vertices in this list are adjacent to each other.)
I have tried to keep this answer as comprehensive as possible. Please feel free to suggest any improvements or a better approach.
You can find the detailed math behind the above code on my portfolio page:
Finding the angle bisector.
Approximating latitude and longitude to a 2D Cartesian coordinate system.
To check if a point is inside the polygon, I am using the approach mentioned in this GeeksforGeeks article.
I had the same problem in my app and finally found the solution with the help of this site.
I am an Android developer and my code may not be directly usable for you, but the core concept is the same.
First we need to find the bearing of the line with the help of two LatLng points (I have done this with the computeDistanceAndBearing(double lat1, double lon1, double lat2, double lon2) function).
Now, to buffer a certain point, we need the buffering distance, the LatLng point and the bearing (which I obtain from the computeDistanceAndBearing function). I have done this with the computeDestinationAndBearing(double lat1, double lon1, double brng, double dist) function: from a single LatLng point we get two points by projecting it along the bearing over a certain distance.
Now we need to find the intersection point of the two lines to get the buffering that we want. For this, remember to take the newly obtained point and the bearing of one line, and the same for the other. This gives the new intersection point with the buffering you want (I have done this in my function computeIntersectionPoint(LatLng p1, double brng1, LatLng p2, double brng2)).
Do this for all the polygon points and you get the new points, which you join to get the buffering.
This is the way I have done it in my Android location app.
Here is my code:
//computeDistanceAndBearing(double lat1, double lon1, double lat2, double lon2)
public static double[] computeDistanceAndBearing(double lat1, double lon1,
double lat2, double lon2) {
// Based on http://www.ngs.noaa.gov/PUBS_LIB/inverse.pdf
// using the "Inverse Formula" (section 4)
double results[] = new double[3];
int MAXITERS = 20;
// Convert lat/long to radians
lat1 *= Math.PI / 180.0;
lat2 *= Math.PI / 180.0;
lon1 *= Math.PI / 180.0;
lon2 *= Math.PI / 180.0;
double a = 6378137.0; // WGS84 major axis
double b = 6356752.3142; // WGS84 semi-minor axis
double f = (a - b) / a;
double aSqMinusBSqOverBSq = (a * a - b * b) / (b * b);
double L = lon2 - lon1;
double A = 0.0;
double U1 = Math.atan((1.0 - f) * Math.tan(lat1));
double U2 = Math.atan((1.0 - f) * Math.tan(lat2));
double cosU1 = Math.cos(U1);
double cosU2 = Math.cos(U2);
double sinU1 = Math.sin(U1);
double sinU2 = Math.sin(U2);
double cosU1cosU2 = cosU1 * cosU2;
double sinU1sinU2 = sinU1 * sinU2;
double sigma = 0.0;
double deltaSigma = 0.0;
double cosSqAlpha = 0.0;
double cos2SM = 0.0;
double cosSigma = 0.0;
double sinSigma = 0.0;
double cosLambda = 0.0;
double sinLambda = 0.0;
double lambda = L; // initial guess
for (int iter = 0; iter < MAXITERS; iter++) {
double lambdaOrig = lambda;
cosLambda = Math.cos(lambda);
sinLambda = Math.sin(lambda);
double t1 = cosU2 * sinLambda;
double t2 = cosU1 * sinU2 - sinU1 * cosU2 * cosLambda;
double sinSqSigma = t1 * t1 + t2 * t2; // (14)
sinSigma = Math.sqrt(sinSqSigma);
cosSigma = sinU1sinU2 + cosU1cosU2 * cosLambda; // (15)
sigma = Math.atan2(sinSigma, cosSigma); // (16)
double sinAlpha = (sinSigma == 0) ? 0.0 : cosU1cosU2 * sinLambda
/ sinSigma; // (17)
cosSqAlpha = 1.0 - sinAlpha * sinAlpha;
cos2SM = (cosSqAlpha == 0) ? 0.0 : cosSigma - 2.0 * sinU1sinU2
/ cosSqAlpha; // (18)
double uSquared = cosSqAlpha * aSqMinusBSqOverBSq; // defn
A = 1 + (uSquared / 16384.0) * // (3)
(4096.0 + uSquared * (-768 + uSquared * (320.0 - 175.0 * uSquared)));
double B = (uSquared / 1024.0) * // (4)
(256.0 + uSquared * (-128.0 + uSquared * (74.0 - 47.0 * uSquared)));
double C = (f / 16.0) * cosSqAlpha * (4.0 + f * (4.0 - 3.0 * cosSqAlpha)); // (10)
double cos2SMSq = cos2SM * cos2SM;
deltaSigma = B
* sinSigma
* // (6)
(cos2SM + (B / 4.0)
* (cosSigma * (-1.0 + 2.0 * cos2SMSq) - (B / 6.0) * cos2SM
* (-3.0 + 4.0 * sinSigma * sinSigma)
* (-3.0 + 4.0 * cos2SMSq)));
lambda = L
+ (1.0 - C)
* f
* sinAlpha
* (sigma + C * sinSigma
* (cos2SM + C * cosSigma * (-1.0 + 2.0 * cos2SM * cos2SM))); // (11)
double delta = (lambda - lambdaOrig) / lambda;
if (Math.abs(delta) < 1.0e-12) {
break;
}
}
double distance = (b * A * (sigma - deltaSigma));
results[0] = distance;
if (results.length > 1) {
double initialBearing = Math.atan2(cosU2 * sinLambda, cosU1 * sinU2
- sinU1 * cosU2 * cosLambda);
initialBearing *= 180.0 / Math.PI;
results[1] = initialBearing;
if (results.length > 2) {
double finalBearing = Math.atan2(cosU1 * sinLambda, -sinU1 * cosU2
+ cosU1 * sinU2 * cosLambda);
finalBearing *= 180.0 / Math.PI;
results[2] = finalBearing;
}
}
return results;
}
//computeDestinationAndBearing(double lat1, double lon1,double brng, double dist)
public static double[] computeDestinationAndBearing(double lat1, double lon1,
double brng, double dist) {
double results[] = new double[3];
double a = 6378137, b = 6356752.3142, f = 1 / 298.257223563; // WGS-84
// ellipsiod
double s = dist;
double alpha1 = toRad(brng);
double sinAlpha1 = Math.sin(alpha1);
double cosAlpha1 = Math.cos(alpha1);
double tanU1 = (1 - f) * Math.tan(toRad(lat1));
double cosU1 = 1 / Math.sqrt((1 + tanU1 * tanU1)), sinU1 = tanU1 * cosU1;
double sigma1 = Math.atan2(tanU1, cosAlpha1);
double sinAlpha = cosU1 * sinAlpha1;
double cosSqAlpha = 1 - sinAlpha * sinAlpha;
double uSq = cosSqAlpha * (a * a - b * b) / (b * b);
double A = 1 + uSq / 16384
* (4096 + uSq * (-768 + uSq * (320 - 175 * uSq)));
double B = uSq / 1024 * (256 + uSq * (-128 + uSq * (74 - 47 * uSq)));
double sinSigma = 0, cosSigma = 0, deltaSigma = 0, cos2SigmaM = 0;
double sigma = s / (b * A), sigmaP = 2 * Math.PI;
while (Math.abs(sigma - sigmaP) > 1e-12) {
cos2SigmaM = Math.cos(2 * sigma1 + sigma);
sinSigma = Math.sin(sigma);
cosSigma = Math.cos(sigma);
deltaSigma = B
* sinSigma
* (cos2SigmaM + B
/ 4
* (cosSigma * (-1 + 2 * cos2SigmaM * cos2SigmaM) - B / 6
* cos2SigmaM * (-3 + 4 * sinSigma * sinSigma)
* (-3 + 4 * cos2SigmaM * cos2SigmaM)));
sigmaP = sigma;
sigma = s / (b * A) + deltaSigma;
}
double tmp = sinU1 * sinSigma - cosU1 * cosSigma * cosAlpha1;
double lat2 = Math.atan2(sinU1 * cosSigma + cosU1 * sinSigma * cosAlpha1,
(1 - f) * Math.sqrt(sinAlpha * sinAlpha + tmp * tmp));
double lambda = Math.atan2(sinSigma * sinAlpha1, cosU1 * cosSigma - sinU1
* sinSigma * cosAlpha1);
double C = f / 16 * cosSqAlpha * (4 + f * (4 - 3 * cosSqAlpha));
double L = lambda
- (1 - C)
* f
* sinAlpha
* (sigma + C * sinSigma
* (cos2SigmaM + C * cosSigma * (-1 + 2 * cos2SigmaM * cos2SigmaM)));
double lon2 = (toRad(lon1) + L + 3 * Math.PI) % (2 * Math.PI) - Math.PI; // normalise to -180...+180
double revAz = Math.atan2(sinAlpha, -tmp); // final bearing, if required
results[0] = toDegrees(lat2);
results[1] = toDegrees(lon2);
results[2] = toDegrees(revAz);
return results;
}
private static double toRad(double angle) {
return angle * Math.PI / 180;
}
private static double toDegrees(double radians) {
return radians * 180 / Math.PI;
}
//computeIntersectionPoint(LatLng p1, double brng1, LatLng p2, double brng2)
public static LatLng computeIntersectionPoint(LatLng p1, double brng1, LatLng p2, double brng2) {
double lat1 = toRad(p1.latitude), lng1 = toRad(p1.longitude);
double lat2 = toRad(p2.latitude), lng2 = toRad(p2.longitude);
double brng13 = toRad(brng1), brng23 = toRad(brng2);
double dlat = lat2 - lat1, dlng = lng2 - lng1;
double delta12 = 2 * Math.asin(Math.sqrt(Math.sin(dlat / 2) * Math.sin(dlat / 2)
+ Math.cos(lat1) * Math.cos(lat2) * Math.sin(dlng / 2) * Math.sin(dlng / 2)));
if (delta12 == 0) return null;
double initBrng1 = Math.acos((Math.sin(lat2) - Math.sin(lat1) * Math.cos(delta12)) / (Math.sin(delta12) * Math.cos(lat1)));
double initBrng2 = Math.acos((Math.sin(lat1) - Math.sin(lat2) * Math.cos(delta12)) / (Math.sin(delta12) * Math.cos(lat2)));
double brng12 = Math.sin(lng2 - lng1) > 0 ? initBrng1 : 2 * Math.PI - initBrng1;
double brng21 = Math.sin(lng2 - lng1) > 0 ? 2 * Math.PI - initBrng2 : initBrng2;
double alpha1 = (brng13 - brng12 + Math.PI) % (2 * Math.PI) - Math.PI;
double alpha2 = (brng21 - brng23 + Math.PI) % (2 * Math.PI) - Math.PI;
double alpha3 = Math.acos(-Math.cos(alpha1) * Math.cos(alpha2) + Math.sin(alpha1) * Math.sin(alpha2) * Math.cos(delta12));
double delta13 = Math.atan2(Math.sin(delta12) * Math.sin(alpha1) * Math.sin(alpha2), Math.cos(alpha2) + Math.cos(alpha1) * Math.cos(alpha3));
double lat3 = Math.asin(Math.sin(lat1) * Math.cos(delta13) + Math.cos(lat1) * Math.sin(delta13) * Math.cos(brng13));
double dlng13 = Math.atan2(Math.sin(brng13) * Math.sin(delta13) * Math.cos(lat1), Math.cos(delta13) - Math.sin(lat1) * Math.sin(lat3));
double lng3 = lng1 + dlng13;
return new LatLng(toDegrees(lat3), (toDegrees(lng3) + 540) % 360 - 180);
}
I suggest you go through the above site and get the knowledge, as I did the same.
Hope this may help; I know it is not in iOS, but the concept is the same, as I did my project by adapting the JavaScript code.
Cheers!
My requirement was something similar to this. I ended up writing my own algorithm for it: https://github.com/RanaRanvijaySingh/PolygonBuffer
All you need to use is this line:
double distance = 0.0001;
List bufferedPolygonList = AreaBuffer.buffer(pointList, distance);
It gives you a list of buffered polygon points at a given distance from your original polygon.
I would recommend using the Turf.js library for buffering and many basic GIS operations. You would be able to retrieve each edge from the path that is returned. For geometry buffering it is easy to use, quite lightweight, and it works without any problem in my applications using MapBox.js or Leaflet.
More details: Turf.js Buffer
But if you are looking for a geodesic distance buffer, that could be a problem. I would use the ArcGIS JavaScript API.
Take a look at Boost. This is a big C++ library, and you may find a library/source code for almost everything in there, including buffer methods with different join types such as miter, round and square.
Just install the latest version of Boost, which I guess is 1.58.0 right now, and take a look at boost/geometry/strategies/cartesian/buffer (Square/Miter/Round).
Here is a good document.
You need to convert your geodetic coordinates (lat/long) to Cartesian (x/y), use the Boost library, and reverse the conversion. You do not need to use ArcGIS or any other GIS library at all.
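For illustration, a minimal Boost.Geometry sketch of a miter-join polygon buffer in planar coordinates (the WKT input and the distance value are placeholders; project your lat/long to x/y first as described above):
#include <iostream>
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>
#include <boost/geometry/geometries/multi_polygon.hpp>

namespace bg = boost::geometry;
typedef bg::model::d2::point_xy<double> PointXY;
typedef bg::model::polygon<PointXY> Polygon;
typedef bg::model::multi_polygon<Polygon> MultiPolygon;

int main()
{
    // Placeholder square in projected (x/y) coordinates.
    Polygon input;
    bg::read_wkt("POLYGON((0 0,0 100,100 100,100 0,0 0))", input);
    bg::correct(input);

    const double distance = 10.0; // buffer distance in projection units

    // Buffer strategies: symmetric distance, miter joins (round and square exist too).
    bg::strategy::buffer::distance_symmetric<double> distance_strategy(distance);
    bg::strategy::buffer::join_miter join_strategy;
    bg::strategy::buffer::end_flat end_strategy;
    bg::strategy::buffer::point_square point_strategy;
    bg::strategy::buffer::side_straight side_strategy;

    MultiPolygon output;
    bg::buffer(input, output, distance_strategy, side_strategy,
               join_strategy, end_strategy, point_strategy);

    std::cout << bg::wkt(output) << std::endl; // buffered polygon as WKT
    return 0;
}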

Fast Voxel Traversal Algorithm with negative direction

I'm trying to implement the fast voxel traversal algorithm and calculate T and M according to this answer (T is tDelta, M is tMax). All is good if both components of the direction vector V are positive. But if at least one of them is negative, it works incorrectly.
The green point is the start, the red one is the end. All seems correct.
And now from a larger to a smaller position.
Traversal method:
private IEnumerable<Vector2> GetCrossedCells(Vector2 pPoint1, Vector2 pPoint2)
{
Vector2 V = pPoint2 - pPoint1; // direction & distance vector
if (V != Vector2.Zero)
{
Vector2 U = Vector2.Normalize(V); // direction unit vector
Vector2 S = new Vector2(Math.Sign(U.X), Math.Sign(U.Y)); // sign vector
Vector2 P = pPoint1; // position
Vector2 G = new Vector2((int) Math.Floor(P.X / CELL_SIZE), (int) Math.Floor(P.Y / CELL_SIZE)); // grid coord
Vector2 T = new Vector2(Math.Abs(CELL_SIZE / U.X), Math.Abs(CELL_SIZE / U.Y));
Vector2 M = new Vector2(
Single.IsInfinity(T.X) ? Single.PositiveInfinity : T.X * (1.0f - (P.X / CELL_SIZE) % 1),
Single.IsInfinity(T.Y) ? Single.PositiveInfinity : T.Y * (1.0f - (P.Y / CELL_SIZE) % 1));
Vector2 D = Vector2.Zero;
bool isCanMoveByX = S.X != 0;
bool isCanMoveByY = S.Y != 0;
while (isCanMoveByX || isCanMoveByY)
{
yield return G;
D = new Vector2(
S.X > 0 ? (float) (Math.Floor(P.X / CELL_SIZE) + 1) * CELL_SIZE - P.X :
S.X < 0 ? (float) (Math.Ceiling(P.X / CELL_SIZE) - 1) * CELL_SIZE - P.X :
0,
S.Y > 0 ? (float) (Math.Floor(P.Y / CELL_SIZE) + 1) * CELL_SIZE - P.Y :
S.Y < 0 ? (float) (Math.Ceiling(P.Y / CELL_SIZE) - 1) * CELL_SIZE - P.Y :
0);
if (Math.Abs(V.X) <= Math.Abs(D.X))
{
D.X = V.X;
isCanMoveByX = false;
}
if (Math.Abs(V.Y) <= Math.Abs(D.Y))
{
D.Y = V.Y;
isCanMoveByY = false;
}
if (M.X <= M.Y)
{
M.X += T.X;
G.X += S.X;
if (isCanMoveByY)
{
D.Y = U.Y / U.X * D.X; // U.X / U.Y = D.X / D.Y => U.X * D.Y = U.Y * D.X
}
}
else
{
M.Y += T.Y;
G.Y += S.Y;
if (isCanMoveByX)
{
D.X = U.X / U.Y * D.Y;
}
}
V -= D;
P += D;
}
}
}
In debug I can see that, for example, M.Y > M.X when it should be the opposite, whenever S.X < 0 or S.Y < 0.
Please tell me why my code works incorrectly for negative directions.
So, I solved it.
I made the code cleaner and the problem is gone.
private IEnumerable<Vector2> GetCrossedCells(Vector2 pPoint1, Vector2 pPoint2)
{
if (pPoint1 != pPoint2)
{
Vector2 V = (pPoint2 - pPoint1) / CELL_SIZE; // direction & distance vector
Vector2 U = Vector2.Normalize(V); // direction unit vector
Vector2 S = new Vector2(Math.Sign(U.X), Math.Sign(U.Y)); // sign vector
Vector2 P = pPoint1 / CELL_SIZE; // position in grid coord system
Vector2 G = new Vector2((int) Math.Floor(P.X), (int) Math.Floor(P.Y)); // grid coord
Vector2 T = new Vector2(Math.Abs(CELL_SIZE / U.X), Math.Abs(CELL_SIZE / U.Y));
Vector2 D = new Vector2(
S.X > 0 ? 1 - P.X % 1 : S.X < 0 ? P.X % 1 : 0,
S.Y > 0 ? 1 - P.Y % 1 : S.Y < 0 ? P.Y % 1 : 0);
Vector2 M = new Vector2(
Single.IsInfinity(T.X) || S.X == 0 ? Single.PositiveInfinity : T.X * D.X,
Single.IsInfinity(T.Y) || S.Y == 0 ? Single.PositiveInfinity : T.Y * D.Y);
bool isCanMoveByX = S.X != 0;
bool isCanMoveByY = S.Y != 0;
while (isCanMoveByX || isCanMoveByY)
{
yield return G;
D = new Vector2(
S.X > 0 ? (float) Math.Floor(P.X) + 1 - P.X :
S.X < 0 ? (float) Math.Ceiling(P.X) - 1 - P.X :
0,
S.Y > 0 ? (float) Math.Floor(P.Y) + 1 - P.Y :
S.Y < 0 ? (float) Math.Ceiling(P.Y) - 1 - P.Y :
0);
if (Math.Abs(V.X) <= Math.Abs(D.X))
{
D.X = V.X;
isCanMoveByX = false;
}
if (Math.Abs(V.Y) <= Math.Abs(D.Y))
{
D.Y = V.Y;
isCanMoveByY = false;
}
if (M.X <= M.Y)
{
M.X += T.X;
G.X += S.X;
if (isCanMoveByY)
{
D.Y = U.Y / U.X * D.X; // U.X / U.Y = D.X / D.Y => U.X * D.Y = U.Y * D.X
}
}
else
{
M.Y += T.Y;
G.Y += S.Y;
if (isCanMoveByX)
{
D.X = U.X / U.Y * D.Y;
}
}
V -= D;
P += D;
}
}
}
Update
I began by removing the redundant divisions by CELL_SIZE and then noticed a mistake in the M calculation.
The answer I linked to uses a Frac() function. I calculated it as (1 - P % 1), but that is the case for S > 0; it should be (P % 1) if S < 0, and Inf for S = 0.
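In other words, a small C-style sketch of that corrected Frac() (the function name is my own):
#include <cmath>
#include <limits>

// Distance from p to the next grid boundary in the direction of travel,
// in cell units; the floor-based frac also behaves for negative p.
float fracToBoundary(float p, int s)
{
    float f = p - std::floor(p);
    if (s > 0) return 1.0f - f;
    if (s < 0) return f;
    return std::numeric_limits<float>::infinity();
}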
Update 2
Also there should be
Vector2 D = new Vector2(
S.X > 0 ? (float) Math.Floor(P.X) + 1 - P.X :
S.X < 0 ? (float) Math.Ceiling(P.X) - 1 - P.X :
0,
S.Y > 0 ? (float) Math.Floor(P.Y) + 1 - P.Y :
S.Y < 0 ? (float) Math.Ceiling(P.Y) - 1 - P.Y :
0);
Instead of
Vector2 D = new Vector2(
S.X > 0 ? 1 - P.X % 1 : S.X < 0 ? P.X % 1 : 0,
S.Y > 0 ? 1 - P.Y % 1 : S.Y < 0 ? P.Y % 1 : 0);
Because M will be infinity in the case where S < 0 and P has no fractional part.

Angle and Scale Invariant template matching using OpenCV

The function rotates the template image from 0 to 180 (or up to 360) degrees to search for all related matches (in all angles) in the source image, even at different scales.
The function was written using the OpenCV C interface. When I tried to port it to the OpenCV C++ interface, I got a lot of errors. Could someone please help me port it to the OpenCV C++ interface?
void TemplateMatch()
{
int i, j, x, y, key;
double minVal;
char windowNameSource[] = "Original Image";
char windowNameDestination[] = "Result Image";
char windowNameCoefficientOfCorrelation[] = "Coefficient of Correlation Image";
CvPoint minLoc;
CvPoint tempLoc;
IplImage *sourceImage = cvLoadImage("template_source.jpg", CV_LOAD_IMAGE_ANYDEPTH | CV_LOAD_IMAGE_ANYCOLOR);
IplImage *templateImage = cvLoadImage("template.jpg", CV_LOAD_IMAGE_ANYDEPTH | CV_LOAD_IMAGE_ANYCOLOR);
IplImage *graySourceImage = cvCreateImage(cvGetSize(sourceImage), IPL_DEPTH_8U, 1);
IplImage *grayTemplateImage =cvCreateImage(cvGetSize(templateImage),IPL_DEPTH_8U,1);
IplImage *binarySourceImage = cvCreateImage(cvGetSize(sourceImage), IPL_DEPTH_8U, 1);
IplImage *binaryTemplateImage = cvCreateImage(cvGetSize(templateImage), IPL_DEPTH_8U, 1);
IplImage *destinationImage = cvCreateImage(cvGetSize(sourceImage), IPL_DEPTH_8U, 3);
cvCopy(sourceImage, destinationImage);
cvCvtColor(sourceImage, graySourceImage, CV_RGB2GRAY);
cvCvtColor(templateImage, grayTemplateImage, CV_RGB2GRAY);
cvThreshold(graySourceImage, binarySourceImage, 200, 255, CV_THRESH_OTSU );
cvThreshold(grayTemplateImage, binaryTemplateImage, 200, 255, CV_THRESH_OTSU);
int templateHeight = templateImage->height;
int templateWidth = templateImage->width;
float templateScale = 0.5f;
for(i = 2; i <= 3; i++)
{
int tempTemplateHeight = (int)(templateWidth * (i * templateScale));
int tempTemplateWidth = (int)(templateHeight * (i * templateScale));
IplImage *tempBinaryTemplateImage = cvCreateImage(cvSize(tempTemplateWidth, tempTemplateHeight), IPL_DEPTH_8U, 1);
// W - w + 1, H - h + 1
IplImage *result = cvCreateImage(cvSize(sourceImage->width - tempTemplateWidth + 1, sourceImage->height - tempTemplateHeight + 1), IPL_DEPTH_32F, 1);
cvResize(binaryTemplateImage, tempBinaryTemplateImage, CV_INTER_LINEAR);
float degree = 20.0f;
for(j = 0; j <= 9; j++)
{
IplImage *rotateBinaryTemplateImage = cvCreateImage(cvSize(tempBinaryTemplateImage->width, tempBinaryTemplateImage->height), IPL_DEPTH_8U, 1);
//cvShowImage(windowNameSource, tempBinaryTemplateImage);
//cvWaitKey(0);
for(y = 0; y < tempTemplateHeight; y++)
{
for(x = 0; x < tempTemplateWidth; x++)
{
rotateBinaryTemplateImage->imageData[y * tempTemplateWidth + x] = 255;
}
}
for(y = 0; y < tempTemplateHeight; y++)
{
for(x = 0; x < tempTemplateWidth; x++)
{
float radian = (float)j * degree * CV_PI / 180.0f;
int scale = y * tempTemplateWidth + x;
int rotateY = - sin(radian) * ((float)x - (float)tempTemplateWidth / 2.0f) + cos(radian) * ((float)y - (float)tempTemplateHeight / 2.0f) + tempTemplateHeight / 2;
int rotateX = cos(radian) * ((float)x - (float)tempTemplateWidth / 2.0f) + sin(radian) * ((float)y - (float)tempTemplateHeight / 2.0f) + tempTemplateWidth / 2;
if(rotateY < tempTemplateHeight && rotateX < tempTemplateWidth && rotateY >= 0 && rotateX >= 0)
rotateBinaryTemplateImage->imageData[scale] = tempBinaryTemplateImage->imageData[rotateY * tempTemplateWidth + rotateX];
}
}
//cvShowImage(windowNameSource, rotateBinaryTemplateImage);
//cvWaitKey(0);
cvMatchTemplate(binarySourceImage, rotateBinaryTemplateImage, result, CV_TM_SQDIFF_NORMED);
//cvMatchTemplate(binarySourceImage, rotateBinaryTemplateImage, result, CV_TM_SQDIFF);
cvMinMaxLoc(result, &minVal, NULL, &minLoc, NULL, NULL);
printf(": %f%%\n", (int)(i * 0.5 * 100), j * 20, (1 - minVal) * 100);
if(minVal < 0.065) // 1 - 0.065 = 0.935 : 93.5%
{
tempLoc.x = minLoc.x + tempTemplateWidth;
tempLoc.y = minLoc.y + tempTemplateHeight;
cvRectangle(destinationImage, minLoc, tempLoc, CV_RGB(0, 255, 0), 1, 8, 0);
}
}
//cvShowImage(windowNameSource, result);
//cvWaitKey(0);
cvReleaseImage(&tempBinaryTemplateImage);
cvReleaseImage(&result);
}
// cvShowImage(windowNameSource, sourceImage);
// cvShowImage(windowNameCoefficientOfCorrelation, result);
cvShowImage(windowNameDestination, destinationImage);
key = cvWaitKey(0);
cvReleaseImage(&sourceImage);
cvReleaseImage(&templateImage);
cvReleaseImage(&graySourceImage);
cvReleaseImage(&grayTemplateImage);
cvReleaseImage(&binarySourceImage);
cvReleaseImage(&binaryTemplateImage);
cvReleaseImage(&destinationImage);
cvDestroyWindow(windowNameSource);
cvDestroyWindow(windowNameDestination);
cvDestroyWindow(windowNameCoefficientOfCorrelation);
}
RESULT:
Template Image:
Result image:
The function above puts rectangles around the perfect matches (angle and scale invariant) in this image.
Now, I have been trying to port the code to the C++ interface. If anyone needs more details, please let me know.
C++ Port of above code:
Mat TemplateMatch(Mat sourceImage, Mat templateImage){
double minVal;
Point minLoc;
Point tempLoc;
Mat graySourceImage = Mat(sourceImage.size(),CV_8UC1);
Mat grayTemplateImage = Mat(templateImage.size(),CV_8UC1);
Mat binarySourceImage = Mat(sourceImage.size(),CV_8UC1);
Mat binaryTemplateImage = Mat(templateImage.size(),CV_8UC1);
Mat destinationImage = Mat(sourceImage.size(),CV_8UC3);
sourceImage.copyTo(destinationImage);
cvtColor(sourceImage, graySourceImage, CV_BGR2GRAY);
cvtColor(templateImage, grayTemplateImage, CV_BGR2GRAY);
threshold(graySourceImage, binarySourceImage, 200, 255, CV_THRESH_OTSU );
threshold(grayTemplateImage, binaryTemplateImage, 200, 255, CV_THRESH_OTSU);
int templateHeight = templateImage.rows;
int templateWidth = templateImage.cols;
float templateScale = 0.5f;
for(int i = 2; i <= 3; i++){
int tempTemplateHeight = (int)(templateWidth * (i * templateScale));
int tempTemplateWidth = (int)(templateHeight * (i * templateScale));
Mat tempBinaryTemplateImage = Mat(Size(tempTemplateWidth,tempTemplateHeight),CV_8UC1);
Mat result = Mat(Size(sourceImage.cols - tempTemplateWidth + 1,sourceImage.rows - tempTemplateHeight + 1),CV_32FC1);
resize(binaryTemplateImage,tempBinaryTemplateImage,Size(tempBinaryTemplateImage.cols,tempBinaryTemplateImage.rows),0,0,INTER_LINEAR);
float degree = 20.0f;
for(int j = 0; j <= 9; j++){
Mat rotateBinaryTemplateImage = Mat(Size(tempBinaryTemplateImage.cols, tempBinaryTemplateImage.rows), CV_8UC1);
for(int y = 0; y < tempTemplateHeight; y++){
for(int x = 0; x < tempTemplateWidth; x++){
rotateBinaryTemplateImage.data[y * tempTemplateWidth + x] = 255;
}
}
for(int y = 0; y < tempTemplateHeight; y++){
for(int x = 0; x < tempTemplateWidth; x++){
float radian = (float)j * degree * CV_PI / 180.0f;
int scale = y * tempTemplateWidth + x;
int rotateY = - sin(radian) * ((float)x - (float)tempTemplateWidth / 2.0f) + cos(radian) * ((float)y - (float)tempTemplateHeight / 2.0f) + tempTemplateHeight / 2;
int rotateX = cos(radian) * ((float)x - (float)tempTemplateWidth / 2.0f) + sin(radian) * ((float)y - (float)tempTemplateHeight / 2.0f) + tempTemplateWidth / 2;
if(rotateY < tempTemplateHeight && rotateX < tempTemplateWidth && rotateY >= 0 && rotateX >= 0)
rotateBinaryTemplateImage.data[scale] = tempBinaryTemplateImage.data[rotateY * tempTemplateWidth + rotateX];
}
}
matchTemplate(binarySourceImage, rotateBinaryTemplateImage, result, CV_TM_SQDIFF_NORMED);
minMaxLoc(result, &minVal, 0, &minLoc, 0, Mat());
cout<<(int)(i * 0.5 * 100)<<" , "<< j * 20<<" , "<< (1 - minVal) * 100<<endl;
if(minVal < 0.065){ // 1 - 0.065 = 0.935 : 93.5%
tempLoc.x = minLoc.x + tempTemplateWidth;
tempLoc.y = minLoc.y + tempTemplateHeight;
rectangle(destinationImage, minLoc, tempLoc, CV_RGB(0, 255, 0), 1, 8, 0);
}
}
}
return destinationImage;
}
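For completeness, a hypothetical usage of the C++ port above (file and window names borrowed from the C version; error checks omitted):
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    // Load the same images the C version used.
    Mat sourceImage = imread("template_source.jpg");
    Mat templateImage = imread("template.jpg");
    Mat result = TemplateMatch(sourceImage, templateImage);
    imshow("Result Image", result);
    waitKey(0);
    return 0;
}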
