How to get an XY value from ct?
Ex: for ct = 217, I want to get x = 0.3127569, y = 0.32908.
I'm able to convert an XY value into a ct value using the code below.
float R1 = [hue[0] floatValue];
float S1 = [hue[1] floatValue];
float result = ((R1-0.332)/(S1-0.1858));
NSString *ctString = [NSString stringWithFormat:@"%f", ((-449*result*result*result)+(3525*result*result)-(6823.3*result)+(5520.33))];
float micro2 = (float) (1 / [ctString floatValue] * 1000000);
NSString *ctValue = [NSString stringWithFormat:@"%f", micro2];
ctValue = [NSString stringWithFormat:@"%d", [ctValue intValue]];
if ([ctValue integerValue] < 153) {
    ctValue = [NSString stringWithFormat:@"%d", 153];
}
Now I want the reverse: from ct to XY.
On Philips Hue,
2000 K maps to ct = 500 and 6500 K maps to ct = 153. The value is given as a color temperature, but it can be thought of as actually being mired.
Mired means micro reciprocal degree (see Wikipedia): mired = 1,000,000 / Kelvin, so 1,000,000 / 2000 = 500 and 1,000,000 / 6500 ≈ 153.8, matching the Hue endpoints.
"ct" is possibly used because it is not 100% mired. I am quite sure Philips uses a lookup table, as a lot of CIE algorithms do, because there are just 347 indexes in this range from 153 to 500.
The following is not a solution; it's just a simple sketch of a lookup table.
And as the CIE 1931 xy-to-CCT formula by McCamy (found here) suggests, it is possible to use a lookup table to find x and y as well.
A table can be found here, but I am not sure if that is the right lookup table.
Reminder: the following is not a solution, but the code may help in finding a reverse algorithm.
typedef int Kelvin;
typedef float Mired;
Mired linearMiredByKelvin(Kelvin k) {
if (k==0) return 0;
return 1000000.0/k;
}
-(void)mired {
Mired miredMin = 2000.0/13.0; // 153.85 = reciprocal of 6500 K
Mired miredMax = 500.0; // 500.00 = reciprocal of 2000 K
Mired lookupMiredByKelvin[6501]; //max 6500 Kelvin + 1 safe index
//Kelvin lookupKelvinByMired[501]; //max 500 Mired + 1 safe index
// dummy stuff, empty unused table space
for (Kelvin k = 0; k < 2000; k++) {
lookupMiredByKelvin[k] = 0;
}
//for (Mired m = 0.0; m < 154.0; m++) {
// lookupKelvinByMired[(int)m] = 0;
//}
for (Kelvin k=2000; k<6501; k++) {
Mired linearMired = linearMiredByKelvin(k);
float dimm = (linearMired - miredMin) / ( miredMax - miredMin);
Kelvin ct = (Kelvin)(1000000.0/(dimm*miredMax - dimm*miredMin + miredMin));
lookupMiredByKelvin[k] = linearMiredByKelvin(ct);
if (k==2000 || k==2250 || k==2500 || k==2750 ||
k==3000 || k==3250 || k==3500 || k==3750 ||
k==4000 || k==4250 || k==4500 || k==4750 ||
k==5000 || k==5250 || k==5500 || k==5750 ||
k==6000 || k==6250 || k==6500 || k==6501 )
fprintf(stderr,"%d %f %f\n",ct, dimm, lookupMiredByKelvin[k]);
}
}
At least this is proof that x and y will not sit on a simple vector.
CCT means correlated colour temperature and, as the implementation in the question shows, can be calculated via n = (x-0.3320)/(0.1858-y); CCT = 437*n^3 + 3601*n^2 + 6861*n + 5517 (after McCamy).
But cct = 217 is out of range of the lookup table linked above.
Following the idea in this git repo from colour-science, ported to C, it could look like this:
void CCT_to_xy_CIE_D(float cct) {
//if (CCT < 4000 || CCT > 25000) fprintf(stderr, "Correlated colour temperature must be in domain, unpredictable results may occur! \n");
float x = calculateXviaCCT(cct);
float y = calculateYviaX(x);
NSLog(@"cct=%f x%f y%f",cct,x,y);
}
float calculateXviaCCT(float cct) {
float cct_3 = pow(cct, 3); //(cct*cct*cct);
float cct_2 = pow(cct, 2); //(cct*cct);
if (cct<=7000)
return -4.607 * pow(10, 9) / cct_3 + 2.9678 * pow(10, 6) / cct_2 + 0.09911 * pow(10, 3) / cct + 0.244063;
return -2.0064 * pow(10, 9) / cct_3 + 1.9018 * pow(10, 6) / cct_2 + 0.24748 * pow(10, 3) / cct + 0.23704;
}
float calculateYviaX(float x) {
return -3.000 * pow(x, 2) + 2.870 * x - 0.275;
}
CCT_to_xy_CIE_D(6504.38938305); //proof of concept
//cct=6504.389160 x0.312708 y0.329113
CCT_to_xy_CIE_D(217.0);
//cct=217.000000 x-387.131073 y-450722.750000
// so for sure Phillips hue temperature given in ct between 153-500 is not a good starting point
//but
CCT_to_xy_CIE_D(2000.0);
//cct=2000.000000 x0.459693 y0.410366
This seems to work fine with CCT between 2000 and 25000; what may be confusing is that CCT is given in Kelvin here.
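So, to go from a Hue ct to x and y, convert ct back to Kelvin first (CCT = 1000000 / ct, treating ct as plain mired) and feed that into the function above. A minimal sketch:
CCT_to_xy_CIE_D(1000000.0f / 250.0f); // Hue ct 250 is roughly 4000 K; logs its x and y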
EDIT
This has been through so many revisions and ideas. To keep it simple I edited most of that out and just give you the final result.
This fits your function perfectly except for a region in the middle (ct from 256 to 316) where it deviates a bit.
The problem with your function is that it has approximately infinitely many solutions, so to solve it nicely you need more constraints. But what? Ol Sen's reference https://www.waveformlighting.com/tech/calculate-color-temperature-cct-from-cie-1931-xy-coordinates discusses this in some detail and then mentions that you want Duv to be zero. It also gives a way to calculate Duv, so I added that to my optimiser and voila!
Nice and smooth. The optimiser now solves for an x and y that both satisfy your function and minimise Duv.
To get it to work nicely I had to scale Duv quite a bit. That page mentions that Duv should be very small, so I think this is a good thing. Also, as the temp increases, so should the scaling, to help the optimiser.
Below prints results from 153 to 500.
#import <Foundation/Foundation.h>
// Function taken from your code
// Simplified a bit
int ctFuncI ( float x, float y )
{
// float R1 = [hue[0] floatValue];
// float S1 = [hue[1] floatValue];
float result = (x-0.332)/(y-0.1858);
float cubic = - 449 * result * result * result + 3525 * result * result - 6823.3 * result + 5520.33;
float micro2 = 1 / cubic * 1000000;
int ct = ( int )( micro2 + 0.5 );
if ( ct < 153 )
{
ct = 153;
}
return ct;
}
// Need this
// Float version of your code
float ctFuncF ( float x, float y )
{
// float R1 = [hue[0] floatValue];
// float S1 = [hue[1] floatValue];
float result = (x-0.332)/(y-0.1858);
float cubic = - 449 * result * result * result + 3525 * result * result - 6823.3 * result + 5520.33;
return 1000000 / cubic;
}
// We need an additional constraint
// https://www.waveformlighting.com/tech/calculate-duv-from-cie-1931-xy-coordinates
// Given x, y calculate Duv
// We want this to be 0
float duv ( float x, float y )
{
float f = 1 / ( - 2 * x + 12 * y + 3 );
float u = 4 * x * f;
float v = 6 * y * f;
// I'm typing float but my heart yells double
float k6 = -0.00616793;
float k5 = 0.0893944;
float k4 = -0.5179722;
float k3 = 1.5317403;
float k2 = -2.4243787;
float k1 = 1.925865;
float k0 = -0.471106;
float du = u - 0.292;
float dv = v - 0.24;
float Lfp = sqrt ( du * du + dv * dv );
float a = acos( du / Lfp );
float Lbb = k6 * pow ( a, 6 ) + k5 * pow( a, 5 ) + k4 * pow( a, 4 ) + k3 * pow( a, 3 ) + k2 * pow(a,2) + k1 * a + k0;
return Lfp - Lbb;
}
// Solver!
// Returns iterations
int ctSolve ( int ct, float * x, float * y )
{
int iter = 0;
float dx = 0.001;
float dy = 0.001;
// Error
// Note we scale duv a bit
// Seems the higher the temp, the higher scale we require
// Also note the jump at 255 ...
float s = 1000 * ( ct > 255 ? 10 : 1 );
float d = fabs( ctFuncF ( * x, * y ) - ct ) + s * fabs( duv ( * x, * y ) );
// Approx
while ( d > 0.5 && iter < 250 )
{
iter ++;
dx *= fabs( ctFuncF ( * x + dx, * y ) - ct ) + s * fabs( duv ( * x + dx, * y ) ) < d ? 1.2 : - 0.5;
dy *= fabs( ctFuncF ( * x, * y + dy ) - ct ) + s * fabs( duv ( * x, * y + dy ) ) < d ? 1.2 : - 0.5;
* x += dx;
* y += dy;
d = fabs( ctFuncF ( * x, * y ) - ct ) + s * fabs( duv ( * x, * y ) );
}
return iter;
}
// Tester
int main(int argc, const char * argv[]) {
@autoreleasepool
{
// insert code here...
NSLog(@"Hello, World!");
float x, y;
int sume = 0;
int sumi = 0;
for ( int ct = 153; ct <= 500; ct ++ )
{
// Initial guess
x = 0.4;
y = 0.4;
// Approx
int iter = ctSolve ( ct, & x, & y );
// CT and error
int ctEst = ctFuncI ( x, y );
int e = ct - ctEst;
// Diagnostics
sume += abs ( e );
sumi += iter;
// Print out results
NSLog ( #"want ct = %d x = %f y = %f got ct %d in %d iter error %d", ct, x, y, ctEst, iter, e );
}
NSLog ( #"Sum of abs errors %d iterations %d", sume, sumi );
}
return 0;
}
To use it, do as below.
// To call it, init x and y to some guess
float x = 0.4;
float y = 0.4;
// Then call solver with your temp
int ct = 217;
ctSolve( ct, & x, & y ); // Note you pass references to x and y
// Done, answer now in x and y
A bit more compact answer, with functions to convert back and forth.
Beware: there are rounding issues, because McCamy's formula relies on mathematical assumptions, and so the backward calculation does too.
If you want to find more results, search directly for "n= (x-0.3320)/(0.1858-y); CCT = 437*n^3 + 3601*n^2 + 6861*n + 5517"; there are plenty of different methods to convert back and forth.
So here: Philips Hue @[@x, @y] to Philips ct, Philips ct to CCT, and CCT to x, y.
void CCT_to_xy_CIE_D(float cct) {
//if (CCT < 4000 || CCT > 25000) fprintf(stderr, "Correlated colour temperature must be in domain, unpredictable results may occur! \n");
float x = calculateXviaCCT(cct);
float y = calculateYviaX(x);
fprintf(stderr,"cct=%f x%f y%f",cct,x,y);
}
float calculateXviaCCT(float cct) {
float cct_3 = pow(cct, 3); //(cct*cct*cct);
float cct_2 = pow(cct, 2); //(cct*cct);
if (cct<=7000.0)
return -4.607 * pow(10, 9) / cct_3 + 2.9678 * pow(10, 6) / cct_2 + 0.09911 * pow(10, 3) / cct + 0.244063;
return -2.0064 * pow(10, 9) / cct_3 + 1.9018 * pow(10, 6) / cct_2 + 0.24748 * pow(10, 3) / cct + 0.23704;
}
float calculateYviaX(float x) {
return -3.000 * x*x + 2.870 * x - 0.275;
}
int calculate_PhillipsHueCT_withCCT(float cct) {
if (cct>6500.0) return 2000.0/13.0;
if (cct<2000.0) return 500.0;
//return (float) (1 / cct * 1000000); // same as..
return 1000000 / cct;
}
float calculate_CCT_withPhillipsHueCT(float ct) {
if (ct == 0.0) return 0.0;
return 1000000 / ct;
}
float calculate_CCT_withHueXY(NSArray *hue) {
float x = [hue[0] floatValue]; //R1
float y = [hue[1] floatValue]; //S1
//x = 0.312708; y = 0.329113;
float n = (x-0.3320)/(0.1858-y);
float cct = 437.0*n*n*n + 3601.0*n*n + 6861.0*n + 5517.0;
return cct;
}
// McCamy's formula: n = (x-0.3320)/(0.1858-y); cct = 437*n^3 + 3601*n^2 + 6861*n + 5517;
-(void)testPhillipsHueCt_backAndForth {
NSArray *hue = @[@(0.312708), @(0.329113)];
float cct = calculate_CCT_withHueXY(hue);
float ct = calculate_PhillipsHueCT_withCCT(cct);
NSLog(@"ct %f",ct);
CCT_to_xy_CIE_D(cct); // check
CCT_to_xy_CIE_D(6504.38938305); //proof of concept
CCT_to_xy_CIE_D(2000.0);
CCT_to_xy_CIE_D(calculate_CCT_withPhillipsHueCT(217.0));
}
I am implementing a nearest-neighbor kernel function to resize an input image, but the result is wrong and I have no idea why.
Here is the input image.
The result is wrong.
I use OpenCV to read the input image.
cv::Mat image = cv::imread("/home/tumh/test.jpg");
unsigned char* data = image.data;
int outH, outW;
float *out_data_host = test(data, image.rows, image.cols, outH, outW);
cv::Mat out_image(outH, outW, CV_32FC3);
memcpy(out_image.data, out_data_host, outH * outW * 3 * sizeof(float));
float* test(unsigned char* in_data_host, const int &inH, const int &inW, int &outH, int &outW) {
// get the output size
int im_size_min = std::min(inW, inH);
int im_size_max = std::max(inW, inH);
float scale_factor = static_cast<float>(640) / im_size_min;
float im_scale_x = std::floor(inW * scale_factor / 64) * 64 / inW;
float im_scale_y = std::floor(inH * scale_factor / 64) * 64 / inH;
outW = inW * im_scale_x;
outH = inH * im_scale_y;
int channel = 3;
unsigned char* in_data_dev;
CUDA_CHECK(cudaMalloc(&in_data_dev, sizeof(unsigned char) * channel * inH * inW));
CUDA_CHECK(cudaMemcpy(in_data_dev, in_data_host, 1 * sizeof(unsigned char) * channel * inH * inW, cudaMemcpyHostToDevice));
// image pre process
const float2 scale = make_float2( im_scale_x, im_scale_y);
float * out_buffer = NULL;
CUDA_CHECK(cudaMalloc(&out_buffer, sizeof(float) * channel * outH * outW));
float *out_data_host = new float[sizeof(float) * channel * outH * outW];
const dim3 threads(32, 32);
const dim3 block(iDivUp(outW, threads.x), iDivUp(outW, threads.y));
gpuPreImageNet<<<block, threads>>>(scale, in_data_dev, inW, out_buffer, outW, outH);
CUDA_CHECK(cudaFree(in_data_dev));
CUDA_CHECK(cudaMemcpy(out_data_host, out_buffer, sizeof(float) * channel * outH * outW, cudaMemcpyDeviceToHost));
CUDA_CHECK(cudaFree(out_buffer));
return out_data_host;
}
Here is the resize kernel function
__global__ void gpuPreImageNet( float2 scale, unsigned char* input, int iWidth, float* output, int oWidth, int oHeight )
{
const int x = blockIdx.x * blockDim.x + threadIdx.x;
const int y = blockIdx.y * blockDim.y + threadIdx.y;
const int n = oWidth * oHeight;
int channel = 3;
if( x >= oWidth || y >= oHeight )
return;
const int dx = ((float)x * scale.x);
const int dy = ((float)y * scale.y);
const unsigned char* px = input + dy * iWidth * channel + dx * channel ;
const float3 bgr = make_float3(*(px + 0), *(px + 1), *(px + 2));
output[channel * y * oWidth + channel * x + 0] = bgr.x;
output[channel * y * oWidth + channel * x + 1] = bgr.y;
output[channel * y * oWidth + channel * x + 2] = bgr.z;
}
Most of the implementation is from https://github.com/soulsheng/ResizeNN/blob/master/resizeCUDA/resizeNN.cu
Any idea?
Maybe you are observing an uninitialized-memory problem.
As I understand your code, the out_data_host allocation is too big:
new float[sizeof(float) * channel * outH * outW];
should be
new float[channel * outH * outW]
Then out_buffer is uninitialized: add a cudaMemset after the cudaMalloc line.
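For example (a sketch, reusing the sizes and the CUDA_CHECK macro from the question):
CUDA_CHECK(cudaMalloc(&out_buffer, sizeof(float) * channel * outH * outW));
CUDA_CHECK(cudaMemset(out_buffer, 0, sizeof(float) * channel * outH * outW)); // zero-fill so unwritten pixels are defined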
To simplify your code: since you already use OpenCV to load images, why don't you use OpenCV to resize your images too?
cv::resize // Host side method is probably better since you'll have less data copied through PCI-Express
// or
cv::cuda::resize
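For instance, a minimal host-side sketch with nearest-neighbour interpolation, reusing image, outW and outH from the question:
cv::Mat resized;
cv::resize(image, resized, cv::Size(outW, outH), 0, 0, cv::INTER_NEAREST);
resized.convertTo(resized, CV_32FC3); // match the float output the question builds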
It took me around two days to figure out a solution to this problem. Basically, I was building a GPU-based image preprocessing pipeline for my project. Here's the custom CUDA kernel.
For grayscale image resizing, change channel from 3 to 1 and it should work.
__global__ void resize_kernel( real* pIn, real* pOut, int widthIn, int heightIn, int widthOut, int heightOut)
{
int i = blockDim.y * blockIdx.y + threadIdx.y;
int j = blockDim.x * blockIdx.x + threadIdx.x;
int channel = 3;
if( i < heightOut && j < widthOut )
{
int iIn = i * heightIn / heightOut;
int jIn = j * widthIn / widthOut;
for(int c = 0; c < channel; c++)
pOut[ (i*widthOut + j)*channel + c ] = pIn[ (iIn*widthIn + jIn)*channel + c ];
}
}
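A minimal launch sketch for this kernel (assuming real is float, and pIn/pOut are device buffers already allocated and filled):
typedef float real; // assumption: the 'real' used by the kernel above
dim3 threads(16, 16);
dim3 blocks((widthOut + threads.x - 1) / threads.x,
            (heightOut + threads.y - 1) / threads.y);
resize_kernel<<<blocks, threads>>>(pIn, pOut, widthIn, heightIn, widthOut, heightOut);
cudaDeviceSynchronize(); // wait for the resize to finish before copying the result back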
I have created a polygon on a map from a set of coordinates.
I need help making a buffered polygon at a given distance outside the original polygon's border.
So what I need is a method implementing such an algorithm: it takes a set of coordinates as input and returns the buffered set of coordinates as output.
I tried to achieve this with the ArcGIS library for iOS and the bufferGeometry method of AGSGeometryEngine, but that is tightly coupled and only works with their GIS map, while I am using Mapbox, which is a different map. So I want a generic method that solves my problem independently of the map.
The solution of @Ravikant Paudel, though comprehensive, didn't work for me, so I have implemented the approach myself.
I also implemented the approach in Kotlin and am adding it here so that someone else facing a similar problem will find it helpful.
Approach:
Find the angle of the angle bisector theta for every vertex of the polygon.
Draw a circle with radius = bufferedDistance / sin(angleBisectorTheta) centered at the vertex.
Find the intersections of the circle and the angle bisector.
Of the 2 intersection points, the one inside the polygon gives you the buffered vertex for the shrunk polygon, and the outside point gives the buffered (expanded) polygon.
This approach does not account for the corner cases in which both points somehow fall inside or outside the polygon, in which case the resulting buffered polygon will be malformed.
Code:
private fun computeAngleBisectorTheta(
prevLatLng: LatLng,
currLatLng: LatLng,
nextLatLng: LatLng
): Double {
var phiBisector = 0.0
try {
val aPrime = getDeltaPrimeVector(prevLatLng, currLatLng)
val cPrime = getDeltaPrimeVector(nextLatLng, currLatLng)
val thetaA = atan2(aPrime[1], aPrime[0])
val thetaC = atan2(cPrime[1], cPrime[0])
phiBisector = (thetaA + thetaC) / 2.0
} catch (e: Exception) {
logger.error("[Error] in computeAngleBisectorSlope: $e")
}
return phiBisector
}
private fun getDeltaPrimeVector(
aLatLng: LatLng,
bLatLng: LatLng
): ArrayList<Double> {
val arrayList: ArrayList<Double> = ArrayList<Double>(2)
try {
val aX = convertToXY(aLatLng.latitude)
val aY = convertToXY(aLatLng.longitude)
val bX = convertToXY(bLatLng.latitude)
val bY = convertToXY(bLatLng.longitude)
arrayList.add((aX - bX))
arrayList.add((aY - bY))
} catch (e: Exception) {
logger.error("[Error] in getDeltaPrimeVector: $e")
}
return arrayList
}
private fun convertToXY(coordinate: Double) =
EARTH_RADIUS * toRad(coordinate)
private fun convertToLatLngfromXY(coordinate: Double) =
toDegrees(coordinate / EARTH_RADIUS)
private fun computeBufferedVertices(
angle: Double, bufDis: Int,
centerLatLng: LatLng
): ArrayList<LatLng> {
var results = ArrayList<LatLng>()
try {
val distance = bufDis / sin(angle)
var slope = tan(angle)
var inverseSlopeSquare = sqrt(1 + slope * slope * 1.0)
var distanceByInverseSlopeSquare = distance / inverseSlopeSquare
var slopeIntoDistanceByInverseSlopeSquare = slope * distanceByInverseSlopeSquare
var p1X: Double = convertToXY(centerLatLng.latitude) + distanceByInverseSlopeSquare
var p1Y: Double =
convertToXY(centerLatLng.longitude) + slopeIntoDistanceByInverseSlopeSquare
var p2X: Double = convertToXY(centerLatLng.latitude) - distanceByInverseSlopeSquare
var p2Y: Double =
convertToXY(centerLatLng.longitude) - slopeIntoDistanceByInverseSlopeSquare
val tempLatLng1 = LatLng(convertToLatLngfromXY(p1X), convertToLatLngfromXY(p1Y))
results.add(tempLatLng1)
val tempLatLng2 = LatLng(convertToLatLngfromXY(p2X), convertToLatLngfromXY(p2Y))
results.add(tempLatLng2)
} catch (e: Exception) {
logger.error("[Error] in computeBufferedVertices: $e")
}
return results
}
private fun getVerticesOutsidePolygon(
verticesArray: ArrayList<LatLng>,
polygon: ArrayList<LatLng>
): LatLng {
if (isPointInPolygon(
verticesArray[0].latitude,
verticesArray[0].longitude,
polygon
)
) {
if (isPointInPolygon(
verticesArray[1].latitude,
verticesArray[1].longitude,
polygon
)
) {
logger.error("[ERROR] Malformed polygon! Both Vertices are inside the polygon! $verticesArray")
} else {
return verticesArray[1]
}
} else {
if (PolygonGeofenceHelper.isPointInPolygon(
verticesArray[1].latitude,
verticesArray[1].longitude,
polygon
)
) {
return verticesArray[0]
} else {
logger.error("[ERROR] Malformed polygon! Both Vertices are outside the polygon!: $verticesArray")
}
}
//returning a vertex anyway because there is no fallback policy designed for when both vertices are inside or outside the polygon
return verticesArray[0]
}
private fun toRad(angle: Double): Double {
return angle * Math.PI / 180
}
private fun toDegrees(radians: Double): Double {
return radians * 180 / Math.PI
}
private fun getVerticesInsidePolygon(
verticesArray: ArrayList<LatLng>,
polygon: ArrayList<LatLng>
): LatLng {
if (isPointInPolygon(
verticesArray[0].latitude,
verticesArray[0].longitude,
polygon
)
) {
if (isPointInPolygon(
verticesArray[1].latitude,
verticesArray[1].longitude,
polygon
)
) {
logger.error("[ERROR] Malformed polygon! Both Vertices are inside the polygon! $verticesArray")
} else {
return verticesArray[0]
}
} else {
if (PolygonGeofenceHelper.isPointInPolygon(
verticesArray[1].latitude,
verticesArray[1].longitude,
polygon
)
) {
return verticesArray[1]
} else {
logger.error("[ERROR] Malformed polygon! Both Vertices are outside the polygon!: $verticesArray")
}
}
//returning a vertex anyway because there is no fallback policy designed for when both vertices are inside or outside the polygon
return LatLng(0.0, 0.0)
}
fun getBufferedPolygon(
polygon: ArrayList<LatLng>,
bufferDistance: Int,
isOutside: Boolean
): ArrayList<LatLng> {
var bufferedPolygon = ArrayList<LatLng>()
var isBufferedPolygonMalformed = false
try {
for (i in 0 until polygon.size) {
val prevLatLng: LatLng = polygon[if (i - 1 < 0) polygon.size - 1 else i - 1]
val centerLatLng: LatLng = polygon[i]
val nextLatLng: LatLng = polygon[if (i + 1 == polygon.size) 0 else i + 1]
val computedVertices =
computeBufferedVertices(
computeAngleBisectorTheta(
prevLatLng, centerLatLng, nextLatLng
), bufferDistance, centerLatLng
)
val latLng = if (isOutside) {
getVerticesOutsidePolygon(
computedVertices,
polygon
)
} else {
getVerticesInsidePolygon(
computedVertices,
polygon
)
}
if (latLng.latitude == 0.0 && latLng.longitude == 0.0) {
isBufferedPolygonMalformed = true
break
}
bufferedPolygon.add(latLng)
}
if (isBufferedPolygonMalformed) {
bufferedPolygon = polygon
logger.error("[Error] Polygon generated is malformed returning the same polygon: $polygon , $bufferDistance, $isOutside")
}
} catch (e: Exception) {
logger.error("[Error] in getBufferedPolygon: $e")
}
return bufferedPolygon
}
You'll need to pass an array of the polygon's points and the buffer distance; the third param selects the outside or the inside buffer. (Note: I am assuming that the vertices in this list are adjacent to each other.)
I have tried to keep this answer as comprehensive as possible. Please feel free to suggest any improvements or a better approach.
You can find the detailed math behind the above code on my portfolio page:
Finding the angle bisector.
Approximating latitude and longitude to a 2D Cartesian coordinate system.
To check whether a point is inside a polygon I am using the approach mentioned in this GeeksforGeeks article.
I had the same problem in my app and finally found the solution with the help of this site.
I am an Android developer and my code may not be directly useful to you, but the core concept is the same.
First we need to find the bearing of a line from two LatLng points (I have done this with the computeDistanceAndBearing(double lat1, double lon1, double lat2, double lon2) function).
Now, to buffer a given point we need the buffering distance, the LatLng point and the bearing (which I obtain from computeDistanceAndBearing). From a single LatLng point we get two points by projecting it along the bearing by a certain distance (I have done this with the computeDestinationAndBearing(double lat1, double lon1, double brng, double dist) function).
Now we need to find the intersection point of the two projected lines to get the buffer we want. For this, remember to pair the newly obtained point of one line with the bearing of the other line, and vice versa. This gives the new intersection point with the buffering you want (I have done this in my function computeIntersectionPoint(LatLng p1, double brng1, LatLng p2, double brng2)).
Do this for all the polygon points and you get the new points, which you join to get the buffer.
This is the way I have done it in my Android location app.
Here is my code:
//computeDistanceAndBearing(double lat1, double lon1, double lat2, double lon2)
public static double[] computeDistanceAndBearing(double lat1, double lon1,
double lat2, double lon2) {
// Based on http://www.ngs.noaa.gov/PUBS_LIB/inverse.pdf
// using the "Inverse Formula" (section 4)
double results[] = new double[3];
int MAXITERS = 20;
// Convert lat/long to radians
lat1 *= Math.PI / 180.0;
lat2 *= Math.PI / 180.0;
lon1 *= Math.PI / 180.0;
lon2 *= Math.PI / 180.0;
double a = 6378137.0; // WGS84 major axis
double b = 6356752.3142; // WGS84 semi-minor axis
double f = (a - b) / a;
double aSqMinusBSqOverBSq = (a * a - b * b) / (b * b);
double L = lon2 - lon1;
double A = 0.0;
double U1 = Math.atan((1.0 - f) * Math.tan(lat1));
double U2 = Math.atan((1.0 - f) * Math.tan(lat2));
double cosU1 = Math.cos(U1);
double cosU2 = Math.cos(U2);
double sinU1 = Math.sin(U1);
double sinU2 = Math.sin(U2);
double cosU1cosU2 = cosU1 * cosU2;
double sinU1sinU2 = sinU1 * sinU2;
double sigma = 0.0;
double deltaSigma = 0.0;
double cosSqAlpha = 0.0;
double cos2SM = 0.0;
double cosSigma = 0.0;
double sinSigma = 0.0;
double cosLambda = 0.0;
double sinLambda = 0.0;
double lambda = L; // initial guess
for (int iter = 0; iter < MAXITERS; iter++) {
double lambdaOrig = lambda;
cosLambda = Math.cos(lambda);
sinLambda = Math.sin(lambda);
double t1 = cosU2 * sinLambda;
double t2 = cosU1 * sinU2 - sinU1 * cosU2 * cosLambda;
double sinSqSigma = t1 * t1 + t2 * t2; // (14)
sinSigma = Math.sqrt(sinSqSigma);
cosSigma = sinU1sinU2 + cosU1cosU2 * cosLambda; // (15)
sigma = Math.atan2(sinSigma, cosSigma); // (16)
double sinAlpha = (sinSigma == 0) ? 0.0 : cosU1cosU2 * sinLambda
/ sinSigma; // (17)
cosSqAlpha = 1.0 - sinAlpha * sinAlpha;
cos2SM = (cosSqAlpha == 0) ? 0.0 : cosSigma - 2.0 * sinU1sinU2
/ cosSqAlpha; // (18)
double uSquared = cosSqAlpha * aSqMinusBSqOverBSq; // defn
A = 1 + (uSquared / 16384.0) * // (3)
(4096.0 + uSquared * (-768 + uSquared * (320.0 - 175.0 * uSquared)));
double B = (uSquared / 1024.0) * // (4)
(256.0 + uSquared * (-128.0 + uSquared * (74.0 - 47.0 * uSquared)));
double C = (f / 16.0) * cosSqAlpha * (4.0 + f * (4.0 - 3.0 * cosSqAlpha)); // (10)
double cos2SMSq = cos2SM * cos2SM;
deltaSigma = B
* sinSigma
* // (6)
(cos2SM + (B / 4.0)
* (cosSigma * (-1.0 + 2.0 * cos2SMSq) - (B / 6.0) * cos2SM
* (-3.0 + 4.0 * sinSigma * sinSigma)
* (-3.0 + 4.0 * cos2SMSq)));
lambda = L
+ (1.0 - C)
* f
* sinAlpha
* (sigma + C * sinSigma
* (cos2SM + C * cosSigma * (-1.0 + 2.0 * cos2SM * cos2SM))); // (11)
double delta = (lambda - lambdaOrig) / lambda;
if (Math.abs(delta) < 1.0e-12) {
break;
}
}
double distance = (b * A * (sigma - deltaSigma));
results[0] = distance;
if (results.length > 1) {
double initialBearing = Math.atan2(cosU2 * sinLambda, cosU1 * sinU2
- sinU1 * cosU2 * cosLambda);
initialBearing *= 180.0 / Math.PI;
results[1] = initialBearing;
if (results.length > 2) {
double finalBearing = Math.atan2(cosU1 * sinLambda, -sinU1 * cosU2
+ cosU1 * sinU2 * cosLambda);
finalBearing *= 180.0 / Math.PI;
results[2] = finalBearing;
}
}
return results;
}
//computeDestinationAndBearing(double lat1, double lon1,double brng, double dist)
public static double[] computeDestinationAndBearing(double lat1, double lon1,
double brng, double dist) {
double results[] = new double[3];
double a = 6378137, b = 6356752.3142, f = 1 / 298.257223563; // WGS-84 ellipsoid
double s = dist;
double alpha1 = toRad(brng);
double sinAlpha1 = Math.sin(alpha1);
double cosAlpha1 = Math.cos(alpha1);
double tanU1 = (1 - f) * Math.tan(toRad(lat1));
double cosU1 = 1 / Math.sqrt((1 + tanU1 * tanU1)), sinU1 = tanU1 * cosU1;
double sigma1 = Math.atan2(tanU1, cosAlpha1);
double sinAlpha = cosU1 * sinAlpha1;
double cosSqAlpha = 1 - sinAlpha * sinAlpha;
double uSq = cosSqAlpha * (a * a - b * b) / (b * b);
double A = 1 + uSq / 16384
* (4096 + uSq * (-768 + uSq * (320 - 175 * uSq)));
double B = uSq / 1024 * (256 + uSq * (-128 + uSq * (74 - 47 * uSq)));
double sinSigma = 0, cosSigma = 0, deltaSigma = 0, cos2SigmaM = 0;
double sigma = s / (b * A), sigmaP = 2 * Math.PI;
while (Math.abs(sigma - sigmaP) > 1e-12) {
cos2SigmaM = Math.cos(2 * sigma1 + sigma);
sinSigma = Math.sin(sigma);
cosSigma = Math.cos(sigma);
deltaSigma = B
* sinSigma
* (cos2SigmaM + B
/ 4
* (cosSigma * (-1 + 2 * cos2SigmaM * cos2SigmaM) - B / 6
* cos2SigmaM * (-3 + 4 * sinSigma * sinSigma)
* (-3 + 4 * cos2SigmaM * cos2SigmaM)));
sigmaP = sigma;
sigma = s / (b * A) + deltaSigma;
}
double tmp = sinU1 * sinSigma - cosU1 * cosSigma * cosAlpha1;
double lat2 = Math.atan2(sinU1 * cosSigma + cosU1 * sinSigma * cosAlpha1,
(1 - f) * Math.sqrt(sinAlpha * sinAlpha + tmp * tmp));
double lambda = Math.atan2(sinSigma * sinAlpha1, cosU1 * cosSigma - sinU1
* sinSigma * cosAlpha1);
double C = f / 16 * cosSqAlpha * (4 + f * (4 - 3 * cosSqAlpha));
double L = lambda
- (1 - C)
* f
* sinAlpha
* (sigma + C * sinSigma
* (cos2SigmaM + C * cosSigma * (-1 + 2 * cos2SigmaM * cos2SigmaM)));
double lon2 = (toRad(lon1) + L + 3 * Math.PI) % (2 * Math.PI) - Math.PI; // normalise to -180...+180
double revAz = Math.atan2(sinAlpha, -tmp); // final bearing, if required
results[0] = toDegrees(lat2);
results[1] = toDegrees(lon2);
results[2] = toDegrees(revAz);
return results;
}
private static double toRad(double angle) {
return angle * Math.PI / 180;
}
private static double toDegrees(double radians) {
return radians * 180 / Math.PI;
}
//computeIntersectionPoint(LatLng p1, double brng1, LatLng p2, double brng2)
public static LatLng computeIntersectionPoint(LatLng p1, double brng1, LatLng p2, double brng2) {
double lat1 = toRad(p1.latitude), lng1 = toRad(p1.longitude);
double lat2 = toRad(p2.latitude), lng2 = toRad(p2.longitude);
double brng13 = toRad(brng1), brng23 = toRad(brng2);
double dlat = lat2 - lat1, dlng = lng2 - lng1;
double delta12 = 2 * Math.asin(Math.sqrt(Math.sin(dlat / 2) * Math.sin(dlat / 2)
+ Math.cos(lat1) * Math.cos(lat2) * Math.sin(dlng / 2) * Math.sin(dlng / 2)));
if (delta12 == 0) return null;
double initBrng1 = Math.acos((Math.sin(lat2) - Math.sin(lat1) * Math.cos(delta12)) / (Math.sin(delta12) * Math.cos(lat1)));
double initBrng2 = Math.acos((Math.sin(lat1) - Math.sin(lat2) * Math.cos(delta12)) / (Math.sin(delta12) * Math.cos(lat2)));
double brng12 = Math.sin(lng2 - lng1) > 0 ? initBrng1 : 2 * Math.PI - initBrng1;
double brng21 = Math.sin(lng2 - lng1) > 0 ? 2 * Math.PI - initBrng2 : initBrng2;
double alpha1 = (brng13 - brng12 + Math.PI) % (2 * Math.PI) - Math.PI;
double alpha2 = (brng21 - brng23 + Math.PI) % (2 * Math.PI) - Math.PI;
double alpha3 = Math.acos(-Math.cos(alpha1) * Math.cos(alpha2) + Math.sin(alpha1) * Math.sin(alpha2) * Math.cos(delta12));
double delta13 = Math.atan2(Math.sin(delta12) * Math.sin(alpha1) * Math.sin(alpha2), Math.cos(alpha2) + Math.cos(alpha1) * Math.cos(alpha3));
double lat3 = Math.asin(Math.sin(lat1) * Math.cos(delta13) + Math.cos(lat1) * Math.sin(delta13) * Math.cos(brng13));
double dlng13 = Math.atan2(Math.sin(brng13) * Math.sin(delta13) * Math.cos(lat1), Math.cos(delta13) - Math.sin(lat1) * Math.sin(lat3));
double lng3 = lng1 + dlng13;
return new LatLng(toDegrees(lat3), (toDegrees(lng3) + 540) % 360 - 180);
}
I suggest you go through the site above and absorb the concepts, as I did.
Hope this helps. I know the code is not iOS, but the concept is the same; I built my project by porting the JavaScript code.
Cheers !!!
My requirement was something similar to this. I ended up writing my own algorithm for it: https://github.com/RanaRanvijaySingh/PolygonBuffer
All you need are these lines:
double distance = 0.0001;
List bufferedPolygonList = AreaBuffer.buffer(pointList, distance);
It gives you a list of buffered polygon points at a given distance from your original polygon.
I would recommend the Turf.js library for buffering and many basic GIS operations. You can retrieve each edge from the returned path. For geometry buffering it is easy to use, quite lightweight, and it has worked without any problem in my applications using MapBox.js or Leaflet.
More details: Turf.js Buffer
But if you are looking for a geodesic distance buffer, that could be a problem; I would use the ArcGIS JavaScript API.
Take a look at Boost. It is a big C++ library, and you may find source code for almost everything there, including buffer methods with different join types such as miter, round and square.
Just install the latest version of Boost (1.58.0 as of this writing) and look at the buffer strategies under boost/geometry/strategies/cartesian/ (square, miter, round).
Here is a good document.
You need to convert your geodetic coordinates (lat/long) to Cartesian (x/y), run the Boost buffer, and reverse the conversion; you do not need ArcGIS or any other GIS library at all.
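A minimal sketch with Boost.Geometry's buffer algorithm (available since Boost 1.58; coordinates here are assumed to be already projected to Cartesian x/y):
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>
#include <boost/geometry/geometries/multi_polygon.hpp>

namespace bg = boost::geometry;
using Point = bg::model::d2::point_xy<double>;
using Polygon = bg::model::polygon<Point>;

int main() {
    // A simple square, wound clockwise as Boost.Geometry expects by default.
    Polygon poly;
    bg::read_wkt("POLYGON((0 0, 0 100, 100 100, 100 0, 0 0))", poly);

    // Buffer outward by 10 units with mitred joins; the result is a multi-polygon.
    bg::model::multi_polygon<Polygon> out;
    bg::buffer(poly, out,
               bg::strategy::buffer::distance_symmetric<double>(10.0),
               bg::strategy::buffer::side_straight(),
               bg::strategy::buffer::join_miter(),
               bg::strategy::buffer::end_flat(),
               bg::strategy::buffer::point_square());
    return 0;
}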
I've got the following code to do a bilinear interpolation from a matrix of 2D vectors; each cell has the x and y values of a vector, and the function receives the k and l indices of the bottom-left nearest position in the matrix.
// p[1] returns the interpolated values
// fieldLinePointsVerts is the raw data array of fieldNumHorizontalPoints x fieldNumVerticalPoints
// only fieldNumHorizontalPoints matters for determining the index into the raw data
// k and l are the horizontal and vertical indices of the point just below p[0] in the raw data
void interpolate( vertex2d* p, vertex2d* fieldLinePointsVerts, int fieldNumHorizontalPoints, int k, int l ) {
int index = (l * fieldNumHorizontalPoints + k) * 2;
vertex2d p11;
p11.x = fieldLinePointsVerts[index].x;
p11.y = fieldLinePointsVerts[index].y;
vertex2d q11;
q11.x = fieldLinePointsVerts[index+1].x;
q11.y = fieldLinePointsVerts[index+1].y;
index = (l * fieldNumHorizontalPoints + k + 1) * 2;
vertex2d q21;
q21.x = fieldLinePointsVerts[index+1].x;
q21.y = fieldLinePointsVerts[index+1].y;
index = ( (l + 1) * fieldNumHorizontalPoints + k) * 2;
vertex2d q12;
q12.x = fieldLinePointsVerts[index+1].x;
q12.y = fieldLinePointsVerts[index+1].y;
index = ( (l + 1) * fieldNumHorizontalPoints + k + 1 ) * 2;
vertex2d p22;
p22.x = fieldLinePointsVerts[index].x;
p22.y = fieldLinePointsVerts[index].y;
vertex2d q22;
q22.x = fieldLinePointsVerts[index+1].x;
q22.y = fieldLinePointsVerts[index+1].y;
float fx = 1.0 / (p22.x - p11.x);
float fx1 = (p22.x - p[0].x) * fx;
float fx2 = (p[0].x - p11.x) * fx;
vertex2d r1;
r1.x = fx1 * q11.x + fx2 * q21.x;
r1.y = fx1 * q11.y + fx2 * q21.y;
vertex2d r2;
r2.x = fx1 * q12.x + fx2 * q22.x;
r2.y = fx1 * q12.y + fx2 * q22.y;
float fy = 1.0 / (p22.y - p11.y);
float fy1 = (p22.y - p[0].y) * fy;
float fy2 = (p[0].y - p11.y) * fy;
p[1].x = fy1 * r1.x + fy2 * r2.x;
p[1].y = fy1 * r1.y + fy2 * r2.y;
}
Currently this code needs to run every single frame on old iOS devices, say devices with ARMv6 processors.
I've taken the numeric sub-indices from Wikipedia's equations: http://en.wikipedia.org/wiki/Bilinear_interpolation
I'd appreciate any comments on optimization for performance, even plain asm code.
This code should not be causing your slowdown if it's only run once per frame. However, if it's run multiple times per frame, it easily could be.
I'd run your app with a profiler to see where the true performance problem lies.
There is some room for optimization here: (a) certain index calculations could be factored out and reused in subsequent calculations; (b) you could dereference your fieldLinePointsVerts array to a pointer once and reuse it, instead of indexing it twice per index.
But in general those things won't help a great deal unless this function is being called many, many times per frame, in which case every little thing helps. A sketch of both ideas follows.
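A minimal sketch of (a) and (b), assuming the same vertex2d layout and index math as the question:
// Factor the row offset out once and walk the array through a pointer,
// instead of recomputing (l * fieldNumHorizontalPoints + k) for every sample.
const int rowStride = fieldNumHorizontalPoints * 2; // vertex2d elements per row
const vertex2d* row0 = fieldLinePointsVerts + (l * fieldNumHorizontalPoints + k) * 2;
const vertex2d* row1 = row0 + rowStride;
vertex2d p11 = row0[0], q11 = row0[1], q21 = row0[3];
vertex2d q12 = row1[1], p22 = row1[2], q22 = row1[3];
// ...then proceed with the fx/fy blending exactly as before.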
I converted an RGB matrix to a YUV matrix using these formulas:
Y = (0.257 * R) + (0.504 * G) + (0.098 * B) + 16
Cr = V = (0.439 * R) - (0.368 * G) - (0.071 * B) + 128
Cb = U = -(0.148 * R) - (0.291 * G) + (0.439 * B) + 128
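In code, that per-pixel conversion is simply (a sketch, assuming 8-bit R, G and B in 0..255):
int Y = (int)( 0.257 * R + 0.504 * G + 0.098 * B + 16);
int V = (int)( 0.439 * R - 0.368 * G - 0.071 * B + 128); // Cr
int U = (int)(-0.148 * R - 0.291 * G + 0.439 * B + 128); // Cb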
I then did 4:2:0 chroma subsampling on the matrix. I think I did this correctly: I took 2x2 submatrices from the YUV matrix, ordered the values from least to greatest, and took the average of the 2 middle values.
I then used this formula from Wikipedia to access the Y, U, and V planes:
size.total = size.width * size.height;
y = yuv[position.y * size.width + position.x];
u = yuv[(position.y / 2) * (size.width / 2) + (position.x / 2) + size.total];
v = yuv[(position.y / 2) * (size.width / 2) + (position.x / 2) + size.total + (size.total / 4)];
I'm using OpenCV, so I tried to interpret this as best I could:
y = src.data[(i*channels)+(j*step)];
u = src.data[(j%4)*step + ((i%2)*channels+1) + max];
v = src.data[(j%4)*step + ((i%2)*channels+2) + max + (max%4)];
src is the YUV subsampled matrix. Did I interpret that formula correctly?
Here is how I converted the colours back to RGB:
bgr.data[(i*channels)+(j*step)] = (1.164 * (y - 16)) + (2.018 * (u - 128)); // B
bgr.data[(i*channels+1)+(j*step)] = (1.164 * (y - 16)) - (0.813 * (v - 128)) - (0.391 * (u - 128)); // G
bgr.data[(i*channels+2)+(j*step)] = (1.164 * (y - 16)) + (1.596 * (v - 128)); // R
The problem is my image does not return to its original colours.
Here are the images for reference:
http://i.stack.imgur.com/vQkpT.jpg (Subsampled)
http://i.stack.imgur.com/Oucc5.jpg (Output)
I see that I should be converting from YUV444 to RGB now, but I don't quite understand what the clip function does in the sample I found on the wiki.
C = Y' − 16
D = U − 128
E = V − 128
R = clip(( 298 * C + 409 * E + 128) >> 8)
G = clip(( 298 * C - 100 * D - 208 * E + 128) >> 8)
B = clip(( 298 * C + 516 * D + 128) >> 8)
Does the >> mean I should shift bits?
I'd appreciate any help/comments! Thanks
Update
Tried doing the YUV444 conversion but it just made my image appear in shades of green.
y = src.data[(i*channels)+(j*step)];
u = src.data[(j%4)*step + ((i%2)*channels+1) + max];
v = src.data[(j%4)*step + ((i%2)*channels+2) + max + (max%4)];
c = y - 16;
d = u - 128;
e = v - 128;
bgr.data[(i*channels+2)+(j*step)] = clip((298*c + 409*e + 128)/256);
bgr.data[(i*channels+1)+(j*step)] = clip((298*c - 100*d - 208*e + 128)/256);
bgr.data[(i*channels)+(j*step)] = clip((298*c + 516*d + 128)/256);
And my clip function:
int clip(double value)
{
return (value > 255) ? 255 : (value < 0) ? 0 : value;
}
I had the same problem when decoding WebM frames to RGB. I finally found the solution after hours of searching.
Take SCALEYUV function from here: http://www.telegraphics.com.au/svn/webpformat/trunk/webpformat.h
Then to decode the RGB data from YUV, see this file:
http://www.telegraphics.com.au/svn/webpformat/trunk/decode.c
Search for "py = img->planes[0];", there are two algorithms to convert the data. I only tried the simple one (after "// then fall back to cheaper method.").
Comments in the code also refer to this page: http://www.poynton.com/notes/colour_and_gamma/ColorFAQ.html#RTFToC30
Works great for me.
You won't get back exactly the same image, since subsampling UV compresses the image.
You don't say if the result is completely wrong (i.e. an error) or just not perfect.
R = clip(( 298 * C + 409 * E + 128) >> 8)
G = clip(( 298 * C - 100 * D - 208 * E + 128) >> 8)
B = clip(( 298 * C + 516 * D + 128) >> 8)
The >> 8 is a bit shift, equivalent to dividing by 256. This is just to let you do all the arithmetic in integer units rather than floating point, for speed.
I was experimenting with the formulas present on the wiki and found that this mixed formula:
byte c = (byte) (y - 16);
byte d = (byte) (u - 128);
byte e = (byte) (v - 128);
byte r = (byte) (c + (1.370705 * (e)));
byte g = (byte) (c - (0.698001 * (d)) - (0.337633 * (e)));
byte b = (byte) (c + (1.732446 * (d)));
produces "better" errors for my images, simply makes some black points pure green (i.e. rgb = 0x00FF00) which is better for detection and correction ...
wiki source: https://en.wikipedia.org/wiki/YUV#Y.27UV420p_.28and_Y.27V12_or_YV12.29_to_RGB888_conversion
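Note that the raw (byte) casts above wrap around on overflow and underflow; here is a sketch of the same mixed formula with explicit clamping before the cast (clamp8 is a hypothetical helper, not from the wiki):
#include <stdint.h>

/* Hypothetical helper: clamp to 0..255 before narrowing, instead of letting the cast wrap. */
static uint8_t clamp8(double v)
{
    return (uint8_t)(v < 0.0 ? 0.0 : (v > 255.0 ? 255.0 : v));
}

/* Same mixed formula as above, with clamping. */
static void yuv_to_rgb_mixed(int y, int u, int v, uint8_t *r, uint8_t *g, uint8_t *b)
{
    int c = y - 16;
    int d = u - 128;
    int e = v - 128;
    *r = clamp8(c + 1.370705 * e);
    *g = clamp8(c - 0.698001 * d - 0.337633 * e);
    *b = clamp8(c + 1.732446 * d);
}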