Calculate Distance Formula while using a Scale - iOS

I am trying to calculate a distance based upon the scale my user draws in.
My user can snap to other points in the game by drawing a line on a grid.
The grid size is a numerical value the user can select: 8, 16, 24, or 32.
The scale can be changed by selecting a whole number (1-10) and a fractional part (0, 0.5, 0.75, or 0.125).
The user can also choose whether the distance is shown in metric or imperial units.
I am having difficulty producing the correct distance once the scale is changed.
Can anyone tell me where I have gone wrong in my math?
- (double) distanceFormula : (float) x1 : (float) y1 : (float) x2 : (float) y2 {
    // 1 meter * 3.280839895 feet => feet
    // 1 foot * (1 meter / 3.280839895 feet) => meters
    /* Use the Pythagorean theorem to calculate distance */
    double dx = (x2 - x1);
    double dy = (y2 - y1);
    double dist = sqrt(dx * dx + dy * dy);
    NSLog(@"Raw Distance %f", dist);
    return dist;
}
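For context, here is a hypothetical call site (the point values are made up, not from my app) showing how the raw distance feeds the formatter below:
    CGPoint start = CGPointMake(0, 0), end = CGPointMake(8, 0); // e.g. one grid square apart
    double raw = [self distanceFormula:start.x :start.y :end.x :end.y];
    NSString *label = [self returnDistanceAsString:raw];
    NSLog(@"%@", label);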
- (NSString *) returnDistanceAsString : (double) distance {
    NSString * string;
    double d = distance / [self returnGridSize];
    double scale = [self returnScaleWhole] + [self returnScaleSub];
    if ([self returnUseMetric]) {
        // METRIC
        int tempCentim = (d * kCMConst) / 2;
        if (tempCentim < 1) {
            string = [NSString stringWithFormat:@"%d mm", tempCentim];
        } else if (tempCentim > 1) {
            string = [NSString stringWithFormat:@"%d mm", tempCentim];
        } else if (tempCentim > 100) {
            // eventually going to add cm mm
        }
    } else {
        // IMPERIAL
        int RL = d * scale;
        int feet = RL / 12.0;
        int inches = (int)RL % 12;
        string = [NSString stringWithFormat:@"%i' %i\"", feet, inches];
    }
    return string;
}

The main questionable thing I noticed is that, in metric, you're neglecting to multiply by scale, but you are multiplying by kCMConst/2. You don't explain what kCMConst is, but it seems unlikely to be a value that needs to be divided by 2.
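Something like this sketch (assuming kCMConst converts one scaled grid unit to centimetres; adjust it to whatever kCMConst actually represents) would mirror what the imperial branch does:
    // Sketch only, not your code: apply scale exactly as the imperial
    // branch does, and drop the unexplained "/ 2".
    double scaled = d * scale;                  // grid units -> scale units
    int tempCentim = (int)(scaled * kCMConst);  // assumed: scale units -> cm
    if (tempCentim >= 100) {
        string = [NSString stringWithFormat:@"%d m %d cm", tempCentim / 100, tempCentim % 100];
    } else {
        string = [NSString stringWithFormat:@"%d cm", tempCentim];
    }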

Related

Google Maps heat map color by average weight

The Google Maps iOS SDK's heat map (more specifically, the Google-Maps-iOS-Utils framework) essentially decides the color to render an area in by calculating the density of the points in that area.
However, I would like to instead select the color based on the average weight or intensity of the points in that area.
From what I understand, this behavior is not built in (but who knows; the documentation sort of sucks). The color-picking is decided, I think, in /src/Heatmap/GMUHeatmapTileLayer.m. This is a relatively short file, but I am not very well versed in Objective-C, so I am having some difficulty figuring out what does what. I think -tileForX:y:zoom: in GMUHeatmapTileLayer.m is the important method, but I'm not sure, and even if it is, I don't quite know how to modify it. Towards the end of this method, the data is 'convolved', first horizontally and then vertically. I think this is where the intensities are actually calculated. Unfortunately, I do not know exactly what it's doing, and I am afraid of changing things because I am shaky in Objective-C. This is what the convolve part of the method looks like:
- (UIImage *)tileForX:(NSUInteger)x y:(NSUInteger)y zoom:(NSUInteger)zoom {
    // ...
    // Convolve data.
    int lowerLimit = (int)data->_radius;
    int upperLimit = paddedTileSize - (int)data->_radius - 1;
    // Convolve horizontally first.
    float *intermediate = calloc(paddedTileSize * paddedTileSize, sizeof(float));
    for (int y = 0; y < paddedTileSize; y++) {
        for (int x = 0; x < paddedTileSize; x++) {
            float value = intensity[y * paddedTileSize + x];
            if (value != 0) {
                // convolve to x +/- radius bounded by the limit we care about.
                int start = MAX(lowerLimit, x - (int)data->_radius);
                int end = MIN(upperLimit, x + (int)data->_radius);
                for (int x2 = start; x2 <= end; x2++) {
                    float scaledKernel = value * [data->_kernel[x2 - x + data->_radius] floatValue];
                    // I THINK THIS IS WHERE I NEED TO MAKE THE CHANGE
                    intermediate[y * paddedTileSize + x2] += scaledKernel;
                    // ^
                }
            }
        }
    }
    free(intensity);
    // Convolve vertically to get final intensity.
    float *finalIntensity = calloc(kGMUTileSize * kGMUTileSize, sizeof(float));
    for (int x = lowerLimit; x <= upperLimit; x++) {
        for (int y = 0; y < paddedTileSize; y++) {
            float value = intermediate[y * paddedTileSize + x];
            if (value != 0) {
                int start = MAX(lowerLimit, y - (int)data->_radius);
                int end = MIN(upperLimit, y + (int)data->_radius);
                for (int y2 = start; y2 <= end; y2++) {
                    float scaledKernel = value * [data->_kernel[y2 - y + data->_radius] floatValue];
                    // I THINK THIS IS WHERE I NEED TO MAKE THE CHANGE
                    finalIntensity[(y2 - lowerLimit) * kGMUTileSize + x - lowerLimit] += scaledKernel;
                    // ^
                }
            }
        }
    }
    free(intermediate);
    // ...
}
This is the method where the intensities are calculated for each tile, right? If so, how can I change it to achieve my desired effect: averaged rather than summed intensities (and hence colors, which I think are proportional to intensity)?
So: how can I get averaged instead of summed intensities by modifying the framework?
I think you are on the right track. To calculate an average you divide the sum by the count. Since you already have the sums calculated, an easy solution would be to also save the count for each point. If I understand it correctly, this is what you have to do.
When allocating memory for the sums, also allocate memory for the counts:
// At this place
float *intermediate = calloc(paddedTileSize * paddedTileSize, sizeof(float));
// Add this line, calloc will initialize them to zero
int *counts = calloc(paddedTileSize * paddedTileSize, sizeof(int));
Then increase the count in each loop.
// Below this line (first loop)
intermediate[y * paddedTileSize + x2] += scaledKernel;
// Add this
counts[y * paddedTileSize + x2]++;
// And below this line (second loop)
finalIntensity[(y2 - lowerLimit) * kGMUTileSize + x - lowerLimit] += scaledKernel;
// Add this
counts[(y2 - lowerLimit) * kGMUTileSize + x - lowerLimit]++;
After the two loops you should have two arrays, one with your sums (finalIntensity) and one with your counts (counts). Now go through the values and calculate the averages.
// finalIntensity is kGMUTileSize x kGMUTileSize, so iterate over that grid
// (iterating over paddedTileSize here would read past the end of the buffer)
for (int y = 0; y < kGMUTileSize; y++) {
    for (int x = 0; x < kGMUTileSize; x++) {
        int n = y * kGMUTileSize + x;
        if (counts[n] != 0)
            finalIntensity[n] = finalIntensity[n] / counts[n];
    }
}
free(counts);
The finalIntensity should now contain your averages.
If you prefer, and the rest of the code makes it possible, you can skip the last loop and instead do the division when using the final intensity values. Just change any subsequent finalIntensity[n] to counts[n] == 0 ? finalIntensity[n] : finalIntensity[n] / counts[n].
I may have just solved the same issue for the Java version.
My problem was having a custom gradient with 12 different values, but my actual weighted data does not necessarily contain all intensity values from 1 to 12.
The problem is that the highest intensity value gets mapped to the highest color.
Also, 10 data points with intensity 1 that are close together will get the same color as a single point with intensity 12.
So the function where the tile gets created is a good starting point:
Java:
public Tile getTile(int x, int y, int zoom) {
    // ...
    // Quantize points
    int dim = TILE_DIM + mRadius * 2;
    double[][] intensity = new double[dim][dim];
    int[][] count = new int[dim][dim];
    for (WeightedLatLng w : points) {
        Point p = w.getPoint();
        int bucketX = (int) ((p.x - minX) / bucketWidth);
        int bucketY = (int) ((p.y - minY) / bucketWidth);
        intensity[bucketX][bucketY] += w.getIntensity();
        count[bucketX][bucketY]++;
    }
    // Quantize wraparound points (taking xOffset into account)
    for (WeightedLatLng w : wrappedPoints) {
        Point p = w.getPoint();
        int bucketX = (int) ((p.x + xOffset - minX) / bucketWidth);
        int bucketY = (int) ((p.y - minY) / bucketWidth);
        intensity[bucketX][bucketY] += w.getIntensity();
        count[bucketX][bucketY]++;
    }
    for (int bx = 0; bx < dim; bx++)
        for (int by = 0; by < dim; by++)
            if (count[bx][by] != 0)
                intensity[bx][by] /= count[bx][by];
    //...
I added a counter and count every addition to the intensities; after that I go through every intensity and calculate the average.
For C:
- (UIImage *)tileForX:(NSUInteger)x y:(NSUInteger)y zoom:(NSUInteger)zoom {
    //...
    // Quantize points.
    int paddedTileSize = kGMUTileSize + 2 * (int)data->_radius;
    float *intensity = calloc(paddedTileSize * paddedTileSize, sizeof(float));
    int *count = calloc(paddedTileSize * paddedTileSize, sizeof(int));
    for (GMUWeightedLatLng *item in points) {
        GQTPoint p = [item point];
        int x = (int)((p.x - minX) / bucketWidth);
        // Flip y axis as world space goes south to north, but tile content goes north to south.
        int y = (int)((maxY - p.y) / bucketWidth);
        // If the point is just on the edge of the query area, the bucketing could put it outside
        // bounds.
        if (x >= paddedTileSize) x = paddedTileSize - 1;
        if (y >= paddedTileSize) y = paddedTileSize - 1;
        intensity[y * paddedTileSize + x] += item.intensity;
        count[y * paddedTileSize + x]++;
    }
    for (GMUWeightedLatLng *item in wrappedPoints) {
        GQTPoint p = [item point];
        int x = (int)((p.x + wrappedPointsOffset - minX) / bucketWidth);
        // Flip y axis as world space goes south to north, but tile content goes north to south.
        int y = (int)((maxY - p.y) / bucketWidth);
        // If the point is just on the edge of the query area, the bucketing could put it outside
        // bounds.
        if (x >= paddedTileSize) x = paddedTileSize - 1;
        if (y >= paddedTileSize) y = paddedTileSize - 1;
        // For wrapped points, additional shifting risks bucketing slipping just outside due to
        // numerical instability.
        if (x < 0) x = 0;
        intensity[y * paddedTileSize + x] += item.intensity;
        count[y * paddedTileSize + x]++;
    }
    for (int i = 0; i < paddedTileSize * paddedTileSize; i++)
        if (count[i] != 0)
            intensity[i] /= count[i];
Next is the convolving.
What I did there is make sure that the calculated value does not go over the maximum in my data.
Java:
// Convolve it ("smoothen" it out)
double[][] convolved = convolve(intensity, mKernel, mMaxAverage);

// the mMaxAverage gets set here:
public void setWeightedData(Collection<WeightedLatLng> data) {
    // ...
    // Add points to quad tree
    for (WeightedLatLng l : mData) {
        mTree.add(l);
        mMaxAverage = Math.max(l.getIntensity(), mMaxAverage);
    }
    // ...

// And finally the convolve method:
static double[][] convolve(double[][] grid, double[] kernel, double max) {
    // ...
    intermediate[x2][y] += val * kernel[x2 - (x - radius)];
    if (intermediate[x2][y] > max) intermediate[x2][y] = max;
    // ...
    outputGrid[x - radius][y2 - radius] += val * kernel[y2 - (y - radius)];
    if (outputGrid[x - radius][y2 - radius] > max) outputGrid[x - radius][y2 - radius] = max;
For C:
// To get the maximum average you could do that here:
- (void)setWeightedData:(NSArray<GMUWeightedLatLng *> *)weightedData {
    _weightedData = [weightedData copy];
    for (GMUWeightedLatLng *dataPoint in _weightedData)
        _maxAverage = MAX(dataPoint.intensity, _maxAverage); // MAX(), since Math.max is Java
    // ...

// And then simply in the convolve section
intermediate[y * paddedTileSize + x2] += scaledKernel;
if (intermediate[y * paddedTileSize + x2] > _maxAverage)
    intermediate[y * paddedTileSize + x2] = _maxAverage;
// ...
finalIntensity[(y2 - lowerLimit) * kGMUTileSize + x - lowerLimit] += scaledKernel;
if (finalIntensity[(y2 - lowerLimit) * kGMUTileSize + x - lowerLimit] > _maxAverage)
    finalIntensity[(y2 - lowerLimit) * kGMUTileSize + x - lowerLimit] = _maxAverage;
And finally the coloring
Java:
// The maximum intensity is simply the size of my gradient colors array (or the starting points)
Bitmap bitmap = colorize(convolved, mColorMap, mGradient.mStartPoints.length);
For C:
// Generate coloring
// ...
float max = [data->_maxIntensities[zoom] floatValue];
max = _gradient.startPoints.count;
I did this in Java and it worked for me; I'm not sure about the Objective-C code though.
You will have to play around with the radius, and you could even edit the kernel, because I found that when I have a lot of homogeneous data (i.e. little variation in the intensities, or a lot of data in general) the heat map degenerates into a one-colored overlay, since the gradient at the edges gets smaller and smaller.
But I hope this helps anyway.
// Erik

Create perpendicular lat long from single cllocation coordinate of X meter

I have the user's current location, i.e. a CLLocationCoordinate2D (latitude & longitude), and the user is on a race track pointing in one direction. With the user's current location I created one region; now I want some more race track coordinates (say 2 m, 4 m, and 6 m away from the track in the perpendicular direction), and the track is 10 m long. Please check the image; the red points are on the track.
/**
 * Returns the destination point from an initial point having travelled the given distance on the
 * given initial bearing (bearing normally varies around the path followed).
 *
 * @param {double} distance - Distance travelled, in same units as earth radius (default: metres).
 * @param {double} bearing - Initial bearing in degrees from north.
 *
 * @returns {CLLocationCoordinate2D} Destination point.
 */
#define kEarthRadius 6378137

- (CLLocationCoordinate2D)destinationPointWithStartingPoint:(MKMapPoint)initialPoint distance:(double)distance andBearing:(double)bearing {
    CLLocationCoordinate2D location = MKCoordinateForMapPoint(initialPoint);
    double delta = distance / kEarthRadius;
    double omega = [self degreesToRadians:bearing];
    double phi1 = [self degreesToRadians:location.latitude];
    double lambda1 = [self degreesToRadians:location.longitude];
    double phi2 = asin(sin(phi1) * cos(delta) + cos(phi1) * sin(delta) * cos(omega));
    double x = cos(delta) - sin(phi1) * sin(phi2);
    double y = sin(omega) * sin(delta) * cos(phi1);
    double lambda2 = lambda1 + atan2(y, x);
    // note: fmod, not %, since the operands are doubles; normalizes longitude to (-180, 180]
    return CLLocationCoordinate2DMake([self radiansToDegrees:phi2],
                                      fmod([self radiansToDegrees:lambda2] + 540.0, 360.0) - 180.0);
}

- (CLLocationCoordinate2D)rhumbDestinationPointForInitialPoint:(MKMapPoint)initialPoint distance:(double)distance andBearing:(double)bearing {
    CLLocationCoordinate2D location = MKCoordinateForMapPoint(initialPoint);
    double delta = distance / kEarthRadius;
    double omega = [self degreesToRadians:bearing];
    double phi1 = [self degreesToRadians:location.latitude];
    double lambda1 = [self degreesToRadians:location.longitude];
    double delta_phi = delta * cos(omega);
    double phi2 = phi1 + delta_phi;
    // check for some daft bugger going past the pole, normalise latitude if so
    if (fabs(phi2) > M_PI / 2) {
        phi2 = phi2 > 0 ? M_PI - phi2 : -M_PI - phi2;
    }
    double delta_gamma = log(tan(phi2 / 2 + M_PI / 4) / tan(phi1 / 2 + M_PI / 4));
    double q = fabs(delta_gamma) > 10e-12 ? delta_phi / delta_gamma : cos(phi1);
    double delta_lambda = delta * sin(omega) / q;
    double lambda2 = lambda1 + delta_lambda;
    return CLLocationCoordinate2DMake([self radiansToDegrees:phi2],
                                      fmod([self radiansToDegrees:lambda2] + 540.0, 360.0) - 180.0);
}

- (double)degreesToRadians:(double)degrees {
    return degrees * M_PI / 180.0;
}

- (double)radiansToDegrees:(double)radians {
    return radians * 180.0 / M_PI;
}
Adapted from: http://www.movable-type.co.uk/scripts/latlong.html
More information on bearing: https://en.wikipedia.org/wiki/Bearing_(navigation)
And rhumb line: https://en.wikipedia.org/wiki/Rhumb_line
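To get the perpendicular points the question asks for, here is a small usage sketch (hedged: it assumes the track's bearing is already known, e.g. from the user's heading, and that self implements the methods above):
    MKMapPoint start = MKMapPointForCoordinate(userLocation.coordinate); // userLocation: the CLLocation you already have
    double trackBearing = 45.0; // assumption: known bearing of the track, in degrees from north
    for (double meters = 2.0; meters <= 6.0; meters += 2.0) {
        // perpendicular directions are the track bearing +/- 90 degrees
        CLLocationCoordinate2D left  = [self destinationPointWithStartingPoint:start
                                                                      distance:meters
                                                                    andBearing:fmod(trackBearing + 270.0, 360.0)];
        CLLocationCoordinate2D right = [self destinationPointWithStartingPoint:start
                                                                      distance:meters
                                                                    andBearing:fmod(trackBearing + 90.0, 360.0)];
        NSLog(@"%.0f m: left (%f, %f), right (%f, %f)",
              meters, left.latitude, left.longitude, right.latitude, right.longitude);
    }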

Calculating the angle to a location

I have an image of an arrow that behaves like a compass to a specific location. Sometimes it works, and other times it's mirrored. So if I was facing east and the location is directly east of me, it should point up, but sometimes it points down.
#define RADIANS_TO_DEGREES(radians) ((radians) * (180.0 / M_PI))

- (void)locationManager:(CLLocationManager *)manager didUpdateHeading:(CLHeading *)heading
{
    // update direction of arrow
    CGFloat degrees = [self p_calculateAngleBetween:_myLocation
                                                and:_otherLocation];
    CGFloat rads = (degrees - heading.trueHeading) * M_PI / 180;
    CGAffineTransform tr = CGAffineTransformIdentity;
    tr = CGAffineTransformConcat(tr, CGAffineTransformMakeRotation(rads));
    [_directionArrowView setTransform:tr];
}

- (CGFloat)p_calculateAngleBetween:(CLLocationCoordinate2D)coords0 and:(CLLocationCoordinate2D)coords1 {
    double x = 0, y = 0, deg = 0, deltaLon = 0;
    deltaLon = coords1.longitude - coords0.longitude;
    y = sin(deltaLon) * cos(coords1.latitude);
    x = cos(coords0.latitude) * sin(coords1.latitude) - sin(coords0.latitude) * cos(coords1.latitude) * cos(deltaLon);
    deg = RADIANS_TO_DEGREES(atan2(y, x));
    if (deg < 0)
    {
        deg = -deg;
    }
    else
    {
        deg = 360 - deg;
    }
    return deg;
}
Is this the correct way to calculate my angle to another location? Or am I missing a step? Since the arrow sometimes points in exactly the opposite direction, my assumption is that it's an issue with my math.
To calculate radians from x & y:
double r = atan(y/x);
if (x < 0)
    r = M_PI + r;
else if (x > 0 && y < 0)
    r = 2 * M_PI + r;
There is no issue of dividing by 0 when x is zero, because the atan function handles this correctly:
If the argument is positive infinity (negative infinity), +pi/2 (-pi/2) is returned.
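For what it's worth, the branching above is exactly what atan2 already does; a minimal equivalent (plain C, requires <math.h>):
    // Equivalent using atan2, which is quadrant-correct by itself
    double radiansFromXY(double x, double y) {
        double r = atan2(y, x);       // range (-pi, pi]
        if (r < 0) r += 2 * M_PI;     // normalize to [0, 2*pi) to match the code above
        return r;
    }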

Un-Distort raw images received from the Leap motion cameras

I've been working with the Leap for a long time now. The 2.1+ SDK version allows us to access the cameras and get raw images. I want to use those images with OpenCV for square/circle detection and such... the problem is I can't get those images undistorted. I read the docs, but don't quite get what they mean. Here's one thing I need to understand properly before going forward:
distortion_data_ = image.distortion();
for (int d = 0; d < image.distortionWidth() * image.distortionHeight(); d += 2)
{
    float dX = distortion_data_[d];
    float dY = distortion_data_[d + 1];
    if (!((dX < 0) || (dX > 1)) && !((dY < 0) || (dY > 1)))
    {
        //what do i do now to undistort the image?
    }
}
data = image.data();
mat.put(0, 0, data);
//Imgproc.Canny(mat, mat, 100, 200);
//mat = findSquare(mat);
ok.showImage(mat);
In the docs it says something like this:
"The calibration map can be used to correct image distortion due to lens curvature and other imperfections. The map is a 64x64 grid of points. Each point consists of two 32-bit values..." (the rest is on the dev website)
Can someone explain this in detail please, or just post the Java code to undistort the images and give me an output Mat image so I may continue processing it (I'd still prefer a good explanation if possible).
Ok, I have no Leap camera to test all this, but this is how I understand the documentation:
The calibration map does not hold offsets but full point positions. An entry says where the pixel has to be placed instead. Those values are mapped between 0 and 1, which means that you have to multiply them by your real image width and height.
What isn't explained explicitly is how your pixel positions are mapped to the 64 x 64 positions of your calibration map. I assume that it's the same way: 640 pixels width are mapped to 64 pixels width, and 240 pixels height are mapped to 64 pixels height.
So in general, to move from one of your 640 x 240 pixel positions (pX, pY) to the undistorted position you will:
compute the corresponding pixel position in the calibration map: float cX = pX/640.0f * 64.0f; float cY = pY/240.0f * 64.0f;
(cX, cY) is now the location of that pixel in the calibration map. You will have to interpolate between two pixel locations, but for now I will only explain how to go on for a discrete location in the calibration map, (cX', cY') = the rounded location of (cX, cY).
read the x and y values out of the calibration map: dX, dY as in the documentation. You have to compute the location in the array from (cX', cY'): d = cY'*calibrationMapWidth*2 + cX'*2;
dX and dY are values between 0 and 1 (if not: don't undistort this point, because there is no undistortion available). To find out the pixel location in your real image, multiply by the image size: uX = dX*640; uY = dY*240;
set your pixel to the undistorted value: undistortedImage(pX,pY) = distortedImage(uX,uY);
But you don't have discrete point positions in your calibration map, so you have to interpolate. I'll give you an example:
let (cX, cY) = (13.7, 10.4)
so you read four values from your calibration map:
calibMap(13,10) = (dX1, dY1)
calibMap(14,10) = (dX2, dY2)
calibMap(13,11) = (dX3, dY3)
calibMap(14,11) = (dX4, dY4)
Now your undistorted pixel position for (13.7, 10.4) is (multiply each by 640 or 240 to get uX1, uY1, uX2, etc.):
// interpolate in x direction first:
float tmpUX1 = uX1*0.3 + uX2*0.7
float tmpUY1 = uY1*0.3 + uY2*0.7
float tmpUX2 = uX3*0.3 + uX4*0.7
float tmpUY2 = uY3*0.3 + uY4*0.7
// now interpolate in y direction
float combinedX = tmpUX1*0.6 + tmpUX2*0.4
float combinedY = tmpUY1*0.6 + tmpUY2*0.4
and your undistorted point is:
undistortedImage(pX,pY) = distortedImage(floor(combinedX+0.5),floor(combinedY+0.5)); or interpolate pixel values there too.
Hope this helps for a basic understanding. I'll try to add OpenCV remap code soon! The only point that's unclear to me is whether the mapping between pX/Y and cX/Y is correct, since that's not explicitly explained in the documentation.
Here is some code. You can skip the first part, where I fake a distortion and create the map, which is your initial state.
With OpenCV it is simple: just resize the calibration map to your image size and multiply all the values by your resolution. The nice thing is that OpenCV performs the interpolation "automatically" while resizing.
int main()
{
    cv::Mat input = cv::imread("../Data/Lenna.png");
    cv::Mat distortedImage = input.clone();
    // now i fake some distortion:
    cv::Mat transformation = cv::Mat::eye(3,3,CV_64FC1);
    transformation.at<double>(0,0) = 2.0;
    cv::warpPerspective(input,distortedImage,transformation,input.size());
    cv::imshow("distortedImage", distortedImage);
    //cv::imwrite("../Data/LenaFakeDistorted.png", distortedImage);

    // now fake a calibration map corresponding to my faked distortion:
    const unsigned int cmWidth = 64;
    const unsigned int cmHeight = 64;
    // compute the calibration map by transforming image locations to values between 0 and 1 for legal positions.
    float calibMap[cmWidth*cmHeight*2];
    for(unsigned int y = 0; y < cmHeight; ++y)
        for(unsigned int x = 0; x < cmWidth; ++x)
        {
            float xx = (float)x/(float)cmWidth;
            xx = xx*2.0f; // this is from my fake distortion... this gives some values bigger than 1
            float yy = (float)y/(float)cmHeight;
            calibMap[y*cmWidth*2 + 2*x] = xx;
            calibMap[y*cmWidth*2 + 2*x+1] = yy;
        }

    // NOW you have the initial situation of your scenario: calibration map and distorted image...

    // compute the image locations of calibration map values:
    cv::Mat cMapMatX = cv::Mat(cmHeight, cmWidth, CV_32FC1);
    cv::Mat cMapMatY = cv::Mat(cmHeight, cmWidth, CV_32FC1);
    for(int j=0; j<cmHeight; ++j)
        for(int i=0; i<cmWidth; ++i)
        {
            cMapMatX.at<float>(j,i) = calibMap[j*cmWidth*2 + 2*i];
            cMapMatY.at<float>(j,i) = calibMap[j*cmWidth*2 + 2*i+1];
        }
    //cv::imshow("mapX",cMapMatX);
    //cv::imshow("mapY",cMapMatY);

    // interpolate those values for each of your original images pixel:
    // here I use linear interpolation, you could use cubic or other interpolation too.
    cv::resize(cMapMatX, cMapMatX, distortedImage.size(), 0,0, CV_INTER_LINEAR);
    cv::resize(cMapMatY, cMapMatY, distortedImage.size(), 0,0, CV_INTER_LINEAR);

    // now the calibration map has the size of your original image, but its values are still between 0 and 1 (for legal positions)
    // so scale to image size:
    cMapMatX = distortedImage.cols * cMapMatX;
    cMapMatY = distortedImage.rows * cMapMatY;

    // now create undistorted image:
    cv::Mat undistortedImage = cv::Mat(distortedImage.rows, distortedImage.cols, CV_8UC3);
    undistortedImage.setTo(cv::Vec3b(0,0,0)); // initialize black
    //cv::imshow("undistorted", undistortedImage);
    for(int j=0; j<undistortedImage.rows; ++j)
        for(int i=0; i<undistortedImage.cols; ++i)
        {
            cv::Point undistPosition;
            undistPosition.x = (cMapMatX.at<float>(j,i)); // this will round the position, maybe you want interpolation instead
            undistPosition.y = (cMapMatY.at<float>(j,i));
            if(undistPosition.x >= 0 && undistPosition.x < distortedImage.cols
               && undistPosition.y >= 0 && undistPosition.y < distortedImage.rows)
            {
                undistortedImage.at<cv::Vec3b>(j,i) = distortedImage.at<cv::Vec3b>(undistPosition);
            }
        }
    cv::imshow("undistorted", undistortedImage);
    cv::waitKey(0);
    //cv::imwrite("../Data/LenaFakeUndistorted.png", undistortedImage);
}
I use this as input and fake a remap/distortion from which I compute my calib mat. (Images omitted: the input, the faked distortion, and the result of using the map to undistort the image.)
TODO: after those computations, use an OpenCV map with those values to perform faster remapping.
Here's an example of how to do it without using OpenCV. The following seems to be faster than using the Leap::Image::warp() method (probably due to the additional function call overhead when using warp()):
float destinationWidth = 320;
float destinationHeight = 120;
unsigned char destination[(int)destinationWidth][(int)destinationHeight];

//define needed variables outside the inner loop
float calX, calY, weightX, weightY, dX1, dX2, dX3, dX4, dY1, dY2, dY3, dY4, dX, dY;
int x1, x2, y1, y2, denormalizedX, denormalizedY;
int x, y;

const unsigned char* raw = image.data();
const float* distortion_buffer = image.distortion();

//Local variables for values needed in loop
const int distortionWidth = image.distortionWidth();
const int width = image.width();
const int height = image.height();

for (x = 0; x < destinationWidth; x++) {
    for (y = 0; y < destinationHeight; y++) {
        //Calculate the position in the calibration map (still with a fractional part)
        calX = 63 * x/destinationWidth;
        calY = 63 * y/destinationHeight;
        //Save the fractional part to use as the weight for interpolation
        weightX = calX - truncf(calX);
        weightY = calY - truncf(calY);

        //Get the x,y coordinates of the closest calibration map points to the target pixel
        x1 = calX; //Note truncation to int
        y1 = calY;
        x2 = x1 + 1;
        y2 = y1 + 1;

        //Look up the x and y values for the 4 calibration map points around the target
        // (x1, y1) .. .. .. (x2, y1)
        //    ..                ..
        //    ..    (x, y)      ..
        //    ..                ..
        // (x1, y2) .. .. .. (x2, y2)
        dX1 = distortion_buffer[x1 * 2 + y1 * distortionWidth];
        dX2 = distortion_buffer[x2 * 2 + y1 * distortionWidth];
        dX3 = distortion_buffer[x1 * 2 + y2 * distortionWidth];
        dX4 = distortion_buffer[x2 * 2 + y2 * distortionWidth];
        dY1 = distortion_buffer[x1 * 2 + y1 * distortionWidth + 1];
        dY2 = distortion_buffer[x2 * 2 + y1 * distortionWidth + 1];
        dY3 = distortion_buffer[x1 * 2 + y2 * distortionWidth + 1];
        dY4 = distortion_buffer[x2 * 2 + y2 * distortionWidth + 1];

        //Bilinear interpolation of the looked-up values:
        // X value
        dX = dX1 * (1 - weightX) * (1 - weightY) + dX2 * weightX * (1 - weightY)
           + dX3 * (1 - weightX) * weightY + dX4 * weightX * weightY;
        // Y value
        dY = dY1 * (1 - weightX) * (1 - weightY) + dY2 * weightX * (1 - weightY)
           + dY3 * (1 - weightX) * weightY + dY4 * weightX * weightY;

        // Reject points outside the range [0..1]
        if ((dX >= 0) && (dX <= 1) && (dY >= 0) && (dY <= 1)) {
            //Denormalize from [0..1] to [0..width] or [0..height]
            denormalizedX = dX * width;
            denormalizedY = dY * height;
            //look up the brightness value for the target pixel
            destination[x][y] = raw[denormalizedX + denormalizedY * width];
        } else {
            destination[x][y] = -1;
        }
    }
}

What is the proper way to calculate knot vector for the Cox De Boor algorithm?

I am currently trying to implement the Cox De Boor algorithm for drawing Bézier curves. I've managed to produce something acceptable with a set degree, a set number of control points, and a predefined knot vector, but I want to adapt my code so that it will function given any number of control points and any degree. I'm 90% certain that the problems I am currently encountering, i.e. that the path goes wandering off to point 0,0, are due to me not properly calculating the knot vector. If anyone can give me a hint or two I'd be grateful. Note that I am presently calculating each dimension (in this case just x and y) individually; I will eventually adapt this code to use the same precalculations for all dimensions. I may also adjust it to use C arrays rather than NSArrays, but from what I've seen there's no real speed advantage to doing so.
I am currently producing a degree 3 curve using 5 control points with a knot vector of {0, 0, 0, 0, 1, 2, 2, 2, 2}.
- (double)coxDeBoorForDegree:(NSUInteger)degree span:(NSUInteger)span travel:(double)travel knotVector:(NSArray *)vector
{
    double k1 = [[vector objectAtIndex:span] doubleValue];
    double k2 = [[vector objectAtIndex:span+1] doubleValue];
    if (degree == 1) {
        if (k1 <= travel && travel <= k2) return 1.0;
        return 0.0;
    }
    double k3 = [[vector objectAtIndex:span+degree-1] doubleValue];
    double k4 = [[vector objectAtIndex:span+degree] doubleValue];
    double density1 = k3 - k1;
    double density2 = k4 - k2;
    double equation1 = 0.0, equation2 = 0.0;
    if (density1 > 0.0) equation1 = ((travel-k1) / density1) * [self coxDeBoorForDegree:degree-1 span:span travel:travel knotVector:vector];
    if (density2 > 0.0) equation2 = ((k4-travel) / density2) * [self coxDeBoorForDegree:degree-1 span:span+1 travel:travel knotVector:vector];
    return equation1 + equation2;
}

- (double)valueAtTravel:(double)travel degree:(NSUInteger)degree points:(NSArray *)points knotVector:(NSArray *)vector
{
    double total = 0.0;
    for (NSUInteger i = 0; i < points.count; i++) {
        float weight = [self coxDeBoorForDegree:degree+1 span:i travel:travel knotVector:vector];
        if (weight > 0.001) total += weight * [[points objectAtIndex:i] doubleValue];
    }
    return total;
}
Never mind, I found this very useful webpage:
http://www.cs.mtu.edu/~shene/COURSES/cs3621/NOTES/INT-APP/PARA-knot-generation.html
Hence anyone with the same problem can use the following method to generate a suitable knot vector, where 'controls' is the number of control points affecting the line segment, and 'degree' is... well, the degree of the curve! Don't forget that degree cannot equal or exceed the number of control points in the curve:
- (NSArray *)nodeVectorForControlCount:(NSUInteger)controls degree:(NSUInteger)degree
{
    NSUInteger knotIncrement = 0;
    NSUInteger knotsRequired = controls + degree + 1;
    NSMutableArray *constructor = [[NSMutableArray alloc] initWithCapacity:knotsRequired];
    for (NSUInteger i = 0; i < knotsRequired; i++) {
        [constructor addObject:[NSNumber numberWithDouble:(double)knotIncrement]];
        if (i >= degree && i < controls) knotIncrement++;
    }
    NSArray *returnArray = [NSArray arrayWithArray:constructor];
    [constructor release];
    return returnArray;
}
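As a quick sanity check (a hypothetical usage, not part of the original code), the parameters from the question above reproduce the hand-written knot vector:
    // 5 control points, degree 3 => controls + degree + 1 = 9 knots
    NSArray *knots = [self nodeVectorForControlCount:5 degree:3];
    NSLog(@"%@", knots); // 0, 0, 0, 0, 1, 2, 2, 2, 2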
