How to calculate area of an organic shape? - ios

I want to know if it's possible to calculate the area of an organic shape. The shape I'm trying to calculate looks something like this:
Imagine it's drawn from CGPoints.
Is there a special function for this? I'm thinking maybe Core Image or Quartz or maybe OpenGL.

If the boundary path consists only of straight line segments and does not intersect
itself then you can use the
following formula to compute the area of the enclosed region (from https://en.wikipedia.org/wiki/Polygon#Area_and_centroid):
CGPoint points[N]; // the polygon vertices, in counter-clockwise order
CGFloat area = 0;
for (int i = 0; i < N; i++) {
    // Shoelace formula: sum the signed area contributed by each edge
    area += (points[i].x * points[(i+1) % N].y - points[(i+1) % N].x * points[i].y) / 2.0;
}
where points[0], ... , points[N-1] are the starting points of the line segments in counter-clockwise order.
For more complicated path segments such as Bézier curves, you can subdivide each segment into small parts that can be approximated by line segments.
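For instance, here is a minimal C++ sketch of that idea (the Point struct, the choice of cubic Béziers, and the 16-segment subdivision are illustrative assumptions, not part of the original question):

#include <vector>

struct Point { double x, y; };

// Evaluate a cubic Bézier with control points p0..p3 at parameter t in [0, 1].
Point bezierPoint(Point p0, Point p1, Point p2, Point p3, double t) {
    double u = 1.0 - t;
    return { u*u*u*p0.x + 3*u*u*t*p1.x + 3*u*t*t*p2.x + t*t*t*p3.x,
             u*u*u*p0.y + 3*u*u*t*p1.y + 3*u*t*t*p2.y + t*t*t*p3.y };
}

// Shoelace formula over a closed polygon, vertices in counter-clockwise order.
double polygonArea(const std::vector<Point>& pts) {
    double area = 0.0;
    for (size_t i = 0; i < pts.size(); i++) {
        size_t j = (i + 1) % pts.size();
        area += (pts[i].x * pts[j].y - pts[j].x * pts[i].y) / 2.0;
    }
    return area;
}

// Flatten each curve into 16 segments and append the samples to the polygon:
//   for (int k = 0; k < 16; k++)
//       pts.push_back(bezierPoint(p0, p1, p2, p3, k / 16.0));
// Do this for every curve in the path, then call polygonArea(pts).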

Related

How to create a random closed smooth CGPath?

I am trying to find a way to create a random closed smooth path (CGPath or UIBezierPath). I have read about De Casteljau's algorithm and tons of other articles about Bézier paths, but none of it seems to fit what I am trying to achieve.
I thought about creating a circular CGPath and then multiplying the path by a function that distorts the positions of the points, say, sine or cosine. However, I don't know if this is the right direction to go, since the path would not have a random shape.
CGMutablePathRef circle = CGPathCreateMutable();
CGPathAddArc(circle, nil, 0.0f, 0.0f, 100.0f, 2 * M_PI, 0.0f, true);
...
CGPathRelease(circle);
It would be great if anyone could point me in the right direction on how to start implementing it. Example of a path I am trying to generate:
What you've drawn looks like a distorted circle.
Assuming that's what you are after, here is what I would do:
Write code that steps an angle from 0 to 2*pi in a fixed number of steps (try 8). Have the angle vary by some small random amount less than ±pi/steps.
Pick a base radius that is somewhat less than 1/2 the length of a side of the enclosing square, so there is room to make your points go inside or outside the base radius without going outside your bounding square. Try 3/8 of your bounding box length.
For each slightly randomized angle value along the circle, calculate a radius value that is base radius ± a random value from 0 to base radius/2.
Use sine and cosine to convert your angle and radius values into x and y coordinates for a point.
Add each point to an array. If you use those points to create a closed path, it would give you an 8-sided irregular non-self-intersecting polygon that is a distorted circle.
Now use those points as the control points for a Catmull-Rom spline to turn it into a smooth curve.
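In C++ (rather than Objective-C), a minimal sketch of those steps might look like this; the 0.9 jitter factor and the randUnit helper are my own illustrative choices:

#include <cmath>
#include <cstdlib>
#include <vector>

struct Point { double x, y; };

// Uniform random double in [-1, 1].
double randUnit() { return 2.0 * rand() / RAND_MAX - 1.0; }

// Generate the control points of a distorted circle inside a square
// bounding box with the given side length.
std::vector<Point> blobControlPoints(double boxLength, int steps = 8) {
    const double pi = 3.14159265358979323846;
    double baseRadius = 0.375 * boxLength; // 3/8 of the bounding box length
    std::vector<Point> pts;
    for (int i = 0; i < steps; i++) {
        // Step the angle around the circle, jittered by less than ±pi/steps.
        double angle = 2.0 * pi * i / steps + randUnit() * 0.9 * pi / steps;
        // Radius = base radius ± a random value up to baseRadius / 2.
        double r = baseRadius + randUnit() * baseRadius / 2.0;
        pts.push_back({ boxLength / 2 + r * std::cos(angle),
                        boxLength / 2 + r * std::sin(angle) });
    }
    return pts; // feed these to a Catmull-Rom spline for the smooth curve
}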
EDIT: I created a project on GitHub called RandomBlobs that does what I describe above, as well as another approach:
Break the square area into a 3x3 grid of smaller squares. Ignore the center square.
Walk around the 8 remaining squares clockwise. For each square, pick a random x/y coordinate inside the square (but prevent it from getting too close to the edges).
Create a closed UIBezierPath connecting the 8 points in order.
Use Catmull-Rom smoothing to turn the irregular octagon into a smooth curve.
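Here is a rough C++ sketch of that grid-based point selection (this is not the actual RandomBlobs code; the 20% edge inset and the randIn helper are assumptions):

#include <cstdlib>
#include <vector>

struct Point { double x, y; };

// Uniform random double in [lo, hi].
double randIn(double lo, double hi) {
    return lo + (hi - lo) * rand() / RAND_MAX;
}

std::vector<Point> gridBlobPoints(double boxLength) {
    double cell = boxLength / 3.0;
    double inset = 0.2 * cell; // keep points away from the cell edges
    // The 8 outer cells of the 3x3 grid, in clockwise order (col, row).
    int cells[8][2] = {{0,0},{1,0},{2,0},{2,1},{2,2},{1,2},{0,2},{0,1}};
    std::vector<Point> pts;
    for (auto& c : cells) {
        pts.push_back({ randIn(c[0] * cell + inset, (c[0] + 1) * cell - inset),
                        randIn(c[1] * cell + inset, (c[1] + 1) * cell - inset) });
    }
    return pts; // connect in order, then smooth with Catmull-Rom
}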
Yet a third approach would probably be even simpler:
Use a circular layout like in the first approach outlined above, and pick random control points. But then, instead of using Catmull-Rom splines, bisect the angle between each pair of endpoints on the distorted circle and add a control point for a quadratic Bézier curve, also with a randomized radius value. So as you walk around the circle, you'd have alternating endpoints and control points. You might need to add some constraints to the Bézier control points so you don't have "kinks" in your curved shape (in order to avoid kinks, the control points of neighboring Bézier curves need to follow a line through the shared endpoint of the two curves).
Here are a couple of sample images from the RandomBlobs project. The images I've uploaded are scaled down. The program optionally shows the control points it uses to generate each image, but you can't really see the control points in the scaled-down image.
First, a circle-based blob (using the first method that Josh Caswell and I suggested):
In that picture, the starting circle shape is shown in light gray:
And second, a blob based on the second square-based technique I described:
And in that picture, the grid of squares is shown for reference. The shape is based on a random point in each of the squares in the grid (excluding the center square).
I've tried to build your path, but it's not perfect... Anyhow, I'll share my test ;-D Hope this can help.
//
// DrawView.h
// test
//
// Created by Armand DOHM on 03/03/2014.
//
//
#import <UIKit/UIKit.h>
@interface DrawView : UIView
@end
//
// DrawView.m
// test
//
// Created by Armand DOHM on 03/03/2014.
//
//
#import "DrawView.h"
#import <math.h>
@implementation DrawView
- (void)drawRect:(CGRect)rect
{
    float r; // radius
    float rmax = MIN(rect.size.height, rect.size.width) * .5; // max radius
    float rmin = rmax * .1; // min radius
    NSMutableArray *points = [[NSMutableArray alloc] init];
    /* Cut a circle into x pies; for each pie, take a point at a random
       radius, then link all of these points with quad curves. */
    for (double a = 0; a < 2 * M_PI; a += M_PI / 10) {
        r = rmin + ((arc4random_uniform((int)(rmax - rmin) * 100)) / 100.0f);
        CGPoint p = CGPointMake((rect.size.width / 2) + (r * cos(a)),
                                (rect.size.height / 2) + (r * sin(a)));
        [points addObject:[NSValue valueWithCGPoint:p]];
    }
    UIBezierPath *myPath = [[UIBezierPath alloc] init];
    myPath.lineWidth = 2;
    r = rmin + ((arc4random_uniform((int)(rmax - rmin) * 100)) / 100.0f);
    [myPath moveToPoint:CGPointMake((rect.size.width / 2) + (r * cos(0)),
                                    (rect.size.height / 2) + (r * sin(0)))];
    // Every even-indexed point serves as the control point of a quad curve
    // ending at the following point.
    for (int i = 0; i < points.count; i += 2) {
        CGPoint p1 = [[points objectAtIndex:i] CGPointValue];
        CGPoint p2 = [[points objectAtIndex:(i + 1)] CGPointValue];
        [myPath addQuadCurveToPoint:p2 controlPoint:p1];
    }
    [myPath closePath];
    [myPath stroke];
}
@end

How to determine the width of the lines?

I need to detect the width of these lines:
These lines are parallel and have some noise on them.
Currently, what I do is:
1. Find the centerline using thinning (Zhang-Suen):
ZhanSuenThinning(binImage, thin);
2. Compute the distance transform:
cv::distanceTransform(binImage, distImg, CV_DIST_L2, CV_DIST_MASK_5);
3. Accumulate the half distance around the centerline:
double halfWidth = 0.0;
int count = 0;
for (int a = 0; a < thinImg.cols; a++)
    for (int b = 0; b < thinImg.rows; b++)
        if (thinImg.ptr<uchar>(b, a)[0] > 0)
        {
            halfWidth += distImg.ptr<float>(b, a)[0];
            count++;
        }
4. Finally, get the actual width:
width = halfWidth / count * 2;
The result isn't quite good; it's off by around 1-2 pixels. On bigger images, the result is even worse. Any suggestions?
You can adapt barcode reader algorithms, which is a faster way to do it.
Scan horizontal and vertical lines.
Let X be the length of the horizontal intersection with a black line and Y the length of the vertical intersection (you can obtain these by calculating the median value of several X and Y samples if there is noise).
The two intersections cut a right triangle out of the line, and the line width is the height of that triangle above its hypotenuse:
X * Y / 2 = area
X² + Y² = hypotenuse²
hypotenuse * width / 2 = area
So: width = 2 * area / hypotenuse = X * Y / hypotenuse
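In code, the relation above boils down to a one-liner (assuming X and Y have already been measured, e.g. as medians over several scanlines):

#include <cmath>

// Width of the line given the horizontal (X) and vertical (Y) intersection
// lengths of the scanlines with the line.
double lineWidth(double X, double Y) {
    double hypotenuse = std::sqrt(X * X + Y * Y);
    return X * Y / hypotenuse; // width = 2 * area / hypotenuse, area = X * Y / 2
}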
EDIT: You can also easily find the angle by using PCA.
All you need is to find the RotatedRect for each contour in your image; here is an OpenCV tutorial on how to do it. Then just take the values of 'size' from the rotated rectangle, where you will get the height and width of the contour. Note that the height and width may interchange for different alignments of the contour. Here, in the above image, the height becomes the width and the width becomes the height.
Contour --> RotatedRect
               |
               '--> Size2f size
                       |
                       |--> width
                       '--> height
After finding the contours, just do:
RotatedRect minRect = minAreaRect( Mat(contours[i]) );
Size2f contourSize = minRect.size; // width and height of the rectangle
Rotated rectangle for each contour
Here is the C++ code:
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

int main()
{
    Mat src = imread("line.png", 1);
    Mat thr, gray;
    blur(src, src, Size(3,3));
    cvtColor(src, gray, CV_BGR2GRAY);
    Canny(gray, thr, 50, 190, 3, false);

    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    findContours(thr.clone(), contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, Point(0,0));

    vector<RotatedRect> minRect(contours.size());
    for (size_t i = 0; i < contours.size(); i++)
        minRect[i] = minAreaRect(Mat(contours[i]));

    for (size_t i = 0; i < contours.size(); i++)
    {
        // The width may interchange according to contour alignment
        cout << " Size = " << minRect[i].size.width << " x " << minRect[i].size.height << endl;
        // Draw the rotated rectangle
        Point2f rect_points[4];
        minRect[i].points(rect_points);
        for (int j = 0; j < 4; j++)
            line(src, rect_points[j], rect_points[(j+1) % 4], Scalar(0,0,255), 1, 8);
    }
    imshow("src", src);
    imshow("Canny", thr);
    waitKey(0);
    return 0;
}
One quick and simple suggestion:
Count the total number of black pixels.
Detect the length of each line (perhaps with HoughLinesP, or simply the diagonal of the bounding box around each thinned line).
Divide the number of black pixels by the sum of all line lengths; that should give you the average line width (see the sketch below).
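A minimal OpenCV sketch of that estimate (assuming the lines are the non-zero pixels of a binary image — invert first if your lines are black — and treating the Hough parameters as placeholder values):

#include <cmath>
#include <opencv2/opencv.hpp>

double averageLineWidth(const cv::Mat& binImage) {
    // Total number of line pixels.
    double linePixels = cv::countNonZero(binImage);
    // Detect line segments and sum up their lengths.
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(binImage, lines, 1, CV_PI / 180, 50, 30, 10);
    double totalLength = 0.0;
    for (const cv::Vec4i& l : lines) {
        double dx = l[2] - l[0], dy = l[3] - l[1];
        totalLength += std::sqrt(dx * dx + dy * dy);
    }
    return totalLength > 0.0 ? linePixels / totalLength : 0.0;
}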
I am not sure whether that is more accurate than your existing approach, though. The irregular end parts of each line might throw it off.
One thing you could try that could increase the accuracy for that case:
Measure the average angle of the lines
Rotate the image so the lines are aligned horizontally
Crop a rectangular subsection of your shape, so all lines have the same length
(you can get the contour of your shape by morphological closing, then find a rectangle that is entirely contained within the shape; make sure that the horizontal edges of the rectangle are in between lines)
Then count the number of black pixels again (count gray pixels caused by rotating the image as x% of a whole pixel)
Divide by (rectangle_width * number_of_lines_in_rectangle)
Use Hough line fits to find each line.
From each pixel on each line fit, scan in the perpendicular direction to get the distance to the edge. Find the edge using a spline fit or similar sub-pixel method.
Depending on your needs/desires, take the median or average distance. To eliminate problems with outliers, throw out the distances below the 10th percentile and above the 90th percentile before calculating the mean or median. You might also report the size using statistics: line width W, standard deviation S.
Although a connected components algorithm can be used to find the lines, it won't find the "true" edges as nicely as a spline fit.
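A short sketch of the trimmed statistic from the third step above (assuming the per-pixel width samples have already been collected):

#include <algorithm>
#include <numeric>
#include <vector>

// Mean width after discarding samples below the 10th and above the
// 90th percentile.
double trimmedMeanWidth(std::vector<double> widths) {
    if (widths.empty()) return 0.0;
    std::sort(widths.begin(), widths.end());
    size_t lo = widths.size() / 10;   // 10th percentile index
    size_t hi = widths.size() - lo;   // 90th percentile index
    double sum = std::accumulate(widths.begin() + lo, widths.begin() + hi, 0.0);
    return sum / (hi - lo);
}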
An image like the one you've shown is noisy/blurry, and thus the number of black pixels might not reflect line properties; for example, black pixels can be partially attributed to salt-and-pepper noise. You can get rid of it with morphological erosion, but this will affect your lines as well.
A better way is to extract connected components, delete small ones that likely come from noise or small blobs, then calculate the number of pixels and divide it by the number of lines. This approach will also help you to analyse the shape of the objects in your image and get rid of any artefacts other than noise or lines.
A different real-world situation is when you have some grey pixels close to a line border. You can either use a threshold to discard them or count them with some weight < 1. This will compensate for blur in your image. By the way, rotation of the image may increase the blur, since it is typically done with interpolation and smoothing.

Circle estimation from 2D data set

I am doing some computer vision based hand gesture recognition work. Here, I want to detect a circle (a circular motion) made by my hand. My initial stages are working fine and I am able to get a blob whose centroid I am plotting from each frame. This is essentially my data set: a collection of 2D coordinate points. Now I want to detect a circular type of motion and generate a call to a function which says "Circle Detected". The circle detector will give a YES/NO boolean output.
Here is a sample of the data set I am generating in 40 frames
The x, y values are just plotted to a bitmap image using MATLAB.
My initial hand movement was slow, and later I picked up speed to complete the circle within the stipulated time (40 frames). There is no hard and fast rule about the number of frames, but for now I am using a 40-frame sliding window for circle detection: (0-39), then (1-40), then (2-41), etc.
I am also calculating the arc-tangent between successive points using:
angle = atan2(prev_y - y, prev_x - x) * 180 / pi;
Now, what approach should I take for detecting a circle (this sample image should result in a YES)? The angle, as I am noticing, is not steadily increasing from 0 to 360. It does increase, but with jumps here and there.
If you are only interested in full or nearly full circles:
I think that the standard parameter estimation approaches (Hough/RANSAC) won't work very well in this case.
Since you have the frame order and therefore the distances between consecutive blob centers, you can create a nearly uniform sub-sample of the data (let's say, pick 20 points spaced ~evenly), calculate the center, and measure the distance of all points from that center.
If it is nearly a circle all points will have similar distance from the center.
If you want to do something slightly more robust, you can:
Compute center (mean) of all points.
Perform gradient descent to update the center: this should be fairly easy and you won't have local minima. The error term I would probably use is max(D) - min(D), where D is the vector of distances between the blob centers and the estimated circle center (but you can use robust statistics instead of max & min).
Evaluate the circle
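A minimal C++ sketch of that evaluation, using the mean of the points as the center and max(D) - min(D) as the error term (the 10% tolerance is an illustrative assumption):

#include <algorithm>
#include <cmath>
#include <vector>

struct Point { double x, y; };

bool looksLikeCircle(const std::vector<Point>& pts, double tolerance = 0.1) {
    // Center = mean of all points.
    double cx = 0, cy = 0;
    for (const Point& p : pts) { cx += p.x; cy += p.y; }
    cx /= pts.size();
    cy /= pts.size();
    // Spread of the distances from the center.
    double dMin = 1e300, dMax = 0, dSum = 0;
    for (const Point& p : pts) {
        double d = std::hypot(p.x - cx, p.y - cy);
        dMin = std::min(dMin, d);
        dMax = std::max(dMax, d);
        dSum += d;
    }
    double dMean = dSum / pts.size();
    // Nearly a circle if the distances vary little compared to the radius.
    return (dMax - dMin) < tolerance * dMean;
}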
I would use a least squares estimation. Numerically, you can use the Nelder-Mead method. You get the circle that best approximates your points, and on the basis of the residual error value you decide whether to consider the circle valid or not.
With points being the array of points, xc, yc the coordinates of the center, and r the radius, this could be an example of the error to minimize:
class Circle
{
    private PointF[] _points;

    public Circle(PointF[] points)
    {
        _points = points;
    }

    public double MinimizeFunction(double xc, double yc, double r)
    {
        double d, d2, dx, dy, sum;
        sum = 0;
        foreach (PointF p in _points)
        {
            dx = p.X - xc;
            dy = p.Y - yc;
            d2 = dx * dx + dy * dy;
            // sum += d2 - r * r;
            d = Math.Sqrt(d2) - r;
            sum += d * d;
        }
        return sum;
    }

    public double ResidualError(double xc, double yc, double r)
    {
        return Math.Sqrt(MinimizeFunction(xc, yc, r)) / (_points.Length - 3);
    }
}
There is a slight difference between the commented functional and the uncommented one, but for practical purposes this difference is negligible. From a theoretical point of view, however, the difference is important.
Since you need to supply an initial set of values (xc, yc, r), you can calculate the circle given three points, choosing three points far from each other.
If you need more details on "circle given three points" or Nelder-Mead, you can google it or ask me here.
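For completeness, here is a C++ sketch of the circle-through-three-points estimate, computed via the circumcenter (the function name and the collinearity epsilon are my own choices):

#include <cmath>

struct Point { double x, y; };

// Circle through three points a, b, c; returns false if they are
// (nearly) collinear.
bool circleFrom3Points(Point a, Point b, Point c,
                       double& xc, double& yc, double& r) {
    double d = 2.0 * (a.x * (b.y - c.y) + b.x * (c.y - a.y) + c.x * (a.y - b.y));
    if (std::fabs(d) < 1e-12) return false;
    double a2 = a.x * a.x + a.y * a.y;
    double b2 = b.x * b.x + b.y * b.y;
    double c2 = c.x * c.x + c.y * c.y;
    xc = (a2 * (b.y - c.y) + b2 * (c.y - a.y) + c2 * (a.y - b.y)) / d;
    yc = (a2 * (c.x - b.x) + b2 * (a.x - c.x) + c2 * (b.x - a.x)) / d;
    r = std::hypot(a.x - xc, a.y - yc);
    return true;
}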

Use of maths in the Apple pARk sample code

I've studied the pARk example project (http://developer.apple.com/library/IOS/#samplecode/pARk/Introduction/Intro.html#//apple_ref/doc/uid/DTS40011083) so I can apply some of its fundamentals in an app I'm working on. I understand nearly everything, except:
The way it calculates whether a point of interest must appear or not. It gets the attitude, multiplies it with the projection matrix (to get the rotation in GL coordinates?), then multiplies that matrix with the coordinates of the point of interest and, at last, looks at the last coordinate of that vector to find out whether the point of interest must be shown. What are the mathematical fundamentals of this?
Thanks a lot!!
I assume you are referring to the following method:
- (void)drawRect:(CGRect)rect
{
    if (placesOfInterestCoordinates == nil) {
        return;
    }

    mat4f_t projectionCameraTransform;
    multiplyMatrixAndMatrix(projectionCameraTransform, projectionTransform, cameraTransform);

    int i = 0;
    for (PlaceOfInterest *poi in [placesOfInterest objectEnumerator]) {
        vec4f_t v;
        multiplyMatrixAndVector(v, projectionCameraTransform, placesOfInterestCoordinates[i]);

        float x = (v[0] / v[3] + 1.0f) * 0.5f;
        float y = (v[1] / v[3] + 1.0f) * 0.5f;

        if (v[2] < 0.0f) {
            poi.view.center = CGPointMake(x*self.bounds.size.width, self.bounds.size.height-y*self.bounds.size.height);
            poi.view.hidden = NO;
        } else {
            poi.view.hidden = YES;
        }

        i++;
    }
}
This is performing an OpenGL-like vertex transformation on the places of interest to check whether they are in a viewable frustum. The frustum is created in the following line:
createProjectionMatrix(projectionTransform, 60.0f*DEGREES_TO_RADIANS, self.bounds.size.width*1.0f / self.bounds.size.height, 0.25f, 1000.0f);
This sets up a frustum with a 60 degree field of view, a near clipping plane of 0.25 and a far clipping plane of 1000. Any point of interest that is further away than 1000 units will then not be visible.
So, to step through the code: first, the projection matrix that sets up the frustum and the camera view matrix, which simply rotates the object so it is the right way up relative to the camera, are multiplied together. Then, for each place of interest, its location is multiplied by the view-projection matrix. This projects the location of the place of interest into the view frustum, applying rotation and perspective.
The next two lines then convert the transformed location of the place into what's known as normalized device coordinates. The 4-component vector needs to be collapsed to 3-dimensional space; this is achieved by projecting it onto the plane w == 1, by dividing the vector by its w component, v[3]. It is then possible to determine whether the point lies within the projection frustum by checking if its coordinates lie in the cube with side length 2 centered on the origin [0, 0, 0]. In this case, the x and y coordinates are biased from the range [-1, 1] to [0, 1] to match up with the UIKit coordinate system, by adding 1 and dividing by 2.
Next, the v[2] component, z, is checked to see whether it is greater than 0. This is actually incorrect, as it has not been biased; it should be checked to see whether it is greater than -1. This will detect whether the place of interest is in the first half of the projection frustum; if it is, then the object is deemed visible and displayed.
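For reference, here is a sketch of the full visibility test with the perspective divide applied to z as well (this is an assumption based on the explanation above, not code from the Apple sample):

// v is a clip-space position (x, y, z, w) produced by the
// projectionCameraTransform multiplication in the sample.
bool pointVisible(const float v[4]) {
    if (v[3] <= 0.0f) return false;                  // behind the camera
    float x = v[0] / v[3], y = v[1] / v[3], z = v[2] / v[3];
    // Visible iff all normalized device coordinates lie in [-1, 1].
    return x >= -1.0f && x <= 1.0f &&
           y >= -1.0f && y <= 1.0f &&
           z >= -1.0f && z <= 1.0f;
}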
If you are unfamiliar with vertex projection and coordinate systems, this is a huge topic with a fairly steep learning curve. There is however a lot of material online covering it, here are a couple of links to get you started:
http://www.falloutsoftware.com/tutorials/gl/gl0.htm
http://www.opengl.org/wiki/Vertex_Transformation
Good luck!

OpenGL: How to lathe a 2D shape into 3D?

I have an OpenGL program (written in Delphi) that lets the user draw a polygon. I want to automatically revolve (lathe) it around an axis (say, the Y axis) and get a 3D shape.
How can I do this?
For simplicity, you could force at least one point to lie on the axis of rotation. You can do this easily by adding/subtracting the same value to all the x values, and the same value to all the y values, of the points in the polygon. It will retain the original shape.
The rest isn't really that hard. Pick an angle that is fairly small, say one or two degrees, and work out the coordinates of the polygon vertices as it spins around the axis. Then just join up the points with triangle fans and triangle strips.
To rotate a point around an axis is just basic trigonometry. At 0 degrees rotation, you have the points at their 2D coordinates with a value of 0 in the third dimension.
Let's assume the points are in X and Y and we are rotating around Y. The original X coordinate represents the hypotenuse. At 1 degree of rotation, we have:
sin(1) = z/hypotenuse
cos(1) = x/hypotenuse
(assuming degree-based trig functions)
To rotate a point (x, y) by angle T around the Y axis to produce a 3d point (x', y', z'):
y' = y
x' = x * cos(T)
z' = x * sin(T)
So for each point on the edge of your polygon you produce a circle of 360 points centered on the axis of rotation.
Now make a 3d shape like so:
create a GL 'triangle fan' by using your center point and the first array of rotated points
for each successive array, create a triangle strip using the points in the array and the points in the previous array
finish by creating another triangle fan centered on the center point and using the points in the last array
One thing to note is that the kinds of trig functions I've used usually measure angles in radians, while OpenGL uses degrees. To convert degrees to radians, the formula is:
radians = degrees / 180 * pi
Essentially the strategy is to sweep the profile given by the user around the given axis and generate a series of triangle strips connecting adjacent slices.
Assume that the user has drawn the polygon in the XZ plane. Further, assume that the user intends to sweep around the Z axis (i.e. the line X = 0) to generate the solid of revolution, and that one edge of the polygon lies on that axis (you can generalize later once you have this simplified case working).
For simple enough geometry, you can treat the perimeter of the polygon as a function x = f(z), that is, assume there is a unique X value for every Z value. When we go to 3D, this function becomes r = f(z), that is, the radius is unique over the length of the object.
Now, suppose we want to approximate the solid with M "slices" each spanning 2 * Pi / M radians. We'll use N "stacks" (samples in the Z dimension) as well. For each such slice, we can build a triangle strip connecting the points on one slice (i) with the points on slice (i+1). Here's some pseudo-ish code describing the process:
double dTheta = 2.0 * pi / M;
double dZ = (zMax - zMin) / N;

// Iterate over "slices":
for (int i = 0; i < M; ++i) {
    double theta = i * dTheta;
    double theta_next = (i+1) * dTheta;

    // Iterate over "stacks":
    for (int j = 0; j <= N; ++j) {
        double z = zMin + j * dZ; // note: j, not i -- z steps with the stack index

        // Get cross-sectional radius at this Z location from your 2D model (was the
        // X coordinate in the 2D polygon):
        double r = f(z); // See above definition

        // Convert 2D to 3D by sweeping by the angle represented by this slice:
        double x = r * cos(theta);
        double y = r * sin(theta);

        // Get coordinates of the next slice over so we can join them with a triangle strip:
        double xNext = r * cos(theta_next);
        double yNext = r * sin(theta_next);

        // Add these two points to your triangle strip (heavy pseudocode):
        strip.AddPoint(x, y, z);
        strip.AddPoint(xNext, yNext, z);
    }
}
That's the basic idea. As sje697 said, you'll possibly need to add end caps to keep the geometry closed (i.e. a solid object, rather than a shell). But this should give you enough to get you going. This can easily be generalized to toroidal shapes as well (though you won't have a one-to-one r = f(z) function in that case).
If you just want it to rotate, then:
glRotatef(angle,0,1,0);
will rotate it around the Y-axis. If you want a lathe, then this is far more complex.
