Represent hand shape as a time-series curve in Visual C++ using OpenCV

I have to represent the hand as a cluster of finger segments in a time-series curve (horizontal axis: angle between a fixed point and each contour vertex;
vertical axis: Euclidean distance between a center point in the hand and each contour vertex). Is there an algorithm for drawing that curve?
How can I represent the hand shape as a time-series curve?
Thanks in advance.
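The curve you describe is sometimes called a centroid-distance shape signature, and it can be computed directly from the contour. Below is a minimal sketch, assuming the hand contour has already been extracted with cv::findContours and using the contour centroid as the fixed center point (contourSignature is a name I made up, not a library function):

    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Hypothetical helper: converts a hand contour into (angle, distance)
    // samples, i.e. the time-series curve described above.
    std::vector<cv::Point2d> contourSignature(const std::vector<cv::Point>& contour)
    {
        // Use the contour centroid as the fixed center point inside the hand.
        cv::Moments m = cv::moments(contour);
        cv::Point2d center(m.m10 / m.m00, m.m01 / m.m00);

        std::vector<cv::Point2d> samples;  // x = angle (rad), y = distance (px)
        for (const cv::Point& p : contour) {
            double angle = std::atan2(p.y - center.y, p.x - center.x);
            double dist  = std::hypot(p.x - center.x, p.y - center.y);
            samples.emplace_back(angle, dist);
        }
        // Sort by angle so that plotting the samples left to right traces one
        // sweep around the hand; each finger shows up as a peak in the curve.
        std::sort(samples.begin(), samples.end(),
                  [](const cv::Point2d& a, const cv::Point2d& b) { return a.x < b.x; });
        return samples;
    }

Plotting distance against angle (for example with cv::polylines on a blank image) then gives the curve.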

Related

Straightening a curved contour

Given a contour with an easily-identifiable edge, how would one straighten it and its contents, as pictured?
Detect the black edge and fit a spline curve to it.
From that spline you will be able to draw normals, and mark points regularly along it. This forms a (u, v) mesh that is easy to straighten.
To compute the destination image, draw horizontal rows, which correspond to particular normals in the source. Then sampling along the horizontal corresponds to some fractional (x, y) coordinates in the source. You can perform bilinear interpolation around the neighboring pixels to achieve good quality resampling.
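A sketch of that resampling step, where the hypothetical splinePoint/splineNormal helpers stand in for the fitted spline (trivial placeholder bodies are included only so the snippet compiles); cv::remap performs the bilinear interpolation:

    #include <opencv2/opencv.hpp>

    // Hypothetical stand-ins for the fitted spline: point on the curve and its
    // unit normal at parameter u in [0, 1]. Replace with real spline evaluation.
    static cv::Point2f splinePoint(float u)  { return cv::Point2f(u * 500.f, 100.f); }
    static cv::Point2f splineNormal(float u) { (void)u; return cv::Point2f(0.f, 1.f); }

    // Each output row follows one normal of the spline; each column steps a
    // fixed distance along that normal. cv::remap does the bilinear resampling.
    cv::Mat straighten(const cv::Mat& src, int width, int height)
    {
        cv::Mat mapX(height, width, CV_32FC1), mapY(height, width, CV_32FC1);
        for (int y = 0; y < height; ++y) {
            float u = static_cast<float>(y) / height;      // position along curve
            cv::Point2f base = splinePoint(u);
            cv::Point2f n = splineNormal(u);
            for (int x = 0; x < width; ++x) {              // step along the normal
                cv::Point2f srcPt = base + n * static_cast<float>(x);
                mapX.at<float>(y, x) = srcPt.x;
                mapY.at<float>(y, x) = srcPt.y;
            }
        }
        cv::Mat dst;
        cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR); // bilinear interpolation
        return dst;
    }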

Calculate distance between two objects in an image with OpenCV and C++

I am measuring the distance between two objects in an image using OpenCV and C++.
I have detected two balls with the Hough circle transform and want to measure the distance between them.
So far I have used the Pythagorean theorem to find the distance between the two center coordinates, but the result is not close:
d = sqrt( (x2-x1)^2 + (y2-y1)^2 )
For example, if the distance between the two balls is 13 cm, the result is 5.6 cm.
Thanks in advance.
To measure the distance between objects in an image you will have to look into camera calibration.
And, as Mark Setchell said, the distance will be in pixels if you do not perform the calibration and the 2D-point-to-3D-point transformation.
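As an illustration of the pixel-versus-metric gap (not the full calibration pipeline): for a planar scene you can sometimes get away with a single pixels-per-centimetre scale measured from a reference object lying in the same plane. The centers and the scale value below are made-up assumptions:

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <cstdio>

    int main()
    {
        // Hypothetical circle centers as returned by cv::HoughCircles, in pixels.
        cv::Point2f c1(120.f, 200.f), c2(420.f, 215.f);

        // Pythagoras gives the distance in pixels, not centimetres.
        double pixelDist = std::hypot(c1.x - c2.x, c1.y - c2.y);

        // Planar shortcut in place of full calibration: image a ruler in the
        // same plane as the balls and measure pixels per centimetre once.
        const double pixelsPerCm = 23.1;  // assumed value, not measured
        std::printf("distance: %.1f px = %.1f cm\n", pixelDist, pixelDist / pixelsPerCm);
        return 0;
    }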

OpenCV: Draw major axis of contour

I am currently working on cell contour detection with OpenCV. So far, I have been able to detect the cell contours and I want to find and draw the longest axis parallel to the y-axis of the contour.
What I did was create a bounding rectangle from the contour, which gives the center, the height, and the width, and use this information to draw the axes. As it turns out, the major axis does not necessarily run through the center, so at times it pokes out over the cell contour.
My line of approach is to split the contour into two halves along the y-axis, acquire the perpendicular distance from each contour point to the y-axis, and then select the longest on each side, but I suppose this is computationally expensive.
Is there an easy way to find the longest axes of a contour (not a bounding rectangle), that are parallel to the x- or y- coordinate axis?
Here's an image: my cell contour is in thin black, the major and minor axes are in red, and the blue "axes" are what I want to find.
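One workable approach, sketched below: rasterize the contour as a filled mask and scan each row for the span between the leftmost and rightmost foreground pixels (transpose the mask for the vertical case). For a concave contour this span may briefly leave the interior, so it measures the horizontal extent per row rather than a strict interior chord. longestHorizontalChord is my own name:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Longest horizontal chord of a contour: fill the contour into a mask,
    // then find the row with the widest leftmost-to-rightmost span.
    void longestHorizontalChord(const std::vector<cv::Point>& contour,
                                cv::Size imageSize, cv::Point& left, cv::Point& right)
    {
        cv::Mat mask = cv::Mat::zeros(imageSize, CV_8UC1);
        std::vector<std::vector<cv::Point>> cs{contour};
        cv::drawContours(mask, cs, 0, cv::Scalar(255), cv::FILLED);

        int best = -1;
        for (int y = 0; y < mask.rows; ++y) {
            const uchar* row = mask.ptr<uchar>(y);
            int xMin = -1, xMax = -1;
            for (int x = 0; x < mask.cols; ++x)
                if (row[x]) { if (xMin < 0) xMin = x; xMax = x; }
            if (xMin >= 0 && xMax - xMin > best) {
                best  = xMax - xMin;
                left  = cv::Point(xMin, y);
                right = cv::Point(xMax, y);
            }
        }
    }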

Getting corners from convex points

I have written an algorithm to extract the points shown in the image. They form a convex shape and I know their order. How do I extract the corners (top 3 and bottom 3) from such points?
I'm using OpenCV.
If you already have the convex hull of the object, and that hull includes the corner points, then all you need to do is simplify the hull until it only has 6 points.
There are many ways to simplify polygons; for example, you could just use the simple algorithm from this answer: How to find corner coordinates of a rectangle in an image
do
    for each point P on the convex hull:
        measure its distance to the line AB between
        the point A before P and the point B after P
    remove the point with the smallest distance
repeat until 6 points are left
If you do not know the exact number of points, you could instead remove points until the minimum distance rises above a certain threshold.
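A C++ version of that loop might look like the following; simplifyHull and pointLineDist are names I made up, not library functions:

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <limits>
    #include <vector>

    // Perpendicular distance from point p to the infinite line through a and b.
    static double pointLineDist(cv::Point2d p, cv::Point2d a, cv::Point2d b)
    {
        cv::Point2d d = b - a;
        double len = std::hypot(d.x, d.y);
        if (len == 0.0) return std::hypot(p.x - a.x, p.y - a.y);
        // |cross product| / base length = perpendicular distance.
        return std::abs(d.x * (p.y - a.y) - d.y * (p.x - a.x)) / len;
    }

    // Repeatedly drop the hull point that deviates least from the line joining
    // its two neighbours, until only targetCount points remain.
    void simplifyHull(std::vector<cv::Point>& hull, size_t targetCount)
    {
        while (hull.size() > targetCount) {
            size_t worst = 0;
            double minDist = std::numeric_limits<double>::max();
            for (size_t i = 0; i < hull.size(); ++i) {
                const cv::Point& a = hull[(i + hull.size() - 1) % hull.size()];
                const cv::Point& b = hull[(i + 1) % hull.size()];
                double d = pointLineDist(hull[i], a, b);
                if (d < minDist) { minDist = d; worst = i; }
            }
            hull.erase(hull.begin() + worst);
        }
    }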
You could also use Ramer-Douglas-Peucker to simplify the polygon; OpenCV already has that implemented in cv::approxPolyDP.
Just modify the OpenCV squares sample to use 6 points instead of 4.
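A hedged sketch of that adaptation: instead of the sample's fixed epsilon and 4-vertex check, grow epsilon until cv::approxPolyDP collapses the hull to 6 vertices. sixCorners is my own name:

    #include <opencv2/opencv.hpp>
    #include <vector>

    std::vector<cv::Point> sixCorners(const std::vector<cv::Point>& hull)
    {
        std::vector<cv::Point> approx;
        double eps = 1.0;                       // starting tolerance, in pixels
        do {
            cv::approxPolyDP(hull, approx, eps, /*closed=*/true);
            eps *= 1.5;                         // relax the tolerance each round
        } while (approx.size() > 6);            // a finer eps schedule avoids
        return approx;                          // overshooting below 6 vertices
    }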
Instead of trying to directly determine which of your feature points correspond to corners, how about applying a corner detection algorithm to the entire image and then looking for which of your feature points appear close to peaks in the corner detector's response?
I'd suggest starting with a Harris corner detector. The OpenCV implementation is cv::cornerHarris.
Essentially, the Harris algorithm applies both a horizontal and a vertical Sobel filter to the image (or some other approximation of the partial derivatives of the image in the x and y directions).
It then constructs a 2 by 2 structure matrix at each image pixel, looks at the eigenvalues of that matrix, and calls points corners if both eigenvalues are above some threshold.
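A minimal cv::cornerHarris usage sketch; the file name and the parameter values are placeholders, not tuned for any particular image:

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::Mat img = cv::imread("shape.png", cv::IMREAD_GRAYSCALE);  // assumed input
        if (img.empty()) return 1;

        // Harris response map; blockSize, ksize and k are typical defaults.
        cv::Mat response;
        cv::cornerHarris(img, response, /*blockSize=*/2, /*ksize=*/3, /*k=*/0.04);

        // Threshold relative to the strongest response to get a corner mask,
        // which the feature points can then be tested against.
        double maxVal;
        cv::minMaxLoc(response, nullptr, &maxVal);
        cv::Mat corners = response > 0.01 * maxVal;

        cv::imwrite("corners.png", corners);
        return 0;
    }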

Rectangle detection with Hough transform

I'm trying to implement rectangle detection using the Hough transform, based on
this paper.
I programmed it in Matlab, but after detecting the pairs of parallel lines and the orthogonal pairs, I must detect the intersections of these pairs. My question is about these two-line intersections in Hough space.
I found the intersection points by solving four systems of equations. Do these intersection points lie in Cartesian or polar coordinate space?
For those of you wondering about the paper, it's:
Rectangle Detection based on a Windowed Hough Transform by Cláudio Rosito Jung and Rodrigo Schramm.
Now, according to the paper, the intersection points are expressed as polar coordinates; obviously your implementation may be different (the only way to tell is to show us your code).
Assuming you are being consistent with the paper's notation, your peaks should be expressed as P_k = (\rho_k, \theta_k), with C(\rho_k, \theta_k) the accumulator count of the peak.
You must then perform the peak pairing given by equation (3) in section 4.3:

    |\theta_i - \theta_j| < T_\theta
    |C(\rho_i, \theta_i) - C(\rho_j, \theta_j)| < T_L * (C(\rho_i, \theta_i) + C(\rho_j, \theta_j)) / 2

where T_\theta represents the angular threshold corresponding to parallel lines and T_L is the normalized threshold corresponding to lines of similar length.
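If the reconstruction above is right, the pairing test encodes directly; peaksPaired is my own name:

    #include <cmath>

    // Peaks (theta_i, C_i) and (theta_j, C_j) are paired when the lines are
    // near-parallel and the accumulator counts (line lengths) are similar.
    bool peaksPaired(double theta_i, double C_i, double theta_j, double C_j,
                     double T_theta, double T_L)
    {
        return std::abs(theta_i - theta_j) < T_theta &&
               std::abs(C_i - C_j) < T_L * (C_i + C_j) / 2.0;
    }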
The accuracy of the Hough space depends on two main factors.
The first is the resolution of the accumulator: looping through the accumulator array requires dividing the Hough space into a discrete grid, and the cell size of that grid limits how precisely \rho and \theta can be recovered.
The second factor is the location of the origin in the original image. Consider what happens if you do a sweep of \theta for any given change in \rho: near the origin, one of these sweeps will cover far fewer pixels than a sweep out near the edges of the image. The consequence is that near the edges of the image you need a much higher \rho-\theta resolution in your accumulator to achieve the same level of accuracy when transforming back to Cartesian coordinates.
The problem with increasing the resolution, of course, is that you will need more computational power and memory. Also, if you uniformly increase the accumulator resolution, you have wasted resolution near the origin where it is not needed.
Some ideas to help with this:
- Place the origin right at the center of the image, as opposed to using the natural bottom-left or top-left of the image in code.
- Try using the closest image you can get to a square; the more elongated an image is for a given area, the more pronounced the resolution trap becomes at the edges.
- Try dividing your image into 4/9/16 etc. sub-images, each with its own accumulator whose origin is at the center of that sub-image. It will require a little overhead to link the results of the accumulators together for rectangle detection, but it should help spread the resolution more evenly.
The ultimate solution would be to increase the resolution linearly depending on the distance from the origin. This can be achieved using the circle equation

    (x - a)^2 + (y - b)^2 = \rho^2

where x, y are the current pixel, a, b are your chosen origin, and \rho is the radius. Once the radius is known, adjust your accumulator resolution accordingly. You will have to keep track of the center of each \rho-\theta bin for transforming back to Cartesian coordinates.
The link to the referenced paper does not work, but if you used the standard Hough transform, then the four intersection points will be expressed in Cartesian coordinates. In fact, the four lines detected with the Hough transform will be expressed using the "normal parametrization":
rho = x cos(theta) + y sin(theta)
so you will have four pairs (rho_i, theta_i) that identify your four lines. After checking for orthogonality (for example, just by comparing the angles theta_i) you solve four systems of equations, each of the form:
rho_j = x cos(theta_j) + y sin(theta_j)
rho_k = x cos(theta_k) + y sin(theta_k)
where x and y are the unknowns that represent the Cartesian coordinates of the intersection point.
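A sketch of one such 2x2 solve with cv::solve (intersect is a name I made up); a singular system means the two lines are parallel and cv::solve reports failure:

    #include <opencv2/opencv.hpp>
    #include <cmath>

    // Intersection of two lines given in normal form
    //   rho = x*cos(theta) + y*sin(theta)
    bool intersect(double rho1, double theta1, double rho2, double theta2,
                   cv::Point2d& pt)
    {
        cv::Mat A = (cv::Mat_<double>(2, 2) << std::cos(theta1), std::sin(theta1),
                                               std::cos(theta2), std::sin(theta2));
        cv::Mat b = (cv::Mat_<double>(2, 1) << rho1, rho2);
        cv::Mat xy;
        if (!cv::solve(A, b, xy)) return false;               // near-parallel lines
        pt = cv::Point2d(xy.at<double>(0), xy.at<double>(1)); // Cartesian (x, y)
        return true;
    }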
I am not a mathematician. I am willing to stand corrected...
From Hough [2]: "... any line on the xy plane can be described as p = x cos(theta) + y sin(theta). In this representation, p is the normal distance and theta is the normal angle of a straight line ... In practical applications, the angles theta and distances p are quantized, and we obtain an array C(p, theta)."
From the CRC Standard Mathematical Tables, Analytic Geometry, "Polar Coordinates in a Plane" section:
Such an ordered pair of numbers (r, theta) are called polar coordinates of the point p.
Straight lines: let p = distance of line from O, w = counterclockwise angle from OX to the perpendicular through O to the line. Normal form: r cos(theta - w) = p.
From this I conclude that the points lie in polar coordinate space.
