I hope someone can point me to how I can solve my issue. I have 6000 X-rays where I have to measure the angle between bones.
My strategy is the following: if I can somehow draw line1 through the long axis of bone1, and line2 through the long axis of bone2, then I can simply measure the angle between the two lines.
So how can I find the axis in the first place? Is it possible to do it this way?
(It is an X-ray picture.) Let's say 1 cm from the top of the picture, we scan that row for the first pixel that turns white (the first edge of the bone); call this point A1. Then we continue scanning until we find the first pixel that turns black (the second edge of the bone); this is point A2. We draw a line Y1 between A1 and A2.
We do the same procedure further down, let's say 10 cm from the top, and get another line Y2(B1, B2). A line that goes from the midpoint of Y1 to the midpoint of Y2 will be the axis of the bone.
I already managed to play with thresholding and edge detection to make it easier to draw the lines.
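In rough Python, what I have in mind is something like this (just a sketch on an already thresholded 0/255 image; the 1 cm / 10 cm offsets would become row indices, and it assumes a single bone crosses each scanned row):

import numpy as np

def row_midpoint(binary, row):
    """Midpoint of the first white run in a row: A1 is the first white
    pixel, A2 the first black pixel after it (one bone per row assumed)."""
    white = binary[row] > 0
    a1 = int(np.argmax(white))            # first white pixel (A1)
    a2 = a1 + int(np.argmin(white[a1:]))  # first black pixel after A1 (A2)
    return np.array([(a1 + a2) / 2.0, row])

def bone_axis_angle(binary, rows_bone1, rows_bone2):
    """Angle in degrees between the two bone axes, each axis being the
    line through the midpoints of two scanned rows."""
    def axis(rows):
        p1, p2 = row_midpoint(binary, rows[0]), row_midpoint(binary, rows[1])
        return p2 - p1
    v, w = axis(rows_bone1), axis(rows_bone2)
    cos = abs(np.dot(v, w)) / (np.linalg.norm(v) * np.linalg.norm(w))
    return float(np.degrees(np.arccos(np.clip(cos, 0.0, 1.0))))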
Does it make sense?
Please, can it be done? Any idea how?
Any help will be appreciated, thank you!
Here's an idea:
Maybe if you downsample the images to get fewer artifacts and/or apply some mathematical morphology (http://en.wikipedia.org/wiki/Mathematical_morphology) to reduce the noise, you can turn the bones into more line-shaped, separated figures.
Apply some threshold so you have black/white binary pictures. Use math to find a point in each of the two shapes and then try to match each one to a rectangle or an oval. That will give you the axes you are looking for, and then you can measure the angle.
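For example, something along these lines (a rough sketch that assumes the two bones end up as the two largest blobs after thresholding; the file name is a placeholder):

import cv2

binary = cv2.imread("xray_binary.png", cv2.IMREAD_GRAYSCALE)  # placeholder
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Keep the two largest blobs, assumed to be the bones.
bones = sorted(contours, key=cv2.contourArea, reverse=True)[:2]
# fitEllipse returns ((cx, cy), (w, h), angle); each contour needs >= 5 points.
angles = [cv2.fitEllipse(c)[2] for c in bones]
d = abs(angles[0] - angles[1])
print("angle between bones:", min(d, 180 - d))  # wrap modulo 180 degrees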
This is too general a question. Images would always be appreciated! I guess your 6000 X-rays produce a grayscale image of the bones. In that case the general idea would be to:
1. Find a good binary segmentation of the bones in 3D
2. Find a good skeletonization of the 2 bones
3. Replace the main skeleton of each bone by the line segment that best approximates it and measure the angle (in 3D) between them, as in the sketch below
4. If these are two bones in the body, there is usually a limit to the degrees of freedom of two connected bones; it would be good to validate the result with respect to this.
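A sketch of steps 2-3 in 2D, using scikit-image's skeletonize and OpenCV's fitLine (mask1 and mask2 stand in for the binary segmentations from step 1):

import numpy as np
import cv2
from skimage.morphology import skeletonize

def axis_direction(mask):
    """Skeletonize a binary bone mask and fit a line to the skeleton."""
    skel = skeletonize(mask > 0)
    ys, xs = np.nonzero(skel)
    pts = np.column_stack([xs, ys]).astype(np.float32)
    vx, vy, _, _ = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    return np.array([vx, vy])

def angle_deg(mask1, mask2):
    v, w = axis_direction(mask1), axis_direction(mask2)
    cos = abs(float(np.dot(v, w)))  # abs: the sign of the direction is arbitrary
    return float(np.degrees(np.arccos(np.clip(cos, 0.0, 1.0))))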
Tracing the line in realtime might not be the best in terms of accuracy. I guess this is obvious.
This could give an idea for the full human pose.
Do you have any suggestion for this?
I am following the steps in a research paper to re-implement it for my specific problem. I am not a professional programmer, and I have been struggling a lot for more than a month. When I reach the scoring step with the generalized Hough transform, I do not get any result: what I get is a blank image, without the object center being found.
What I did includes the following steps:
I define a spatially constrained area for a training image and extract SIFT features within that area. The red point in the center represents the object center in the template (training) image.
These are the interest points extracted by SIFT in the query image:
Keypoints are matched according to some conditions: they should be quantized to the same visual word and be spatially consistent. So I get the following points after matching:
I have 15 and 14 points for the template and query images, respectively. I send these points, along with the object-center coordinate of the template image, to the generalized Hough transform (code that I found on GitHub). The code works properly on its default images, but given the few points I am feeding the algorithm, I do not know what I am doing wrong.
I thought maybe it was because of the theta calculation, so I changed this line to return the absolute values of the y and x differences, but that did not help.
In line 20 they only consider 90 degrees for binning. Could I ask what the reason is, and how I can define a binning according to my problem and the range of rotation angles around the object center?
- Does the binning range affect the center calculation?
I would really appreciate it if you could let me know what I am doing wrong here.
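For reference, here is my own minimal sketch of the voting step I am trying to reproduce (not the GitHub code; variable names are mine, and it bins the rotation over the full 0-360 degree range rather than only 90 degrees):

import numpy as np

def vote_for_center(template_pts, query_pts, template_center,
                    query_shape, theta_step_deg=10):
    """Generalized-Hough voting with matched keypoints: for each candidate
    rotation theta, every query point votes for
    center = query_point + R(theta) @ (template_center - template_point)."""
    h, w = query_shape
    thetas = np.deg2rad(np.arange(0, 360, theta_step_deg))
    acc = np.zeros((len(thetas), h, w), dtype=np.int32)
    offsets = np.asarray(template_center, float) - np.asarray(template_pts, float)
    for t, th in enumerate(thetas):
        c, s = np.cos(th), np.sin(th)
        rot = np.array([[c, -s], [s, c]])
        centers = np.asarray(query_pts, float) + offsets @ rot.T
        for cx, cy in np.round(centers).astype(int):
            if 0 <= cx < w and 0 <= cy < h:
                acc[t, cy, cx] += 1  # one vote per keypoint pair per theta
    # The best (theta_bin, y, x) cell is the estimated rotation and center.
    return np.unravel_index(acc.argmax(), acc.shape)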
I need to find the moving direction of a vehicle from its extracted point cloud, which I have converted to the following image.
As the target vehicle could be moving straight or turning, and the image is sometimes clear and sometimes fuzzy, I find it difficult to match the "L" shape using template matching.
I also tried to use RANSAC to fit a line, but the shape has two sides and RANSAC does not work well. What I need is an oriented bounding box to represent the vehicle.
If I could get the yaw angle of the "L" shape, it would be very easy to recover an oriented bounding box from it. Could anyone give me some suggestions?
PS: The function cv::minAreaRect gives a basic result, but it sometimes fits the "L" shape in the wrong direction.
Build the convex hull and qualify the sides as "pretty vertical" and "pretty horizontal". This will help you identify the corners.
A yet simpler method is to identify the four pixels that maximize ±X±Y. This gives you an interesting bounding quadrilateral (often reduced to a triangle).
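A quick sketch of that trick, assuming pts is an (N, 2) array of the shape's (x, y) pixel coordinates:

import numpy as np

def extreme_points(pts):
    """Return the four points maximizing x+y, x-y, -x+y, -x-y.
    The result is a bounding quadrilateral (sometimes degenerate)."""
    pts = np.asarray(pts)
    s, d = pts.sum(axis=1), pts[:, 0] - pts[:, 1]
    return pts[[s.argmax(), d.argmax(), d.argmin(), s.argmin()]]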
One possibility is to see which side is closer to the center of mass, because this center is always closer to the "L" shape.
See the link below:
docs.opencv.org/2.4/doc/tutorials/imgproc/shapedescriptors/moments/moments.html
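As in that tutorial, the centroid comes straight from the image moments; a minimal sketch (the file name is a placeholder):

import cv2

binary = cv2.imread("l_shape.png", cv2.IMREAD_GRAYSCALE)  # placeholder
m = cv2.moments(binary, binaryImage=True)
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # centroid of the shape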
I have an algorithm that simply goes through a number of corners and finds those which are parallel. My problem, as shown below, is that I sometimes get false positives.
To eliminate these I was going to check whether both points fall onto a single Hough line, but that would be quite computationally intensive, and I was wondering if anyone had any simpler ideas.
Thanks.
OK, based on the comments, this should be fixable. When you detect a pair of parallel lines, get the equation of the line through the two corners you used to construct it. This line will be of the form y = mx + c. Then, for every y coordinate between the two points, compute the x coordinate. This gives you the set of pixels the line segment covers. Go through these pixels and check whether the intensity at each one is closer to black than to white. If a majority of the pixels in the set are blackish, it's a line; if not, it's probably not.
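A possible sketch of that check, sampling along the segment instead of solving y = mx + c explicitly (which also handles near-vertical segments); gray is the grayscale image and p1, p2 the two corners:

import numpy as np

def is_dark_segment(gray, p1, p2, dark_thresh=128):
    """True if the segment p1-p2 mostly covers dark (black-ish) pixels."""
    n = int(max(abs(p2[0] - p1[0]), abs(p2[1] - p1[1]))) + 1
    xs = np.linspace(p1[0], p2[0], n).round().astype(int)
    ys = np.linspace(p1[1], p2[1], n).round().astype(int)
    values = gray[ys, xs]                     # pixels covered by the segment
    return np.mean(values < dark_thresh) > 0.5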
I am looking for a fast idea/algorithm that lets me find squares (used as mark points) in an image file. It shouldn't be much of a challenge, however...
I started by converting the source image to grayscale and scanning each line of the image, looking for the two or three longest lines (pixel by pixel).
Then, having an array of "lines", I look for elements which may form the desired square.
A better idea would be to find the pattern using its known traits, like: it is a square, and beyond the square there is no distortion (just white space), etc.
The goal is to analyze a 5000 x 5000 px image in less than 1-2 s.
Is it possible?
One of the OpenCV samples, squares.cpp, does just this; see here for the code. Alternatively you could look up the Hough transform to detect all lines in your image, and then test for pairs of lines intersecting at right angles.
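Roughly what squares.cpp does, sketched in Python (the thresholds and minimum area are guesses you would tune):

import cv2

def find_squares(gray):
    """Threshold, find contours, and keep 4-gons that are convex."""
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # OpenCV 4.x returns (contours, hierarchy).
    contours, _ = cv2.findContours(bw, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    squares = []
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.isContourConvex(approx) \
                and cv2.contourArea(approx) > 100:   # area cutoff to tune
            squares.append(approx.reshape(-1, 2))
    return squares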
There are also a number of resources on this site which may help you:
OpenCV C++/Obj-C: Detecting a sheet of paper / Square Detection
Are there any opencv function like "cvHoughCircles()" for square detection?
square detection, image processing
I'm sure there are others, these are just the first few I came across.
See scale-invariant feature transform (SIFT), template matching, and the Hough transform. A quick and inaccurate option may be to make a color histogram and compare histograms. If the images are complicated enough, you might be able to distinguish between several sets of images.
To make the matter simple, assume we have three buckets for R, G, and B. A completely white image would have (100%, 100%, 100%) for (R, G, B). A completely red image would have (100%, 0%, 0%). A complicated image might have something like (23%, 53%, 34%). If you take the distance between the points in that (R, G, B) space, you can compare which one is "closer".
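A tiny sketch of that comparison, using the per-channel mean as a percentage of full intensity so it matches the (R, G, B) example above:

import numpy as np

def rgb_levels(img):
    """Per-channel mean as a percentage of full intensity: a completely
    white RGB image gives (100, 100, 100), a pure red one (100, 0, 0)."""
    return img.reshape(-1, 3).mean(axis=0) / 255.0 * 100.0

def color_distance(img_a, img_b):
    # Euclidean distance between the two points in (R, G, B) space;
    # smaller means the images are "closer" in overall color.
    return float(np.linalg.norm(rgb_levels(img_a) - rgb_levels(img_b)))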
I guess links by chris solved the question :)
If, instead of getting all edges, I only want edges that form 45-degree angles, what is a method to detect these?
Would it be possible to detect all edges, then somehow run a constrained Hough transform to find which edges lie at 45 degrees?
What is wrong with using a diagonal structuring element and simply convolving the image?
Details
Please read here and it should become clear how to build the structuring element. If you are familiar with convolution, then you can build a simple structuring matrix which amplifies diagonals without the theory:
{ 0, 1, 2},
{-1, 0, 1},
{-2, -1, 0}
The idea is: you want to amplify pixels in the image where what lies 45° below them is different from what lies 45° above them. That is the case when you are on a 45° edge.
Taking an example: the following picture, convolved with the above matrix, gives a gray-level image in which the highest pixel values lie on the lines that are at exactly 45°.
Now the approach is to simply binarize the image. Et voilà.
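In OpenCV the whole pipeline is only a few lines (a sketch; the input file name and the 0.5 binarization factor are placeholders to tune):

import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# The diagonal structuring matrix from above.
kernel = np.array([[ 0,  1,  2],
                   [-1,  0,  1],
                   [-2, -1,  0]], dtype=np.float32)

# filter2D correlates rather than convolves, but taking the absolute
# response of this antisymmetric kernel makes the difference irrelevant.
response = np.abs(cv2.filter2D(img, cv2.CV_32F, kernel))

# Binarize: keep only the strongest responses, i.e. the 45-degree edges.
edges45 = np.uint8(response > 0.5 * response.max()) * 255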
First of all, it is possible to do this as post-processing.
The result of the Hough transform lives in the parameter space of (angle, radius).
So you can simply take a slice at, say, angle = (45-5, 45+5) degrees over all radii.
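For instance, with cv2.HoughLines (a sketch; note that OpenCV parameterizes a line by the angle of its normal, so a line at 45° to the x-axis shows up near theta = 135°):

import cv2
import numpy as np

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLines(edges, 1, np.pi / 180, 100)    # (rho, theta) pairs

# Slice the parameter space: theta is the angle of the line's normal,
# so 45-degree lines show up near theta = 135 degrees.
target, tol = np.deg2rad(135), np.deg2rad(5)
diag45 = [] if lines is None else \
         [(rho, theta) for rho, theta in lines[:, 0] if abs(theta - target) < tol]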
An alternative is to use an edge-detection method whose output contains only 45°/135° edges in the first place.
If you use a kernel but want line equations, then you'll still have to perform a line fit after the edge pixels are found. If you're certain the lines are exactly 45 degrees, then knowing a single (x, y) point on any discovered line or line segment is sufficient to find the line equation: a 45-degree line through (x0, y0) is simply y = x + (y0 - x0).
Hough (rho, theta) parameter space can use whatever ranges of rho and theta you'd like. You might also preprocess the image to favor neighboring pixels at the proper angle; for example, give a "bonus point" to an edge pixel if its 8-neighbors lie at the appropriate angle. You can certainly mix a kernel-based method (such as halirutan suggested) with a parametric or parameterless Hough algorithm.
A recent implementation of Hough runs at blazing fast speeds, so if you're looking for a quick solution you might download the open source code and then simply filter the output.
"Real-time line detection through an improved Hough transform voting scheme"
by Fernandes and Oliveira
http://www.ic.uff.br/~laffernandes/projects/kht/index.html