OpenCV: Making a curve straight

How do I make this curve a straight line of the same length (basically by unbending it)? I guess I need to apply some kind of non-linear transformation, but I am not sure which transformation will work best here.
Please note that if I try taking its projection on a straight line, I will end up with a shorter line.
Please provide your suggestions.

I think you can do connected-component analysis to get every point (pixel) in the curve and count the pixels. The pixel count gives the length of the straightened line, and its orientation can be taken from the line connecting the curve's two endpoints.
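That suggestion might look like this in Python with numpy (a minimal sketch assuming the curve is already a 1-pixel-wide binary mask; the helper name and the toy L-shaped curve are made up for illustration):

```python
import numpy as np

def curve_length_and_orientation(mask):
    """Length of the unbent curve ~ its pixel count; orientation taken
    from the chord between the two endpoints (pixels with exactly one
    8-connected neighbour)."""
    ys, xs = np.nonzero(mask)
    length = len(xs)
    endpoints = []
    for y, x in zip(ys, xs):
        nb = mask[max(y-1, 0):y+2, max(x-1, 0):x+2].sum() - 1
        if nb == 1:
            endpoints.append((x, y))
    (x0, y0), (x1, y1) = endpoints[0], endpoints[-1]
    angle = np.degrees(np.arctan2(y1 - y0, x1 - x0))
    return length, angle

# Toy bent curve: a horizontal run joined to a vertical run.
img = np.zeros((8, 8), dtype=np.uint8)
img[1, 1:5] = 1
img[2:5, 4] = 1
length, angle = curve_length_and_orientation(img)   # 7 pixels long
```

With length and orientation in hand, the straightened line can be drawn from one endpoint along that orientation for `length` pixels.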

Related

OpenCV parallel lines cull

I have an algorithm that simply goes through a number of corners and finds those which are parallel. My problem is, as shown below, that I sometimes get false positive results.
To eliminate this I was going to check if both points fell on a single Hough line, but this would be quite computationally intensive, and I was wondering if anyone had any simpler ideas.
Thanks.
OK, based on the comments, this should be fixable. When you detect a pair of parallel lines, get the equation of the line from the two corners you used to construct it; this line will be of the form y = mx + c. Then for every y coordinate between the two points, compute the x coordinate. This gives you the set of all pixels that the line segment covers. Go through these pixels and check whether the intensity at each pixel is closer to black than to white. If a majority of the pixels in the set are black-ish, it's a line; if not, it's probably not.
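A rough numpy sketch of that pixel-walk (the image, threshold and majority fraction are illustrative; a real implementation could sample x per y exactly as described, here a uniform sampling along the segment is used instead):

```python
import numpy as np

def is_dark_line(img, p0, p1, thresh=128, majority=0.5):
    """Sample the pixels covered by the segment p0 -> p1 and report
    whether the majority are closer to black than to white."""
    (x0, y0), (x1, y1) = p0, p1
    n = max(abs(x1 - x0), abs(y1 - y0)) + 1   # one sample per pixel step
    xs = np.linspace(x0, x1, n).round().astype(int)
    ys = np.linspace(y0, y1, n).round().astype(int)
    dark = img[ys, xs] < thresh               # "closer to black"
    return float(dark.mean()) > majority

# White image with a single black diagonal stroke.
img = np.full((20, 20), 255, dtype=np.uint8)
for i in range(20):
    img[i, i] = 0
on_line = is_dark_line(img, (0, 0), (19, 19))    # the real line
off_line = is_dark_line(img, (0, 19), (19, 0))   # a false positive to cull
```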

Find slope (angle) of each line of the image

I have to calculate a slope (or an angle) of every single detectable line of the image. And even to detect the changes of the slope of the line, if it is possible.
I've performed a 2D Fourier transform, and I know the neighborhood average angle in every area (sets of 64x64 px). I even tried a Hough transform, but neither Sobel nor Prewitt edge detection seems to detect these lines appropriately.
Please note that some of the lines are crossing each other, and some aren't straight.
Is there a method to detect the slope of each line? Or to detect these lines in order to perform a useful Hough transform?
If you need the full image I can upload it somewhere.
Image
Greets Adamek,
I hope it is not too late. Here are some quick ideas:
1) Using a Hough transform to detect lines is a good idea as a first step.
2) The second step would be some kind of labeling to really know what lines there are. The most difficult problem to address is probably how to determine the start and end of lines and separate potentially connected ones. Search for the labeling keyword in this context; that should give some results.
3) Afterwards, having start and end points, I would
a) calculate for each line a regression line if you need more exact data in further analysis
b) just compute slope and intercept via f(x)=mx+n, where m is the slope and n the intercept. Given two points in 2D this is easily done as follows:
slope = (yRight - yLeft) / (xRight - xLeft);
intercept = ((yLeft - slope*xLeft) + (yRight - slope*xRight)) * 0.5;
and don't forget to test for |xRight - xLeft| < eps beforehand to avoid division by zero (a vertical line).
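The two-point computation with the zero-division guard might look like this in Python (`slope_intercept` is a made-up name; the angle is returned as a convenience for the slope question above):

```python
import math

def slope_intercept(x_left, y_left, x_right, y_right, eps=1e-9):
    """Slope m and intercept n of f(x) = m*x + n through two points,
    guarding against division by zero for (near-)vertical lines."""
    if abs(x_right - x_left) < eps:
        return None                          # vertical: no finite slope
    m = (y_right - y_left) / (x_right - x_left)
    n = y_left - m * x_left
    return m, n, math.degrees(math.atan(m))  # slope, intercept, angle

m, n, angle_deg = slope_intercept(0.0, 1.0, 2.0, 5.0)
# m = 2.0, n = 1.0, angle_deg ~ 63.4
```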
Hope that helps,
G.

Image processing: drawing a line though the axis of a bone

I hope someone can point me to how I can solve my issue. I have 6000 X-rays where I have to measure the angle between bones.
My strategy is the following: if I can somehow draw line1 through the long axis of bone1, and line2 through the long axis of bone2, then I can simply measure the angle between the 2 lines.
So how can I find the axis in the first place? Is it possible to do it this way? :
(It is an X-ray picture.) Let's say 1 cm from the top of the picture, we scan that row for the first pixel that turns white (the first edge of the bone); here we have a dot A1. Then we continue scanning until we find the first pixel that turns black (the second edge of the bone); this is dot A2. We draw a line Y1 between (A1, A2).
We do the same procedure further down, let's say 10 cm from the top, giving another line Y2(B1, B2). A line that goes from the middle of Y1 to the middle of Y2 will be the axis of the bone.
I have already played with thresholding and edge detection to make it easier to draw the lines.
Does it make sense?
Please, can it be done? Any idea how?
Any help will be appreciated, thank you!
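The row-scanning procedure described in the question can be sketched with numpy (assuming a thresholded binary image where bone pixels are 1; `bone_axis_angle` and the synthetic band are illustrative):

```python
import math
import numpy as np

def bone_axis_angle(binary, row_a, row_b):
    """Scan two rows for the first and last white pixel (A1/A2, B1/B2),
    take the midpoints of those segments, and return the angle of the
    midpoint-to-midpoint axis in degrees."""
    def midpoint(row):
        xs = np.nonzero(binary[row])[0]   # columns where bone is present
        return (xs[0] + xs[-1]) / 2.0     # centre between the two edges
    mid_a, mid_b = midpoint(row_a), midpoint(row_b)
    return math.degrees(math.atan2(row_b - row_a, mid_b - mid_a))

# Synthetic vertical "bone": a white band of constant width.
img = np.zeros((30, 30), dtype=np.uint8)
img[:, 10:16] = 1
angle = bone_axis_angle(img, 5, 25)   # straight down the image: 90 degrees
```

Repeating this for the second bone's region and subtracting the two angles would give the angle between the bones.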
Here's an idea:
Maybe if you downsample the images to get less artifacts and/or apply some mathematical morphology (http://en.wikipedia.org/wiki/Mathematical_morphology) to reduce the noise you can convert the bones into more line-shaped separated figures.
Apply some threshold so you can have black/white binary pictures. Use math to find a point in each of the 2 shapes and then try to match them to a rectangle or an oval. These will give you the axis you are looking for and then you can measure the angle.
This is too general a question; images would always be appreciated! I guess you have 6000 X-rays, each producing a grayscale image of the bones. In this case the general idea would be to:
1. Find a good binary segmentation of the bones in 3d
2. Find a good skeletonization of the 2 bones
3. Replace the main skeletons of the two bones by line segments that best approximate it and measure the two angles (in 3d) between them
4. If this is two bones in the body, there is usually a limit to the degrees of freedom of two connected bones. It would be good to validate the measured angle with respect to this constraint.
Tracing the line in realtime might not be the best in terms of accuracy. I guess this is obvious.
This could give an idea for the full human pose.
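Step 3, replacing each skeleton by its best-approximating line and measuring the angle, might be sketched like this (assuming each skeleton is already reduced to a list of (x, y) points; the SVD-based fit is one reasonable choice, not the only one):

```python
import math
import numpy as np

def axis_angle_between(points_a, points_b):
    """Fit a principal direction to each skeleton's points and return
    the (unsigned) angle between the two axes in degrees."""
    def direction(pts):
        pts = np.asarray(pts, dtype=float)
        centred = pts - pts.mean(axis=0)
        _, _, vt = np.linalg.svd(centred)   # leading right singular vector
        return vt[0]                        # = principal axis direction
    d1, d2 = direction(points_a), direction(points_b)
    cosang = abs(float(np.dot(d1, d2)))     # unsigned angle between axes
    return math.degrees(math.acos(min(cosang, 1.0)))

# Two synthetic skeletons: one horizontal, one at 45 degrees.
a = [(x, 0) for x in range(10)]
b = [(x, x) for x in range(10)]
angle = axis_angle_between(a, b)   # 45 degrees
```

The principal-direction fit is robust to near-vertical skeletons, where a y = mx + n regression would blow up.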

No bumps in Normal Mapping

I am trying to do normal mapping on flat surface but I can't get any noticeable result :(
My shader
http://pastebin.com/raEvBY92
To my eye the shader looks fine, but it doesn't render the desired result (https://dl.dropbox.com/u/47585151/sss/final.png).
All values are passed. Normals, tangents and binormals are computed correctly when I create the grid; I have checked that!
Here are screens of ambient,diffuse,specular and bump map.
https://dl.dropbox.com/u/47585151/sss/ambient.png
https://dl.dropbox.com/u/47585151/sss/bumpMap.png
https://dl.dropbox.com/u/47585151/sss/diffuse.png
https://dl.dropbox.com/u/47585151/sss/specular.png
They seem to be legit...
The bump map, which is the result of bump = normalize(mul(bump, input.WorldToTangentSpace)), definitely looks correct, but doesn't have any impact on the end result.
Maybe I don't understand the idea of the different spaces, or I changed the order of matrix multiplication. By world matrix I understand the position and orientation of the grid, which never changes and is the identity matrix. Only the view matrix changes; it represents the camera position and orientation in its own space.
Where is my mistake?
First of all, if you're having a problem, it's a good idea to comment out everything that doesn't belong to it. The whole light computation with ambient, specular or even the diffuse texture isn't interesting at this moment. With
output.color = float4(diffuse, 1);
you can focus on your problem and see clearly what changes when you change something in your code.
If your quad lies in the xy-plane with z = 0, you should change your light vector; a vector parallel to the surface won't work. For testing purposes I generally use a diagonal vector (like normalize(float3(1, -1, 1))) to prevent a direction parallel to my object.
When I look over your code it seems to me that you didn't quite get the idea of the different spaces ;) The basic idea of normal mapping is to give additional information about the surface with additional normals. They are saved in a normal map, i.e. encoded as RGB, where b is usually the up vector. Now you must fit them into your 3D world, because they aren't in world space but in tangent space (tangent space = surface space of the triangle). Because this transformation is more complex, the normal computation goes the other way round: using the tangent, binormal and normal as a matrix, you transform your light vector and view vector from world space into tangent space (you are mapping the world-space axes xyz to t, n, b - tangent, normal, binormal; the order can be wrong, I usually swap them until it works ;) ). With your line
bump = normalize(mul(bump, input.WorldToTangentSpace));
you try to transform your normal, which is already in tangent space, into tangent space again. Change this: transform the view vector and the light vector into tangent space in the vertex shader and pass the transformed vectors to the pixel shader. There you can do the light computation in tangent space. Maybe read an additional tutorial on normal mapping, then you will get this working! :)
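The world-to-tangent transform described here can be illustrated in numpy (a sketch with a made-up TBN basis for a quad in the xy-plane; this is not the poster's actual shader code):

```python
import numpy as np

def to_tangent_space(v_world, tangent, binormal, normal):
    """Project a world-space vector onto the T, B, N axes; with the
    basis vectors as rows, the matrix maps world space -> tangent space."""
    tbn = np.vstack([tangent, binormal, normal])
    return tbn @ v_world

# A quad lying in the xy-plane: tangent +x, binormal +y, normal +z,
# so the TBN matrix happens to be the identity.
t, b, n = np.eye(3)
light_world = np.array([1.0, -1.0, 1.0])
light_world /= np.linalg.norm(light_world)
light_tangent = to_tangent_space(light_world, t, b, n)
# In tangent space the surface normal is +z, so a flat normal map
# sample (0, 0, 1) gives diffuse = dot(normal, light) = light_tangent[2].
```

Transforming the light and view vectors once per vertex this way is cheaper than transforming the sampled normal per pixel into world space.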
PS: once you're finished with the basic lighting, your specular computation seems to have some errors, too.
float3 reflect = normalize(2*diffuse*bump-LightDirection);
This line seems to be intended to compute the halfway vector, but for that you need the view vector, and you shouldn't use a lighting strength like diffuse. A tutorial can explain this in more detail than I can here.
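For reference, the usual reflection and Blinn halfway vectors can be sketched in numpy (illustrative unit vectors; `reflect` and `halfway` here are plain Python helpers, not the poster's HLSL):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def reflect(n, l):
    """Reflection of the light direction l about the normal n."""
    return 2.0 * np.dot(n, l) * n - l

def halfway(l, v):
    """Blinn-Phong halfway vector: needs the view vector, no diffuse term."""
    return normalize(l + v)

n = np.array([0.0, 0.0, 1.0])                 # surface normal
l = normalize(np.array([1.0, 0.0, 1.0]))      # direction towards the light
v = np.array([0.0, 0.0, 1.0])                 # direction towards the viewer
r = reflect(n, l)                             # for Phong: dot(r, v)^shininess
h = halfway(l, v)                             # for Blinn: dot(n, h)^shininess
```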

How to detect 45 degree edges in an image

Instead of getting all edges, I only want edges that make 45 degree angles. What is a method to detect these?
Would it be possible to detect all edges, then somehow run a constrained Hough transform to detect which edges form 45 degrees?
What is wrong with using a diagonal structuring element and simply convolving the image?
Details
Please read here, and it should become clear how to build the structuring element. If you are familiar with convolution, then you can build a simple structuring matrix which amplifies diagonals without theory:
{ 0, 1, 2},
{-1, 0, 1},
{-2, -1, 0}
The idea is: you want to amplify pixels in the image where what lies 45 deg below them is different from what lies 45 deg above them. That's the case when you are at a 45 deg edge.
Take an example: the following picture, convolved with the above matrix, gives a gray-level image where the highest pixel values lie on the lines that are at exactly 45 deg. Now the approach is to simply binarize the image. Et voilà.
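A minimal numpy version of that convolution step (the 3x3 sweep is hand-rolled so the sketch stays self-contained, and the diagonal-edge test image is synthetic):

```python
import numpy as np

# The diagonal structuring matrix from above.
K = np.array([[ 0,  1,  2],
              [-1,  0,  1],
              [-2, -1,  0]])

def convolve3x3(img, k):
    """Plain 'valid' 3x3 sweep: correlate each window with k."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = (img[y:y+3, x:x+3] * k).sum()
    return out

# Synthetic 45 deg step edge: bright above the main diagonal, dark below.
img = 255.0 * np.triu(np.ones((7, 7)), k=1)
response = convolve3x3(img, K)
edges45 = response > 0.5 * response.max()   # "simply binarize"
```

The response peaks along the diagonal boundary and is zero in the flat bright and dark regions, which is what makes the final thresholding work.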
First of all, it is possible to do this as post-processing.
The result of Hough is in the (angle, radius) parameter space,
so you can simply take a slice at, say, angle = (45-5, 45+5) and all radii.
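Slicing the (angle, radius) output that way might look like this in Python (the (rho, theta) tuples are hypothetical stand-ins for e.g. unpacked cv2.HoughLines results; note that OpenCV's theta is the angle of the line's normal, so pick the slice accordingly):

```python
import math

def near_45_degrees(lines, tol_deg=5.0):
    """Keep only (rho, theta) pairs with theta within tol of 45 degrees."""
    lo = math.radians(45.0 - tol_deg)
    hi = math.radians(45.0 + tol_deg)
    return [(rho, theta) for rho, theta in lines if lo <= theta <= hi]

# Hypothetical detections.
lines = [(10.0, math.radians(44.0)),
         (25.0, math.radians(90.0)),
         (-3.0, math.radians(47.5))]
kept = near_45_degrees(lines)   # the 90-degree line is dropped
```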
An alternative is an edge detector whose output contains only 45/135 degree edges.
If you use a kernel but want line equations, then you'll still have to perform a line fit after the edge pixels are found. If you're certain the lines are exactly 45 degrees, then knowing the (x,y) point on any discovered line or line segment is sufficient to find the line equation.
Hough (rho, theta) parameter space can use whatever ranges of rho and theta that you'd like. You might preprocess the image to favor neighbor pixels at the proper angle. For example, give a "bonus point" to an edge pixel if it has 8-neighbors at the appropriate angle. You can certainly mix a kernel-based method (such as halirutan suggested) with a parametric or parameterless Hough algorithm.
A recent implementation of Hough runs at blazing fast speeds, so if you're looking for a quick solution you might download the open source code and then simply filter the output.
"Real-time line detection through an improved Hough transform voting scheme"
by Fernandes and Oliveira
http://www.ic.uff.br/~laffernandes/projects/kht/index.html
