Finding the normal to the edges in an image - opencv

I need to find the normal (to the edge) at each point of the detected edges. I can't think of a good method for this, but that isn't my main concern. I need the normals to be consistent: they should all face outwards (or all inwards). Consider the image (after edge detection)
Since it doesn't form a closed curve, I can't define inwards or outwards, but what I want is that adjacent normals should not point in opposite directions. I am not sure whether saying that adjacent normals should form an acute angle is the right way to express this condition. Is there any known algorithm/library that addresses this problem?

Related

FEM Integrating Close to Integration Points

I am working on a program that can essentially determine the electrostatic field of some arbitrarily shaped mesh with some surface charge. To test my program I make use of a cube whose left and right faces are oppositely charged.
I use a finite element method (FEM) that discretizes the object's surface into triangles and assigns 3 integration points to each triangle (see the figure below, bottom-left and bottom-right). To obtain the field I then simply sum over all these points, whilst taking into account some weight factor (because not all triangles have the same size).
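For concreteness, a minimal sketch of that kind of weighted sum over integration points (the names, units, and the omission of the 1/(4πε₀) prefactor are my assumptions, not the asker's actual code):

    import numpy as np

    def field_at(r, points, charges, weights):
        """Electric field at observation point r from all integration points.

        r       : (3,) observation point
        points  : (N, 3) integration point positions
        charges : (N,) surface charge density sampled at each point
        weights : (N,) quadrature weight per point (~ share of triangle area)
        """
        d = r - points                        # (N, 3) vectors from points to r
        dist = np.linalg.norm(d, axis=1)      # (N,) distances
        coef = weights * charges / dist**3    # point-charge law, prefactor omitted
        return (coef[:, None] * d).sum(axis=0)

Note that each term diverges as r approaches an integration point, which is exactly where the artefacts described below appear.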
In principle this all works fine, until I get too close to a triangle. Since three individual points are not the same as a triangular surface, the program breaks down and gives these weird dots (black spots precisely between two integration points).
Below you see a figure showing the simulation of the field (top left) and the discretized surface mesh (bottom left). The picture in the middle depicts what you see when you zoom in on the surface of the cube. The right-most picture shows qualitatively how the integration points are distributed on a triangle.
Because the electric field of one integration point always points away from that point, two neighbouring points will cancel each other out, since their vectors point in exactly opposite directions. Of course, what I need instead is for both vectors to point away from the surface.
I have tried many solutions, mostly around the following points:
Patching the regions near an integration point with a theoretically correct uniform field pointing away from the surface.
Reorienting the vectors only near the integration point to manually put them in the right direction.
Applying a sigmoid or other decay function to make the above look smoother.
However, none of the methods above allow me to properly connect the nearby and faraway regions.
I guess what might work is some method to extrapolate the correct value from the surroundings. However, because of the large number of computations, I moved the simulation to my GPU, which means that I have to be careful about letting two pixels read from or write to each other.
Either way, my question here is as follows:
What would be a good way to smooth out my results? That is, I need a more accurate description of my model when I get closer to a triangle.
As a final note I want to add that it is not my goal to simply obtain a smooth image. Later in the program I need this data to determine the response of a conducting material, which is where these black dots internally become a real pain...
Thank you for your help !!!

Matching dynamically drawn triangles and differentiating angles

I am making a game wherein the user draws triangles on a grid, and these must be congruent with other triangles. However, the user gets additional points for having their new triangle in a different rotation from the original. I would use the rotation property of the movieclip, but since the triangles are drawn into a dynamically created MC, they all have a rotation of 0 degrees.
Is there some way to do this? I am absolutely stumped.
I think this is just a maths problem.
Firstly, if you have an equilateral triangle, you won't reliably be able to work out the rotational difference, since all the sides are the same length.
Otherwise, you will always have 'the important side'.
If your triangle is isosceles, the important side is the one that is a different length from the other two matching sides.
If you have a scalene triangle, the important side is the longest side.
Once you know your most important side...
You should be able to work out the important side of the user's triangle using trig.
You should also know the important side of the base triangle the user is trying to draw against, since you are 'making' it.
Then you basically have two lines (the two important sides); use trig again to work out the difference in rotation between the two lines, and you are good to go.
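As a rough illustration of that last step (a plain math sketch in Python rather than ActionScript; the function names and point pairs are invented):

    import math

    def side_angle(p1, p2):
        """Angle of the line from p1 to p2, in degrees."""
        return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

    def rotation_difference(base_side, user_side):
        """Rotation between the two 'important sides', normalized to [-180, 180).

        Each side is a pair of (x, y) points. For an undirected side you may
        want to fold the result into [-90, 90) instead.
        """
        diff = side_angle(*user_side) - side_angle(*base_side)
        return (diff + 180.0) % 360.0 - 180.0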
I solved this. What I did was have the program dig through the triangle to find the left-most and uppermost point. Then I draw all the triangles using this point as the origin. This ensures that regardless of the order in which the dots are clicked, all triangles will have the same point of origin.
To detect whether they match, I wrote a function that copies the triangles and moves them to the same point. Because they now have the same origin point, they will occupy the same space if they are at the same angle. Using this, I then wrote a function that checks whether the triangles overlap completely.

How to connect points after identifying them with cvGoodFeaturesToTrack

I want to identify an object and draw a shape around it ...
I previously used color identification, but it wasn't a good option since the color changes dramatically from place to place, so I thought: why not identify objects by features such as edges? I did that using this function in OpenCV:
cvGoodFeaturesToTrack
It returns the (x, y) coordinates of the points. Now I want to connect those points (well, not all of them, but the ones that are close to each other) to draw a shape around the different objects. Any ideas?
I don't think there is a free lunch in this case. You are trying to reconstruct a polygon when you only know its corner points. There is no unique solution to this problem: you can draw all sorts of polygons through the corners. If you are certain the shape you are after is convex, then you can construct the convex hull of the corner points, but the result will be horrible if you include any corners that were not part of the original object.
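If you do go the convex-hull route, a minimal sketch with the newer cv2 Python bindings might look like this (the file name and detector parameters are assumptions):

    import cv2
    import numpy as np

    img = cv2.imread("objects.png")                    # hypothetical input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # cv2 equivalent of cvGoodFeaturesToTrack: returns corner (x, y) coordinates
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=100,
                                      qualityLevel=0.01, minDistance=10)
    corners = corners.reshape(-1, 2).astype(np.int32)

    # Convex hull of all detected corners; only sensible for a single convex object
    hull = cv2.convexHull(corners)
    cv2.polylines(img, [hull], True, (0, 255, 0), 2)
    cv2.imwrite("hull.png", img)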
It seems to me that detecting corners is not the way to segment an object that is more or less delimited by lines. You probably want to try an edge detector instead, or a proper segmentation technique such as watershed.

Shape/Pattern Matching Approach in Computer Vision

I am currently facing what is, in my opinion, a rather common problem which should be quite easy to solve, but so far all my approaches have failed, so I am turning to you for help.
I think the problem is explained best with some illustrations. I have some Patterns like these two:
I also have an image like this (probably better than the original, because the photo it originated from was quite poorly lit):
(Note how the template was scaled to roughly fit the size of the image.)
The ultimate goal is a tool which determines whether the user shows a thumbs-up/thumbs-down gesture, and also some angles in between. So I want to match the patterns against the image and see which one resembles the picture the most (or, to be more precise, the angle the hand is showing). I know the direction in which the thumb is pointing in each pattern, so if I find the pattern that looks identical, I also have the angle.
I am working with OpenCV (with Python bindings) and have already tried cvMatchTemplate and MatchShapes, but so far it's not really working reliably.
I can only guess why MatchTemplate failed, but I think that a smaller pattern with a smaller white area fits entirely into the white area of the picture, thus creating the best matching factor, although it's obvious that they don't really look the same.
Are there some methods hidden in OpenCV I haven't found yet, or is there a known algorithm for this kind of problem that I should reimplement?
Happy New Year.
A few simple techniques could work:
After binarization and segmentation, find Feret's diameter of the blob (a.k.a. the farthest distance between points, or the major axis).
Find the convex hull of the point set, flood fill it, and treat it as a connected region. Subtract the original blob (with the thumb) from the filled hull. The difference will be the area between the thumb and fist, and the position of that area relative to the center of mass should give you an indication of rotation.
Use a watershed algorithm on the distances of each point to the blob edge. This can help identify the connected thin region (the thumb).
Fit the largest circle (or largest inscribed polygon) within the blob. Dilate this circle or polygon until some fraction of its edge overlaps the background. Subtract this dilated figure from the original image; only the thumb will remain.
If the size of the hand is consistent (or relatively consistent), then you could also perform N morphological erode operations until the thumb disappears, then N dilate operations to grow the fist back to approximately its original size. Subtract this fist-only blob from the original blob to get the thumb blob. Then use the thumb blob's direction (Feret's diameter) and/or its center of mass relative to the fist blob's center of mass to determine direction.
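A rough sketch of that erode/dilate-and-subtract idea, assuming you already have a binarized hand mask and a hand-tuned iteration count N:

    import cv2
    import numpy as np

    mask = cv2.imread("hand_mask.png", cv2.IMREAD_GRAYSCALE)   # hypothetical binary mask
    _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

    N = 10                                    # tune until the thumb just disappears
    kernel = np.ones((3, 3), np.uint8)
    fist = cv2.erode(mask, kernel, iterations=N)
    fist = cv2.dilate(fist, kernel, iterations=N)
    thumb = cv2.subtract(mask, fist)          # what remains is (roughly) the thumb

    # Direction: centroid of the thumb blob relative to the centroid of the fist
    m_t, m_f = cv2.moments(thumb), cv2.moments(fist)
    cx_t, cy_t = m_t["m10"] / m_t["m00"], m_t["m01"] / m_t["m00"]
    cx_f, cy_f = m_f["m10"] / m_f["m00"], m_f["m01"] / m_f["m00"]
    angle = np.degrees(np.arctan2(cy_t - cy_f, cx_t - cx_f))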
Techniques to find critical points (regions of strong direction change) are trickier. At the simplest, you might use corner detectors and then check the distance from one corner to another to identify the place where the inner edge of the thumb meets the fist.
For more complex methods, look into papers about shape decomposition by authors such as Kimia, Siddiqi, and Xiaofeng Mi.
MatchTemplate seems like a good fit for the problem you describe. In what way is it failing for you? If you are actually masking the thumbs-up/thumbs-down/thumbs-in-between signs as nicely as you show in your sample image then you have already done the most difficult part.
MatchTemplate does not include rotation and scaling in the search space, so you should generate more templates from your reference image at all rotations you'd like to detect, and you should scale your templates to match the general size of the found thumbs up/thumbs down signs.
[edit]
The result array for MatchTemplate contains a value that specifies how well the template fits the image at that location. If you use CV_TM_SQDIFF, then the lowest value in the result array is the location of the best fit; if you use CV_TM_CCORR or CV_TM_CCOEFF, then it is the highest value. If your scaled and rotated template images all have the same number of white pixels, then you can compare the best-fit value you find for each template image, and the template image that has the best fit overall is the one you want to select.
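For illustration, a minimal sketch of that comparison loop with the cv2 bindings (the file names, rotation step, and the assumption that rotating in place does not clip the thumb are all mine):

    import cv2
    import numpy as np

    image = cv2.imread("hand.png", cv2.IMREAD_GRAYSCALE)        # hypothetical scene
    template = cv2.imread("thumb_up.png", cv2.IMREAD_GRAYSCALE) # hypothetical template

    h, w = template.shape
    center = (w / 2, h / 2)
    best_angle, best_score = None, np.inf

    for angle in range(0, 360, 10):
        M = cv2.getRotationMatrix2D(center, angle, 1.0)
        rotated = cv2.warpAffine(template, M, (w, h))
        result = cv2.matchTemplate(image, rotated, cv2.TM_SQDIFF)
        min_val, _, _, _ = cv2.minMaxLoc(result)
        if min_val < best_score:              # TM_SQDIFF: lower means a better fit
            best_score, best_angle = min_val, angle

    print("best matching rotation:", best_angle)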
There are tons of rotation/scaling independent detection functions that could conceivably help you, but normalizing your problem to work with MatchTemplate is by far the easiest.
For the more advanced stuff, check out SIFT, Haar feature-based classifiers, or one of the other methods available in OpenCV.
I think you can get excellent results if you just compute the two points whose shortest path through the white area is the longest. The direction in which the thumb is pointing is just the direction of the line that joins the two points.
You can do this easily by sampling points on the white area and using Floyd-Warshall.
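A rough sketch of that idea, sampling the white area coarsely and using SciPy's Floyd-Warshall implementation (the sampling stride and neighbourhood radius are assumptions):

    import cv2
    import numpy as np
    from scipy.sparse.csgraph import floyd_warshall
    from scipy.spatial.distance import cdist

    mask = cv2.imread("hand_mask.png", cv2.IMREAD_GRAYSCALE)   # hypothetical binary mask
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys])[::50]       # coarse subsample of white pixels

    # Connect samples that are close together; edge weight = Euclidean distance
    d = cdist(pts, pts)
    radius = 20.0                               # neighbourhood radius (assumed)
    graph = np.where(d <= radius, d, 0.0)       # zeros are treated as "no edge"

    dist = floyd_warshall(graph, directed=False)
    dist[np.isinf(dist)] = -1                   # ignore disconnected pairs
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    tip, base = pts[i], pts[j]                  # the thumb axis runs base -> tip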

Surface Detection in 2d Game?

I'm working on a 2D Platform game, and I was wondering what's the best (performance-wise) way to implement Surface (Collision) Detection.
So far I'm thinking of constructing a list of level objects, each made up of a list of lines, and drawing tiles along the lines.
[image: http://img375.imageshack.us/img375/1704/lines.png]
I'm thinking every object holds the ID of the surface it walks on, in order to easily manipulate its y position while walking up/downhill.
Something like this:
// Player/MovableObject class
MoveLeft()
{
    this.Position.Y = Helper.GetSurfaceById(this.SurfaceId).GetYWhenXIs(this.Position.X);
}
So the logic I use to detect "dropping/walking on a surface" is a simple point (player's lower legs) touches line (surface) check (with some safety approximation, let's say 1-2 pixels over the line).
Is this approach OK?
I've been having difficulty trying to find reading material for this problem, so feel free to drop links/advice.
Having worked with polygon-based 2D platformers for a long time, let me give you some advice:
Make a tile-based platformer.
Now, to directly answer your question about collision-detection:
You need to make your world geometry "solid" (you can get away with making your player object a point, but making it solid is better). By "solid" I mean - you need to detect if the player object is intersecting your world geometry.
I've tried "does the player cross the edge of this world geometry" and in practice it doesn't work (even though it might seem to work on paper; floating-point precision issues will not be your only problem).
There are lots of instructions online on how to do intersection tests between various shapes. If you're just starting out I recommend using Axis-Aligned Bounding Boxes (AABBs).
It is much, much, much, much, much easier to make a tile-based platformer than one with arbitrary geometry. So start with tiles, detect intersections with AABBs, and then once you get that working you can add other shapes (such as slopes).
Once you detect an intersection, you have to perform collision response. Again, a tile-based platformer is easiest: move the player just outside the tile that was collided with (do you move it above, or to the side? That will depend on the collision; I will leave how to decide as an exercise).
(PS: you can get terrific results with just square tiles - look at Knytt Stories, for example.)
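To make the AABB part concrete, here is a small sketch (in Python for brevity; the rectangle layout and the least-penetration resolution rule are my assumptions, not this answer's actual code):

    def aabb_overlap(a, b):
        """a, b: (x, y, w, h) rectangles. True if they intersect."""
        return (a[0] < b[0] + b[2] and b[0] < a[0] + a[2] and
                a[1] < b[1] + b[3] and b[1] < a[1] + a[3])

    def resolve(player, tile):
        """Push the player out of the tile along the axis of least penetration."""
        px, py, pw, ph = player
        tx, ty, tw, th = tile
        dx = min(px + pw - tx, tx + tw - px)    # horizontal penetration depth
        dy = min(py + ph - ty, ty + th - py)    # vertical penetration depth
        if dx < dy:
            px += -dx if px < tx else dx        # push out horizontally
        else:
            py += -dy if py < ty else dy        # push out vertically
        return (px, py, pw, ph)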
Check out how it is done in XNA's Platformer Starter Kit project. Basically, the tiles have an enum that determines whether the tile is passable, impassable, etc., and then in your level you call GetBounds on the tiles, check for intersections with the player, and determine what to do.
I've had wonderful fun times dealing with 2D collision detection. What seems like a simple problem can easily become a nightmare if you do not plan it out in advance.
The best way to do this in an OO sense would be to make a generic object, e.g. a MapObject class. This has a position coordinate and a slope. From this, you can extend it to include other shapes, etc.
From that, let's work with collisions with a solid object. Assuming just a block, say 32x32, you can hit it from the left, right, top or bottom, or, depending on how you code it, from the top and the left at the same time. So how do you determine which way the character should go? For instance, if the character hits the block from the top in order to stand on it, then with incorrect code you might inadvertently push the character off to the side instead.
So, what should you do? In my 2D game, I looked at the character's prior position before deciding how to react to the collision. If the character's Y position + height is above the block and the character is moving west, then I would check for the top collision first and then the left collision. However, if the character's Y position + height is below the top of the block, I would check for the left collision.
Now let's say you have a block with an incline. The block is 32 pixels wide, 32 pixels tall at x=32, and 0 pixels tall at x=0. With this, you MUST assume that the character can only hit and collide with this block from the top, to stand on it. With this block, you can return a FALSE collision if it is a left/right/bottom collision, but if it is a collision from the top, you can state that if the character is at X=0, return collision point Y=0; if X=16, Y=16; etc.
Of course, this is all relative. You'll be checking against multiple blocks, so what you should do is store all of the possible changes to the character's position in a temporary variable. So, if the character overlaps a block by 5 in the X direction, subtract 5 from that variable. Accumulate all of the possible changes in the X and Y directions, apply them to the character's current position, and reset them to 0 for the next frame.
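A rough sketch of that accumulate-then-apply idea (not the answerer's actual code; the tile layout and axis conventions, with y growing downwards, are assumed):

    def move(player, tiles, dx, dy):
        """Tentatively move, gather corrections from every overlapping tile, apply once."""
        px, py, pw, ph = player
        px, py = px + dx, py + dy                       # tentative move
        corr_x = corr_y = 0.0
        for tx, ty, tw, th in tiles:
            overlaps = (px < tx + tw and tx < px + pw and
                        py < ty + th and ty < py + ph)
            if not overlaps:
                continue
            # Use the previous position (before dx, dy) to pick the axis, as above
            if py + ph - dy <= ty:                      # was above the tile last frame
                corr_y -= (py + ph) - ty
            elif px + pw - dx <= tx:                    # was left of the tile last frame
                corr_x -= (px + pw) - tx
            # ... the symmetric bottom/right cases are left out for brevity
        return (px + corr_x, py + corr_y, pw, ph)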
Good luck. I could provide more samples later, but I'm on my Mac (my code is on a Windows PC). This is the same type of collision detection used in classic Mega Man games, IIRC. Here's a video of this in action too: http://www.youtube.com/watch?v=uKQM8vCNUTM
You can try to use one of the physics engines, like Box2D or Chipmunk. They have their own advanced collision detection systems and a lot of different bonuses. Of course they won't make your game faster, but they are suitable for most games on any modern device.
It is not that easy to create your own collision detection algorithm. One easy example of a difficulty is: what if your character is moving at a high enough velocity that between two frames it will travel from one side of a line to the other? Then your algorithm won't have had time to run in between, and a collision will never be detected.
I would agree with Tiendil: use a library!
I'd recommend Farseer Physics. It's a great and powerful physics engine that should be able to take care of anything you need!
I would do it this way:
Strictly no lines for collision. Only solid shapes (boxes and triangles, maybe spheres)
2D BSP or 2D space partitioning to store all level shapes, OR a "sweep and prune" algorithm. Either of those will be very powerful. Sweep and prune, combined with insertion sort, can easily handle thousands of potentially colliding objects (if not hundreds of thousands), and 2D space partitioning will let you quickly get all nearby potentially colliding shapes on demand.
The easiest way to make objects walk on surfaces is to make them fall down a few pixels every frame, then get the list of surfaces the object collides with, and move the object in the direction of the surface normal (in 2D, the perpendicular to the surface). This approach will cause objects to slide down non-horizontal surfaces, but you can fix that by altering the normal slightly.
Also, you'll have to run the collision detection and "push objects away" routine several times per frame, not just once. This is to handle situations where objects are piled in a heap or contact multiple surfaces.
I have used a limited collision detection approach that works on a very different basis, so I'll throw it out here in case it helps:
Keep a secondary image that's black and white, where impassable pixels are white. Construct a mask of the character that is simply any pixels currently set. To evaluate a prospective move, read the pixels of that mask from the secondary image and see if a white one comes back.
To detect collisions with other objects, use the same sort of approach, but instead of booleans use enough bit depth to cover all possible objects. Draw each object into the secondary image entirely in the "color" of its object number. When you read through the mask and get a non-zero pixel, the "color" is the object number you hit.
This resolves all possible collisions in O(n) time rather than the O(n^2) of calculating interactions.
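A rough sketch of that mask lookup, assuming numpy arrays for the collision image and the character mask (all names invented):

    import numpy as np

    def probe(collision_map, char_mask, x, y):
        """Object numbers the character would touch with its top-left corner at (x, y).

        collision_map : 2D uint8 array, 0 = passable, otherwise the object number
        char_mask     : 2D bool array, True where the character has a pixel
        """
        h, w = char_mask.shape
        window = collision_map[y:y + h, x:x + w]
        mask = char_mask[:window.shape[0], :window.shape[1]]   # clip at map edges
        hits = window[mask]
        return set(hits[hits != 0].tolist())

    # A prospective move is blocked (or triggers a hit) if probe(...) is non-empty.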
