I need to find all simple, non-overlapping cycles in an undirected graph. To find all existing cycles, I made an Objective-C version of the algorithm that I found here:
Finding all cycles in undirected graphs
@interface HSValue : NSObject
@property (nonatomic, assign) CGPoint point;
@end

@implementation HSValue
@end

@interface CyclesFinder ()
@property (nonatomic, strong) NSMutableArray <NSArray<HSValue *> *> *cycles;
@property (nonatomic, strong) NSArray <NSArray<HSValue *> *> *edges;
@end
@implementation CyclesFinder

- (void)findCyclesInGraph:(NSArray <NSArray<HSValue *> *> *)edges {
    self.edges = edges;
    for (NSInteger i = 0; i < self.edges.count; i++) {
        for (NSInteger j = 0; j < self.edges[i].count; j++) {
            [self findNewCycles:@[self.edges[i][j]]];
        }
    }
}
- (void)findNewCycles:(NSArray <HSValue *> *)path {
    HSValue *startNode = path[0];
    HSValue *nextNode;
    NSArray <HSValue *> *sub;
    for (NSInteger i = 0; i < self.edges.count; i++) {
        NSArray <HSValue *> *edge = self.edges[i];
        if ([edge containsObject:startNode]) {
            if ([edge[0] isEqual:startNode]) {
                nextNode = edge[1];
            }
            else {
                nextNode = edge[0];
            }
        }
        else {
            nextNode = nil;
        }
        if (![path containsObject:nextNode] && nextNode) {
            sub = @[nextNode];
            sub = [sub arrayByAddingObjectsFromArray:path];
            [self findNewCycles:sub];
        }
        else if (path.count > 2 && [nextNode isEqual:path.lastObject]) {
            if (![self cycleExist:path]) {
                [self.cycles addObject:path];
                break;
            }
        }
    }
}
- (BOOL)cycleExist:(NSArray <HSValue *> *)path {
    path = [path sortedArrayUsingSelector:@selector(compare:)];
    for (NSInteger i = 0; i < self.cycles.count; i++) {
        NSArray <HSValue *> *cycle = [self.cycles[i] sortedArrayUsingSelector:@selector(compare:)];
        if ([cycle isEqualToArray:path]) {
            return TRUE;
        }
    }
    return FALSE;
}
The above algorithm works fine (even if it is not very efficient) and it finds all the possible cycles of the graph in the attached picture (please see the picture below):
A-B-H-G-F-D-E-A (valid)
B-C-I-H-B (valid)
G-H-I-L-K-G (valid)
F-G-K-J-F (valid)
F-G-H-I-L-K-J-F (invalid)
A-B-C-I-H-G-F-D-E-A (invalid)
A-B-C-I-L-K-J-F-D-E-A (invalid)
A-B-C-I-H-G-K-J-F-D-E-A (invalid)
A-B-H-I-L-K-G-F-D-E-A (invalid)
A-B-H-G-K-J-F-D-E-A (invalid)
A-B-C-I-L-K-G-F-D-E-A (invalid)
B-C-I-L-K-G-H-B (invalid)
B-C-I-L-K-J-F-G-H-B (invalid)
However, when I run the above algorithm, I want to end up with only those cycles that I highlighted with coloured polygons in the left-hand-side example. What I don't want are cycles like the one in the right-hand-side example.
My first thought was that an overlapping cycle is a cycle that includes all the points of some other cycle, but this is not always true. Can someone point me in the right direction? Is it possible to modify the above algorithm so that it finds only non-overlapping cycles, or if not, what should I do after finding all cycles to filter them?
There isn't enough information just in the undirected graph itself to determine which cycles are which. For example, consider that the following 2 diagrams yield identical undirected graphs:
A-----B E-------------F
| | \ /
C-----D \ A-----B /
| | \| |/
E-----F C-----D
But for the diagram on the left, you want the cycles ABDCA and CDFEC, while for the diagram on the right, you want the cycles ABDCA and EFDBACE. Thus the undirected graph inferred from the diagram isn't enough -- you need to somehow incorporate spatial information from the original diagram.
I'm working on this same problem and a lot of your comments were helpful, especially the comment that all edges will have an area on each side. Thus you could say that each edge has a "left area" and a "right area".
You can add all graph edges to a queue in any order. Peek at the first edge and pick its vertex closer to your origin. Move to the neighbor that is the most counter-clockwise. Continue this until you have reached your starting vertex. All of these edges bound your first area. I would give it a unique ID and assign it to the "left area" property of those edges.
Peek at the first edge in the queue and check if it has a "left area". If it does, check whether it has a "right area"; if it does not, proceed in a clockwise manner to find the right area. If it has both areas assigned, dequeue it and grab the next one.
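In rough Java terms, the bookkeeping might look like this. This is only a minimal sketch: Edge, Side, and traceFace are illustrative placeholders, and traceFace stands in for the actual geometric walk described above (repeatedly taking the most counter-clockwise neighbor for LEFT, or most clockwise for RIGHT, until back at the start):

import java.util.*;

// Sketch of the queue-driven face assignment described above.
class FaceAssignment {
    enum Side { LEFT, RIGHT }
    static class Edge { Integer leftArea, rightArea; }

    static List<Edge> traceFace(Edge edge, Side side) {
        return List.of(edge); // placeholder: real geometric walk omitted
    }

    static void assignFaces(Collection<Edge> edges) {
        Deque<Edge> queue = new ArrayDeque<>(edges);
        int nextFaceId = 0;
        while (!queue.isEmpty()) {
            Edge edge = queue.peek();
            if (edge.leftArea == null) {
                int id = nextFaceId++;
                for (Edge e : traceFace(edge, Side.LEFT)) e.leftArea = id;
            } else if (edge.rightArea == null) {
                int id = nextFaceId++;
                for (Edge e : traceFace(edge, Side.RIGHT)) e.rightArea = id;
            } else {
                queue.poll(); // both areas assigned; this edge is done
            }
        }
    }
}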
This should be O(E+V), so pretty quick, right?
This is a little bit stream of consciousness but I wanted to get it written down. I'll be writing the algorithm for my actual app and I'll make tweaks as I find problems in it.
Of course I'm open to feedback and suggestions :)
I know this question is 6 years old, but I'm leaving this answer for anyone who has the same problem in the future.
Key idea
Every edge has exactly two adjacent faces. Actually, every directed edge has exactly one adjacent face.
Construct polygons by always choosing the most counter-clockwise adjacent edge. The polygons with counter-clockwise orientation are then the non-overlapping cycles of the graph.
Algorithm overview
For every edge in the graph, find the two polygons containing that edge, one for each direction of the edge.
Filter for the polygons whose orientation is counter-clockwise.
The resulting polygons are all of the non-overlapping cycles in the given graph.
Finding a polygon containing the given edge in a graph
Basically, you choose the next edge as the most counter-clockwise one from the current edge, until a cycle is created or you hit a dead end.
If the end of the cycle equals the start of the given edge, then the polygon contains the given edge. Otherwise the polygon doesn't contain the given edge, so ignore it.
I'm just posting the entire code of this function. Read it through and I hope you get the idea.
See the caveats below for more information about orientations and vector calculations.
static public <N extends Location> Optional<Polygon<N>> findPolygon(EndpointPair<N> edge, Graph<N> graph) {
    if (!edge.isOrdered()) throw new IllegalArgumentException("The starting edge must be ordered.");
    if (!graph.hasEdgeConnecting(edge))
        throw new IllegalArgumentException("The starting edge must be contained in the graph");

    final N start = edge.source();
    final MutableGraph<N> polygonGraph = GraphBuilder.directed()
            .incidentEdgeOrder(ElementOrder.stable())
            .nodeOrder(ElementOrder.insertion())
            .build();

    // Set the first edge of the polygon.
    N source = start;
    N target = edge.adjacentNode(source);

    // Keep adding edges to polygonGraph until a cycle is created.
    while (true) {
        // Check if a cycle is created.
        if (polygonGraph.nodes().contains(target)) {
            // Connect the last edge.
            polygonGraph.putEdge(source, target);
            break;
        }
        // Connect the edge.
        polygonGraph.putEdge(source, target);

        // Find the most counter-clockwise adjacent vertex from the target.
        // That vertex is the target of the next edge, and the target of the
        // current edge is the source of the next edge.
        Vector base = source.toVector().clone().subtract(target.toVector());
        final N finalTarget = target;
        Map<N, Double> angles = graph.adjacentNodes(target).stream().collect(Collectors.toMap(
                Function.identity(),
                node -> {
                    Vector u = node.toVector().clone().subtract(finalTarget.toVector());
                    return Vectors.fullAngle(base, u);
                }
        ));
        List<N> adjacentNodes = graph.adjacentNodes(target).stream().filter(not(source::equals)).toList();

        // Dead end. Failed to create a polygon. Exit.
        if (adjacentNodes.isEmpty()) break;

        source = target;
        target = Collections.max(adjacentNodes, Comparator.comparingDouble(angles::get));
    }

    // The created polygon doesn't contain the starting edge.
    if (!target.equals(start)) {
        return Optional.empty();
    }
    return Optional.of(new Polygon<>(polygonGraph));
}
Identifying the orientation of a polygon
https://www.baeldung.com/cs/list-polygon-points-clockwise
A polygon is counter-clockwise iff its area > 0.
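For reference, here is a minimal shoelace-formula sketch. It assumes plain (x, y) points in a conventional y-up coordinate system, rather than the Vector type used above:

// Signed area via the shoelace formula: the result is positive when
// the vertices are listed in counter-clockwise order (y-up axes).
static double signedArea(double[][] vertices) {
    double area = 0;
    for (int i = 0; i < vertices.length; i++) {
        double[] p = vertices[i];
        double[] q = vertices[(i + 1) % vertices.length];
        area += p[0] * q[1] - q[0] * p[1];
    }
    return area / 2.0;
}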
Optimization
The time complexity of the algorithm is O(E^2), I think.
But you can apply a dynamic programming (memoization) method that reduces it to O(E), I think.
The idea is that for every directed edge there exists only one matching polygon.
So when you find a polygon, cache every edge of that polygon, and you won't have to search for that polygon from those edges again.
// This is pseudo-code.
Map<Edge, Polygon> cache = new HashMap<>();

// If the edge is in the cache, skip the polygon search.
if (cache.containsKey(edge)) continue;

// When you have found a polygon, cache its edges.
polygon.edges().forEach(edge -> {
    cache.put(edge, polygon);
});
You can also pre-determine whether a given edge can form a polygon by looking at the edge's neighbors.
If either vertex of the edge has degree less than 2, meaning the edge is not connected to other edges on both sides, the edge cannot be part of a polygon.
So you can skip the polygon search for this edge.
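As a sketch, using the same Guava graph API as the code above:

// An edge can only lie on a polygon if both of its endpoints
// have at least two neighbors (degree >= 2).
static <N> boolean canFormPolygon(Graph<N> graph, EndpointPair<N> edge) {
    return graph.degree(edge.nodeU()) >= 2 && graph.degree(edge.nodeV()) >= 2;
}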
Caveats
Orientations
About orientation: although I wrote this answer after choosing to use counter-clockwise, it seems that it doesn't matter which side you pick, as long as you are consistent about:
The orientation of the polygon (counter-clockwise / clockwise)
Which adjacent edge to pick as the next edge of the polygon (most counter-clockwise / least counter-clockwise; or in other words, least clockwise / most clockwise)
Once you choose one of them, the other option is automatically determined for the algorithm to work.
Angle of two edges
You need to convert edges to vectors in order to calculate the angle between them.
Keep in mind that the tails of the vectors have to be at the vertex of the angle.
So, if you're getting the angle between Edge(AB) and Edge(BC), then you have to calculate the angle between u = A - B and w = C - B.
Angle of two vectors
Some APIs define the range of the function for getting the angle between two vectors as [-PI/2, PI/2].
But you need it to be [0, 2PI], so you have to convert it.
You can make it [-PI, PI] by using the atan2 function.
https://math.stackexchange.com/questions/878785/how-to-find-an-angle-in-range0-360-between-2-vectors
Then add 2 * PI and take the result mod 2 * PI.
public class Vectors {
    static public double fullAngle(@NotNull Vector v, @NotNull Vector u) {
        return (Math.atan2(v.det(u), v.dot(u)) + 2 * Math.PI) % (2 * Math.PI);
    }
}
So, here is my situation. I have created an object detection program based on color object detection. My program detects the color red and it works perfectly. But here are the problems I am facing:
Whenever there is more than one red object in the surroundings, my program detects them all and cannot really track one object at a time (i.e. it tracks other red objects of various sizes in the background). It shows me the error "too much noise in the background". As you can see in the attached "threshold image", it detects the round object (which is my tracking object) and my cap, which is red in color. I want my program to detect only my tracking object (which is a round coke cap). How can I achieve that? Please help me out. I have my engineering design contest in a few days and I have to demo my program in front of my lecturers. My program should only be able to detect and track the object which I want. Thanks.
My code for the object detection program is a little long, so I am explaining it as follows: I captured a frame from the webcam, converted it to HSV, used an HSV inRange filter to filter out all colors but red, and applied morphological operations on the filtered image. This all goes in my main function.
I am using a frame resolution of 1280×720 for my webcam frame. It kind of slows down my program, but it was a trade-off I had to make to support gesture-controlled operations. Anyway, here are my drawObject and trackFilteredObject functions.
int H_MIN = 0;
int H_MAX = 256;
int S_MIN = 0;
int S_MAX = 256;
int V_MIN = 0;
int V_MAX = 256;
//default capture width and height
const int FRAME_WIDTH = 1280;
const int FRAME_HEIGHT = 720;
//max number of objects to be detected in frame
const int MAX_NUM_OBJECTS=50;
//minimum and maximum object area
const int MIN_OBJECT_AREA = 20*20;
const int MAX_OBJECT_AREA = FRAME_HEIGHT*FRAME_WIDTH/1.5;
void drawObject(int x, int y, Mat &frame){
    circle(frame,Point(x,y),20,Scalar(0,255,0),2);
    if(y-25>0)
        line(frame,Point(x,y),Point(x,y-25),Scalar(0,255,0),2);
    else line(frame,Point(x,y),Point(x,0),Scalar(0,255,0),2);
    if(y+25<FRAME_HEIGHT)
        line(frame,Point(x,y),Point(x,y+25),Scalar(0,255,0),2);
    else line(frame,Point(x,y),Point(x,FRAME_HEIGHT),Scalar(0,255,0),2);
    if(x-25>0)
        line(frame,Point(x,y),Point(x-25,y),Scalar(0,255,0),2);
    else line(frame,Point(x,y),Point(0,y),Scalar(0,255,0),2);
    if(x+25<FRAME_WIDTH)
        line(frame,Point(x,y),Point(x+25,y),Scalar(0,255,0),2);
    else line(frame,Point(x,y),Point(FRAME_WIDTH,y),Scalar(0,255,0),2);
    putText(frame,intToString(x)+","+intToString(y),Point(x,y+30),1,1,Scalar(0,255,0),2);
}
void trackFilteredObject(int &x, int &y, Mat threshold, Mat &cameraFeed){
    Mat temp;
    threshold.copyTo(temp);
    //these two vectors needed for output of findContours
    vector< vector<Point> > contours;
    vector<Vec4i> hierarchy;
    //find contours of filtered image using openCV findContours function
    findContours(temp,contours,hierarchy,CV_RETR_CCOMP,CV_CHAIN_APPROX_SIMPLE );
    //use moments method to find our filtered object
    double refArea = 0;
    bool objectFound = false;
    if (hierarchy.size() > 0) {
        int numObjects = hierarchy.size();
        //if number of objects greater than MAX_NUM_OBJECTS we have a noisy filter
        if(numObjects<MAX_NUM_OBJECTS){
            for (int index = 0; index >= 0; index = hierarchy[index][0]) {
                Moments moment = moments((cv::Mat)contours[index]);
                double area = moment.m00;
                //if the area is less than 20px by 20px then it is probably just noise
                //if the area is the same as 3/2 of the image size, probably just a bad filter
                //we only want the object with the largest area, so we save a reference area
                //each iteration and compare it to the area in the next iteration.
                if(area>MIN_OBJECT_AREA && area<MAX_OBJECT_AREA && area>refArea){
                    x = moment.m10/area;
                    y = moment.m01/area;
                    objectFound = true;
                    refArea = area;
                }else objectFound = false;
            }
            //let user know you found an object
            if(objectFound ==true){
                putText(cameraFeed,"Tracking Object",Point(0,50),2,1,Scalar(0,255,0),2);
                //draw object location on screen
                drawObject(x,y,cameraFeed);
            }
        }else putText(cameraFeed,"TOO MUCH NOISE! ADJUST FILTER",Point(0,50),1,2,Scalar(0,0,255),2);
    }
}
Here is the link to the image; as you can see, it also detects the red hat in the background along with the red cap of the coke bottle.
My observations: Here is what I think I need to do to avoid detecting red objects of unknown sizes. I think I have to edit the value of the maximum object area, which I declared in the above program as (const int MAX_OBJECT_AREA = FRAME_HEIGHT*FRAME_WIDTH/1.5;). Changing this value might eliminate the detection of bigger continuous red regions. But there is another problem: some objects are not completely red and have patches of red and other colors. If the detected area is within the range specified in my program, then my program detects those red patches too. What I mean to say is: I was wearing a t-shirt with mixed colors, and when I tested my program while wearing it, my program was able to detect the red color among the other colors. Now, how do I solve this issue?
I think you can try out the following procedure:
Obtain a circular kernel having roughly the same area as your object of interest. You can do it like: Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(d, d));
where d is the diameter of the disk.
Perform normalized cross-correlation or convolution of the filtered image with this kernel (I think normalized cross-correlation would be better; and add an empty border around the kernel).
The peak of the resulting image should give you the location of the circular region in your filtered image (if you are using normalized cross-correlation, you'll have to add the shift), as sketched after this list.
To speed things up, you can perform this at a reduced resolution.
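Here is a rough sketch of that correlation step, shown with OpenCV's Java bindings (the C++ calls are analogous); threshold is the filtered binary image and d is an assumed diameter you would tune:

import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

// Rough sketch: correlate the thresholded image with a disk-shaped
// kernel and take the peak as the most "cap-like" location.
static Point findDisk(Mat threshold, int d) {
    Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(d, d));
    Mat result = new Mat();
    Imgproc.matchTemplate(threshold, kernel, result, Imgproc.TM_CCORR_NORMED);
    Core.MinMaxLocResult mm = Core.minMaxLoc(result);
    // The result map is smaller than the input; add half the kernel
    // size to recover the center in image coordinates (the "shift").
    return new Point(mm.maxLoc.x + d / 2.0, mm.maxLoc.y + d / 2.0);
}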
You can filter out non-circular shapes by detecting circles in your thresholded image. OpenCV provides a built-in method to detect circles using the Hough transform (more info here). You can take advantage of this function to retain only circles whose radius lies in a given range, for example:
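A sketch with the Java bindings of a recent OpenCV (in 2.x the constant is Imgproc.CV_HOUGH_GRADIENT); the numeric parameters are guesses you would tune for your setup:

// Detect circles in the grayscale image and keep only those with a
// plausible radius for the cap.
int minRadius = 20, maxRadius = 60;          // assumed cap size in pixels
Mat circles = new Mat();
Imgproc.HoughCircles(gray, circles, Imgproc.HOUGH_GRADIENT,
        1.0,                 // accumulator resolution (dp)
        gray.rows() / 8.0,   // minimum distance between circle centers
        100.0, 30.0,         // Canny high threshold, accumulator threshold
        minRadius, maxRadius);
for (int i = 0; i < circles.cols(); i++) {
    double[] c = circles.get(0, i);          // {x, y, radius}
    // (c[0], c[1]) is a circle center you can hand to the tracker
}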
Another possibility is to implement connected component labeling (CCL) in your demo program.
I believe it was removed at some point in versions 2.x of OpenCV, but a basic implementation of the two-pass version is straightforward from the Wikipedia page.
CCL will assign a unique ID to each object after thresholding. You then have to implement matching between the objects in frame (T-1) and the objects in frame (T) (for example based on some nearest-distance criterion), and possibly trajectory filtering or smoothing, but this would definitely earn you some extra points.
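If you are on OpenCV 3.x or later, CCL is built in again; a minimal sketch with the Java bindings, where binary is your thresholded image:

// Label connected components in the binary image, then read each
// component's area and centroid (label 0 is the background).
Mat labels = new Mat(), stats = new Mat(), centroids = new Mat();
int n = Imgproc.connectedComponentsWithStats(binary, labels, stats, centroids);
for (int label = 1; label < n; label++) {
    int area = (int) stats.get(label, Imgproc.CC_STAT_AREA)[0];
    double cx = centroids.get(label, 0)[0];
    double cy = centroids.get(label, 1)[0];
    // match (cx, cy) against the objects tracked in frame (T-1)
}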
I need the ability to verify that a user has drawn a shape correctly, starting with simple shapes like a circle or a triangle, and moving on to more advanced shapes like the letter A.
I need to be able to calculate correctness in real time, for example if the user is supposed to draw a circle but is drawing a rectangle, my hope is to be able to detect that while the drawing takes place.
There are a few different approaches to shape recognition; unfortunately, I don't have the experience or time to try them all and see what works.
Which approach would you recommend for this specific task?
Your help is appreciated.
We may define "recognition" as the ability to detect features/characteristics in elements and compare them with the features of known elements seen in our experience. Objects with similar features are probably similar objects. The higher the number and complexity of the features, the greater our power to discriminate between similar objects.
In the case of shapes, we can use their geometric properties such as the number of angles, the angle values, the number of sides, the side lengths, and so forth. Therefore, in order to accomplish your task, you should employ image processing algorithms to extract such features from the drawings.
Below I present a very simple approach that shows this concept in practice. We are going to recognize different shapes using the number of corners. As I said: "The higher the number and complexity of the features, the greater our power to discriminate between similar objects". Since we are using just one feature, the number of corners, we can differentiate only a few kinds of shapes. Shapes with the same number of corners will not be discriminated. Therefore, in order to improve the approach, you might add new features.
UPDATE:
In order to accomplish this task in real time, you might extract the features in real time. If the object to be drawn is a triangle and the user is drawing the fourth side of some other figure, you know that he or she is not drawing a triangle. For the level of correctness, you might calculate the distance between the feature vector of the desired object and that of the drawn one, as in the sketch below.
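A minimal sketch of that distance, assuming shapes are described by plain numeric feature vectors (e.g. {corner count, side count, ...}):

// Euclidean distance between two feature vectors; smaller means the
// drawing is closer to the desired shape.
static double featureDistance(double[] desired, double[] drawn) {
    double sum = 0;
    for (int i = 0; i < desired.length; i++) {
        double d = desired[i] - drawn[i];
        sum += d * d;
    }
    return Math.sqrt(sum);
}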
Input:
The Algorithm
Scale down the input image, since the desired features can be detected at a lower resolution.
Segment each object to be processed independently.
For each object, extract its features, in this case, just the number of corners.
Using the features, classify the object shape.
The Software:
The software presented below was developed in Java using the Marvin Image Processing Framework. However, you might use any programming language and tools.
import static marvin.MarvinPluginCollection.floodfillSegmentation;
import static marvin.MarvinPluginCollection.moravec;
import static marvin.MarvinPluginCollection.scale;

public class ShapesExample {

    public ShapesExample(){
        // Scale down the image since the desired features can be extracted
        // in a lower resolution.
        MarvinImage image = MarvinImageIO.loadImage("./res/shapes.png");
        scale(image.clone(), image, 269);

        // segment each object
        MarvinSegment[] objs = floodfillSegmentation(image);
        MarvinSegment seg;

        // For each object...
        // Skip position 0 which is just the background
        for(int i=1; i<objs.length; i++){
            seg = objs[i];
            MarvinImage imgSeg = image.subimage(seg.x1-5, seg.y1-5, seg.width+10, seg.height+10);
            MarvinAttributes output = new MarvinAttributes();
            output = moravec(imgSeg, null, 18, 1000000);
            System.out.println("figure "+(i-1)+":" + getShapeName(getNumberOfCorners(output)));
        }
    }

    public String getShapeName(int corners){
        switch(corners){
            case 3: return "Triangle";
            case 4: return "Rectangle";
            case 5: return "Pentagon";
        }
        return null;
    }

    private static int getNumberOfCorners(MarvinAttributes attr){
        int[][] cornernessMap = (int[][]) attr.get("cornernessMap");
        int corners=0;
        List<Point> points = new ArrayList<Point>();
        for(int x=0; x<cornernessMap.length; x++){
            for(int y=0; y<cornernessMap[0].length; y++){
                // Is it a corner?
                if(cornernessMap[x][y] > 0){
                    // This part of the algorithm avoids nonexistent corners
                    // detected almost in the same position due to noise.
                    Point newPoint = new Point(x,y);
                    if(points.size() == 0){
                        points.add(newPoint); corners++;
                    } else {
                        boolean valid=true;
                        for(Point p:points){
                            if(newPoint.distance(p) < 10){
                                valid=false;
                            }
                        }
                        if(valid){
                            points.add(newPoint); corners++;
                        }
                    }
                }
            }
        }
        return corners;
    }

    public static void main(String[] args) {
        new ShapesExample();
    }
}
The software output:
figure 0:Rectangle
figure 1:Triangle
figure 2:Pentagon
Another way is to solve this with math: for each point of one shape, find the smallest distance to any point of the shape you're comparing it against, and take the average of those distances.
First you must resize the shape to match the ones in your library of shapes, and then:
function shortestDistanceSum( subject, test_subject ) {
    var sum = 0;
    operate( subject, function( shape ){
        var smallest_distance = 9999;
        operate( test_subject, function( test_shape ){
            var distance = dist( shape.x, shape.y, test_shape.x, test_shape.y );
            smallest_distance = Math.min( smallest_distance, distance );
        });
        sum += smallest_distance;
    });
    var average = sum/subject.length;
    return average;
}

function operate( array, callback ) {
    $.each(array, function(){
        callback( this );
    });
}

function dist( x, y, x1, y1 ) {
    return Math.sqrt( Math.pow( x1 - x, 2) + Math.pow( y1 - y, 2) );
}
var square_shape = [];   // collection of vertices in a square shape
var triangle_shape = []; // collection of vertices in a triangle
var unknown_shape = [];  // collection of vertices in the shape you're comparing from

var square_sum = shortestDistanceSum( square_shape, unknown_shape );
var triangle_sum = shortestDistanceSum( triangle_shape, unknown_shape );
Where the lowest sum is the closest shape.
You have two inputs - the initial image and the user input - and you are looking for a boolean outcome.
Ideally you would convert all your input data to a comparable format. Alternatively, you could parameterize both types of input and use a supervised machine learning algorithm (Nearest Neighbor comes to mind for closed shapes); see the sketch below.
The trick is in finding the right parameters. If your input is a flat image file, this could be a binary conversion. If the user input is a swiping motion or pen stroke, I'm sure there are ways to capture and map this as binary too, but the algorithm would probably be more robust if it used data closest to the original input.
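For illustration only, a tiny nearest-neighbor sketch over such parameterized inputs, in Java for concreteness (all names are placeholders):

import java.util.Map;

// Classify a drawn shape by the label of the closest training
// feature vector (1-nearest-neighbor, squared Euclidean distance).
static String classify(Map<String, double[]> training, double[] drawn) {
    String best = null;
    double bestDist = Double.POSITIVE_INFINITY;
    for (Map.Entry<String, double[]> e : training.entrySet()) {
        double d = 0;
        for (int i = 0; i < drawn.length; i++) {
            double diff = e.getValue()[i] - drawn[i];
            d += diff * diff;
        }
        if (d < bestDist) { bestDist = d; best = e.getKey(); }
    }
    return best;
}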
Perhaps this is more of a math question than a programming question, but I've been trying to implement the rotating calipers algorithm in XNA.
I've deduced a convex hull from my point set using a monotone chain as detailed on wikipedia.
Now I'm trying to model my algorithm to find the OBB after the one found here:
http://www.cs.purdue.edu/research/technical_reports/1983/TR%2083-463.pdf
However, I don't understand what the DOTPR and CROSSPR methods it mentions on the final page are supposed to return.
I understand how to get the dot product of two points and the cross product of two points, but it seems these functions are supposed to return the dot and cross products of two edges / line segments. My knowledge of mathematics is admittedly limited, but this is my best guess as to what the algorithm is looking for:
public static float PolygonCross(List<Vector2> polygon, int indexA, int indexB)
{
    var segmentA1 = NextVertice(indexA, polygon) - polygon[indexA];
    var segmentB1 = NextVertice(indexB, polygon) - polygon[indexB];
    float crossProduct1 = CrossProduct(segmentA1, segmentB1);
    return crossProduct1;
}

public static float CrossProduct(Vector2 v1, Vector2 v2)
{
    return (v1.X * v2.Y - v1.Y * v2.X);
}

public static float PolygonDot(List<Vector2> polygon, int indexA, int indexB)
{
    var segmentA1 = NextVertice(indexA, polygon) - polygon[indexA];
    var segmentB1 = NextVertice(indexB, polygon) - polygon[indexB];
    float dotProduct = Vector2.Dot(segmentA1, segmentB1);
    return dotProduct;
}
However, when I use those methods as directed in this portion of my code...
while (PolygonDot(polygon, i, j) > 0)
{
    j = NextIndex(j, polygon);
}
if (i == 0)
{
    k = j;
}
while (PolygonCross(polygon, i, k) > 0)
{
    k = NextIndex(k, polygon);
}
if (i == 0)
{
    m = k;
}
while (PolygonDot(polygon, i, m) < 0)
{
    m = NextIndex(m, polygon);
}
...it returns the same index for j and k when I give it this test set of points:
List<Vector2> polygon = new List<Vector2>()
{
    new Vector2(0, 138),
    new Vector2(1, 138),
    new Vector2(150, 110),
    new Vector2(199, 68),
    new Vector2(204, 63),
    new Vector2(131, 0),
    new Vector2(129, 0),
    new Vector2(115, 14),
    new Vector2(0, 138),
};
Note that I call polygon.Reverse to place these points in counter-clockwise order, as indicated in the technical document from purdue.edu. My algorithm for finding the convex hull of a point set generates a list of points in counter-clockwise order, but does so assuming y < 0 is higher than y > 0, because when drawing to the screen (0,0) is the top-left corner. Reversing the list seems sufficient. I also remove the duplicate point at the end.
After this process, the data becomes:
Vector2(115, 14)
Vector2(129, 0)
Vector2(131, 0)
Vector2(204, 63)
Vector2(199, 68)
Vector2(150, 110)
Vector2(1, 138)
Vector2(0, 138)
This test fails on the first loop, when i equals 0 and j equals 3. It finds that the cross product of the line (115,14)-(204,63) and the line (204,63)-(199,68) is 0. It then finds that the dot product of the same lines is also 0, so j and k share the same index.
In contrast, when given this test set:
http://www.wolframalpha.com/input/?i=polygon+%282%2C1%29%2C%281%2C2%29%2C%281%2C3%29%2C%282%2C4%29%2C%284%2C4%29%2C%285%2C3%29%2C%283%2C1%29
My code successfully returns this OBB:
http://www.wolframalpha.com/input/?i=polygon+%282.5%2C0.5%29%2C%280.5%2C2.5%29%2C%283%2C5%29%2C%285%2C3%29
I've read over the C++ algorithm found at http://www.geometrictools.com/LibMathematics/Containment/Wm5ContMinBox2.cpp but I'm too dense to follow it completely. It also appears to be very different from the one detailed in the paper above.
Does anyone know what step I'm skipping, or does anyone see an error in my code for finding the dot product and cross product of two line segments? Has anyone successfully implemented this in C# before and have an example?
Points and vectors as data structures are essentially the same thing; both consist of two floats (or three if you're working in three dimensions). So, when asked to take the dot product of the edges, I suppose it means taking the dot product of the vectors that the edges define. The code you provided does exactly this.
Your implementation of CrossProduct seems correct (see Wolfram MathWorld). However, in PolygonCross and PolygonDot I think you shouldn't normalize the segments. It will affect the magnitude of the return values of PolygonDot and PolygonCross. By removing the superfluous calls to Vector2.Normalize you can speed up your code and reduce the amount of noise in your floating point values. However, normalization is not relevant to the correctness of the code that you have pasted as it only compares the results with zero.
Note that the paper you refer to assumes that the polygon vertices are listed in counterclockwise order (page 5, first paragraph after "Beginning of comments") but your example polygon is defined in clockwise order. That's why PolygonCross(polygon, 0, 1) is negative and you get the same value for j and k.
I assume DOTPR is a normal vector dot product and CROSSPR is a cross product. A dot product returns a scalar; a cross product returns a vector perpendicular to the two vectors given (basic vector math; check Wikipedia).
They are actually defined in the paper: DOTPR(i, j) returns the dot product of the vectors from vertex i to i+1 and from vertex j to j+1. The same goes for CROSSPR, but with the cross product.
Can anyone please help?
I have a cube which I made in 3DS Max. I don't know the dimensions of the cube. Is there a way to get the vertices of each of the triangles of the faces of the cube? I am trying to get the normal to one of the faces of the cube to determine which way it's pointing. If I can determine the vertices, I can get the direction of the normal: with three vertices V1, V2 and V3 ordered counter-clockwise, it is (V2 - V1) x (V3 - V1), where x is the cross product of the two vectors.
I have looked in my model's .fbx file and I can see a number of values there:
Vertices: *24 {
a: -15,-12.5,0,15,-12.5,0,-15,12.5,0,15,12.5,0,-15,-12.5,0.5,15,-12.5,0.5,-15,12.5,0.5,15,12.5,0.5}
PolygonVertexIndex: *36 {
a: 0,2,-4,3,1,-1,4,5,-8,7,6,-5,0,1,-6,5,4,-1,1,3,-8,7,5,-2,3,2,-7,6,7,-4,2,0,-5,4,6,-3}
Are these my model's vertices?
Also, I would assume that Vertices: *24 is my list of vertices, but why are there only 24 values? Shouldn't a cube have 36 vertices? And finally, if the coordinates for my vertices are indexed by PolygonVertexIndex: *36, those values just seem off to me when I imagine the cube in my head with those dimensions.
Or alternatively, is there an automatic way to get the vertices of a cube without having to manually enter all the values for each vertex? I might have a couple of models to
Any help would be greatly appreciated.
I can't figure out why you need that, because when you load a model the normals are calculated internally; each vertex will have its normal...
Anyway, it is easy to calculate...
The first three indexes define the first triangle of a face; the next three, the other triangle of that face.
You need only one triangle to calculate the normal...
So with those three indexes, access the vertex array and get three points: A, B and C.
Now your normal is the result of the cross product of two vectors formed from those vertices:
Vector3 Normal = Vector3.Cross(B-A, C-B);
Whether the normal points backward or forward depends on the A, B, C order, which can be counter-clockwise or clockwise, but every triangle of the model will be ordered the same way. So you will have to try it and fix it.
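For what it's worth, the 24 values under Vertices are 8 corners × 3 coordinates, and a negative entry in PolygonVertexIndex marks the last corner of a polygon in the FBX format (decode it as -index - 1, i.e. ~index). A small sketch of the whole decode-and-cross-product step, shown in Java for brevity (the C#/XNA version is analogous):

// Decode triangle `tri` from the FBX arrays and return its face
// normal via the cross product (vertices are flat x,y,z triples).
static double[] faceNormal(double[] vertices, int[] polygonVertexIndex, int tri) {
    double[][] p = new double[3][];
    for (int j = 0; j < 3; j++) {
        int idx = polygonVertexIndex[tri * 3 + j];
        if (idx < 0) idx = ~idx;                 // end-of-polygon marker
        p[j] = new double[]{ vertices[idx * 3], vertices[idx * 3 + 1], vertices[idx * 3 + 2] };
    }
    double[] u = { p[1][0] - p[0][0], p[1][1] - p[0][1], p[1][2] - p[0][2] };
    double[] v = { p[2][0] - p[0][0], p[2][1] - p[0][1], p[2][2] - p[0][2] };
    return new double[]{
        u[1] * v[2] - u[2] * v[1],
        u[2] * v[0] - u[0] * v[2],
        u[0] * v[1] - u[1] * v[0]
    };
}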
You can write an XNA program which reads your normals without much hassle.
If you still want to calculate them, however, use this C# code, taken from FFWD, as a guide. Check the URL for a more detailed discussion on pros and cons. Personally, I'm not too happy with the result, but for the time being it works. Of course, since this code is FFWD related (implementation of Unity's API for XNA), it does not match XNA exactly, but the mathematics remain the same.
/// <summary>
/// Recalculates the normals.
/// Implementation adapted from http://devmaster.net/forums/topic/1065-calculating-normals-of-a-mesh/
/// </summary>
public void RecalculateNormals()
{
    Vector3[] newNormals = new Vector3[_vertices.Length];

    // _triangles is a list of vertex indices,
    // with each triplet referencing the three vertices of the corresponding triangle
    for (int i = 0; i < _triangles.Length; i = i + 3)
    {
        Vector3[] v = new Vector3[]
        {
            _vertices[_triangles[i]],
            _vertices[_triangles[i + 1]],
            _vertices[_triangles[i + 2]]
        };
        Vector3 normal = Vector3.Cross(v[1] - v[0], v[2] - v[0]);
        for (int j = 0; j < 3; ++j)
        {
            Vector3 a = v[(j + 1) % 3] - v[j];
            Vector3 b = v[(j + 2) % 3] - v[j];
            float weight = (float)Math.Acos(Vector3.Dot(a, b) / (a.magnitude * b.magnitude));
            newNormals[_triangles[i + j]] += weight * normal;
        }
    }

    // Normalize by index: Vector3 is a value type, so mutating a
    // foreach variable would only change a copy.
    for (int i = 0; i < newNormals.Length; i++)
    {
        newNormals[i].Normalize();
    }
    normals = newNormals;
}
Here's the problem:
I have 5 balls floating around the screen that bounce off the sides, top and bottom. That's working great.
What I want to do now is work out if any of them collide with each other.
I know about
if (CGRectIntersectsRect(image1.frame, image2.frame))
{
}
but that only checks two images; I need to check each of them against all the others.
I've checked everywhere but can't find the answer, only others searching for the same thing. Any ideas?
Thanks in advance,
Spriggsy
Edit:
I'm using this to find each ball's CGRect and store it in an array:
ball1 = NSStringFromCGRect(image1.frame);
ball2 = NSStringFromCGRect(image2.frame);
ball3 = NSStringFromCGRect(image3.frame);
ball4 = NSStringFromCGRect(image4.frame);
ball5 = NSStringFromCGRect(image5.frame);
bingoarray = [NSMutableArray arrayWithObjects:ball1,ball2,ball3,ball4,ball5,nil];
This then gets passed to a collision detection method:
- (void)collision {
    for (int i = 0; i < [bingoarray count] - 1; i++) {
        CGRect ballA = CGRectFromString([bingoarray objectAtIndex:i]);
        if (CGRectIntersectsRect(ballA, image1.frame)) {
            NSLog(@"test");
        }
    }
}
This, I guess, should check one ball against all the others. So ball 1 gets checked against the others, but ball 2 doesn't get checked against them. Is this nearly there?
The ideal solution is to store all the rectangles in an interval tree or a segment tree in order to efficiently compute any overlapping areas. Note that you will have to generalize to 2 dimensions for your use case.
Another efficient approach would be to use a k-d tree to find the nearest other balls and compare against the nearest neighbor until there isn't a collision.
The simple approach is to iterate over all the balls and compare each one to all other balls with a higher ID (to avoid double-checking ball1 vs ball2 and ball2 vs ball1).
Since you only have 5 at once, the iterative approach is likely fast enough not to drop frames in the animation, but you should consider a more scalable solution if you plan to support more balls, since the simple approach runs in quadratic time.
Avoiding redundant checks is a fun little math problem.
You can create an array of the images and loop through it, checking whether each member collides with any successive members.
I can spell it out with more code if need be.
EDIT: I couldn't resist.
// the images are in imagesArray
// where you want to check for a collision
int ballCount = [imagesArray count];
int v1Index;
int v2Index;
UIImageView *v1;
UIImageView *v2;

for (v1Index = 0; v1Index < ballCount; v1Index++) {
    v1 = [imagesArray objectAtIndex:v1Index];
    for (v2Index = v1Index + 1; v2Index < ballCount; v2Index++) {
        v2 = [imagesArray objectAtIndex:v2Index];
        if (CGRectIntersectsRect(v1.frame, v2.frame)) {
            // objects collided
            // react to collision here
        }
    }
}