Collision detection for rotating images - iOS

I want to be able to tell when 2 images collide (not just their frames). But here is the catch: the images are rotating.
I know how to find whether a pixel in an image is transparent or not, but that won't help in this scenario because it only finds the location in the frame relative to a non-rotated image.
I have also gone as far as trying hit boxes, but even those won't work because I can't find a way to detect the collision of UIViews that are contained in different subviews.
Is what I am trying to do even possible?
Thanks in advance

I don't know how you would go about checking for pixel collision on a rotated image. That would be hard. I think you would have to render the rotated image into a context, then fetch pixels from the context to check for transparency. That would be dreadfully slow.
I would suggest a different approach. Come up with a path that maps the bounds of your irregular image. You could then use CGPathContainsPoint to check whether a set of points is contained in the path. (That method takes a transform, which you would use to describe the rotation of your image's path.)
Even then, though, you're going to have performance problems, since you would have to call that method for a large number of points from the other image to determine whether they intersect.
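A minimal Swift sketch of that idea, assuming you have already traced an outline path for the image (the triangle below is just a stand-in). Rather than relying on the direction of the method's transform parameter, it maps the test point into the path's unrotated space, which is equivalent:

    import CoreGraphics

    // Hypothetical outline traced around the opaque part of the image,
    // in the image's own (unrotated) coordinate space.
    let outline = CGMutablePath()
    outline.move(to: CGPoint(x: 10, y: 80))
    outline.addLine(to: CGPoint(x: 50, y: 10))
    outline.addLine(to: CGPoint(x: 90, y: 80))
    outline.closeSubpath()

    // The image's current rotation about its centre.
    let center = CGPoint(x: 50, y: 50)
    let rotation = CGAffineTransform(translationX: center.x, y: center.y)
        .rotated(by: .pi / 6)
        .translatedBy(x: -center.x, y: -center.y)

    // A point from the other image, already expressed in this image's
    // coordinate system: un-rotate it and test against the static path.
    let sample = CGPoint(x: 48, y: 40)
    let hit = outline.contains(sample.applying(rotation.inverted()))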

I propose a simple strategy to solve this, based on looking for rectangle intersections.
The key is to create a simplified representation of your image with a set of rectangles laid out properly as bounding boxes of its different parts (as if you were building your image out of Legos). For better performance use a small set of rectangles (a few big Legos); for better precision use a larger number of rectangles that closely follow the image outline.
Your problem then becomes equivalent to finding an intersection between rectangles, or, to be more precise, finding whether at least one vertex of the rectangles of object A is inside at least one rectangle of object B (CGRectContainsPoint), or whether the rects intersect (CGRectIntersectsRect).
If you prefer the point lookup, you should define your rectangles by their four vertices; then, when you rotate your image, it is easy to apply the same affine transform to your rectangle vertices (use CGPointApplyAffineTransform) to get the coordinates of your points after rotation. But of course you can look for frame intersections instead and represent your rectangles using the standard CGRect structure.
You could also use a CGPath (as explained in another answer) instead of a set of rectangles and look for any vertex inside the other path using CGPathContainsPoint. That would give the same result, but the rectangle approach is probably faster in many cases.
The only trick is to take one of the objects as a reference axis. Imagine you are on object A and you only see B moving around you. Then, if you have to rotate A, you need to make an axis transform so that B's transform is always expressed relative to A and not to the screen or any other reference. If your transforms are only rotations around the object centre, then rotating A by n radians is equivalent to rotating B by -n radians.
Then just loop through the vertices defining object B and check whether any of them is inside a rectangle of object A.
You will probably need to investigate a bit to achieve that, but at least you have some clues on how to solve it.
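A rough Swift sketch of that strategy, under the assumption that object A is the reference axis (so its Lego rectangles stay axis-aligned CGRects) while B's rectangles are stored as four vertices and moved with the relative transform. The names Lego, collides and bRelativeToA are illustrative only:

    import CoreGraphics

    // One "lego" stored as its four vertices so it survives rotation.
    struct Lego {
        var vertices: [CGPoint]
        init(_ rect: CGRect) {
            vertices = [
                CGPoint(x: rect.minX, y: rect.minY),
                CGPoint(x: rect.maxX, y: rect.minY),
                CGPoint(x: rect.maxX, y: rect.maxY),
                CGPoint(x: rect.minX, y: rect.maxY),
            ]
        }
        // CGPointApplyAffineTransform, in its Swift spelling.
        func applying(_ t: CGAffineTransform) -> Lego {
            var copy = self
            copy.vertices = vertices.map { $0.applying(t) }
            return copy
        }
    }

    // A is the reference: its rectangles stay axis-aligned. B's transform is
    // expressed relative to A (rotating A by n == rotating B by -n).
    func collides(a: [CGRect], b: [Lego], bRelativeToA: CGAffineTransform) -> Bool {
        for lego in b.map({ $0.applying(bRelativeToA) }) {
            for rect in a where lego.vertices.contains(where: { rect.contains($0) }) {
                return true  // a vertex of B fell inside a rectangle of A
            }
        }
        return false
    }

For robustness you would run the test in both directions (A's vertices against B's rectangles under the inverse transform), since a vertex test alone misses overlaps where edges cross but no vertex is contained.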

Related

OpenCV - align stack of images - different cameras

We have this camera array arranged in an arc around a person (red dot). Think The Matrix - each camera fires at the same time and then we create an animated gif from the output. The problem is that it is nearly impossible to align the cameras exactly, so I am looking for a way in OpenCV to align the images better and make the animation smoother.
I'm looking for general steps; I'm unsure of the order in which I would do them. If I start with image 1 and match image 2 to it, then the warped image 2 is further from image 3 than it was at the start, so matching 3 to 2 would require a bigger change... and the error would propagate. I have seen similar alignments done, though. Any help much appreciated.
Here's a thought. How about performing a quick and very simple "calibration" of the imaging system by using a single reference point?
The best thing about this is you can try it out pretty quickly and even if results are too bad for you, they can give you some more insight into the problem. But the bad thing is it may just not be good enough because it's hard to think of anything "less advanced" than this. Here's the description:
Remove the object from the scene
Place a small object (let's call it a "dot") at a position that roughly corresponds to the center of mass of the object you are about to record (the center of the area denoted by the red circle).
Record a single image with each camera
Use some simple algorithm to find the position of the dot on every image
Compute distances from dot positions to image centers on every image
Shift images by (-x, -y), where (x, y) is the above-mentioned distance; after that, the dot should be located in the center of every image.
When recording an actual object, use these precomputed distances to shift all images. After you translate the images, they will be roughly aligned. But since you are shooting an object that is three-dimensional and has considerable size, I am not sure whether the alignment will be very convincing ... I wonder what results you'd get, actually.
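The offset bookkeeping is trivial; here is a small Swift sketch, assuming you already have a dot detector and dotPositions holds one detected dot per camera (calibrationOffsets is a hypothetical helper name):

    import CoreGraphics

    // Precompute, per camera, the shift that moves the detected dot to the
    // image centre; apply the same shift to every frame from that camera.
    func calibrationOffsets(dotPositions: [CGPoint], imageSize: CGSize) -> [CGVector] {
        let center = CGPoint(x: imageSize.width / 2, y: imageSize.height / 2)
        return dotPositions.map { dot in
            CGVector(dx: center.x - dot.x, dy: center.y - dot.y)
        }
    }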
If I understand the application correctly, you should be able to obtain the relative pose of each camera in your array using homographies:
https://docs.opencv.org/3.4.0/d9/dab/tutorial_homography.html
From here, the next step would be to correct for alignment issues by estimating the transform between each camera's actual position and its 'ideal' position in the array. These ideal positions could be computed relative to a single camera, or relative to the focus point of the array (which may help simplify calculation). For each image, applying this corrective transform will result in an image that 'looks like' it was taken from the 'ideal' position.
Note that you may need to estimate relative camera pose in 3-4 array 'sections', as it looks like you have a full 180deg array (e.g. estimate homographies for 4-5 cameras at a time). As long as you have some overlap between sections it should work out.
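To make the 'corrective transform' step concrete, here is a hedged Swift sketch of applying a 3x3 homography H (row-major, e.g. as estimated by OpenCV's findHomography) to a single pixel; warping an image toward its ideal position does this for every pixel. The identity matrix here is a placeholder:

    import CoreGraphics

    // Projective mapping of one point under homography H (9 row-major entries).
    func apply(_ H: [Double], to p: CGPoint) -> CGPoint {
        let w = H[6] * Double(p.x) + H[7] * Double(p.y) + H[8]
        let x = (H[0] * Double(p.x) + H[1] * Double(p.y) + H[2]) / w
        let y = (H[3] * Double(p.x) + H[4] * Double(p.y) + H[5]) / w
        return CGPoint(x: x, y: y)
    }

    let identity: [Double] = [1, 0, 0, 0, 1, 0, 0, 0, 1]
    let q = apply(identity, to: CGPoint(x: 120, y: 80))  // -> (120, 80)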
Most of my experience with this sort of thing comes from using MATLAB's stereo camera calibrator app and related functions. Their help page gives a good overview of how to get started estimating camera pose. OpenCV has similar functionality.
https://www.mathworks.com/help/vision/ug/stereo-camera-calibrator-app.html
The cited paper by Zhang gives a great description of the mathematics of pose estimation from correspondence, if you're interested.

How to use affineTransform on subset of CAEAGLLayer (iOS)?

I am playing around with OpenGL and Core Animation and have been able to do affine transforms on OpenGL layers, and everything works great. I am looking for help on how I would transform a subset of a layer, meaning the top half or the bottom quarter, and rotate only those pixels while keeping the rest of the layer untouched.
Alternatively, if I have one OpenGL layer, would it be possible to split it into two (top and bottom sections)? Then I could perform transforms as needed. I cannot access the subviews in the layer, only the layer as a whole.
Any advice would be appreciated.
To do it in the view pipeline you would need multiple views. In general there is nothing wrong with that, but you will need to do a bit of work when drawing to each of the views so that they appear as a single surface. If you are using standard projection matrices such as glOrtho, you only need to split the border parameters (top, bottom, left and right) according to your view split.
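With an orthographic projection that split is just arithmetic on the border parameters; a hedged sketch (OrthoBounds and splitHorizontally are illustrative names, not an API):

    // Split one set of glOrtho-style borders into two stacked halves;
    // each sub-view then renders with its own bounds.
    struct OrthoBounds { var left, right, bottom, top: Double }

    func splitHorizontally(_ b: OrthoBounds, at fraction: Double) -> (lower: OrthoBounds, upper: OrthoBounds) {
        let cut = b.bottom + (b.top - b.bottom) * fraction
        return (
            lower: OrthoBounds(left: b.left, right: b.right, bottom: b.bottom, top: cut),
            upper: OrthoBounds(left: b.left, right: b.right, bottom: cut, top: b.top)
        )
    }

    let halves = splitHorizontally(OrthoBounds(left: -1, right: 1, bottom: -1, top: 1), at: 0.5)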
To do it with OpenGL directly, there are multiple ways; which to choose depends on your needs.
One way is to use the viewport. It describes what part of the buffer you are drawing to, so you can split the scene into multiple draw calls that draw to different positions. This is generally more useful for a view-within-a-view situation.
Probably the best way would be to draw the whole scene to an FBO (framebuffer object) with an attached texture. Then create the sprites (rectangles) you want to animate and draw parts of the texture onto those rectangles.
Even then you need a system that can animate within OpenGL. To achieve that you need to do matrix interpolation. It might take a bit of time, but it is generally worth it, as you get total control over the animations and how they are done. Note that due to rotations you will need to do the interpolation in a polar coordinate system, which means transforming the base vectors (the top-left 3x3 part of the matrix) to angle+radius form and interpolating those.
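A minimal sketch of that angle-based interpolation for the 2D case (the 3D version extracts the basis vectors of the top-left 3x3 block the same way); naively interpolating matrix entries instead would shrink the basis mid-animation:

    import Foundation

    // Interpolate a rotation in polar form: matrix -> angle, lerp the angle
    // along the shortest arc, angle -> matrix.
    func interpolatedRotation(from a: Double, to b: Double, t: Double) -> [Double] {
        var d = (b - a).truncatingRemainder(dividingBy: 2 * .pi)
        if d > .pi { d -= 2 * .pi }
        if d < -.pi { d += 2 * .pi }
        let angle = a + d * t
        return [cos(angle), -sin(angle),
                sin(angle),  cos(angle)]   // 2x2 rotation, row-major
    }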

Matching dynamically drawn triangles and differentiating angles

I am making a game wherein the user draws triangles on a grid that must be congruent with other triangles. However, the user gets additional points for drawing their new triangle at a different rotation from the original. I would use the rotation property of the MovieClip, but since the triangles are drawn into a dynamically created MC, they all have a rotation of 0 degrees.
Is there some way to do this? I am absolutely stumped.
I think this is just a maths problem.
Firstly, if you have an equilateral triangle, you wouldn't reliably be able to work out the rotational difference, since all the sides are the same length.
Otherwise, you will always have 'the important side'.
Assuming your triangle is isosceles, your important side is the one that is of a different length to the other two matching sides.
Assuming you have a scalene triangle, your most important side is the longest side.
Once you know your most important side...
You should be able to work out the important side of the user's triangle using trig.
You should also know the important side of the base triangle the user is trying to draw against, since you are 'making' it.
Then you basically have two lines (the two important sides); use trig again to work out the difference in rotation between them, and you are good to go.
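A hedged Swift sketch of the idea, assuming scalene triangles and consistent vertex winding (the longest side's direction depends on vertex order); longestSide and rotationBetween are illustrative names:

    import Foundation
    import CoreGraphics

    // Find the 'important' (longest) side of a scalene triangle.
    func longestSide(of tri: [CGPoint]) -> (CGPoint, CGPoint) {
        var best = (tri[0], tri[1])
        var bestLength = -1.0
        for i in 0..<3 {
            let (p, q) = (tri[i], tri[(i + 1) % 3])
            let dx = Double(q.x - p.x), dy = Double(q.y - p.y)
            let length = (dx * dx + dy * dy).squareRoot()
            if length > bestLength { bestLength = length; best = (p, q) }
        }
        return best
    }

    // Measure the rotation between the base triangle's important side
    // and the user's, via atan2.
    func rotationBetween(_ base: [CGPoint], _ user: [CGPoint]) -> Double {
        let (a0, a1) = longestSide(of: base)
        let (b0, b1) = longestSide(of: user)
        let angleA = atan2(Double(a1.y - a0.y), Double(a1.x - a0.x))
        let angleB = atan2(Double(b1.y - b0.y), Double(b1.x - b0.x))
        return angleB - angleA  // radians; normalise to (-pi, pi] as needed
    }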
I solved this. What I did was have the program dig through the triangle to find the left-most and uppermost point. Then I draw all the triangles using this point as the origin. This ensures that regardless of the order in which the dots are clicked, all triangles will have the same point of origin.
To detect whether they are matches, I wrote up a function that copies the triangles and moves them to the same point. Because they now have the same origin point, they will occupy the same space if they are of the same angle. Using this, I then wrote a function that checks to see if the triangles overlap completely.
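A hedged sketch of that normalisation in Swift; note it swaps the full-overlap drawing test for a simple vertex comparison with a tolerance, which amounts to the same thing for triangles once they share an origin:

    import CoreGraphics

    // Translate each triangle so its left-most (ties broken by upper-most)
    // vertex sits at the origin, mirroring the click-order-independent anchor.
    func normalised(_ tri: [CGPoint]) -> [CGPoint] {
        let anchor = tri.min { $0.x == $1.x ? $0.y < $1.y : $0.x < $1.x }!
        return tri.map { CGPoint(x: $0.x - anchor.x, y: $0.y - anchor.y) }
    }

    // Same-space check: every vertex of one triangle must coincide (within a
    // tolerance) with a vertex of the other.
    func sameTriangle(_ a: [CGPoint], _ b: [CGPoint], tolerance: CGFloat = 0.5) -> Bool {
        let na = normalised(a), nb = normalised(b)
        return na.allSatisfy { p in
            nb.contains { q in abs(p.x - q.x) <= tolerance && abs(p.y - q.y) <= tolerance }
        }
    }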

how to manage contour bounding rect in OpenCV

I have been testing background subtraction using a Gaussian state model. I am using OpenCV 2.1.0, and I can generate a binary image of the foreground of the scene. Now all I want to do is draw a contour bounding rectangle to highlight the moving object. I have used cvContourBoundingRect to obtain the rectangle covering a contour. The issue I am facing is that, in the case of multiple contours, nearby rectangles sometimes overlap. Can anyone suggest how to prevent the rectangles from overlapping? Ideally, two rectangles should never overlap; instead, a bigger rectangle should be drawn that covers them both.
Any suggestion will be appreciated.
There's no ready-made way to do this in OpenCV, but the algorithm is actually very easy:
Cycle through all rectangles and check whether any two rectangles overlap each other. This topic will be useful: Determine if two rectangles overlap each other?
For every overlapping pair of rectangles, create the rectangle that contains both of them: take the minimum of the two top-left coordinates and the maximum of the two bottom-right coordinates. I don't think it's a hard task - just simple math.
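A hedged sketch of the merge loop, written here with Swift's CGRect for brevity (the same min/max logic applies to cv::Rect). It repeats until no two boxes overlap, since a merged box can newly overlap a third one:

    import CoreGraphics

    func mergeOverlapping(_ boxes: [CGRect]) -> [CGRect] {
        var result = boxes
        var merged = true
        while merged {
            merged = false
            outer: for i in 0..<result.count {
                for j in (i + 1)..<result.count where result[i].intersects(result[j]) {
                    result[i] = result[i].union(result[j])  // smallest covering rect
                    result.remove(at: j)
                    merged = true
                    break outer
                }
            }
        }
        return result
    }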

how to connect points after identifying them in cvGoodFeaturesToTrack

I want to identify an object and draw a shape around it.
I previously used color identification, but it wasn't a good option since colors change dramatically from place to place, so I thought: why not identify objects by features such as edges? I did that using this function in OpenCV:
cvGoodFeaturesToTrack
It returns the (x,y)-coordinates of the points. Now I want to connect those points - well, not all of them, but the ones that are close to each other - to draw a shape around the different objects. Any ideas?
I don't think there is a free lunch in this case. You are trying to reconstruct a polygon when you only know its corner points. There is no unique solution to this problem: you can draw all sorts of polygons through the corners. If you are certain the shape you are after is convex, then you can construct the convex hull of the corner points, but the result will be horrible if you include any corners that were not part of the original object.
It seems to me that detecting corners is not the way to segment an object that is more or less delimited by lines. You probably want to try an edge detector instead, or a proper segmentation technique such as watershed.
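If the convex-hull route does fit your objects, here is a hedged, platform-neutral Swift sketch (Andrew's monotone chain over the corner points returned by cvGoodFeaturesToTrack; convexHull is an illustrative name):

    import CoreGraphics

    // Convex hull via monotone chain: build the lower hull over the sorted
    // points, then the upper hull over them in reverse.
    func convexHull(_ points: [CGPoint]) -> [CGPoint] {
        let pts = points.sorted { $0.x == $1.x ? $0.y < $1.y : $0.x < $1.x }
        guard pts.count > 2 else { return pts }
        func cross(_ o: CGPoint, _ a: CGPoint, _ b: CGPoint) -> CGFloat {
            (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x)
        }
        var hull: [CGPoint] = []
        for pass in 0..<2 {                       // lower hull, then upper hull
            let run = pass == 0 ? pts : Array(pts.reversed())
            let base = hull.count
            for p in run {
                while hull.count >= base + 2,
                      cross(hull[hull.count - 2], hull[hull.count - 1], p) <= 0 {
                    hull.removeLast()             // drop non-left turns
                }
                hull.append(p)
            }
            hull.removeLast()                     // endpoint repeats in next pass
        }
        return hull
    }

As the answer warns, any stray corner from a neighbouring object gets absorbed into the hull and distorts the shape badly.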
