VC Dimension with rectangles with horizontal and vertical edges - machine-learning

I am learning the theory of machine learning and have some confusion about VC dimensions. According to the textbook, the VC dimension of 2D axis-aligned rectangles is 4, which means this hypothesis class cannot shatter any set of 5 points.
I found an example here: Cornell
However, I still cannot understand this example. What if we use a rectangle like this (the red one)?
Then we can separate this point from the others. Why is this incorrect?

We are supposed to draw a rectangle containing only the +ve examples, for every possible labeling of the given 5 points. Consider the labeling in which the four points with the maximum x-coordinate, minimum x-coordinate, maximum y-coordinate, and minimum y-coordinate are +ve and the remaining point is -ve. Any axis-aligned rectangle containing those four extreme points spans the full x- and y-range of the set, so it always contains the fifth point as well; this labeling can therefore never be realized. Hence the set of 5 points cannot be shattered, and since the argument uses nothing specific about the points, no set of 5 points can be shattered.
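To make this concrete, here is a small self-contained sketch (plain C++; the sample coordinates in main are only an illustration). For any 5 points, at least one of them lies inside the axis-aligned bounding rectangle of the other four, so it can never be the lone -ve example:

#include <algorithm>
#include <iostream>
#include <vector>

struct Pt { double x, y; };

// Returns true if points[skip] lies inside the axis-aligned bounding
// rectangle of the remaining points, i.e. it cannot be excluded by any
// rectangle that contains all the others.
bool isTrapped(const std::vector<Pt>& points, int skip) {
    double xmin = 1e300, xmax = -1e300, ymin = 1e300, ymax = -1e300;
    for (int i = 0; i < (int)points.size(); ++i) {
        if (i == skip) continue;
        xmin = std::min(xmin, points[i].x); xmax = std::max(xmax, points[i].x);
        ymin = std::min(ymin, points[i].y); ymax = std::max(ymax, points[i].y);
    }
    const Pt& p = points[skip];
    return p.x >= xmin && p.x <= xmax && p.y >= ymin && p.y <= ymax;
}

int main() {
    // Any 5 points will do: there are only 4 extreme coordinates, so at least
    // one point is not needed for any of them and ends up trapped.
    std::vector<Pt> pts = {{0,0}, {4,1}, {2,5}, {1,3}, {2,2}};
    for (int i = 0; i < 5; ++i)
        if (isTrapped(pts, i))
            std::cout << "point " << i << " cannot be the only -ve example\n";
}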

Related

Extend a square in world space to a cube when only screen space coordinates are available

I have a photo of a Go-board, which is basically a grid with n*n squares, each of size a.
Depending on how the image was taken, the grid can have either one vanishing point like this (n = 15, board size b = 15*a):
or two vanishing points like this (n = 9, board size b = 9*a):
So what is available to me are the four screen space coordinates of the four corners of the flat board: p1, p2, p3, p4.
What I would like to do is to calculate the corresponding four screen space coordinates q1, q2, q3, q4 of the corners of the board, if the board was moved 'upward' (perpendicular to the plane of the board) in world space by a, or in other words the coordinates on top of the board, if the board had a thickness of a.
Is the information about the four points even sufficient to calculate this?
If this is not enough information, maybe it would help to make the assumption that the distance of the camera to the center of the board is typically of the order of 1.5 or 2 times the board size b?
From my understanding, the four lines p1-q1, p2-q2, p3-q3, p4-q4 would all go through the same (yet unknown) vanishing point, located somewhere below the board.
Maybe a sufficient approximation (because typically for a Go board n=18 and therefore square size a is small in comparison to the board size) for the direction of each of the lines p1-q1, p2-q2, ... in screen space would be to simply choose a line perpendicular to the horizon (given by the two vanishing points vp1-vp2 or by p1-p2 in the case of only one vanishing point)?
Having made this approximation, still the length of the four lines p1-q1, p2-q2, p3-q3, p4-q4 would need to be calculated ...
Any hints are highly appreciated!
PS: I am using Objective-C & OpenCV
Not yet a full answer, but this might help to move forward. As MvG pointed out, 4 points alone are not enough. Luckily we know the board is a square, so even with perspective distortion the diagonals in 2D should/will intersect at the board center (unless serious fish-eye or other distortions are present in the image). Here is a test image (created with OpenGL) that I used as test input:
The grayish surface is a 2D quad drawn from the perspective-distorted 2D corner points (your input). The aqua/bluish grid is the 3D OpenGL grid from which I created those 2D corner points (to see whether they match). The green lines are the 2D diagonals, and the orange points are the 2D corner points and the intersection of the diagonals. As you can see, the 2D diagonal intersection corresponds exactly to the center of the 3D board's middle cell.
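For reference, that 2D diagonal intersection can be computed directly from the four corner points. A minimal sketch (OpenCV C++), assuming the corners are given in order around the quad so that p1-p3 and p2-p4 are the diagonals; the degenerate case of parallel diagonals is not handled:

#include <opencv2/core.hpp>

// Intersection of the two diagonals p1-p3 and p2-p4 of the distorted board quad.
// Under perspective this 2D intersection is the image of the 3D board center.
cv::Point2f diagonalIntersection(cv::Point2f p1, cv::Point2f p2,
                                 cv::Point2f p3, cv::Point2f p4) {
    cv::Point2f d1 = p3 - p1;                   // direction of diagonal p1 -> p3
    cv::Point2f d2 = p4 - p2;                   // direction of diagonal p2 -> p4
    float denom = d1.x * d2.y - d1.y * d2.x;    // 2D cross product (0 if parallel)
    // parameter t along p1 + t*d1 at which the two diagonals cross
    float t = ((p2.x - p1.x) * d2.y - (p2.y - p1.y) * d2.x) / denom;
    return p1 + t * d1;
}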
Now we can use the ratio between the half-diagonal lengths to estimate/fit the perspective. If we handle cell coordinates in the range <0,9>, we want to achieve a further subdivision of the half diagonals like this:
I am still not sure exactly how (the linear ratio l0/(l0+l1) does not work), so I need to inspect the perspective-mapping equations to find how the relative ratio depends on them and compute the inverse (when I have the time and mood for this).
If that succeeds, then we can compute any point along the diagonals (we want the cell edges). Once that is done, we can easily compute the visual size of any cell (cell size a) and use the vanishing point without any 3D transform matrices at all.
In case this is not doable, there is still the option to use DIP/CV techniques to detect the cell crossings, like this:
OpenCV Birdseye view without loss of data
using just bullet #2, but for that you need to take into account the type of images you will have and adjust the detector or add preprocessing for it ...
Now back to your offsetting: you can simply offset your cells up by the visual size of the cell, like this:
And handle the left-side points (either interpolate the size or use the same as the neighboring cell). That should work unless very weird board angles are used.
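Here is a rough sketch of that offsetting step, using the perpendicular-to-horizon approximation proposed in the question rather than the diagonal-ratio fit above (OpenCV C++; the function name and the cellSizePx parameter, i.e. the visual cell size measured near the point, are my own choices):

#include <opencv2/core.hpp>
#include <cmath>

// Approximate q = p moved 'up' by one visual cell size, using the screen-space
// direction perpendicular to the horizon (the line through vp1 and vp2, or
// through p1 and p2 when there is only one vanishing point).
cv::Point2f offsetUp(cv::Point2f p, cv::Point2f vp1, cv::Point2f vp2,
                     float cellSizePx) {
    cv::Point2f h = vp2 - vp1;                   // horizon direction
    cv::Point2f n(-h.y, h.x);                    // perpendicular to the horizon
    float len = std::sqrt(n.x * n.x + n.y * n.y);
    n = n * (1.0f / len);                        // normalize
    if (n.y > 0) n = -n;                         // image y grows downward, so flip to point 'up'
    return p + cellSizePx * n;
}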

Do the edge points of an object represent an ellipse?

Given the edge points of an object, let us say Obj = (xi, yi); i = 1, 2, 3, ....
How can we know if these edge points represent an ellipse or not?
As long as you have at least 5 points (an ellipse has five degrees of freedom), you can try a linear least-squares fit:
See here: https://math.stackexchange.com/a/153150/104118
See section 7, "Fitting an Ellipse to 2D Points", in the linked document: http://www.geometrictools.com/Documentation/LeastSquaresFitting.pdf
Off the top of my head, I would calculate the axis with minimal variance (call it a) and the axis with maximal variance (call it b).
I would check that those axes are reasonably close to being perpendicular - if not, then it's probably not an ellipse. If they are close to being perpendicular, I would rotate the point cloud so that a and b are aligned with the x- and y-axes.
The next step would be to translate the point cloud so its center is at (0,0) and then check that each translated point lies close to the perimeter of an ellipse with axes a and b, by plugging each point into the ellipse equation x^2/a^2 + y^2/b^2 - 1 = 0 and checking that the left-hand side is close to 0.
This is all based on me reading "edge points" as just looking at the points used by edges. If the edges themselves are to be involved, you would have to check that the edges go "around the clock" as well.
Well I know this was loose... hope it made sense somehow :-).
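If OpenCV is available, a quick way to do roughly what both answers describe is to fit an ellipse with cv::fitEllipse (which needs at least 5 points) and then check every point against the ellipse equation in the fitted frame. A sketch; the function name and the tolerance are arbitrary choices of mine, and the residual is algebraic rather than a true geometric distance, but it works as a first filter:

#include <opencv2/imgproc.hpp>
#include <cmath>
#include <vector>

// Fit an ellipse to the edge points and test whether every point satisfies
// (x'/a)^2 + (y'/b)^2 ~= 1 in the ellipse's own coordinate frame.
bool looksLikeEllipse(const std::vector<cv::Point2f>& pts, float tol = 0.1f) {
    if (pts.size() < 5) return false;                 // fitEllipse needs >= 5 points
    cv::RotatedRect e = cv::fitEllipse(pts);
    float a = e.size.width  * 0.5f;                   // semi-axis along the local x direction
    float b = e.size.height * 0.5f;                   // semi-axis along the local y direction
    float c = std::cos(e.angle * (float)CV_PI / 180.f);
    float s = std::sin(e.angle * (float)CV_PI / 180.f);
    for (const cv::Point2f& p : pts) {
        float dx = p.x - e.center.x, dy = p.y - e.center.y;
        float xr =  c * dx + s * dy;                  // rotate into the ellipse frame
        float yr = -s * dx + c * dy;
        float v = (xr * xr) / (a * a) + (yr * yr) / (b * b);
        if (std::fabs(v - 1.f) > tol) return false;   // point too far off the perimeter
    }
    return true;
}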

Understanding Distance Transform in OpenCV

What is the distance transform? What is the theory behind it? If I have 2 similar images but in different positions, how does the distance transform help in overlapping them? The results that the distance transform function produces look as if they are divided in the middle - is that to find the center of one image so that the other can be overlapped halfway? I have looked into the OpenCV documentation but it is still not clear.
Look at the picture below (you may want to increase your monitor brightness to see it better). The picture shows the distance from the red contour depicted with pixel intensities, so in the middle of the image, where the distance is maximum, the intensities are highest. This is a manifestation of the distance transform. Here is an immediate application: the green shape is a so-called active contour, or snake, which moves according to the gradient of distances from the contour (while also following some other constraints) and curls around the red outline. Thus one application of the distance transform is shape processing.
Another application is text recognition - one of the powerful cues for text is a stable stroke width. The distance transform run on segmented text can confirm this. The corresponding method is called the stroke width transform (SWT).
As for aligning two rotated shapes, I am not sure how you can use the DT. You can find the center of a shape and rotate the shape about it, but you can rotate it about any other point as well; the difference is just a translation, which is irrelevant if you run matchTemplate to match the shapes in the correct orientation.
Perhaps if you upload your images it will be clearer what to do. In general, you can match them as a whole, or by features (which is more robust to various deformations or perspective distortions), or even using outlines/silhouettes if there are only a few features. Finally, you can figure out the orientation of your object (if it has a dominant orientation) by running PCA or by fitting an ellipse (as a rotated rectangle):
cv::RotatedRect rect = cv::fitEllipse(points2D); // fit an ellipse to the 2D point set
float angle_to_rotate = rect.angle;              // its orientation in degrees
The distance transform is an operation that works on a single binary image that fundamentally seeks to measure a value from every empty point (zero pixel) to the nearest boundary point (non-zero pixel).
An example is provided here and here.
The measurement can be based on various definitions, calculated discretely or precisely: e.g. Euclidean, Manhattan, or Chessboard. Indeed, the parameters in the OpenCV implementation allow some of these, and control their accuracy via the mask size.
The function can return the output measurement image (floating point) - as well as a labelled connected components image (a Voronoi diagram). There is an example of it in operation here.
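A minimal usage sketch of the OpenCV call described above (C++; the input file name is a placeholder). Note that cv::distanceTransform measures, for every non-zero pixel, the distance to the nearest zero pixel, so to get distances from an outline you make the outline the zero set:

#include <opencv2/opencv.hpp>

int main() {
    // Placeholder input: a grayscale image with the shape's outline drawn in white.
    cv::Mat outline = cv::imread("contour.png", cv::IMREAD_GRAYSCALE);

    // Invert so the outline becomes the zero set and everything else is non-zero.
    cv::Mat src;
    cv::threshold(outline, src, 127, 255, cv::THRESH_BINARY_INV);

    cv::Mat dist, labels;
    // Euclidean metric with a 5x5 mask; 'labels' is the Voronoi diagram
    // (nearest-zero-pixel regions) mentioned above.
    cv::distanceTransform(src, dist, labels, cv::DIST_L2, cv::DIST_MASK_5,
                          cv::DIST_LABEL_CCOMP);

    cv::normalize(dist, dist, 0, 1.0, cv::NORM_MINMAX);   // scale the float result for display only
    cv::imshow("distance from outline", dist);
    cv::waitKey();
    return 0;
}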
I see from another question you asked recently that you are looking to register two images together. I don't think the distance transform is really what you are looking for here. If you are looking to align a set of points, I would instead suggest you look at techniques like Procrustes analysis, Iterative Closest Point, or RANSAC.

Can we apply Quad-Tree to a non-square rectangle?

I am trying to implement fast 2-D collision detection with a quad-tree.
AFAIK, a quad-tree divides a region into 4 sub-regions: north-west, north-east, south-east and south-west. This dividing works perfectly with a square. But what if the region is a non-square rectangle? In that case, we cannot divide the long edge and the short edge evenly, and the short edge determines how far we can divide.
Am I right on this? Is that meant to be?
Simply take the maximum of the width and height of the bounding box of the region of interest as the side length of the quad-tree.
Another solution:
Two quad-tree implementations that I have seen use a rectangle internally, so they would work out of the box even if the provided root bounds are not a square. They divide both the width and the height of the bounds in each subdivision step. But note that there are over 10 different quad-tree types; I am talking about rectangle quad-trees.
One implementation explicitly uses a single side length which is divided by 2, so that would not work well for non-square root bounds.
However, I still recommend the square approach: better to use a square as the root bounds.
That then works for all quad-tree types.
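To make the "divide both width and height" variant concrete, here is a minimal subdivision sketch (plain C++; the struct and member names are mine, and a real collision-detection tree would also store objects and a depth or size limit):

#include <array>
#include <memory>

struct Rect { float x, y, w, h; };   // top-left corner plus extents

// One node of a rectangle quad-tree: the root bounds may be any rectangle,
// and each split halves the width and the height independently.
struct QuadNode {
    Rect bounds;
    std::array<std::unique_ptr<QuadNode>, 4> child;   // NW, NE, SW, SE

    explicit QuadNode(Rect b) : bounds(b) {}

    void subdivide() {
        float hw = bounds.w * 0.5f, hh = bounds.h * 0.5f;
        child[0] = std::make_unique<QuadNode>(Rect{bounds.x,      bounds.y,      hw, hh}); // NW
        child[1] = std::make_unique<QuadNode>(Rect{bounds.x + hw, bounds.y,      hw, hh}); // NE
        child[2] = std::make_unique<QuadNode>(Rect{bounds.x,      bounds.y + hh, hw, hh}); // SW
        child[3] = std::make_unique<QuadNode>(Rect{bounds.x + hw, bounds.y + hh, hw, hh}); // SE
    }
};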

Given a set of points to define a shape, how can I contract this shape like Photoshop's Selection>Contract

I have a set of points to define a shape. These points are in order and essentially are my "selection".
I want to be able to contract this selection by an arbitrary amount to get a smaller version of my original shape.
In a basic example with a triangle, the points are simply moved along their normals, where each normal is defined by the points to the left and right of the point in question.
Eventually all 3 points will meet and form one point but until that point they will make a smaller and smaller triangle.
For more complex shapes, when moving the individual points inward, they may pass through the outer edge of the shape resulting in weird artifacts. Obviously I'll need to cull these points and remove them from the array.
Any help in exactly how I can do that would be greatly appreciated.
Thanks!
This is just an idea but couldn't you find the center of mass of the object, create a vector from the center to each point, and move each point along this vector?
Finding the center of mass would of course involve averaging the x and y coordinates. Getting a vector is as simple as subtracting the center point from the point in question. Normalizing and scaling are common vector operations that you can easily find with Google.
EDIT
Another way to interpret what you're asking is that you want to erode your collection of points, as in morphological erosion. This is typically applied to binary images, but you can slightly modify the concept to work with a collection of points. Essentially, you need to write a function that, given a point, will return true (black) or false (white) depending on whether that point is inside or outside the shape defined by your points. You'd have to look up how to do that for shapes that aren't convex (it's harder but not impossible).
Now, obviously, every single one of your actual points will return false because they're all on the border (by definition). However, you now have a grid of sample points around your point of interest that tells you where "inside" and "outside" are. Average all of the "inside" points and move your actual point along the vector from itself toward this average. You could play with different erosion kernels to see what works best.
You could even work with a kernel of floating-point weights instead of either/or values, which weights the contribution of each sample to the average proportionally. With this, you could approximate a circular kernel with a low number of points. Try the simpler method first.
Find the selection center (as suggested by colithium)
Map the selection points to the coordinate system with the selection center at (0,0). For example, if the selection center is at (150,150), and a given selection point is at (125,75), the mapped position of the point becomes (-25,-75).
Scale the mapped points (multiply X and Y by something in the range of 0.0..1.0)
Remap the points back to the original coordinate system
Only simple maths is required; no need to muck about with normalizing vectors.
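A small sketch of those four steps (C++ with cv::Point2f for convenience; the scale factor is whatever contraction amount you want). Note that a uniform scale about the center is not exactly the same as Photoshop's fixed-pixel contraction for non-convex shapes, but it matches the steps above:

#include <opencv2/core.hpp>
#include <vector>

// Contract a selection by scaling every point toward the selection center
// ('scale' in (0,1) shrinks the shape; 1 leaves it unchanged).
std::vector<cv::Point2f> contract(const std::vector<cv::Point2f>& pts, float scale) {
    // 1. Find the selection center (average of the points).
    cv::Point2f center(0.f, 0.f);
    for (const cv::Point2f& p : pts) center += p;
    center = center * (1.f / static_cast<float>(pts.size()));

    // 2-4. Map into center-origin coordinates, scale, and map back.
    std::vector<cv::Point2f> out;
    out.reserve(pts.size());
    for (const cv::Point2f& p : pts)
        out.push_back(center + (p - center) * scale);
    return out;
}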

Resources