What does the vertical line in a confusion matrix plot represent? - machine-learning

What does the vertical line in the confusion matrix heat map represent?

The vertical axis represents the true value of each item.
When a sample belongs to a class in your data set, that class is called the true value.
You evidently have 5 classes.
As for the horizontal axis, it shows the classes predicted by the model for those samples.
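In other words, with the usual sklearn-style orientation, row = true class and column = predicted class. A minimal numpy sketch with made-up labels shows the convention:

```python
import numpy as np

# Hypothetical true and predicted labels for a 3-class problem.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])

n_classes = 3
cm = np.zeros((n_classes, n_classes), dtype=int)
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1  # row = true class, column = predicted class

print(cm)
```

Row 0 reads "of the samples whose true class is 0, one was predicted 0 and one was predicted 1", which is exactly what the vertical axis of the heat map encodes.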

Related

How to create minimum enclosing areas from given data points, analyse the found areas, and quantify them relative to each other

I have a 2D data point set (Nx2) with labels (Nx1). I need to check whether the minimum enclosing areas created by the data points sharing a label are behaving well.
Well-behaved means: having roughly the same shape (or at least the same orientation) as the others, not intruding too much into the area of other shapes, and touching exactly 2 edges unless it is the top or bottom shape.
I created sample code below; I need a way to quantify the difference between cases like the first plot and cases like the second plot.
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import ConvexHull

datapoints = np.random.rand(1000, 2)
label = np.zeros(1000)
label[:] = 3
label[datapoints[:, 1] < 0.6] = 2
label[datapoints[:, 1] < 0.3] = 1
plt.scatter(datapoints[:, 0], datapoints[:, 1], c=label)
for subset_label in range(1, 4):
    subset = datapoints[label == subset_label]
    hull = ConvexHull(subset)
    hull.close()
    # append the first vertex so the hull outline is drawn closed
    verts = np.append(hull.vertices, hull.vertices[0])
    plt.plot(subset[verts, 0], subset[verts, 1], 'r-', lw=2)
plt.show()
label[:] = 3
label[datapoints[:, 1] < 0.6] = 2
label[np.logical_and(datapoints[:, 1] < 0.9, datapoints[:, 0] < 0.15)] = 1
label[np.logical_and(datapoints[:, 1] < 0.3, datapoints[:, 0] < 0.5)] = 1
label[np.logical_and(datapoints[:, 1] < 0.3, datapoints[:, 0] > 0.7)] = 4
fig, ax = plt.subplots()
ax.scatter(datapoints[:, 0], datapoints[:, 1], c=label)
for subset_label in range(1, 5):
    subset = datapoints[label == subset_label]
    hull = ConvexHull(subset)
    hull.close()
    verts = np.append(hull.vertices, hull.vertices[0])
    ax.plot(subset[verts, 0], subset[verts, 1], 'r-', lw=2)
plt.show()
I need things like regionprops from skimage.measure or contours from OpenCV, but those don't seem right for my case, where I already have the "region" or the "contour".
Otherwise, I could check region moments (orientation, eccentricity, etc.).
The sets here are nicely aligned: they have roughly the same shape, their orientation is very similar (horizontal), and the interior set touches exactly 2 edges.
The blue set intruded into the lower set and split it into 2 smaller sets. The blue set has a very different shape from the others. The pink set has a rather vertical orientation (as opposed to the horizontal orientation of the other sets).
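One way to quantify orientation and elongation without skimage, assuming each labeled subset is available as an (M, 2) array: use the eigendecomposition of the point covariance matrix (i.e. the second central moments of the point cloud). The function name and the strip-shaped test data below are made up for illustration:

```python
import numpy as np

def orientation_and_elongation(points):
    """Estimate orientation (radians) and elongation of a 2D point cloud
    from the eigendecomposition of its covariance matrix."""
    cov = np.cov(points, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    major = eigvecs[:, -1]                  # direction of largest variance
    angle = np.arctan2(major[1], major[0])
    elongation = np.sqrt(eigvals[-1] / eigvals[0])  # 1 means isotropic
    return angle, elongation

# Wide horizontal strip: orientation near 0 (or pi, the sign of an
# eigenvector is arbitrary), clearly elongated.
rng = np.random.default_rng(0)
strip = rng.random((1000, 2)) * np.array([1.0, 0.3])
angle, elong = orientation_and_elongation(strip)
```

Comparing the angles and elongation ratios across subsets would flag the second plot: the pink set's angle would be near vertical while the others are near horizontal, and the blue set's elongation would differ strongly from the rest.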

What do values in ARKit transform matrices represent?

Right now I am trying to understand the values within an ARKit transform matrix so I can quantify the movements of my SCNNode. From a previous post on Stack Overflow I have learned that the matrix contains information about the node's translation, scale, rotation, and position.
What I am not understanding is which values are specifically associated with those four things. For example, I have figured out that the first element of the 3rd column represents an X (horizontal) movement on the screen, and the 2nd value of the 3rd column represents a Y (vertical) movement. But other than that I'm not sure what the rest of the values in the matrix mean.
Thanks for the help!
In a transformation matrix, the translation information is in the last column.
Given a transformation matrix, you can extract the translation from the last column as below:
let translation = SCNVector3(transform.columns.3.x, transform.columns.3.y, transform.columns.3.z)
Rotation and scaling use the first three columns and are more complex.
Scale is the length of each of the first three column vectors, and to extract the rotation you divide each of the first three column vectors by its corresponding scale factor.
You can refer to this link for a better understanding of scale and rotation and how to extract them.
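The same decomposition can be sketched in numpy rather than ARKit's API (assuming a column-major 4x4 matrix like simd_float4x4, stored so that mat[:, 3] is the translation column; the function name is made up):

```python
import numpy as np

def decompose(mat):
    """Split a 4x4 column-major transform into translation, scale, rotation."""
    translation = mat[:3, 3]                  # last column
    basis = mat[:3, :3]                       # first three columns
    scale = np.linalg.norm(basis, axis=0)     # length of each column vector
    rotation = basis / scale                  # normalize each column
    return translation, scale, rotation

# Hypothetical transform: uniform scale 2, translation (1, 2, 3), no rotation.
m = np.diag([2.0, 2.0, 2.0, 1.0])
m[:3, 3] = [1.0, 2.0, 3.0]
t, s, r = decompose(m)
```

With no rotation applied, the recovered rotation block is the identity matrix and the scale comes back as (2, 2, 2).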
MohammadRF's answer cleared things up for me the best. However, ARKit's matrices are in row-major order, so if you transpose the matrices from the information he gave, it will apply to ARKit.

VC Dimension with rectangles with horizontal and vertical edges

I am learning the theory of machine learning and have some confusion about VC dimension. According to the textbook, the VC dimension of 2D axis-aligned rectangles is 4, which means they cannot shatter 5 points.
I found an example here: Cornell
However, I still cannot understand this example. What if we use a rectangle like this (the red one)?
Then we can separate this point from the others. Why is this incorrect?
We are supposed to be able to draw a rectangle containing only the +ve examples for any labelling of the given 5 points. For any 5 points, consider the labelling where the points with the maximum x-coordinate, minimum x-coordinate, maximum y-coordinate, and minimum y-coordinate are +ve and the remaining point is -ve: any rectangle containing those four extreme points must span from the minimum to the maximum in both coordinates, so it always contains the fifth point as well. Hence no set of 5 points can be shattered.
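The argument above can be checked numerically: the tightest axis-aligned rectangle around the four extreme points always contains the remaining point(s). A sketch over random 5-point sets (function name made up):

```python
import numpy as np

def fifth_point_inside(points):
    """For 5 points, check that the axis-aligned bounding box of the
    extreme points (max/min x, max/min y) contains the remaining point(s)."""
    extremes = {points[:, 0].argmax(), points[:, 0].argmin(),
                points[:, 1].argmax(), points[:, 1].argmin()}
    ext = points[list(extremes)]
    lo, hi = ext.min(axis=0), ext.max(axis=0)
    rest = [i for i in range(len(points)) if i not in extremes]
    return all(np.all(points[i] >= lo) and np.all(points[i] <= hi)
               for i in rest)

rng = np.random.default_rng(1)
ok = all(fifth_point_inside(rng.random((5, 2))) for _ in range(100))
```

The red rectangle in the linked example only handles one particular labelling; shattering requires realizing every labelling, and the extreme-points labelling is the one that always fails.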

Calculate harmonics for a set of data extracted from images

I am implementing a method known as the trace transform for image analysis. The algorithm extracts a lot of features of an image (in my case features pertaining to texture) using a set of functionals applied to the values of the pixels. Here is a paper describing the algorithm: http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=18&cad=rja&ved=0CGoQFjAHOAo&url=http%3A%2F%2Fpdf.aminer.org%2F000%2F067%2F295%2Ftexture_classification_with_thousands_of_features.pdf&ei=lV9cUez_GYrx2QX-voCgDw&usg=AFQjCNEbCd8GBm4X8V4vk0PYyQwPZPlWyg&sig2=KTtvd1XxtvuUpCDeBzUu4A (sorry for the long link)
The algorithm finally constructs a "triple feature", a number that characterizes the image, calculated by applying a first set of functionals to the pixel values extracted by all the tracing lines going through the image at all angles. In the paper you can see this description starting on page 2; Figure 1 shows an abstraction of such a tracing line, which is defined by its angle (phi) and its distance (d) from the center of the image. So we can describe each tracing line by the pair (phi, d).
Then, on the next page (page 3), Table 1 gives the set of functionals applied to the pixel values extracted by the tracing lines. Each of these functionals, applied to all the tracing lines, generates another representation of the image. Then another set of functionals (those in Table 2 on page 4) is applied to the new representation along the columns, that is, along parameter d, generating yet another representation of the image.
Finally, another set of functionals is applied to this last representation, which is an array of values over the angle parameter phi (that means we have 360 values describing the image, one for each angle of the tracing lines). You can see these functionals in Table 3 on page 4 of the paper.
Now, some of this last set of functionals need the harmonics of this set of 360 values. First I need to calculate the first harmonic, and then the amplitude and phase of the first through fourth harmonics, but I don't know how to do this.
I hope this explains it better. You can read the first couple of pages of the paper since it's probably better explained there. And figure 1 makes it clear what I mean by tracing lines and the representation of each line by the (phi,d) pair.
Thanks
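For a sequence of 360 values (one per degree of phi), the amplitude and phase of the first few harmonics can be read off its discrete Fourier transform: bin k of the FFT corresponds to the k-th harmonic. A sketch with numpy, using a synthetic signal whose harmonic content is known in advance:

```python
import numpy as np

# Hypothetical signal over phi = 0..359 degrees: a DC offset plus a
# first and a third harmonic with known amplitudes and phases.
phi = np.deg2rad(np.arange(360))
signal = 5.0 + 2.0 * np.cos(1 * phi + 0.5) + 0.7 * np.cos(3 * phi - 1.0)

spectrum = np.fft.rfft(signal)
n = len(signal)
for k in range(1, 5):                        # first through fourth harmonic
    amplitude = 2.0 * np.abs(spectrum[k]) / n
    phase = np.angle(spectrum[k])
    print(k, amplitude, phase)               # k=1 recovers 2.0 and 0.5
```

The 2.0/n factor converts the raw FFT magnitude into the cosine amplitude; harmonics absent from the signal (here k = 2 and 4) come out with amplitude near zero.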

Given a set of points to define a shape, how can I contract this shape like Photoshop's Selection>Contract

I have a set of points to define a shape. These points are in order and essentially are my "selection".
I want to be able to contract this selection by an arbitrary amount to get a smaller version of my original shape.
In a basic example with a triangle, the points are simply moved along their normals, where each point's normal is defined by the points to its left and right.
Eventually all 3 points will meet and form a single point, but until then they will make a smaller and smaller triangle.
For more complex shapes, moving the individual points inward may push them through the outer edge of the shape, resulting in weird artifacts. Obviously I'll need to cull these points and remove them from the array.
Any help in exactly how I can do that would be greatly appreciated.
Thanks!
This is just an idea but couldn't you find the center of mass of the object, create a vector from the center to each point, and move each point along this vector?
To find the center of mass would of course involve averaging the x and y coordinates. Getting a vector is as simple as subtracting the center point from the point in question. Normalizing and scaling are common vector operations that can be found with Google.
EDIT
Another way to interpret what you're asking is that you want to erode your collection of points, as in morphological erosion. This is typically applied to binary images, but you can slightly modify the concept to work with a collection of points. Essentially, you need to write a function that, given a point, returns true (black) or false (white) depending on whether that point is inside or outside the shape defined by your points. You'd have to look up how to do that for shapes that aren't convex (it's harder but not impossible).
Now, obviously, every single one of your actual points will return false because they're all on the border (by definition). However, you now have a grid of points around your point of interest that defines where "inside" and "outside" are. Average all of the "inside" points and move your actual point along the vector from itself toward this average. You could play with different erosion kernels to see what works best.
You could even work with a kernel with floating-point weights instead of either/or values, which will affect your average calculation in proportion to the weights. With this, you could approximate a circular kernel with a low number of points. Try the simpler method first.
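A rough sketch of that idea, using matplotlib.path.Path for the inside/outside test and a circular ring of sample points as the "kernel" (the function name, ring kernel, and the 0.5 step factor are all made up for illustration):

```python
import numpy as np
from matplotlib.path import Path

def erode_vertices(vertices, radius=0.1, samples=64):
    """Move each polygon vertex toward the average of the nearby sample
    points that fall inside the polygon (a point-based 'erosion')."""
    poly = Path(vertices)  # contains_points treats the path as closed
    theta = np.linspace(0, 2 * np.pi, samples, endpoint=False)
    offsets = radius * np.column_stack([np.cos(theta), np.sin(theta)])
    out = []
    for v in vertices:
        candidates = v + offsets
        inside = candidates[poly.contains_points(candidates)]
        if len(inside):
            out.append(v + 0.5 * (inside.mean(axis=0) - v))
        else:
            out.append(v)  # no interior neighbours: caller may cull this point
    return np.array(out)

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
shrunk = erode_vertices(square, radius=0.2)
```

Each corner of the unit square gets pulled strictly into the interior, since only the quarter of the ring lying inside the square contributes to the average.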
Find the selection center (as suggested by colithium)
Map the selection points to the coordinate system with the selection center at (0,0). For example, if the selection center is at (150,150), and a given selection point is at (125,75), the mapped position of the point becomes (-25,-75).
Scale the mapped points (multiply X and Y by something in the range of 0.0..1.0)
Remap the points back to the original coordinate system
Only simple maths required; no need to muck about with normalizing vectors.
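Those four steps collapse into a couple of lines, since mapping to the center, scaling, and mapping back is just a scaling about the centroid (function name made up):

```python
import numpy as np

def contract(points, factor):
    """Shrink a shape by scaling its vertices toward the centroid.
    factor in (0, 1): 1.0 keeps the shape, 0.0 collapses it to a point."""
    center = points.mean(axis=0)
    return center + factor * (points - center)

triangle = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
smaller = contract(triangle, 0.5)
```

Note this is a uniform scaling, not a constant-width inset like Photoshop's Contract: edges far from the centroid move farther than edges close to it, which may or may not matter for your use case.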
