Why does normalization cause different output values?

My program: https://i.imgur.com/rlHqMw9.png
After normalization, one 4.0 becomes 0.8.
After normalization, another 4.0 becomes 1.00.
After normalization, one 0.0 becomes 0.00.
After normalization, another 0.0 becomes 0.000000.

As is evident from your image, the numbers themselves do not change; only the displayed precision does. In mvr1, 0.75 carries more decimal places than 0.0, so the display precision of the latter is padded to match the most precise value in the array.
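The original program is only visible as a screenshot, but assuming it prints NumPy arrays (mvr1 being one of them), the effect is easy to reproduce. This is a hypothetical reconstruction, not the program itself:

import numpy as np

# Two arrays, each holding a 4.0 and a 0.0, normalized by different divisors.
a = np.array([4.0, 0.0]) / 5.0       # -> [0.8, 0.0]
b = np.array([4.0, 3.0, 0.0]) / 4.0  # -> [1.0, 0.75, 0.0]

print(a)  # [0.8 0. ]         one decimal suffices for every element
print(b)  # [1.   0.75 0.  ]  every element padded to match 0.75

The values stored in the arrays are identical; only their printed representation differs.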

Related

Where is OpenCV's getGaussianKernel method's formula for sigma coming from?

I found in the OpenCV documentation for cvSmooth that sigma can be calculated from the kernel size n as follows:
sigma = 0.3(n/2 - 1) + 0.8
I would like to know the theoretical background of this equation.
Thank you.
Using such a value for sigma, the ratio between the value at the edge of the kernel (y = 0, x = n/2 - 1) and the value at its centre is:
g_edge / g_center = exp(-(x² + y²) / (2σ²))
                  = exp(-(n/2 - 1)² / (2·(0.3·(n/2 - 1) + 0.8)²))
The limit of this ratio as n increases is:
exp(-1 / (2·0.3²)) = 0.00386592
Note that 1/256 = 0.00390625, and images are often encoded with 256 grey levels. The choice of 0.3 thus ensures that the kernel considers all pixels that may significantly influence the resulting value: anything further from the centre would contribute less than one grey level.
I am afraid I do not have an explanation for the 0.8 part, but I imagine it is there to ensure reasonable values when n is small.
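A quick numeric check of this limit (a Python sketch; sigma_from_ksize simply mirrors the formula quoted above):

import math

def sigma_from_ksize(n):
    # The rule of thumb quoted from the OpenCV documentation.
    return 0.3 * (n / 2 - 1) + 0.8

def edge_to_center_ratio(n):
    # g_edge / g_center evaluated at y = 0, x = n/2 - 1.
    x = n / 2 - 1
    sigma = sigma_from_ksize(n)
    return math.exp(-x**2 / (2 * sigma**2))

for n in (3, 7, 15, 101, 1001):
    print(n, edge_to_center_ratio(n))         # decreases towards the limit

print("limit:", math.exp(-1 / (2 * 0.3**2)))  # 0.00386592..., just under 1/256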

SceneKit transform values, how are they implemented?

Apple's SceneKit documentation suggests that the transform matrix of an object consists of rotation, position, and scale information. However, the transform matrix is a 4x4 matrix, the last column being 0,0,0,1. What exactly are these values, and is there a more detailed explanation of this matrix? Which columns/rows represent what, why are there 4 rows, and what is the last column for?
Example code:
for t in 0...3 {
    print("\t")  // start each column on its own line
    for n in 0...3 {
        print(String(format: "%10.1f", frame.camera.transform[t][n]), terminator: "")
    }
}
Output:
       0.1      -0.7       0.7       0.0
       1.0       0.2      -0.1       0.0
      -0.1       0.7       0.7       0.0
       0.3      -0.1       0.0       1.0
I'm pretty sure this is the CATransform3D:
https://developer.apple.com/documentation/quartzcore/catransform3d
Which is some of the most asinine "documentation" in the history of Apple's notoriously asinine documentation.
Try this, from the wayback machine, when they used to at least talk about things a little more deeply... somewhat: https://web.archive.org/web/20111010140734/http://developer.apple.com/library/mac/#/web/20111012014313/https://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/CoreAnimation_guide/Articles/Layers.html
m34 is the most interesting element of the entire matrix; it is the one usually described as being responsible for perspective.
And here's one of the best articles ever written about Core Animation, that explains some aspects of this transform: http://www.thinkandbuild.it/introduction-to-3d-drawing-in-core-animation-part-1/
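To make the question's output concrete: simd matrices (which ARKit and SceneKit use) are column-major, so each line printed above is one column of the matrix, not a row. Here is a sketch, in Python/NumPy for brevity, of how position, scale, and rotation can be read off the printed values:

import numpy as np

# Each printed line is a COLUMN of the column-major simd matrix:
# columns 0-2 are the rotated/scaled x, y, z basis vectors,
# column 3 is the position, and the trailing 0,0,0,1 entries
# form the matrix's last row (which makes the transform affine).
cols = np.array([
    [ 0.1, -0.7,  0.7, 0.0],   # column 0: x axis
    [ 1.0,  0.2, -0.1, 0.0],   # column 1: y axis
    [-0.1,  0.7,  0.7, 0.0],   # column 2: z axis
    [ 0.3, -0.1,  0.0, 1.0],   # column 3: translation
])
M = cols.T                                 # conventional row-major view

position = M[:3, 3]                        # translation: the last column
scale = np.linalg.norm(M[:3, :3], axis=0)  # scale: length of each basis vector
rotation = M[:3, :3] / scale               # pure rotation once scale is removed

print("position:", position)               # [ 0.3 -0.1  0. ]
print("scale:", scale)                     # ~[1 1 1] for a camera pose

The fourth row is what lets a 4x4 matrix express translation (and, via m34, perspective) on 3D points written in homogeneous coordinates.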

Why represent neural network quality as 1 minus the ratio of the mean absolute error in prediction to the range of the predicted values?

The documentation for IBM's SPSS Modeler defines neural network quality as:
For a continuous target, this is 1 minus the ratio of the mean absolute error in prediction (the average of the absolute values of the predicted values minus the observed values) to the range of predicted values (the maximum predicted value minus the minimum predicted value).
Is this calculation standard?
I'm having trouble understanding how quality is derived from this.
The main point here is to make the network quality measure independent of the range of the output values. The proposed measure is 1 - relative_error. This means that for a perfect network you get the maximum quality of 1, and the quality stays non-negative as long as the mean absolute error does not exceed the range of the predicted values.
Example:
If you want to predict values in the range 0 to 1, an absolute error of 0.2 would mean 20%. When predicting values in the range 0 to 100, you could have a much larger absolute error of 20 for the same accuracy of 20%.
When using the formula you describe, you get the same quality in both cases:
1 - 0.2 / (1 - 0) = 0.8
1 - 20 / (100 - 0) = 0.8
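A minimal sketch of the metric as quoted (spss_quality is a made-up helper name, not part of IBM's API):

import numpy as np

def spss_quality(predicted, observed):
    # 1 - MAE / (max(predicted) - min(predicted)), per the quoted documentation.
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    mae = np.mean(np.abs(predicted - observed))
    return 1.0 - mae / (predicted.max() - predicted.min())

# The same 20% relative error on two different scales gives the same quality:
print(spss_quality([0.0, 1.0], [0.2, 0.8]))      # 0.8
print(spss_quality([0.0, 100.0], [20.0, 80.0]))  # 0.8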

How to normalize data to 1 such that each value receives a weight proportional to its own value

I have some data that do not sum to 1, and I would like them to sum to 1.
0.0232
0.05454
0.2154
0.5
0.005426
0.024354
0.00000456
sum: 0.82292456
I could just multiply each value by 1.0/0.82292456 and then have the sum be 1.00, but then all values would receive a factor adjustment of the same proportion (i.e. 0.17707544).
I'd like to increase each value based on the size of the value itself. In other words, 0.5 would get a proportionally larger adjustment than 0.00000456 would.
I am not sure how to determine these adjustments that could potentially be additive, multiplicative, or both.
Any hints or suggestions would be helpful!
Thanks!
"I could just multiply each value by 1.0/0.82292456 and then have the sum be 1.00, but then all values would receive a factor adjustment of the same proportion (i.e. 0.17707544)."
OK, that's what I'd do. Why is "a factor adjustment of the same proportion" a problem?
"I'd like to increase each value based on the size of the value itself."
In that case, you should still multiply each value by 1.0/0.82292456, because that is exactly what multiplication does: each value's absolute adjustment is proportional to its own size, as the check below shows.
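To see that plain rescaling already behaves this way, here is a quick check with the numbers from the question:

values = [0.0232, 0.05454, 0.2154, 0.5, 0.005426, 0.024354, 0.00000456]
total = sum(values)                      # 0.82292456

normalized = [v / total for v in values]
print(sum(normalized))                   # 1.0 (up to floating-point rounding)

# Each value's absolute increase is proportional to the value itself:
for v, n in zip(values, normalized):
    print(f"{v:.8f} -> {n:.8f}  (+{n - v:.8f})")
# 0.5 gains about 0.108 while 0.00000456 gains about 0.000001 --
# both roughly 21.5% of themselves.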

