About precision and recall in prediction - machine-learning

I am confused about precision and recall in prediction.
My task is object detection in an image.
If there is no object in the image and my predicted count is 1, then false positive = 1.
If there is one object in the image and my predicted count is 0, then false negative = 1.
However, if there is no object in the image and my predicted count is 0, then what is the value of true positive? Is it 0 or 1?
Thanks!

If there is no object in the image and your predicted count is 0, then true positive = 0 (that case counts as a true negative, not a true positive).
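For concreteness, here is a minimal sketch (not the asker's code) that tallies TP/FP/FN from per-image object counts and then computes precision and recall. The count_errors helper is hypothetical and ignores matching boxes by IoU, which a full detection metric would also need.

# Minimal sketch: per-image TP/FP/FN from a ground-truth count and a predicted count.
def count_errors(gt_count, pred_count):
    tp = min(gt_count, pred_count)        # detections that can be matched
    fp = max(pred_count - gt_count, 0)    # extra predictions
    fn = max(gt_count - pred_count, 0)    # missed objects
    return tp, fp, fn

images = [(0, 1), (1, 0), (0, 0), (1, 1)]  # (ground-truth count, predicted count) per image
tp = fp = fn = 0
for gt, pred in images:
    t, f, n = count_errors(gt, pred)
    tp, fp, fn = tp + t, fp + f, fn + n

precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
print(precision, recall)  # 0.5, 0.5 for the counts above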

Related

Is there any meaning in measuring true negatives over all negative responses?

Precision is used when we want to assess the quality of a positive prediction made by the model; it is the number of true positives divided by the total number of positive predictions.
Recall shows the model's ability to detect positive samples (a low recall means the model is biased towards negative predictions); it is the ratio of the number of true positives to the total number of positive samples.
Is there any meaning in the ratio between true negatives and all negative predictions?
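To make the two "negative" ratios concrete: TN divided by all negative predictions is usually called the negative predictive value (NPV), while TN divided by all actually negative samples is the specificity (true negative rate). A small sketch with made-up counts:

tp, fp, tn, fn = 40, 10, 35, 15   # illustrative counts only

precision   = tp / (tp + fp)   # quality of positive predictions
recall      = tp / (tp + fn)   # ability to find positive samples
npv         = tn / (tn + fn)   # true negatives over all negative predictions
specificity = tn / (tn + fp)   # true negatives over all actual negatives

print(precision, recall, npv, specificity)  # 0.8 0.727... 0.7 0.777...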

ROC-Curve calculation of elements

I know that the ROC curve is calculated from the true positive rate and the false positive rate.
But the ROC curve has infinitely many points on it, right? How is each point calculated? Where does each point come from? Can someone explain this to me?
Example
Thanks in advance.
The points are calculated for all values of the classifier's decision threshold.
On the x axis, you have the false positive rate for the given threshold: FPR = FP / (FP + TN), where:
FP is the number of false positives (elements predicted positive but which are actually negative);
TN is the number of true negatives (elements predicted negative which are actually negative).
On the y axis, you have the true positive rate for the given threshold: TPR = TP / (TP + FN), where:
TP is the number of true positives (predicted positive and indeed positive);
FN is the number of false negatives (predicted negative but actually positive).
In practice you do not have an infinite number of points: you are limited by the number of points in the dataset (the rates do not change over some ranges of the threshold).
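As a sketch of where each point comes from, the loop below (with made-up labels and scores) turns every distinct score into a threshold and computes one (FPR, TPR) pair per threshold; scikit-learn's roc_curve(y_true, scores) follows the same idea.

import numpy as np

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                    # illustrative labels
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.5])   # classifier scores

points = []
for thr in np.unique(scores)[::-1]:          # sweep thresholds from high to low
    pred = (scores >= thr).astype(int)
    tp = np.sum((pred == 1) & (y_true == 1))
    fp = np.sum((pred == 1) & (y_true == 0))
    fn = np.sum((pred == 0) & (y_true == 1))
    tn = np.sum((pred == 0) & (y_true == 0))
    points.append((fp / (fp + tn), tp / (tp + fn)))   # (FPR, TPR)

print(points)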

Categorical accuracy

How does categorical accuracy work? By definition
categorical_accuracy checks to see if the index of the maximal true
value is equal to the index of the maximal predicted value.
and
Calculates the mean accuracy rate across all predictions for
multiclass classification problems
What does this mean in practice? Let's say I am predicting the bounding box of an object; it has (xmin, ymin, xmax, ymax). Does it check whether the predicted xmin is equal to the real xmin? So if xmin and xmax were the same in the prediction and the real values, and ymin and ymax were different, would I get 50%?
Please help me understand this concept.
Traditionally for multiclass classification, your labels will have some integer (or equivalently categorical) label; for example:
labels = [0, 1, 2]
The output of a multiclass classification prediction will typically be a probability distribution of confidences; for example:
preds = [0.25, 0.5, 0.25]
Normally the index associated with the most likely event will be the index of the label. In this case, argmax(preds) is 1, which maps to label 1.
You can see the total accuracy of your predictions via a confusion matrix, where one axis is the "true" value and the other axis is the "predicted" value. Each cell CM[y_true][y_pred] counts the number of instances with that pair of true and predicted labels. The accuracy is the sum of the main diagonal of the matrix (y_true = y_pred) divided by the total number of training instances.
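A minimal sketch of the metric itself, with made-up one-hot labels and predictions: it only compares argmax indices per sample and averages the matches, which is also why it is not a meaningful metric for regression targets such as bounding-box coordinates.

import numpy as np

y_true = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [0, 0, 1]])           # one-hot labels for classes 0, 1, 2
y_pred = np.array([[0.7, 0.2, 0.1],      # argmax 0 -> correct
                   [0.1, 0.3, 0.6],      # argmax 2 -> wrong
                   [0.2, 0.2, 0.6]])     # argmax 2 -> correct

acc = np.mean(np.argmax(y_true, axis=1) == np.argmax(y_pred, axis=1))
print(acc)  # 2 of 3 samples correct -> 0.666...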

Opencv (python) finding true positive in binary images

I have two binary images, one as the ground-truth image and the other as the experimental/testing image. I want to calculate true positives, false positives and false negatives, where my region of interest is the blobs (i.e. circles and ellipses) present in the images.
For the true positives, the intersection of the images was taken using a bitwise OR operation, and the total number of black pixels in the intersected image was counted as the 'total number of true positive pixels', i.e. TP.
For false positives, pixels having value 255 in the ground-truth image were considered, and the total number of white pixels was assigned as the 'total number of false positives', i.e. FP.
For false negatives, pixels having value 255 in the experimental image were considered, and the total number of white pixels was assigned as the 'total number of false negatives', i.e. FN.
Precision and recall are calculated as:
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
It seems the values are calculated wrongly, as I got a precision of only 19%.
Please guide me on this.
Thanks in advance.
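One common way to compute pixel-wise TP/FP/FN is shown in the sketch below (not a fix verified against the asker's images); it assumes the blobs of interest are white (255) and the background is black (0) in both masks, and the file names are placeholders.

import numpy as np
# import cv2   # only needed to load the masks

def pixel_confusion(gt, pred):
    gt_fg, pred_fg = (gt == 255), (pred == 255)
    tp = np.sum(gt_fg & pred_fg)      # white in both masks
    fp = np.sum(~gt_fg & pred_fg)     # white in the prediction only
    fn = np.sum(gt_fg & ~pred_fg)     # white in the ground truth only
    return tp, fp, fn

# gt = cv2.imread("ground_truth.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
# pred = cv2.imread("experiment.png", cv2.IMREAD_GRAYSCALE)
# tp, fp, fn = pixel_confusion(gt, pred)
# precision, recall = tp / (tp + fp), tp / (tp + fn)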

Why represent neural network quality as 1 minus the ratio of the mean absolute error in prediction to the range of the predicted values?

The documentation for IBM's SPSS Modeler defines neural network quality as:
For a continuous target, this is 1 minus the ratio of the mean absolute error in prediction (the average of the absolute values of the predicted values minus the observed values) to the range of predicted values (the maximum predicted value minus the minimum predicted value).
Is this calculation standard?
I'm having trouble understanding how quality is derived from this.
The main point here is to make the network quality measure independent of the range of the output values. The proposed measure is 1 - relative_error. This means that for a perfect network, you will get the maximum quality of 1. It also means that the quality cannot become less than 0.
Example:
If you want to predict values in the range 0 to 1, an absolute error of 0.2 would mean 20%. When predicting values in the range 0 to 100, you could have a much larger absolute error of 20 for the same accuracy of 20%.
When using the formula you describe, you get the same quality value in both cases:
1 - 0.2 / (1 - 0) = 0.8
1 - 20 / (100 - 0) = 0.8
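A short sketch of that calculation on made-up numbers, following the SPSS description (mean absolute error divided by the range of the predicted values):

import numpy as np

observed  = np.array([10.0, 40.0, 70.0, 100.0])   # illustrative targets
predicted = np.array([12.0, 35.0, 75.0, 98.0])    # illustrative predictions

mae = np.mean(np.abs(predicted - observed))        # mean absolute error
pred_range = predicted.max() - predicted.min()     # max - min of the predictions
quality = 1 - mae / pred_range
print(quality)  # about 0.96 for these numbers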
