OpenCV: subtract the same BGR values from all pixels

I have some BGR image:
cv::Mat image;
I want to subtract from all the pixels in the image the vector:
[10, 103, 196]
Meaning that the blue channel for all the pixels will be reduced by 10, the green by 103 and the red by 196.
Is there a standard way to do that, or should I run for loops over all the channels and all the pixels?

Suppose we have an image in which all channels are filled with zeros and, for instance, its dimensions are 2x3:
cv::Mat image = cv::Mat::zeros(2, 3, CV_32SC3);
The output will be:
[0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0]
If we then want to add or subtract a per-channel constant, we can use cv::Scalar.
1- Suppose we want to add 3 to the blue channel:
image = image + Scalar(3,0,0); // the result will be the same as image = image + 3;
With the above code our matrix is now:
[3, 0, 0, 3, 0, 0, 3, 0, 0;
3, 0, 0, 3, 0, 0, 3, 0, 0]
2- If you want to add to another channel, you can use the second or third (or fourth) argument of cv::Scalar, like below (starting again from the zero matrix):
image = image + Scalar(3,2,-3);
The output will be:
[3, 2, -3, 3, 2, -3, 3, 2, -3;
3, 2, -3, 3, 2, -3, 3, 2, -3]
Using cv::subtract
cv::Mat image = cv::Mat::zeros(2,3,CV_32SC3);
subtract(image, Scalar(2,3,1), image);
The output will be:
[-2, -3, -1, -2, -3, -1, -2, -3, -1;
-2, -3, -1, -2, -3, -1, -2, -3, -1]
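
For readers using OpenCV's Python bindings, here is a minimal NumPy sketch of the same idea, assuming an 8-bit BGR image (the small constant image below is just a stand-in) and clamping at 0 rather than letting uint8 arithmetic wrap around:
import numpy as np

# Stand-in 8-bit BGR image; replace with the real image.
image = np.full((2, 3, 3), 200, dtype=np.uint8)

# Subtract 10 from blue, 103 from green and 196 from red, saturating at 0.
shift = np.array([10, 103, 196], dtype=np.int16)
result = np.clip(image.astype(np.int16) - shift, 0, 255).astype(np.uint8)
print(result)
Using cv::subtract (or cv2.subtract) gives the same saturating behaviour, whereas plain uint8 subtraction in NumPy wraps around modulo 256.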

Related

Image Processing - Skimage or other

I am new to image processing and am trying out a few experiments.
I have binarized my image with Otsu's method and found the connected pixels with skimage:
from PIL import Image
import numpy as np
import skimage.filters
import skimage.morphology

im = Image.open("DMSO_Resized.png")
imgr = im.convert("L")                            # grayscale
im2arr = np.array(imgr)
arr2im = Image.fromarray(im2arr)
thresh = skimage.filters.threshold_otsu(im2arr)   # Otsu threshold
binary = im2arr > thresh                          # binary foreground mask
connected = skimage.morphology.label(binary)      # connected-component labels
I'd now like to count the number of background pixels that are either "completely" covered by other background pixels or "partially" covered.
For example, pixel[1][1] here is partially covered:
1 0 2
0 0 0
3 0 8
and here, pixel[1][1] is completely covered:
0 0 0
0 0 0
0 0 0
Is there a skimage (or other) package that has a method to do this, or would I have to implement it as an array-processing loop?
import numpy as np
from skimage import morphology

bad_connection = np.array([[1, 0, 0, 0, 1],
                           [1, 0, 0, 0, 1],
                           [1, 0, 0, 0, 1],
                           [1, 0, 1, 0, 1],
                           [1, 0, 0, 0, 1]], dtype=np.uint8)

expected_good = np.array([[0, 0, 1, 0, 0],
                          [0, 0, 1, 0, 0],
                          [0, 0, 0, 0, 0],
                          [0, 0, 0, 0, 0],
                          [0, 0, 0, 0, 0]], dtype=np.uint8)

another_bad = np.array([[1, 0, 0, 0, 1],
                        [1, 1, 0, 1, 1],
                        [1, 1, 1, 1, 1],
                        [1, 1, 0, 1, 1],
                        [1, 0, 0, 0, 1]], dtype=np.uint8)

another_good = np.array([[0, 0, 0, 0, 0],
                         [0, 0, 0, 0, 0],
                         [0, 0, 0, 0, 0],
                         [0, 0, 0, 0, 0],
                         [0, 0, 0, 0, 0]], dtype=np.uint8)

footprint = np.array([[1, 0, 0, 0, 1],
                      [1, 0, 0, 0, 1],
                      [1, 0, 0, 0, 1]], dtype=np.uint8)
Outputs (incorrect or not as expected):
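
One route that matches the 3x3 neighbourhood in the question's examples is sketched below; it is an assumption-laden illustration using scipy.ndimage rather than a dedicated skimage function, with the question's first 3x3 example standing in for the real labelled image:
import numpy as np
from scipy import ndimage

# Labelled image (0 = background), e.g. the output of skimage.morphology.label.
connected = np.array([[1, 0, 2],
                      [0, 0, 0],
                      [3, 0, 8]])

background = (connected == 0)

# Count how many of each pixel's 8 neighbours are background.
kernel = np.ones((3, 3), dtype=int)
kernel[1, 1] = 0
n_bg_neighbours = ndimage.convolve(background.astype(int), kernel,
                                   mode='constant', cval=0)

# "Completely covered": background with all 8 neighbours background
# (equivalent to binary erosion of the background mask with a 3x3 structuring element).
completely = background & (n_bg_neighbours == 8)
# "Partially covered": background with some, but not all, background neighbours.
partially = background & (n_bg_neighbours > 0) & (n_bg_neighbours < 8)

print(completely.sum(), partially.sum())
Pixels outside the image are treated as non-background here (cval=0), so border pixels can never count as completely covered; that is an assumption the question does not spell out.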

How to perform Bilinear Interpolation to a masked image?

Suppose I have an image with a mask, where valid pixels are marked 1 and the others 0. How do I perform bilinear interpolation to fill in all the invalid pixels?
For example, image:
1, 0, 0, 4
mask:
1, 0, 0, 1
interpolation result should be:
1, 2, 3, 4
The valid pixels are not regularly arranged. A more complicated sample, image:
4, 0, 6, 0,
0, 8, 5, 0
5, 3, 0, 0
mask:
1, 0, 1, 0,
0, 1, 1, 0
1, 1, 0, 0
I interpolated with scipy.interpolate.interp2d, but the result has many holes and noise.
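
One way to avoid the holes is sketched below. It is a hedged illustration, not a drop-in fix: it swaps interp2d for scipy.interpolate.griddata, whose 'linear' method does piecewise-linear interpolation over a triangulation of the scattered valid pixels (not true bilinear interpolation on a regular grid), and falls back to nearest-neighbour outside their convex hull, where the linear method returns NaN:
import numpy as np
from scipy.interpolate import griddata

image = np.array([[4, 0, 6, 0],
                  [0, 8, 5, 0],
                  [5, 3, 0, 0]], dtype=float)
mask = np.array([[1, 0, 1, 0],
                 [0, 1, 1, 0],
                 [1, 1, 0, 0]], dtype=bool)

# Coordinates and values of the valid pixels.
rows, cols = np.nonzero(mask)
values = image[rows, cols]

# Full pixel grid to interpolate onto.
grid_r, grid_c = np.mgrid[0:image.shape[0], 0:image.shape[1]]

linear = griddata((rows, cols), values, (grid_r, grid_c), method='linear')
nearest = griddata((rows, cols), values, (grid_r, grid_c), method='nearest')
filled = np.where(np.isnan(linear), nearest, linear)
print(filled)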

Calculate accuracy score of kmeans model

This works as expected and returns 1 for one of the groups.
from sklearn import metrics
labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [6, 6, 6, 1, 2, 2]
metrics.homogeneity_completeness_v_measure(labels_true, labels_pred)
(1.0, 0.6853314789615865, 0.8132898335036762)
But this returns 0.75 for all 3 groups, while I expected "1.0" for one of the groups, like in the example mentioned above.
y = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
labels = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 2, 2, 2, 2, 0, 2, 2, 2,
2, 2, 2, 0, 0, 2, 2, 2, 2, 0, 2, 0, 2, 0, 2, 2, 0, 0, 2, 2, 2, 2,
2, 0, 2, 2, 2, 2, 0, 2, 2, 2, 0, 2, 2, 2, 0, 2, 2, 0]
metrics.homogeneity_completeness_v_measure(y, labels)
(0.7514854021988339, 0.7649861514489816, 0.7581756800057786)
Expected 1 in one of the groups above!
Update:
As you can see, one of the predicted groups matches one of the true groups exactly, so one of the values should have been 1 instead of the 0.75 I get for all 3 groups. This is not expected!
from collections import Counter
Counter(y)
Counter({0: 50, 1: 50, 2: 50})
Counter(labels)
Counter({1: 50, 0: 62, 2: 38})
Firstly, the homogeneity (h), completeness (c) and v-measure scores are calculated as follows:
h = 1 - H(C|K) / H(C)
c = 1 - H(K|C) / H(K)
v = 2 * h * c / (h + c)
C and K are two random variables. In your case, C is the true labels and K is the predicted labels.
If h = 1, then H(C|K) = 0, since H(C) is always greater than zero. H(C|K) = 0 means that the random variable C is completely determined by the random variable K (see the definition of conditional entropy for more detail). So why is h = 1 in your first case? Because whenever I am given a value of the random variable K (a predicted label), I know what the random variable C (the true label) will be: if k is 6, c is 0; if k is 1, c is 1; and so on. Now, why is h != 1 and c != 1 in the second case? Even though there is a perfect match between cluster 1 and class 0, there is no perfect match for the other clusters: if k is 1, I know c is 0, but when k is 0 I cannot tell whether c is 1 or 2. That is why the homogeneity score, and by the same argument in reverse the completeness score, will not be 1.
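
To see this in the data itself, a small sketch using sklearn's contingency_matrix (an illustration, not part of the original answer) prints the table of true classes (rows) against predicted clusters (columns); homogeneity is 1 exactly when every column has a single non-zero entry:
from sklearn.metrics.cluster import contingency_matrix

labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [6, 6, 6, 1, 2, 2]

# Rows are true classes, columns are predicted clusters (sorted: 1, 2, 6).
# Every column touching only one row is exactly what homogeneity = 1 means.
print(contingency_matrix(labels_true, labels_pred))
Running the same call on the 150-element y/labels pair from the question shows columns that mix true classes 1 and 2, which is why homogeneity drops to roughly 0.75.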

How to calculate the resolution of an undistorted image?

An undistorted image typically has a lower usable resolution than the original image due to the non-uniform distribution of pixels and (usually) the cropping of the black edges. (See below for an example.)
So, given the camera calibration parameters, e.g. in ROS format:
image_width: 1600
image_height: 1200
camera_name: camera1
camera_matrix:
rows: 3
cols: 3
data: [1384.355466887268, 0, 849.4355708515795, 0, 1398.17734010913, 604.5570699746268, 0, 0, 1]
distortion_model: plumb_bob
distortion_coefficients:
rows: 1
cols: 5
data: [0.0425049914802741, -0.1347528158561486, -0.0002287009852930437, 0.00641133892300999, 0]
rectification_matrix:
rows: 3
cols: 3
data: [1, 0, 0, 0, 1, 0, 0, 0, 1]
projection_matrix:
rows: 3
cols: 4
data: [1379.868041992188, 0, 860.3000889574832, 0, 0, 1405.926879882812, 604.3997819099422, 0, 0, 0, 1, 0]
How would one calculate the final resolution of the undistorted rectified image?
From Fruchtzwerg's comment, the following will give the effective ROI of the undistorted image:
import cv2
import numpy as np

mtx = np.array([[1384.355466887268, 0, 849.4355708515795],
                [0, 1398.17734010913, 604.5570699746268],
                [0, 0, 1]])
dist = np.array([0.0425049914802741, -0.1347528158561486, -0.0002287009852930437, 0.00641133892300999, 0])

# Returns the new camera matrix and the valid-pixel ROI as (x, y, w, h).
new_mtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (1600, 1200), 1)
print(roi)
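
To turn that ROI into a final image size, the short continuation below (reusing mtx, dist, new_mtx and roi from the snippet above, with a blank placeholder frame in place of a real capture) undistorts at alpha = 1 and crops to the valid rectangle:
# Placeholder 1600x1200 frame standing in for a real image from this camera.
img = np.zeros((1200, 1600, 3), dtype=np.uint8)

# Undistort keeping all source pixels (alpha = 1 above), then crop to the valid ROI.
undistorted = cv2.undistort(img, mtx, dist, None, new_mtx)
x, y, w, h = roi
cropped = undistorted[y:y + h, x:x + w]

print(cropped.shape[1], cropped.shape[0])  # width and height of the usable undistorted image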

How can I assign values to an OpenCV matrix Mat?

For example, I have a 2-by-3 matrix [1,0,5;1,0,-5] and a Mat trans_mat(2, 3, CV_32FC1).
How can I assign those values to the trans_mat matrix?
Mat trans_mat( 2, 3, CV_32FC1);
trans_mat = (Mat_<float>(2, 3) << 1, 0, 5, 1, 0, -5);
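For anyone doing the same from Python, where cv2 matrices are plain NumPy arrays, a minimal equivalent sketch is:
import numpy as np

# CV_32FC1-equivalent 2x3 float matrix [1, 0, 5; 1, 0, -5].
trans_mat = np.array([[1, 0, 5],
                      [1, 0, -5]], dtype=np.float32)
print(trans_mat)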
