What is the Vec3b type? - opencv

I came across the type Vec3b in OpenCV, but I couldn't find a description of this type or why we use it.
Do you know of any reference that describes it? Any clarification would be much appreciated.
Thanks.

Vec3b is the abbreviation for "vector with 3 byte entries".
Here those byte entries are unsigned char values, representing values between 0 and 255.
Each byte typically holds the intensity of a single color channel, so by default a Vec3b is a single RGB (or rather BGR) pixel: for a Vec3b variable vec, vec[0] is the blue component, vec[1] green and vec[2] red.
But it could hold any other 24-bit, 3-channel pixel value, such as HSV.
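For illustration, a minimal sketch of reading and writing a single pixel through Vec3b (the image path is a placeholder):

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Load as a 3-channel BGR image (type CV_8UC3); "image.png" is a placeholder path.
    cv::Mat img = cv::imread("image.png", cv::IMREAD_COLOR);
    if (img.empty()) return 1;

    // Each Vec3b holds three uchar values in B, G, R order.
    cv::Vec3b px = img.at<cv::Vec3b>(0, 0);
    std::cout << "B=" << (int)px[0] << " G=" << (int)px[1]
              << " R=" << (int)px[2] << std::endl;

    // Writing works the same way: set the top-left pixel to pure red.
    img.at<cv::Vec3b>(0, 0) = cv::Vec3b(0, 0, 255);
    return 0;
}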

Vec3b is defined as a vector of 3 uchar values. From the OpenCV documentation:
typedef Vec<uchar, 3> Vec3b;

Related

How are the bits allocated?

In OpenCV, the type of the elements in a cv::Mat object could be, for instance, CV_32FC1 or CV_32FC3, which represent 32-bit floating point with one channel and 32-bit floating point with three channels, respectively.
The CV_32FC3 type can be used to represent color images with blue, green and red channels; a four-channel type such as CV_8UC4 adds an alpha channel for transparency, with each channel getting 8 bits.
I'm wondering how the bits are allocated in the CV_32FC1 type, when there's only one channel?
32F means float; the number after the C is the number of channels. So CV_32FC3 means 3 floats per pixel, while CV_32FC1 is one float only. What these floats mean is up to you and is not explicitly stored in the Mat.
Each float is stored in memory just as it would be in a regular C program (typically little-endian).
A classical BGR image (the default channel ordering in OpenCV) would be CV_8UC3: an 8-bit unsigned integer per channel, in three channels.
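For illustration, a small sketch of how these type constants translate into per-pixel storage, using Mat's own accessors (matrix sizes here are arbitrary):

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::Mat gray(4, 4, CV_32FC1);   // one float per pixel
    cv::Mat color(4, 4, CV_32FC3);  // three floats per pixel

    // elemSize() is the total bytes per pixel; channels() is the C in CV_32FCn.
    std::cout << gray.elemSize()  << " bytes, " << gray.channels()  << " channel\n";  // 4 bytes, 1 channel
    std::cout << color.elemSize() << " bytes, " << color.channels() << " channels\n"; // 12 bytes, 3 channels
    return 0;
}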

What exactly does "channel" refer to in OpenCV?

I don't understand what the OpenCV documentation means by the term "channel". Does it mean a channel as in a digital image, or is it something else?
OpenCV is an image processing library, so a given image can be viewed as a 2D matrix in which each element is a pixel. There are various image formats (gray, RGB, RGBA, etc.), and each format differs in how many colors a pixel can encode. For example, the pixels of a gray image take values in the range 0-255, so each gray pixel can be represented by a single uchar value: one channel. The pixels of an RGB image can take one of 256^3 = 16,777,216 distinct values, so each RGB pixel needs 3 uchar values: three channels. Similarly, RGBA has 4 channels; the last channel stores the alpha (transparency) value.
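As a concrete illustration, a short sketch that splits a BGR image into its three single-channel planes (the file name is a placeholder):

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat bgr = cv::imread("photo.png", cv::IMREAD_COLOR); // CV_8UC3; placeholder path
    if (bgr.empty()) return 1;

    // Split the 3-channel image into three single-channel (CV_8UC1) planes.
    std::vector<cv::Mat> planes;
    cv::split(bgr, planes);
    // planes[0] = blue, planes[1] = green, planes[2] = red (OpenCV's BGR order).
    return 0;
}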

Explanation for the ddepth parameter in cv2.filter2D() - opencv

import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('logo.png')               # load as a BGR image
kernel = np.ones((5, 5), np.float32) / 25  # 5x5 averaging (box) kernel
dst = cv2.filter2D(img, -1, kernel)        # ddepth=-1 keeps the source depth
plt.subplot(121), plt.imshow(img), plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(dst), plt.title('Averaging')
plt.xticks([]), plt.yticks([])
plt.show()
I was trying to smooth a picture and I didn't understand the ddepth parameter of cv2.filter2D(), where the value is -1. What does -1 do, and what does ddepth mean?
ddepth
ddepth means the desired depth of the destination image.
It carries information about what kind of data is stored in the image; that can be unsigned char (CV_8U), signed char (CV_8S), unsigned short (CV_16U), and so on...
type
As for type, it combines two pieces of information: the image depth plus the number of channels.
It can be, for example, CV_8UC1 (which is equal to CV_8U), CV_8UC2, CV_8UC3, CV_8SC1 (which is equal to CV_8S), etc.
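A small sketch of the depth vs. type distinction, using Mat's depth(), type() and channels() accessors:

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::Mat m(3, 3, CV_8UC3);

    // depth() describes only the element type; type() also encodes the channel count.
    std::cout << (m.depth() == CV_8U)   << "\n"; // prints 1: depth is CV_8U
    std::cout << (m.type()  == CV_8UC3) << "\n"; // prints 1: type is CV_8UC3
    std::cout << m.channels()           << "\n"; // prints 3
    return 0;
}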
Further Reading
More discussion can be found in the following articles:
image type vs image depth
Matrix depth equals 0
Detailed - Fixed Pixel Types. Limited Use of Templates
You can see in the docs that ddepth stands for "destination depth", which is the depth of the result (destination) image.
If you use -1, the result (destination) image will have the same depth as the input (source) image.
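For example, a minimal sketch (with an arbitrary 3x3 averaging kernel) showing that ddepth = -1 preserves the source depth:

#include <opencv2/opencv.hpp>
#include <cassert>

int main() {
    cv::Mat src(10, 10, CV_8UC1, cv::Scalar(128));
    cv::Mat kernel = cv::Mat::ones(3, 3, CV_32F) / 9.0f; // 3x3 averaging kernel

    cv::Mat dst;
    cv::filter2D(src, dst, -1, kernel); // ddepth = -1: keep the source depth
    assert(dst.depth() == src.depth()); // dst is CV_8U, same as src
    return 0;
}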
ddepth
refers to the image depth, i.e. what the depth() function of a cv::Mat returns.
Depth is the number of bits used to represent each channel value, and it determines the element type, which can be unsigned char, signed char, unsigned short, signed short, int, float or double.
In OpenCV you typically have those types:
8UC3 : 8 bit unsigned and 3 channels => 24 bit per pixel in total.
8UC1 : 8 bit unsigned with a single channel
32S: 32 bit integer type => int
32F: 32 bit floating point => float
64F: 64 bit floating point => double
For more information, see "What is 'Depth' in Image Processing" and the table of supported filter depths in the documentation:
https://docs.opencv.org/4.x/d4/d86/group__imgproc__filter.html#filter_depths
According to the official doc:
when ddepth=-1, the output image will have the same depth as the source.
And the valid values of ddepth are limited by the filter depths table in the documentation (linked above).
For example:
cv::Mat src(3, 3, CV_8UC3);
cv::Mat dst(3, 3, CV_16SC3);
cv::Mat dst2(3, 3, CV_16FC3);
cv::Mat kernel(3, 3, CV_32F, cv::Scalar(1)); // filter2D expects a single-channel floating-point kernel
cv::filter2D(src, dst, CV_16S, kernel);  // valid: CV_16S is an allowed ddepth for a CV_8U source
cv::filter2D(src, dst2, CV_16F, kernel); // invalid: CV_16F is not an allowed ddepth for a CV_8U source
Basically there are five methods I know of to blur images:
1) use the gamma method
2) create your own kernel (a kernel is nothing but a NumPy array of ones of the desired shape) and apply it to the image
3) use the built-in averaging function of OpenCV:
blur_img = cv2.blur(image_src, kernel_size)
4) Gaussian blur:
gaussian_blur_img = cv2.GaussianBlur(img_src, kernel_size, sigma_value)
5) median blur:
median_blur_img = cv2.medianBlur(img_src, kernel_size_value)
I personally prefer median blur, as it removes noise intelligently: mostly the background of the image gets smoothed, while other features, such as corners, remain unchanged.
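For reference, a rough equivalent of the three built-in blurs above (methods 3-5) in OpenCV's C++ API; the kernel sizes here are arbitrary and the input path is a placeholder:

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat src = cv::imread("input.png"); // placeholder path
    if (src.empty()) return 1;

    cv::Mat box, gauss, median;
    cv::blur(src, box, cv::Size(5, 5));              // simple averaging
    cv::GaussianBlur(src, gauss, cv::Size(5, 5), 0); // sigma derived from kernel size when 0
    cv::medianBlur(src, median, 5);                  // aperture must be odd and > 1
    return 0;
}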

Mat_<uchar> for Image. Why?

I'm reading some code, and I can't understand why it uses Mat_<uchar> for an image (in OpenCV) when calling:
threshold
What is the advantage of using this matrix type?
The OpenCV threshold function accepts as its source image a 1-channel (i.e. grayscale) matrix, either 8-bit or 32-bit floating point.
So, in your case, you're passing a single-channel 8-bit matrix. Its OpenCV type is CV_8UC1.
A Mat_<uchar> is also typedef'd as Mat1b, and the values of its pixels are in the range [0, 255], since the underlying type (uchar, a.k.a. unsigned char) is 8 bit, with possible values from 0 to 2^8 - 1.
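A minimal sketch of calling threshold on a Mat1b (the input path is a placeholder):

#include <opencv2/opencv.hpp>

int main() {
    // Mat1b is a typedef for Mat_<uchar>: single channel, 8-bit (CV_8UC1).
    cv::Mat1b gray = cv::imread("image.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    // Pixels above 127 become 255, the rest become 0.
    cv::Mat1b binary;
    cv::threshold(gray, binary, 127, 255, cv::THRESH_BINARY);
    return 0;
}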

Converting matches from 8-bit 4 channels to 64-bit 1 channel in OpenCV

I have a vector of Point2f taken from an image whose color space is CV_8UC4, and I need to convert them to CV_64F. Is the following code correct?
points1.convertTo(points1, CV_64F);
More details:
I am trying to use this function to calculate the essential matrix (rotation/translation) through the 5-point algorithm, instead of using the findFundamentalMat included in OpenCV, which is based on the 8-point algorithm:
https://github.com/prclibo/relative-pose-estimation/blob/master/five-point-nister/five-point.cpp#L69
As you can see, it first converts the image to CV_64F. My input image is CV_8UC4, a BGRA image. When I tested the function, both BGRA and grayscale images produced matrices that are valid from a mathematical point of view, but if I pass a grayscale image instead of a color one, it takes much longer to compute, which makes me think I'm not doing something correctly in one of the two cases.
I have read that when the change in color space is not linear (which I suppose is the case when you go from 4 channels to 1, as here), you should normalize the intensity values. Is that correct? Which input should I give to this function?
Another note, the function is called like this in my code:
vector<Point2f> imgpts1, imgpts2;
for (vector<DMatch>::const_iterator it = matches.begin(); it != matches.end(); ++it)
{
    imgpts1.push_back(firstViewFeatures.second[it->queryIdx].pt);
    imgpts2.push_back(secondViewFeatures.second[it->trainIdx].pt);
}
Mat mask;
Mat E = findEssentialMat(imgpts1, imgpts2, [camera focal], [camera principal_point], CV_RANSAC, 0.999, 1, mask);
The fact I'm not passing a Mat, but a vector of Point2f instead, seems to create no problems, as it compiles and executes properly.
Should I store the matches in a Mat instead?
I'm not sure what you mean by a vector of Point2f in some color space, but if you want to convert a vector of points into a vector of points of another type, you can use standard C++/STL functions like copy(), assign() or insert(). For example:
copy(floatPoints.begin(), floatPoints.end(), back_inserter(doublePoints));
or
doublePoints.insert(doublePoints.end(), floatPoints.begin(), floatPoints.end());
No, it is not. A std::vector<cv::Point2f> cannot make use of the OpenCV convertTo function.
I think you really mean that you have a cv::Mat points1 of type CV_8UC4. Note that this holds RxCx4 values (R and C being the number of rows and columns), whereas a CV_64F matrix holds only RxC values. So you need to be clearer on how you want to transform those values.
You can do points1.convertTo(points1, CV_64FC4) to get a RxCx4 matrix.
Update:
Some remarks after you updated the question:
Note that a vector<cv::Point2f> is a vector of 2D points not associated with any particular color space; they are just coordinates in the image axes. So they represent the same 2D points whether the image is gray, RGB or HSV, and the execution time of findEssentialMat therefore doesn't depend on the color space. Getting the points may, though.
That said, I think your input to findEssentialMat is fine (the function takes care of the vectors and converts them into its internal representation). In such cases, it is very useful to draw the points on your image to debug the code.
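As a sketch of that debugging idea, a hypothetical helper that draws each point with cv::circle:

#include <opencv2/opencv.hpp>
#include <vector>

// Draw each matched 2D point as a small filled green circle for visual inspection.
void drawPoints(cv::Mat& image, const std::vector<cv::Point2f>& pts) {
    for (const cv::Point2f& p : pts)
        cv::circle(image, cv::Point(cvRound(p.x), cvRound(p.y)), 3,
                   cv::Scalar(0, 255, 0), -1);
}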
