I have an image buffer in 24-bit RGB format. This buffer is copied to a cv::Mat using
cv::Mat mat = cv::Mat(image->height, image->width, CV_8UC3, image->data);
Since this buffer is in RGB format and OpenCV uses BGR, I'm converting mat to BGR with
cv::cvtColor(mat, mat, CV_RGB2BGR);
This works, but when I check the original image, its channels have been swapped as well (and so they are now wrong), and I don't want that to happen.
I'd like to reverse the channel order of mat while leaving image->data (my image buffer) untouched. How can I do that?
I presume (I am not certain) that when you call cv::cvtColor(mat, mat, CV_RGB2BGR); the RGB->BGR result is written back into mat's existing data. Since you constructed mat on top of a pointer to your buffer, overwriting mat's data changes image->data as well.
Therefore, I would not expect the following to be any slower:
cv::Mat mat = cv::Mat(image->height, image->width, CV_8UC3, image->data);
cv::Mat mat2;
cv::cvtColor(mat, mat2, CV_RGB2BGR);
// Work with mat2 now
Rather than overwriting the shared buffer, you write the converted data into new memory. This should carry roughly the same performance cost...
I do not know what you plan to do with your image after the colour conversion, but even if the performance were different, it would likely have only a minor overall impact.
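If it helps to see the aliasing described above, here is a minimal sketch. The ImageBuffer struct is only a stand-in for whatever type your image pointer really has (same height/width/data members as in the question):

#include <opencv2/opencv.hpp>
#include <iostream>

// Hypothetical buffer description, mirroring the question's image->height/width/data.
struct ImageBuffer { int height, width; unsigned char* data; };

void convertWithoutTouchingSource(const ImageBuffer* image, cv::Mat& bgr)
{
    // This constructor does NOT copy: mat.data points at image->data.
    cv::Mat mat(image->height, image->width, CV_8UC3, image->data);
    std::cout << "aliased: " << (mat.data == image->data) << std::endl; // prints 1

    // Writing into a separate destination allocates new memory,
    // so image->data keeps its original RGB order.
    cv::cvtColor(mat, bgr, CV_RGB2BGR);
}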
I need to reconstruct a PointCloud using libfreenect2's registration::apply with color/depth images. The problem is that I pre-saved the color & depth images as PNG files. Both the color & depth images were obtained from a libfreenect2::Frame and converted to an OpenCV Mat for imwrite.
Mat(rgb->height, rgb->width, CV_8UC4, rgb->data).copyTo(bgrImage);
Mat(depth->height, depth->width, CV_32FC1, depth->data).copyTo(depthImage);
I tried the following method to get my color Frame and it works. The problem is that when I try to do the same thing with the depth image, issues emerge.
Mat color_CV8UC3 = cv::imread(colorFilename);
Mat color_CV8UC4;
cvtColor(color_CV8UC3, color_CV8UC4, CV_BGR2BGRA); // Convert to BGRX
libfreenect2::Frame rgb(color_CV8UC4.cols, color_CV8UC4.rows, 4, color_CV8UC4.data);
Mat depthImg = cv::imread(depthFilename);
Mat depthFrame;
depthImg.convertTo(depthFrame, CV_32FC1, 1.0/255.0); // Convert to 32FC1
libfreenect2::Frame depth(depthFrame.cols, depthFrame.rows, 4, depthFrame.data);
I tested the conversion from cv::Mat to libfreenect2::Frame by converting the libfreenect2::Frame back to a Mat for both rgb and depth; while I got the same image back for rgb, that wasn't true for the depth image.
Mat(rgb.height, rgb.width, CV_8UC4, rgb.data).copyTo(rgb_image);
Mat(depth.height, depth.width, CV_32FC1, depth.data ).copyTo(depth_image);
imshow("Depth", depth_image);
imshow("Color", rgb_image);
Depth image - loaded & converted to 32FC1:
Depth image - after conversion to Frame and back:
Thanks for any assistance, and please give feedback, as this is my first time posting a question here.
I have a bitmap in RGB format, i.e. 24 bits per pixel. How can I create a Mat object so that I minimize data copying while making sure the channel order is treated correctly, given that the default order in OpenCV is BGR?
In general you can use RGB order as usual; just remember to convert to the correct color space when needed.
You can create a Mat header with no copies using:
int rows = ...
int cols = ...
uchar* rgb_buffer = ...
cv::Mat3b rgb_image(rows, cols, reinterpret_cast<cv::Vec3b*>(rgb_buffer));
Few OpenCV functions actually assume that matrix data is in BGR (or RGB) order, and you can always operate on the data with your own processing that accounts for RGB order.
The fact that OpenCV images are in BGR order is mostly a matter of input / output (basically imshow, imread, imwrite, and the like).
You can always convert your image with cvtColor(..., RGB2<whatever>) in case you need to switch color space. This won't be a performance issue, since the data would be copied anyway.
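As a rough sketch of that workflow (rows, cols and rgb_buffer are placeholders for your actual bitmap), you can stay in RGB for processing and convert only at the I/O boundary:

#include <opencv2/opencv.hpp>

void showRgbBuffer(int rows, int cols, unsigned char* rgb_buffer)
{
    // Wrap the existing RGB data without copying it.
    cv::Mat rgb(rows, cols, CV_8UC3, rgb_buffer);

    // ... run your own processing here, remembering channel 0 is R ...

    // Convert only when handing the image to an I/O function: imshow expects BGR.
    cv::Mat bgr;
    cv::cvtColor(rgb, bgr, CV_RGB2BGR);
    cv::imshow("preview", bgr);
    cv::waitKey(0);
}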
RGB buffer:
int rows, cols; uchar* input_RGB;
Mat buffer declaration:
Mat image(rows, cols, CV_8UC3, input_RGB);
https://docs.opencv.org/master/d6/d6d/tutorial_mat_the_basic_image_container.html
What is the default (pixel) storage format used by OpenCV?
I know it is BGR, but is it BGR32? BGR16?
Is it packed or planar?
Can you suggest a way to find this out?
Thank you for your help.
[EDIT] Context: I am actually trying to use OpenCV together with another library called MIL (Matrox Imaging Library). I need to grab an image with MIL and then convert it to an OpenCV image. That is why I need to know the default pixel format, to configure MIL.
The image format is set by the flag when you create the image, e.g. CV_8UC3 means 8-bit pixels, unsigned, 3 colour channels. In a colour image the pixel order is BGR, and the data is stored in row order.
The data is interleaved (packed) rather than planar: each pixel occupies 3 consecutive bytes with no per-pixel padding (BGRA, 4 bytes/pixel, is an option on some of the GPU calls).
Data may be padded at the row level: if the number of pixels in a row times the number of bytes per pixel isn't a multiple of 4, the row may be padded with zeros up to the next 32-bit boundary, so use mat.step for the real row stride rather than assuming rows are contiguous. The call mat.ptr(n) returns a pointer to the start of the n-th row.
Note that you can share memory with another compatible image format by passing the data pointer from the MIL image to the constructor of the cv::Mat.
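Here is a rough sketch of that kind of sharing. The MIL calls that actually give you the buffer address and row pitch are omitted because they depend on your MIL configuration, so mil_pixels and mil_pitch_bytes are placeholders:

#include <opencv2/opencv.hpp>

// width/height in pixels; mil_pixels points at the BGR8 data grabbed by the other
// library; mil_pitch_bytes is the distance in bytes between the starts of two rows.
cv::Mat wrapExternalBgrBuffer(int width, int height,
                              unsigned char* mil_pixels, size_t mil_pitch_bytes)
{
    // No copy is made: the Mat header just points into the existing buffer, and the
    // explicit step handles any row padding the other library uses. The caller must
    // keep the MIL buffer alive for as long as this Mat is in use.
    return cv::Mat(height, width, CV_8UC3, mil_pixels, mil_pitch_bytes);
}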
It depends on the way you are managing the image: have you loaded it from a file with imread for example?
Have a look at imread here; with a colour JPEG, for example, you'll get a 3-channel format, 24 bits overall. Can you be more specific?
I do not know if this is useful, but I had a similar issue when converting an image from an Android Bitmap (passed to OpenCV as an RGBA8888 byte array) to an OpenCV image (BGR888).
Here is how I solved it.
cv::Mat orig_image1(orig_height, orig_width, CV_8UC4, image_data); // wraps the RGBA bytes, no copy
// Source -> destination channel pairs: R (0) -> 2, G (1) -> 1, B (2) -> 0; the alpha channel is dropped.
int from_to[] = { 0, 2, 1, 1, 2, 0 };
cv::Mat image(orig_height, orig_width, CV_8UC3);
cv::mixChannels(&orig_image1, 1, &image, 1, from_to, 3);
orig_image1.release();
I am converting an image from BGR to HSV using OpenCV. Since I am new to this field and this software, I may sound incorrect, so please let me know if I am wrong.
Now, in my school project I want to work with the HSV image, which is easily obtained using
cvtColor(src, dst, CV_BGR2HSV);
I suppose the imread() function reads the image in uchar format, i.e. as an 8-bit unsigned image (I suppose). In that case imshow() also works with uchar data.
But to work with the HSV image, I am not sure, but I feel I need to use Mat3b, perhaps, to access the distinct H, S and V channels of the image.
In case I am wrong: I want to work with only the H channel of the HSV image, so how can I print or modify this channel's information?
Many thanks
Perhaps you can use cv::split to divide the HSV image into 3 single-channel Mats. I think the topic OpenCV: split HSV image and scan through H channel may solve your problem.
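A small sketch along those lines (assuming src is an 8-bit BGR image you have already loaded), splitting the HSV image and touching only the H plane:

#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>

void workOnHueChannel(const cv::Mat& src) // src: 8-bit BGR image
{
    cv::Mat hsv;
    cv::cvtColor(src, hsv, CV_BGR2HSV);

    // Split into three single-channel 8-bit Mats: hue, saturation, value.
    std::vector<cv::Mat> channels;
    cv::split(hsv, channels);
    cv::Mat& hue = channels[0]; // H is in [0, 179] for 8-bit images

    // Print or modify individual hue values.
    std::cout << "H at (0,0): " << (int)hue.at<uchar>(0, 0) << std::endl;
    hue.setTo(90, hue > 150); // example modification: clamp large hues to 90

    // Merge back if you need the full HSV (or BGR) image again.
    cv::merge(channels, hsv);
    cv::cvtColor(hsv, hsv, CV_HSV2BGR);
}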
I'm loading a 24-bit RGB image from a PNG file into my OpenCV application.
However, loading the image as grayscale directly with imread gives a very poor result.
Mat src1 = imread(inputImageFilename1.c_str(), 0);
Loading the image as RGB and converting it to grayscale gives a much better-looking result.
Mat src1 = imread(inputImageFilename1.c_str(), 1);
cvtColor(src1, src1Gray, CV_RGB2GRAY);
I'm wondering if I'm using imread for my image type correctly. Has anyone experienced similar behavior?
The image converted to grayscale using imread is shown here:
The image converted to grayscale using cvtColor is shown here:
I was having the same issue today. Ultimately, I compared three methods:
//method 1
cv::Mat gs = cv::imread(filename, CV_LOAD_IMAGE_GRAYSCALE);
//method 2
cv::Mat color = cv::imread(filename, 1); //loads color if it is available
cv::Mat gs_rgb(color.size(), CV_8UC1);
cv::cvtColor(color, gs_rgb, CV_RGB2GRAY);
//method 3
cv::Mat gs_bgr(color.size(), CV_8UC1);
cv::cvtColor(color, gs_bgr, CV_BGR2GRAY);
Methods 1 (loading grayscale) and 3 (CV_BGR2GRAY) produce identical results, while method 2 produces a different result. For my own ends, I've started using CV_BGR2GRAY.
My input files are jpgs, so there might be issues related to your particular image format.
The simple answer is that OpenCV functions use the BGR format. If you read an image with imread or VideoCapture, it will always be BGR. If you then apply RGB2GRAY, you effectively swap the blue channel with the red one. The formula for the brightness is
y = 0.299*red + 0.587*green + 0.114*blue
so if you mix up red and blue, this causes a large error in the computed brightness.
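To make the size of that error concrete, here is a tiny sketch with made-up pixel values comparing the correct weighting against the swapped one:

#include <iostream>

int main()
{
    // Made-up pixel, stored in OpenCV's BGR order.
    double blue = 200.0, green = 100.0, red = 50.0;

    double correct = 0.299 * red  + 0.587 * green + 0.114 * blue;  // ~96.5
    double swapped = 0.299 * blue + 0.587 * green + 0.114 * red;   // ~124.2 (red/blue mixed up)

    std::cout << "correct: " << correct << ", swapped: " << swapped << std::endl;
    return 0;
}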
Greets
I had a similar problem once, working with OpenGL shaders. It seems that the first container that OpenCV reads your image into does not support the full range of colours, and hence you see a poor grayscale transformation. However, once you convert the original image to grayscale using cvtColor, the container is different from the first one and supports the full range. In my opinion the first one uses fewer than 8 bits for grayscale, or the conversion uses a bad method, while the second one gives a smoother image because of more bits in the gray channel.