I need to reconstruct a PointCloud using libfreenect2 registration::apply with color/depth images. The problem is that I pre-saved the color and depth images as PNG files. Both images were obtained from a libfreenect2::Frame and converted to an OpenCV Mat for imwrite:
Mat(rgb->height, rgb->width, CV_8UC4, rgb->data).copyTo(bgrImage);
Mat(depth->height, depth->width, CV_32FC1, depth->data).copyTo(depthImage);
I tried the following method to get my color Frame back and it works. The problem is that when I try to do the same thing with the depth image, issues emerge.
Mat color_CV8UC3 = cv::imread(colorFilename);
cvtColor(color_CV8UC3, color_CV8UC4, CV_BGR2BGRA); //Convert to BGRX
libfreenect2::Frame rgb(color_CV8UC4.cols, color_CV8UC4.rows, 4, color_CV8UC4.data);
Mat depthImg = cv::imread(depthFilename);
depthImg.convertTo(depthFrame, CV_32FC1, 1.0/255.0); //Convert to 32FC1
libfreenect2::Frame depth(depthFrame.cols, depthFrame.rows, 4, depthFrame.data);
I tested the cv::Mat to libfreenect2::Frame conversion by converting each Frame back to a Mat, for both rgb and depth. While I got the same image back for rgb, that wasn't true for the depth image.
Mat(rgb.height, rgb.width, CV_8UC4, rgb.data).copyTo(rgb_image);
Mat(depth.height, depth.width, CV_32FC1, depth.data).copyTo(depth_image);
imshow("Depth", depth_image);
imshow("Color", rgb_image);
Depth Image - Loaded & Converted to 32FC1
Depth Image - After Converted to Frame & Reconverted Back
Thanks for any assistance provided and please give any feedback as this is my first time posting a question here.
Related
How is the data laid out in a 4 channel image matrix in OpenCV (CV_8UC4)?
cv::Mat A = cv::Mat::zeros(height, width, CV_8UC4);
is it:
[R1,G1,B1,A1,R2,G2,B2,A2,...]
or:
[B1,G1,R1,A1,B2,G2,R2,A2,...]
or anything else?
It depends. If you are in BGRA space then it is:
[B1,G1,R1,A1,B2,G2,R2,A2,...]
If you are in RGBA space then it is:
[R1,G1,B1,A1,R2,G2,B2,A2,...]
And by default, OpenCV loads images in BGR (BGRA) space. So if you did not change anything, it should be:
[B1,G1,R1,A1,B2,G2,R2,A2,...]
I've got a sequence of images of type CV_8UC4. It is of HD size 1280x720.
I'm executing the bgfg segmentation (MOG2 specifically) on a ROI of the image.
After the algo finished I've got the binary image of the size of ROI and of
type CV_8UC1.
I want to insert this binary image back to the original big image. How can I do
this?
Here's what I'm doing (the code is simplified for the sake of readability):
// cvImage is the big Mat coming from outside
cv::Mat roi(cvImage, cv::Rect(200, 200, 400, 400));
auto mog2 = cv::createBackgroundSubtractorMOG2();
cv::Mat fgMask;
mog2->apply(roi, fgMask); // Here the fgMask is the binary mat which corresponds to the roi size
So, how can I insert the fgMask back into the original image?
How can I do this CV_8UC1 -> CV_8UC4 conversion only for the ROI?
Thank you.
You need to make fgMask a 4 channel image:
Mat4b fgMask4ch;
cvtColor(fgMask, fgMask4ch, COLOR_GRAY2BGRA);
and then copy this into the original cvImage at the correct position, given by roi:
fgMask4ch.copyTo(roi);
I have image buffer in 24bit RGB format. This buffer is copied to cv::Mat using
cv::Mat mat = cv::Mat(image->height, image->width, CV_8UC3, image->data);
Since this buffer is in RGB format and OpenCV uses BGR format, I'm converting mat to BGR with
cv::cvtColor(mat, mat, CV_RGB2BGR);
This works, but when I check original image its channels are inverted as well (and so they become wrong) and I don't want this to happen.
I'd like to invert mat channels order leaving image-data (my image buffer) untouched. How can I do that?
I presume (I am not certain) that cv::cvtColor(mat, mat, CV_RGB2BGR); overwrites mat's data with the RGB->BGR converted data. Since you constructed mat from a pointer to your buffer, overwriting the data in mat changes image->data as well.
Therefore, I would not expect the following to perform any worse:
cv::Mat mat = cv::Mat(image->height, image->width, CV_8UC3, image->data);
cv::Mat mat2;
cv::cvtColor(mat, mat2, CV_RGB2BGR);
//Work with mat2 now
Rather than overwriting the original buffer, you write the converted data to a new one. This should carry about the same performance cost...
I do not know what you plan to do with your image after colour conversion, but even if the performance were different, it would likely have a minor overall impact.
I cannot successfully calculate the distance transform using OpenCV (2.3) on Ubuntu. The output is either black or a copy of the original image, but never, as expected, a greyscale image with gradients.
My code:
Mat input(Size(100,100), CV_8UC1);
circle(input, Point(50,50), 10, Scalar(255,255,255), 15, 0, 0);
Mat output(Size(100,100), CV_32FC1);
distanceTransform(input, output, CV_DIST_L2, CV_DIST_MASK_3); //Output is completely black
//distanceTransform(input, output, CV_DIST_L2, CV_DIST_MASK_PRECISE); //Output is a "copy" of input
imshow("in", input);
imshow("out", output);
Any suggestions?
The first call is correct, but being a distance, the result is not stored as uchar. When you want to display it, OpenCV converts those floats (I think) into uchars, and the result looks black.
Find the max value in the output, then scale it to fit a grayscale image:
double maxVal;
minMaxLoc(output, nullptr, &maxVal);
output.convertTo(displayBuffer, CV_8UC1, 255.0/maxVal, 0);
imshow("dist", displayBuffer);
EDIT
The first idea was correct, but you did not actually try to find maxVal! You judged it by looking at the picture instead of computing it. Difference between win and fail.
So, calculate the distance transform using the precise algorithm, and then
output.convertTo(displayBuffer, CV_8UC1, 10, 0);
Edit 2
And you must put input.setTo(0) before drawing the circle; the Mat constructor leaves the pixels uninitialized.
I'm loading a 24 Bit RGB image from a PNG file into my OpenCV application.
However loading the image as grayscale directly using imread gives a very poor result.
Mat src1 = imread(inputImageFilename1.c_str(), 0);
Loading the RGB image as RGB and converting it to Grayscale gives a much better looking result.
Mat src1 = imread(inputImageFilename1.c_str(), 1);
cvtColor(src1, src1Gray, CV_RGB2GRAY);
I'm wondering if I'm using imread for my image type correctly. Has anyone experienced similar behavior?
The image converted to grayscale using imread is shown here:
The image converted to grayscale using cvtColor is shown here:
I was having the same issue today. Ultimately, I compared three methods:
//method 1
cv::Mat gs = cv::imread(filename, CV_LOAD_IMAGE_GRAYSCALE);
//method 2
cv::Mat color = cv::imread(filename, 1); //loads color if it is available
cv::Mat gs_rgb(color.size(), CV_8UC1);
cv::cvtColor(color, gs_rgb, CV_RGB2GRAY);
//method 3
cv::Mat gs_bgr(color.size(), CV_8UC1);
cv::cvtColor(color, gs_bgr, CV_BGR2GRAY);
Methods 1 (loading grayscale) and 3 (CV_BGR2GRAY) produce identical results, while method 2 produces a different result. For my own ends, I've started using CV_BGR2GRAY.
My input files are jpgs, so there might be issues related to your particular image format.
The simple answer is that OpenCV functions use the BGR format. If you read an image with imread or VideoCapture, it will always be BGR. If you use RGB2GRAY on such an image, you interchange the blue channel with the red. The formula to get the brightness is
y = 0.587*green + 0.299*red + 0.114*blue
so if you swap red and blue, this will cause a large calculation error.
Greets
I had a similar problem once, working with OpenGL shaders. It seems that the first container that OpenCV reads your image into does not support the full range of color, and hence you see a poor grayscale transformation. However, once you convert the original image to grayscale using cvtColor, the container is different from the first one and supports the full range. In my opinion the first one uses fewer than 8 bits for grayscale, or converts to grayscale with a bad method, while the second one gives a smooth image because of more bits in the gray channel.