OpenCV: create 3 Mat objects from YUV_420_888 planes

Is there any way to create 3 Mat objects to hold the YUV_420_888 plane data: one for the Y plane, another for U, and the last one for V?
I don't want to convert them to BGR or anything, just hold the data as described above.

You can use cv::split on the Mat.
For example, on a BGR image (I'll show it in C++ because I'm not that familiar with OpenCV4Android):
cv::Mat src = cv::imread("some.png");
cv::Mat planes[3];
cv::split(src, planes);
If you have a BGR image, the R plane is now in planes[2].
Another possibility is to just get the plane buffers, e.g. (Android Java now):
/* Get your Image somehow */
Image.Plane Y = img.getPlanes()[0];
Image.Plane U = img.getPlanes()[1];
Image.Plane V = img.getPlanes()[2];
//now just for Y e.g.
ByteBuffer yBuffer = Y.getBuffer();
byte[] yBytes = new byte[yBuffer.remaining()];
yBuffer.get(yBytes);
//read the byte data into a Mat (assumes the Y plane's row stride equals the image width,
//otherwise copy it row by row)
Mat yMat = new Mat(img.getHeight(), img.getWidth(), CvType.CV_8UC1);
yMat.put(0, 0, yBytes);
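For completeness, here is a minimal C++ sketch of the same idea, assuming you already have raw pointers to the three plane buffers and that each plane is tightly packed (row stride equal to its width); in YUV_420_888 the U and V planes are subsampled by 2 in both directions, and the function and parameter names here are just illustrative:
#include <opencv2/opencv.hpp>
// Wrap the three YUV_420_888 plane buffers in cv::Mat headers (no data is copied,
// so the buffers must stay alive for as long as the Mats are used).
void wrapYuvPlanes(unsigned char* yData, unsigned char* uData, unsigned char* vData,
                   int width, int height,
                   cv::Mat& yPlane, cv::Mat& uPlane, cv::Mat& vPlane)
{
    yPlane = cv::Mat(height,     width,     CV_8UC1, yData);
    uPlane = cv::Mat(height / 2, width / 2, CV_8UC1, uData);
    vPlane = cv::Mat(height / 2, width / 2, CV_8UC1, vData);
}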

Related

Copy Vector of Contour Points into Mat

I am using OpenCV 3.1 with VS2012 C++/CLI.
I have stored the result of a findContours call into:
std::vector<std::vector<Point>> Contours;
Thus, Contours[0] is a vector of the contour points of the first contour.
Contours[1] is a vector of the contour points of the second contour, etc.
Now, I want to load one of the contours into a Mat. Based on "Convert Mat to vector<float> and Vector<float> to Mat in opencv", I thought something like this would work:
Mat testMat=Mat(Images->Contours[0].size(),2,CV_32FC1);
memcpy(testMat.data,Images->Contours[0].data(),Images->Contours[0].size()*CV_32FC1);
I specified two columns because each underlying point must be composed of both an X coordinate and a Y coordinate, and each of those should be a float. However, when I access the Mat elements, I can see that the first element is not the underlying data but the total number of contour points.
Any help on the right way to accomplish this is appreciated.
You can do that with:
Mat testMat = Mat(Images->Contours[0]).reshape(1);
Now testMat is of type CV_32SC1, i.e. of int. If you need float, you can:
testMat.convertTo(testMat, CV_32F);
Some more details and variants...
You can simply use the Mat constructor that accepts a std::vector:
vector<Point> v = { {0,1}, {2,3}, {4,5} };
Mat m(v);
With this, you get a 2-channel matrix that shares the underlying data with v. This means that if you change a value in v, the values in m change as well.
v[0].x = 7; // also 'm' changes
If you want a deep copy of the values, so that changes in v are not reflected in m, you can use clone:
Mat m2 = Mat(v).clone();
Your matrices are of type CV_32SC2, i.e. 2-channel matrices of int (because Point uses int; use Point2f for float). If you want a 2-column single-channel matrix, you can use reshape:
Mat m3 = m2.reshape(1);
If you want to convert to float type, you need to use convertTo:
Mat m4;
m2.convertTo(m4, CV_32F);
Here is some working code as a proof of concept:
#include <opencv2/opencv.hpp>
#include <vector>

using namespace std;
using namespace cv;

int main()
{
    vector<Point> v = { {0,1}, {2,3}, {4,5} };

    // changes in v affect m
    Mat m(v);

    // changes in v don't affect m2
    Mat m2 = Mat(v).clone();

    // m is changed
    v[0].x = 7;

    // m3 is a 2 columns single channel matrix
    Mat m3 = m2.reshape(1);

    // m4 is a matrix of floats
    Mat m4;
    m2.convertTo(m4, CV_32F);

    return 0;
}

Converting a large matrix into an image

Hi, I've got a large 33 x 33 matrix in a text file. I've been working on an OpenCV project which basically reads the frames and calculates the similarities between them. So now I have this large text file filled with numbers. How do I visualize this matrix as, say, a 2D grayscale image?
Is your matrix a cv::Mat object?
If so, do:
cv::Mat matrix;
//Load the matrix from the file
matrix = ...
//show the matrix
imshow("window name", matrix);
//save the image
imwrite("image.png", matrix);
If not, then do:
cv::Mat matrix(33, 33, CV_32FC1);
float* floatPtr = matrix.ptr<float>();
for (int i = 0; i < 33 * 33; i++)
{
    //read data from the file here
    *floatPtr++ = data[i]; //if it's in an array
    //If you have a file stream then do: file >> *floatPtr++;
}
//show the image
imshow("window name", matrix);
//save the image
imwrite("image.png", matrix);

OpenCV 2.4.2 Byte array to Mat produces a strange image pattern

Good afternoon,
I am trying to run OpenCV through a DLL and use it in a LabVIEW application.
I have correctly acquired an image in LV and passed the byte array to the DLL.
I can loop over the pixels and print their values to a text file and match them to the output in LV, so I know that all my pixels are in the right position, with the exception that LV adds 2 columns at the beginning: the first 2 values are reserved for height and width and the rest are arbitrary numbers. But all that should do is produce a streak on the left side of the image.
Next, I am using the following lines to convert and display the image.
a[0], a[1]... etc. are channels.
The output image comes out very horizontally stretched, with pixels spaced equally 15 to 20 pixels apart and surrounded by black pixels. I attached a screenshot.
_declspec (dllexport) double imageProcess(int **a, int &x, int &y, int &cor,int &cog,int &cob,int &cow, int th, int hth)
{
y = a[0][0];
x = a[0][1];
Mat image(y, x, CV_8U, a[0]);
namedWindow( "Display window", CV_WINDOW_NORMAL ); // Create a window for display.
imshow( "Display window", image ); // Show our image inside it.
return (0);
}
Additionally I tried using this code with the same effect:
IplImage* cv_image = cvCreateImageHeader(cvSize(x,y), IPL_DEPTH_8U, 1);
cvSetData(cv_image, a[0], cv_image->widthStep);
Mat image = Mat(cv_image, false);
Can anyone please help me explain why this is happening during my image creation?
Note: unfortunately, I cannot provide the original image/capture from LV, but I can say that it doesn't look anything like that, and I am working entirely in grayscale.
Output Image:
Your input (a) is a matrix of ints, while OpenCV wants uchars there.
The way you do it currently, each int from a gets spread over 4 consecutive bytes (that's exactly what I see in that picture), and only the first 1/4 of the input data is used.
You probably won't get away with just feeding the pixel pointer into your cv::Mat there. Looping over a[0], casting each pixel to uchar, and then assigning it to the OpenCV pixel should work, imho; see the sketch below.
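A minimal sketch of that loop, meant to live inside the exported imageProcess function from the question (so a, x and y are its parameters); it ignores the two extra header columns LabVIEW prepends:
Mat image(y, x, CV_8U);
for (int row = 0; row < y; ++row)
{
    uchar* dst = image.ptr<uchar>(row);        // pointer to the start of this row
    for (int col = 0; col < x; ++col)
        dst[col] = (uchar)a[0][row * x + col]; // cast each int pixel down to uchar
}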
You could convert your image to uchar, or simply use an int image by replacing CV_8U with CV_32S and then:
double scale = 1.0;  // use 1.0 if your int values already fit 0..255, otherwise pick an appropriate factor
double offset = 0.0;
cv::Mat image8U;
image.convertTo(image8U, CV_8UC1, scale, offset);

Sorting a matrix and placing it in one row

I am trying to figure out a way of sorting a 3x3 matrix into a 9x1 one.
So I have the following:
I want to end up with this:
This is what I have ended up doing so far:
Rect roi(y-1,x-1,kernel,kernel);
Mat image_roi = image(roi);
Mat image_sort(kernel, kernel, CV_8U);
cv::sort(image_roi, image_sort, CV_SORT_ASCENDING+CV_SORT_EVERY_ROW);
The code is not functional; currently I cannot find any data in image_sort after it is "sorted".
Are you sure you have a single-channel grey-level image? Try:
cv::Mat image_sort = cv::Mat::zeros(roi.height, roi.width, image.type()); // allocate memory
image(roi).copyTo(image_sort); // copy the ROI data into image_sort
std::sort(image_sort.data, image_sort.data + image_sort.total()); // sort the raw bytes with std::sort
cv::Mat vectorized = image_sort.reshape(1, 1); // reshape your WxH matrix into a 1x(W*H) vector
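An alternative that stays within the OpenCV API, as a sketch: it assumes image and roi are the 8-bit single-channel Mat and the Rect from the question, and uses the C++ flag names (cv::SORT_*; in OpenCV 2.x they are spelled CV_SORT_*):
cv::Mat flat = image(roi).clone().reshape(1, 1); // clone first so the data is continuous, then flatten to one row
cv::Mat vectorized;
cv::sort(flat, vectorized, cv::SORT_EVERY_ROW + cv::SORT_ASCENDING);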

how to separate BGR components of a pixel in color image using openCV

Each pixel's memory contains 8 bits for each of the Blue, Green and Red components. How can I separate these components from an Image or an Image matrix? As in:
int Blue = f(Image(x,y)); // (x,y) = coordinates of a pixel of Image
and similarly for red and green.
So what should the function f and the 2D matrix Image be?
Thanks in advance
First off, you should go through the basics of OpenCV before turning your attention to the other parts of image processing. What you ask for is pretty basic. Assuming you are using OpenCV 2.1 or higher:
cv::Mat img = ...; //read the image off the disk or do something else to fill the image
To access the pixel values at column x, row y (note that Mat::at takes (row, col)):
img.at<cv::Vec3b>(y, x);
This gives the values in BGR order, not RGB, so make sure you note this.
Basically a cv::Vec3b is the type that is accessed.
img.at<cv::Vec3b>(y, x)[0]; //B
img.at<cv::Vec3b>(y, x)[1]; //G
img.at<cv::Vec3b>(y, x)[2]; //R
or
Vec3b pixel = img.at<Vec3b>(y, x);
int b = pixel[0];
int g = pixel[1];
int r = pixel[2];
Now, to split the image into its B, G and R channels, you can use the following.
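For the C++ interface, a minimal sketch using cv::split (the file path below simply mirrors the C-style example that follows):
cv::Mat bgr = cv::imread("C://MyImage.bmp");
std::vector<cv::Mat> channels;
cv::split(bgr, channels);
// channels[0] = Blue, channels[1] = Green, channels[2] = Red (imread loads images in BGR order)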
And down to the primitive C style of OpenCV (both the C and the C++ styles are supported):
You can use the cvSplit function
IplImage* rgb = cvLoadImage("C://MyImage.bmp");
//now create three single channel images for the channel separation
IplImage* r = cvCreateImage( cvGetSize(rgb), rgb->depth,1 );
IplImage* g = cvCreateImage( cvGetSize(rgb), rgb->depth,1 );
IplImage* b = cvCreateImage( cvGetSize(rgb), rgb->depth,1 );
cvSplit(rgb,b,g,r,NULL);
OpenCV 2 CookBook is one of the best books on OpenCV. It will help you a lot.
