Creating a Mat object from a YV12 image buffer - opencv

I have a buffer which contains an image in YV12 format. Now I want to either convert this buffer to RGB or create a Mat object from it directly. Can someone help me? I tried this code:
cv::Mat input(widthOfImg, heightOfImg, CV_8UC1, vy12Buffer);
cv::Mat converted;
cv::cvtColor(input, converted, CV_YUV2RGB_YV12);

That's possible.
cv::Mat picYV12 = cv::Mat(nHeight * 3/2, nWidth, CV_8UC1, yv12DataBuffer);
cv::Mat picBGR;
cv::cvtColor(picYV12, picBGR, CV_YUV2BGR_YV12);
cv::imwrite("test.bmp", picBGR); //only for test
OpenCV color conversion flags
The height is multiplied by 3/2 because there are 4 Y samples, plus 1 U and 1 V sample, stored for every 2x2 square of pixels. This gives a sample-to-pixel ratio of 3/2:
(4 + 1 + 1) samples per 2x2 pixels = 6/4 = 3/2
YV12 Format
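For a concrete 640x480 frame, the buffer and Mat sizes work out as follows (a sketch; the names and dimensions are illustrative):
// Buffer size for one YV12 frame: a full-resolution Y plane plus two
// quarter-resolution chroma planes.
int width = 640, height = 480;                 // hypothetical dimensions (must be even)
size_t ySize    = (size_t)width * height;      // one Y byte per pixel
size_t yv12Size = ySize + 2 * (ySize / 4);     // == width * height * 3 / 2
cv::Mat yv12(height * 3 / 2, width, CV_8UC1);  // single-channel Mat with the matching shape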
Correction: in newer versions of OpenCV (I was on the older 2.4.13 release), the color conversion code changed to COLOR_YUV2BGR_YV12:
cv::cvtColor(picYV12, picBGR, COLOR_YUV2BGR_YV12);

Here is the corresponding version in Java (Android).
This method was faster than other techniques such as RenderScript or OpenGL (glReadPixels) for getting a bitmap from a YV12/I420 data stream (tested with WebRTC I420).
long startTimei = SystemClock.uptimeMillis();
Mat picyv12 = new Mat(768, 512, CV_8UC1);      // (im_height * 3/2, im_width); dimensions should be even
picyv12.put(0, 0, return_buff);                // return_buff - byte array with the I420 data
Imgproc.cvtColor(picyv12, picyv12, COLOR_YUV2RGB_YV12); // or COLOR_YUV2BGR_YV12, depending on the desired output
long endTimei = SystemClock.uptimeMillis();
Log.d("i420_time", Long.toString(endTimei - startTimei));
Log.d("picyv12_size", picyv12.size().toString());       // check size
Log.d("picyv12_type", String.valueOf(picyv12.type()));  // check type
Utils.matToBitmap(picyv12, tbmp2);             // convert Mat to Bitmap (height, width), i.e. (512, 512), ARGB_8888
save(tbmp2, "itest");                          // save bitmap

That's impossible.
Y'UV420p is a planar format, meaning that the Y', U, and V values are
grouped together instead of interspersed. The reason for this is that
by grouping the U and V values together, the image becomes much more
compressible. When given an array of an image in the Y'UV420p format,
all the Y' values come first, followed by all the U values, followed
finally by all the V values.
But a color cv::Mat uses an interleaved BGR layout, arranged B0 G0 R0 B1 G1 R1..., so we can't create a Mat object from a YV12 buffer directly.
Here is an example:
cv::Mat Yv12ToRgb( uchar *pBuffer, long bufferSize, int width, int height )
{
    cv::Mat result(height, width, CV_8UC3);
    long ySize = width * height;
    long uSize = ySize >> 2;        // each chroma plane is a quarter of the Y plane
    assert(bufferSize == ySize + uSize * 2);

    uchar *pY = pBuffer;
    uchar *pV = pY + ySize;         // in YV12 the V (Cr) plane comes first,
    uchar *pU = pV + uSize;         // followed by the U (Cb) plane

    uchar *output = result.data;
    for (int row = 0; row < height; ++row)
    {
        for (int col = 0; col < width; ++col)
        {
            uchar y = pY[row * width + col];
            // chroma is subsampled 2x2: one U and one V sample per 2x2 pixel block
            long chromaIdx = (row / 2) * (width / 2) + (col / 2);
            uchar cb = pU[chromaIdx];
            uchar cr = pV[chromaIdx];

            // ITU-R BT.601 conversion
            uchar b = cv::saturate_cast<uchar>(y + 1.772 * (cb - 128));
            uchar g = cv::saturate_cast<uchar>(y - 0.344 * (cb - 128) - 0.714 * (cr - 128));
            uchar r = cv::saturate_cast<uchar>(y + 1.402 * (cr - 128));

            *output++ = b;
            *output++ = g;
            *output++ = r;
        }
    }
    return result;
}
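A minimal usage sketch (the file name and dimensions are hypothetical; it assumes the raw dump is exactly width * height * 3/2 bytes):
int width = 640, height = 480;                    // hypothetical frame size
std::vector<uchar> buf(width * height * 3 / 2);
FILE *f = fopen("frame.yv12", "rb");              // hypothetical raw YV12 dump
fread(buf.data(), 1, buf.size(), f);
fclose(f);
cv::Mat bgr = Yv12ToRgb(buf.data(), (long)buf.size(), width, height);
cv::imwrite("frame.png", bgr);                    // the result is in BGR order, which imwrite expects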

You can try it as a YUV I420 array:
char filePath[3000];
int width, height;
cout << "file path = ";
cin >> filePath;
cout << "width = ";
cin >> width;
cout << "height = ";
cin >> height;

FILE *pFile = fopen(filePath, "rb");
if (!pFile)
    return -1;                       // bail out if the file could not be opened

unsigned char *buff = new unsigned char[width * height * 3 / 2];
fread(buff, 1, width * height * 3 / 2, pFile);
fclose(pFile);

cv::Mat imageBGRA;
cv::Mat picI420 = cv::Mat(height * 3 / 2, width, CV_8UC1, buff);
cv::cvtColor(picI420, imageBGRA, CV_YUV2BGRA_I420);
imshow("imageBGRA", imageBGRA);
waitKey(0);
delete[] buff;                       // picI420 does not own the buffer

Related

How to correctly manipulate a CV_16SC3 Mat in a CUDA Kernel

I am writing a CUDA program while working with OpenCV. I have an empty Mat of a given size (e.g. 1000x800) which I explicitly converted to a GpuMat with datatype CV_16SC3. I want to manipulate the image in this format inside the CUDA kernel; however, manipulating the Mat does not seem to work correctly.
I am calling my CUDA kernel as follows:
my_kernel <<< gridDim, blockDim >>>( (unsigned short*)img.data, img.cols, img.rows, img.step);
and my sample kernel looks like this
__global__ void my_kernel( unsigned short* img, int width, int height, int img_step)
{
    int x, y, pixel;
    y = blockIdx.y * blockDim.y + threadIdx.y;
    x = blockIdx.x * blockDim.x + threadIdx.x;

    if (y >= height)
        return;
    if (x >= width)
        return;

    pixel = (y * (img_step)) + (3 * x);

    img[pixel]   = 255; // I know 255 is basically an uchar, this is just part of my test
    img[pixel+1] = 255;
    img[pixel+2] = 255;
}
I am expecting this small kernel sample to set all pixels to white. However, after downloading the Mat from the GPU again and visualizing it with imshow, not all the pixels are white and some weird black lines are present, which makes me believe that I am somehow writing to invalid memory addresses.
My guess is the following. The OpenCV documentation states that cv::mat::data returns an uchar pointer. However, my Mat has a data type "16U" (short unsigned to my knowledge). That is why in the kernel launch I am casting the pointer to (unsigned short*). But apparently that is incorrect.
How should I correctly proceed to be able to read and write the Mat data as short in my kernel?
First of all, the input image type should be short instead of unsigned short because the type of Mat is 16SC3 ( rather than 16UC3 ).
Now, since the image step is in bytes and the data type is short, the pixel index (or address) should be calculated taking into account the difference in byte width between the two. There are two ways to fix this issue.
Method 1:
__global__ void my_kernel( short* img, int width, int height, int img_step)
{
    int x, y;
    y = blockIdx.y * blockDim.y + threadIdx.y;
    x = blockIdx.x * blockDim.x + threadIdx.x;

    if (y >= height)
        return;
    if (x >= width)
        return;

    // Reinterpret the input pointer as char* to allow jumps in bytes instead of shorts
    char* imgBytes = reinterpret_cast<char*>(img);

    // Calculate the row start address using the newly created pointer
    char* rowStartBytes = imgBytes + (y * img_step); // jump in bytes

    // Reinterpret the row start address back to the required data type
    short* rowStartShort = reinterpret_cast<short*>(rowStartBytes);
    short* pixelAddress  = rowStartShort + (3 * x);  // jump in shorts

    // Modify the image values
    pixelAddress[0] = 255;
    pixelAddress[1] = 255;
    pixelAddress[2] = 255;
}
Method 2:
Divide the input image step by the size of the required data type (short). This can be done when passing the step as a kernel argument.
my_kernel<<<grid,block>>>( (short*)img.data, width, height, img_step/sizeof(short));
I used Method 2 for quite a long time. It is a shortcut, but later, when I looked at the source code of certain image-processing libraries, I realized that Method 1 is actually more portable, since the size of a type can vary across platforms.
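For completeness, here is a host-side launch sketch for Method 2 (a sketch only, assuming OpenCV's CUDA module; the names follow OpenCV 3.x's cv::cuda, and the image size and block shape are illustrative):
cv::cuda::GpuMat gpuImg(800, 1000, CV_16SC3);     // the empty GPU image
dim3 block(32, 8);
dim3 grid((gpuImg.cols + block.x - 1) / block.x,
          (gpuImg.rows + block.y - 1) / block.y);
// Pass the step in shorts, not bytes, so the kernel can index the data directly
my_kernel<<<grid, block>>>((short*)gpuImg.data, gpuImg.cols, gpuImg.rows,
                           (int)(gpuImg.step / sizeof(short)));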

OpenCV decode CV_32FC1 into png

I would like to convert a OpenCV CV_32FC1 Mat into a png to save it and later use it in a Unity Shader.
I would like to encode it so that the first channel contains the highest 8 bits, the second channel the next 8 bits, and the third channel the next 8 bits.
Edit: I actually mean the highest bits of the mantissa; otherwise, discarding 8 bits (since I only have 3 channels for imwrite) would destroy the float representation.
I already have this working the other way around with this function:
Mat prepareLUT(char* filename){
    Mat first;
    first = imread(filename, CV_LOAD_IMAGE_COLOR);
    Mat floatmat;
    first.convertTo(floatmat, CV_32F);
    std::vector<Mat> channels(3);
    split(floatmat, channels);
    Mat res(Size(960,1080), CV_32FC1);
    res = channels[2]/255 + channels[1]/(255.0*255.0) + channels[0]/(255.0*255.0*255.0);
    return res;
}
but I am unable to do this the other way around.
My first idea was the following:
void saveLUT(Mat in, char const* filename){
    Mat m1 = Mat(imageSize, CV_8UC1);
    Mat m2 = Mat(imageSize, CV_8UC1);
    Mat m3 = Mat(imageSize, CV_8UC1);
    m1 = (in*(0xFF*0xFF*0xFF-1));
    m2 = (in*(0xFF*0xFF-1));
    m3 = (in*(0xFF-1));
    std::vector<Mat> channels;
    channels.push_back(m1);
    channels.push_back(m2);
    channels.push_back(m3);
    Mat out;
    merge(channels, out);
    imwrite(filename, out);
}
I thought all the bits left and right of my 8-bit range would be cut off, giving me the right Mat, but it always outputs a gray image.
The second approach was to work with float Mats, then convert them to char Mats to cut off everything outside the 8-bit range:
void saveLUT(Mat in, char const* filename){
    Mat m1f(imageSize, CV_32FC1);
    Mat m2f(imageSize, CV_32FC1);
    Mat m3f(imageSize, CV_32FC1);
    Mat m1, m2, m3;

    m3f = in*255;
    m3f.convertTo(m3, CV_8UC1);
    m3.convertTo(m3f, CV_32FC1);

    m2f = (in*255-m3f)*255;
    m2f.convertTo(m2, CV_8UC1);
    m2.convertTo(m2f, CV_32FC1);

    m1f = ((in*255-m3f)*255-m2f)*255;
    m1f.convertTo(m1, CV_8UC1);

    std::vector<Mat> channels;
    channels.push_back(m1);
    channels.push_back(m2);
    channels.push_back(m3);

    Mat out;
    merge(channels, out);
    imwrite(filename, out);
}
This way I always remove the values that are too high by subtracting the previous channel's result before multiplying, but this still gives me a gray result like the one below.
Any idea how to tackle this?
What you want to achieve is essentially a conversion from type CV_32FC1 to type CV_8UC4 that you can then save as a PNG file.
This can be achieved in one line in C++ using the data pointers:
cv::Mat floatImage; // Your CV_32FC1 Mat
cv::Mat pngImage(floatImage.rows, floatImage.cols, CV_8UC4, (cv::Vec4b*)floatImage.data);
What you obtain is a 4-channel, 8-bit image where each pixel contains one of the floating-point values from your original image, separated into 4 blocks of 8 bits.
The inverse transformation is also possible:
cv::Mat pngImage;
cv::Mat floatImage(pngImage.rows, pngImage.cols, CV_32FC1, (float*)pngImage.data);
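Since PNG is lossless, a save/load round trip should preserve the float values byte for byte (a sketch; the file name is illustrative, and IMREAD_UNCHANGED keeps all four channels):
cv::imwrite("float_packed.png", pngImage);                              // lossless, all 32 bits kept
cv::Mat loaded = cv::imread("float_packed.png", cv::IMREAD_UNCHANGED);  // read back as CV_8UC4
cv::Mat restored(loaded.rows, loaded.cols, CV_32FC1, (float*)loaded.data);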
I found a way to do this, but it is not very pretty.
I just perform the conversion on each value, with a few bit operations and bit shifts, like so:
void saveLUT(Mat in, char const* filename){
    int i, j;
    Mat c1(imageSize, CV_8UC1);
    Mat c2(imageSize, CV_8UC1);
    Mat c3(imageSize, CV_8UC1);

    for(i = 0; i < in.cols; i++){
        for(j = 0; j < in.rows; j++){
            float orig = in.at<float>(j,i);
            uint32_t orig_int = orig*(256.0*256.0*256.0-1);
            c1.at<uint8_t>(j,i) = (uint8_t)((orig_int&0xFF0000) >> 16);
            c2.at<uint8_t>(j,i) = (uint8_t)((orig_int&0x00FF00) >> 8);
            c3.at<uint8_t>(j,i) = (uint8_t)((orig_int&0x0000FF));
        }
    }

    std::vector<Mat> channels;
    channels.push_back(c1);
    channels.push_back(c2);
    channels.push_back(c3);

    Mat out;
    merge(channels, out);
    imwrite(filename, out);
}
It's not pretty to look at and I have to assume there are faster methods, but I did not find any, and it runs fast enough for my purpose.
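One likely-faster variant (a sketch I have not benchmarked, assuming a reasonably recent OpenCV with Mat::forEach, i.e. 3.x+, and using the same packing as above) is to let OpenCV parallelize the per-pixel loop:
Mat packed(in.size(), CV_8UC3);
in.forEach<float>([&](float &v, const int *pos) {
    uint32_t q = (uint32_t)(v * (256.0*256.0*256.0 - 1));
    packed.at<Vec3b>(pos[0], pos[1]) = Vec3b(
        (uint8_t)((q >> 16) & 0xFF),   // highest 8 bits (c1 above)
        (uint8_t)((q >>  8) & 0xFF),   // middle 8 bits (c2)
        (uint8_t)( q        & 0xFF));  // lowest 8 bits (c3)
});
imwrite(filename, packed);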

Image Processing: Image has grid lines after applying filter

I'm very new to low-level image processing and have just had a go at implementing a Gaussian kernel on both GPU and CPU - however, both yield the same output: an image severely skewed by a grid:
I'm aware I could use OpenCV's pre-built functions to handle the filters, but I wanted to learn the methodology behind it, so I built my own.
Convolution kernel:
// Convolution kernel - this manipulates the given channel and writes out a new blurred channel.
void convoluteChannel_cpu(
    const unsigned char* const channel,         // Input channel
    unsigned char* const channelBlurred,        // Output channel
    const size_t numRows, const size_t numCols, // Channel height/width (rows, cols)
    const float *filter,                        // The filter weights to convolve with
    const int filterWidth                       // This is normally 9
)
{
    // Loop through the image's given R, G or B channel
    for(int rows = 0; rows < (int)numRows; rows++)
    {
        for(int cols = 0; cols < (int)numCols; cols++)
        {
            // Declare the new pixel colour value
            float newColor = 0.f;

            // Loop over every row of the stencil
            for(int filter_x = -filterWidth/2; filter_x <= filterWidth/2; filter_x++)
            {
                // Loop over every col of the stencil
                for(int filter_y = -filterWidth/2; filter_y <= filterWidth/2; filter_y++)
                {
                    // Clamp to the boundary of the image so we don't access an invalid index
                    int image_x = __min(__max(rows + filter_x, 0), static_cast<int>(numRows - 1));
                    int image_y = __min(__max(cols + filter_y, 0), static_cast<int>(numCols - 1));

                    // Fetch the neighbouring pixel value
                    float pixel = static_cast<float>(channel[image_x * numCols + image_y]);

                    // Fetch the corresponding filter weight; without the weighting the image becomes choppy
                    float sigma = filter[(filter_x + filterWidth/2) * filterWidth + filter_y + filterWidth/2];
                    //float sigma = 1 / 81.f;

                    // Accumulate the weighted pixel value
                    newColor += pixel * sigma;
                }
            }

            // Write the newly computed colour to the output channel at the current index
            channelBlurred[rows * numCols + cols] = newColor;
        }
    }
}
I call this 3 times from another method which splits the image into respective R, G, B channels, but I don't believe this would cause the image to be so severely mutated.
Has anybody encountered a problem similar to this before, and if so how did you solve it?
EDIT: Channel-splitting function:
void gaussian_cpu(
    const uchar4* const rgbaImage,   // Our input image from the camera
    uchar4* const outputImage,       // The image we are writing back for display
    size_t numRows, size_t numCols,  // Height and width of the input image (rows/cols)
    const float* const filter,       // The filter weights
    const int filterWidth            // The size of the stencil, e.g. 9
)
{
    // Build an array to hold each channel for the given image
    unsigned char *r_c = new unsigned char[numRows * numCols];
    unsigned char *g_c = new unsigned char[numRows * numCols];
    unsigned char *b_c = new unsigned char[numRows * numCols];

    // Build arrays for each of the output (blurred) channels
    unsigned char *r_bc = new unsigned char[numRows * numCols];
    unsigned char *g_bc = new unsigned char[numRows * numCols];
    unsigned char *b_bc = new unsigned char[numRows * numCols];

    // Separate the image into R, G, B channels
    for(size_t i = 0; i < numRows * numCols; i++)
    {
        uchar4 rgba = rgbaImage[i];
        r_c[i] = rgba.x;
        g_c[i] = rgba.y;
        b_c[i] = rgba.z;
    }

    // Convolve each of the channels
    convoluteChannel_cpu(r_c, r_bc, numRows, numCols, filter, filterWidth);
    convoluteChannel_cpu(g_c, g_bc, numRows, numCols, filter, filterWidth);
    convoluteChannel_cpu(b_c, b_bc, numRows, numCols, filter, filterWidth);

    // Recombine the channels to build the output image - 255 for alpha as we want zero transparency
    for(size_t i = 0; i < numRows * numCols; i++)
    {
        uchar4 rgba = make_uchar4(r_bc[i], g_bc[i], b_bc[i], 255);
        outputImage[i] = rgba;
    }
}
EDIT: Calling the kernel:
while(gpu_frames > 0)
{
    //cout << gpu_frames << "\n";
    camera >> frameIn;

    // Allocate I/O pointers
    beginStream(&h_inputFrame, &h_outputFrame, &d_inputFrame, &d_outputFrame, &d_redBlurred, &d_greenBlurred, &d_blueBlurred, &_h_filter, &filterWidth, frameIn);

    // Show the source image
    imshow("Source", frameIn);

    g_timer.Start();

    // Allocate memory on the GPU
    allocateMemoryAndCopyToGPU(numRows(), numCols(), _h_filter, filterWidth);

    // Apply the Gaussian kernel filter and then free any memory ready for the next iteration
    gaussian_gpu(h_inputFrame, d_inputFrame, d_outputFrame, numRows(), numCols(), d_redBlurred, d_greenBlurred, d_blueBlurred, filterWidth);

    // Output the blurred image
    cudaMemcpy(h_outputFrame, d_frameOut, sizeof(uchar4) * numPixels(), cudaMemcpyDeviceToHost);

    g_timer.Stop();
    cudaDeviceSynchronize();
    gpuTime += g_timer.Elapsed();
    cout << "Time for this kernel " << g_timer.Elapsed() << "\n";

    Mat outputFrame(Size(numCols(), numRows()), CV_8UC1, h_outputFrame, Mat::AUTO_STEP);
    clean_mem();

    imshow("Dest", outputFrame);

    // 1 ms delay to prevent the system from being interrupted whilst drawing the new frame
    waitKey(1);
    gpu_frames--;
}
And then within the beginStream() method, images are converted to uchar4:
// Allocate host variables, casting the frameIn and frameOut vars to uchar4 elements, these will
// later be processed by the kernel
*h_inputFrame = (uchar4 *)frameIn.ptr<unsigned char>(0);
*h_outputFrame = (uchar4 *)frameOut.ptr<unsigned char>(0);
There are several doubtful points in the problem.
At the start of the code it is mentioned that the filter width is 9, making it a 9x9 kernel. But in some other comments it is said to be 3. So I am guessing that you are actually using a 9x9 kernel and the filter does have 81 weights in it.
But the output above can never be caused by that confusion alone.
uchar4 is 4 bytes in size. So in gaussian_cpu, when splitting the data by running the loop over rgbaImage[i] on an image that does not contain an alpha value (it can be inferred from that loop that alpha is not present), what actually happens is that you copy R1, G2, B3, R5, G6, B7 and so on into the red channel. Better to first try the code on a grayscale image and make sure you are using uchar instead of uchar4.
The output image appears to be exactly 1/3rd the width of the original image, which supports the assumption above.
EDIT 1:
Is the rgbaImage input to the gaussian_cpu function RGBA or RGB? VideoCapture must be giving a 3-channel output. The initialization of *h_inputFrame (to uchar4) is itself wrong, as it points to 3-channel data.
Similarly, the output data is four-channel, but Mat outputFrame is declared as single-channel while pointing to this four-channel data. Try declaring Mat outputFrame as type 8UC3 and see the result.
Also, how is the code working? The gaussian_cpu() function has 7 input parameters in its definition, but the call uses 8 parameters. Hopefully this is just a typo.
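If the capture really is 3-channel BGR, one possible fix (a sketch, not from the original post) is to convert to a genuine 4-channel image before treating the data as uchar4:
Mat frameRGBA;
cvtColor(frameIn, frameRGBA, CV_BGR2RGBA);                  // 3-channel BGR -> 4-channel RGBA
*h_inputFrame = (uchar4 *)frameRGBA.ptr<unsigned char>(0);  // now each pixel really is 4 bytes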

How to create OpenCV images that are contiguous in memory?

I am an OpenCV newbie. I create an OpenCV image using cvCreateImage and apply some operations to it. Now I want to create a series of OpenCV images whose underlying memory is contiguous. This can be helpful for later processing that memory as a series of image frames using parallel or CUDA techniques.
How can I create a certain number of OpenCV images that are contiguous in memory?
You can allocate the data yourself:
const int W = 640;
const int H = 480;
const int C = 1;  // number of channels (1 for CV_8U)
const int N = 10; // number of images

// Use heap storage; a stack array of W*H*C*N bytes (~3 MB here) would likely overflow the stack
std::vector<unsigned char> buffer(W*H*C*N);

cv::Mat im0(H, W, CV_8U, buffer.data());
cv::Mat im1(H, W, CV_8U, buffer.data() + W*H*C);
cv::Mat im2(H, W, CV_8U, buffer.data() + W*H*C*2);
I have used the C++ API because I'm more used to it, but similar behaviour should be achievable in the C API with the cvCreateImage function.
You could use a cv::Mat to manage the storage; then you don't have to remember to delete the storage.
Assuming 3 channel images:
const int W = 640; const int H = 480;
const int N = 10; // number of images
cv::Mat buffer (N, W * H, CV_8UC3);
cv::Mat im0(H, W, CV_8UC3, buffer.ptr<uchar>(0));
cv::Mat im1(H, W, CV_8UC3, buffer.ptr<uchar>(1));
cv::Mat im2(H, W, CV_8UC3, buffer.ptr<uchar>(2));
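To generalize to N images, the same views can be created in a loop (a sketch; the CV_Assert just documents that the backing store is one contiguous block):
std::vector<cv::Mat> frames;
for (int i = 0; i < N; ++i)
    frames.push_back(cv::Mat(H, W, CV_8UC3, buffer.ptr<uchar>(i))); // view into row i, no copy
CV_Assert(buffer.isContinuous()); // a freshly allocated Mat is a single contiguous block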

writing to IplImage imageData

I want to write data directly into the imageData array of an IplImage, but I can't find a lot of information on how it's formatted. One thing that's particularly troubling me is that, despite creating an image with three channels, there are four bytes to each pixel.
The function I'm using to create the image is:
IplImage *frame = cvCreateImage(cvSize(1, 1), IPL_DEPTH_8U, 3);
By all indications, this should create a three channel RGB image, but that doesn't seem to be the case.
How would I, for example, write a single red pixel to that image?
Thanks for any help; it's got me stumped.
If you are looking at frame->imageSize keep in mind that it is frame->height * frame->widthStep, not frame->height * frame->width.
BGR is the native format of OpenCV, not RGB.
Also, if you're just getting started, you should consider using the C++ interface (where Mat replaces IplImage) since that is the future direction and it's a lot easier to work with.
Here's some sample code that accesses pixel data directly:
int main (int argc, const char * argv[]) {
    IplImage *frame = cvCreateImage(cvSize(41, 41), IPL_DEPTH_8U, 3);
    for( int y=0; y<frame->height; y++ ) {
        uchar* ptr = (uchar*) ( frame->imageData + y * frame->widthStep );
        for( int x=0; x<frame->width; x++ ) {
            ptr[3*x+2] = 255; // Set red to max (BGR format)
        }
    }
    cvNamedWindow("window", CV_WINDOW_AUTOSIZE);
    cvShowImage("window", frame);
    cvWaitKey(0);
    cvReleaseImage(&frame);
    cvDestroyWindow("window");
    return 0;
}
Conceptually, imageData is a row-major array of interleaved channel bytes: b1, g1, r1, b2, g2, r2, ..., bN, gN, rN for N = height*width pixels (BGR order, as noted above). Take your image, a two-dimensional array of height N and width M, and arrange it into such a row-wise vector of type unsigned char for IPL_DEPTH_8U images. Note that each row of an IplImage is padded out to widthStep bytes, so copy the data row by row rather than assigning your pointer to frame->imageData directly.
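A minimal copy sketch under that caveat (copyPackedBGR and src are illustrative names, not from the original post):
#include <cstring> // std::memcpy

// Copy a tightly packed height*width*3 BGR buffer into an IplImage,
// row by row, because each IplImage row is padded to widthStep bytes.
void copyPackedBGR(IplImage* frame, const unsigned char* src)
{
    for (int y = 0; y < frame->height; y++)
        std::memcpy(frame->imageData + y * frame->widthStep,
                    src + y * frame->width * 3,
                    frame->width * 3);
}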
Straight to your answer, painting the pixel red:
IplImage *frame = cvCreateImage(cvSize(1, 1), IPL_DEPTH_8U, 3);
int x = 0, y = 0; // pixel coordinates; use these for images bigger than a single pixel
int C = 2;        // 0 for blue, 1 for green, 2 for red (BGR is the default format)
frame->imageData[y*frame->widthStep + 3*x + C] = (uchar)255;
