OpenCV Mat to ID2D1Bitmap - opencv

I have to render an OpenCV Mat using Direct2D / DirectX 11.
D2D1_BITMAP_PROPERTIES1 bitmapProperties = D2D1::BitmapProperties1(
    D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW,
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_IGNORE),
    96,
    96
);
D2D1_SIZE_U s = D2D1::SizeU(640, 480);
hr = m_d2DeviceContext->CreateBitmap(s, bitmapProperties, &m_streamBitmap);
I have to copy the OpenCV matrix into m_streamBitmap.
pImage is a color image with 3 channels.
m_streamBitmap->CopyFromMemory(NULL, reinterpret_cast<BYTE*>(pImage.data), pImage.cols*3);
This gives a distorted image as the result.
Can anyone guide me with this?
Thanks in advance.

Your CopyFromMemory call is copying a three-byte-per-pixel bitmap into a four-byte-per-pixel bitmap. CopyFromMemory does not do format conversion, meaning the format of the source and destination bitmaps must match exactly. In addition, DXGI does not support a three-byte color format, so the source bitmap must be modified to four bytes per pixel, even if the fourth byte is not used. If you cannot change the format of the source bitmap to match the Direct2D bitmap, then you will need to write a conversion function that expands the bitmap elements to four bytes.
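For example, a minimal sketch of that conversion, assuming pImage is a CV_8UC3 (BGR) cv::Mat and m_streamBitmap was created with the B8G8R8A8 properties above; the cvtColor call and the dst rectangle are my additions, not from the question:
cv::Mat bgra;
cv::cvtColor(pImage, bgra, cv::COLOR_BGR2BGRA);    // expand 3 channels to 4 (alpha unused)
D2D1_RECT_U dst = D2D1::RectU(0, 0, bgra.cols, bgra.rows);
hr = m_streamBitmap->CopyFromMemory(&dst, bgra.data, static_cast<UINT32>(bgra.step)); // pitch = bytes per source row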

Related

What format should I use for an OpenCV image if I need to access the underlying data?

I've made a program that creates images using OpenCL. In the OpenCL code I have to access the underlying data of the OpenCV image and modify it directly, but I don't know how the data is arranged internally.
I'm currently using CV_8U because the representation is really simple (0 is black, 255 is white, and everything in between is gray), but I want to add color and I don't know what format to use.
This is how I currently modify the image: A[y*width + x] = 255;.
Since your A[y*width + x] = 255; works fine, the underlying image data A must be a 1D pixel array of size width * height, where each element is a CV_8U (8-bit unsigned int).
The color values of a pixel, in the case of OpenCV, will be arranged B G R in memory. RGB order would be more common but OpenCV likes them BGR.
Your data ought to be CV_8UC3, which is the case if you use imread or VideoCapture. If it isn't, the following information needs to be interpreted accordingly.
Your array index math needs to expand to account for the data's layout:
[(y*width + x)*3 + channel]
The 3 is because there are 3 channels; channel is 0..2, and x and y are as you expect.
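As a hedged sketch of that indexing (the file name is just a placeholder, and this assumes the Mat is continuous; otherwise use img.step as the row stride instead of width*3):
cv::Mat img = cv::imread("input.png");   // CV_8UC3, BGR order
unsigned char* A = img.data;
int width = img.cols;
int x = 10, y = 20;
A[(y*width + x)*3 + 0] = 0;     // Blue channel
A[(y*width + x)*3 + 1] = 0;     // Green channel
A[(y*width + x)*3 + 2] = 255;   // Red channel -> the pixel becomes pure red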
As mentioned in other answers, you'd need to convert this single-channel image to a 3-channel image to have color. The 3 channels are Blue, Green, Red (BGR).
OpenCV has a method that does just this, cv2.cvtColor(). It takes an input image (in this case the single-channel image that you have) and a conversion code (see here for more).
So the code would be like the following:
color_image = cv2.cvtColor(source_image, cv2.COLOR_GRAY2BGR)
Then you can modify the color by accessing each of the color channels, e.g.
color_image[y, x, 0] = 255 # this changes the first channel (Blue)

Converting BMP image to set of instructions for a plotter?

I have a plotter like this one:
The task I have to implement is the conversion of a 24-bit BMP to a set of instructions for this plotter. In the plotter I can change between 16 common colors. The first difficulty I face is color reduction. The second difficulty is how to transform pixels into a set of drawing instructions.
A brush with oil paint will be used as the drawing tool, which means the plotter's lines will not be very thin and will be relatively short.
Can you suggest algorithms that could be used to solve this image conversion problem?
Some initial results:
Dithering
Well, I got some time for this today, so here is the result. You did not provide your plotter color palette, so I extracted it from your resulting images, but you can use any. The idea behind dithering is simple: our perception integrates color over an area, not over individual pixels, so you keep an accumulator of the color difference between what has been rendered and what should have been rendered, and add it to the next pixel ...
This way the area has approximately the same color even though only a discrete number of colors is actually used. How this leftover is propagated is what distinguishes the many dithering methods. The simplest, most straightforward version is this:
reset color accumulator to zero
process all pixels
for each pixel add its color to accumulator
find closest match of the result in your palette
render selected palette color
subtract selected palette color from accumulator
Here is your input image (I put them together):
Here is the result image for your source:
The color squares in the upper-left corner are just the palette I used (extracted from your image).
Here is the code (C++) I did this with:
picture pic0,pic1,pic2;
// pic0 - source img
// pic1 - source pal
// pic2 - output img
int x,y,i,j,d,d0,e;
int r,g,b,r0,g0,b0;
color c;
List<color> pal;
// resize output to source image size, clear with black
pic2=pic0; pic2.clear(0);
// create distinct colors pal[] list from palette image
for (y=0;y<pic1.ys;y++)
 for (x=0;x<pic1.xs;x++)
    {
    c=pic1.p[y][x];
    for (i=0;i<pal.num;i++) if (pal[i].dd==c.dd) { i=-1; break; }
    if (i>=0) pal.add(c);
    }
// dithering
r0=0; g0=0; b0=0; // no leftovers
for (y=0;y<pic0.ys;y++)
 for (x=0;x<pic0.xs;x++)
    {
    // get source pixel color
    c=pic0.p[y][x];
    // add to leftovers
    r0+=WORD(c.db[picture::_r]);
    g0+=WORD(c.db[picture::_g]);
    b0+=WORD(c.db[picture::_b]);
    // find closest color from pal[]
    for (i=0,j=-1;i<pal.num;i++)
        {
        c=pal[i];
        r=WORD(c.db[picture::_r]);
        g=WORD(c.db[picture::_g]);
        b=WORD(c.db[picture::_b]);
        e=(r-r0); e*=e; d =e;
        e=(g-g0); e*=e; d+=e;
        e=(b-b0); e*=e; d+=e;
        if ((j<0)||(d0>d)) { d0=d; j=i; }
        }
    // get selected palette color
    c=pal[j];
    // sub from leftovers
    r0-=WORD(c.db[picture::_r]);
    g0-=WORD(c.db[picture::_g]);
    b0-=WORD(c.db[picture::_b]);
    // copy to destination image
    pic2.p[y][x]=c;
    }
// render found palette pal[] (visual check/debug)
x=0; y=0; r=16; g=pic2.xs/r; if (g>pal.num) g=pal.num;
for (y=0;y<r;y++)
 for (i=0;i<g;i++)
  for (c=pal[i],x=0;x<r;x++)
   pic2.p[y][x+(i*r)]=c;
where picture is my image class, so here are some of its members:
xs,ys resolution
color p[ys][xs] direct pixel access (32bit pixel format so 8 bit per channel)
clear(DWORD c) fills image with color c
The color type is just a union of DWORD dd and BYTE db[4] for simple channel access.
The List<> is my dynamic array/list template:
List<int> a is the same as int a[].
add(b) adds b at the end of the list
num is the number of items in the list
Now, to avoid too many dots (for the sake of your plotter's lifespan), you can instead use different line patterns etc., but that needs a lot of trial and error ... For example, you can count how many times a color is used in some area and, based on that ratio, use different filling patterns (based on lines). You need to choose between image quality and rendering speed/durability ...
Without more info about your plotter's capabilities (speeds, method of tool change, color mixing behavior) it is hard to decide on the best way to form the control stream. My bet is that you change the colors manually, so you will render each color all at once. So extract all pixels with the color of the first tool, merge adjacent pixels into lines/curves, and render ... then move on to the next tool color ... A sketch of that extraction step is below.
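A minimal sketch of the run extraction, assuming the dithered result sits in an OpenCV cv::Mat (CV_8UC3); the Segment struct and the extractRuns name are illustrative, not part of the answer above:
#include <opencv2/opencv.hpp>
#include <vector>

struct Segment { int y, x0, x1; };   // one horizontal stroke for the plotter

std::vector<Segment> extractRuns(const cv::Mat& img, const cv::Vec3b& toolColor)
{
    std::vector<Segment> runs;
    for (int y = 0; y < img.rows; y++)
    {
        int start = -1;   // start x of the current run, -1 = no run open
        for (int x = 0; x < img.cols; x++)
        {
            bool match = (img.at<cv::Vec3b>(y, x) == toolColor);
            if (match && start < 0) start = x;   // a run of this tool color begins
            if ((!match || x == img.cols - 1) && start >= 0)
            {
                runs.push_back({ y, start, match ? x : x - 1 });   // the run ends
                start = -1;
            }
        }
    }
    return runs;   // plot these with one tool, then repeat for the next color
}
Each Segment would then become a pen-down move from (x0, y) to (x1, y) in the plotter's units.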

Convert 4 channel image to 3 Channel image

I am using OpenCV 2.4.6. I am trying to convert a 4-channel RGB IplImage to a 4-channel HSV image. Below is my code, which gives the error "OpenCV Error: Assertion failed in unknown function". I think cvCvtColor only supports 3-channel images. Is there any way of converting 4-channel RGB to HSV, or 4-channel RGB to 3-channel RGB?
IplImage* mCVImageColor = cvCreateImageHeader(cvSize(640,480), IPL_DEPTH_8U, 4);
/*Doing something*/
IplImage* imgHSV = cvCreateImage(cvGetSize(mCVImageColor), IPL_DEPTH_8U, 4);
cvCvtColor(mCVImageColor, imgHSV, CV_BGR2HSV); //This line throws exception
The common assumption is that the 4th channel is an alpha (A) channel. Thus, the correct conversion code is:
cvCvtColor(mCVImageColor, imgHSV, CV_BGRA2HSV);
Notice the A in BGRA.
Also, I guess from your syntax (mCVImage...) that you are using C++. Then why not use the C++ API of OpenCV?
If you choose to go C++, the documentation is still outdated, and you can find up-to-date color conversion codes for OpenCV 2.4.6 here.
For your case, the correct color conversion code (C++) is cv::COLOR_BGRA2HSV. But if you are using the C++ API, then you should use cv::Mat objects and call the function cv::cvtColor(...) instead of using IplImages and cv-prefixed functions.
Working with OpenCV 3.1, I had this issue but the answer by sansuiso was not working for me.
Instead, I went with the following command to convert from 4 channel color (RGBA) to 3 channel color (RGB):
cvtColor(inputMat, outputMat, CV_BGRA2BGR);
Afterwards, I verified channels() for each Mat and confirmed that the alpha channel had been stripped, and my function that required three-channel images worked properly.
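As a rough sketch of the full route with the C++ API (the variable and file names are placeholders, not from either answer): drop the alpha channel first, then convert to HSV:
cv::Mat bgra = cv::imread("input.png", cv::IMREAD_UNCHANGED);   // 4-channel BGRA
cv::Mat bgr, hsv;
cv::cvtColor(bgra, bgr, cv::COLOR_BGRA2BGR);   // strip alpha: 4 -> 3 channels
cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);     // then BGR -> HSV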
I have an idea: remove the alpha channel by converting transparent pixels to white. Then you can use OpenCV to do other operations. This method uses the PIL package.
from PIL import Image

image = Image.open("original.png")
image = image.convert("RGBA")  # Convert this to RGBA if possible
pixel_data = image.load()
if image.mode == "RGBA":
    # If the image has an alpha channel, convert transparent pixels to white
    # Otherwise we'll get weird pixels
    for y in range(image.size[1]):        # For each row ...
        for x in range(image.size[0]):    # Iterate through each column ...
            # Check if the pixel is not fully opaque
            if pixel_data[x, y][3] < 255:
                # Replace the pixel data with the colour white
                pixel_data[x, y] = (255, 255, 255, 255)
image.save("noAlphaChannelImage.png")

Convert image color space and output separate channels in OpenCV

I'm trying to reduce the runtime of a routine that converts an RGB image to a YCbCr image. My code looks like this:
cv::Mat input(BGR->m_height, BGR->m_width, CV_8UC3, BGR->m_imageData);
cv::Mat output(BGR->m_height, BGR->m_width, CV_8UC3);
cv::cvtColor(input, output, CV_BGR2YCrCb);
cv::Mat outputArr[3];
outputArr[0] = cv::Mat(BGR->m_height, BGR->m_width, CV_8UC1, Y->m_imageData);
outputArr[1] = cv::Mat(BGR->m_height, BGR->m_width, CV_8UC1, Cr->m_imageData);
outputArr[2] = cv::Mat(BGR->m_height, BGR->m_width, CV_8UC1, Cb->m_imageData);
split(output,outputArr);
But this code is slow because of the redundant split operation, which copies the interleaved image into the separate channel images. Is there a way to make the cvtColor function create an output that is already split into channel images? I tried the constructors of the _OutputArray class that accept a vector or array of matrices as input, but it didn't work.
Are you sure that copying the image data is the limiting step?
How are you producing the Y / Cr / Cb cv::Mats?
Can you just rewrite this function to write the results into the three separate images?
There is no calling option for cv::cvtColor that gives its result as three separate cv::Mats (one per channel).
dst – output image of the same size and depth as src.
source: http://docs.opencv.org/modules/imgproc/doc/miscellaneous_transformations.html#cvtcolor
You have to copy the pixels from the result (as you are already doing) or write such a conversion function yourself.
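As a rough illustration (not from the answer), a hand-written BGR-to-YCrCb loop that writes straight into three single-channel Mats might look like the following; whether it actually beats cvtColor plus split needs measuring, and the coefficients are the standard full-range 8-bit ones:
#include <opencv2/opencv.hpp>

void bgr2ycrcbPlanes(const cv::Mat& bgr, cv::Mat& Y, cv::Mat& Cr, cv::Mat& Cb)
{
    // create() is a no-op if the Mats already have the right size/type
    Y.create(bgr.size(), CV_8UC1);
    Cr.create(bgr.size(), CV_8UC1);
    Cb.create(bgr.size(), CV_8UC1);
    for (int y = 0; y < bgr.rows; y++)
    {
        const cv::Vec3b* src = bgr.ptr<cv::Vec3b>(y);
        uchar* py  = Y.ptr<uchar>(y);
        uchar* pcr = Cr.ptr<uchar>(y);
        uchar* pcb = Cb.ptr<uchar>(y);
        for (int x = 0; x < bgr.cols; x++)
        {
            double b = src[x][0], g = src[x][1], r = src[x][2];
            double yy = 0.299*r + 0.587*g + 0.114*b;                  // luma
            py[x]  = cv::saturate_cast<uchar>(yy);
            pcr[x] = cv::saturate_cast<uchar>((r - yy)*0.713 + 128);  // Cr
            pcb[x] = cv::saturate_cast<uchar>((b - yy)*0.564 + 128);  // Cb
        }
    }
}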
Use split. This splits the image into 3 different channels or arrays.
Converting them back to UIImage is where I am having trouble. I get three grayscale images, one per array. I am convinced they are the proper channels in cv::Mat format, but when I convert them to UIImage they are grayscale with different grayscale values in each image. If you can use imread and imshow, they should display the images for you after the split. My problem is using the ios.h methods; I believe they reassemble the arrays instead of transferring the single array. Here is my code, using a segmented control to choose which layer (array) to display. Like I said, I get three grayscale images but with completely different values. I need to keep one layer and abandon the rest. Still working on that part of it.
cv::Mat cvImage, RYB;         // added: declarations implied by the snippet
std::vector<cv::Mat> layers;  // added: holds the split channels
UIImageToMat(_img, cvImage);
cv::cvtColor(cvImage, RYB, CV_RGB2BGRA);
split(RYB, layers);
if (_segmentedRGBControl.selectedSegmentIndex == 0) {
    // cv::cvtColor(layers[0], RYB, CV_8UC1);
    RYB = layers[0];
    _imageProcessView.image = MatToUIImage(RYB);
}
if (_segmentedRGBControl.selectedSegmentIndex == 1) {
    RYB = layers[1];
    _imageProcessView.image = MatToUIImage(RYB);
}
if (_segmentedRGBControl.selectedSegmentIndex == 2) {
    RYB = layers[2];
    _imageProcessView.image = MatToUIImage(RYB);
}

How to convert monochrome image to bitwise format for thermal printer

I'm using a Custom s'print DPT100-S thermal printer to make a receipt printing application.
It is able to print graphics with 384 pixels per line. This data has to be passed to the printer as 48 bytes (48x8=384), so each bit represents one dot to be printed ('0' for white, '1' for black).
So I need to create a C program on Linux that reads a monochrome BMP generated in Windows Paint (or any other program) and converts it into this bit format.
Please guide me.
Pseudo code:
read BMP
for each row in BMP
    for each group of 8 pixels in row
        output_byte = 0
        for each pixel in current group of 8
            output_byte <<= 1            // shift output_byte left by one bit
            output_byte |= (pixel != 0)  // set rightmost bit in output_byte
                                         // according to input pixel value
        save output_byte in bitmap
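For the packing step itself, here is a small hedged C++ sketch under the assumption that one row of 384 pixels has already been read from the BMP into a 0/1 array (the function name and buffers are illustrative; BMP header parsing, bottom-up row order, and 4-byte row padding are left out):
#include <cstdint>

// Pack one 384-pixel monochrome row into 48 printer bytes, MSB first.
void packRow(const uint8_t pixels[384], uint8_t out[48])
{
    for (int byteIdx = 0; byteIdx < 48; byteIdx++)
    {
        uint8_t b = 0;
        for (int bit = 0; bit < 8; bit++)
        {
            b <<= 1;                                  // shift left by one bit
            b |= (pixels[byteIdx * 8 + bit] != 0);    // 1 = black dot, 0 = white
        }
        out[byteIdx] = b;
    }
}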
Take a look at halftoning.
A quick Google will get you references and a Java applet like this one: http://www.markschulze.net/halftone/index.html
If you don't have to create your own program and you are happy to use off-the-shelf software, try ImageMagick's convert command: http://www.imagemagick.org/Usage/quantize/#halftone
e.g.
convert myfile.jpg -colorspace Gray -ordered-dither h4x4a printable-file.bmp
This link points to a program called LCD Assistant, which does the same thing you need. You have to use Paint to convert the image to a bitmap and then import that BMP image into the program. You can choose the output to be 384 x xyz. You get the output pixels in hex.
