How can I correctly convert an OpenCV IplImage to OpenSceneGraph's osg::Image?
This is my current method, but I'm getting incorrect color data.
// IplImage* cvImg is a webcam frame captured using cvQueryFrame(capture)
osg::ref_ptr<osg::Image> osgImage = new osg::Image;
osgImage->setImage(cvImg->width, cvImg->height, 3,
                   GL_RGB, GL_RGB, GL_UNSIGNED_BYTE,
                   (unsigned char*)(cvImg->imageData),
                   osg::Image::NO_DELETE, 1);
This is most likely an issue with OpenCV's native BGR channel order. You don't mention which version of OpenCV you are using, but modern versions define CV_BGR2RGB for use with cvCvtColor. You could do the conversion like this:
IplImage* pImg = cvLoadImage("lines.jpg", CV_LOAD_IMAGE_COLOR);
cvCvtColor(pImg, pImg, CV_BGR2RGB);   // in-place BGR -> RGB conversion
// ... use the RGB data (e.g. pass it to setImage) ...
cvReleaseImage(&pImg);
If you can't use that, another option would be to use cvSplit to separate and reorder the channels and then recombine them with cvMerge.
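For instance, a rough sketch of the split/merge route, assuming cvImg is the 3-channel 8-bit BGR webcam frame from the question (the variable names are illustrative):
IplImage* b   = cvCreateImage(cvGetSize(cvImg), IPL_DEPTH_8U, 1);
IplImage* g   = cvCreateImage(cvGetSize(cvImg), IPL_DEPTH_8U, 1);
IplImage* r   = cvCreateImage(cvGetSize(cvImg), IPL_DEPTH_8U, 1);
IplImage* rgb = cvCreateImage(cvGetSize(cvImg), IPL_DEPTH_8U, 3);
cvSplit(cvImg, b, g, r, NULL);   // OpenCV stores the channels as B, G, R
cvMerge(r, g, b, NULL, rgb);     // recombine with red and blue swapped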
On a side note, I would definitely recommend using the C++ interface, as it makes memory management much easier and has more features than the C interface.
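For example, the equivalent capture and conversion with the C++ interface might look roughly like this (a sketch, not taken from your code):
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture capture(0);              // webcam
    cv::Mat frame;
    capture >> frame;                         // frames arrive in BGR order by default
    cv::cvtColor(frame, frame, CV_BGR2RGB);   // now in RGB order, ready for GL_RGB
    // frame.data could then be handed to osg::Image::setImage();
    // the cv::Mat frees its memory automatically when it goes out of scope
    return 0;
}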
Hope that helps!
In OpenCV we have access to the CV_XX types which allow you to create a matrix with, for example, CV_32SC1. How do I do this in EmguCV?
The reason for asking is:
I am currently using EmguCV and getting an error: I need to create an Image of a specific type and am unable to find the equivalent values.
Here is my code:
Emgu::CV::Image<Emgu::CV::Structure::Gray, byte>^ mask = gcnew Emgu::CV::Image<Emgu::CV::Structure::Gray, byte>(gray->Size);
try { CvInvoke::cvDistTransform(255-gray, tmp, CvEnum::DIST_TYPE::CV_DIST_L1, 3, nullptr, mask); }
Which gives the error:
OpenCV: the output array of labels must be 32sC1
So I believe I need to change the byte type to 32sC1. How do I do this?
I am using EmguCV 2.0
The Working with images page, specifically the section on Emgu CV 2.0, provides the following clarification on image depth:
Image Depth

Image depth is specified using the second generic parameter Depth. The types of depth supported in Emgu CV 1.4.0.0 include:
Byte
SByte
Single (float)
Double
UInt16
Int16
Int32 (int)
I believe this means EmguCV does not use the CV_XXX types at all, only the types listed above.
For my issue I set the depth type to Int32 and the error went away.
I have a lot of code based on OpenCV, but there are many ways in which the Arm Compute Library improves performance, so I'd like to integrate some Arm Compute Library code into my project. Has anyone tried converting between the two corresponding Image structures? If so, what did you do? Or is there a way to share a pointer to the underlying data buffer without copying the image data, just setting the strides and flags appropriately?
I was able to configure an arm_compute::Image matching my cv::Mat properties, allocate the memory, and point it at the data portion of my cv::Mat.
This way I can process the image efficiently using arm_compute while keeping the OpenCV infrastructure I had for the rest of the project.
// cv::Mat mat defined and initialized above (single-channel, 8-bit)
arm_compute::Image image;
// describe the tensor with the same dimensions and format as the cv::Mat
image.allocator()->init(arm_compute::TensorInfo(mat.cols, mat.rows, arm_compute::Format::U8));
image.allocator()->allocate();
// point the arm_compute image at the cv::Mat buffer instead of copying it;
// note that mat must outlive image, since the buffer is shared rather than owned
image.allocator()->import_memory(arm_compute::Memory(mat.data));
Update for ACL 18.05 or newer
You need to implement the IMemoryRegion interface (IMemoryRegion.h). I have created a gist for that: link
I am not familiar with DirectX, but I ran into a problem in a small project, part of which involves capturing DirectX data. I hope the explanation below makes some sense.
General question:
I would like to know what factors determine the DXGI_FORMAT of a texture in the backbuffer (hardware? OS? application? DirectX version?). And more importantly, when capturing a texture from the backbuffer, is it possible to receive the texture in a desired format by supplying that format as a parameter, with the conversion performed automatically if necessary?
Specifics about my problem:
I capture screens from games using Open Broadcaster Software (OBS) and process them with a specific library (OpenCV) prior to streaming. I noticed that, following updates to both Windows and OBS, I now get DXGI_FORMAT_R10G10B10A2_UNORM as the DXGI_FORMAT. This is a problem for me because, as far as I know, OpenCV does not provide a convenient way to build an OpenCV image from 10-bit colors. Below are a few relevant lines from the modified OBS source file.
d3d11_copy_texture(data.texture, backbuffer);
...
hlog(toStr(data.format)); // prints 24 = DXGI_FORMAT_R10G10B10A2_UNORM
...
ID3D11Texture2D* tex;
bool success = create_d3d11_stage_surface(&tex);
if (success) {
...
HRESULT hr = data.context->Map(tex, subresource, D3D11_MAP_READ, 0, &mappedTex);
...
Mat frame(data.cy, data.cx, CV_8UC4, mappedTex.pData, (int)mappedTex.RowPitch); // creates an OpenCV Mat object
// No support for 10-bit colors: CV_8UC4 expects 8-bit colors.
// When the resulting Mat is viewed, the colours are jumbled (probably because the 10-bit values do not fit into 8 bits).
Before the updates (when I was working on this a year ago), I was probably receiving DXGI_FORMAT_B8G8R8A8_UNORM, because the code above used to work.
Now I wonder what changed, and whether I can modify the source code of OBS to receive data with the desired DXGI_FORMAT.
The create_d3d11_stage_surface method called above sets the DXGI_FORMAT, but I am not sure whether it means 'give me data in this DXGI_FORMAT' or 'I know you work with this format, so give me what you have'.
static bool create_d3d11_stage_surface(ID3D11Texture2D **tex)
{
HRESULT hr;
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = data.cx;
desc.Height = data.cy;
desc.Format = data.format;
...
I had hoped that overriding desc.Format with DXGI_FORMAT_B8G8R8A8_UNORM would result in that format being used for the texture mapped by the ID3D11DeviceContext::Map call above, so that I would get data in the specified format. But that did not work.
The choice of render target format is up to the application, but it needs to be picked based on the Direct3D hardware feature level. Formats for render targets in swap chains are usually display scanout formats:
DXGI_FORMAT_R8G8B8A8_UNORM
DXGI_FORMAT_R8G8B8A8_UNORM_SRGB
DXGI_FORMAT_B8G8R8A8_UNORM
DXGI_FORMAT_B8G8R8A8_UNORM_SRGB
DXGI_FORMAT_R10G10B10A2_UNORM
DXGI_FORMAT_R16G16B16A16_FLOAT
DXGI_FORMAT_R10G10B10_XR_BIAS_A2_UNORM (rare)
See the DXGI documentation for the full list of supported formats and usages by feature level.
Direct3D 11 does not do format conversions when you copy resources, such as copying a render target to a staging texture, so if you want a format conversion you will need to handle it yourself. Note that CPU-side conversion code for all the DXGI formats can be found in DirectXTex.
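For example, here is a minimal sketch of unpacking DXGI_FORMAT_R10G10B10A2_UNORM into 8-bit BGRA on the CPU so the buffer can be wrapped as a CV_8UC4 Mat; the function name is illustrative, and the 10-to-8-bit step is a plain shift with no HDR tone mapping:
#include <cstdint>
#include <cstddef>

// src is the mapped texture data, srcRowPitch is mappedTex.RowPitch,
// dst must hold width * height * 4 bytes of BGRA output
void unpackR10G10B10A2ToBGRA8(const uint8_t* src, size_t srcRowPitch,
                              uint8_t* dst, int width, int height)
{
    for (int y = 0; y < height; ++y) {
        const uint32_t* row = reinterpret_cast<const uint32_t*>(src + y * srcRowPitch);
        uint8_t* out = dst + static_cast<size_t>(y) * width * 4;
        for (int x = 0; x < width; ++x) {
            const uint32_t p = row[x];
            const uint32_t r = p         & 0x3FF;   // bits  0..9
            const uint32_t g = (p >> 10) & 0x3FF;   // bits 10..19
            const uint32_t b = (p >> 20) & 0x3FF;   // bits 20..29
            const uint32_t a = (p >> 30) & 0x3;     // bits 30..31
            out[4 * x + 0] = static_cast<uint8_t>(b >> 2);   // 10 bits -> 8 bits
            out[4 * x + 1] = static_cast<uint8_t>(g >> 2);
            out[4 * x + 2] = static_cast<uint8_t>(r >> 2);
            out[4 * x + 3] = static_cast<uint8_t>(a * 85);   // 2 bits -> 8 bits (0..3 -> 0..255)
        }
    }
}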
It is the application that decides the format. The simplest one would be R8G8B8A8, which simply stores the RGB and alpha values. But if the developer decides to use HDR, the back buffer would probably be R11G11B10, because it can store much more precise colour data, without alpha channel information. If the game is, for example, black and white, there is no need to keep all RGB channels in the back buffer; a simpler format could be used. I hope this helps.
I am building an image processing application using OpenCV. I am also using the Armadillo library, because it has some very neat matrix-related functions. The thing is, though, that in order to use Armadillo functions on a cv::Mat I need frequent conversions from cv::Mat to arma::Mat.
To accomplish this I convert the cv::Mat to an arma::Mat using a function like this
arma::mat cvMat2armaMat(const cv::Mat& M)
{
    // assumes M holds continuous CV_64FC1 (double) data;
    // cv::Mat is row-major while Armadillo is column-major,
    // hence the swapped dimensions and the final transpose
    arma::mat armaM(reinterpret_cast<double*>(M.data), M.cols, M.rows, /*copy_aux_mem=*/true);
    return armaM.t();
}
Is there a more efficient way of doing this?
To avoid or reduce copying, you can access the memory used by Armadillo matrices via the .memptr() member function. For example:
mat X(5,6);
double* mem = X.memptr();
Be careful when using the above, as you're not allowed to free the memory yourself (Armadillo will still manage the memory).
Alternatively, you can construct an Armadillo matrix directly from existing memory. For example:
double* data = new double[4*5];
// ... fill data ...
mat X(data, 4, 5, false); // 'false' indicates that no copying is to be done; see docs
In this case you will be responsible for manually managing the memory.
Also bear in mind that Armadillo stores and accesses matrices in column-major order, i.e. column 0 is stored first, then column 1, column 2, etc. This is the same order as used by MATLAB, LAPACK and BLAS.
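Putting this together with the cv::Mat case from the question, a minimal no-copy wrapper might look like the sketch below; it assumes the cv::Mat holds continuous CV_64FC1 (double) data, and the resulting Armadillo matrix is the transpose of the OpenCV one because of the differing storage orders:
#include <armadillo>
#include <opencv2/core/core.hpp>

int main()
{
    cv::Mat cvM = cv::Mat::eye(3, 4, CV_64FC1);   // row-major, double precision

    // wrap the cv::Mat buffer directly; no copying takes place
    arma::mat armaM(reinterpret_cast<double*>(cvM.data),
                    cvM.cols, cvM.rows,
                    /*copy_aux_mem=*/false,   // share the buffer instead of copying it
                    /*strict=*/true);         // keep the matrix bound to this memory

    armaM.print("armaM (transpose of cvM):");
    return 0;
}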
What is the proper way of using the cvSplit function? I have seen different versions of it.
Should it be
cvSplit(oriImg, r,g,b, NULL);
or
cvSplit(oriImg, b,g,r, NULL);
Both of them are OK; it depends on the channel ordering. By default OpenCV uses BGR, so in this case it would be cvSplit(oriImg, b, g, r, NULL); but you can convert the image to RGB first and then use the other ordering, for example:
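A minimal sketch of the RGB route, assuming oriImg is a 3-channel 8-bit image and r, g, b are single-channel images of the same size:
IplImage* rgbImg = cvCreateImage(cvGetSize(oriImg), IPL_DEPTH_8U, 3);
cvCvtColor(oriImg, rgbImg, CV_BGR2RGB);   // reorder the channels to RGB
cvSplit(rgbImg, r, g, b, NULL);           // now the r, g, b order matches the names
cvReleaseImage(&rgbImg);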
This is exactly the thing that puzzled me when I started using OpenCV. OpenCV uses BGR instead of RGB, so you should use
cvSplit(img, b, g, r, NULL);