EmguCV SetCaptureProperty default values

I am using EmguCV in my C# project.
I recently set my USB webcam's properties to some random doubles and integers to see how they work, but now the webcam seems to remember all my unfortunate changes, and the video looks terrible even in a clean project.
The code I used looked like this:
capture.SetCaptureProperty(CapProp.Contrast, x);
capture.SetCaptureProperty(CapProp.Brightness, x);
capture.SetCaptureProperty(CapProp.AutoExposure, x);
capture.SetCaptureProperty(CapProp.Gamma, x);
capture.SetCaptureProperty(CapProp.Saturation, x);
capture.SetCaptureProperty(CapProp.Sharpness, x);
How do I find the default values of the properties listed in the EmguCV CapProp enum?
Is there a way to reset the webcam to its default settings?

Setting the Settings capture property pops up the camera driver's own video properties dialog (the changed values are stored by the driver, which is why they survive a clean project); most drivers offer a Default button there to restore the factory settings:
capture.SetCaptureProperty(CapProp.Settings, 1);

Related

How to create an Emgu::CV::Image with a specific type?

In OpenCV we have access to the CV_XX types which allow you to create a matrix with, for example, CV_32SC1. How do I do this in EmguCV?
The reason for asking is:
I am currently using EmguCV and getting an error where I need to create a specific type of Image and am unable to find those values.
Here is my code:
Emgu::CV::Image<Emgu::CV::Structure::Gray, byte>^ mask = gcnew Emgu::CV::Image<Emgu::CV::Structure::Gray, byte>(gray->Size);
try { CvInvoke::cvDistTransform(255-gray, tmp, CvEnum::DIST_TYPE::CV_DIST_L1, 3, nullptr, mask); }
Which gives the error:
OpenCV: the output array of labels must be 32sC1
So I believe I need to change the byte depth type to match 32sC1; how do I do this?
I am using EmguCV 2.0
The Working with images page, specifically the section on EmguCV 2.0, provides the following clarification on image depth:
Image Depth: image depth is specified using the second generic parameter, Depth. The types of depth supported in Emgu CV 1.4.0.0 include:
Byte
SByte
Single (float)
Double
UInt16
Int16
Int32 (int)
I believe this means EmguCV does not use the CV_XXX types at all, only the .NET types listed above.
For my issue, I set the depth type to Int32 and the error went away.
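A minimal sketch of that fix, keeping the question's C++/CLI syntax (only the depth parameter changes, from byte to int, i.e. Int32):
// 32sC1 means 32-bit signed, single channel, which corresponds to the
// int (Int32) depth parameter of EmguCV's generic Image type.
Emgu::CV::Image<Emgu::CV::Structure::Gray, int>^ mask = gcnew Emgu::CV::Image<Emgu::CV::Structure::Gray, int>(gray->Size);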

What is the best way to use the OpenCV library in conjunction with the Armadillo library?

I am building an image processing application using OpenCV. I am also using the Armadillo library because it has some very neat matrix-related functions. The thing is, though, that in order to use Armadillo functions on a cv::Mat I need frequent conversions from cv::Mat to arma::mat.
To accomplish this I convert the cv::Mat to an arma::mat using a function like this:
arma::mat cvMat2armaMat(const cv::Mat& M)
{
    // element-wise copy from the row-major cv::Mat into a
    // column-major arma::mat (assumes M holds doubles, CV_64F)
    arma::mat A(M.rows, M.cols);
    for (int r = 0; r < M.rows; ++r)
        for (int c = 0; c < M.cols; ++c)
            A(r, c) = M.at<double>(r, c);
    return A;
}
Is there a more efficient way of doing this?
To avoid or reduce copying, you can access the memory used by Armadillo matrices via the .memptr() member function. For example:
mat X(5,6);
double* mem = X.memptr();
Be careful when using the above, as you're not allowed to free the memory yourself (Armadillo will still manage the memory).
Alternatively, you can construct an Armadillo matrix directly from existing memory. For example:
double* data = new double[4*5];
// ... fill data ...
mat X(data, 4, 5, false); // 'false' indicates that no copying is to be done; see docs
In this case you will be responsible for manually managing the memory.
Also bear in mind that Armadillo stores and accesses matrices in column-major order, i.e. column 0 is stored first, then column 1, column 2, etc. This is the same order used by MATLAB, LAPACK and BLAS. OpenCV's cv::Mat, by contrast, is row-major, so reinterpreting its memory directly gives you the transposed matrix.
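Putting the two together, here is a minimal no-copy sketch of the cvMat2armaMat helper from the question, assuming a continuous, single-channel CV_64FC1 input; because of the row-major/column-major mismatch, the buffer is wrapped with rows and columns swapped and then transposed once:
#include <armadillo>
#include <opencv2/core/core.hpp>

arma::mat cvMat2armaMat(const cv::Mat& M)
{
    CV_Assert(M.type() == CV_64FC1 && M.isContinuous()); // layout this sketch assumes
    // Wrap the cv::Mat buffer without copying; rows and cols are swapped
    // because arma::mat reads the same memory in column-major order.
    arma::mat wrapped(reinterpret_cast<double*>(M.data), M.cols, M.rows, false);
    return wrapped.t(); // one transpose instead of an element-wise copy loop
}
This still costs one copy for the transpose; if your algorithm can work on the transposed matrix directly, you can drop the .t() and avoid copying altogether.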

How to modify the dft function in OpenCV?

I need to modify some of the variables inside the dft function in OpenCV to make it suitable for my application.
Where can I find the dft source code?
I've tried C:\opencv243\build\include\opencv2\core.hpp but it only gives me the description of dft:
//! performs forward or inverse 1D or 2D Discrete Fourier Transformation
CV_EXPORTS_W void dft(InputArray src, OutputArray dst, int flags=0, int nonzeroRows=0);
What is the procedure after source code modification? Do I have to give it a different name such as dft2()?
Where to save the new function?
I'm using Visual Studio 2010 and OpenCV 2.4.3 on Windows 7 (32-bit).
Please note that I'm new to OpenCV and just switched from MATLAB, so if you are willing to help, I would be grateful if you could explain clearly.
In MATLAB I could simply right-click on the function and see the source file (for the open source functions only).
Thanks
Payam
The dft function can be found in the dxt.cpp source file, located in $opencv2.3$\opencv\modules\core\src.
If you save your version under the same name you will overwrite that function and won't be able to use the original. If you only want your new behaviour, just change the code in place; if you want to keep the original functionality as well, save it as something else. dft2 would suffice, but I suggest a more meaningful name that describes what you changed.
Either create new source files or add the new function alongside the existing code in dxt.cpp; either way you will need to add the corresponding function declaration, as sketched below.
To find this information, I opened the OpenCV solution in Visual Studio and did a solution-wide search for DFT.
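As a sketch of that rename approach (dft2 is a placeholder name, not part of OpenCV): add a declaration next to the original one in core.hpp and a definition in dxt.cpp, then rebuild the core module:
// In opencv2/core/core.hpp, next to the original declaration:
//! hypothetical modified copy of cv::dft
CV_EXPORTS_W void dft2(InputArray src, OutputArray dst, int flags = 0, int nonzeroRows = 0);

// In modules/core/src/dxt.cpp:
void cv::dft2(InputArray src, OutputArray dst, int flags, int nonzeroRows)
{
    // Start by pasting the body of cv::dft here and applying your changes;
    // for brevity this sketch simply forwards to the stock implementation.
    cv::dft(src, dst, flags, nonzeroRows);
}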

How do I choose a pixel format when creating a new Texture2D?

I'm using the SharpDX Toolkit, and I'm trying to create a Texture2D programmatically, so I can manually specify all the pixel values. And I'm not sure what pixel format to create it with.
SharpDX doesn't even document the toolkit's PixelFormat type (they have documentation for another PixelFormat class but it's for WIC, not the toolkit). I did find the DirectX enum it wraps, DXGI_FORMAT, but its documentation doesn't give any useful guidance on how I would choose a format.
I'm used to plain old 32-bit bitmap formats with 8 bits per color channel plus 8-bit alpha, which is plenty good enough for me. So I'm guessing the simplest choices will be R8G8B8A8 or B8G8R8A8. Does it matter which I choose? Will they both be fully supported on all hardware?
And even once I've chosen one of those, I then need to further specify whether it's SInt, SNorm, Typeless, UInt, UNorm, or UNormSRgb. I don't need the sRGB colorspace. I don't understand what Typeless is supposed to be for. UInt seems like the simplest -- just a plain old unsigned byte -- but it turns out it doesn't work; I don't get an error, but my texture won't draw anything to the screen. UNorm works, but there's nothing in the documentation that explains why UInt doesn't. So now I'm paranoid that UNorm might not work on some other video card.
Here's the code I've got, if anyone wants to see it. Download the SharpDX full package, open the SharpDXToolkitSamples project, go to the SpriteBatchAndFont.WinRTXaml project, open the SpriteBatchAndFontGame class, and add code where indicated:
// Add new field to the class:
private Texture2D _newTexture;
// Add at the end of the LoadContent method:
_newTexture = Texture2D.New(GraphicsDevice, 8, 8, PixelFormat.R8G8B8A8.UNorm);
var colorData = new Color[_newTexture.Width*_newTexture.Height];
_newTexture.GetData(colorData);
for (var i = 0; i < colorData.Length; ++i)
    colorData[i] = (i % 3 == 0) ? Color.Red : Color.Transparent;
_newTexture.SetData(colorData);
// Add inside the Draw method, just before the call to spriteBatch.End():
spriteBatch.Draw(_newTexture, new Vector2(0, 0), Color.White);
This draws a small rectangle with diagonal lines in the top left of the screen. It works on the laptop I'm testing it on, but I have no idea how to know whether that means it's going to work everywhere, nor do I have any idea whether it's going to be the most performant.
What pixel format should I use to make sure my app will work on all hardware, and to get the best performance?
The formats in the SharpDX Toolkit map to the underlying DirectX/DXGI formats, so you can, as usual with Microsoft products, get your info from the MSDN:
DXGI_FORMAT enumeration (Windows)
32-bit textures are a common choice for most texture scenarios and perform well even on older hardware. UNorm means, as already answered in the comments, that each channel is an unsigned value normalized to the range 0.0..1.0, and is again a common way to access color data in textures.
If you look at the Hardware Support for Direct3D 10Level9 Formats (Windows) page, you will see that DXGI_FORMAT_R8G8B8A8_UNORM as well as DXGI_FORMAT_B8G8R8A8_UNORM are supported on DirectX 9 hardware, so you will not run into compatibility problems with either of them.
Performance depends on how your device is initialized (RGBA or BGRA), on the hardware (i.e. the supported DirectX feature level), and on the OS you are running your software on. You will have to run your own tests to find out, though for such common and similar formats the difference should be a single-digit percentage at most.

Confusion on cvSplit function

What is the proper way of using the cvSplit function? I have seen different versions of it.
Should it be
cvSplit(oriImg, r,g,b, NULL);
or
cvSplit(oriImg, b,g,r, NULL);
Both are OK; it depends on the channel ordering. By default OpenCV loads color images as BGR, so in this case it would be cvSplit(oriImg, b, g, r, NULL);, but you can convert the image to RGB first and then use the other order, as shown in the sketch below.
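For completeness, a minimal sketch of the convert-first variant, assuming oriImg is an 8-bit, 3-channel BGR IplImage (the temporary images are created here just for illustration):
// Convert BGR -> RGB first, so that r, g and b really receive red, green and blue.
IplImage* rgbImg = cvCreateImage(cvGetSize(oriImg), IPL_DEPTH_8U, 3);
IplImage* r = cvCreateImage(cvGetSize(oriImg), IPL_DEPTH_8U, 1);
IplImage* g = cvCreateImage(cvGetSize(oriImg), IPL_DEPTH_8U, 1);
IplImage* b = cvCreateImage(cvGetSize(oriImg), IPL_DEPTH_8U, 1);
cvCvtColor(oriImg, rgbImg, CV_BGR2RGB);
cvSplit(rgbImg, r, g, b, NULL);
// Release each image with cvReleaseImage(&img) when you are done with it.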
This is exactly what puzzled me when I started using OpenCV. OpenCV uses BGR instead of RGB, so you should use
cvSplit(img, b, g, r, NULL);
