OpenCV with GigE Vision Cameras

I need to use OpenCV with a GigE Vision Ethernet camera, but I couldn't find much useful information on how to do this. Any pointers, documents, or example code?
I need to read frames from the camera.

GigE Vision is a communication standard for a wide range of cameras. OpenCV now contains a wrapper for Prosilica GigE-based cameras (see CV_CAP_PVAPI).
But in general it's better to use the camera's native API to get the data and then use OpenCV to convert the returned data into an image; OpenCV contains a number of Bayer-pattern-to-RGB conversion routines.
The CvCapture module is convenient for testing, because it can seamlessly read from a camera or a file, but it's not really suitable for high-speed real-time vision.
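As a concrete starting point, here is a minimal sketch of the wrapper route, assuming a Prosilica camera visible to OpenCV's PvAPI backend and a Bayer-BG sensor pattern (whether you receive raw Bayer or already-converted data depends on the camera's pixel format setting, so adjust the conversion code for yours):
#include <opencv2/opencv.hpp>

int main()
{
    // Open the first PvAPI (Prosilica/AVT) camera OpenCV can find
    cv::VideoCapture cap(CV_CAP_PVAPI);
    if (!cap.isOpened())
        return -1;

    cv::Mat raw, bgr;
    while (cap.read(raw))
    {
        // Raw sensor data arrives as a Bayer mosaic; demosaic to BGR
        cv::cvtColor(raw, bgr, CV_BayerBG2BGR);
        cv::imshow("GigE frame", bgr);
        if (cv::waitKey(1) == 27) // Esc quits
            break;
    }
    return 0;
}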

You can do this! I used the Baumer GAPI SDK, which is a GenTL consumer. GenTL is a generic transport layer, a module within GenICam. You can read up on GenTL HERE. Using a GenTL consumer like Baumer's GAPI or Basler's API makes things a lot easier. They should work with any GigE camera.
I wrote a more comprehensive guide to using Baumer's GAPI SDK in another answer HERE, so I will give a summary of what you need:
1. Visual Studio
2. OpenCV 3 for C++ (HERE is a YouTube tutorial on how to set it up)
3. Baumer GAPI SDK HERE
4. (Optional) Test your camera and network interface card using Baumer's Camera Explorer program. You need to enable jumbo packets. You may also need to configure the camera and card IP addresses using Baumer's IPconfig program.
5. Set up your system variables. Refer to section 4.3.1 of the programmer's guide in the Baumer GAPI SDK Docs folder (should be in C:\Program Files\Baumer\Baumer GAPI SDK\Docs\Programmers_Guide).
6. Create a new C++ project in Visual Studio and configure the properties. Refer to section 4.4.1.
7. Go to the examples folder and look for the 005_PixelTransformation example. It should be in C:\Program Files\Baumer\Baumer GAPI SDK\Components\Examples\C++\src\0_Common\005_PixelTransformation. Copy the C++ file and paste it into the source directory of your new project.
8. Verify you can build and compile. NOTE: You may run into a problem with the part that adjusts camera parameters (exposure time, for example). You should see pixel values written to the screen for the first 6 pixels of the first 6 rows, for 8 images.
Add these #include statements to the top of the .cpp source file:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/video/video.hpp>
Add these variable declarations at the beginning of the main() function:
// OPENCV VARIABLE DECLARATIONS
cv::VideoWriter cvVideoCreator; // Create OpenCV video creator
cv::Mat openCvImage; // create an OpenCV image
cv::String videoFileName = "openCvVideo.avi"; // Define video filename
cv::Size frameSize = cv::Size(2048, 1088); // Define video frame size
cvVideoCreator.open(videoFileName, CV_FOURCC('D', 'I', 'V', 'X'), 20, frameSize, true); // open the writer with codec type, frame rate, and frame size
In the original 005_PixelTransformation.cpp file, line 569 has a for loop that loops over 8 images: for (int i = 0; i < 8; i++). We want to change this to run continuously, so change it to a while loop:
while (pDataStream->GetIsGrabbing())
Within the while loop there's an if/else statement that checks the image pixel format. After the else statement's closing brace and before the pImage->Release(); statement, add the following lines:
// OPENCV STUFF
openCvImage = cv::Mat(pTransformImage->GetHeight(), pTransformImage->GetWidth(), CV_8U, (void *)pTransformImage->GetBuffer());
// create an OpenCV window (the name must match the one passed to imshow, or you get two windows)
cv::namedWindow("OpenCV window: Cam", CV_WINDOW_NORMAL);
// display the current image in the window
cv::imshow("OpenCV window: Cam", openCvImage);
cv::waitKey(1);
Make sure you choose the correct pixel format for your openCvImage object. I chose CV_8U because my camera is mono 8-bit.
When you build and compile, you should get an OpenCV window that displays the live feed from your camera!
Like I said, it can be done, because I've done it. If you run into problems, refer to the programmer's guide.

I use a uEye GigE camera (5240) with OpenCV. It works as a cv::VideoCapture out of the box. Nevertheless, using the API allows for much more control over the camera's parameters.
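For illustration, a minimal sketch of the out-of-the-box route, assuming the uEye enumerates as device 0 (which properties the backend honors varies by driver):
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0); // the uEye enumerates like any other capture device
    if (!cap.isOpened())
        return -1;

    // Property requests are best-effort; the driver may ignore them
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 1280);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 1024);

    cv::Mat frame;
    if (cap.read(frame))
        cv::imwrite("frame.png", frame);
    return 0;
}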

You don't mention the type of the camera or your platform. On Windows, according to the OpenCV documentation:
"Currently two camera interfaces can be used on Windows: Video for Windows (VFW) and Matrox Imaging Library (MIL)."
It is unlikely that your GigE camera driver supports VFW, and for MIL you need the MIL library, which is not free AFAIK.
Most GigE cameras will have an API that you can use to capture images. In most cases the API will be based on GenICam. Probably your best approach is to use the API that came with your camera, and then convert the captured image to an IplImage structure (C) or Mat class (C++).
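For the conversion step, here is a minimal C++ sketch; the buffer pointer and dimensions are hypothetical placeholders for whatever your vendor SDK returns, and an 8-bit mono format is assumed:
#include <opencv2/core/core.hpp>

// Wrap a vendor-supplied frame buffer in a cv::Mat. The constructor does
// not copy the data, so clone() if the SDK recycles the buffer.
cv::Mat wrapFrame(unsigned char* buffer, int width, int height)
{
    return cv::Mat(height, width, CV_8UC1, buffer).clone();
}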

Related

Creating a sub-texture, from an existing texture, using D3D9

I'm working on an older project that uses D3D9 for rendering 3D environments.
I have a texture file loaded into memory, that I'm applying onto a simple 3D model for rendering. I'm loading this file using the D3DXCreateTextureFromFileInMemory function (MS Docs function link: https://learn.microsoft.com/en-us/windows/win32/direct3d9/d3dxcreatetexturefromfileinmemory), and everything works okay.
However, instead of simply reading and loading the entire texture file, I want to read and load only a square portion of it (a sub-texture of sorts). I have a pair of UV coordinates for the desired square portion (one UV coordinate for the top-left corner of the square, one for the bottom-right), relative to the main texture file, but I can't find a D3D9 function that does such a thing (I believe the correct term for this is a "texture atlas", but I've only heard it a couple of times and I'm not sure).
(An example diagram of the desired sub-texture region accompanied the original question.)
Looking over the MS Docs for the D3D9 texture functions, there is also D3DXCreateTextureFromFileInMemoryEx (MS Docs link: https://learn.microsoft.com/en-us/windows/win32/direct3d9/d3dxcreatetexturefromfileinmemoryex), which is an extended version of the previous D3DXCreateTextureFromFileInMemory function; however, it only accepts "height" and "width" parameters, not any sort of positional parameter pair. There are also alternative functions that use "Resources" instead of files in memory, but they also do not appear to accept any sort of positional parameters (such as D3DXCreateTextureFromResourceEx, MS Docs link: https://learn.microsoft.com/en-us/windows/win32/direct3d9/d3dxcreatetexturefromresourceex).
There are also several functions for a "UV Atlas" present in the MS Docs archives (https://learn.microsoft.com/en-us/windows/win32/direct3d9/dx9-graphics-reference-d3dx-functions-uvatlas), however I do not think those would be helpful to me.
Is what I'm trying to achieve here even possible using D3D9? Are there any functions that I may be missing that could help me achieve this goal?
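For what it's worth, a hedged sketch of one workable approach: D3DX has no "load a sub-rectangle from a file" call, but you can load the whole texture once and then blit a pixel RECT of it into a second texture with D3DXLoadSurfaceFromSurface (convert the UV pair to pixels via u * width, v * height first). The helper below is illustrative, not a tested solution:
#include <d3dx9.h>

// Copy srcRect (in pixels) out of fullTex into a newly created texture.
IDirect3DTexture9* CreateSubTexture(IDirect3DDevice9* device,
                                    IDirect3DTexture9* fullTex,
                                    RECT srcRect)
{
    const LONG w = srcRect.right - srcRect.left;
    const LONG h = srcRect.bottom - srcRect.top;

    D3DSURFACE_DESC desc;
    fullTex->GetLevelDesc(0, &desc);

    IDirect3DTexture9* subTex = NULL;
    if (FAILED(device->CreateTexture(w, h, 1, 0, desc.Format,
                                     D3DPOOL_MANAGED, &subTex, NULL)))
        return NULL;

    IDirect3DSurface9 *src = NULL, *dst = NULL;
    fullTex->GetSurfaceLevel(0, &src);
    subTex->GetSurfaceLevel(0, &dst);

    // Copy just the requested rectangle; no stretching, so no filtering
    D3DXLoadSurfaceFromSurface(dst, NULL, NULL, src, NULL, &srcRect,
                               D3DX_FILTER_NONE, 0);

    src->Release();
    dst->Release();
    return subTex;
}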

Any solution for TGA format loading in OpenCV4

There's currently no support for the TGA format in OpenCV.
I know there's a single-header library named stb_image that allows you to read/write TGA images, but use cases with OpenCV are rare on the Internet (it's more common to see people use it with OpenGL).
The second method I found: there's a short code snippet included in the answer to this topic:
Loading a tga/bmp file in C++/OpenGL
Someone used this code to read a TGA file into a cv::Mat, like the code below.
Tga tgaImg = Tga("/tmp/test.tga");
Mat img(tgaImg.GetHeight(), tgaImg.GetWidth(), CV_8UC4);
memcpy(img.data, tgaImg.GetPixels().data(), tgaImg.GetHeight() * tgaImg.GetWidth() * 4);
But this covers only the reading part. I wonder if stb_image can do the same thing as the code above; the image data structures might differ (I haven't looked into them yet).
I would like to ask people who have experience with this. Since the DDS/TGA image formats are popular for game textures, someone must have already found a way to read/write TGA in OpenCV code.
Thanks.
For saving an OpenCV image as TGA, use stbi_write_tga. This function takes a pointer to the image data as an argument, which is img.data for the cv::Mat type.
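A minimal sketch of that call, assuming a 4-channel CV_8UC4 image like the one built above; note that OpenCV stores pixels in BGRA order while stb expects RGBA, so convert first:
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb_image_write.h"
#include <opencv2/imgproc/imgproc.hpp>

bool saveTga(const cv::Mat& bgra, const char* filename)
{
    // stb expects RGBA channel order and tightly packed rows
    cv::Mat rgba;
    cv::cvtColor(bgra, rgba, CV_BGRA2RGBA);
    return stbi_write_tga(filename, rgba.cols, rgba.rows,
                          rgba.channels(), rgba.data) != 0;
}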

How to modify dft function in opencv?

I need to modify some of the variables inside the dft function in OpenCV to make it suitable for my application.
Where can I find the dft source code?
I've tried C:\opencv243\build\include\opencv2\core.hpp but it only gives me the description of dft:
//! performs forward or inverse 1D or 2D Discrete Fourier Transformation
CV_EXPORTS_W void dft(InputArray src, OutputArray dst, int flags=0, int nonzeroRows=0);
What is the procedure after source code modification? Do I have to give it a different name such as dft2()?
Where to save the new function?
I'm using Visual Studio 2010 and OpenCV 2.4.3 installed on Windows 7 (32-bit).
Please note that I'm new to OpenCV and just switched from MATLAB. Therefore if you are willing to help, I would be grateful if you could explain clearly.
In MATLAB I could simply right-click on the function and see the source file (for the open source functions only).
Thanks
Payam
The dft function can be found in the dxt.cpp source file, located in $opencv2.3$\opencv\modules\core\src.
If you save your version under the same name, you will overwrite that function and won't be able to use the original. If you only want your new behaviour, just change the code; if you want to keep the original functionality, save it as something else. dft2 would suffice, but I suggest something more meaningful, like dftWhatHaveIDone.
Either create new files or just add it as a new function within dxt.cpp; you will also need to add the corresponding function declaration.
To find this information I opened the OpenCV solution in Visual Studio and did a solution-wide search for dft.
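A hedged sketch of the "save it as something else" route; dft2 is a placeholder name, and the body is whatever you copy out of cv::dft in dxt.cpp. You then rebuild the OpenCV core module so your application links against the new symbol:
// In core.hpp, next to the original declaration:
CV_EXPORTS_W void dft2(InputArray src, OutputArray dst,
                       int flags = 0, int nonzeroRows = 0);

// In modules/core/src/dxt.cpp, alongside the original:
void cv::dft2(InputArray src, OutputArray dst, int flags, int nonzeroRows)
{
    // ... copied body of cv::dft, with your modifications ...
}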

How do I choose a pixel format when creating a new Texture2D?

I'm using the SharpDX Toolkit, and I'm trying to create a Texture2D programmatically, so I can manually specify all the pixel values. And I'm not sure what pixel format to create it with.
SharpDX doesn't even document the toolkit's PixelFormat type (they have documentation for another PixelFormat class but it's for WIC, not the toolkit). I did find the DirectX enum it wraps, DXGI_FORMAT, but its documentation doesn't give any useful guidance on how I would choose a format.
I'm used to plain old 32-bit bitmap formats with 8 bits per color channel plus 8-bit alpha, which is plenty good enough for me. So I'm guessing the simplest choices will be R8G8B8A8 or B8G8R8A8. Does it matter which I choose? Will they both be fully supported on all hardware?
And even once I've chosen one of those, I then need to further specify whether it's SInt, SNorm, Typeless, UInt, UNorm, or UNormSRgb. I don't need the sRGB colorspace. I don't understand what Typeless is supposed to be for. UInt seems like the simplest -- just a plain old unsigned byte -- but it turns out it doesn't work; I don't get an error, but my texture won't draw anything to the screen. UNorm works, but there's nothing in the documentation that explains why UInt doesn't. So now I'm paranoid that UNorm might not work on some other video card.
Here's the code I've got, if anyone wants to see it. Download the SharpDX full package, open the SharpDXToolkitSamples project, go to the SpriteBatchAndFont.WinRTXaml project, open the SpriteBatchAndFontGame class, and add code where indicated:
// Add new field to the class:
private Texture2D _newTexture;
// Add at the end of the LoadContent method:
_newTexture = Texture2D.New(GraphicsDevice, 8, 8, PixelFormat.R8G8B8A8.UNorm);
var colorData = new Color[_newTexture.Width*_newTexture.Height];
_newTexture.GetData(colorData);
for (var i = 0; i < colorData.Length; ++i)
colorData[i] = (i%3 == 0) ? Color.Red : Color.Transparent;
_newTexture.SetData(colorData);
// Add inside the Draw method, just before the call to spriteBatch.End():
spriteBatch.Draw(_newTexture, new Vector2(0, 0), Color.White);
This draws a small rectangle with diagonal lines in the top left of the screen. It works on the laptop I'm testing it on, but I have no idea how to know whether that means it's going to work everywhere, nor do I have any idea whether it's going to be the most performant.
What pixel format should I use to make sure my app will work on all hardware, and to get the best performance?
The formats in the SharpDX Toolkit map to the underlying DirectX/DXGI formats, so you can, as usual with Microsoft products, get your info from MSDN:
DXGI_FORMAT enumeration (Windows)
32-bit textures are a common choice for most texture scenarios and have good performance on older hardware. UNorm means, as already answered in the comments, "in the range of 0.0 .. 1.0" and is, again, a common way to access color data in textures.
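As a tiny worked illustration of that mapping (plain C++, nothing SharpDX-specific): a UNorm channel byte is divided by 255 when sampled, whereas a UInt format hands the shader the raw integer, which is why integer formats don't just work in pipelines expecting normalized colors.
#include <cstdio>

int main()
{
    unsigned char stored = 192;      // byte stored in an R8G8B8A8_UNorm texel
    float sampled = stored / 255.0f; // value a shader samples from it
    std::printf("%u -> %f\n", (unsigned)stored, sampled); // prints: 192 -> 0.752941
    return 0;
}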
If you look at the Hardware Support for Direct3D 10Level9 Formats (Windows) page, you will see that DXGI_FORMAT_R8G8B8A8_UNORM as well as DXGI_FORMAT_B8G8R8A8_UNORM are supported on DirectX 9 hardware. You will not run into compatibility problems with either of them.
Performance depends on how your device is initialized (RGBA/BGRA?), on the hardware (i.e., the supported DX feature level), and on the OS you are running your software on. You will have to run your own tests to find out (though with these common and similar formats the difference should be a single-digit percentage at most).

Interpolation and Morphing of an image in labview and/or openCV

I am working on an image manipulation problem. I have an overhead projector that projects onto a screen, and I have a camera that takes pictures of that. I can establish a 1:1 correspondence between a subset of projector coordinates and a subset of camera pixels by projecting dots on the screen and finding the centers of mass of the resulting regions on the camera. I thus have a map
proj_x, proj_y <--> cam_x, cam_y for scattered point pairs
My original plan was to regularize this map using the MathScript function griddata. This would work fine in MATLAB, as follows:
[pgridx, pgridy] = meshgrid(allprojxpts, allprojypts);
fitcx = griddata (proj_x, proj_y, cam_x, pgridx, pgridy);
fitcy = griddata (proj_x, proj_y, cam_y, pgridx, pgridy);
and the reverse for the camera to projector mapping
Unfortunately, this code causes LabVIEW to run out of memory on the meshgrid step (the camera is 5 megapixels, which apparently is too much for LabVIEW to handle).
I then started looking through OpenCV and found the cvRemap function. Unfortunately, this function takes as its starting point a regularized pixel-to-pixel map like the one I was trying to generate above. However, it made me hope that functions for creating such a map might be available in OpenCV. I couldn't find one in the OpenCV 1.0 API (I am stuck with 1.0 for legacy reasons), but I was hoping it's there or that someone has an easy trick.
So my question is one of the following:
1) How can I interpolate from scattered points to a grid in OpenCV? I.e., given z = f(x,y) for scattered values of x and y, how do I fill an image with f(im_x, im_y)?
2) How can I perform an image transform that maps image 1 to image 2, given that I know a scattered mapping of points in coordinate system 1 to coordinate system 2? This could be implemented either in LabVIEW or OpenCV.
Note: I am tagging this post delaunay, because that's one method of doing a scattered interpolation, but the better tag would be "scattered interpolation".
So this ends up being a set of specific fixes for bugs in LabVIEW 8.5. Nevertheless, since they're poorly documented and I've spent a day of pain on them, I figure I'll post them so someone else googling this problem will come across them.
1) Meshgrid bombs. I don't know when this was fixed; it is definitely a bug in 8.5. Solution: use the meshgrid-like function on the interpolation & extrapolation palette instead, or upgrade to LV2009, which apparently works (thanks Underflow).
2) Griddata is defective in 8.5. This is badly documented. The 8.6 upgrade notes mention a problem with griddata and the "cubic" setting, but it is in fact also a problem with the DEFAULT LINEAR setting. Solutions, in descending order of kludginess: 1) pass the 'v4' flag, which does some kind of spline interpolation but does not have the bugs; 2) upgrade to at least version 8.6; 3) beat the NI engineers with reeds until they document bugs properly.
3) I was able to use the OpenCV remap function to do the actual transformation from one image to another. I tried just using the built-in MATLAB-style interp2 VI, but it choked on large arrays and gave me out-of-memory errors. On the other hand, it is fairly straightforward to map an IMAQ image to an IPL image, so this isn't that bad, except for the addition of the outside library.
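For reference, a minimal sketch of step 3 with the OpenCV 1.0 C API; the two map images are assumed to hold, for every destination pixel, the source coordinates produced by the regularized griddata fit:
#include <cv.h>

// mapX/mapY are CV_32FC1 images the size of the destination: for each
// destination pixel (x, y), the source image is sampled at
// (mapX(x,y), mapY(x,y)); pixels that map outside are filled with black.
void remapImage(IplImage* src, IplImage* dst, IplImage* mapX, IplImage* mapY)
{
    cvRemap(src, dst, mapX, mapY,
            CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS, cvScalarAll(0));
}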
