How to modify the dft function in OpenCV?

I need to modify some of the variables inside the dft function in OpenCV to make it suitable for my application.
Where can I find the dft source code?
I've tried C:\opencv243\build\include\opencv2\core.hpp but it only contains the declaration of dft:
//! performs forward or inverse 1D or 2D Discrete Fourier Transformation
CV_EXPORTS_W void dft(InputArray src, OutputArray dst, int flags=0, int nonzeroRows=0);
What is the procedure after source code modification? Do I have to give it a different name such as dft2()?
Where to save the new function?
I'm using Visual Studio 2010 and OpenCV 2.4.3 installed on Windows 7 (32-bit).
Please note that I'm new to OpenCV and just switched from MATLAB. Therefore if you are willing to help, I would be grateful if you could explain clearly.
In MATLAB I could simply right-click on the function and see the source file (for the open source functions only).
Thanks
Payam

The DFT implementation can be found in the dxt.cpp source file, located in $opencv2.3$\opencv\modules\core\src.
If you save your modified version under the same name you will overwrite that function and won't be able to use the original. If you only want your new behaviour, just change the code; if you also want to keep the original functionality, save it as something else. dft2 would suffice, but I suggest a more meaningful name that describes what you changed.
Either create new source files or just add it as a new function within dxt.cpp; you will also need to add the corresponding function declaration to the header.
In order to find this information I opened the OpenCV solution in Visual Studio and did a solution-wide search for DFT.
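For example, a minimal sketch of what a renamed copy might look like (dft2 is a hypothetical name, and the body shown here just forwards to the original; in practice you would paste in your edited copy of the cv::dft body from dxt.cpp):

// In modules/core/include/opencv2/core/core.hpp, next to the original declaration:
//! modified forward or inverse DFT (hypothetical name -- pick your own)
CV_EXPORTS_W void dft2(InputArray src, OutputArray dst, int flags=0, int nonzeroRows=0);

// In modules/core/src/dxt.cpp:
void cv::dft2(InputArray src, OutputArray dst, int flags, int nonzeroRows)
{
    // placeholder body: replace with your modified copy of the cv::dft implementation
    cv::dft(src, dst, flags, nonzeroRows);
}

After adding the declaration and definition, rebuild the OpenCV solution so your project links against the updated core library.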

Related

convert arm_compute::Image to cv::Mat

I have a lot of code that is based on OpenCV, but there are many ways in which the Arm Compute Library improves performance, so I'd like to integrate some Arm Compute Library code into my project. Has anyone tried converting between the two corresponding Image structures? If so, what did you do? Or is there a way to share a pointer to the underlying data buffer without needing to copy image data, just setting strides and flags appropriately?
I was able to configure an arm_compute::Image corresponding to my cv::Mat properties, allocate the memory, and point it to the data portion of my cv::Mat.
This way, I can process my image efficiently using arm_compute and maintain the opencv infrastructure I had for the rest of my project.
// cv::Mat mat defined and initialized above
arm_compute::Image image;

// Describe a U8 image with the same dimensions as the cv::Mat
image.allocator()->init(arm_compute::TensorInfo(mat.cols, mat.rows, arm_compute::Format::U8));
image.allocator()->allocate();

// Point the arm_compute image at the cv::Mat's pixel buffer (no copy)
image.allocator()->import_memory(arm_compute::Memory(mat.data));
Update for ACL 18.05 or newer
You need to implement the IMemoryRegion interface (see IMemoryRegion.h).
I have created a gist for that: link

OpenCV - Export a CvRTrees object to a file?

I was wondering if it was possible to export (write) a CvRTrees object (effectively the forest of trees) to a file, and then import that model into a different OpenCV session.
I ask as my training/test system is separate from the on-line system, and being able to move the model between the two would be a huge help.
I'm using OpenCV v 2.4.10
Thanks!!
I also had the same problem and it seems that the documentation is a bit poor.
I had a look at the source code and there are two functions which should solve your problem:
void CvRTrees::write( CvFileStorage* fs, const char* name ) const
void CvRTrees::read( CvFileStorage* fs, CvFileNode* fnode )
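Since CvRTrees derives from CvStatModel, you can also use the inherited save/load helpers, which open the file storage and call write/read for you. A minimal sketch (the filename and variable names are mine):

#include <opencv2/ml/ml.hpp>

CvRTrees forest;
// ... train the forest on the training/test system ...
forest.save("forest.yml");   // inherited from CvStatModel; wraps write()

// later, in the on-line system:
CvRTrees forest2;
forest2.load("forest.yml");  // wraps read()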

How to save a CV_32F type cv::Mat to a file without losing precision?

I'm using the cv::PCA class for a face recognition project. I convert photos of faces to one-row vectors, concatenate them into one big array, and feed it to PCA to acquire a new space in which I can try to use distances for recognition. The problem is that calculating the PCA from scratch each time I start the program is really time consuming (almost five minutes). I figured out that I need to save the calculated PCA to the hard drive and load it when I start the program again. And here is the problem. As far as I can see, all cv::Mat objects in cv::PCA are of type CV_32F.

When I try to save it as a normal picture, it's converted to an 8-bit image and some data is lost. When I use XML/YAML persistence, the generated file is really big, and data also seems to be lost (I saved it, loaded it into another structure, and ran cerr<<sum(pca_orginal.mean==pca_loaded.mean)[0]<<endl; to check how big the difference is).

Right now I'm trying to use std::ofstream::write with the std::ofstream::binary flag, and istream::read, but there are some type issues: out.write(_pca.mean.data, _pca.mean.rows*_pca.mean.cols*4 /*CV_32F->4*CV_8U*/); generates error: no matching function for call to ‘std::basic_ofstream<char, std::char_traits<char> >::write(uchar*&, int)’. I've also heard about the OpenEXR library and its file format, but I would rather avoid using additional libraries. I'm using OpenCV 2.3.1 and OpenCV 2.2.
edit:
I'm sorry for the confusion. I misread the cv::Mat operator== description and thought that it works the opposite way to how it actually does, so sum(pca_orginal.mean==pca_loaded.mean)[0] giving 0 is the worst possible result, not the best. It means that XML/YAML persistence works fine apart from generating huge files. Also, after using a C-style cast I was able to make the binary streams work, but the files generated are also big (over 150MB).
In the C interface, there are functions cvSave and cvLoad for saving arbitrary matrices. There are probably C++ interface counterparts, too.
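The C++ counterpart is cv::FileStorage, and if you prefer raw binary streams, a cast to const char* resolves the compiler error from the question. A rough sketch, assuming a trained cv::PCA named pca:

#include <fstream>
#include <opencv2/core/core.hpp>

void savePca(const cv::PCA& pca)
{
    // Option 1: FileStorage, the C++ counterpart of cvSave/cvLoad
    // (keeps full CV_32F precision, but the YAML file is verbose)
    cv::FileStorage fs("pca.yml", cv::FileStorage::WRITE);
    fs << "mean" << pca.mean << "eigenvectors" << pca.eigenvectors;
    fs.release();

    // Option 2: raw binary; the cast fixes the ofstream::write(uchar*&, int) error
    std::ofstream out("mean.bin", std::ofstream::binary);
    out.write(reinterpret_cast<const char*>(pca.mean.data),
              pca.mean.total() * pca.mean.elemSize());
}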

OpenCV with GigE Vision Cameras [closed]

I need to use OpenCV with a GigE Vision Ethernet Camera, but I couldn't find much useful information on how to do this, any pointers, documents and example code?
I need to read frames from the camera.
GigE Vision is a communication standard for a wide range of cameras. OpenCV now contains a wrapper for Prosilica GigE-based cameras (see CV_CAP_PVAPI).
But in general it's better to use the camera's native API to get the data and then use OpenCV to convert the returned data into an image; OpenCV contains a number of Bayer-pattern-to-RGB conversion routines.
The CvCapture module is convenient for testing, because it can seamlessly read from a camera or a file, but it's not really suitable for high-speed real-time vision.
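For the built-in wrapper route, something like the following sketch should work; the Bayer constant is only an example, since the actual pattern depends on the sensor:

#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

int main()
{
    // Open the first Prosilica GigE camera through OpenCV's PvAPI wrapper
    cv::VideoCapture cap(CV_CAP_PVAPI);
    if (!cap.isOpened()) return -1;

    cv::Mat raw, bgr;
    cap >> raw;
    // If the camera delivers a raw Bayer mosaic, demosaic it
    cv::cvtColor(raw, bgr, CV_BayerBG2BGR);
    return 0;
}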
You can do this! I used the Baumer GAPI SDK, which is a GenTL consumer. GenTL is a generic transport layer, a module within GenICam. You can read up on GenTL HERE. Using a GenTL consumer like Baumer's GAPI or Basler's API makes things a lot easier. They should work with any GigE camera.
I described a more comprehensive way to use Baumer's GAPI SDK in another answer HERE, so I will give a summary of what you need.
Visual Studio
OpenCV 3 for C++ (HERE is a YouTube tutorial on how)
Baumer GAPI SDK HERE
(optional) Test your camera and network interface card using Baumer's Camera Explorer program. You need to enable jumbo packets. You may also need to configure the camera and card IP addresses using Baumer's IPconfig program.
Set up your system variables. Refer to the programmer's guide in the Baumer GAPI SDK docs folder (should be in C:\Program Files\Baumer\Baumer GAPI SDK\Docs\Programmers_Guide), section 4.3.1.
Create a new C++ project in Visual Studio and configure the properties. Refer to section 4.4.1.
Go to the examples folder and look for 005_PixelTransformation example. It should be in (C:\Program Files\Baumer\Baumer GAPI SDK\Components\Examples\C++\src\0_Common\005_PixelTransformation). Copy the C++ file and paste it into the source directory of your new project.
Verify that you can build and compile. NOTE: You may find a problem with the part that adjusts camera parameters (exposure time, for example). You should see pixel values written to the screen for the first 6 pixels in the first 6 rows, for 8 images.
Add these #include statements to the top of the .cpp source file:
#include <opencv2\core\core.hpp>
#include <opencv2\highgui\highgui.hpp>
#include <opencv2\video\video.hpp>
Add these variable declarations at the beginning of the main() function
// OPENCV VARIABLE DECLARATIONS
cv::VideoWriter cvVideoCreator; // Create OpenCV video creator
cv::Mat openCvImage; // create an OpenCV image
cv::String videoFileName = "openCvVideo.avi"; // Define video filename
cv::Size frameSize = cv::Size(2048, 1088); // Define video frame size
cvVideoCreator.open(videoFileName, CV_FOURCC('D', 'I', 'V', 'X'), 20, frameSize, true); // set the codec type and frame rate
In the original 005_PixelTransformation.cpp file, line 569 has a for loop that loops over 8 images, which says for(int i = 0; i < 8; i++). We want to change this to run continuously. I did this by changing it to a while loop that says
while (pDataStream->GetIsGrabbing())
Within the while loop there's an if and else statement to check the image pixel format. After the else statement closing brace and before the pImage->Release(); statement, add the following lines
// OPEN CV STUFF
openCvImage = cv::Mat(pTransformImage->GetHeight(), pTransformImage->GetWidth(), CV_8U, (int *)pTransformImage->GetBuffer());
// create OpenCV window ----
cv::namedWindow("OpenCV window: Cam", CV_WINDOW_NORMAL);
//display the current image in the window ----
cv::imshow("OpenCV window : Cam", openCvImage);
cv::waitKey(1);
Make sure you choose the correct pixel format for your openCvImage object. I chose CV_8U because my camera is mono 8-bit.
When you build and compile, you should get an OpenCV window which displays the live feed from your camera!
Like I said, it can be done, because I've done it. If you run into problems, refer to the programmer's guide.
I use a uEye GigE camera (5240) with OpenCV. It works as a cv::VideoCapture out of the box. Nevertheless, using the API allows for much more control over the camera's parameters.
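For the out-of-the-box path, the usual capture loop is all it takes; a minimal sketch, assuming the camera shows up as device 0:

#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::VideoCapture cap(0);           // the uEye appears as a regular capture device
    if (!cap.isOpened()) return -1;

    cv::Mat frame;
    while (cap.read(frame))
    {
        cv::imshow("uEye", frame);
        if (cv::waitKey(1) == 27)      // Esc quits
            break;
    }
    return 0;
}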
You don't mention the type of the camera and your platform. On Windows, according to the OpenCV documentation:
Currently two camera interfaces can be
used on Windows: Video for Windows
(VFW) and Matrox Imaging Library (MIL)
It is unlikely that your GigE camera driver supports VFW, and for MIL you need the MIL library, which is not free AFAIK.
Most GigE cameras will have an API that you can use to capture images. In most cases the API will be based on GenICam. Probably your best approach is to use the API that came with your camera, and then convert the captured image to an IplImage structure (C) or Mat class (C++).
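The conversion itself usually amounts to wrapping the vendor's frame buffer in a cv::Mat header. A hypothetical helper for a mono8 frame (names and parameters are illustrative, not from any particular SDK):

#include <opencv2/core/core.hpp>

// Wrap a mono8 frame buffer delivered by a vendor GenICam API in a
// cv::Mat header without copying the pixels; the buffer must outlive
// the returned Mat.
cv::Mat wrapMono8Buffer(unsigned char* data, int width, int height, size_t strideBytes)
{
    return cv::Mat(height, width, CV_8UC1, data, strideBytes);
}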

Using existing tools, how can I extract into separate images the Luma, Cb, Cr channels of a JPEG image?

I am seeking a method to extract into separate images the Luma (Y), Cb (blue-difference chroma), and Cr (red-difference chroma) channels of the following JPEG images:
Seattle Police Department image #1
Seattle Police Department image #2
Seattle Police Department image #3
I would like results equivalent to this example from Wikipedia.
The output must be calculated directly from the JPEG Start-of-Scan (SOS) data and other data in the JPEG, rather than 'back calculated' from the RGB images output by a decompressor. The purpose of this task is to produce images which represent the 'raw data' as closely as possible.
Are there existing tools which can accomplish this? I am considering throwing together something using Python, PyImage, etc., but I am surprised my search for open-source or free tools has come up empty. I am aware there are many libraries which could help, although I am open to becoming aware of more.
For this question, the correct answer would be a tool chain of free and/or open-source tools which can do the job. Tools with source are preferred. These tools can run on any platform, but Linux or Win32 would be immediately useful.
Answer inspired by codelogic
Given the libjpeg implementation, change djpeg.c and wrppm.c.
wrppm.c:
189: case JCS_RGB:
190: + case JCS_YCbCr:
191: /* emit header for raw PPM format */
djpeg.c
560: case FMT_PPM:
561: + cinfo.quantize_colors = 0;
562: + cinfo.out_color_space = JCS_YCbCr;
Obviously, this is a quick hack, because I have a private copy where PPM output is always forced to YCbCr, but it works, and I thank you, codelogic, for your Stone Code Logic.
As suggested, your best bet would be to use libjpeg directly. Specifically, you might be able to set the jpeg_decompress_struct's out_color_space member to JCS_YCbCr instead of JCS_RGB and read the scanlines as usual. Here's some sample code (GPL).
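For reference, a bare-bones decode loop along those lines (error handling omitted; input.jpg is a placeholder name):

#include <cstdio>
#include <jpeglib.h>

int main()
{
    FILE* infile = std::fopen("input.jpg", "rb");
    if (!infile) return 1;

    jpeg_decompress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, infile);
    jpeg_read_header(&cinfo, TRUE);

    cinfo.out_color_space = JCS_YCbCr;   // ask the decoder for YCbCr instead of RGB
    jpeg_start_decompress(&cinfo);

    // Each scanline now holds interleaved Y, Cb, Cr triples; split
    // row[0][3*i], row[0][3*i+1], row[0][3*i+2] into three planes.
    JSAMPARRAY row = (*cinfo.mem->alloc_sarray)(
        (j_common_ptr)&cinfo, JPOOL_IMAGE,
        cinfo.output_width * cinfo.output_components, 1);
    while (cinfo.output_scanline < cinfo.output_height)
        jpeg_read_scanlines(&cinfo, row, 1);

    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    std::fclose(infile);
    return 0;
}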
Well, the obvious one is libjpeg.
