Matching two images using VideoGrabber with OpenCV

I'm trying to match one image from an ofVideoGrabber against another image that is on disk.
I'm following this tutorial.
The problem is that it uses the function imread, which takes a string (the path of a file) and is therefore incompatible with the ofVideoGrabber type.
How can I convert the openFrameworks ofVideoGrabber data type to the OpenCV cv::Mat type?

Is this what you want?
ofVideoGrabber vidGrabber;
...
// Wrap the grabber's pixel buffer in a cv::Mat header (no data is copied).
// Note: in openFrameworks 0.9+ getPixels() returns an ofPixels object, so use
// vidGrabber.getPixels().getData() to get the raw pointer instead.
cv::Mat frame(vidGrabber.getHeight(), vidGrabber.getWidth(), CV_8UC3, vidGrabber.getPixels());
I am not sure what format openFrameworks uses to pack pixels; OpenCV uses interleaved BGR channels, so you may have to swap the channels around.
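For example, a minimal sketch of that swap applied to the frame built above, assuming the grabber delivers RGB pixels (a common openFrameworks default):
// Reorder the interleaved channels in place so they match OpenCV's BGR layout.
cv::cvtColor(frame, frame, cv::COLOR_RGB2BGR);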

OK, this seems to be a question specific to openFrameworks. You can use the wonderful addon ofxCv from Kyle; it was written specifically as an alternative way of using the OpenCV library inside openFrameworks.
In ofxCv you can find methods such as toCv, for converting openFrameworks types to OpenCV ones, and toOf, for the inverse process. Have a look; it is well documented, comes with a lot of examples, and is well designed.
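For example, a minimal sketch using toCv inside an ofApp, assuming a recent openFrameworks (0.9+, where getPixels() returns an ofPixels object) with the ofxCv addon installed:
#include "ofMain.h"
#include "ofxCv.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber grabber;
    cv::Mat frame;

    void setup() {
        grabber.setup(640, 480);               // start the camera
    }

    void update() {
        grabber.update();
        if (grabber.isFrameNew()) {
            // toCv() wraps the grabber's pixels as a cv::Mat without copying,
            // ready to be passed to OpenCV's matching functions.
            frame = ofxCv::toCv(grabber.getPixels());
        }
    }
};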

Not exactly sure what difficulty you have with std::string versus a char array. A char array can be converted to a string like this:
char myarray[] = "my_file_name";
std::string str(myarray);

Related

Can I convert a sensor_msgs::PointCloud to pcl::PointCloud?

Can I convert a sensor_msgs::PointCloud to a pcl::PointCloud directly, or do I need to convert the sensor_msgs::PointCloud to a sensor_msgs::PointCloud2 first, before converting it to a pcl::PointCloud?
As far as I understand it, PCL can only work with sensor_msgs::PointCloud2 directly; see ros_pcl.
You can use a converter node as a middle-man between your PCL node and your publisher node: How to convert sensor_msgs::PointCloud to sensor_msgs::PointCloud2
or
If possible, use sensor_msgs/PointCloud2.
Take a look at laser-scan-multi-merger; it can be used to convert just a single laser-scan topic as well, and it converts it directly to PointCloud2.
The pass-through filter can be done on the pcl::PointCloud side as well, no?
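For reference, a minimal sketch of the two-step route (legacy PointCloud -> PointCloud2 -> pcl::PointCloud), assuming ROS1 with the sensor_msgs and pcl_conversions packages; handleLegacyCloud is just a hypothetical callback name:
#include <sensor_msgs/PointCloud.h>
#include <sensor_msgs/PointCloud2.h>
#include <sensor_msgs/point_cloud_conversion.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

void handleLegacyCloud(const sensor_msgs::PointCloud& legacy_cloud)
{
    // Step 1: legacy PointCloud -> PointCloud2
    sensor_msgs::PointCloud2 cloud2;
    sensor_msgs::convertPointCloudToPointCloud2(legacy_cloud, cloud2);

    // Step 2: PointCloud2 -> pcl::PointCloud<pcl::PointXYZ>
    pcl::PointCloud<pcl::PointXYZ> pcl_cloud;
    pcl::fromROSMsg(cloud2, pcl_cloud);
}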

Convert from pcl::PointCloud<pcl::PointXYZ> to pcl::PCLPointCloud2 (ROS Melodic)

I am trying to convert from pcl::PointCloud<pcl::PointXYZ> to pcl::PCLPointCloud2,
but the conversion returns an empty point cloud.
This is my code:
pcl::PCLPointCloud2 cloud_inliers_pcl2;
pcl::toPCLPointCloud2(cloud_inliers, cloud_inliers_pcl2);
I can print out the cloud "cloud_inliers", which is of type
pcl::PointCloud<pcl::PointXYZ>,
but the resulting pcl::PCLPointCloud2 has empty fields.
This is a bit late, but others searching for the same topic may find this useful.
To convert between PCLPointCloud2 and PointT types, you can use:
//Convert from PCLPointCloud2 to PointT
pcl::fromPCLPointCloud2(*pc2_cloud_ptr, *xyzrgb_cloud_ptr);
and
// Convert from PointT to PCLPointCloud2
pcl::toPCLPointCloud2(*xyzrgb_cloud_ptr, *pc2_cloud_ptr);
where my pointers to the point cloud are defined as:
pcl::PCLPointCloud2::Ptr pc2_cloud_ptr (new pcl::PCLPointCloud2);
and
pcl::PointCloud<pcl::PointXYZRGB>::Ptr xyzrgb_cloud_ptr (new pcl::PointCloud<pcl::PointXYZRGB>);
remember to include:
#include <pcl/conversions.h>
Note, however, that if you're using the Point Cloud Library (PCL) on its own rather than through the Robot Operating System (ROS), you don't need to convert at all; you can just use PointT types (e.g. pcl::PointCloud<pcl::PointXYZRGB>::Ptr). Some examples in the PCL tutorials use PCLPointCloud2 types, but I find that PointT types work just as well without needing to convert between the two. I guess those tutorials were written for older versions of PCL. The downsampling tutorial is a good example: https://pcl.readthedocs.io/projects/tutorials/en/latest/voxel_grid.html
I used a PointT type instead of the PCLPointCloud2 type used in the tutorial and it works fine in PCL 1.11.
Note: I've recently started learning to use the Point Cloud Library for my master's dissertation, so I'm happy to learn more about converting between data structures in PCL. The documentation seems insufficient, and it takes some trial and error to understand the data structures.
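Putting those pieces together, a minimal self-contained sketch of the round trip (the point values are just illustrative):
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/conversions.h>

int main()
{
    // Build a small PointT cloud to convert.
    pcl::PointCloud<pcl::PointXYZ>::Ptr xyz_cloud_ptr(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PointXYZ p;
    p.x = 1.0f; p.y = 2.0f; p.z = 3.0f;
    xyz_cloud_ptr->push_back(p);

    // PointT -> PCLPointCloud2
    pcl::PCLPointCloud2::Ptr pc2_cloud_ptr(new pcl::PCLPointCloud2);
    pcl::toPCLPointCloud2(*xyz_cloud_ptr, *pc2_cloud_ptr);

    // PCLPointCloud2 -> PointT (round trip back)
    pcl::PointCloud<pcl::PointXYZ> round_trip;
    pcl::fromPCLPointCloud2(*pc2_cloud_ptr, round_trip);

    return 0;
}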

What interpolation methods are used in cv::cvtColor() demosaicing?

I would like to reproduce the cv::cvtColor() function that converts a raw Bayer image into an RGB image. There are several different conversion codes, such as COLOR_BayerBG2BGR, COLOR_BayerBG2BGR_VNG, and COLOR_BayerBG2BGR_EA. However, I cannot find any information on what interpolation method each of those approaches uses. There should be some references to publications or patents. Does anyone know?
OpenCV, as the name already suggests, is open source. Just read the source code if you are interested in what is happening under the hood. Or copy it if you want to "reproduce" the function...
https://github.com/opencv/opencv
Usually it's just the average of the neighbouring values. What fancy interpolation method would you expect?
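For reference, a minimal sketch calling the three variants mentioned in the question (OpenCV 4 constant names; the input file name is just a placeholder):
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    // Load an 8-bit, single-channel raw Bayer image (placeholder file name).
    cv::Mat bayer = cv::imread("raw_bayer.png", cv::IMREAD_GRAYSCALE);

    cv::Mat bgr_default, bgr_vng, bgr_ea;
    cv::cvtColor(bayer, bgr_default, cv::COLOR_BayerBG2BGR);    // default demosaicing
    cv::cvtColor(bayer, bgr_vng, cv::COLOR_BayerBG2BGR_VNG);    // VNG = Variable Number of Gradients
    cv::cvtColor(bayer, bgr_ea, cv::COLOR_BayerBG2BGR_EA);      // EA = edge-aware variant
    return 0;
}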

How to write and read float data fast, without using strings?

I have a lot of float data generated from an image. I want to store it in a file, like XX.dat (as is usual in C), and then read it again later for further processing.
I have a method that represents the floats as NSString and writes them to a .txt file, but it is too slow. Is there some function that works like fwrite(*data, *pfile) and fread(*buf, *pfile) in C? Or some other idea?
Many thanks!
In iOS you can still make use of the standard low-level file (and socket, among other things) APIs. So you can use fopen(), fwrite(), fread(), etc. just as you would in any other C program.
This question has some examples of using the low-level file API on iOS: iPhone Unzip code
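A minimal sketch of that low-level route, writing the raw float bytes with fwrite() and reading them back with fread() (the file name and buffer size are just placeholders):
#include <cstdio>
#include <vector>

int main()
{
    std::vector<float> data(1024, 0.5f);        // e.g. values extracted from an image

    // Write the raw bytes of the whole buffer in one call.
    if (FILE* out = std::fopen("XX.dat", "wb")) {
        std::fwrite(data.data(), sizeof(float), data.size(), out);
        std::fclose(out);
    }

    // Read them back into a buffer of the same size.
    std::vector<float> buf(1024);
    if (FILE* in = std::fopen("XX.dat", "rb")) {
        std::fread(buf.data(), sizeof(float), buf.size(), in);
        std::fclose(in);
    }
    return 0;
}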
Another option to consider is writing your floats into something like an NSMutableData instance, and then writing that to a file. That will be faster than converting everything to strings (you'll get a binary file instead of a text one), though probably still not as fast as using the low-level APIs. And you'd probably have to use something like this to convert between floats and byte arrays.
If you are familiar with lower level access, you could mmap your file, and then access the data directly just as you would any allocated memory.

How to incorporate FANN with other C libraries?

I am using FANN, pyfann in particular, for signature recognition. Before I can use the AI, I first have to preprocess the image using imagelab, a collection of image-processing libraries like image.h, jpegio.h, etc. My problem is that I don't know how to combine the two so that I can use both libraries in a single program. I have to extract the signatures' features, like the number of pixels and the length and width, but I don't know how to feed this data to FANN. Any help? I really don't know where to start.
