Data transfer between LibTorch C++ and Eigen (Questions and Help)
Hello all,
I'm developing a data transfer tool for C++ linear algebra libraries, as you can see here:
https://github.com/andrewssobral/dtt
(considering two-dimensional arrays, i.e. matrices)
and I'm wondering if you can help me with the following code for data transfer between LibTorch and Eigen:
std::cout << "Testing LibTorch to Eigen:" << std::endl;
// LibTorch (tensor created on the CPU, since Eigen can only map host memory)
torch::Tensor T = torch::rand({3, 3});
std::cout << "LibTorch:" << std::endl;
std::cout << T << std::endl;
// Eigen: map the tensor's buffer without copying
float* data = T.data_ptr<float>();
Eigen::Map<Eigen::MatrixXf> E(data, T.size(0), T.size(1));
std::cout << "EigenMat:\n" << E << std::endl;
// re-check after modifying the Eigen view
E(0, 0) = 0;
std::cout << "EigenMat:\n" << E << std::endl;
std::cout << "LibTorch:" << std::endl;
std::cout << T << std::endl;
This is the output of the code:
--------------------------------------------------
Testing LibTorch to Eigen:
LibTorch:
0.6232 0.5574 0.6925
0.7996 0.9860 0.1471
0.4431 0.5914 0.8361
[ Variable[CPUFloatType]{3,3} ]
EigenMat (after data transfer):
0.6232 0.7996 0.4431
0.5574 0.986 0.5914
0.6925 0.1471 0.8361
# Modifying EigenMat, set element at (0,0) = 0
EigenMat:
0 0.7996 0.4431
0.5574 0.986 0.5914
0.6925 0.1471 0.8361
# Now, the LibTorch matrix was also modified (OK), but the rows and columns were switched.
LibTorch:
0.0000 0.5574 0.6925
0.7996 0.9860 0.1471
0.4431 0.5914 0.8361
[ Variable[CPUFloatType]{3,3} ]
Does anyone know what's happening?
Is there a better way to do this?
I also need to do the same for Armadillo, ArrayFire, and OpenCV (cv::Mat).
Thanks in advance!
The reason for the switched rows and columns is that LibTorch uses row-major (C-contiguous) storage, while Eigen uses column-major storage by default. I don't know if you can change the behavior of LibTorch, but with Eigen you can also use row-major storage, like so:
typedef Eigen::Matrix<float, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor> MatrixXf_rm; // same as MatrixXf, but with row-major memory layout
and then use it like this:
Eigen::Map<MatrixXf_rm> E(data, T.size(0), T.size(1));
Related
I want to use OpenCV's dnn::Net class inside openFrameworks.
I checked the OpenCV version bundled with openFrameworks by printing CV_VERSION etc., which was 3.1.0.
I looked at the OpenCV 3.1.0 documentation and found that 3.1.0 supports the dnn::Net class.
However, when I try to use cv::dnn::Net in openFrameworks, it says cv has no module called dnn.
Any kind of advice or insight would be really appreciated.
cout << "OpenCV version : " << CV_VERSION << endl;
cout << "Major version : " << CV_MAJOR_VERSION << endl;
cout << "Minor version : " << CV_MINOR_VERSION << endl;
cout << "Subminor version : " << CV_SUBMINOR_VERSION << endl;
cv::dnn::Net net;
Early opencv_dnn was not in the main OpenCV; it was in the opencv_contrib modules. In any case, I recommend using dnn from the latest OpenCV version: it fixed a lot of errors and added new NN architectures.
I am trying to learn how to use the GPU programs in OpenCV. I have built everything with CUDA, and if I run
cout << " Number of devices " << cv::gpu::getCudaEnabledDeviceCount() << endl;
I get the answer 1 device, so at least something seems to work. However, when I try the following piece of code, it just prints out the message and then nothing happens. It gets stuck on
cv::gpu::cvtColor(input_gpu, output_gpu, CV_BGR2GRAY);
Here is the code
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/gpu/gpu.hpp>

using std::cout;
using std::endl;

int main(void) {
    cv::Mat input = cv::imread("image.jpg");
    if (input.empty()) {
        cout << "Image Not Found" << endl;
        return -1;
    }
    cv::Mat output;

    // Declare the input and output GpuMat
    cv::gpu::GpuMat input_gpu;
    cv::gpu::GpuMat output_gpu;

    cout << "Number of devices: " << cv::gpu::getCudaEnabledDeviceCount() << endl;

    // Copy the input cv::Mat to the device.
    // Device memory is allocated automatically to match the input image.
    input_gpu.upload(input);

    // Convert the input image to grayscale on the GPU
    cv::gpu::cvtColor(input_gpu, output_gpu, CV_BGR2GRAY);

    // Copy the result from the GPU back to the host
    output_gpu.download(output);

    cv::imshow("Input", input);
    cv::imshow("Output", output);
    cv::waitKey(0);
    return 0;
}
I just found this issue and it seems to be a problem with the Maxwell architecture, but that post is over a year old. Has anybody else experienced the same problem? I am using Windows 7, Visual Studio 2013, and an Nvidia GeForce GTX 770.
/ Erik
OK, I do not really know what the problem was, but in CMake I changed CUDA_GENERATION to Kepler, which is the microarchitecture of my GPU, then recompiled, and now the code works as it should.
Interestingly, there were only Fermi and Kepler to choose from, so I do not know whether one will get problems with Maxwell.
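For reference, a sketch of how that option can be passed when reconfiguring the OpenCV build from the command line (assuming an out-of-source build directory; CUDA_GENERATION is the OpenCV CMake variable mentioned above):

```shell
# Reconfigure OpenCV to generate code for the Kepler architecture, then rebuild.
cmake -D WITH_CUDA=ON -D CUDA_GENERATION=Kepler ..
make -j4
```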
/ Erik
I just installed OpenCV. I use Visual Studio 2013, the free version.
Whenever I use cout to display a Mat object I get this:
Unhandled exception at 0x734ADE19 (msvcp100.dll) in myOpenCVStudy.exe: 0xC0000005: Access violation reading location 0x00000000.
My code is simply:
Mat M;
M.create(4, 4, CV_8UC(2));
cout << "M = " << endl << " " << M << endl << endl;
Any ideas?
Thanks
I want to write a vector<DMatch> to a file. I checked that it is possible to write vectors of some classes like KeyPoint, Mat, etc., but it is not possible with this one. Does anyone know how I can do it? The section of code is the following:
Mat ImDescriptors;
vector<KeyPoint> ImKeypoints;
FileStorage fs(Name, cv::FileStorage::WRITE);
fs << "C" << "{";
fs << "Descriptors" << ImDescriptors;
fs << "Keypoints" << ImKeypoints;
fs << "}";
It works OK, but when I add the following to this code:
vector<DMatch> good_matches;
fs << "GoodMatches" << good_matches;
I get the following error:
c:\opencv248\build\include\opencv2\core\operations.hpp(2713): error C2665: 'cv::write' : none of the 9 overloads could convert all the argument types
I am experimenting with the following code on the Raspberry Pi:
https://code.ros.org/trac/opencv/browser/trunk/opencv/samples/cpp/camshiftdemo.cpp?rev=4234
and I would like to apply the recommendations from this thread:
Getting better performance using OpenCV?
Specifically, how do I perform #1 - #4 from the link above?
Looking at this site:
http://opencv.willowgarage.com/documentation/cpp/reading_and_writing_images_and_video.html#cv-videocapture-set
I have attempted to do the following:
cap.set(CV_CAP_PROP_FRAME_WIDTH, 320);
cap.set(CV_CAP_PROP_FRAME_WIDTH, 240);
cout << "width = " << cap.get(CV_CAP_PROP_FRAME_WIDTH) << endl;
cout << "height = " << cap.get(CV_CAP_PROP_FRAME_HEIGHT) << endl;
However, the output always says 640 and 480 for width and height. Am I missing something big here?
I am using OpenCV 2.3.1