GLM: Multiplying vec3 by a 3x3 translation matrix gives weird results [duplicate]

I have a glm::mat3 constructed via the experimental glm::translate(mat3, vec2) function. However, using this matrix to transform a vec3 gives funky results. Here is a short program demonstrating the issue:
#define GLM_ENABLE_EXPERIMENTAL
#include <glm/glm.hpp>
#include <glm/gtx/matrix_transform_2d.hpp>
#include <iostream>
std::ostream& operator<< (std::ostream& o, const glm::vec3& vec){
    return o << '(' << vec.x << ',' << vec.y << ',' << vec.z << ")\n";
}
int main(){
    glm::mat3 translate = glm::translate(glm::mat3(1.), glm::vec2(-1.,-1.));
    std::cout << glm::vec3(10.,10.,1.);             //prints (10,10,1)
    std::cout << glm::vec3(10.,10.,1.) * translate; //prints (10,10,-19)
}
What is wrong with my matrix that causes it to modify the Z coordinate instead of translating X and Y?

Your operands are in the wrong order; you want translate * glm::vec3(10, 10, 1). GLM follows the column-vector convention, so vec * translate treats the vector as a row vector and dots it against each column of the matrix; the third column holds the translation (-1, -1, 1), which is where the stray -19 in the Z component comes from (10·(-1) + 10·(-1) + 1·1 = -19).
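For reference, a minimal sketch of the corrected program (assuming GLM's usual column-major, column-vector convention):

#define GLM_ENABLE_EXPERIMENTAL
#include <glm/glm.hpp>
#include <glm/gtx/matrix_transform_2d.hpp>
#include <iostream>
int main(){
    glm::mat3 translate = glm::translate(glm::mat3(1.), glm::vec2(-1.,-1.));
    // Column vector on the right: the translation column shifts x and y by -1
    // and leaves the homogeneous z = 1 untouched.
    glm::vec3 v = translate * glm::vec3(10.,10.,1.);
    std::cout << '(' << v.x << ',' << v.y << ',' << v.z << ")\n"; // prints (9,9,1)
}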

Related

Converting depth image of type CV_16UC1 in OpenCV

The input image is a depth image with CV_16UC1 encoding (depth values are in millimeters). I want to convert the depth values to meters. Later on, I need the depth values of a few pixels, so I use mat.at() to access the individual pixel locations. Finally, the depth value is multiplied by 0.001f to convert it to meters.
However, instead of multiplying the depth value after calling mat.at(), I want to do it the other way around, i.e., multiply the whole image by 0.001f and then use mat.at(). Unfortunately, this gives the wrong value. Sample code is shown below:
#include <iostream>
#include <opencv2/opencv.hpp>
int main(int argc, char* argv[])
{
    cv::Mat img_mm(480, 640, CV_16UC1);
    // just for debugging
    randu(img_mm, cv::Scalar(0), cv::Scalar(1234));
    // assign a fixed value at (0, 0) just for debugging
    int pixel_x = 0;
    int pixel_y = 0;
    img_mm.at<unsigned short>(pixel_y, pixel_x) = 123;
    // the first way
    auto depth_mm = img_mm.at<unsigned short>(pixel_y, pixel_x);
    auto depth_m = depth_mm * 0.001f;
    // the second way
    cv::Mat img_m = img_mm * 0.001f;
    float depth_unsigned_short = img_m.at<unsigned short>(pixel_y, pixel_x);
    float depth_float = img_m.at<float>(pixel_y, pixel_x);
    std::cout << "depth_mm " << depth_mm << ", depth_m " << depth_m << ", depth_unsigned_short " << depth_unsigned_short << ", depth_float " << depth_float << std::endl;
    return 0;
}
Below is the output:
depth_mm 123, depth_m 0.123, depth_unsigned_short 0, depth_float 9.18355e-41
I was expecting to see 0.123 from the second way as well, but both depth_unsigned_short and depth_float return wrong values.
You should use OpenCV's matrix conversion utility, cv::Mat::convertTo.
Something like:
cv::Mat f32Mat;
img_mm.convertTo(f32Mat, CV_32FC1, 0.001);
should do the trick.
The problem with the second way is that img_mm * 0.001f does not change the element type: img_m is still CV_16UC1, so each scaled value is rounded back to an unsigned short (0.123 becomes 0), and the following statement then reinterprets those 16-bit integers as floats, producing garbage like 9.18355e-41:
float depth_float = img_m.at<float>(pixel_y, pixel_x);
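A minimal end-to-end sketch of the convertTo route (names match the question; the scale factor folds the mm-to-m conversion into the type conversion):

#include <iostream>
#include <opencv2/opencv.hpp>
int main()
{
    cv::Mat img_mm(480, 640, CV_16UC1);
    img_mm.at<unsigned short>(0, 0) = 123;
    cv::Mat img_m;
    // convert CV_16UC1 -> CV_32FC1, multiplying every element by 0.001
    img_mm.convertTo(img_m, CV_32FC1, 0.001);
    std::cout << img_m.at<float>(0, 0) << std::endl; // prints 0.123
    return 0;
}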

values changing while saving or reading float and double values to yaml file using opencv

Setup: OpenCV 3.2 on Ubuntu 18.04.
I save an int, a float and a double value to a YAML file. The values of the float and the double in the YAML file are different from the values written by the program.
#include <opencv2/core/core.hpp>
int main(int ac, char** av)
{
    cv::FileStorage fs("file.yaml", cv::FileStorage::WRITE); // create FileStorage object
    int a = 1;
    float b = 0.2;
    double c = 0.3;
    fs << "a" << a;
    fs << "b" << b;
    fs << "c" << c;
    fs.release(); // releasing the file
    return 0;
}
The file.yaml reads
%YAML:1.0
---
a: 1
b: 2.0000000298023224e-01
c: 2.9999999999999999e-01
Also, when I read the above YAML file using the code below, I get altered values for the float and the double.
#include <opencv2/core/core.hpp>
#include <iostream>
int main(int ac, char** av)
{
    cv::FileStorage fs("file.yaml", cv::FileStorage::READ); // create FileStorage object
    int a; float b; double c;
    fs["a"] >> a;
    fs["b"] >> b;
    fs["c"] >> c;
    std::cout << "a, b, c=" << a << "," << b << "," << c << std::endl;
    fs.release(); // releasing the file
    return 0;
}
Output of the above program and the saved YAML file:
a, b, c=1,0.2,0.3
My question is: how do I read and write float and double values from/to YAML files using OpenCV without value alteration?

The values of the float and the double in the YAML file are different from the values written by the program.
float and double are represented using a floating-point format with a binary base. The values 0.2 and 0.3 are not representable in these formats. The nearest representable values are 0.20000000298023223876953125 and 0.299999999999999988897769753748434595763683319091796875.
The numerals written to the file, “2.0000000298023224e-01” and “2.9999999999999999e-01” differ from the represented values (shown above) but contain sufficiently many digits to uniquely identify the represented values. When read back, the resulting float and double values should equal the values shown above.
Also, when I read the above YAML file using the code below, I get altered values for the float and the double.
What do you mean by “altered values”? According to the question, the output of the “code below” is “a, b, c=1,0.2,0.3”. While 0.2 and 0.3 differ from the represented values shown above, they are what we expect to be output by default when those values are sent to std::cout. Most likely, what has happened is that, when “2.0000000298023224e-01” was read from the file, 0.20000000298023223876953125 was stored in the float b, and writing this to std::cout produced “0.2”, as expected, and similarly for double c and 0.3. What do you believe differs from this?
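A quick way to see that nothing was altered in the round trip is to print the stored values at full precision; a minimal sketch:

#include <iomanip>
#include <iostream>
int main()
{
    float b = 0.2f; // nearest float to 0.2
    double c = 0.3; // nearest double to 0.3
    std::cout << std::setprecision(30) << b << std::endl; // 0.20000000298023223876953125
    std::cout << std::setprecision(60) << c << std::endl; // 0.299999999999999988897769753748434595763683319091796875
}

Since the numerals in the file uniquely identify these values, reading b and c back from the YAML file and printing them the same way should produce identical digits.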

How to use magnitude and absdiff OpenCV functions to compute distances?

How can I use magnitude and absdiff? I read the explanation in the documentation, but every time I get an error because I do not understand exactly what the input and output arrays should be. Should they be vector, Mat or Scalar? I tried a few things and failed, same with cartToPolar. Can anyone give me a small snippet, since I didn't find any examples in the documentation?
More precisely, I have the vector vector<Vec4f> lines; that contains the start and end points of 30 lines, so I want to use magnitude to find the length of each line. I learned how to use norm in a for loop, but I would like to use magnitude, so I tried:
double x;
length = magnitude(lines[i][2] - lines[i][0], lines[i][3] - lines[i][1], x);
but it doesn't work. I tried to define x as a 1-element array/vector, but I couldn't.
You already saw how to use norm to compute the distance:
Point2f a = ...;
Point2f b = ...;
double length = norm(a - b); // L2 (Euclidean) by default; NORM_L1 gives the Manhattan distance
You can also work on all points at once. You first need to convert the coordinates from vector to matrix form, then it's just simple math:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main()
{
    vector<Vec4f> lines{ { 1, 2, 4, 6 }, { 5, 7, 1, 3 }, { 11, 12, 12, 11 } };
    // one row per line: [x1, y1, x2, y2]
    Mat1f coordinates = Mat4f(lines).reshape(1);
    Mat1f diff_x = coordinates.col(0) - coordinates.col(2);
    Mat1f diff_y = coordinates.col(1) - coordinates.col(3);
    cout << "coordinates: \n" << coordinates << "\n\n";
    cout << "diff_x: \n" << diff_x << "\n\n";
    cout << "diff_y: \n" << diff_y << "\n\n";
    cout << endl;
    // sqrt((x2 - x1)^2 + (y2 - y1)^2)
    Mat1f euclidean_distance;
    magnitude(diff_x, diff_y, euclidean_distance);
    cout << "euclidean_distance: \n" << euclidean_distance << "\n\n";
    // abs(x2 - x1) + abs(y2 - y1)
    Mat1f manhattan_distance = abs(diff_x) + abs(diff_y);
    cout << "manhattan_distance: \n" << manhattan_distance << "\n\n";
    // Another way to compute the L1 distance, with absdiff
    // abs(x2 - x1) + abs(y2 - y1)
    Mat1f points1 = coordinates(Range::all(), Range(0, 2));
    Mat1f points2 = coordinates(Range::all(), Range(2, 4));
    Mat1f other_manhattan_distance;
    absdiff(points1, points2, other_manhattan_distance);
    other_manhattan_distance = other_manhattan_distance.col(0) + other_manhattan_distance.col(1);
    cout << "other_manhattan_distance: \n" << other_manhattan_distance << "\n\n";
    return 0;
}
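Since the question also asks about cartToPolar: it takes the same x/y difference matrices and returns both the magnitude and the angle of each vector in one call. A short sketch reusing diff_x and diff_y from the program above:

// length comes out identical to euclidean_distance computed by magnitude()
Mat1f length, angle;
cartToPolar(diff_x, diff_y, length, angle); // angle is in radians by default
cout << "length: \n" << length << "\n\n";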

OpenCV PCA project returning cv::Mat with only 1 column

I am trying to compress some image descriptors and running into some difficulties; take this example:
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <opencv2/core.hpp>
int main(int argc, char *argv[])
{
    // create a random descriptor, 1 row x 10 cols, and populate it with some data
    const cv::Mat A = (cv::Mat_<float>(1,10) << 0, -1.5f, -2.f, -3.5f, -3.f, -5.f, -2.f, -2.f, 0, -1.f);
    std::cout << "DESCRIPTOR A : " << std::endl << A << std::endl;
    // create PCA and pass descriptor data with no mean and 6 max components
    cv::PCA pca(A, cv::Mat(), CV_PCA_DATA_AS_ROW, 6);
    // project A to compressed descriptor B (expecting 1 row x 6 cols)
    const cv::Mat B = pca.project(A);
    std::cout << "DESCRIPTOR B : " << std::endl << B << std::endl;
    return EXIT_SUCCESS;
}
Output:
DESCRIPTOR A :
[0, -1.5, -2, -3.5, -3, -5, -2, -2, 0, -1]
DESCRIPTOR B :
[0]
I get that PCA attempts to transform the coordinate system to represent the values with fewer dimensions. Is this done using all the available descriptors, i.e. does the PCA class need to be constructed with all the descriptors? If so, what if they are unknown?
Note: using OpenCV 3.0.0
Additional note: in my actual code the descriptor has dimensions 1 row x 1000 cols, but it still outputs [0]
The PCA finds a rotation that puts the maximum spread between all samples in the first dimension.
It can't do anything meaningful with one sample alone.
I suggest a visit to the library.
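To illustrate, a hypothetical sketch with many descriptors stacked as rows (random data stands in for real descriptors here); fitted on a whole sample set, the projection has the expected six columns:

#include <iostream>
#include <opencv2/core.hpp>
int main()
{
    // 100 stand-in descriptors, one per row, 10 dimensions each
    cv::Mat descriptors(100, 10, CV_32F);
    cv::randu(descriptors, cv::Scalar(0), cv::Scalar(1));
    // fit the PCA on the whole set, keeping at most 6 principal components
    cv::PCA pca(descriptors, cv::Mat(), cv::PCA::DATA_AS_ROW, 6);
    const cv::Mat compressed = pca.project(descriptors);
    std::cout << compressed.rows << " x " << compressed.cols << std::endl; // 100 x 6
    return 0;
}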

How can I port code that uses numpy.fft.rfft from python to C++?

I have code written in Python. It computes the positive-frequency part of the FFT of a real input using numpy. I need to port this code to C++.
import numpy as np
interp=[131.107, 133.089, 132.199, 129.905, 132.977]
res=np.fft.rfft(interp)
print res
Result of rfft is [ 659.27700000+0.j, 1.27932533-1.4548977j, -3.15032533+2.1158917j]
I tried to use OpenCV for 1D DFT:
std::vector<double> fft;
std::vector<double> interpolated = {131.107, 133.089, 132.199, 129.905, 132.977};
cv::dft(interpolated, fft);
for (auto it = fft.begin(); it != fft.end(); ++it) {
    std::cout << *it << ' ';
}
std::cout << std::endl;
Result of cv::dft is {1.42109e-14, -127.718, -94.705, 6.26856, 23.0231}, which is very different from numpy.fft.rfft. It looks strange that the DC value (the zeroth element) is near zero for every input after OpenCV's dft.
Using the FFTW3 library gave me the same results as OpenCV:
std::vector<double> interpolated = {131.107, 133.089, 132.199, 129.905, 132.977};
fftw_complex* out = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * 3);
fftw_plan plan = fftw_plan_dft_r2c_1d(interpolated.size(), interpolated.data(), out, FFTW_ESTIMATE);
fftw_execute(plan);
fftw_destroy_plan(plan);
// r2c output has n/2 + 1 = 3 complex values for n = 5 real inputs
for (size_t i = 0; i < interpolated.size() / 2 + 1; ++i) {
    std::cout << " (" << out[i][0] << ", " << out[i][1] << ")";
}
fftw_free(out);
It prints: (1.42109e-14, 0) (-127.718, -94.705) (6.26856, 23.0231).
Why do I get different results of dft in C++ and in python? What am I doing wrong?
Thanks!
I'm using gcc 4.6 at the moment, which doesn't have C++11, so I tried this version of your code, using OpenCV 2.4.8:
#include <iostream>
#include "opencv2/core/core.hpp"
int main(int argc, char *argv[])
{
    double data[] = {131.107, 133.089, 132.199, 129.905, 132.977};
    std::vector<double> interpolated(data, data + sizeof(data) / sizeof(double));
    std::vector<double> fft;
    cv::dft(interpolated, fft);
    for (std::vector<double>::const_iterator it = fft.begin(); it != fft.end(); ++it) {
        std::cout << *it << ' ';
    }
    std::cout << std::endl;
}
The output
659.277 1.27933 -1.4549 -3.15033 2.11589
agrees with numpy and with the cv2 python module:
In [55]: np.set_printoptions(precision=3)
In [56]: x
Out[56]: array([ 131.107, 133.089, 132.199, 129.905, 132.977])
In [57]: np.fft.rfft(x)
Out[57]: array([ 659.277+0.j , 1.279-1.455j, -3.150+2.116j])
In [58]: cv2.dft(x)
Out[58]:
array([[ 659.277],
[ 1.279],
[ -1.455],
[ -3.15 ],
[ 2.116]])
I don't know why your code is not working, so I guess this is more of a long comment than an answer.
Please check the documentation. An rfft routine transforms a vector of real inputs into complex Fourier coefficients. Exploiting the conjugate symmetry of the Fourier coefficients of a real signal, the output can be packed into an array of the same length as the input.
The generic fft and dft methods transform vectors of complex numbers into vectors of complex coefficients. Older codes use arrays of doubles for input and output, in which the real and imaginary parts of the complex numbers alternate, i.e., one array of even length. What happens with odd input lengths may be undocumented behavior.
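In OpenCV's case, cv::dft on real 1D input uses exactly such a packed layout by default, called CCS (complex-conjugate-symmetric), which is consistent with the output shown in the other answer. A sketch unpacking it into the (re, im) pairs numpy.fft.rfft prints, assuming an odd input length like the five-point example above (an even length adds a lone real Nyquist term at the end):

#include <iostream>
#include <opencv2/core/core.hpp>
#include <vector>
int main()
{
    std::vector<double> input = {131.107, 133.089, 132.199, 129.905, 132.977};
    std::vector<double> ccs;
    cv::dft(input, ccs); // CCS packing for odd n: [Re0, Re1, Im1, Re2, Im2]
    std::cout << '(' << ccs[0] << ", 0)"; // the DC term is purely real
    for (std::size_t i = 1; i + 1 < ccs.size(); i += 2) {
        std::cout << " (" << ccs[i] << ", " << ccs[i + 1] << ')';
    }
    std::cout << std::endl; // (659.277, 0) (1.27933, -1.4549) (-3.15033, 2.11589)
}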
