I am trying to use Eigen's unsupported FFT module with the FFTW backend. Specifically, I want to do a 2D FFT. Here's my code:
void fft2(Eigen::MatrixXf *matIn, Eigen::MatrixXcf *matOut)
{
const int nRows = matIn->rows();
const int nCols = matIn->cols();
Eigen::FFT< float > fft;
for (int k = 0; k < nRows; ++k) {
Eigen::VectorXcf tmpOut(nCols); // a row has nCols entries
fft.fwd(tmpOut, matIn->row(k));
matOut->row(k) = tmpOut;
}
for (int k = 0; k < nCols; ++k) {
Eigen::VectorXcf tmpOut(nRows); // a column has nRows entries
fft.fwd(tmpOut, matOut->col(k));
matOut->col(k) = tmpOut;
}
}
I have 2 problems:
First, I get a segmentation fault when using this code on some matrices. The error doesn't happen for every matrix, and I guess it's related to an alignment error. I use the function in the following way:
Eigen::MatrixXcf matFFT(matFloat.rows(), matFloat.cols());
fft2(&matFloat, &matFFT);
where matFloat can be any real matrix. Oddly, the code crashes only when I compute the FFT over the 2nd dimension, never over the first one. This doesn't happen with the kissFFT backend.
Second, when the function does work, I don't get the same result as Matlab (which uses FFTW). E.g.:
Input matrix:
[2, 1, 2]
[3, 2, 1]
[1, 2, 3]
Eigen gives:
[ (0,5), (0.5,0.86603), (0,0.5)]
[ (-4.3301,-2.5), (-1,-1.7321), (0.31699,-1.549)]
[ (-1.5,-0.86603), (2,3.4641), (2,3.4641)]
Matlab gives:
17 + 0i 0.5 + 0.86603i 0.5 - 0.86603i
-1 + 0i -1 - 1.7321i 2 - 3.4641i
-1 + 0i 2 + 3.4641i -1 + 1.7321i
Only the central part is the same.
Any help would be welcome.
I failed to activate EIGEN_FFTW_DEFAULT in my first solution; activating it reveals an error in Eigen's FFTW-support implementation. The following works:
#define EIGEN_FFTW_DEFAULT
#include <iostream>
#include <unsupported/Eigen/FFT>
int main(int argc, char *argv[])
{
Eigen::MatrixXf A(3,3);
A << 2,1,2, 3,2,1, 1,2,3;
const int nRows = A.rows();
const int nCols = A.cols();
std::cout << A << "\n\n";
Eigen::MatrixXcf B(3,3);
Eigen::FFT< float > fft;
for (int k = 0; k < nRows; ++k) {
Eigen::VectorXcf tmpOut(nCols); // a row has nCols entries
fft.fwd(tmpOut, A.row(k));
B.row(k) = tmpOut;
}
std::cout << B << "\n\n";
Eigen::FFT< float > fft2; // Workaround: Using the same FFT object for a real and a complex FFT seems not to work with FFTW
for (int k = 0; k < nCols; ++k) {
Eigen::VectorXcf tmpOut(nRows); // a column has nRows entries
fft2.fwd(tmpOut, B.col(k));
B.col(k) = tmpOut;
}
std::cout << B << '\n';
}
I get this output:
2 1 2
3 2 1
1 2 3
(17,0) (0.5,0.866025) (0.5,-0.866025)
(-1,0) (-1,-1.73205) (2,-3.4641)
(-1,0) (2,3.4641) (-1,1.73205)
Which is the same as your Matlab result.
N.B.: FFTW seems to support 2D real-to-complex FFTs natively (without composing individual 1D FFTs). This is likely more efficient:
fftwf_plan fftwf_plan_dft_r2c_2d(int n0, int n1,
float *in, fftwf_complex *out, unsigned flags);
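A minimal sketch of what using that interface might look like (sizes and fill values are placeholders; this assumes the single-precision library, linked with -lfftw3f):
#include <fftw3.h>
int main()
{
    const int n0 = 3, n1 = 3;  // rows, cols
    float *in = fftwf_alloc_real(n0 * n1);
    // r2c output holds only n0 x (n1/2 + 1) values; the rest is redundant by symmetry
    fftwf_complex *out = fftwf_alloc_complex(n0 * (n1 / 2 + 1));
    fftwf_plan p = fftwf_plan_dft_r2c_2d(n0, n1, in, out, FFTW_ESTIMATE);
    for (int i = 0; i < n0 * n1; ++i)
        in[i] = (float)i;  // fill after planning; FFTW_MEASURE would clobber buffers
    fftwf_execute(p);
    fftwf_destroy_plan(p);
    fftwf_free(in);
    fftwf_free(out);
    return 0;
}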
Related
I would like to find the median color in a masked area in OpenCV. Does OpenCV have a function that takes an image and a mask, and puts only the pixels from the image where mask != 0 into an array or Mat?
I don't know of any OpenCV function that creates a vector from masked values; I have written my own function to do that in the past, and you could do the same.
Alternatively, you could calculate the histogram and find the median from that, if your data is uint8.
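If it helps, here's a minimal sketch of the histogram route, assuming a single-channel 8-bit image (for a color image you would repeat it per channel); maskedMedian is a made-up name:
#include <opencv2/opencv.hpp>
// median of the pixels of img where mask != 0, via a 256-bin histogram
int maskedMedian(const cv::Mat& img, const cv::Mat& mask) {
    int channels[] = {0};
    int histSize[] = {256};
    float range[] = {0, 256};
    const float* ranges[] = {range};
    cv::Mat hist;
    cv::calcHist(&img, 1, channels, mask, hist, 1, histSize, ranges);
    const int total = cv::countNonZero(mask);
    int seen = 0;
    for (int v = 0; v < 256; ++v) {
        seen += (int)hist.at<float>(v);
        if (2 * seen >= total)
            return v;  // first bin at or past the midpoint
    }
    return -1;  // empty mask
}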
You can use the following method of the Mat class to copy all the masked pixels into another Mat:
Mat rst;
img.copyTo(rst, mask);
Note that rst keeps the full image geometry; pixels where mask == 0 stay at their initial value (zero for a freshly created rst), so this does not compact the masked pixels into a vector.
This post is quite old now, but since there is still no such function available in OpenCV, I implemented it for my app. Maybe it will be useful for someone...
cv::Mat extractMaskedData(cv::Mat data, cv::Mat mask)
{
CV_Assert(mask.size()==data.size());
CV_Assert(mask.type()==CV_8UC1);
const bool isContinuous = data.isContinuous() && mask.isContinuous();
const int nRows = isContinuous ? 1 : data.rows;
const int nCols = isContinuous ? data.rows * data.cols : data.cols;
const size_t pixelSize = data.elemSize(); // bytes per pixel (channels * bytes per channel)
cv::Mat extractedData(0, 1, data.type());
uint8_t* m;
uint8_t* d;
for (int i = 0; i < nRows; ++i) {
m = mask.ptr<uint8_t>(i);
d = data.ptr(i);
for (int j = 0; j < nCols; ++j) {
if(m[j]) {
const cv::Mat pixelData(1, 1, data.type(), d + j * pixelSize);
extractedData.push_back(pixelData);
}
}
}
return extractedData;
}
It returns cv::Mat(n, 1, data.type()), where n is the number of non-zero elements in mask.
It could be optimised by using an image-type-specific pointer for d (e.g. cv::Vec3f* for CV_32FC3) instead of the generic uint8_t* d together with const cv::Mat pixelData(1, 1, data.type(), d + j * pixelSize);.
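For the original median question, a hypothetical usage could then be (assuming single-channel CV_8UC1 data):
cv::Mat vals = extractMaskedData(img, mask);                      // n x 1 column of masked pixels
cv::sort(vals, vals, cv::SORT_EVERY_COLUMN | cv::SORT_ASCENDING);
uchar median = vals.at<uchar>(vals.rows / 2);                     // lower median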
I took the code in https://gist.github.com/kyrs/9adf86366e9e4f04addb (which takes an OpenCV cv::Mat image as input and converts it to a tensor) and I use it to label images with the model inception_v3_2016_08_28_frozen.pb described in the Tensorflow tutorial (https://www.tensorflow.org/tutorials/image_recognition#usage_with_the_c_api). Everything worked fine when using a batch size of 1. However, when I increase the batch size to 2 (or greater), the size of finalOutput (which is of type std::vector<tensorflow::Tensor>) is zero.
Here's the code to reproduce the error:
// Only for VisualStudio
#define COMPILER_MSVC
#define NOMINMAX
#include <string>
#include <iostream>
#include <fstream>
#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/framework/tensor.h"
int batchSize = 2;
int height = 299;
int width = 299;
int depth = 3;
int mean = 0;
int stdev = 255;
// Set image paths
cv::String pathFilenameImg1 = "D:/IMGS/grace_hopper.jpg";
cv::String pathFilenameImg2 = "D:/IMGS/lenna.jpg";
// Set model paths
std::string graphFile = "D:/Tensorflow/models/inception_v3_2016_08_28_frozen.pb";
std::string labelfile = "D:/Tensorflow/models/imagenet_slim_labels.txt";
std::string InputName = "input";
std::string OutputName = "InceptionV3/Predictions/Reshape_1";
void read_prepare_image(cv::String pathImg, cv::Mat &imgPrepared) {
// Read Color image:
cv::Mat imgBGR = cv::imread(pathImg);
// Now we resize the image to fit Model's expected sizes:
cv::Size s(height, width);
cv::Mat imgResized;
cv::resize(imgBGR, imgResized, s, 0, 0, cv::INTER_CUBIC);
// Convert the image to float and normalize data:
imgResized.convertTo(imgPrepared, CV_32FC1);
imgPrepared = imgPrepared - mean;
imgPrepared = imgPrepared / stdev;
}
int main()
{
// Read and prepare images using OpenCV:
cv::Mat img1, img2;
read_prepare_image(pathFilenameImg1, img1);
read_prepare_image(pathFilenameImg2, img2);
// creating a Tensor for storing the data
tensorflow::Tensor input_tensor(tensorflow::DT_FLOAT, tensorflow::TensorShape({ batchSize, height, width, depth }));
auto input_tensor_mapped = input_tensor.tensor<float, 4>();
// Copy images data into the tensor:
for (int b = 0; b < batchSize; ++b) {
const float * source_data;
if (b == 0)
source_data = (float*)img1.data;
else
source_data = (float*)img2.data;
for (int y = 0; y < height; ++y) {
const float* source_row = source_data + (y * width * depth);
for (int x = 0; x < width; ++x) {
const float* source_pixel = source_row + (x * depth);
const float* source_B = source_pixel + 0;
const float* source_G = source_pixel + 1;
const float* source_R = source_pixel + 2;
input_tensor_mapped(b, y, x, 0) = *source_R;
input_tensor_mapped(b, y, x, 1) = *source_G;
input_tensor_mapped(b, y, x, 2) = *source_B;
}
}
}
// Load the graph:
tensorflow::GraphDef graph_def;
ReadBinaryProto(tensorflow::Env::Default(), graphFile, &graph_def);
// create a session with the graph
std::unique_ptr<tensorflow::Session> session_inception(tensorflow::NewSession(tensorflow::SessionOptions()));
session_inception->Create(graph_def);
// run the loaded graph
std::vector<tensorflow::Tensor> finalOutput;
session_inception->Run({ { InputName,input_tensor } }, { OutputName }, {}, &finalOutput);
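// Aside (an assumption about the failure mode): Session::Run returns a
// tensorflow::Status, which is discarded above. Capturing it would surface
// the underlying error instead of a silently empty finalOutput, e.g.:
// tensorflow::Status s = session_inception->Run({ { InputName, input_tensor } }, { OutputName }, {}, &finalOutput);
// if (!s.ok()) std::cerr << s.ToString() << std::endl;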
// Get Top 5 classes:
std::cerr << "final output size = " << finalOutput.size() << std::endl;
tensorflow::Tensor output = std::move(finalOutput.at(0));
auto scores = output.flat<float>();
std::cerr << "scores size=" << scores.size() << std::endl;
std::ifstream label(labelfile);
std::string line;
std::vector<std::pair<float, std::string>> sorted;
for (unsigned int i = 0; i <= 1000; ++i) { // 1001 entries: background + 1000 ImageNet classes
std::getline(label, line);
sorted.emplace_back(scores(i), line);
}
std::sort(sorted.begin(), sorted.end());
std::reverse(sorted.begin(), sorted.end());
std::cout << "size of the sorted file is " << sorted.size() << std::endl;
for (unsigned int i = 0; i< 5; ++i)
std::cout << "The output of the current graph has category " << sorted[i].second << " with probability " << sorted[i].first << std::endl;
}
Am I missing anything? Any ideas?
Thanks in advance!
I had the same problem. When I changed to the model used in https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/benchmark (a different version of Inception), bigger batch sizes worked correctly.
Note that you need to change the input size from 299,299,3 to 224,224,3, and the input and output layer names to input:0 and output:0.
Probably the graph in the protobuf file had a fixed batch size of 1, and I was only changing the shape of the input, not the graph. The graph has to accept a variable batch size by setting the shape to (None, height, width, channels). This is done when the graph is frozen; since the graph we have is already frozen, there is no way to change the batch size at this point.
I want to know if A and B are relatively prime using the Euclidean algorithm. A and B are large numbers that cannot be stored in any data type (in C), so they are stored in a linked list. The algorithm uses the % operator. My question is: is there a way to compute A mod B without directly using the % operator? I found out that % distributes over addition:
(a1 + a2) % B = ((a1 % B) + (a2 % B)) % B, e.g. 17 % 5 = ((10 % 5) + (7 % 5)) % 5 = 2.
But the problem still persists, because I will still be doing % B operations.
You need to calculate a % b without the % operator. By definition, the modulo operation finds the remainder after division of one number by another, so:
In python:
# mod = a % b
def mod(a, b):
    return a - b * int(a / b)
>>> x = [mod(i,j) for j in range(1,100) for i in range(1,100)]
>>> y = [i % j for j in range(1,100) for i in range(1,100)]
>>> x == y
True
In C++:
#include <iostream>
using namespace std;
unsigned int mod(unsigned int a, unsigned int b) {
    // a / b already truncates for unsigned ints, so no floor() is needed
    return a - b * (a / b);
}
int main() {
    // check the identity against the built-in % over a range of values
    for (unsigned int i = 1; i <= 1000; ++i)
        for (unsigned int j = 1; j <= 1000; ++j)
            if (mod(i, j) != i % j)
                cout << "Something wrong!!";
    cout << "Verified for 1 <= a, b <= 1000!";
    return 0;
}
Verified for 1 <= a, b <= 1000!
Now just extend the same idea to your big numbers!
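To make that concrete for the linked-list representation, here is a minimal sketch, assuming A is stored as decimal digits (most significant first) and assuming B fits in a machine word; for a big B you would replace the scalar arithmetic with big-number compare-and-subtract. modDigits is a made-up helper name:
#include <iostream>
#include <list>
// reduce A mod b one digit at a time, using (10*r + d) % b == ((10*r) % b + d % b) % b
unsigned long long modDigits(const std::list<int>& digitsOfA, unsigned long long b) {
    unsigned long long r = 0;
    for (int d : digitsOfA)
        r = (r * 10 + d) % b;  // % only ever operates on machine-sized values here
    return r;
}
int main() {
    std::list<int> a = {1, 2, 3, 4, 5, 6, 7, 8, 9};  // A = 123456789
    std::cout << modDigits(a, 997) << '\n';          // same as 123456789 % 997
}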
I'm trying to run the k-means algorithm on n-dimensional data.
I have N points, and each point has { x, y, z, ..., n } features.
My code is the following:
cv::Mat points(N, n, CV_32F);
// fill the data points
cv::Mat labels; cv::Mat centers;
cv::kmeans(points, k, labels, cv::TermCriteria(CV_TERMCRIT_ITER|CV_TERMCRIT_EPS, 1000, 0.001), 10, cv::KMEANS_PP_CENTERS, centers);
The problem is that the kmeans call runs into a segmentation fault.
Any help is appreciated.
Update
As Miki and Micka said, the above code was correct! I had made a mistake in the "fill the data points" step that corrupted memory.
The code looks OK. You have to lay the data out with one dimension per column.
Can you try to run this example?
// k-means
int main(int argc, char* argv[])
{
cv::Mat projectedPointsImage = cv::Mat(512, 512, CV_8UC3, cv::Scalar::all(255));
int nReferenceCluster = 10;
int nSamplesPerCluster = 100;
int N = nReferenceCluster*nSamplesPerCluster; // number of samples
int n = 10; // dimensionality of data
// fill the data points
// create n artificial clusters and randomly seed 100 points around them
cv::Mat referenceCenters(nReferenceCluster, n, CV_32FC1);
//std::cout << referenceCenters << std::endl;
cv::randu(referenceCenters, cv::Scalar::all(0), cv::Scalar::all(512));
//std::cout << "FILLED:" << "\n" << referenceCenters << std::endl;
cv::Mat points = cv::Mat::zeros(N, n, CV_32FC1);
cv::randu(points, cv::Scalar::all(-20), cv::Scalar::all(20)); // seed points around the center
for (int j = 0; j < nReferenceCluster; ++j)
{
cv::Scalar clusterColor = cv::Scalar(rand() % 255, rand() % 255, rand() % 255);
//cv::Mat & clusterCenter = referenceCenters.row(j);
for (int i = 0; i < nSamplesPerCluster; ++i)
{
// creating a sample randomly around the artificial cluster:
int index = j*nSamplesPerCluster + i;
//samplesRow += clusterCenter;
for (int k = 0; k < points.cols; ++k)
{
points.at<float>(index, k) += referenceCenters.at<float>(j, k);
}
// projecting the 10 dimensional clusters to 2 dimensions:
cv::circle(projectedPointsImage, cv::Point(points.at<float>(index, 0), points.at<float>(index, 1)), 2, clusterColor, -1);
}
}
cv::Mat labels; cv::Mat centers;
int k = 10; // searched clusters in k-means
cv::kmeans(points, k, labels, cv::TermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 1000, 0.001), 10, cv::KMEANS_PP_CENTERS, centers);
for (int j = 0; j < centers.rows; ++j)
{
std::cout << centers.row(j) << std::endl;
cv::circle(projectedPointsImage, cv::Point(centers.at<float>(j, 0), centers.at<float>(j, 1)), 30, cv::Scalar::all(0), 2);
}
cv::imshow("projected points", projectedPointsImage);
cv::imwrite("C:/StackOverflow/Output/KMeans.png", projectedPointsImage);
cv::waitKey(0);
return 0;
}
I'm creating 10-dimensional data around artificial cluster centers there. For display, I project the points onto their first two dimensions and draw the found cluster centers as circles.
I have a real 2D matrix. I am taking its FFT using FFTW, but the result of the real-to-complex FFT is different from that of a complex-to-complex FFT (with the imaginary part set to zero).
real matrix
0 1 2
3 4 5
6 7 8
result of real-to-complex fft
36 -4.5+2.59808i -13.5+7.79423i
0 -13.5-7.79423i 0
0 0 0
Code:
int r = 3, c = 3;
int sz = r * c;
double *in = (double*) malloc(sizeof(double) * sz);
fftw_complex *out = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * sz); // note: r2c only fills r * (c/2 + 1) of these entries
fftw_plan p = fftw_plan_dft_r2c_2d(r, c, in, out, FFTW_MEASURE);
for ( int i=0; i<r; ++i ){
for ( int j=0; j<c; ++j ){
in[i*c+j] = i*c + j;
}
}
fftw_execute(p);
Using a complex matrix with the imaginary part set to zero:
complex matrix
0+0i 1+0i 2+0i
3+0i 4+0i 5+0i
6+0i 7+0i 8+0i
result of complex-to-complex fft
36 -4.5 + 2.59808i -4.5 - 2.59808i
-13.5 + 7.79423i 0 0
-13.5 - 7.79423i 0 0
Code:
int r = 3, c = 3;
int sz = r * c;
fftw_complex *out = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * sz);
fftw_complex *inc = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * sz);
fftw_plan p = fftw_plan_dft_2d(r, c, inc, out, FFTW_FORWARD, FFTW_MEASURE);
for ( int i=0; i<r; ++i ){
for ( int j=0; j<c; ++j ){
inc[i*c+j][0] = i*c+j;
inc[i*c+j][1] = 0;
}
}
fftw_execute(p);
I am after the result of the complex-to-complex FFT, but the real-to-complex FFT is much faster and my data is real. Am I making a programming mistake, or should the results be different?
As indicated in the FFTW documentation:
Then, after an r2c transform, the output is an n0 × n1 × n2 × … × (n_{d-1}/2 + 1) array of fftw_complex values in row-major order
In other words, the output of the real-to-complex transform of your sample matrix really is:
36 -4.5+2.59808i
-13.5+7.79423i 0
-13.5-7.79423i 0
You may notice that these two columns exactly match the first two columns of your complex-to-complex transform. The missing column is omitted from the real-to-complex transform because it is redundant due to symmetry. The full 3x3 matrix, including the missing column, can be reconstructed with:
fftw_complex *outfull = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * sz);
int outc = (c/2+1);
for ( int i=0; i<r; ++i ){
// copy existing columns
for ( int j=0; j<outc; ++j ){
outfull[i*c+j][0] = out[i*outc+j][0];
outfull[i*c+j][1] = out[i*outc+j][1];
}
// generate missing column(s) from symmetry
for ( int j=outc; j<c; ++j){
int row = (r-i)%r;
int col = c-j;
outfull[i*c+j][0] = out[row*outc+col][0];
outfull[i*c+j][1] = -out[row*outc+col][1];
}
}
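The second loop uses the Hermitian symmetry of the DFT of real input, F(i, j) = conj(F((r - i) mod r, (c - j) mod c)), which is exactly why FFTW can omit those columns in the first place.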