Converting a large matrix into an image - opencv

Hi, so I've got a large 33 x 33 matrix in a text file. I've been working on an OpenCV project which basically reads frames and calculates the similarities between them. So now I have this large text file filled with numbers. How do I visualize this matrix as, say, a 2D grayscale image?

Is your matrix a cv::Mat object?
If so, do:
cv::Mat matrix;
//Load the matrix from the file
matrix = ...
//show the matrix
imshow("window name", matrix);
//save the image
imwrite("image.png", matrix);
If not, then do:
cv::Mat matrix(33, 33, CV_32FC1);
float* floatPtr = matrix.ptr<float>();
for (int i = 0; i < 33*33; i++)
{
    //read data from file here
    *floatPtr++ = data[i]; //if it's in an array
    //If you have a file stream then do: file >> *floatPtr++;
}
//show the image
imshow("window name", matrix);
//save the image
imwrite("image.png", matrix);

Related

opencv c++ inverse fourier transformation does not give same image

I have a BGR image and convert it to Lab channels.
I want to check whether the iDFT of the DFT of the L-channel image gives back the same image.
// MARK: Split LAB Channel each
cv::Mat lab_resized_host_image;
cv::cvtColor(resized_host_image, lab_resized_host_image, cv::COLOR_BGR2Lab);
imshow("lab_resized_host_image", lab_resized_host_image);
cv::Mat channel_L_host_image, channel_A_host_image, channel_B_host_image;
std::vector<cv::Mat> channel_LAB_host_image(3);
cv::split(lab_resized_host_image, channel_LAB_host_image);
// MARK: DFT the channel_L host image.
channel_L_host_image = channel_LAB_host_image[0];
imshow("channel_L_host_image", channel_L_host_image);
cv::Mat padded_L;
int rows_L = getOptimalDFTSize(channel_L_host_image.rows);
int cols_L = getOptimalDFTSize(channel_L_host_image.cols);
copyMakeBorder(channel_L_host_image, padded_L, 0, rows_L - channel_L_host_image.rows, 0, cols_L - channel_L_host_image.cols, BORDER_CONSTANT, Scalar::all(0));
Mat planes_L[] = {Mat_<float>(padded_L), Mat::zeros(padded_L.size(), CV_32F)};
Mat complexI_L;
merge(planes_L, 2, complexI_L);
dft(complexI_L, complexI_L);
// MARK: iDFT Channel_L.
Mat complexI_channel_L = complexI_L;
Mat complexI_channel_L_idft;
cv::dft(complexI_L, complexI_channel_L_idft, cv::DFT_INVERSE|cv::DFT_REAL_OUTPUT);
normalize(complexI_channel_L_idft, complexI_channel_L_idft, 0, 1, NORM_MINMAX);
imshow("complexI_channel_L_idft", complexI_channel_L_idft);
Each imshow gives me a different image... I think the normalization might be the error...
What is wrong? Help!
[original image]
[idft result]
OpenCV’s FFT is not normalized by default. One of the forward/backward transform pair must be normalized for the pair to reproduce the input values. Simply add cv::DFT_SCALE to the options:
cv::dft(complexI_L, complexI_channel_L_idft, cv::DFT_INVERSE|cv::DFT_REAL_OUTPUT|cv::DFT_SCALE);
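As a quick sanity check of that flag, here is a small round-trip sketch (my own example, not from the original answer; the input path is a placeholder): converting a grayscale image to float, taking the forward DFT and then the inverse DFT with cv::DFT_SCALE should reproduce the input up to numerical error.
#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE); //placeholder path
    cv::Mat grayF;
    gray.convertTo(grayF, CV_32F);

    //forward DFT with an explicit zero imaginary plane, as in the question
    cv::Mat planes[] = {grayF, cv::Mat::zeros(grayF.size(), CV_32F)};
    cv::Mat complexI;
    cv::merge(planes, 2, complexI);
    cv::dft(complexI, complexI);

    //inverse DFT: DFT_SCALE divides by the number of elements, so the round trip matches the input
    cv::Mat restored;
    cv::dft(complexI, restored, cv::DFT_INVERSE | cv::DFT_REAL_OUTPUT | cv::DFT_SCALE);

    std::cout << "max abs difference: " << cv::norm(grayF, restored, cv::NORM_INF) << std::endl;
    return 0;
}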

OpenCV create 3 mat objects from YUV_420_888 planes

Is there any way to create 3 Mat objects to hold the YUV_420_888 plane data:
one for Y, another for U and the last one for the V plane?
I don't want to convert them to BGR or anything, just hold the data as above.
You can use cv::split on the Mat.
For example with a BGR image (I'll show it in C++ because I'm not that into OpenCV4Android):
cv::Mat src = cv::imread("some.png");
cv::Mat planes[3];
cv::split(src, planes);
If you have a BGR image, the R plane would now be in planes[2].
Another possibility is to just get the plane buffers, e.g. (Java on Android now):
/* Get your Image somehow */
Image.Plane Y = img.getPlanes()[0];
Image.Plane U = img.getPlanes()[1];
Image.Plane V = img.getPlanes()[2];
//now just for Y e.g.
ByteBuffer yBuffer = Y.getBuffer();
byte[] yBytes = new byte[yBuffer.remaining()];
yBuffer.get(yBytes);
//read the byte data into a cv::Mat
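If the data eventually ends up on the native side, the plane buffers can also be wrapped in cv::Mat headers without copying. A C++ sketch, assuming a fully planar layout (pixel stride 1) and that the pointers, dimensions and row strides have been taken from the Image.Plane objects and passed through JNI:
#include <opencv2/opencv.hpp>

//Wrap YUV_420_888 plane buffers in cv::Mat headers without copying (sketch).
//All parameters are assumptions: in practice they come from Image.Plane
//(getBuffer(), getRowStride()) via JNI. Assumes pixel stride 1 for U and V.
void wrapYuvPlanes(unsigned char* yPtr, unsigned char* uPtr, unsigned char* vPtr,
                   int width, int height,
                   size_t yRowStride, size_t uRowStride, size_t vRowStride)
{
    cv::Mat yMat(height,     width,     CV_8UC1, yPtr, yRowStride);
    cv::Mat uMat(height / 2, width / 2, CV_8UC1, uPtr, uRowStride);
    cv::Mat vMat(height / 2, width / 2, CV_8UC1, vPtr, vRowStride);

    //these Mats only reference the buffers; clone() them if the Image is closed afterwards
}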

Is it possible to recognize so minimal changes between noisy images in OpenCV?

I want to detect the very minimal movement of a conveyor belt using image evaluation (Resolution: 31x512, image rate: 1000 per second.). The moment of belt-start is important for me.
If I do cv::absdiff between two subsequent images, I obtain a very noisy result:
According to the mechanical rotation sensor of the motor, the movement starts here:
I tried to threshold the abs-diff image with a cascade of erosion and dilation, but I could only detect the earliest change more than a second too late in this image:
Is it possible to find the change earlier?
Here is the sequence of images without changes (according to the motor sensor):
In this sequence the movement begins in the middle image:
Looks like I've found a solution which works in MY case.
Instead of comparing the image changes in the spatial domain, cross-correlation should be applied:
I convert both images with the DFT, multiply the DFT Mats and convert back. The position of the maximum pixel value is the correlation peak. As long as the images are the same, the peak stays in the same position; it moves otherwise.
The actual working code uses 3 images and 2 DFT multiplications: between images 1 and 2, and between images 2 and 3:
Mat img1_( 512, 32, CV_16UC1 );
Mat img2_( 512, 32, CV_16UC1 );
Mat img3_( 512, 32, CV_16UC1 );
//read the data into the images however you want. I read from an MHD file
//Set ROI (if required)
Mat img1 = img1_(cv::Rect(0,200,32,100));
Mat img2 = img2_(cv::Rect(0,200,32,100));
Mat img3 = img3_(cv::Rect(0,200,32,100));
//Float mats for DFT
Mat img1f;
Mat img2f;
Mat img3f;
//DFT and produtcts mats
Mat dft1,dft2,dft3,dftproduct,dftproduct2;
//Calculate DFT of both images
img1.convertTo(img1f, CV_32FC1);
cv::dft(img1f, dft1);
img2.convertTo(img2f, CV_32FC1);
cv::dft(img2f, dft2);
img3.convertTo(img3f, CV_32FC1);
cv::dft(img3f, dft3);
//Multiply DFT Mats
cv::mulSpectrums(dft1,dft2,dftproduct,true);
cv::mulSpectrums(dft2,dft3,dftproduct2,true);
//Convert back to space domain
cv::Mat result,result2;
cv::idft(dftproduct,result);
cv::idft(dftproduct2,result2);
//Not sure if required, I needed it for visualizing
cv::normalize( result, result, 0, 255, NORM_MINMAX, CV_8UC1);
cv::normalize( result2, result2, 0, 255, NORM_MINMAX, CV_8UC1);
//Find maxima positions
double dummy;
Point locdummy; Point maxLoc1; Point maxLoc2;
cv::minMaxLoc(result, &dummy, &dummy, &locdummy, &maxLoc1);
cv::minMaxLoc(result2, &dummy, &dummy, &locdummy, &maxLoc2);
//Calculate products simply for having one value to compare
int maxlocProd1 = maxLoc1.x*maxLoc1.y;
int maxlocProd2 = maxLoc2.x*maxLoc2.y;
//Calculate absolute difference of the products. Not 0 means movement
int absPosDiff = std::abs(maxlocProd2-maxlocProd1);
if (absPosDiff > 0)
{
    std::cout << id << std::endl;
    break;
}
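A related, more compact option (my suggestion, not part of the answer above) is cv::phaseCorrelate, which performs the DFT, the spectrum multiplication and the inverse transform internally and directly returns the estimated shift between two single-channel float images. The detection threshold below is an assumption that would have to be tuned for the actual data:
#include <cmath>
#include <opencv2/opencv.hpp>

//Returns true if a shift between two consecutive frames is detected (sketch).
bool frameShifted(const cv::Mat& prev, const cv::Mat& curr, double threshold = 0.5)
{
    cv::Mat prevF, currF;
    prev.convertTo(prevF, CV_32FC1);
    curr.convertTo(currF, CV_32FC1);

    //phaseCorrelate returns the sub-pixel translation between the two frames
    cv::Point2d shift = cv::phaseCorrelate(prevF, currF);
    return std::abs(shift.x) > threshold || std::abs(shift.y) > threshold;
}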

Writing a float image in openCv with pixel values bigger than 1

I am currently working on a program which should take an LDR image and multiply certain pixels in the image so that their pixel values exceed the normal 0-255 (0-1) boundary. The program I have written can do so, but I am not able to write the image file, because imwrite() in OpenCV clamps the values back into the range of 0-255 (0-1)
if they are bigger than 255.
Does anybody know how to write a floating-point image with pixel values bigger than 255 (1)?
My code looks like this:
Mat ApplySunValue(Mat InputImg)
{
    Mat Image1 = imread("/****/.jpg", CV_LOAD_IMAGE_COLOR);
    Mat outPutImage;
    Image1.convertTo(Image1, CV_32FC3);
    for (int x = 0; x < InputImg.cols; x++) {
        for (int y = 0; y < InputImg.rows; y++) {
            float blue  = Image1.at<Vec3f>(y,x)[0] / 255.0f;
            float green = Image1.at<Vec3f>(y,x)[1] / 255.0f;
            float red   = Image1.at<Vec3f>(y,x)[2] / 255.0f;
            Image1.at<Vec3f>(y,x)[0] = blue;
            Image1.at<Vec3f>(y,x)[1] = green;
            Image1.at<Vec3f>(y,x)[2] = red;
            int pixelValue = InputImg.at<uchar>(y,x);
            if (pixelValue > 254) {
                Image1.at<Vec3f>(y,x)[0] = blue  * SunMultiplyer;
                Image1.at<Vec3f>(y,x)[1] = green * SunMultiplyer;
                Image1.at<Vec3f>(y,x)[2] = red   * SunMultiplyer;
            }
        }
    }
    imwrite("/****/Nice.TIFF", Image1 * 255);
    namedWindow("Hej", CV_WINDOW_AUTOSIZE);
    imshow("hej", Image1);
    return InputImg;
}
For storage purposes, the following is more space-efficient than the XML / YAML alternative (due to the use of a binary format):
// Save the image data in binary format
std::ofstream os(<filepath>,std::ios::out|std::ios::trunc|std::ios::binary);
os << (int)image.rows << " " << (int)image.cols << " " << (int)image.type() << " ";
os.write((char*)image.data,image.step.p[0]*image.rows);
os.close();
You can then load the image as follows:
// Load the image data from binary format
std::ifstream is(<filepath>,std::ios::in|std::ios::binary);
if(!is.is_open())
return false;
int rows,cols,type;
is >> rows; is.ignore(1);
is >> cols; is.ignore(1);
is >> type; is.ignore(1);
cv::Mat image;
image.create(rows,cols,type);
is.read((char*)image.data,image.step.p[0]*image.rows);
is.close();
For instance, without compression, a 1920x1200 three-channel floating-point image takes about 26 MB when stored in binary format (1920 x 1200 x 3 channels x 4 bytes per value), whereas it takes 129 MB when stored in YML format. This size difference also has an impact on runtime, since the number of hard-drive accesses is very different.
Now, if what you want is to visualize your HDR image, you have no choice but to convert it to LDR. This is called "tone-mapping" (Wikipedia entry).
As far as I know, when OpenCV writes using imwrite, it writes in the format supported by the image container, and for the common formats that means 8-bit values clamped to 0-255.
However, if you just want to save the data, you might consider writing the Mat object to an XML/YAML file.
//Writing
cv::FileStorage fs;
fs.open(filename, cv::FileStorage::WRITE);
fs<<"Nice"<<Image1;
//Reading
fs.open(filename, cv::FileStorage::READ);
fs["Nice"]>>Image1;
fs.release(); //Very Important
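For reference, a self-contained version of that round trip might look like this (a sketch; the file name, node name and test data are placeholders of mine):
#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    //example float data with values above 1
    cv::Mat Image1(4, 4, CV_32FC3, cv::Scalar(0.5, 2.0, 300.0));

    //write
    cv::FileStorage fs("float_image.yml", cv::FileStorage::WRITE);
    fs << "Nice" << Image1;
    fs.release();

    //read back
    cv::Mat restored;
    fs.open("float_image.yml", cv::FileStorage::READ);
    fs["Nice"] >> restored;
    fs.release(); //very important

    std::cout << "max abs difference: " << cv::norm(Image1, restored, cv::NORM_INF) << std::endl;
    return 0;
}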

Sorting a matrix and placing it in one row

I am trying to figure out a way of sorting a 3x3 matrix into a 9x1 row.
So I have the following:
I want to end up with this:
This is what I have ended up doing so far:
Rect roi(y-1,x-1,kernel,kernel);
Mat image_roi = image(roi);
Mat image_sort(kernel, kernel, CV_8U);
cv::sort(image_roi, image_sort, CV_SORT_ASCENDING+CV_SORT_EVERY_ROW);
The code is not functional; currently I cannot find any data in image_sort after it is "sorted".
Are you sure you have single-channel grey-level images? Try:
cv::Mat image_sort = cv::Mat::zeros(roi.height, roi.width, image.type()); // allocate memory
image(roi).copyTo(image_sort); // copy the ROI data into image_sort
std::sort(image_sort.data, image_sort.dataend); // sort the raw 8-bit pixel data with std::sort
cv::Mat vectorized = image_sort.reshape(1, 1); // reshape the WxH matrix into a 1x(W*H) vector
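Alternatively (my own sketch, not part of the answer above), you can clone the ROI, reshape it to a single row first and let cv::sort do the work:
#include <opencv2/opencv.hpp>

//Sort the pixels of a small single-channel 8-bit ROI into one ascending row (sketch).
cv::Mat sortRoiToRow(const cv::Mat& image, const cv::Rect& roi)
{
    //clone() makes the data continuous, so reshape() to 1 channel / 1 row is valid
    cv::Mat row = image(roi).clone().reshape(1, 1);
    cv::Mat sorted;
    cv::sort(row, sorted, cv::SORT_EVERY_ROW + cv::SORT_ASCENDING);
    return sorted; //e.g. 1x9 for a 3x3 ROI
}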
