I would like to make a PCD file with X, Y, Z, RGBA, and label fields. I currently have an XYZRGBA PCD file containing 640*480 points. I also have another file containing 320*256 numbers that represent the labels of a segmented image. I want to up-scale the label array and add it to my current PCD file, making a new PCD file with corresponding x, y, z, rgba, and label information. This PCD file will correspond to the segmented image.
Here is my attempt.
Label is the name of the file that contains the label information. First I converted it to an OpenCV matrix, and now I want to up-scale it to 640*480 and then add it to the current XYZRGBA PCD file. After up-scaling, I converted the resulting OpenCV matrix to a plain array named "array" for adding to my current PCD data.
cv::Mat LabelsMat=cv::Mat(320,256, CV_32SC1, label);
cv::Mat OriginalLabels=cv::Mat::zeros(320,256, CV_32SC1);
LabelsMat.copyTo(OriginalLabels);
cv::Mat UpScaledLabels=cv::Mat::zeros(640, 480, CV_32FC1);
cv::resize(OriginalLabels, UpScaledLabels, UpScaledLabels.size(), 0, 0,cv::INTER_NEAREST);
std::vector<int> array;
array.assign((int*)UpScaledLabels.datastart, (int*)UpScaledLabels.dataend);
But there is a problem. When I make this new PCD file and want to see only one segment of the image, e.g. segment 4, I get a wrong shape that looks very different from segment 4 in my original image. I am sure the problem is in the code above. Could anyone help me find the problem, please? I appreciate your valuable help.
Ok, finally had the time...
It is always good to look at the Mat objects you produce; just use cv::imshow or cv::imwrite and scale the data accordingly.
Using this code (basically your own code with fixed array writing):
int label[320][256];
std::ifstream in("../inputData/UPSCALE_data.dat");
for (int i = 0; i < 320; i++)
    for (int j = 0; j < 256; j++)
    {
        in >> label[i][j];
    }
in.close();

// create Mat with label input:
cv::Mat LabelsMat = cv::Mat(320, 256, CV_32SC1, label);
cv::Mat OriginalLabels = LabelsMat.clone(); // you could instead work on the non-copied data, if you liked to...

// upscale:
cv::Mat UpScaledLabels; // no need to allocate memory here during testing
cv::resize(OriginalLabels, UpScaledLabels, cv::Size(640, 480), 0, 0, cv::INTER_NEAREST);

std::vector<int> marray;
marray.reserve(UpScaledLabels.cols * UpScaledLabels.rows);
for (int j = 0; j < UpScaledLabels.rows; ++j)
    for (int i = 0; i < UpScaledLabels.cols; ++i)
    {
        marray.push_back(UpScaledLabels.at<int>(j, i));
    }
// now marray holds the information about the upscaled image.

cv::Mat convertedCorrect;
UpScaledLabels.convertTo(convertedCorrect, CV_8UC1);
cv::imwrite("../outputData/UPSCALED_RESULT_ORIG.png", convertedCorrect * 50);
I get this result:
That's because cv::Mat LabelsMat = cv::Mat(320, 256, CV_32SC1, label); produces an image with HEIGHT 320 and WIDTH 256, since the cv::Mat constructor takes (rows, cols). (I thought I had already mentioned that in a comment, but I can't find it at the moment...)
So fixing that using this code:
int label[320][256];
std::ifstream in("../inputData/UPSCALE_data.dat");
for (int i = 0; i < 320; i++)
    for (int j = 0; j < 256; j++)
    {
        in >> label[i][j];
    }
in.close();

// create Mat with label input: HERE THE DIMENSIONS ARE SWAPPED
cv::Mat LabelsMat = cv::Mat(256, 320, CV_32SC1, label);
cv::Mat OriginalLabels = LabelsMat.clone(); // you could instead work on the non-copied data, if you liked to...

// upscale:
cv::Mat UpScaledLabels; // no need to allocate memory here during testing
cv::resize(OriginalLabels, UpScaledLabels, cv::Size(640, 480), 0, 0, cv::INTER_NEAREST);

std::vector<int> marray;
marray.reserve(UpScaledLabels.cols * UpScaledLabels.rows);
for (int j = 0; j < UpScaledLabels.rows; ++j)
    for (int i = 0; i < UpScaledLabels.cols; ++i)
    {
        marray.push_back(UpScaledLabels.at<int>(j, i));
    }
// now marray holds the information about the upscaled image.

cv::Mat convertedCorrect;
UpScaledLabels.convertTo(convertedCorrect, CV_8UC1);
cv::imwrite("../outputData/UPSCALED_RESULT_CORRECTED.png", convertedCorrect * 50);
you get this Mat which looks much better:
But compared to your other image, that image is somehow rotated?!?
I would like to copy the mask pixels (possibly non-contiguous) into a new image the size of the mask. I have outlined the objective I am trying to achieve in this image.
One way to achieve this would be to loop through the complete image, check the mask value (== 255) for each pixel, and then copy that pixel into the new image.
What I am interested in is whether there is a better way to accomplish the same thing, something more optimized for performance.
cv::Mat fullImage = cv::imread("full_image.png");
cv::Mat grayImage;
cv::cvtColor(fullImage, grayImage, cv::COLOR_BGR2GRAY);
cv::Mat mask;
cv::threshold(grayImage, mask, 10, 255, cv::THRESH_BINARY);
cv::Mat subImage(1, cv::countNonZero(mask), CV_8UC3, cv::Scalar(255, 255, 255));
int siColIdx = 0;
// now loop through the image
for (int i = 0; i < fullImage.rows; i++) {
    for (int j = 0; j < fullImage.cols; j++) {
        if (mask.at<uchar>(i, j) == 255) {
            subImage.at<cv::Vec3b>(0, siColIdx++) = fullImage.at<cv::Vec3b>(i, j);
        }
    }
}
Disclaimer: the code has not been tested; it is purely meant to illustrate one approach to the solution.
I am trying to segment an image of rocks and I get a decent result, but now I need to count the pixels in the largest colored object.
The picture above shows a segmented image of a rock pile, and I want to count the number of green pixels, which denote the largest rock in the image, and then also count the 2nd largest, i.e., the yellow one. After counting, I would like to compare the results with the ground truth.
The code to get the segmented image is based on Watershed segmentation opencv. Part of my code is given below:
cv::findContours(peaks_8u, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
// Create the marker image for the watershed algorithm
// CV_32S - 32-bit signed integers ( -2147483648..2147483647 )
cv::Mat markers = cv::Mat::zeros(input_image.size(), CV_32S);
// Draw the foreground markers
for (size_t i = 0; i < contours.size(); i++)
{
    cv::drawContours(markers, contours, static_cast<int>(i), cv::Scalar(static_cast<int>(i) + 1), -1);
}
// Draw the background marker
cv::circle(markers, cv::Point(5, 5), 3, cv::Scalar(255), -1);
cv::watershed(in_sharpened_image, markers);
// Generate random colors; result of watershed
std::vector<cv::Vec3b> colors;
for (size_t i = 0; i < contours.size(); i++)
{
    int b = cv::theRNG().uniform(0, 256);
    int g = cv::theRNG().uniform(0, 256);
    int r = cv::theRNG().uniform(0, 256);
    colors.push_back(cv::Vec3b((uchar)b, (uchar)g, (uchar)r));
}
// Create the result image
cv::Mat dst = cv::Mat::zeros(markers.size(), CV_8UC3);
// Fill labeled objects with random colors
for (int i = 0; i < markers.rows; i++)
{
    for (int j = 0; j < markers.cols; j++)
    {
        int index = markers.at<int>(i, j);
        if (index > 0 && index <= static_cast<int>(contours.size()))
        {
            dst.at<cv::Vec3b>(i, j) = colors[index - 1];
        }
    }
}
Question: Is there an efficient way to count the pixels inside the largest marker in OpenCV?
You can calculate a histogram of markers using cv::calcHist with a range from 0 to contours.size() + 1 and find the largest value in it, starting from index 1.
Instead of counting pixels you could use contourArea() for your largest contour. This will work much faster.
Something like this.
cv::Mat mask;
// numOfSegments - number of your labels (colors)
for (int i = 0; i < numOfSegments; i++) {
    std::vector<cv::Vec4i> hierarchy;
    // this "i + 2" may be different for you,
    // depending on your label allocation.
    // This thresholding yields a mask with the
    // contour of your #i label (color)
    cv::inRange(markers, i + 2, i + 2, mask);
    contours.clear();
    cv::findContours(mask, contours, hierarchy, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
    double area = cv::contourArea(contours[0]);
}
Having the contours at hand is also useful because after watershed() they will be quite "noisy", with lots of small peaks, and not suitable for most uses in their "raw" form. With a contour you can smooth it with a Gaussian or approxPoly, etc., as well as check important properties of the contour shape if you need to.
I have calculated the histogram of an image. I want to display it as an image so that I can actually see the histogram. I think my problem is with scaling, although I am also slightly confused by the coordinate system starting with (0,0) in the top left.
int rows = channel.rows;
int cols = channel.cols;
int hist[256] = {0};
for (int i = 0; i < rows; i++)
{
    for (int k = 0; k < cols; k++)
    {
        int value = channel.at<cv::Vec3b>(i, k)[0];
        hist[value] = hist[value] + 1;
    }
}
Mat histPlot = cvCreateMat(256, 500, CV_8UC1);
for (int i = 0; i < 256; i++)
{
    int mag = hist[i];
    line(histPlot, Point(i, 0), Point(i, mag), Scalar(255, 0, 0));
}
namedWindow("Hist", 1);
imshow("Hist", histPlot);
This is my calculation for creating the histogram and displaying the result. If I divide mag by 100 in the second loop, I get some resemblance of a plot (although upside down). I call this method whenever I adjust a value of my image, so the histogram should also change shape, which it doesn't appear to do. Any help with scaling the histogram and displaying it properly is appreciated.
Please don't use cvCreateMat (aka the old C API). You also seem to have rows and cols swapped; additionally, if you want a color drawing, you need a color image as well, so make that:
Mat histPlot( 500, 256, CV_8UC3 );
image origin is top-left(0,0), so you've got to put y in reverse:
line(histPlot, Point(i, histPlot.rows - 1), Point(i, histPlot.rows - 1 - mag / 100), Scalar(255, 0, 0));
I have optical flow stored in a 2-channel 32F matrix. I want to visualize the contents, what's the easiest way to do this?
How do I convert a CV_32FC2 to RGB with an empty blue channel, something imshow can handle? I am using OpenCV 2 C++ API.
Super Bonus Points
Ideally I would get the angle of flow in hue and the magnitude in brightness (with saturation at a constant 100%).
imshow can handle only 1-channel grayscale and 3/4-channel BGR/BGRA images, so you need to do the conversion yourself.
I think you can do something similar to:
//extraxt x and y channels
cv::Mat xy[2]; //X,Y
cv::split(flow, xy);
//calculate angle and magnitude
cv::Mat magnitude, angle;
cv::cartToPolar(xy[0], xy[1], magnitude, angle, true);
//translate magnitude to range [0;1]
double mag_max;
cv::minMaxLoc(magnitude, 0, &mag_max);
magnitude.convertTo(magnitude, -1, 1.0 / mag_max);
//build hsv image
cv::Mat _hsv[3], hsv;
_hsv[0] = angle;
_hsv[1] = cv::Mat::ones(angle.size(), CV_32F);
_hsv[2] = magnitude;
cv::merge(_hsv, 3, hsv);
//convert to BGR and show
cv::Mat bgr;//CV_32FC3 matrix
cv::cvtColor(hsv, bgr, cv::COLOR_HSV2BGR);
cv::imshow("optical flow", bgr);
cv::waitKey(0);
The MPI Sintel Dataset provides C and MatLab code for visualizing computed flow. Download the ground truth optical flow of the training set from here. The archive contains a folder flow_code containing the mentioned source code.
You can port the code to OpenCV, however, I wrote a simple OpenCV wrapper to easily use the provided code. Note that the method MotionToColor is taken from the color_flow.cpp file. Note the comments in the listing below.
// Important to include this before flowIO.h!
#include "imageLib.h"
#include "flowIO.h"
#include "colorcode.h"
// I moved the MotionToColor method into a separate header file.
#include "motiontocolor.h"

cv::Mat flow;
// Compute optical flow (e.g. using OpenCV); the result should be
// a 2-channel float matrix.
assert(flow.channels() == 2);
// assert(flow.type() == CV_32F);

int rows = flow.rows;
int cols = flow.cols;
CFloatImage cFlow(cols, rows, 2);

// Convert flow to CFloatImage:
for (int i = 0; i < rows; i++) {
    for (int j = 0; j < cols; j++) {
        cFlow.Pixel(j, i, 0) = flow.at<cv::Vec2f>(i, j)[0];
        cFlow.Pixel(j, i, 1) = flow.at<cv::Vec2f>(i, j)[1];
    }
}

CByteImage cImage;
// max is the maximum motion magnitude used for color scaling (see color_flow.cpp)
MotionToColor(cFlow, cImage, max);

cv::Mat image(rows, cols, CV_8UC3, cv::Scalar(0, 0, 0));
// Convert back to cv::Mat with 3 channels in BGR:
for (int i = 0; i < rows; i++) {
    for (int j = 0; j < cols; j++) {
        image.at<cv::Vec3b>(i, j)[0] = cImage.Pixel(j, i, 0);
        image.at<cv::Vec3b>(i, j)[1] = cImage.Pixel(j, i, 1);
        image.at<cv::Vec3b>(i, j)[2] = cImage.Pixel(j, i, 2);
    }
}

// Display or output the image ...
Below is the result when using the Optical Flow code and example images provided by Ce Liu.
In MATLAB, if A is a matrix, sum(A) treats the columns of A as vectors, returning a row vector of the sums of each column.
sum(Image); How can this be done with OpenCV?
Using cvReduce has worked for me. For example, if you need to store the column-wise sum of a matrix as a row matrix you could do this:
CvMat * MyMat = cvCreateMat(height, width, CV_64FC1);
// Fill in MyMat with some data...
CvMat * ColSum = cvCreateMat(1, MyMat->width, CV_64FC1);
cvReduce(MyMat, ColSum, 0, CV_REDUCE_SUM);
More information is available in the OpenCV documentation.
EDIT after 3 years:
The proper function for this is cv::reduce.
Reduces a matrix to a vector.
The function reduce reduces the matrix to a vector by treating the
matrix rows/columns as a set of 1D vectors and performing the
specified operation on the vectors until a single row/column is
obtained. For example, the function can be used to compute horizontal
and vertical projections of a raster image. In case of REDUCE_MAX and
REDUCE_MIN , the output image should have the same type as the source
one. In case of REDUCE_SUM and REDUCE_AVG , the output may have a
larger element bit-depth to preserve accuracy. And multi-channel
arrays are also supported in these two reduction modes.
OLD:
I've used the ROI method: move an ROI with the height of the image and a width of 1 from left to right and calculate the means.
Mat src = imread(filename, 0);
vector<int> graph(src.cols);
for (int c = 0; c < src.cols; c++)
{
    Mat roi = src(Rect(c, 0, 1, src.rows));
    graph[c] = int(mean(roi)[0]);
}

Mat mgraph(260, src.cols + 10, CV_8UC3);
for (int c = 0; c < src.cols; c++)
{
    line(mgraph, Point(c + 5, 0), Point(c + 5, graph[c]), Scalar(255, 0, 0), 1, CV_AA);
}
imshow("mgraph", mgraph);
imshow("source", src);
EDIT:
Just out of curiosity, I tried resizing to a height of 1 and the result was almost the same:
Mat test;
cv::resize(src, test, Size(src.cols, 1));
Mat mgraph1(260, src.cols + 10, CV_8UC3);
for (int c = 0; c < test.cols; c++)
{
    graph[c] = test.at<uchar>(0, c);
}
for (int c = 0; c < src.cols; c++)
{
    line(mgraph1, Point(c + 5, 0), Point(c + 5, graph[c]), Scalar(255, 255, 0), 1, CV_AA);
}
imshow("mgraph1", mgraph1);
cvSum respects ROI, so if you move a 1 px wide window over the whole image, you can calculate the sum of each column.
My C++ has gotten a little rusty, so I won't provide a code example, though the last time I did this I used OpenCVSharp and it worked fine. However, I'm not sure how efficient this method is.
My math skills are getting rusty too, but shouldn't it be possible to sum all elements in the columns of a matrix by multiplying it by a vector of ones?
For an 8-bit greyscale image, the following should work (I think).
It shouldn't be too hard to extend it to different image types.
int imgStep = image->widthStep;
uchar* imageData = (uchar*)image->imageData;
uint result[image->width];
memset(result, 0, sizeof(uint) * image->width); // note: sizeof(uint), not sizeof(uchar)
for (int col = 0; col < image->width; col++) {
    for (int row = 0; row < image->height; row++) {
        result[col] += imageData[row * imgStep + col];
    }
}
// your desired vector is in result