I have optical flow stored in a 2-channel 32F matrix. I want to visualize the contents, what's the easiest way to do this?
How do I convert a CV_32FC2 to RGB with an empty blue channel, something imshow can handle? I am using the OpenCV 2 C++ API.
Super Bonus Points
Ideally I would get the angle of flow in hue and the magnitude in brightness (with saturation at a constant 100%).
imshow can handle only 1-channel grayscale and 3- or 4-channel BGR/BGRA images, so you need to do the conversion yourself.
I think you can do something similar to:
//extract x and y channels
cv::Mat xy[2]; //X,Y
cv::split(flow, xy);
//calculate angle and magnitude
cv::Mat magnitude, angle;
cv::cartToPolar(xy[0], xy[1], magnitude, angle, true);
//translate magnitude to range [0;1]
double mag_max;
cv::minMaxLoc(magnitude, 0, &mag_max);
magnitude.convertTo(magnitude, -1, 1.0 / mag_max);
//build hsv image
cv::Mat _hsv[3], hsv;
_hsv[0] = angle;
_hsv[1] = cv::Mat::ones(angle.size(), CV_32F);
_hsv[2] = magnitude;
cv::merge(_hsv, 3, hsv);
//convert to BGR and show
cv::Mat bgr;//CV_32FC3 matrix
cv::cvtColor(hsv, bgr, cv::COLOR_HSV2BGR);
cv::imshow("optical flow", bgr);
cv::waitKey(0);
The MPI Sintel Dataset provides C and MATLAB code for visualizing computed flow. Download the ground truth optical flow of the training set from here. The archive contains a folder flow_code with the mentioned source code.
You can port the code to OpenCV; however, I wrote a simple OpenCV wrapper to easily use the provided code. The method MotionToColor is taken from the color_flow.cpp file; see the comments in the listing below.
// Important to include this before flowIO.h!
#include "imageLib.h"
#include "flowIO.h"
#include "colorcode.h"
// I moved the MotionToColor method in a separate header file.
#include "motiontocolor.h"
cv::Mat flow;
// Compute optical flow (e.g. using OpenCV); result should be
// 2-channel float matrix.
assert(flow.channels() == 2);
assert(flow.type() == CV_32FC2); // CV_32F alone denotes the single-channel type
int rows = flow.rows;
int cols = flow.cols;
CFloatImage cFlow(cols, rows, 2);
// Convert flow to CFloatImage:
for (int i = 0; i < rows; i++) {
for (int j = 0; j < cols; j++) {
cFlow.Pixel(j, i, 0) = flow.at<cv::Vec2f>(i, j)[0];
cFlow.Pixel(j, i, 1) = flow.at<cv::Vec2f>(i, j)[1];
}
}
CByteImage cImage;
float max = -1; // a value <= 0 lets MotionToColor determine the maximum motion itself
MotionToColor(cFlow, cImage, max);
cv::Mat image(rows, cols, CV_8UC3, cv::Scalar(0, 0, 0));
// Compute back to cv::Mat with 3 channels in BGR:
for (int i = 0; i < rows; i++) {
for (int j = 0; j < cols; j++) {
image.at<cv::Vec3b>(i, j)[0] = cImage.Pixel(j, i, 0);
image.at<cv::Vec3b>(i, j)[1] = cImage.Pixel(j, i, 1);
image.at<cv::Vec3b>(i, j)[2] = cImage.Pixel(j, i, 2);
}
}
// Display or output the image ...
Below is the result when using the Optical Flow code and example images provided by Ce Liu.
I am trying to segment an image of rocks and I get a decent result. But now I need to count the pixels in the largest colored object.
The picture above shows a segmented image of a rock pile, and I want to count the number of green pixels, which denote the largest rock in the image, and then also count the 2nd largest, i.e., the yellow one. After counting, I would like to compare the results with the ground truth.
The code to get the segmented image is referred from Watershed segmentation opencv. A part of my code is also given below :
cv::findContours(peaks_8u, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
// Create the marker image for the watershed algorithm
// CV_32S - 32-bit signed integers ( -2147483648..2147483647 )
cv::Mat markers = cv::Mat::zeros(input_image.size(), CV_32S);
// Draw the foreground markers
for (size_t i = 0; i < contours.size(); i++)
{
cv::drawContours(markers, contours, static_cast<int>(i), cv::Scalar(static_cast<int>(i) + 1), -1);
}
// Draw the background marker
cv::circle(markers, cv::Point(5, 5), 3, cv::Scalar(255), -1);
cv::watershed(in_sharpened_image, markers);
// Generate random colors; result of watershed
std::vector<cv::Vec3b> colors;
for (size_t i = 0; i < contours.size(); i++)
{
int b = cv::theRNG().uniform(0, 256);
int g = cv::theRNG().uniform(0, 256);
int r = cv::theRNG().uniform(0, 256);
colors.push_back(cv::Vec3b((uchar)b, (uchar)g, (uchar)r));
}
// Create the result image
cv::Mat dst = cv::Mat::zeros(markers.size(), CV_8UC3);
// Fill labeled objects with random colors
for (int i = 0; i < markers.rows; i++)
{
for (int j = 0; j < markers.cols; j++)
{
int index = markers.at<int>(i, j);
if (index > 0 && index <= static_cast<int>(contours.size()))
{
dst.at<cv::Vec3b>(i, j) = colors[index - 1];
}
}
}
Question: Is there an efficient way to count the pixels inside the largest marker in OpenCV?
You can calculate a histogram of markers using cv::calcHist with a range from 0 to contours.size() + 1, and find the largest value in it starting from index 1.
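A minimal sketch of that idea, reusing the markers and contours variables from the code above (note cv::calcHist does not accept CV_32S input, so the labels are converted to float first; the -1 watershed borders fall outside the range and are ignored):
cv::Mat markersFloat;
markers.convertTo(markersFloat, CV_32F);
int histSize = static_cast<int>(contours.size()) + 1;
float range[] = { 0.0f, static_cast<float>(histSize) };
const float* histRange = { range };
cv::Mat hist;
cv::calcHist(&markersFloat, 1, 0, cv::Mat(), hist, 1, &histSize, &histRange);
// Skip bin 0 (background) and find the label with the most pixels.
int largestLabel = 0;
float largestCount = 0.0f;
for (int i = 1; i < histSize; i++) {
    if (hist.at<float>(i) > largestCount) {
        largestCount = hist.at<float>(i);
        largestLabel = i;
    }
}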
Instead of counting pixels you could use contourArea() for your largest contour. This will work much faster, though note that contourArea() computes the polygon area via Green's formula, which can differ slightly from an exact pixel count.
Something like this.
cv::Mat mask;
// numOfSegments - number of your labels (colors)
for (int i = 0; i < numOfSegments; i++) {
std::vector<cv::Vec4i> hierarchy;
// this "i + 2" may be different for you
// depends on your labels allocation.
// This is thresholding to get mask with
// contour of your #i label (color)
cv::inRange(markers, cv::Scalar(i + 2), cv::Scalar(i + 2), mask);
contours.clear();
findContours(mask, contours, hierarchy, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
double area = contours.empty() ? 0.0 : cv::contourArea(contours[0]);
}
Having the contours at hand is also useful, because after watershed() they will be quite "noisy", with lots of small peaks, and not suitable for most uses in their "raw" form. With a contour in hand you can smooth it with a Gaussian or approxPolyDP(), etc., as well as check important properties of the contour shape if you need to.
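As a small illustration of the approxPolyDP() route (the epsilon tolerance is a made-up fraction of the perimeter that you would tune):
std::vector<cv::Point> approx;
double epsilon = 0.01 * cv::arcLength(contours[0], true); // tolerance relative to the perimeter
cv::approxPolyDP(contours[0], approx, epsilon, true);     // closed, simplified contour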
After some simple preprocessing I obtain a boolean mask of segmented images.
I want to "enhance" the borders of the mask and make them smoother. For that I am using an OPEN morphology filter with a rather big circular kernel; it works very well as long as the distance between segmented objects is large enough, but in a lot of samples the objects stick together. Is there some more or less simple method to smooth such images without changing their morphology?
Instead of applying a morphological filter right away, you can first detect the external contours of the image, draw these external contours as filled contours, and then apply your morphological filter. This works because now you don't have any holes to fill. This is fairly simple.
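A minimal sketch of this approach, assuming "mask" is your binary input (the 15x15 ellipse kernel is an arbitrary choice to tune):
std::vector<std::vector<cv::Point> > contours;
cv::findContours(mask.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
cv::Mat filled = cv::Mat::zeros(mask.size(), CV_8U);
cv::drawContours(filled, contours, -1, cv::Scalar(255), cv::FILLED); // filled, so no holes remain
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(15, 15));
cv::morphologyEx(filled, filled, cv::MORPH_OPEN, kernel);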
Another approach:
find external contours
take the x and y coordinates of the contour points; you can consider these as 1-D signals and apply a smoothing filter to these signals
In the code below, I've applied the second approach to a sample image.
Input image
External contours without any smoothing
After applying a Gaussian filter to x and y 1-D signals
C++ code
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;

Mat im = imread("4.png", 0);
Mat cont = im.clone();
Mat original = Mat::zeros(im.rows, im.cols, CV_8UC3);
Mat smoothed = Mat::zeros(im.rows, im.cols, CV_8UC3);
// contour smoothing parameters for gaussian filter
int filterRadius = 5;
int filterSize = 2 * filterRadius + 1;
double sigma = 10;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
// find external contours and store all contour points
findContours(cont, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE, Point(0, 0));
for(size_t j = 0; j < contours.size(); j++)
{
// draw the initial contour shape
drawContours(original, contours, j, Scalar(0, 255, 0), 1);
// extract x and y coordinates of points. we'll consider these as 1-D signals
// add circular padding to 1-D signals
size_t len = contours[j].size() + 2 * filterRadius;
size_t idx = (contours[j].size() - filterRadius); // assumes the contour has more than filterRadius points
vector<float> x, y;
for (size_t i = 0; i < len; i++)
{
x.push_back(contours[j][(idx + i) % contours[j].size()].x);
y.push_back(contours[j][(idx + i) % contours[j].size()].y);
}
// filter 1-D signals
vector<float> xFilt, yFilt;
GaussianBlur(x, xFilt, Size(filterSize, filterSize), sigma, sigma);
GaussianBlur(y, yFilt, Size(filterSize, filterSize), sigma, sigma);
// build smoothed contour
vector<vector<Point> > smoothContours;
vector<Point> smooth;
for (size_t i = filterRadius; i < contours[j].size() + filterRadius; i++)
{
smooth.push_back(Point(xFilt[i], yFilt[i]));
}
smoothContours.push_back(smooth);
drawContours(smoothed, smoothContours, 0, Scalar(255, 0, 0), 1);
cout << "debug contour " << j << " : " << contours[j].size() << ", " << smooth.size() << endl;
}
Not 100% sure what you are trying to achieve, but this may be an avenue to explore... the tool potrace takes images and converts them to vectorised images, which involves smoothing. It prefers PGM-format input files, so I use ImageMagick to prepare them. Anyway, here is an example of the command and the result, so you can see what you think:
convert disks.png pgm:- | potrace - -s -o out.svg
I have converted the resulting SVG file to a PNG so I can upload it to SO.
I am struggling with finding the appropriate contour algorithm for a low quality image. The example image shows a rock scene:
What I am trying to achieve is to find contours around features such as:
light areas
dark areas
grey1 areas
grey2 areas
etc. until grey-n areas
(The number of areas shall be a parameter of choice)
I do not want to take a simple binary threshold but rather use some sort of contour-finding (for example watershed or other). The major feature-lines shall be kept; noise within a feature-area can be flattened.
The result of my code can be seen on the images to the right.
Unfortunately, as you can easily tell, the colors do not really represent the original large-scale image features! For example: check out the two areas that I circled with red - these features are almost completely flooded with another color. What I imagine is that at least the very light and the very dark areas would be covered by their own colors.
cv::Mat cv_src = cv::imread(argv[1]);
cv::Mat output;
cv::Mat cv_src_gray;
cv::cvtColor(cv_src, cv_src_gray, cv::COLOR_BGR2GRAY); // imread returns BGR, not RGB
double clipLimit = 0.1;
cv::Size tileGridSize = cv::Size(8,8);
cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(clipLimit, tileGridSize);
clahe->apply(cv_src_gray, output);
cv::equalizeHist(output, output);
cv::cvtColor(output, cv_src, cv::COLOR_GRAY2RGB);
// Create binary image from source image
cv::Mat bw;
cv::cvtColor(cv_src, bw, cv::COLOR_BGR2GRAY);
cv::threshold(bw, bw, 180, 255, cv::THRESH_BINARY);
// Perform the distance transform algorithm
cv::Mat dist;
cv::distanceTransform(bw, dist, cv::DIST_L2, 5); // 4th argument is the mask size, not the output depth
// Normalize the distance image for range = {0.0, 1.0}
cv::normalize(dist, dist, 0, 1., cv::NORM_MINMAX);
// Threshold to obtain the peaks
cv::threshold(dist, dist, .2, 1., cv::THRESH_BINARY);
// Create the CV_8U version of the distance image
cv::Mat dist_8u;
dist.convertTo(dist_8u, CV_8U);
// Find total markers
std::vector<std::vector<cv::Point> > contours;
cv::findContours(dist_8u, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
int ncomp = contours.size();
// Create the marker image for the watershed algorithm
cv::Mat markers = cv::Mat::zeros(dist.size(), CV_32S);
// Draw the foreground markers
for (int i = 0; i < ncomp; i++)
cv::drawContours(markers, contours, i, cv::Scalar::all(i+1), -1);
// Draw the background marker
cv::circle(markers, cv::Point(5,5), 3, CV_RGB(255,255,255), -1);
// Perform the watershed algorithm
cv::watershed(cv_src, markers);
// Generate random colors
std::vector<cv::Vec3b> colors;
for (int i = 0; i < ncomp; i++)
{
int b = cv::theRNG().uniform(0, 255);
int g = cv::theRNG().uniform(0, 255);
int r = cv::theRNG().uniform(0, 255);
colors.push_back(cv::Vec3b((uchar)b, (uchar)g, (uchar)r));
}
// Create the result image
cv::Mat dst = cv::Mat::zeros(markers.size(), CV_8UC3);
// Fill labeled objects with random colors
for (int i = 0; i < markers.rows; i++)
{
for (int j = 0; j < markers.cols; j++)
{
int index = markers.at<int>(i,j);
if (index > 0 && index <= ncomp)
dst.at<cv::Vec3b>(i,j) = colors[index-1];
else
dst.at<cv::Vec3b>(i,j) = cv::Vec3b(0,0,0);
}
}
// Show me what you got
imshow("final_result", dst);
I think you can use a simple clustering such as k-means for this, then examine the cluster centers (or the mean and standard deviation of each cluster). I quickly tried it in MATLAB.
im = imread('tvBqt.jpg');
gr = rgb2gray(im);
x = double(gr(:));
idx = kmeans(x, 4);
cl = reshape(idx, 600, 472); % reshape the labels back to the image size (600x472 here)
figure,
subplot(1, 2, 1), imshow(gr, []), title('original')
subplot(1, 2, 2), imshow(label2rgb(cl), []), title('clustered')
The result:
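For an OpenCV C++ route, a rough cv::kmeans equivalent might look like this (a sketch only, mirroring the MATLAB snippet above with K = 4):
cv::Mat gray = cv::imread("tvBqt.jpg", cv::IMREAD_GRAYSCALE);
cv::Mat samples;
gray.reshape(1, (int)gray.total()).convertTo(samples, CV_32F); // one row per pixel
cv::Mat labels, centers;
cv::kmeans(samples, 4, labels,
           cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 10, 1.0),
           3, cv::KMEANS_PP_CENTERS, centers);
// Paint each pixel with its cluster center intensity for display.
cv::Mat clustered(gray.size(), CV_8U);
for (int i = 0; i < (int)gray.total(); i++)
    clustered.at<uchar>(i / gray.cols, i % gray.cols) =
        cv::saturate_cast<uchar>(centers.at<float>(labels.at<int>(i), 0));
cv::imshow("clustered", clustered);
cv::waitKey(0);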
You could try using SLIC Superpixels. I tried it and it showed some good results. You could vary the parameters to get better clustering.
SLIC Superpixels
SLIC Superpixels with OpenCV C++
SLIC Superpixels with OpenCV Python
I use OpenCV to undistort a set of points after camera calibration.
The code follows.
const int npoints = 2; // number of point specified
// Points initialization.
// Only 2 points in this example; in real code they are read from a file.
float input_points[npoints][2] = {{0,0}, {2560, 1920}};
CvMat * src = cvCreateMat(1, npoints, CV_32FC2);
CvMat * dst = cvCreateMat(1, npoints, CV_32FC2);
// fill src matrix
float * src_ptr = (float*)src->data.ptr;
for (int pi = 0; pi < npoints; ++pi) {
for (int ci = 0; ci < 2; ++ci) {
*(src_ptr + pi * 2 + ci) = input_points[pi][ci];
}
}
cvUndistortPoints(src, dst, &camera1, &distCoeffs1);
After the code above dst contains following numbers:
-8.82689655e-001 -7.05507338e-001 4.16228324e-001 3.04863811e-001
which are too small in comparison with numbers in src.
At the same time if I undistort image via the call:
cvUndistort2( srcImage, dstImage, &camera1, &distCoeffs1 );
I receive a good undistorted image, which means that the pixel coordinates are not modified as drastically as the separate points are.
How to obtain the same undistortion for specific points as for images?
Thanks.
The points should be "unnormalized" using the camera matrix.
More specifically, after the call to cvUndistortPoints, the following transformation should also be applied:
double fx = CV_MAT_ELEM(camera1, double, 0, 0);
double fy = CV_MAT_ELEM(camera1, double, 1, 1);
double cx = CV_MAT_ELEM(camera1, double, 0, 2);
double cy = CV_MAT_ELEM(camera1, double, 1, 2);
float * dst_ptr = (float*)dst->data.ptr;
for (int pi = 0; pi < npoints; ++pi) {
float& px = *(dst_ptr + pi * 2);
float& py = *(dst_ptr + pi * 2 + 1);
// Perform the transformation.
// In fact this is equivalent to multiplication by the camera matrix.
px = px * fx + cx;
py = py * fy + cy;
}
More info on the camera matrix can be found in the OpenCV 'Camera Calibration and 3D Reconstruction' documentation.
UPDATE:
The following C++ function call should work as well:
std::vector<cv::Point2f> inputDistortedPoints = ...
std::vector<cv::Point2f> outputUndistortedPoints;
cv::Mat cameraMatrix = ...
cv::Mat distCoeffs = ...
cv::undistortPoints(inputDistortedPoints, outputUndistortedPoints, cameraMatrix, distCoeffs, cv::noArray(), cameraMatrix);
It may be your matrix size :)
OpenCV expects a vector of points - a column or a row matrix with two channels. But because your input matrix holds only 2 points, and a 2-column matrix can equally be read as a 1-channel layout, it cannot figure out whether the input is a row or a column.
So, fill a longer input mat with bogus values, and keep only the first ones:
const int npoints = 4; // number of point specified
// Points initialization.
// Only 2 points in this example; in real code they are read from a file.
float input_points[npoints][2] = {{0,0}, {2560, 1920}}; // the rest will be zero-initialized
CvMat * src = cvCreateMat(1, npoints, CV_32FC2);
CvMat * dst = cvCreateMat(1, npoints, CV_32FC2);
// fill src matrix
float * src_ptr = (float*)src->data.ptr;
for (int pi = 0; pi < npoints; ++pi) {
for (int ci = 0; ci < 2; ++ci) {
*(src_ptr + pi * 2 + ci) = input_points[pi][ci];
}
}
cvUndistortPoints(src, dst, &camera1, &distCoeffs1);
EDIT
While the OpenCV documentation specifies that undistortPoints accepts only 2-channel input, it actually accepts:
a 1-column, 2-channel, multi-row mat or (and this case is not documented)
a 2-column, multi-row, 1-channel mat or
a multi-column, 1-row, 2-channel mat
(as seen in undistort.cpp, line 390)
But a bug inside (or a lack of available info) makes it wrongly mix the second one with the third one when the number of columns is 2. So, your data is considered a 2-column, 2-row, 1-channel mat.
I also ran into this problem, and it took me some time to research and finally understand it.
(Formula image: the OpenCV projection/distortion model.)
As the formula shows, in the OpenCV model the distort operation is applied before the camera matrix, so the processing order to undo it is:
image_distorted -> inverse camera matrix -> undistort function -> camera matrix -> image_undistorted.
So you need a small fix: pass camera1 again as the last parameter.
cv::Mat eye3 = cv::Mat::eye(3, 3, CV_64F);
CvMat eye3cv = eye3; // C-API header sharing eye3's data (cvUndistortPoints expects CvMat*)
cvUndistortPoints(src, dst, &camera1, &distCoeffs1, &eye3cv, &camera1);
Otherwise, if the last two parameters are empty, the points are projected to normalized image coordinates.
See the source: opencv-3.4.0-src\modules\imgproc\src\undistort.cpp, line 297, cvUndistortPointsInternal().
In MATLAB, if A is a matrix, sum(A) treats the columns of A as vectors, returning a row vector of the sums of each column.
sum(Image); How could it be done with OpenCV?
Using cvReduce has worked for me. For example, if you need to store the column-wise sum of a matrix as a row matrix you could do this:
CvMat * MyMat = cvCreateMat(height, width, CV_64FC1);
// Fill in MyMat with some data...
CvMat * ColSum = cvCreateMat(1, MyMat->width, CV_64FC1);
cvReduce(MyMat, ColSum, 0, CV_REDUCE_SUM);
More information is available in the OpenCV documentation.
EDIT after 3 years:
The proper function for this is cv::reduce.
Reduces a matrix to a vector.
The function reduce reduces the matrix to a vector by treating the matrix rows/columns as a set of 1D vectors and performing the specified operation on the vectors until a single row/column is obtained. For example, the function can be used to compute horizontal and vertical projections of a raster image. In case of REDUCE_MAX and REDUCE_MIN, the output image should have the same type as the source one. In case of REDUCE_SUM and REDUCE_AVG, the output may have a larger element bit-depth to preserve accuracy. And multi-channel arrays are also supported in these two reduction modes.
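For the column-sum case specifically, a minimal C++ sketch (the "image.png" filename is a stand-in for your input; summing into CV_32S avoids 8-bit overflow):
cv::Mat img = cv::imread("image.png", cv::IMREAD_GRAYSCALE);
cv::Mat colSums;
cv::reduce(img, colSums, 0, cv::REDUCE_SUM, CV_32S); // dim = 0: collapse rows into a single 1 x cols row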
OLD:
I've used the ROI method: move an ROI with the height of the image and width 1 from left to right and calculate the means.
Mat src = imread(filename, 0);
vector<int> graph( src.cols );
for (int c=0; c<src.cols; c++)
{
Mat roi = src( Rect( c,0,1,src.rows ) );
graph[c] = int(mean(roi)[0]);
}
Mat mgraph( 260, src.cols+10, CV_8UC3, Scalar::all(0) ); // initialize to black
for (int c=0; c<src.cols; c++)
{
line( mgraph, Point(c+5,0), Point(c+5,graph[c]), Scalar(255,0,0), 1, CV_AA);
}
imshow("mgraph", mgraph);
imshow("source", src);
EDIT:
Just out of curiosity, I've tried resizing to height 1 and the result was almost the same (a resize with INTER_AREA interpolation would compute a true column average):
Mat test;
cv::resize(src,test,Size( src.cols,1 ));
Mat mgraph1( 260, src.cols+10, CV_8UC3, Scalar::all(0) );
for(int c=0; c<test.cols; c++)
{
graph[c] = test.at<uchar>(0,c);
}
for (int c=0; c<src.cols; c++)
{
line( mgraph1, Point(c+5,0), Point(c+5,graph[c]), Scalar(255,255,0), 1, CV_AA);
}
imshow("mgraph1", mgraph1);
cvSum respects the ROI, so if you move a 1 px wide window over the whole image, you can calculate the sum of each column.
My C++ has gotten a little rusty, so I won't provide a code example, though the last time I did this I used OpenCVSharp and it worked fine. However, I'm not sure how efficient this method is.
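For what it's worth, a rough sketch of the idea in the old C API (untested; assumes a single-channel IplImage* named image):
std::vector<double> colSums(image->width);
for (int c = 0; c < image->width; c++) {
    cvSetImageROI(image, cvRect(c, 0, 1, image->height)); // 1 px wide window
    colSums[c] = cvSum(image).val[0];
}
cvResetImageROI(image);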
My math skills are getting rusty too, but shouldn't it be possible to sum all the elements in the columns of a matrix by multiplying it by a vector of 1s?
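Indeed, a 1 x rows vector of ones times the matrix gives exactly the column sums. A quick sketch (cv::Mat multiplication requires a float or double type; "img" is assumed to be an 8-bit grayscale Mat loaded elsewhere):
cv::Mat imgF;
img.convertTo(imgF, CV_64F);
cv::Mat ones = cv::Mat::ones(1, imgF.rows, CV_64F);
cv::Mat colSums = ones * imgF; // 1 x cols row vector of column sums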
For an 8 bit greyscale image, the following should work (I think).
It shouldn't be too hard to expand to different image types.
int imgStep = image->widthStep;
uchar* imageData = (uchar*)image->imageData;
unsigned int result[image->width]; // note: variable-length arrays are a compiler extension in C++
memset(result, 0, sizeof(unsigned int) * image->width);
for (int col = 0; col < image->width; col++) {
for (int row = 0; row < image->height; row++) {
result[col] += imageData[row * imgStep + col];
}
}
// your desired vector is in result