Clusterization with mask - image-processing

Actually, I have already asked this question in the official Q&A but haven't received any answer yet.
My task is to run k-means clustering not on the whole image but only on its masked part. So as input I have two images:
A mask image.
The image converted to Lab color space.
If I cluster the image into n clusters, then after clustering with the mask I want to end up with an image of n+1 clusters (+1 because of the mask).
Of course I researched and googled it but found nothing.
Thanks for any advice.

Create another matrix, copy only the pixels selected by the mask into it, and use that matrix to perform your kmeans. This is how it goes:
[edit]
The following does not work: it only blacks out the pixels outside the mask, and tmp keeps the same dimensions as the original image.
cv::Mat tmp;
labImage.copyTo(tmp, mask);
You should allocate the tmp matrix beforehand, and fill it with a loop over the mask:
cv::Mat tmp(cv::countNonZero(mask), 1, CV_32FC3); // kmeans expects float samples, one row per masked pixel
int counter = 0;
for (int r = 0; r < mask.rows; ++r)
    for (int c = 0; c < mask.cols; ++c)
        if (mask.at<unsigned char>(r, c)) {
            // I assume labImage is CV_8UC3, so convert each Lab pixel to float
            cv::Vec3b lab = labImage.at<cv::Vec3b>(r, c);
            tmp.at<cv::Vec3f>(counter++, 0) = cv::Vec3f(lab[0], lab[1], lab[2]);
        }
[/edit]
// kmeans needs termination criteria, an attempt count and a flag
cv::Mat labels;
cv::kmeans(tmp, k, labels, cv::TermCriteria(cv::TermCriteria::COUNT, 10, 1.0), 3, cv::KMEANS_PP_CENTERS);
// Now to compute your image of labels
cv::Mat labelsImage(labImage.size(), CV_32S, cv::Scalar(k)); // initialize every pixel to k, the label of your extra (n+1)-th cluster
// Now loop through your pixel mask and set the corresponding label in labelsImage
counter = 0; // reuse the counter from above
for (int r = 0; r < mask.rows; ++r)
    for (int c = 0; c < mask.cols; ++c)
        if (mask.at<unsigned char>(r, c))
            labelsImage.at<int>(r, c) = labels.at<int>(counter++, 0);
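Not part of the original answer, but as a hedged follow-up sketch: the n+1 labels (the k kmeans clusters plus the extra label k for pixels outside the mask) can be painted with random colours for a quick visual check, reusing labelsImage and k from above.
std::vector<cv::Vec3b> palette; // one colour per label, including the extra one
for (int i = 0; i <= k; ++i)
    palette.push_back(cv::Vec3b((uchar)cv::theRNG().uniform(0, 256),
                                (uchar)cv::theRNG().uniform(0, 256),
                                (uchar)cv::theRNG().uniform(0, 256)));
cv::Mat vis(labelsImage.size(), CV_8UC3);
for (int r = 0; r < vis.rows; ++r)
    for (int c = 0; c < vis.cols; ++c)
        vis.at<cv::Vec3b>(r, c) = palette[labelsImage.at<int>(r, c)];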

Related

Copy the area defined by mask into a new continuous image

I would like to copy the masked pixels (possibly non-contiguous) into a new image sized to the mask. I have outlined the objective I am trying to achieve in this image.
One way to achieve this would be to loop through the complete image, check the mask value (== 255) for each pixel, and copy that pixel into the new image.
What I am interested in is whether there is a better way to accomplish this, something more optimized for performance.
cv::Mat fullImage = cv::imread("full_image.png");
cv::Mat grayImage;
cv::cvtColor(fullImage, grayImage, cv::COLOR_BGR2GRAY);
cv::Mat mask;
cv::threshold(grayImage, mask, 10, 255, cv::THRESH_BINARY);
cv::Mat subImage(1, cv::countNonZero(mask), CV_8UC3, cv::Scalar(255, 255, 255));
int siColIdx = 0;
// now loop through the image
for (int i = 0; i < fullImage.rows; i++) {
    for (int j = 0; j < fullImage.cols; j++) {
        if (mask.at<uchar>(i, j) == 255) {
            subImage.at<cv::Vec3b>(0, siColIdx++) = fullImage.at<cv::Vec3b>(i, j);
        }
    }
}
disclaimer: the code has not been tested. Purely to provide one approach to the solution.
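Not tested either, but a hedged sketch of the same loop written with row pointers instead of per-pixel at<>() calls, which is usually noticeably faster; it assumes the fullImage, mask and subImage variables from the snippet above.
int idx = 0;
cv::Vec3b* out = subImage.ptr<cv::Vec3b>(0); // subImage is a single row
for (int i = 0; i < fullImage.rows; ++i) {
    const uchar* maskRow = mask.ptr<uchar>(i);
    const cv::Vec3b* srcRow = fullImage.ptr<cv::Vec3b>(i);
    for (int j = 0; j < fullImage.cols; ++j)
        if (maskRow[j] == 255)
            out[idx++] = srcRow[j];
}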

Count the number of same-coloured pixels in a labelled object in OpenCV

I am trying to segment an image of rocks and I get a decent result. But now I need to count the pixels in the largest colored object.
The picture above shows a segmented image of a rock pile, and I want to count the number of green pixels, which denote the largest rock in the image, and then also count the second largest, i.e. the yellow one. After counting I would like to compare the result with the ground truth.
The code to get the segmented image is adapted from Watershed segmentation opencv. A part of my code is given below:
cv::findContours(peaks_8u, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
// Create the marker image for the watershed algorithm
// CV_32S - 32-bit signed integers ( -2147483648..2147483647 )
cv::Mat markers = cv::Mat::zeros(input_image.size(), CV_32S);
// Draw the foreground markers
for (size_t i = 0; i < contours.size(); i++)
{
cv::drawContours(markers, contours, static_cast<int>(i), cv::Scalar(static_cast<int>(i) + 1), -1);
}
// Draw the background marker
cv::circle(markers, cv::Point(5, 5), 3, cv::Scalar(255), -1);
cv::watershed(in_sharpened_image, markers);
// Generate random colors; result of watershed
std::vector<cv::Vec3b> colors;
for (size_t i = 0; i < contours.size(); i++)
{
int b = cv::theRNG().uniform(0, 256); //0,256
int g = cv::theRNG().uniform(0, 256);
int r = cv::theRNG().uniform(0, 256);
colors.push_back(cv::Vec3b((uchar)b, (uchar)g, (uchar)r));
}
// Create the result image
cv::Mat dst = cv::Mat::zeros(markers.size(), CV_8UC3);
// Fill labeled objects with random colors
for (int i = 0; i < markers.rows; i++)
{
for (int j = 0; j < markers.cols; j++)
{
int index = markers.at<int>(i, j);
if (index > 0 && index <= static_cast<int>(contours.size()))
{
dst.at<cv::Vec3b>(i, j) = colors[index - 1];
}
}
}
Question: Is there an efficient way to count the pixels inside the largest marker in OpenCV?
You can calculate a histogram of markers using cv::calcHist with a range from 0 to contours.size() + 1 and find the largest value in it, starting from index 1.
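A minimal sketch of that approach, assuming the markers and contours variables from the snippet above; cv::calcHist only accepts 8U, 16U or 32F input, so the CV_32S marker image is converted first.
cv::Mat markers32f;
markers.convertTo(markers32f, CV_32F);

int histSize = static_cast<int>(contours.size()) + 1;   // bins for labels 0..contours.size()
float range[] = { 0.0f, static_cast<float>(histSize) }; // upper bound is exclusive
const float* histRange[] = { range };

cv::Mat hist;
cv::calcHist(&markers32f, 1, 0, cv::Mat(), hist, 1, &histSize, histRange);

// Skip bin 0 (background) and pick the label with the largest pixel count.
int largestLabel = 1;
for (int label = 2; label < histSize; ++label)
    if (hist.at<float>(label) > hist.at<float>(largestLabel))
        largestLabel = label;
int largestCount = static_cast<int>(hist.at<float>(largestLabel));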
Instead of counting pixels you could use contourArea() on your largest contour. This will be much faster.
Something like this:
cv::Mat mask;
// numOfSegments - number of your labels (colors)
for (int i = 0; i < numOfSegments; i++) {
std::vector<cv::Vec4i> hierarchy;
// this "i + 2" may be different for you
// depends on your labels allocation.
// This is thresholding to get mask with
// contour of your #i label (color)
cv::inRange(markers, i + 2, i + 2, mask);
contours.clear();
findContours(mask, contours, hierarchy, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
double area = cv::contourArea(contours[0]);
}
Having the contours at hand is also good because after watershed() they will be quite "noisy", with lots of small peaks, and not suitable for most uses in their raw form. Once you have a contour you can smooth it with a Gaussian or approxPoly, etc., as well as check some important properties of the contour shape if you need to.
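For example, a hedged sketch of simplifying one such contour with approxPolyDP (the epsilon factor is illustrative, not from the original answer):
std::vector<cv::Point> smoothed;
double epsilon = 0.01 * cv::arcLength(contours[0], true); // tolerance relative to perimeter
cv::approxPolyDP(contours[0], smoothed, epsilon, true);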

How can I extract a v-disparity map from a disparity map

I'm new to OpenCV and I'm trying to run some code. I need to get a v-disparity map from a disparity map. I'm using two rectified images for stereo matching and, after that, computing the dense disparity map. I got the disparity map, but when I tried to transform it into a v-disparity map I got nothing: an empty window appeared. I'm referring to the algorithm proposed by Raphael Labayrade, Didier Aubert, and Jean-Philippe Tarel in their article "Real Time Obstacle Detection in Stereovision on Non Flat Road Geometry Through ”V-disparity” Representation".
Here is my code:
int main(int argc, char *argv[]){
int nbrepetion ;
Mat img = imread(argv[1],0);
Mat image(img.rows,img.cols, CV_8UC1);
if(img.empty()){
printf("Could not load image file\n");
exit(0);
}
int height = img.rows;
int width = img.cols;
int a = width ;
int k = 0 ;
uchar pos =0 ;
for(int i = 0; i < height; i++){
for(int j = 0; j < width; j++)
for (int k = 0; k < a; k++){
if(img.at<uchar>(i,j) == img.at<uchar>(i,k)) {
nbrepetion ++ ;
}
}
if(nbrepetion == 1){
image.at<uchar>(i,k) = img.at<uchar>(i,k);
} else {
pos = img.at<uchar>(i,k);
image.at<uchar>(pos,k) = nbrepetion;
}
nbrepetion = 0 ;
}
namedWindow("disparityimage", CV_WINDOW_AUTOSIZE);
imshow("disparityimage", image );
waitKey(0);
return 0;
}
For a v-disparity image:
Use a matrix of size (rows, maxVal) and, for each row of the disparity image, increment the element whose column index equals the disparity value.
Accumulate in the same way per column to obtain the u-disparity image.
Let us denote the disparity image as disp, of size (height, width).
The output is the v-disparity image of size (height, maxDisp), where maxDisp is the maximum value in the disparity image. Let us denote it vdisp.
The algorithm in pseudo code is as follows:
For each i in disp.Rows DO
  For each j in disp.Columns DO
    if disp(i, j) > 0 THEN
      vdisp(i, disp(i, j))++
    end
  end
end
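A minimal OpenCV translation of this pseudo code, assuming an 8-bit single-channel disparity image called disp (the names are illustrative):
double maxVal;
cv::minMaxLoc(disp, 0, &maxVal);
int maxDisp = static_cast<int>(maxVal) + 1;

cv::Mat vdisp = cv::Mat::zeros(disp.rows, maxDisp, CV_32S);
for (int i = 0; i < disp.rows; ++i)
    for (int j = 0; j < disp.cols; ++j) {
        int d = disp.at<uchar>(i, j);
        if (d > 0)
            vdisp.at<int>(i, d)++;
    }

// Scale to 8 bits so the result can be shown with imshow.
cv::Mat vdisp8u;
cv::normalize(vdisp, vdisp8u, 0, 255, cv::NORM_MINMAX, CV_8U);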
If you look at your v-disparity image, straight vertical lines represent obstacle surfaces, and a straight diagonal line represents the ground plane. You can use the Hough transform to identify straight lines in the v-disparity image.
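A hedged sketch of that step, reusing the vdisp8u image from the sketch above (the threshold and Hough parameters are illustrative):
cv::Mat strong;
cv::threshold(vdisp8u, strong, 30, 255, cv::THRESH_BINARY); // keep well-populated bins only
std::vector<cv::Vec4i> lines;
cv::HoughLinesP(strong, lines, 1, CV_PI / 180.0, 50, 30, 5);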
This is explained very well in the paper "FPGA implementation of the V-disparity based obstacles detection approach".

Move every pixel to right (by 1px) in OpenCV without using remap?

I want to move every pixel in an image to the right by 1 px, and below is the map I use to do the remap transformation.
This approach requires much more time than it should for such a simple transform. Is there a cv function I can use? Or should I just split the image into two images, one src.cols-1 pixels wide and the other 1 px wide, and then copy them into the new image?
void update_map()
{
for( int j = 0; j < src.cols; j++ ){
for( int i = 0; i < src.rows; i++ ){
if (j == src.cols-1)
mat_x_Rotate.at<float>(i,j) = 0;
else
mat_x_Rotate.at<float>(i,j) = j + 1;
mat_y_Rotate.at<float>(i,j) = i;
}
}
}
Things you can do to improve your performance:
remap is overkill for this purpose. It is more efficient to copy the pixels directly than to define an entire remap transformation and then use it.
switch your loop order: iterate over rows, then columns. (OpenCV's Mat is stored in row-major order, so iterating over columns first is very cache-unfriendly)
use Mat::ptr() to access pixels in the same row directly, as a C-style array (this is a big performance win over using at<>(), which probably does things like checking indices on each access); see the sketch after this list.
take your if statement out of the inner loop, and handle column 0 separately.
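A hedged sketch of that direct copy, shifting a CV_8UC3 image named src one pixel to the right and leaving column 0 black (dst is an illustrative name):
cv::Mat dst(src.size(), src.type(), cv::Scalar::all(0));
for (int i = 0; i < src.rows; ++i) {
    const cv::Vec3b* srcRow = src.ptr<cv::Vec3b>(i);
    cv::Vec3b* dstRow = dst.ptr<cv::Vec3b>(i);
    // column 0 is handled separately (left black here)
    for (int j = 1; j < src.cols; ++j)
        dstRow[j] = srcRow[j - 1];
}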
As an alternative: yes, splitting the image into parts and copying to the new image might be about as efficient as copying directly, as described above.
Mat Shift_Image_to_Right(Mat src_in, int num_pixels)
{
    Size sz_src_in = src_in.size();
    Mat img_out(sz_src_in.height, sz_src_in.width, CV_8UC3);
    img_out = Scalar::all(0);

    // Crop everything except the last num_pixels columns of the source
    Rect roi;
    roi.x = 0;
    roi.y = 0;
    roi.width = sz_src_in.width - num_pixels;
    roi.height = sz_src_in.height;
    Mat crop = src_in(roi);

    // Move the left boundary of the output ROI to the right by num_pixels,
    // copy the crop into it, then restore the full ROI
    img_out.adjustROI(0, 0, -num_pixels, 0);
    crop.copyTo(img_out);
    img_out.adjustROI(0, 0, num_pixels, 0);
    return img_out;
}

OpenCV displaying a 2-channel image (optical flow)

I have optical flow stored in a 2-channel 32F matrix. I want to visualize the contents; what's the easiest way to do this?
How do I convert a CV_32FC2 matrix to RGB with an empty blue channel, something imshow can handle? I am using the OpenCV 2 C++ API.
Super Bonus Points
Ideally I would get the angle of flow in hue and the magnitude in brightness (with saturation at a constant 100%).
imshow can only handle 1-channel grayscale and 3- or 4-channel BGR/BGRA images, so you need to do the conversion yourself.
I think you can do something similar to:
// extract the x and y channels
cv::Mat xy[2]; //X,Y
cv::split(flow, xy);
//calculate angle and magnitude
cv::Mat magnitude, angle;
cv::cartToPolar(xy[0], xy[1], magnitude, angle, true);
//translate magnitude to range [0;1]
double mag_max;
cv::minMaxLoc(magnitude, 0, &mag_max);
magnitude.convertTo(magnitude, -1, 1.0 / mag_max);
//build hsv image
cv::Mat _hsv[3], hsv;
_hsv[0] = angle;
_hsv[1] = cv::Mat::ones(angle.size(), CV_32F);
_hsv[2] = magnitude;
cv::merge(_hsv, 3, hsv);
//convert to BGR and show
cv::Mat bgr;//CV_32FC3 matrix
cv::cvtColor(hsv, bgr, cv::COLOR_HSV2BGR);
cv::imshow("optical flow", bgr);
cv::waitKey(0);
The MPI Sintel Dataset provides C and MATLAB code for visualizing computed flow. Download the ground-truth optical flow of the training set from here; the archive contains a folder flow_code with the mentioned source code.
You can port the code to OpenCV; however, I wrote a simple OpenCV wrapper to easily use the provided code. The method MotionToColor is taken from the color_flow.cpp file. Note the comments in the listing below.
// Important to include this before flowIO.h!
#include "imageLib.h"
#include "flowIO.h"
#include "colorcode.h"
// I moved the MotionToColor method in a separate header file.
#include "motiontocolor.h"
cv::Mat flow;
// Compute optical flow (e.g. using OpenCV); result should be
// 2-channel float matrix.
assert(flow.channels() == 2);
// assert(flow.type() == CV_32F);
int rows = flow.rows;
int cols = flow.cols;
CFloatImage cFlow(cols, rows, 2);
// Convert flow to CFloatImage:
for (int i = 0; i < rows; i++) {
for (int j = 0; j < cols; j++) {
cFlow.Pixel(j, i, 0) = flow.at<cv::Vec2f>(i, j)[0];
cFlow.Pixel(j, i, 1) = flow.at<cv::Vec2f>(i, j)[1];
}
}
CByteImage cImage;
// Assumption: a non-positive max lets MotionToColor pick the scale from the flow itself.
float max = -1;
MotionToColor(cFlow, cImage, max);
cv::Mat image(rows, cols, CV_8UC3, cv::Scalar(0, 0, 0));
// Convert back to cv::Mat with 3 channels in BGR:
for (int i = 0; i < rows; i++) {
for (int j = 0; j < cols; j++) {
image.at<cv::Vec3b>(i, j)[0] = cImage.Pixel(j, i, 0);
image.at<cv::Vec3b>(i, j)[1] = cImage.Pixel(j, i, 1);
image.at<cv::Vec3b>(i, j)[2] = cImage.Pixel(j, i, 2);
}
}
// Display or output the image ...
Below is the result when using the Optical Flow code and example images provided by Ce Liu.
