Displaying a histogram plot in OpenCV

I have computed the histogram for an image, and I want to display it as an image so that I can actually see it. I think my problem is to do with scaling, although I am also slightly confused by the coordinate system starting at (0,0) in the top left.
int rows = channel.rows;
int cols = channel.cols;
int hist[256] = {0};
for (int i = 0; i < rows; i++)
{
    for (int k = 0; k < cols; k++)
    {
        int value = channel.at<cv::Vec3b>(i, k)[0];
        hist[value] = hist[value] + 1;
    }
}
Mat histPlot = cvCreateMat(256, 500, CV_8UC1);
for (int i = 0; i < 256; i++)
{
    int mag = hist[i];
    line(histPlot, Point(i, 0), Point(i, mag), Scalar(255, 0, 0));
}
namedWindow("Hist", 1);
imshow("Hist", histPlot);
This is my code for calculating the histogram and displaying the result. If I divide mag by 100 in the second loop, I get some resemblance of a plot (although upside down). I call this method whenever I adjust a value of my image, so the histogram should change shape too, which it doesn't appear to do. Any help with scaling the histogram and displaying it properly is appreciated.

Please don't use cvCreateMat (i.e. the old C API). You also seem to have rows and cols swapped. Additionally, if you want a color drawing, you need a color image, so make that:
Mat histPlot(500, 256, CV_8UC3);
The image origin is the top left (0,0), so you have to draw y in reverse:
line(histPlot, Point(i, histPlot.rows - 1), Point(i, histPlot.rows - 1 - mag / 100), Scalar(255, 0, 0));
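Putting it together, here is a minimal sketch (my own names, assuming channel is a CV_8UC1 Mat) that recomputes and redraws the plot on every call, normalizing by the tallest bin so the plot always fits:

#include <opencv2/opencv.hpp>
#include <algorithm>
using namespace cv;

void showHist(const Mat& channel) // channel: CV_8UC1
{
    // count pixel values
    int hist[256] = {0};
    for (int i = 0; i < channel.rows; i++)
        for (int k = 0; k < channel.cols; k++)
            hist[channel.at<uchar>(i, k)]++;

    // find the tallest bin for normalization
    int maxVal = 1;
    for (int i = 0; i < 256; i++)
        maxVal = std::max(maxVal, hist[i]);

    // draw bottom-up, since the image origin is the top left
    Mat histPlot(500, 256, CV_8UC3, Scalar::all(0));
    for (int i = 0; i < 256; i++)
    {
        int mag = cvRound(hist[i] / (float)maxVal * (histPlot.rows - 1));
        line(histPlot, Point(i, histPlot.rows - 1),
             Point(i, histPlot.rows - 1 - mag), Scalar(255, 0, 0));
    }
    imshow("Hist", histPlot);
}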

Related

Count the number of same-coloured pixels in a labelled object in OpenCV

I am trying to segment an image of rocks and I get a decent result. But now I need to count the pixels in the largest coloured object.
The picture above shows a segmented image of a rock pile. I want to count the number of green pixels, which denote the largest rock in the image, and then also count the 2nd largest, i.e. the yellow one. After counting, I would like to compare the counts against the ground truth to evaluate my results.
The code to get the segmented image is adapted from Watershed segmentation opencv. A part of my code is given below:
cv::findContours(peaks_8u, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
// Create the marker image for the watershed algorithm
// CV_32S - 32-bit signed integers ( -2147483648..2147483647 )
cv::Mat markers = cv::Mat::zeros(input_image.size(), CV_32S);
// Draw the foreground markers
for (size_t i = 0; i < contours.size(); i++)
{
    cv::drawContours(markers, contours, static_cast<int>(i), cv::Scalar(static_cast<int>(i) + 1), -1);
}
// Draw the background marker
cv::circle(markers, cv::Point(5, 5), 3, cv::Scalar(255), -1);
cv::watershed(in_sharpened_image, markers);
// Generate random colors; result of watershed
std::vector<cv::Vec3b> colors;
for (size_t i = 0; i < contours.size(); i++)
{
    int b = cv::theRNG().uniform(0, 256);
    int g = cv::theRNG().uniform(0, 256);
    int r = cv::theRNG().uniform(0, 256);
    colors.push_back(cv::Vec3b((uchar)b, (uchar)g, (uchar)r));
}
// Create the result image
cv::Mat dst = cv::Mat::zeros(markers.size(), CV_8UC3);
// Fill labeled objects with random colors
for (int i = 0; i < markers.rows; i++)
{
    for (int j = 0; j < markers.cols; j++)
    {
        int index = markers.at<int>(i, j);
        if (index > 0 && index <= static_cast<int>(contours.size()))
        {
            dst.at<cv::Vec3b>(i, j) = colors[index - 1];
        }
    }
}
Question: Is there an efficient way to count the pixels inside the largest marker in OpenCV?
You can calculate a histogram of markers using cv::calcHist with a range from 0 to contours.size() + 1 and find the largest value in it, starting from index 1.
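A minimal sketch of that idea, counting with a plain loop instead of cv::calcHist (markers is CV_32S, which calcHist does not accept without a conversion to a supported type such as CV_32F):

#include <algorithm>
#include <vector>

// counts[k] = number of pixels carrying watershed label k
std::vector<int> counts(contours.size() + 1, 0);
for (int i = 0; i < markers.rows; i++)
{
    const int* row = markers.ptr<int>(i);
    for (int j = 0; j < markers.cols; j++)
    {
        if (row[j] > 0 && row[j] <= static_cast<int>(contours.size()))
            counts[row[j]]++;
    }
}
// the largest object is the label with the highest count (index 0 is unused)
int largestLabel = static_cast<int>(
    std::max_element(counts.begin() + 1, counts.end()) - counts.begin());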
Instead of counting pixels you could use contourArea() for your largest contour. This will be much faster. Something like this:
cv::Mat mask;
// numOfSegments - number of your labels (colors)
for (int i = 0; i < numOfSegments; i++) {
    std::vector<cv::Vec4i> hierarchy;
    // this "i + 2" may be different for you,
    // depending on your label allocation.
    // This is thresholding to get a mask with
    // the contour of your #i label (color)
    cv::inRange(markers, i + 2, i + 2, mask);
    contours.clear();
    findContours(mask, contours, hierarchy, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
    double area = cv::contourArea(contours[0]);
}
Having the contours in hand is also useful because after watershed() they will be quite "noisy", with lots of small peaks, and not suitable for most uses in their raw form. With a contour you can smooth it with a Gaussian or approxPolyDP, etc., as well as check important properties of the contour shape if you need them.
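For example, a sketch of the approxPolyDP smoothing mentioned above (the epsilon value here is only illustrative):

// simplify the noisy watershed contour before measuring it
std::vector<cv::Point> approx;
double epsilon = 0.01 * cv::arcLength(contours[0], true); // 1% of the perimeter
cv::approxPolyDP(contours[0], approx, epsilon, true);
double smoothedArea = cv::contourArea(approx);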

How can I extract a v-disparity map from a disparity map

I'm new to OpenCV and I'm trying to run some code. I need to get a v-disparity map from a disparity map. I'm using two rectified images for stereo matching, and from that I compute the dense disparity map. I got the disparity map, but when I tried to transform it into a v-disparity map I got nothing: an empty window appeared. I'm referring to the algorithm proposed by Raphael Labayrade, Didier Aubert, and Jean-Philippe Tarel in their article "Real Time Obstacle Detection in Stereovision on Non Flat Road Geometry Through 'V-disparity' Representation".
Here is my code:
int main(int argc, char *argv[])
{
    int nbrepetion;
    Mat img = imread(argv[1], 0);
    Mat image(img.rows, img.cols, CV_8UC1);
    if (img.empty()) {
        printf("Could not load image file\n");
        exit(0);
    }
    int height = img.rows;
    int width = img.cols;
    int a = width;
    int k = 0;
    uchar pos = 0;
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++)
            for (int k = 0; k < a; k++) {
                if (img.at<uchar>(i, j) == img.at<uchar>(i, k)) {
                    nbrepetion++;
                }
            }
        if (nbrepetion == 1) {
            image.at<uchar>(i, k) = img.at<uchar>(i, k);
        } else {
            pos = img.at<uchar>(i, k);
            image.at<uchar>(pos, k) = nbrepetion;
        }
        nbrepetion = 0;
    }
    namedWindow("disparityimage", CV_WINDOW_AUTOSIZE);
    imshow("disparityimage", image);
    waitKey(0);
    return 0;
}
For a v-disparity image:
Use a matrix of size (rows, maxVal) and, for each row of the disparity image, increment by 1 the element whose column corresponds to that disparity value.
Repeat this per column for the u-disparity image.
Let us denote the disparity image as disp, of size (height, width).
The output is the v-disparity image of size (height, maxDisp), where maxDisp is the maximum value in the disparity image. Let's denote it vdisp.
The algorithm (pseudocode) is as follows:
for each i in disp.rows do
    for each j in disp.cols do
        if disp(i, j) > 0 then
            vdisp(i, disp(i, j))++
        end
    end
end
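A minimal C++ sketch of that pseudocode (names are my own; assuming disp is a CV_8UC1 disparity image):

#include <opencv2/opencv.hpp>

cv::Mat computeVDisparity(const cv::Mat& disp) // disp: CV_8UC1
{
    // one column per possible disparity value
    double maxVal;
    cv::minMaxLoc(disp, nullptr, &maxVal);
    int maxDisp = (int)maxVal + 1;

    cv::Mat vdisp = cv::Mat::zeros(disp.rows, maxDisp, CV_32S);
    for (int i = 0; i < disp.rows; i++)
    {
        const uchar* row = disp.ptr<uchar>(i);
        for (int j = 0; j < disp.cols; j++)
            if (row[j] > 0)
                vdisp.at<int>(i, row[j])++;
    }
    // scale the counts to 8 bits so the result can be displayed with imshow
    cv::Mat vdisp8;
    cv::normalize(vdisp, vdisp8, 0, 255, cv::NORM_MINMAX, CV_8U);
    return vdisp8;
}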
If you look at your v-disparity image, straight vertical lines represent surfaces of obstacles, and a straight diagonal line represents the ground surface plane. You can use the Hough transform to identify straight lines in the v-disparity image.
This is explained very well in the paper "FPGA implementation of the V-disparity based obstacles detection approach".

Kinect depth and color image

I have a little problem going from the depth map to the corresponding color map: I simply threshold the nearest depth (about 70-80 cm), then bitwise-AND the thresholded depth image with the corresponding color map:
Mat depthFilter(Mat depth, Mat color) {
    Mat I;
    depth.convertTo(I, CV_8UC1, 255.0 / 4096.0);
    unsigned char *input = (unsigned char*)(I.data);
    for (int i = 0; i < I.cols; i++) {
        for (int j = 0; j < I.rows; j++) {
            int pixel = input[I.cols * j + i];
            if (!(pixel < 52 && pixel > 42)) {
                input[I.cols * j + i] = 0;
            } else {
                input[I.cols * j + i] = 255;
            }
        }
    }
    cvtColor(color, color, CV_BGR2GRAY);
    bitwise_and(I, color, I);
    return I;
}
(I'm using OpenCvKinect, which uses OpenNi and OpenCv)
But my problem is that the points are not the same. I think I need to find some relation between the two images, but how?
http://postimg.org/image/hyxt25bwd/
I would look at Why kinect color and depth won't align correctly?, as they have a similar problem in MATLAB. The answer suggests using OpenNI's AlternateViewCapability class to align the images. This is the documentation from the older version of OpenNI (1.5), as I cannot find the 2.0 documentation for C++, but there is probably a similar method. The images on that answer show the difference the shift makes.
The code is essentially:
// depth is the depth generator, image is the color generator
depth.GetAlternativeViewPointCap().SetViewPoint(image);
I am not sure if you have already solved the alignment problem, but it has been implemented within the OpenCVKinect wrappers you are already using.
To acquire aligned depth and color images from Kinect, you need to use the setMode function as follows:
setMode(C_MODE_DEPTH | C_MODE_COLOR | C_MODE_ALIGNED);

Replicate OpenCV resize with bilinear interpolation in C (shrink only)

I'm trying to replicate OpenCV's resizing algorithm with bilinear interpolation in C. I want the resulting image to be exactly the same (pixel values) as the one produced by OpenCV. I am particularly interested in shrinking, not magnification, and I want to use it on single-channel grayscale images. I've read online that the bilinear interpolation algorithm differs between shrinking and enlarging, but I could not find formulas for the shrinking implementation, so the code I wrote is likely wrong. What I wrote comes from my knowledge of interpolation acquired in a university course on computer graphics and OpenGL. The results of my algorithm are images visually identical to those produced by OpenCV, but the pixel values are not perfectly identical (in particular near edges). Can you show me the shrinking algorithm with bilinear interpolation and a possible implementation?
Note: the attached code is a one-dimensional filter which must be applied first horizontally and then vertically (i.e. on the transposed matrix).
Mat rescale(Mat src, float ratio) {
    float width = src.cols * ratio;                //resized width
    int i_width = cvRound(width);
    float step = (float)src.cols / (float)i_width; //size of new pixels mapped over old image
    float center = step / 2;                       //V1 - center position of new pixel
    //float center = step / src.cols;              //V2 - other possible center position of new pixel
    //float center = 0.099f;                       //V3 - Lena 512x512 lowest difference possible to OpenCV
    Mat dst(src.rows, i_width, CV_8UC1);
    //cycle through all rows
    for (int j = 0; j < src.rows; j++) {
        //in each row compute new pixels
        for (int i = 0; i < i_width; i++) {
            float pos = (i * step) + center; //position of (the center of) new pixel in old map coordinates
            int pred = floor(pos);           //predecessor pixel in the original image
            int succ = ceil(pos);            //successor pixel in the original image
            float d_pred = pos - pred;       //pred and succ distances from the center of new pixel
            float d_succ = succ - pos;
            int val_pred = src.at<uchar>(j, pred); //pred and succ values
            int val_succ = src.at<uchar>(j, succ);
            float val = (val_pred * d_succ) + (val_succ * d_pred); //weights swapped, supposing "d_succ = 1 - d_pred"
            int i_val = cvRound(val);
            if (i_val == 0) //if pos is a perfect int "x.0000", pred and succ are the same pixel
                i_val = val_pred;
            dst.at<uchar>(j, i) = i_val;
        }
    }
    return dst;
}
Bilinear interpolation is not separable in the sense that you can resize vertically and then resize again horizontally. See the example here.
You can see OpenCV's resize code here.
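One concrete detail that often explains small per-pixel differences: OpenCV maps each destination pixel to source coordinates through the pixel centers, i.e. src_x = (dst_x + 0.5) * scale - 0.5. Here is a hedged sketch of one row of that mapping (this matches my reading of OpenCV's resize code, but note that OpenCV also uses fixed-point weights internally, so a +/-1 rounding difference can still remain):

// resize row j of a CV_8UC1 src into dst_width columns of dst
float scale = (float)src.cols / (float)dst_width;
for (int i = 0; i < dst_width; i++)
{
    float pos = (i + 0.5f) * scale - 0.5f;       // center-aligned source position
    int pred = (int)std::floor(pos);
    float d = pos - (float)pred;                 // weight of the successor pixel
    int p0 = std::max(pred, 0);                  // clamp at the image borders
    int p1 = std::min(pred + 1, src.cols - 1);
    float val = src.at<uchar>(j, p0) * (1.0f - d) + src.at<uchar>(j, p1) * d;
    dst.at<uchar>(j, i) = (uchar)cvRound(val);
}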

Move every pixel to right (by 1px) in OpenCV without using remap?

I want to move every pixel in an image to the right by 1px, and below is the map I use for the remap transformation.
This approach requires much more time than it should for such a simple transform. Is there a cv function I can use? Or should I just split the image into two images, one src.cols-1 pixels wide and the other 1px wide, and then copy them into the new image?
void update_map()
{
    for (int j = 0; j < src.cols; j++) {
        for (int i = 0; i < src.rows; i++) {
            if (j == src.cols - 1)
                mat_x_Rotate.at<float>(i, j) = 0;
            else
                mat_x_Rotate.at<float>(i, j) = j + 1;
            mat_y_Rotate.at<float>(i, j) = i;
        }
    }
}
Things you can do to improve your performance:
- remap is overkill for this purpose. It is more efficient to copy the pixels directly than to define an entire remap transformation and then use it.
- Switch your loop order: iterate over rows, then columns. (OpenCV's Mat is stored in row-major order, so iterating over columns first is very cache-unfriendly.)
- Use Mat::ptr() to access pixels in the same row directly, as a C-style array. (This is a big performance win over using at<>(), which probably does things like check indices on each access.)
- Take your if statement out of the inner loop and handle column 0 separately (see the row-pointer sketch at the end of this answer).
As an alternative: yes, splitting the image into parts and copying to the new image might be about as efficient as copying directly, as described above.
Mat Shift_Image_to_Right(Mat src_in, int num_pixels)
{
    Size sz_src_in = src_in.size();
    Mat img_out(sz_src_in.height, sz_src_in.width, CV_8UC3);
    Rect roi;
    roi.x = 0;
    roi.y = 0;
    roi.width = sz_src_in.width - num_pixels;
    roi.height = sz_src_in.height;
    Mat crop;
    crop = src_in(roi);
    // Move the left boundary to the right, so the crop
    // lands num_pixels to the right of the left edge
    img_out = Scalar::all(0);
    img_out.adjustROI(0, 0, -num_pixels, 0);
    crop.copyTo(img_out);
    img_out.adjustROI(0, 0, num_pixels, 0);
    return img_out;
}
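Following the bullets above, a minimal row-pointer sketch of the direct copy (my own helper, assuming a CV_8UC3 image; memmove is needed because source and destination overlap):

#include <opencv2/opencv.hpp>
#include <cstring>

// shift every pixel one column to the right, in place
void shiftRightInPlace(cv::Mat& img) // img: CV_8UC3
{
    const int pixelBytes = (int)img.elemSize(); // 3 bytes per pixel for CV_8UC3
    for (int i = 0; i < img.rows; i++)
    {
        uchar* row = img.ptr<uchar>(i);
        // move cols-1 pixels one slot to the right (regions overlap, hence memmove)
        std::memmove(row + pixelBytes, row, (size_t)(img.cols - 1) * pixelBytes);
        // clear the now-vacant column 0
        std::memset(row, 0, pixelBytes);
    }
}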
