I am trying to display 2 images horizontally adjacent to each other in the same window using OpenCV.
I have tried using the adjustROI function for this. Image 1 is 1088 pixels wide and 2208 pixels high, while Image 2 is 1280 pixels wide and 2208 pixels high. Please suggest what could be wrong in the code below. All I am getting is an image of the size of Image 2, with content from Image 2 as well.
Mat img_matches = Mat(2208, 2368, imgorig.type()); // size is the combination of img1 and img2
img_matches.adjustROI(0, 0, 0, -1280);
imgorig.copyTo(img_matches);
img_matches.adjustROI(0, 0, 1088, 1280);
imgorig2.copyTo(img_matches);
EDIT: Here's how I'd do what you want to do:
Mat left(img_matches, Rect(0, 0, 1088, 2208)); // Copy constructor
imgorig.copyTo(left);
Mat right(img_matches, Rect(1088, 0, 1280, 2208)); // Copy constructor
imgorig2.copyTo(right);
The copy constructors create a copy of the Mat header that points to the ROI defined by each of the Rects.
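You can verify that only the header is copied and the underlying pixels are shared; a minimal sketch (same names as above):
Mat left(img_matches, Rect(0, 0, 1088, 2208));
CV_Assert(left.data == img_matches.data); // same pixel buffer, different header
left.setTo(Scalar(0, 0, 255));            // painting the view also paints img_matches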
Full code:
#include <cv.h>
#include <highgui.h>

using namespace cv;

int main(int argc, char **argv)
{
    Mat im1 = imread(argv[1]);
    Mat im2 = imread(argv[2]);
    Size sz1 = im1.size();
    Size sz2 = im2.size();
    Mat im3(sz1.height, sz1.width + sz2.width, CV_8UC3);
    Mat left(im3, Rect(0, 0, sz1.width, sz1.height));
    im1.copyTo(left);
    Mat right(im3, Rect(sz1.width, 0, sz2.width, sz2.height));
    im2.copyTo(right);
    imshow("im3", im3);
    waitKey(0);
    return 0;
}
Compiles with:
g++ foo.cpp -o foo.out `pkg-config --cflags --libs opencv`
EDIT2:
Here's how it looks with adjustROI:
#include <cv.h>
#include <highgui.h>

using namespace cv;

int main(int argc, char **argv)
{
    Mat im1 = imread(argv[1]);
    Mat im2 = imread(argv[2]);
    Size sz1 = im1.size();
    Size sz2 = im2.size();
    Mat im3(sz1.height, sz1.width + sz2.width, CV_8UC3);
    // Move the right boundary to the left.
    im3.adjustROI(0, 0, 0, -sz2.width);
    im1.copyTo(im3);
    // Move the left boundary to the right, the right boundary to the right.
    im3.adjustROI(0, 0, -sz1.width, sz2.width);
    im2.copyTo(im3);
    // Restore the original ROI.
    im3.adjustROI(0, 0, sz1.width, 0);
    imshow("im3", im3);
    waitKey(0);
    return 0;
}
You have to keep track of what the current ROI is, and the syntax for moving the ROI around can be a little unintuitive. The result is the same as the first block of code.
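If you lose track of where the ROI currently sits, Mat::locateROI reports it; a minimal sketch using the im3 from above:
Size wholeSize; // receives the size of the full (parent) matrix
Point ofs;      // receives the top-left corner of the current ROI within it
im3.locateROI(wholeSize, ofs);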
As the heights (rows of the Mat) of the images are the same, the function hconcat may be used to horizontally concatenate two images (Mat) and thus display them side by side in the same window. See the OpenCV doc.
It works with both grayscale and color images; the number of channels in the source matrices must be the same.
Mat im1, im2; // source images im1 and im2
Mat newImage;
hconcat(im1, im2, newImage); // <---- place image side by side
imshow("Display side by side", newImage);
waitKey(0);
For the sake of completeness, vconcat can be similarly used for vertical concatenation.
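For example, a minimal sketch of the vertical case (here the widths, i.e. the number of columns, must match instead):
Mat im1, im2; // source images of equal width
Mat stacked;
vconcat(im1, im2, stacked); // places im2 below im1
imshow("Display stacked", stacked);
waitKey(0);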
Here's a solution inspired by @misha's answer.
#include <cv.h>
#include <highgui.h>

using namespace cv;

int main(int argc, char **argv)
{
    Mat im1 = imread(argv[1]);
    Mat im2 = imread(argv[2]);
    Size sz1 = im1.size();
    Size sz2 = im2.size();
    Mat im3(sz1.height, sz1.width + sz2.width, CV_8UC3);
    im1.copyTo(im3(Rect(0, 0, sz1.width, sz1.height)));
    im2.copyTo(im3(Rect(sz1.width, 0, sz2.width, sz2.height)));
    imshow("im3", im3);
    waitKey(0);
    return 0;
}
Instead of using the copy constructor, this solution uses Mat::operator()(const Rect& roi). While both solutions are O(1), this solution seems cleaner.
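Note that Mat::operator() also has a (Range, Range) overload, so the same views can be expressed with row/column ranges; a sketch with the variables above:
im1.copyTo(im3(Range::all(), Range(0, sz1.width)));
im2.copyTo(im3(Range::all(), Range(sz1.width, sz1.width + sz2.width)));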
How to add Scalar to Mat only where mask>0?
This code doesn't work as expected: where mask>0 the result is img.value+scalar, but where mask=0 it is 0, whereas I expected img.value.
add(image, Scalar(0, 0, 80), dst, mask);
Code that works as I expect is:
Mat dst;
image.copyTo(dst, mask);
add(dst, Scalar(0, 0, 80), dst, mask);
dst.copyTo(image, mask);
dst = image;
But it's not very clear. Is there any simpler variant?
Since your dst image is uninitialized, the values outside the mask are set to 0.
You get the expected behavior if you use an initialized matrix as the destination. It can be your source matrix:
#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    // Initial image
    Mat3b image(10, 10, Vec3b(0, 2, 0));

    // Mask
    Mat1b mask(10, 10, uchar(0));
    rectangle(mask, Rect(0, 0, 3, 4), Scalar(255), CV_FILLED);

    add(image, Scalar(0, 0, 3), image, mask);

    return 0;
}
Or, if you need the source matrix to remain unchanged, you can simply clone the source image into the destination image before the add, like:
#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    // Initial image
    Mat3b image(10, 10, Vec3b(0, 2, 0));

    // Mask
    Mat1b mask(10, 10, uchar(0));
    rectangle(mask, Rect(0, 0, 3, 4), Scalar(255), CV_FILLED);

    Mat3b dst = image.clone();
    add(image, Scalar(0, 0, 3), dst, mask);

    return 0;
}
I am currently trying to black out everything outside a circle.
I am drawing the circle using the following lines of code:
cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1])); // cvRound converts floating-point values to integers
int radius = cvRound(circles[i][2]); // [i][0] = x, [i][1] = y, [i][2] = radius
circle( image, center, 3, cv::Scalar(0,255,0), -1, 8, 0 ); // small dot at the circle's center; the next line draws the real circle
circle( image, center, radius, cv::Scalar(0,0,255), 3, 8, 0 ); // circle(img, center, radius, color, thickness=1, lineType=8, shift=0)
What is the best approach to painting everything outside the circle black, given that I have the radius and the center of the circle?
Does OpenCV provide an easy mechanism for doing this, or should I iterate through all the pixels of my image and color them black or not depending on their position?
Thanks to Abid for the hint, I ended up with this approach. Everything works fine:
cv::Mat src = someMethodThatReturnsSrcImage(); // source image
cv::Mat maskedImage;                           // stores the masked image
std::vector<cv::Vec3f> circles = someMethodThatReturnsCircles(src);

cv::Mat mask(src.size(), src.type()); // create a Mat with the same dimensions as src
mask.setTo(cv::Scalar(0, 0, 0));      // make it all black

// Add all found circles to the mask
for (size_t i = 0; i < circles.size(); i++) // iterate through all detected circles
{
    cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1])); // cvRound converts floating-point values to integers
    int radius = cvRound(circles[i][2]); // [i][0] = x, [i][1] = y, [i][2] = radius
    cv::circle(mask, center, radius, cv::Scalar(255, 255, 255), -1, 8, 0); // filled white circle
}

src.copyTo(maskedImage, mask); // copies src through the mask into maskedImage
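If you prefer a single-channel mask (copyTo accepts CV_8UC1 masks as well), a minimal sketch with the same variables:
cv::Mat mask1c = cv::Mat::zeros(src.size(), CV_8UC1); // one-channel black mask
for (size_t i = 0; i < circles.size(); i++)
{
    cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
    cv::circle(mask1c, center, cvRound(circles[i][2]), cv::Scalar(255), -1); // filled white disc
}
src.copyTo(maskedImage, mask1c);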
You can make the background any color you want:
image = cv::Scalar(blue_value, green_value, red_value); // OpenCV color images are BGR
Then draw your circles on top; see the sketch below.
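A minimal sketch (center and radius as in your detection code; the fill and circle colors are placeholders):
image = cv::Scalar(255, 255, 255); // fill the whole image white (B, G, R)
cv::circle(image, center, radius, cv::Scalar(0, 0, 255), -1, 8, 0); // filled red circle on top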
I think the comment just under your question is the best solution.
I made a modified version of your code for a 5MP image from a fisheye camera. This image also needs all points outside the circle made black.
#include <string>
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/imgcodecs/imgcodecs.hpp"
#include "opencv2/highgui/highgui.hpp"

using namespace std;
using namespace cv;

int main(int argc, char* argv[])
{
    cv::Mat im_source_non_square = cv::imread("D:/FishLib/sample_02.bmp", CV_LOAD_IMAGE_COLOR);
    cv::namedWindow("Image", CV_WINDOW_FREERATIO);
    cv::imshow("Image", im_source_non_square);

    int m_nCenterX = 1280;
    int m_nCenterY = 960;
    int m_nRadius = 916;

    // Build a black mask and draw a filled white circle on it.
    Mat im_mask = im_source_non_square.clone();
    im_mask.setTo(cv::Scalar(0, 0, 0));
    circle(im_mask, cv::Point(m_nCenterX, m_nCenterY), m_nRadius, cv::Scalar(255, 255, 255), -1, 8, 0);
    cv::namedWindow("Mask image", CV_WINDOW_FREERATIO);
    cv::imshow("Mask image", im_mask);

    // Keep only the pixels inside the circle.
    Mat im_source_circle;
    cv::bitwise_and(im_source_non_square, im_mask, im_source_circle);
    cv::namedWindow("Combined image", CV_WINDOW_FREERATIO);
    cv::imshow("Combined image", im_source_circle);

    cv::waitKey(0);
    return 0;
}
Just tried your code snippet and it works.
Also, if you want to change the background color instead of black: according to the OpenCV docs here, copyTo initializes the destination Mat only if it has to (re)allocate it, so if you allocate and fill the destination yourself beforehand, the area outside the mask keeps your fill color. Just add the code below:
cv::Mat maskedImage(src.size(), src.type()); // stores the masked image
maskedImage.setTo(cv::Scalar(0, 0, 255));    // set the background color to red (BGR)
I took an example image from OpenCV (cat.jpg). I want to reduce the brightness in a particular area. Here is the link for the image:
http://tinypic.com/view.php?pic=2lnfx46&s=5
Here is one possible solution. The bright spots are detected using a simple threshold operation, and are then darkened using a gamma transformation. The result looks slightly better but, unfortunately, if pixels in the image are exactly white, all the pixel information is lost and you will not be able to recover it.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <cmath>

int threshold = 200;
double gammav = 3;

int main(int argc, char** argv)
{
    cv::Mat image, gray_image, bin_image;

    // read image
    cv::imread(argv[1]).convertTo(image, CV_32FC3);

    // find bright spots with thresholding
    cv::cvtColor(image, gray_image, CV_RGB2GRAY);
    cv::threshold(gray_image, bin_image, threshold, 255, 0);

    // blur the mask to smooth transitions
    cv::GaussianBlur(bin_image, bin_image, cv::Size(21, 21), 5);

    // create a 3-channel mask
    std::vector<cv::Mat> channels;
    channels.push_back(bin_image);
    channels.push_back(bin_image);
    channels.push_back(bin_image);
    cv::Mat bin_image3;
    cv::merge(channels, bin_image3);

    // create a darker version of the image using gamma correction
    cv::Mat dark_image = image.clone();
    for (int y = 0; y < dark_image.rows; y++)
        for (int x = 0; x < dark_image.cols; x++)
            for (int c = 0; c < 3; c++)
                dark_image.at<cv::Vec3f>(y, x)[c] = 255.0 * pow(dark_image.at<cv::Vec3f>(y, x)[c] / 255.0, gammav);

    // final image: original where the mask is dark, darkened version where it is bright
    cv::Mat res_image = image.mul((255 - bin_image3) / 255.0) + dark_image.mul(bin_image3 / 255.0);

    cv::imshow("orig", image / 255);
    cv::imshow("dark", dark_image / 255);
    cv::imshow("bin", bin_image / 255);
    cv::imshow("res", res_image / 255);
    cv::waitKey(0);
    return 0;
}
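As a side note, the triple loop above can be collapsed into a vectorized call; a sketch under the same assumptions (image is CV_32FC3 and gammav is as above):
cv::Mat dark_image;
cv::pow(image / 255.0, gammav, dark_image); // element-wise power across all channels
dark_image *= 255.0;                        // back to the 0..255 float range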
Working on face detection and recognition: after successfully detecting a face, I just want to crop the face and save it somewhere on the drive to pass to the recognition code. I am having a hard time saving the region of interest as a new image. I have found some code online, but it is written against the previous version of OpenCV that uses IplImage*. I am using OpenCV 2.4.2, which uses cv::Mat. Heeeelp!!!
I will post my code (face detection and recognition per se) if you guys want it.
#include <cv.h>
#include <highgui.h>
#include <math.h>
#include <stdlib.h>

// alphablend <imageA> <imageB> <x> <y> <width> <height> <alpha> <beta>

IplImage* crop(IplImage* src, CvRect roi)
{
    // Must have dimensions of output image
    IplImage* cropped = cvCreateImage(cvSize(roi.width, roi.height), src->depth, src->nChannels);

    // Say what the source region is
    cvSetImageROI(src, roi);

    // Do the copy
    cvCopy(src, cropped);
    cvResetImageROI(src);

    cvNamedWindow("check", 1);
    cvShowImage("check", cropped);
    cvSaveImage("style.jpg", cropped);

    return cropped;
}

int main(int argc, char** argv)
{
    IplImage *src1, *src2;
    CvRect myRect;

    src1 = cvLoadImage(argv[1], 1);
    src2 = cvLoadImage(argv[2], 1);

    int x = atoi(argv[3]);
    int y = atoi(argv[4]);
    int width = atoi(argv[5]);
    int height = atoi(argv[6]);
    double alpha = (double)atof(argv[7]);
    double beta = (double)atof(argv[8]);

    cvSetImageROI(src1, cvRect(x, y, width, height));
    cvSetImageROI(src2, cvRect(100, 200, width, height));
    myRect = cvRect(x, y, width, height);

    cvAddWeighted(src1, alpha, src2, beta, 0.0, src1);
    cvResetImageROI(src1);
    crop(src1, myRect);

    cvNamedWindow("Alpha_blend", 1);
    cvShowImage("Alpha_blend", src1);
    cvWaitKey(0);
    return 0;
}
Thanks. Peace
Using cv::Mat objects will make your code substantially simpler. Assuming the detected face lies in a rectangle called faceRect of type cv::Rect, all you have to type to get a cropped version is:
cv::Mat originalImage;
cv::Rect faceRect;
cv::Mat croppedFaceImage;
croppedFaceImage = originalImage(faceRect).clone();
Or alternatively:
originalImage(faceRect).copyTo(croppedFaceImage);
This creates a temporary cv::Mat object (without copying the data) from the rectangle that you provide. The real data is then copied to your new object via the clone or copyTo method.
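The clone()/copyTo() step matters: without it, the "crop" is just a view into the original buffer. A minimal sketch:
cv::Mat copy = originalImage(faceRect).clone(); // deep copy; owns its own pixels
cv::Mat view = originalImage(faceRect);         // header only; shares data with originalImage
view.setTo(cv::Scalar(0, 0, 0));                // this also blacks out the face region in originalImage
// `copy` is unaffected because clone() detached it beforehand.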
For cropping a region, the ROI (region of interest) mechanism is used; OpenCV 2 does the job quite easily. You can check the link:
http://life2coding.blogspot.com/search/label/cropping%20of%20image
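In case the link goes stale, the core of it is a one-liner with the C++ API; a minimal sketch (the file names and rectangle values are placeholders):
cv::Mat img = cv::imread("input.jpg");
cv::Rect roi(50, 50, 200, 200);     // x, y, width, height -- hypothetical values
cv::Mat cropped = img(roi).clone(); // clone() detaches the crop from img's buffer
cv::imwrite("cropped.jpg", cropped);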
I am doing some detection work using OpenCV, and I need to use the distance transform. However, the distance transform function in OpenCV gives me an image that is exactly the same as the image I use as the source. Does anyone know what I am doing wrong? Here is the portion of my code:
cvSetData(depthImage, m_rgbWk, depthImage->widthStep);
// OpenCV image is now in "depthImage"

IplImage *single_channel_depthImage = cvCreateImage(cvSize(320, 240), 8, 1);
cvSplit(depthImage, single_channel_depthImage, NULL, NULL, NULL);

// smoothing
IplImage *smoothed_image = cvCreateImage(cvSize(320, 240), 8, 1);
cvSmooth(single_channel_depthImage, smoothed_image, CV_MEDIAN, 9, 9, 0, 0);

// run the Canny edge detector
IplImage *edges_image = cvCreateImage(cvSize(320, 240), 8, 1);
cvCanny(smoothed_image, edges_image, 100, 200);

// invert values
IplImage *inverted_edges_image = cvCreateImage(cvSize(320, 240), 8, 1);
cvNot(edges_image, inverted_edges_image);

// calculate the distance transform
IplImage *distance_image = cvCreateImage(cvSize(320, 240), IPL_DEPTH_32F, 1);
cvZero(distance_image);
cvDistTransform(inverted_edges_image, distance_image, CV_DIST_L2, CV_DIST_MASK_PRECISE, NULL, NULL);
In a nutshell, I grab the image from the Kinect, turn it into a one-channel image, smooth it, run the Canny edge detector, invert the values, and then do the distance transform. But the transformed image looks exactly the same as the input image. What's wrong?
Thanks!
I believe the key here is that they look the same. Here is a small program I wrote to show the difference:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

using namespace std;
using namespace cv;

int main(int argc, char** argv)
{
    Mat before = imread("qrcode.png", 0);
    Mat dist;
    distanceTransform(before, dist, CV_DIST_L2, 3);
    imshow("before", before);
    imshow("non-normalized", dist);
    normalize(dist, dist, 0.0, 1.0, NORM_MINMAX);
    imshow("normalized", dist);
    waitKey();
    return 0;
}
In the non-normalized image, you see this:
It doesn't really look like anything changed, but the distance steps are very small compared to the overall range of values [0, 255], and imshow converts the image from 32-bit float to 8 bits for display, so we can't see the differences. Let's normalize it...
Now we get this:
The values themselves should be correct, but when displayed you will need to normalize the image to see the difference.
EDIT:
Here is a small 10x10 sample from the upper-left corner of the dist matrix, showing that the values are in fact different:
[10.954346, 10.540054, 10.125763, 9.7114716, 9.2971802, 8.8828888, 8.4685974, 8.054306, 7.6400146, 7.6400146;
10.540054, 9.5850525, 9.1707611, 8.7564697, 8.3421783, 7.927887, 7.5135956, 7.0993042, 6.6850128, 6.6850128;
10.125763, 9.1707611, 8.2157593, 7.8014679, 7.3871765, 6.9728851, 6.5585938, 6.1443024, 5.730011, 5.730011;
9.7114716, 8.7564697, 7.8014679, 6.8464661, 6.4321747, 6.0178833, 5.6035919, 5.1893005, 4.7750092, 4.7750092;
9.2971802, 8.3421783, 7.3871765, 6.4321747, 5.4771729, 5.0628815, 4.6485901, 4.2342987, 3.8200073, 3.8200073;
8.8828888, 7.927887, 6.9728851, 6.0178833, 5.0628815, 4.1078796, 3.6935883, 3.2792969, 2.8650055, 2.8650055;
8.4685974, 7.5135956, 6.5585938, 5.6035919, 4.6485901, 3.6935883, 2.7385864, 2.324295, 1.9100037, 1.9100037;
8.054306, 7.0993042, 6.1443024, 5.1893005, 4.2342987, 3.2792969, 2.324295, 1.3692932, 0.95500183, 0.95500183;
7.6400146, 6.6850128, 5.730011, 4.7750092, 3.8200073, 2.8650055, 1.9100037, 0.95500183, 0, 0;
7.6400146, 6.6850128, 5.730011, 4.7750092, 3.8200073, 2.8650055, 1.9100037, 0.95500183, 0, 0]
I just figured this one out. The OpenCV distanceTransform

    Calculates the distance to the closest zero pixel for each pixel of the source image.

and so it expects your edges image to be inverted: the edge pixels themselves must be zero.
All you need to do is to negate your edges image:
edges = 255 - edges;
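Applied to the question's pipeline with the C++ API, a minimal sketch (smoothed stands in for the smoothed one-channel image from the question):
cv::Mat edges, inverted, dist;
cv::Canny(smoothed, edges, 100, 200); // edges come out white (255) on black (0)
inverted = 255 - edges;               // zero out the edges so distances are measured to them
cv::distanceTransform(inverted, dist, CV_DIST_L2, CV_DIST_MASK_PRECISE);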
You can print these values using this code before the normalize call (std::setw requires <iomanip>):
#include <iomanip>

for (int x = 0; x < 10; x++)
{
    cout << endl;
    for (int y = 0; y < 10; y++)
        cout << std::setw(10) << dist.at<float>(x, y);
}
Mat formats:
Input: CV_8U
Dist: CV_32F
Normalized: CV_8U
normalize(Mat_dist, Mat_norm, 0, 255, NORM_MINMAX, CV_8U);
If you want to visualize the result, you need to scale the normalization to 0 ... 255 and not to 0 ... 1, or everything will seem black. Using imshow() on an image scaled to 0 ... 1 will work, but it may cause problems in the next processing steps. At least it did in my case.
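A minimal sketch of the two display paths (dist is the CV_32F output of distanceTransform, as above):
cv::Mat norm8u, norm32f;
cv::normalize(dist, norm8u, 0, 255, cv::NORM_MINMAX, CV_8U); // 8-bit result, safe for further processing
cv::normalize(dist, norm32f, 0.0, 1.0, cv::NORM_MINMAX);     // floats in [0, 1] also display correctly
cv::imshow("8-bit", norm8u);
cv::imshow("float", norm32f);
cv::waitKey(0);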