I'm trying to detect the difference between two images:
Image 1:
Image 2:
The result I want is an image that doesn't contain the shared elements.
The problem I'm facing is that when I do an image diff, I get some lines around the edges of the shared elements,
and I don't want those to be detected.
Image 3 is what I'm getting now; the lines marked with a red X shouldn't be there.
Is there any way to do this?
The issue I had is that I generated the second image from the first image using the Erode method, and that ate away some pixels,
so instead I used the MorphologyEx method.
Mat layersToRemove = new Mat();
Mat element = Cv2.GetStructuringElement(MorphShapes.Rect, new Size(5, 5));
// Opening cleans the mask without eating away pixels the way Erode did
Cv2.MorphologyEx(whiteMask, layersToRemove, MorphTypes.Open, element);
Then I applied a simple Subtract:
Mat output = new Mat();
Cv2.Subtract(image1, layersToRemove, output);
output.SaveImage("C:/output.png");
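For what it's worth, here is a minimal sketch of one way to suppress those edge lines, shown in C++ OpenCV for illustration: clean the mask with an opening as above, then dilate it by a couple of pixels before subtracting, so the subtraction also covers the anti-aliased border of the shared elements. The file names and the mask source are assumptions.

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat image1    = cv::imread("image1.png");
    cv::Mat whiteMask = cv::imread("mask.png"); // mask of the shared elements

    // Clean the mask, then grow it slightly so it also covers the edge halo.
    cv::Mat element = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
    cv::Mat layersToRemove;
    cv::morphologyEx(whiteMask, layersToRemove, cv::MORPH_OPEN, element);
    cv::dilate(layersToRemove, layersToRemove, element);

    cv::Mat output;
    cv::subtract(image1, layersToRemove, output);
    cv::imwrite("output.png", output);
    return 0;
}
```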
I want to copy a center part (Rectangle) of my image to a completely white Mat (to the same position).
Code:
Mat src = Image.Mat;
Mat dst = new Mat(src.Height, src.Width, DepthType.Cv8U, 3);
dst.SetTo(new Bgr(255, 255, 255).MCvScalar);
Rectangle roi = new Rectangle((int)0.1 * src.Width, (int)0.1 * src.Height, (int)0.8 * src.Width, (int)0.8 * src.Height);
Mat srcROI = new Mat(src, roi);
Mat dstROI = new Mat(dst, roi);
srcROI.CopyTo(dstROI);
//I have dstROI filled well. CopyTo method is doing well.
//However I have no changes in my dst file.
However, I'm getting only a white image as a result (dst), with nothing inside.
What am I doing wrong?
I'm using EmguCV 3.1.
EDIT
The dstROI Mat is filled correctly, but the problem is how to apply the changes to the original dst Mat.
Changing the CopyTo call like this:
srcROI.CopyTo(dst);
fills dst with my part of the src image, but not in the centre as I wanted.
EDIT 2
src.Depth = Cv8U
As you suggested, I checked the value of the IsSubmatrix property:
Console.WriteLine(dstROI.IsSubmatrix);
srcROI.CopyTo(dstROI);
Console.WriteLine(dstROI.IsSubmatrix);
gives output:
true
false
What can be wrong then?
Ancient question, I know, but it came up when I searched, so an answer here may still help others. I had a similar issue, and it may be the same problem: if src and dst have different numbers of channels or different depths, then CopyTo creates a new Mat instead of writing into the existing one. I see that they both have the same depth, but in my case I had a single-channel image going into a 3-channel Mat. If your src is not a 3-channel Mat, this may be the issue (it might be 1-channel (gray) or 4-channel (BGRA), for example).
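To illustrate that answer, here is a hedged sketch of the same fix in C++ OpenCV (file names are placeholders): bring the source to the destination's channel count before copying, so copyTo writes through the submatrix view instead of reallocating it.

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src = cv::imread("src.png", cv::IMREAD_GRAYSCALE); // 1 channel
    cv::Mat dst(src.rows, src.cols, CV_8UC3, cv::Scalar(255, 255, 255));

    // Match the destination's channel count first; otherwise copyTo would
    // reallocate the target and the submatrix view would be detached from dst.
    cv::Mat src3;
    cv::cvtColor(src, src3, cv::COLOR_GRAY2BGR);

    cv::Rect roi(src.cols / 10, src.rows / 10, src.cols * 8 / 10, src.rows * 8 / 10);
    src3(roi).copyTo(dst(roi)); // writes into dst because size and type now match
    cv::imwrite("dst.png", dst);
    return 0;
}
```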
According to the operator precedence rules of C#, a type cast has higher priority than multiplication.
Hence (int)0.8 * src.Width is equivalent to 0 * src.Width, and the same applies to the other parameters of the roi rectangle. Therefore, the line where you create the roi is effectively
Rectangle roi = new Rectangle(0,0,0,0);
Copying a 0-size block does nothing, so you're left with the pristine white image you created earlier.
Solution
Parenthesize your expressions properly.
Rectangle roi = new Rectangle((int)(0.1 * src.Width)
, (int)(0.1 * src.Height)
, (int)(0.8 * src.Width)
, (int)(0.8 * src.Height));
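A quick sketch demonstrating the precedence point, written in C++ for brevity (the same rule applies in C#): the cast binds tighter than the multiplication, so the unparenthesized form truncates 0.8 to 0 before multiplying.

```cpp
#include <iostream>

int main()
{
    int width = 640;
    std::cout << (int)0.8 * width << "\n";   // (int)0.8 == 0, so this prints 0
    std::cout << (int)(0.8 * width) << "\n"; // prints 512
    return 0;
}
```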
I am new to OpenCV. I want to transform the two images;
here are my images: the left image and the right image.
Here is my code:
cv::Mat transformMat = cv::estimateRigidTransform(leftImageMat, rightImageMat, true);
transform(leftImageMat, reconMat, transformMat);
The problem is that reconMat has 2 channels. How can I show it in OpenCV, or convert it to a 1-channel image like the left and right images shown above?
You have a fundamental misunderstanding of what cv::transform() does. The documentation states:
Performs the matrix transformation of every array element.
This means that the numerical value of each element is transformed by the specified matrix.
It looks like you want a geometric transformation. This can be achieved using cv::warpAffine():
cv::Mat transformMat = cv::estimateRigidTransform(leftImageMat, rightImageMat, true);
cv::Mat output;
cv::Size dsize = leftImageMat.size(); //This specifies the output image size--change as needed
cv::warpAffine(leftImageMat, output, transformMat, dsize);
I have a binary image and I want to perform closing on it with a line as the structuring element.
The OpenCV API has a function getStructuringElement that takes the following parameters:
Shape
Size
Anchor Point
I can pass CV_SHAPE_CUSTOM as the first parameter to create a new shape, but where do I
pass the size and the values of my structuring element?
My line will be 10 pixels wide and 1 pixel high, basically {1,1,1,1,1,1,1,1,1,1}.
There is an old function, createStructuringElementEx, but I don't want to use it as it involves a lot of datatype conversion.
Is this what you want?
Size = Size(10,1)
Anchor Point = Point(-1,-1)
Got it, thanks to the comment from Niko.
Create a matrix as:
Mat line = Mat::ones(1,10,CV_8UC1);
//now apply the morphology close operation
morphologyEx(img, img, MORPH_CLOSE, line, Point(-1,-1));
This solved my problem.
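For reference, a minimal sketch of the same kernel built with getStructuringElement instead of Mat::ones (the file names are illustrative). Size takes width first, so Size(10, 1) gives the 1x10 line of ones.

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("binary.png", cv::IMREAD_GRAYSCALE);

    // A 1x10 rectangular kernel of ones, equivalent to Mat::ones(1,10,CV_8UC1).
    cv::Mat line = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(10, 1));
    cv::morphologyEx(img, img, cv::MORPH_CLOSE, line, cv::Point(-1, -1));

    cv::imwrite("closed.png", img);
    return 0;
}
```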
I'm trying to reduce the runtime of a routine that converts an RGB image to a YCbCr image. My code looks like this:
cv::Mat input(BGR->m_height, BGR->m_width, CV_8UC3, BGR->m_imageData);
cv::Mat output(BGR->m_height, BGR->m_width, CV_8UC3);
cv::cvtColor(input, output, CV_BGR2YCrCb);
cv::Mat outputArr[3];
outputArr[0] = cv::Mat(BGR->m_height, BGR->m_width, CV_8UC1, Y->m_imageData);
outputArr[1] = cv::Mat(BGR->m_height, BGR->m_width, CV_8UC1, Cr->m_imageData);
outputArr[2] = cv::Mat(BGR->m_height, BGR->m_width, CV_8UC1, Cb->m_imageData);
split(output,outputArr);
But this code is slow because there is a redundant split operation that copies the interleaved image into the separate channel images. Is there a way to make the cvtColor function produce an output that is already split into channel images? I tried to use the constructors of the _OutputArray class that accept a vector or array of matrices as input, but it didn't work.
Are you sure that copying the image data is the limiting step?
How are you producing the Y / Cr / Cb cv::Mats?
Can you just rewrite this function to write the results into the three separate images?
There is no calling option for cv::cvtColor that gives its result as three separate cv::Mats (one per channel).
dst – output image of the same size and depth as src.
source: http://docs.opencv.org/modules/imgproc/doc/miscellaneous_transformations.html#cvtcolor
You have to copy the pixels from the result (as you are already doing) or write such a conversion function yourself.
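As an illustration of the "write such a conversion function yourself" option, here is a rough, unoptimized sketch. It assumes the three output planes are pre-allocated CV_8UC1 Mats of the same size as the input (as in the question's code) and uses the standard 8-bit YCrCb formulas; whether it beats cvtColor plus split would need measuring.

```cpp
#include <opencv2/opencv.hpp>

// Writes Y, Cr and Cb straight into three pre-allocated CV_8UC1 planes of the
// same size as bgr, avoiding the interleaved intermediate and the split.
void bgrToYCrCbPlanes(const cv::Mat& bgr, cv::Mat& Y, cv::Mat& Cr, cv::Mat& Cb)
{
    CV_Assert(bgr.type() == CV_8UC3);
    CV_Assert(Y.size() == bgr.size() && Cr.size() == bgr.size() && Cb.size() == bgr.size());

    for (int r = 0; r < bgr.rows; ++r)
    {
        const cv::Vec3b* in = bgr.ptr<cv::Vec3b>(r);
        uchar* y  = Y.ptr<uchar>(r);
        uchar* cr = Cr.ptr<uchar>(r);
        uchar* cb = Cb.ptr<uchar>(r);
        for (int c = 0; c < bgr.cols; ++c)
        {
            const float B = in[c][0], G = in[c][1], R = in[c][2];
            const float yv = 0.299f * R + 0.587f * G + 0.114f * B; // 8-bit YCrCb formulas
            y[c]  = cv::saturate_cast<uchar>(yv);
            cr[c] = cv::saturate_cast<uchar>((R - yv) * 0.713f + 128.0f);
            cb[c] = cv::saturate_cast<uchar>((B - yv) * 0.564f + 128.0f);
        }
    }
}
```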
Use split. This splits the image into 3 different channels or arrays.
Now, converting them back to UIImage is where I am having trouble. I get three grayscale images, one per array, and I am convinced they are the proper channels in cv::Mat format, but when I convert them to UIImage they are grayscale with different grayscale values in each image. If you can use imread and imshow, then it should display the images for you after the split. My problem is trying to use the ios.h methods; I believe it reassembles the arrays instead of transferring the single array. Here is my code, using a segmented control to choose which layer (array) to display. Like I said, I get three grayscale images, but with completely different values. I need to keep one layer and abandon the rest; I'm still working on that part.
UIImageToMat(_img, cvImage);
cv::cvtColor(cvImage, RYB, CV_RGB2BGRA);
split(RYB, layers);
if (_segmentedRGBControl.selectedSegmentIndex == 0) {
    // cv::cvtColor(layers[0], RYB, CV_8UC1);
    RYB = layers[0];
    _imageProcessView.image = MatToUIImage(RYB);
}
if (_segmentedRGBControl.selectedSegmentIndex == 1) {
    RYB = layers[1];
    _imageProcessView.image = MatToUIImage(RYB);
}
if (_segmentedRGBControl.selectedSegmentIndex == 2) {
    RYB = layers[2];
    _imageProcessView.image = MatToUIImage(RYB);
}
}
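Outside of the iOS wrappers, the split-and-display step mentioned above (using imread and imshow) looks roughly like this in plain C++; each element of layers is a single-channel Mat, which is why it displays as grayscale. The file name is a placeholder.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("input.png"); // loaded as BGR
    std::vector<cv::Mat> layers;
    cv::split(img, layers); // layers[0]=B, layers[1]=G, layers[2]=R, each CV_8UC1

    cv::imshow("blue channel",  layers[0]);
    cv::imshow("green channel", layers[1]);
    cv::imshow("red channel",   layers[2]);
    cv::waitKey(0);
    return 0;
}
```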
I would like to replace a part of an image with my own image in OpenCV.
I used
cvGetPerspectiveTransform() with a warp matrix, together with cvAnd() and cvOr(),
but could not get it to work.
This is the code that currently displays the image with a white polygon where the replacement image should go. I would like to replace the white polygon with a picture of any dimensions, scaled to fit the indicated region.
The code is in JavaCV, but I can convert it to Java even if C code is posted.
grabber.start();
while (isDisp() && (image = grabber.grab()) != null) {
    if (dst_corners != null) { // corners of the image to be replaced
        CvPoint points = new CvPoint((byte) 0, dst_corners, 0, dst_corners.length);
        cvFillConvexPoly(image, points, 4, CvScalar.WHITE, 1, 0); // white polygon covering the replacement image
    }
    correspondFrame.showImage(image);
}
Any pointers to this will be very helpful.
Update:
I used a warp matrix with this code and I get a black spot for the overlay image:
cvSetImageROI(image, cvRect(x1,y1, overlay.width(), overlay.height()));
CvPoint2D32f p = new CvPoint2D32f(4);
CvPoint2D32f q = new CvPoint2D32f(4);
q.position(0).x(0);
q.position(0).y(0);
q.position(1).x((float) overlay.width());
q.position(1).y(0);
q.position(2).x((float) overlay.width());
q.position(2).y((float) overlay.height());
q.position(3).x(0);
q.position(3).y((float) overlay.height());
p.position(0).x((int)Math.round(dst_corners[0]));
p.position(0).y((int)Math.round(dst_corners[1]));
p.position(1).x((int)Math.round(dst_corners[2]));
p.position(1).y((int)Math.round(dst_corners[3]));
p.position(3).x((int)Math.round(dst_corners[4]));
p.position(3).y((int)Math.round(dst_corners[5]));
p.position(2).x((int)Math.round(dst_corners[6]));
p.position(2).y((int)Math.round(dst_corners[7]));
cvGetPerspectiveTransform(q, p, warp_matrix);
cvWarpPerspective(overlay, image, warp_matrix);
I get a black spot where the overlay image should be, and even though the target region is a polygon with 4 vertices, the overlay is placed as a rectangle. I believe this is because of the ROI. Could anyone please tell me how to fit the image as is, and also why I am getting a black spot instead of the overlay image?
I think cvWarpPerspective is what you are looking for.
So instead of doing
CvPoint points = new CvPoint((byte) 0,dst_corners,0,dst_corners.length);
cvFillConvexPoly(image,points, 4, CvScalar.WHITE, 1, 0);//white polygon covering the replacement image
Try
cvWarpPerspective(yourimage, image, M, image.size(), INTER_CUBIC, BORDER_TRANSPARENT);
where M is the matrix you get from cvGetPerspectiveTransform.
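For reference, a hedged C++-API sketch of this answer; the destination corner coordinates and file names are made up for illustration. BORDER_TRANSPARENT leaves the frame pixels outside the warped quad untouched, so the overlay is pasted onto the frame in place.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat frame   = cv::imread("frame.png");
    cv::Mat overlay = cv::imread("overlay.png");

    // Corners of the overlay and the destination quad in the frame
    // (the destination coordinates below are placeholders).
    std::vector<cv::Point2f> src = {
        {0.f, 0.f},
        {(float)overlay.cols, 0.f},
        {(float)overlay.cols, (float)overlay.rows},
        {0.f, (float)overlay.rows}};
    std::vector<cv::Point2f> dst = {
        {120.f, 80.f}, {420.f, 95.f}, {400.f, 300.f}, {110.f, 280.f}};

    cv::Mat M = cv::getPerspectiveTransform(src, dst);
    cv::warpPerspective(overlay, frame, M, frame.size(),
                        cv::INTER_CUBIC, cv::BORDER_TRANSPARENT);
    cv::imwrite("result.png", frame);
    return 0;
}
```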
One way to do it is to scale the pic to the white polygon's size and then copy it into the grabbed image by setting its Region of Interest (ROI).
Your code should look like this:
resize(pic, resizedImage, resizedImage.size(), 0, 0, interpolation); //resizedImage should have the points size
cvSetImageROI(image, cvRect(the points coordinates));
cvCopy(resizedImage,image);
cvResetImageROI(image);
I hope that helps.
Best regards,
Daniel
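For completeness, here is a hedged sketch of the same resize-and-copy idea in the C++ API, copying into a submatrix instead of setting an ROI; the rectangle coordinates and file names are placeholders for the polygon's bounding box and the actual inputs.

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat image = cv::imread("frame.png");   // the grabbed frame
    cv::Mat pic   = cv::imread("overlay.png"); // the picture to insert

    // Bounding rectangle of the white polygon (placeholder coordinates).
    cv::Rect roi(100, 50, 200, 150);

    cv::Mat resized;
    cv::resize(pic, resized, roi.size(), 0, 0, cv::INTER_LINEAR);
    resized.copyTo(image(roi)); // copy into the region in place
    cv::imwrite("result.png", image);
    return 0;
}
```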