I run the following code with the left and the right images and get a strange result. I'm not exactly sure what I'm doing wrong. First of all, why is the output cropped, and why is the disparity all one color?
CvStereoBMState *BMState = cvCreateStereoBMState();
assert(BMState != 0);
BMState->preFilterSize=41;
BMState->preFilterCap=31;
BMState->SADWindowSize=41;
BMState->minDisparity=-64;
BMState->numberOfDisparities=128;
BMState->textureThreshold=10;
BMState->uniquenessRatio=5;
CvMat* disp = cvCreateMat(image_pyramid[0][0]->height, image_pyramid[0][0]->width, CV_16S);
CvMat* vdisp = cvCreateMat(image_pyramid[0][0]->height, image_pyramid[0][0]->width, CV_8U);
cvFindStereoCorrespondenceBM(image_pyramid[0][0], image_pyramid[1][0], disp, BMState);
cvNormalize(disp, vdisp, 0, 256, CV_MINMAX);
cvSaveImage("wowicantbelieveitsnotbutter.jpg", vdisp);
I am not sure about the cropping, but I think you should normalize to the range 0..1 rather than 0..255, since the disparity is not an 8-bit image.
Also, it may look cropped because the black values are actually negative.
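For what it's worth, here is a minimal sketch of the same pipeline with the modern Python API, assuming rectified grayscale inputs (the file names are placeholders and the parameters mirror the question). compute() returns a CV_16S map with 4 fractional bits, which is why an explicit rescale to 8-bit is needed for display:
import cv2

# hypothetical rectified grayscale input pair
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=128, blockSize=41)
stereo.setPreFilterSize(41)
stereo.setPreFilterCap(31)
stereo.setMinDisparity(-64)
stereo.setTextureThreshold(10)
stereo.setUniquenessRatio(5)

disp = stereo.compute(left, right)  # CV_16S, actual disparity * 16
vdisp = cv2.normalize(disp, None, 0, 255, cv2.NORM_MINMAX, dtype=cv2.CV_8U)
cv2.imwrite("disparity.png", vdisp)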
Try changing minDisparity to zero; this may help with the cropping problem in your case. I faced the same problem, but came up with a solution using a BMTuner, which I saw in a video. I attach the video here; it may help you with the cropping problem:
http://www.youtube.com/watch?feature=player_embedded&v=FX7AMktf24E
I am trying to crop a specific part of a frame in OpenCV to get a cropped image of the detections from a MobileNet SSD model. The code to crop the image looks like this:
for box_id in boxes_ids:
    x, y, w, h, id = box_id
    crop = frame[y:h, x:w]
    cv2.imshow("d", crop)
    cv2.waitKey(5)
This code produces a blank space towards the right of all the images that I extract.
Please tell me how I can fix this.
Try using Pillow; that helps:
from PIL import Image, ImageChops

def trim(im, color):
    # diff the image against a solid background of the given color
    bg = Image.new(im.mode, im.size, color)
    diff = ImageChops.difference(im, bg)
    diff = ImageChops.add(diff, diff)
    bbox = diff.getbbox()
    if bbox:
        return im.crop(bbox)
    return im  # nothing to trim
This function will probably take the blank space out; just be careful that it only works if the padded region has consistent pixel values.
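For completeness, a hedged usage sketch, assuming crop is one of the BGR crops from the loop in the question and that the blank padding is black:
import cv2
from PIL import Image

# convert the OpenCV BGR crop to a PIL image, then trim the padding
pil_crop = Image.fromarray(cv2.cvtColor(crop, cv2.COLOR_BGR2RGB))
trimmed = trim(pil_crop, (0, 0, 0))  # (0, 0, 0) assumes the padding is black
trimmed.save("trimmed.jpg")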
As said before in the comments, there is a minimum window width, and smaller crops will be drawn on some neutral background.
But maybe it's more intuitive to draw the crop into an empty image, preserving its original position:
for box_id in boxes_ids:
    x, y, w, h, id = box_id
    draw = np.zeros(frame.shape, np.uint8)  # fresh black canvas the size of the frame
    draw[y:h, x:w] = frame[y:h, x:w]
    cv2.imshow("d", draw)
    cv2.waitKey(5)
I want to copy a center part (Rectangle) of my image to a completely white Mat (to the same position).
Code:
Mat src = Image.Mat;
Mat dst = new Mat(src.Height, src.Width, DepthType.Cv8U, 3);
dst.SetTo(new Bgr(255, 255, 255).MCvScalar);
Rectangle roi = new Rectangle((int)0.1 * src.Width, (int)0.1 * src.Height, (int)0.8 * src.Width, (int)0.8 * src.Height);
Mat srcROI = new Mat(src, roi);
Mat dstROI = new Mat(dst, roi);
srcROI.CopyTo(dstROI);
//I have dstROI filled well. CopyTo method is doing well.
//However I have no changes in my dst file.
However, I'm getting only a white image as a result in dst. Nothing inside.
What am I doing wrong?
using EmguCV 3.1
EDIT
I have the dstROI Mat filled well, but there is a problem applying the changes back to the original dst Mat.
Changing CopyTo like this:
srcROI.CopyTo(dst);
causes dst to be filled with my part of the src image, but not in the centre like I wanted.
EDIT 2
src.Depth = Cv8U
As you suggested, I checked the value of the IsSubmatrix property.
Console.WriteLine(dstROI.IsSubmatrix);
srcROI.CopyTo(dstROI);
Console.WriteLine(dstROI.IsSubmatrix);
gives output:
true
false
What can be wrong then?
Ancient question, I know, but it came up when I searched, so an answer here might still be hit in searches. I had a similar issue and it may be the same problem: if src and dst have different numbers of channels or different depths, then CopyTo creates a new Mat instead of writing into the submatrix. I see that they both have the same depth, but in my case I had a single-channel Mat going into a 3-channel Mat. If your src is not a 3-channel Mat (it might be 1-channel gray or 4-channel BGRA, for example), then this may be the issue.
According to the operator precedence rules of C#, a type cast has higher priority than multiplication.
Hence (int)0.8 * src.Width is equivalent to 0 * src.Width, and the same applies to the other parameters of the roi rectangle. Therefore the line where you create the roi is effectively
Rectangle roi = new Rectangle(0,0,0,0);
Copying a 0-size block does nothing, so you're left with the pristine white image you created earlier.
Solution
Parenthesize your expressions properly.
Rectangle roi = new Rectangle((int)(0.1 * src.Width)
, (int)(0.1 * src.Height)
, (int)(0.8 * src.Width)
, (int)(0.8 * src.Height));
I am trying to create a blurring function that can take all possible padding options. However, for BORDER_CONSTANT you also need to provide the color, i.e. the values you want to pad your image with. In OpenCV's documentation of blur I don't see an overload that takes both the padding type and the border value. Does anyone know how to overcome this?
One thing I thought about was padding the image first and then blurring a region of interest with no padding at all, although I can't find a way to do that.
The question referred to was asked by me, so I would know if this were a duplicate. That question relates to cv::blur, which also handles padding but has no option for specifying the border values for BORDER_CONSTANT. I was asking if anyone knows a workaround.
If you follow the source code for blur, you'll find out that, when the borderType is BORDER_CONSTANT, the value for the border will be Scalar(0,0,0,0).
Just a quick reverse-engineering... If you create a white (255) CV_8UC1 matrix and blur with a 3x3 filter using BORDER_CONSTANT, you'll see that at the corners you get (255*4 + 0*5) / 9 = 113, and on the borders (255*6 + 0*3) / 9 = 170. This demonstrates that the padding is zeros.
Sample code:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat1b img(5, 5, uchar(255));
    blur(img, img, Size(3, 3), Point(-1, -1), BORDER_CONSTANT);
    return 0;
}
Basically, what I assumed in the original post is correct. If anyone cares, here is a solution:
copyMakeBorder(image_in_mat, image_in_mat, r, r, r, r, BORDER_CONSTANT, Scalar(myNumbers));
ROI = Rect(r, r, w, h);
image_in_ROI = image_in_mat(ROI);
blur(image_in_ROI, image_out_mat, Size(blockSize, blockSize), Point(-1, -1));
where r is the radius you want to pad with, w and h are the width and height of the original image, myNumbers is the color you want the padding to have, and image_in_mat is the input image.
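For reference, a minimal sketch of the same idea in Python (the file name, block size, and color are placeholders): pad with the constant color, blur the padded image, then crop back to the original region.
import cv2

img = cv2.imread("input.png")   # hypothetical input
k = 15                          # blur block size
r = k // 2                      # pad radius
color = (0, 0, 255)             # hypothetical BGR padding color

padded = cv2.copyMakeBorder(img, r, r, r, r, cv2.BORDER_CONSTANT, value=color)
blurred = cv2.blur(padded, (k, k))[r:-r, r:-r]  # crop back to the original size
Pixels within r of the original border now average in the chosen color, which is exactly the constant-border behavior that blur itself doesn't expose.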
I am working on some leaf images using OpenCV (Java). The leaves are captured on white paper, and some have shadows, like this one:
Of course, it's something of an extreme case (there are milder shadows).
Now, I want to threshold the leaf and also remove the shadow (while preserving the leaf's details).
My current flow is this:
1) Converting to HSV and extracting the Saturation channel:
Imgproc.cvtColor(colorMat, colorMat, Imgproc.COLOR_RGB2HSV);
ArrayList<Mat> channels = new ArrayList<Mat>();
Core.split(colorMat, channels);
satImg = channels.get(1);
2) De-noising (median) and applying adaptiveThreshold:
Imgproc.medianBlur(satImg , satImg , 11);
Imgproc.adaptiveThreshold(satImg , satImg , 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 401, -10);
And the result is this:
It looks OK, but the shadow is causing some anomalies along the left boundary. Also, I have this feeling that I am not using the white background to my benefit.
Now, I have 2 questions:
1) How can I improve the result and get rid of the shadow?
2) Can I get good results without working on the saturation channel? The reason I ask is that on most of my images, working on the L channel (from HLS) gives way better results (apart from the shadow, of course).
Update: Using the Hue channel makes thresholding better, but makes the shadow situation worse:
Update 2: In some cases, the assumption that the shadow is darker than the leaf doesn't hold, so working on intensities won't help. I'm looking more toward a color-channel approach.
I don't use OpenCV; instead I tried the MATLAB Image Processing Toolbox to extract the leaf. Hopefully OpenCV has all the corresponding functions. Please see my result below. I did all the operations on channel 3 and channel 1 of your original image.
First I took your channel 3 and thresholded it at 100 (top left). Then I removed the regions touching the border and the regions smaller than 100 pixels, and filled in the holes in the leaf; the result is shown at top right.
Next I took your channel 1 and did the same as for channel 3; the result is shown at bottom left. Then I found the connected regions (there are only two, as you can see in the bottom-left figure) and removed the one with the smaller area (bottom right).
Supposing the top-right image is I1 and the bottom-right image is I, the leaf is extracted by computing ~I & I1. The leaf is:
Hope it helps. Thanks
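Since the steps above are described only in MATLAB terms, here is a hedged OpenCV (Python) sketch of the same pipeline. Note the assumptions: MATLAB's channel 3 and channel 1 correspond to blue and red in OpenCV's BGR order, the threshold polarity may need flipping for your image, and the file name is a placeholder.
import cv2
import numpy as np

def clean_mask(mask, min_area=100):
    # remove components touching the border or smaller than min_area
    h, w = mask.shape
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    for i in range(1, n):
        x, y, bw, bh, area = stats[i]
        if area < min_area or x == 0 or y == 0 or x + bw == w or y + bh == h:
            mask[labels == i] = 0
    # fill holes: flood-fill the background from a corner, then OR in the inverse
    flood = mask.copy()
    ff_mask = np.zeros((h + 2, w + 2), np.uint8)
    cv2.floodFill(flood, ff_mask, (0, 0), 255)
    return mask | cv2.bitwise_not(flood)

img = cv2.imread("leaf.jpg")  # hypothetical input

# channel 3 (blue in BGR), thresholded at 100, cleaned -> I1 (top right)
_, I1 = cv2.threshold(img[:, :, 0], 100, 255, cv2.THRESH_BINARY)
I1 = clean_mask(I1)

# channel 1 (red in BGR), same cleaning, then keep the largest region -> I (bottom right)
_, I = cv2.threshold(img[:, :, 2], 100, 255, cv2.THRESH_BINARY)
I = clean_mask(I)
n, labels, stats, _ = cv2.connectedComponentsWithStats(I)
if n > 1:
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    I = np.where(labels == largest, 255, 0).astype(np.uint8)

leaf = cv2.bitwise_and(cv2.bitwise_not(I), I1)  # ~I & I1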
I tried two different things:
1. a different thresholding on the saturation channel
2. trying to find two contours: shadow and leaf
I use C++, so your code snippets will look a little different.
Trying Otsu thresholding instead of adaptive thresholding:
cv::threshold(hsv_imgs,mask,0,255,CV_THRESH_BINARY|CV_THRESH_OTSU);
leading to the following images (just Otsu thresholding on the saturation channel):
The other thing is computing gradient information (I used Sobel; see the OpenCV documentation), thresholding that, and applying an opening operator before findContours. That gives something like this, not usable yet (gradient contour approach):
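Since the answerer's code is in C++, here is a hedged Python sketch of that gradient pipeline (the kernel sizes and the choice of Otsu for the gradient threshold are guesses; the file name is a placeholder):
import cv2
import numpy as np

sat = cv2.cvtColor(cv2.imread("leaf.jpg"), cv2.COLOR_BGR2HSV)[:, :, 1]

# gradient magnitude via Sobel
gx = cv2.Sobel(sat, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(sat, cv2.CV_32F, 0, 1)
mag = cv2.normalize(cv2.magnitude(gx, gy), None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# threshold the gradient, then an opening to drop speckle
_, edges = cv2.threshold(mag, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
edges = cv2.morphologyEx(edges, cv2.MORPH_OPEN, kernel)

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)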
I'm trying to do the same thing with photos of butterflies, but with more uneven and unpredictable backgrounds such as this. Once you've identified a good portion of the background (e.g. via thresholding, or, as we do, flood filling from random points), what works well is to use the GrabCut algorithm to get all those bits you might miss on the initial pass. In Python, assuming you still want to identify an initial area of background by thresholding on the saturation channel, try something like:
import cv2
import numpy as np
img = cv2.imread("leaf.jpg")
sat = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:,:,1]
sat = cv2.medianBlur(sat, 11)
thresh = cv2.adaptiveThreshold(sat, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 401, 10)
cv2.imwrite("thresh.jpg", thresh)
h, w = img.shape[:2]
bgdModel = np.zeros((1,65),np.float64)
fgdModel = np.zeros((1,65),np.float64)
grabcut_mask = thresh // 255 * 3 #background should be 0, probable foreground = 3; floor division keeps uint8 for grabCut
cv2.grabCut(img, grabcut_mask,(0,0,w,h),bgdModel,fgdModel,5,cv2.GC_INIT_WITH_MASK)
grabcut_mask = np.where((grabcut_mask ==2)|(grabcut_mask ==0),0,1).astype('uint8')
cv2.imwrite("GrabCut1.jpg", img*grabcut_mask[...,None])
This actually gets rid of the shadows for you in this case, because the edge of the shadow has high saturation levels, so it is included in the GrabCut deletion. (I would post images, but don't have enough reputation.)
Usually, however, you can't trust shadows to be included in the background detection. In that case you probably want to compare areas in the image with the colour of the now-known background, using the chromacity distortion measure proposed by Horprasert et al. (1999) in "A Statistical Approach for Real-time Robust Background Subtraction and Shadow Detection". This measure accounts for the fact that, for desaturated colours, hue is not a relevant measure.
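Concretely, the two per-pixel measures (matching the code below), for an observed color $I$ with background mean $\mu$ and standard deviation $\sigma$ per channel $c$, are

$$\alpha = \frac{\sum_c I_c \mu_c / \sigma_c^2}{\sum_c \left(\mu_c / \sigma_c\right)^2}, \qquad CD = \sqrt{\sum_c \left(\frac{I_c - \alpha \mu_c}{\sigma_c}\right)^2}$$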
Note that the PDF of the preprint you find online has a mistake (missing + signs) in equation 6. You can use the version re-quoted in Rodriguez-Gomez et al. (2012), equations 1 & 2, or you can use my python code below:
def brightness_distortion(I, mu, sigma):
    return np.sum(I*mu/sigma**2, axis=-1) / np.sum((mu/sigma)**2, axis=-1)

def chromacity_distortion(I, mu, sigma):
    alpha = brightness_distortion(I, mu, sigma)[...,None]
    return np.sqrt(np.sum(((I - alpha * mu)/sigma)**2, axis=-1))
You can feed the known background mean & stdev as the last two parameters of the chromacity_distortion function, and the RGB pixel image as the first parameter, which should show you that the shadow is basically the same chromacity as the background, and very different from the leaf. In the code below, I've then thresholded on chromacity, and done another grabcut pass. This works to remove the shadow even if the first grabcut pass doesn't (e.g. if you originally thresholded on hue)
mean, stdev = cv2.meanStdDev(img, mask = 255-thresh)
mean = mean.ravel() #bizarrely, meanStdDev returns an array of size [3,1], not [3], so flatten it
stdev = stdev.ravel()
chrom = chromacity_distortion(img, mean, stdev)
chrom255 = cv2.normalize(chrom, None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX).astype(np.uint8)[:,:,None]
cv2.imwrite("ChromacityDistortionFromBackground.jpg", chrom255)
thresh2 = cv2.adaptiveThreshold(chrom255, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 401, 10)
cv2.imwrite("thresh2.jpg", thresh2)
grabcut_mask[...] = 3
grabcut_mask[thresh==0] = 0 #where thresh == 0, definitely background, set to 0
grabcut_mask[np.logical_and(thresh == 255, thresh2 == 0)] = 2 #could try setting this to 2 or 0
cv2.grabCut(img, grabcut_mask,(0,0,w,h),bgdModel,fgdModel,5,cv2.GC_INIT_WITH_MASK)
grabcut_mask = np.where((grabcut_mask ==2)|(grabcut_mask ==0),0,1).astype('uint8')
cv2.imwrite("final_leaf.jpg", grabcut_mask[...,None]*img)
I'm afraid that, with the parameters I tried, this still removes the stalk. I think that's because GrabCut considers it similar in colour to the shadows. Let me know if you find a way to keep it.
I would like to replace a part of an image with my own image in OpenCV.
I used
cvGetPerspectiveTransform() with a warp matrix, together with cvAnd() and cvOr(),
but could not get it to work.
This is the code that currently displays the image with a white polygon marking the replacement region. I would like to replace the white polygon with a picture of any dimensions, scaled to fit the region indicated.
While the code is in JavaCV, I can convert to Java even if C code is posted.
grabber.start();
while (isDisp() && (image = grabber.grab()) != null) {
    if (dst_corners != null) { // corners of the image to be replaced
        CvPoint points = new CvPoint((byte) 0, dst_corners, 0, dst_corners.length);
        cvFillConvexPoly(image, points, 4, CvScalar.WHITE, 1, 0); // white polygon covering the replacement image
    }
    correspondFrame.showImage(image);
}
Any pointers on this will be very helpful.
Update:
I used a warp matrix with this code, and I get a black spot where the overlay image should be:
cvSetImageROI(image, cvRect(x1,y1, overlay.width(), overlay.height()));
CvPoint2D32f p = new CvPoint2D32f(4);
CvPoint2D32f q = new CvPoint2D32f(4);
q.position(0).x(0);
q.position(0).y(0);
q.position(1).x((float) overlay.width());
q.position(1).y(0);
q.position(2).x((float) overlay.width());
q.position(2).y((float) overlay.height());
q.position(3).x(0);
q.position(3).y((float) overlay.height());
p.position(0).x((int)Math.round(dst_corners[0]));
p.position(0).y((int)Math.round(dst_corners[1]));
p.position(1).x((int)Math.round(dst_corners[2]));
p.position(1).y((int)Math.round(dst_corners[3]));
p.position(3).x((int)Math.round(dst_corners[4]));
p.position(3).y((int)Math.round(dst_corners[5]));
p.position(2).x((int)Math.round(dst_corners[6]));
p.position(2).y((int)Math.round(dst_corners[7]));
cvGetPerspectiveTransform(q, p, warp_matrix);
cvWarpPerspective(overlay, image, warp_matrix);
I get a black spot instead of the overlay image, and even though the target region is a polygon with 4 vertices, the overlay is drawn as a rectangle. I believe this is because of the ROI. Could anyone please tell me how to fit the image to the polygon, and also why I am getting a black spot instead of the overlay image?
I think cvWarpPerspective is what you are looking for.
So instead of doing
CvPoint points = new CvPoint((byte) 0,dst_corners,0,dst_corners.length);
cvFillConvexPoly(image,points, 4, CvScalar.WHITE, 1, 0);//white polygon covering the replacement image
Try
warpPerspective(yourimage, image, M, image.size(), INTER_CUBIC, BORDER_TRANSPARENT);
where M is the matrix you get from cvGetPerspectiveTransform, and BORDER_TRANSPARENT leaves the destination pixels outside the warped quad untouched, so no black rectangle appears around the overlay.
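To illustrate, a hedged Python sketch of that compositing approach (the file names and corner values are placeholders; dst_corners is assumed to hold the four tracked corners in the same order as the source quad):
import cv2
import numpy as np

frame = cv2.imread("frame.jpg")      # hypothetical grabbed frame
overlay = cv2.imread("overlay.jpg")  # hypothetical replacement image
dst_corners = [100, 50, 400, 80, 420, 300, 90, 280]  # hypothetical tracked corners (x0,y0,...,x3,y3)

h, w = overlay.shape[:2]
src_quad = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
dst_quad = np.float32(dst_corners).reshape(4, 2)

M = cv2.getPerspectiveTransform(src_quad, dst_quad)
# BORDER_TRANSPARENT means destination pixels outside the warped quad are not modified
cv2.warpPerspective(overlay, M, (frame.shape[1], frame.shape[0]), dst=frame,
                    flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_TRANSPARENT)
cv2.imwrite("composited.jpg", frame)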
One way to do it is to scale the pic to the white polygon's size and then copy it into the grabbed image after setting the region of interest (ROI) accordingly.
Your code should look like this:
resize(pic, resizedImage, resizedImage.size(), 0, 0, interpolation); //resizedImage should have the points size
cvSetImageROI(image, cvRect(the points coordinates));
cvCopy(resizedImage,image);
cvResetImageROI(image);
I hope that helps.
Best regards,
Daniel