Black lines around image using warpPerspective for stitching in OpenCV

I'm trying to build a mosaic panorama from a video.
I stitched every frame together, but there is a problem in the final image.
I used findHomography to get the transformation matrix, built a mask, warped each frame with warpPerspective, and copied the warped image into the final panorama.
I think this is a problem with warpPerspective. Does anybody know how to fix these black lines?
The black vertical lines are the borders of each stitched image. How can I remove them?

I solved it. I found the corners of the stitched image and manually edited the mask, drawing black lines over the warped border with this code:
line(mask, corner_trans[0], corner_trans[2], CV_RGB(0, 0, 0), 4, 8);
line(mask, corner_trans[2], corner_trans[3], CV_RGB(0, 0, 0), 4, 8);
line(mask, corner_trans[3], corner_trans[1], CV_RGB(0, 0, 0), 4, 8);
line(mask, corner_trans[1], corner_trans[0], CV_RGB(0, 0, 0), 4, 8);
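For context, a minimal sketch of how corner_trans might be computed; frame and H are assumptions here (the current video frame and the homography returned by findHomography). The warped corners come from perspectiveTransform, and the four line() calls above then erase the warped border from the mask:
// frame and H are assumed: the current video frame and its homography
// Source-image corners in the order TL, TR, BL, BR (matching the indices above)
std::vector<cv::Point2f> corners = {
    {0.0f, 0.0f},
    {(float)frame.cols, 0.0f},
    {0.0f, (float)frame.rows},
    {(float)frame.cols, (float)frame.rows}
};
std::vector<cv::Point2f> corner_trans;
cv::perspectiveTransform(corners, corner_trans, H); // warp corners into panorama space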

Related

How to draw contour without filling in OpenCV

I'm trying to extract numbers from an image; however, I'm having issues with 0, 8 and the like because of their enclosed portions.
When I use cv.drawContours(black_img, [contour_num], -1, (255, 255, 255), thickness=cv.FILLED)
it obviously fills the inside of the contour as well. How do I prevent this? How do I draw only the "outer" part of the number? Thanks
(image: drawn contour)
(image: original image)
cv.drawContours(black_img, [contour_num], -1, (255, 255, 255), thickness=cv.FILLED)
cv.FILLED is equal to -1, which is why the contours are drawn filled.
You should pass a thickness greater than zero instead, e.g. thickness=1 or thickness=2.
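For reference, the same fix in the C++ API, as a hedged sketch; the input file name and the contour-extraction steps are illustrative, not from the question:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat img = imread("digits.png", IMREAD_GRAYSCALE); // hypothetical input
    if (img.empty()) return 1;

    Mat bw;
    threshold(img, bw, 128, 255, THRESH_BINARY);
    std::vector<std::vector<Point>> contours;
    findContours(bw, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);

    Mat black_img = Mat::zeros(img.size(), CV_8UC3);
    // thickness = 2 draws only the outline; FILLED (-1) would fill the interior
    drawContours(black_img, contours, -1, Scalar(255, 255, 255), 2);
    imwrite("outlines.png", black_img);
    return 0;
}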

OpenCV Circle mask anti-aliasing

I'm trying to use OpenCV to overlay two images together.
Input 1 background (b.jpg):
Input 2 foreground (f.jpg):
Desired output:
Real output:
The idea is to overlay the circular part of the foreground onto the background.
I'm using the code:
Mat background = imread("b.jpg");
Mat foreground = imread("f.jpg");
Mat mask{foreground.size(), foreground.type(), Scalar::all(0)};
circle(mask, Point{foreground.cols / 2, foreground.rows / 2}, foreground.cols / 2 - 10, Scalar::all(255), -1, CV_AA);
imwrite("mask.jpg", mask);
foreground.copyTo(background, mask);
imwrite("overlay.jpg", background);
For the mask itself, I can see a perfect circle drawn with a very smooth edge.
But as soon as I call copyTo with the circular mask, the resulting image has abrupt edges; the anti-aliasing seems to be completely lost.
Is there a way to make copyTo honor the anti-aliasing? Or is there an easier way to achieve the same output?
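copyTo treats the mask as binary: it copies the full foreground pixel wherever the mask is nonzero, so the gray anti-aliased edge pixels are lost. One possible workaround (a sketch, not necessarily the thread's accepted fix, using the variable names from the question) is to use the mask as a per-pixel alpha and blend instead:
// Blend using the mask as per-pixel alpha so the smooth circle edge survives
Mat maskF, fgF, bgF, invF;
mask.convertTo(maskF, CV_32FC3, 1.0 / 255.0);  // 0..1 alpha, per channel
foreground.convertTo(fgF, CV_32FC3);
background.convertTo(bgF, CV_32FC3);
subtract(Scalar::all(1.0), maskF, invF);       // 1 - alpha
Mat blended = fgF.mul(maskF) + bgF.mul(invF);
blended.convertTo(background, CV_8UC3);
imwrite("overlay.jpg", background);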

How to detect room borders with OpenCV

I want to clean up a floor plan and detect walls. I found this solution, but the code is quite difficult to understand.
Especially this line (how does it remove text and other objects inside rooms?):
DeleteSmallComponents[Binarize[img, {0, .2}]];
https://mathematica.stackexchange.com/questions/19546/image-processing-floor-plan-detecting-rooms-borders-area-and-room-names-t
img = Import["http://i.stack.imgur.com/qDhl7.jpg"]
nsc = DeleteSmallComponents[Binarize[img, {0, .2}]];
m = MorphologicalTransform[nsc, {"Min", "Max"}]
How can I do the same with OpenCV?
In OpenCV the approach to processing images is slightly different: to do this kind of computation you have to think in a more low-level way, in terms of basic image processing operations.
For example, the line you showed:
DeleteSmallComponents[Binarize[img, {0, .2}]];
could be expressed in OpenCV as an algorithm:
binarize the image
morphological open/close, or simple dilation/erosion (depending on the colors of the objects and the background):
cv::threshold(img, img, 100, 255, CV_THRESH_BINARY);
cv::dilate(img, img, cv::Mat());
cv::dilate(img, img, cv::Mat());
Further, you can implement your own distance transform, or use for example the hit-and-miss routine (which, being a basic operation, is implemented in OpenCV) to detect corners:
cv::Mat kernel = (cv::Mat_<int>(7, 7) <<
     0,  1,  0,  0,  0,  0,  0,
    -1,  1,  0,  0,  0,  0,  0,
    -1,  1,  0,  0,  0,  0,  0,
    -1,  1,  0,  0,  0,  0,  0,
    -1,  1,  0,  0,  0,  0,  0,
    -1,  1,  1,  1,  1,  1,  1,
    -1, -1, -1, -1, -1, -1,  0);
cv::Mat left_down,left_up,right_down,right_up;
cv::morphologyEx(img, left_down, cv::MORPH_HITMISS, kernel);
cv::flip(kernel, kernel, 1);
cv::morphologyEx(img, right_down, cv::MORPH_HITMISS, kernel);
cv::flip(kernel, kernel, 0);
cv::morphologyEx(img, right_up, cv::MORPH_HITMISS, kernel);
cv::flip(kernel, kernel, 1);
cv::morphologyEx(img, left_up, cv::MORPH_HITMISS, kernel);
and then you will have a picture like this:
One more picture, with bigger dots (after a single dilation):
Finally, you can process the coordinates of the corners found to determine the rooms, as sketched below.
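A hedged sketch of that last step (left_down etc. are the hit-and-miss results from above):
// Collect the coordinates of every matched corner pixel
std::vector<cv::Point> corners_ld;
cv::findNonZero(left_down, corners_ld);
// Repeat for right_down, right_up and left_up, then pair up corners that share
// (roughly) the same row/column to recover the room rectangles.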
EDIT: for images with "double wall lines" like:
we have to "merge" the double wall lines first, so the code will look like this:
cv::threshold(img, img, 220, 255, CV_THRESH_BINARY);
cv::dilate(img, img, cv::Mat()); //small object textures
cv::erode(img, img, cv::getStructuringElement(CV_SHAPE_RECT, cv::Size(5, 5)),cv::Point(-1,-1),2);
cv::dilate(img, img, cv::getStructuringElement(CV_SHAPE_RECT, cv::Size(5, 5)), cv::Point(-1, -1), 3);
And result image:
Sadly, if the image properties change, you will have to tweak the algorithm parameters. It is possible to build a more general solution, but you would have to account for most of the possible variants of the problem, and it would be considerably more complex.

Error in marker pose estimation using single camera

I use the following OpenCV code to estimate the pose of a square marker and draw its three axes on the image. But the Z-axis of the marker flips 180 degrees from time to time, as shown in the image below. How can I make the Z-axis stable?
// Marker world coordinates
vector<Point3f> objecPoints;
objecPoints.push_back(Point3f(0, 0, 0));
objecPoints.push_back(Point3f(0, 2.4, 0));
objecPoints.push_back(Point3f(2.4, 2.4, 0));
objecPoints.push_back(Point3f(2.4, 0.0, 0));
// 2D image coordinates of 4 marker corners. They are arranged in the same order for each frame
vector<Point2f> marker2DPoints;
// Calculate rotation and translation
cv::Mat Rvec;
cv::Mat_<float> Tvec;
cv::Mat raux, taux;
cv::solvePnP(objecPoints, marker2DPoints, camMatrix, distCoeff, raux, taux);
// Convert to the float matrices used by projectPoints below
raux.convertTo(Rvec, CV_32F);
taux.convertTo(Tvec, CV_32F);
// Draw marker pose on the image
vector<Point3f> axisPoints3D;
axisPoints3D.push_back(Point3f(0, 0, 0));
axisPoints3D.push_back(Point3f(2.4, 0, 0));
axisPoints3D.push_back(Point3f(0, 2.4, 0));
axisPoints3D.push_back(Point3f(0, 0, 2.4));
vector<Point2f> axisPoints2D;
// Take the camMatrix and distCoeff from camera calibration results
projectPoints(axisPoints3D, Rvec, Tvec, camMatrix, distCoeff, axisPoints2D);
line(srcImg, axisPoints2D[0], axisPoints2D[1], CV_RGB(0, 0, 255), 1, CV_AA);
line(srcImg, axisPoints2D[0], axisPoints2D[2], CV_RGB(0, 255, 0), 1, CV_AA);
line(srcImg, axisPoints2D[0], axisPoints2D[3], CV_RGB(255, 0, 0), 1, CV_AA);
This would probably be better as a comment, but I don't have enough reputation for that. I think this may be happening because of the order in which solvePnP receives the coordinates of your tag. Since solvePnP is only trying to match (in this case) 4 points on a 3D plane to 4 2D points in an image, there are multiple solutions: the tag could be rotated around its up axis, or flipped upside down. solvePnP cannot tell from the provided points which direction is up.
I have a hunch that solvePnP is a bit too general for this problem; a stable tag detection algorithm should be able to feed the corners to the pose estimation code in a stable order.
Edit: the order of the corners matters, and the solution given by solvePnP depends on it. Perhaps the algorithm generating your corner points is not providing the corners in a consistent order? Please share the output of tags.points
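A hedged sketch of what "a stable order" could look like; the helper name is illustrative, and note this only yields a consistent geometric ordering per frame. Resolving which physical edge of the marker is "up" still requires decoding the marker pattern itself:
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// Order the 4 detected corners in a fixed angular order around their centroid,
// starting from the corner nearest the image top-left, so solvePnP sees them
// in the same order as objecPoints on every frame. (Illustrative helper.)
static std::vector<cv::Point2f> orderCorners(std::vector<cv::Point2f> pts)
{
    cv::Point2f c(0, 0);                       // centroid of the four corners
    for (const auto& p : pts) c += p;
    c *= 1.0f / (float)pts.size();
    std::sort(pts.begin(), pts.end(),
        [c](const cv::Point2f& a, const cv::Point2f& b) {
            return std::atan2(a.y - c.y, a.x - c.x) <
                   std::atan2(b.y - c.y, b.x - c.x);
        });
    // Rotate so the corner closest to the image top-left comes first
    auto first = std::min_element(pts.begin(), pts.end(),
        [](const cv::Point2f& a, const cv::Point2f& b) {
            return a.x + a.y < b.x + b.y;
        });
    std::rotate(pts.begin(), first, pts.end());
    return pts;
}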

Simulate cataract vision in OpenCV

I'm working on a project to design a low vision aid. What image processing operation can simulate cataract vision for a normal eye using OpenCV?
It would be useful if you described the symptoms of the cataract and what happens to the retinal image, since not all the people here are experts in both computer vision and eye diseases. If the retinal image gets out of focus and takes on a yellow tint, you can use the OpenCV blur() function and also boost the RGB values toward yellow a bit. If the degree of blur varies across the visual field, I recommend using integral images; see this post.
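A hedged sketch of that suggestion; the input Mat name and the blend weights are assumptions:
// retina: hypothetical input image (BGR)
// Defocus with blur(), then shift the colors toward yellow (high R and G in BGR)
Mat blurred;
blur(retina, blurred, Size(9, 9));
Mat tinted = blurred * 0.85 + Scalar(0, 255, 255) * 0.15; // yellowish tint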
I guess there are at least three operations to do: add noise, blur, whiten:
// I4, w and h come from the poster's larger demo (I4 is a composite image);
// rect2 selects its left half as the working region
Rect rect2(0, 0, w/2, h);
Mat src = I4(rect2).clone();
Mat Mnoise(h, w/2, CV_8UC3);
randn(Mnoise, 100, 50);                 // Gaussian noise, mean 100, stddev 50
src = src*0.5 + Mnoise*0.5;             // add noise
Mat Mblur;
blur(src, Mblur, Size(12, 12));         // blur
Rect rect3(w, 0, w/2, h);               // region where the result is placed in the demo
Mat Mblurw = Mblur*0.8 + Scalar(255, 255, 255)*0.2; // whiten
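A self-contained version of the three steps, as a sketch; the input file name and all parameter values are assumptions:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat src = imread("eye.jpg");                 // hypothetical input image
    if (src.empty()) return 1;

    Mat noise(src.size(), src.type());
    randn(noise, 100, 50);                       // Gaussian noise, mean 100, stddev 50
    Mat noisy = src * 0.5 + noise * 0.5;         // add noise

    Mat blurred;
    blur(noisy, blurred, Size(12, 12));          // blur to simulate loss of focus

    Mat result = blurred * 0.8 + Scalar(255, 255, 255) * 0.2; // whiten
    imwrite("cataract.jpg", result);
    return 0;
}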
