I'm working on a project to design a low vision aid. What image processing operations can simulate cataract vision for a normal eye using OpenCV?
It would be useful if you described the symptoms of a cataract and what happens to the retinal image, since not everyone here is an expert in both computer vision and eye diseases at the same time. If the retinal image goes out of focus and takes on a yellow tint, you can use the OpenCV blur() function and also boost the RGB values toward yellow a bit. If there are different degrees of blur across the visual field, I recommend using integral images; see this post.
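A minimal sketch of both ideas, assuming a BGR input image; the file name, blend weights, and the radius schedule for the varying blur are illustrative choices, not prescriptions:
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
using namespace cv;

int main()
{
    Mat img = imread("scene.jpg", IMREAD_COLOR); // hypothetical input

    // uniform defocus plus yellow tint: blur, then blend toward pure
    // yellow (BGR order, so yellow is (0, 255, 255))
    Mat blurred, tinted;
    blur(img, blurred, Size(15, 15));
    Mat yellow(blurred.size(), blurred.type(), Scalar(0, 255, 255));
    addWeighted(blurred, 0.85, yellow, 0.15, 0, tinted);

    // spatially varying blur via an integral image: a box filter whose
    // radius grows with the distance from the image centre
    Mat gray, sums;
    cvtColor(img, gray, COLOR_BGR2GRAY);
    integral(gray, sums, CV_64F);               // sums is (rows+1) x (cols+1)
    Mat varBlur(gray.size(), CV_8U);
    Point2f c(gray.cols / 2.f, gray.rows / 2.f);
    float maxDist = std::hypot(c.x, c.y);
    for (int y = 0; y < gray.rows; ++y)
        for (int x = 0; x < gray.cols; ++x)
        {
            float d = std::hypot(x - c.x, y - c.y) / maxDist;
            int r = 1 + int(d * 10);            // box radius 1..11 px
            int x0 = std::max(x - r, 0), x1 = std::min(x + r + 1, gray.cols);
            int y0 = std::max(y - r, 0), y1 = std::min(y + r + 1, gray.rows);
            // box sum from four integral-image lookups
            double s = sums.at<double>(y1, x1) - sums.at<double>(y0, x1)
                     - sums.at<double>(y1, x0) + sums.at<double>(y0, x0);
            varBlur.at<uchar>(y, x) = uchar(s / ((x1 - x0) * (y1 - y0)));
        }

    imwrite("cataract_sim.jpg", tinted);
    imwrite("cataract_varblur.jpg", varBlur);
    return 0;
}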
I guess there are at least three operations to do: add noise, blur, whiten:
// take the left half of the input image I4 (width w, height h) as the source
Rect rect2(0, 0, w/2, h);
Mat src = I4(rect2).clone();

// add noise: Gaussian noise (mean 100, stddev 50), blended 50/50 into the source
Mat Mnoise(h, w/2, CV_8UC3);
randn(Mnoise, 100, 50);
src = src*0.5 + Mnoise*0.5;

// blur: 12x12 box filter
Mat Mblur;
blur(src, Mblur, Size(12, 12));

// whiten: blend 20% white into the blurred image
Mat Mblurw = Mblur*0.8 + Scalar(255, 255, 255)*0.2;
I want to extract the darker contours from images with OpenCV. I have tried a simple threshold such as the one below (C++):
cv::threshold(gray, output, threshold, 255, THRESH_BINARY_INV);
I can iterate the threshold, let's say from 50 to 200, and then I can get the darker contours in the middle for images with a clear distinction, such as this one; here is the result of the threshold. But if the contour is near the border, the threshold fails because the pixel values are almost the same, for example in this image.
What I want to ask: is there any technique in OpenCV that can extract a darker contour in the middle of the image, even when the contour reaches the border and has almost the same pixel values as the border?
(updated)
After thresholding, the darker contour in the middle overlaps with the top border, which makes me fail to extract characters such as the first two "SS".
I think you can simply add an edge-preserving smoothing step to solve this:
// read input image
Mat inputImg = imread("test2.tif", IMREAD_GRAYSCALE);
// edge-preserving smoothing
Mat filteredImg;
bilateralFilter(inputImg, filteredImg, 5, 60, 20);
// compute laplacian
Mat laplaceImg;
Laplacian(filteredImg, laplaceImg, CV_16S, 1);
// threshold
Mat resImg;
threshold(laplaceImg, resImg, 10, 1, THRESH_BINARY);
// scale the 0/1 mask to 0/255 and convert to 8-bit so it can be saved
resImg.convertTo(resImg, CV_8U, 255);
// write result
imwrite("res2.tif", resImg);
This will give you the following result:
Regards,
I think using a Laplacian could partially solve your problem:
// read input image
Mat inputImg = imread("test2.tif", IMREAD_GRAYSCALE);
// compute laplacian
Mat laplaceImg;
Laplacian(inputImg, laplaceImg, CV_16S, 1);
// threshold
Mat resImg;
threshold(laplaceImg, resImg, 30, 1, THRESH_BINARY);
// scale the 0/1 mask to 0/255 and convert to 8-bit so it can be saved
resImg.convertTo(resImg, CV_8U, 255);
// write result
imwrite("res2.tif", resImg);
Using this code you should obtain something like this result.
You can then play with the final threshold value and with the Laplacian kernel size.
You will probably have to remove small artifacts after this operation.
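A sketch of that cleanup step, assuming an 8-bit binary mask named binMask and an illustrative area threshold (both names and values are mine, not part of the steps above):
// drop connected components smaller than minArea from the binary mask
Mat labels, stats, centroids;
int n = connectedComponentsWithStats(binMask, labels, stats, centroids, 8);
int minArea = 20; // tune per image
Mat cleaned = Mat::zeros(binMask.size(), CV_8U);
for (int i = 1; i < n; ++i) // label 0 is the background
    if (stats.at<int>(i, CC_STAT_AREA) >= minArea)
        cleaned.setTo(255, labels == i);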
Regards
Using EmguCV with C#, MS VS 2015. The goal is to recognise black circles on a white sheet (with some dust). Circle radii are about 80 pixels, with no adjoining circles.
The given IntPtr ptrImg contains a byte[] with a grayscale image (8 bits per sample, one channel).
Here is the code for circle detection:
Mat mat = new Mat(height, width, DepthType.Cv8U, 1, ptrImg, width);
CvInvoke.FastNlMeansDenoising(mat, mat, 20);
return CvInvoke.HoughCircles(mat, HoughType.Gradient, 2.0, 120.0, 90, 60, 60, 100);
In fact, some of the circles are detected fine, but others have a glitch: the detected radius differs from the real radius by about 5-7 pixels, and the detected boundary coincides with the real boundary on one side but misses it on the opposite side.
What am I doing wrong? Maybe I have to play with dp, param1, param2? How should I tune them?
P.S. If I remove the denoising but add binarization by threshold, the situation is no better.
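For reference (my reading of the OpenCV docs, not a verified fix): dp is the inverse ratio of the accumulator resolution, param1 is the upper Canny edge threshold, and param2 is the accumulator vote threshold for circle centres. dp = 2 halves the accumulator resolution, which by itself can shift detected centres and radii by a few pixels. The equivalent C++ call with dp lowered to 1, assuming a cv::Mat named mat, would be:
std::vector<cv::Vec3f> circles;
cv::HoughCircles(mat, circles, cv::HOUGH_GRADIENT,
                 1.0,   // dp: full accumulator resolution (was 2.0)
                 120.0, // minDist between circle centres
                 90,    // param1: upper Canny edge threshold
                 60,    // param2: accumulator threshold; lower finds more circles
                 60,    // minRadius
                 100);  // maxRadius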
I want to clean up a floor plan and detect walls. I found this solution, but its code is quite difficult to understand. Especially this line (how does it remove texts and other objects inside rooms?):
DeleteSmallComponents[Binarize[img, {0, .2}]];
https://mathematica.stackexchange.com/questions/19546/image-processing-floor-plan-detecting-rooms-borders-area-and-room-names-t
img = Import["http://i.stack.imgur.com/qDhl7.jpg"]
nsc = DeleteSmallComponents[Binarize[img, {0, .2}]];
m = MorphologicalTransform[nsc, {"Min", "Max"}]
How can I do the same with OpenCV?
In OpenCV there is a slightly different approach to processing images. In order to do this kind of calculation you have to think in a more low-level way, by which I mean thinking in basic image processing operations.
For example, the line you showed:
DeleteSmallComponents[Binarize[img, {0, .2}]];
could be expressed in OpenCV by this algorithm:
binarize the image
apply a morphological open/close, or simple dilation/erosion (depending on the colour of the objects and the background):
cv::threshold(img, img, 100, 255, cv::THRESH_BINARY);
cv::dilate(img, img, cv::Mat()); // default 3x3 kernel
cv::dilate(img, img, cv::Mat());
Further, you can implement your own distance transform, or use for example the hit-and-miss routine (which, being a basic operation, is implemented in OpenCV) to detect corners:
cv::Mat kernel = (cv::Mat_<int>(7, 7) <<
     0,  1,  0,  0,  0,  0,  0,
    -1,  1,  0,  0,  0,  0,  0,
    -1,  1,  0,  0,  0,  0,  0,
    -1,  1,  0,  0,  0,  0,  0,
    -1,  1,  0,  0,  0,  0,  0,
    -1,  1,  1,  1,  1,  1,  1,
    -1, -1, -1, -1, -1, -1,  0);
cv::Mat left_down, left_up, right_down, right_up;
cv::morphologyEx(img, left_down, cv::MORPH_HITMISS, kernel);
cv::flip(kernel, kernel, 1); // mirror horizontally
cv::morphologyEx(img, right_down, cv::MORPH_HITMISS, kernel);
cv::flip(kernel, kernel, 0); // mirror vertically
cv::morphologyEx(img, right_up, cv::MORPH_HITMISS, kernel);
cv::flip(kernel, kernel, 1); // mirror horizontally again
cv::morphologyEx(img, left_up, cv::MORPH_HITMISS, kernel);
and then you will have a picture like this:
One more picture, with bigger dots (after a single dilation):
Finally, you can process the coordinates of the found corners to determine the rooms.
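One way to collect those coordinates (my own addition, not part of the steps above) is cv::findNonZero on each hit-or-miss response:
std::vector<cv::Point> corners;
for (const cv::Mat& resp : {left_down, right_down, right_up, left_up})
{
    std::vector<cv::Point> pts;
    cv::findNonZero(resp, pts); // white pixels = detected corners
    corners.insert(corners.end(), pts.begin(), pts.end());
}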
EDIT: for images with "double wall lines" like this one, we have to "merge" the double wall lines first, so the code will look like this:
cv::threshold(img, img, 220, 255, cv::THRESH_BINARY);
cv::dilate(img, img, cv::Mat()); // remove small object textures
cv::erode(img, img, cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5)), cv::Point(-1, -1), 2);
cv::dilate(img, img, cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5)), cv::Point(-1, -1), 3);
And the resulting image:
Sadly, if the image properties change, you will have to adjust the algorithm's parameters slightly. It is possible to provide a general solution, but you would have to determine most of the possible variants of the problem, and it would be more complex.
I am working on a project to detect objects of interest using background subtraction and track them using optical flow in OpenCV C++. I was able to detect the object of interest using background subtraction, and I was able to implement OpenCV Lucas-Kanade optical flow in a separate program. But I am stuck on how to combine these two programs into a single one. frame1 holds the actual frame from the video; contours2 are the selected contours of the foreground object.
To summarize: how do I feed the foreground object obtained from the background subtraction method to calcOpticalFlowPyrLK? Or correct me if my approach is wrong. Thank you in advance.
// binary mask of the foreground object, built from the selected contours
Mat mask = Mat::zeros(fore.rows, fore.cols, CV_8UC1);
drawContours(mask, contours2, -1, Scalar(255), CV_FILLED); // CV_FILLED is the thickness argument (fills the contours)
if (first_frame)
{
    // seed features on the first frame
    goodFeaturesToTrack(mask, features_next, 1000, 0.01, 10, noArray(), 3, false, 0.04);
    fm0 = mask.clone();
    features_prev = features_next;
    first_frame = false;
}
else
{
    features_next.clear();
    if (!features_prev.empty())
    {
        // track features from the previous mask to the current one
        calcOpticalFlowPyrLK(fm0, mask, features_prev, features_next, featuresFound, err, winSize, 3, termcrit, 0, 0.001);
        for (size_t i = 0; i < features_prev.size(); i++)
            line(frame1, features_prev[i], features_next[i], CV_RGB(0, 0, 255), 1, 8);
        imshow("final optical", frame1);
        waitKey(1);
    }
    // re-seed features for the next iteration
    goodFeaturesToTrack(mask, features_next, 1000, 0.01, 10, noArray(), 3, false, 0.04);
    features_prev = features_next;
    fm0 = mask.clone();
}
Your approach of running optical flow on the binary masks is wrong. The idea behind the optical flow approach is that a moving point has the same pixel intensity at its start and end positions in two consecutive images; a feature's motion is estimated by observing its appearance in the first image and searching for the same structure in the second (very simplified).
calcOpticalFlowPyrLK is a point tracker, meaning points in the previous image are tracked into the current one. Therefore the method needs the original gray-value images of your system, because it can only estimate motion in structured/textured regions (you need x and y gradients in your image).
I think your code should do something like this:
Extract objects by background subtraction (by contour); in the literature such an object is called a blob.
Extract the objects in the next image and apply a blob association (which contour belongs to which); this is also called blob tracking.
It is possible to do blob tracking with calcOpticalFlowPyrLK, e.g. in a very simple way (see the sketch below):
Track points from the contour, or points inside the blob.
Association: a previous contour matches a current one if the points belonging to the previous contour end up located on or inside the current contour.
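A minimal sketch of that idea, with assumed variable names (prevGray/gray are consecutive grayscale frames, mask is the foreground mask, contoursCurr the current blobs); the key point is that the gray frames, not the masks, go into the tracker:
std::vector<Point2f> prevPts, nextPts;
std::vector<uchar> status;
std::vector<float> err;

// seed features on textured regions, restricted to the detected blob
goodFeaturesToTrack(prevGray, prevPts, 500, 0.01, 10, mask);

// track them into the current gray frame (not into the mask)
calcOpticalFlowPyrLK(prevGray, gray, prevPts, nextPts, status, err,
                     Size(21, 21), 3);

// association: a tracked point that lands on or inside a current
// contour links the previous blob to that contour
for (size_t i = 0; i < nextPts.size(); i++)
{
    if (!status[i]) continue;
    for (size_t c = 0; c < contoursCurr.size(); c++)
        if (pointPolygonTest(contoursCurr[c], nextPts[i], false) >= 0)
        {
            // previous blob is associated with current contour c
        }
}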
I think the output of background subtraction in OpenCV is not a grayscale image, and for input to optical flow we need grayscale images.
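If the inputs are colour frames, a conversion along these lines (hypothetical variable names) gives calcOpticalFlowPyrLK the single-channel 8-bit images it expects:
cv::Mat grayPrev, grayCurr;
cv::cvtColor(framePrev, grayPrev, cv::COLOR_BGR2GRAY);
cv::cvtColor(frameCurr, grayCurr, cv::COLOR_BGR2GRAY);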
I have problems getting the contours of an object in my picture(s).
In order to remove all noise, I use adjustROI() and Canny().
I also tried erode() and dilate(), Laplacian(), GaussianBlur(), Sobel()... and I even found this code snippet to sharpen a picture:
// unsharp masking: subtract a weighted blurred copy to sharpen
GaussianBlur(src, dst_gaussian, Size(0, 0), 3);
addWeighted(src, 1.5, dst_gaussian, -0.5, 0, dst);
But my result is always the same: my object is filled with black and white (like noise on a TV screen), so it is impossible to get the contours with findContours(). (findContours() finds a million contours, but not the one of the whole object; I check this with drawContours().)
I use C++ and I load my picture as a grayscale Mat (for Canny it has to be grayscale). My object has a different shape in every picture, but it is always around the middle of the picture.
I either need a way to get a more cleanly segmented object through image processing, but I don't know what else to try, or a way to fill the object with colour after image processing (without having its contours, because that is what I want in the end).
Any ideas are welcome. Thank you in advance.
I found a solution that works in most cases. I fill my object using the probabilistic Hough transform HoughLinesP().
vector<Vec4i> lines;
// rho = 1 px, theta = 1 degree, threshold = 80 votes,
// minLineLength = 30, maxLineGap = 10
HoughLinesP(dst, lines, 1, CV_PI/180, 80, 30, 10);
for (size_t i = 0; i < lines.size(); i++)
{
    line(color_dst, Point(lines[i][0], lines[i][1]), Point(lines[i][2], lines[i][3]), Scalar(0,0,255), 3, 8);
}
This is based on sample code from the OpenCV documentation.
After an edge detection step (like Canny()), the probabilistic Hough transform finds line segments in the binary picture; drawn together, these segments cover the whole object. Of course, some of the parameters have to be adapted to the kind of picture.
I'm not sure if this will work on every picture or every object, but in my case, it does.
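To actually fill the object afterwards (the original goal), one possible follow-up sketch, assuming color_dst holds the drawn lines and using an illustrative 9x9 closing kernel:
// close gaps between the drawn Hough lines, then fill the outer contour
Mat linesBin;
cvtColor(color_dst, linesBin, COLOR_BGR2GRAY);
threshold(linesBin, linesBin, 0, 255, THRESH_BINARY);
morphologyEx(linesBin, linesBin, MORPH_CLOSE,
             getStructuringElement(MORPH_RECT, Size(9, 9)));
vector<vector<Point>> contours;
findContours(linesBin, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
Mat filled = Mat::zeros(linesBin.size(), CV_8U);
drawContours(filled, contours, -1, Scalar(255), FILLED);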