I have an image, and I can obtain a mask of the image like so:
Now the mask is nearly perfect, but it is still clearly rough around the edges. For example, if I apply a blur effect to the background, portions of the background around the ROI bleed through and look quite bad (notice the area around the arms and the right hip).
Could someone tell me how to refine the mask further so that it encompasses only the ROI and no part of the background? I'm using OpenCV and TensorFlow currently. OpenCV's GrabCut algorithm wasn't much help.
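For reference, mask-initialized GrabCut (the usual way to refine an existing mask) looks roughly like the sketch below; `img` and `mask` are stand-ins for the image and the model's rough mask, and the kernel size and iteration count are guesses to tune:

```python
import cv2
import numpy as np

# img: the BGR image; mask: the rough binary mask (255 = person)
kernel = np.ones((15, 15), np.uint8)
sure_fg = cv2.erode(mask, kernel)        # definitely inside the person
sure_bg = cv2.erode(255 - mask, kernel)  # definitely background

gc_mask = np.full(mask.shape, cv2.GC_PR_BGD, np.uint8)
gc_mask[mask > 0] = cv2.GC_PR_FGD
gc_mask[sure_fg > 0] = cv2.GC_FGD
gc_mask[sure_bg > 0] = cv2.GC_BGD

bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)
cv2.grabCut(img, gc_mask, None, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_MASK)

refined = np.where((gc_mask == cv2.GC_FGD) | (gc_mask == cv2.GC_PR_FGD),
                   255, 0).astype(np.uint8)
```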
I'm using OpenCV to merge multiple frames of a single video into a single image based on a movement mask. The video is taken by a moving phone with slight hand movement. I was able to align the frames (using feature matching), calculate the background (median), and estimate the movement mask (using BackgroundSubtractorMOG2), but the mask doesn't give me the perfect shape of the moving body; instead, it has "holes". I'm using that mask to copy pixels from the source frame onto the calculated background, and I'm not happy with the result because the image has the same holes as the mask. It's fine if the saturated mask is not precise: because all the frames are aligned, I don't mind taking a little bit extra of the source image with the saturated mask.
Is there a good way to do the following mask improvements using OpenCV?
UPDATE:
Applying dilation and noise reduction gives the following result. It's not perfect, but acceptable. With better noise control/reduction I feel it's possible to fill the largest contour, although I still have some empty areas.
And another example: when the whole object is in the scene, I really want no holes in the person.
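A minimal sketch of that kind of cleanup, assuming the mask is already binary (the filename and kernel size are placeholders): close/open to suppress noise, fill the largest contour so the body has no holes, then dilate a little, since taking extra source pixels is acceptable here.

```python
import cv2
import numpy as np

mask = cv2.imread("movement_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

# Close small holes, then remove speckle noise
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
clean = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
clean = cv2.morphologyEx(clean, cv2.MORPH_OPEN, kernel)

# Keep only the largest outer contour and fill it completely
contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
filled = np.zeros_like(clean)
if contours:
    largest = max(contours, key=cv2.contourArea)
    cv2.drawContours(filled, [largest], -1, 255, thickness=cv2.FILLED)

# Grow the mask slightly so it grabs a bit of extra source image
filled = cv2.dilate(filled, kernel)
```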
I am working on accurately segmenting objects from an image.
I have found contour lines by using a simple rectangular prism in HSV space as a color filter (followed by some morphological operations on the resulting mask to clear up noise). I found this approach to be better than applying Canny edge detection to the whole image, as that just picked up a lot of other edges I don't care about.
Is there a way to go about refining the contour line I have extracted such that it clips to the strongest local edge kind of like Adobe Photoshop's smart cropping utility?
Here's an image of what I mean
You can see a boundary between the sky blue and the gray. The dark blue is a drawn-on contour. I'd like to somehow clip this to the nearby edge. It also looks like there are other lines in the gray region, so I think the algorithm should do some sort of more global optimisation to ensure that the "clipping" action doesn't jump randomly between my boundary of interest and the nearby lines.
Here are some ideas to try:
Morphological snakes: https://scikit-image.org/docs/dev/auto_examples/segmentation/plot_morphsnakes.html
Active contours: https://scikit-image.org/docs/dev/auto_examples/edges/plot_active_contours.html
Whatever livewire is doing under the hood: https://github.com/PyIFT/livewire-gui
Based on this comment, the last one is the most useful.
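For the second idea, scikit-image's active_contour can take the already-extracted contour as the initial snake and pull it onto the nearest strong edge; the smoothness terms give it the "global-ish" stiffness that keeps it from hopping to the other lines in the gray region. A rough sketch, assuming `my_image` (the frame) and `init` (the extracted contour as an (N, 2) array of (row, col) points) are already defined:

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# Smooth first so the snake is attracted to one dominant edge
img = gaussian(rgb2gray(my_image), sigma=3)

snake = active_contour(
    img,
    init,         # initial contour; recent skimage expects (row, col) order
    alpha=0.01,   # elasticity: lower lets the snake stretch along the edge
    beta=1.0,     # rigidity: higher resists jumping to nearby spurious lines
    gamma=0.01,   # time step
)
```

Raising beta is the knob for the concern about the contour jumping between the true boundary and the nearby lines.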
I am currently working on a method to extract colors from a Macbeth color chart. So far I have had moderate success by using thresholding and then extracting square contours. Sadly though, colors that are too close to each other either mix together or do not get detected.
The code in its current form:
https://pastebin.com/mNi0TcDE
The image before any processing
After thresholding, you can see that there are areas where lines are incomplete due to differences in color that are too small. I have tried to use dilation to mitigate these issues, and it does work to a degree, but not enough to detect all squares.
Image after thresholding
This results in the following contours being detected
Detected contours
I have tried using:
Hough lines; sadly, no lines were detected here.
Centroids of contours, but I was unable to find a way to use the centroids to draw lines and detect the centers of the missing contours.
Corner detection; corners were found, but I was unsuccessful in finding a real way to put them to use.
Can anyone point me in the right direction?
Thanks in advance,
Emil
Hmm, if your goal is color calibration, you really do not need to detect the squares in their entirety. A 10x10-pixel sample near the center of each physical square will give you 100 color samples, which is plenty for any reasonable calibration procedure.
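A minimal sketch of that sampling, assuming `centers` holds approximate (x, y) patch centers (for example, centroids of whatever square contours were detected, via cv2.moments):

```python
import cv2
import numpy as np

img = cv2.imread("chart.jpg")  # hypothetical filename

half = 5  # 10x10 window
colors = []
for (x, y) in centers:
    patch = img[y - half:y + half, x - half:x + half]
    colors.append(patch.reshape(-1, 3).mean(axis=0))  # mean BGR over 100 px
colors = np.array(colors)
```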
There are many ways to approach this problem. If you can guarantee that the chart will cover the image, you could even just do k-means clustering, since you know in advance the exact number of clusters you seek.
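A rough OpenCV k-means sketch under that assumption; K = 24 matches the 24 patches of a standard Macbeth/ColorChecker chart (add clusters if any background is visible):

```python
import cv2
import numpy as np

img = cv2.imread("chart.jpg")  # hypothetical filename
samples = img.reshape(-1, 3).astype(np.float32)

K = 24  # one cluster per patch, assuming the chart fills the frame
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(samples, K, None, criteria, 5,
                                cv2.KMEANS_PP_CENTERS)
# centers now holds one mean BGR value per patch
```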
If you insist on using geometry, I'd do template matching in scale+angle space. It is reasonable to assume that the chart will be mostly facing the camera and only slightly rotated, so you only need to estimate scale and a small rotation about the axis orthogonal to the chart.
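A brute-force sketch of that search, assuming a reference image of the chart (`chart_template.png`, hypothetical) and that only small rotations and moderate scale changes need to be covered:

```python
import cv2
import numpy as np

scene = cv2.imread("chart.jpg", cv2.IMREAD_GRAYSCALE)              # hypothetical
template = cv2.imread("chart_template.png", cv2.IMREAD_GRAYSCALE)  # hypothetical

best_score, best_params = -1.0, None
for scale in np.linspace(0.5, 1.5, 11):
    for angle in np.linspace(-10, 10, 9):  # small in-plane rotations only
        h, w = template.shape
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
        warped = cv2.warpAffine(template, M, (w, h))
        res = cv2.matchTemplate(scene, warped, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > best_score:
            best_score, best_params = score, (scale, angle, loc)
```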
I am working on a project and I ran into a situation. I want to detect a rectangular object (a black keyboard) in an IR image. The background is pretty clean, so it's not really a hard problem; I used a simple threshold and minAreaRect in OpenCV to solve it.
Easy case of the problem
But I also want the program to track this object in real time when I use my hand to move it. My hand will cover a small part of the object, as in this case.
Tricky case of the problem
My initial thought is to learn the object's size in the easy case and, for the hard case, try to place my "learned rectangle" so that it covers as many white pixels as possible.
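One way to sketch that idea: render the learned rectangle as a filled white template at each candidate angle and score it with matchTemplate using TM_CCORR, since on binary images plain cross-correlation is proportional to the number of white pixels the rectangle covers. `binary` is assumed to be the thresholded frame, and `w`, `h` the keyboard size learned in the easy case:

```python
import cv2
import numpy as np

best_score, best_pose = -1.0, None
for angle in np.arange(0, 180, 5):
    # Draw a filled rectangle of the learned size at this angle
    diag = int(np.hypot(w, h)) + 2
    template = np.zeros((diag, diag), np.uint8)
    box = cv2.boxPoints(((diag / 2, diag / 2), (w, h), angle)).astype(np.int32)
    cv2.fillPoly(template, [box], 255)

    # TM_CCORR sums products, i.e. counts the covered white pixels
    res = cv2.matchTemplate(binary, template, cv2.TM_CCORR)
    _, score, _, loc = cv2.minMaxLoc(res)
    if score > best_score:
        best_score, best_pose = score, (angle, loc)
```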
Does anyone have a better solution, maybe a feature-based approach? I don't know whether features can improve the situation, because the object is mostly black in these IR images.
Thank you in advance.
How about using morphological operations like dilation and erosion (OpenCV has implementations of these) on the thresholded image? Once you have that, you could try some corner detection/contour detection or line detectors (in the OpenCV contrib module) to understand the shape of the object.
Your "tricky" case is still fairly simple, can be solved with dilate/erode (as mentioned by Shawn Mathew) and then the same minAreaRect. Here, on the right is your thresholded image after erosion and dilation with a 5x5 kernel, minAreaRect finds a rotated rectangle for it, drawn over the original thresholded image on the left:
Are you interested in more complicated cases, for example, where you hand covers one of the short edges of the keyboard entirely?
I need to process some images in a real-time situation. I am receiving the images from a camera using OpenCV. The language I use is C++. An example of the images is attached. After applying some threshold filters I have an image like this. Of course, there may be some pixel noise here and there, but not that much.
I need to detect the centers and rotations of the squares, and the centers of the white circles. I'm totally clueless about how to do it, as it needs to be really fast. The number of squares can be predefined. Any help would be great; thanks in advance.
Is the following straightforward approach too slow? (A rough code sketch follows the list.)
Binarize the image so that the originally green background is black and the rest (black squares and white dots) is white.
Use cv::findContours.
Get the centers.
Binarize the image so that everything except the white dots is black.
Use cv::findContours.
Get the centers.
Assign every dot contour to the square contour that contains it.
Calculate each square's rotation from the angle of the line between its center and the centers of its dots.
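A rough Python sketch of the whole list (the C++ calls map one-to-one); the HSV thresholds and filename are placeholders to tune for the actual green background:

```python
import cv2
import numpy as np

img = cv2.imread("board.png")  # hypothetical filename
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Binarize twice: everything that is not green background, and the dots only
not_bg = cv2.bitwise_not(cv2.inRange(hsv, (40, 40, 40), (80, 255, 255)))
dots = cv2.inRange(hsv, (0, 0, 200), (180, 60, 255))  # low saturation, bright

def contours_with_centers(mask):
    cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    out = []
    for c in cnts:
        m = cv2.moments(c)
        if m["m00"] > 0:
            out.append((c, (m["m10"] / m["m00"], m["m01"] / m["m00"])))
    return out

squares = contours_with_centers(not_bg)
dot_items = contours_with_centers(dots)

# Pair each dot with the square containing it, then take the angle of the
# line from the square's center to the dot's center as the rotation
for sq, (sx, sy) in squares:
    for _, (dx, dy) in dot_items:
        if cv2.pointPolygonTest(sq, (dx, dy), False) > 0:
            angle = np.degrees(np.arctan2(dy - sy, dx - sx))
```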