Improve an image mask to be solid with OpenCV

I'm using OpenCV to merge multiple frames of a single video into one image based on a movement mask. The video was taken on a hand-held phone, so there is slight camera movement. I was able to align the frames (using feature matching), calculate the background (median), and estimate the movement mask (using BackgroundSubtractorMOG2), but the mask doesn't give me the perfect shape of the moving body; instead, it has "holes". I'm using that mask to copy pixels from the source frame onto the calculated background, and I'm not happy with the result because the image has the same holes as the mask. It's fine if the saturated (grown) mask is not precise: since all the frames are aligned, I don't mind taking a little extra of the source image along with it.
Is there a good way to do this kind of mask improvement using OpenCV?
UPDATE:
Trying to apply dilation and noise reduction gives the following result. It's not perfect but acceptable. With better noise control/reduction I feel it's possible to fill the largest contour, although I still have some empty areas.
And another example: when the whole object is in the scene, I really want no holes in the person.
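A minimal Python/OpenCV sketch of that "fill the largest contour" idea (file names and kernel sizes are placeholders; assumes OpenCV 4 and a single-channel 0/255 mask):

import cv2
import numpy as np

mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)   # motion mask from MOG2
frame = cv2.imread("frame.png")                        # aligned source frame
background = cv2.imread("background.png")              # median background

# Remove speckle noise, then close the small holes
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
clean = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
clean = cv2.morphologyEx(clean, cv2.MORPH_CLOSE, kernel, iterations=3)

# Keep only the largest contour and redraw it filled, so the body becomes solid
contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
solid = np.zeros_like(clean)
if contours:
    cv2.drawContours(solid, [max(contours, key=cv2.contourArea)], -1, 255, cv2.FILLED)

# Grow the mask a little; taking some extra source pixels is acceptable here
solid = cv2.dilate(solid, kernel, iterations=2)

# Copy the moving pixels from the source frame onto the calculated background
result = background.copy()
result[solid > 0] = frame[solid > 0]
cv2.imwrite("merged.png", result)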

Related

How do I refine the outline of an image from a mask?

I have an image, and I can obtain a mask of it like so.
Now the mask is nearly perfect, but it is still clearly rough around the edges. For example, if I apply a blur effect to the background, there are portions around the ROI that come from the background and look quite bad (notice the area around the arms and right hip).
Could someone tell me how to refine the mask further so that it encompasses only the ROI and not any part of the background? I'm currently using OpenCV and TensorFlow. The GrabCut algorithm in OpenCV wasn't much help.
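One refinement worth trying, sketched below in Python/OpenCV (file names, kernel sizes and iteration counts are placeholders): GrabCut initialised from the mask itself rather than from a rectangle, with sure-foreground and probable-foreground bands derived by eroding and dilating the rough mask.

import cv2
import numpy as np

image = cv2.imread("person.jpg")
rough = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)   # rough 0/255 mask from the model

sure_fg = cv2.erode(rough, np.ones((15, 15), np.uint8))    # definitely person
maybe_fg = cv2.dilate(rough, np.ones((31, 31), np.uint8))  # person plus a safety band

# Build a GrabCut label image: sure background by default, probable foreground
# near the mask, sure foreground deep inside it
gc_mask = np.full(rough.shape, cv2.GC_BGD, np.uint8)
gc_mask[maybe_fg > 0] = cv2.GC_PR_FGD
gc_mask[sure_fg > 0] = cv2.GC_FGD

bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)
cv2.grabCut(image, gc_mask, None, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_MASK)

refined = np.where((gc_mask == cv2.GC_FGD) | (gc_mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
cv2.imwrite("refined_mask.png", refined)

If GrabCut still struggles, simply eroding the mask a little and feathering its edge (a Gaussian blur on the mask, used as a soft alpha) also hides most of the fringe around the arms and hip.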

Image Segmentation/Background Subtraction

My current project is to calculate the surface area of the paste covering a cylinder.
Refer to the images below; they are cropped from the original photos taken with a phone camera.
I am thinking in terms of segmentation, but due to light reflection and shadows a simple segmentation won't work out.
Can anyone tell me how to find the surface area covered by paste on the cylinder?
First, I'd simplify the problem by rectifying the perspective effect (you may need to upscale the image so as not to lose precision here).
Then I'd scan vertical lines across the image.
Further, you can simplify the problem by segmenting the pixels into two classes, base and painted. Do some statistical analysis to find the range of the larger region, consisting of the base pixels; the median of all pixels will probably be useful here.
Then expand the color space around this representative pixel until you find the largest color-distance gap. Repeat the procedure to retrieve the painted pixels. There are other image-processing routines you may have to apply, such as smoothing out noise, removing outliers and the background, etc.
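A rough Python/OpenCV sketch of that pipeline (the corner points and the distance threshold are made up and would need tuning for the real photo):

import cv2
import numpy as np

img = cv2.imread("cylinder.jpg")

# Rectify the perspective first; the four source points would be picked on the
# corners of the visible cylinder face
src = np.float32([[120, 80], [520, 95], [540, 610], [100, 600]])
dst = np.float32([[0, 0], [400, 0], [400, 520], [0, 520]])
M = cv2.getPerspectiveTransform(src, dst)
rect = cv2.warpPerspective(img, M, (400, 520))

# Segment "base" vs "painted": take the median color as the representative base
# pixel and threshold on color distance from it
lab = cv2.cvtColor(rect, cv2.COLOR_BGR2LAB).astype(np.float32)
base = np.median(lab.reshape(-1, 3), axis=0)
dist = np.linalg.norm(lab - base, axis=2)
painted = (dist > 25).astype(np.uint8) * 255

# Fraction of the rectified face covered by paste
coverage = cv2.countNonZero(painted) / painted.size
print(f"paste covers {coverage:.1%} of the visible face")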

Filtering out shadows when diffing frames in opencv

I am using OpenCV to process some videos where a user is placing their hands on different parts of a wall. I've selected some regions of interest and I'm currently just using cv2.absdiff on the original image of the wall with no user and the current frame to detect whether the user has their hand in a region of interest by looking at the average pixel difference. If it's above some threshold, I consider that region "activated".
The problem I'm having is that some of the video clips contain lighting and positions that result in the user casting a shadow over certain ROIs, such that they are above the threshold. Is there a good way to filter out shadows when diffing images?
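For context, this is roughly the check being described (a minimal sketch; the ROI boxes and the threshold are hypothetical):

import cv2
import numpy as np

background = cv2.imread("empty_wall.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("current_frame.png", cv2.IMREAD_GRAYSCALE)

rois = {"upper_left": (50, 40, 120, 120), "lower_right": (400, 300, 120, 120)}  # (x, y, w, h)
threshold = 20.0  # mean absolute difference that counts as "activated"

diff = cv2.absdiff(background, frame)
for name, (x, y, w, h) in rois.items():
    score = float(np.mean(diff[y:y + h, x:x + w]))
    if score > threshold:
        print(f"{name} activated (mean diff {score:.1f})")

The problem is that a shadow also raises the mean difference, which is what the answers below address.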
OpenCV has a Mixture of Gaussians based background subtractor which also has an option to account for shadows. You can use this instead of absdiff. MOG can be a bit slow compared to absdiff, though.
Alternatively, you can convert to HSV, and check that the Hue doesn't change.
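A small sketch of the MOG2 route (the video path is a placeholder): with detectShadows=True, shadow pixels are labelled 127 in the output mask, so they can simply be thresholded away, leaving only true foreground.

import cv2

cap = cv2.VideoCapture("wall.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fgmask = subtractor.apply(frame)          # 255 = foreground, 127 = shadow, 0 = background
    _, fg_only = cv2.threshold(fgmask, 200, 255, cv2.THRESH_BINARY)   # drop the shadow label
    cv2.imshow("foreground without shadows", fg_only)
    if cv2.waitKey(30) & 0xFF == 27:          # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()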
You could first detect shadow regions in the original images, and exclude them from the difference imaging part. This paper provides a simple but effective method to detect shadows in images. They explore a colour space that is invariant to shadows.

iOS: outlining the opaque parts of a partly-transparent image

I have an application which requires that a solid black outline be drawn around a partly-transparent UIImage. Not around the frame of the image, but rather around all the opaque parts of the image itself. I.e., think of a transparent PNG with an opaque white "X" on it -- I need to outline the "X" in black.
To make matters trickier, AFTER the outline is drawn, the opacity of the original image will be adjusted, but the outline must remain opaque -- so the outline I generate has to include only the outline, and not the original image.
My current technique is this:
Create a new UIView which has the dimensions of the original image.
Duplicate the UIImage 4 times and add the duplicates as subviews of the UIView, with each UIImage offset diagonally from the original location by a couple pixels.
Turn that UIView into an image (via the typical UIGraphicsGetImageFromCurrentImageContext method).
Using CGImageMaskCreate and CGImageCreateWithMask, subtract the original image from this new image, so only the outline remains.
It works. Even with only the 4 offset images, the result looks quite good. However, it's horribly inefficient, and causes a good solid 4-second delay on an iPhone 4.
So what I need is a nice, speedy, efficient way to achieve the same thing, which is fully supported by iOS 4.0.
Any great ideas? :)
I would like to point out that whilst a few people have suggested edge detection, this is not an appropriate solution. Edge detection is for finding edges within image data where there is no obvious exact edge representation in the data.
For you, the edges are better defined: you are looking for a well-defined outline. An edge in your case is any fully transparent pixel that is next to a pixel which is not fully transparent, simple as that! Iterate through every pixel in the image and set it to black if it fulfils these conditions.
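The rule is language-agnostic; here it is sketched in Python/NumPy on an RGBA array purely to illustrate the neighbour test (translating it to raw UIImage bytes is straightforward):

import numpy as np

def outline_from_alpha(rgba):
    # rgba: uint8 array of shape (height, width, 4)
    alpha = rgba[..., 3]
    opaque = alpha > 0

    # A fully transparent pixel with at least one opaque 4-neighbour is an outline pixel
    neighbour_opaque = np.zeros_like(opaque)
    neighbour_opaque[1:, :] |= opaque[:-1, :]
    neighbour_opaque[:-1, :] |= opaque[1:, :]
    neighbour_opaque[:, 1:] |= opaque[:, :-1]
    neighbour_opaque[:, :-1] |= opaque[:, 1:]
    edge = (~opaque) & neighbour_opaque

    # Return only the outline, not the original image, so it can stay opaque later
    outline = np.zeros_like(rgba)
    outline[edge] = (0, 0, 0, 255)
    return outline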
Alternatively, for an anti-aliased result, get a boolean representation of the image, and pass over it a small anti-aliased circle kernel. I know you said custom filters are not supported, but if you have direct access to image data this wouldn't be too difficult to implement by hand...
Cheers, hope this helps.
For the sake of contributing new ideas:
A variant on your current implementation would use CALayer's support for shadows, which it calculates from the actual pixel contents of the layer rather than merely its bounding rectangle, and for which it uses the GPU. You can try amping up the shadowOpacity to some massive value to try to eliminate the feathering; failing that, you could render to a suitable CGContext, extract only the alpha channel and manually process it to apply a threshold test on the alpha values, pushing them to either fully opaque or fully transparent.
You can achieve that final processing step on the GPU even under ES 1 in a variety of ways. You'd use the alpha test to apply the actual threshold. You could then, say, prime the depth buffer to 1.0, disable colour output and the depth test, draw the version with the shadow at a depth of 0.5, draw the version without the shadow at a depth of 1.0, then enable colour output and depth tests and draw a solid black full-screen quad at a depth of 0.75. So it's like using the depth buffer to emulate a stencil (since the GPUs Apple used before the ES 2 capable devices didn't support a stencil buffer).
That, of course, assumes that CALayer shadows appear outside of the compositor, which I haven't checked.
Alternatively, if you're willing to limit your support to ES 2 devices (everything 3GS+) then you could upload your image as a texture and do the entire process over on the GPU. But that would technically leave some iOS 4 capable devices unsupported so I assume isn't an option.
You just need to implement an edge detection algorithm, but instead of using brightness or color to determine where the edges are, use opacity. There are a number of different ways to go about that. For example, you can look at each pixel and its neighbors to identify areas where the opacity crosses whatever threshold you've set. Whenever you need to look at every pixel of an image in MacOS X or iOS, think Core Image. There's a helpful series of blog posts starting with this one that looks at implementing a custom Core Image filter -- I'd start there to build an edge detection filter.
Instead of using a UIView, I suggest just pushing a context like the following:
UIGraphicsBeginImageContextWithOptions(image.size, NO, 0.0);
// Draw your image four times and mask it however you like; you can just copy & paste
// your current drawing code here.
....
UIImage *outlinedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This will be much faster than your UIView approach.

Background removal using Kinect: noise suppression around body shape

The objective is to display the person on a different background (aka background removal).
I'm using the Kinect with Microsoft's beta Kinect SDK to do so. With the help of the depth data, the background is filtered out and we get only the image of the person.
This is pretty simple to do, and the code that does it can be found everywhere on the Internet. However, the depth signal is noisy, and pixels which do not belong to the person end up being displayed.
I applied an edge detector to see if it was useful, and I currently get this:
Here's another without edge detection:
My question is: Which way can I get rid of these noisy white pixels around the person?
I tried morphological operations, but some parts of the body get erased while white pixels are still left behind.
The algorithm doesn't need to be real-time, I can just apply it when I press a 'Save image' button.
Edit 1:
I just tried to do background subtraction with the closest frames on the shape border. The single pixels you see are flickering, which means they are noise and I can easily get rid of them.
Edit 2:
The project is now over, and here's what we did: manual calibration of the Kinect using the OpenNI driver, which directly provides the infrared image. The result is really good, but each calibration is specific to each Kinect.
Then, we applied a little transparency on the borders, and the result looks really nice! I can't provide pictures, however.
Your problem isn't just the noisy white pixels. You're missing significant parts of the person as well, e.g. part of his right hand. I'd recommend being more conservative with your thresholding of the depth data (allow more false positives). This would give you more noisy pixels, but at least you'd have the person in their entirety.
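As a tiny illustration of what "more conservative" means here (a sketch on a hypothetical depth array in millimetres, not actual Kinect SDK code):

import numpy as np

def person_mask(depth, near_mm=800, far_mm=2600, margin_mm=200):
    # Widening the accepted depth band keeps the whole person at the cost of a few
    # extra noisy pixels near the silhouette, which get cleaned up afterwards
    return (depth > near_mm - margin_mm) & (depth < far_mm + margin_mm)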
To get rid of the noisy pixels, I can think of a couple of things:
Feather the outer pixels (reduce their intensity/increase their transparency if you're using an alpha channel); see the sketch after this list.
Smooth the image, perform the edge detection on the smoothed image, then use these edges with your original sharp image.
Do some skin region detection to mark parts that definitely belong to a person. See skin detection in the YUV color space? and Skin Color Detection
For clothes, work with the hue and saturation image. If you know the color of the t-shirt (or at least that it's not a neutral color), then this will stand out easily. If you don't know this information, then it may be worth building up a model of the person using the other frames (if there's a big gray blob that's moving around in your video, chances are your subject is wearing a gray shirt).
The approaches aren't mutually exclusive so it may be worth trying to do them in combination. If I think of anything else, I'll post back here.
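A small Python/OpenCV sketch of the feathering idea from the first bullet (file names and kernel sizes are placeholders): clean the mask gently, blur it, and use the blurred mask as a soft alpha when compositing onto the new background.

import cv2
import numpy as np

person = cv2.imread("person.png")                              # Kinect colour frame
new_bg = cv2.imread("new_background.png")                      # replacement background
mask = cv2.imread("person_mask.png", cv2.IMREAD_GRAYSCALE)     # 0/255 mask from the depth data

# Remove isolated noisy pixels without eating into the body
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel, iterations=2)

# Feather the outer pixels: blur the mask and use it as a soft alpha channel
alpha = cv2.GaussianBlur(mask, (15, 15), 0).astype(np.float32) / 255.0
alpha = alpha[..., None]
composite = (alpha * person + (1.0 - alpha) * new_bg).astype(np.uint8)
cv2.imwrite("person_on_new_background.png", composite)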
If there is no other way of resolving the jitter on the edges, you could always try anti-aliasing as a post-process.
