OpenCV background subtraction: How to precompute background model?

I am working on a tracking algorithm, and one of its first steps is background subtraction. The algorithm receives a series of frames that represent a video with a moving object and a static background. The object is present in every frame.
In my first version of this step I computed a median image from all the frames and got a very good approximation of the background scene. I then subtracted the resulting image from every frame in the video sequence to get the foreground (the moving objects).
The above method worked well, but then I tried to replace it with OpenCV's background subtractors MOG and MOG2.
What I do not understand is how these two classes perform the "precomputation of the background model". As far as I understood from dozens of tutorials and documentation pages, these subtractors update the background model every time I call the apply() method, and return a foreground mask.
But this means that the first result of the apply() method will be a blank mask, and later masks will contain a ghost of the object's initial position (see example below):
What am I missing? I googled a lot and seem to be the only one with this problem... Is there a way to run background precomputation that I am not aware of?
EDIT: I found a "trick" to do it: before using OpenCV's MOG or MOG2, I first compute the median background image, then pass it to the first apply() call. The following apply() calls produce the foreground mask without the initial-position ghost.
But still, is this how it should be done or is there a better way?
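For reference, here is a minimal sketch of that trick in Python with the OpenCV 3+ API (the frames list and the learning-rate values are assumptions, not part of the question):

import numpy as np
import cv2

# Assumed: frames is a list of BGR images covering the whole clip.
background = np.median(np.stack(frames), axis=0).astype(np.uint8)

subtractor = cv2.createBackgroundSubtractorMOG2()
# Seed the model with the median background at learningRate=1,
# so the very first real frame is compared against a clean model.
subtractor.apply(background, learningRate=1.0)

for frame in frames:
    fgmask = subtractor.apply(frame, learningRate=0.01)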

If your moving objects are present right from the start, all updating background estimators will initially place them in the background. A solution is to initialize your MOG on all frames and then run MOG again with this initialization (as with your median estimate). Depending on the number of frames, you might want to adjust MOG's update parameter (learningRate) to make sure it is fully initialized (if you have 100 frames, it probably needs to be higher, at least 0.01):
void BackgroundSubtractorMOG::operator()(InputArray image, OutputArray fgmask, double learningRate=0)
If your moving objects are not present right from the start, make sure that MOG is fully initialized when they appear by setting a high enough value for the update parameter learningRate.
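A rough sketch of that two-pass idea in Python with the 3.x API (variable names and the exact rates are assumptions; learningRate=0 tells OpenCV not to update the model at all):

import cv2

subtractor = cv2.createBackgroundSubtractorMOG2()

# Pass 1: let the model absorb the whole sequence. With few frames
# (e.g. 100) a higher learning rate helps it converge before the end.
for frame in frames:
    subtractor.apply(frame, learningRate=0.05)

# Pass 2: extract the masks without updating the model any further.
masks = [subtractor.apply(frame, learningRate=0) for frame in frames]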

Related

Time-delay effect GPUImage

I'm trying to achieve the "Ghost" effect from http://webcamtoy.com/ using GPUImage.
My understanding is that it would be a two-input filter, with a given time delay between the two frames used. I'd then just add the two frames with 0.5 alpha each.
I've seen how to use the current and previous frames with GPUImage using GPUImageBuffer (example of that in the GPUImageLowPassFilter) but I'm not sure how to set up a time delay between the two frames I want to use.
Any ideas or pointers? I was thinking of creating a custom filter and overriding newFrameReadyAtTime:atIndex: to delay the propagation downstream for the first x frames (where x is the delay in terms of number of frames). Maybe a clean way to do this would be to subclass GPUImageBuffer to automatically stack x frames before piping them out into a 2-input filter.
Thanks!
I think you're on the right track with keeping old frames. For the color effects, you're looking at something like extracting the color channels and using them as inputs to a blend filter. The key is that the inputs' values have to add up to the natural color values in the non-changing portions of the video.
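Not a GPUImage answer as such, but the delayed-blend idea is easy to prototype; here is a sketch in Python with OpenCV, keeping a ring buffer of the last x frames (the delay value and capture source are assumptions):

import collections
import cv2

delay = 15                                # x: the delay in frames
buffer = collections.deque(maxlen=delay)  # holds the last `delay` frames

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    buffer.append(frame)
    # Blend the current frame with the oldest buffered one, 0.5 alpha
    # each; until the buffer fills, the "delayed" frame is a recent one.
    ghost = cv2.addWeighted(frame, 0.5, buffer[0], 0.5, 0)
    cv2.imshow("ghost", ghost)
    if cv2.waitKey(1) == 27:              # Esc to quit
        break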

How to determine the movement using only two frames

I'm learning the moving object detection using a sequence of frames.
This is an example of two frames. I need to select the moved object in the right frame.
I can subtract one frame from the other. In the selected area the result would be non-zero => there was movement in that area. But if you look at the right frame, you can see that some background is selected as well.
Can I somehow separate the car from the background?
I guess the method where we collect the background pixels and then subtract the image from the background is useless with only two frames, right?
You are right that the method does not work very well with only two frames. The method you describe works best when you have one image with only background, which you can then use to compare with new images to look for movement.
It is possible to calculate the movement of the object with only two frames, but then you probably need more advanced methods, such as optical flow or image registration algorithms.
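To illustrate the optical-flow route, here is a minimal dense-flow sketch in Python with OpenCV (the file names, the Farneback parameters, and the magnitude threshold are all assumptions to be tuned):

import cv2
import numpy as np

prev = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow between the two frames (Farneback's method).
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
magnitude = np.linalg.norm(flow, axis=2)

# Pixels whose motion exceeds the threshold are treated as the object;
# unlike plain differencing, this keys on motion, not appearance.
moving = (magnitude > 1.0).astype(np.uint8) * 255
cv2.imwrite("moving_mask.png", moving)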

Manipulating a subsection of an image in MATLAB

I have a task where I need to track a series of objects across several frames and compose the background from the image sequence. The issue arises because one of the objects does not move until near the end, so I'm forced to take a shoddy average of the image. However, if I can blur out the objects, I think I'll be able to improve the background average.
I can identify a subsection of the image where the object is, an m-by-m array. I just need the ability to blur out this section with a filter. However, imfilter takes a full-sized array (image) as its input, so I cannot simply move along this array pixel by pixel in a for loop. But if I extract the subsection into a separate image, I cannot put it back without using another for loop, which would be computationally expensive.
Is there a method of mapping a blur to a subsection of an image using MATLAB? Can this be done without using two for loops?
Try this...
h = fspecial('gaussian', 9, 2);                    % any blur kernel will do
sub_image = original_image(ii:jj, mm:nn);          % read out the subsection
blurred_sub_image = imfilter(sub_image, h);        % filter just that block
original_image(ii:jj, mm:nn) = blurred_sub_image;  % write it back in place
In short, you don't need to use a for loop to address a subsection of an image. You can do it directly, both for reading and writing.

iOS: Smooth button Glow effect by blending between images

I am creating a custom button that needs to be able to glow to a varying degree.
How would I use these pictures to make a button that 'glows' the diamond when it is pressed, and have this glow gradually fade back to inert state?
I want to churn out several different colours of diamond as well... I am hoping to generate all different coloured diamonds from the same stock images presented here.
I would like to get my head around the basic methods available, in enough detail that I can see each one through and make a decision which path to take...
My tangled efforts so far... (I will delete all of this, or move it into possibly several answers as a solution unfolds...)
I can see 3 potential solution paths:
GL
It looks as though GL has everything it takes to get complete fine-grained control over the process, although the functions exposed by Core Graphics come tantalisingly close, and using them would save several hundred lines of code spread over a bunch of source files, which seems a bit ridiculous for such a basic task.
Core Graphics, with Core Animation to accomplish the blending
The documentation goes on to say:
Anything underneath the unpainted samples, such as the current fill color or other drawing, shows through.
So I can chroma-key mask the left image, setting {0,0,0}, i.e. black, as the key.
This at least secures a transparent background; now I have to work on making it yellow instead of grey.
So maybe I could instead have started by setting a yellow background colour for my image context, then used CGContextSetBlendMode(...) to imprint the diamond on the yellow, THEN used chroma-key masking to get a transparent background.
OK, this covers at least getting the basic unlit image on-screen.
Now I could overlay the sparkly image using some blend mode; maybe I could keep it in its current greyscale state, and that would just boost the colours of the original.
The only problem with this is that it is a lot of heavy real-time blending.
So maybe I could pre-calculate every image in the animation... this is looking increasingly mucky...
Cocos2D
If this allows me to set the blend mode to additive blending, then I could just composite the glowing image over the original image with an appropriate alpha setting.
After digging through a lot of documentation, the optimal solution seems to be to use Core Graphics functions to get the source images into a single 2-component GL texture, and then use GL to blend between them.
I will need to pass a uniform value glow_factor into the shader.
The obvious solution might seem to be a simple per-channel linear interpolation:
out_rgb = (1 - glow_factor) * inertPixel_rgb + glow_factor * shinyPixel_rgb
(where inertPixel is the corresponding pixel of the inert diamond, etc.)...
it looks like I would also do well to manufacture my own sparkles and add them over the top; a gem should sparkle white irrespective of its characteristic colour.
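Whichever rendering path wins, the blend itself is just that per-pixel linear interpolation; as a sanity check, here it is sketched in Python with numpy rather than in a shader (array shapes are assumptions):

import numpy as np

def glow_blend(inert, shiny, glow_factor):
    # inert, shiny: float arrays in [0, 1], shape (h, w, 3) or (h, w, 4).
    # glow_factor: 0.0 = fully inert, 1.0 = fully glowing.
    return (1.0 - glow_factor) * inert + glow_factor * shiny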
After having looked at this problem a little more, I can see several solutions:
Solution A -- store the transition from glow=0 to glow=1 as 60 frames in memory, then load the appropriate frame into a GL texture every time it is required.
This has the obvious benefit that a graphic designer could construct the entire sequence, and I could load it in as a bunch of PNG files.
Another advantage is that these frames wouldn't need to be played in sequence... the appropriate frame can be chosen on the fly.
However, it has a potential drawback: a lot of data being sent from RAM to VRAM.
This can be optimised by using glTexSubImage2D; several frames can be sent simultaneously and then unpacked from within GL... in fact maybe the entire sequence. If so, it would make sense to use PVRT texture compression.
iOS: playing a frame-by-frame greyscale animation in a custom colour
Solution B -- load the glow=0 and glow=1 images as GL textures, and manually write shader code that takes the glow factor as a uniform and performs the blend.
This has the advantage that it is close to the wire and can be tweaked in all sorts of ways; it is also going to be very efficient. The disadvantage is that it is a big extra slice of code to maintain.
Solution C -- set the GL blend mode (e.g. via glBlendFunc) to perform additive blending,
then draw the glow=0 image, setting e.g. alpha=0.2 on each vertex,
then draw the glow=1 image, setting e.g. alpha=0.8 on each vertex.
This has the advantage that it can be achieved with a more generic code structure, i.e. a very general 'draw textured quad / sprite' class.
The disadvantage is that without some sort of wrapper it is a bit messy... in my game I have a couple of dozen diamonds, and at any one time maybe 2 or 3 are likely to be glowing. So on the first pass I would render EVERYTHING (setting the alpha appropriately for anything that is glowing), and then on the second pass I could draw the glowing sprites again with the appropriate alpha.
It is worth noting that if I pursue solution A, this would involve creating some sort of real-time movie player object, which could be a very useful reusable code component.

Partial re-colorizing a Bitmap at runtime

I'm drawing some cars. They're Bitmaps, loaded from PNGs in the library. I need to be able to color the cars: red ones and green ones and blue ones, whatever. However, when you paint the car green, the tires should stay black, and the windows should stay window-color.
I know of two ways to handle this, neither of which makes me happy. First, I could have two bitmaps for each car: one underneath for the body color, and one on top for the detail bits. The underneath bitmap gets its transform.colorTransform set to turn the white car body into whatever color I need. Not great, because I end up with twice as many Bitmaps running around on screen at runtime.
Second, I could programmatically search-and-replace "white" with "car-body" color when I load the bitmap for each car. Not great either, because the amount of memory I take up multiplies by however many colors I need.
What I would LIKE would be a way to say "draw this Bitmap with JUST THE WHITE PARTS turned into this other color" at runtime. Is there anything like this available? I will be less than surprised if the answer is "no," but I figure it's worth asking.
You might have answered the question yourself.
I think your first approach would need only two transparent images: one with the pixels of the parts that need to change colour, one with the rest of the image. You would use colorTransform or a ColorMatrix filter as appropriate. It might even work to cover the pixels that need the colour change with a Sprite that has a flat colour and an overlay blend mode?
The downside would be that you will need to create a 'colour map'/set of pixels to replace for each different item that will need colour replacement.
For the second approach:
You might isolate the areas using something like threshold().
For speed, you might want to store the indices of the pixels you need to replace in a Vector.<int> object that could be used in conjunction with BitmapData's getVector() method. (You would loop once to fetch the pixel indices that need to be replaced.)
Since you will use the same image (same dimensions) to fill the same content with a different colour, you'll always loop through the same pixels. Also keep in mind that you will gain a bit of speed by calling lock() before your setPixel() loop and unlock() after it.
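The same selective replacement can be sketched outside AS3 too; here is the idea in Python with numpy, finding the near-white pixels once and re-tinting them per body colour (the tolerance value is an assumption):

import numpy as np

def recolor_body(car_rgba, body_color, tolerance=30):
    # car_rgba: uint8 array of shape (h, w, 4); body_color: (r, g, b).
    rgb = car_rgba[..., :3].astype(np.int16)
    # A pixel counts as "white" if every channel is close to 255.
    white = np.all(255 - rgb < tolerance, axis=-1)
    out = car_rgba.copy()
    out[white, :3] = body_color  # tint only the white body pixels
    return out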
Alternatively, you could use Pixel Bender and try some green-screen/background-subtraction techniques. It should be fast and wouldn't delay the execution of the rest of your AS3 code, as Pixel Bender code runs in its own thread.
Also check out Lee's Pixel Bender subtraction technique.
Although it's a bit old now, you can pick up some useful knowledge from @Quasimondo's article too.
HTH
I'm a little confused about where you see the difference between your second approach and the one you would like to have. You can go over your loaded bitmap pixel by pixel and read out the color. If it turns out to be white, replace it with another color. I do not see where the multiplied memory consumption would occur.
You might want to try my selective color transform: http://www.quasimondo.com/archives/000614.php - it's from 2006, so some parts of it could probably be replaced by a pixel bender filter now.
Why not just load the pieces separately, perform the color transform on the one you want to change, then do a BitmapData.copyPixels() with the result? The blit routine runs in machine code, so is wicked fast. Doing it pixel by pixel in ActionScript would be glacially slow in comparison.
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/display/BitmapData.html#copyPixels()
