I have a task where I need to track a series of objects across several frames and reconstruct the background from the image sequence. The issue arises because one of the objects does not move until near the end, so I'm forced to take a shoddy average of the frames. However, if I can blur out the objects, I think I'll be able to improve the background average.
I can identify a subsection of the image where the object is, an m by m array. I just need the ability to blur out this section with a filter. However, imfilter takes a full-sized array (image) as its input, so I cannot simply move along this subsection pixel by pixel in a for loop. But if I extract the subsection as its own image, I cannot put it back without using another for loop, which would be computationally expensive.
Is there a method of mapping a blur to a subsection of an image using MATLAB? Can this be done without using two for loops?
Try this...
h = fspecial('gaussian', [9 9], 3);   % example kernel; any imfilter-compatible filter works
sub_image = original_image(ii:jj, mm:nn);
blurred_sub_image = imfilter(sub_image, h);
original_image(ii:jj, mm:nn) = blurred_sub_image;
In short, you don't need to use a for loop to address a subsection of an image. You can do it directly, both for reading and writing.
I have an app that runs in the background, collecting location data (a 'runkeeper'-style app). It could potentially be running for hours and collect thousands of points.
These 'runs' are listed in a table view, and selecting one redraws that run on the map. I'm also coloring these polylines, so in order to have multiple colors on what looks like a single line, I connect a bunch of different polylines. When I go to add an NSArray containing (say) 700 lines, and use
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    lineArray = [self polylinesFromSession];
    dispatch_async(dispatch_get_main_queue(), ^{
        [map addOverlays:lineArray]; // lineArray.count = ~700
    });
});
it really, really bogs the app down for 10-15 seconds. I can't use addOverlays on any thread other than main, so I don't see many options here. Is it possible to join a bunch of lines into a single overlay, THEN add it to the map? Or, any ideas for a better way to do this?
Thanks!
If your data points are contiguous, instead of adding hundreds of different lines, try to combine them into a single line with many points.
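If a single colour per run is acceptable, a minimal Objective-C sketch of that idea might look like this (sessionPoints and map are placeholders for your own data and map view, not names from your code):

// Merge the points of many short polylines into one MKPolyline.
// Assumes sessionPoints is an NSArray of CLLocation objects (requires MapKit).
NSUInteger count = sessionPoints.count;
CLLocationCoordinate2D *coords = malloc(count * sizeof(CLLocationCoordinate2D));
for (NSUInteger i = 0; i < count; i++) {
    coords[i] = [sessionPoints[i] coordinate];
}
MKPolyline *line = [MKPolyline polylineWithCoordinates:coords count:count];
free(coords);

dispatch_async(dispatch_get_main_queue(), ^{
    [map addOverlay:line];   // one overlay instead of ~700
});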
You might try creating raster tiles (with alpha transparency) of your lines on the fly and adding those as an MKTileOverlay to your map. For each point in a line, you can figure out in "tile space" what point this corresponds to, and use Core Graphics to draw in that tile. You can also skip points that would be over or under previous lines' points (unless you are plotting in a different color or want to layer the lines in a specific way).
The math is a little out of the scope of an answer here, but Spherical Mercator is relatively easy to grasp, as the world is a large square continuously tiled into smaller squares, and the projection math is relatively straightforward trigonometry.
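For reference, a rough sketch of that conversion using the standard "slippy map" tile formulas (the function name and parameter types are just illustrative):

// Converts a WGS84 coordinate to x/y tile indices at a given zoom level.
// Requires <math.h> and CoreLocation for CLLocationDegrees.
void tileForCoordinate(CLLocationDegrees lat, CLLocationDegrees lon,
                       NSInteger zoom, NSInteger *tileX, NSInteger *tileY) {
    double n = pow(2.0, zoom);
    double latRad = lat * M_PI / 180.0;
    *tileX = (NSInteger)floor((lon + 180.0) / 360.0 * n);
    *tileY = (NSInteger)floor((1.0 - log(tan(latRad) + 1.0 / cos(latRad)) / M_PI) / 2.0 * n);
}

The fractional parts of those two values, multiplied by the tile size (usually 256), give the pixel position inside the tile where you would draw with Core Graphics.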
But you will likely get better performance by rasterizing this way, as long as you don't need to interact with the various line annotations individually in the app, but just show them.
I am working on a tracking algorithm, and one of its earliest steps is background subtraction. The algorithm gets a series of frames that represent the video, with a moving object and a static background. The object is in every frame.
In my first version of this process I computed a median image from all the frames and got a very good background scene approximation. Then I subtracted the resulting image from every frame in video sequence to get foreground (moving objects).
The above method worked well, but then I tried to replace it with OpenCV's background subtractors MOG and MOG2.
What I do not understand is how these two classes perform the "precomputation of the background model"? As far as I understood from dozens of tutorials and documentations, these subtractors update the background model every time I use the apply() method and return a foreground mask.
But this means that the first result of the apply() method will be a blank mask, and later frames will contain a ghost of the object's initial position.
What am I missing? I googled a lot and seem to be the only one with this problem... Is there a way to run background precomputation that I am not aware of?
EDIT: I found a "trick" to do it: before using OpenCV's MOG or MOG2, I first compute the median background image, then use it in the first apply() call. The following apply() calls produce the foreground mask without the initial-position ghost.
But still, is this how it should be done or is there a better way?
If your moving objects are present right from the start, any updating background estimator will initially place them in the background. A solution is to initialize your MOG on all frames and then run MOG again with this initialization (as with your median estimate). Depending on the number of frames, you might want to adjust the update parameter of MOG (learningRate) to make sure it's fully initialized (if you have 100 frames, it probably needs to be at least 0.01):
void BackgroundSubtractorMOG::operator()(InputArray image, OutputArray fgmask, double learningRate=0)
If your moving objects are not present right from the start, make sure that MOG is fully initialized when they appear by setting a high enough value for the update parameter learningRate.
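A minimal sketch of that two-pass idea, assuming an OpenCV 2.4-style API and that the frames are already loaded into a std::vector<cv::Mat> (both assumptions, not details from the question):

#include <opencv2/opencv.hpp>
#include <vector>

void subtractBackground(const std::vector<cv::Mat>& frames,
                        std::vector<cv::Mat>& masks)
{
    cv::BackgroundSubtractorMOG2 mog;
    cv::Mat fgmask;

    // Pass 1: initialization only; a relatively high learning rate helps the
    // model converge when there are only a few hundred frames.
    for (size_t i = 0; i < frames.size(); ++i)
        mog(frames[i], fgmask, 0.01);

    // Pass 2: the model now reflects the whole sequence, so the first masks
    // no longer carry a ghost of the object's initial position.
    masks.clear();
    for (size_t i = 0; i < frames.size(); ++i) {
        mog(frames[i], fgmask, 0);   // learningRate = 0: don't update the model
        masks.push_back(fgmask.clone());
    }
}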
I want to add a line to an image directly, without groups, so that the line becomes a rasterized object (instead of a vector). The problem is that, due to the large number of vector objects (over 100), the application runs slowly.
After drawing the lines, insert them into a group and convert them to a single display object using the display.capture function.
Have a look here http://docs.coronalabs.com/api/library/display/capture.html
Also take a look at display.save.
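A rough sketch of the capture approach (assumes the Graphics 2.0 API; segments is a hypothetical table of line endpoints):

-- Draw the lines into a temporary group, capture the group as a single
-- rasterized display object, then remove the original vector lines.
local group = display.newGroup()

for i = 1, #segments do
    local s = segments[i]
    local line = display.newLine(group, s.x1, s.y1, s.x2, s.y2)
    line:setStrokeColor(1, 0, 0)
end

local raster = display.capture(group)   -- one bitmap instead of many vector objects
raster.x, raster.y = group.x, group.y
group:removeSelf()                      -- drop the vector originals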
I have two images that I want to display on top of each other. One is a single-channel image, and the second is an RGB image with most of its area transparent.
These two images are generated in different functions. I know that to display images on top of each other I can use the same window name when calling cvShowImage(), but this doesn't work when they are drawn from different functions. When trying this, I used cvCvtColor() to convert the binary image from single channel to RGB and then displayed the second image from another function, but this didn't work. Both images have the same dimensions, depth and number of channels (after the conversion).
I want to avoid passing in one image into the second function and then draw them. So I'm looking for a quick dirty trick to display these two images overlapped.
Thank you
I don't think that's possible. You'll have to create a new image or modify an existing one. Here's an article that shows how to do this: Transparent image overlays in OpenCV
There is no way to "overlay" images. cvShowImage() displays a single image from memory. You'll need to blend/combine them together. There are several ways to do this.
You can copy one into 1 or 2 channels of the other, you can use logical operations like AND, OR or XOR, you can use arithmetic operations like Add, Multiply and MultiplyScale (these operations will saturate values larger than 255). All these can also be done with an optional mask image like your blob image.
Naturally, you may want to do this into a third buffer so as not to overwrite your originals.
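For example, a rough sketch using the same C API as the question, assuming gray is the single-channel image and overlay is the 3-channel image whose non-black pixels should end up on top (both names are placeholders):

// Convert the single-channel image to 3 channels so it can hold the overlay.
IplImage *base = cvCreateImage(cvGetSize(gray), gray->depth, 3);
cvCvtColor(gray, base, CV_GRAY2BGR);

// Build a mask that is non-zero wherever the overlay actually has content.
IplImage *mask = cvCreateImage(cvGetSize(overlay), IPL_DEPTH_8U, 1);
cvCvtColor(overlay, mask, CV_BGR2GRAY);

// Copy only the masked pixels of the overlay onto the base image.
cvCopy(overlay, base, mask);
cvShowImage("combined", base);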
Apparently this can now be done as of OpenCV 2.1:
http://opencv.willowgarage.com/documentation/cpp/highgui_qt_new_functions.html#cv-displayoverlay
I'm drawing some cars. They're Bitmaps, loaded from PNGs in the library. I need to be able to color the cars: red ones and green ones and blue ones, whatever. However, when you paint the car green, the tires should stay black, and the windows stay window-colored.
I know of two ways to handle this, neither of which makes me happy. First, I could have two bitmaps for each car: one underneath for the body color, and one on top for the detail bits. The underneath bitmap gets its transform.colorTransform set to turn the white car body into whatever color I need. Not great, because I end up with twice as many Bitmaps running around on screen at runtime.
Second, I could programmatically search-and-replace "white" with "car-body" color when I load the bitmap for each car. Not great either, because the amount of memory I take up multiplies by however many colors I need.
What I would LIKE would be a way to say "draw this Bitmap with JUST THE WHITE PARTS turned into this other color" at runtime. Is there anything like this available? I will be less than surprised if the answer is "no," but I figure it's worth asking.
You might have answered the question yourself.
I think your first approach would need only two transparent images: one with the pixels of the parts that need to change colour, one with the rest of the image. You would use colorTransform or a ColorMatrix filter as appropriate. It might even work to cover the pixels that need the colour change with a Sprite that has a flat colour and its blend mode set to overlay?
The downside would be that you will need to create a 'colour map'/set of pixels to replace for each different item that will need colour replacement.
For the second approach:
You might isolate the areas using something like threshold().
For speed, you might want to store the indices of the pixels you need to replace in a Vector.<int> object that could be used in conjunction with BitmapData's getVector() method. (You would loop once to fetch the pixel indices that need to be replaced.)
Since you will use the same image (same dimensions) to fill the same content with a different colour, you'll always loop through the same pixels. Also keep in mind that you will gain a bit of speed by using lock() before your loop of setPixel() calls and unlock() after the loop.
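A minimal sketch of the threshold() route, assuming a hypothetical carData BitmapData in which the recolourable body parts are pure white:

import flash.display.BitmapData;
import flash.geom.Point;

function tintWhiteParts(carData:BitmapData, bodyColor:uint):void {
    carData.lock();
    // operation "==" with threshold 0xFFFFFFFF: only opaque, pure-white pixels match
    // and get replaced in place with the requested body colour.
    carData.threshold(carData, carData.rect, new Point(0, 0),
                      "==", 0xFFFFFFFF, 0xFF000000 | bodyColor, 0xFFFFFFFF, false);
    carData.unlock();
}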
Alternatively you could use Pixel Bender and try some green screen/background subtraction techniques. It should be fast and wouldn't delay the execution of the rest of your AS3 code, as Pixel Bender code runs in its own thread.
Also check out Lee's Pixel Bender subtraction technique.
Although it's a bit old now, you can also use some knowledge from @Quasimondo's article.
HTH
I'm a little confused about where you see the difference between your second approach and the one you would like to have. You can go over your loaded bitmap pixel by pixel and read out the color; if it turns out to be white, replace it with another color. I do not see where the multiplied memory consumption would come from.
You might want to try my selective color transform: http://www.quasimondo.com/archives/000614.php - it's from 2006, so some parts of it could probably be replaced by a pixel bender filter now.
Why not just load the pieces separately, perform the color transform on the one you want to change, then do a BitmapData.copyPixels() with the result? The blit routine runs in machine code, so is wicked fast. Doing it pixel by pixel in ActionScript would be glacially slow in comparison.
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/display/BitmapData.html#copyPixels()
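A rough sketch of that idea, assuming hypothetical bodyData and detailData BitmapData objects loaded from the two PNG layers:

import flash.display.BitmapData;
import flash.geom.ColorTransform;
import flash.geom.Point;

function composeCar(bodyData:BitmapData, detailData:BitmapData, bodyColor:uint):BitmapData {
    // Tint the white body layer by multiplying its channels by the target colour.
    var tinted:BitmapData = bodyData.clone();
    var r:Number = ((bodyColor >> 16) & 0xFF) / 255;
    var g:Number = ((bodyColor >> 8) & 0xFF) / 255;
    var b:Number = (bodyColor & 0xFF) / 255;
    tinted.colorTransform(tinted.rect, new ColorTransform(r, g, b));

    // Blit the untinted detail layer on top; mergeAlpha = true respects its transparency.
    tinted.copyPixels(detailData, detailData.rect, new Point(0, 0), null, null, true);
    return tinted;
}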