Using cvCopy I get the object without the background (in a webcam stream). I want to make the removed background transparent, because I need to play another video behind the object.
How can I do this?
You can follow these steps:
1.) You said you have the object without the background. Hence it is straightforward to obtain a binary Mat for the object (convert the image to grayscale and then threshold it). Let's call this binary mask objectBinMask.
2.) Suppose each frame of your background video is called vidFrame and the camera frame containing the object is called camFrame. You can then use the object mask to paste the object onto the video frame, like so:
outputFrame = vidFrame.clone();
camFrame.copyTo(outputFrame, objectBinMask);
Here outputFrame will have the required object composited onto each frame of the video. (Note the direction of the copy: the camera frame is the source, and the mask selects which of its pixels replace the video frame's pixels.)
copyTo is a method of the C++ API; in C code you can equivalently use cvCopy, which also accepts a mask as its third argument.
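Putting both steps together, a minimal sketch (the names camFrame and compositeObjectOnVideo, and the threshold value 10, are mine, not from the question):

#include <opencv2/opencv.hpp>

// Sketch of the two steps above; camFrame is the camera frame holding the
// background-free object, vidFrame the background video frame (both assumed
// to be the same size and BGR).
cv::Mat compositeObjectOnVideo(const cv::Mat& camFrame, const cv::Mat& vidFrame)
{
    // 1.) Binary mask of the object: grayscale, then threshold.
    cv::Mat gray, objectBinMask;
    cv::cvtColor(camFrame, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, objectBinMask, 10, 255, cv::THRESH_BINARY); // non-black -> object

    // 2.) Start from the video frame, overwrite only the object pixels.
    cv::Mat outputFrame = vidFrame.clone();
    camFrame.copyTo(outputFrame, objectBinMask);
    return outputFrame;
}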
I want to stitch multiple background images provided by ARKit (ARFrame.capturedImage). (I know there are better ways to do this task, but I am using my custom algorithm.)
The issue is that the live stream does not have locked exposure, and thus the color of an object in the scene depends on how I orient my iPhone. This, for example, leads to a wall having a very different color in each frame (from white through gray to brownish), which creates visible banding when the images are stitched together.
I noticed ARKit provides lightEstimate for each ARFrame with the ambientIntensity and ambientColorTemperature properties. There is also the ARFrame.camera.exposureOffset property.
Can these properties be used to "normalize" captured images so that colors of the objects in the scene stay roughly the same throughout time and I don't end up with severe banding?
P.S. I do need to use ARKit; otherwise I would set up my own session based on the AVFoundation API with my own settings (e.g. locked exposure).
Since none of the mentioned properties are settable, you can't use them directly to fix the intensity of every stitched image in a 360° panorama.
But you can calculate the difference in intensity and exposure between frames and then use those values as multipliers for CoreImage filters. For instance, the exposure ratio is as simple as this:
Frame_02_Exposure / Frame_01_Exposure = 0.37
Then use the result to drive the CIExposureAdjust filter. (Note that CIExposureAdjust's inputEV parameter is expressed in f-stops, so a multiplier m corresponds to inputEV = log2(m).)
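The correction itself is just a per-frame gain. A sketch of the arithmetic in C++/OpenCV terms, for consistency with the rest of this page (in an ARKit app you would apply the equivalent value through CIExposureAdjust; all names here are illustrative):

#include <opencv2/opencv.hpp>

// Normalize a frame so its exposure matches a reference frame.
// exposureRef and exposureFrame are whatever per-frame values you track,
// e.g. derived from ARFrame.camera.exposureOffset.
cv::Mat normalizeFrame(const cv::Mat& frame, double exposureRef, double exposureFrame)
{
    double gain = exposureRef / exposureFrame; // e.g. 1 / 0.37
    cv::Mat normalized;
    frame.convertTo(normalized, -1, gain, 0);  // saturating per-pixel multiply
    // Equivalent CIExposureAdjust input would be inputEV = log2(gain).
    return normalized;
}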
I am working on a tracking algorithm and one of the earliest steps it does is background subtraction. The algorithm gets a series of frames that represent the video with a moving object and static background. The object is in every frame.
In my first version of this process I computed a median image from all the frames and got a very good background scene approximation. Then I subtracted the resulting image from every frame in video sequence to get foreground (moving objects).
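(A per-pixel median over the frame stack can be sketched as follows, assuming 8-bit grayscale frames; this is an illustration, not the asker's actual code:)

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Per-pixel median over a stack of 8-bit grayscale frames.
cv::Mat medianBackground(const std::vector<cv::Mat>& frames)
{
    CV_Assert(!frames.empty() && frames[0].type() == CV_8UC1);
    cv::Mat bg(frames[0].size(), CV_8UC1);
    std::vector<uchar> vals(frames.size());
    for (int r = 0; r < bg.rows; ++r)
        for (int c = 0; c < bg.cols; ++c)
        {
            for (size_t i = 0; i < frames.size(); ++i)
                vals[i] = frames[i].at<uchar>(r, c);
            std::nth_element(vals.begin(), vals.begin() + vals.size() / 2, vals.end());
            bg.at<uchar>(r, c) = vals[vals.size() / 2];
        }
    return bg;
}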
The above method worked well, but then I tried to replace it by using OpenCV's background subtractors MOG and MOG2.
What I do not understand is how these two classes perform the "precomputation of the background model". As far as I understood from dozens of tutorials and documentation pages, these subtractors update the background model every time I call the apply() method, and return a foreground mask.
But this means that the first result of the apply() method will be a blank mask, and later masks will contain a ghost of the object's initial position (see example below):
What am I missing? I googled a lot and seem to be the only one with this problem... Is there a way to run background precomputation that I am not aware of?
EDIT: I found a "trick" to do it: before using OpenCV's MOG or MOG2, I first compute the median background image and use it in the first apply() call. The following apply() calls then produce the foreground mask without the initial-position ghost.
But still, is this how it should be done or is there a better way?
If your moving objects are present right from the start, all updating background estimators will initially place them in the background. A solution is to initialize your MOG on all frames and then run MOG again with this initialization (just as with your median estimate). Depending on the number of frames, you might want to adjust MOG's update parameter (learningRate) to make sure it is fully initialized (if you have 100 frames, it probably needs to be at least 0.01):
void BackgroundSubtractorMOG::operator()(InputArray image, OutputArray fgmask, double learningRate=0)
If your moving objects are not present right from the start, make sure that MOG is fully initialized when they appear by setting a high enough value for the update parameter learningRate.
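A sketch of that two-pass scheme using the current C++ interface, createBackgroundSubtractorMOG2 (the signature quoted above is the legacy 2.x one; the learning rates here are illustrative):

#include <opencv2/opencv.hpp>
#include <vector>

// frames is assumed to already hold the whole sequence.
void twoPassSubtraction(const std::vector<cv::Mat>& frames,
                        std::vector<cv::Mat>& masks)
{
    cv::Ptr<cv::BackgroundSubtractorMOG2> mog2 =
        cv::createBackgroundSubtractorMOG2();
    cv::Mat dummy;

    // Pass 1: only build the background model; a higher learning rate
    // helps it converge on short sequences.
    for (const cv::Mat& f : frames)
        mog2->apply(f, dummy, 0.05);

    // Pass 2: reuse the converged model without updating it.
    masks.clear();
    for (const cv::Mat& f : frames)
    {
        cv::Mat fg;
        mog2->apply(f, fg, 0.0); // learningRate = 0: model is frozen
        masks.push_back(fg);
    }
}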
I'm working with the OpenCV library in Xcode on a colour-tracking application that draws lines from one point to the next. I was wondering if it's possible to draw the output on a plain white background instead of on the video.
So instead of
cvShowImage("video",frame);
Is there a function that would show (backgroundcolour, frame)?
edit:
I've added this code, but since canvas is not an IplImage, it won't let me pass it in place of frame.
cv::Mat canvas(320, 240, CV_8UC3, Scalar(255,255,255));
IplImage* imgYellowThresh1 = GetThresholdedImage1(canvas);
cvAdd(&canvas,imgScribble,&canvas);
cvShowImage("video",&canvas);
So the error is on GetThresholdedImage1, saying "no matching function for call to GetThresholdedImage1".
No, not in such a simple way as you proposed. The solution is to create a separate Mat and draw the lines on it.
cv::Mat canvas(rows, cols, CV_8UC3, cv::Scalar(255,255,255)); // set size, type and fill color
You would prepare this Mat at the beginning of your code and then use the drawing functions on it. So instead of drawing the lines on frame, you would draw on canvas, as sketched below.
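A minimal sketch of that flow in the C++ API (trackObject is a placeholder for your own colour-tracking step):

#include <opencv2/opencv.hpp>

// Placeholder for your colour-tracking step; replace with your own logic.
cv::Point trackObject(const cv::Mat& frame)
{
    return cv::Point(frame.cols / 2, frame.rows / 2); // dummy: frame centre
}

int main()
{
    cv::VideoCapture cap(0); // webcam
    cv::Mat frame, canvas;
    cv::Point prevPt(-1, -1);

    while (cap.read(frame))
    {
        if (canvas.empty()) // create the white canvas once we know the size
            canvas = cv::Mat(frame.size(), CV_8UC3, cv::Scalar(255, 255, 255));

        cv::Point pt = trackObject(frame);
        if (prevPt.x >= 0)
            cv::line(canvas, prevPt, pt, cv::Scalar(0, 0, 255), 2);
        prevPt = pt;

        cv::imshow("drawing", canvas); // show the canvas, not the video frame
        if (cv::waitKey(30) >= 0)
            break;
    }
    return 0;
}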
EDIT:
There was a slight misconception. The problem is that you are mixing the old C API (IplImage, cvAdd, cvShowImage) with the new C++ API (cv::Mat). To learn the C++ API, follow these tutorials:
http://docs.opencv.org/doc/tutorials/tutorials.html
I am working on an OCR recognition app and I want to give the user the option to manually select the area (during camera capture) on which to perform OCR. The issue I face is that I draw a rectangle on the camera screen by simply overriding the - (void)drawRect:(CGRect)rect method. However, despite there being a rectangle, the camera captures the entire area rather than just the region inside it.
In other words, I do not want the entire picture to be sent for processing, but only the part of the captured image inside the rectangle. I have managed to draw the rectangle, but so far it has no functionality.
I hope this makes sense; I have tried my best to explain it.
Thanks, and let me know.
Stream the camera's image to a UIScrollView using an AVCaptureOutput, then allow the user to pinch/pull/pan the preview into the proper place... now use a UIGraphics image context to take a "screenshot" of this area and send that UIImage's CGImage in for processing.
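If the processing side happens to use OpenCV, an alternative to screenshotting is to crop the captured frame itself, once the on-screen rectangle has been mapped into image coordinates. A hypothetical sketch (not from the answer above):

#include <opencv2/opencv.hpp>

// Crop the captured frame to the user's selection before OCR. selection is
// the rectangle in the captured image's pixel coordinates (you must convert
// from view/screen coordinates to image coordinates first).
cv::Mat cropForOCR(const cv::Mat& captured, const cv::Rect& selection)
{
    // Clamp the selection to the image bounds, then detach the region.
    cv::Rect safe = selection & cv::Rect(0, 0, captured.cols, captured.rows);
    return captured(safe).clone();
}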
I have already tried this solution CGImage (or UIImage) from a CALayer
However I do not get anything.
Like the question says, I am trying to get a UIImage from the camera's preview layer. I know I can either capture a still image or use the output sample buffer, but my session's video quality is set to photo, so both of these approaches are slow and give me a big image.
So what I thought could work is to get the image directly from the preview layer, since it has exactly the size I need and the operations have already been applied to it. I just don't know how to get this layer to draw into my context so that I can get it as a UIImage.
Perhaps another solution would be to use OpenGL to get this layer directly as a texture?
Any help will be appreciated, thanks.
Quoting Apple from this Technical Q&A:
A: Starting from iOS 7, the UIView class provides a method -drawViewHierarchyInRect:afterScreenUpdates:, which lets you render a snapshot of the complete view hierarchy as visible onscreen into a bitmap context. On iOS 6 and earlier, how to capture a view's drawing contents depends on the underlying drawing technique. This new method -drawViewHierarchyInRect:afterScreenUpdates: enables you to capture the contents of the receiver view and its subviews to an image regardless of the drawing techniques (for example UIKit, Quartz, OpenGL ES, SpriteKit, AV Foundation, etc.) in which the views are rendered.
In my experience with AVFoundation it is not like that: if you use that method on a view that hosts a preview layer, you will only obtain the content of the view, without the image of the preview layer. Using -snapshotViewAfterScreenUpdates: will return a UIView that hosts a special layer; if you try to make an image from that view, you won't see anything.
The only solutions I know of are AVCaptureVideoDataOutput and AVCaptureStillImageOutput. Each one has its own limits: the former can't work simultaneously with an AVCaptureMovieFileOutput acquisition, and the latter makes the shutter noise.