How to overlay a small animation on a camera stream in OpenCV

I am developing an application using OpenCV as my college project. It's almost done, except that I am unable to overlay an animated video (a Flash video) over my camera stream. I want to capture the user's mouth, and after detecting the mouth I want to overlay an animated video of smoke. Can anyone help me with the overlaying part? If it is not possible, can you shed some light on a workaround?
I am using OpenCV 2.3.1 and Ubuntu 11.10.

Basically, all you need to do is set a ROI (Region of Interest) in the video frame and then copy an arbitrary image to that specific position in the frame.
I've demonstrated how to do something similar in this thread, where the user selects the ROI with the mouse and the system performs a grayscale conversion of that area.
Also, this thread shows how to use the mouse to draw over the webcam window.
Both threads use the C interface of OpenCV and they show how to accomplish the overlay effect you are looking for.
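To make the answer concrete, here is a minimal sketch using the C++ API (the linked threads use the C interface); the file name "smoke.png" and the ROI coordinates are placeholders:

```cpp
// Minimal sketch: paste an overlay image into a fixed ROI of each webcam frame.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);                    // default webcam
    cv::Mat overlay = cv::imread("smoke.png");  // hypothetical overlay image
    if (!cap.isOpened() || overlay.empty()) return 1;

    cv::Mat frame;
    while (cap.read(frame)) {
        cv::Rect roi(100, 100, overlay.cols, overlay.rows); // illustrative position
        // Only paste when the ROI fits entirely inside the frame.
        if ((roi & cv::Rect(0, 0, frame.cols, frame.rows)) == roi)
            overlay.copyTo(frame(roi));
        cv::imshow("camera", frame);
        if (cv::waitKey(30) == 27) break;       // Esc quits
    }
    return 0;
}
```

For an animated smoke clip you would step through the animation's frames and, if transparency matters, blend rather than copy (for example with cv::addWeighted, or a mask passed to copyTo).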

Related

Can I use OpenCV to analyze a video for how long a face is in the centre of the screen?

This is a frame from a video taken using the HTC Vive (it's from the user's perspective of a game I developed in Unity). I've overlaid those boxes in Paint.
I'm trying to determine which character the person is looking at (assuming the white box is where the user is focusing).
I know this can be done in Unity without the need for a video, but I want to know whether the video can be analyzed using something like OpenCV to detect how long each character's face is in the white box. I just made this in Paint to get the idea across; the parameters aren't to scale or anything. I just have no idea where to start with a concept like this apart from OpenCV.
To summarize: can I use OpenCV to detect how long each face is in the centre of the screen, i.e. how long the user looked at each character?
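One possible starting point (an assumption, not something from this thread): run a face detector on every frame and count the frames in which a detected face's centre falls inside the central box. The cascade file, video name, and box size below are placeholders, and a Haar cascade trained on real faces may not fire on rendered game characters:

```cpp
// Hypothetical sketch: count frames in which a detected face sits in a
// central "gaze" box, then convert the count to seconds via the FPS.
#include <cstdio>
#include <vector>
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap("gameplay.mp4");   // hypothetical recorded Vive video
    cv::CascadeClassifier faces("haarcascade_frontalface_default.xml");
    if (!cap.isOpened() || faces.empty()) return 1;

    int framesInBox = 0, totalFrames = 0;
    cv::Mat frame, gray;
    while (cap.read(frame)) {
        ++totalFrames;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        // Central third of the frame stands in for the white box.
        cv::Rect box(frame.cols / 3, frame.rows / 3,
                     frame.cols / 3, frame.rows / 3);
        std::vector<cv::Rect> hits;
        faces.detectMultiScale(gray, hits);
        for (const cv::Rect& f : hits)
            if (box.contains({f.x + f.width / 2, f.y + f.height / 2}))
                ++framesInBox;               // a face centre is inside the box
    }
    double fps = cap.get(cv::CAP_PROP_FPS);
    if (fps <= 0) fps = 30;                  // fallback assumption
    std::printf("face in box for %.2f s of %d frames\n",
                framesInBox / fps, totalFrames);
    return 0;
}
```

Telling the characters apart would additionally need face recognition or per-character tracking, which this sketch does not attempt.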

OpenCV - background removal and object detection

I need to detect where objects (mostly people) are in relation to a wall. I can have a camera fixed to the ceiling, so I thought I would capture an image of the space with nothing in it, then use the difference between that and the current camera image to get an image containing just the objects. Then I think I can do blob detection to get the positions (I only need x).
Does this seem sound? I'm not very accomplished in OpenCV, so I'm looking for some advice.
That would be one way of going about it, but it isn't very robust: the video feed won't produce consistently precise images, so the background will never be subtracted out cleanly, and people walking through the scene will occlude light and may also match parts of your background.
This process of removing the background from a video is simply dubbed "background subtraction", and there are built-in OpenCV methods for it.
OpenCV has tutorials on its site showing the basics, for both Python and C++.
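As a hedged illustration of those built-in methods (assuming OpenCV 3+ naming), a MOG2 subtractor followed by contour extraction yields the x positions the question asks for; the area cutoff is an assumption:

```cpp
// Sketch: MOG2 background subtraction, then contour bounding boxes to
// recover the x position of each foreground blob (person).
#include <cstdio>
#include <vector>
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);                            // fixed ceiling camera
    auto bg = cv::createBackgroundSubtractorMOG2();     // models the empty scene
    cv::Mat frame, mask;
    while (cap.read(frame)) {
        bg->apply(frame, mask);                         // foreground mask
        cv::threshold(mask, mask, 200, 255, cv::THRESH_BINARY); // drop shadow pixels
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL,
                         cv::CHAIN_APPROX_SIMPLE);
        for (const auto& c : contours) {
            if (cv::contourArea(c) < 500) continue;     // assumed noise cutoff
            cv::Rect r = cv::boundingRect(c);
            std::printf("object at x = %d\n", r.x + r.width / 2);
            cv::rectangle(frame, r, cv::Scalar(0, 255, 0), 2);
        }
        cv::imshow("objects", frame);
        if (cv::waitKey(30) == 27) break;               // Esc quits
    }
    return 0;
}
```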

Fix a position in a webcam video using OpenCV

I'm trying to draw an arrow onto a moving video using OpenCV.
What I want to do is the following:
Select a position (e.g. with the mouse) in the video captured by my webcam. Then I want to draw an arrow at this position. While the camera is moving, the arrow should be drawn at the correct position relative to the webcam video.
Can you give some hints on how to do this?
This is pretty much what I'm looking for, but it isn't that stable. Since I'm an OpenCV newbie, any help would be appreciated.
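One possible approach, offered as a sketch rather than the thread's actual method: track the selected point with pyramidal Lucas-Kanade optical flow and redraw the arrow at the tracked location each frame (cv::arrowedLine assumes OpenCV 3+):

```cpp
// Hypothetical sketch: track a clicked point with Lucas-Kanade optical
// flow and keep drawing an arrow pointing at its current location.
#include <vector>
#include <opencv2/opencv.hpp>

static cv::Point2f g_pt(-1.f, -1.f);                    // selected position

static void onMouse(int event, int x, int y, int, void*) {
    if (event == cv::EVENT_LBUTTONDOWN)
        g_pt = cv::Point2f((float)x, (float)y);         // (re)select with a click
}

int main() {
    cv::VideoCapture cap(0);
    cv::namedWindow("video");
    cv::setMouseCallback("video", onMouse);

    cv::Mat frame, gray, prevGray;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        if (g_pt.x >= 0 && !prevGray.empty()) {
            std::vector<cv::Point2f> in{g_pt}, out;
            std::vector<unsigned char> ok;
            std::vector<float> err;
            cv::calcOpticalFlowPyrLK(prevGray, gray, in, out, ok, err);
            if (ok[0]) g_pt = out[0];                   // follow the scene motion
            cv::arrowedLine(frame, cv::Point(g_pt) + cv::Point(40, 40),
                            cv::Point(g_pt), cv::Scalar(0, 0, 255), 2);
        }
        gray.copyTo(prevGray);
        cv::imshow("video", frame);
        if (cv::waitKey(15) == 27) break;               // Esc quits
    }
    return 0;
}
```

A single tracked point drifts over time, which may explain the instability mentioned above; tracking several feature points and fitting a homography would be more stable.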

Displaying a video stream to the oculus Rift

I'm trying to mod the Oculus World Demo to show a video stream from a camera instead of a pre-set graphic; however, I'm finding it difficult to find the proper way to render a cv::IplImage or cv::Mat onto the Oculus screen. If anyone knows how to display an image on the Oculus I would be very grateful. This is for the DK2.
Pure OpenCV isn't really well suited to rendering to the Rift, because you would need to manually implement the distortion mechanisms that are normally provided by the Oculus Rift SDK.
The best way to render an image from OpenCV onto the screen is to load the image into an OpenGL or Direct3D texture and use the 3D rendering API (GL or D3D) to place it into a rendered scene. There is an example of this in the GitHub repository for my book on Rift development.
In summary, it sets up the video capture using the OpenCV API and then launches a thread which is responsible for capturing images from the camera device. In the main thread, the draw call renders a simple 3D scene which includes the captured image. Most of the interesting Rift-related code is in the parent class, RiftApp.
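The OpenCV-to-OpenGL step the answer describes looks roughly like this; it is a sketch assuming a valid GL context and an OpenGL 1.2+ header that defines GL_BGR, not the repository's actual code:

```cpp
// Sketch: upload one captured BGR frame into a GL texture, so the 3D
// scene (and the Rift SDK's distortion pass) can draw it like any texture.
#include <GL/gl.h>
#include <opencv2/opencv.hpp>

GLuint matToTexture(const cv::Mat& frame) {
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);        // Mat rows are tightly packed
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,        // GL_BGR needs OpenGL 1.2+
                 frame.cols, frame.rows, 0,
                 GL_BGR, GL_UNSIGNED_BYTE, frame.data);
    return tex;
}
```

For a live stream you would reuse one texture and update it per frame (e.g. with glTexSubImage2D) rather than allocating a new one each time.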

Detect presence of objects using OpenCV in live iphone camera

Can anyone help me detect objects in real time in the iPhone camera using OpenCV?
My actual objective is to alert the user when an object interferes with a specific location in my application's camera view.
My current thinking is to capture an image with respect to my camera overlay view, which represents a specific location of my camera view, and then process that image using OpenCV to detect objects by color. If I can identify an object in that image, I will alert the user in the camera overlay itself. I don't know how to detect an object from a UIImage.
Please direct me if anyone knows some other good way to achieve my goal. Thanks in advance.
I solved my issue in the following way:
Created an image capture module with the AVFoundation classes (AVCaptureSession).
Captured image buffers through a timer working alongside the camera module.
Processed the captured frames to find objects with OpenCV
(cropping, grayscale conversion, thresholding, feature detection, etc.; see the sketch below).
Reference link: http://docs.opencv.org/doc/tutorials/tutorials.html
Alerted the user through an animated camera overlay view.
Anyway, detecting objects through image processing alone is not very accurate. To detect objects reliably in a live stream you would need an object sensor (like the depth sensor in a Kinect camera), or perhaps some AI on top, for it to work perfectly.
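A hedged sketch of the OpenCV processing step listed above (plain C++; on iOS the frame would be a cv::Mat wrapped around the AVCaptureSession pixel buffer, and the Otsu threshold and minimum area are assumptions):

```cpp
// Sketch of the frame-processing step: crop, grayscale, threshold, and
// contour detection to decide whether something occupies a watched region.
#include <vector>
#include <opencv2/opencv.hpp>

// 'frame' would come from the camera pixel buffer on iOS.
bool objectInRegion(const cv::Mat& frame, const cv::Rect& region) {
    cv::Mat gray, bin;
    cv::cvtColor(frame(region), gray, cv::COLOR_BGR2GRAY);  // crop, then grayscale
    cv::threshold(gray, bin, 0, 255,
                  cv::THRESH_BINARY | cv::THRESH_OTSU);      // automatic threshold
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL,
                     cv::CHAIN_APPROX_SIMPLE);
    for (const auto& c : contours)
        if (cv::contourArea(c) > 1000)                       // assumed minimum area
            return true;                                     // something is there
    return false;
}
```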
