Adding an overlaid line on video footage following the center of the frames via video processing/pattern recognition - opencv

I have multiple 1-minute videos taken outdoors (a building, a sea cliff, etc.). The camera is not fixed; the footage is taken by a drone, for example one very slowly going up a cliff.
What I need to do, if it is even possible, is use OpenCV or another video processing framework to automatically add a path, a red line, to each of those videos.
The red line would follow the locations pointed at by the centre of the video frame.
This means that for each frame, I must find the locations pointed at in the previous/next frames via some sort of pattern recognition and somehow link them with an overlaid red line path.
Is there some sort of algorithm or tool that facilitates this process? How would you approach this problem?
Example:
From frames like these: [four example frames, each with a mark at the frame centre]
To frames like these: [the same frames, with the successive centre points joined by a red line]
(In the real, slow footage, the consecutive frames would be much closer to each other.)
It looks like, for each frame, I must try to locate the centre points of the previous/next frames and link them as a correctly ordered line. Surely this must have been done before?
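
One possible approach (a minimal sketch, not a definitive answer): estimate the camera motion between consecutive frames with ORB feature matching plus a RANSAC homography, warp every previously recorded frame centre into the current frame, and draw the points as a red polyline. The file names, the feature count, and the assumption that a homography approximates the slow drone motion are illustrative, not from the question:

```python
import cv2
import numpy as np

# Sketch assumptions: a readable "input.mp4", enough texture for ORB matching,
# and motion slow enough that a frame-to-frame homography approximates it well.
cap = cv2.VideoCapture("input.mp4")
ok, prev = cap.read()
h, w = prev.shape[:2]
centre = np.array([[w / 2.0, h / 2.0]], dtype=np.float32)
path = [centre.copy()]  # every frame's centre, kept in current-frame coordinates

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
prev_kp, prev_des = orb.detectAndCompute(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), None)

fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
out = cv2.VideoWriter("overlay.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    kp, des = orb.detectAndCompute(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), None)
    matches = matcher.match(prev_des, des) if des is not None else []
    if len(matches) >= 4:
        src = np.float32([prev_kp[m.queryIdx].pt for m in matches])
        dst = np.float32([kp[m.trainIdx].pt for m in matches])
        # RANSAC homography estimates the camera motion between the two frames.
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        if H is not None:
            # Carry all previously recorded centres into this frame's coordinates.
            path = [cv2.perspectiveTransform(p.reshape(-1, 1, 2), H).reshape(-1, 2)
                    for p in path]
    path.append(centre.copy())  # this frame's own centre
    pts = np.array([p[0] for p in path], dtype=np.int32).reshape(-1, 1, 2)
    cv2.polylines(frame, [pts], isClosed=False, color=(0, 0, 255), thickness=2)  # red in BGR
    out.write(frame)
    prev_kp, prev_des = kp, des

cap.release()
out.release()
```

The homography assumption holds reasonably for slow motion or distant scenes; footage with strong parallax would need proper structure-from-motion instead.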

Related

Aligning two extremely similar but slightly different videos

I have two videos of the Super Smash Brothers video game. In one video the characters exist; in the other video they do not. Everything else about the videos is exactly the same, except for the characters being invisible in one of them.
When I output the two videos, I have to align them manually in a video editor. Once they are aligned, they stay in sync! However, the videos have a random amount of start time, which is the problem.
What's a good way to automatically align these two different but extremely similar videos? Here are example frames.
Current idea:
Take a frame halfway through one video and compare it to the other video around the same location, using the mean squared error (MSE) between the pixels, searching 5 seconds forward and 5 seconds back. Take the frame with the smallest MSE as the matching frame, then remove the offset from the beginning of the longer video. This seems extremely brittle and slow.
Your current idea is good, and it doesn't need to be slow at all. Since the only part of the images that differs is the fighters, and we can assume the fighters are always near the middle of the image, you just need to match a small part of the images, like the rectangle I drew.
Besides, you can use other fast matching methods too, like ORB features.
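
A minimal sketch of that cropped-MSE alignment (the file names, the crop region, and the ±150-frame, roughly ±5-second, search window are assumptions, not from the answer):

```python
import cv2
import numpy as np

# Sketch assumptions: both videos share the same resolution, and the top-left
# corner region is identical in both (no fighters there).
def crop(frame):
    region = frame[0:80, 0:200]  # hypothetical corner region away from the action
    return cv2.cvtColor(region, cv2.COLOR_BGR2GRAY).astype(np.float32)

def frame_at(path, index):
    cap = cv2.VideoCapture(path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, index)
    ok, frame = cap.read()
    cap.release()
    return frame

REF_INDEX = 500  # a frame roughly halfway through the shorter video
ref = crop(frame_at("with_characters.mp4", REF_INDEX))

best_offset, best_mse = None, float("inf")
cap = cv2.VideoCapture("without_characters.mp4")
cap.set(cv2.CAP_PROP_POS_FRAMES, REF_INDEX - 150)
for offset in range(-150, 151):
    ok, frame = cap.read()
    if not ok:
        break
    mse = float(np.mean((crop(frame) - ref) ** 2))  # MSE on the small crop only
    if mse < best_mse:
        best_offset, best_mse = offset, mse
cap.release()
print("best offset (frames):", best_offset)
```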

TensorFlow video processing, changes detection

I'm a newbie with machine learning, and I have only basic knowledge of neural networks.
I have a pretty clear task:
1. The video stream shows a static picture (a white area with yellow squares). (In different videos the squares are located in different places.)
2. At some moment the content of the video changes and starts to show the white area without some of the yellow squares.
3. I need to create a mechanism which can detect and somehow indicate those changes.
I'm going to use the TensorFlow framework for that task. Could anybody push me in the right direction? I'd also be very happy to see a list of steps to overcome the problem.
Thanks in advance.
If you know how the static picture looks beforehand, maybe some background subtraction would work. Basically, you just subtract the static picture from every frame and check the content of the result. If the resulting picture is empty (all zeros, or close to it up to some threshold), there is no change to detect. If the resulting picture contains a region that is non-zero (above or below a certain manually tuned threshold), you have detected a change in that region.
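
A minimal sketch of that background subtraction, in OpenCV rather than TensorFlow (the file names and both thresholds are assumptions to be tuned; the reference image must match the frame size):

```python
import cv2

# Sketch assumptions: "static.png" is the known unchanged picture and
# "stream.mp4" is the video to monitor.
reference = cv2.imread("static.png", cv2.IMREAD_GRAYSCALE)
cap = cv2.VideoCapture("stream.mp4")
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, reference)       # "subtract the static picture"
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 500:          # minimum changed area, in pixels
        print(f"change detected at frame {frame_idx}")
    frame_idx += 1
cap.release()
```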

Is it possible to match an image with its appearance in a video?

I have a short video of 10 minutes. The video is actually an online lecture; when you watch it, you only see a slide show (some slides are annotated).
I have the original slides (PDF, images, PPT, whatever). Is it possible to match each slide with the specific time in the video when it appears?
My idea is to take every slide image and compare it with every frame of that video, trying to match the slide image in the video.
What do you think of my idea? Is it possible and doable with some algorithm? Can I just subtract the video frame from the slide image (calculate the difference) to see which difference is close to zero? Thanks
If the images are perfectly aligned, then you can use any of simple differencing, sum of squared differences, or normalised cross-correlation. However, if they are not aligned, you will need to register the two images first, followed by any of the three matching methods mentioned. Do a Google search for image registration; affine registration might be sufficient for your problem.
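
A minimal sketch of the normalised cross-correlation route (the slide file names, the video name, and the one-frame-per-second sampling are assumptions; it also assumes the frames are already roughly aligned with the slides, i.e. no registration step):

```python
import cv2
import numpy as np

# Normalised cross-correlation of two same-size grayscale images.
def ncc(a, b):
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

# Sketch assumptions: slides exported as "slide_1.png" ... "slide_20.png".
slides = [cv2.imread(f"slide_{i}.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
          for i in range(1, 21)]
cap = cv2.VideoCapture("lecture.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
best = {i: (-1.0, 0.0) for i in range(len(slides))}  # slide -> (score, seconds)
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % int(fps) == 0:  # one frame per second is plenty for slides
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for i, slide in enumerate(slides):
            resized = cv2.resize(gray, (slide.shape[1], slide.shape[0]))
            score = ncc(resized.astype(np.float32), slide)
            if score > best[i][0]:
                best[i] = (score, frame_idx / fps)
    frame_idx += 1
cap.release()
for i, (score, t) in best.items():
    print(f"slide {i + 1}: best match at {t:.1f} s (ncc = {score:.2f})")
```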

openCV: is it possible to time cvQueryFrame to synchronize with a projector?

When I capture camera images of projected patterns using OpenCV via 'cvQueryFrame', I often end up with an unintended artifact: the projector's scan line. That is, since I'm unable to precisely time when 'cvQueryFrame' captures an image, the captured image does not respect the projector's constant 30 Hz refresh. The result is the typical horizontal band familiar to anyone who has pointed a video camera at a TV screen.
Short of resorting to hardware sync, has anyone had some success with approximate (e.g., 'good enough') informal projector-camera sync in openCV?
Below are two solutions I'm considering, but I was hoping this is a common enough problem that an elegant solution might already exist. My less-than-elegant thoughts are:
1. Add a slider control to the cvWindow displaying the video, letting the user control a timing offset from 0 to 1/30th of a second, then set up a queue timer at this interval. Whenever a frame is needed, rather than calling 'cvQueryFrame' directly, I would request a callback to execute 'cvQueryFrame' at the next firing of the timer. In this way, the user could theoretically use the slider to reduce the scan-line artifact, provided the timer resolution is sufficient.
2. After receiving a frame via 'cvQueryFrame', examine the frame for the tell-tale horizontal band by looking for a delta in HSV values down a vertical column of pixels (see the sketch after this list). Naturally, this would only work when the subject being photographed contains a fiducial strip of uniform colour under smoothly varying lighting.
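
A rough sketch of that second check (the column position and the brightness-jump threshold are arbitrary assumptions; it only flags a candidate band, it does not recover the projector phase):

```python
import cv2
import numpy as np

# Sketch assumption: the scene contains a vertical strip of roughly uniform
# colour at column x, so a sharp step in the V (value) channel marks the band.
def has_band(frame, x=10, jump=30):
    v = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)[:, x, 2].astype(np.int32)
    return int(np.abs(np.diff(v)).max()) > jump

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok and has_band(frame):
    print("scan-line artifact suspected; discard and re-grab the frame")
cap.release()
```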
I've used several cameras with OpenCV, most recently a Canon SLR (7D).
I don't think that your proposed solution will work. cvQueryFrame basically copies the next available frame from the camera driver's buffer (or advances a pointer in a memory-mapped region, or whatever else your driver implementation does).
In any case, the timing of the cvQueryFrame call has no effect on when the image was actually captured.
So, as you suggested, hardware sync is really the only route, unless you have a special camera, like a Point Grey camera, which gives you explicit software control of the frame-integration start trigger.
I know this has nothing to do with synchronizing, but have you tried extending the exposure time? Or achieving the same effect by intentionally "blending" two or more images into one?

How to recognize moving objects and differentiate them from the background?

I am working on a project in which I take a video with a camera and convert the video to frames (this part of the project is done).
What I am facing now is how to detect the moving objects in these frames and differentiate them from the background, so that I can distinguish between them.
I recently read an awesome CodeProject article about this. It discusses several approaches to the problem and then walks you step by step through one of the solutions, with complete code. It's written at a very accessible level and should be enough to get you started.
One simple way to do this (if little noise is present; otherwise I recommend a smoothing kernel first) is to compute the absolute difference of two consecutive frames. You'll get an image of the things that have "moved". The background needs to be pretty static for this to work. If you always take the absolute difference between the current frame and the nth frame, you'll have a grayscale image containing the object that moved. The object has to differ from the background colour, or it will disappear...
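
A minimal sketch of that consecutive-frame differencing (the file name, the Gaussian blur used as the smoothing kernel, and the binarisation threshold are assumptions):

```python
import cv2

# Sketch assumptions: "input.mp4" has a mostly static background, and the
# moving object differs from the background in intensity.
cap = cv2.VideoCapture("input.mp4")
ok, prev = cap.read()
prev = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (5, 5), 0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    moved = cv2.absdiff(gray, prev)            # grayscale "what moved" image
    _, mask = cv2.threshold(moved, 25, 255, cv2.THRESH_BINARY)
    cv2.imshow("motion", mask)
    if cv2.waitKey(30) & 0xFF == 27:           # press Esc to quit
        break
    prev = gray
cap.release()
cv2.destroyAllWindows()
```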
