I am a Ruby on Rails developer, so an answer in the Ruby on Rails ecosystem would be of super help, but an answer in any other technical area is also highly appreciated.
What I am trying to do: I have a video in which I want to replace an image (the default cursor pointer image) in the video frames with another image that I have.
I have looked through image processing material on detecting objects in frames, but I cannot find anything on replacing those objects.
How can I replace an object (image) with another object (image) in video frames, in any language (Java, Ruby, Python, etc.)?
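For what it's worth, common libraries have no single "replace object" call; the usual recipe is to locate the object in each frame (template matching works well when the object has a fixed appearance, like a default cursor) and then draw the replacement over the matched region. A minimal Python/OpenCV sketch of that recipe; the file names and confidence threshold are placeholder assumptions:

```python
import cv2

cap = cv2.VideoCapture("input.mp4")          # hypothetical input video
cursor = cv2.imread("cursor.png")            # image of the default cursor
replacement = cv2.imread("replacement.png")  # image to paste instead
h, w = cursor.shape[:2]

fourcc = cv2.VideoWriter_fourcc(*"mp4v")
out = cv2.VideoWriter("output.mp4", fourcc, cap.get(cv2.CAP_PROP_FPS),
                      (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
                       int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Find the best match for the cursor template in this frame.
    result = cv2.matchTemplate(frame, cursor, cv2.TM_CCOEFF_NORMED)
    _, score, _, (x, y) = cv2.minMaxLoc(result)
    if score > 0.8:  # confidence threshold is an assumption; tune it
        # Paste the replacement over the matched region (same size assumed).
        frame[y:y + h, x:x + w] = cv2.resize(replacement, (w, h))
    out.write(frame)

cap.release()
out.release()
```

This is a sketch, not a robust solution: it assumes the cursor appears at a fixed scale and ignores transparency; for a cursor with an alpha channel you would blend instead of overwriting the region.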
I need to detect something like a barcode, but colored, on live video (I don't necessarily need tracking; it is optional). I could use OpenCV, but I thought it was worth asking here first. The problem with OpenCV is that I am not very familiar with image processing concepts. The other problem is that I use Swift most of the time, and although I could write an Objective-C wrapper with the help of Google, the fact that OpenCV is written in C++ does not help at all, so using OpenCV might take too much time.
Do you know of any SDKs that could help with this? (It is not a problem if it is not free.)
EDIT:
To be clear: by "something like a barcode" I mean that it is colored, and instead of the widths of the bars, the colors are what matter.
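Whatever SDK you end up with, the core of this task is color segmentation, which is short even in raw OpenCV. A hedged Python sketch of the idea; the HSV range is a placeholder assumption for one of your bar colors:

```python
import cv2
import numpy as np

frame = cv2.imread("frame.png")  # hypothetical frame from the live feed
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Placeholder HSV range for one bar color (e.g. a saturated red);
# you would define one range per color in your scheme.
lower = np.array([0, 120, 120])
upper = np.array([10, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Box each blob of that color; the sequence of box colors, read
# left to right, is effectively your "barcode".
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```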
I have a device that streams H.264 video in the following format: the top half of the picture contains the even lines of the video, and the bottom half contains the odd lines. So the question is: how can I play this video with normal visibility using standard players, ffplay for example?
I know about ffmpeg's "tinterlace=merge" filter, but it combines a video from two consecutive pictures. My task is to build a correct picture from a single frame.
Regards,
Alexey.
I recently had to deal with the exact same problem.
There are many different methods, and the optimal solution depends entirely on your situation.
The simplest and fastest method is weaving the two fields together, which is perfect for static areas but creates a combing effect on moving objects.
More sophisticated methods use motion detection.
What I did was merge the two fields, then apply edge-line averaging (ELA) to the moving segments to reduce the combing effect.
Check this link for a detailed explanation of the problem.
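For reference, the weaving step itself is tiny. A NumPy sketch, assuming each frame arrives as in the question, with the even field stacked on top of the odd field:

```python
import numpy as np

def weave_fields(frame: np.ndarray) -> np.ndarray:
    """Re-interleave a frame whose top half holds the even lines and
    whose bottom half holds the odd lines (layout from the question).
    Assumes an even number of rows."""
    h = frame.shape[0]
    top, bottom = frame[: h // 2], frame[h // 2:]
    out = np.empty_like(frame)
    out[0::2] = top     # even output lines come from the top half
    out[1::2] = bottom  # odd output lines come from the bottom half
    return out
```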
It would be good if you could provide a sample video file. You describe very well what the picture looks like, but the file may contain other information that is helpful for playback.
Furthermore, the format you describe doesn't sound like a standard format, so it's unlikely you will get a regular player to play it the way you want, out-of-the-box. If you're using ffplay, it's likely that you will have to write your own plugin to re-order the scanlines prior to displaying them.
Alternatively, you could re-encode the video into a standard format (interlaced or deinterlaced) using ffmpeg. You could then play it back in any regular player, like ffplay or VLC.
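If re-encoding is acceptable, ffmpeg's il filter can interleave a frame whose two halves hold the two fields back into normal line order. A sketch, with the file names as placeholders; if the fields come out swapped (juddery motion), add the ls=1:cs=1 swap options, and you can append a deinterlacer such as yadif afterwards:

```
ffmpeg -i input.h264 -vf "il=l=i:c=i" -c:v libx264 output.mp4
```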
Finally, I recommend asking your question on the ffmpeg mailing list.
I have read in this PDF that if I want to track an object, I do not have to search every image in the video stream by brute force. It is enough to find the object in one image of the sequence and then just focus on it in the other images. Is this implemented in some way in OpenCV?
P.S.: I know it is a pretty old text :) Thanks
Yes:
As documented here, the cv::goodFeaturesToTrack function finds the most prominent corners in the image or in a specified image region, as described in Shi94 (the paper you linked). You can then follow those corners from frame to frame with the Lucas-Kanade tracker, cv::calcOpticalFlowPyrLK, instead of re-detecting in every image.
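A minimal Python/OpenCV sketch of that detect-once-then-track pattern; the file name is a placeholder, and in practice you would restrict the initial detection to a mask or ROI around your object:

```python
import cv2

cap = cv2.VideoCapture("video.mp4")  # hypothetical input
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Detect prominent Shi-Tomasi corners once.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)

while pts is not None and len(pts) > 0:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Track the existing corners instead of searching the whole frame.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    pts = nxt[status.ravel() == 1].reshape(-1, 1, 2)  # keep tracked points
    prev_gray = gray
```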
I would like to extract all the slides from a video lecture using OpenCV. Here is an example of a lecture: http://www.youtube.com/watch?v=-hxOpz9c0bY.
What approaches would you recommend? So far, I've tried:
Comparing the change in grayscale intensity from frame to frame. This can have problems when an object in the foreground moves around. For example, in this lecture, there's a hand that moves around: http://www.youtube.com/watch?v=mNzu42FrlHo#t=07m00s.
Using SURF features and doing comparisons frame by frame. This approach seems kind of slow.
Does anyone have other ideas?
Most of this work has most likely already been done by the video encoder. You just need to extract the key-frames and check how well compressed the frames between them are.
It should also be fairly easy to distinguish still images, and you can save a lot of time by examining just the key-frames. Slides are likely to have high contrast, solid shapes, and a solid background; a lecture hall has blurry shapes and low contrast.
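For example, ffmpeg can dump only the key-frames for inspection; the -skip_frame nokey decoder option skips everything else (file names are placeholders):

```
ffmpeg -skip_frame nokey -i lecture.mp4 -vsync vfr keyframe_%04d.png
```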
What you need is scene change detection. After that, you'll have to classify scenes as "lecture hall" or "presentation". As for the problem with hands: you could use background subtraction with an adaptive background (just make sure you mask the foreground... you don't want the foreground to become part of the background).
You could try edge detection and look for a rectangular object, the slide (above a certain area threshold). You could further reduce false positives by looking for text within the rectangle.
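A minimal OpenCV sketch of that edge-plus-rectangle idea; the Canny thresholds and area threshold are assumptions to tune:

```python
import cv2

frame = cv2.imread("frame.png")  # hypothetical frame from the lecture video
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
min_area = 0.2 * frame.shape[0] * frame.shape[1]  # assumed area threshold
for c in contours:
    # A slide should approximate to a large 4-corner polygon.
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4 and cv2.contourArea(approx) > min_area:
        x, y, w, h = cv2.boundingRect(approx)
        print("candidate slide region:", x, y, w, h)
```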
There are several reasons to extract slides/frames from a video presentation, especially for education- or conference-related videos. It lets you access the study notes without watching the whole video.
I have faced this issue several times, so I decided to create a solution for it myself using Python. I have made the code open source; you can easily set up this tool and run it in a few simple steps.
Refer to this YouTube video tutorial. Steps to use this tool:
1. Clone the project video2pdfslides.
2. Set up your environment by running "pip install -r requirements.txt".
3. Copy your video path.
4. Run "python video2pdfslides.py <video_path>".
5. Boom! The PDF slides will be available in the output folder. Make notes and enjoy!
This is a simple text detection video made using OpenCV.
Any ideas how that was made?
OpenCV video 1
OpenCV video 2
The author describes what he's doing in the comments to the video you linked.
"I basically look for dark pixels aligned horizontally, put a box around them and call them letters."
He also mentions:
"If you like I could share the source code."
...so if you're still curious, you know what to do.
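The author's code isn't public here, but a minimal OpenCV sketch of the idea he describes (threshold for dark pixels, merge horizontally aligned runs, box the result) might look like this; the threshold value and kernel size are assumptions:

```python
import cv2

frame = cv2.imread("frame.png")  # hypothetical video frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Dark pixels become white in the mask (inverted binary threshold).
_, mask = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)

# Dilate with a wide, short kernel so horizontally aligned dark
# pixels merge into single letter/word blobs.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3))
mask = cv2.dilate(mask, kernel)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if w > h:  # text runs tend to be wider than tall
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```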