I want a beat effect on a video and I am using an ffmpeg command for it. I used the command below to loop between black-and-white and the original color every 2 seconds, but it doesn't work: it only creates a black-and-white video.
ffmpeg -i addition.mp4 -vf hue=s=0 output.mp4
So please, suggest any solution.
I want to make a video like youtube.com/watch?v=7fG7TVKGcqI. Please suggest how.
Thanks in advance
ffmpeg -i addition.mp4 -vf hue=s=0 output.mp4 will, as you said, just create a black-and-white video: -vf specifies video filters, and hue=s=0 sets the saturation to 0.
As far as I know, this kind of effect is too advanced for a command-line application unless you already have a lot of knowledge of it. I'd recommend using a graphical video editor. I use Shotcut and I like it, but I'm not sure whether you can do this in it.
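That said, for the plain black-and-white/color alternation specifically, ffmpeg's timeline editing might get close. A sketch, not a definitive answer (the 4-second cycle is an assumption based on your "2 sec looping" description):
ffmpeg -i addition.mp4 -vf "hue=s=0:enable='lt(mod(t,4),2)'" -c:a copy output.mp4
Here mod(t,4) repeats every 4 seconds and lt(...,2) turns the desaturation on only for the first 2 seconds of each cycle, so the video alternates between black-and-white and the original color.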
Hi all. I have a very strange issue reading a VideoCapture in OpenCV.
I have .MTS videos (MPEG-2), and I read them in OpenCV using the following code:
import cv2

cv2.namedWindow("frame", cv2.WINDOW_NORMAL)
cap = cv2.VideoCapture("00030.MTS")
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:  # stop at end of stream
        break
    cv2.imshow("frame", frame)
    cv2.waitKey(0)  # advance one frame per key press
And this shows me corrupted frames (with bands on them). The same (corrupted) quality is kept if I save the frame as an image and look at it outside OpenCV.
But here is how it should look:
I've never seen this before while working with .avi or .mp4 files.
How can I get the same not-corrupted frames like in the media player?
(I edited the title of the question. Some of this information wasn't apparent originally.)
Your file names suggest that this video material is a video camera's own recording, with no alterations to it.
Your video seems to be "interlaced", i.e. not "progressive". Interlaced video consists of "fields" instead of complete frames. A field contains all the even or odd lines of an image. Even and odd fields follow each other. With that method, you can have "50i" video that updates at 50 Hz, yet requires only half the data of a full "50p" video (half the data means reduced vertical resolution).
OpenCV (probably) uses ffmpeg to read your video file.
Both ffmpeg and VLC know when a video file contains interlaced video data. VLC automatically applies a suitable filter to make this data nicer to look at. ffmpeg does not, because filtering costs processing time and changes the data.
You should use ffmpeg to "de-interlace" your video files. I would suggest the yadif filter.
Example:
ffmpeg -i 00001.MTS -c copy -vf yadif -c:v libx264 -b:v 24M 00001_deinterlaced.MTS
You should look at the settings/options of yadif. Maybe the defaults aren't to your liking. Try yadif=1 to get field-rate progressive video (50/60p).
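For example, a field-rate variant might look like this (the codec and quality settings here are just placeholders to adjust):
ffmpeg -i 00001.MTS -vf yadif=1 -c:v libx264 -crf 18 -c:a copy 00001_50p.mp4
With yadif=1, each field becomes its own output frame, so 50i input yields 50 progressive frames per second.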
I want to achieve something like this:
Reference Video
Say I have a vertical video (dimensions: 720x1280). I want to create a horizontal video with an adaptive background, like the video I've shown.
I have written some code for reading and writing to a file.
import cv2

video_index = 0
cap = cv2.VideoCapture(videofiles[0])  # videofiles: list of input video paths

# video resolution: 1920x1080 px
out = cv2.VideoWriter("video.mp4",
                      cv2.VideoWriter_fourcc(*'mp4v'),
                      30, (1920, 1080), True)  # 30 fps, color output
What is the effect called (in OpenCV/ffmpeg or otherwise) where the video gets a background that smudges it out to the sides?
How do I achieve this effect using code or tools (I am open to using OSS desktop tools)?
That effect is simply realized by scaling the initial video to the desired size, blurring it, and overlaying the original video on top of it at the center.
For the blur, I suggest starting with Gaussian blur, available in OpenCV.
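A minimal per-frame sketch of that composition in OpenCV (the file names, target size, and blur sigma are assumptions to tune):

import cv2

# Blurred, stretched copy as background; original frame pasted at the center.
frame = cv2.imread("portrait_frame.png")  # hypothetical 720x1280 input frame

bg = cv2.resize(frame, (1920, 1080))      # stretch to fill the target canvas
bg = cv2.GaussianBlur(bg, (0, 0), 25)     # kernel size derived from sigma=25

fg = cv2.resize(frame, (608, 1080))       # fit height, keep aspect (720*1080/1280 ≈ 608)
x = (1920 - fg.shape[1]) // 2             # center horizontally
bg[:, x:x + fg.shape[1]] = fg             # paste the original on top

cv2.imwrite("composite.png", bg)

Doing this for every frame and writing the results with cv2.VideoWriter reproduces the effect; the ffmpeg command in the next answer does the same in a single pass.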
As suggested by Gyan, I used the following piece of code:
ffmpeg -i video720x1280.mp4 -filter_complex \
"[0]scale=hd1080,setsar=1,boxblur=20:20[b];[0]scale=-1:1080[v];[b][v]overlay=(W-w)/2" \
video1920x1080.mp4
It worked like a charm!
Link to original answer.
So the question is: does Spark AR support transparent videos? I've tried to play QuickTime's mov format, but nothing is displayed.
Spark doesn't support video unfortunately. The easiest thing might be to instead import the video as a sequence of PNGs with an alpha channel, and play that sequence as a "2D texture animation".
https://sparkar.facebook.com/ar-studio/learn/documentation/building-your-scene/animation-and-interactivity/2D-texture-animation/
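If you need to produce that PNG sequence, ffmpeg can export it, assuming the .mov actually carries an alpha channel (e.g. ProRes 4444; the file names are placeholders):
ffmpeg -i input.mov frame_%04d.png
The PNG encoder keeps the alpha channel, so the exported frames stay transparent.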
I am using FFmpeg to overlay an image/emoji on a video with this command:
"-i "+inputfilePath+" -filter_complex "+"[0][1]overlay=enable='between(t,"+startTime+","+endTime+")'[v1]"+" -map [v0] -map 0:a "+OutputfilePath;
But the above command only overlays the image on the video, and the image stays still.
Instagram and Snapchat have a new "pin" feature. I want exactly the same, e.g. blur on moving faces, or as in the videos below:
Here is a link.
Is it possible via FFmpeg?
I think someone with OpenCV or Augmented Reality knowledge can help with this. It is quite similar to AR, as we need to move/zoom the emoji exactly where we want it on the video/live cam.
Based on the overlay filter specification:
https://ffmpeg.org/ffmpeg-filters.html#overlay-1
when you specify a time interval, the filter is applied only during that interval:
For example, to enable a blur filter (smartblur) from 10 seconds to 3 minutes:
smartblur = enable='between(t,10,3*60)'
What you need to do is overlay an image at specific coordinates; for example, the following does it at a fixed x and y:
ffmpeg -i rtsp://[host]:[port] -i x.png -filter_complex 'overlay=10:main_h-overlay_h-10' http://[host]:[post]/output.ogg
Now the idea is to calculate those coordinates based on the current frame of the video and force the filter to use the updated coordinates on every frame.
For example based on time:
FFmpeg move overlay from one pixel coordinate to another
ffmpeg -i bg.mp4 -i fg.mkv -filter_complex \
"[0:v][1:v]overlay=enable='between=(t,10,20)':x=720+t*28:y=t*10[out]" \
-map "[out]" output.mkv
Or using some other expressions:
http://ffmpeg.org/ffmpeg-utils.html#Expression-Evaluation
Unfortunately, this requires finding a formula for x and y up front, using those limited expressions, for something like a cat moving its head or a pen drawing. It can be a linear, trigonometric, or other dependency on time:
x=sin(t)
With free movement this is not always possible.
To be more precise in finding an object's coordinates to overlay something on, it should be possible to write your own filter (FFmpeg is open source), similar to overlay:
https://github.com/FFmpeg/FFmpeg/blob/master/libavfilter/vf_overlay.c
It could calculate x and y either from an external file (where you can dump the x and y values for all times, if it is a static video) or by doing some image processing to find the specific region.
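For the external-file route you may not even need a custom filter: FFmpeg's sendcmd filter can push new x/y values to overlay at given timestamps. A sketch (the timestamps and coordinates in cmds.txt are made-up placeholders):

# cmds.txt
0.0 overlay x 100, overlay y 50;
2.0 overlay x 300, overlay y 120;

ffmpeg -i bg.mp4 -i emoji.png -filter_complex \
"[0:v]sendcmd=f=cmds.txt[bg];[bg][1:v]overlay=x=100:y=50" out.mp4

Each line updates the overlay coordinates at the given time, so per-frame positions from an object tracker can be dumped into that file.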
Hopefully this gives you an idea and a direction to move in.
It's a very interesting feature.
I would like to extract out all the slides from a video lecture, using OpenCV. Here is an example of a lecture: http://www.youtube.com/watch?v=-hxOpz9c0bY.
What approaches would you recommend? So far, I've tried:
Comparing the change in grayscale intensity from frame to frame. This can have problems when an object in the foreground moves around. For example, in this lecture, there's a hand that moves around: http://www.youtube.com/watch?v=mNzu42FrlHo#t=07m00s.
Using SURF features and doing comparisons frame by frame. This approach seems kind of slow.
Does anyone have other ideas?
Most of this work has most likely already been done by the video encoder; you just need to extract the key-frames and check how well compressed the frames between them are.
It should also be fairly easy to distinguish the still images, and you can save a lot of time by examining just the key-frames. Slides are likely to have high contrast, solid shapes, and a solid background; the lecture hall has blurry shapes and low contrast.
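Extracting just the key-frames is straightforward with ffmpeg; for example (the file names are placeholders):
ffmpeg -i lecture.mp4 -vf "select=eq(pict_type\,I)" -vsync vfr keyframe_%03d.png
The select filter keeps only I-frames, and -vsync vfr stops ffmpeg from duplicating frames to fill the gaps.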
What you need is scene change detection. After that, you'll have to classify scenes as "lecture hall" or "presentation". As for the problem with hands: you could use background subtraction with an adaptive background (just make sure you mask the foreground... you don't want the foreground to become part of the background).
You could try edge detection and look for a rectangular object, the slide (above a certain area threshold). You could further reduce false positives by looking for some text within the rectangle.
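A minimal OpenCV sketch of the frame-differencing idea from the question (the threshold and downscale size are guesses to tune per video):

import cv2
import numpy as np

cap = cv2.VideoCapture("lecture.mp4")  # hypothetical input
prev, saved = None, 0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (320, 180))  # downscale: faster and less noisy
    # save a frame whenever the content changes substantially
    if prev is None or np.mean(cv2.absdiff(gray, prev)) > 10:
        cv2.imwrite("slide_%03d.png" % saved, frame)
        saved += 1
    prev = gray
cap.release()

Combining this with the key-frame suggestion above (diffing only the key-frames) would make it much faster.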
There are several reasons to extract slides/frames from a video presentation, especially in the case of education or conference-related videos. It allows you to access the study notes without watching the whole video.
I have faced this issue several times, so I decided to create a solution for it myself using Python. I have made the code open source; you can easily set up this tool and run it in a few simple steps.
Refer to this for a YouTube video tutorial. Steps to use this tool:
Clone this project video2pdfslides
Set up your environment by running "pip install -r requirements.txt"
Copy your video path
Run "python video2pdfslides.py <video_path>"
Boom! The PDF slides will be available in the output folder. Make notes and enjoy!