I would like to mix two audio files using avconv. In the documentation I found the following way of implementing this:
avconv -i INPUT1 -i INPUT2 -filter_complex amix=inputs=2:duration=first:dropout_transition=3 OUTPUT
However, when I'm trying to run this I get the following errors:
Unrecognized option 'filter_complex'
Failed to set value 'amix=inputs=2:duration=first:dropout_transition=3' for option 'filter_complex'
I have been searching for a solution for quite some time, but couldn't find anything.
I have version 0.8.3-4:0.8.3-0ubuntu0.12.04.1 of avconv. Do I need to change anything in the configuration, or use a different version of avconv? Is there another way in which I could mix the audio files?
Thank you for your help.
In some versions, the "complex filter" can be specified using the "-filter:v" option, like:
avconv -i INPUT -filter:v amix=inputs=2:duration=first:dropout_transition=3 OUTPUT
Today I was also looking into this problem, and the final solution was just to download a fresh static build of ffmpeg and forget about the problem...
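With a fresh ffmpeg build, the command from the question should work as written; a minimal sketch, with hypothetical input file names:
ffmpeg -i input1.wav -i input2.wav -filter_complex "amix=inputs=2:duration=first:dropout_transition=3" output.wav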
It seems that avconv doesn't implement -filter_complex, although the manual states that it does.
This is a version problem; you have to update ffmpeg to the latest release.
But you can also use -vf instead of -filter_complex, like:
ffmpeg -i new1.mp4 -vf "movie=wlogo.png [watermark]; [in][watermark] overlay=main_w-overlay_w-10:10 [out]" -strict experimental output.mp4
You should probably use libav 9. Version 9.9 was released recently.
I'm able to create a gif from the image sequence, but I'm struggling to remove the dither from it.
This is the command I'm using to create the gif:
ffmpeg -f image2 -framerate 24 -y -i image_%d.png -loop -1 "C:\Users\agetr\Documents\_tmp\__giuf.gif"
And I've tried to use the paletteuse=dither=none filter in different ways with no luck.
P.S.: I'm very new to the ffmpeg cli
You need to use -sws_dither none (after the -i $file argument, and before the output file). I've tried this on current git/master of FFmpeg and it works as expected, but on older builds (e.g. 4.4.1) this doesn't work. I don't know why exactly, so use a recent (5.0 or any version from 2022, if possible) version/build.
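Putting that together with the command from the question, a minimal sketch (keeping the same input pattern, with a hypothetical output name):
ffmpeg -f image2 -framerate 24 -y -i image_%d.png -sws_dither none -loop -1 output.gif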
I wanted to know if I can use "opencv" to write to a v4l2 device.
I would take a picture, apply small changes with the features of opencv, and then send it to a v4l2 device.
I searched on the web; there are a lot of examples on how to read from a V4L2 device, but I found nothing about writing to v4l2.
Can someone help me?
The question is 8 months old, but if you still need an answer (I suppose your OS is Linux):
1. Install the v4l2loopback module.
1.1. Load and configure it on Linux, e.g. in modprobe.conf: options v4l2loopback video_nr=22,23
2. Use C++/OpenCV code like this gist:
2.1. Set up the device using an ioctl() call
2.2. Write raw RGB data to this device (e.g. /dev/video23)
2.3. Use it as a regular v4l2 device (e.g. a webcam, or vlc v4l2:///dev/video23)
More: you can also use ffmpeg with the v4l2 loopback device:
ffmpeg -f x11grab -r 12 -s 1920x1080 -i :0.0+0,0 -vcodec rawvideo -pix_fmt yuv420p -threads 0 -f v4l2 -vf 'scale=800:600' /dev/video22
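As a rough end-to-end sketch of that workflow (the device numbers follow the modprobe example above; processed.mp4 is a hypothetical file you already wrote out with OpenCV):
sudo modprobe v4l2loopback video_nr=22,23
ffmpeg -re -i processed.mp4 -vcodec rawvideo -pix_fmt yuv420p -f v4l2 /dev/video23
vlc v4l2:///dev/video23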
Hi, I want to create a video from a list of images in Ruby on Rails. I have searched a lot, and all I found was ffmpeg, but I guess that's a command-line tool. How do I create it using pure Ruby on Rails? Is there any gem or tutorial? Please help.
Thanks to LordNeckbeard, I found this single ffmpeg command to convert images into a video:
ffmpeg -framerate 1/3 -pattern_type glob -i '*.jpg' -c:v libx264 out.mp4
There are some gems that work as an interface between Ruby and ffmpeg, like https://github.com/streamio/streamio-ffmpeg.
There are a few others as well; try them out!
You can access the command line through RoR:
result = %x[some command line call here]
What you would have to do is make sure you have the path to your end result and check the return code from the command-line call.
It seems like I will have to use a command-line tool, as I didn't find any gem that does all this stuff for me.
Install ImageMagick.
Install ffmpeg.
First convert all the jpg images into a gif file:
convert -delay 250 -dispose Background *.jpg images.gif
Then convert that gif file into mp4 format:
ffmpeg -f gif -i images.gif outfile.mp4
I would love to have a better answer than this.
I have a Rails app, and in a controller I get two files as params. One is audio (WAV) and the other is video (webm).
I need to mix them together so that the output is a video (mp4) with the already mixed audio.
How can I do this?
As #Meier pointed out, using pure Ruby is not the way to go; use an external program instead.
Once ffmpeg is installed on the host, you can run the following command inside Rails to get an mkv output video file:
`ffmpeg -i #{video_file.path} -i #{audio_file.path} -acodec copy -vcodec copy -f matroska output.mkv`
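If you specifically need an mp4 as asked in the question, a sketch of a re-encoding variant (webm video generally cannot be stream-copied into an mp4 container, and mp4 players expect a codec like AAC rather than raw WAV audio):
`ffmpeg -i #{video_file.path} -i #{audio_file.path} -c:v libx264 -c:a aac output.mp4`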
I am trying to start capturing with Wireshark from the command line, but the output file only has a .pcap extension while its file type is Wireshark pcapng, and I need the libpcap file type.
My command is
tshark.exe" -i interfacenumber -W MyFile.pcap
I also tried
tshark.exe" -i 1 -F libpcap -W MyFile.pcap
and in this case no file is created on my disk, although I can see the packets in the command-line window.
tshark.exe -i 1 -F libpcap -w MyFile.pcap
is the right answer (note the lower case 'w').
(-W does something different. There are lots of tshark options, so you need to look carefully at the tshark -h output to make sure you're using the right one.)
Since the -F libpcap option is not working for me either, I'm using another command line tool in the same directory:
editcap -F libpcap currentFile.pcapng libpcapConvertedFile.pcap, where currentFile.pcapng is the pcapng-formatted file and libpcapConvertedFile.pcap is the output in the legacy libpcap format.
I run this once tshark is done capturing the original file.
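Putting both answers together, a sketch of the two-step capture-and-convert with hypothetical file names:
tshark.exe -i 1 -w capture.pcapng
editcap -F libpcap capture.pcapng capture_libpcap.pcap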
I think that in the newest Wireshark versions (1.8.x or 1.10.x) you cannot start capturing in libpcap format, and the default format is pcapng (also with a .pcap extension). Try using version 1.6.x.
That's what solved my problem:
http://www.wireshark.org/download/win64/all-versions/
http://www.wireshark.org/download/win32/all-versions/