ffmpeg ignores every framerate option, locking the result to 25 fps

ffmpeg version 3.4.8-0ubuntu0.2 Copyright (c) 2000-2020 the FFmpeg developers
No matter what I do, ffmpeg ignores everything and encodes the output as if it were 25 fps.
-framerate 60 does nothing
-t 60 does nothing
-r 60 makes it interpolate frames
-r:v 60 does the same
-vf "fps=60" does the same
-vframes <actual number of frames> makes it end the encoding prematurely
Everything Google shows seems to be outdated, including ffmpeg's own documentation.

I had a similar issue. I'm using ffmpeg 3.4.2, trying to compile a set of PNGs into a video at 60 fps. What worked in the end is this:
ffmpeg -framerate 60 -i 'foo-%05d.png' -r 60 -c:a aac -vcodec libx264 -vf format=yuv420p foo.mp4
As others have noted above, it's the -framerate 60 that made the difference. Note that the order of the flags on the command line matters.
It's not clear from the man page why -framerate changes the behavior, and unfortunately, the option is not documented there beyond this reference under -r:
-r[:stream_specifier] fps (input/output,per-stream)
Set frame rate (Hz value, fraction or abbreviation).
As an input option, ignore any timestamps stored in the file and instead generate timestamps
assuming constant frame rate fps. This is not the same as the -framerate option used for some
input formats like image2 or v4l2 (it used to be the same in older versions of FFmpeg). If in
doubt use -framerate instead of the input option -r.
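The key point in that excerpt is placement: ffmpeg applies an option to whichever file comes next on the command line, so -framerate must precede -i while -r (as an output option) must precede the output name. A minimal sketch of that ordering, building the argv list in Python (the file names are just the example's):

```python
# Sketch: ffmpeg applies options to the file that FOLLOWS them, so input
# options like -framerate go before -i, and output options go before the
# output filename.

def build_cmd(pattern: str, fps: int, out: str) -> list:
    # -framerate before -i: tells the image2 demuxer how to timestamp input frames
    # -r before the output name: requests that frame rate from the muxer
    return (["ffmpeg", "-framerate", str(fps), "-i", pattern]
            + ["-r", str(fps), "-c:v", "libx264", "-vf", "format=yuv420p", out])

cmd = build_cmd("foo-%05d.png", 60, "foo.mp4")
print(" ".join(cmd))
```

Swapping the two rate flags (putting -r before -i, or -framerate before the output) silently changes their meaning, which is why the trial-and-error in the question above failed.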

The solution is (note that -async and -vsync must come before the output file name, and -r only needs to be given once per output):
ffmpeg -framerate <framerate> -start_number <number> -i ./<name>%d.png -r <framerate> -c:v <encoder name> -crf <value> -preset <preset name> -async 1 -vsync 1 <output file name>
example:
ffmpeg -framerate 60 -start_number 225 -i ./render_%d.png -r 60 -c:v libx264 -crf 10 -preset veryslow -async 1 -vsync 1 render4k.mp4

Related

ffmpeg adding jpg and mp3 together to make a video for upload on YouTube

I am trying to take album art and join it with a track. The file formats in question are jpg and mp3. I have a working ffmpeg command
ffmpeg -y -i *.jpg -i *.mp3 -c:a copy result.avi
that creates a video that plays well in VLC, but when I upload it to YouTube, it gets stuck in processing.
The video will play on YouTube in low 240p, but I would like the image to display at 1440p quality.
I know YouTube prefers mp4, and that the video I am creating only has a single image. How can I make some changes so the video will be accepted by YouTube and display correctly?
YouTube test link: https://www.youtube.com/watch?v=0t2A4erG4II&feature=youtu.be
This works!!
ffmpeg -loop 1 -i *.jpg -i *.mp3 -c:v libx264 -tune stillimage -c:a aac -strict experimental -b:a 192k -pix_fmt yuv420p -shortest out.mp4

Use ffmpeg to encode AUDIO+IMAGE into a VIDEO for YouTube

I need to generate a video containing a single image shown throughout the duration of the audio coming from an audio file. This video should be compatible with the parameters supported by YouTube.
I'm using ffmpeg.
I was trying various configurations explained right here and in other forums, but not all of them worked well.
I'm currently using these settings:
ffmpeg -i a.mp3 -loop 1 -i a.jpg -vcodec libx264 -preset slow -crf 20 -threads 0 -acodec copy -shortest a.mkv
Where a.mp3 contains the audio, a.jpg contains the image, and a.mkv is the name of the resulting video.
Using these parameters a.mkv works well on YouTube and can be played with Media Player Classic; but KMPlayer only recognizes the audio, showing a blank image as background.
My questions are two:
1 - Is there something wrong that causes KMPlayer to fail?
2 - Is there any configuration that can deliver the video faster, at the cost of some compression?
Thank you very much!
Try this:
ffmpeg -i a.mp3 -loop 1 -r 1 -i a.jpg -vcodec libx264 -preset ultrafast -crf 20 -threads 0 -acodec copy -shortest -r 2 a.mkv
Notable changes:
added -r 1
changed -preset slow to -preset ultrafast
added -r 2

Join images and audio to result video

I have a lot of images with different sizes (e.g. 1024x768 and 900x942) and a 30-second audio file (audio.mp3), and I need to create a video from them.
I'm trying it now with result%d.png (1 to 4) and audio.mp3:
ffmpeg -y -i result%d.png -i audio.mp3 -r 30 -b 2500k -vframes 900
-acodec libvo_aacenc -ab 160k video.mp4
The resulting video.mp4 is 30 seconds long, but the first 3 images are shown very quickly while the last image remains until the end of the audio.
Each image needs to be shown for an equal amount of time until the end of the audio. Does anyone know how to do this?
The number of images will vary sometimes.
FFMPEG version: 3.2.1-1
UBUNTU 16.04.1
Imagine you have an mp3 audio file named wow.mp3. In that case, the following command will get the duration of the mp3 in seconds:
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 wow.mp3
Once you have the duration in seconds (imagine I got 11.36 seconds): since I have 3 images, I want to run each image for 11.36/3 = 3.79 seconds, so please use the following:
ffmpeg -y -framerate 1/3.79 -start_number 1 -i ./swissGenevaLake_%d.jpg -i ./akib.mp3 -c:v libx264 -r 25 -pix_fmt yuv420p -c:a aac -strict experimental -shortest output.mp4
Here the images are ./swissGenevaLake_1.jpg, ./swissGenevaLake_2.jpg , and ./swissGenevaLake_3.jpg.
-framerate 1/3.79 means each image runs for 3.79 seconds.
-start_number 1 means, starts with image number one, meaning ./swissGenevaLake_1.jpg
-c:v libx264: video codec H.264
-r 25: output video framerate 25
-pix_fmt yuv420p: output video pixel format.
-c:a aac: encode the audio using aac
-shortest: end the video as soon as the audio is done.
output.mp4: output file name
Disclaimer: I have not tested merging images of multiple sizes.
References:
https://trac.ffmpeg.org/wiki/Create%20a%20video%20slideshow%20from%20images
https://trac.ffmpeg.org/wiki/Encode/AAC
http://trac.ffmpeg.org/wiki/FFprobeTips
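The arithmetic in that answer (total audio duration divided by image count, expressed as a -framerate fraction) can be sketched like this; the 11.36-second duration and 3-image count are the answer's example values:

```python
# Sketch: derive the per-image display time from the audio duration and
# express it as the 1/N fraction passed to -framerate.
audio_duration = 11.36   # seconds, as reported by ffprobe in the answer
image_count = 3

seconds_per_image = round(audio_duration / image_count, 2)   # seconds per image
framerate_arg = "1/%s" % seconds_per_image                   # argument for -framerate

print(seconds_per_image, framerate_arg)
```

The same calculation generalizes to any slideshow: recompute the fraction whenever the number of images or the audio length changes.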
For creating a video from n images + audio:
Step 1)
Create a video from the images, as:
Process proc = Runtime.getRuntime().exec(ffmpeg + " -y -r "+duration +" -i " + imagePath + " -c:v libx264 -r 15 -pix_fmt yuv420p -vf fps=90 " + imageVideoPath);
InputStream stderr = proc.getErrorStream();
InputStreamReader isr = new InputStreamReader(stderr);
BufferedReader br = new BufferedReader(isr);
String line = null;
while ((line = br.readLine()) != null)
{
//System.out.println(line);
}
int exitVal = proc.waitFor();
proc.destroy();
Where duration = number of images / duration of audio, i.e. how many images you want to show per second.
Step 2)
Process proc4VideoAudio = Runtime.getRuntime().exec(ffmpeg +" -i " + imageVideoPath + " -i "+ audioPath + " -map 0:0 -map 1:0 " + videoPath);
InputStream stderr1 = proc4VideoAudio.getErrorStream();
InputStreamReader isr1 = new InputStreamReader(stderr1);
BufferedReader br1 = new BufferedReader(isr1);
String line1 = null;
while ((line1 = br1.readLine()) != null)
{
//System.out.println(line1);
}
int exitVal1 = proc4VideoAudio.waitFor();
proc4VideoAudio.destroy();
Steps 1 and 2 can now be run in sequence. If you want to do it manually, only run the Runtime.getRuntime().exec(...) call; the stream-reading code below it is there to keep the two steps synchronized.
** Also note: telling FFmpeg to create the video in one step from images and audio gives you the same problem you mentioned, and otherwise the solution would be static for a fixed number of images per audio file.
imagePath, imageVideoPath, videoPath, and audioPath are all Strings.
This should help: https://trac.ffmpeg.org/wiki/Create%20a%20video%20slideshow%20from%20images
Using -r as an input option (e.g. -r 1/5), you can set the number of seconds you want each image to appear.

How can I extract audio from video with ffmpeg? [closed]

I tried the following command to extract audio from video:
ffmpeg -i Sample.avi -vn -ar 44100 -ac 2 -ab 192k -f mp3 Sample.mp3
but I get the following output
libavutil 50.15. 1 / 50.15. 1
libavcodec 52.72. 2 / 52.72. 2
libavformat 52.64. 2 / 52.64. 2
libavdevice 52. 2. 0 / 52. 2. 0
libavfilter 1.19. 0 / 1.19. 0
libswscale 0.11. 0 / 0.11. 0
libpostproc 51. 2. 0 / 51. 2. 0
SamplE.avi: Invalid data found when processing input
Can anyone help, please?
To extract the audio stream without re-encoding:
ffmpeg -i input-video.avi -vn -acodec copy output-audio.aac
-vn is no video.
-acodec copy says use the same audio stream that's already in there.
Read the output to see what codec it is, to set the right filename extension.
To encode high-quality MP3 or MP4 audio from a movie file (e.g. AVI, MP4, MOV) or an audio file (e.g. WAV), I find it's best to use -q:a 0 for variable bit rate, and it's good practice to specify -map a to exclude video/subtitles and grab only the audio:
ffmpeg -i sample.avi -q:a 0 -map a sample.mp3
If you want to extract a portion of audio from a video use the -ss option to specify the starting timestamp, and the -t option to specify the encoding duration, eg from 3 minutes and 5 seconds in for 45 seconds:
ffmpeg -i sample.avi -ss 00:03:05 -t 00:00:45.0 -q:a 0 -map a sample.mp3
The timestamps need to be in HH:MM:SS.xxx format or in seconds.
If you don't specify the -t option it will go to the end.
You can use the -to option instead of -t if you want to specify the end of the range rather than its duration, e.g. for the same 45 seconds: 00:03:05 + 45 = 00:03:50.
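The -ss/-t/-to arithmetic above is just timestamp conversion; a small sketch that checks the 00:03:05 + 45 = 00:03:50 example:

```python
# Sketch: convert between the HH:MM:SS timestamps used with -ss/-t/-to
# and plain seconds, then verify that -to equals -ss plus the duration.

def to_seconds(ts: str) -> float:
    h, m, s = ts.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

def to_timestamp(total: float) -> str:
    h, rem = divmod(int(total), 3600)
    m, s = divmod(rem, 60)
    return "%02d:%02d:%02d" % (h, m, s)

start = to_seconds("00:03:05")    # seconds into the file
end = to_timestamp(start + 45)    # equivalent -to value for a 45 s clip
print(start, end)
```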
Working example:
Download ffmpeg
Open a Command Prompt (Start > Run > CMD) or, on Mac/Linux, open a Terminal
cd (the change-directory command) into the directory containing the ffmpeg binary (ffmpeg.exe on Windows)
Issue your command and wait for the output file (or troubleshoot any errors)
Extract all audio tracks / streams
This puts all audio into one file:
ffmpeg -i input.mov -map 0:a -c copy output.mov
-map 0:a selects all audio streams only. Video and subtitles will be excluded.
-c copy enables stream copy mode. This copies the audio and does not re-encode it. Remove -c copy if you want the audio to be re-encoded.
Choose an output format that supports your audio format. See comparison of container formats.
Extract a specific audio track / stream
Example to extract audio stream #4:
ffmpeg -i input.mkv -map 0:a:3 -c copy output.m4a
-map 0:a:3 selects audio stream #4 only (ffmpeg starts counting from 0).
-c copy enables stream copy mode. This copies the audio and does not re-encode it. Remove -c copy if you want the audio to be re-encoded.
Choose an output format that supports your audio format. See comparison of container formats.
Extract and re-encode audio / change format
Similar to the examples above, but without -c copy. Various examples:
ffmpeg -i input.mp4 -map 0:a output.mp3
ffmpeg -i input.mkv -map 0:a output.m4a
ffmpeg -i input.avi -map 0:a -c:a aac output.mka
ffmpeg -i input.mp4 output.wav
Extract all audio streams individually
The input in this example has 4 audio streams. Each audio stream will be output as a separate, individual file.
ffmpeg -i input.mov -map 0:a:0 output0.wav -map 0:a:1 output1.wav -map 0:a:2 output2.wav -map 0:a:3 output3.wav
Optionally add -c copy before each output file name to enable stream copy mode.
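The repeated "-map 0:a:N outputN.wav" pattern above can be generated for any stream count; a sketch (in practice the count would come from ffprobe, here it is hard-coded to the example's four streams):

```python
# Sketch: build the per-stream "-map 0:a:N outputN.wav" arguments for a
# file with n audio streams, matching the four-stream example above.

def per_stream_args(n: int) -> list:
    args = []
    for i in range(n):
        # one -map selector plus one output name per audio stream
        args += ["-map", "0:a:%d" % i, "output%d.wav" % i]
    return args

cmd = ["ffmpeg", "-i", "input.mov"] + per_stream_args(4)
print(" ".join(cmd))
```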
Extract a certain channel
Use the channelsplit filter. Example to get the Front Right (FR) channel from a stereo input:
ffmpeg -i stereo.wav -filter_complex "[0:a]channelsplit=channel_layout=stereo:channels=FR[right]" -map "[right]" front_right.wav
channel_layout is the channel layout of the input. It is not automatically detected so you must provide the layout name.
channels lists the channel(s) you want to extract.
See ffmpeg -layouts for audio channel layout names (for channel_layout) and channel names (for channels).
Stream copy mode (-c copy) cannot be used together with filtering, so the audio must be re-encoded.
See FFmpeg Wiki: Audio Channels for more examples.
What's the difference between -map and -vn?
ffmpeg has a default stream selection behavior that will select 1 stream per stream type (1 video, 1 audio, 1 subtitle, 1 data).
-vn is an old, legacy option. It excludes video from the default stream selection behavior. So audio, subtitles, and data are still automatically selected unless told not to with -an, -sn, or -dn.
-map is more complicated but more flexible and useful. -map disables the default stream selection behavior and ffmpeg will only include what you tell it to with -map option(s). -map can also be used to exclude certain streams or stream types. For example, -map 0 -map -0:v would include all streams except all video.
See FFmpeg Wiki: Map for more examples.
Errors
Invalid audio stream. Exactly one MP3 audio stream is required.
MP3 only supports 1 audio stream. The error means you are trying to put more than 1 audio stream into MP3. It can also mean you are trying to put non-MP3 audio into MP3.
WAVE files have exactly one stream
Similar to above.
Could not find tag for codec in stream #0, codec not currently supported in container
You are trying to put an audio format into an output that does not support it, such as PCM (WAV) into MP4.
Remove -c copy, choose a different output format (change the file name extension), or manually choose the encoder (such as -c:a aac).
See comparison of container formats.
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
This is a useless, generic error. The actual, informative error should immediately precede this generic error message.
Seems like you're extracting audio from a video file & downmixing to stereo channel.
To just extract audio (without re-encoding):
ffmpeg.exe -i in.mp4 -vn -c:a copy out.m4a
To extract audio & downmix to stereo (re-encoding is required here, since -ac has no effect in stream copy mode):
ffmpeg.exe -i in.mp4 -vn -ac 2 -c:a aac out.m4a
To generate an mp3 file, you'd re-encode audio:
ffmpeg.exe -i in.mp4 -vn -ac 2 out.mp3
-c (select codecs) & -map (select streams) options:
-c:a -> select best supported audio (transcoded)
-c:a copy -> best supported audio (copied)
-map 0:a -> all audio from 1st (audio) input file (transcoded)
-map 0:0 -> 1st stream from 1st input file (transcoded)
-map 1:a:0 -> 1st audio stream from 2nd (audio) input file (transcoded)
-map 1:a:1 -c:a copy -> 2nd audio stream from 2nd (audio)input file (copied)
ffmpeg -i sample.avi will give you the audio/video format info for your file. Make sure you have the proper libraries configured to parse the input streams. Also, make sure that the file isn't corrupt.
The command line is correct and works on a valid video file. I would make sure that you have installed the correct library to work with mp3: install LAME, or try another audio codec.
Usually
ffmpeg -formats
or
ffmpeg -codecs
would give sufficient information so that you know more.
To encode mp3 audio ffmpeg.org shows the following example:
ffmpeg -i input.wav -codec:a libmp3lame -qscale:a 2 output.mp3
I extracted the audio from a video just by replacing input.wav with the video filename. The 2 selects VBR quality level 2, which averages about 190 kbit/s. You can see the other quality levels at the link above.
For people looking for the simpler way to extract audio from a video file while retaining the original video file's parameters, you can use:
ffmpeg -i <video_file_name.extension> <audio_file_name.extension>
For example, running:
ffmpeg -i screencap.mov screencap.mp3
extracts an mp3 audio file from a mov video file.
Here's what I just used:
ffmpeg -i my.mkv -map 0:3 -vn -b:a 320k my.mp3
Options explanation:
my.mkv is a source video file, you can use other formats as well
-map 0:3 means I want stream number 3 from the video file (ffmpeg counts from 0). Put your N there - video files often have multiple audio streams; you can omit it or use -map 0:a to take the default audio stream. Run ffprobe my.mkv to see what streams the video file has.
my.mp3 is the target audio filename; ffmpeg figures out I want an MP3 from its extension. In my case the source audio stream is ac3 DTS and just copying wasn't what I wanted
320k is the desired target bitrate
-vn means I don't want video in target file
Creating an audio book from several video clips
First, extract the audio (as `.m4a`) from a bunch of h264 files:
for f in *.mp4; do ffmpeg -i "$f" -vn -c:a copy "$(basename "$f" .mp4).m4a"; done
the -vn output option disables video output (automatic selection or mapping of any video stream). For full manual control see the -map option.
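The filename handling in that shell loop (swap the .mp4 extension for .m4a, as basename "$f" .mp4 does) can be sketched in Python; the chapter filenames are hypothetical:

```python
# Sketch: mirror the shell loop's output naming - for each .mp4 input,
# derive the .m4a output name by swapping the extension.
from pathlib import Path

def audio_name(video: str) -> str:
    # with_suffix replaces the final extension, like basename "$f" .mp4 + ".m4a"
    return str(Path(video).with_suffix(".m4a"))

names = [audio_name(f) for f in ["chapter1.mp4", "chapter2.mp4"]]
print(names)
```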
Optional
If there's an intro of, say, 40 seconds, you can skip it with the -ss parameter:
for f in *.m4a; do ffmpeg -i "$f" -ss 00:00:40 -c copy crop/"$f"; done
To combine all files in one:
ffmpeg -f concat -safe 0 -i <(for f in ./*.m4a; do echo "file '$PWD/$f'"; done) -c copy output.m4a
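The process substitution above just feeds the concat demuxer a list of file 'path' lines. Generating that list in Python looks like this; note the concat demuxer's single-quote escaping rule ('\''), which the shell one-liner would trip over if a filename contained a quote:

```python
# Sketch: build the list-file lines read by ffmpeg's concat demuxer.
# Each line is: file 'path'. A literal single quote inside the path must
# be written as '\'' per the demuxer's quoting rules.

def concat_line(path: str) -> str:
    return "file '%s'" % path.replace("'", r"'\''")

lines = [concat_line(p) for p in ["./a.m4a", "./rock 'n' roll.m4a"]]
print("\n".join(lines))
```

Write those lines to a text file and pass it with -f concat -safe 0 -i list.txt instead of the process substitution.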
If the audio wrapped inside the AVI is not in mp3 format to start with, you may need to specify -acodec mp3 as an additional parameter, or whatever your mp3 codec is (on Linux systems it's probably -acodec libmp3lame). You may also get the same effect, platform-agnostically, by instead specifying -f mp3 to "force" the format to mp3, although not all versions of ffmpeg still support that switch. Your mileage may vary.
To extract without conversion I use a context menu entry - as file manager custom action in Linux - to run the following (after having checked what audio type the video contains; example for video containing ogg audio):
bash -c 'ffmpeg -i "$0" -map 0:a -c:a copy "${0%%.*}".ogg' %f
which is based on the ffmpeg command ffmpeg -i INPUT -map 0:a -c:a copy OUTPUT.
I have used -map 0:1 in that without problems, but, as @LordNeckbeard said in a comment, "Stream 0:1 is not guaranteed to always be audio. Using -map 0:a instead of -map 0:1 will avoid ambiguity."
Use -b:a instead of -ab, as -ab is outdated now; also make sure your input file path is correct.
To extract audio from a video I have used the command below, and it works fine.
String[] complexCommand = {"-y", "-i", inputFileAbsolutePath, "-vn", "-ar", "44100", "-ac", "2", "-b:a", "256k", "-f", "mp3", outputFileAbsolutePath};
Here,
-y - Overwrite output files without asking.
-i - FFmpeg reads from an arbitrary number of input “files” specified by the -i option
-vn - Disable video recording
-ar - sets the sampling rate for audio streams if encoded
-ac - Set the number of audio channels.
-b:a - Set the audio bitrate
-f - format
Check out this for my complete sample FFmpeg android project on GitHub.

converting video files to .flv format in grails

I am developing a web application in Grails, in which I have implemented video playback. For playing video I used the flashplayer plugin, and it is working. Now I am planning to let users upload their own videos. After a video file is uploaded, how do I convert it to .flv format?
or
Does Flash Player play all video formats? I tried with a .wmv file and it is not working.
Can anyone provide help on this?
Thanks
Flash can only play FLVs, and h.264 encoded videos (e.g. mp4, f4v). You can convert video into any of these formats using FFmpeg. If you're using Windows, you can get some pre-built binaries here.
Sample command-line that should convert inputfile.avi to an flv with an audio bitrate of 48kbps, and a video bitrate of 224kbps (might need to substitute libmp3lame for mp3 depending on version of ffmpeg):
ffmpeg -i inputfile.avi -s 640x480 -y -f flv -acodec mp3 -ac 1 -ab 48k -b 224k -ar 22050 outputfile.flv
Sample for h.264 / aac:
ffmpeg -i inputfile.avi -s 640x480 -y -f mp4 -vcodec libx264 -acodec libfaac -ab 48k -b 224k -ar 22050 outputfile.mp4
