I am using Ruby 1.9.3. It would be helpful if I could get any gem name or method to convert a .ppt file to .mp4.
PPT >> PDF >> PNG >> MP4
You can use the docsplit gem to convert the presentation to PDF (Docsplit.extract_pdf) and then extract PNG images from the PDF:
Docsplit.extract_images('test.pdf', :format => [:png])
and ffmpeg to convert the PNGs to an MP4:
ffmpeg -framerate 1/5 -i img_%d.png -c:v libx264 -vf scale=1280:-2 -pix_fmt yuv420p output.mp4
If your player (e.g. VLC) cannot handle a non-standard frame rate, add -r 30:
ffmpeg -framerate 1/5 -i img_%d.png -r 30 -c:v libx264 -vf scale=1280:-2 -pix_fmt yuv420p output.mp4
Check the ffmpeg documentation for more options.
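The whole pipeline can be scripted from Ruby by shelling out to ffmpeg once docsplit has produced the PNGs. A minimal sketch, assuming numbered image names, a 1280 target width, and 5 seconds per slide (all illustrative):

```ruby
# Sketch: build the ffmpeg command that turns numbered PNGs into a slideshow.
# Assumes docsplit has already produced img_1.png, img_2.png, ...
def slideshow_command(pattern, output, seconds_per_slide = 5)
  ["ffmpeg",
   "-framerate", "1/#{seconds_per_slide}",  # show each slide for N seconds
   "-i", pattern,
   "-c:v", "libx264",
   "-vf", "scale=1280:-2",                  # keep dimensions even for yuv420p
   "-pix_fmt", "yuv420p",
   output]
end

# Docsplit.extract_images('test.pdf', :format => [:png])
cmd = slideshow_command("img_%d.png", "slides.mp4")
puts cmd.join(" ")
# system(*cmd)  # uncomment to run; the array form avoids shell quoting issues
```

Passing the argument array to `system` (rather than one big string) sidesteps shell escaping of the `%d` pattern.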
I have one image and one video. I would like to prepend the image to the video so that the stream opens with a 5-second intro frame before the actual video.
I found a command that almost fulfills my requirement, but it adds the image at the end of the video and I need it at the beginning. Here is the command:
ffmpeg -i 1.mp4 -loop 1 -t 5 -i 2.jpg -f lavfi -t 5 -i anullsrc -filter_complex "[0]split[base][full];[base]trim=0:5,drawbox=t=fill[base];[1][base]scale2ref=iw:ih:force_original_aspect_ratio=decrease:flags=spline[2nd][base];[base][2nd]overlay='(W-w)/2':'(H-h)/2'[padded];[full][0:a][padded][2:a]concat=n=2:v=1:a=1[v][a]" -c:v libx264 -c:a aac -strict -2 -map "[v]" -map "[a]" output.mp4
The image should be resized dynamically to match the video resolution.
The best solution will be appreciated from the bottom of my heart.
ffmpeg -i 1.mp4 -loop 1 -t 5 -i 2.jpg -f lavfi -t 5 -i anullsrc -filter_complex "[0:v]trim=0:5,drawbox=t=fill[base];[1][base]scale2ref=iw:ih:force_original_aspect_ratio=decrease:flags=spline[2nd][base2];[base2][2nd]overlay='(W-w)/2':'(H-h)/2'[padded];[padded][2:a][0:v][0:a]concat=n=2:v=1:a=1[v][a]" -c:v libx264 -c:a aac -map "[v]" -map "[a]" output.mp4
No need for the split filter.
Do not re-use labels. Each output label must be unique. For example, you used [base] several times. So I renamed the next one [base2].
Order of video is determined by the order given to the concat filter. I re-arranged it so [padded][2:a] plays before [0:v][0:a].
-strict -2 hasn't been needed since 2015 (it was for the AAC encoder). You don't need that unless your FFmpeg is very old.
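If the intro length needs to vary, the corrected command can be assembled in code. A hedged sketch (file names and the 5-second default are illustrative; the filtergraph is the one from the corrected command above):

```ruby
# Sketch: build the image-intro ffmpeg command with a configurable duration.
def intro_command(video, image, secs = 5)
  filter =
    "[0:v]trim=0:#{secs},drawbox=t=fill[base];" \
    "[1][base]scale2ref=iw:ih:force_original_aspect_ratio=decrease:flags=spline[2nd][base2];" \
    "[base2][2nd]overlay='(W-w)/2':'(H-h)/2'[padded];" \
    "[padded][2:a][0:v][0:a]concat=n=2:v=1:a=1[v][a]"
  ["ffmpeg", "-i", video,
   "-loop", "1", "-t", secs.to_s, "-i", image,       # looped still image
   "-f", "lavfi", "-t", secs.to_s, "-i", "anullsrc", # silent audio for the intro
   "-filter_complex", filter,
   "-c:v", "libx264", "-c:a", "aac",
   "-map", "[v]", "-map", "[a]", "output.mp4"]
end

puts intro_command("1.mp4", "2.jpg", 5).join(" ")
```

Note the duration appears in three places (the image loop, the silent audio, and the trim), which is exactly why parameterizing it is handy.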
I'm trying to stream a webpage captured with PhantomJS to YouTube using FFmpeg.
This is the command I use:
xvfb-run phantomjs --web-security=no render.js | ffmpeg -threads 0 -y -v verbose -c:v png -r 30 -f image2pipe -i - -f lavfi -i anullsrc -strict -2 -acodec aac -ac 1 -ar 44100 -b:a 128k -c:v libx264 -s 1280x720 -pix_fmt yuv420p -f flv "rtmp://a.rtmp.youtube.com/live2/key";
And the render.js code:
http://pastebin.com/raw/X9gv8iGH
It looks like it's streaming, but no feed is received by YouTube, and I can't see where the problem is.
Output from my console:
Try this:
phantomjs --web-security=no render.js | ffmpeg -threads 0 -y -v verbose -c:v png -framerate 33 -f image2pipe -i - -f lavfi -i anullsrc -strict -2 -acodec aac -ac 1 -ar 44100 -b:a 128k -c:v libx264 -s 1280x720 -pix_fmt yuv420p -g 60 -r 30 -f flv "rtmp://a.rtmp.youtube.com/live2/key";
Parameter -framerate:
You can specify two frame rates: input and output.
Set the input frame rate with the -framerate input option (before -i). If no -framerate is specified, the input is read at the default of 25 fps.
Set the output frame rate for the video stream with -r after -i, or by using the fps filter.
So in your case the input frame rate should be 1/period_from_phantomjs, which is 1000/30 ≈ 33.33 fps.
As for -g 60, that adds a key frame every 2 seconds, which is probably a requirement of the YouTube streaming API (I know it is for Facebook).
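The two numbers above come from simple arithmetic, which can be sketched as:

```ruby
# Sketch of the arithmetic behind -framerate 33 and -g 60 above.
frame_period_ms = 30                        # render.js emits a frame every 30 ms (assumed)
input_framerate = 1000.0 / frame_period_ms  # frames per second at the input
output_fps      = 30                        # forced by -r 30
gop             = 2 * output_fps            # -g: one key frame every 2 seconds

puts input_framerate.round(2)  # 33.33
puts gop                       # 60
```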
avconv -y -i input.avi -b 915k -an -f mp4 -ar 44100 -f s16le -ac 2 -i /dev/zero -acodec libfaac -ab 128k -strict experimental -shortest -vcodec libx264 output.mp4 -loglevel fatal
First of all, this seems to be an old version of avconv, since the command line has changed since then (but not too much).
So, let's break it down:
-y
This answers 'yes' to questions like "do you want to overwrite the output file".
-i input.avi
This gives the program the file input.avi as an input
-b 915k
This sets the video bitrate to 915 kilobits per second.
-an
This removes all the audio from the output.
-f mp4
Sets up MP4 as the format of the output file
-ar 44100
This sets the audio sampling rate of the following input file to 44100 Hz.
-f s16le
This sets the format of the audio of the following input file.
-ac 2
This sets number of channels of audio to two.
-i /dev/zero
This adds another input that reads from /dev/zero, i.e. an endless stream of zero bytes.
-acodec libfaac
This reencodes the audio (silence most likely) with libfaac
-ab 128k
Setting the audio bitrate to 128 Kbps
-strict experimental
Allows avconv to use nonstandard approaches while encoding.
-shortest
Ends encoding when the shortest of the inputs has ended. This is needed because /dev/zero will never end.
-vcodec libx264
This sets the library to do the video encoding. The codec will be (unfortunately) h264
output.mp4
This is the name of the output file
-loglevel fatal
Only fatal messages will be logged, and nothing else.
In the future you may find man avconv to be your friend.
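As a quick sanity check on what those bitrates mean in practice, the combined video and audio bitrate gives a rough output size. A small sketch (the 60-second duration is illustrative):

```ruby
# Rough size estimate for the command above: -b and -ab are in kilobits per second.
def estimated_size_mb(video_kbps, audio_kbps, duration_s)
  total_kbits = (video_kbps + audio_kbps) * duration_s
  total_kbits / 8.0 / 1000.0  # kilobits -> kilobytes -> megabytes
end

# 915 kbit/s video + 128 kbit/s audio for one minute:
puts estimated_size_mb(915, 128, 60).round(1)  # ~7.8 MB
```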
I'm using ffmpeg - streaming local file to crtmpserver (or other server):
ffmpeg.exe -re -i file.avi -vcodec libx264 -preset veryfast -acodec aac -strict experimental -f flv rtmp://256.257.0.0:1935/flvplayback/live
How to change the resolution? File has a resolution 1920x1080, but I want to send only 640x360.
-s 640x360 does not work.
Use -vf scale=640:360.
ffmpeg.exe -re -i file.avi -vf scale=640:360 -vcodec libx264 -preset veryfast -acodec aac -strict experimental -f flv rtmp://256.257.0.0:1935/flvplayback/live
I'm trying to stream raw YUV frames from an array generated in a C++ program to video using FFmpeg. Can anyone point me in the right direction?
To stream piped yuv420p planar frames to RTMP, try e.g.:
ffmpeg -f rawvideo -c:v rawvideo -s 1920x1080 -r 25 -pix_fmt yuv420p -i - -c:v libx264 -f flv rtmp:///live/myStream.sdp
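The generator here is C++, but the principle is the same from any language: write raw yuv420p frames to ffmpeg's stdin, sized exactly to match -s and -pix_fmt. A sketch, assuming 1920x1080 at 25 fps as in the command above (the write loop is commented out so it doesn't require a running server):

```ruby
# Sketch: feed raw yuv420p frames to ffmpeg over a pipe.
# yuv420p stores a full-resolution Y plane plus quarter-resolution U and V
# planes, so each frame is width * height * 3 / 2 bytes.
def yuv420p_frame_size(width, height)
  width * height * 3 / 2
end

WIDTH, HEIGHT, FPS = 1920, 1080, 25
frame = "\x00" * yuv420p_frame_size(WIDTH, HEIGHT)  # one dummy frame of zero bytes

cmd = "ffmpeg -f rawvideo -s #{WIDTH}x#{HEIGHT} -r #{FPS} -pix_fmt yuv420p " \
      "-i - -c:v libx264 -f flv rtmp:///live/myStream.sdp"
# IO.popen(cmd, "wb") { |ffmpeg| 250.times { ffmpeg.write(frame) } }  # ~10 s of video
```

If the frame size doesn't match -s exactly, ffmpeg will misinterpret the byte stream, so getting width * height * 3 / 2 right is the critical part.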