I'm developing a feature to upload video (taken from an iPhone) to my server.
However, I have no idea how to implement it.
Any Objective-C or Swift source code would be welcome.
I have 120fps or 240fps video (it's slo-mo).
When I play these videos back on my iPhone 6, I can see the slo-mo effect.
(I know the playback frame rate is 30fps.)
I want to convert that video from 120/240fps to 30fps before uploading it to my server. (I don't mean adjusting the playback frame rate; I mean transcoding the video to 30fps.)
Additionally, I want to find the start point and end point of the slo-mo effect.
(Maybe the iPhone records this information in the video binary; it might reside in the file's header.)
Well, I guess if I use the ffmpeg library, it should be easy(?).
So any suggestions are welcome.
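One thing you can check right away: ffprobe, which ships with ffmpeg, reports the capture frame rate of the source file. A minimal sketch (input.MOV stands in for your clip):
ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate,avg_frame_rate -of default=noprint_wrappers=1 input.MOV
A 240fps slo-mo clip should report a rate near 240/1 here, even though it plays back in slow motion.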
Here are ffmpeg command lines I use to import into Adobe Premiere:
Video:
ffmpeg -i <input MOV> -filter "setpts=4.0*PTS" -r 30 -an videofilename.mp4
The 4.0 in -filter means that the iPhone slo-mo video was shot at 120fps, i.e. 4 × 30fps; the related -r 30 parameter sets the output to 30fps. For example, if you want to export at 60fps, use setpts="2.0*PTS" with -r 60
-an discards the audio stream
Audio:
ffmpeg -i <input MOV> -vn audiofilename.mp3
(-vn drops the video stream. setpts is a video-only filter, so it has no place here; the audio comes out at its original speed, which leads to the catch below.)
At this point you have the video and audio streams in separate files. You can probably use ffmpeg to recombine them.
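For example, a remux along these lines should work, using the filenames from the commands above (note the catch below: the audio must first be slowed down to match the video):
ffmpeg -i videofilename.mp4 -i audiofilename.mp3 -c:v copy -c:a aac -map 0:v:0 -map 1:a:0 combined.mp4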
But there's a catch: the iPhone records the audio stream at normal speed, meaning the converted soundtrack will be four times (in my example) shorter than the converted video track. If you use Premiere: import both the video and audio files, right-click the sound track in your timeline, choose "Speed/Duration", and set speed = 25% (or 50% for 120fps to 60fps)
For 120fps footage with 44100 audio sample rate:
ffmpeg -i in.MOV -filter_complex "[0:v]setpts=4.0*PTS[v];[0:a]asetrate=11025,aresample=44100[a]" \
-map "[v]" -map "[a]" -r 30 out.mp4
For 240fps footage with 44100 audio sample rate:
ffmpeg -i in.MOV -filter_complex "[0:v]setpts=8.0*PTS[v];[0:a]asetrate=5512.5,aresample=44100[a]" \
-map "[v]" -map "[a]" -r 30 out.mp4
For 240fps footage with 48000 audio sample rate:
ffmpeg -i in.MOV -filter_complex "[0:v]setpts=8.0*PTS[v];[0:a]asetrate=6000,aresample=48000[a]" \
-map "[v]" -map "[a]" -r 30 out.mp4
The quality of the resulting video will be low. Increasing quality is a science in itself (see https://trac.ffmpeg.org/wiki/Encode/H.264); currently the -crf parameter (with a low number in the 0–51 range) seems to be the simplest way to increase quality, at the price of file size and encoding time. For example, use -crf 18 before out.mp4
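For instance, applied to the 120fps command above:
ffmpeg -i in.MOV -filter_complex "[0:v]setpts=4.0*PTS[v];[0:a]asetrate=11025,aresample=44100[a]" \
-map "[v]" -map "[a]" -r 30 -crf 18 out.mp4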
Based on the FFmpeg documentation: https://trac.ffmpeg.org/wiki/How%20to%20speed%20up%20/%20slow%20down%20a%20video
With the help of https://superuser.com/questions/292833/how-to-change-audio-frequency for the audio slowdown (chaining atempo=2.0,atempo=2.0 gives a horrible-sounding result).
Related
For this demo, it looks like there's a --source option to change it to the video source I want. However, I've been trying with no luck to use my own video; the output images always come out as a black screen. There's also no documentation on how to format the videos. Has anyone here successfully used their own videos with this demo?
We didn't document the video format for edgetpu_demo since it wasn't designed for changing videos. However, here is the command to format the video (you'll need to install ffmpeg):
$ ffmpeg -i your_video.mp4 -an -vf "scale=960:540,format=yuv420p" -colorspace bt470bg -color_range tv -color_primaries smpte170m -color_trc bt709 -c:v libx264 -profile:v baseline -brand mp42 video_stream.mp4
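Then, assuming the --source option works the way you describe (I'm going by your description, not the demo's docs), you'd point the demo at the converted file:
edgetpu_demo --source video_stream.mp4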
I want to download M3U8 (HLS) file chunks and store the video (after decrypting it) for later viewing. I have made a demo that plays an M3U8 file, but I want to download the video data for later viewing.
You can use ffmpeg to download and decode the HTTP Live Streaming (HLS) stream:
ffmpeg -i http://example.org/playlist.m3u8 -c copy -bsf:a aac_adtstoasc output.mp4
There is an iOS version of ffmpeg available.
This Perl script is a good fetcher: https://github.com/osklil/hls-fetch
Steps:
wget https://raw.githubusercontent.com/osklil/hls-fetch/master/hls-fetch
chmod +x hls-fetch
./hls-fetch --playlist "THE_URL"
Replace THE_URL with the full URL of your M3U8 playlist (or try other options with --help).
Bonus: if you're missing Perl's JSON module (as I was), simply run sudo cpan JSON.
There's also a Chrome extension that assembles a whole video from m3u8 chunks: HLS Video Saver.
From iOS 10, you can use AVFoundation to download HTTP Live Streaming (HLS) assets to an iOS device.
https://developer.apple.com/documentation/avfoundation/media_assets_playback_and_editing/asset_manipulation/downloading_and_playing_offline_http_live_streaming_content?changes=_4
Or use this GitHub project: HLSion.
url: https://mnmedias.api.telequebec.tv/m3u8/29880.m3u8
Step-1: ffmpeg -i 'https://mnmedias.api.telequebec.tv/m3u8/29880.m3u8' -vf scale=w=1280:h=720:force_original_aspect_ratio=decrease -c:a aac -ar 48000 -b:a 128k -c:v h264 -profile:v main -crf 20 -g 48 -keyint_min 48 -sc_threshold 0 -b:v 2500k -maxrate 2675k -bufsize 3750k -hls_time 10 -hls_playlist_type vod -hls_segment_filename my_hls_video/720p_%03d.ts my_hls_video/720p.m3u8
Step-2: What each option does:
-i 'https://mnmedias.api.telequebec.tv/m3u8/29880.m3u8'
:=> Set https://mnmedias.api.telequebec.tv/m3u8/29880.m3u8 as the video source.
-vf "scale=w=1280:h=720:force_original_aspect_ratio=decrease"
:=> Scale video to maximum possible within 1280x720 while preserving aspect ratio
-c:a aac -ar 48000 -b:a 128k
:=> Set the audio codec to AAC with a 48kHz sample rate and a bitrate of 128k
-c:v h264
:=> Set the video codec to H264, which is the standard codec for HLS segments
-profile:v main
:=> Set the H264 profile to main, which means support on modern devices
-crf 20
:=> Constant Rate Factor, a high-level setting for overall quality
-g 48 -keyint_min 48
:=> IMPORTANT: create a key frame (I-frame) every 48 frames (~2 seconds); this will later affect correct slicing of segments and alignment of renditions
-sc_threshold 0
:=> Don't create key frames on scene change - only according to -g
-b:v 2500k -maxrate 2675k -bufsize 3750k
:=> Limit the video bitrate; these values are rendition-specific and depend on your content type
-hls_time 10
:=> Segment target duration in seconds - the actual length is constrained by key frames
-hls_playlist_type vod
:=> Adds the #EXT-X-PLAYLIST-TYPE:VOD tag and keeps all segments in the playlist
-hls_segment_filename my_hls_video/720p_%03d.ts
:=> Explicitly define the segment file names
my_hls_video/720p.m3u8
:=> Path of the playlist file; the .m3u8 extension also tells ffmpeg to output HLS
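To sanity-check the result, you can play the generated playlist locally with ffplay, which ships with ffmpeg:
ffplay my_hls_video/720p.m3u8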
I just tried downloading a video from some https .m3u8 file following the tutorial https://www.oneminuteinfo.com/2016/10/download-ts-files-and-convert-to-mp4.html and it worked. None of the Chrome plugins or ffmpeg worked for me.
I'm building an iOS app where re-encoding and trimming a video in the background is necessary.
I cannot use the iOS libraries (AVFoundation) since they rely on the GPU, and no app can access the GPU while backgrounded.
Due to this issue I switched to FFmpeg, compiled it (alongside libx264), and integrated it into my iOS app.
To sum things up what I need is:
Trim the video for the first 10 seconds
re-scale the video
After a couple of weeks - and banging my head against the wall quite often - I managed to:
split the video container into streams (demuxing)
copy the audio stream into the output stream (no decoding or encoding)
decode the video stream, run the necessary filters per frame, encode each resulting frame and remux it to the output stream (I decode the h264, filter it, re-encode it back to h264)
If I were to run ffmpeg through the command line I would run it like this:
ffmpeg -i input.MOV -ss 0 -t 10 -vf scale=320:240 -c:v libx264 -preset ultrafast -c:a copy output.mkv
My concern is how to trim the video. Although I could count the number of video frames I encode/decode and, based on the FPS, decide when to stop, I cannot do the same with the audio since I'm only demuxing and remuxing it.
Ideally - before scaling the video - I would run a process to trim the video by copying the 10 seconds of each stream (video and audio) into a new video container.
How do I achieve this through the AV libraries?
I know you can do this with one call to ffmpeg:
ffmpeg -i input.MOV -filter_complex "[0:v]trim=duration=10.0,scale=320:240[vid];[0:a]atrim=duration=10.0[aud]" -map "[vid]" -map "[aud]" -c:v libx264 -preset ultrafast -c:a libvo_aacenc -b:a 128k -flags +aic+mv4 output.mkv
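If you'd rather do the trim as a separate pass first (copying the first 10 seconds of each stream into a new container, as you describe), a rough stream-copy sketch would be:
ffmpeg -i input.MOV -t 10 -c copy trimmed.MOV
Note that with -c copy the video cut can only land on a key frame, so the duration may be slightly off; re-encoding (as in the command above) is what gives you a frame-accurate trim.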
I am making a site in Ruby in which I have a series of images (almost like a PowerPoint), and I need to automatically convert those images into one continuous video file (mov, mpeg) that shows each image for 5 seconds or so. Anyone have any clues on where to start?
I'm also open to using another language if there are tools to get the job done.
You could probably use FFmpeg to do this. Here's an example from the FFmpeg Wiki on the subject:
ffmpeg -framerate 1/5 -i img%03d.jpg -c:v libx264 -r 30 -pix_fmt yuv420p -movflags +faststart out.mp4
What this would do is...
-framerate 1/5
Read input at one frame per five seconds (as an input option, -framerate must come before -i)...
-i img%03d.jpg
...from a series of JPEG files named img001.jpg, img002.jpg and so on...
-c:v libx264
...then turn it into H.264/MPEG-4 AVC...
-r 30
...at thirty frames per second...
-pix_fmt yuv420p
...with the YUV 4:2:0 (yuv420p) pixel format, for broad player compatibility...
-movflags +faststart
...after encoding completes, relocate some data to the beginning of the file so playback can begin before the file is completely downloaded...
out.mp4
...and store it into out.mp4.
If you were using this from Ruby, you'd likely launch a subprocess. The flags would be similar if you really want a (QuickTime) .mov file instead of an H.264 MPEG-4.
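For instance, a .mov variant of the same command might look like this (untested sketch; the container change is the only difference):
ffmpeg -framerate 1/5 -i img%03d.jpg -c:v libx264 -r 30 -pix_fmt yuv420p out.mov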
Has anyone tried this?
What's the best practice for this?
FMS live streams use the RTMP protocol:
ffmpeg -i rtmp://server/path -acodec copy -vcodec copy -y captured.flv
Here, we are saving the whole stream to an FLV file, which is Flash's static movie file format and so can always preserve all RTMP audio and video codecs without conversion.
You can then extract any frames you want, e.g.
ffmpeg -i captured.flv -ss starttime -vframes 1 -f image2 -vcodec mjpeg captured.jpg
If you are ambitious and know exactly what time offsets and intervals you want to capture in advance, you can do both steps at once, e.g. one frame every second:
ffmpeg -i rtmp://server/path -r 1 -f image2 -vcodec mjpeg captured%d.jpg
These command lines have not been tested and may need fixing, but they should give you a good impression.