HTTP Live Streaming without encoding - iOS

I have an MPEG-TS H.264 video stream of a live TV channel that I want to live stream to the iPhone. HLS requires splitting the stream into segments (e.g. 10 s each) and serving them via an M3U8 playlist, and for this I am currently using ffmpeg and the m3u8-segmenter available on the internet. However, I do not want to transcode with ffmpeg, as I have memory and processor limitations on my hardware. Is it possible to only segment the MPEG-TS video and send it directly to the iPhone?
I have tried many ways but have been unable to do so. I am using a Linux-based system.
Please help me: what is the procedure for live streaming to the iPhone without transcoding the video?
Thanks

The best way to go about this is to cut out FFmpeg entirely. Although you can coerce FFmpeg to not transcode (by using -c copy), since the video is already in MPEG-TS format straight off of the livestream, it's best to use it directly.
Since it looks like the video is coming over HTTP, you can use curl to print it to stdout:
curl http://localhost:6954/myvideo.ts | ./m3u8-segmenter -i - -d 10 -p outputdir/prefix -m outputdir/output.m3u8 -u http://domain.com
Or, if you want to use wget instead of curl, the command is similar:
wget -O - http://localhost:6954/myvideo.ts | ./m3u8-segmenter -i - -d 10 -p outputdir/prefix -m outputdir/output.m3u8 -u http://domain.com
Either wget or curl will likely already be installed on your system.
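If you do want to keep FFmpeg in the pipeline, the -c copy route mentioned above can also produce HLS segments without transcoding. A rough sketch, assuming a build of FFmpeg that includes the segment muxer (input URL and output paths are placeholders):
ffmpeg -i http://localhost:6954/myvideo.ts -c copy -map 0 -f segment -segment_time 10 -segment_format mpegts -segment_list outputdir/output.m3u8 outputdir/prefix%03d.ts
Because -c copy only remuxes, the CPU and memory cost stays close to the curl + segmenter approach above.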

Related

Using vlc command line for capture device from a network URL

I'm trying to capture a video using vlc.
The procedure is standard using the GUI:
enter the URL, e.g. http://some_site_some_video.mp4/playlist.m3u8, in the network protocol capture device tool (Ctrl+N), then on the next screen enter the path to save the file, and that's it.
I tried the VLC docs, and the closest command I found was vlc -I dummy -vvv input_stream --sout,
but
vlc -I dummy -vvv http://some_site_some_video.mp4/playlist.m3u8 --sout home/me/videos
didn't work.
Is it the right command to use?
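For reference, --sout expects a stream-output chain rather than a bare directory. A rough sketch of saving the stream to a file, assuming an MP4 mux and a placeholder output path (not tested against this particular URL):
vlc -I dummy -vvv http://some_site_some_video.mp4/playlist.m3u8 --sout "#standard{access=file,mux=mp4,dst=/home/me/videos/output.mp4}" vlc://quit
The trailing vlc://quit makes VLC exit once the input ends.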

Extracting frames from a video created by muxing a bunch of images together changes the integrity of the frames

I am trying to transfer a bunch of images as a video. What I am doing is simply muxing these static images into a video using the following command:
ffmpeg -framerate 1 -i %02d.jpg -codec copy 1.mkv
After this, I verify the integrity of my static images and the frames in the video using -
ffmpeg -i %02d.jpg -f framehash -
and
ffmpeg -i 1.mkv -map 0:v -f framehash -
I get the same hashes, so it means I have archived the images properly. Next, I send this video to my friend, who extracts the frames using -
ffmpeg -i 1.mkv mkv%02d.jpg
After extracting, the hashes are no longer the same, which means the integrity is lost.
How can I extract the frames as they are so that the integrity is not lost?
Also, if you know of any other way to achieve what I am trying to do, please advise.
Here are the hashes.
The command
ffmpeg -i 1.mkv mkv%02d.jpg
doesn't "extract" the JPEGs. It decodes the stored frames and re-encodes them using the MJPEG encoder.
Use
ffmpeg -i 1.mkv -c copy mkv%02d.jpg
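To confirm the copied frames are untouched, the framehash check from the question can be rerun on the extracted files (filenames assumed to follow the pattern above) and compared against the hashes of the originals:
ffmpeg -i mkv%02d.jpg -f framehash -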

Mixing video and audio files in Rails

I have a Rails app, and in a controller I receive two files as params. One is audio (WAV) and the other is video (WebM).
I need to mix them together so that the output is a video (MP4) with the audio already mixed in.
How can I do this?
As #Meier pointed out, using Ruby itself is not the way to go; use an external program instead.
Once ffmpeg is installed on the host, you can run the following command from Rails to get an MKV output video file:
`ffmpeg -i #{video_file.path} -i #{audio_file.path} -acodec copy -vcodec copy -f matroska output.mkv`
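If you specifically need MP4 output as asked, the WebM video (typically VP8/VP9) and WAV audio usually cannot be stream-copied into an MP4 container, so a re-encode is needed. A rough sketch in the same style, with assumed codec choices:
`ffmpeg -i #{video_file.path} -i #{audio_file.path} -vcodec libx264 -acodec aac -shortest output.mp4`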

VLC recording rtsp stream

I have a problem recording an RTSP stream with VLC player. My method works on Mac OS X, but not on Windows. Command line:
vlc -vvv rtsp://admin:admin#192.168.0.151/live/h264/ --sout="#transcode{vcodec=mp4v,vfilter=canvas{width=800,height=600}}:std{access=file,mux=mp4,dst=C:\123.mp4}"
On Mac OS it works fine, but under Windows it creates an unreadable file. MediaInfo output:
General
Complete name : C:\123.mp4
Format : MPEG-4
Format profile : Base Media
Codec ID : isom
File size : 1.08 MiB
Any suggestions?
Seems like your destination path is not correct. Try this:
vlc -vvv rtsp://admin:admin#192.168.0.151/live/h264/ --sout="#transcode{vcodec=mp4v,vfilter=canvas{width=800,height=600}}:std{access=file,mux=mp4,dst=C:\\123.mp4}"
For Linux users, ffmpeg alone works straight away.
If you want to watch the stream while recording, write to a .mkv instead of a .mp4.
This example will overwrite the file video.mp4 in your home folder without asking due to the -y param, and the original codecs are kept.
ffmpeg -i rtsp://192.168.42.1/live -vcodec copy -acodec copy -y ~/video.mp4
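The MKV variant mentioned above, which stays readable while the recording is still running (same assumed camera URL):
ffmpeg -i rtsp://192.168.42.1/live -vcodec copy -acodec copy -y ~/video.mkv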
NB: This example URL is for an Ambarella Xiaomi Mijia 4K camera. Like many Wi-Fi IP cameras, you have to activate the stream via telnet first; for this particular model, the command to send before reading the stream via rtsp:// is:
echo '{"msg_id":257,"token":0}' | telnet 192.168.42.1 7878

rtmpdump options confused between -p and -r

Does anyone know the difference between the -p and the -r options in the rtmpdump utility for media streaming? I am confused because I think that the RTMP server should be the server streaming the video, but then rtmpdump asks for the -p option, which is the page URL...
As I understand it:
-r (or --rtmp) is for specifying the actual location of the content stream/server.
-p (or --pageUrl) is for the URL of the website where the SWF player (which is the -W or --swfUrl argument) was embedded.
So, if you would ordinarily find your stream by going to http://example.com/video, that would go under -p. The flash player embedded on or accessed from that page would go under -W. The server that the flash player streams the content from belongs to -r.
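Putting that together, a hypothetical invocation with placeholder URLs might look like:
rtmpdump --rtmp "rtmp://media.example.com/app/stream" --pageUrl "http://example.com/video" --swfUrl "http://example.com/player.swf" -o output.flv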
See man rtmpdump or rtmpdump --help for all the options.
