I use RTMP to stream from my iPhone to my server running FMS. I followed some tutorials, and now I have the FLV playback file in /webroot/live_recorded.
What I want to do is the following:
1) Stream from iPhone to server using RTMP: DONE
2) Stream back to iPhone using HLS: I don't understand the docs, and I've read hundreds of threads, but none helped me. I would like the user to be able to play the stream from the beginning, as it is stored on my server. Thanks.
I actually can't speak to FMS; I work with Wowza, and I suppose you'll need something like its nDVR feature, or to have someone write a special module for you that splits the live stream into small recordings, so that you play a playlist of those recorded files from your iPhone.
Hopefully someone will recommend a true solution, not just some assumptions :)
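If ffmpeg is available on the server, one way to get step 2 working without any DVR feature is to repackage the recorded FLV as a VOD HLS playlist, which iOS plays back from the beginning. A minimal sketch, assuming the iPhone stream is already H.264/AAC (otherwise replace -c copy with a transcode) and using placeholder file names:

ffmpeg -i /webroot/live_recorded/stream.flv -c copy -f hls -hls_time 10 -hls_list_size 0 -hls_playlist_type vod /webroot/hls/stream.m3u8

-hls_list_size 0 keeps every segment in the playlist, and -hls_playlist_type vod marks the playlist as finished so players expose the full seek range.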
I'm trying to get a livestream working on YouTube. I want to stream 360° content with H.264 video and AAC audio. The stream is started with the YouTube Live API from my mobile app, and librtmp is used to deliver video and audio packets. I easily get to the point where the livestream health is good and my broadcast and stream are bound successfully.
However, when I try to transition to "testing" like this:
YoutubeManager.this.youtube.liveBroadcasts().transition("testing", liveBroadcast.getId(), "status").execute();
I get stuck on the "testStarting" status every time (100% reproducible), while I expect it to change to "testing" after a few seconds so I can then change it to "live".
I don't know what's going on: in the YouTube Live Control Room everything seems fine, but the encoder won't start.
Is this a common issue? Is there a way to access the encoder logs? If you need more information, feel free to ask.
Regards.
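For anyone debugging this, a minimal sketch of polling the broadcast lifecycle and the bound stream's health while stuck. It assumes the same youtube and liveBroadcast objects as above, an older client library whose list() takes a comma-separated parts string, and a hypothetical boundStreamId variable; the stream's healthStatus configuration issues are the closest thing to encoder logs the API exposes:

import com.google.api.services.youtube.model.LiveBroadcastListResponse;
import com.google.api.services.youtube.model.LiveStreamHealthStatus;
import com.google.api.services.youtube.model.LiveStreamListResponse;

// Poll the broadcast lifecycle status ("testStarting", "testing", "live", ...).
LiveBroadcastListResponse broadcasts = youtube.liveBroadcasts()
        .list("status").setId(liveBroadcast.getId()).execute();
System.out.println(broadcasts.getItems().get(0).getStatus().getLifeCycleStatus());

// Poll the bound stream's health; boundStreamId is a hypothetical variable here.
LiveStreamListResponse streams = youtube.liveStreams()
        .list("status").setId(boundStreamId).execute();
LiveStreamHealthStatus health = streams.getItems().get(0).getStatus().getHealthStatus();
System.out.println(health.getStatus()); // good / ok / bad / noData
if (health.getConfigurationIssues() != null) {
    health.getConfigurationIssues().forEach(issue ->
            System.out.println(issue.getReason() + ": " + issue.getDescription()));
}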
I found a temporary fix!
I noticed two things:
When the autostart option was on, the stream changed its state to liveStarting as soon as I stopped sending data. This suggested that the encoder was trying to start but was too slow to do so before some other data packet was received (I guess).
When I tried to stream to the "Stream now" URL, as #noogui suggested, it worked! So I checked what the difference was between the Stream Now and event configurations.
It turned out I just had to activate the low-latency option, as is done by default in the Stream Now configuration.
I consider it a temporary fix because I don't really know why the encoder isn't starting otherwise, and because it doesn't work with the autostart option... So I hope it won't break again if YouTube makes another change to their encoder.
So, if you have to work with the YouTube API, good luck, guys!
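For reference, a sketch of turning that option on through the API instead of the Control Room, assuming the same youtube and liveBroadcast objects as above; note that enableLowLatency was later superseded by a latencyPreference field, so which setter exists depends on your client library version:

import com.google.api.services.youtube.model.LiveBroadcastContentDetails;

// Enable low latency on the broadcast before transitioning to "testing".
LiveBroadcastContentDetails details = liveBroadcast.getContentDetails();
if (details == null) {
    details = new LiveBroadcastContentDetails();
    liveBroadcast.setContentDetails(details);
}
details.setEnableLowLatency(true);
youtube.liveBroadcasts().update("contentDetails", liveBroadcast).execute();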
Good day everyone!
So, as the title suggests, I am developing an app with functionality similar to that of Periscope and Facebook Live video streaming. Here is the end goal:
A Broadcasting device [user]
EC2 Instance [Hosting an ffmpeg transcoder]
CloudFront Distribution [CDN]
1 to n viewers of the live feed
I've been doing a lot of googling, and what I can't seem to figure out is:
As you send chunks of video to the server from the Broadcaster, how do you create an .m3u8 playlist when you don't have all the chunks of video yet (e.g. the device sends its first 5-second chunk of video)?
It seems a .m3u8 file is usually created from an .mp4 file that is already complete and then broken down into chunks. But I'm sending chunks of the video to the server as they arrive, so how can it generate the .m3u8 file while more chunks are still coming from the Broadcaster, so that the watchers/clients can continuously stitch together the video chunks?
I'll be happy to clarify this question further. Thanks!
If you take a look at the docs for ffmpeg's segment muxer, you can specify the .m3u8 to be output, and you can also tell it to update the .m3u8 as it goes. It might look something like this:
ffmpeg -i infile.mp4 -c:v copy -c:a copy -map 0 -f ssegment -segment_list playlist.m3u8 -segment_list_type hls -segment_list_size 10 -segment_list_flags +live -segment_time 4 outchunk%07d.ts
Note that segment_list_size is the maximum number of chunks referenced in the .m3u8 file at one time, and segment_list_flags tells ffmpeg that this is a live stream.
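To make the "not all chunks yet" part concrete: a live playlist only ever references the most recent segments and, unlike a VOD playlist, has no closing EXT-X-ENDLIST tag, so players keep re-fetching it to discover new chunks. A playlist produced by a command like the one above might look roughly like this (sequence numbers and durations are illustrative):

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:4
#EXT-X-MEDIA-SEQUENCE:42
#EXTINF:4.000000,
outchunk0000042.ts
#EXTINF:4.000000,
outchunk0000043.ts
#EXTINF:4.000000,
outchunk0000044.ts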
I think your confusion is that you are trying to send HLS fragments to the server. Don't. Send a stream via another protocol like RTMP, then let the server convert it to HLS.
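As a sketch of that server side, assuming a reasonably recent ffmpeg and placeholder addresses and paths: ffmpeg can itself accept a single incoming RTMP connection with -listen 1 (a dedicated ingest server like nginx-rtmp is more common in production) and write a rolling HLS playlist:

ffmpeg -listen 1 -i rtmp://0.0.0.0:1935/live/stream -c copy -f hls -hls_time 4 -hls_list_size 10 -hls_flags delete_segments /var/www/hls/stream.m3u8

-hls_flags delete_segments removes chunks that have fallen out of the playlist so the disk doesn't fill up.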
Is it possible to create a live event by simply using a video file instead of a web camera? I don't see an option like this in live event creation.
For doing this directly on YouTube: no.
For doing it by encoding a video file and pushing it to YouTube in real time: yes.
How to do it:
Try Wirecast Play. It's just like a live-feed console, but free with some limits. Other RTMP-capable tools may also work; one of them is ffmpeg. I have tried it before and can confirm it works, but it's a backend with only a command line. For more functionality you need a front-end app (which can stream/pipe to ffmpeg).
For ffmpeg's RTMP support, read this:
https://www.ffmpeg.org/ffmpeg-protocols.html#rtmp
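A minimal sketch of the ffmpeg route, where input.mp4 is your video file and YOUR-STREAM-KEY is a placeholder for the key shown in the live event's ingestion settings; -re makes ffmpeg read the file at its native frame rate so it behaves like a live feed:

ffmpeg -re -i input.mp4 -c:v libx264 -preset veryfast -b:v 2500k -g 60 -c:a aac -b:a 128k -f flv rtmp://a.rtmp.youtube.com/live2/YOUR-STREAM-KEY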
I am streaming some FTA channels from a TBS6985 DVB-S2 quad tuner PCIe card (http://www.tbsdtv.com/products/tbs6985-dvb-s2-quad-tuner-pcie-card.html) using MediaPortal (http://www.team-mediaportal.com/). I then get an RTSP URL from MediaPortal for the channel I timeshift, and with VLC I can send that stream to a media server (FMS) to get HLS, HDS, RTMP, and RTSP.
I have three servers running Erlyvideo (Flussonic), so they take care of the delivery. I want some alternate solution besides that.
I have tried some methods to work this out, including:
VLC
IPTVL
DVB Dream
But the quality is better when I stream something as a file; only FMLE works well with a live stream, and for that we can only use DirectShow-enabled devices like the Osprey cards (http://www.viewcast.com/products/osprey-cards).
I am doing this on Windows.
If anyone has more methods or wants to share their setup, please do so.
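One more option, sketched here under the assumption that ffmpeg runs on your Windows box and with placeholder addresses for the MediaPortal RTSP URL and the FMS application: ffmpeg can pull the timeshift stream and repush it to FMS directly, with no VLC or DirectShow device in between:

ffmpeg -rtsp_transport tcp -i rtsp://mediaportal-host:554/stream1 -c:v libx264 -preset veryfast -c:a aac -f flv rtmp://fms-host/live/channel1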
I am in the process of building a Rails/Flex application which requires audio to be recorded and then stored in our Amazon S3 account. I have found no alternative to using some form of RTMP server for recording audio through Flash, but our hosting environment will not allow us to install anything like FMS, Red5, etc.
Is there any existing Ruby/Rails RTMP solution that will allow audio recording? If not, is it possible for Rails to at least intercept the RTMP stream, so that I can hope to reference Red5's source or something for parsing the data (long shot, I know)?
The other alternative I can think of is hosting a Red5 server on another host and communicating with our Rails app once the saving/uploading is done, which is not preferred.
Am I going to have any luck here?
I was able to get this to work:
1) Flash Player 10.1 can expose the microphone's samples as a ByteArray
2) I captured this ByteArray and used Adobe's WavWriter class (from a microphone tutorial they put together) to create a new ByteArray in proper WAV format
3) Sent this over to Rails through RubyAMF
4) Used something along the lines of:
# rubyamf_params carries the AMF payload; wav_data is an array of byte values
wav_data = rubyamf_params[0][:wav_data]
# open in binary write mode ('wb'); File.new defaults to read-only
f = File.new('c:/hello.wav', 'wb')
f << wav_data.pack('c' * wav_data.length)
f.close
Once I've got this WAV data, it won't be too far of a stretch to convert it to an MP3. Woo!
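For that MP3 step, one option is shelling out to ffmpeg, assuming it is installed on the server with libmp3lame support (the path matches the example above):

ffmpeg -i c:/hello.wav -codec:a libmp3lame -qscale:a 2 c:/hello.mp3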