How does video on demand over P2P work? - network-programming

In simple terms, how do video on demand and streaming video work over P2P? I assume videos are cut up into small pieces (a few seconds each) and these pieces are transferred in chunks. As soon as a user finishes watching a chunk, it is deleted from their computer. Wouldn't this mean that if no user on the network was currently watching a certain chunk (time slice?) of the video, it would be permanently lost? If not, how does VoD over P2P work? If you store all the chunks, then it's exactly the same as normal file sharing with P2P.
Let me know if any parts of the question are unclear and I'll try to improve it.

P2P Live: each user downloads chunks and simultaneously uploads them to other users watching the same stream. More users means better quality.
source: P2P TV - Wikipedia
P2P VOD: this is more challenging to achieve since, as you noticed, there is less simultaneity in the way users watch the video. In this case each user is expected to contribute a reasonable amount of disk space to store chunks for other users. The strategies for deciding what to store in each user's cache are the subject of ongoing research.
If you search for P2P VOD you will find a lot of white papers presenting different approaches. There are too many links to list here.
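To make the caching idea concrete, here is a toy Swift sketch of a peer-side chunk cache: the peer donates a bounded amount of space and evicts the least-recently-requested chunks first. The names and the eviction policy are illustrative only; real P2P VOD systems use far more sophisticated, popularity-aware replacement strategies.

import Foundation

// A toy peer-side cache: the peer donates space for `capacity` chunks
// and evicts the least-recently-requested chunk when full.
final class ChunkCache {
    private let capacity: Int
    private var chunks: [String: Data] = [:]   // chunkID -> chunk bytes
    private var order: [String] = []           // least recently used first

    init(capacity: Int) { self.capacity = capacity }

    // Called when this peer downloads a chunk (to watch or to share).
    func store(id: String, data: Data) {
        if chunks[id] == nil, chunks.count >= capacity, let victim = order.first {
            order.removeFirst()
            chunks[victim] = nil               // evict the coldest chunk
        }
        chunks[id] = data
        touch(id)
    }

    // Called when another peer requests a chunk from us.
    func serve(id: String) -> Data? {
        guard let data = chunks[id] else { return nil }
        touch(id)                              // requests keep a chunk warm
        return data
    }

    private func touch(_ id: String) {
        order.removeAll { $0 == id }
        order.append(id)
    }
}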

Related

mp4 pseudo-streaming implementation server & iOS side

I'm trying to learn how to do pseudo-streaming for MP4 files. I can't think of a good way to do it, but I just found a great example app that has a similar implementation (except I don't understand how it does it yet).
Here's the scenario:
Alice can send a video to Bob in the app
Bob can open it immediately and see Alice's video from the beginning, while Alice is still recording it
Also, Bob can choose to view the video later, after Alice has finished recording. But Bob should be able to view the video almost instantly, without waiting too long, even when the whole video is large.
Thus, my hunch is that it's using some sort of pseudo-streaming for MP4.
Here are the screenshots of the requests Alice's phone makes while using the example app:
The screenshots suggest the example app makes a series of PATCH requests to its server, every 0.x seconds, and that the very last request PATCHes the updated moov information for the MP4.
Thus my question is: how is this implemented (any educated guess is welcome)? Or is there an existing protocol/iOS encoder that already does this that I don't know about?
Thanks a lot!
Reading the text of your question rather than the title, I think there are a number of likely steps:
Alice is recording video
She is sending the video to a streaming server
Alice notifies Bob that the stream is available and sends the URL on the streaming server that Bob can access to retrieve the stream
Bob's video client requests the stream, using range requests to download it chunk by chunk
Having a server in the middle like this is a typical approach for any stream that may have more than one client watching it.
More sophisticated streaming servers may also support delivering the stream at different bit rates, and even encoded with different codecs, for maximum device reach.
There are commercial (e.g. https://www.wowza.com) and open source streaming servers (e.g. https://gstreamer.freedesktop.org) you can look at to get more info on streaming servers and to see some examples.
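To illustrate the range-request step above, here is a minimal Swift sketch of fetching one chunk of a remote file with an HTTP Range header via URLSession. The URL and chunk size are made up; a real client would also handle retries and servers that ignore Range headers.

import Foundation

// Fetch bytes [offset, offset + length - 1] of a remote file.
func fetchChunk(from url: URL, offset: Int, length: Int,
                completion: @escaping (Data?) -> Void) {
    var request = URLRequest(url: url)
    // Ask the server for just this byte range.
    request.setValue("bytes=\(offset)-\(offset + length - 1)",
                     forHTTPHeaderField: "Range")
    URLSession.shared.dataTask(with: request) { data, response, _ in
        // 206 Partial Content means the server honored the range.
        let status = (response as? HTTPURLResponse)?.statusCode
        completion(status == 206 ? data : nil)
    }.resume()
}

// Example: download the first 512 KB of a (hypothetical) video file.
let url = URL(string: "https://example.com/videos/alice.mp4")!
fetchChunk(from: url, offset: 0, length: 512 * 1024) { chunk in
    print("got \(chunk?.count ?? 0) bytes")
}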

Mixing streams concept in kurento media server

Can anybody explain the basic concept of mixing in Kurento Media Server?
Mixing is mentioned among the things Kurento provides, so I would like to know what Kurento Media Server actually mixes:
Does it mix multiple streams generated by a user into one stream and broadcast that stream to the other receiving users? If it does, how do I use this concept?
Is Kurento able to receive multiple streams through one PeerConnection object per user, i.e., can one WebRtcEndpoint receive or send multiple streams by mixing them into one?
Edit Regarding Answer Update
So, I can use the mixing concept by using a HubPort.
Now, does this HubPort support different media types? For example, if one user is streaming his screen share and, at the same time, also streaming his audio, does the Composite element mix both streams into one and send that single stream to all other users?
The concept of mixing refers to combining several media streams into one. This is best understood with a conference room. Without mixing, every user has one stream going out and one coming in for each other participant (except himself). That leaves you with 1 + (n - 1) = n streams per participant, and n * n streams total, where n is the number of participants.
Mixing all streams in the media server lets you save bandwidth, which is ideal in scenarios like mobile devices connected through 3G. The mixer combines all the streams into one, so each user sends one stream and receives one stream with all the other participants' media combined (except his own). Just two streams per user saves a lot of bandwidth: with 10 participants, that's 20 streams instead of 100.
This, however, takes a toll on CPU consumption, as the server needs to adapt the videos to the new resolution, combine them, and so on; there is some processing involved.
On the other hand, the concept you are referring to is multicast: the ability to send several streams through one WebRTC connection. This doesn't save bandwidth, nor does it combine all the streams into one, but it helps you reduce the number of endpoints in your deployment. This is on our roadmap, but I can't tell you when it will arrive.
EDIT
Mixing can be achieved in the media server through the Composite media element. You can check this other SO answer for more info on how to use that media element.

How does HLS video on iOS pick which rendition to start out with?

I heard from a WWDC video that it measures the speed of previous HLS downloads to pick which rendition to use, but how does it choose which one to use at the very start? Is the download speed of the master playlist (the list of renditions), or of a specific rendition's files, used at all? I want to make sure that I'm not tricking the video player into using too high a quality rendition by serving the metadata files instantly from a cache.
It picks the first entry:
The first entry in the variant playlist will be played at the initiation of a stream and is used as part of a test to determine which stream is most appropriate. The order of the other streams is irrelevant. Therefore, the first bit rate in the playlist should be the one that most clients can sustain.
From the Bit rate recommendations section of Apple's Technical Note TN2224:
Best Practices for Creating and Deploying HTTP Live Streaming Media for the iPhone and iPad
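In practice, that means putting the rendition most clients can sustain first in the variant playlist. For example (hypothetical paths, in the style of the playlists in the next answer):

#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=350000
stream-1-350000/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1800000
stream-4-1800000/prog_index.m3u8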

Capping the bit rate of an HLS stream programmatically on iOS devices

I have an HD video that I am streaming to an iOS app. I want to give the user the ability to cap the maximum stream quality (low, medium, high), considering the video is several GB when streamed at the max bit rate. Along the same lines, I would like to automatically choose a setting based on cellular vs. Wi-Fi connection, for the obvious data-cap reasons.
I have no problem getting the current bit rate by accessing AVPlayerItemAccessLogEvent, but I am lost when it comes to forcing a lower-quality stream.
Is this even possible with HLS? Thanks!
If you are using AVPlayer, the right way is
preferredPeakBitRate
From the Apple doc here: "The desired limit, in bits per second, of network bandwidth consumption for this item."
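A minimal Swift sketch of using it (the URL and the 1.2 Mbps cap are arbitrary examples):

import AVFoundation

let url = URL(string: "https://example.com/movie/prog_index.m3u8")!
let item = AVPlayerItem(url: url)

// Cap rendition selection at roughly 1.2 Mbps: AVPlayer won't switch
// to variants whose declared BANDWIDTH exceeds this value.
item.preferredPeakBitRate = 1_200_000

// A value of 0 (the default) means no cap.
// item.preferredPeakBitRate = 0

let player = AVPlayer(playerItem: item)
player.play()

You could set a lower cap when the device is on cellular and reset it to 0 on Wi-Fi, which covers the automatic case from the question.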
It's not exactly dynamic, but I did solve this problem by creating four different m3u8 playlists. I labeled each playlist to represent a stream quality (low, medium, high, extreme), and the user selects one based on the desired max quality. The extreme playlist includes the URLs of all qualities; the high playlist has fewer URLs than the extreme, the medium fewer than the high, and the low fewer than the medium. Whenever the user selects a different quality, I just switch the base stream playlist to the respective quality playlist URL.
Here is a simple example of the four different playlists.
HLS_Movie_Extreme.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=64000
stream-0-64000/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=350000
stream-1-350000/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=800000
stream-2-800000/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1200000
stream-3-1200000/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1800000
stream-4-1800000/prog_index.m3u8
HLS_Movie_High.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=64000
stream-0-64000/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=350000
stream-1-350000/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=800000
stream-2-800000/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1200000
stream-3-1200000/prog_index.m3u8
HLS_Movie_Medium.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=64000
stream-0-64000/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=350000
stream-1-350000/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=800000
stream-2-800000/prog_index.m3u8
HLS_Movie_Low.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=64000
stream-0-64000/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=350000
stream-1-350000/prog_index.m3u8
Like I said, it's not dynamic, but you could use various techniques to detect the user's network connection and point to the desired quality playlist if needed; a sketch of the switching logic follows below. For me, it was sufficient to get the user's preference and adjust the stream accordingly.
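A minimal Swift sketch of that switching logic, assuming the playlist names above live under a hypothetical base URL (a real app would also handle errors and in-progress downloads):

import AVFoundation

enum StreamQuality: String {
    case low = "HLS_Movie_Low"
    case medium = "HLS_Movie_Medium"
    case high = "HLS_Movie_High"
    case extreme = "HLS_Movie_Extreme"
}

// Rebuild the player item from the playlist matching the chosen quality.
// Replacing the item restarts buffering, so we seek back to where the
// user was before the switch.
func switchQuality(_ quality: StreamQuality, on player: AVPlayer) {
    let url = URL(string: "https://example.com/\(quality.rawValue).m3u8")!
    let resumeTime = player.currentTime()
    player.replaceCurrentItem(with: AVPlayerItem(url: url))
    player.seek(to: resumeTime)
    player.play()
}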

optimizing video uploads at different signal strengths

I have a question. My app is a short-video sharing application, just like Vine, but I've run into problems when it's used on the subway or in other places with a weak signal: uploads sometimes fail, making for a poor user experience.
I'm a newbie at network programming and iOS. I did a lot of searching on Google and have some general sense of the problem, so let me sum up my findings; please help with some suggestions.
My requirements are: 1. support resuming when an upload is interrupted; 2. upload successfully on a weak signal. I do NOT need to think about realtime problems or how to compress the video; treating the video as a plain file is totally fine. BTW, the server is REST style; I use POST to upload data.
Questions:
Which is the better way to meet my requirements: using a stream (meaning a data stream like NSOutputStream & NSInputStream, not live video streaming; the video is only played after all of it has uploaded), or dividing the whole file into several chunks and uploading it chunk by chunk?
Someone said using a stream is good for resource efficiency, since the stream reads the file into memory and controls the size of the buffer, and after setting up the connection with the server we can use a delegate to handle failures, so it is easy to use.
Uploading chunk by chunk is said to be good for speed, but this statement puzzles me: after successfully uploading one chunk, we need to release the connection resources, set up another connection, and then upload the next chunk. I think this preparation costs time.
If uploading by chunks, what size is good? One video file is almost 1 MB; someone said 8 KB is a safe choice, but…
Since the app needs to adapt to different signal strengths, is there any way to do that? For example, could the chunk size depend on the measured bandwidth, or something similar?
Is there any private API that already supports resuming interrupted uploads, or any Apple API that can support this? My app needs to run on iOS 5 and above, so I can NOT use NSURLSession.
Is concurrent uploading a way to speed things up? If so, how do I implement it, and is there any API available?
Thank you in advance for helping a newbie like me.
Your question touches on a lot of topics. iOS doesn't have a public API to stream video (such as the FaceTime components). The main issue here is that sending frame by frame requires a lot of network traffic; if you use the normal video writer instead, you get hardware compression, which is a lot better. There's more, and you can check here: Realtime Audio/Video Streaming FROM iPhone to another device (Browser, or iPhone), Upload live streaming video from iPhone like Ustream or Qik, How send to stream video from iOS device to server? and here
If realtime is not your problem, I would suggest you just use a good network manager such as MKNetworkKit or AFNetworking 2.0. They will take care of most of the aspects you asked about.
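For what it's worth, on later iOS versions the chunk-by-chunk approach from the question can be sketched directly with URLSession. This is only an illustration, not how the libraries above do it; the endpoint, header names, and the 256 KB chunk size are all assumptions.

import Foundation

// Upload a file in fixed-size chunks via POST. If a chunk fails, only
// that chunk is retried, which is what makes the upload resumable.
// A real implementation would cap retries, back off, and ask the
// server which chunks it already has.
func uploadChunks(of fileURL: URL, to endpoint: URL,
                  chunkSize: Int = 256 * 1024) throws {
    let data = try Data(contentsOf: fileURL)
    var offset = 0
    while offset < data.count {
        let end = min(offset + chunkSize, data.count)

        var request = URLRequest(url: endpoint)
        request.httpMethod = "POST"
        // Tell the server where this chunk belongs in the file.
        request.setValue("\(offset)", forHTTPHeaderField: "X-Chunk-Offset")
        request.setValue("\(data.count)", forHTTPHeaderField: "X-Total-Size")
        request.httpBody = Data(data[offset..<end])

        // Send synchronously for clarity; production code would not block.
        let semaphore = DispatchSemaphore(value: 0)
        var succeeded = false
        URLSession.shared.dataTask(with: request) { _, response, _ in
            succeeded = (response as? HTTPURLResponse)?.statusCode == 200
            semaphore.signal()
        }.resume()
        semaphore.wait()

        // Advance only on success; otherwise the same chunk is retried.
        if succeeded { offset = end }
    }
}

To adapt to signal strength, you could time each chunk and grow or shrink chunkSize based on the measured throughput.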
