Can anybody explain the basic concept of mixing in Kurento Media Server?
As it is mentioned among the things Kurento provides, there is the term mixing. So, I would like to know what Kurento Media Server mixes. Specifically:
Does it mix the multiple streams generated by a user into one stream and broadcast that stream to the other receiving users? If it does, how do I use this feature?
Is Kurento able to receive multiple streams through one PeerConnection object per user, i.e., can Kurento receive or send multiple streams at one WebRtcEndpoint by mixing those streams into one stream?
Edit Regarding Answer Update
So, I can use the mixing concept by using a HubPort.
Now, does this HubPort support different MediaTypes? For example, if one user is streaming his screen share and at the same time is also streaming his audio, does the Composite element mix both streams into one and send a single stream to all other users?
The concept of mixing refers to combining several media streams into one. This can be better understood with a conference room. In other setups, every user would have one stream going out, and another coming in for each other participant (except himself). That leaves you with 1 + (n - 1) = n streams per participant, and n * n streams total, where n is the number of participants; for a 10-person room, that's 100 streams.
Mixing all streams in the media server allows you to save bandwidth, which is ideal in scenarios like mobile devices connected through 3G, for instance. What the mixer does is combine all the streams into one, so each user sends one stream and receives one stream with all the other participants' media combined (except his own). Just two streams per user (20 total in that 10-person room) saves a lot of bandwidth.
This, however, takes a toll on CPU consumption, as it's necessary to adapt the videos to the new resolution, combine them, and re-encode the result... there is some processing involved.
On the other hand, the concept you are referring to is multicast, which is the ability to send several streams through one WebRTC connection. This doesn't save bandwidth, nor does it combine all the streams into one, but it helps you reduce the number of endpoints present in your deployment. This is on our roadmap, but I can't tell you when it will be available.
EDIT
Mixing can be achieved in the media server through the Composite media element. You can check this other SO answer for more info on how to use that media element.
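For reference, here is a minimal sketch of that wiring with the Kurento Java client. The server URL and participant names are assumptions, and SDP negotiation with the browsers is omitted:

```java
// A minimal sketch of Composite-based mixing with the Kurento Java client.
// The server URL and participant names are assumptions; SDP negotiation
// with the browsers is omitted.
import org.kurento.client.Composite;
import org.kurento.client.HubPort;
import org.kurento.client.KurentoClient;
import org.kurento.client.MediaPipeline;
import org.kurento.client.WebRtcEndpoint;

public class MixingRoomSketch {
    public static void main(String[] args) {
        // Connect to a running Kurento Media Server (URL is an assumption).
        KurentoClient kurento = KurentoClient.create("ws://localhost:8888/kurento");
        MediaPipeline pipeline = kurento.createMediaPipeline();

        // One Composite hub per room; it mixes all connected ports into one stream.
        Composite composite = new Composite.Builder(pipeline).build();

        // Per participant: one WebRtcEndpoint plus one HubPort on the Composite.
        WebRtcEndpoint aliceEndpoint = new WebRtcEndpoint.Builder(pipeline).build();
        HubPort alicePort = new HubPort.Builder(composite).build();
        aliceEndpoint.connect(alicePort); // Alice's media into the mixer
        alicePort.connect(aliceEndpoint); // the mixed stream back out to Alice

        // Repeat the same wiring for Bob, Carol, ... Each participant then
        // sends one stream and receives one mixed stream.
    }
}
```

Regarding the edit to the question: the Composite handles both media types, mixing the audio of its connected sources and composing their video into a grid, so a participant sending screen video plus audio contributes both to the single mixed stream.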
I'm trying to learn how to do pseudo streaming for MP4 files. I can't think of a good way to do it, but I just found a great example app with a similar implementation (though I don't yet understand how it does it).
Here's the scenario:
Alice can send a video to Bob in the app
Bob can open it immediately and see Alice's video from the beginning, while Alice is still recording it
Also, Bob can choose to view the video later, after Alice has finished recording. But Bob should be able to view the video instantly without waiting too long, even when the video file is large.
Thus, my hunch is that it's using some sort of pseudo streaming for MP4.
Here are the screenshots of the requests Alice's phone makes while using the example app:
The screenshots suggest that the example app makes a series of PATCH requests to its server, every 0.x seconds, and that the very last request PATCHes the moov information for the MP4.
Thus my question is: how is this implemented (any educated guess is welcome)? Or is there an existing protocol or iOS encoder that already does this that I don't know about?
Thanks a lot!
Reading the text of your question rather than the title, I think there are a number of likely steps:
Alice is recording video
She is sending the video to a streaming server
Alice notifies Bob that the stream is available and sends him the URL on the streaming server that he can use to retrieve the stream
Bob's video client requests the stream, using range requests to download it chunk by chunk (see the sketch after this answer)
Having a server in the middle like this is a typical approach for any stream that may have more than one client watching it.
More sophisticated streaming servers may also support delivering the stream in different bit rates, and even encoded with different codecs, for maximum device reach.
There are commercial (e.g. https://www.wowza.com) and open source streaming servers (e.g. https://gstreamer.freedesktop.org) you can look at to get more info on streaming servers and to see some examples.
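As a rough illustration of the range-request step, here is a minimal Java sketch; the URL and the 1 MiB chunk size are assumptions rather than details of the app in question:

```java
// A minimal sketch of the range-request step: fetching one chunk of the
// video over HTTP. The URL and the 1 MiB chunk size are assumptions.
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RangeRequestSketch {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://streaming.example.com/videos/alice.mp4");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();

        // Ask only for the first 1 MiB; later requests move the window forward.
        conn.setRequestProperty("Range", "bytes=0-1048575");

        // A server that supports range requests replies 206 Partial Content.
        if (conn.getResponseCode() == HttpURLConnection.HTTP_PARTIAL) {
            try (InputStream in = conn.getInputStream()) {
                byte[] buffer = new byte[8192];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    // Hand the bytes to the player/demuxer here.
                }
            }
        }
        conn.disconnect();
    }
}
```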
In simple terms, how do video on demand and streaming video work over P2P? I assume videos are cut up into small pieces (a few seconds each) and these pieces are transferred in chunks. As soon as a user has finished watching a chunk, it is deleted from their computer. Wouldn't this mean that if no user on the network was currently watching a certain chunk (time slice?) of the video, it would be permanently lost? If not, how does VoD over P2P work? If you store all the chunks, then it's exactly the same as normal file sharing with P2P.
Let me know if any parts of the question are unclear and I'll try to improve it.
P2P Live: each user downloads chunks and simultaneously uploads them to other users who are watching the same stream. More users means better quality.
source: P2P TV - Wikipedia
P2P VOD: this is more challenging to achieve since, as you noticed, there is less simultaneity in the way users watch the video. In this case each user is expected to contribute a reasonable amount of disk space to store chunks for other users (see the sketch below). The strategies for deciding what to store in each user's cache are the subject of ongoing research.
If you search for P2P VOD you will find a lot of white papers presenting different approaches. There are too many links to list here.
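To make the caching idea concrete, here is a hedged sketch of one possible per-peer chunk store: a fixed-budget LRU cache. Real systems use fancier replacement strategies (rarest-first, popularity weighting); the class and its API are illustrative only:

```java
// A hedged sketch of a per-peer chunk store for P2P VOD: a fixed-budget
// LRU cache. Real systems use fancier strategies (rarest-first,
// popularity weighting); class and method names are illustrative only.
import java.util.LinkedHashMap;
import java.util.Map;

public class ChunkCacheSketch {
    // Access-ordered LinkedHashMap gives us LRU eviction for free.
    private final Map<Integer, byte[]> cache;

    public ChunkCacheSketch(final int maxChunks) {
        this.cache = new LinkedHashMap<Integer, byte[]>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, byte[]> eldest) {
                return size() > maxChunks; // drop the least recently served chunk
            }
        };
    }

    // Keep a chunk we just watched/downloaded so we can serve it to peers.
    public void put(int chunkIndex, byte[] data) {
        cache.put(chunkIndex, data);
    }

    // Serve a chunk to a requesting peer; null means we no longer hold it.
    public byte[] get(int chunkIndex) {
        return cache.get(chunkIndex);
    }
}
```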
I need to capture traffic information about a live event on the Internet, with as much detail as possible (e.g. number of viewers from a particular region, devices the viewers are using, type of video stream such as 240p, and so on).
I see that all this information can be captured using the APIs provided for YouTube live streaming events. However, the APIs can only be used by the owner of the channel broadcasting the event. I could set up my own event using YouTube Live Streaming, or set up my own server and gather the required statistics, but it would be better to have data from an established source, so that the traffic data is representative enough to work with for major events.
I have already tried speaking to a few channel owners, with no luck. Is there a way I could obtain this data (e.g. capture traffic on a third-party web page like YouTube)?
I would say that while it may be theoretically possible to capture live event statistics by scraping the video's webpage, monitoring packets, or reverse-engineering their API, this is against YouTube's Terms of Service:
You agree not to circumvent, disable or otherwise interfere with security-related features of the Service or features that prevent or restrict use or copying of any Content or enforce limitations on use of the Service or the Content therein.
https://www.youtube.com/static?template=terms
A potential client has come to me asking for an app which will stream a six-hour audio file. The user needs to be able to set the "playback head" to any position in the file. Presumably, this means that the app must not be forced to download the entire file before it begins playing back from an arbitrary position.
An added complication -- there are actually four files which need to be streamed and mixed simultaneously.
My questions are:
1) Is there an out-of-the-box technology which will allow me random access to streaming audio on iOS? Can this be done with standard server technology and a single long file, or will it involve some fancy server tech?
2) Which iOS framework is best suited for this? Is there anything high-level that would allow me to easily mix these four audio files?
3) Can this be done entirely with standard browser technology on the client side? (i.e. HTML5)
Have a close look at the MP3 format. It is remarkably easy and efficient to parse, chop up into little bits, and reassemble into a custom stream.
Hence rolling your own server-side code to grab what you want and send it to the client will not be as crazy or difficult as it may sound.
MP3 is also widely supported by various clients. I strongly suspect any HTML5-capable browser will be able to play the stream you generate via a long-lived, bit-rate-regulated HTTP request.
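To give a feel for how simple that parsing is, here is a hedged Java sketch of finding frame boundaries in MPEG-1 Layer III files (the most common MP3 flavor). Locating frames like this is what lets server code start the stream at an arbitrary playback position and splice frames into a custom stream:

```java
// A hedged sketch of MP3 frame parsing, for MPEG-1 Layer III only.
// Finding frame boundaries like this lets a server start a stream at an
// arbitrary frame ("playback head") and splice frames into a custom stream.
public class Mp3FrameSketch {

    // Bitrates in kbit/s for MPEG-1 Layer III, indexed by the 4-bit field.
    private static final int[] BITRATES_KBPS =
        {0, 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320, 0};

    // Sample rates in Hz for MPEG-1, indexed by the 2-bit field.
    private static final int[] SAMPLE_RATES = {44100, 48000, 32000, 0};

    // Returns the frame length in bytes, or -1 if data[offset] does not
    // start a valid MPEG-1 Layer III frame header.
    public static int frameLength(byte[] data, int offset) {
        int b1 = data[offset] & 0xFF;     // first header byte
        int b2 = data[offset + 1] & 0xFF; // sync cont. + version + layer
        int b3 = data[offset + 2] & 0xFF; // bitrate + sample rate + padding

        // 11 sync bits, then version == MPEG-1 (0b11), layer == III (0b01).
        if (b1 != 0xFF || (b2 & 0xFE) != 0xFA) return -1;

        int bitrate = BITRATES_KBPS[(b3 >> 4) & 0x0F] * 1000;
        int sampleRate = SAMPLE_RATES[(b3 >> 2) & 0x03];
        int padding = (b3 >> 1) & 0x01;
        if (bitrate == 0 || sampleRate == 0) return -1;

        // Standard MPEG-1 Layer III frame size formula.
        return 144 * bitrate / sampleRate + padding;
    }
}
```

Note that truly mixing the four files (rather than just splicing) would still require decoding the frames, summing the audio, and re-encoding, either on the server or on the client.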
When a program such as Skype streams video from one user to another and vice versa, how is that usually accomplished?
Does client A stream to a server, and the server sends it on to client B?
Or does it go directly from client A to B?
Feel free to correct me if I am way off and neither of those is correct.
Skype is much more complicated than that, because it is peer-to-peer, meaning that your stream may travel through several other Skype clients, which act as servers. Skype does not have a huge central system for this. Skype always keeps track of multiple places it can deliver your stream to, so that if one of those places disappears (that Skype client goes away), it will continue sending through another server/Skype client. This is done so efficiently that you don't notice the interruption.
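As a loose sketch of that keep-multiple-routes idea (Skype's overlay protocol is proprietary, so the relay names and the failure test here are purely illustrative):

```java
// A hedged sketch of relay failover: keep a list of candidate relays and
// switch to the next one when the current relay stops responding.
// All names are illustrative; Skype's real overlay is far more sophisticated.
import java.util.ArrayDeque;
import java.util.Deque;

public class RelayFailoverSketch {
    private final Deque<String> relays = new ArrayDeque<>();

    public RelayFailoverSketch() {
        relays.add("relay-a.example.net");
        relays.add("relay-b.example.net");
        relays.add("relay-c.example.net");
    }

    // Called by the sender for every outgoing chunk of the stream.
    public void send(byte[] chunk) {
        while (!relays.isEmpty()) {
            String relay = relays.peekFirst();
            if (trySend(relay, chunk)) return; // current relay still alive
            relays.pollFirst();                // relay vanished, try the next
        }
        throw new IllegalStateException("no relays reachable");
    }

    private boolean trySend(String relay, byte[] chunk) {
        // Network I/O omitted; would return false if the relay timed out.
        return true;
    }
}
```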
Basically, this is how it's achieved:
1) Encode the video/audio using the best compression you can get. Use lossy compression and aggressive filtering to throw away the portions of the video and audio that are not usable, like background hiss.
2) Pack the video/audio into packets and put a timestamp on them. The packets are usually datagrams (see the sketch after this list).
3) Send the packets directly to the destination, using the most appropriate route. You don't have to send all packets the same way; use many routes if possible. P2P networks often use many routes to the same destination.
4) Decode on the destination. If a packet is too old, throw it away. If packets are lost, don't bother about them, since it's too late to retransmit.
5) Join the video back together and fill in missing frames as best you can.
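Here is a hedged Java sketch of steps 2 and 3; the peer address, port, and 12-byte header layout are illustrative assumptions (real clients typically use RTP, which standardizes exactly these header fields):

```java
// A hedged sketch of steps 2-3: stamping an encoded media chunk with a
// sequence number and timestamp and sending it as a UDP datagram.
// Host, port, and packet layout are illustrative assumptions.
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class MediaPacketSketch {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            InetAddress peer = InetAddress.getByName("198.51.100.7"); // assumed peer
            int sequence = 0;

            byte[] encodedChunk = new byte[1200]; // stand-in for encoder output

            // Header: 4-byte sequence number + 8-byte capture timestamp,
            // followed by the encoded payload.
            ByteBuffer packet = ByteBuffer.allocate(12 + encodedChunk.length);
            packet.putInt(sequence++);
            packet.putLong(System.currentTimeMillis());
            packet.put(encodedChunk);

            socket.send(new DatagramPacket(packet.array(), packet.capacity(),
                                           peer, 50000));
            // The receiver drops packets whose timestamp is too old and
            // plays the rest back in sequence order (steps 4 and 5).
        }
    }
}
```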