I need to capture traffic information about a live event on the Internet. I need as much detail as possible about the traffic (e.g. the number of viewers from a particular region, the devices the viewers are using, the type of video stream such as 240p, and so on).
I see that all this information can be captured using the APIs provided for YouTube live streaming events. However, those APIs can only be used by the owner of the channel broadcasting the event. I could set up my own event using YouTube Live Streaming, or set up my own server and gather the required statistics, but it would be better to have the data from an established source, so that the traffic data is representative of major events.
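For context, the kind of owner-only query I mean would look roughly like this (a sketch using the google-api-python-client and the YouTube Analytics API; the credentials would have to belong to the broadcasting channel, and the video ID and date range are placeholders):

```python
from googleapiclient.discovery import build

def viewer_breakdown(credentials, video_id):
    # Owner-only: "credentials" must be OAuth credentials for the channel
    # that owns the broadcast. Dates and the video ID are placeholders.
    analytics = build("youtubeAnalytics", "v2", credentials=credentials)
    return analytics.reports().query(
        ids="channel==MINE",
        startDate="2016-01-01",
        endDate="2016-01-31",
        metrics="views,estimatedMinutesWatched",
        dimensions="country",          # e.g. "deviceType" would be a separate query
        filters=f"video=={video_id}",
    ).execute()
```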
I have already tried speaking to a few of the channel owners, with no luck. Is there a way I could obtain this data (e.g. by capturing traffic on a third-party web page like YouTube)?
I would say that while it may be theoretically possible to capture live event statistics by scraping the video's webpage, monitoring packets, or reverse-engineering their API, doing so is against YouTube's Terms of Service:
You agree not to circumvent, disable or otherwise interfere with security-related features of the Service or features that prevent or restrict use or copying of any Content or enforce limitations on use of the Service or the Content therein.
https://www.youtube.com/static?template=terms
What I'm doing
Basically I'm writing a simple Q&A site with an option to create links to specific positions in media files. As of now the app is intended to be used in a LAN environment only.
I have put a video in the appRoot/public folder and created a view using the HTML5 video tag.
It works and even seeking is available. Wow...
What I don't understand
I'm clueless as to the tech behind this and its limitations.
It just worked, so I don't even know a keyword to hit Google with.
What I know
With the way I'm doing it:
No encryption
No way to prevent users from saving video files
No automatic transcoding available
The real question
What is the name of the tech behind this?
How well can Rails handle streaming and seeking requests the way I'm doing it, compared to using dedicated video streaming servers or gems?
As long as your underlying web server understands how to handle the MIME types for video, and responds correctly to byte range requests - as it seems to do - that's all you need. The underlying mechanics of streaming video with HTML5 are that the browser asks for a chunk of content as a range of bytes from the source (enough to keep the buffer full) and the server delivers it.
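Nothing here is Rails-specific. Just to make the mechanics concrete, a minimal sketch of byte-range handling using only Python's standard library (file name, MIME type, and port are placeholders, and there is no error handling):

```python
import os, re
from http.server import BaseHTTPRequestHandler, HTTPServer

VIDEO = "movie.mp4"  # placeholder file in the working directory

class RangeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        size = os.path.getsize(VIDEO)
        m = re.match(r"bytes=(\d+)-(\d*)", self.headers.get("Range", ""))
        start = int(m.group(1)) if m else 0
        end = int(m.group(2)) if m and m.group(2) else size - 1
        length = end - start + 1

        self.send_response(206 if m else 200)        # 206 = Partial Content
        self.send_header("Content-Type", "video/mp4")
        self.send_header("Accept-Ranges", "bytes")   # tells the browser seeking is possible
        if m:
            self.send_header("Content-Range", f"bytes {start}-{end}/{size}")
        self.send_header("Content-Length", str(length))
        self.end_headers()
        with open(VIDEO, "rb") as f:
            f.seek(start)
            self.wfile.write(f.read(length))

HTTPServer(("", 8000), RangeHandler).serve_forever()
```

When the user seeks, the browser simply issues a new request with a different Range header; the server never needs to know anything about the video's timeline.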
You might want to look at using ffmpeg to optimize your videos so that the metadata is in the right place in the file, which lets streaming start more quickly.
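As far as I know, the relevant ffmpeg option is `+faststart`, which moves the MP4 metadata (the moov atom) to the front of the file. A tiny wrapper, assuming ffmpeg is on the PATH:

```python
import subprocess

def faststart(src, dst):
    # Copy streams unchanged, but relocate the moov atom so playback
    # can begin before the whole file has been downloaded.
    subprocess.run(
        ["ffmpeg", "-i", src, "-c", "copy", "-movflags", "+faststart", dst],
        check=True,
    )
```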
You've correctly pointed out the limitations of the solution in your environment. The other thing to be aware of is capacity: if the videos are long and a lot of people are accessing them concurrently, then without caching (in a LAN via a caching proxy, or on the Internet via a CDN service) your server capacity may be stretched.
Can anybody explain the basic concept of mixing in Kurento Media Server?
As mentioned in what Kurento provides, there is a term "mixing". So, I would like to know what Kurento Media Server mixes. That is:
Does it mix multiple streams generated by a user into one stream and broadcast that stream to the other receiving users? If it does, how do I use this concept?
Is Kurento able to receive multiple streams through one PeerConnection object with a user, i.e., can Kurento receive or send multiple streams at one WebRtcEndpoint by mixing those streams into one?
Edit: regarding the answer's update
So, I can use the mixing concept by using HubPort.
Now, does this HubPort support different MediaTypes? For example, if one user is streaming his screen share and at the same time is also streaming his audio, does the Composite element mix both streams into one and send that single stream to all the other users?
The concept of mixing refers to combining several media streams into one. This can be better understood with a conference room example. In other setups, every user would have one stream going out, and another coming in for each other participant (except himself). That leaves you with 1 + (n - 1) = n streams per participant, which results in n * n streams total, where n is the number of participants.
Mixing all streams in the media server allows you to save bandwidth, which is ideal in scenarios like mobile devices connected through 3G, for instance. What the mixer does is combine all the streams into one, so each user is sending one stream and receiving one stream that contains the combined media of all participants (except his own). Just two streams per user saves a lot of bandwidth.
This, however, has a toll on CPU consumption, as it's necessary to adapt the videos to the new resolution, combine them... there is some processing involved.
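To put rough numbers on the bandwidth argument, a back-of-the-envelope comparison (illustrative Python, nothing Kurento-specific):

```python
# Total streams in the deployment for n conference participants,
# comparing the no-mixing setup described above with a mixer.
def stream_counts(n):
    without_mixer = n * n   # each participant: 1 up + (n - 1) down
    with_mixer = 2 * n      # each participant: 1 up + 1 mixed down
    return without_mixer, with_mixer

print(stream_counts(8))     # (64, 16) -- the saving grows quickly with n
```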
On the other hand, the concept you are referring to is multicast, which is the ability to send several streams through one WebRTC connection. This doesn't save bandwidth, nor does it combine all the streams into one, but it helps you reduce the number of endpoints present in your deployment. This is on our roadmap, but I can't tell you when it will be available.
EDIT
Mixing can be achieved in the media server through the Composite media element. You can check this other SO answer for more info on how to use that media element.
Not sure if this is something obvious or not. After creating a YouTube LiveBroadcast, binding that to a LiveStream with a specific CDN format (let's say "720p"), and transitioning the broadcast from "ready" to "live" ... how can I change the stream quality without having to create a new broadcast?
Trying to unbind the current stream - an exception is returned: the stream cannot be unbound.
Trying to bind the broadcast to another stream - the same exception as above.
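For reference, the failing bind attempt looks roughly like this (a sketch with the Python API client; `youtube` is assumed to be an authorized Data API v3 service object, and the IDs are whatever liveBroadcasts.list / liveStreams.list returned):

```python
from googleapiclient.errors import HttpError

def try_rebind(youtube, broadcast_id, stream_id):
    # Attempt to point an existing broadcast at a different liveStream
    # (e.g. one created with another CDN format).
    try:
        return youtube.liveBroadcasts().bind(
            id=broadcast_id,
            part="id,contentDetails",
            streamId=stream_id,
        ).execute()
    except HttpError as err:
        # Once the broadcast is live, this is rejected; binding with no
        # streamId (i.e. unbinding) fails the same way.
        print("bind failed:", err)
```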
In addition, after looking through the support pages for YouTube live streaming, it is suggested that "ingest settings cannot be modified after the broadcast has started". It says nothing about the actual API not being able to support this, but it looks like a major limitation from somewhere deeper; I had thought it applied only to the web Live Control Room.
I need this functionality so that I can change the stream quality when a user switches from WiFi to mobile data. Currently, streaming RTMP data in a different resolution than what the LiveStream CDN format is configured for results in health errors and encoding artifacts on YouTube's side. As suggested by the support pages, creating a "1080p" live stream ("maximum expected resolution") should work, but when that stream receives a 720p or 480p feed, depending on whether it was started or not, it either doesn't start at all or goes to a gray scene with high-pitched audio (my stream is sent correctly, since I can output it to a dozen other outputs, like MP4, FLV, and other RTMP servers).
Solution?
From what I have gathered so far, Apple provides tools to make a Mac act as an HTTP Live Streaming server. But my goal is different: I want to make iDevices act as the HTTP Live Streaming server (for the local network only).
Can it be done at all?
Yes and no. Apple does not provide a way to stream encoded media data, so that part is 100% up to you. Also, Apple does not provide a way to access encoded frames directly (i.e. you can easily get an encoded file or the raw frames, but not easily get "encoded frames"). So you need to develop a way to get these encoded frames from the files for streaming, or encode the raw frames on the fly.
It may or may not fit your use case, but if you first write the streamer portion, you should be able to save small/short clips to disk and stream them out as they are created, with minimal overall latency.
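To make the "short clips written to disk" idea concrete: serving those clips behind a rolling playlist is essentially what HTTP Live Streaming itself does. A sketch of building such a playlist (plain Python purely for illustration of the format; the segment names, duration, and sequence number are placeholders your app would supply):

```python
def rolling_playlist(segment_names, target_duration=4, first_sequence=0):
    """Build a minimal HLS live playlist for the most recent segments.

    segment_names: e.g. ["clip120.ts", "clip121.ts", "clip122.ts"],
    each assumed to be about `target_duration` seconds long.
    """
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{target_duration}",
        f"#EXT-X-MEDIA-SEQUENCE:{first_sequence}",
    ]
    for name in segment_names:
        lines.append(f"#EXTINF:{target_duration:.1f},")
        lines.append(name)
    # No #EXT-X-ENDLIST tag: the playlist keeps growing while the event is live.
    return "\n".join(lines) + "\n"

print(rolling_playlist(["clip120.ts", "clip121.ts", "clip122.ts"],
                       target_duration=4, first_sequence=120))
```

The device then only has to serve this playlist plus the clip files over plain HTTP; clients re-fetch the playlist as new clips appear.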
A potential client has come to me asking for an app which will stream a six-hour audio file. The user needs to be able to set the "playback head" to any position along the file. Presumably, this means that the app must not be forced to download the entire file before it begins playing back, starting at an arbitrary position.
An added complication -- there are actually four files which need to be streamed and mixed simultaneously.
My questions are:
1) Is there an out-of-the-box technology which will allow me random access into streaming audio on iOS? Can this be done with standard server technology and a single long file, or will it involve some fancy server tech?
2) Which iOS framework is best suited for this? Is there anything high-level that would allow me to easily mix these four audio files?
3) Can this be done entirely with standard browser technology on the client side? (i.e. HTML5)
Have a close look at the MP3 format. It is remarkably easy and efficient to parse, chop up into little bits, and reassemble into a custom stream.
Hence rolling your own server-side code to grab what you want and send it to the client will not be as crazy or difficult as it may sound.
MP3 is also widely supported by various clients. I strongly suspect any HTML5-capable browser will be able to play the stream you generate via a long-lived, bitrate-regulated HTTP request.
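To back up the "easy to parse" claim, here is a rough sketch of walking MP3 frames in Python (assumes MPEG-1 Layer III at constant bitrate, and ignores ID3 tags; the frame-length formula is the standard 144 * bitrate / sample_rate + padding):

```python
BITRATES = [0, 32, 40, 48, 56, 64, 80, 96, 112,
            128, 160, 192, 224, 256, 320]          # kbit/s, MPEG-1 Layer III
SAMPLE_RATES = [44100, 48000, 32000]               # Hz, MPEG-1

def frames(data, offset=0):
    """Yield (start, length) for each MP3 frame found from `offset` onward."""
    i = offset
    while i + 4 <= len(data):
        b0, b1, b2 = data[i], data[i + 1], data[i + 2]
        # 11-bit sync word followed by MPEG-1 / Layer III bits,
        # and valid bitrate / sample-rate indices.
        if (b0 == 0xFF and (b1 & 0xFE) == 0xFA
                and (b2 >> 4) not in (0, 15) and ((b2 >> 2) & 0x03) != 3):
            bitrate = BITRATES[b2 >> 4] * 1000
            sample_rate = SAMPLE_RATES[(b2 >> 2) & 0x03]
            padding = (b2 >> 1) & 0x01
            length = 144 * bitrate // sample_rate + padding
            yield i, length
            i += length
        else:
            i += 1  # resync: scan forward until the next frame header
```

Because every frame is self-contained, the server can start at roughly any byte offset, resync to the next frame boundary, and hand the client a perfectly valid stream from that point on, which is exactly the random-access behaviour the question asks for.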