Adaptive bitrate ingest (publishing) for live streaming - youtube-api

From the research I've done,
popular CDN-based live streaming platforms (e.g., twitch.tv, YouTube Live) provide
recommended encoder settings (resolution, bitrate, fps) for broadcasters
who use an advanced software encoder (e.g., OBS, XSplit).
Before going live, the broadcaster is expected to test their upload bandwidth
and select one of the recommended settings. Once the encoder settings are selected, they can't be changed during the live stream.
On the delivery side, however, adaptive bitrate streaming (DASH, HLS) is widely used to cope with the heterogeneous bandwidth of viewers.
[CDN-based live streaming architecture]

<-------------- Ingest Side --------------->  <------ Delivery Side ------>

                    RTMP                            HLS (ABR)
[Broadcaster] ----------------> [Media Server] ---> [CDN] --+--> viewer 1 (720p)
  constant bitrate                                          +--> viewer 2 (360p)
                                                            +--> viewer 3 (240p)
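For reference, delivery-side ABR just means the CDN exposes several renditions of the same stream and the player picks one. A minimal HLS master playlist for the 720p/360p/240p ladder in the diagram could look like the one written below; the bandwidth values and URIs are illustrative only.

```python
# Write a minimal HLS master playlist for the 720p/360p/240p ladder above.
# Bandwidth values and rendition URIs are illustrative placeholders.
master = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1280x720
720p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=400000,RESOLUTION=426x240
240p/index.m3u8
"""

with open("master.m3u8", "w") as f:
    f.write(master)
```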
My question is: why don't live streaming platforms like Twitch and YouTube Live provide any bitrate adaptation during ingest to the media server?
Or do they have bitrate control only in their mobile apps?
In my opinion, adaptively changing the bitrate according to the publisher's bandwidth seems necessary and reasonable in case of a bad network or bandwidth fluctuations.
Is there any bitrate adaptation on the live ingest side that I don't know of?
I know that real-time video systems (e.g., WebRTC, Hangouts) have control logic to deal with congestion and packet loss.
Therefore, I assume that the mobile streaming apps for YouTube Live and twitch.tv have their own bitrate control logic as well.
However, I couldn't find any documentation about it, nor anything covering the case where broadcasters use an advanced encoder to do higher-quality live streaming.

Is there any bitrate adaptation on the live ingest side that I don't know of?
Yes. Many encoders (including OBS) let you change the bitrate on the fly. They just don't do it automatically.
Why don't live streaming platforms like Twitch and YouTube Live provide any bitrate adaptation during ingest to the media server?
Because nobody at those companies has done the work to make it happen. Those companies don't put a lot of value on broadcasters who deliver a bad experience to viewers, and would rather put the engineering effort into broadcasters with stable connections and high-quality input.
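If you wanted to automate it yourself, a minimal sketch of publisher-side adaptation could look like the following. The Encoder class and measure_upload_kbps() are hypothetical placeholders (this is not an OBS or YouTube API); the point is just the control loop: probe the effective upload throughput, keep some headroom, drop the target bitrate quickly on congestion, and ramp it back up slowly.

```python
import random
import time

class Encoder:
    """Stand-in for whatever 'change bitrate on the fly' hook your encoder exposes."""
    def __init__(self, bitrate_kbps):
        self.bitrate_kbps = bitrate_kbps

    def set_bitrate(self, kbps):
        print(f"encoder target bitrate -> {kbps} kbps")
        self.bitrate_kbps = kbps

def measure_upload_kbps():
    # Stand-in: in a real app, derive this from the RTMP socket's
    # send-buffer drain rate or from bytes acknowledged per second.
    return random.randint(1500, 6000)

def adapt(encoder, min_kbps=500, max_kbps=4500, headroom=0.8, step=0.15, rounds=5):
    for _ in range(rounds):
        target = int(measure_upload_kbps() * headroom)    # keep ~20% headroom
        current = encoder.bitrate_kbps
        if target < current * (1 - step):                  # congestion: drop quickly
            encoder.set_bitrate(max(min_kbps, target))
        elif target > current * (1 + step):                # recovery: ramp up slowly
            encoder.set_bitrate(min(max_kbps, int(current * (1 + step))))
        time.sleep(2)                                      # re-evaluate every 2 s

if __name__ == "__main__":
    adapt(Encoder(bitrate_kbps=3000))
```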

Related

What algorithm does YouTube use to change playback speed without affecting audio pitch?

YouTube has an option to change the playback speed of videos that speeds up or slows down a video's audio without affecting its pitch. I know there are a number of different algorithms that can do this, but I am curious which specific algorithm YouTube uses, because it seems to work rather well.
Also, are there any open source libraries implementing this algorithm?
I found this article on the subject dating back to 2017; I presume it's still valid or should at least give you some pointers: https://www.googblogs.com/variable-speed-playback-on-mobile/
It reads, in part:
"On Android, we used the Sonic library for our audio manipulation in ExoPlayer. Sonic uses PICOLA, a time domain based algorithm. On iOS, AVplayer has a built in playback rate feature with configurable time stretching."

What are the ways to stream live video to a webpage on iOS, except WebRTC?

WebRTC requires too much processing power on the server, so doing it at scale would be cost-prohibitive.
For nearly all other platforms - both Windows and Mac - Chrome, Safari desktop, even IE and Edge, and Android - there is the Media Source Extensions API (https://en.wikipedia.org/wiki/Media_Source_Extensions), which allows sending a stream over WebSockets and playing it; it works. The problem is just with iOS.
Is there anything better (lower latency) than HLS which would work for me?
If not, is there a WebRTC server which is free and more scalable/stable than Kurento Media Server (https://github.com/Kurento/kurento-media-server)?
There is the jsmpeg player http://jsmpeg.com/ but it is MPEG-1 only, so it would require an unacceptable amount of bandwidth. There is Broadway.js, but it does not support audio...
Is there anything better (lower latency) than HLS which would work for me?
HTTP Progressive is a fine technology for this. It can be run at much lower latencies than a segmented technology like DASH or HLS, and requires very little in terms of server-side resources. Look into Icecast for your server, and FFmpeg as your source.
There's no point in sending video over WebSockets unless you're implementing a bi-directional protocol. That isn't uncommon for ABR support, but it's definitely not the most efficient or simple way to do it.
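As a rough sketch of that Icecast + FFmpeg pairing (the host, port, mount point, source password, and codec choices below are placeholders you would adapt to your own setup), pushing a live WebM stream to an Icecast mount could look like this:

```python
import subprocess

# Push a live WebM (VP8/Vorbis) stream to an Icecast mount point.
# Host, port, mount, and password are placeholders for your own setup.
cmd = [
    "ffmpeg",
    "-re", "-i", "input.mp4",              # read the input at its native frame rate
    "-c:v", "libvpx", "-b:v", "1M",        # VP8 video at ~1 Mbit/s
    "-c:a", "libvorbis", "-b:a", "128k",   # Vorbis audio
    "-f", "webm",
    "-content_type", "video/webm",
    "icecast://source:hackme@localhost:8000/live.webm",
]
subprocess.run(cmd, check=True)
```

Browsers that support WebM can then play the mount URL progressively with a plain video element; iOS would still need an HLS or MP4 fallback.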
Since you don't want to implement WebRTC yourself and need lower latency than HLS, I would prefer a media server. There are many media servers available on the market, but if you are looking for a free and open source media server, your options are limited to a few.
I would suggest the Red5 media server, which is free and open source. Please check this link to find out more about Red5. If you use the free Red5 media server you need a little knowledge of Java. Red5 also has a paid version called Red5 Pro, which has better WebRTC support and higher capabilities. Red5 is mostly for RTMP with the Flash player plugin, and its WebRTC streaming support is fairly new.
You can also use the Wowza Streaming Engine trial version with a limited number of connections. So these are the easiest options for you.

Available encoders for IIS Smooth Streaming

I am trying to make a web site with IIS Smooth Streaming, but all the tutorials and examples I found use Microsoft Expression Encoder 4 Pro. According to them, only the Pro version is capable of using H.264 compression. The problem is that Microsoft Expression Encoder 4 Pro is discontinued and the available free version does not support H.264 compression. So I want to know whether there are any other encoders (commercial or freeware) that I can use with IIS 7. Please help me.
I have used Sorenson Squeeze with relative success for on-demand video encoding. It does not do live video but can successfully encode plain video files into many different formats. This is the "cheap" encoder I recommend for everyday light use. It does most of what Expression Encoder used to do.
For professional-quality encoding needs, you will want to look towards products from Harmonic, Envivio and similar vendors. Together with high quality and a broad feature set you will, of course, also be faced with a significant price tag.
There are also Wowza and Unified Streaming Platform, which offer such services for relatively low cost, though my personal opinion of them is not very high - they seem to be more marketing than functionality oriented businesses.

HTML5 and MP4 vs. M2TS containers

Problem:
To get an iOS app that streams video accepted into the app store, we need to have a HLS version.
What’s the problem?
Android does not support HLS well, and for other reasons, we need to store MP4 and HLS files of the same content.
What’s the difference between MP4 and HLS and why do you need to store both?
MP4 is a container that stores H.264 video and AAC audio for best compatibility in HTML5 browsers - JS video players often have a Flash fallback if the browser does not support MP4 video in HTML5, which uses the same MP4 file but plays it through Flash.
HLS is a protocol where text files (.m3u8) contain references to playlists, which themselves reference .ts files (or m2ts), which are MPEG-2 transport streams - not to be confused with MPEG-2 video. The .ts files are containers for the same H.264 video and AAC audio.
Why am I complaining?
It takes time to create the HLS files and playlists from the MP4 files
(Most importantly) We are now storing twice as much data on S3
Why should I care? If your S3 bill is $10K per month for storing both MP4 and HLS, storing only one copy would cut it to $5K. Or put another way, if you are paying $100K for storing data in MP4 alone, it would cost $200K to store the same content in both MP4 and HLS.
What do I want?
I want to store only the .ts files and serve desktop users, iOS users, and Android users with that single file.
Is it possible?
Doesn’t HLS require 5-10 second .ts segments instead of one big file?
As of IETF draft 7, and version 4 of the protocol, HLS supports the tag EXT-X-BYTERANGE which allows you to specify a media segment as a byte range (subrange) of a larger URL.
Are .ts files compatible with HTML5 video?
Apparently not. They are a different container than MP4, yet contain the same video and audio content. It is worth looking into how to store the raw video/audio data once and simply use the correct container when necessary. If JS video players can convert HTML5 MP4 files into Flash video on the fly when the browser does not support HTML5 MP4, then why not be able to do the same with M2TS data?
I might be ignorant on some level, but maybe someone can shed some light on this issue, and possibly present a solution.
There currently is no good solution.
A little background.
Video streaming used to require custom protocols such as RTP/RTMP/RTSP etc. These protocols work fine, except that we were basically building two separate networks: one HTTP-based for standard web traffic, and another one for video. The idea came along to split video into little chunks and serve them to the player over HTTP. This way we do not need special servers/software and we can take advantage of the giant HTTP CDNs that were being built. In addition, because the video is split into chunks, we can encode the same video into different qualities/file sizes, and the player can choose the best quality video for its current bandwidth. This was the perfect solution for mobile because of the constantly changing network conditions.

Several competing standards were developed. Move Networks was the first to market [citation needed]. The design was copied by Microsoft (Smooth Streaming) and Apple (HTTP Live Streaming, aka HLS). Microsoft is phasing out Smooth Streaming in favor of DASH. DASH looks like it will become the default streaming solution of the future, except that, because of its design-by-committee approach, it has basically been stuck in committee for a few years. Now, in those few years, Apple has sold millions of iOS devices, so HLS cannot just be discontinued.

Why doesn't everyone just use HLS then? I can think of three reasons: 1) It's Apple's standard, and people are haters. 2) Transport streams are a complicated file format. 3) Transport streams are patent encumbered. MP4 is not patent encumbered, but it also does not have the adaptive abilities. That made the user experience poor on 2G networks, the only network supported by the iPhone 1. Also, AT&T at the time did not want full-bitrate video streamed over their woefully inadequate cellular network. HLS was the compromise. All of this predates HTML5, so the video tag was not even considered in its design.
Addressing your points:
1) It takes time to create the HLS files and playlists from the MP4
files
This is a programming website. Automate it.
2) We are now storing twice as much data on S3
[sic] I want to store only the .ts files and serve both desktop users,
iOS users, and Android users with that single file.
You and me both man :).
Possible solutions.
1) What is specifically wrong with Android's implementation? (except for lack of support on older devices)
2) JW Player can play HLS (not sure about on Android)
3) Server side transmux on demand.
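For what it's worth, "transmux" here just means rewrapping the existing H.264/AAC streams into another container without re-encoding, so it is cheap enough to do on demand. A minimal FFmpeg-based sketch (file names are placeholders):

```python
import subprocess

# Transmux (rewrap) an MPEG-2 transport stream into MP4 without re-encoding:
# the H.264 video and AAC audio are copied as-is, only the container changes.
subprocess.run(
    [
        "ffmpeg", "-i", "segment.ts",
        "-c", "copy",                 # no re-encode, just remux
        "-bsf:a", "aac_adtstoasc",    # convert ADTS AAC headers for the MP4 container
        "-movflags", "+faststart",    # put the moov atom up front for web playback
        "video.mp4",
    ],
    check=True,
)
```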
Doesn’t HLS require 5-10 second .ts segments instead of one big file?
You can do byte-ranges, but you need to make sure all devices you are interested in support it.
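To make the byte-range idea concrete, a version-4 media playlist that points every segment at the same single .ts file might look like the one written below; the durations and byte offsets are made up for illustration.

```python
# Write a minimal HLS version-4 playlist that serves byte ranges of one big
# .ts file instead of many small segment files (offsets are illustrative).
playlist = """#EXTM3U
#EXT-X-VERSION:4
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
#EXT-X-BYTERANGE:1500000@0
video.ts
#EXTINF:10.0,
#EXT-X-BYTERANGE:1500000@1500000
video.ts
#EXT-X-ENDLIST
"""

with open("video.m3u8", "w") as f:
    f.write(playlist)
```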
If JS video players can convert HTML 5 MP4 files into Flash video on
the fly if the browser does not support HTML 5 MP4, then why not be
able to do the same with M2TS data?
They don't convert. Flash natively supports MP4. It is possible to convert TS in AS3/JS; I have done it. JW Player can convert TS in the browser, and video.js may be able to as well.

What audio compression algorithm should I use in an iPhone app?

I am trying to record audio in an iPhone app and send the audio file through Mail. I need to compress the file before sending it. What audio compression algorithm should I use in the iPhone app?
It depends very much on your application.
Do you need loss-less compression, or can you afford losing some audio quality?
How fast do you need the file transfer to be?
How fast do you need the compression process to be?
Depending on the answers to these questions, you can choose one of the formats available in iOS.
You can read more here:
http://developer.apple.com/library/ios/#documentation/MusicAudio/Conceptual/CoreAudioOverview/Introduction/Introduction.html
http://developer.apple.com/library/ios/#documentation/MusicAudio/Conceptual/AudioQueueProgrammingGuide/AQRecord/RecordingAudio.html#//apple_ref/doc/uid/TP40005343-CH4-SW4
First choose the right bitrate. Typical bitrates for different purposes:
32kbit/s: AM Radio quality
48kbit/s: Common rate for long speech podcasts
64kbit/s: Common rate for normal-length speech podcasts
96kbit/s: FM Radio quality
128kbit/s: Most common bit rate for MP3 music
160kbit/s: Musicians or sensitive listeners prefer this to 128kbit/s
192kbit/s: Digital radio broadcasting quality
320kbit/s: Virtually indistinguishable from CDs
So if the audio contains only speech, 48 kbit/s is usually enough. For music, 128 kbit/s should be OK.
Second, you should use a good compression codec. For detailed information please check this link http://soundexpert.org/encoders-48-kbps, but usually you should use the AAC codec.
Other options (sample rate, bit depth, etc.) are not so important and you can usually leave them at their defaults.
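As a rough sanity check on why the bitrate choice matters when mailing a recording, here is the back-of-the-envelope arithmetic for a one-minute clip:

```python
# Rough size estimate for a 1-minute recording: AAC at a given bitrate
# vs. uncompressed 16-bit mono PCM at 44.1 kHz.
duration_s = 60

aac_kbps = 48                                  # usually enough for speech, per above
aac_bytes = aac_kbps * 1000 / 8 * duration_s   # 360,000 bytes, about 0.36 MB

pcm_bytes = 44100 * 2 * 1 * duration_s         # 5,292,000 bytes, about 5.3 MB

print(f"AAC @ {aac_kbps} kbit/s : {aac_bytes / 1e6:.2f} MB")
print(f"Uncompressed PCM       : {pcm_bytes / 1e6:.2f} MB")
```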
