Does anyone know what YouTube's policy is when generating MPEG-DASH?
In particular, what is the rule for deciding the segment duration?
For example, I found some files with ~4.8 sec per segment but some with ~4.4 sec, so it seems it is not the same but depends on the uploaded video.
Is it fixed? If not, why is it not fixed, for simplicity's sake?
Thanks
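For reference, in a DASH manifest the nominal segment length is declared in the manifest itself: it is the SegmentTemplate's duration attribute divided by its timescale. A minimal sketch, using a hypothetical MPD fragment (not a real YouTube manifest, which differs in detail):

```python
import xml.etree.ElementTree as ET

# Hypothetical MPD fragment for illustration only.
MPD = """<MPD xmlns="urn:mpeg:dash:schema:mpd:2011">
  <Period>
    <AdaptationSet>
      <SegmentTemplate timescale="90000" duration="432000"
                       media="seg-$Number$.m4s" initialization="init.mp4"/>
    </AdaptationSet>
  </Period>
</MPD>"""

def segment_durations(mpd_xml: str):
    """Return the nominal segment duration in seconds for each SegmentTemplate."""
    root = ET.fromstring(mpd_xml)
    durations = []
    for st in root.iter("{urn:mpeg:dash:schema:mpd:2011}SegmentTemplate"):
        timescale = int(st.get("timescale", "1"))
        duration = int(st.get("duration", "0"))
        durations.append(duration / timescale)
    return durations

print(segment_durations(MPD))  # -> [4.8]
```

So a 432000-tick duration at a 90000 Hz timescale gives the ~4.8 s segments mentioned above; a packager can legitimately pick different values per video.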
YouTube has an option to change the playback speed of videos that speeds up or slows down a video's audio without affecting its pitch. I know that there are a number of different algorithms that can do this, but I am curious as to which specific algorithm YouTube uses, because it seems to work rather well.
Also, are there any open source libraries implementing this algorithm?
I found this article on the subject dating back to 2017; I presume it's still valid, or it should at least give you some pointers: https://www.googblogs.com/variable-speed-playback-on-mobile/
It reads, in part:
"On Android, we used the Sonic library for our audio manipulation in ExoPlayer. Sonic uses PICOLA, a time domain based algorithm. On iOS, AVplayer has a built in playback rate feature with configurable time stretching."
I used to use YT video in FLV format (itag 5) and I was able to start at a specific point in time by adding "&begin=xxxx" to the URL.
I had to move to WebM as FLV seems to be fully deprecated (itag 43), but I can't find a way to start at a given position.
I can't use HLS or DASH.
Note: this is not about embedding a YT link in a page like https://www.youtube.com/watch?v=xxxxxx. I know in that case I can use "&start=zzz". Here it's about the link to the file itself on the googlevideo.com sites.
Thanks
Just for clarification: I now know this is not possible. You have to crawl through the WebM file and then request the right chunk using an HTTP offset. It works, but it is less convenient compared to what it was in the past.
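The "right chunk via HTTP offset" approach above boils down to an HTTP Range request. A minimal sketch of building the Range header and parsing the Content-Range reply (the byte offsets here are made up; finding the real cluster offsets requires parsing the WebM/Matroska Cues element):

```python
def range_header(start: int, end=None) -> dict:
    """Build an HTTP Range header for bytes [start, end], end inclusive."""
    if end is None:
        return {"Range": f"bytes={start}-"}        # from offset to end of file
    return {"Range": f"bytes={start}-{end}"}

def parse_content_range(value: str):
    """Parse 'bytes start-end/total' from a 206 Partial Content response."""
    unit, _, rest = value.partition(" ")
    span, _, total = rest.partition("/")
    start, _, end = span.partition("-")
    return int(start), int(end), int(total)

# Example: request the cluster believed to start at byte 1048576.
print(range_header(1048576, 2097151))
print(parse_content_range("bytes 1048576-2097151/50000000"))
```

Any HTTP client (urllib, requests, curl) can then send that header; a server that honors ranges replies with status 206 and the requested byte span.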
We are trying to capture the PCM data from an HLS stream for processing, ideally just before it is played, though just after is acceptable. We want to do all this while still using AVPlayer.
Has anyone done this? For non-HLS streams, as well as local files, this seems to be possible with MTAudioProcessingTap, but not with HLS. This question discusses doing it with non-HLS streams:
AVFoundation audio processing using AVPlayer's MTAudioProcessingTap with remote URLs
Thanks!
Unfortunately, this has been confirmed to be unsupported, at least for the time being.
From an Apple engineer:
The MTAudioProcessingTap is not available with HTTP live streaming. I suggest filing an enhancement if this feature is important to you - and it's usually helpful to describe the type of app you're trying to design and how this feature would be used.
Source: https://forums.developer.apple.com/thread/45966
Our best bet is to file enhancement radars to try to get them to devote some development time towards it. I am in the same unfortunate boat as you.
I am developing an app and I am quite hesitant on allowing my users to share videos.
Mainly due to the space that videos can take up on the server, which adds a huge maintenance cost.
I have been reading around and I can see that there are lots of different libraries that allow me to compress video on iOS to make it easy to share.
After much research I couldn't find any estimate of file size per second after compression.
I was wondering if anyone could share their experience: what file size per second could I expect with your preferred library, at a video quality setting that is reasonable for mobile (medium, I guess)?
Based on the two following URLs, you should be able to do what you want.
I suppose that you plan to use H.264, and as the video is the major issue, first look at the following to choose the resolution & fps and the recommended bitrate:
http://www.billhung.net/single_pages/video.encoding.resolution.vs.bitrate.by.experience.html#mozTocId728778
Then follow the formula described in:
Video bitrate and file size calculation
to compute the file size based on the maximum video duration you choose.
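The linked calculation reduces to size = bitrate × duration / 8. A quick sketch; the 2000 kbit/s video and 128 kbit/s audio rates below are just illustrative assumptions for a "medium" mobile setting, not figures from any particular library:

```python
def file_size_mb(video_kbps: float, audio_kbps: float, seconds: float) -> float:
    """Estimate file size in megabytes: (bits/s * s) / 8 -> bytes -> MB."""
    total_bits = (video_kbps + audio_kbps) * 1000 * seconds
    return total_bits / 8 / 1_000_000

# e.g. a 60 s clip at 2000 kbit/s video + 128 kbit/s audio
print(round(file_size_mb(2000, 128, 60), 2))  # -> 15.96
```

So at those assumed rates you would budget roughly 0.27 MB per second of video, before container overhead.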
I want to find out if I can get some data on the percentage-wise distribution of video content across the different video codecs currently used for encoding. I know different applications/use cases favor different encoders, but I want to consider all of that and get an overall usage figure (%).
My guess (highest to lowest % of content):
H.264(AVC)
DivX
MPEG2
VP6
Where do H.263, MPEG-4, VC-1, RV, Theora, etc. fit in here?
How might this look in the future?
PS: I would like this to be a community wiki to get a wider range of inputs; if someone with privileges could do that for me, please.
Thank you.
-AD
I am guessing that the WebM format (which actually uses VP8), part of the WebM Project, will see a rise, since YouTube is encoding all of its videos in WebM as an alternative.
I vote for WebM as a better-quality and freer alternative to H.264.