How do I get a YouTube FLV URL?

Looking through the site, every question uses an outdated method. How do the YouTube FLV downloader websites/applications do it?
I am trying to do this in PHP but the theory or steps to do it will suffice, thanks.

As mentioned in other posts, you may want to look at our code in
youtube-dl (or in the code of the Firefox extension called
FlashVideoReplacer).
In the particular case of youtube-dl, the "real work" is done in the
subclasses of InformationExtractor, and it is hard to give a "stable" answer,
as the layout of such sites changes constantly.
There are some pieces of the information that are not dynamic, such as the
uploader of the video, the title, the date of upload, and, most importantly,
the identifier of the video (an 11-character string).
For the dynamic parts, the essential point is that the URLs for such videos
are generated dynamically, and you need to perform some back-and-forth
communication with the server.
It is important to keep in mind that what such sites can (and do) take into
consideration depends on a number of parameters, including: the cookies that
you have already received (as is the case for HTML5 videos), your
geolocation (for regional control), your age (for "strong" material), your
language/locale (for showing content tailored to you), etc.
youtube-dl uses a regular expression to extract the video ID from the URL
that you give, and then proceeds from a "normalized", typical URL as used
from the United States.
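For illustration, a minimal sketch of that first step in Python (the actual pattern in youtube-dl is more elaborate and changes over time):

    import re

    def extract_video_id(url):
        # Match the 11-character identifier in the common URL shapes
        # (watch?v=..., /v/..., youtu.be/...).
        m = re.search(r'(?:v=|/v/|youtu\.be/)([0-9A-Za-z_-]{11})', url)
        return m.group(1) if m else None

    # extract_video_id('http://www.youtube.com/watch?v=C6nRb45I3e4') -> 'C6nRb45I3e4'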
Some of the dynamic data to be gathered includes (a sketch of the retrieval follows the list):
some time-stamp (the expire, and fexp parts of the final URL)
the cookies sent to the browser
the format in which we want to download the video (the itag part of the final URL)
throttling information (the algorithm, burst, factor)
some hashes/tokens used internally by them (e.g., the signature part of the final URL)
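As a rough illustration of that back-and-forth, here is a sketch in Python against the get_video_info endpoint that youtube-dl relied on at one point; the endpoint has since been removed, so treat this purely as the shape of the communication, not working code:

    import urllib.parse
    import urllib.request

    video_id = 'C6nRb45I3e4'
    # Historic endpoint; it returned a URL-encoded blob of video metadata.
    raw = urllib.request.urlopen(
        'http://www.youtube.com/get_video_info?video_id=' + video_id).read()
    info = urllib.parse.parse_qs(raw.decode('utf-8'))

    # url_encoded_fmt_stream_map held one comma-separated entry per format;
    # each entry was itself URL-encoded, carrying url, itag, sig, and so on.
    for entry in info['url_encoded_fmt_stream_map'][0].split(','):
        fields = urllib.parse.parse_qs(entry)
        print(fields['itag'][0], fields['url'][0])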
Some of the information listed above was once not required but now is (in
particular, the cookies that they send you). This means that the details
listed above are very likely to become obsolete as the controls become
stricter.
You can see some of the work (with respect to the cookies) that I did in
this regard in the implementation of an external backend to use an external
downloader (a "download accelerator") with what youtube-dl extracts.
Disclosure: I have committed some changes to the repository, and I maintain
the youtube-dl package in Debian (and, as a side effect, in Ubuntu).

You might want to take a look at how youtube-dl downloads the files. As YouTube changes, that program does seem to get updated rather quickly.

YouTube doesn't expose FLV files directly; it wraps the video in a SWF object. The video needs to be either extracted or converted in order to get an FLV.
http://www.youtube.com/v/videoid
e.g.:
http://www.youtube.com/watch?v=C6nRb45I3e4
becomes
http://www.youtube.com/v/C6nRb45I3e4
From there, you need to convert the SWF into an FLV, which can be done with ffmpeg.
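For example (filenames are placeholders, and this only works if the fetched file actually contains an embedded video stream that ffmpeg can demux):

    ffmpeg -i C6nRb45I3e4.swf C6nRb45I3e4.flv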

If you really want to get the URL of a YouTube video in FLV or MP4 format, use "YouTube Downloader - Version: 5.0" in Chrome: you can right-click its download button and copy the path (screenshot: http://i.stack.imgur.com/AFUWr.jpg).
You can also click the button and then copy the URL from "chrome://downloads/".
I think this may help you.

Related

How to build a simple video streaming server?

I am a newbie in video streaming and I just built a sample website which plays videos. Here I just give the video file location to the video tag in HTML5. I noticed that on YouTube the video tag contains a blob URL and had a look into this. I found that the video data comes in segments, and I came across a term called pseudo-streaming, whereas it seems like the website that I built downloads the whole file and then plays the video. I am not trying to do any live streaming, just trying to stream local videos. I thought maybe the way video data is received in segments is done by a video streaming server. I came across the RED5 open source streaming server, but most of the examples given are for live streaming, which I am not experimenting with. It's been a few days and I am not sure whether I am on the right track.
The segmented approach you refer to is there to support Adaptive Bit Rate (ABR) streaming.
ABR allows the client device or player to download the video in chunks, e.g. 10-second chunks, and to select the next chunk from the bit rate most appropriate to the current network conditions. See here for an example:
https://stackoverflow.com/a/42365034/334402
For your existing site, as long as your server supports range requests, you are probably not actually downloading the whole video. With range requests, the browser or player requests just part of the file at a time, so it can start playback before the whole file is downloaded.
For MP4 files, it is worth noting that you need the header information, contained in a 'block' or 'atom' called the MOOV atom, at the start of the file rather than at the end, where many encoders place it by default. There are a number of tools which will allow you to move it to the start, e.g.:
http://multimedia.cx/eggs/improving-qt-faststart/
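For instance, ffmpeg can move the moov atom to the front in a single pass, without re-encoding:

    ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4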
You are definitely on the right track with your investigations; video hosting and streaming is a specialist area, so it is generally easier to leverage existing streaming technologies and services than to build them yourself. Some good places to look to get a feel for open source solutions:
https://gstreamer.freedesktop.org
http://www.videolan.org/vlc/streaming.html

What are the limitations of streaming video files in the public folder with the HTML5 video tag in Ruby on Rails 5?

What I'm doing
Basically, I'm writing a simple Q&A site with an option to create links to specific positions in media files. As of now the app is intended to be used in a LAN environment only.
I have put a video in the appRoot/public folder and created a view using the HTML5 video tag.
It works and even seeking is available. Wow...
What I don't understand
I'm clueless as to the tech behind this and its limitations.
It just worked, so I don't even know a keyword to hit Google with.
What I know
With the way I'm doing it:
No encryption
No way to prevent users from saving video files
No automatic transcoding available
The real question
What is the name of the tech behind this?
How well can Rails handle streaming and seeking requests this way, compared to using dedicated video streaming servers or gems?
As long as your underlying web server understands how to handle the MIME types for video and responds correctly to byte-range requests - as it seems to - that's all you need. The underlying mechanic of streaming video with HTML5 is that the browser asks for a chunk of content as a range of bytes from the source (enough to keep the buffer full) and the server delivers it.
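The exchange looks roughly like this (the byte counts are placeholders):

    GET /videos/movie.mp4 HTTP/1.1
    Range: bytes=0-1048575

    HTTP/1.1 206 Partial Content
    Content-Range: bytes 0-1048575/73228582
    Content-Length: 1048576
    Content-Type: video/mp4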
You might want to look at using ffmpeg to optimize your videos so that the metadata is in the right place in the file to start streaming quicker.
You've correctly pointed out the limitations of the solution in your environment. The other thing to be aware of is capacity: if the videos are long and a lot of people access them concurrently, then without caching (in a LAN via a caching proxy, or on the internet via a CDN service) your server capacity may be stretched.

Video distribution to iPhone with Amazon Cloudfront, with fixed content programming

I would like to set up a scalable video distribution server/infrastructure for streaming video to iOS devices. The client will have some programming of pre-produced content, e.g. 6 hours that will be played and then repeated from the beginning. It should be possible to enter the exact schedule when the video starts, and also the possibility to have it start at different times on different days.
I've been pointed to the Live Smooth Streaming offer from Amazon, using the Amazon CloudFront.
So my question to you: does this support the features I need, and how do I get it set up properly? I've already taken a look at their documentation at http://awsdocs.s3.amazonaws.com/CF/latest/cf_dg.pdf but that didn't cover the use case I want, namely setting up a programming scheme. I've seen references to CloudFormation templates for the live streaming, but is there also something similar for doing the fixed programming, or maybe it can be used for that too?
Thanks for your time!
Flo
Your question is a bit mixed up. iOS devices need HLS protocol content. You simply need to create your content in HLS form (TS files with a .m3u8 playlist), store it in an S3 bucket, and link your CloudFront distribution to it.
Since you mention pre-produced content, I am guessing it is available beforehand and not generated live.
Your program should then point to the right .m3u8 file and can update the .m3u8 file appropriately. The program that controls access to the .m3u8 (when it is available, what should be playable, etc.) is independent of the storage in S3/CloudFront.
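For reference, a minimal VOD playlist looks like this (segment names and durations are placeholders):

    #EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-TARGETDURATION:10
    #EXT-X-MEDIA-SEQUENCE:0
    #EXTINF:10.0,
    segment000.ts
    #EXTINF:10.0,
    segment001.ts
    #EXT-X-ENDLIST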
You can also generate content live, but nothing changes except that the content is created on the fly; your program controlling the .m3u8 still decides what the client gets access to.
If it were not only iOS devices but other devices too, the same would apply: keep your content in an S3 bucket and link it to CloudFront, in the format each device needs, and let your web server program control access to the content. Remember that CloudFront is not a player. CloudFront also provides support for Flash streaming servers, and you can use that as well.

Watch videos as they are uploaded

Is it possible to implement a feature that allows users to watch videos as they are being uploaded to the server by others? Is HTML5 suitable for this task? What about Flash? Are there any ready-to-go solutions? I don't want to reinvent the wheel. The application will be hosted on a dedicated server.
Thanks.
Of course it is possible; the data is there, isn't it?
However, it will be very hard to implement.
Also, I am not so into Python and I am not aware of a library or service suiting your requirements, but I can cover the basics of video streaming.
I assume you are talking about video files that are uploaded, not live streams; for those, there are obviously thousands of solutions out there...
In the simplest case, the video being uploaded is already ready to be served to your clients and has a so-called "faststart atom". These are container-format specific and there are sometimes a bunch of them. The most common is the moov atom. It contains a lot of data and is very complex; for our use case, in a nutshell, it holds the data that enables the client to begin playing the video right away using the data available from the beginning.
You need that for progressive-download videos (YouTube...), where a file is served from a web server and the player can start playing before the full file has been downloaded.
If the faststart atom is not present, that is not always possible. Sometimes it is, but then the player, for example, cannot display a progress bar, because it doesn't know how long the file is.
Having that covered, the file can be uploaded. You will need an upload solution that writes the data directly to a buffer or a file (a file will be easier...).
This is almost always the case; for example, PHP creates a file in the tmp_dir. You can also specify it, if you want to find the video while it's being uploaded.
Well, now you can start reading that file byte by byte and writing that data to a connection to another client. Just be sure not to get ahead of what has already been received and written. You would probably initiate your upload with metadata kept in memory that holds the current received byte position and the location of the file.
Anyone who requests the file after the upload has completed can just receive the entire file; if the upload is not yet finished, they get it from your application.
You will have to throttle the data delivery or pause it when the data runs short. To the client this will just look like a slow connection. However, you will have to send some data from time to time to prevent the connection from closing. But if your upload doesn't stall (and why should it?), that shouldn't be a problem.
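A minimal sketch of that loop in Python; the path, write_chunk callback, and is_upload_finished check are placeholders for whatever your upload handler actually provides:

    import time

    CHUNK = 64 * 1024  # deliver in 64 KiB pieces

    def serve_growing_file(path, write_chunk, is_upload_finished):
        sent = 0
        with open(path, 'rb') as f:
            while True:
                f.seek(sent)
                data = f.read(CHUNK)
                if data:
                    write_chunk(data)   # push to the downstream viewer
                    sent += len(data)
                elif is_upload_finished():
                    break               # uploader done and file fully drained
                else:
                    time.sleep(0.1)     # we caught up with the uploader; wait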
Now, if you want something like on-the-fly transcoding of various input formats into your desired output format, things get interesting.
AFAIK, ffmpeg has neat APIs which let you deal directly with data streams.
Handbrake is also a very good tool; however, you would need to take the long road of using external executables.
I am not really aware of your requirements, but if your clients are already tuned in, for example on a Red5 streaming server, feeding data into a stream should also work fine.
Yes, take a look at Qik, http://qik.com/
"Instant Video Sharing ... Videos can be viewed live (right as they are being recorded) or anytime later."
Qik provides developer APIs, including ones like these:
qik.stream.subscribe_public_recent -- Subscribe to the videos (live and recorded)
qik.user.following -- Provides the list of people the user is following
qik.stream.public_info -- Get public information for a specific video
It is most certainly possible to do this, but it won't be trivial. And no, I don't think you will find an "out of the box" solution that requires little effort on your behalf.
You say you want to let:
users watch videos as they are uploaded to the server by others
Well, this could be interpreted two different ways:
Do you mean that you don't want a user to have to refresh the page before seeing new videos that other users have just finished uploading?
Or do you mean that you want one user to be able to watch a partially uploaded video (aka another user is still in the process of uploading it and right now the server only contains a partial upload of the video)?
Implementing #1 wouldn't be hard at all. You would just need an AJAX script to check for newly uploaded videos, and those videos could then be served to the user in whatever way you choose. HTML5 vs. Flash isn't really a consideration here.
The second scenario, on the other hand, would require quite a bit of effort. I am guessing that HTML5 might not be mature enough to handle this type of situation. If you are not looking to reinvent the wheel and don't have a lot of time to dedicate to this feature, then I would say that you are out of luck. You may be able to use ffmpeg to parse partial video files and feed them to a Flash player, but I would think of this as a large task.

How to implement the Adobe HTTP Streaming spec without using their Streaming server

As of Flash 10.1, they have added the ability to add bytes into the NetStream object via the appendBytes method (described here http://www.bytearray.org/?p=1689). The main reason for this addition is that Adobe is finally supporting HTTP streaming of video. This is great, but it seems that you need to use the Adobe Media Streaming Server (http://www.adobe.com/products/httpdynamicstreaming/) to create the correct video chunks from your existing video to allow for smooth streaming.
I have tried to do a hacked version of HTTP streaming in the past where I swap out the NetStream objects (similar to here http://video.leizhu.com/video.html), but there is always a momentary pause between the chunks. With the new appendBytes, I tried to do a quick mock up with the two sections of video from the preceding site, but even then, the skip still remains.
Does anyone know how the two consecutive .FLV files need to be formatted in order for the appendBytes method on the NetStream object to create a nice smooth video without a noticeable skip between the segments?
I was able to get this working using Adobe's File Packager Tool which Samuel described. I didn't use the NetStream object, but I used the OSMF Sample Player, which I assume uses it internally. Here's how to do it without using FMS:
Get Adobe's File Packager for Http Dynamic Streaming from http://www.adobe.com/products/httpdynamicstreaming/
Run the File Packager on an existing MP4 file containing H.264/AAC like this:
C:\Program Files\Adobe\Flash Media Server 4\tools\f4fpackager>
f4fpackager.exe --input-file="MyFile.mp4" --segment-duration=30
This will result in 30 second long F4F files, also F4X and a F4M file. The F4F files are your correctly segmented (and fragmented) MP4 files that should play.
If you want to test this using the OSMF Player also do the following:
Get Apache Server
Get Adobe's Http Origin Module for Apache from http://www.adobe.com/products/httpdynamicstreaming/
Install the module according to http://help.adobe.com/en_US/HTTPStreaming/1.0/Using/WS8d6ed60bd880807c48597a9e1265edd6cc0-8000.html
Put the F4F, F4X and F4M file into the vod directory under httpdocs
Get the “OSMF Sample Player for HTTP Dynamic Streaming” from http://www.osmf.org/downloads/OSFMPlayer_zeri2.zip
Put the Sample Player in the httpdocs directory
Load the html file from the Sample Player in a browser eg http://localhost/OSMFPlayer.html
Press the eject button and put in the URL of your F4M file, it should play
So, to answer the original question: Adobe's File Packager is the file splitter to use; you don't need to buy FMS to use it, and it works for FLV and MP4/F4V files.
You don't need to use their server. Wowza supports Adobe's version of HTTP Streaming and you can implement it yourself by segmenting the videos properly and loading all the segments on a standard HTTP server.
Links to all the specs for Adobe's HTTP Streaming are here:
http://help.adobe.com/en_US/HTTPStreaming/1.0/Using/WS9463dbe8dbe45c4c-1ae425bf126054c4d3f-7fff.html
Trying to hack the client to do some custom style http streaming will be a lot more troublesome.
Note that HTTP Dynamic Streaming does not stream several different videos; it streams a single file that was broken up into separate segments.
File Packager
A command-line tool that translates on-demand media files into fragments and writes the fragments to F4F files. The File Packager is an offline tool. You can use the File Packager to encrypt files for use with Flash Access. For more information, see Packaging on-demand media.
The File Packager is available from adobe.com and is installed with Adobe® Flash® Media Server to the rootinstall/tools/f4fpackager folder.
The Packager download link is on the right, here:
http://www.adobe.com/products/httpdynamicstreaming/
You could use F4Pack, it's a GUI around the commandline-tool from Adobe, that lets you process your flv/f4v file so they can be used for HTTP Dynamic Streaming.
The place in the OSMF code where this happens is the timer-fired state machine inside of the HTTPNetStream class implementation... might be an informative read. I think I even put some helpful comments in there when I wrote it.
As far as the general question:
If you read an entire FLV file into a ByteArray and pass it to appendBytes, it will play. If you break that FLV file in half, and pass the first half as a byte array and then the second half as a byte array, that will play as well.
If you want to be able to switch around between bitrates without a gap, you need to split up your FLV files at matching keyframe points... and remember that only the first call to appendBytes has the initial FLV file header ('F', 'L', 'V', flags, offset)... the rest just expect a continuation of the FLV byte sequence.
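To locate those matching keyframe points you can walk the FLV tag structure directly. A minimal sketch in Python, assuming a well-formed file with the standard 9-byte header:

    def keyframe_offsets(path):
        # Yield (byte_offset, timestamp_ms) for every video keyframe tag.
        with open(path, 'rb') as f:
            assert f.read(3) == b'FLV'
            f.read(6)              # version, flags, data offset (assumed 9)
            f.read(4)              # PreviousTagSize0
            while True:
                pos = f.tell()
                head = f.read(11)  # type, data size, timestamp, stream id
                if len(head) < 11:
                    break
                data_size = int.from_bytes(head[1:4], 'big')
                ts = int.from_bytes(head[4:7], 'big') | (head[7] << 24)
                data = f.read(data_size)
                f.read(4)          # trailing PreviousTagSize
                # Tag type 9 is video; frame type 1 (high nibble) = keyframe.
                if head[0] == 9 and data and data[0] >> 4 == 1:
                    yield pos, ts

Splitting each bitrate's file at offsets whose timestamps line up is then just slicing at those positions, and prepending the FLV header only to the very first chunk you feed to appendBytes.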
I recently found a similar project for Node.js that achieves m3u8 transcoding (https://github.com/andrewschaaf/media-server), but I have yet to hear of anyone besides Wowza doing it outside of the Origin module for Apache. Since the payloads are nearly identical, you're better off looking for a good MP4 segmenting solution (plenty out there) than looking for F4M segmenting. The problem is that moov atoms, especially on larger MP4 videos, are difficult to manage and put in their proper initial location (near the beginning of the file). Even using optimal ffmpeg settings and qtfaststart you end up with noticeably slower seeking, inefficient bandwidth usage (usually greedy), and a few minor headaches relating to scrubbing/time that you don't get with FLV/F4V playback.
In my player I have, or intend to, switch between HTTP Dynamic Streaming (HDS) and MP4 based on load and realtime log parsing in Apache using awk/cron, instead of licensing Adobe's Access product for stream protection. Both have unique 'onmetadata' handlers, but in the end I receive sequenced time/byte hashes that are virtually equivalent; MP4 is just slower. So mod_origin is just a synchronizer/request router for Flash clients (over HTTP). I'm still looking for ways to speed up MP4-container-based playback.
One incredible solution I read about recently, and was rather awestruck by, is http://zehfernando.com/2011/flash-video-frame-time-woes/, where a video editor and a Flash developer came up with their own MP4 timecoding solution: they literally added (via an Adobe Premiere script) about 50 pixels to the bottom of every video frame with a visual 'binary' stamp, like a frame barcode, and those binary values translate into highly accurate timecode values. So Flash could analyze the video frames as they were painted (in realtime) and determine precisely where the player was and what bytes were needed from any kind of MP4-byte-segmenting-friendly web server.
The thing is (and perhaps I'm wrong here) Flash seems to choose arbitrarily when it gets to the moov data, especially on large video files (0.5-1.5 GB), even if you make sure to run your MP4 through MP4Box (i.e. MP4Box -frag 10000 -inter 0 movie.mp4). I guess this is a problem OSMF and HDS have worked on quite well by now, though it is annoying that you need Apache and a proprietary closed-source module to use it, IMO. It's probably just a matter of time before open source implementations arrive, as HDS is only 1-2 years old and just needs a little reverse engineering, like that Andrew Chaaf guy with Node.js + MPEG-TS streaming (live or not).
In the end I may just end up using OSMF exclusively beneath my UI, as it seems to have similar virtues to HDS, if not more so: Strobe, if you need a sick extensible HDS or MP4 open player platform to hack from to realize your own custom player.
Adobe's F4F format is based on MP4 files; are you able to use F4V or MP4 instead of FLV files?
There are plenty of MP4 file splitters around, but you would need to make sure the timestamps in the files are continuous; maybe the pause happens when the player sees a zero timestamp within the audio or video stream inside the file.
