On the 4th gen Apple TV you can specify a custom JSON file URL from which to load screensaver movies other than Apple's own. To do that, go into Settings, move to About, and while on About press Play/Pause 4 times. This enters a store/channel demo mode; selecting Channel mode lets you specify the URL of an intermediary JSON file that describes the movie download URLs. This worked once for me, and one of my own movies was downloaded. But I have changed the URL since then, and for several days the Apple TV has not downloaded any of the new movies.
I have both 1080p versions (about 600MB per movie) and 720p versions (about 70MB per movie) available. My version of the JSON file is located here: http://wx.inside.net/sat/ss.json; you can compare it with Apple's own version here: http://a1.phobos.apple.com/us/r1000/000/Features/atv/AutumnResources/videos/entries.json. I think the syntax in mine is OK.
The Apple TV has been on permanently, so it would have had ample time to download the new movies, and the movies have been available (and play fine on the iPad or MBP, using the URLs from the JSON file).
Questions:
- Is there a way to get some feedback from the Apple TV as to whether it has fetched the latest JSON file?
- Can the locally stored / cached movies be erased to force a download of the new movies?
- Are there any size / quality limitations on what movie files it will play?
- Can I somehow force the Apple TV to reload the screensavers URL?
In addition to the above, you can make the Apple TV use your own JSON for the screensavers even when not in channel mode (i.e. in normal operating mode) by "spoofing" its DNS queries (in the whitest hat way possible).
Simply set up a DNS server that authoritatively resolves a1.phobos.apple.com to your own server but forwards all other queries, then set up a web server that answers to that name and replicates the full path to the JSON file, but (obviously) have it point at your own file.
Then manually set the DNS in the TV's network configuration to your DNS server. Done!
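A minimal sketch of such a DNS setup using dnsmasq (the web server IP and the upstream resolver below are placeholders for your own network, not anything Apple-specific):

```
# /etc/dnsmasq.conf (sketch; IP and upstream are placeholders)
# Answer a1.phobos.apple.com authoritatively with your web server's address:
address=/a1.phobos.apple.com/192.168.1.10
# Forward everything else upstream:
server=8.8.8.8
```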
Problem solved: movie files need to have the file name extension .mov (I had used .mp4), irrespective of the content format. Way to go, Apple...
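For reference, a minimal sketch of what an entry needs to look like, following the structure of Apple's entries.json (the UUIDs here are copied from Apple's file for illustration and the URL is hypothetical; the point is the .mov extension):

```json
[
  {
    "id" : "73F3F654-9EC5-4876-8BF6-474E22029A49",
    "assets" : [
      {
        "url" : "http://example.com/screensavers/mymovie.mov",
        "accessibilityLabel" : "My Movie",
        "type" : "video",
        "id" : "D388F00A-5A32-4431-A95C-38BF7FF7268D",
        "timeOfDay" : "day"
      }
    ]
  }
]
```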
Hi Balthasar,
I'm reaching out to you for help regarding the possibility of streaming my own videos instead of the Aerial videos as a screensaver in the Apple TV.
From the research I have done online, I understand that the process is as follows:
With an Apple TV 4th Generation hooked up and running...
Go to Settings > General and then press the Play/Pause button 4 times on the Apple TV Remote until you get to the Demo Mode settings. Inside the Demo Mode settings you are presented with 3 options: Off, Apple Store, and Channel. If you enable "Channel" mode, there is a config URL that you can use to feed info for the screensavers. I don't know how to actually accomplish the last part and was wondering if you could elaborate.
Based on another post on Stack Overflow I have downloaded the .json file and understand that I have to put my modified .json file on a server and enter the address where it is hosted as the URL in Channel mode on the Apple TV. I also understand that my own videos have to be .mov files in order to work. However, inside the .json file I have no clue about the following fields:
[
{
"id" : "73F3F654-9EC5-4876-8BF6-474E22029A49",
"assets" : [
{
"url" : "http://a1.phobos.apple.com/us/r1000/000/Features/atv/AutumnResources/videos/comp_GL_G004_C010_v03_6Mbps.mov",
"accessibilityLabel" : "Greenland",
"type" : "video",
"id" : "D388F00A-5A32-4431-A95C-38BF7FF7268D",
"timeOfDay" : "day"
},
––––––––––––
Concrete questions:
1) Can you give examples of how to configure a server to serve that .json file? Can I host this file on Apache or Windows IIS?
2) The .json file lists a bunch of video files, but I might not be hosting the same amount; for now it will most likely be just a couple, though in the future I might host many more. Can I just delete entries and leave only the ones I need?
3) What is the "id" in the code, and how do I get one for my own videos? Is it needed in order for the videos to be fetched by the Apple TV?
4) What about the accessibilityLabel? Is it required? Can I omit it, delete it, or simply replace it with my own label?
5) Same question for the timeOfDay. What am I supposed to use this for?
6) Where should I host the video files? Can I use a cloud service like Google Drive, Dropbox, or OneDrive? If not, what kind of server should be used?
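To make the questions concrete, here is how I imagine generating a trimmed-down .json for my own videos (a Python sketch; the URLs are hypothetical, and using random UUIDs for the id fields is my guess at how they work):

```python
import json
import uuid

def make_entry(url, label, time_of_day="day"):
    """Build one screensaver asset entry. My assumption: the 'id' just
    needs to be unique, so a random UUID (uppercased, matching the style
    of Apple's file) should do."""
    return {
        "url": url,
        "accessibilityLabel": label,
        "type": "video",
        "id": str(uuid.uuid4()).upper(),
        "timeOfDay": time_of_day,
    }

def make_manifest(entries):
    # The top level is a list containing one dict with its own 'id'
    # and the list of assets, mirroring Apple's entries.json.
    return [{"id": str(uuid.uuid4()).upper(), "assets": entries}]

manifest = make_manifest([
    # Hypothetical URLs on my own server; note the .mov extension.
    make_entry("http://example.com/videos/clip1.mov", "Clip 1", "day"),
    make_entry("http://example.com/videos/clip2.mov", "Clip 2", "night"),
])
print(json.dumps(manifest, indent=2))
```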
Please be as specific and descriptive as possible.
Please help me out. I'm sure many more people are interested in this feature and would love to learn how to get the most out of their Apple TV. I promise to make a video or a very detailed guide on how to do this "for Dummies" like myself, so that we can spread the word.
Thank you very much in advance.
Related
Is there any way, using currently available SDK frameworks on Cocoa (touch) to create a streaming solution where I would host my mp4 content on some server and stream it to my iOS client app?
I know how to write such a client, but it's a bit confusing on server side.
AFAIK CloudKit is not suitable for that task because behind the scenes it keeps a synced local copy of the datastore, which is NOT what I want. I want to store media content remotely and stream it to the client so that it does not take up precious space on a poor 16 GB iPad mini.
Can I accomplish that server solution using Objective-C / Cocoa Touch at all?
Should I instead resort to Azure and C#?
It's not 100% clear why you would do anything like that.
If you have control over the server side, why don't you just set up a basic HTTP server and, on the client side, use AVPlayer to fetch the mp4 and play it back to the user? It is very simple; a basic Apache setup would do the job.
If it is live media content you want to stream, then this guide is worth reading as well:
https://developer.apple.com/Library/ios/documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/StreamingMediaGuide.pdf
Edited after your comment:
If you would like to use AVPlayer as the player, then I think those two things don't fit well together. AVPlayer needs to buffer different ranges ahead (for some container formats, the second/third request reads the end of the stream). As far as I can see, CKFetchRecordsOperation (which you would use to fetch the content from the server) is not capable of seeking in the stream.
If you have your private player which doesn't require seeking, then you might be able to use CKFetchRecordsOperation's perRecordProgressBlock to feed your player with data.
Yes, you could do that with CloudKit. First, it is not true that CloudKit keeps a local copy of the data. It is up to you what you do with the downloaded data. There isn't even any caching in CloudKit.
To do what you want to do, assuming the content is shared between users, you could upload it to CloudKit in the public database of your app. I think you could do this with the CloudKit web interface, but otherwise you could create a simple Mac app to manage the uploads.
The client app could then download the files. It couldn't stream them though, as far as I know. It would have to download all the files.
If you want a streaming solution, you would probably have to figure out how to split the files into small chunks, and recombine them on the client app.
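A rough sketch of that chunking idea (the 100 kB chunk size is an arbitrary choice for illustration; in a real app each chunk would be stored as its own record/asset in the public database):

```python
import io

def split_chunks(data, chunk_size):
    """Split a media blob into fixed-size chunks; each chunk would be
    uploaded as a separate record."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def recombine(chunks):
    """Client side: concatenate the downloaded chunks back into the file."""
    buf = io.BytesIO()
    for chunk in chunks:
        buf.write(chunk)
    return buf.getvalue()

data = bytes(range(256)) * 10000   # stand-in for a video file (~2.5 MB)
chunks = split_chunks(data, chunk_size=100_000)
print(len(chunks))                 # 26 chunks of at most 100 kB
assert recombine(chunks) == data   # the round trip is lossless
```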
I'm not sure whether this document is up to date, but there is a paragraph, "Requirements for Apps", which demands using HTTP Live Streaming if you deliver any video exceeding 10 minutes or 5 MB.
I made a web app for an iPad which is supposed to run on an intranet. The app is basically a form, kind of an exam. Some questions have videos (around 20MB in size). I've defined my cache manifest as follows:
CACHE MANIFEST
CACHE:
/videos/preg1Calidad.m4v
/videos/preg2Calidad.m4v
The .manifest file's content-type header is "text/cache-manifest". The thing is that, as this web app is supposed to access some web services to read/write data in a database located on a server connected to the intranet, I need the iPads to be connected to the network. When I add my app to the home screen and a question containing a video comes up, I can see the video being fetched from the network (I can see the loading animation next to the WiFi icon) instead of being accessed from the iPad itself.
I've deleted Safari's data storage, cache, and history, and deleted the app and added it again; nothing seems to work. The content type for the .m4v videos is set to "video/mp4".
So, I have several questions:
How can I know for sure whether the files in the .manifest are being cached?
I know some browsers apparently have a maximum storage size for offline apps; nevertheless, looking through the Apple documentation I haven't seen such a thing. Is there any file size limitation on iPads? Maybe file type limitations?
I don't know if I'm misunderstanding the behavior of the web app for offline access or the definition of the .manifest file. It might be that it only works when the device is actually offline (no network connection available, airplane mode maybe), but I thought that listing a file under "CACHE" would do the trick, so that file wouldn't be fetched from the network. Shouldn't it behave like that?
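One more thing I am wondering about (this is an assumption on my part): whether I need a NETWORK section so that resources not listed in the manifest, like my web service calls, are still allowed through once the page is cache-controlled. Something like:

```
CACHE MANIFEST
CACHE:
/videos/preg1Calidad.m4v
/videos/preg2Calidad.m4v
NETWORK:
*
```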
I cannot start developing this as a native app at the moment since it's kind of in production. If anyone has an idea on how to fix this quickly, that would be great. I've been thinking of adding the files to the internal database as base64, or into a JavaScript variable (as base64 as well).
Thanks so much.
I would like to set up a scalable video distribution server/infrastructure for streaming video to iOS devices. The client will have some programming of pre-produced content, e.g. 6 hours that will be played and then repeated from the beginning. It should be possible to enter the exact schedule when the video starts, and also the possibility to have it start at different times on different days.
I've been pointed to the Live Smooth Streaming offer from Amazon, using the Amazon CloudFront.
So my question to you: does this support the features I need, and how do I get it set up properly? I've already taken a look at their documentation at http://awsdocs.s3.amazonaws.com/CF/latest/cf_dg.pdf, but that didn't cover the use case I want, namely setting up a programming scheme. I've seen references to CloudFormation templates for live streaming, but is there also something similar for doing the fixed programming, or maybe it can be used for that too?
Thanks for your time!
Flo
Your question is a bit mixed up. iOS devices need HLS protocol content. You simply need to create your content in HLS form (ts files with a .m3u8 playlist), store it in an S3 bucket, and link your CloudFront distribution to it.
Since you mention pre-produced content, I am guessing it is available beforehand and not generated live.
Your program should then point at the right .m3u8 file and can update the .m3u8 file appropriately. The program that controls access to the m3u8 (when it is available, what should be playable, etc.) is independent of the storage in S3/CloudFront.
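For illustration, a minimal VOD-style .m3u8 of the kind you would store in the S3 bucket might look like this (segment names and durations are hypothetical):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
segment000.ts
#EXTINF:10.0,
segment001.ts
#EXT-X-ENDLIST
```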
You can also generate content live but nothing changes except content is getting created on the fly. Your program controlling the .m3u8 will control what the client gets access to.
If it were not only for iOS devices but also other devices, the same would apply: keep your content in an S3 bucket and link it to CloudFront, with the content in whatever format each device needs, and let your web server program control access to it. Remember that CloudFront is not a player. CloudFront also provides support for a Flash streaming server, and you can use that as well.
Is it possible to implement a feature that allows users to watch videos as they are being uploaded to the server by others? Is HTML5 suitable for this task? What about Flash? Are there any ready-to-go solutions? I don't want to reinvent the wheel. The application will be hosted on a dedicated server.
Thanks.
Of course it is possible; the data is there, isn't it?
However it will be very hard to implement.
Also, I am not that into Python and am not aware of a library or service suiting your requirements, but I can cover the basics of video streaming.
I assume you are talking about video files that are uploaded and not streams. Because, for that, there are obviously thousands of solutions out there...
In the simplest case, the video being uploaded is already ready to be served to your clients and has a so-called "faststart" atom. These atoms are container-format specific, and there are sometimes a bunch of them. The most common is the moov atom. It contains a lot of data and is very complex; however, for our use case, in a nutshell, it holds the data that enables the client to begin playing the video right away, using only the data available from the beginning of the file.
You need that for progressive-download videos (YouTube-style), where a file is served from a web server and the player can start playing before the full file has been downloaded. If the faststart atom were not present, that would often not be possible. Sometimes playback still works, but the player, for example, cannot display a progress bar, because it doesn't know how long the file is.
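To illustrate the idea, here is a small Python sketch that checks whether the moov atom precedes the mdat atom among a file's top-level boxes. It builds synthetic boxes for the demo and ignores 64-bit box sizes, so treat it as a sketch rather than a full parser:

```python
import struct

def top_level_boxes(data):
    """Walk the top-level boxes of an MP4/QuickTime file.
    Each box starts with a 4-byte big-endian size and a 4-byte type."""
    boxes = []
    offset = 0
    while offset + 8 <= len(data):
        size, box_type = struct.unpack(">I4s", data[offset:offset + 8])
        if size < 8:
            break  # 64-bit sizes etc. are not handled in this sketch
        boxes.append(box_type.decode("ascii", "replace"))
        offset += size
    return boxes

def is_faststart(data):
    """Fast start means the moov box comes before the mdat box."""
    boxes = top_level_boxes(data)
    return ("moov" in boxes and "mdat" in boxes
            and boxes.index("moov") < boxes.index("mdat"))

def box(box_type, payload=b""):
    # Helper that builds a synthetic box for demonstration purposes.
    return struct.pack(">I", 8 + len(payload)) + box_type + payload

fast = box(b"ftyp") + box(b"moov", b"\x00" * 16) + box(b"mdat", b"\x00" * 64)
slow = box(b"ftyp") + box(b"mdat", b"\x00" * 64) + box(b"moov", b"\x00" * 16)
print(is_faststart(fast), is_faststart(slow))  # True False
```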
With that covered, the file can be uploaded. You will need an upload solution that writes the data directly to a buffer or a file (a file will be easier). This is almost always the case; for example, PHP creates a file in the tmp_dir. You can also specify that location if you want to find the video while it is being uploaded.
Now you can start reading that file byte by byte and writing the data to a connection to another client. Just be sure not to read ahead of what has already been received and written. You would probably register each upload with a metadata record in memory that holds the currently received byte position and the location of the file.
Anyone who requests the file after the upload has finished can just receive the entire file; if the upload is not yet finished, they get it from your application.
You will have to throttle the delivery, or pause it when the available data runs short; to the client this will simply look like a slow connection. However, you will have to send some data from time to time to prevent the connection from closing. But if your upload doesn't stall (and why should it?), that shouldn't be a problem.
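A minimal sketch of that read-behind-the-writer idea (plain Python file I/O; the byte counts and file path are arbitrary):

```python
import os
import tempfile

def serve_available(path, position, uploaded_bytes):
    """Read whatever has been received so far, starting at `position`,
    but never past `uploaded_bytes` (the writer's current progress)."""
    with open(path, "rb") as f:
        f.seek(position)
        return f.read(uploaded_bytes - position)

# Simulate an upload being written while a client reads behind it.
path = os.path.join(tempfile.mkdtemp(), "upload.part")
with open(path, "wb") as f:
    f.write(b"A" * 100)          # first chunk of the upload arrives
chunk1 = serve_available(path, 0, 100)
with open(path, "ab") as f:
    f.write(b"B" * 50)           # more data arrives later
chunk2 = serve_available(path, len(chunk1), 150)
print(len(chunk1), len(chunk2))  # 100 50
```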
Now if you want something like on-the-fly transcoding of various input formats into your desired output format, things get interesting.
AFAIK ffmpeg has neat APIs which let you deal directly with data streams.
HandBrake is also a very good tool; however, you would need to take the long road using external executables.
I am not fully aware of your requirements, but if your clients are already tuned in, for example on a Red5 streaming server, feeding data into a stream should also work fine.
Yes, take a look at Qik, http://qik.com/
"Instant Video Sharing ... Videos can be viewed live (right as they are being recorded) or anytime later."
Qik provides developer APIs, including ones like these:
qik.stream.subscribe_public_recent -- Subscribe to the videos (live and recorded)
qik.user.following -- Provides the list of people the user is following
qik.stream.public_info -- Get public information for a specific video
It is most certainly possible to do this, but it won't be trivial. And no, I don't think you will find an "out of the box" solution that will require little effort on your behalf.
You say you want to let:
users watch videos as they are uploaded to server by others
Well, this could be interpreted two different ways:
Do you mean that you don't want a user to have to refresh the page before seeing new videos that other users have just finished uploading?
Or do you mean that you want one user to be able to watch a partially uploaded video (aka another user is still in the process of uploading it and right now the server only contains a partial upload of the video)?
Implementing #1 wouldn't be hard at all. You would just need an AJAX script to check for newly uploaded videos, and those videos could then be served to the user in whatever way you choose. HTML5 vs. Flash isn't really a consideration here.
The second scenario, on the other hand, would require quite a bit of effort. I am guessing that HTML5 might not be mature enough to handle this type of situation. If you are not looking to reinvent the wheel and don't have a lot of time to dedicate to this feature, then I would say you are out of luck. You may be able to use ffmpeg to parse partial video files and feed them to a Flash player, but I would consider this a large task.
Looking through the site, every question uses an outdated method. How do the YouTube FLV downloader websites/applications do it?
I am trying to do this in PHP but the theory or steps to do it will suffice, thanks.
As mentioned in other posts, you may want to look at our code in
youtube-dl (or in the code of the Firefox extension called
FlashVideoReplacer).
In the particular case of youtube-dl, the "real work" is done in the
subclasses of InformationExtractor, and it is hard to give a "stable" answer,
as the layout of such sites changes constantly.
There are some pieces of information that are not dynamic, such as, for
instance, the uploader of the video, the title, the upload date and,
most importantly, the identifier of the video (an 11-character string).
For the dynamic parts, what can be said about such tools is that,
essentially, the URLs for such videos are dynamically generated, and
you need to perform some back-and-forth communication with the server.
It is important to keep in mind that what such sites can (and do) take into
consideration depends on a number of parameters, including: the cookies
you have already received (as is the case for HTML5 videos), your
geolocation (for regional control), your age (for "strong" material), your
language/locale (for showing content tailored to you), etc.
youtube-dl uses a regular expression to extract the video ID from the URL
that you give and then proceeds from a "normalized", typical URL, as used
from the United States.
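As a sketch of that first step, the ID extraction can look like this (the exact patterns youtube-dl uses are more elaborate and change over time; this one only covers a few common URL shapes):

```python
import re

# IDs are 11 characters drawn from letters, digits, '-' and '_'.
# The URL shapes matched here (watch?v=, /v/, youtu.be/) are just the
# common ones; real extractors handle many more variants.
VIDEO_ID_RE = re.compile(r"(?:v=|/v/|youtu\.be/)([0-9A-Za-z_-]{11})")

def extract_video_id(url):
    """Return the 11-character video ID, or None if no pattern matches."""
    m = VIDEO_ID_RE.search(url)
    return m.group(1) if m else None

print(extract_video_id("http://www.youtube.com/watch?v=C6nRb45I3e4"))  # C6nRb45I3e4
```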
Some of the dynamic data to be gathered includes:
some time-stamp (the expire, and fexp parts of the final URL)
the cookies sent to the browser
the format in which we want to download the video (the itag part of the final URL)
throttling information (the algorithm, burst, factor)
some hashes/tokens used internally by them (e.g., the signature part of the final URL)
Some of the information listed above was once not required, but now it is
(in particular, the cookies that they send you). This means that the
information listed above is very likely to become obsolete as the controls
become stricter.
You can see some of the work (with respect to the cookies) that I did in
this regard in the implementation of an external backend to use an external
downloader (a "download accelerator") with what youtube-dl extracts.
Disclosure: I have committed some changes to the repository, and I maintain
the youtube-dl package in Debian (and, as a side effect, in Ubuntu).
You might want to take a look at how youtube-dl downloads the files. As YouTube changes, that program does seem to get updated rather quickly.
YouTube doesn't expose FLV files directly; your video is compiled into a SWF object. The video needs to be either extracted or converted in order to get the FLV.
http://www.youtube.com/v/videoid
ex:
http://www.youtube.com/watch?v=C6nRb45I3e4
becomes
http://www.youtube.com/v/C6nRb45I3e4
From there, you need to convert the SWF into an FLV, which can be done with ffmpeg.
If you really want to get the URL of a YouTube video in FLV or MP4 format, then use "YouTube Downloader - Version: 5.0" in Chrome; you can right-click the download button and copy the path.
(Screenshot: http://i.stack.imgur.com/AFUWr.jpg shows the button from which you can get the URL of any format.)
You can click on this button and copy the URL from "chrome://downloads/".
I think this may help you.