Stream AWS S3 HLS Videos in iOS Browsers

How can I stream HLS (.m3u8) in iOS Safari browsers? My videos are stored in an AWS S3 bucket, and the only way to access the video and audio .m3u8 files is to pass a signed URL.
I am using video.js to stream the videos. videojs.Hls.xhr.beforeRequest is not working in iOS browsers. I also read that MSE is not supported on iOS. Is there any alternative I can use to pass a signed URL so that I can stream my videos in iOS browsers?
Here is my sample code:
videojs.Hls.xhr.beforeRequest = function (options) {
  if (options.uri.includes('Audio')) {
    options.uri = options.uri + '?Policy=' + policy + '&Key-Pair-Id=' + keyPairId + '&Signature=' + signature;
  } else if (options.uri.includes('Video')) {
    options.uri = options.uri + '?Policy=' + policy + '&Key-Pair-Id=' + keyPairId + '&Signature=' + signature;
  }
  return options;
};
var overrideNative = false;
var player = videojs('video-test', {
  controls: true,
  fluid: true,
  preload: 'none',
  techOrder: ['html5'],
  html5: {
    hls: {
      withCredentials: true,
      overrideNative: overrideNative,
    },
  },
  nativeVideoTracks: !overrideNative,
  nativeAudioTracks: !overrideNative,
  nativeTextTracks: !overrideNative
});
player.src({
  src: url,
  type: 'application/x-mpegURL',
  withCredentials: true
});

Exact same issue here, except implemented in ReactJS.
The video.js VHS overrides do not work; it comes down to Safari using its native HLS playback and not parsing the options that carry the security parameters for the calls made after the register m3u8.
There are a few other people dealing with this, such as
https://github.com/awslabs/unicornflix/issues/15
I've tried everything, from Amazon IVS + video.js attempts to rewriting my class modules as functional components to try examples I've found, and I basically always end up right back at this issue.
---------------UPDATE BELOW---------------
(and grab a comfy seat)
Delivering protected video from S3 via CloudFront, using secure cookies for iOS-based browsers plus all Safari, and signed URLs for Chrome and everything else.
Website architecture:
Frontend: ReactJS
Backend: NodeJS
Cloud service architecture: https://aws.amazon.com/blogs/media/creating-a-secure-video-on-demand-vod-platform-using-aws/ (and the attached lab guide)
Assumptions: an equivalent setup to the above cloud architecture, specifically the IAM configuration from CloudFront to the S3 bucket and the related S3 security configuration for IAM and CORS.
TL;DR:
NON-SAFARI, aka Chrome etc. - use signed URLs (very easy out of the box); the above guide worked for Chrome, but not for Safari.
Safari requires secure cookies for streaming HLS natively, and does not recognize xhr.beforeRequest overloads at all.
SAFARI / iOS BROWSERS BASED ON SAFARI - use secure cookies.
Everything below explains this.
Setting cookies sounds easy enough! That is probably why there is no end-to-end example anywhere in the AWS CloudFront docs, the AWS forums, or the AWS Developers Slack channel: it is presumed to be easy because, hey, it's just cookies, right?
Right.
END TL;DR
Solution Details
The 'AH-HA!' moment was finally understanding that for this to work, you need to be able to set a cookie for a CloudFront server from your own server, which is normally an enormous web-security no-no. In other words: 'the domains need to be the same, all the way down/up the network call'.
The comments here https://jwplayer-support-archive.netlify.app/questions/16356614-signed-cookies-on-cloudfront-with-hls-and-dash
and here https://www.spacevatican.org/2015/5/1/using-cloudfront-signed-cookies/
combined with the original AWS documentation about signed cookies and using a CNAME on a domain so they apply to subdomains, finally made it all click for me.
The solution is:
Set up a CNAME for your CloudFront distribution; i.e. you can't set a cookie against 5j1h24j1j.cloudfront.net because you don't own it, but you can CNAME something like cloudfront.<your-domain>.com in your DNS. Good docs exist for this particular step at https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-cloudfront-distribution.html
Important - you also need to add this CNAME as an alternate domain name in your CF distribution. (If you want SSL, you need to re-issue your domain cert to cover cloudfront.<your-domain>.com, then upload that cert to AWS Certificate Manager so it can be referenced in the CF distribution edit screen's drop-down list, which you can't save unless you select something.)
For your local development box, set up a hosts-file override for whatever IP NodeJS is listening on/bound to; i.e. if your node is bound to 0.0.0.0, edit your /etc/hosts to have the line 0.0.0.0 dev.<your-domain>.com. When you deploy to your production host, the domain will obviously work there.
Now, in your backend (server-side) code, where you will set the cookies, you need to set the domain parameter. You can't use a wildcard directly, but you can leave it as <your-domain>.com (which, if you inspect it in the browser's developer tools, you will see listed as .<your-domain>.com - NOTE THE LEADING DOT. This is fine and expected behaviour for a modern browser; it essentially means 'any subdomain of <your-domain>.com will have these cookies available').
What the above does is ensure that, END TO END, you are able to send the cookie assigned to .<your-domain>.com from a call starting on dev.<your-domain>.com (or your future production <your-domain>.com), through to the same host on a different port for your backend, and then on to CloudFront via your CNAME, which is now a subdomain the cookie can be sent to. At that point, it's up to CloudFront to pass the required headers on to S3.
But wait, there is more to do client side first.
One thing that blocked me from even seeing the cookies in the first place was the fact that they don't get set unless the requestor/initiator uses 'withCredentials: true' on the network call that starts it all. In my code, that is an Axios REST GET call in a ReactJS componentDidMount() to my backend NodeJS endpoint for the video list (which NodeJS gets from GraphQL in AWS, but that's not needed for this explanation of my fix).
componentDidMount() {
  // withCredentials is required, or the Set-Cookie headers on the response are ignored
  axios.get('http://dev.<your-domain>.com:3000/api/my-data-endpoint', {
    withCredentials: true,
  })
  .then(vidData => {
    this.setState({
      // ...set stuff for the player component to use
    });
  });
}
When my Axios call did not have 'withCredentials: true', the cookies were never sent back. As soon as I added it, my cookies were at least sent back to the first caller, localhost (with no domain parameter on the cookie, it defaults to the calling host, which I had as localhost at the time), which meant they would never be passed on to CloudFront, which was still the 2435h23l4jjfsj.cloudfront.net name at that point.
So, after updating Axios to use dev.<your-domain>.com for server access, plus the withCredentials flag, my cookies were set on the call to my backend for information about the videos. As the AWS documentation points out, the cookies need to be fully set BEFORE the call for the secure content, so this is accomplished.
In the above-described call to my API, I get back something like
{ src: 'https://cloudfront.<your-domain>.com/path-to-secure-register-m3u8-file', qps: '?Policy=x&Signature=y&Key-Pair-Id=z', ... }
[Sidebar - the signed URLs are all generated in the cloud by a Lambda.]
For Chrome, the player code appends the two together. Wherever you instantiate your video.js player, overload videojs.Hls.xhr.beforeRequest as follows:
videojs.Hls.xhr.beforeRequest = function (options) {
  // 'token' holds the signed query string (the 'qps' value from the API response),
  // stashed on the player's options when the player was created
  options.uri = `${options.uri}${videojs.getAllPlayers()[0].options().token}`;
  return options;
};
This puts the query string ?Policy=x&Signature=y&Key-Pair-Id=z on the end of every sub-file request in the stream once the register m3u8 file kicks it off.
The backend API endpoint described above also tears the query parameters apart to set the cookies before the JSON response is sent, as follows:
res.cookie("CloudFront-Key-Pair-Id", keypair, {httpOnly: true, path: "/", domain: ‘<your-domain>.com'});
res.cookie("CloudFront-Signature", sig, {httpOnly: true, path: "/", domain: ‘<your-domain>.com'});
res.cookie("CloudFront-Policy", poli, {httpOnly: true, path: "/", domain: ‘<your-domain>.com'});
INTERRUPT - now that we have set withCredentials to true, you will probably see CORS issues; fun.
In your server-side code (NodeJS in my case, serving the ReactJS frontend), set a few headers in your router:
res.header("Access-Control-Allow-Credentials", "true");
res.header("Access-Control-Allow-Origin", "http://dev.<your-domain>.com:8080"); // will be set to just <your-domain>.com for production
At this point, stuff still wasn't working, though. This was because the cloud code was putting the CF 234hgjghg.cloudfront.net domain into the policy, and not my CNAME mapping. I updated this in the cloud, so my calls for video data now returned URLs to the secure m3u8 using cloudfront.<your-domain>.com rather than the cloudfront.net name, as described here https://forums.aws.amazon.com/thread.jspa?messageID=610961&#610961 in step 3 of the last response.
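For reference, a minimal sketch of generating those policy/signature/key-pair values with the AWS SDK for JavaScript v2 CloudFront signer, with the custom policy scoped to the CNAME domain. The key pair id, key path, and expiry here are placeholders; this is an illustration, not my actual Lambda code.
// A sketch, assuming aws-sdk v2 and an existing CloudFront key pair.
const AWS = require('aws-sdk');
const fs = require('fs');

const keyPairId = 'APKAEXAMPLE123';                                 // placeholder CloudFront key pair id
const privateKey = fs.readFileSync('./cf-private-key.pem', 'utf8'); // placeholder key path
const signer = new AWS.CloudFront.Signer(keyPairId, privateKey);

// Custom policy: note the Resource uses the CNAME, not xxxx.cloudfront.net
const policy = JSON.stringify({
  Statement: [{
    Resource: 'https://cloudfront.<your-domain>.com/*',
    Condition: {
      DateLessThan: { 'AWS:EpochTime': Math.floor(Date.now() / 1000) + 3600 }
    }
  }]
});

// Returns { 'CloudFront-Policy': ..., 'CloudFront-Signature': ..., 'CloudFront-Key-Pair-Id': ... }
const cookies = signer.getSignedCookie({ policy });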
At THIS point, using the Safari debug tools, I knew I was close, because the responses to attempted streaming changed from the 'no key or cookie' XML to a
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
error, and in it was a reference to my S3 bucket. To me this meant that my CF distribution was essentially happy with the cookie-based policy, key-pair id, and signature, and had passed me on to S3, but S3 told me to get lost.
The good thing at this point, though, was that the 3 required CloudFront cookies were set from dev.<your-domain>.com all the way through to the cloudfront.<your-domain>.com call for the register m3u8 file, and then on all the subsequent calls for each .ts or .m3u8.
OK, so I spent a bit of time in the S3 config (not editing anything, just reviewing everything... which looked 100% fine to me), and then went back to the CF distribution's behaviours edit page, where you set up which headers to forward.
Settings (listed below; a rough sketch of the equivalent cache-behavior fields follows the list):
Cache and origin request settings: use legacy cache settings
Cache based on selected request headers: whitelist
Whitelist Origin, Access-Control-Request-Headers, and Access-Control-Request-Method. You will need to explicitly type the last two in; they didn't auto-complete for me or show in the suggestion list, but the add-custom button worked.
Object caching: use origin cache headers
Forward cookies / query strings: none (improves caching), on both
Restrict viewer access (use signed URLs or signed cookies): yes (this is the entire point of this headache lol)
Trusted signer: self
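Purely as an illustration (values are not copied from my distribution), those choices map roughly onto the legacy CacheBehavior/ForwardedValues fields in the CloudFront API like this:
{
  "ViewerProtocolPolicy": "redirect-to-https",
  "TrustedSigners": { "Enabled": true, "Quantity": 1, "Items": ["self"] },
  "ForwardedValues": {
    "QueryString": false,
    "Cookies": { "Forward": "none" },
    "Headers": {
      "Quantity": 3,
      "Items": ["Origin", "Access-Control-Request-Headers", "Access-Control-Request-Method"]
    }
  }
}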
After the distribution had saved and propagated, video playback worked in both Safari and Chrome!
This was quite a rabbit hole and a degree (or 15) more difficult than I anticipated, but of course, once it's all written out, it all seems so logical and obvious. I hope this at least partially helps the others I found around the internet trying to securely stream private content across all major browsers using AWS CloudFront in front of S3.

This seems promising, but I am still trying to figure out what the HLS version of this looks like (this is an example for DASH): https://github.com/videojs/video.js/issues/5247#issuecomment-735299266
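Not an authoritative answer, but the closest HLS equivalent I know of is the withCredentials option on video.js's own HLS engine (VHS). A sketch, assuming video.js 7+; note this only applies when VHS is actually used, so it will not affect Safari's native playback:
// A sketch for non-native (VHS/MSE) playback; iOS Safari's native player ignores these options.
var player = videojs('video-test', {
  html5: {
    vhs: {                    // the option block was named 'hls' in older videojs-contrib-hls versions
      withCredentials: true   // send cookies with playlist and segment requests
    }
  }
});
player.src({ src: url, type: 'application/x-mpegURL', withCredentials: true });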

Related

Stream remote file to client in ruby/rails 4/unicorn/nginx

I am trying to stream a file from a remote storage service (not S3 :-)) to the client using Ruby on Rails 4.2.
My server needs to stay in the middle of things to authenticate the client request, but also to build up the request to the remote storage service, since all requests to that service need to be authenticated using a custom header param. This makes it impossible to do a simple redirect_to and let the client download the file directly (but do let me know if this IS in fact possible using Rails!). Also, I want to keep the URL of the file hidden from the client.
Up until now I have been using a gem called ZipLine, but this also does not work, as it still buffers the remote file before sending it to the client. As I am using unicorn/nginx, this might also be due to a setting in either of those two that prevents proper streaming.
As per the Rails docs' instructions I have tried adding
listen 3000, tcp_nopush: false
to config/unicorn.rb, but to no avail.
A solution might be to cache the remote file locally for a certain period and just serve that file. This would make some things easier but also creating new headaches like keeping the remote and cached files in sync, setting the right triggers for cache expiration, etc.
So to sum up:
1) How do I accomplish the scenario above?
2) If this is not an intelligent/efficient way of doing things, should I just cache a remote copy?
3) What are your experiences/recommendations for the given scenario?
I have come across various solutions scattered around the interweb but none inspire a complete solution.
Thanks!
I am assuming the third-party storage service has HTTP access. If you did consider using redirect_to, I assume the service also provides a means for per-download authorization, like a unique key in the header that expires and does not expose your secret API keys, or an HMAC-signed URL with the expiration time as a param.
Anyhow, most cloud storage services provide this kind of file access. I would highly recommend letting the service stream the file: your app should simply authorize the user and redirect to the service. Rails allows you to add custom headers while redirecting; it is discussed in the Rails guides.
10.2.1 Setting Custom Headers
If you want to set custom headers for a response then response.headers is the place to do it. The headers attribute is a hash which maps header names to their values, and Rails will set some of them automatically. If you want to add or change a header, just assign it to response.headers.
So your action code would end up being something like this:
def download
  # do_auth_check
  response.headers["Your-API-Auth-Key"] = "SOME-RANDOM-STRING"
  redirect_to url
end
Don't use up unnecessary server resources by streaming all those downloads through your app. We are paying the cloud services to do that, after all :)

Accessing secure endpoint from WKWebView loaded from local files

So we are in development of an iPhone application (iOS 9+, not 8), where we are using WKWebView and local files to start the thing up. To get data into the app we are planning to use a secure service that will also handle authentication, and were thinking that CORS was the way to do this.
Loading files from the file system, however, sets the origin in the HTTP request to null, which is not allowed when we also want to send cookies (for authentication).
So my question(s):
Can we set an origin in WKWebView to overwrite the null with something like https://acme.server.net?
What are other people (you) doing?
Should we consider doing something else other than CORS? (JSONP is not an option).
You can create a local web server on the iPhone and load your files from that. This way the origin will not be null and you can use CORS to connect to your server.
I have found GCDWebServer to be easy to work with.
Personally, I would rather use HTML5 AppCache or Service Workers to create a local application cache without the need for CORS, but WKWebView does not support these, and to my knowledge you are forced to use the web server approach.

Cloudflare + Heroku SSL

I have a rails app that is running on heroku and am using Cloudflare Pro with their Full SSL to encrypt traffic between: User <-SSL-> Cloudflare <-SSL-> Heroku, as detailed in: http://mikecoutermarsh.com/adding-ssl-to-heroku-with-cloudflare/ .
I am also using the rack-ssl-enforcer gem to force all http requests to go through https.
This is working properly, except I have the following issues, by browser:
1) Firefox: I have to add a security exception on the first visit to the site, getting the "This site is not trusted" warning. Once on the site, I also see a warning in the address bar.
2) Chrome: the page loads the first time, but the lock in the address bar has a warning triangle on it, which when clicked displays:
Your connection is encrypted with 128-bit encryption. However, this page includes other resources which are not secure. These resources can be viewed by others while in transit, and can be modified by an attacker to change the look of the page. The connection uses TLS 1.2. The connection is encrypted and authenticated using AES_128_GCM and uses ECDHE_RSA as the key exchange mechanism.
3) Safari: initially loads with the https badge, but it immediately drops off.
Is there a way to leverage Cloudflare SSL + piggyback of Heroku native SSL without running into these security warnings? If not, I don't see much value in the configuration.
My apologies for slinging erroneous accusations against Cloudflare and Heroku :-)
Turns out the issue was not the fault of either, but instead that images on the app (being served from AWS S3) were being served up without https.
If anyone runs into this situation, lessons learned across a wasted day:
S3 only lets you serve up content via https if you serve from your bucket's dedicated URL: s3.amazonaws.com/your-bucket-name/etc.
a) I tried setting the bucket up for static website hosting, so I could use the URL "your-bucket-name.your-url.s3-website-us-east-1.amazonaws.com/etc...", and then set up a CNAME within my DNS that sends "your-bucket-name.your-url" to "your-bucket-name.your-url.s3-website-us-east-1.amazonaws.com/etc...", to pretty up the URLs.
b) This works, but AWS only lets you serve via https with your full URL (s3.amazonaws.com/your-bucket-name/etc.) or *.s3-website-us-east-1.amazonaws.com/etc., which doesn't work if you have a dot in your bucket name (your-bucket-name.your-url), which was required for me to do the CNAME redirect.
If you want to use an AWS CDN with https on your custom domain, AWS' only option is CloudFront with an SSL certificate, which they charge $600/mo for, per region. No thanks!
In the end, I sucked it up and have ugly image URLs that look like https://s3-website-us-east-1.amazonaws.com/mybucketname..., and, using Paperclip, I specify https with ":s3_protocol => :https" in my model. Other than that, all is working properly now.

PayPal Custom Payment Page with SSL image sources from S3 WITHOUT certificate problems?

On the PayPal payment page customization you can insert your own URI for the logo and header background. Obviously it's bad news to insert a non-443 (i.e. non-https) URL, but it also appears that inserting a secure S3 link is also a major fail.
Initially I inserted my own CNAME alias for it, cdn.mydomain.com, and then, after seeing an error message from Firefox, thought I needed to pass the actual full AWS S3 path to the bucket, which is: https://cdn.mydomain.com.s3.amazonaws.com/paypal/my-logo.png
Even after doing this, I'm seeing the same problem with Firefox.
I am figuring that there is a different way to refer to a bucket via a path vs the subdomain for the bucket, but I'm not readily finding the answer. Or is there an altogether different way to handle this so that there's no certificate issue?
Use a subdomain/bucketname that doesn't have a dot in it. The cert is only good for *.s3.amazonaws.com, not *.*.s3.amazonaws.com.

Rails implementation for securing S3 documents

I would like to protect my S3 documents behind my Rails app, such that if I go to
www.myapp.com/attachment/5, the app should authenticate the user prior to displaying/downloading the document.
I have read similar questions on stackoverflow but I'm not sure I've seen any good conclusions.
From what I have read there are several things you can do to "protect" your S3 documents.
1) Obfuscate the URL. I have done this. I think this is a good thing to do so no one can guess the URL. For example it would be easy to "walk" the URL's if your S3 URLs are obvious: https://s3.amazonaws.com/myapp.com/attachments/1/document.doc. Having a URL such as:
https://s3.amazonaws.com/myapp.com/7ca/6ab/c9d/db2/727/f14/document.doc seems much better.
This is great to do but doesn't resolve the issue of passing around URLs via email or websites.
2) Use an expiring URL as shown here: Rails 3, paperclip + S3 - Howto Store for an Instance and Protect Access
For me, however, this is not a great solution because the URL is exposed (even if just for a short period of time) and another user could perhaps reuse the URL in that time. You have to adjust the time to allow for the download without providing too much time for copying. It just seems like the wrong solution.
3) Proxy the document download via the app. At first I tried to just use send_file: http://www.therailsway.com/2009/2/22/file-downloads-done-right but the problem is that it only works for static/local files on your server, not files served via another site (S3/AWS). I can however use send_data to load the document into my app and immediately serve it to the user. The problem with this solution is obvious - twice the bandwidth and twice the time (to load the document into my app and then send it back to the user).
I'm looking for a solution that provides the full security of #3 but does not require the additional bandwidth and time for loading. It looks like Basecamp is "protecting" documents behind their app (via authentication) and I assume other sites are doing something similar but I don't think they are using my #3 solution.
Suggestions would be greatly appreciated.
UPDATE:
I went with a 4th solution:
4) Use amazon bucket policies to control access to the files based on referrer:
http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?UsingBucketPolicies.html
UPDATE AGAIN:
Well, #4 can easily be worked around via a browser's developer tools. So I'm still in search of a solid solution.
You'd want to do two things:
Make the bucket and all objects inside it private. The naming convention doesn't actually matter; the simpler the better.
Generate signed URLs, and redirect to them from your application. This way, your app can check whether the user is authenticated and authorized, then generate a new signed URL and redirect them to it using a 301 HTTP status code. This means the file never goes through your servers, so there's no load or bandwidth on you. Here are the docs for presigning a GET_OBJECT request:
https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Presigner.html
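The linked docs are for the Ruby SDK; purely as an illustration of the same presign-and-redirect flow, here is a sketch using the AWS SDK for JavaScript v3 presigner (the bucket, key, and region are placeholders, and this is not code from the answer above):
// Sketch of the presign-and-redirect idea, assuming @aws-sdk/client-s3 and @aws-sdk/s3-request-presigner.
const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');

const s3 = new S3Client({ region: 'us-east-1' }); // placeholder region

async function signedDownloadUrl(bucket, key) {
  const command = new GetObjectCommand({ Bucket: bucket, Key: key });
  // A short expiry limits how long a leaked URL stays usable
  return getSignedUrl(s3, command, { expiresIn: 60 });
}

// In a request handler: authenticate/authorize first, then redirect to the signed URL, e.g.
// res.redirect(await signedDownloadUrl('my-private-bucket', 'attachments/5/document.doc'));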
I would vote for number 3; it is the only truly secure approach, because once you pass the user an S3 URL, it is valid until its expiration time. A crafty user could use that hole; the only question is, will that affect your application?
Perhaps you could set the expiry time lower, which would minimise the risk?
Take a look at an excerpt from this post:
Accessing private objects from a browser
All private objects are accessible via an authenticated GET request to the S3 servers. You can generate an authenticated URL for an object like this:
S3Object.url_for('beluga_baby.jpg', 'marcel_molina')
By default authenticated URLs expire 5 minutes after they were generated. Expiration options can be specified either with an absolute time since the epoch with the :expires option, or with a number of seconds relative to now with the :expires_in option.
I have been trying to do something similar for quite some time now. If you don't want to use the bandwidth twice, then the only way this is possible is to let S3 do it. Now, I am totally with you about the exposed URL. Were you able to come up with any alternative?
I found something that might be useful in this regard - http://docs.aws.amazon.com/AmazonS3/latest/dev/AuthUsingTempFederationTokenRuby.html
Once a user logs in, an AWS session with their IP as part of the AWS policy should be created, and then that session can be used to generate the signed URLs. So if somebody else grabs the URL, the signature will not match, since the source of the request will be a different IP. Let me know if this makes sense and is secure enough.
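As a rough sketch of that idea (hedged: the names, durations, and exact policy shape are illustrative, and this is not taken from the linked doc, which uses the Ruby SDK), the flow with the AWS SDK for JavaScript v2 might look like this:
// Sketch: IP-restricted temporary credentials, then a presigned URL signed with them.
const AWS = require('aws-sdk');

async function ipBoundSignedUrl(userIp, bucket, key) {
  const sts = new AWS.STS();
  const { Credentials } = await sts.getFederationToken({
    Name: 'video-download',            // placeholder session name
    DurationSeconds: 900,
    Policy: JSON.stringify({
      Version: '2012-10-17',
      Statement: [{
        Effect: 'Allow',
        Action: 's3:GetObject',
        Resource: `arn:aws:s3:::${bucket}/${key}`,
        Condition: { IpAddress: { 'aws:SourceIp': `${userIp}/32` } }
      }]
    })
  }).promise();

  const s3 = new AWS.S3({
    accessKeyId: Credentials.AccessKeyId,
    secretAccessKey: Credentials.SecretAccessKey,
    sessionToken: Credentials.SessionToken
  });
  // The presigned URL carries the session token, so S3 enforces the IP condition at request time
  return s3.getSignedUrlPromise('getObject', { Bucket: bucket, Key: key, Expires: 300 });
}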
