Accessing a secure endpoint from a WKWebView loaded from local files - iOS

So we are in development of an iPhone application (iOS 9+, not 8), where we are using WKWebView and local files to start the thing up. To get data into the app we are planning to use a secure service that will also handle authentication, and we were thinking that CORS was the way to do this.
Loading files from the file system, however, sets the origin in the HTTP request to null, which is not allowed when we also want to send cookies (for authentication).
So my question(s):
Can we set an origin in WKWebView to override the null with something like https://acme.server.net?
What are other people (you) doing?
Should we consider doing something else other than CORS? (JSONP is not an option).

You can create a local webserver on the iPhone and load your files from that. This way origin will not be null and you can use CORS to connect to your server.
I have found GCDWebServer to be easy to work with.
Personally, I would rather use HTML5 AppCache or Service Workers to create a local application cache without the need for CORS, but WKWebView does not support these, so to my knowledge you are forced to use the webserver approach.
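Once the pages are served from the local webserver instead of file://, the request origin is a real http://localhost origin and credentialed CORS works the usual way. A minimal sketch of the client-side call, assuming the local server runs on port 8080 and reusing the asker's placeholder endpoint https://acme.server.net:
// Credentialed cross-origin request from a page served by the local webserver.
// The origin is now http://localhost:8080 (the port is an assumption), not null.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://acme.server.net/api/data'); // placeholder endpoint
xhr.withCredentials = true; // send the authentication cookies
xhr.onload = function () { console.log(xhr.responseText); };
xhr.send();
For this to work, the server must answer with Access-Control-Allow-Credentials: true and an Access-Control-Allow-Origin that echoes the exact origin (the * wildcard is not allowed when credentials are included).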

Related

Stream AWS S3 HLS Videos in iOS Browsers

How can I stream HLS (.m3u8) in iOS Safari browsers? My videos are stored in an AWS S3 bucket, and the only way to access the video and audio .m3u8 is to pass a signed URL.
I am using video.js to stream videos. videojs.Hls.xhr.beforeRequest is not working in iOS browsers. I also read that MSE is not supported on iOS; is there any alternative I can use to pass a signed URL so I can stream my videos in iOS browsers?
Here is my sample code:
videojs.Hls.xhr.beforeRequest = function (options) {
  // Audio and Video segment URIs both get the same signed query string appended
  if (options.uri.includes('Audio') || options.uri.includes('Video')) {
    options.uri = options.uri + '?Policy=' + policy + '&Key-Pair-Id=' + keyPairId + '&Signature=' + signature;
  }
  return options;
};
var overrideNative = false;
var player = videojs('video-test', {
  controls: true,
  fluid: true,
  preload: "none",
  techOrder: ["html5"],
  html5: {
    hls: {
      withCredentials: true,
      overrideNative: overrideNative
    }
  },
  nativeVideoTracks: !overrideNative,
  nativeAudioTracks: !overrideNative,
  nativeTextTracks: !overrideNative
});
player.src({
  src: url,
  type: "application/x-mpegURL",
  withCredentials: true
});
Exact same issue here, except implemented in ReactJS.
The video.js VHS overrides do not work; it has to do with Safari and whether it parses the options at all, so the security parameters never make it onto the calls that follow the register m3u8.
There are a few other people dealing with this, such as
https://github.com/awslabs/unicornflix/issues/15
I've tried everything, from Amazon IVS + video.js attempts to rewriting my class modules as functional ones to try examples I've found, and I basically always end up right back at this issue.
---------------UPDATE BELOW---------------
(and grab a comfy seat)
Delivering protected video from S3 via CloudFront, using secure cookies for iOS-based browsers (plus all Safari) and secure URLs for Chrome and everything else.
website architecture:
Frontend: ReactJS
Backend: NodeJS
cloud service architecture: https://aws.amazon.com/blogs/media/creating-a-secure-video-on-demand-vod-platform-using-aws/ (and attached lab guide)
Presumptions: equivalent setup to above cloud architecture, specifically the IAM configuration for CF to S3 bucket, and the related S3 security configurations for IAM and CORS.
TL/DR:
NON-SAFARI, aka Chrome etc. - use secure URLs (VERY easy OOTB); the above guide worked for Chrome, but not for Safari.
Safari requires secure cookies for streaming HLS natively, and does not recognize xhr.beforeRequest overloads at all.
SAFARI / iOS BROWSERS BASED ON SAFARI - use secure cookies
Everything below explains this.
Setting cookies sounds easy enough! That is probably why there is no end-to-end example anywhere in the AWS CloudFront docs, the AWS forums, or the AWS Developer Slack channel: it is presumed to be easy because, hey, it's just cookies, right?
Right.
END TL/DR
Solution Details
The 'AH-HA!' moment was finally understanding that, for this to work, you need to be able to set a cookie for a CloudFront server from your own server, which is basically an enormous web security no-no; aka 'domains need to be the same, all the way down/up the network call'.
The comments here https://jwplayer-support-archive.netlify.app/questions/16356614-signed-cookies-on-cloudfront-with-hls-and-dash
and here https://www.spacevatican.org/2015/5/1/using-cloudfront-signed-cookies/
combined with the original AWS documentation about signed cookies with a CNAME of a domain applying to subdomains, finally made it all click for me.
The solution is:
Set up a CNAME for your CloudFront instance; i.e. you can't set a cookie against 5j1h24j1j.cloudfront.net, as you don't own it, but you can CNAME something like cloudfront.<your-domain>.com in your DNS. Good docs exist for this particular step at https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-cloudfront-distribution.html
Important - you also need to set up a reference for this CNAME in your CF distribution. If you want SSL, you need to re-issue your domain cert to cover cloudfront.<your-domain>.com, then upload this cert to AWS Certificate Manager so it can be referenced in the CF distribution edit screen's drop-down list (which you can't save unless you select something).
For your local development box, set up a hosts-file override for whatever IP NodeJS is listening on/bound to; i.e. if your node is bound to 0.0.0.0, edit your /etc/hosts to have the line 0.0.0.0 dev.<your-domain>.com. When you deploy to your production host, the domain will obviously work there.
Now, in your backend (aka server-side) code, where you will set the cookies, you need to set the domain parameter. You can't use a wildcard directly, but you can set it to <your-domain>.com, which in a browser, if you inspect using the developer tools, you will see listed as .<your-domain>.com - NOTE THE LEADING DOT. This is fine and expected behaviour for a modern browser; it essentially says 'any subdomain of <your-domain>.com will have these cookies accessible'.
What the above does is make sure that, END TO END, you are able to send the cookie assigned to .<your-domain>.com: from a call starting on dev.<your-domain>.com (or, in the future, production <your-domain>.com), through to the same URI on a different port for your backend, and then on to CF via your CNAME, which is now a subdomain the cookie can see. At this point, it's up to CF to pass the required headers on to the S3 instance.
But wait, there is more to do client side first.
A thing that blocked me from even seeing the cookies in the first place was the fact that they don't get set unless the requestor/initiator uses a withCredentials: true flag in the network call that starts it all. In my code, that is a ReactJS componentDidMount()-based axios REST GET call to my backend NodeJS endpoint for the video list (which the NodeJS backend gets from GraphQL in AWS, but that's not needed for this explanation of the fix).
componentDidMount() {
  axios.get('http://dev.<your-domain>.com:3000/api/my-data-endpoint', {
    withCredentials: true, // without this, Set-Cookie headers in the response are ignored
  })
    .then(vidData => {
      this.setState({
        // ...set stuff for the player component to use
      });
    });
}
When my axios call did not have withCredentials: true, the cookies were never sent back. As soon as I added it, my cookies were at least sent back to the first caller, localhost (with no domain parameter in the cookie, it defaults to the calling host, which I had as localhost at the time), which meant the browser would never pass them on to CF, which was the 2435h23l4jjfsj.cloudfront.net name at that point.
So, updating axios to use dev.<your-domain>.com for server access, plus the withCredentials flag, my cookies were set on the call to my backend for info about the videos. As the AWS documentation points out, the cookies need to be fully set BEFORE the call for secure content, so this is accomplished.
In the above-described call to my API, I get back something like
{ src: 'https://cloudfront.<your-domain>.com/path-to-secure-register-m3u8-file', qps: '?policy=x&signature=y&key-pair-id=z', ... }
[sidebar - signed URLs are all generated in the cloud by a Lambda]
For Chrome, the player code will append the two together. Wherever you instantiate your video.js player, overload videojs.Hls.xhr.beforeRequest as follows:
videojs.Hls.xhr.beforeRequest = function (options) {
  options.uri = `${options.uri}${videojs.getAllPlayers()[0].options().token}`;
  return options;
};
which puts the query string ?policy=x&signature=y&key-pair-id=z on the end of every sub-file in the stream after the register m3u8 file kicks it off.
The backend API call described above also tears the query params apart to set the cookies before the JSON is sent as a response, as follows:
res.cookie("CloudFront-Key-Pair-Id", keypair, {httpOnly: true, path: "/", domain: ‘<your-domain>.com'});
res.cookie("CloudFront-Signature", sig, {httpOnly: true, path: "/", domain: ‘<your-domain>.com'});
res.cookie("CloudFront-Policy", poli, {httpOnly: true, path: "/", domain: ‘<your-domain>.com'});
INTERRUPT - now that we have set withCredentials to true, you will probably see CORS issues; fun.
In your server-side code (mine is the NodeJS backend behind the ReactJS app), set a few headers in the router:
res.header("Access-Control-Allow-Credentials", "true");
res.header("Access-Control-Allow-Origin", "http://dev.<your-domain>.com:8080"); // will be set to just <your-domain>.com for production
At this point, stuff still wasn't working though. This is because the cloud code was putting the CF 234hgjghg.cloudfront.net domain into the policy, and not my CNAME mapping. I updated this in the cloud. So now my calls for video data returned URLs to the secure m3u8 using cloudfront.<your-domain>.com and not cloudfront.net, which is described here https://forums.aws.amazon.com/thread.jspa?messageID=610961&#610961 in step 3 of the last response.
At THIS point, using the Safari debug tools, I knew I was close, because the responses to attempted streaming changed from the 'no key or cookie' XML to
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
error, and in it was a reference to my S3 bucket. This told me that my CF distribution was essentially happy with the cookie-based policy, key ID, and signature, and had passed me on to S3, but S3 told me to get lost.
The good thing at this point, though, was that the 3 required CloudFront cookies were set from dev.<your-domain>.com all the way through to the cloudfront.<your-domain>.com calls for the register m3u8 file, and then on all the subsequent calls to a .ts or .m3u8.
OK, so I spent a bit of time in the S3 config (not editing anything, just reviewing everything... which looked 100% fine to me), and then went back to the CF distribution behaviours edit page, where you set up which headers to forward.
settings (listed below):
cache and origin request settings: use legacy cache settings
cache based on selected request headers - whitelist
add origin, access-control-request-headers, access-control-request-method. You will need to explicitly type the last two in; they didn't auto-complete for me nor show in the suggestion list, but the Add Custom button worked.
object caching: use origin cache headers
forward cookies/query strings - none(improves caching) on both
restrict viewer access (use signed urls or cookies) - yes (this is the entire point of this headache lol)
trusted signer, self
After the distribution had saved and propagated, Safari and Chrome video playing both worked!
This was quite a rabbit hole, and a degree (or 15) more difficult than I anticipated, but of course once it's all written out it seems so logical and obvious. I hope this at least partially helps the others I found on the internet struggling to stream secure private content across all major browsers using AWS CloudFront in front of S3.
This seems promising, but I am still trying to figure out what the HLS version of this looks like (this is an example for DASH): https://github.com/videojs/video.js/issues/5247#issuecomment-735299266

Cherry-pick: from two URLs with same file on different CDNs, load whichever is in cache

I have a web app that wants to load bootstrap.min.js
It's on these two CDNs (among others):
https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.3.1/js/bootstrap.min.js
https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js
The odds of a cache hit from some other app using these CDNs are relatively high.
How can I tell the browser to check if they are cached and load from browser cache?
Can a service worker do this?
I believe that there are privacy/security restrictions in place that make it difficult to determine, using JavaScript, whether a third-party URL is present in the browser's cache.
Adding a service worker into the mix will not get around those restrictions.
It's possible to use the Fetch API to create a Request whose cache mode is 'only-if-cached', which will behave more or less in the way you describe, but that will only work if the request's mode is 'same-origin'. In other words, only if the Request is for a first-party URL, not a third-party CDN URL as in your example.
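A minimal sketch of that restriction in action, using a hypothetical same-origin path /js/bootstrap.min.js:
// 'only-if-cached' answers from the HTTP cache without hitting the network;
// it is only permitted with mode 'same-origin', so it cannot probe a CDN.
fetch('/js/bootstrap.min.js', { mode: 'same-origin', cache: 'only-if-cached' })
  .then(function (response) {
    if (response.ok) {
      console.log('served from the HTTP cache');
    } else {
      console.log('not in the cache, status:', response.status); // MDN documents a 504 on a miss
    }
  })
  .catch(function (err) {
    console.log('request failed:', err); // some browsers surface a miss as a rejection
  });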

Cordova & CORS (iOS)

I recently got my hands on a relatively old Cordova app for iOS (iPhones), built around one year ago, in order to debug it.
The app queries an API from a server. This server is built using Laravel and makes use of laravel-cors.
For a peculiar reason, the developers of this app have set up CORS server-side to accept requests only if the Origin header is missing.
I was told that the app was working just fine for the past year.
While debugging it, I noticed that the iOS browser adds origin => 'file://' to the request headers when the Cordova app uses $.ajax to make requests.
And now for my questions
Are you aware of such a change on newer iOS versions?
I suppose I can't do anything client-side in order to bypass it?
How safe is to add "file://" as an accepted origin, server-side?
Thanks a ton!
The reason the server accepts null-Origin isn't "peculiar" -- that is how CORS is defined to work. It is intended to protect against browser-based XSS attacks -- browsers send the Origin header automatically so the server can accept or reject the request based on which domain(s) they allow javascript calls from. It is intended as a safe standards-based successor to the JSONP hack to allow cross-origin server requests, but in a controlled way. By default, browsers require and allow only same-origin XHRs and other similar requests (full list).
CORS is undefined for non-browser clients, since non-browser clients can set whatever Origin they want anyway (e.g. curl), so in those cases it makes sense to just leave off the Origin header completely.
To answer part of your question, it is not (very) safe to add file:// as an accepted origin server-side. The reason is that an attacker wishing to bypass CORS protections could trick a user into downloading a web page to their filesystem and then executing it in their browser -- thus bypassing any intended Origin restrictions since file:// is in the allowed list. There may also be other exploits, known and unknown, that could take advantage of servers that accept a file:// origin.
You'll have to evaluate the risks of adding this based on your own project requirements.

Stream remote file to client in ruby/rails 4/unicorn/nginx

I am trying to stream a file from a remote storage service (not s3 :-)) to the client using Ruby on Rails 4.2.
My server needs to stay in the middle to authenticate the client request, but also to build up the request to the remote storage service, since all requests to that service need to be authenticated using a custom header param. This makes it impossible to do a simple redirect_to and let the client download the file directly (but do let me know if this IS in fact possible with Rails!). Also, I want to keep the URL of the file cloaked from the client.
Up until now I have been using a gem called ZipLine, but this also does not work, as it still buffers the remote file before sending it to the client. As I am using unicorn/nginx, this might also be due to a setting in either of those two that prevents proper streaming.
As per rails doc's instructions I have tried adding
listen 3000, tcp_nopush: false
to config/unicorn.rb but to no avail.
A solution might be to cache the remote file locally for a certain period and just serve that file. This would make some things easier but also creating new headaches like keeping the remote and cached files in sync, setting the right triggers for cache expiration, etc.
So to sum up:
1) How do I accomplish the scenario above?
2) If this is not an intelligent/efficient way of doing things, should I just cache a remote copy?
3) What are your experiences/recommendations in given scenario?
I have come across various solutions scattered around the interweb but none inspire a complete solution.
Thanks!
I am assuming the third-party storage service has HTTP access. If you did consider using redirect_to, I assume the service also provides a means of per-download authorization, like a unique key in the header that expires and does not expose your secret API keys, or an HMAC-signed URL with an expiration time as a param.
Anyhow, most cloud storage services provide this kind of file access. I would highly recommend letting the service stream the file. Your app should simply authorize the user and redirect to the service. Rails allows you to add custom headers while redirecting; it is discussed in the Rails guides.
10.2.1 Setting Custom Headers
If you want to set custom headers for a response then response.headers
is the place to do it. The headers attribute is a hash which maps
header names to their values, and Rails will set some of them
automatically. If you want to add or change a header, just assign it
to response.headers
So your action code would end up being something like this:
def download
  # do_auth_check
  response.headers["Your-API-Auth-Key"] = "SOME-RANDOM-STRING"
  redirect_to url
end
Don't use up server resources unnecessarily by streaming all those downloads through them; we are paying the cloud services to do that, after all :)

'Vary: If-None-Match' to cache mobile and desktop requests separately

Note: Please correct me if any of my assumptions are wrong. I'm not very sure of any of this...
I have been playing around with HTTP caching on Heroku and trying to work out a nice way to differentiate between mobile and desktop requests when caching using Varnish on Heroku.
My first idea was that I could set a Vary header so the cache is varied on If-None-Match. As Rails automatically sends back ETags generated from a hash of the content, the ETag would vary between desktop and mobile requests (different templates), and so it would eventually cache two versions (not fact, just my original thoughts). I have been playing around with this but I don't think it works.
Firstly, I can't wrap my head around when/if anything gets cached, as surely requests with If-None-Match will be conditional GETs anyway? Secondly, in practice, fresh requests (ones without If-None-Match) sometimes receive the mobile site. Is this because the cache doesn't know whether to serve up the mobile or desktop cached version, as the If-None-Match header isn't there?
As it probably sounds, I am rather confused. Will this approach work in any way, or am I being silly? Also, is there any way to achieve separate cached versions if I am unable to touch the Varnish config at all (as I am on Heroku)?
The exact code I am using in Rails to set the cache headers is:
response.headers['Cache-Control'] = 'public, max-age=86400'
response.headers['Vary'] = 'If-None-Match'
Edit: I am aware I can use Vary: User-Agent but am trying to avoid it if possible due to its high miss rate (many, many user agents).
You could try Vary: User-Agent. However you'll have many cached versions of a single page (one for each user agent).
Another solution may be to detect mobile browsers directly in the reverse proxy: set an X-Is-Mobile-Browser client header before the reverse proxy attempts to find a cached page, set Vary: X-Is-Mobile-Browser on the backend server (so that the reverse proxy will only cache two versions of the same page), and replace that header with Vary: User-Agent before sending to the client.
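The backend half of that idea might look like the following sketch (Express rather than the asker's Rails, purely for illustration; the X-Is-Mobile-Browser header name comes from this answer, while the route and template names are assumptions, and the header injection/rewriting still has to happen in the proxy itself):
// Backend sketch: vary on the proxy-supplied header so the reverse proxy
// caches exactly two variants (mobile and desktop) of each page.
app.get('/news', function (req, res) {
  var isMobile = req.get('X-Is-Mobile-Browser') === 'true'; // injected by the proxy
  res.set('Vary', 'X-Is-Mobile-Browser');
  res.set('Cache-Control', 'public, max-age=86400');
  res.render(isMobile ? 'news_mobile' : 'news_desktop'); // assumed template names
});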
If you cannot change your Varnish configuration, you have to use different URLs for mobile and desktop pages. You can add a URL parameter (?mobile=true), add a segment to your path (yourdomain.com/mobile/news), or use a different host (like m.yourdomain.com).
This makes a lot of sense because (I've seen this many times, both in CMSs and applications) at some point in time you want to differentiate content and structure for mobile devices. People just do different things or are looking for different information on mobile devices...
