Hosting service-worker file from CDN/Google Drive

While implementing web push notifications, I have to specify the location of the service-worker file in my JS code:
navigator.serviceWorker.register('/service-worker.js')
Can I host this service-worker file in a CDN or Google Drive?

Your service worker JavaScript file, the one that gets passed into navigator.serviceWorker.register(), can't live on a different domain than the pages it controls. If your site is served from https://www.example.com/, then you can't serve your service worker JavaScript file from https://www.somecdn.com/. You'd have to make sure your web pages and your service worker file are all served from the same domain, though whether that single domain corresponds to a CDN's server or something else doesn't matter.
Additionally, the path to the service worker script file determines the maximum scope of the service worker's control: you'll generally need to place the script at the "top level" of your website if you want to control all the pages on the site. There's more information in this other Stack Overflow answer.
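A minimal sketch of both constraints (the origins and paths are illustrative):
// Assuming the page itself is served from https://www.example.com/
if ('serviceWorker' in navigator) {
  // OK: a same-origin script at the top level controls the whole site.
  navigator.serviceWorker.register('/service-worker.js')
    .then((reg) => console.log('controlling scope:', reg.scope)) // https://www.example.com/
    .catch((err) => console.error('registration failed:', err));

  // Rejected: the script URL is on a different origin than the page.
  // navigator.serviceWorker.register('https://www.somecdn.com/service-worker.js');

  // Allowed but limited: a script under /assets/ only controls pages under /assets/ by default.
  // navigator.serviceWorker.register('/assets/service-worker.js');
}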

Related

Completely replacing a request in a WebExtension

I'm looking to make a web extension for Firefox that stores HTML pages and other resources in local storage and serves them for offline viewing. To do that, I need to intercept requests that the browser makes for the pages and the content in them.
Problem is, I can't figure out how to do that. I've tried several approaches:
The webRequest API doesn't allow fulfilling a request entirely: it can only block or redirect a request, or edit the response after it has been received.
Service Workers can listen to the fetch event, which can do what I want, but calling navigator.serviceWorker.register in an addon page (the moz-extension://<id> domain) results in an error: DOMException: The operation is insecure. Relevant Firefox bug
I could possibly set up the service worker on a self-hosted domain with a content script, but then it wouldn't be completely offline.
Is there an API that I missed that can intercept requests from inside a web extension?
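For reference, a rough sketch of the blocking webRequest pattern described above, assuming the webRequest and webRequestBlocking permissions plus a host permission in manifest.json; it can cancel or redirect, but not synthesize a response body:
// background.js (sketch)
browser.webRequest.onBeforeRequest.addListener(
  (details) => {
    // Option 1: block the request outright.
    // return { cancel: true };

    // Option 2: redirect, e.g. to a bundled extension page
    // (the target must be listed under web_accessible_resources).
    return { redirectUrl: browser.runtime.getURL('cached.html') };
  },
  { urls: ['*://example.com/*'] }, // illustrative match pattern
  ['blocking']
);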

Is it possible to create jsbin code with service worker?

In JSBin, I don't see an option to add a service worker. Is it possible? Or are there any other options?
I don't think it's possible to put together an example/demo that registers your own service worker using JSBin.
In terms of other options, what I tend to do is use GitHub's Gists to store my HTML and service worker JavaScript, and then use RawGit to serve the resources. RawGit gives you HTTPS plus proper Content-Type headers, both of which are necessary in order to register a service worker.
Here's an example of a Gist that uses this setup.
You need to get the "Raw" URL for your HTML (click on the "Raw" button in the Gist interface), and then paste that URL into https://rawgit.com/. When registering your service worker from your HTML, always use a relative URL (like navigator.serviceWorker.register('sw.js');), and include the code for your service worker in another file that's part of the same Gist.
You'll end up with a RawGit-served URL that loads your HTML and can register and use your service worker file.
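For illustration, the service worker file in such a Gist can be as small as this (a sketch, not the contents of the linked Gist):
// sw.js, stored as a second file in the same Gist
self.addEventListener('install', (event) => {
  // Activate immediately so the demo works on the first load.
  event.waitUntil(self.skipWaiting());
});

self.addEventListener('fetch', (event) => {
  // Pass requests through; a real demo would add caching logic here.
  event.respondWith(fetch(event.request));
});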

How does Chrome load `vue.min.js` from ServiceWorker

When I visit https://vuejs.org/js/vue.min.js, I see that it's loaded from a ServiceWorker. How does that work?
(Screenshot of Chrome DevTools)
A service worker is a piece of JavaScript code that your browser runs in the background. Service workers are given some special privileges over normal JavaScript code that runs in a browser, a commonly used privilege being their ability to intercept fetch events.
A fetch event fires any time the page requests a resource. Many service workers save all requested files into a cache. The second time the same file is requested, the service worker steps in and returns the cached file without sending any HTTP request.
In this particular case, a service worker is storing the contents of the Vue file in a cache the first time it's downloaded, removing the need for an expensive network request on future page loads.
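A cache-first fetch handler of the kind described above might look roughly like this (the cache name is made up):
// sw.js (sketch)
const CACHE_NAME = 'assets-v1'; // hypothetical cache name

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.open(CACHE_NAME).then(async (cache) => {
      const cached = await cache.match(event.request);
      if (cached) return cached; // repeat visit: no HTTP request is sent
      const response = await fetch(event.request); // first visit: hit the network
      cache.put(event.request, response.clone()); // ...and remember the result
      return response;
    })
  );
});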

What is an Elastic IP on Amazon EC2? Is it okay if I don't use it?

I have hosted a RoR app on an Amazon EC2 instance. The instance has a public IP, but no Elastic IP is assigned. The application is pointed to a domain using DreamHost.
We use Amazon S3 to store audio files uploaded through the web application, and we load these files back into the site to play them in a player.
This is where I'm facing a weird issue: sometimes the files play fine, but sometimes I get an error saying
No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin http://XX.XXX.XX.XXX is therefore not allowed access.
But at the same time, if I copy-paste the S3 URL into a browser outside my application, it loads.
Why does the error give the IP address instead of saying mydomain.com is therefore not allowed access?
I am guessing the issue is due to some domain/IP configuration.
An Elastic IP on Amazon is an IP address that is reserved for you. Without it, every time you stop and start your instance, a different IP is assigned to it.
You don't have to use an Elastic IP; you could, for example, point your domain to an ELB (Elastic Load Balancer) CNAME, which will remain constant as it load-balances between one or more instances of your application.
I'm not sure this has anything to do with the error given, which is explained in this answer:
Site B uses Access-Control-Allow-Origin to tell the browser that the content of this page is accessible to certain domains. By default, site B's pages are not accessible to any other domain; using the ACAO header opens a door for cross-domain access by specific domains. Site B should serve its pages with
Access-Control-Allow-Origin: http://sitea.com
It seems that the problematic link is an absolute URL with the explicit IP. I'm not sure why that should happen; look at the source of the page from which the link fails and try to figure it out.
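If the files live in S3, a CORS configuration on the bucket along these lines would make S3 send the Access-Control-Allow-Origin header for your domain (the domain is illustrative; this uses the S3 XML CORS format):
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>http://mydomain.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>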

mod_xsendfile alternatives for a shared hosting service without it

I'm trying to log download statistics for .pdfs and .zips (5-25MB) in a rails app that I'm currently developing and I just hit a brick wall; I found out our shared hosting provider doesn't support mod_xsendfile. The sources I've read state that without this, multiple downloads could potentially cause a DoS issue—something I'm definitely trying to avoid. I'm wondering if there are any alternatives to this method of serving files through rails?
Well, how sensitive are the files you're storing?
If you hosted these files somewhere under your app's /public directory, you could just do a meta tag or JavaScript redirect to the public-facing URL of those files after your users hit some sort of controller action that updates your download statistics.
In this case, your users would probably need to get one of those "Your download should commence in a few moments" pages before the browser would start the file download.
Under this scenario, your Rails application won't be streaming the file out; your web server will, which gives you the same effect as xsendfile. On the other hand, this won't work very well if you need to control access to those downloadable files.
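A sketch of that interstitial page, with a hypothetical file at /downloads/report.pdf under public/:
<!-- app/views/downloads/show.html.erb (sketch) -->
<p>Your download should commence in a few moments...</p>
<script>
  // The controller action behind this view has already logged the download;
  // now hand the actual file transfer off to the web server with a plain redirect.
  setTimeout(function () {
    window.location.href = '/downloads/report.pdf'; // hypothetical public file
  }, 2000);
</script>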
