Precache folder and subfolder content - service worker

Is there a way to precache all files in a folder and its subfolders?
What I want to accomplish is something like this:
...
event.waitUntil(
  caches
    .open(version + 'fundamentals')
    .then(function (cache) {
      return cache.addAll([
        "/",
        "/images/*"
        ...

The JavaScript executing in the context of your service worker won't have any knowledge of your filesystem.
If you'd like to precache all of the files matching a specific wildcard pattern, you'll need to add some build-time tooling, since that's when the code being executed has access to your filesystem. The build-time tool can then feed its output into the service worker's JavaScript file, ideally via some templating system.
The sw-precache tool can automate this process for you, including generating the service worker's JavaScript and keeping your caches up to date as static assets in your local filesystem change.
If you choose not to use a pre-packaged solution like sw-precache, make sure that you understand the service worker install/activate lifecycle events, and that you're properly versioning your static resources and caches.
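For example, a minimal build script using sw-precache's Node API might look like the following; the app/ directory and glob patterns here are illustrative, not taken from your project:

// build-sw.js — run with Node at build time, when the filesystem is available.
const swPrecache = require('sw-precache');

swPrecache.write('app/service-worker.js', {
  // Each glob is expanded into a concrete list of URLs and written into
  // the generated service worker's precache manifest.
  staticFileGlobs: [
    'app/index.html',
    'app/images/**/*.{png,jpg,gif,svg}'
  ],
  // Strip the build-directory prefix so cached URLs match what the browser requests.
  stripPrefix: 'app/'
}, function (error) {
  if (error) throw error;
  console.log('Generated app/service-worker.js');
});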

Related

Conditional Precaching in Workbox

There are cases where an application might need a relatively large resource as a strict requirement, but only in certain cases that are not easily detectable from a service worker. For example:
Safari has a prefixed and non-conforming implementation of the Web Audio API. I found a great shim, but it's over 300kb. It's critical for the web app to function in Safari but unnecessary in other browsers.
Some media are available in multiple formats that may not always be supported. Video is typically too large to precache and has problems with range requests, but it could apply to WebP images or short audio files (e.g. Opus vs. AAC). If you include all formats in the precache manifest, by default it will download all of them.
One approach would be to manually exclude certain files from the precache manifest and then to conditionally load those files from the scripts on the main thread to be stored in the runtime cache. But then those files are not precached - they're only loaded after the new version activates, by which point you may no longer be online.
Is there a solution that allows the following?
Have the service worker send a message to the main thread with the URL of a "test" script that checks the various conditions.
Load and run that script on the main thread and send the service worker the list of required conditional assets
Add those assets to the precache manifest to be diff'ed against the previous version and downloaded as necessary
The service worker should not switch over to the new version until all precached assets are loaded, including the conditional ones.
I ran into a similar issue with i18n files. I don't want my users to precache all strings for all available languages.
Some context for my setup: I'm using the injectManifest approach for Vite. The webpack plugin should expose the same config options for this.
Language files in my case have this shape in the build dir:
assets/messages.XXXX.js
So the first step was to tell the injectManifest config to ignore these files:
injectManifest: {
  globIgnores: ["**/node_modules/**/*", "assets/messages.*.js"],
}
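For context, assuming the vite-plugin-pwa plugin, that option sits inside the plugin config roughly like this; the srcDir and filename values are illustrative choices for a hand-written sw.js:

// vite.config.js (sketch)
import {defineConfig} from "vite";
import {VitePWA} from "vite-plugin-pwa";

export default defineConfig({
  plugins: [
    VitePWA({
      strategies: "injectManifest",
      srcDir: "src",      // where the hand-written sw.js lives
      filename: "sw.js",
      injectManifest: {
        // Keep the per-language message bundles out of the precache manifest.
        globIgnores: ["**/node_modules/**/*", "assets/messages.*.js"],
      },
    }),
  ],
});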
The second step was going into the sw.js file and telling Workbox that all scripts that aren't precached should be requested via the CacheFirst strategy (i.e. first attempt to load the script from the cache; if it's not present, load it from the network and put it into the cache).
So here's the adapted part of the sw.js:
/* ... */
import {precacheAndRoute, cleanupOutdatedCaches, createHandlerBoundToURL} from "workbox-precaching";
import {registerRoute, NavigationRoute, Route} from "workbox-routing";
import {CacheFirst} from "workbox-strategies";

precacheAndRoute(self.__WB_MANIFEST);

// Handle non-precached scripts with a cache-first strategy.
const scriptsRoute = new Route(
  ({request}) => {
    return request.destination === "script";
  },
  new CacheFirst({
    cacheName: "scripts",
  })
);

cleanupOutdatedCaches();
registerRoute(new NavigationRoute(createHandlerBoundToURL("index.html")));
registerRoute(scriptsRoute);
/* ... */

sw-precache, cache files with URL Parameters

I have a quick question.
I'm building a PWA with Polymer, and Lighthouse reports that the manifest's start_url is not cached by the service worker.
Since I want to track users who use the 'Add to homescreen' function, my manifest.json contains
"start_url": "index.html?homescreen=1",
I tried putting this exact string into my sw-precache config file, but the script generates a service worker that just caches the index.html file.
(I'm aware that it's a bit redundant to cache both index.html and index.html?homescreen=1.)
Do you have any idea how to fix this behaviour?
Thanks!
The ignoreUrlParametersMatching option in sw-precache can help you here.
By default, it's set to [/^utm_/], meaning that if you configured your Web App Manifest like
{
"start_url": "index.html?utm_source=homescreen"
}
then things should work as expected. If you'd like to keep that ?homescreen=1, then, when generating your service worker, you can explicitly set the ignoreUrlParametersMatching parameter to [/^homescreen/].
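For example, if you're using a sw-precache config file, it might look roughly like this (the glob patterns are illustrative), after which you'd regenerate the service worker with sw-precache --config=sw-precache-config.js:

// sw-precache-config.js (sketch)
module.exports = {
  staticFileGlobs: [
    'index.html',
    'src/**/*.{js,html,css}'
  ],
  // Treat index.html?homescreen=1 as a match for the precached index.html.
  ignoreUrlParametersMatching: [/^homescreen/]
};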

ASP.Net MVC Bundle linked content files

I've been trying to reduce the amount of copying and pasting of content files across some of my projects, and decided to go down the route of adding files as links from a central project.
The problem I have now is that System.Web.Optimization.Bundle.AddFile(string virtualPath, bool throwIfNotExist = true) does not work, as the file doesn't really exist in the directory I specified.
Does anyone have any pointers? Or maybe an alternative to linking content files?
Thanks.
I think you cannot access files outside of your web project with the virtual path system, and it might hurt when you want to deploy your app.
I suggest making a separate project for your static content with a custom domain, e.g. static.yourcompany.com, and referencing all of these files from that domain. This also has the advantage that the browser does not have to add the authentication and tracking cookies to these requests, which might be faster if you have a lot of traffic. You could also use a commercial CDN (http://www.maxcdn.com/pricing/) or store the files in Azure or Amazon AWS (which is more or less free for small amounts of files and traffic).
Another approach is to write some batch files to keep your content files synced.

How do I generate files and then zip/compress with Heroku?

I sort of want to do the reverse of this.
Instead of unzipping and adding the collection of files to S3, I want to do the following on a user's request:
generate a bunch of XML files
zip the XML files together with some images (pre-existing images hosted on S3)
download the zip
Does anybody know a good way of doing this? I think I could manage this no problem on a normal machine, but Heroku complicates things somewhat in that it has a read-only filesystem.
From the heroku documentation on the read-only filesystem:
There are two directories that are writeable: ./tmp and ./log (under your application root). If you wish to drop a file temporarily for the duration of the request, you can write to a filename like #{RAILS_ROOT}/tmp/myfile_#{Process.pid}. There is no guarantee that this file will be there on subsequent requests (although it might be), so this should not be used for any kind of permanent storage.
You should be able to write your generated XML files to tmp/ fairly easily and keep track of their names, download and write the S3 files to the same directory, and (maybe?) invoke a zip command as long as the output is in tmp/, then serve the file to the browser with the correct MIME type to prompt a download. I would only be concerned with how big the file size gets and whether Heroku has an undocumented limit on what they'll allow in the tmp directory. Especially since you are only performing this action for a one-time download over the duration of a single request, I think you have a good chance of being able to do it.
Edit: Looking around a bit, you might be able to use something like RubyZip to create your zip file if you want to avoid calling system commands.
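A rough sketch of that flow with RubyZip inside a Rails controller action could look like the following; generate_xml_files and fetch_images_from_s3 are hypothetical placeholders for your own code:

require 'zip'

def download_bundle
  tmp_dir  = Rails.root.join('tmp')
  zip_path = tmp_dir.join("bundle_#{Process.pid}.zip")

  # 1. Generate the XML files into tmp/ (hypothetical helper).
  xml_paths = generate_xml_files(tmp_dir)

  # 2. Download the pre-existing S3 images into tmp/ (hypothetical helper).
  image_paths = fetch_images_from_s3(tmp_dir)

  # 3. Build the archive in tmp/ with RubyZip instead of shelling out.
  Zip::File.open(zip_path.to_s, Zip::File::CREATE) do |zipfile|
    (xml_paths + image_paths).each do |path|
      zipfile.add(File.basename(path), path)
    end
  end

  # 4. Stream it back with headers that prompt a download.
  send_file zip_path, type: 'application/zip', disposition: 'attachment'
end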

Sharing Uploaded Files between multiple Rails Applications

I have multiple applications (an admin application, a "public"/non-admin application and a web service application) that all share a single database.
I've gotten the applications to share models and other code where appropriate, so I don't have multiple copies of the same code in each. However, the one task that I've yet to configure is how to share files that get uploaded between applications. I'm using Paperclip to successfully upload files to my applications, but it uploads the files to the application doing the upload.
Ideally, I'd like to be able to serve all the files from the web service. My idea was that I'd need some type of task executed every time a new file is uploaded to any of the applications to have the file created in the file structure of the web service.
I know I could easily accomplish serving files from a single application if I loaded the files into the database (which is how I accomplished this in a similar application suite), but I'm not sure if that's the best route to go for managing/serving the files. Another idea I had was storing the files in the database and having the web service manage "serving" them and having it create the file on the disk on the first request. After the first request for the file, the web service would serve the file from the disk rather than from the database.
Does anyone have any ideas on what the best way to accomplish this might be? Or any better ideas?
Thank you in advance for any feedback anyone might have on the subject.
I'd recommend putting them in a shared location that is served directly by your front-end webserver (not Rails), if you have that kind of setup. In this example, the webserver serves up a location called files that points at a folder on disk. Then, in your Paperclip options, change the save location.
has_attached_file :image,
  :url => "/files/:basename.:extension",
  :path => "/var/htdocs/public/files/:basename.:extension"
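For example, if nginx happens to be the front end, a hypothetical location block could map that URL straight onto the shared folder:

# nginx sketch: serve /files/ directly from disk, bypassing the Rails apps.
location /files/ {
    alias /var/htdocs/public/files/;
}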
Are you running all apps on the same UNIX/Linux system? Have you tried creating symbolic links to share the folder that contains the images? The goal is to save all images to the same location, eliminating the need to throw in complicated hooks for attachment creation.
Paperclip by default stores things at :rails_root/public/system/:attachment/:id/:style/:filename. If you're sharing a database, you won't have to worry about collisions. And you just need to create a system folder to be used by each app.
You can use one app's public/system folder as the master, or create an entirely new one. From this point on, all other system folders that aren't the master one will be referred to as slave folders. Once you've chosen your master, it's as simple as moving everything in each slave folder to the master folder, deleting the slave folders, and replacing them with symbolic links to the master folder.
Sample command set to migrate and replace with a symlink, given the Paperclip defaults. It's probably a good idea to stop the server before attempting this.
$ mv /path/to/slave/project/public/system/* /path/to/master/system
$ mv /path/to/slave/project/public/system /path/to/slave/project/public/system.bak
$ ln -s /path/to/master/system /path/to/slave/project/public/system
Once you're sure the migration is successful you can remove the backup:
$ rm -r /path/to/slave/project/public/system.bak
