How to point to current directory in serviceworker.js - service-worker

I have my service worker installed in a staging directory.
Main website is at: https://example.com
Staging is at: https://example.com/test/
Service worker path: https://example.com/test/serviceworker.js
This is how I register the service worker:
navigator.serviceWorker.register('https://example.com/test/serviceworker.js', {scope: '/test/'})
  .then(registration => {
    console.log(`Service Worker registered! Scope: ${registration.scope}`);
  });
The service worker works just fine and is installed.
My concern is that inside serviceworker.js I want '/' to point to the current directory automatically, which in this case is /test/.
For example, when I want to cache the homepage of my staging site:
/**
* Cache the homepage.
*/
workbox.routing.registerRoute('/', workbox.strategies.staleWhileRevalidate());
Here it's caching https://example.com/, not https://example.com/test/.
I know I can simply change registerRoute('/') to registerRoute('/test/'),
but that is not efficient, and it would force me to keep a different version of the service worker for localhost, staging, and production.
I want to know the right way to do this, so that '/' resolves to the directory the serviceworker.js file is in.
Thank you.

OK, I think I figured out how to do it the right way.
How to point to the current directory: simply use './' instead of '/'.
/**
* Cache the homepage.
*/
workbox.routing.registerRoute('./', workbox.strategies.staleWhileRevalidate());
How to cache files and images in the current path: './image/' instead of '/image/':
const urlsToCache = [
  './image/fallback.png',
  './offline/'
];
Now the same file works the same locally and on staging.
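For completeness, a minimal sketch of precaching those relative URLs in the install handler; the cache name is an assumption, and relative URLs here resolve against the service worker script's own location:
// Hypothetical cache name.
const CACHE_NAME = 'static-v1';
self.addEventListener('install', event => {
  event.waitUntil(
    caches.open(CACHE_NAME).then(cache => cache.addAll(urlsToCache))
  );
});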
This is such a newbie mistake; I need to be more clever :)
Edit:
I found this in Google's documentation (The Service Worker Lifecycle):
The default scope of a service worker registration is ./ relative to
the script URL. This means if you register a service worker at
//example.com/foo/bar.js it has a default scope of //example.com/foo/.
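To illustrate the quoted behavior, a sketch: registering with a URL relative to the current page, and no explicit scope, picks up the script's directory as the default scope.
// On a page under https://example.com/test/, 'serviceworker.js' resolves
// to /test/serviceworker.js, so the default scope is /test/.
navigator.serviceWorker.register('serviceworker.js')
  .then(registration => {
    console.log(`Default scope: ${registration.scope}`);
  });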

In your case, you will need a flag in the service worker registration to decide whether it's staging or production.
Having staging on a subpath of your production domain is not optimal at all.
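A sketch of such a flag; the path check below is an assumption based on the URLs in the question:
// Derive the base path from where the page lives instead of hard-coding it.
const isStaging = location.pathname.startsWith('/test/');
const basePath = isStaging ? '/test/' : '/';
navigator.serviceWorker.register(`${basePath}serviceworker.js`, {scope: basePath});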

Related

Using DigitalOcean Spaces as CDN for Next.js _next directory

I have a Kubernetes cluster on DigitalOcean where www.example.com points to my Next.js application. This works as expected; however, it serves all the assets from the same pod where my Next.js app is running:
https://www.example.com/_next/static/chunks/webpack-16e102404d6e7c36f3ae.js
https://www.example.com/_next/static/chunks/framework-852c1b21255b4351ab3d.js
and so on.
Now I would like to serve these static files from a CDN instead, so I started researching how to do this. One thing I found is that DigitalOcean offers CDN functionality through their Spaces, and another is the documentation from Next.js here: https://nextjs.org/docs/api-reference/next.config.js/cdn-support-with-asset-prefix
So I set up a DigitalOcean Space, which is now available through https://cdn.example.com, and I also followed the instructions from Next.js and modified my next.config.js file like this:
const isProd = process.env.NODE_ENV === 'production';

module.exports = {
  // Use the CDN in production and localhost for development.
  assetPrefix: isProd ? 'https://cdn.example.com' : '',
  future: {
    webpack5: true,
  },
};
And deployed it. But of course this doesn't work: the files that are generated during the build stage never get uploaded to my CDN. So when I open my site, none of the static files load, because these URLs don't exist:
https://cdn.example.com/_next/static/chunks/webpack-16e102404d6e7c36f3ae.js
https://cdn.example.com/_next/static/chunks/framework-852c1b21255b4351ab3d.js
So now my question is: how do I set this up? As I understand it, there are two possible ways to do this:
1. Configure my DigitalOcean Space to point at the _next folder of my pod; the first request will then still be served from my pod, but every request after that will be served from the CDN.
2. During the build phase in deployment, upload my files to the CDN's _next folder.
And here is where I am stuck - I have no idea how to do either. For option 1, I tried finding such a setting inside DigitalOcean but couldn't find anything.
For option 2, this is my current workflow:
1. I make changes to the code.
2. I commit the changes to GitHub.
3. GitHub Actions is configured to automatically build a new Docker image.
4. GitHub Actions then pushes this new Docker image to my registry.
5. GitHub Actions then updates my Kubernetes cluster, telling it to use this new image for my Next.js application.
If I have to make changes to this workflow to upload things to the CDN, where would I do it? My Dockerfile is a multi-stage file (3 stages), and only in the 2nd stage do I run the build command.
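For option 2, one possible shape, sketched under assumptions: since Spaces is S3-compatible, a GitHub Actions step can sync the build output to the Space once the build has run. The space name, region endpoint, and secret names below are assumptions, and this presumes .next is available in the workflow (e.g. by running the build in the workflow as well, or copying it out of the built image):
# Hypothetical workflow step; `.next/static` is what Next.js serves
# under `/_next/static`.
- name: Upload static assets to the Space
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.SPACES_KEY }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.SPACES_SECRET }}
  run: |
    aws s3 sync .next/static s3://my-space/_next/static \
      --endpoint-url https://ams3.digitaloceanspaces.com \
      --acl public-read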

Rails 6 application fails due to cache directory changing ownership to root

I have a Rails 6 application running on Debian buster. In one place I am using "low-level" caching. Here is the relevant code:
# Get the value.
def self.ae_enabled?()
  Rails.cache.fetch("ae_enabled", expires_in: 1.hour)
end

# Change the value.
def self.ae_toggle()
  ac = AdminConfiguration.find_by(name: "ae-enabled")
  ac.value = ! ac.value
  ac.save()
  # Invalidate the cache.
  Rails.cache.delete("ae_enabled")
  return ac
end
This works fine ... for a while. At some point, and for reasons I cannot figure out, the cache directory tmp/cache/3F1/ where the above value is cached changes ownership from www-data:www-data (the user Apache runs under) to root:root. Once this happens, Apache can no longer read this cached value and the application throws an error.
The odd thing is that none of the other directories under tmp/cache/ have their ownership change; it is only the one associated with this low-level cache.
Why is that particular cache directory changing ownership?
Technical details: Rails version 6.0.3.3.
Apache usually has nothing to do with the Rails cache, unless you're using Passenger, in which case it may be a Passenger bug/misconfiguration; check whether user sandboxing is enabled and configured correctly.
A typical Rails deployment usually has multiple processes:
- a web server handling static files and proxying requests to Rails (usually nginx; you've mentioned Apache)
- the Rails web server (in the case of Passenger, "inside" the previous one, but in fact there's still a child process)
- some background workers or processes run from cron
File ownership confusion most probably originates from one of the above writing to disk while running under a different OS user.
Look into how your processes are started. The first suspect is a cron job that may be configured system-wide; those run as root.
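A quick way to hunt for the culprit; the commands are standard, but the app path and task in the last line are hypothetical:
# List root's personal crontab and any system-wide cron entries that
# invoke rails/rake; system-wide jobs run as root unless a user column
# says otherwise.
sudo crontab -l
grep -rn "rails\|rake" /etc/cron.d /etc/cron.daily /etc/cron.hourly
# If a job must touch the app, run it as the Apache user instead, e.g.
# in /etc/cron.d:
# */15 * * * * www-data cd /var/www/myapp && bin/rails runner 'SomeTask.run'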

Vue subdomain in development & production

I have the following (abstract) project structure:
src/brands
src/admin
src/home
Brands & admin are a pure vue project, home is a nuxt project. I'm trying to get the brands & admin project to run on their own subdomain (brands.website.com & admin.website.com, respectively), and home on the main domain. The deploy to production/staging happens via docker (with an nginx image), and I was thinking to just copy a nginx config file from my project to the docker image to point the files in the dist folder to the correct html file (not sure how yet, I need to research that first).
For development, I used vue.config.js (since I'm using v3 of the Vue CLI) and I have set up the following:
index: {
  entry: 'src/index/main.js',
  filename: 'index.html',
},
brands: {
  entry: 'src/brands/main.js',
  filename: 'brands/index.html',
},
admin: {
  entry: 'src/admin/main.js',
  filename: 'admin/index.html',
},
I can reach the brands module via localhost:8080/brands, the admin module via localhost:8080/admin, and the homepage via localhost:8080. The problem is that my index page is going to have a /brands route of its own, which would probably clash with the brands module's route (or vice versa). So my question is whether there is a better way of doing this (for example, enabling subdomains in Vue / on localhost), and if not, is copying the nginx config to my Docker image good practice or not?
Thanks in advance!
I have a similar project architecture: a single repo with multiple Vue/Nuxt projects. Each of my projects is its own npm/webpack project and is accessed by subdomain when developing locally.
Based on your example, this is how I would set up the projects.
Modify your hosts file:
127.0.0.1 website.localhost brands.website.localhost admin.website.localhost
Using localhost as the TLD was my personal decision; feel free to name the domains any way you like.
Configure the webpack dev server to serve each project at the corresponding subdomain + port (see the sketch after this list):
src/brands: https://brands.website.localhost:8080
src/admin: https://admin.website.localhost:8081
src/home: https://website.localhost:8082
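For instance, the brands project's vue.config.js might pin its host and port; a sketch assuming the hosts entries above, where https: true uses webpack-dev-server's self-signed certificate:
// vue.config.js in src/brands -- the host/port pairing per project is an assumption.
module.exports = {
  devServer: {
    host: 'brands.website.localhost',
    port: 8080,
    https: true,
  },
};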
What's nice about this configuration is that your dev URLs match your production URLs: https://brands.website.localhost:8080 -> https://brands.website.com
Each project will have complete control over its domain's subpaths and won't clobber other projects' routes, which you alluded to with the /brands route.

Permissions with sidekiq monit and capistrano

I'm having trouble with Capistrano and Sidekiq's Monit integration.
I set up a user for Capistrano and everything was going smoothly until I installed Sidekiq.
My problem is when I try to execute cap staging sidekiq:monit:config (sidekiq:monit:start has the same permission problem).
Every time I've tried, it "freezes" because it asks for the password.
Then I tried setting sidekiq_monit_use_sudo to false. OK, it doesn't use sudo, but then it doesn't have permission to copy /tmp/monit.conf into the /etc/monit/conf.d/ folder.
It's the first time I'm setting up a server and I'm kinda lost here =|
Maybe I should try to configure the Sidekiq Monit config manually?
I'm using ruby 2.5 and these gems:
capistrano 3.10
capistrano-sidekiq 1.0
rails 5.1
Also, I have the :pty config set to true, as I don't feel comfortable not using a password.
Thank you!
You have a couple of options, of which I'll describe two: the right one, and the easy/bad one.
Local User Monit
My personal usage of Monit is on a shared server on which I do not have root access. So I run Monit itself as a non-root user.
In order to do this, I compiled Monit with its prefix set to $HOME/apps, so that the config files are in $HOME/apps/etc. This avoids the sudo issue. If you have access to the package manager and installed Monit that way, you can run monit as your user with the -c parameter to define where it should look for its configuration file:
monit -c $HOME/config/monitrc
In order to get Capistrano to recognize the local monit, you will need some extra parameters in config/deploy.rb:
#set :monit_bin, '/usr/bin/monit' # Use this if you compile monit yourself.
set :sidekiq_monit_conf_dir, '/home/myuser/config/monit.d' # Feel free to customize.
set :sidekiq_monit_use_sudo, false
In the monitrc file you have defined with the -c option, you will need to make sure whatever folder you define in :sidekiq_monit_conf_dir is pulled in via includes:
include /home/myuser/config/monit.d/*.conf
Since I don't have an init system available, I have cron start Monit every 30 minutes, which is a no-op if it is already running:
# Restart monit if it dies
*/30 * * * * $HOME/apps/bin/monit > /dev/null
If you have root access, you can improve upon this by having an init script (or systemd unit file) start Monit as your local user.
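For example, a minimal systemd unit; a sketch where the user and paths are assumptions, and -I keeps Monit in the foreground so systemd can supervise it:
# /etc/systemd/system/monit.service (hypothetical paths/user)
[Unit]
Description=Monit, running as a non-root user
After=network.target

[Service]
User=myuser
ExecStart=/home/myuser/apps/bin/monit -I -c /home/myuser/config/monitrc
Restart=on-failure

[Install]
WantedBy=multi-user.target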
Bad option: give your user access to the conf dir
You can edit /etc/monit/monitrc to include your local user config directory as above. Similarly, you can allow your user to write to /etc/monit/conf.d. The major downside of these solutions is that you are now allowing your non-root user to create files which will be executed as root, opening a privilege-escalation vulnerability. If your user ever got compromised, you certainly don't want an easy way for the attacker to get to root.
I include this option mostly because it's commonly considered, and should be avoided in the vast majority of cases (such as whenever you care about security). However, this might be useful in occasional rare cases (such as when you have a short term server for internal use only behind a firewall with only trusted users, and you need to set it up in a hurry).

What is the purpose of this information in a separate .yml file

I'm pretty new to this, and I was curious why this information may have been given to me in a separate .yml file to be used in a RoR app.
I assumed that it was info to put into my bash profile, as it has corresponding environment variables in the app itself.
BASE_URL: 'http://localhost.com:5000'
development:
  MAX_THREADS: '1'
  PORT: '5000'
  WEB_CONCURRENCY: '1'
test:
I'm also curious why you would want to set your URL differently, as the information suggests.
Thanks a bunch.
I'd think changing the default port is a matter of preference, unless there's another part of the stack the development team likes to leave running at 3000 by default, for example a Node.js server or other projects.
The .yml file you've been given should be picked up when running bundle exec <command>, but not as part of your bash environment variables.
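If nothing in the Gemfile is wired up to read it automatically, a hand-rolled initializer could merge the file into ENV. This is only a sketch, and the file name config/app_env.yml is an assumption:
# config/initializers/load_env_yml.rb -- hypothetical loader. Top-level
# string values act as defaults; the section matching Rails.env overrides them.
require "yaml"

path = Rails.root.join("config", "app_env.yml")
if File.exist?(path)
  raw = YAML.load_file(path) || {}
  defaults = raw.select { |_k, v| v.is_a?(String) }
  per_env  = raw[Rails.env] || {}
  defaults.merge(per_env).each { |key, value| ENV[key] ||= value }
end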
