I have a Nuxt app deployed on Vercel at xpto.vercel.app, with specific client routes:
xpto.vercel.app/client-a
xpto.vercel.app/client-b
xpto.vercel.app/admin
I have 3 domains. I don't know if this is possible, but is there any way to point each domain to its client-specific route (with only one project on Vercel)?
www.client-a.com => xpto.vercel.app/client-a
www.client-b.com => xpto.vercel.app/client-b
www.app-admin.com => xpto.vercel.app/admin
This is my current solution, but it's far from ideal and requires FTP.
.htaccess
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.+)$ index.php [QSA,L]
</IfModule>
index.php
<?php
// Proxy every request to this client's subpath on the Vercel deployment.
$project = "client-a";
$url = "https://xpto.vercel.app/" . $project . $_SERVER['REQUEST_URI'];
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow any redirects from Vercel
$data = curl_exec($ch);
// Forward the upstream content type so CSS/JS/images are served correctly.
$contentType = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
if ($contentType) {
    header('Content-Type: ' . $contentType);
}
curl_close($ch);
echo $data;
?>
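A note on alternatives: depending on when you read this, Vercel's rewrites may support has conditions on the host header (check the current docs); if so, the following vercel.json is an untested sketch of host-based routing in a single project:
{
  "rewrites": [
    { "source": "/:path*", "has": [{ "type": "host", "value": "www.client-a.com" }], "destination": "/client-a/:path*" },
    { "source": "/:path*", "has": [{ "type": "host", "value": "www.client-b.com" }], "destination": "/client-b/:path*" },
    { "source": "/:path*", "has": [{ "type": "host", "value": "www.app-admin.com" }], "destination": "/admin/:path*" }
  ]
}
All three domains would then be attached to the same Vercel project, and each request would be rewritten to its client's subpath based on the incoming host.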
From Vercel Support:
Thank you for reaching out to Vercel Support.
Unfortunately, this is not possible in a single Vercel project.
Let us know if you have any further questions or concerns.
I have a file like this (delimited by \t):
AAED1 Previous_symbol PRXL2C
AARS Previous_symbol AARS1
ABP1 Previous_symbol AOC1
ACN9 Previous_symbol SDHAF3
ADCY3 Previous_symbol ADCY8
AK3 Previous_symbol AK4
AK8 Previous_symbol AK3
I want to delete the rows that contain AAED1 and AK3 in the first column. In reality my file has thousands of lines and I want to delete hundreds of rows. I have a file with the patterns I want to search for (this is an example):
AAED1
AK3
I tried this:
grep -wvf pattern.txt file.txt
Expected output:
AARS Previous_symbol AARS1
ABP1 Previous_symbol AOC1
ACN9 Previous_symbol SDHAF3
ADCY3 Previous_symbol ADCY8
AK8 Previous_symbol AK3
The result I obtained:
AARS Previous_symbol AARS1
ABP1 Previous_symbol AOC1
ACN9 Previous_symbol SDHAF3
ADCY3 Previous_symbol ADCY8
The last row is also deleted because it contains AK3 in the third column. Is there a way to grep only the first column?
In the current setup, the patterns are matched against any occurrence in each line, so the line:
AK8 Previous_symbol AK3
also matches AK3.
You need to add a start-of-line anchor to the patterns to ensure that they are only checked at the start of each line:
^AAED1
^AK3
If you cannot directly edit the pattern file, use the following:
grep -wvf <(sed 's/^/^/' file1) file
Here file1 is the pattern file and file is the file to search. The sed command prefixes every line of file1 with ^, and process substitution feeds the result back into grep as the patterns to check; -v inverts the match so that matching rows are deleted.
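As an alternative (a sketch, not part of the original answer), awk can compare the first field exactly against the pattern file, which avoids both the anchoring and the word-boundary issues:
awk 'NR==FNR { skip[$1]; next } !($1 in skip)' pattern.txt file.txt
The first block loads pattern.txt into an array; the second prints only those lines of file.txt whose first column is not a key in that array.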
I am using Google's workbox-cli tool in order to precache some of the files on my website. Is it possible to setup the webserver to return the following in the HTTP response header for all files by default:
cache-control: s-maxage=2592000, max-age=86400, must-revalidate, no-transform, public
But have the web browser use the following instead, only if the file is going to be precached by the service worker:
cache-control: s-maxage=2592000, max-age=0, must-revalidate, no-transform, public
So, I would like the service worker to change max-age=86400 into max-age=0 in the web server's response header before precaching the file. This makes the service worker fetch files that have changed (according to the revision in sw.js) from the web server instead of retrieving them from the local cache. Any files not managed by the service worker are cached for 86400 seconds by default.
Some background info
Currently, I am using the following bash script to set up my sw.js:
#!/bin/bash
if [ ! -d /tmp/workbox-configuration ]; then
mkdir /tmp/workbox-configuration
fi
cat <<EOF > /tmp/workbox-configuration/workbox-config.js
module.exports = {
"globDirectory": "harp_output/",
"globPatterns": [
EOF
( cd harp_output && find assets de en -type f \
    ! -name "map.js" ! -name "map.json" ! -name "markerclusterer.js" \
    ! -name "modal.js" ! -name "modal-map.html" \
    ! -name "service-worker-registration.js" ! -name "sw-registration.js" \
    ! -path "assets/fonts/*" \
    ! -path "assets/img/*-1x.*" ! -path "assets/img/*-2x.*" ! -path "assets/img/*-3x.*" \
    ! -path "assets/img/maps/*" \
    ! -path "assets/img/video/*_1x1.*" ! -path "assets/img/video/*_4x3.*" \
    ! -path "assets/js/workbox-*" ! -path "assets/videos/*" \
    ! -path "de/4*" ! -path "de/5*" ! -path "en/4*" ! -path "en/5*" \
  | sort | sed 's/^/"/' | sed 's/$/"/' | sed -e '$ ! s/$/,/' >> /tmp/workbox-configuration/workbox-config.js )
cat <<EOF >> /tmp/workbox-configuration/workbox-config.js
],
"swDest": "/tmp/workbox-configuration/sw.js"
};
EOF
workbox generateSW /tmp/workbox-configuration/workbox-config.js
sed -i 's#^importScripts(.*);$#importScripts("/assets/js/workbox-sw.js");\nworkbox.setConfig({modulePathPrefix: "/assets/js/"});#' /tmp/workbox-configuration/sw.js
sed -i 's/index.html"/"/' /tmp/workbox-configuration/sw.js
uglifyjs /tmp/workbox-configuration/sw.js -c -m -o harp_output/sw.js
On my Nginx web server, the following HTTP header is delivered by default:
more_set_headers "cache-control: s-maxage=2592000, max-age=0, must-revalidate, no-transform, public";
But if the requested resource is not handled by the service worker, the default cache-control setting is overwritten:
location ~ ^/(assets/(data/|fonts/|img/(.*-(1|2|3)x\.|maps/|video/.*_(1x1|4x3)\.)|js/(map|markerclusterer|modal|service-worker-registration|sw-registration)\.js|videos/)|(de|en)/((4|5).*|modal-map\.html)) {
more_set_headers "cache-control: s-maxage=2592000, max-age=86400, must-revalidate, no-transform, public";
}
Problems with the current approach (see background info)
I have to keep track of the files and update nginx.conf correspondingly.
max-age=0 is also used for web browsers that don't support service workers, so they request the resources from the web server on each page visit.
1st Update
My desired precaching behaviour can be illustrated with two of the workbox strategies. I want the service worker to behave as described in scenarios 1 and 2 below, even though cache-control: max-age=86400 is delivered in the HTTP header by the web server for an asset (e.g. default.js).
Scenario 1: revision in sw.js didn't change
The webpage is accessed, the sw.js file is retrieved from the web server due to max-age=0, and the web browser notices that the revision for default.js didn't change. In this case, default.js is retrieved from the precache.
Scenario 2: revision in sw.js did change
The webpage is accessed, the sw.js file is retrieved from the web server due to max-age=0, and the web browser notices that the revision of default.js changed. In this case, default.js is retrieved from the web server.
2nd Update
Basically, the desired strategy is similar to the network-first strategy, but step 2 is only taken if the revision of the file in sw.js has changed.
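For illustration, a hand-rolled sketch of that strategy (this is not workbox's actual implementation; MANIFEST and the cache name are made-up placeholders) could key each cache entry by its revision, so only entries whose revision changed are re-fetched, bypassing the HTTP cache:
const MANIFEST = [
  // hypothetical manifest entry; workbox generates these from the glob patterns
  { url: '/assets/js/default.js', revision: 'abc123' }
];

self.addEventListener('install', event => {
  event.waitUntil((async () => {
    const cache = await caches.open('precache');
    for (const { url, revision } of MANIFEST) {
      const key = `${url}?__revision=${revision}`;
      if (!(await cache.match(key))) {
        // revision changed (or first install): fetch from the network,
        // ignoring the HTTP cache and its max-age=86400
        const response = await fetch(new Request(url, { cache: 'no-cache' }));
        await cache.put(key, response);
      }
    }
  })());
});
Outdated revisions would still have to be cleaned up in an activate handler, which is one of the chores workbox normally does for you.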
3rd Update
If I am not mistaken, there is already some work on this:
self.addEventListener('install', event => {
event.waitUntil(
caches.open(`static-${version}`)
.then(cache => cache.addAll([
new Request('/styles.css', { cache: 'no-cache' }),
new Request('/script.js', { cache: 'no-cache' })
]))
);
});
I don't think you have a comprehensive enough understanding of how service workers actually work.
You define one or many caches for a service worker to use, and you specify what goes in which cache, whether to cache future requests, and so on.
The service worker then intercepts all network requests from the client and responds to them however you have programmed it to. It can return cached content if available, return cached content first while updating over the network, go network first and copy to the cache in case there is no connection, cache images but nothing else, cache only GET requests, cache only certain domains or file types, and so on.
What it caches and for how long each cache is valid is entirely up to you and not influenced by server response headers at all. If you tell your service worker to make a fetch request for a resource then it will load that resource over the network, regardless of any headers or what is already cached locally.
You have total control over the entire caching process, which is very useful but has its own set of pitfalls.
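To make that concrete, here is a minimal cache-first fetch handler (a hand-written sketch, not workbox output) showing that the decision is entirely the service worker's:
self.addEventListener('fetch', event => {
  event.respondWith(
    // serve from the cache when we have it, regardless of any cache-control
    // headers the server sent; fall back to the network otherwise
    caches.match(event.request).then(cached => cached || fetch(event.request))
  );
});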
I used s-max-age instead of s-maxage in the cache-control HTTP header, which led to some unexpected behaviour with my reverse proxy and workbox service worker. After the fix, the service worker is working as expected.
I want to make URLs case-insensitive, for which I am already using CheckSpelling On, and it works fine.
In parallel, I also want to remove the extension from URLs, for which I applied:
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}\.php -f
RewriteRule ^(.*)$ $1.php [NC,L]
It also worked.
But both don't work together.
If I keep both in .htaccess, it starts to give error 300 ("Multiple Choices").
You're making it hard on yourself. It'd be easier to do basic routing in your PHP. Just send anything to your index.php that's not a directory, and is either not a file or is a PHP file:
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f [OR]
RewriteCond %{REQUEST_FILENAME} (?>.*)(?<=\.php) [NC]
RewriteRule ^(?!index\.php$). index.php [NS,L]
At the top of index.php, do something like this:
<?php
$url = explode('?', $_SERVER['REQUEST_URI'], 2);
$url = substr($url[0], 1);
if ($url) {
$url = strtolower($url) . '.php';
if (preg_match('#^[^./][^/]*(?:/[^./][^/]*)*$#', $url) && file_exists($url)) {
# does not contain dotfiles nor `..` directory traversal, so it is a PHP file below the web root
include $url;
}
else {
# virtual URL doesn't exist:
# set the 404 response code and serve a default page
http_response_code(404);
include '404.php';
}
exit;
}
# no virtual URL, continue processing index.php
Add a rel=canonical to each page's <head> that contains the lowercase URL (with any query string re-added) so you are not penalized for duplicate content.
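For example (the domain is a placeholder), a request for /About?x=1 would emit:
<link rel="canonical" href="https://www.example.com/about?x=1">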
What's a simple way to grep just the dot files (.*) in the current directory ($HOME)? Using
grep target .*
returns a lot of nonsense:
grep: foo is a directory.
I don't want to see the nonsense.
grep has an option -s :
-s, --no-messages
Suppress error messages about nonexistent or unreadable files.
so you can just do grep -s 'target' .*
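As an aside (not part of the original answer), GNU grep can also skip directories explicitly instead of silencing every error message, which keeps warnings about genuinely unreadable files visible:
grep -d skip 'target' .*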