service worker not intercepting fetch events reliably

I have a web app implemented with Flask and nginx (in a Docker environment).
I want to add a service worker,
so I read here how to set the configuration such that the scope is the root directory ('/').
When I start the application I can see that my service worker registers, installs and activates. This happens repeatedly, as expected.
But I have a problem intercepting the fetch events reliably.
Using Chrome DevTools, if I set a breakpoint in the install handler, wait and continue,
then sometimes the GET operations are routed to the service worker (I can see the console printout from the fetch event listener in the service worker).
Once I get to this state, all fetch events are intercepted, as expected.
But if I remove the breakpoints and run the program normally, the service worker doesn't intercept the fetch events.
I read here that the scope of the service worker can cause requests to be missed. But in that case the miss would be systematic, i.e. a path outside the scope would never be intercepted.
This is not my case, because under certain conditions my service worker does intercept the fetch calls.
My settings are below.
Thanks,
Avner
# the file structure
/usr/src/app/web/
├── V1
│   ├── js
│   │   ├── mlj
│   │   ├── ...
│   │   └── main.js
│   └── ...
├── project
│   └── sites
│       ├── ...
│       └── views.py
└── sw1.js
------------------------------------------------------------
# the file that registers the service worker
cat main.js
...
navigator.serviceWorker.register("../../../sw1.js", {scope: '/'})
  .then(registration => console.log('SW Registered1'))
  .catch(console.error);
------------------------------------------------------------
# the service worker
cat sw1.js
const version = 1;
self.addEventListener('install', function(event) {
  console.log('SW v%s installed at', version, new Date().toLocaleTimeString());
});
self.addEventListener('activate', function(event) {
  console.log('SW v%s activated at', version, new Date().toLocaleTimeString());
});
self.addEventListener('fetch', function(event) {
  console.log('SW v%s fetched at', version, new Date().toLocaleTimeString());
  if (!navigator.onLine) {
    event.respondWith(new Response('<h1> Offline :( </h1>', {headers: {'Content-Type': 'text/html'}}));
  } else {
    console.log(event.request.url);
    event.respondWith(fetch(event.request));
  }
});
------------------------------------------------------------
# the route to the service worker in the flask python file
cat web/project/sites/views.py
...
import os
from flask import current_app, send_from_directory
...
@sites_blueprint.route('/sw1.js', methods=['GET'])
def sw():
    # /usr/src/app
    root_dir = os.path.dirname(os.getcwd())
    filename = 'sw1.js'
    # /usr/src/app/web
    dir1 = os.path.join(root_dir, 'web')
    return send_from_directory(dir1, filename)

I found out here that on the Chrome Developer Tools Network tab, if Disable cache is checked, requests go to the network instead of the Service Worker, i.e. the Service Worker does not get a fetch event.
After re-enabling the cache by unchecking the Disable cache checkbox (Chrome DevTools -> Network -> Disable cache), fetch events are now intercepted by the service worker.
P.S. Note that using a shortcut to bypass the cache (e.g. Ctrl-F5 or Shift-F5 in Chrome, Ctrl-F5 or Ctrl-Shift-R in Firefox) has the same effect as checking the Disable cache checkbox: the service worker is bypassed for that load.
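As an additional debugging aid (my own sketch, not from the original post), you can log from the page whether it is currently controlled by a service worker, which makes it easy to spot when requests are bypassing it:
// Quick check, e.g. in main.js: is this page controlled by a service worker?
if ('serviceWorker' in navigator) {
  // controller is null when the page was loaded outside the service worker's
  // control (for example after a hard reload such as Shift-F5).
  console.log('Controlled by SW:', navigator.serviceWorker.controller !== null);
  navigator.serviceWorker.ready.then(registration => {
    console.log('Active SW scope:', registration.scope);
  });
}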

Related

terraform validate error: The argument "region" is required, but was not set

I wrote a custom RDS module for my development team to consume for deploying RDS instances. I am using Bitbucket for source control and I am trying to integrate a Bitbucket pipeline to run terraform validate against my .tf files to check the syntax before making the module consumable to the devs. terraform init runs fine, but when I run terraform validate I get the following error: Error: Missing required argument. The argument "region" is required, but was not set. After looking at the documentation, I am confused why this command would check for a declared provider if it is not actually deploying anything. I am admittedly new to writing modules. Perhaps this isn't the right command for what I want to accomplish?
Terraform version: v0.12.7
AWS Provider version: 2.24
bitbucket-pipelines.yml:
image: hashicorp/terraform:full
pipelines:
  branches:
    master:
      - step:
          script:
            - terraform version
            - terraform init
            - terraform validate
Module tree:
├── CHANGELOG.md
├── README.md
├── bitbucket-pipelines.yml
├── main.tf
├── modules
│   ├── db_instance
│   │   ├── README.md
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── db_option_group
│   │   ├── README.md
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── db_parameter_group
│   │   ├── README.md
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── db_subnet_group
│       ├── README.md
│       ├── main.tf
│       ├── outputs.tf
│       └── variables.tf
├── outputs.tf
└── variables.tf
The situation you've hit here is the bug described in Terraform issue #21408, where validation is checking that the provider configuration is complete even though you're intending to write a module that will inherit a provider.
There are two main workarounds for this at the time of writing. The easiest one-shot workaround is to set the environment variable AWS_DEFAULT_REGION to any valid AWS region; it will then be used as the value for region and allow validation to pass.
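For example (my illustration, not part of the original answer), the variable could be set inline in the Bitbucket pipeline step; us-east-1 is just a placeholder region:
script:
  - terraform version
  - terraform init
  - AWS_DEFAULT_REGION=us-east-1 terraform validate   # any valid region works for validation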
To make that reproducible, you can use a test configuration which can serve as a test bed for the module when you are developing it alone, outside the context of a particular caller. To do this, make a directory tests/simple (or really anything you like; the name doesn't matter) and put in it a test.tf file containing something like this:
provider "aws" {
region = "us-east-1"
}
module "under_test" {
source = "../.."
# Any arguments the module requires
}
You can then switch into that test directory and use the normal Terraform workflow to validate the whole configuration together:
cd tests/simple
terraform init
terraform validate
A nice benefit of this general idea of test configurations is that you can potentially also use them for end-to-end testing, by running terraform plan or terraform apply with a suitable set of environment variables set. You can also have multiple test configurations to exercise different permutations of options and make sure they all pass validation and, if you do end-to-end testing, that they all work.
Even once that Terraform issue is fixed, test configurations will remain a nice technique for ensuring that your module works as a child module, separately from whether it's valid in isolation.
I ran into the same problem even though I had provided a region in my provider configuration.
After some digging I came across this thread from Terraform's discussion board. The problem, it seems, is that for some undocumented reason Terraform expects the AWS_DEFAULT_REGION environment variable to be set to a region value (e.g. "us-west-1"). Setting it to a valid region solved this problem for me.
You can find more information about setting environment variables for Terraform here.
Hope it helps.
One or more of your TF resources has no region configured. To handle this without the AWS_DEFAULT_REGION env variable or if you have multiple regions, you can use provider aliases in your resources to specify your region. For example:
provider "aws" {
region = "us-east-1"
alias = "us"
}
...
resource "aws_cloudwatch_log_metric_filter" "hk_DBrecoverymode-UAT" {
provider = aws.us
...
}

Is there a way to call a private/protected twilio function?

This is my first time using Twilio. I started with the new twilio-cli and created a new project to build and deploy a backend on Twilio Functions, but I need some of the functions to stay private, and I want to call those functions through their specific API endpoints. However, I always receive the message "Unauthorized - you are not authenticated to perform this request".
This is the plugin that I am using with twilio-cli to start the basic project to deploy to Twilio: https://github.com/twilio-labs/plugin-serverless
I already tried the curl documentation that I found here: https://www.twilio.com/docs/studio/rest-api/execution but none of the examples execute the function.
curl -X POST 'https://serverless.twilio.com/v1/Services/ZSXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/Functions/ZHXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' \
-u ACXXXXXXXXXXXX:your_auth_token
I just need to receive a hello world message, this is the code of the function:
exports.handler = function(context, event, callback) {
  const twiml = new Twilio.twiml.MessagingResponse();
  twiml.message("Hello World!");
  console.log("Track this");
  callback(null, twiml);
};
The accepted answer doesn't actually answer the question.
To call a protected function, you must provide a signature in a X-Twilio-Signature header. This is how to create such a signature (according to the official docs):
Take the full URL of the request URL you specify for your phone number or app, from the protocol (https...) through the end of the query string (everything after the ?).
If the request is a POST, sort all of the POST parameters alphabetically (using Unix-style case-sensitive sorting order).
Iterate through the sorted list of POST parameters, and append the variable name and value (with no delimiters) to the end of the URL string.
Sign the resulting string with HMAC-SHA1 using your AuthToken as the key (remember, your AuthToken's case matters!).
Base64 encode the resulting hash value.
Official docs: https://www.twilio.com/docs/usage/security#validating-requests
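As an illustration of those steps (my own sketch, not from the docs; the URL and parameters below are placeholders), the signature could be computed in Node.js like this:
const crypto = require('crypto');

// Compute the value for the X-Twilio-Signature header.
function computeTwilioSignature(authToken, url, params = {}) {
  // Sort the POST parameter names and append each name + value to the URL, no delimiters.
  const data = Object.keys(params)
    .sort()
    .reduce((acc, key) => acc + key + params[key], url);
  // HMAC-SHA1 with the auth token as the key, then Base64 encode the hash.
  return crypto.createHmac('sha1', authToken).update(data).digest('base64');
}

// Example (hypothetical URL and body):
// const sig = computeTwilioSignature(process.env.TWILIO_AUTH_TOKEN,
//   'https://example-1234-dev.twil.io/sms/reply', { Body: 'hello' });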
Heyooo. 👋 Twilio developer evangelist here.
If you followed the serverless plugin init process by running twilio serverless:init you should have the following project structure.
.
├── assets
│   ├── index.html
│   ├── message.private.js
│   └── style.css
├── functions
│   ├── hello-world.js
│   ├── private-message.js
│   └── sms
│       └── reply.protected.js
├── node_modules
├── package-lock.json
└── package.json
These files result in the following HTTP endpoints after you run twilio serverless:deploy. (you will have a different domain).
Deploying functions & assets to the Twilio Runtime
Account SK6a...
Token kegH****************************
Service Name foo-2
Environment dev
Root Directory /private/tmp/foo
Dependencies
Env Variables
✔ Serverless project successfully deployed
Deployment Details
Domain: foo-3513-dev.twil.io
Service:
foo (ZS8...)
Environment:
dev (ZE0...)
Build SID:
ZB9...
Functions:
[protected] https://foo-3513-dev.twil.io/sms/reply
https://foo-3513-dev.twil.io/hello-world
https://foo-3513-dev.twil.io/private-message
Assets:
[private] Runtime.getAssets()['/message.js']
https://foo-3513-dev.twil.io/index.html
https://foo-3513-dev.twil.io/style.css
Have a close look at the runtime URLs in the Functions block. These are the endpoints that will be available. As you see, the bootstrap project includes two public functions (/hello-world and /private-message). You can call these with curl or your browser.
Additionally, there is one protected function (/sms/reply). This function is available for calls from within Twilio.
This means that protected functions expect a valid Twilio signature. You can read about that here. If you connect e.g. Studio to call the function, it will work because the webhook includes a Twilio signature. If you want to curl it, you have to provide an X-Twilio-Signature header.
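For illustration (not from the original answer), assuming you have computed the signature over this exact URL and request body and stored it in $SIGNATURE, the curl call could look like:
# $SIGNATURE must cover the full URL plus the sorted POST params (here just Body=hello)
curl -X POST 'https://foo-3513-dev.twil.io/sms/reply' \
  -d 'Body=hello' \
  -H "X-Twilio-Signature: $SIGNATURE"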
Hope this helps. :)

Setting up extra context with kitchen-terraform

I'm trying to use kitchen-terraform to verify a terraform module I'm building. This particular module is a small piece in a larger infrastructure. It depends on some pieces of the network being available and will then be used later to spin up additional servers and whatnot.
I'm curious if there's a way with kitchen-terraform to create some pieces of infrastructure before the module under test runs and to also add in some extra pieces that aren't part of the module proper.
In this particular case, the module is creating a new VPC with some peering connections with an existing VPC, security groups, and subnets. I want to verify that the peering connections were established correctly as well as spin up some ec2 instances to verify the status of the network.
Does anyone have examples of doing something like this?
I'm curious if there's a way with kitchen-terraform to create some
pieces of infrastructure before the module under test runs and to also
add in some extra pieces that aren't part of the module proper.
You can do all of this. Your .kitchen.yml will specify where the terraform code exists to execute here:
provisioner:
  name: terraform
  directory: path/to/terraform/code
  variable_files:
    - path/to/terraform/variables.tfvars
More to the point, create a main.tf in a test location that builds all the infrastructure you want, including the modules. The order of execution will be controlled by the dependencies of the resources themselves.
Assuming you are testing in the same repo as your module, maybe arrange something like this:
├── .kitchen.yml
├── Gemfile
├── Gemfile.lock
├── README.md
├── terraform
│   └── my_module
│       ├── main.tf
│       └── variables.tf
└── test
    ├── main.tf
    └── terraform.tfvars
The actual .kitchen.yml will include this:
provisioner:
  name: terraform
  directory: test
  variable_files:
    - test/variables.tfvars
  variables:
    access_key: <%= ENV['AWS_ACCESS_KEY_ID'] %>
    secret_key: <%= ENV['AWS_SECRET_ACCESS_KEY'] %>
And your test/main.tf will instantiate the module along with any other code under test.
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
region = "${var.region}"
}
...
module "my_module" {
name = "foo"
source = "../terraform/my_module"
...
}
resource "aws_instance" "test_instance_1" {
...
}
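To cover the original question's goal of checking the peering connection and spinning up instances for network checks, a hypothetical extension of that test configuration might look like the following; the variable, output and module attribute names are placeholders and assume the module exposes matching outputs:
# Hypothetical test resources; names and module outputs are illustrative only.
resource "aws_instance" "peering_check" {
  ami           = "${var.test_ami_id}"                  # placeholder variable
  instance_type = "t2.micro"
  subnet_id     = "${module.my_module.test_subnet_id}"  # assumes the module outputs a subnet id
}

output "peering_connection_id" {
  # assumes the module outputs the peering connection it creates,
  # so the test harness can assert on it
  value = "${module.my_module.peering_connection_id}"
}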

Keep uploads directory under public while deploying new revisions using capistrano

I have a Rails app and using Apache2 + Passenger + Capistrano on production server:
.
├── current -> releases/20150527234152
│   ├── app
│   ├── db
│   ├── lib
│   ├── ...
│   └── public
│       ├── assets
│       └── uploads
│           ├── 01.jpg
│           ├── 02.jpg
│           ├── 03.jpg
│           └── ...
├── releases
│   ├── 20150527212555
│   ├── 20150527230415
│   └── 20150527234152
├── repo
└── shared
I am not tracking the public/uploads directory (where images are uploaded by users). So whenever I do cap production deploy, current points to the new release, which won't have the uploads directory anymore. I am using the carrierwave gem for image upload.
The only solution I can think of is to have Capistrano run a script after deploying that moves the directory from the older to the latest revision.
Or
Have the uploads directory outside of the app. (If so, what's the best/safest location for it?)
I want to know which solution is better, or if there is a better option.
Cheers
The method you are looking for is called linked_dirs.
It accepts an Array of directories and creates symlinks to the specified directories across each successive deployment. This works well for directories that should persist even when other code is updated, as is the case with your uploads.
When you deploy, Capistrano first runs deploy:check:linked_dirs to confirm that the path exists and/or create it. Then it runs deploy:symlink:linked_dirs, which creates a symlink to this directory.
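A minimal sketch of that setting in config/deploy.rb, assuming your uploads live in public/uploads as in the tree above:
# config/deploy.rb
# Persist public/uploads by symlinking it to shared/public/uploads on each deploy.
set :linked_dirs, fetch(:linked_dirs, []).push('public/uploads')
Any existing uploads would need to be moved into shared/public/uploads once so the symlinked directory picks them up.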
You can find it in the Official Documentation. The Rake Tasks can be Found Here

Rebar eunit skips all app tests if root app is not included

My problem is that I can't run eunit tests for a single app or module without including the root app. My directory layout looks a bit like this:
├── apps
│   ├── app1
│   └── app2
├── deps
│   ├── amqp_client
│   ├── meck
│   ├── rabbit_common
│   └── ranch
├── rebar.config
├── rel
└── src
   ├── rootapp.app.src
   ├── rootapp.erl
   ├── rootapp.erl
   └── rootapp.erl
Now, what I can do is:
$ rebar eunit skip_deps=true
which runs the tests for all apps. Also, I can do:
$ cd apps/app1/
$ rebar eunit skip_deps=true
which runs the tests for app1 (I have a rebar.config in apps/app1 as well).
However, if I try
$ rebar eunit skip_deps=true apps=app1
it does... nothing. No output. Trying verbose mode gives me:
$ rebar -vv eunit skip_deps=true apps=app1
DEBUG: Consult config file "/Users/myuser/Development/erlang/rootapp/rebar.config"
DEBUG: Rebar location: "/usr/local/bin/rebar"
DEBUG: Consult config file "/Users/myuser/Development/erlang/erlactive/src/rootapp.app.src"
DEBUG: Skipping app: rootapp
When I include the root app, it works:
$ rebar eunit skip_deps=true apps=rootapp,app1
Despite the fact that I actually want to test app1, not rootapp, this is really inconvenient, since the SublimeErl plugin for Sublime Text 2 will always set apps to the app that contains the module under test. So the tests will always fail because no tests will run at all.
Long story short: Is there something I can configure in any of the rebar.config files to make it possible to run the tests for one app in /apps without including the root app?
Personally I prefer to put the main app into its own OTP compliant folder in apps. Just create a new app rootapp in apps and include it in your rebar.config:
{sub_dirs, ["apps/app1",
            "apps/app2",
            "apps/rootapp"]}.
You might also have to include the apps directory into your lib path:
{lib_dirs, ["apps"]}.
You might want to have a look at Fred Hebert's blog post “As bad as anything else”.
With this set up you should be able to run:
rebar skip_deps=true eunit
which will run all eunit tests of the apps in apps.
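With rootapp moved under apps like the other applications, selecting a single app should then presumably also work (my assumption, not stated in the original answer):
rebar eunit skip_deps=true apps=app1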
