ServiceWorker fails on hard reload (Ctrl-Shift-R) in Chrome

Why does service worker fail after a hard reload (Ctrl-Shift-R)?
---- case 1 CHROME --- success
Uninstall service worker
Load page
Page installs service worker
Worker.postMessage() succeeds
Reload page
Worker.postMessage() succeeds
---- case 2 CHROME --- failure
Uninstall service worker
Load page
Page installs service worker
Worker.postMessage() succeeds
HARD RELOAD (Ctrl-Shift-R) page (service worker still running according to chrome://serviceworker-internals/)
Worker.postMessage() fails -- 'error sending TypeError: Cannot read property 'postMessage' of null'

When you shift-reload, the reloaded page will not be controlled by a service worker. This is part of the service worker specification.
This only applies to that next page load. Future page loads (that don't involve a shift-reload) will continue to be controlled by a service worker, assuming the page is within the scope of one.
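Concretely, that is where the 'Cannot read property 'postMessage' of null' comes from: on a shift-reloaded page navigator.serviceWorker.controller is null, so calling postMessage on it throws. A minimal page-side sketch (the helper name is illustrative, not the asker's code) that tolerates the uncontrolled load:

// Page-side sketch: guard against the uncontrolled page you get after a shift-reload.
async function sendToServiceWorker(message: unknown): Promise<void> {
  if (!('serviceWorker' in navigator)) return;

  // After Ctrl-Shift-R the page loads uncontrolled, so `controller` is null
  // even though the worker still shows up in chrome://serviceworker-internals/.
  const controller = navigator.serviceWorker.controller;
  if (controller) {
    controller.postMessage(message);
    return;
  }

  // Fall back to the registration's active worker; `ready` resolves once a
  // worker is active for this scope, even if it does not control this page.
  const registration = await navigator.serviceWorker.ready;
  registration.active?.postMessage(message);
}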

During development you can, for example, use Chrome's DevTools and enable 'Update on reload' so that the service worker is updated on each reload.
For further reference you can read Google's Web Fundamentals on this topic.
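Beyond the DevTools checkbox, a common development-time pattern (shown here only as a sketch in TypeScript with the "webworker" lib; it is not implied that your worker already does this) is to activate new worker versions immediately and claim open pages from within the worker:

// sw.js, development-time sketch (assumed file name).
declare const self: ServiceWorkerGlobalScope;

self.addEventListener('install', (event) => {
  // Activate the new worker version immediately instead of waiting for old tabs to close.
  event.waitUntil(self.skipWaiting());
});

self.addEventListener('activate', (event) => {
  // Take control of already-open, in-scope pages.
  event.waitUntil(self.clients.claim());
});

Note that even clients.claim() cannot take over a page loaded via shift-reload; that single load stays uncontrolled, as described above.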

Related

How to debug/log errors on production service worker installation

We have been using a service worker on our mobile web app for some time now.
We use Sentry as our event logging tool.
We are getting a lot of errors of the type:
Cannot update a null/nonexistent service worker registration
Error: AbortError: Failed to update a ServiceWorker for scope ('https://www.some.production.domain/') with script ('https://www.some.production.domain/sw.js'): Timed out while trying to start the Service Worker.
And so:
Is there a standard way to know why, and whether we should be worried about these kinds of errors?
Or even to get more details to figure out why they apparently happen at random?
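There is no single standard, but one common approach (a sketch, assuming the browser Sentry SDK and that your worker lives at /sw.js as in the error above; the helper name is made up) is to wrap registration and update in a catch and attach some context before forwarding the error to Sentry:

import * as Sentry from '@sentry/browser';

// Sketch: report registration/update failures with a bit of extra context,
// so errors like the AbortError above arrive in Sentry with more to go on.
async function registerServiceWorker(): Promise<void> {
  if (!('serviceWorker' in navigator)) return;
  try {
    const registration = await navigator.serviceWorker.register('/sw.js');
    await registration.update();
  } catch (err) {
    Sentry.withScope((scope) => {
      // Network state and controller status can hint at why a registration
      // timed out (e.g. flaky connections on mobile devices).
      scope.setExtra('online', navigator.onLine);
      scope.setExtra('controlled', navigator.serviceWorker.controller !== null);
      Sentry.captureException(err);
    });
  }
}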

Spring Boot Admin - Running in Docker Swarm weirdly

I am running multiple Spring-Boot servers all connected to a Spring Boot Admin instance. Everything is running in the same Docker Swarm.
Spring Boot Admin keeps reporting on these "fake" instances that pop up and die. They are up for 1 second and then become unresponsive. When I clear them, they come back. The details for that instance show this error:
Fetching live health status failed. This is the last known information.
Request failed with status code 502
This is the same for all my APIs. This is causing us to get an inaccurate health reading of our services. How can I get Admin to stop reporting on these non-existent containers?
I've looked in all my nodes and can't find any containers (running or stopped) that match the unresponsive containers that Admin is reporting.

Request to external service times out on Heroku web process but works in console process

I have a Rails 4 application running on Heroku. For one type of request I make an HTTP call to an external service and then return the response to the client.
As I can see from the logs, the request to the external service is taking too long, resulting in Heroku's H12 error, where a 503 is returned after 30 seconds. The HTTP request that I am making to the external service eventually comes back with a Net::ReadTimeout after some more time (60 seconds).
However, if I run heroku run console and make the same HTTP call (through the same Ruby code), it works just fine. The request completes in about a second or two at most.
I am unable to understand why this request times out when run from the web process while it works seamlessly in heroku run console.
I am running Puma as my web server. I followed the guidelines given here: https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server
I also tried the basic WEBrick server to see if that helps, but to no avail.
Has anyone faced this issue? Any hints on how to debug this?
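Not a full answer, but on the timing relationship described above: the router gives up after 30 seconds (H12) while the client read timeout only fires at 60, so the web request is already dead by the time the call returns. A common mitigation, shown purely as a sketch (in TypeScript here; the same idea applies with Net::HTTP's open_timeout/read_timeout in Ruby), is to put an explicit deadline on the outbound call that is well below the router's 30-second window:

// Sketch: fail the outbound call fast instead of letting the dyno hit H12.
// URL and deadline are placeholders.
async function fetchWithDeadline(url: string, ms = 10_000): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    return await fetch(url, { signal: controller.signal });
  } finally {
    clearTimeout(timer);
  }
}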

Performance monitoring of production site using Shell script and Selenium Web Drivers

Let me briefly explain what I am trying to do here. I need to periodically check the response time of my site by logging into the system and noting the time it takes to load the welcome page.
I am doing this using Selenium WebDriver and Java. I currently measure the response time using org.apache.commons.lang3.time.StopWatch, which starts when the user hits the login button and stops when the welcome page has rendered completely. I check whether this response time is above a threshold level and send a mail to the admin alerting them in case of a slow system response.
Currently, I have created an executable jar file which opens the web browser using Selenium WebDriver and checks the response time. I have also created a job in Jenkins, using DOS commands, which runs periodically on a cron schedule. I am doing this on my Windows 7 PC, and I have Jenkins installed on my localhost. The scheduled job builds on Jenkins periodically, but I can't see any activity such as the web browser opening and the further tasks explained above. It runs perfectly when I use the Windows scheduler to execute the batch file. The ultimate goal is to run the Selenium WebDriver tests on a Linux system via Jenkins, with the Jenkins server installed on a Linux machine.
Any help will be great! Also let me know if anybody wants to see the code.
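The original setup is Java with Apache Commons StopWatch; purely to illustrate the measurement loop described above (not the asker's code), here is a sketch using the Node selenium-webdriver bindings, with the URL, locators and threshold all placeholders:

import { Builder, By, until } from 'selenium-webdriver';

// Sketch of the timing check: start the clock at the login click, stop when
// the welcome page's landmark element appears, and alert above a threshold.
async function measureLoginTime(): Promise<number> {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com/login');          // placeholder URL
    await driver.findElement(By.id('username')).sendKeys('monitor-user');
    await driver.findElement(By.id('password')).sendKeys('secret');

    const start = Date.now();                                // "StopWatch" start: login click
    await driver.findElement(By.id('login')).click();
    await driver.wait(until.elementLocated(By.id('welcome-banner')), 60_000);
    return Date.now() - start;                               // stop: welcome element located
  } finally {
    await driver.quit();
  }
}

measureLoginTime()
  .then((ms) => {
    const thresholdMs = 5_000;                               // assumed alerting threshold
    if (ms > thresholdMs) {
      console.error(`Slow response: ${ms} ms - this is where the alert mail would go out`);
    }
  })
  .catch((err) => console.error('Check failed:', err));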

403 errors from load balancer while new instances are booting

I have a RoR app running on Elastic Beanstalk. I have occasionally seen 403 errors from Passenger for a while. Most of the time 1 server is running, but this is increased to 3 or 4 instances during busy periods of the day.
Session stickiness is not turned on.
I have noticed that when a new server is started the ELB is sending requests to it before bundle install has finished.
If I ssh to the newly started server I can see in /var/app/current/ that the app has not yet been installed and if I run top it looks like bundler is running and compiling things with cc1, etc.
/var/app/support/log/passenger.log shows that requests to valid URLs within my Rails app are being received and responded to with 404. Hardly surprising, because the app isn't there yet.
After 5-10 minutes all of the compiling is complete, the app files appear in /var/app/current, and all is well.
This doesn't seem quite right to me. How do I set up the ELB / my Rails app so that the ELB can tell when the app is ready to receive requests?
I found the answer to this. There was no application health check URL set. In that case the ELB just pings the instance to see if it's healthy, i.e. it checks that the instance has booted rather than whether Rails is up and running. Setting the health check URL to '/login/' fixed it for me, because this gives a 404 until Rails is running and a 200 afterwards.
Elastic Beanstalk demands 2 correct responses before it deems an instance healthy, and it checks the instance every 5 minutes. This means an instance can take a while to start serving requests, i.e. boot time + waiting for the next poll from the ELB + 5 minutes before it sees any real traffic.
