What is the purpose of a service worker?

From what I'm seeing, you can even cache a web page. According to this documentation, https://www.mnot.net/cache_docs/#BROWSER, a representation can be cached in the browser cache. I see that a service worker does the exact same thing (HTML, CSS and JavaScript files). What is the purpose of a service worker?

A service worker is not just a script to cache a web page or its scripts; it is a script that handles background operations separately from the web page. Its functionality includes, but is not limited to: caching resources, background sync, and push notifications. Their main advantage is that they can be woken up even when the browser is closed and the website isn't open.
It would be helpful to check here for the basics.

A service worker is a specific type of JavaScript script that runs in the background of the user's browser. It acts like a proxy server sitting between your app, the browser and the network. Among other things, service workers allow apps to continue functioning offline in case the user loses their internet connection.
Improving website performance:
A service worker helps the website load offline. Unlike the AppCache API, it gives you much more control over the browser cache. Returning visitors can fetch the website instantly from the browser cache and get a smoother experience, so they'll keep coming back.
How Do Service Workers Work?
Service worker scripts are independent and have their own life cycle:
The script is first registered and then installed by the browser. After that, it starts caching static assets right away. If any errors occur, installation fails and is attempted again later.
After a successful installation, the service worker activates and gains control over all pages under its scope. An active service worker alternates between two states: either it handles fetch and message events, or it is terminated to save memory.
Service Worker Caching:
Service worker caching is much more manageable. The worker can check whether the network is available and serve resources from the network or from its cache accordingly. This means the site can keep working when there is no network at all, or under poor connectivity: low bars or false cellular connectivity.
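To make this concrete, here is a minimal sketch of the register/install/fetch flow described above (the path /sw.js, the cache name static-v1, and the asset list are placeholder assumptions, not part of any specific site):

    // In the page: register the worker (the path is a placeholder).
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/sw.js');
    }

    // In sw.js: pre-cache static assets during install...
    self.addEventListener('install', (event) => {
      event.waitUntil(
        caches.open('static-v1').then((cache) =>
          cache.addAll(['/', '/styles.css', '/app.js']) // placeholder asset list
        )
      );
    });

    // ...then serve cache-first, falling back to the network.
    self.addEventListener('fetch', (event) => {
      event.respondWith(
        caches.match(event.request).then(
          (cached) => cached || fetch(event.request)
        )
      );
    });

Returning visitors are then served straight from the cache, which is what makes the instant repeat loads mentioned above possible.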
Supported Browsers
Browser options are growing. Service workers are supported by Chrome, Firefox and Opera. Microsoft Edge is now showing public support. Even Safari has dropped hints of future development.
Restrictions and Security:
• A service worker can't access the window object, which reduces the risk of it being used to spread malicious scripts.
• It requires HTTPS. This ensures a level of security appropriate to the things a service worker is designed to do.
• It is asynchronous only. Synchronous APIs such as XHR and localStorage are not available inside a service worker (see the sketch below).
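As an illustration of the asynchronous-only restriction, here is a minimal, hypothetical sketch of a worker persisting data handed over by a page; since localStorage is unavailable, it uses the promise-based Cache API instead (the cache name and URL key are assumptions):

    // Inside the worker: everything is promise-based; there is no localStorage.
    self.addEventListener('message', (event) => {
      event.waitUntil(
        caches.open('settings').then((cache) =>
          // Store the page's data as a synthetic cached response.
          cache.put('/settings', new Response(JSON.stringify(event.data)))
        )
      );
    });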
Conclusions:
Service workers transform your website into an instant-loading powerhouse. Apart from improving web performance, they can be used to implement push notifications to keep users engaged with mobile apps. Future applications for service workers might include things like geofencing and periodic sync.
Visit my video blog for more details: https://www.youtube.com/watch?v=nKHDUUvrkeg

Related

Notify backend that container has been deployed

I want to create containers that start small web services. Developers on our team should then upload small images that contain different services. A main backend system then uses these services.
My problem is: when a developer uploads a new service, how does the backend service know there is a new service it can use? Previously, when service X was requested and there was no service for that functionality, it returned just a simple message. Once a service that does X has been uploaded, the main backend should use that service. But how does it know the service is there and should be used?
You can add some notification mechanism to your small web services. But what do you do when a service goes down unexpectedly, or the network drops for a short time? You need to add logic to the clients for refreshing the connection.
Docker recommends this approach:
The problem of waiting for a database (for example) to be ready is really just a subset of a much larger problem of distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures.
To handle this, your application should attempt to re-establish a connection to the database after a failure. If the application retries the connection, it should eventually be able to connect to the database.
The best solution is to perform this check in your application code, both at startup and whenever a connection is lost for any reason.
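A minimal sketch of that retry advice in Node.js; connectToDatabase is a hypothetical stand-in for whatever connect call your driver provides:

    // Hypothetical helper: retry an async connect function until it succeeds.
    async function connectWithRetry(connectToDatabase, retries = 10, delayMs = 2000) {
      for (let attempt = 1; attempt <= retries; attempt++) {
        try {
          return await connectToDatabase();
        } catch (err) {
          console.warn(`Attempt ${attempt} failed: ${err.message}`);
          await new Promise((resolve) => setTimeout(resolve, delayMs));
        }
      }
      throw new Error(`Could not connect after ${retries} attempts`);
    }

The same loop belongs anywhere a connection can be lost at runtime, not just at startup.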

Comparison between service worker and AppCache

What are the core differences between Service Worker and AppCache? What are the pros and cons of each, and when should one be preferred over the other?
The primary difference is that AppCache is a high-level, declarative API, with which you specify the set of resources you'd like the browser to cache; whereas Service Worker is a low-level, imperative, event-driven API with which you write a script that can intercept fetch events and cache their responses along with doing other things (like displaying push notifications).
The pros and cons are largely a function of API design: theoretically, AppCache is easier to use, while having more limited use cases; whereas Service Worker is harder to use, but is more flexible.
Nevertheless, AppCache is considered hard to use in practice due to poor design (see Application Cache Is A Douchebag for a list of design issues). And it has been deprecated, so it is being removed from browsers (per Using the application cache).
Thus the only reason to prefer AppCache is to offline an app on browsers that don't yet support Service Worker, as Kenneth Ormandy recommends in Don’t Wait for ServiceWorker: Adding Offline Support with One-Line.
Compare Can I use Service Workers? to Can I use Offline web applications? to see the differences in browser support. But note that browsers that support Service Worker, like Chrome and Firefox, are removing support for AppCache, so you'll need to implement both to offline your app across all browsers that support either standard.
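A sketch of supporting both standards at once: the AppCache manifest is declared in the markup for older browsers (e.g. <html manifest="offline.appcache">, a placeholder name), while the page registers a service worker wherever the API exists:

    // Browsers without the Service Worker API simply skip this branch and
    // rely on the declarative AppCache manifest instead.
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/sw.js'); // placeholder script path
    }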
In addition to what Myk Melez said, one of the main benefits of Service Workers over Application Cache is that Application Cache only works when the user is fully disconnected from the network, so you cannot manage situations of:
1- "slow network" - your connection signal is strong, but some external entities (server, routes, etc.) are delaying the transmission to your specific application.
2- "lie-fi" - your phone shows it is connected to Wi-Fi or a cell network with a low signal, so it seems to be connected when it actually is not.
A Service Worker acts like middleware, giving you control over the requests the browser is making; you can intercept a request and respond however you want, whether you are connected or not. So you can implement the "offline first" principle.
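As a sketch of that control, a fetch handler can race the network against a timeout and fall back to the cache, which covers both plain offline and "lie-fi" (the 3-second cutoff is an arbitrary assumption):

    self.addEventListener('fetch', (event) => {
      const fromNetwork = fetch(event.request);
      const timeout = new Promise((_, reject) =>
        setTimeout(reject, 3000, new Error('network too slow'))
      );
      event.respondWith(
        Promise.race([fromNetwork, timeout])
          // On failure or a 3-second stall, fall back to a cached copy.
          .catch(() => caches.match(event.request))
          // If nothing was cached, let the original request settle as-is.
          .then((response) => response || fromNetwork)
      );
    });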

MassTransit in ASP.NET MVC site?

I'd like to decouple a number of business objects that my website uses to support user actions.
My website is a SaaS/B2B site, and I do not anticipate a need for "mega scale". My primary issue is a need to decouple business objects from each other, and to perform occasional longer-running operations asynchronously, outside of the threads that handle user traffic.
Having said that, I really do not want to have a separate set of servers that process my messages, and would prefer the web servers to just host MassTransit (or other bus software) internally, in memory. Assured message delivery is (at this point) also not yet my most important concern. I plan to "outsource" a number of supporting business actions to the bus so that they do not pollute my main business services/objects.
Is this possible? Do I need the loopback transport for now, or do I need full RabbitMQ? Will RabbitMQ require me to install yet another set of servers to host it?
TIA
Loopback is just for testing. Installing RabbitMQ is the right path. You don't NEED separate servers for it, but I would suggest them: if you offload work to a bus, you don't really want that work contending for resources with the website. That said, you can run RabbitMQ locally without any issue. If message volume is low, so is RabbitMQ's resource usage. When you reach higher volumes, I/O can become a problem with RabbitMQ (or any MQ).

Load Balancing in ASP.NET MVC Web Application. What can/can't be done?

I'm in the middle of developing a web application and have been asked whether it will work behind a load balancer. My initial reaction is yes, since no state is tracked between requests anywhere in the system. However, there is some application-specific state loaded on app start (mainly configuration settings from the database).
This data is all read-only. Is it sufficient to rely on the normal cache-dependency mechanisms to manage this and invalidate these objects across all the applications in the cluster, or would I have to move to a shared cache system like AppFabric to ensure reliability/consistency?
With diagnostics enabled, I've got numerous logging calls using EventSource.Write and an out-of-process logger picking these up. I assume that in this case I'd need one logger installed on each of the servers in the cluster to pick up the events each one triggers. I'm not too fussed about that, but what is a good way to identify which server in the cluster serviced a request?
If you initialize the data on each server separately and it is read-only, there's no problem. The separate applications will each have their own copy.
Yes, you'd need a logger on each instance. To identify the server, you could include the server's IP address in the log entries; that way you can track which server handled each request (provided you have static IPs, but I assume you do).

Best practice for rate limiting users of a REST API?

I am putting together a REST API, and as I'm unsure how it will scale or what the demand for it will be, I'd like to be able to rate-limit use of it, as well as to temporarily refuse requests when the box is over capacity or in some kind of slashdotted scenario.
I'd also like to be able to gracefully bring the service down temporarily (while giving clients results that indicate the main service is offline for a bit) when/if I need to scale the service by adding more capacity.
Are there any best practices for this kind of thing? The implementation is Rails with MySQL.
This is all done with an outer web server that listens to the world (I recommend nginx or lighttpd).
Regarding rate limits, nginx can limit requests, e.g. to 50 requests/minute per IP; everything over that gets a 503 page, which you can customize.
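A sketch of such a limit using nginx's limit_req module (the zone name, zone size, burst value, error page, and upstream are placeholders):

    # In the http block: track clients by IP, allowing 50 requests/minute each.
    limit_req_zone $binary_remote_addr zone=perip:10m rate=50r/m;

    server {
        location / {
            limit_req zone=perip burst=10;
            # Requests over the limit are rejected with 503 by default;
            # point that status at a customized page.
            error_page 503 /errors/503.html;
            proxy_pass http://app_backend;   # placeholder upstream
        }
    }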
Regarding planned temporary downtime, in the Rails world this is done via a special maintenance.html page. There is usually some automation that creates or symlinks that file when the Rails app servers go down. I'd recommend relying not on the file's presence, but on the actual availability of the app server.
But really, you can start/stop services without losing any connections at all: run a separate instance of the app server on a different UNIX socket/IP port and have the balancer (nginx/lighttpd/haproxy) use that new instance too. Then shut down the old instance, and all clients are served by the new one alone, with no connections lost. Of course, this scenario is not always possible; it depends on the type of change introduced in the new version.
haproxy is a balancer-only solution. It can balance requests to the app servers in your farm extremely efficiently.
For a quite big service you end up with something like:
api.domain resolving round-robin to N balancers
each balancer proxying requests to M web servers for static content and P app servers for dynamic content. (Oh well, your REST API doesn't have static files, does it?)
For a quite small service (under 2K rps), all balancing can be done inside one or two web servers.
Good answers already - if you don't want to implement the limiter yourself, there are also solutions like 3scale (http://www.3scale.net), which provides rate limiting, analytics, etc. for APIs. It works using a plugin (see here for the Ruby API plugin) that hooks into the 3scale architecture. You can also use it via Varnish and have Varnish act as a rate-limiting proxy.
I'd recommend implementing the rate limits outside of your application, since otherwise high traffic will still have the effect of killing your app. One good solution is to implement them as part of your Apache proxy, with something like mod_evasive.
