So this is something I've been racking my brain about a bit. Consider the following scenario:
I'm working on my project, I build it, and my bundle includes a lazy-loaded module, module-a-[oldhash].js, that will get loaded at some point in time.
Everything is fine and dandy.
I do some more work on my project, create a new bundle, and deploy; now my content hash has changed: module-a-[newhash].js. I go to my page, my service worker calls skipWaiting, but my page still tries to request module-a-[oldhash].js, which no longer exists.
How do I go about this? The only way I can think of handling it is to show an 'update available' message that posts a skipWaiting message to the service worker and reloads the page on the controllerchange event. But I'm curious whether there's a way to achieve the same thing without having to include such a notification/toast pattern and a reload.
Additionally, it's my understanding that this would only pose a problem with lazy-loaded resources.
Is my understanding of these problems correct? What are some common patterns for dealing with this?
Pretty much everything you describe there is correct. I'll just point out that this is a problem that extends beyond the use of a service worker. It can easily happen with long-lived single page apps that attempt to lazy-load a URL that has been replaced server-side with a new deployment.
There's some general information about the problem and potential solutions collected at the "Paying Attention while Loading Lazily" site and its associated video.
In general, the best practice is to:
Always assume that lazy-loading might fail (for whatever reason) and handle those failures gracefully. One approach might be to ask the user to reload the page upon encountering a failure (see the sketch after this list).
Use a cache-first service worker to help protect against lazy-loading failures, at the expense of delaying updates until the newly installed service worker moves from waiting to active. As you mentioned, the best practice tends to be to show something in your UI letting the user know that updates are available; once they opt in to accepting those updates, postMessage() to the service worker telling it to call skipWaiting(); and finally, listen for the controllerchange event and call window.location.reload() when it fires.
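Here's a minimal sketch of both pieces. The registration URL, the showUpdateToast() UI helper, and the mod.init() export are assumptions, and the { type: 'SKIP_WAITING' } message shape is just a convention, not a built-in:

    // Page-side: handle a failed lazy-load gracefully. The chunk path is
    // resolved by your bundler; it's shown literally here for illustration.
    import('./module-a.js')
      .then((mod) => mod.init())
      .catch(() => {
        // The old hashed chunk is gone after a deploy; offer a reload.
        if (confirm('A new version is available. Reload?')) {
          window.location.reload();
        }
      });

    // Page-side: surface the waiting worker and let the user opt in.
    navigator.serviceWorker.register('/sw.js').then((registration) => {
      if (registration.waiting) {
        showUpdateToast(() => {
          registration.waiting.postMessage({ type: 'SKIP_WAITING' });
        });
      }
    });

    navigator.serviceWorker.addEventListener('controllerchange', () => {
      // The new service worker has taken control; reload so the page and
      // its lazy-loaded chunks come from the new cache.
      window.location.reload();
    });

    // Service-worker-side (sw.js): honor the opt-in message.
    self.addEventListener('message', (event) => {
      if (event.data && event.data.type === 'SKIP_WAITING') {
        self.skipWaiting();
      }
    });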
Related
Here is the scenario:
You have a site that is currently cached via a SW. You deploy a new version that includes an updated SW with a cache-busting version. The company then announces the new features. People visit the site; however, even though the SW busts the old cache, it still serves up the previous content while updating its cache in the background. So visitors who come for the new features don't see them.
Is this the expected experience with ServiceWorkers? What are the recommended strategies to get around this?
It's the expected behavior whenever you serve resources with a cache-first strategy, yes.
There are two options:
Don't use a cache-first strategy. Unfortunately, you lose out on most of the performance benefits of service workers if you use a network-first strategy. I wouldn't recommend going network-first if you can help it.
Adopt the UX pattern of displaying a "Reload for the latest updates" toast message on the screen letting the user know that the cached content has been refreshed, and allowing them to take action to see the latest content. This is, I think, the best approach. If you're using a service worker which gets updated whenever your cached content changes (e.g., one generated by sw-precache), then you can detect these updates by listening for specific service worker controller events, and use those to trigger the message. (Here's an example.)
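If you want to see roughly what that detection looks like, here's a sketch; showRefreshToast() is an assumed UI helper, and the registration URL is illustrative:

    navigator.serviceWorker.register('/sw.js').then((registration) => {
      registration.addEventListener('updatefound', () => {
        const newWorker = registration.installing;
        newWorker.addEventListener('statechange', () => {
          // 'installed' while a controller already exists means fresh
          // content has been cached and is waiting to take over.
          if (newWorker.state === 'installed' && navigator.serviceWorker.controller) {
            showRefreshToast(() => window.location.reload());
          }
        });
      });
    });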
I have an application that makes API requests to salesforce using restforce.
Specifically, the application finds a contact object, returns the IDs of all related objects, and then pulls the full record for every related object based on its ID.
This takes a long time for two reasons:
There are a lot of requests to an external API; each usually takes a fraction of a second to reply, and for some there can be 500+ individual requests.
There is often a large amount of data being pulled back via each request.
All requests currently fall within the Salesforce REST API limits, but I'm getting timeout errors from my development server, as it can take 5+ minutes for some of these requests to process.
Rails 4.2 - How best to handle this?
My question is how do I best get rails to handle this?
I can fire the API requests either from the controller (which definitely violates the skinny-controller principle) or from the view (via helper methods, which seems like a dodgy hack).
Ideally I'd like to get it running in a background job, but I'm unsure whether I can just include all the authentication and other methods in a job in the same way I can include helper methods.
Even if I could get it to work in a background job, I'm unsure what best practice might be for the user experience. Ideally I'd like to route them to a page telling them to "hang tight, go get a coffee" with a progress bar, and then auto route them to the final page once the request is complete...
But I'm unsure how to generate a temporary display until a job has been completed?
Could anyone recommend any gems or strategies that might help me digest this problem?
You should definitely use a background job for this.
Give a database object to the job, which it will update to signal that it has finished, and maybe from time to time to indicate progress.
On the user side, simply tell them that the background job is working, possibly with a progress indicator, and display the result once the database object given to the job tells you it's ready.
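On the client, that can be as simple as polling a status endpoint. A sketch, assuming you add a route like GET /jobs/:id/status that serializes the database object to JSON ({ done, progress, result_url } is an invented shape, not something restforce provides):

    function pollJob(jobId) {
      var timer = setInterval(function () {
        fetch('/jobs/' + jobId + '/status')
          .then(function (res) { return res.json(); })
          .then(function (job) {
            // Update the "hang tight" page's progress indicator.
            document.getElementById('progress').textContent = job.progress + '%';
            if (job.done) {
              clearInterval(timer);
              // Auto-route to the final page once the job reports completion.
              window.location.href = job.result_url;
            }
          });
      }, 2000); // poll every two seconds
    }

    pollJob(123); // the id of the job's database object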
I've got two applications: Interface is a Rails app and Processor is for bash scripts.
I need to notify an Interface user's session if a bash process on Processor fails. I have access to the command line on Processor, so I can hit http://interface.com/process/12/error/:error_message with :error_message set on Processor.
I'm not sure how to make that work though. That route works from the browser, but I don't know how to redirect the user with the error message.
Any help would be great.
Thanks.
To answer this, I am going to make some assumptions about your setup. Please correct me if I don't have it right.
I assume that your Interface user is monitoring your site by visiting a page such as http://interface.com/process/12/monitor on the Interface server and you want the error message to pop up to let them know something went wrong.
Given that, consider having your call to http://interface.com/process/12/error/:error_message store the error in a related ProcessError table. Then, use javascript on the monitor page to poll the Interface server for "new" errors. The polling interval really depends on the situation. If you're the only user, every second would be fine, but if there are going to be lots of monitoring users at once, you would probably want to make the polling interval longer. How long depends on load and how important it is that the user is notified quickly.
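A sketch of that polling loop, assuming a hypothetical JSON route such as GET /process/12/errors?since=<timestamp> backed by the ProcessError table:

    var lastChecked = Date.now();

    setInterval(function () {
      fetch('/process/12/errors?since=' + lastChecked)
        .then(function (res) { return res.json(); })
        .then(function (errors) {
          lastChecked = Date.now();
          errors.forEach(function (err) {
            // Replace alert() with whatever notification UI you prefer.
            alert('Process error: ' + err.message);
          });
        });
    }, 5000); // every five seconds; lengthen this under heavier load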
A push solution would be more efficient, but is a bit harder to accomplish. If this is appealing to you, have a look at Faye, a publish-subscribe messaging system that supports Rails servers and html+javascript clients.
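With Faye, the monitor page would subscribe to a channel instead of polling; this assumes Faye is mounted at /faye on the Interface server and that your error endpoint publishes to a per-process channel:

    var client = new Faye.Client('http://interface.com/faye');

    client.subscribe('/process/12/errors', function (message) {
      // Fires as soon as Processor reports an error; no polling needed.
      alert('Process error: ' + message.text);
    });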
Hopefully this points you in the right direction!
I have a requirement to inform every user to save their work and log out so that the admin can reset IIS or make some changes on the ASP.NET MVC application server.
Looping through the session object collection is not thread-safe, from what I have learned.
Any other ideas?
And even if I can get hold of the active sessions, how do I send a message to those clients?
Thanks in advance.
Save the message in a database and query the database on every request to see if a message exists.
This seems like a poorly-defined requirement.
Serious maintenance should be done at a specific time, and users should be alerted to that time window well in advance.
Simply restarting IIS is a pretty quick procedure... is there any reason users would lose their work during a restart? While I've been filling out this StackOverflow answer, for instance, they could have restarted the server a dozen times. Once I hit Post, if the server is down, it'll either time out and leave my work in the textarea, or else it will connect successfully if the server is back in time.
If I'm not submitting data, but just clicking a link, the same applies: either the browser times out, in which case a simple refresh is enough once the server is back up, or it eventually takes the user where they want to go.
If you're doing pure AJAX requests you will need to handle a missing server yourself, rather than relying on the browser to do it, but you'd need to work that out anyway because of the Eight Fallacies of Distributed Computing #1: "The network is reliable." (see http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing)
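As a sketch of what "handling it yourself" means for an XHR-based page (the /api/save endpoint and showRetryBanner() helper are invented for illustration):

    function saveWork(payload) {
      var xhr = new XMLHttpRequest();
      xhr.open('POST', '/api/save');
      xhr.setRequestHeader('Content-Type', 'application/json');
      xhr.timeout = 10000; // don't wait forever for a restarting server

      xhr.onload = function () {
        // Server is up (or back up); proceed normally.
      };

      xhr.ontimeout = xhr.onerror = function () {
        // Keep the user's work client-side and let them retry shortly.
        showRetryBanner('Could not reach the server. Your work is still here; try again in a moment.');
      };

      xhr.send(JSON.stringify(payload));
    }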
So, I'd actually push back on that requirement. They're asking you to do something that won't really meet the need (users don't lose data, have a reasonably good experience), that will become complicated, and that will be a brittle solution in the end.
Sounds like a case for SignalR!
https://github.com/SignalR/SignalR
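A sketch of the client side using SignalR's jQuery-based JavaScript API from that repo; the hub name 'maintenanceHub' and the 'notify' method are assumptions, to be matched by a server-side Hub class:

    // Requires the jquery and jquery.signalR scripts on the page.
    var connection = $.hubConnection();
    var proxy = connection.createHubProxy('maintenanceHub');

    // The server broadcasts this to every connected client before the reset.
    proxy.on('notify', function (message) {
      alert(message); // e.g. "Please save your work; IIS restart at 18:00"
    });

    connection.start();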
I'm trying to develop an application which modifies a couple of tasks on the famous online to-do list Remember The Milk (rememberthemilk.com) using its REST API.
Unfortunately the modification takes a lot of time, so I want to give feedback to the users.
My idea was just to display a couple of text lines (e.g. modifying task 1 of n...).
Therefore I used periodically_call_remote on my page and called an action which reads a Singleton.
In the request I store the text that should be displayed in the same Singleton. But I found out that once a request is under way (e.g. during a submit), periodically_call_remote does not update the specified div.
My questions about this:
1. Is this a good way to implement this behaviour?
2. If it is, how do I get periodically_call_remote to work during a submit?
Using a Singleton is most definitely a bad idea. In an advanced production setup it isn't guaranteed that subsequent requests will go to the same process or the same machine (and will therefore see a different Singleton). Plus, if you have many users, I don't even want to think about what'll happen to those poor Singletons.
Does any of this stuff actually need to go through your Rails app? It seems like you could call the RTM API via JavaScript from the page the user is on and then update the page when the XHR request is complete.
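A sketch of that approach. Note that Remember The Milk's REST endpoint requires api_key, auth_token, timeline and a signed api_sig parameter; signRequest() stands in for that signing step here, and it, the method parameters, and the tasks array are all assumptions:

    var REST_URL = 'https://api.rememberthemilk.com/services/rest/';
    var statusEl = document.getElementById('status');

    function updateTasks(tasks) {
      var i = 0;

      function next() {
        if (i >= tasks.length) {
          statusEl.textContent = 'Done.';
          return;
        }
        // The "modifying task 1 of n..." feedback from the question.
        statusEl.textContent = 'Modifying task ' + (i + 1) + ' of ' + tasks.length + '...';
        fetch(REST_URL + '?' + signRequest({
          method: 'rtm.tasks.setName',
          task_id: tasks[i].id,
          name: tasks[i].newName,
          format: 'json'
        })).then(function () {
          i += 1;
          next(); // run sequentially so the status line stays accurate
        });
      }

      next();
    }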