The site I'm working on has a caching Service Worker, which is great for its users, but I can't make up my mind whether I should leave it on during development or switch it off.
If I leave it on during development, I'll have to keep the Console open, otherwise I'll miss 500 errors because the service worker hides them.
If I switch it off, I'll have less visibility into it and won't notice as easily when I've broken it (not that I ever have).
Is there a best practice/consensus on this yet?
Service workers being a relatively new topic, there is no canonical rule on this yet, and I don't think there will be, because it's a matter of preference that in the end affects neither user experience nor app behavior.
Having said that, and as a regular service worker user, I recommend keeping service workers enabled during development.
You have to get rid of the caches regularly (clear the Cache Storage) in order to see updates to your scripts.
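For example, a quick way to wipe the caches programmatically (a minimal sketch; you can also use DevTools > Application > Clear storage) is:

```js
// Run from the page or paste into the DevTools console: delete every Cache Storage
// entry so the next load fetches fresh copies of your scripts. Whatever cache names
// exist are your own service worker's; nothing specific is assumed here.
if ('caches' in window) {
  caches.keys()
    .then((names) => Promise.all(names.map((name) => caches.delete(name))))
    .then(() => console.log('All service worker caches cleared'));
}
```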
Server side, make sure responses carry proper headers and status codes so that any server error surfaces with its correct response; that is a de facto rule while developing. Locally you can then see every response in the DevTools: 500, 404, 200, whatever you get.
Check the network traffic in DevTools: you can inspect every request and response there.
Listen to the service worker event responses. Catch them all: install, activate, fetch, all of them.
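A minimal sketch of what that can look like inside the worker itself (the logging and the cache-first fetch fallback are just illustrations, not a recommendation for any particular caching strategy):

```js
// Inside your service worker file (e.g. sw.js): log every lifecycle event so that
// nothing happens silently during development.
self.addEventListener('install', (event) => {
  console.log('[SW] install');
  // skipWaiting() lets the new worker take over without waiting for old tabs to close,
  // which is convenient while developing.
  event.waitUntil(self.skipWaiting());
});

self.addEventListener('activate', (event) => {
  console.log('[SW] activate');
  event.waitUntil(self.clients.claim()); // take control of open pages right away
});

self.addEventListener('fetch', (event) => {
  console.log('[SW] fetch', event.request.url);
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});
```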
Finally, yes, enable service workers while developing; if you don't, you may miss buggy behavior associated with them. And work with DevTools: it's imperative, in that it makes you many times more productive. DevTools is much more than just the Console, and its own documentation is a good starting point.
And don't forget to read MDN documentation.
Do I understand correctly that a service worker file in a PWA should not be cached by the PWA itself? As I understand it, once registered, installed, and activated, it exists as an entity separate from the page in the browser environment and gets reloaded by the browser once a new version is found (I am omitting details that are not important here). So I see no reason to cache the service worker file; the browser kind of caches it already by keeping it in memory (or somewhere). I think caching the service worker file would complicate discovery of code updates and bring no benefit.
However, if the service worker file is not cached, refreshing a page that registers it while offline produces an error, because the file cannot be retrieved when the network is down.
What's the best way to eliminate this error? Or should I cache the service worker file after all? What's the most efficient strategy here?
I was doing some reading on PWA but found no clear explanation of the matter. Please help me with your advice if possible.
You're correct. Never cache service-worker.js.
To avoid the error that comes from trying to register without connectivity, simply check the connection state with window.navigator.onLine and skip calling register() if you're offline.
You can also listen for network state changes and register at a later point in time if you want.
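Something along these lines (a sketch; the './service-worker.js' path is just an example):

```js
// Only register the service worker when we are online; otherwise wait for the
// 'online' event and register then.
function registerServiceWorker() {
  if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('./service-worker.js')
      .then((reg) => console.log('Service worker registered with scope', reg.scope))
      .catch((err) => console.error('Service worker registration failed', err));
  }
}

if (window.navigator.onLine) {
  registerServiceWorker();
} else {
  window.addEventListener('online', registerServiceWorker, { once: true });
}
```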
I'm looking for suggestions on the best way to handle network connection issues in an iPhone app (iOS 9/Swift 2/Xcode 7), to give the best user experience, since we know that mobile data networks are unreliable. I have my coding options in place, but I'd like to know what has worked well for other experienced developers. There's lots of info out there, but nothing I could find that is specific to a strategy for when a connection fails.
Here is the basic strategy for dealing with failed connections that I'd like to implement (along with questions):
App sends request to api.myserver.com and the request fails
Wait X second(s) and try request to api.myserver.com again (how many tries and at what time interval would you suggest?)
Try pinging some other server (e.g. google.com) to see if we can access a resource other than api.myserver.com
If we can successfully ping google.com then we know our internet is working, so we try once again to ping api.myserver.com
If this last ping fails then we alert the user that we can't communicate for some reason and to try again later
I'm using the philosophy outlined in this SO answer recommended by an Apple tech, which in general means you always check the connection to your server first, using Reachability as a separate check to ensure phone hardware is available.
At any time during this process, if Reachability is false, we would put our request in a queue to be retried when the hardware connection is restored.
I think I've got a handle on the code involved, but I'm looking for insights like "this is what worked for our app and gives a good user experience during connection issues... and was approved for use in the Apple App Store...". I understand the concepts of trying/retrying connections on failure and alerting the user (my code already does this successfully), but I'm still not solid on a good policy for how many times to retry and at what intervals.
For most of the apps I have worked on it was useful to define a couple of categories of requests which have different rules. For each category consider if retries are appropriate and how long you can really afford to wait before considering the request(s) a failure.
The most sensitive are blocking requests, things which the user must allow to complete before they can proceed: sign in, checkout, some editing actions, etc. For these it is often not worth retrying(1), and failures need to be communicated to the user immediately: if the device is offline, let the user decide when to try again; if the request fails, you've probably already made the user wait too long. Since failures tend to block the user, they usually also need to be communicated prominently.
Less sensitive are usually user-initiated but non-blocking actions: pull-to-refresh, loading details of a selected collection item, or performing a search. Your user might be waiting to see the results but is probably free to give up or navigate elsewhere in the app and check back later. Failures still need to be communicated so users can choose to try again, or at least know to stop waiting, but the notification of those failures can be less prominent. Here retries start to make sense. I usually start by trying to define a time limit from the user's perspective (how long will they wait before the app feels broken?) and then let that be the limit for how long a request can wait for connectivity or make any number of retries in response to failed connections.
Even less sensitive are requests triggered only indirectly by your user: polling for updates, loading non-essential images, warming caches. These you might retry, but the impact of failure is often so low that it may not matter.
Of all of those requests your retry policy really only impacts the second category (user-initiated but non-blocking), so I would make sure you actually have requests of that type before worrying about it. Assuming those do actually apply to your app...
Wait X second(s) and try request to api.myserver.com again (how many tries and at what time interval would you suggest?)
I would set some interval here (in the tens to hundreds of milliseconds depending on your normal api performance) to avoid an accidental flood of requests. I don't want to suggest a precise number when I don't have a solid justification for it.
My experience has been that optimizing this value is unlikely to make a perceptible difference to your users because requests often take hundreds of milliseconds to fail and users are only willing to wait for a few thousand milliseconds so making 1 or 5 or 10 requests in that time doesn't really change the final outcome. If you are able to set different expectations with your users then your results may vary.
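The question is about Swift, but the retry logic itself is language-agnostic; here is a rough JavaScript sketch of the kind of capped, spaced-out retry described above (api.myserver.com is the questioner's placeholder host, and the 3 attempts / 500 ms values are arbitrary examples to tune for your own API):

```js
// Retry a request a few times with a short pause between attempts, then give up.
async function fetchWithRetry(url, attempts = 3, delayMs = 500) {
  for (let i = 0; i < attempts; i++) {
    try {
      const response = await fetch(url);
      if (response.ok) return response;
    } catch (err) {
      // Network failure: fall through to the retry below.
    }
    if (i < attempts - 1) {
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error('Request failed after ' + attempts + ' attempts');
}

// Usage: fetchWithRetry('https://api.myserver.com/items').catch(showOfflineAlert);
// (showOfflineAlert being whatever alert mechanism your UI provides)
```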
Try pinging some other server (i.e. google.com) to see if we can access a resource other than api.myserver.com
If we can successfully ping google.com then we know our internet is working, so we try once again to ping api.myserver.com
I would not assume that this is true nor do I think that making an extra request to a third party will help you make useful predictions about when to attempt to reach your own systems. This seems like extra work to build and maintain and likely to be a source of misleading results more than valuable information. In what scenario do you imagine this provides useful information to your app or its user?
Maybe not the answer you're looking for, hopefully it's still useful.
Disclaimer: my experience is biased toward apps with a fairly simple set of REST or RPC style network requests. If you're working on a problem which calls for streaming data, P2P connections, or some other scenario then don't start with these assumptions.
(1) One end note here, because I see it as a source of failures so often: these requests should really be idempotent. Yes, even those POSTs creating new resources, checking out your cart, or whatever. When you cannot safely repeat a request, you'll eventually see cases where the request completed but the client never got the acknowledgement, so it looked like a failure. It's much easier to recover through a retry (automatic or user-triggered) of the same request than to detect and recover from duplicate requests.
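One common way to get that safety is to have the client generate a unique idempotency key per logical operation and send it with every retry so the server can deduplicate. A sketch (the header name, the /api/checkout endpoint, and the server-side deduplication are all assumptions about your own API, not a standard it necessarily supports):

```js
// Generate one key per logical operation (e.g. one checkout) and reuse it on every
// retry of that operation. The server must recognise repeats of the same key and
// return the original result instead of performing the action twice.
// crypto.randomUUID() is available in modern browsers and recent Node versions.
const idempotencyKey = crypto.randomUUID();

function checkout(cart) {
  return fetch('/api/checkout', {            // illustrative endpoint
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Idempotency-Key': idempotencyKey,     // same key on every retry of this checkout
    },
    body: JSON.stringify(cart),
  });
}
```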
For better network handling: in my application I ping a Google server before every API request; if it's reachable I call my own server's API, otherwise I show a no-network alert.
You have to do the same even on a Wi-Fi network, because Wi-Fi reachability only checks for Wi-Fi connectivity, not for actual internet access.
I'm writing a Rails web service that interacts with various pieces of hardware scattered throughout the country.
When a call is made to the web service, the Rails app then attempts to contact the appropriate piece of hardware, get the needed information, and reply to the web client. The time between the client's call and the reply may be up to 10 seconds, depending upon lots of factors.
I do not want to split the web service call in two (ask for information, answer immediately with a pending reply, then force another api call to get the actual results).
I basically see two options. Either run JRuby and use multithreading, or run several regular Ruby instances and hope that not many people try to use the service at a time. JRuby seems like the much better solution, but it still doesn't seem to be mainstream or to have out-of-the-box support at Heroku and EngineYard. The multiple-instance solution seems like a total kludge.
1) Am I right about my two options? Is there a better one I'm missing?
2) Is there an easy deployment option for JRuby?
I do not want to split the web service call in two (ask for information, answer immediately with a pending reply, then force another api call to get the actual results).
From an engineering perspective, this seems like it would be the best alternative.
Why don't you want to do it?
There's a third option: If you host your Rails app with Passenger and enable global queueing, you can do this transparently. I have some actions that take several minutes, with no issues (caveat: some browsers may time out, but that may not be a concern for you).
If you're worried about browser timeout, or you cannot control the deployment environment, you may want to process it in the background:
User requests data
You enter request into a queue
Your web service returns a "ticket" identifier to check the progress
A background process processes the jobs in the queue
The user polls back, referencing the "ticket" id
As far as hosting in JRuby, I've deployed a couple of small internal applications using the glassfish gem, but I'm not sure how much I would trust it for customer-facing apps. Just make sure you run config.threadsafe! in production.rb. I've heard good things about Trinidad, too.
You can also run the web service call in a delayed background job so that it's not tying up a web server, and it can even run on a separate physical box. This is also a much more scalable approach. If you make the web call using AJAX, you can then poll the server every second or two to see if your results are ready; that way your client is not held in limbo while the results are being calculated, and the request does not time out.
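On the client side, that polling loop might look roughly like this (a sketch; the /jobs endpoints, the ticket field, and the 2-second interval are assumptions, not something the answers above prescribe):

```js
// Kick off the long-running job, get a ticket back, then poll until it finishes.
async function requestHardwareData(params) {
  const submit = await fetch('/jobs', {                 // assumed endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(params),
  });
  const { ticket } = await submit.json();

  // Poll every 2 seconds (example interval) until the background job completes.
  while (true) {
    await new Promise((resolve) => setTimeout(resolve, 2000));
    const job = await (await fetch('/jobs/' + ticket)).json();
    if (job.status === 'done') return job.result;
    if (job.status === 'failed') throw new Error('Background job failed');
  }
}
```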
I have quite a big application running inside a Spree extension. The issue is that all requests are very slow, even locally. I get messages like 'Waiting for localhost' or 'waiting for server' in my browser status bar for 3-4 seconds on each request before execution starts. The execution time logged in the log file is quite good, but the overall response time is poor because of this initial delay. Where should I start looking to improve this situation?
One possible root cause for this kind of problem is that initial DNS name resolution is failing before eventually resolving. You can check whether this is the case using tcpdump (if that's available for your platform) or Wireshark. Look for traffic to and from your client host on port 53 and see if the name responses are coming back in a timely fashion.
If it turns out that this is the problem, then you need to make sure the client is configured so that the first resolver it tries knows about your server addresses (I'm guessing these are local LAN addresses that are failing). Different platforms have different ways of configuring this. A quick hack would be to put the address of your server in the client's hosts file to see if that fixes it.
Once you send in your request, you will see 'waiting for host' right up until the Ruby work is done and the server starts sending a response. So if there is pretty much any processing work slowing you down, you'd see this message. What you'd want to do is look at the actions where you're seeing the behaviour and break them down into pieces to see which pieces are slow. If EVERYTHING is slow, then you need to look at things that are common to every action: before filters, ApplicationController code, or something similar. What I do, when I'm just playing around to see what I need to fix, is put 'puts' statements in my code at different stages to print the current time; then I can see which stage is taking a long time.
In the question "What little things do I need to do before deploying a Rails application" I am getting a lot of answers that are bigger than "little things". So this question is slightly different.
What reasonably major steps do I need to take before deploying a Rails application? In this case, I mean things which are going to take more than 5 minutes and so need to be scheduled. For small one-line config changes, please use the little-things question.
Set up Capistrano to deploy. You'll want to learn Capistrano if you don't already know it, and use it to deploy your code in an automated way. This will involve setting up your shared directory and shared resources like database.yml.
Install the C-based MySQL gem. If you don't have all the required libs, this can take a little while, but less than 20 minutes.
Make sure you aren't vulnerable to common web application attacks. Session fixation, session hijacking, cross-site scripting, SQL injection (you probably don't have to worry much about SQL injection). Be sure you use h() when outputting user-entered data in a view. There is lots of good material online about this.
Choose a server architecture. Nginx, Mongrel, FastCGI, CGI, Apache, Passenger: there is a lot to choose from. Think about how your app will be used and decide on the best architecture, then set it up.
Set up Exception Notifier or Exception Logger. You will want your app to warn you when it breaks. Set one of these tools up to track production exceptions. Note: Exception Notifier will warn you when routing errors occur (i.e. when people fat-finger URLs or script kiddies attack you), so think about what you want the framework to do when that happens and adjust accordingly.
Make sure all of your passwords are out of source control. If you have database.yml, mail.yml (if you use yaml_mail_config), or other sensitive files in source control, get them out of there, replace them with database.yml.example, and put the real files in the shared/ folder on your server.
Ensure that your DB is locked down. A lot of people forget to secure MySQL when setting up their new production Rails box. Don't be like them.
Make sure all of the little web files are in place. If you are planning to be listed in Google, generate a sitemap.xml file. If you are planning to use an .htaccess file for something, make sure it's there. If you need a robots.txt file to prevent certain areas of your site from being indexed, make one. If you want a good-looking 404 page, make sure it's set up correctly. If you want a "Be Right Back" page to show while you deploy, make sure you have a Capistrano maintenance file specified and that Nginx or Apache knows how and when to redirect to it.
Get your SSL certs in place. If you are going to use SSL, make sure you get certificates that are valid for your production domain, and set them up.
Use some process monitoring
Sometimes your processes (mongrels in many cases) will die or other bad things will happen to them. For example a memory leak could cause the memory consumption to increase indefinitely or a process could start using all your CPU.
monit and god are both good choices to save you from this fate. They can also be set to hit a url on your site to check for a 200 response code.
Set up server monitoring
Some suggestions in this space: fiveruns, newrelic, scout
These tools will record detailed metrics on your servers and are invaluable when something goes wrong and you need to see what actually happened. They also give you real time information on server load.
If you have a cluster this kind of reporting is even more critical.
Backup
Write a script to periodically back up your database and any other assets that your users can upload. S3 might be a good choice for this.
Choose a web server / load balancer
My preferred server is nginx, but the common pattern is to start with apache + mod_proxy_http.