Is Progressive Web Apps' offline capability a good idea for applications that display frequently changing data, like a bank account balance?
If the user is using the PWA in offline mode and navigates, for example, to the account balances section, they are actually viewing stale data about their balances, and the app is letting them perform operations based on data that may be out of date.
Am I missing something about this approach (PWA) for data that changes frequently?
PWA doesn't mean you cache the entire page. As a developer, you choose what you want to cache. Two types of caching can be done:
1) Static content cache, aka app shell cache - your HTML/CSS/JS and image files. This can be refreshed by the service worker when they change (it happens in the background without the user needing to do anything). This can be done even for pages like a bank transaction page.
2) API data cache - this is where you cache dynamic data, like the JSON response from your web service. Even this can be implemented for a bank transaction page, if the information is displayed responsibly. Say, above the transactions you display a message like "Transactions as of June 6th 2018, 5:11 PM" in a clearly visible way, so the user knows they are not seeing real-time data; they might still be happy to see the old transactions if that's what they're looking for.
Or you can skip caching dynamic data entirely (API responses, or server-rendered HTML containing such data) and cache only what is static.
At the end of the day, it's you as the developer who decides what to cache, and caching something will give you an improvement over no cache, even on a site with such dynamic content.
Here is Google's documentation explaining both.
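To make those two cache types concrete, here is a minimal service worker sketch in TypeScript. It's only an illustration under assumed names (SHELL_CACHE, API_CACHE, an /api/ path prefix), not a production setup: cache-first for the app shell, network-first with a cached fallback for API data.

// sw.ts - a minimal sketch; SHELL_CACHE, API_CACHE and the /api/ prefix are assumptions.
declare const self: ServiceWorkerGlobalScope;

const SHELL_CACHE = 'shell-v1';
const API_CACHE = 'api-v1';
const SHELL_ASSETS = ['/', '/app.css', '/app.js', '/logo.png'];

// 1) App shell cache: pre-cache the static assets on install.
self.addEventListener('install', (event: ExtendableEvent) => {
  event.waitUntil(
    caches.open(SHELL_CACHE).then((cache) => cache.addAll(SHELL_ASSETS))
  );
});

self.addEventListener('fetch', (event: FetchEvent) => {
  const url = new URL(event.request.url);
  if (url.pathname.startsWith('/api/')) {
    // 2) API data cache: try the network, store a copy, and fall back to the
    // last cached response when offline (the UI should then show the
    // "Transactions as of ..." timestamp).
    event.respondWith(
      fetch(event.request)
        .then((response) => {
          const copy = response.clone();
          caches.open(API_CACHE).then((cache) => cache.put(event.request, copy));
          return response;
        })
        .catch(async () => (await caches.match(event.request)) ?? Response.error())
    );
  } else {
    // App shell: cache-first; the service worker refreshes it when it changes.
    event.respondWith(
      caches.match(event.request).then((cached) => cached ?? fetch(event.request))
    );
  }
});

With the network-first strategy, the cached API response is only served when the network fails, which is exactly when the "Transactions as of ..." message should appear.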
When looking at how websites such as Facebook store profile images, the URLs seem to use randomly generated values. For example, Google's Facebook page's profile picture has the following URL:
https://scontent-lhr3-1.xx.fbcdn.net/hprofile-xft1/v/t1.0-1/p160x160/11990418_442606765926870_215300303224956260_n.png?oh=28cb5dd4717b7174eed44ca5279a2e37&oe=579938A8
However, why not just organise it like so:
https://scontent-lhr3-1.xx.fbcdn.net/{{ profile_id }}/50x50.png
Clearly this would be much simpler in terms of storage. Am I missing something? Thanks.
Companies like Facebook run fairly intense CDNs. The URLs may look randomly generated, but they aren't; each individual route is deliberate and programmed to be handled in that manner.
They aren't after simplicity of storage, the way you would be if you were just using FTP to connect to a basic marketing website's server. While you might put all your images in an /images folder, Facebook is much too complex for that: dozens of different types of applications access hundreds, if not thousands, of CDN nodes and servers worldwide.
If you ever build a web app, such as a Ruby on Rails app, and you work with a service such as AWS (Amazon Web Services), you'll also encounter what seem like nonsensical URLs. But it's all part of the fast delivery network provided by the architecture. Every time you "push" your app up to the server, new URLs are generated automatically for each unique resource: CSS files, JavaScript files, image files, etc. are all dynamically created. You don't have to type in each of these unique URLs individually each time you publish the app; the code simply knows where to look for them as part of the publishing process.
Example: you tell the web app to look for
//= require jquery
and it returns http://example.com/assets/jquery-eb3e278249152b5b5d5170b73d9dbf52.js?body=1 in your header.
It doesn't matter that the URL is more complex than it seemingly needs to be; the application recognizes it, and that's all that matters.
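As a rough illustration of how such fingerprinted names can be produced (a generic sketch, not the actual Rails asset pipeline; the file paths are made up), you hash the file's contents and embed the digest in the published filename:

// fingerprint.ts - generic sketch of content-based fingerprinting,
// not the actual Rails/Sprockets implementation; paths are hypothetical.
import { createHash } from 'crypto';
import { readFileSync } from 'fs';
import { basename, extname } from 'path';

// Returns e.g. "jquery-eb3e278249152b5b5d5170b73d9dbf52.js" for "jquery.js".
function fingerprint(filePath: string): string {
  const contents = readFileSync(filePath);
  const digest = createHash('md5').update(contents).digest('hex');
  const ext = extname(filePath);
  const name = basename(filePath, ext);
  return `${name}-${digest}${ext}`;
}

// A manifest maps logical names to fingerprinted ones, so templates can keep
// saying "jquery.js" while the page serves the hashed URL.
const manifest: Record<string, string> = {
  'jquery.js': fingerprint('assets/jquery.js'),
};
console.log(manifest['jquery.js']);

Because the digest is derived from the file's contents, any change to the file automatically produces a new URL, which is what makes aggressive caching safe.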
Simply put, I think it boils down to two main reasons: security and cache.
Security - adding these long, unpredictable hashes prevents others from guessing photo URLs and makes it pretty hard to download photos you aren't supposed to.
Consider what would happen if I could easily guess your profile photo URL and download it, even when you explicitly chose to share it only with friends.
Cache - by adding "random" query params to each photo, you make sure each photo instance gets its own URL. Thus you can store the photo in the browser's cache for a long time, knowing that whenever you replace it with a new one, the new photo will have a fresh URL and the browser won't keep showing you the old photo.
If you were to keep the same URL for each user's profile photo (e.g. https://scontent-lhr3-1.xx.fbcdn.net/{{ profile_id }}/50x50.png) and then upload a new photo, one of two things can happen:
If you store the photo in the browser's cache for a long time, the browser will keep showing you the cached version (as long as the URL is the same and the cache hasn't expired, there's no need to re-download the image).
If, instead, you only keep the image in the cache for a short period of time, you end up hitting your server much more than actually needed, increasing the load and hurting performance.
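To illustrate the trade-off in caching terms, here's a hedged Express/TypeScript sketch; the routes, paths, and durations are invented, not Facebook's actual configuration:

// cache-headers.ts - sketch of the two caching strategies, using Express.
// Routes, file paths and durations are illustrative only.
import express from 'express';

const app = express();

// Fingerprinted URL: the content behind this URL can never change,
// so the browser may cache it for a year and never re-ask.
app.get('/photos/:hash', (req, res) => {
  res.set('Cache-Control', 'public, max-age=31536000, immutable');
  res.sendFile(`/srv/photos/${req.params.hash}.png`);
});

// Stable per-user URL: the photo behind it can be replaced at any time,
// so the browser must re-check often - more load on the server.
app.get('/profiles/:profileId/50x50.png', (req, res) => {
  res.set('Cache-Control', 'public, max-age=60');
  res.sendFile(`/srv/profiles/${req.params.profileId}/50x50.png`);
});

app.listen(3000);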
I hope this clarifies it.
With your route scheme, how would you prevent strangers from accessing the pictures of a private account? The hash also prevents bots from downloading all the pictures.
I get your pain :-) Rather than dwelling on how this problem arises, let me speak to a solution. It's normal for hashed or base64-encoded values to look like a mess to deal with in general code, but with an identifier alongside to explain them, they don't stay mysterious for long!
I used to work at a company where we collated Facebook posts: using the Graph API we fetched each post's Insights object and extracted information from it, making it easy to pass around within the UI and send back to our Redis cache store. Once we defined a data structure in TaffyDB describing how an object is organized, everything made sense, thanks to its ability to query the useful, finite pieces out of a long, junk-looking stream of minified JavaScript.
Refer: http://www.taffydb.com/
The extra values in the URL are useful to:
Track access. This is like when a newspaper appends "&homepage" vs. "&email" to an article URL, so their system knows how a reader found the page.
Avoid abuse and control access. Imagine a user who uploaded a small, popular pornographic image as a profile image; they could then hijack the CDN as a free web host for their porn site. But those codes are used internally by the CDN to limit the number of views (one common scheme, signed URLs with an expiry, is sketched below).
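Here's a minimal sketch of that kind of scheme: HMAC-signed URLs with an expiry. The oh/oe parameter names merely mimic the example URL above; the secret and the exact format are assumptions, not Facebook's actual implementation.

// signed-urls.ts - sketch of HMAC-signed, expiring URLs. The "oh"/"oe"
// parameter names mimic the example URL; the scheme itself is an assumption.
import { createHmac, timingSafeEqual } from 'crypto';

const SECRET = 'server-side-secret'; // hypothetical; never shipped to clients

function sign(path: string, expiresAt: number): string {
  const mac = createHmac('sha256', SECRET)
    .update(`${path}|${expiresAt}`)
    .digest('hex');
  return `${path}?oh=${mac}&oe=${expiresAt.toString(16)}`;
}

// The CDN edge can verify the signature without consulting the origin.
function verify(path: string, oh: string, oe: string): boolean {
  const expiresAt = parseInt(oe, 16);
  if (Number.isNaN(expiresAt) || expiresAt < Date.now() / 1000) return false;
  const expected = createHmac('sha256', SECRET)
    .update(`${path}|${expiresAt}`)
    .digest('hex');
  // Constant-time comparison to avoid timing attacks.
  return oh.length === expected.length &&
    timingSafeEqual(Buffer.from(oh), Buffer.from(expected));
}

const url = sign('/hprofile/p160x160/photo.png', Math.floor(Date.now() / 1000) + 3600);
console.log(url);

A link like this can be handed out freely: it only works until the expiry, and nobody can forge one for another path without the server-side secret.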
I have an application that needs a list of data, but this data may be very large. If I'm going to show this list in the client (a mobile app), I can't get all of it from the server because of the limited space on a mobile device.
For example, in the Facebook app, there are tons of newsfeed items on the server, and the user can only see some of them. If the user wants to see more, they scroll down and it refreshes. So how do I implement something like this on both client and server? (Currently my server is written in Ruby on Rails, and the client is iOS.)
And once the client gets that data, should it be stored in memory or in a local database? I'm worried about memory limits on mobile phones.
Thanks
On the server side, you could write an API supporting pagination and a custom result count, e.g. myapp.com/api/get?start=0&count=20 to get the first 20 results, and when the user scrolls all the way down in your view on the iPhone, fetch the next items like this: myapp.com/api/get?start=20&count=20.
If you plan your design well, you'll get something very flexible that you'll be able to change later if you realize that 20 results is too much/not enough, etc.
Depending on your app's architecture and the amount of data your app will handle, you might also need to provide API methods based on the last-updated time, to ensure you're not missing data (e.g., if you call your second get?start=20 a few minutes after the first one, the start index might no longer have the same meaning).
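The backend in the question is Rails, but the idea translates directly; here's a hedged TypeScript/Express sketch of both endpoints (the Item model and in-memory list are stand-ins for a real database):

// pagination.ts - sketch of offset pagination plus a "since" filter,
// with Express. The Item type and in-memory list stand in for a real DB.
import express from 'express';

interface Item { id: number; text: string; createdAt: number; }
const items: Item[] = []; // would be a database query in practice

const app = express();

// GET /api/get?start=20&count=20 - classic offset pagination.
app.get('/api/get', (req, res) => {
  const start = Number(req.query.start ?? 0);
  const count = Math.min(Number(req.query.count ?? 20), 100); // cap the page size
  res.json(items.slice(start, start + count));
});

// GET /api/updates?since=1700000000 - fetch only what changed, so a later
// page request doesn't miss items inserted after the first call.
app.get('/api/updates', (req, res) => {
  const since = Number(req.query.since ?? 0);
  res.json(items.filter((it) => it.createdAt > since));
});

app.listen(3000);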
As for storing data locally, it all depends on what you want to achieve. Are you sure you need to save everything the user has downloaded? You could store only the most recently fetched items in a local SQLite database and query them the next time your app starts up, before refreshing the view (I don't know how Facebook's iPhone app is implemented, but at least it looks like it's done that way).
I am trying to build an iOS-based news app. I went through some of the best news apps and found that when I tap a menu like Home (for example), they request the home data only once; the next time I tap Home, I think they display cached data, because I don't see any sign of a data request, which keeps the app fast.
So how do they keep the app supplied with recent data? If cached data is displayed every time, the data may already have changed on the server, and that won't be reflected in the app. What is the best way to handle data requests in apps: should I request data on every tap of a menu button, or should I maintain some timer, requesting fresh data from the server periodically and displaying cached data the rest of the time?
Use Core Data for caching the news, store the timestamp as well, and check the timestamp before displaying the data to the user. If the last updated time is older than x minutes, get the data from the server.
Also, you can store the last updated time of the news articles on the server and create an API that just returns the article IDs and their timestamps. Then in your app, first query for the timestamps, and fetch only those articles that are missing from your DB or have older timestamps.
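The question is about Core Data, but the refresh rule itself is platform-neutral. Here's a hedged TypeScript sketch of the "older than x minutes" check; the cache shape and the news endpoint are assumptions:

// stale-check.ts - platform-neutral sketch of the "refresh if older than
// x minutes" rule; the cache shape and the endpoint are hypothetical.
interface CachedNews { articles: string[]; fetchedAt: number; }

const MAX_AGE_MS = 5 * 60 * 1000; // "x minutes" - tune to taste
let cache: CachedNews | null = null;

async function fetchNews(): Promise<string[]> {
  const res = await fetch('https://example.com/api/news'); // hypothetical endpoint
  return (await res.json()) as string[];
}

async function getNews(): Promise<string[]> {
  const now = Date.now();
  // Fresh enough: serve from the cache, no network request at all.
  if (cache && now - cache.fetchedAt < MAX_AGE_MS) {
    return cache.articles;
  }
  // Stale or empty: refresh from the server and record the new timestamp.
  const articles = await fetchNews();
  cache = { articles, fetchedAt: now };
  return articles;
}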
The simplest and most popular way is to use a great HTTP library like AFNetworking or ASIHTTPRequest. These libraries support caching in the recommended way: by setting a simple cachePolicy you can easily achieve your purpose. And it's not just about caching; they handle many hidden HTTP complexities on their own (cookies, HTTP authentication, the Not-Modified header, and more).
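For instance, the Not-Modified handling these libraries give you boils down to conditional requests. A hedged sketch of the same idea with the standard fetch API (the endpoint URL is hypothetical):

// conditional-get.ts - sketch of Last-Modified / 304 handling with fetch.
// The endpoint URL is hypothetical; libraries like AFNetworking do the
// equivalent for you via their cache policies.
let lastModified: string | null = null;
let cachedBody: string | null = null;

async function getArticles(): Promise<string> {
  const headers: Record<string, string> = {};
  if (lastModified) headers['If-Modified-Since'] = lastModified;

  const res = await fetch('https://example.com/api/articles', { headers });

  if (res.status === 304 && cachedBody !== null) {
    return cachedBody; // server says nothing changed: reuse the cache
  }
  cachedBody = await res.text();
  lastModified = res.headers.get('Last-Modified');
  return cachedBody;
}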
I strongly recommend this approach, as I have already built some iOS news-reading apps this way.
We have a web app that uses Mahout and collaborative filtering to generate product recommendations, based on users assigning ratings to items.
There is an iOS application that communicates with the web app through a REST API and lets users scroll through items and assign them ratings.
The iOS application pulls a list of ranked products from the web app - this is the list displayed to the user. As the user scrolls toward the end, we request items further down the list.
There is also a requirement that the iOS application not show a user products that they've seen before on that specific device.
My question is - how should we be handling this last requirement?
Should each iOS client be maintaining a list of what they've seen before, and simply remove these from the list that it pulls from the server?
Or should the server maintain state for each client, and remove seen items from the list before it sends it?
What pros and cons can you see for either approach?
Cheers,
Victor
First off, if the requirement is not to show a user products they've seen before on any device/platform (for example, if they used the app on their iPhone and then on their iPad/iPod), then you'd definitely have to do it server-side, as the app cannot know what the user has seen on other devices unless you're storing it on the server.
Assuming it is a device-specific requirement, I would think your goal is to minimize (potentially unreliable or slow) network traffic to optimize the iPhone experience. Syncing back and forth with the server requires extra network traffic, which may fail at times. That points to client-side storage, unless your users are going to see some huge number of products that would chew up disk space; I assume the amount of data you'd store per user is nominal.
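If you go client-side, the bookkeeping can be as simple as a persisted set of seen IDs. A hedged sketch in browser-flavoured TypeScript (on iOS the set would live in user defaults or a local DB; the storage key and Product shape are made up):

// seen-filter.ts - client-side sketch: persist seen product IDs and filter
// each page pulled from the server. Key name and Product type are assumptions.
interface Product { id: string; name: string; }

const SEEN_KEY = 'seenProductIds'; // hypothetical storage key

function loadSeen(): Set<string> {
  const raw = localStorage.getItem(SEEN_KEY);
  return new Set(raw ? (JSON.parse(raw) as string[]) : []);
}

function markSeen(seen: Set<string>, products: Product[]): void {
  products.forEach((p) => seen.add(p.id));
  localStorage.setItem(SEEN_KEY, JSON.stringify([...seen]));
}

// Drop anything this device has already shown before rendering the page.
function filterUnseen(page: Product[]): Product[] {
  const seen = loadSeen();
  const unseen = page.filter((p) => !seen.has(p.id));
  markSeen(seen, unseen);
  return unseen;
}

One wrinkle: after filtering, a page may come back shorter than requested, so the client should be prepared to immediately request the next page to fill the view.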
Currently I'm building a few mobile apps (on iOS now, Android later) that retrieve information via AJAX calls (returning JSON) from a Ruby on Rails application. This obviously applies as well to other applications that use another source to return the JSON data.
The main question is WHEN to store the data locally and when to just use AJAX calls to retrieve it. Currently, my apps do not store a single thing locally and instead make AJAX calls for all data. For this example I think we can use the Twitter mobile app, which is one a lot of people are familiar with, and whose behavior I'm wondering about (more logically than technically).
Questions:
1) When you log in, the first thing you see is a list of all of the items in your stream, and that list is available offline. Does that mean that when you originally signed in, Twitter pulled your last X (100?) stream items into a local database, and future views just pull them from there?
2) If you then put your phone in airplane mode (or just shut off mobile data) and tap one of those tweets, it opens the tweet page with all of its data. So it looks like they aren't pulling that information in individually each time you visit a tweet page (which is what my app currently does, and it takes some time to load the data and create the views). Does it make sense that they are probably just reusing the information they pulled in when creating your stream items?
3) Users. Is it better practice (when viewing a user's "profile" page, for example) to store the user's data locally and then refresh it on future visits, or to pull in all of the data via AJAX each time? In theory each approach requires an AJAX call...
I think those are my main questions for now. If anyone has any thoughts on any of those things (or any other insights into mobile storage) that would be great! If anyone needs screenshots of anything I referenced please let me know and I'd be happy to get those for you.
Currently using:
Titanium Appcelerator for iOS
Ruby on Rails for Backend and remote storage
OK, firstly, there is a difference between local storage and the device cache.
Mobile phones cache a lot of data so that it doesn't have to be requested each time, eating into your data plan. It's the same idea as when you open a page in Safari, go to the home screen, and go back into Safari: the page is still there. It wasn't saved locally; it was just cached by iOS.
Local storage is for when the data doesn't change. Using Twitter as an example, like you have: on first startup it downloads your current activity, and if one of those items contains a link, tapping it generates a new request. If you have turned off cellular data and can still open a link, that's not because Twitter stored it locally but because iOS cached it temporarily to avoid downloading it multiple times. Twitter may very well store some of your activity locally, but at least from what I've seen it stores a limited number of items, starting with the most recent, and downloads the rest on demand.
Generally speaking, if the data lives on the web, it's fine to use AJAX calls; that is what most apps do. Local storage is for when the data is only created/viewed on the device (like an app for taking notes). If you wish to provide local storage so that someone can view their activity offline, great, but that's a feature, not a requirement.
Most people only start thinking about this when users frequently request the same data over and over and it isn't going to change often; then you'd develop a last-modified system, where you send an AJAX call to see whether there is anything new and, if not, read from local storage. If the data is dynamic and subject to change often, stick with the AJAX calls.
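A hedged sketch of that last-modified system (both endpoints and both storage keys are invented): ask the server for a cheap timestamp first, and only download the full payload when it has changed.

// last-modified-sync.ts - sketch of the "check before you download" pattern.
// Both endpoints and both storage keys are hypothetical.
const DATA_KEY = 'feedData';
const STAMP_KEY = 'feedLastModified';

async function getFeed(): Promise<string> {
  // Cheap call: just the server's last-modified timestamp.
  const stampRes = await fetch('https://example.com/api/last-modified');
  const serverStamp = (await stampRes.text()).trim();

  const localStamp = localStorage.getItem(STAMP_KEY);
  const localData = localStorage.getItem(DATA_KEY);

  // Nothing new on the server: read from local storage, skip the big download.
  if (localData !== null && localStamp === serverStamp) {
    return localData;
  }

  // Something changed: fetch the full payload and remember its timestamp.
  const dataRes = await fetch('https://example.com/api/feed');
  const data = await dataRes.text();
  localStorage.setItem(DATA_KEY, data);
  localStorage.setItem(STAMP_KEY, serverStamp);
  return data;
}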