How does Falcor cache data on the server side?

I understand that on the client side, Falcor caches data in the model. On the application server side, we need to implement Falcor routes as a data source. Does Falcor cache data on the application server side? If so, how?
Thanks,

In short, no, the falcor-router does not cache data. Because a single request might be resolved by multiple routes, the router does build a per-request cache, but that cache is dropped after the router finishes responding to the request.
E.g. the following request
method=get
paths=[
items[0..10]['id', 'name'],
items.length,
]
could be resolved by two or three different routes, e.g.
[items[{range}]]
[items.length]
[itemsById[{keys}]]
The router will merge each route response into a graph fragment until it resolves all requested paths and follows up any returned ref nodes. This graph fragment can be thought of as a per-request cache (or at least, it's referred to as such in the source code), but it is dropped after the response is returned to the client.
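For illustration, here is a minimal sketch of what those routes might look like with falcor-router (the in-memory itemIds/db data source and the route handlers are hypothetical, not from the question):

const Router = require('falcor-router');
const { ref } = require('falcor-json-graph');

// hypothetical in-memory data source
const itemIds = [1, 2];
const db = {
  1: { id: 1, name: 'Widget' },
  2: { id: 2, name: 'Gadget' }
};

const router = new Router([
  {
    // items[{range}]: resolve list positions to refs into itemsById
    route: 'items[{integers:indices}]',
    get: (pathSet) => pathSet.indices.map((i) => ({
      path: ['items', i],
      value: ref(['itemsById', itemIds[i]])
    }))
  },
  {
    route: 'items.length',
    get: () => ({ path: ['items', 'length'], value: itemIds.length })
  },
  {
    // the router follows the refs returned above into this route
    route: 'itemsById[{keys:ids}]["id","name"]',
    get: (pathSet) => {
      const results = [];
      pathSet.ids.forEach((id) => {
        pathSet[2].forEach((field) => {
          results.push({ path: ['itemsById', id, field], value: db[id][field] });
        });
      });
      return results;
    }
  }
]);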
This implies a few things:
the server has no knowledge of what data is/isn't already on a client
graph fragments are not materialized, meaning that running the same query twice (assuming it doesn't hit your falcor client model's cache) will hit your routes, and therefore your data sources, twice
server-side caching and cache invalidation are more appropriately handled in a database layer, rather than the router layer

Related

Is it better to send 1 POST Request with an array parameter or 40 POST requests to create 40 objects in DB?

I have functionality on my client side that will create 40 apartment objects in the DB from an array of objects containing their details. I was wondering what would be the best approach to create 40 objects in the DB sent from the client side. Is it better to pass the array of objects as a parameter and send a single POST HTTP request? Or is it better to iterate over the array of objects and send 40 POST requests one by one to the server? (Side note: the server is built with Ruby on Rails with Postgres.)
If your application has heavy network usage, consider the former: send one single POST request and process the whole batch on the Rails end. On the other hand, if you're scarce on computing resources (say you're not using a concurrent web server such as Puma), consider the latter, as more clients can be served in between the smaller requests.
On my production application I tend to go for a single POST, as we can handle multiple users: if one worker is busy accessing the database, subsequent API calls can still be served because there's still enough capacity in the pool.
Plus, with a single POST you can wrap everything in one atomic operation and roll back if something goes wrong.
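For example (a sketch; the endpoint name and payload shape are assumptions, not from the question), the single-request approach on the client could look like:

// one request carrying the whole batch; the Rails side can then wrap
// all 40 inserts in a single transaction and roll back on failure
async function createApartments(apartments) {
  const res = await fetch('/api/apartments/bulk', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ apartments })
  });
  if (!res.ok) throw new Error('bulk create failed');
}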
This actually depends on your use case, including whether you need to be able to roll back or not.
It is not black and white: you might, for example, trigger 4 requests each containing 10 rows, as sketched below.
It depends mostly on your payload (how heavy is it?) and on your network and processing power.
The point is that there is no concrete answer; you need to consider your use case to make the decision.
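The middle ground mentioned above, e.g. 4 requests of 10 rows each, is just a small loop on the client (a sketch; the batch size and endpoint are assumptions):

// split the objects into batches and send them one at a time
async function createInBatches(apartments, size = 10) {
  for (let i = 0; i < apartments.length; i += size) {
    const batch = apartments.slice(i, i + size);
    const res = await fetch('/api/apartments/bulk', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ apartments: batch })
    });
    if (!res.ok) throw new Error(`batch starting at ${i} failed`);
  }
}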

Multiple requests are being made from service worker to cache a resource

While working on building progressive web apps, we are facing weird behaviour from the service worker. Steps to reproduce:
1. Clear cache and unregister service worker
2. Go to www.example.com
3. Examine the network calls for resources (JS/CSS)
Expected result:
Only a single network request should be made for each resource.
Actual result:
Two network requests are being made for each resource.
What you're seeing is sw-precache fetching the resources to populate its caches. That happens independently of the initial request made by the controlled page. It's a fairly common model, whether or not you're using sw-precache.
(As an aside, I see that you're explicitly versioning your JS and CSS resources, which is great. You'll notice that sw-precache appends a cache-busting URL parameter to its precaching requests right now, meaning that they'll always go against the network instead of the HTTP browser cache. The upcoming 4.0.0 release of sw-precache, which you can use now via the master branch, has a new dontCacheBustUrlsMatching option, which allows you to opt out of cache-busting for resources that you're explicitly versioning via filenames. Using that option means that the additional sw-precache request to populate its caches will be fulfilled via the HTTP browser cache, skipping a trip to the network.)
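For reference, opting out looks something like this in a sw-precache build script (the glob and the eight-character version-hash pattern are assumptions about your setup):

const swPrecache = require('sw-precache');

swPrecache.write('service-worker.js', {
  staticFileGlobs: ['dist/**/*.{js,css,html}'],
  // skip the cache-busting parameter for assets whose filenames
  // already carry a version hash, e.g. app.3a2f9c1d.js
  dontCacheBustUrlsMatching: /\.\w{8}\./
});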

MVC 5 how to achieve POST that behaves like a redirect to GET with content

My client redirects to a https://domain.com/Controller/GetInfo?Querystring method. Now my query string is getting dangerously close to the 2K limit, so I need to reproduce this behavior but pack my query string into the content of the messages. Since it would be heresy (etc.) to try a GET with content, I'll use a POST. However, I can't redirect to a POST since a Redirect has no content.
So, what I am looking for is the best MVC 5 pattern to resolve this: I need to provide lots of content, but I want the resulting page hosted on my remote server (i.e. as if I had redirected)
Also, since I use load balanced servers in azure, I'd prefer maintaining my clean stateless server if at all possible (else I'll have to introduce session caching).
@AntP is absolutely right in the comments above. If your query string is approaching 2K, then you're abusing it.
If there's a particular object you're referencing, then you can simply include the id or some other identifying piece of it and use that to look it up again from your data store.
If there's no persistent record of the object, then you can use something like Session or TempData to store it between one request and the next.
Regardless, it's not possible to redirect with a request body, which also means it's not possible to redirect using POST. The reason for this is that a redirect is not something the server does, but rather the client. The server merely suggests that the client go to a different URL. It's then up to the client (web browser) to issue a new request for that URL. Since the client is the one issuing the request, it makes the decision about what data is or isn't included in that request, not the server.
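To make the suggested pattern concrete, here is a rough sketch (in Express rather than MVC 5; the route names and the in-memory store are illustrative): the client POSTs the large payload, the server persists it under an id, and the redirect carries only that id.

const express = require('express');
const { randomUUID } = require('node:crypto');

const app = express();
app.use(express.json());

// stand-in for a shared store; use a database in a load-balanced
// setup so the web servers themselves stay stateless
const store = new Map();

app.post('/getinfo', (req, res) => {
  const id = randomUUID();
  store.set(id, req.body);
  res.redirect(303, `/getinfo?id=${id}`); // 303 makes the client follow up with a GET
});

app.get('/getinfo', (req, res) => {
  const criteria = store.get(req.query.id);
  res.json({ criteria }); // render the results page here
});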

Client Server API pattern in REST (unreliable network use case)

Let's assume we have a client/server interaction happening over unreliable network (packet drop). A client is calling server's RESTful api (over http over tcp):
issuing a POST to http://server.com/products
server is creating an object of "product" resource (persists it to a database, etc)
server is returning 201 Created with a Location header of "http://server.com/products/12345"
! TCP packet containing an http response gets dropped and eventually this leads to a tcp connection reset
I see the following problem: the client will never get an ID of a newly created resource yet the server will have a resource created.
Questions: Is this application level behavior or should framework take care of that? How should a web framework (and Rails in particular) handle a situation like that? Are there any articles/whitepapers on REST for this topic?
The client will receive an error when the server does not respond to the POST. The client would then normally re-issue the request, assuming the first attempt failed. Off the top of my head I can think of two approaches to this problem.
One is that the client can generate some kind of request identifier, such as a GUID, which it includes in the request. If the server receives a POST request with a duplicate GUID, then it can refuse it.
The other approach is to PUT instead of POST to create. If you cannot get the client to generate the URI then you can ask the server to provide a new URI with a GET and then do a PUT to that URI.
If you search for something like "make POST idempotent" you will probably find a bunch of other suggestions on how to do this.
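Here is a sketch of the first approach, with the client generating a GUID and reusing it on every retry (the Idempotency-Key header name is a common convention, not something the server will honor automatically; Node 18+ provides a global fetch):

const { randomUUID } = require('node:crypto');

async function createProduct(product) {
  const key = randomUUID(); // generated once, reused across retries
  for (let attempt = 0; attempt < 3; attempt++) {
    try {
      const res = await fetch('http://server.com/products', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Idempotency-Key': key
        },
        body: JSON.stringify(product)
      });
      if (res.ok) return res.headers.get('Location');
    } catch (err) {
      // connection reset or dropped response: retry with the same
      // key so the server can recognize the duplicate
    }
  }
  throw new Error('createProduct failed after retries');
}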
If it isn't reasonable for duplicate resources to be created (e.g. products with identical titles, descriptions, etc.), then unique identifiers can be generated on the server which can be tracked against created resources to prevent duplicate requests from being processed. Unlike Darrel's suggestion of generating unique IDs on the client, this would also prevent separate users from creating duplicate resources (which you may or may not find desirable). Clients will be able to distinguish between "created" responses and "duplicate" responses by their response codes (201 and 303 respectively, in my example below).
A sketch, here in JavaScript, for generating such an identifier, in this case a hash of a canonical representation of the request:
const { createHash } = require('node:crypto');

// stand-in for real persistence logic
const createProductFrom = (request) => ({ ...request });
const products = new Map();

function productPost(request) {
  // the canonical representation need not contain every field in
  // the request, just those which contribute to its "identity"
  const tags = [...request.tags].sort().join(',');
  const canonical = [request.name, request.maker, tags, request.desc].join('|');
  const id = createHash('sha256').update(canonical).digest('hex');

  if (products.has(id)) {
    return { status: 303, product: products.get(id) }; // duplicate request
  }
  const product = createProductFrom(request);
  products.set(id, product);
  return { status: 201, product }; // newly created
}
This ID may or may not be part of the created resources' URIs. Personally, I'd be inclined to track them separately — at the cost of an extra lookup table — if the URIs were going to be exposed to users, as hashes tend to be ugly and difficult for humans to remember.
In many cases, it also makes sense to "expire" these unique hashes after some time. For example, if you were to make a money transfer API, a user transferring the same amount of money to the same person a few minutes apart probably indicates that the client never received the "success" response. If a user transfers the same amount of money to the same person once a month, on the other hand, they're probably paying their rent. ;-)
The problem as you describe it boils down to avoiding what are called double-adds. As mentioned by others, you need to make your posts idempotent.
This can be easily implemented at the framework level. The framework can keep a cache of completed responses. The requests have to carry a unique request identifier so that any retries are treated as such, and not as new requests.
If the successful response gets lost on its way to the client, the client will retry with the same request identifier, and the server will then respond with its cached response.
You are left with the durability of the cache: how long to keep responses, and so on. One approach is to remove responses from the server cache after a given period of time; this will depend on your app domain and traffic, and can be left as a configurable step in the framework piece. Another approach is to force the client to send acknowledgements. The acks can be sent either as separate requests (note that these could be lost too), or as extra data piggybacked on real requests.
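As a sketch of that framework-level piece (Express middleware here; the header name and TTL are illustrative assumptions):

const responses = new Map(); // request id -> { status, body, expires }

function replayCompletedResponses(ttlMs = 60000) {
  return (req, res, next) => {
    const key = req.get('Idempotency-Key');
    if (!key) return next();
    const cached = responses.get(key);
    if (cached && cached.expires > Date.now()) {
      // a retry of a request we already completed: replay the response
      return res.status(cached.status).json(cached.body);
    }
    // capture the response on its way out so retries can be replayed
    const json = res.json.bind(res);
    res.json = (body) => {
      responses.set(key, { status: res.statusCode, body, expires: Date.now() + ttlMs });
      return json(body);
    };
    next();
  };
}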
Although what I suggest is similar to what others suggest, I strongly encourage you to keep this layer of network resiliency doing only that: dealing with dropped requests/responses. Don't let it also deal with duplicate resources from separate requests, which is an application-level task. Merging both pieces will muddle the functionality and will not leave you with a clear separation of responsibilities.
Not an easy problem, but if you keep it clean you can make your app much more resilient to bad networks without introducing too much complexity.
Good luck.
As the other responders have pointed out, the basic problem here is that the standard HTTP POST method is not idempotent like the other methods. There is an effort underway to establish a standard for an idempotent POST method known as Post-Once-Exactly, or POE.
Now I'm not saying that this is a perfect solution for everybody in the situation you describe, but if it is the case that you are writing both the server and the client, you may be able to leverage some of the ideas from POE. The draft is here: https://datatracker.ietf.org/doc/html/draft-nottingham-http-poe-00
It isn't a perfect solution, which is probably why it hasn't really taken off in the six years since the draft was submitted. Some of the problems, and some clever alternate options are discussed here:
http://tech.groups.yahoo.com/group/rest-discuss/message/7646
HTTP connections are always initialized by the client; the server cannot open a connection to push the lost response back. So you can't solve such an error purely on the server side.
The only solution I can think of: if you know which client created the product, you can supply that client with the products it created when it next pulls that information. If the client never contacts you again, you won't be able to transmit information about the new product.

Make an ASP.NET MVC application Web Farm Ready

What would be the most efficient way to make an ASP.NET MVC application web-farm ready?
Most important is sharing the current user's information (context), and, less importantly, cached objects such as look-up items (states, street types, counties, etc.).
I have heard of/read about MemCache, but haven't seen a simple, applicable way (documentation) to implement and test it.
Request context
Any request that hits a web farm gets served by an available IIS server. Context gets created there and the whole request gets served by the same server, so context shouldn't be a problem. A request is a stateless execution pipeline, so it doesn't need to share data with other servers in any way, shape, or form. It will be served from beginning to end by the same machine.
User information is read from a cookie and processed by the server that serves the request. It then depends on whether you cache the complete user object somewhere.
Session
If you use the TempData dictionary you should be aware that it's stored inside the Session dictionary. In a server farm that means you should use means other than InProc sessions, because they're not shared between IIS servers across the farm. You should configure a different session provider that uses either a database or a separate service (the ASP.NET State Server, etc.).
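For example, switching to the out-of-process State Server is a web.config change along these lines (the host, port, and timeout values are placeholders):

<system.web>
  <!-- out-of-process session state shared by every server in the farm -->
  <sessionState mode="StateServer"
                stateConnectionString="tcpip=sessionhost:42424"
                timeout="20" />
</system.web>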
Cache
When it comes to cache, it's a different story. To make the farm as efficient as possible, the cache should ideally be shared as well. By default it's not: each server keeps its own in-memory cache. A cache miss simply means the data gets read from the source and stored in that server's cache, so if a particular farm server doesn't have some cached object, it will create it. In time, all of the servers will have cached the commonly used shared data.
Or... you could use libraries like memcached (as you mentioned) and take advantage of a shared cache. There are several examples on the net of how to use it.
But these solutions all bring additional overhead (network round trips, a separate process doing the lookups, data serialization, etc.). So the default in-process cache is the fastest, and if you explicitly need a shared cache, then decide on one deliberately. Don't share cache unless really necessary.
