Breeze.js: Returning empty set when requested database does not exist

Currently we are using Breeze.js and Angular to develop our applications. Due to some persistent legacy issues, we have two databases ('Kenya' and 'Rwanda') that cannot be merged at this time, but have the same schema and metadata. Most of the time, the client knows which database to hit and passes the choice through the .withParameters() function or the .saveOptions() function. Sometimes we want to run the same query against both databases (for example, if we are requesting a list of all available countries), and we use an EntityManager wrapper on the client to manage this and issue the same query to each database. This is implemented through a custom EFContextProvider which uses the data passed with the request to determine the appropriate database and creates the appropriate context in CreateContext().
To further complicate things, in some instances one or the other database won't exist (these are local deployments created through filtered replication), but the client won't know this. Therefore, when querying for a list of all countries, it issues two requests and one will cause failures because the context cannot be instantiated properly.
This is easy enough to detect on the server. What I would like to do is detect whether the requested context is available and, if not, return a 200 response and an empty set.
I can detect this in the Breeze EFContextProvider's CreateContext() method, but cannot figure out how to make the request fall back gracefully to an empty-set response.
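For reference, the detection itself is straightforward inside CreateContext(); here is a minimal sketch, assuming Breeze's EFContextProvider and EF's Database.Exists (ResolveConnectionName() and CountryDbContext are hypothetical placeholders):

using System.Data.Entity;
using Breeze.ContextProvider.EF6; // namespace varies by Breeze version

public class CountryDbContext : DbContext
{
    public CountryDbContext(string nameOrConnectionString) : base(nameOrConnectionString) { }
}

public class CountryContextProvider : EFContextProvider<CountryDbContext>
{
    protected override CountryDbContext CreateContext()
    {
        // Hypothetical helper that inspects the request parameters
        // to pick "Kenya" or "Rwanda".
        string connectionName = ResolveConnectionName();

        if (!Database.Exists(connectionName))
        {
            // Detection works here; the open question is how to
            // short-circuit to an HTTP 200 with an empty result set.
        }
        return new CountryDbContext(connectionName);
    }

    private string ResolveConnectionName() { return "Kenya"; } // placeholder
}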
Thanks

Not exactly what I was looking for, but it probably makes more sense since most of the work is being done on the client-side:
Instead of trying to change the controller, I added a getAvailableDatabases action to the C# controller and use that to determine which of the databases I will query from the client.
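A minimal sketch of that action, assuming Web API and EF's Database.Exists (which takes a database or connection-string name); the candidate names are illustrative:

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Web.Http;

public class LookupController : ApiController
{
    // Returns the subset of known databases that actually exist in this
    // deployment; the client then issues queries only against those.
    [HttpGet]
    public IEnumerable<string> GetAvailableDatabases()
    {
        var candidates = new[] { "Kenya", "Rwanda" };
        return candidates.Where(name => Database.Exists(name)).ToList();
    }
}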

Related

Insert to asp.net mvc outputcache from console program

I am using Redis for ASP.NET MVC output cache. Some of my views take a fair bit of processing. Currently I have an overnight process that generates the required data for the views and puts it in the Redis cache so the views render much quicker; however, that data only serves the initial render of the view, after which the view itself is cached according to the output cache config.
It would be MUCH better if I could just render the view and put that directly into the cache from the overnight console program. How would I do this? I gather I would need to insert into Redis with the same key that ASP.NET MVC would use and call whatever internal render method ASP.NET MVC uses?
I don't need instructions for inserting into Redis, rather what is the render method I need to call and how are the key names constructed for ASP.NET MVC OutputCache?
I am using ASP.NET MVC 5; however, bonus kudos if you can also answer for Core to future-proof the answer!
Please, no suggestions of generating static files; that's not what I want. Thanks.
How are the key names constructed for ASP.NET MVC OutputCache?
This part is easy to answer if you consult the source code for OutputCacheAttribute. The keys depend on the settings (e.g. the keys will have more data in them if you have set VaryByParam). You can determine the keys by checking how the attribute builds the unique ID for your use case. Notice that the key parts are concatenated and then hashed (since they could get very long) and then base64-encoded. Here is the relevant code:
internal string GetChildActionUniqueId(ActionExecutingContext filterContext)
{
    StringBuilder uniqueIdBuilder = new StringBuilder();

    // Start with a prefix, presuming that we share the cache with other users
    uniqueIdBuilder.Append(CacheKeyPrefix);

    // Unique ID of the action description
    uniqueIdBuilder.Append(filterContext.ActionDescriptor.UniqueId);

    // Unique ID from the VaryByCustom settings, if any
    uniqueIdBuilder.Append(DescriptorUtil.CreateUniqueId(VaryByCustom));
    if (!String.IsNullOrEmpty(VaryByCustom))
    {
        string varyByCustomResult = filterContext.HttpContext.ApplicationInstance.GetVaryByCustomString(HttpContext.Current, VaryByCustom);
        uniqueIdBuilder.Append(varyByCustomResult);
    }

    // Unique ID from the VaryByParam settings, if any
    uniqueIdBuilder.Append(GetUniqueIdFromActionParameters(filterContext, SplitVaryByParam(VaryByParam)));

    // The key is typically too long to be useful, so we use a cryptographic hash
    // as the actual key (better randomization and key distribution, so small vary
    // values will generate dramatically different keys).
    using (SHA256Cng sha = new SHA256Cng())
    {
        return Convert.ToBase64String(sha.ComputeHash(Encoding.UTF8.GetBytes(uniqueIdBuilder.ToString())));
    }
}
You'll notice later that the unique ID is used as a key into the internal cache:
ChildActionCacheInternal.Add(uniqueId, capturedText, DateTimeOffset.UtcNow.AddSeconds(Duration));
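If you want to reproduce the key from your console program, the hash-and-encode step is easy to copy; the hard part is assembling exactly the same input string (CacheKeyPrefix, the action descriptor's unique ID, and the vary-by fragments). Here is a sketch of just that final step, using the portable SHA256.Create() in place of the Windows-specific SHA256Cng (note the code above is the child-action path; full-page output caching goes through System.Web and a different key scheme):

using System;
using System.Security.Cryptography;
using System.Text;

static class OutputCacheKey
{
    // Mirrors the hash-then-base64 step at the end of GetChildActionUniqueId.
    // "uniqueId" must be built exactly as the MVC source above builds it.
    public static string Compute(string uniqueId)
    {
        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(uniqueId));
            return Convert.ToBase64String(hash);
        }
    }
}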
What is the render method I need to call?
Short answer: ExecuteResult.
Long answer: Holy crap, you are asking a lot here. Essentially you wish to instantiate some objects within the console process and call methods which will faithfully recreate the output that would have been created if you called it from within the AppDomain where the web site usually runs.
Web applications often rely on initialization and state that is created when the application starts up (e.g. setting up the composition root/IoC, or setting up Automapper, that sort of thing), so you'd have to run the initialization of your web site. A specific view may rely on contextual information such as the URL, cookies, and querystring parameters; it may rely on configuration; it may call internal services, which also rely on configuration, as well as the AppDomain account being set up a certain way; it may need to use things like client certificates which may be set up in the service account's personal store, etc.
Here is the general procedure of what the console app would have to do:
Instantiate the site's global object, calling its constructor, which may attempt to wire up events to the pipeline.
You will need to mock the pipeline and handle any events raised by the site. You will also need to raise events in a manner that simulates the way the ASP.NET pipeline works.
You will need to implement any quirks in the ASP.NET pipeline, e.g. in addition to raising events you will also need to call handlers that aren't subscribed to the events if they have certain predefined names, such as Application_Start.
You will need to emulate the HTTP request by constructing or mocking pipeline objects, such as HttpContext.
You will need to fire request-specific events at your code in the correct order to simulate HTTP traffic.
You will need to run your routing logic to determine the appropriate controller to instantiate, then instantiate it.
You will need to read metadata from your action methods to determine which filters to apply, then instantiate them, and allow them to subscribe to yet more events, which you must publish.
In the end you will need to get the ActionResult object that results from the action method and call its ExecuteResult method.
I don't think this is a feasible approach, but I'd like to hear back from you if you succeed at it.
What you really ought to do
Your console application should simply fire HTTP requests at your application to populate the cache in a manner consistent with actual end user usage. This is how everyone else does it.
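For example, a tiny warm-up program (the URLs are placeholders for your heavy views):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class CacheWarmer
{
    static async Task Main()
    {
        // Placeholder URLs: list the expensive views here.
        var urls = new[]
        {
            "https://example.com/reports/daily",
            "https://example.com/reports/weekly"
        };
        using (var client = new HttpClient())
        {
            foreach (var url in urls)
            {
                // Each GET renders the view once and primes the output cache.
                var response = await client.GetAsync(url);
                Console.WriteLine($"{url} -> {(int)response.StatusCode}");
            }
        }
    }
}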
If you wish to replace the cached page before it has expired, you can invalidate the cache by restarting the app pool, or by using a dependency.
If you are worried about your response time statistics, change the manner in which you measure them so that you exclude any time window where this refresh is occurring.
If you are worried about impacts to a Google crawl, you can modify the host load schedule and set it to 0 during your reset window.
If you really don't want to exercise the site
If you insist that you don't want to exercise the site to create the cache, I suggest you make the views lighter weight, and look at caching at lower layers in your application.
For example, if the reason your views take so long to render is that they must run complicated queries with a lot of joins, consider implementing a database cache in the form of a denormalized table. You can run SQL Agent jobs to populate the denormalized table on a nightly basis, thus refreshing your cache. This way the view can be lightweight and you won't have to cache it on the web server.
For another example, if your web application calls RESTful services that take a long time to run, consider implementing cache-control headers in your service, and modify your REST client to honor them, so that repeated requests for the same representation won't actually require a service call. See Caching your REST API.
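For illustration, emitting such a header from a Web API action might look like this; the controller and the one-hour max-age are arbitrary:

using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

public class WidgetsController : ApiController
{
    [HttpGet]
    public HttpResponseMessage GetWidgets()
    {
        var response = Request.CreateResponse(HttpStatusCode.OK, LoadWidgets());

        // Allow well-behaved clients to reuse this representation for up
        // to an hour instead of calling the service again.
        response.Headers.CacheControl = new CacheControlHeaderValue
        {
            Public = true,
            MaxAge = TimeSpan.FromHours(1)
        };
        return response;
    }

    private string[] LoadWidgets() { return new string[0]; } // stand-in
}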

Rails: working on temporary instance between requests and then commit changes to database

I have already read Rails - How do I temporarily store a rails model instance? and similar questions but I cannot find a successful answer.
Imagine I have the model Customer, which may contain a huge amount of information attached (simple attributes, data in other tables through has_many relations, etc.). I want the application's user to access all the data in a single page with a single Save button on it. As the user makes changes to the data (i.e. he changes simple attributes, adds or deletes has_many items, ...), I want the application to update the model, but without committing changes to the database. Only when the user clicks Save should the model be committed.
For achieving this I need the model to be kept by Rails between HTTP requests. Furthermore, two different users may be changing the model's data at the same time, so these temporary instances should be bound to the Rails session.
Is there any way to achieve this? Is it actually a good idea? And, if not, how can one design a web application in which changes in a model cannot be retained in the browser but in the server until the user wants to commit them?
EDIT
Based on user smallbutton.com's proposal, I wonder if serializing the model instance to a temporary file (whose path would be stored in the session hash), and then reloading it each time a new request arrives, would do the trick. Would it work in all cases? Is there any piece of information that would be lost during serialization/deserialization?
As HTTP is stateless, you need some kind of storage between requests. The session is the easiest way to store data between requests, but in your case the session will not be enough because the data needs to be accessible to multiple users.
I see two ways to achieve your goal:
1) Get some fast external data storage like a key-value server (Redis, or anything you prefer: http://nosql-database.org/) where you put your objects via serializing/deserializing (e.g. JSON).
This may be fast depending on your design choices and data model, but it is the harder approach.
2) Just store your objects in the DB as you would regularly do and get them versioned (https://github.com/airblade/paper_trail). Then you can store a timestamp when people hit the save button, and you can always go back to this state. This is probably the easier approach, though it may be a bit slower depending on the size of your data model changes (but I think it'll do).
EDIT: If you need real-time collaboration between users you should probably have a look at something like Firebase
EDIT2: Answer to your second question, whether you can put the data into a file:
Sure, you can do that. But you would need some kind of locking to prevent data loss if more than one person is editing. You will need that as well if you go for 1), but tools like Redis already include locks to achieve your goal (e.g. redis-semaphore). Depending on your data, you may need to build some logic for merging different changes from different users.
3) Another approach that came to my mind would be doing all editing with JavaScript and saving it in one DB transaction. This would go well with synchronization tools like Firebase (or your own synchronization via the Rails streaming API).

Is there an easy way to mark an entity in the cache as "added"?

I would like to set an entity sent from the server to "Added". It looks like entityAspect has methods setDeleted, setModified, etc., but I can't seem to find one called setAdded. What is the cleanest way to set an entity to "Added"? I was thinking perhaps I would need to detach and then attach it as "Added". I have a server method called "newDeal" which creates a new entity ready for data entry. This method has business logic which I would prefer to keep on the server. When it gets to the client, the entity is marked as "Unmodified", which makes sense, but I would then like to change it to "Added".
Thank you
#giancarloa, I'm assuming that, by the time the entity is sent from server to client, it has been persisted in the database. If that's the case, it wouldn't make sense to have its entityState set to Added as it would cause a duplicate error. If that's not how it works, please explain in detail what you are doing as I'm trying to get an idea of all the steps you're taking.
I'm also confused as to why you would create an entity on the server, send it to the client, update it, and then send it back to the server to save it in the DB - this just appears to cause more traffic and possibly reduce performance. Also, what if the user decides not to save? Then the work on the server would've been wasted.
Why not create the entity on the client and, if it turns out to be saved, let the business logic kick in on the server during beforeSaveEntity/beforeSaveEntities?
I had a similar problem. Breeze expects that entities returned from the server already exist in your database. This is not the case if your server fetched the entities from some other source (not the database) and returned them to the client, where the user can then decide whether those entities should really be inserted into the database.
As you indicated, what you must do is skip the code that adds the entities into the client's EntityManager. Later, you can add the detached entities to the EntityManager.
See the following answer for more details.
https://stackoverflow.com/a/18596070/174638

Multiple RESTful Web Service Calls vs. MySQL JOINs

I am currently constructing a RESTful web service using node.js for one of my iPhone applications. At the moment, the system works as follows:
client makes requests to node.js server, server does appropriate computations and MySQL lookups, and returns data
client's reactor handles the response and updates the UI
One thing that I've been thinking about is the difference (in terms of performance and best practice) between making multiple API calls to my server and making one call which executes multiple join statements in the MySQL database and then returns a constructed object.
For example:
Let's say I am loading a user profile to display in the UI. A user has a profile picture, basic info, and news feed items. Using option 1, I would do as follows:
Make a getUser request to the server, which would do a query in the DB like this:
SELECT *
FROM user
JOIN user_info ON user.user_id = user_info.user_id
LEFT JOIN user_profile_picture ON user_profile_picture.user_id = user.user_id;
The server would then return a constructed user object containing the info from each table
Client waits for a response from the server and updates everything at once
Option 2 would be:
Make 3 asynchronous requests to the server:
getUser
getUserInfo
getUserProfile
Whenever any of the requests are received, the UI is updated
So given these 2 options, I am wondering which would offer better scalability.
At the moment, I am thinking of going with option 2 for these reasons:
Each of the async requests will be faster than the combined query in option 1, so something can be displayed to the user sooner
I am also integrating Memcached, and I feel that the 3 separate calls will make it easier to cache specific results (e.g. not caching a whole user profile, but caching user, user_info and user_profile_picture separately).
Any thoughts or experiences?
I think the key question here is whether or not these API calls will always be made together. If they are, it makes more sense to set up a single endpoint and perform a join. However, if that is not the case, then you should keep them separate.
Now, what you can do is of course use a query syntax that lets you specify whether or not a particular endpoint should give you more data and combine it with a join. This does require more input sanitization, but it might be worth it, since you could then minimize requests and still get an adaptable system.
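A sketch of that shape, shown in C# for brevity even though the question's stack is node.js; the endpoint, DTO, and include values are all hypothetical:

using System.Linq;
using System.Web.Http;

public class UserDto
{
    public int Id { get; set; }
    public object Info { get; set; }
    public object Profile { get; set; }
}

public class UserController : ApiController
{
    // GET /api/user/42?include=info,profile
    // Without "include" the bare user is returned; with it, the extra
    // joins run and the combined object comes back in one round trip.
    [HttpGet]
    public IHttpActionResult GetUser(int id, string include = null)
    {
        var user = new UserDto { Id = id }; // hypothetical base lookup
        var parts = (include ?? "").Split(',');
        if (parts.Contains("info")) user.Info = LoadUserInfo(id);
        if (parts.Contains("profile")) user.Profile = LoadUserProfile(id);
        return Ok(user);
    }

    private object LoadUserInfo(int id) { return null; }    // stand-ins for
    private object LoadUserProfile(int id) { return null; } // the join queries
}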
On the server side, it's unlikely that either of your two approaches will be noticeably slower than the other unless you're dealing with thousands of rows at a time.

Proper way to sync modified objects using RestKit and CoreData

I need help understanding how to manage the API for an iPhone application that persists remote objects using CoreData. From my understanding, when the iOS app fetches the remote objects, it loads all the objects at the resource path.
For subsequent requests, I want to reduce overhead by having the web server return only objects that have been updated since the last update time. I perform this by returning objects where updated_at is newer than the last_update time of the request.
RestKit then parses and maps the modified objects to CoreData. Is this implementation the proper way of performing synchronization using RestKit and CoreData or am I missing a layer somewhere in between?
Thanks!
Typically, RESTful interfaces should try to head "back to the basics". In your case I recommend using the HTTP header If-Modified-Since. It is slightly cleaner than passing another parameter because RestKit will handle the HTTP status responses without you doing anything.
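For illustration, the server-side half might look like this, sketched as a C# Web API action since the question doesn't specify the server stack (LoadObjectsModifiedAfter is a hypothetical query against updated_at):

using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Web.Http;

public class SyncController : ApiController
{
    [HttpGet]
    public HttpResponseMessage GetObjects()
    {
        // Clients send If-Modified-Since on subsequent requests.
        DateTimeOffset? since = Request.Headers.IfModifiedSince;

        List<object> changed = LoadObjectsModifiedAfter(since);

        if (since.HasValue && changed.Count == 0)
        {
            // Nothing changed: a 304 lets the client skip parsing and mapping.
            return Request.CreateResponse(HttpStatusCode.NotModified);
        }

        var response = Request.CreateResponse(HttpStatusCode.OK, changed);
        response.Content.Headers.LastModified = DateTimeOffset.UtcNow;
        return response;
    }

    private List<object> LoadObjectsModifiedAfter(DateTimeOffset? since)
    {
        return new List<object>(); // stand-in for the real data access
    }
}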
Otherwise, your method seems normal. Server synchronization is an enormous problem and there is a lot of literature and methods dealing with it. If you want to do something more complex then a quick web search will turn up a handful of methods, but your approach is what I usually end up doing.
If a user can edit data in your app, then there is also the problem of synchronizing modifications made on the device. For this you typically set an object-scoped "modified" flag and only upload an object if it is modified.
