In the previous build I used setProperties and omitted trackingImplementation; I decided to use Knockout manually to speed things up (I have a lot of entities).
Now it's changed to this syntax, and even though I comment out the "ko" line, it still creates observables. Is there a way to prevent this?
core.config.initializeAdapterInstances({
// modelLibrary: "ko",
dataService: "webApi"
});
I'm loading around 5000-10000 entities into the cache. These are 'guests' which will be ticked off a guest list during an evening, and I need them stored offline because if I lose connection the app fails to do its job. I'm also running on mobile devices, which adds an additional hit: when I use KO, every entity brought across from the server becomes a list of observable properties, which is obviously overkill and crashes Safari on iPhone.
Instead I wait for the user to do a search across the 10,000 entities using a BreezeJS local query, then for each guest in the search results I instantiate a guest with observables. This allows me to use Knockout for binding and leaves the rest of the entities alone; it works well and performs well on iOS devices too.
Just having a read of 'extending entities' now
Thanks
I see in your comment that you tried ...modelLibrary: backingStore.... I was going to suggest this. But then I worried that your manual KO properties would by-pass the backingStore properties and the wheels would come off. I think it could work (haven't tried) if you added KO computeds that read and wrote the backingStore properties ... but they'd have to have different, non-conflicting names, right?
Let's go back to root causes for a moment. What is too slow about using the ko model library and what are you doing that is faster? I'm having trouble imagining how the definition of the KO properties could be a performance issue. How have you measured the speed difference? What were the speed differences?
Also, did you try defining the KO properties in your own entity type constructor and registering that constructor with the MetadataStore as described in "Extending entities"?
I am currently implementing a web application in .NET Core (C#) using Entity Framework. While working on the project I have encountered quite a few challenges, but I will start with the ones which I think are most important. My questions are as follows:
Instead of frequently loading data from the database, I keep a set of static objects which is a mirror of the data in the database. However, it is tedious and error prone to ensure that any changes, i.e. adding/deleting/modifying objects, are saved to the database in real time. Is there any good example or advice I can refer to in order to improve my approach?
Another thing is that the value of some objects' properties will change on the fly according to the value of other objects' properties, something like a spreadsheet where a cell's value changes automatically when the cell its formula refers to changes. I do not have a solution for this yet, and I would appreciate any example I can refer to. This will also add another layer of complexity to syncing the changes of the in-memory objects to the database.
At the moment, I am unsure if there is any better approach. Appreciate if anyone can help. Thanks!
Basically, you're facing a problem that's called eventual consistency. Something changes and two or more systems need to be aware at the same time. The problem here is that both changes need to be applied in order to consider the operation successful. If either one fails, you need to know.
In your case, I would use the Azure Service Bus. You can create queues and put messages on a queue. An Azure Function would handle these queue messages. You would create two queues: one for database updates, and one for the in-memory update (I think changing this to a cache service may be something to consider). Now the advantage of these queues is that you can easily drop messages on them from anywhere. Because you mentioned the object is going to evolve, you may need to update these objects either in the database or in memory (cache).
Once you've done that, I'd create a topic, with two subscriptions. One forwarding messages to Queue 1, and the other to Queue 2. This will solve your primary problem. In case an object changes, just send it to the topic. Both changes (database and memory) will be executed automagically.
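To make the "send it to the topic" step concrete, here is a minimal, hedged sketch using the Azure.Messaging.ServiceBus client; the topic name "object-changes", the subscription names in the comments and the JSON serialization are my own placeholder choices, not something prescribed by the approach above.
using System.Text.Json;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public class ObjectChangePublisher
{
    private readonly ServiceBusSender _sender;

    public ObjectChangePublisher(string connectionString)
    {
        // One client, one sender pointed at the topic.
        var client = new ServiceBusClient(connectionString);
        _sender = client.CreateSender("object-changes");
    }

    // Publish a single change; the two subscriptions (e.g. "db-updates" and
    // "memory-updates") each forward a copy to their queue, where an Azure
    // Function applies the change to the database or the in-memory store.
    public Task PublishAsync<T>(T changedObject) =>
        _sender.SendMessageAsync(new ServiceBusMessage(JsonSerializer.Serialize(changedObject)));
}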
The only problem you have now is that you mentioned you wanted to update the database in real time. With this scenario, you're going to have to let go of that.
Also, you need to make sure you have proper alerts in place for the queues so in case you did miss a message, or your functions didn't handle it well enough, you'll receive an alert to check & correct errors.
I totally agree with #nineedm's answer, but there are also other solutions.
If you introduce a cache, you will always face the cache invalidation problem: you have to mark the cache as invalid when the data change. Sometimes this is easy, depending on the nature of the cached data and how often they change.
If you have just a single application, MemoryCache can be enough, with properly specified expiration options.
If there is a cluster, you have to look at distributed cache solutions, for example Redis. There is an MS article about that: Distributed caching in ASP.NET Core.
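As a rough illustration of both options (my own sketch, not code from the article), this is what the single-application MemoryCache variant can look like, with the registration line you would swap in for Redis in a cluster; the Customer type, the cache key format and the five-minute expiration are assumptions.
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public record Customer(int Id, string Name);

public class CustomerCache
{
    private readonly IMemoryCache _cache;

    public CustomerCache(IMemoryCache cache) => _cache = cache;

    public Task<Customer> GetAsync(int id, Func<int, Task<Customer>> loadFromDb) =>
        _cache.GetOrCreateAsync($"customer:{id}", entry =>
        {
            // Bound staleness: the cached copy expires and is reloaded.
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            return loadFromDb(id);
        });
}

// Registration (Program.cs / Startup.cs):
// services.AddMemoryCache();                         // single application
// services.AddStackExchangeRedisCache(o =>           // cluster: Redis-backed IDistributedCache
//     o.Configuration = "localhost:6379");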
I'm having an issue with an Umbraco site of mine: for some reason some of the nodes are timing out when I try to click on them in the back-end of the site.
The front-end works fine and there aren't any slowdown issues there, however I'm unable to edit these same nodes in the back-end as the system seems to just hang. This is making it incredibly difficult to debug as I have no idea what properties are actually causing the problems here. What's strange is I can create a node of the same document type and enter in some dummy values and that works fine, yet I can't seem to edit the existing nodes.
I've tried republishing the entire site, republishing the individual nodes, deleting the umbraco.config file and nothing has worked up to this point.
What's also interesting is that if I close down the browser the system seems to stop hanging and I can log in and try again.
Has anyone encountered this before or know where to begin?
Thanks
I have encountered something similar. The longer you work with Umbraco the slower it becomes and if you check the memory usage in Chrome's task manager, you can see that certain actions upon nodes bump the memory usage up a little further. The answer is just to close down the tab and open a new one.
I have reported this and Umbraco cannot replicate it. However, I do think this is possibly due to a package installed into Umbraco, maybe uComponents. It's very difficult to pinpoint.
Update:
If you can access some nodes but not others, then this is actually slightly easier to debug. I would check what similarities the nodes that timeout have.
Are they all of the same document type?
Do they all use the same data type?
I would guess that the nodes in question are using a data type that is performing an operation when the node is loading, and that operation is timing out. For example, do you have any data types that load data from the database, like enums? Do you have any datatypes that load data from a web service?
Do you have any usercontrol data types wrapped in the UserControlWrapper data type? These would be somewhere to check.
Finally, check:
The database's [umbracoLog] table. Any Umbraco-specific errors will be listed there.
The computer's event viewer. This will show any unhandled errors.
My money's on a database timeout.
I have an old SL4/ria app, which I am looking to replace with breeze. I have a question about memory use and caching. My app loads lists of Jobs (a typical user would have access to about 1,000 of these jobs). Additionally, there are quite a few lookup entity types. I want to make sure these are cached well client-side, but updated per session. When a user opens a job, it loads many more related entities (anywhere from 200 - 800 additional entities) which compose multiple matrix-style views for the jobs. A user can view the list of jobs, or navigate to view 1 job at a time.
I feel that I should be concerned with memory management, especially not knowing how browsers might deal with this. Originally I felt this should all be one EntityManager and I would detachEntities when the user navigates away from a job, but I'm thinking this might benefit from multiple managers by intended lifetime. Or perhaps I should create a new dataservice & EntityManager each time the user navigates to a new hash '/#/' area, since comments on clear() seem to indicate that this would be faster? If I did this, I suppose I would be using pub/sub to notify other viewmodels of changes to entities? This seems complex and defeats some of the benefits of Breeze as the context.
Any tips or thoughts about this would be greatly appreciated.
I think I understand the question. I think I would use a multi-manager approach:
Lookups Manager - holds once-per session reference (lookup) entities
JobsView Manager - "readonly" list of Jobs in support of the JobsView
JobEditor Manager - One per edit session.
The Lookups Manager maintains the canonical copy of reference entities. You can fill it once with a single call to server (see docs for how). This Lookups Manager will Breeze-export these reference entities to other managers which Breeze-import them as they are created. I am assuming that, while numerous and diverse, the total memory footprint of reference entities is pretty low ... low enough that you can afford to have more than one copy in multiple managers. There are more complicated solutions if that is NOT so. But let that be for now.
The JobsView Manager has the necessary reference entities for its display. If you only displayed a projection of the Jobs, it would not have Jobs in cache. You might have an array and key map instead. Let's keep it simple and assume that it has all the Jobs but not their related entities.
You never save changes with this manager! When editing or creating a Job, your app always fires up a "Job Editor" view with its own VM and JobEditor Manager. Again, you import the reference entities you need and, when editing an existing Job, you import the Job too.
I would take this approach anyway ... not just because of memory concerns. I like isolating my edit sessions into sandboxes. Eases cancellation. Gives me a clean way to store pending changes in browser storage so that the user won't lose his/her work if the app/browser goes down. Opens the door to editing several Jobs at the same time ... without worrying about mutually dependent entities-with-changes. It's a proven pattern that we've used forever in SL apps and should apply as well in JS apps.
When a Job edit succeeds, you have to tell the local client world about it. Lots of ways to do that. If the ONLY place that needs to know is the JobsView, you can hardcode a backchannel into the app. If you want to be more clever, you can have a central singleton service that raises events specifically about Job saving. The JobsView and each new JobEditor communicate with this service. And if you want to be hip, you use an in-process "Event Aggregator" (your pub/sub) for this purpose. I'd probably be using Durandal for this app anyway, and it has an event aggregator in the box.
Honestly, it's not that complicated to use and importing/exporting entities among managers is a ... ahem ... breeze. Well worth it compared to refreshing the Jobs List every time you return to it (although you'll want a "refresh button" too because OTHER users could be adding/changing those Jobs). You retain plenty of Breeze benefits: querying, validation, change-tracking, batch saves, entity navigation (those reference lists work "for free" in Breeze).
As a refinement, I don't know that I would automatically destroy the JobEditor view/viewmodel/manager when I returned to the JobsView. In my experience, people often return to the same Job that they just left. I might hold on to a view so you could go back and forth quickly. But now I'm getting tricky.
I'm running a db4o server with multiple clients accessing it. I just ran into the issue of one client not seeing the changes from another client. From my research on the web, it looks like there are basically two ways to solve it.
1: Call Refresh() on the object (from http://www.gamlor.info/wordpress/2009/11/db4o-client-server-and-concurrency/):
const int activationDepth = 4;
client2.Ext().Refresh(objFromClient2, activationDepth);
2: Instead of caching the IObjectContainer, open a new IObjectContainer for every DB request.
Is that right?
Yes, #1 is more efficient, but is it really realistic to specify which objects to refresh? I mean, when a DB is involved, every time a client accesses it, it should get the latest information. That's why I'm leaning towards #2. Plus, I don't have major efficiency concerns.
So, am I right that those are the two approaches? Or is there another?
And, wait a sec... what happens when your object goes out of scope? On a timer, I call a method that gets an object from the DB server. That method instantiates the object. Since the object went out of scope, it's not there to refresh. And when I call the DB, I don't see the changes from the other client. In this case, it seems like the only option is to open a new IObjectContainer. No?
** Edit **
I thought I'd post some code using the solution I finally decided to use. Since there were some serious complexities with using a new IObjectContainer for every call, I'm simply going to do a Refresh() in every method that accesses the DB (see Refresh() line below). Since I've encapsulated my DB access into logic classes, I can make sure to do the Refresh() there, every time. I just tested this and it seems to be working.
Note: The Database variable below is the db4o IObjectContainer.
public static ApplicationServer GetByName(string serverName)
{
    ApplicationServer appServer = (from ApplicationServer server in Database
                                   where server.Name.ToUpperInvariant() == serverName.ToUpperInvariant()
                                   select server).FirstOrDefault();

    // Refresh to a depth of 10 so changes committed by other clients are visible.
    Database.Ext().Refresh(appServer, 10);
    return appServer;
}
1) As you said, the major problem with this is that you usually don't really know which objects to refresh.
You can use the committed event to refresh objects as soon as any client has committed. db4o will distribute that event. Note that this also consumes some network traffic & time to send the events. And there will be a time frame where your objects have a stale state.
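A rough sketch of that committed-event idea, based on the db4o .NET event API as I understand it (the refresh depth and the handler wiring are my own assumptions), registered once per client container:
using Db4objects.Db4o;
using Db4objects.Db4o.Events;
using Db4objects.Db4o.Ext;

public static class StaleObjectRefresher
{
    public static void RegisterRefreshOnCommit(IObjectContainer client, int depth)
    {
        IEventRegistry events = EventRegistryFactory.ForObjectContainer(client);
        events.Committed += (sender, args) =>
        {
            // args.Updated describes objects another client just committed;
            // refresh our cached copies so they are no longer stale.
            foreach (IObjectInfo info in args.Updated)
            {
                client.Ext().Refresh(info.GetObject(), depth);
            }
        };
    }
}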
2) This is actually the cleanest method, but not for every DB request. Use an object container for every logical unit of work: any operation which is one 'atomic' unit of work in your business operations.
Anyway, in general: db4o was never built with the client-server scenario as its first priority, and it shows in concurrent scenarios. You cannot avoid working with stale (and even inconsistent) object state, and there are no concurrency control options (except the low-level semaphores).
My recommendation: use a client container per unit of work. Be aware that even then you might get stale data, which might lead to an inconsistent view & update. When there are rarely any contentions & races in your application scenario and you can tolerate a mistake once in a while, this is fine. However, if you really need to ensure correctness, then I recommend using a database which has a better concurrency story =(
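For the "client container per unit of work" recommendation, a minimal sketch could look like the following, reusing the ApplicationServer query from the question; it assumes an embedded IObjectServer (the networked Db4oClientServer.OpenClient overloads work the same way).
using System.Linq;
using Db4objects.Db4o;
using Db4objects.Db4o.Linq;

public static class ApplicationServerQueries
{
    // Each unit of work gets its own container and therefore a fresh,
    // committed view of the data; dispose it when the operation is done.
    public static ApplicationServer GetByName(IObjectServer server, string serverName)
    {
        using (IObjectContainer client = server.OpenClient())
        {
            return (from ApplicationServer s in client
                    where s.Name.ToUpperInvariant() == serverName.ToUpperInvariant()
                    select s).FirstOrDefault();
        }
    }
}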
What are the benefits of using attached objects vs detached objects?
What I'm currently doing in my repository is manually detaching my objects before I update or delete them. So if I'm updating or deleting, I don't do a round trip; I delete by ID. I think the detached scenario works for me. Am I doing something wrong?
I'm working with an n-tier app that utilizes ASP.NET MVC and WCF.
Using attached objects allows you to manipulate objects, track changes, and do concurrency optimization. In most cases I use attached objects for updates or in a stateful application. This will also allow you to have lazy loading and to benefit from the context cache. If you are using Entity Framework in a stateful manner, this is great because you can reduce the number of calls to the database when you require a single object from the context. Using GetObjectByKey will query the context before making a query to the database; if the object was previously loaded, it will save you a round trip to the database.
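To illustrate that GetObjectByKey behaviour, here is an illustrative sketch against the ObjectContext API; MyEntities, the Users set and the "Id" key name are placeholder names, not anything from the question.
using System.Data;            // EntityKey
using System.Data.Objects;    // ObjectContext-based context (EF 4/5 style)

public static class UserLookup
{
    // MyEntities and User are hypothetical types standing in for your context/entity.
    public static User FindUser(MyEntities context, int userId)
    {
        var key = new EntityKey("MyEntities.Users", "Id", userId);

        // TryGetObjectByKey checks the context cache first and only hits
        // the database if the entity has not already been loaded.
        object found;
        if (context.TryGetObjectByKey(key, out found))
        {
            return (User)found;
        }
        return null;   // no entity with that key
    }
}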
Using detached objects is great! It allows for faster reads, simpler objects to materialize, and a smaller memory footprint for the entity context. It is also best when sending data over the wire (WCF, services), for anything out of scope, or when you are converting the objects to domain objects. Since you do not require any tracking on the objects, this is a good optimization to start with. It can be achieved quickly using the NoTracking merge option on the entity set.
Detached objects will also greatly simplify working with EF in environments where you have many instances of your context. Simply attach the object before making changes and saving.
Note: using NoTracking will not allow you to use lazy loading, change tracking, GetObjectByKey, or any of the stateful functionality of Entity Framework. With NoTracking you will need to use eager loading ("Include()") to load related entities / navigation properties. EntityKeys will also not be loaded.
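A rough sketch of that NoTracking plus eager-loading combination on the ObjectContext API (MyEntities, Users and the "Roles" navigation property are placeholder names):
using System.Collections.Generic;
using System.Linq;
using System.Data.Objects;

public static class ReadOnlyQueries
{
    // MyEntities and User are hypothetical types standing in for your context/entity.
    public static List<User> GetUsersWithRoles(MyEntities context)
    {
        // NoTracking: results are materialized detached - faster reads,
        // but no change tracking, no lazy loading and no GetObjectByKey.
        context.Users.MergeOption = MergeOption.NoTracking;

        // Because lazy loading is off, related data must be eager-loaded.
        return context.Users.Include("Roles").ToList();
    }
}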
Edit:
Lazy loading on detached entities does not work because there is no context for them to query against; the entity may also be missing the required proxies and EntityKeys.
I would strongly suggest using eager loading. This may also be an optimization in the end, because it is hard to gauge the impact of lazy loading: it can produce situations where, if you are iterating a collection, a request is made to the database for each object in the collection. This can be quite problematic when you have large collections.
// Attach the detached model to the context, then mark the whole entity as
// Modified so SaveChanges issues an UPDATE without first loading it.
entity.User.Attach(model);
entity.ObjectStateManager.ChangeObjectState(model, System.Data.EntityState.Modified);
entity.SaveChanges();
return View(model);