I used the model-first approach with DbContext to create an EF 4.1 data model. I would like to turn off optimistic concurrency checking for my entire database, because I do not need it.
What is the easiest way to do this? I would prefer to do it through the designer, but if that is not possible, maybe there is a DbContext/ObjectContext way to do it?
Please help, thanks.
There is no optimistic concurrency by default. If you want concurrency checking, you must configure it in your model: each property has a Concurrency Mode setting, which defaults to None. Unless you change it to Fixed, optimistic concurrency is not used. There is also no global setting to turn concurrency checking on or off.
I work with Ruby on Rails and want to cache some objects that I receive from the database. However, security is my priority, and I am not sure whether marshalling is a safe choice compared to, for example, JSON.
Are there any security risks related to unmarshalling database objects? Is it possible to construct such an object that unmarshalling will result in remote code execution? If yes, how?
OK, I thought about it more, and attained enlightenment. Of course, I can store those objects and most likely nothing will happen, but I know that this is a possible attack vector. So I can avoid possible issues completely and not summon Murphy's law upon me. Thanks to @SergioTulentsev for his patience!
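For anyone weighing the same choice, the difference is easy to demonstrate in plain Ruby. This is a minimal sketch (the `InternalSecret` class is invented for illustration): `Marshal.load` reconstructs whatever object type the byte stream names, while `JSON.parse` only ever yields primitives, arrays, and hashes, which is why feeding untrusted data to Marshal is a known code-execution vector.

```ruby
require "json"

# A stand-in for any class that happens to be loaded in your app.
class InternalSecret
  def initialize(value)
    @value = value
  end
end

# Marshal faithfully rebuilds arbitrary objects from bytes. If the bytes
# come from an attacker, the attacker chooses which classes get
# instantiated -- and with the right "gadget" class, deserialization
# can lead to code execution.
payload  = Marshal.dump(InternalSecret.new("s3cret"))
restored = Marshal.load(payload)
puts restored.class # InternalSecret -- any loaded class can come back

# JSON, by contrast, can only produce plain data structures.
parsed = JSON.parse('{"value":"s3cret"}')
puts parsed.class # Hash
```

So for caching anything that could ever contain untrusted input, serializing to JSON (or another data-only format) sidesteps the attack vector entirely.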
Starting with an existing SQL Server database and no EF model, is there any reason to favor the SqlEntityConnection type provider over SqlDataConnection?
You can upgrade to newer versions of Entity Framework while still using SqlEntityConnection. This allows you to take advantage of bug fixes and performance improvements. There will never be a newer version of LINQ-to-SQL, which powers SqlDataConnection. For me, that's reason enough to prefer SqlEntityConnection.
One reason you might want to use EF is for its finer-grained control on lazy-loading of results. For example you can use the Include method to make a given query eagerly load a specified set of nested values. Here is more detail on this.
I have just begun learning Core Data. When it comes to multithreading, some blogs say we should use child contexts (by creating a context and setting its parent) and just invoke the performBlock: method. However, other blogs say we should avoid this approach, since it has introduced many bugs.
I have just begun developing an application that manipulates a large database, and the project manager voted for Core Data (instead of SQLite).
Could anyone please give me some direction? Should I use the child-context strategy (introduced in iOS 5), or is there a better way to do multithreading with Core Data?
Thanks.
"Should I use the children contexts strategy (introduced since iOS 5) or is there a better way to perform multithreading with Core Data?"
In addition to the approach you mentioned, managed object contexts have built-in concurrency support even without parent contexts (see https://developer.apple.com/library/ios/releasenotes/DataManagement/RN-CoreData/index.html).
If you create one using initWithConcurrencyType:, you can use performBlock: and performBlockAndWait: and the threading will be handled for you, assuming you follow the basic patterns outlined in the link above. The parent/child context approach can help you with synchronization.
There's also an NSOperation-based approach outlined here: http://www.objc.io/issue-2/common-background-practices.html. I personally wouldn't use it, because the built-in APIs are sufficient, but the article is very well written and should give you a good idea of what's going on.
How you implement this depends on the needs of your app.
"some other blogs say that we should avoid this approach since it has introduced many bugs."
I would ignore them, and focus on writing clean code for yourself. There are plenty of apps that use multithreading + Core Data without bugs.
I am writing a program using Ruby on Rails and PostgreSQL. The system generates a lot of reports, which are frequently updated and frequently accessed by users. I am torn between using Postgres triggers to build the report tables (like Oracle materialized views) and using the built-in ActiveRecord callbacks in Rails. Has anyone got any thoughts or experience on this?
Callbacks are useful in the following cases:
They keep all business logic in the Rails models, which eases maintainability.
They can make use of existing Rails model code.
They are easy to debug.
Ruby code is easier to write and read than SQL, which helps maintainability.
Triggers are useful in the following cases:
Performance is a big concern; triggers are faster than callbacks.
If your priority is ease and cleanliness, use callbacks. If your priority is performance, use triggers.
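To make the callback route concrete, here is a hedged sketch (the model, column, and helper names are invented for illustration): an `after_commit` callback on a hypothetical `Order` model that refreshes a summary row. `after_commit` is the safer hook for report updates, because it only fires once the data is actually visible to other database connections.

```ruby
# app/models/order.rb -- illustrative sketch; adapt names to your schema.
class Order < ActiveRecord::Base
  # Recompute the report row after the transaction commits, so report
  # readers never see data for an order that was rolled back.
  after_commit :refresh_report, :on => [:create, :update]

  private

  def refresh_report
    # ReportRow.refresh_for is a hypothetical helper that rebuilds the
    # aggregate row for this customer.
    ReportRow.refresh_for(customer_id)
  end
end
```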
We had the same problem, and since this is an interesting topic, I'll elaborate based on our choice/experience.
I think the topic is more complex than what is highlighted in the current answer.
Since we're talking about reports, I assume the use case is updating data-warehousing tables, not a "generic" application (this assumption/distinction is crucial).
Firstly, the "easy to debug" idea is not [necessarily] true. In our case, it's actually counterproductive to think so.
In sufficiently complex applications (data-warehousing updates, millions of lines of code, a mid-sized or larger team), some types of callbacks are simply impossible to maintain, because there are so many places and ways the database can be updated that tracking down missed callbacks becomes practically impossible.
Triggers don't necessarily have to be designed as "complex and fast" logic.
Specifically, triggers can also act as low-level callback logic, and therefore stay simple and lean: they simply forward the update events back to the Rails code.
To wrap up: in the use case mentioned, Rails callbacks should be avoided like the plague.
An efficient and effective design is to have RDBMS triggers add records to a queue table, with a Rails-side queueing system that acts upon them.
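A hedged sketch of that design, written as a Rails migration (all table, column, and function names here are invented for illustration): an AFTER trigger on a hypothetical orders table appends a row to a report_events queue table, and a Rails-side worker later drains that table and rebuilds the reports.

```ruby
# db/migrate/xxxx_add_report_event_queue.rb -- illustrative only.
class AddReportEventQueue < ActiveRecord::Migration
  def up
    create_table :report_events do |t|
      t.integer  :order_id, :null => false
      t.datetime :created_at
    end

    # The trigger only enqueues an event; the heavy report logic stays
    # in Ruby, so the trigger itself remains simple and lean.
    execute <<-SQL
      CREATE FUNCTION enqueue_report_event() RETURNS trigger AS $$
      BEGIN
        INSERT INTO report_events (order_id, created_at)
        VALUES (NEW.id, now());
        RETURN NEW;
      END;
      $$ LANGUAGE plpgsql;

      CREATE TRIGGER orders_report_event
      AFTER INSERT OR UPDATE ON orders
      FOR EACH ROW EXECUTE PROCEDURE enqueue_report_event();
    SQL
  end

  def down
    execute "DROP TRIGGER orders_report_event ON orders;"
    execute "DROP FUNCTION enqueue_report_event();"
    drop_table :report_events
  end
end
```

The key property of this split is that no application code path can forget to enqueue: every write to the table, from any client, fires the trigger.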
(Since this post is old, I'm curious what the OP's experience has been.)
I am working on an ASP.NET MVC application.
I have a situation where I have to create many threads that will access the database using LINQ to SQL. My question is: will it be fine to leave everything to LINQ to SQL to maintain synchronization, given that the threads will be accessing the database at the same time, or do I have to write my own code for that?
If each thread is using its own database context, you will be fine. However, I don't believe the DataContext object is thread-safe, so it's best to make sure each thread has its own context.
Randy
I'm not sure what kind of synchronization you mean, but databases have been designed such that multiple clients (threads, processes, machines) can access/read/change data at the same time. Linq2Sql is - speaking very simply - just one of the mechanisms to emit SELECT/DELETE/UPDATE statements against the database.
If you are using ASP.NET MVC, I would seriously take a look at S#arp Architecture. It uses NHibernate but provides some scaffolding to make the data layer very easy to create and work with. It uses Fluent NHibernate and the AutoPersistenceModel, so there is no need to play with XML mapping files. It also includes a number of very handy MVC tools.
Linq2SQL seems to have some pretty serious shortcomings whenever I've tried to do anything remotely sophisticated with it, and I would probably only recommend it for very simple scenarios. Perhaps it's just me, but I have observed some pretty ugly behaviour with L2S.