Copying multiple tables' data with relations using EF Core - ef-core-2.2

I have seven tables that contain hundreds of rows of related data. Together these tables make up quite a complex quoting tool that handles materials, costs, etc.
Is it possible, using EF Core and a couple of lines of code, to load up all those entities and then write them back as new ones, generating new IDs along the way and correctly relating everything to each other, so that I end up with complete copies of all the data? I could then change the CompanyID on the Header table and voilà, a company has a complete copy of the templates that they can now configure themselves.
I am about to write a procedure to load up the entities one by one, loop over them, save the rows one by one, store the IDs, blah blah blah. I'm happy to write that procedure, because I cannot see an automatic way to do it.

This is the only way I can think of to do it with EF Core. Since EF Core does not support tampering with primary keys, you could try something like the following if you don't want to create a stored procedure; I can't see a way to perform a cascade update on primary keys. This kind of code lets you assign new IDs based on your needs, but you will still need loops: first update the primary table, then the tables that depend on it.
foreach (var row in your_entity)                          // source rows, loaded with AsNoTracking()
{
    var oldId = row.Id;
    row.Id = 0;                                           // reset so the database generates a new key
    your_context.your_new_entity.Add(row);                // EF Core uses Add, not AddObject
    your_context.SaveChanges();                           // row.Id now holds the new key

    foreach (var row2 in your_context.your_depending_entity.Where(d => d.ForeignKey == oldId))
    {
        row2.ForeignKey = row.Id;                         // re-point dependents at the new parent
        // ... and so on for the remaining dependent tables
    }
}
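For what it's worth, EF Core can often do the re-keying for you if you load the whole graph detached and add it back: when store-generated keys are reset to their CLR default, SaveChanges inserts every reachable entity as a new row and fixes up the foreign keys itself. A minimal sketch, assuming a root Header entity with Groups and Items navigation properties (all names here are illustrative):

var header = context.Headers
    .AsNoTracking()                      // load a detached copy, not tracked entities
    .Include(h => h.Groups)
        .ThenInclude(g => g.Items)
    .First(h => h.Id == sourceHeaderId);

// Reset every key so EF Core treats the whole graph as new.
header.Id = 0;
foreach (var group in header.Groups)
{
    group.Id = 0;
    foreach (var item in group.Items)
        item.Id = 0;
}

header.CompanyId = targetCompanyId;      // hand the copy to the new company
context.Add(header);                     // Add cascades to all reachable children
context.SaveChanges();                   // EF generates new keys and fixes up the FKs

With seven related tables the Include chain and the key-reset loop get longer, but it stays a handful of lines rather than a hand-rolled row-by-row procedure.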

Related

Automatic denormalizing by query

I wonder if it's possible to create logic that automatically creates a denormalized table and its data (and maintains it) from a specific SQL-like query.
Imagine a system where the user can maintain his data model and data. All data is stored in "relational" tables, but those tables are only used by the user to maintain his data. If he wants to display data on a webpage, he has to write a query (SQL) which is automatically turned into a denormalized table that is kept up to date when the relational data is updated or deleted.
Let's say I have a query like this:
select t1.a, t1.b from t1 where t1.c = 1
The logic will automatically create a denormalized table with a copy of the needed data according to the query. It is much like a view (I wonder whether views would be more performant than my approach). Whenever this query (give it a name) is needed by some business logic, it is replaced by a simple query against that new table.
Any update to t1 will look up all queries in which t1 is involved and update the denormalized data automatically, but for a performance win it will only update the rows affected (in this example, just one row). That's the point where I'm not sure whether it's achievable in an automatic way. The example query is simple, but what about queries with joins, aggregation or even subqueries?
Does an approach like this exist in the NoSQL world, and can somebody perhaps share their experience with it?
I would also like to know whether creating one table per query conflicts with any best practices for NoSQL databases.
I have an idea of how to solve simple queries: just find the involved entity by its primary key when updating data and run the query on that specific entity again (so that joins are updated, too). But with aggregation and subqueries I don't really know how to determine which rows of the denormalized table are involved.
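For the simple single-source-table case, the dispatch described above can be sketched in a few lines of C#. Everything below (IDenormalizedQuery, DenormalizationHub, RefreshRow) is hypothetical scaffolding rather than an existing library, and joins, aggregates and subqueries would each still need their own reverse mapping from a changed source row to the affected result rows:

using System.Collections.Generic;

// A query that maintains one denormalized table from one source table.
public interface IDenormalizedQuery
{
    string SourceTable { get; }

    // Re-evaluate the defining query for one source row and upsert or delete
    // the matching row(s) in the denormalized table.
    void RefreshRow(object sourceKey);
}

public class DenormalizationHub
{
    private readonly Dictionary<string, List<IDenormalizedQuery>> _byTable =
        new Dictionary<string, List<IDenormalizedQuery>>();

    public void Register(IDenormalizedQuery query)
    {
        if (!_byTable.TryGetValue(query.SourceTable, out var list))
            _byTable[query.SourceTable] = list = new List<IDenormalizedQuery>();
        list.Add(query);
    }

    // Call this from the write path after a row in `table` changes.
    public void OnRowChanged(string table, object sourceKey)
    {
        if (_byTable.TryGetValue(table, out var queries))
            foreach (var query in queries)
                query.RefreshRow(sourceKey);   // per-row refresh, not a full rebuild
    }
}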

How to mark data as demo data in SQL database

We have Accounts, Deals, Contacts, Tasks and some other objects in the database. When a new organisation is created, we want to set up some of these objects as "Demo Data" which they can view, edit and delete as they wish.
We also want to give the user the option to delete all demo data so we need to be able to quickly identify it.
Here are two possible ways of doing this:
Have a "IsDemoData" field on all the above objects : This would mean that the field would need to be added if new types of demo data become required. Also, it would increase database size as IsDemoData would be redundant for any record that is not demo data.
Have a DemoDataLookup table with TableName and ID. The ID here would not be a strong foreign key but a theoretical foreign key to a record in the table stated by table name.
Which of these is better and is there a better normalised solution.
As a DBA, I think I'd rather see demo data isolated in a schema named "demo".
This is simple with some SQL database management systems, not so simple with others. In PostgreSQL, for example, you can write all your SQL with unqualified names, and put the "demo" schema first in the schema search path. When your clients no longer want the demo data, just drop the demo schema.
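If the flag approach (the first option above) wins out instead, the "delete all demo data" requirement stays a few lines of application code. A minimal EF Core sketch, assuming a DbContext called CrmContext whose entities each carry the IsDemoData flag (all names here are illustrative):

using System.Linq;

public static class DemoData
{
    // Deletes every row flagged as demo data, across all demo-capable tables,
    // in a single SaveChanges transaction.
    public static void DeleteAll(CrmContext db)
    {
        db.Accounts.RemoveRange(db.Accounts.Where(a => a.IsDemoData));
        db.Deals.RemoveRange(db.Deals.Where(d => d.IsDemoData));
        db.Contacts.RemoveRange(db.Contacts.Where(c => c.IsDemoData));
        db.Tasks.RemoveRange(db.Tasks.Where(t => t.IsDemoData));
        db.SaveChanges();
    }
}

The trade-off the question mentions still stands: every new demo-capable type means another flagged column and another line here, whereas the schema approach keeps the production tables untouched.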

How to make sure that it is possible to update a database table column only in one way?

I am using Ruby on Rails v3.2.2 and I would like to "protect" a class/instance attribute so that a database table column value can be updated in only one way. That is, for example, given that I have two database tables:
table1
- full_name_column
table2
- name_column
- surname_column
and I manage table1 so that the full_name_column is updated by a callback defined in the related table2 class/model, I would like to make sure that it is possible to update the full_name_column value only through that callback.
In other words, I should ensure that the table1.full_name_column value is always
"#{table2.name_column} #{table2.surname_column}"
and that it can't be any other value. So, for example, if I try to update table1.full_name_column "directly", it should raise something like an error. Of course, that value must remain readable.
Is it possible? What do you advise for handling this situation?
Reasons for this approach...
I want to use this approach because I plan to perform database searches on table1 columns, where table1 contains other values related to a "profile"/"person" object... otherwise I would probably have to resort to some hack (maybe a complex hack) to redirect those searches to table2 so as to match "#{table2.name_column} #{table2.surname_column}" strings.
So I think a simple way is to denormalize the data as explained above, but it requires implementing an "uncommon" way of handling that data.
BTW: an answer should either aim to "solve" the process described, or propose a better approach to handling the search functionality.
Here are two approaches for maintaining the data at the database level...
Views and materialized tables.
If possible, table1 could be a VIEW or, for example, a MATERIALIZED QUERY TABLE (MQT). The terminology differs slightly depending on the RDBMS used: Oracle has MATERIALIZED VIEWs, whereas DB2 has MATERIALIZED QUERY TABLEs.
A VIEW is simply access to data that is physically stored in a different table, whereas a MATERIALIZED VIEW/QUERY TABLE is a physical copy of the data, and is therefore, for example, not in sync with the source data in real time.
Either way, these approaches provide read-only access to data that is owned by table2 but accessible through table1.
Example of very simple view:
CREATE VIEW table1 AS
SELECT surname||', '||name AS full_name
FROM table2;
Triggers
Sometimes views are not convenient, as you might actually want to have some data in table1 that is not available from anywhere else. In such cases you could consider database triggers, i.e. create a trigger so that when table2 is updated, table1 is updated as well within the same database transaction.
With triggers the problem might be that you then have to grant the client privileges to update table1 as well. Some RDBMSs may provide ways to tune the access control of triggers, so that the operations performed by a TRIGGER run with different privileges from the operations that initiate it.
In this case the TRIGGER could look something like this:
CREATE TRIGGER UPDATE_NAME
AFTER UPDATE OF NAME, SURNAME ON TABLE2
REFERENCING NEW AS NEWNAME
FOR EACH ROW
BEGIN ATOMIC
    UPDATE TABLE1 SET FULL_NAME = NEWNAME.SURNAME||', '||NEWNAME.NAME
    WHERE SOME_KEY = NEWNAME.SOME_KEY;
END;
By replicating the data from table2 into table1 you've already de-normalized it. As with any de-normalization, you must be disciplined about maintaining sync. This means not updating things you're not supposed to.
Although you can wall off things with attr_accessible to prevent accidental assignment, the way Ruby works means there's no way to guarantee that value will never be modified. If someone's determined enough, they will find a way. This is where the discipline comes in.
The best approach is to document that the column should not be modified directly, block mass-assignment with attr_accessible, and leave it at that. There's no concept of a write-protected attribute, really, as far as I know.

How to store many item flags in core data

I am trying to do the following in my iPad app. I have a structure that allows people to create grouped lists, which we call "Templates". The top level is a CoreOffer (which has a Title); it can have many Groups (GroupTitle, DisplayOrder), each of which can have many Items (ItemTitle, DisplayOrder), as shown below. This works great; I can create Templates perfectly.
Image link: http://img405.imageshack.us/img405/9145/screenshot20110610at132.png
But once Templates are created, people can then use them to record against the Template what I will call an Evaluation. A Template can be used many times. The Evaluation will contain a date (system generated) and the items from this particular Template that have been selected.
In the example below, people will be able to check particular rows on the screen; that set of checks is then an Evaluation.
Image link: http://img41.imageshack.us/img41/8049/screenshot20110610at133.png
I am struggling to figure out how to create and store this information in the Core Data model without duplicating the Template (struggling coming from a SQL background!). In SQL this would involve something like an evaluation table recording each item ID and its selection status.
I expect it's quite simple, but I just can't get my head around it!
Thanks
The first thing you want to do is clean up the naming in your data model. Remember, you are dealing with unique objects here and not the names of tables, columns, rows, joins etc in SQL. So, you don't need to prefix everything with "Core" (unless you have multiple kinds of Offer, Group and Item entities.)
Names of entities start with uppercase letters, names of attributes and relationships with lower case. All entity names are singular because the modeling of the entity does not depend on how many instances of the entity there will be or what kind of relationships it will have. To-one relationship names should be singular and to-many plural. These conventions make the code easy to read and convey information about the data model without having to see the actual graphic.
So, we could clean up your existing model like:
Offer{
    id:string
    title:string
    groups<-->>Group.offer
}
Group{
    title:string
    displayOrder:number
    offer<<-->Offer.groups
    items<-->>Item.group
}
Item{
    title:string
    displayOrder:number
    isSelected:bool
    group<<-->Group.items
}
Now if you read a keypath in code that goes AnOfferObj.groups.items you can instantly tell you are traversing two to-many relationships without knowing anything else about the data model.
I am unclear exactly what you want your "Evaluations" to "copy". You appear to want them either to "copy" the entire graph of an Offer, or to "copy" a set of Item objects.
In either case, the solution is to create an Evaluation entity that can form a relationship with either Offer or Item.
In the first case it would look like:
Evaluation{
    title:string
    offer<<-->Offer.evaluations
}
Offer{
    id:string
    title:string
    groups<-->>Group.offer
    evaluations<-->>Evaluation.offer
}
... and in the second case:
Evaluation{
    title:string
    items<<-->>Item.evaluations
}
Item{
    title:string
    displayOrder:number
    isSelected:bool
    group<<-->Group.items
    evaluations<<-->>Evaluation.items
}
Note that in neither case are you duplicating or copying anything; you are just creating a reference to an existing group of objects. In the first case, you would find all the related Item objects for a particular Evaluation object by walking the keypath offer.groups.items. In the second case, you would simply walk the items relationship of the Evaluation object.
Note that how you ultimately display all this in the UI is independent of the data model. Once you have the objects in hand, you can sort or otherwise arrange them as you need to based on the needs of view currently in use.
Some parting advice: Core Data is not SQL. Entities are not tables. Objects are not rows. Attributes are not columns. Relationships are not joins. Core Data is an object graph management system that may or may not persist the object graph and may or may not use SQL far behind the scenes to do so. Trying to think of Core Data in SQL terms will cause you to completely misunderstand Core Data and result in much grief and wasted time.
Basically, forget everything you know about SQL. That knowledge won't help you understand Core Data and will actively impede your understanding of it.

Efficient collection Update/Insert in Entity Framework

I have a similar challenge to this post from a couple of years ago: Batch insert/update with entity framework. I was hoping that the story might have changed since then.
In short, I am running a RESTful service, and as such I'd like a PUT to be document-oriented and accept an object along with a collection of child elements. The child elements have a unique string that I can use for determining existence.
Unlike the referenced poster, I don't have a query requirement; all I want to do is take in a collection of my child elements and perform an insert on the child table for any that aren't already there, and an insert or delete on the many-to-many table to reflect the current state of the collection. Ideally with some efficiency. I realize that I might end up doing this as a sproc; I just wanted to see whether there's an EF-native way that works first.
To do this you must either know which items are new, or query the database first and merge the received items with the loaded ones. EF will not handle this for you. Also be aware that there are still no batch modifications: each insert, update or delete is executed in a separate round trip to the database.
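A minimal sketch of that load-and-merge approach, assuming a Parent entity with a many-to-many Children collection whose Key string is the unique identifier (all entity and property names here are illustrative):

using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public static class ChildMerger
{
    public static void MergeChildren(MyContext db, int parentId, List<Child> incoming)
    {
        var parent = db.Parents
            .Include(p => p.Children)
            .Single(p => p.Id == parentId);

        // Drop links to children that are no longer in the PUT payload.
        var incomingKeys = new HashSet<string>(incoming.Select(c => c.Key));
        foreach (var gone in parent.Children.Where(c => !incomingKeys.Contains(c.Key)).ToList())
            parent.Children.Remove(gone);       // deletes the join row, not the child

        // Link the rest, reusing existing child rows and inserting the truly new ones.
        var linkedKeys = new HashSet<string>(parent.Children.Select(c => c.Key));
        foreach (var child in incoming.Where(c => !linkedKeys.Contains(c.Key)))
        {
            var stored = db.Children.SingleOrDefault(c => c.Key == child.Key) ?? child;
            parent.Children.Add(stored);        // new join row; new child row if needed
        }

        db.SaveChanges();                       // still one round trip per change, as noted
    }
}

Note that the collection-style Children navigation assumes EF Core 5+ skip navigations; in older EF versions the many-to-many join entity has to be manipulated explicitly, but the merge logic is the same.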
