Poco Parent-Child relation lost Client-side - entity-framework-4

I have this scenario:

STRUCTURE:
Entity A
ChildEntity B

Assume this is the data in my database:

A1
  (children)
  B1
  B2
  B3
A2
  (children)
  B4
  B5
  B6
Using a LINQ query, I want a list of B items, each with a reference to its parent A item.
Server-side everything is fine before my web service sends the data back to the client; I get a structure like this:
B1
  A1 (parent)
    (children)
    B1
    B2
    B3
B2
  A1 (parent)
    (children)
    B1
    B2
    B3
...
If I navigate the graph, all items seem to be in the right place.
Client-side, after server serialization and client deserialization, I have this situation:
B1
  A1 (parent)
    (children)
    B1
    B2
    B3
B2
  NULL
B3
  NULL
B4
  A2 (parent)
    (children)
    B4
    B5
    B6
B5
  NULL
B6
  NULL
Only one of the children of each A item keeps the reference to its parent. I looked at the XML generated on both the client and the server side, but couldn't find the problem.
Can anyone help me understand why this happens, or suggest a fix?
NOTE:
If I compress the list server-side as a byte[], decompress it client-side, and cast the result back to List<B>, all items keep the correct relations on the client as well. So I imagine it's a problem in serialization/deserialization.
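For context, the workaround amounts to something like the following sketch (the question doesn't show the actual code, so BinaryFormatter + GZipStream are an assumption; BinaryFormatter preserves object references by default, which is why the graph survives the round trip):

using System.IO;
using System.IO.Compression;
using System.Runtime.Serialization.Formatters.Binary;

static class BlobTransfer
{
    // Hypothetical helpers: the entities must be marked [Serializable] for this to work.
    public static byte[] Compress(object graph)
    {
        using (var ms = new MemoryStream())
        {
            using (var gz = new GZipStream(ms, CompressionMode.Compress))
                new BinaryFormatter().Serialize(gz, graph); // keeps object references intact
            return ms.ToArray();
        }
    }

    public static T Decompress<T>(byte[] data)
    {
        using (var ms = new MemoryStream(data))
        using (var gz = new GZipStream(ms, CompressionMode.Decompress))
            return (T)new BinaryFormatter().Deserialize(gz);
    }
}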
I'm using:
- standard serialization (DataContractSerializer)
- the standard Microsoft POCO template
- EF4

I found where the problem was:
The DataContractSerializer writes properties to the XML in an apparently arbitrary order (maybe it follows some order it gets from the EDMX data model; I haven't investigated that).
So I got this scenario in the generated XML:
<b:CHILD_Entity z:Id="i1" xmlns:z="http://schemas.microsoft.com/2003/10/Serialization/">
  <b:CHILD_ID>008acd9a-2074-46b0-b73b-90b8c9123f6f</b:CHILD_ID>
  <b:ReasonInfo>HRC->PLC 11</b:ReasonInfo>
  <b:StartTime>2011-11-29T09:22:36.553</b:StartTime>
  <b:EndTime>2011-11-29T10:31:58.507</b:EndTime>
  <b:Quantity>0</b:Quantity>
  <b:PARENT_Entity z:Id="i2">
    <b:Description i:nil="true"></b:Description>
    <b:UtilizationType>COMMITTED</b:UtilizationType>
    <b:Reason>SETUP</b:Reason>
    <b:ReasonInfo></b:ReasonInfo>
    <b:CHILD_Entities>
      <b:CHILD_Entity z:Ref="i1"></b:CHILD_Entity>
      <b:CHILD_Entity z:Id="i3" xmlns:z="http://schemas.microsoft.com/2003/10/Serialization/">
        <b:CHILD_ID>008acd9a-2074-46b0-b73b-90b8c9123f6f</b:CHILD_ID>
        <b:ReasonInfo>HRC->PLC 11</b:ReasonInfo>
        <b:StartTime>2011-11-29T09:22:36.553</b:StartTime>
        <b:ProductionCapabilityUtilization z:Ref="i2"></b:ProductionCapabilityUtilization>
        <b:PARENT_ID>32864280-fe68-4b21-8375-b57ae8bbd7e6</b:PARENT_ID>
        ..........
      </b:CHILD_Entity>
      ..................
    </b:CHILD_Entities>
    <b:PARENT_ID>32864280-fe68-4b21-8375-b57ae8bbd7e6</b:PARENT_ID>
  </b:PARENT_Entity>
  <b:PARENT_ID>32864280-fe68-4b21-8375-b57ae8bbd7e6</b:PARENT_ID>
  <b:PROP1>MLM</b:PROP1>
  <b:PROP2>MLM1</b:PROP2>
  <b:Reason>RUN</b:Reason>
</b:CHILD_Entity>
<b:CHILD_Entity z:Ref="i3" xmlns:z="http://schemas.microsoft.com/2003/10/Serialization/"></b:CHILD_Entity>
The problem is that the generated fixup code can try to set CHILD_Entity.PARENT_ID before PARENT_Entity.PARENT_ID has been set, because the CHILD_Entity.PARENT_Entity setter fires first.
So we get a strange scenario where CHILD_Entity.PARENT_Entity has a value, but CHILD_Entity.PARENT_Entity.PARENT_ID = NULL, because the deserializer has not yet processed that property from the XML.
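A simplified sketch of the kind of setter the POCO template generates for the navigation property (names match the XML above; the body and the fixup method name are illustrative, not the exact template output):

public virtual PARENT_Entity PARENT_Entity
{
    get { return _parentEntity; }
    set
    {
        if (!ReferenceEquals(_parentEntity, value))
        {
            _parentEntity = value;
            // Fixup synchronizes the FK with the new parent, roughly:
            //   this.PARENT_ID = value.PARENT_ID;
            // During deserialization value.PARENT_ID may not have been
            // read from the XML yet, so the child's FK is overwritten with NULL.
            FixupPARENT_Entity();
        }
    }
}
private PARENT_Entity _parentEntity;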
To solve this kind of problem I had to modify the POCO template this way:
1. Declare a counter variable: int propertyPosCounter = 1;
2. Specify Order when emitting the DataMemberAttribute:
   [DataMemberAttribute(Order = <#=propertyPosCounter.ToString()#>)]
3. Increment the counter afterwards: propertyPosCounter += 1;
4. Repeat this process for the primitive properties, the complex properties, and the navigation properties, so the generated XML always has the right order for client-side deserialization with POCO and its fixup concept (see the sketch below).
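After the change, the generated classes carry explicit orders. A hypothetical excerpt of what the output could look like (simplified: the real template also emits backing fields and fixup logic in the setters):

[DataContract(IsReference = true)]
public partial class CHILD_Entity
{
    [DataMember(Order = 1)]
    public Guid CHILD_ID { get; set; }

    [DataMember(Order = 2)]              // FK primitives serialize first...
    public Guid? PARENT_ID { get; set; }

    // ...remaining primitive and complex properties get Order = 3, 4, ...

    [DataMember(Order = 9)]              // ...and navigation properties come last,
    public PARENT_Entity PARENT_Entity { get; set; } // so fixup always sees a populated FK
}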
After this adjustment to the POCO.tt template I resolved those conflicts and had no more problems serializing and deserializing the data structures passed from my WCF service to my client application.
The resulting serialized XML is:
<b:CHILD_Entity z:Id="i1" xmlns:z="http://schemas.microsoft.com/2003/10/Serialization/">
  <b:CHILD_ID>008acd9a-2074-46b0-b73b-90b8c9123f6f</b:CHILD_ID>
  <b:PARENT_ID>32864280-fe68-4b21-8375-b57ae8bbd7e6</b:PARENT_ID>
  <b:PROP1>MLM</b:PROP1>
  <b:PROP2>MLM1</b:PROP2>
  <b:Reason>RUN</b:Reason>
  <b:ReasonInfo>HRC->PLC 11</b:ReasonInfo>
  <b:StartTime>2011-11-29T09:22:36.553</b:StartTime>
  <b:EndTime>2011-11-29T10:31:58.507</b:EndTime>
  <b:Quantity>0</b:Quantity>
  <b:PARENT_Entity z:Id="i2">
    <b:PARENT_ID>32864280-fe68-4b21-8375-b57ae8bbd7e6</b:PARENT_ID>
    <b:Description i:nil="true"></b:Description>
    <b:UtilizationType>COMMITTED</b:UtilizationType>
    <b:Reason>SETUP</b:Reason>
    <b:ReasonInfo></b:ReasonInfo>
    <b:CHILD_Entities>
      <b:CHILD_Entity z:Ref="i1"></b:CHILD_Entity>
      <b:CHILD_Entity z:Id="i3" xmlns:z="http://schemas.microsoft.com/2003/10/Serialization/">
        <b:CHILD_ID>008acd9a-2074-46b0-b73b-90b8c9123f6f</b:CHILD_ID>
        <b:PARENT_ID>32864280-fe68-4b21-8375-b57ae8bbd7e6</b:PARENT_ID>
        <b:ReasonInfo>HRC->PLC 11</b:ReasonInfo>
        <b:StartTime>2011-11-29T09:22:36.553</b:StartTime>
        ..........
        <b:ProductionCapabilityUtilization z:Ref="i2"></b:ProductionCapabilityUtilization>
      </b:CHILD_Entity>
    </b:CHILD_Entities>
  </b:PARENT_Entity>
</b:CHILD_Entity>
I tried to be as clear as possible, but the scenario is not trivial, so if anyone needs clarification, do not hesitate to ask; I will try to answer as soon as I can.
Hope this post helps someone else :)
Regards,
Luigi Martinez Bianchi

Related

Insert a lot of data that depends on the previously inserted data

I am trying to store a "navigation" path in the database.
The paths are stored in the logfile as strings, something like "a1 b1 c1 d1", where each element is a "token".
For each token I want to store the path to it. As an example, I can have:
a1 -> b1 -> c1
a1 -> b1 -> c2
a1 -> b2 -> c2
So if I ask for all the subtokens of a1, I get [b1 => 2, b2 => 1] in a token => count format.
This way I can get all the subtokens for a given token and the "usage count" for each of those subtokens.
It is possible to have:
a1 -> b1 -> c1
g1 -> h1 -> b1
but for me those two b1 are not the same token, so their counts must not be shared.
There should not be a lot of distinct tokens, but there will be a lot of entries in the logfile, so I expect a large count value for those tokens.
I am representing the data like this (SQLite 3):
id; parent_id; token; count
where parent_id is a FK to the same table.
My issue is: I have around 50k entries in my log, and I can have more.
I am inserting the data into the database using the following procedure (sketched in code below):
1. Search for an entry with the same parent_id + token (for the first token, parent_id is NULL).
2. If it EXISTS: update the count. If it DOESN'T EXIST: create the entry.
3. Save the id of the updated/new entry as parent_id.
4. Repeat until there are no more tokens to consume.
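A self-contained sketch of that procedure, in C# for illustration, with an in-memory dictionary standing in for the table (class and member names are made up; the real version issues a lookup and an insert/update per token, which is exactly the slow part):

using System;
using System.Collections.Generic;

class TokenPathCounter
{
    // (parent_id, token) -> (id, count); simulates the real table
    private readonly Dictionary<(long?, string), (long Id, int Count)> rows =
        new Dictionary<(long?, string), (long Id, int Count)>();
    private long nextId = 1;

    public void AddPath(string logLine) // e.g. "a1 b1 c1 d1"
    {
        long? parentId = null;
        foreach (var token in logLine.Split(' '))
        {
            var key = (parentId, token);
            if (rows.TryGetValue(key, out var row))
                rows[key] = (row.Id, row.Count + 1); // EXISTS: update the count
            else
                rows[key] = row = (nextId++, 1);     // DOESN'T EXIST: create an entry
            parentId = row.Id;                       // becomes the parent of the next token
        }
    }
}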
With 50k entries averaging 4 tokens per entry, that gives 200k tokens to process.
It does not write a lot of data to the database, since many of those tokens repeat, even though the same token can appear with different parent_ids.
The issue is... it is too slow. I cannot insert in chunks, because each insert depends on the id of an existing row or of a newly created one. Worse, I also need to update the count.
I was thinking of using some sort of tree to store this data, but there is the problem that old records need to be preserved, and the new data needs to be counted on top of the existing counts.
I could build the tree from the database and then update it with the current data, but that feels like an overcomplicated solution to the problem.
Does anyone have an idea how to optimize the insertion of this data?
I am using Rails (ActiveRecord) + SQLite 3.

Wrong character being stored to database

I created a dropdown for a view where the user can either select Y or N.
ASCII 78 = 'N'
ASCII 89 = 'Y'
http://ascii.cl/
Your serialization code runs as expected. Assuming you have debugged the controller code that is serializing the model to XML, it's possible another transformation is occurring when this XML is stored to the database.

How to define what DatabaseGenerated.Computed should compute?

I am just taking my first steps with EF Code First, especially with data annotations.
Now I'm doing my best to understand the DatabaseGenerated attribute.
What I know so far:
- the attribute gives me three options for how a property value is generated: Computed, Identity, and None;
- using the attribute means the property cannot be updated manually - it is handled by the database.
So, while I can imagine what happens with the Identity option, I have no idea what the Computed option does. I read that it should tell the database to compute the field's value.
For example: field "sum" = field "price" + field "shipping".
But how can I use it that way? I looked around and did not find any examples. Could you please help me?
You can't use EF to tell the database how to compute the column -- you can only tell EF that the column is database generated, so its value should be read back from the database for use in your code.
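For illustration, flagging such a property looks like this (the entity and property names are invented for the example):

using System.ComponentModel.DataAnnotations;
// in newer EF versions the attribute lives in System.ComponentModel.DataAnnotations.Schema

public class Order
{
    public int Id { get; set; }
    public decimal Price { get; set; }
    public decimal Shipping { get; set; }

    // EF never writes this column; after an insert or update it
    // reads the database-computed value back into the property.
    [DatabaseGenerated(DatabaseGeneratedOption.Computed)]
    public decimal Sum { get; set; }
}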
To control how the database computes the column, you have to set it up yourself, either outside of EF or in your database initialization logic. Note that OnModelCreating receives only a DbModelBuilder and has no context to run SQL against; a custom initializer's Seed method does:

public class MyInitializer : CreateDatabaseIfNotExists<MyContext>
{
    protected override void Seed(MyContext context) // runs after the schema is created
    {
        context.Database.ExecuteSqlCommand("RAW SQL HERE");
    }
}
Your SQL (MS T-SQL) might look like this (more in the computed-columns docs linked below):

CREATE TABLE t2 (a int, b int, c int, x float,
    y AS CASE x
            WHEN 0 THEN a
            WHEN 1 THEN b
            ELSE c
        END)
For SQL Server, here's some info about computed columns:
http://msdn.microsoft.com/en-us/library/ms191250(v=sql.105).aspx
Also for reference:
http://msdn.microsoft.com/en-us/library/gg193958.aspx

Using SharePoint's Data Query Webpart to link two lists

I have two SharePoint lists: A and B. List A has a column where the user can add multiple references (displayed as hyperlinks) per entry to entries in B:

A:                     B:
... | RefB  | ...      Name | OtherColumns...
-----------------      -----------------------
... | B1    | ...      B1   |
... | B2,B3 | ...      B2   |
... | B1,B3 | ...      B3   |

Now I want to display all entries from list B that are referenced by a (specific) entry in A. I.e.: I set the filter to [Entry 2] and the web part displays all the data from entries B2 and B3. Is this even possible?
I think the problem you've got, which ruins some of the ways I was thinking of solving it, is that the RefB column is multi-valued. You may have some joy doing filtering with the DataView, but it could get messy fast as you try to split RefB on the comma and compare against the resulting array of values.
The problem would be easier with only a single value in the RefB column.
Three solutions come to mind:
1. Have only one value in RefB per item in list A and repeat the other fields of list A. You'd have to accept some data redundancy and would need to be careful with data entry.
2. The normal relational-database way of solving your data redundancy problem would be a third list joining list A to list B. If you're not familiar with relational database techniques, there are lots of straightforward tutorials on data normalisation on the net. While it's some more work, it may lead to a cleaner solution. Be careful when trying to fake a relational database within SharePoint though - it's not meant for relational data. You may be better off using a SQL database.
3. Put everything in one list, though I think you've already ruled this one out.

DELPHI - help needed with a ClientDataSet

I have a ClientDataSet with the following data:

IDX  EVENT  BRANCH_ID  BRANCH
1    E1     7          B7
2    E2     5          B5
3    E3     7          B7
4    E4     1          B1
5    E5     2          B2
6    E6     7          B7
7    E7     1          B1

I need to transform this data into:

IDX  EVENT  BRANCH_ID  BRANCH
1    E1     7          B7
2    E2     5          B5
4    E4     1          B1
5    E5     2          B2
The only fields of importance are BRANCH_ID and BRANCH, and BRANCH_ID must be unique.
As there is a lot of data, I do not want to have two copies of it.
QUESTION:
Can you suggest a way to transform the data using a cloned version of the original data?
Cloning won't let you change data in the clone without the same change being reflected in the original, so if that's what you want, you might rethink the cloning idea.
Cloning does give you a separate cursor into the clone and lets you filter and index (i.e. order) it independently of the master ClientDataSet. From the data you've provided, it looks like you want to filter some branch data and order by BRANCH_ID. You can accomplish that by setting up a new filter and index on the clone. Here's a good article with examples of how to do that:
http://edn.embarcadero.com/article/29416
Taking a second look at your question, it seems all you'd need to do is set up a unique index on BRANCH_ID on the cloned dataset. The article linked above shows how to set up an index; check the docs on the TClientDataSet.AddIndex method for details on setting the index to show only unique values - if I recall correctly, it may just mean you set BRANCH_ID as the primary key.
I can't think of a slick way to do this, but you could index on BRANCH_ID, add an fkInternalCalc boolean field to your dataset, initialize that field to True on the first row of each branch (using group state or manually), and then filter the clone on the value of that field. You'd have to update the field on data changes, though.
I have a feeling that a better solution would be a master dataset with one row per branch.
You don't provide many details about your use case, so I'll give you some hints:
"A lot of data" suggests that you get it from a SQL backend. A SELECT DISTINCT ... or SELECT ... GROUP BY BRANCH_ID (or similar syntax, depending on your SQL backend) will give the desired result easily and quickly. Please confirm and I'll give you more details.
As the others said, a simple clone won't work. The simplest (and perhaps quickest) solution, assuming the branches are usually few in number relative to the data, is to keep an index outside of your dataset. If you really want to flag your original data, add a status field (e.g. a boolean) and set it to True on the first occurrence of each branch.
Pseudocode (let's assume that:
- your ClientDataSet is cds1;
- cds1 has a status field cds1Status (boolean) - optional, needed only if you want to sort/filter/search cds1;
- you have lIndex, which is a TStringList):
var
  cVal: string;
  nDummy: Integer;
begin
  lIndex.Clear;
  lIndex.Sorted := True;
  cds1.DisableControls;
  try
    cds1.First;
    while not cds1.Eof do // scan the dataset
    begin
      cVal := cds1Branch_ID.AsString;
      cds1.Edit; // we update the Status field either way
      if lIndex.Find(cVal, nDummy) then // nDummy is required by Find but unused
        cds1Status.AsBoolean := False   // already indexed: not the 1st occurrence
      else
      begin // not found - add it to the index...
        lIndex.Add(cVal);
        cds1Status.AsBoolean := True;   // ...and mark the first occurrence
      end;
      cds1.Post; // save the change to the status field
      cds1.Next;
    end; // scan
  finally
    cds1.EnableControls; // housekeeping
  end;
end;
// WARNING - not tested; written from memory, but I think you get the idea.
...Depending on what you're trying to accomplish (which would be the best thing to share with us) and on the selectivity of BRANCH_ID, perhaps the status-field machinery isn't needed at all. If you have a very low selectivity on that field (selectivity = number of unique values / number of records), it is probably much faster to create a new dataset and copy only the unique values into it, rather than putting every record of the original cds through the Edit + Post states. (Changing dataset states is a costly operation, especially if your cds is linked to remote data storage, i.e. a server.)
hth,
PS: My solution is intended to be mostly simple. You can also test with lIndex.Sorted := False and use lIndex.IndexOf instead of Find; in some (rare) cases that's better - it depends on your data. If you want to complicate things and speed is really a concern, you can implement a full-blown B-tree index for your searches (libraries are available). You could also use the index engine of the CDS, index BRANCH_ID, and do many Locate calls on a clone, but because your selectivity is clearly < 1, scanning the cds's entire index should theoretically be slower than a scan over a unique index, especially if your custom-made index is tailored to your data type, structure, distribution, etc.
Just my 2c.
