Add custom data to breeze metadata on server - breeze

In an app I'm developing I generate some UI widgets based on Breeze metadata and on attributes on my classes and properties. So on the server I read the attributes and add them to Breeze's generated metadata. On the client, after I load the metadata, I add those attributes to the corresponding types.
An example of this (on the client, after the enriched metadata loads):
function finishMetadata(md) {
    for (var i = 0; i < md.schema.entityType.length; i++) {
        var et = md.schema.entityType[i];
        var etype = manager.metadataStore.getEntityType(et.name);
        // isExportable is a class attribute
        etype.isExportable = et.isExportable;
        for (var j = 0; j < et.property.length; j++) {
            var p = et.property[j];
            var prop = etype.getProperty(p.name);
            // displayName is a property attribute
            prop.displayName = p.displayName ? p.displayName : p.name;
        }
    }
}
With this, when I call manager.metadataStore.getEntityType(entityName), I get all the entityType data, including the properties and all the attributes I added on the server.
This worked fine until today. I have added inheritance (TPT) on some classes (Customer : Person), and since the generated metadata for the Customer entity does not include the Person properties, I cannot add them to the metadataStore type. When I call metadataStore.getEntityType for Person, I get all the attributes, but when I call it for Customer I do not get my custom attributes (and this is because, in the code above, Customer does not list the parent Person properties, so I never get the chance to plug in my custom attributes).
Anyway, this feels hacky and messy, even to explain. So here we are: Breeze is at version 1.4.7, and I wonder if there is an easier way of adding custom data to the metadata that would not break with TPT?
P.S.: I know that I can hand-craft the metadata, but I would like to stick with the default as much as possible to avoid problems with future changes. So basically all metadata changes should be minimal and automatic, based on the classes.

Well, I ended up going further down the path I was already on. Instead of just using manager.metadataStore.getEntityType to get the type data (which does not bring my custom metadata), I check whether my entityType has a baseType and, if it does, I load that base type's data and merge it with the child class. In my case I basically take the dataProperties of Person and use them instead of the dataProperties of Customer. It works, but I'm still looking for a cleaner and simpler way of doing something like this.
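Roughly, the merge looks like this (just a sketch of the idea, assuming the client EntityType exposes a baseEntityType reference and that the custom attributes were stamped onto the base type's properties by finishMetadata above):
function getTypeWithInheritedAttributes(entityName) {
    var etype = manager.metadataStore.getEntityType(entityName);
    var base = etype.baseEntityType;
    while (base) {
        // copy the custom attributes from the base type's properties
        // onto the derived type before handing it to the widget generator
        base.dataProperties.forEach(function (baseProp) {
            var prop = etype.getDataProperty(baseProp.name);
            if (prop && !prop.displayName) {
                prop.displayName = baseProp.displayName || baseProp.name;
            }
        });
        if (etype.isExportable === undefined) {
            etype.isExportable = base.isExportable;
        }
        base = base.baseEntityType;
    }
    return etype;
}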

Related

How to correctly use Data Annotations to select which Items should be returned by the Web API?

I'm trying to specify a subset of data to be returned from a database query by Web API 2.
In particular, for this query, I first turn lazy loading on:
db.Configuration.LazyLoadingEnabled = true;
This is because there are potentially infinite levels of children. For example:
Parent: {"name":"Jon","children":[{"name":"Dave","children":[{"name":"Ed", ...
Each person in the above sequence can also have a biography. In the database, the biographies also have related tables for, let's say, authors, reviewers, etc.
As far as I know I can add data annotations to the model to specify which fields to return:
[Key] = specifies the key, which will be returned
[DataMember] = specifies a property which will be returned
[JsonIgnore] / [IgnoreDataMember] = specifies a property which will not be returned
[JsonObject(IsReference = true)] = specifies that the object is referenced from another object and therefore related objects should not be loaded
I'm struggling to load the related biographies. The id for the biography is returned, but the biography objects are null. On the parent object, I have annotated both the nullable int and the virtual object reference to the biography with [DataMember]. In the biography object, I have then specified the id with [Key], the name with [DataMember], and all other properties with [JsonIgnore]/[IgnoreDataMember]. However, the biographies are not being loaded. The db query is returning the items loaded, but they are then being nulled by Web API, I assume because of some circular reference in the chain somewhere.
There are about 50 tables linked in some way; do I need to go through them all and add data annotations to every one, even if I have used an ignore annotation to break the chain? Hoping for a simple solution, but any solution appreciated!
It seems to be working fine in that [DataMember] will load a HashSet of related data which gets instantiated in the constructor, but related database objects (which are not instantiated in the constructor) do not get loaded.
It seems that the update statement doesn't turn lazy loading on:
db.Configuration.LazyLoadingEnabled = true;
The related items would only be returned if I went into debug mode and loaded the related data when hovering over the object (seems strange), but basically it was staying in lazy loading = false mode.
My solution has been to turn lazy loading on globally and to use data annotations as described above to avoid circular references.
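For reference, a minimal sketch of how the annotations described above might sit on the model; the Person/Biography classes and property names here are hypothetical, not the actual model from the question:
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Runtime.Serialization;
using Newtonsoft.Json;

[JsonObject(IsReference = true)]   // serialize Person by reference so circular graphs don't recurse forever
public class Person
{
    [Key]
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string Name { get; set; }

    [DataMember]
    public int? BiographyId { get; set; }            // the FK that was coming back

    [DataMember]
    public virtual Biography Biography { get; set; } // the navigation property that was coming back null

    [DataMember]
    public virtual ICollection<Person> Children { get; set; }
}

public class Biography
{
    [Key]
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string Title { get; set; }

    [IgnoreDataMember]   // not serialized, which breaks the chain back into the person graph
    public virtual ICollection<Person> Authors { get; set; }
}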

Child navigation properties missing in imported entities in custom initializer

I have a custom entity definition like:
var Card = function () {};
var cardInitializer = function (card) {
    // card.fields is defined in the metadata.
    // card._cfields is an in-memory only field
    // that breeze will not, and should not, track.
    // Thus it is being added in the initializer
    card._cfields = card.fields.slice();
};
When the data loads from the server everything is fine. The card.fields array has the corresponding data.
EDITED: Added more info and code of how manager is being set up
But when the data is round-tripped through local storage via exportEntities and importEntities, the child data defined in the metadata (represented by the card.fields property in this example) is not loaded (the array has length 0) during the initializer call, though it is subsequently available on the entity after the load has completed.
Here is how the manager is being initialized:
var metadataStore = new breeze.MetadataStore();
metadataStore.importMetadata(options.metadata);

var queryOptions = new breeze.QueryOptions({
    fetchStrategy: breeze.FetchStrategy.FromLocalCache
});

var dataService = new breeze.DataService({
    serviceName: "none",
    hasServerMetadata: false
});

manager = new breeze.EntityManager({
    dataService: dataService,
    metadataStore: metadataStore,
    queryOptions: queryOptions
});

entityExtensions.registerExtensions(manager, breeze);

var entities = localStorage[storage];
if (entities && entities !== 'null') {
    manager.importEntities(entities);
}
Wow. You ask for free support from the harried developer of a free OSS product that you presumably value and then you shit on him because you think he was being flippant? And downgrade his answer.
Could you have responded more generously? Perhaps you might recognize that your question was a bit unclear. I guess that occurred to you, because you edited your question so that I can now see what you're driving at.
Two suggestions for next time. (1) Be nice. (2) Provide a running code sample that illustrates your issue.
I'll meet you half way. I wrote a plunker that I believe demonstrates your complaint.
It shows that the navigation properties may not be wired up when importEntities calls an initializer even though the related entities are in cache.
They do appear to be wired up during query result processing when the initializer is called.
I cannot explain why they are different in this respect. I will ask.
My personal preference is to be consistent and to have the entities wired up. But it may be that there are good reasons why we don't do that or why it is indeterminate even when processing query results. I'll try to get an answer as I said.
Meanwhile, you'll have to work around this ... which you can do by processing the values returned from the import:
var imported = em2.importEntities(exported);
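For example, one way that fix-up could look (just a sketch, assuming the entity type is named "Card" and that _cfields should simply mirror card.fields once the import has finished):
// after importEntities returns, the navigation properties are wired up,
// so the in-memory field can be populated from the manager's cache
em2.getEntities("Card").forEach(function (card) {
    card._cfields = card.fields.slice();
});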
FWIW, the documentation is silent on this question.
Look at the "Extending Entities" documentation topic again.
You will see that, by design, breeze does not know about any properties created in an initializer and therefore ignores such properties during serialization such as entity export. This is a feature not a limitation.
If you want breeze to "know" about an unmapped property, you must define it in the entity constructor (Card), even if you later populate it in the initializer function.
Again, best to look at the docs and at examples before setting out on your own.
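For instance, a sketch of that constructor-based approach, assuming the default backingStore model library (with Knockout the unmapped property would become an observable):
var Card = function () {
    // declared in the constructor, so breeze registers it as an unmapped
    // property that it knows about and includes in export/import
    this._cfields = [];
};
var cardInitializer = function (card) {
    // optionally still populated from card.fields when querying from the server
    card._cfields = card.fields.slice();
};
metadataStore.registerEntityTypeCtor("Card", Card, cardInitializer);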

Setting a collection of related entities in the correct way in EF4 using POCO's (src is the DB)

I have a POCO entity Report with a collection of a related POCO entity Reference. When creating a Report I get an ICollection<int> of ids. I use this collection to query the reference repository to get an ICollection<Reference> like so:
from r in referencesRepository.References
where viewModel.ReferenceIds.Contains(r.Id)
select r
I would like to connect the collection straight to Report like so:
report.References = from r in referencesRepository.References
where viewModel.ReferenceIds.Contains(r.Id)
select r;
This doesn't work because References is an ICollection and the result is an IEnumerable. I can do ToList(), but I think I will then load all of the references into memory. There also is no AddRange() function.
I would like to be able to do this without loading them into memory.
My question is very similar to this one. There, the only solution was to loop through the items and add them one by one. But in that question the list of references did not come from the database (which seemed to matter), whereas in my case the collection does come from the database. So I hope that it is somehow possible.
Thanks in advance.
When working with Entity Framework you must load objects into memory if you want to work with them, so basically you can do something like this:
report.References = (from r in referencesRepository.References
                     where viewModel.ReferenceIds.Contains(r.Id)
                     select r).ToList();
Another approach is using dummy objects, but it can cause other problems. A dummy object is a new instance of Reference that has only its Id set to the PK of an existing object in the DB; it will act like that existing object. The problem is that when you add the Report object to the context, you must manually set each Reference instance to the Unchanged state in the ObjectStateManager, otherwise it will be inserted into the DB.
report.References = viewModel.ReferenceIds.Select(i => new Reference { Id = i }).ToList();

// later in Report repository
context.Reports.AddObject(report);
foreach (var reference in report.References)
{
    context.ObjectStateManager.ChangeObjectState(reference, EntityState.Unchanged);
}

How to move from untyped DataSets to POCO\LINQ2SQL in legacy application

Good day!
I have a legacy application where the data access layer consists of classes in which queries are built using SqlConnection/SqlCommand and results are passed to the upper layers wrapped in untyped DataSets/DataTables.
Now I'm working on integrating this application into a newer one written in ASP.NET MVC 2, where LINQ2SQL is used for data access. I don't want to rewrite the fancy logic that generates the complex queries passed to SqlConnection/SqlCommand in LINQ2SQL (and don't have permission to do so), but I'd like to have the results of these queries as strongly-typed object collections instead of untyped DataSets/DataTables.
The basic idea is to wrap the old data access code in something that looks like a proper ASP.NET MVC "Model".
What is the fastest/easiest way of doing this?
In addition to the answer below, here is a nice solution based on AutoMapper: http://elegantcode.com/2009/10/16/mapping-from-idatareaderidatarecord-with-automapper/
An approach that you could take is using a DataReader and data transfer objects. For every object you want to work with, define the class in a data transfer object folder (or however your project is structured), then in your data access layer have something along the lines of the code below.
We used something very similar to this in a project with a highly normalized database, but in the code we did not need that normalization, so we used procedures to put the data into more usable objects. If you need to be able to save these objects as well, you will need to handle translating the objects into database commands.
"What is the fastest/easiest way of doing this?"
Depending on the number of classes etc. this may not be the fastest approach, but it will allow you to use the objects very similarly to the LINQ objects, and depending on the type of collections used (IList, IEnumerable etc.) you will be able to use the extension methods on those types of collections.
public IList<NewClass> LoadNewClasses(string abc)
{
    List<NewClass> newClasses = new List<NewClass>();
    using (DbCommand command = /* Get the command */)
    {
        // Add parameters
        command.Parameters["@Abc"].Value = abc;

        // Could also put the DataReader in a using block
        IDataReader reader = /* Get Data Reader */;
        while (reader.Read())
        {
            NewClass newClass = new NewClass();
            newClass.Id = (byte)reader["Id"];
            newClass.Name = (string)reader["Name"];
            newClasses.Add(newClass);
        }
        reader.Close();
    }
    return newClasses;
}

Code re-use with Linq-to-Sql - Creating 'generic' look-up tables

I'm working on an application at the moment in ASP.NET MVC which has a number of look-up tables, all of the form
LookUp {
    Id
    Text
}
As you can see, this just maps the Id to a textual value. These are used for things such as Colours. I now have a number of these, currently 6 and probably soon to be more.
I'm trying to put together an API that can be used via AJAX to allow the user to add/list/remove values from these lookup tables, so for example I could have something like:
http://example.com/Attributes/Colours/[List/Add/Delete]
My current problem is that clearly, regardless of which lookup table I'm using, everything else happens exactly the same. So really there should be no repetition of code whatsoever.
I currently have a custom route which points to an 'AttributeController', which figures out the attribute/look-up table in question based upon the URL (ie http://example.com/Attributes/Colours/List would want the 'Colours' table). I pass the attribute (Colours - a string) and the operation (List/Add/Delete), as well as any other parameters required (say "Red" if I want to add red to the list) back to my repository where the actual work is performed.
Things start getting messy here, as at the moment I've resorted to doing a switch/case on the attribute string, which can then grab the Linq-to-Sql entity corresponding to the particular lookup table. I find this pretty dirty though as I find myself having to write the same operations on each of the look-up entities, ugh!
What I'd really like to do is have some sort of mapping, which I could simply pass in the attribute name and get out some form of generic lookup object, which I could perform the desired operations on without having to care about type.
Is there some way to do this to my Linq-To-Sql entities? I've tried making them implement a basic interface (IAttribute), which simply specifies the Id/Text properties, however doing things like this fails:
System.Data.Linq.Table<IAttribute> table = GetAttribute("Colours");
As I cannot convert System.Data.Linq.Table<Colour> to System.Data.Linq.Table<IAttribute>.
Is there a way to make these look-up tables 'generic'?
Apologies that this is a bit of a brain-dump. There's surely information missing here, so just let me know if you'd like any further details. Cheers!
You have 2 options:
1. Use Expression Trees to dynamically create your lambda expression
2. Use Dynamic LINQ as detailed on Scott Gu's blog
I've looked at both options and have successfully implemented Expression Trees as my preferred approach.
Here's an example function that I created (NOT TESTED):
private static bool ValueExists<T>(String Value) where T : class
{
    ParameterExpression pe = Expression.Parameter(typeof(T), "p");
    Expression value = Expression.Equal(Expression.Property(pe, "ColumnName"), Expression.Constant(Value));
    Expression<Func<T, bool>> predicate = Expression.Lambda<Func<T, bool>>(value, pe);
    return MyDataContext.GetTable<T>().Where(predicate).Count() > 0;
}
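Used against one of the lookup entities it might be called like this (hypothetical; the hard-coded "ColumnName" above would be replaced with the actual mapped property, e.g. "Text"):
// does a Colour row whose Text equals "Red" already exist?
bool redExists = ValueExists<Colour>("Red");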
Instead of using a switch statement, you can use a lookup dictionary. This is pseudocode-ish, but this is one way to get the table in question. You'll have to manually maintain the dictionary, but it should be much easier than a switch.
It looks like the DataContext.GetTable() method could be the answer to your problem. You can get a table if you know the type of the linq entity that you want to operate upon.
Dictionary<string, Type> lookupDict = new Dictionary<string, Type>
{
    { "Colour", typeof(MatchingLinqEntity) },
    ...
};
Type entityType = lookupDict[AttributeFromRouteValue];
YourDataContext db = new YourDataContext();
var entityTable = db.GetTable(entityType);
var entity = entityTable.Single(x => x.Id == IdFromRouteValue);
// or whatever operations you need
db.SubmitChanges();
The Suteki Shop project has some very slick work in it. You could look into their implementation of IRepository<T> and IRepositoryResolver for a generic repository pattern. This really works well with an IoC container, but you could create them manually with reflection if the performance is acceptable. I'd use this route if you have or can add an IoC container to the project. You need to make sure your IoC container supports open generics if you go this route, but I'm pretty sure all the major players do.
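To make the reflection route concrete, here is a rough, untested sketch of a hand-rolled generic repository plus resolver. This is not the Suteki Shop code, and it leans on a few assumptions of my own: the IAttribute interface from the question has settable Id/Text members, the lookup tables are small, and each repository gets a DataContext handed to it.
using System;
using System.Collections.Generic;
using System.Data.Linq;
using System.Linq;

// the Id/Text interface from the question, with setters assumed here so the
// generic repository can create new rows
public interface IAttribute
{
    int Id { get; set; }
    string Text { get; set; }
}

// non-generic surface the AttributeController can call without caring
// about the concrete lookup type
public interface IAttributeRepository
{
    IEnumerable<IAttribute> List();
    void Add(string text);
    void Delete(int id);
}

public class AttributeRepository<T> : IAttributeRepository where T : class, IAttribute, new()
{
    private readonly DataContext db;

    public AttributeRepository(DataContext db)
    {
        this.db = db;
    }

    public IEnumerable<IAttribute> List()
    {
        return db.GetTable<T>().ToList();
    }

    public void Add(string text)
    {
        db.GetTable<T>().InsertOnSubmit(new T { Text = text });
        db.SubmitChanges();
    }

    public void Delete(int id)
    {
        // the lookup tables are tiny, so filtering in memory side-steps the
        // problem of translating interface member access into SQL
        T entity = db.GetTable<T>().AsEnumerable().Single(x => x.Id == id);
        db.GetTable<T>().DeleteOnSubmit(entity);
        db.SubmitChanges();
    }
}

public static class AttributeRepositoryFactory
{
    // entityType would come from a name-to-type map such as the dictionary above
    public static IAttributeRepository Create(Type entityType, DataContext db)
    {
        Type repoType = typeof(AttributeRepository<>).MakeGenericType(entityType);
        return (IAttributeRepository)Activator.CreateInstance(repoType, db);
    }
}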
