Meaning and purpose of using datamodel in JSF - jsf-2

I just wanted to know the meaning of DataModel and the situations where it is used in JSF. I was not able to get a clear picture of it from the Java EE doc here.

I find the linked javadoc quite clear:
DataModel is an abstraction around arbitrary data binding technologies that can be used to adapt a variety of data sources for use by JavaServer Faces components that support per-row processing for their child components.
DataModel is used as a wrapper class for the data you want to display in a component that supports per-row processing. This is typically the case for h:dataTable.
There is, for instance, ResultSetDataModel, a subclass of DataModel that helps display a SQL ResultSet in an h:dataTable. Without the ResultSetDataModel abstraction, you would have to transform the underlying ResultSet into a normal Collection. Thanks to ResultSetDataModel, you don't have to do any transformation and you can plug the ResultSet directly into the JSF component.

Related

How to create nodes in neo4j with properties defined by a dictionary via neo4jclient in C#

As a complete novice programmer I am trying to populate my neo4j DB with data from heterogeneous sources. For this I am trying to use the Neo4jClient C# API. The heterogeneity of my data comes from a custom, continuously evolving DSL/DSML/metamodel that defines the possible types of elements, i.e. models, so creating classes for each type would not be ideal.
As I understand, my options are the following:
Have a predefined class for each type of element: This way I can easily serialize my objects - that is, if all properties are primitive types or arrays/lists.
Have a base class (with a Dictionary to hold properties) that I use as an interface between the models I'm trying to serialize and neo4j. I've seen an example of this at Can Neo4j store a dictionary in a node?, but I don't understand how to use the converter (defined in the answer) to add a node. Also, I don't see how an int-based dictionary would allow me to store key-value pairs whose string keys would translate to property names in neo4j.
Generate a custom query dynamically, as seen at https://github.com/Readify/Neo4jClient/wiki/cypher#manual-queries-highly-discouraged. This is not recommended and may not perform well.
Ultimately, what I would like to achieve is to avoid the need to define a separate class for every type of element that I have, but still be able to add properties that are defined by types in my metamodel.
I would also be interested in somehow influencing the serializer to ignore non-compatible properties (similarly to XmlIgnore), so that I would not need to create a separate class for each class that has more than just primitive types.
Thanks,
J
There are two problems you're trying to solve - the first is how to program the C# side of this, the second is how to store the result.
At some point you'll need to access this data in your C# code - unless you're going fully dynamic, you'll need some sort of class structure.
Taking your 3 options:
Please have a look at this question: neo4jclient heterogenous data return, which I think covers this scenario.
In that answer, the converter does the work for you; you would create, delete, etc. as before, and the converter just handles the IDictionary instance. The IDictionary<int, string> in the answer is only an example - you can use whatever you want. You could use IDictionary<string, string> if you wanted; in fact, in that example, all you'd need to do is change the IntString property to an IDictionary<string, string> and it should just work.
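As a hedged sketch of that dictionary-based approach - assuming Neo4jClient's default JSON.NET serializer and that Json.NET's [JsonExtensionData] feature (5.0+) is available; ModelElement, ElementStore, and the property names are all illustrative - the dictionary entries can be promoted to top-level node properties like this:

using System.Collections.Generic;
using Neo4jClient;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

// Hypothetical base class for metamodel elements. [JsonExtensionData]
// (a standard Json.NET feature) serializes every dictionary entry as a
// top-level JSON property, so each string key becomes an individual
// node property in neo4j instead of a nested object.
public class ModelElement
{
    public ModelElement()
    {
        Properties = new Dictionary<string, JToken>();
    }

    public string TypeName { get; set; }

    [JsonExtensionData]
    public IDictionary<string, JToken> Properties { get; set; }
}

public static class ElementStore
{
    public static void CreateElement(GraphClient client)
    {
        var pump = new ModelElement { TypeName = "Pump" };
        pump.Properties["maxPressure"] = 250;   // keys come from the metamodel
        pump.Properties["vendor"] = "Acme";

        client.Cypher
              .Create("(e:Element {props})")
              .WithParam("props", pump)
              .ExecuteWithoutResults();
    }
}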
Even if you went down the route of custom queries (which you really shouldn't need to), you would still need to bring the results back as classes. Nothing changes; it just makes your life a lot harder.
In terms of XmlIgnore - have you tried JsonIgnore?
Alternatively - look at the custom converter and get the non-compatible properties into your DB.
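For reference, a minimal sketch of the JsonIgnore suggestion (the type and property names are illustrative):

using Newtonsoft.Json;

public class PumpElement
{
    public string Name { get; set; }

    // Skipped by JSON.NET during (de)serialization, analogous to
    // XmlIgnore in XML serialization.
    [JsonIgnore]
    public object RuntimeState { get; set; }
}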

Using navigation properties in entity framework code first

Context:
Code First, Entity Framework 4.3.1;
User ---- Topic, 1 to Many relation;
User with public virtual ICollection<Topic> CreatedTopics Navigation Property(Lazy Loading);
Topic with public virtual User Creator Navigation Property;
DataServiceController : DbDataController<DefaultDbContext>, Web API beta, ASP.NET MVC 4 Beta, Single Page Application;
System.Json for Json serialization;
Web API Action:
public IQueryable<Topic> GetTopics()
{
    // return DbContext.Topics; // OK
    return DbContext.Topics.Include("Creator"); // With Exception
}
Result: "an unhandled microsoft .net framework exception occurred in w3wp.exe"
The problem here seems to be that I should not add navigation properties in both entities (causing a circular reference?); if I delete the CreatedTopics navigation property from the User class, it is OK again.
So, in a similar context to the one listed above, here are my questions:
How do I deal with navigation properties in a 1-to-many relation?
Furthermore, what about a many-to-many relation - do I have to divide it into two 1-to-many relations?
What are the best practices and precautions for using navigation properties?
I have read many related posts, but I'm still not clear on this :(
Thanks for any help!
Dean
This is not a problem of Code First or EF - it is a problem of serialization. The serializer used to convert your object graph into a representation passed in a Web API message simply cannot handle circular references by default. Depending on the message format you want, Web API uses different serializers by default - there is more here about the default serializers used by Web API and about how to change them. The following text assumes you are using DataContractJsonSerializer or DataContractSerializer (which should be the default for XML serialization), but the same is possible with JSON.NET (which should be the default for JSON serialization - JSON serialization can be switched to DataContractJsonSerializer, but the default serializer is better).
So what can you do? You can tell the serializer to track those circular references by marking your classes with [DataContract(IsReference = true)] and each passed property with the [DataMember] attribute (check the linked article for a description of how to achieve this with JSON.NET). This allows the serializer to correctly recognize cycles, and the serialization will in theory succeed. In theory, because this also requires not using lazy loading. Otherwise you can serialize much more data than you expected (in some catastrophic scenarios it can lead to serializing the whole content of your database).
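A minimal sketch of that attribute-based fix, applied to the entities from the question (member lists shortened for illustration):

using System.Collections.Generic;
using System.Runtime.Serialization;

// IsReference = true makes the serializer emit object references for
// repeated instances instead of re-serializing each Topic's Creator
// (and its CreatedTopics, and so on) forever.
[DataContract(IsReference = true)]
public class User
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public virtual ICollection<Topic> CreatedTopics { get; set; }
}

[DataContract(IsReference = true)]
public class Topic
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Title { get; set; }
    [DataMember] public virtual User Creator { get; set; }
}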
When you serialize an entity graph with lazy loading enabled, you serialize a Topic and its Creator, but serialization will also visit the CreatedTopics property => all related topics are lazy loaded and processed by serialization, and serialization continues to visit the Creator of every newly loaded topic! This process continues until there is no other object to lazy load. Because of this you should never use lazy loading when serializing entities.
The other option is to exclude the back reference from serialization. You need to serialize Creator, but you don't need to serialize CreatedTopics, so you can mark that property with the IgnoreDataMember attribute (JsonIgnore for JSON.NET). The problem is that if you also have a Web API action returning a User with all his CreatedTopics, this will not work because of the attribute.
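Sketched on the same User entity, that option looks like this:

using System.Collections.Generic;
using System.Runtime.Serialization;

[DataContract]
public class User
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }

    // The back reference is excluded from serialization, so the cycle
    // never arises; with JSON.NET use [JsonIgnore] instead.
    [IgnoreDataMember]
    public virtual ICollection<Topic> CreatedTopics { get; set; }
}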
The last option is not using entities at all. This option is commonly used in web services: you create special DTO objects satisfying the requirements of a specific operation, and you handle the conversion between entities and DTOs inside the operation (possibly with the help of a tool like AutoMapper).
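A hedged sketch of that DTO option - TopicDto and the map below are illustrative, using AutoMapper's classic static API:

using AutoMapper;

// A flat DTO with no back references, shaped for one specific operation.
public class TopicDto
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string CreatorName { get; set; }
}

public static class TopicMappings
{
    public static void Configure()
    {
        Mapper.CreateMap<Topic, TopicDto>()
              .ForMember(d => d.CreatorName,
                         o => o.MapFrom(s => s.Creator.Name));
    }

    // Usage: var dto = Mapper.Map<TopicDto>(topic);
}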
There is no difference between handling one-to-one, one-to-many, or many-to-many relations. If you have navigation properties on both sides you must always deal with this problem.

DTO Pattern + Lazy Loading + Entity Framework + ASP.Net MVC + Auto Mapper

Firstly, sorry for the lengthy question, but I have to give some background information.
We are creating an application which uses ASP.NET MVC, jQuery Templates, Entity Framework, and WCF, and we use POCOs as our domain layer. In our application there is a WCF services layer to exchange data with the ASP.NET MVC application, and it uses Data Transfer Objects (DTOs) between WCF and MVC.
Furthermore, the application uses lazy loading in Entity Framework, with AutoMapper doing the Domain-to-DTO conversion in our WCF service layer.
Our backend architecture is as follows: WCF Services -> Managers -> Repository -> Entity Framework (POCO)
In our application we don't use view models, as we don't want another mapping layer for the MVC application; we use the DTOs as view models.
Generally, we have Normal and Lite DTOs for a domain object, such as Customer, CustomerLite, etc. (a Lite object has fewer properties than the Normal one).
Now we are having some difficulties with DTOs: our DTO structure is becoming more complex, and when we aim for maintainability (with a general hierarchical DTO structure) we lose performance.
For example, we have a Customer view page, and our DTO hierarchy is as follows:
public class CustomerViewDetailsDTO
{
    public CustomerLiteDto Customer { get; set; }
    public OrderLiteDto Order { get; set; }
    public AddressLiteDto Address { get; set; }
}
In this case we don't want some fields of OrderLiteDto for this view, but some other view needs those fields, so to accommodate both we use this structure.
When it comes to auto-mapping, we map CustomerViewDetailsDTO, and we get additional data (which is not required for the particular view) through lazy loading (Entity Framework).
My Questions:
Is there any mechanism that we can use to improve performance while preserving maintainability?
Is it possible to use AutoMapper with multiple view-based mapping functions for the same DTO?
First of all, don't use lazy loading, as it will probably result in Select N+1 problems or similar.
Select N+1 is a data access anti-pattern where the database is accessed in a suboptimal way.
In other words, using lazy loading without eager loading collections causes Entity Framework to go to the database and bring the results back one row at a time.
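A minimal sketch of the eager-loading alternative, assuming a hypothetical MyDbContext with a Customers set and an Orders navigation collection:

using System.Collections.Generic;
using System.Data.Entity;   // lambda Include overload (EF 4.1+)
using System.Linq;

public static class CustomerQueries
{
    // One joined query instead of one lazy load per customer row.
    public static List<Customer> GetCustomersWithOrders(MyDbContext db)
    {
        return db.Customers
                 .Include(c => c.Orders)   // or .Include("Orders")
                 .ToList();
    }
}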
Use jsRender for templates, as it's much faster than jQuery Templates:
Render Benchmark
And here's a nice article on how to use it: Reducing JavaScript Code Using jsRender Templates in HTML5 Applications
Generally, we have Normal and Lite DTOs for a domain object, such as Customer, CustomerLite, etc. (a Lite object has fewer properties than the Normal one).
Your Normal DTO is probably a ViewModel, as ViewModels may or may not map one-to-one to DTOs, and ViewModels often contain logic pushed back from the view, or help with pushing data back to the model on a user's response.
DTOs don't have any behavior; their purpose is to reduce the number of calls between tiers of the application.
Is there any mechanism that we can use to improve performance while preserving maintainability?
Use one ViewModel per view and you won't have to worry about maintainability. Personally, I usually create an abstract base class, and for Edit, Create, or List I inherit from that class and add the properties specific to that view. So, for example, the Create view doesn't need a PropertyId (someone could hijack your post and submit one), so only the Edit and List ViewModels expose a PropertyId property.
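A minimal sketch of that base-class layout (all names illustrative):

// Shared fields live in the abstract base.
public abstract class CustomerViewModelBase
{
    public string Name { get; set; }
    public string Email { get; set; }
}

// Create view: no Id exposed, so a forged post can't smuggle one in.
public class CustomerCreateViewModel : CustomerViewModelBase { }

// Edit and List views: the Id is needed to round-trip the record.
public class CustomerEditViewModel : CustomerViewModelBase
{
    public int Id { get; set; }
}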
Is it possible to use AutoMapper with multiple view-based mapping functions for the same DTO?
You can use AutoMapper to define every map; the question is how complicated each map will be. Use one ViewModel per view and your maps will be simple to write and maintain. I must point out that it is not recommended to use AutoMapper in data access code:
One downside of AutoMapper is that projection from domain objects still forces the entire domain object to be queried and loaded.
Source: Autoprojecting LINQ queries
You can use a set of extensions (limited as of now) to speed up your mapping in data access code: Stop using AutoMapper in your Data Access Code
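And for the second question, a hedged sketch of per-view maps for the same entity, using the static AutoMapper API that was current at the time and assuming a hypothetical Customer entity with matching property names:

using AutoMapper;

public static class ViewModelMappings
{
    public static void Configure()
    {
        // One explicit, trivially simple map per view-specific ViewModel.
        Mapper.CreateMap<Customer, CustomerCreateViewModel>();
        Mapper.CreateMap<Customer, CustomerEditViewModel>();
    }

    // Usage: var vm = Mapper.Map<CustomerEditViewModel>(customer);
}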
Regards

Property in Entity partial class

I have an entity/table that uses sqlgeography.
Since EF 4.x doesn't support spatial types, I'm instead sending the bytes of the field back and forth.
I have stored procs on the database side that handle the conversion, and properties on the code side to do that job.
To add the properties in the code I used a partial class.
One of those properties is for the SqlGeography which simply wraps around the byte[] property to handle getting and setting.
This property is hidden from EF using the NotMappedAttribute.
The other is the property exposing the byte[] itself and is decorated with the EdmScalarPropertyAttribute and DataMemberAttribute.
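A hedged sketch of the partial-class setup described above (the entity and property names are illustrative; SqlGeography comes from Microsoft.SqlServer.Types):

using System.ComponentModel.DataAnnotations;   // NotMappedAttribute
using System.Data.Objects.DataClasses;         // EdmScalarPropertyAttribute
using System.Data.SqlTypes;
using System.Runtime.Serialization;
using Microsoft.SqlServer.Types;

public partial class Location
{
    // The raw bytes exchanged with the database through the stored procs.
    [EdmScalarProperty(EntityKeyProperty = false, IsNullable = true)]
    [DataMember]
    public byte[] GeoBytes { get; set; }

    // Convenience wrapper around the bytes, hidden from EF.
    [NotMapped]
    public SqlGeography Geo
    {
        get
        {
            return GeoBytes == null
                ? null
                : SqlGeography.Deserialize(new SqlBytes(GeoBytes));
        }
        set
        {
            GeoBytes = value == null ? null : value.Serialize().Value;
        }
    }
}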
I then go to the EF model designer (*.edmx) to point the entity model at the Insert/Update/Delete stored procs.
It finds the stored procs alright and realises that they (when appropriate) take a VARBINARY parameter.
It also has a drop down allowing you to select a property on the entity class which maps to that parameter.
However this drop down doesn't list either of my properties. I don't care about the SqlGeography property since that is meant to be hidden from EF, however it is vital for me to be able to point it at the byte[] property, as that is where the data comes from.
I would very much like to avoid database triggers, or wrapper classes and additional fields, to fudge this into working.
I tried manually editing the .edmx file to include the byte[] property, but then it just complains it's unmapped.
Can anyone give me some insight in to how to get this to work? Or an alternative method of achiving the end result?
We could use a view to create the binary field for us, but this then involves manually creating a lot of the XML for the relationships within the data.
This pretty much voids the point of using EF, which is to make life simple and easy.
For this project we'll just add a binary field to the table, then have sprocs handle the conversion on the server and a property in a partial entity class expose the geography type in the model.
For the next project I doubt we'll be using EF. Dapper is so much more painless, even if there's a touch more code writing involved.
Here are the links for using views, if anyone thinks it would be applicable to them:
http://thedatafarm.com/blog/data-access/yes-you-can-read-and-probably-write-spatial-data-with-entity-framework/
http://smehrozalam.wordpress.com/2009/08/12/entity-framework-creating-a-model-using-views-instead-of-tables/
In the end we created a computed column for each table that exposes the spatial data as bytes.
We then use stored procs for inserting and updating the spatial data.

Is there a performance difference when adding Fields to both TxxxQuery and TClientDataSet

When I use a TClientDataSet connected to a TxxxQuery component, I can add TFields to both components at design time. I noticed that when I don't specify the TFields in the TxxxQuery component, they are created on the fly when the query is executed at runtime.
My question is: is there a performance difference when I add the TFields at design time to the TxxxQuery component?
When you add the fields at design time, you get strongly typed QueryName_FieldName fields that you can use directly from code, skipping the name-based QueryName["FieldName"] lookup required if you don't have them.
From a performance standpoint the difference is most likely insignificant. From a language perspective, having the fields added at design time provides better type safety, but only if you access the fields from code, and only if you use the QueryName_FieldName.Value syntax rather than the name-based QueryName["FieldName"] syntax. If you use data-bound controls, there's no difference.
I personally only add fields to a TClientDataSet at design time when I need to use the client dataset without binding it to another data source (i.e. as a temporary table for reporting).
