Is there some Interface available in Apache Jena like ModelBuilder in RDF4J?
I can see ModelMaker in Jena, but that does not appear to be anything like a builder.
Here is the RDF4J-based function that needs to be reimplemented with Jena:
public static org.eclipse.rdf4j.model.Model convertGraph2RDFModel(Graph graph, String label) {
    ModelBuilder builder = new ModelBuilder();
    GraphTraversalSource t = graph.traversal();
    GraphTraversal<Vertex, Vertex> hasLabel = t.V().hasLabel(label);
    Vertex s;
    if (hasLabel.hasNext()) {
        s = hasLabel.next();
        extractModelFromVertex(builder, s);
    }
    return builder.build();
}
private static void extractModelFromVertex(ModelBuilder builder, Vertex s) {
    builder.subject(s.label());
    Iterator<VertexProperty<String>> propertyIter = s.properties();
    while (propertyIter.hasNext()) {
        VertexProperty<String> property = propertyIter.next();
        builder.add(property.label(), property.value());
    }
    Iterator<Edge> edgeIter = s.edges(Direction.OUT);
    Edge edge;
    Stack<Vertex> vStack = new Stack<Vertex>();
    while (edgeIter.hasNext()) {
        edge = edgeIter.next();
        s = edge.inVertex();
        builder.add(edge.label(), s.label());
        vStack.push(s);
    }
    Iterator<Vertex> vIterator = vStack.iterator();
    while (vIterator.hasNext()) {
        s = vIterator.next();
        extractModelFromVertex(builder, s);
    }
}
I don't know if Jena has similar functionality, but you could of course just continue using the RDF4J ModelBuilder, serialize its output Model to, say, a Turtle or TriG string (or file), then use Jena to read it in again.
org.eclipse.rdf4j.model.Model m = ...; // RDF4J Model built by the ModelBuilder
java.io.Writer writer = new java.io.StringWriter();
org.eclipse.rdf4j.rio.Rio.write(m, writer, RDFFormat.TRIG);
String serialized = writer.toString();
// Use Jena's parser to read the string back in
// (assuming the TriG default graph ends up in the Jena Model):
org.apache.jena.rdf.model.Model jenaModel = org.apache.jena.rdf.model.ModelFactory.createDefaultModel();
jenaModel.read(new java.io.StringReader(serialized), null, "TRIG");
Or alternatively just iterate over the RDF4J model and convert each statement directly (without serializing and deserializing in between):
org.eclipse.rdf4j.model.Model rdf4jModel = ...; // RDF4J Model built by the ModelBuilder
org.apache.jena.rdf.model.Model jenaModel = ...; // (empty) Jena model to receive converted rdf4j model
rdf4jModel.forEach(stmt -> jenaModel.add(convert(stmt)));
...
public org.apache.jena.rdf.model.Statement convert(
        org.eclipse.rdf4j.model.Statement stmt) {
    ... // create a Jena statement from the RDF4J one.
}
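For completeness, here is a minimal sketch of what such a convert method could look like; it is shown taking the target Jena model as an extra parameter (or keep the model in a field to match the single-argument call above). It maps every value by its string form and glosses over blank-node identity and literal datatypes/language tags, which a real implementation would preserve:
public static org.apache.jena.rdf.model.Statement convert(
        org.apache.jena.rdf.model.Model jenaModel,
        org.eclipse.rdf4j.model.Statement stmt) {
    org.apache.jena.rdf.model.Resource s =
            jenaModel.createResource(stmt.getSubject().stringValue());
    org.apache.jena.rdf.model.Property p = jenaModel.createProperty(
            stmt.getPredicate().getNamespace(), stmt.getPredicate().getLocalName());
    org.eclipse.rdf4j.model.Value v = stmt.getObject();
    // Literals become plain Jena literals (datatype/lang dropped in this sketch);
    // IRIs and blank nodes become resources keyed by their string form.
    org.apache.jena.rdf.model.RDFNode o = (v instanceof org.eclipse.rdf4j.model.Literal)
            ? jenaModel.createLiteral(v.stringValue())
            : jenaModel.createResource(v.stringValue());
    return jenaModel.createStatement(s, p, o);
}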
I'll admit it's probably easier to settle on using a single framework in most applications, but there's no fundamental reason you can't use bits of RDF4J and Jena in combination.
I would like to:
- add/remove/update elements/attributes/values in the "subTree",
- be able to save the updated "targetDoc" back to the "target" file location, and
- determine which tree model would be best for this XPath + tree-modification procedure.
I thought I should somehow be able to get a MutableNodeInfo object, but I don't know how to do this. I tried using processor.setConfigurationProperty(FeatureKeys.TREE_MODEL, Builder.LINKED_TREE); but this still gives me an underlying node of TinyElementImpl. I require XPath 2.0 to avoid having to enter default namespaces, which is why I am using Saxon s9api instead of Java's default DOM model. I would also like to avoid using XSLT/XQuery if possible, because these tree modifications are done dynamically, which would make XSLT/XQuery more complicated in my situation.
public static void main(String[] args) {
    // XML file namespace URIs
    Hashtable<String, String> namespaceURIs = new Hashtable<>();
    namespaceURIs.put("def", "http://www.cdisc.org/ns/def/v2.0");
    namespaceURIs.put("xmlns", "http://www.cdisc.org/ns/odm/v1.3");
    namespaceURIs.put("xsi", "http://www.w3.org/2001/XMLSchema-instance");
    namespaceURIs.put("xlink", "http://www.w3.org/1999/xlink");
    namespaceURIs.put("", "http://www.cdisc.org/ns/odm/v1.3");
    // The source/target xml document
    String target = "Path to file.xml";
    // An xpath string
    String xpath = "/ODM/Study/MetaDataVersion/ItemGroupDef[@OID/string()='IG.TA']";
    Processor processor = new Processor(true);
    // I thought this tells the processor to use something other than TinyTree
    processor.setConfigurationProperty(FeatureKeys.TREE_MODEL, Builder.LINKED_TREE);
    DocumentBuilder builder = processor.newDocumentBuilder();
    XPathCompiler xpathCompiler = processor.newXPathCompiler();
    for (Entry<String, String> entry : namespaceURIs.entrySet()) {
        xpathCompiler.declareNamespace(entry.getKey(), entry.getValue());
    }
    try {
        XdmNode targetDoc = builder.build(Paths.get(target).toFile());
        XPathSelector selector = xpathCompiler.compile(xpath).load();
        selector.setContextItem(targetDoc);
        XdmNode subTree = (XdmNode) selector.evaluateSingle();
        // The following prints: class net.sf.saxon.tree.tiny.TinyElementImpl
        System.out.println(subTree.getUnderlyingNode().getClass());
        /*
         * Here is where I would like to modify subTree and save the modified doc
         */
    } catch (SaxonApiException e) {
        e.printStackTrace();
    }
}
I think you can supply a DOM node to Saxon and run XPath against it, but in that case you don't use the document builder for Saxon's native trees. Instead, you build a DOM using javax.xml.parsers.DocumentBuilder, and once you have a W3C DOM node you can supply it to Saxon using the wrap method of a Saxon DocumentBuilder. Here is sample code taken from the file S9APIExamples.java in the Saxon 9.6 resources file:
// Build the DOM document
File file = new File("data/books.xml");
DocumentBuilderFactory dfactory = DocumentBuilderFactory.newInstance();
dfactory.setNamespaceAware(true);
javax.xml.parsers.DocumentBuilder docBuilder;
try {
    docBuilder = dfactory.newDocumentBuilder();
} catch (ParserConfigurationException e) {
    throw new SaxonApiException(e);
}
Document doc;
try {
    doc = docBuilder.parse(new InputSource(file.toURI().toString()));
} catch (SAXException e) {
    throw new SaxonApiException(e);
} catch (IOException e) {
    throw new SaxonApiException(e);
}
// Compile the XPath expression
Processor proc = new Processor(false);
DocumentBuilder db = proc.newDocumentBuilder();
XdmNode xdmDoc = db.wrap(doc);
XPathCompiler xpath = proc.newXPathCompiler();
XPathExecutable xx = xpath.compile("//ITEM/TITLE");
// Run the XPath expression
XPathSelector selector = xx.load();
selector.setContextItem(xdmDoc);
for (XdmItem item : selector) {
    XdmNode node = (XdmNode) item;
    org.w3c.dom.Node element = (org.w3c.dom.Node) node.getExternalNode();
    System.out.println(element.getTextContent());
}
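Because the nodes returned here are ordinary W3C DOM nodes, the subtree can then be modified with the standard DOM API and the whole document written back out with an identity transform. A minimal sketch, with a made-up attribute and output file (checked exceptions omitted):
// Mutate a matched element in place, then serialize the DOM back to a file.
org.w3c.dom.Element hit = (org.w3c.dom.Element) element; // an element matched above
hit.setAttribute("OID", "IG.NEW"); // hypothetical modification
javax.xml.transform.Transformer t =
        javax.xml.transform.TransformerFactory.newInstance().newTransformer();
t.transform(new javax.xml.transform.dom.DOMSource(doc),
        new javax.xml.transform.stream.StreamResult(new File("modified.xml")));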
There are also samples showing how to use Saxon with JDOM and other mutable tree implementations but I think you need Saxon PE or EE to have direct support for those.
The MutableNodeInfo interface in Saxon is designed very specifically to meet the needs of XQuery Update, and I would advise against trying to use it directly from Java; the implementation isn't likely to be robust when handling method calls other than those made by XQuery Update.
In fact, it's generally true that the Saxon NodeInfo interface is designed as a target for XPath, rather than for user-written Java code. I would therefore suggest using a third party tree model; the ones I like best are JDOM2 and XOM. Both of these allow you to mix direct Java navigation and update with use of XPath 2.0 navigation using Saxon.
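If you go the JDOM2 route, direct navigation and update look like the sketch below (element names, attribute, and file paths are made up; SAXBuilder and XMLOutputter are JDOM2's standard entry points, and checked exceptions are omitted). Running Saxon XPath over the JDOM2 tree additionally needs the JDOM2 object-model support mentioned above.
// Build, mutate, and re-serialize with plain JDOM2.
org.jdom2.Document doc = new org.jdom2.input.SAXBuilder()
        .build(new java.io.File("source.xml"));
org.jdom2.Element root = doc.getRootElement();
root.setAttribute("updated", "true"); // hypothetical in-place modification
new org.jdom2.output.XMLOutputter(org.jdom2.output.Format.getPrettyFormat())
        .output(doc, new java.io.FileWriter("target.xml"));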
This is my scenario: we are building a routing system using Neo4j and the spatial plugin. We start from an OSM file, read it, and import the nodes and relationships into our graph (a custom graph model).
If we don't use Neo4j's batch inserter, importing a compressed OSM file (about 140MB compressed, around 2GB uncompressed) takes around 3 days on a dedicated server with the following characteristics: CentOS 6.5 64-bit, quad core, 8GB RAM. Please note that most of that time is spent creating the Neo4j nodes and relationships; in fact, if we read the same file without doing anything with Neo4j, it is read in around 7 minutes (I'm sure about this because our process reads the file twice: once to store the correct OSM node ids, and again to create the Neo4j graph).
Obviously we need to improve the import process, so we are trying to use the BatchInserter. So far, so good (I still need to measure how much faster it is, but I expect a big improvement); the first thing I did was try the batch inserter in a simple test case (very similar to our code, but without modifying our code directly).
I list my software versions:
Neo4j: 2.0.2
Neo4jSpatial: 0.13-neo4j-2.0.1
Neo4jGraphCollections: 0.7.1-neo4j-2.0.1
Osmosis: 0.43.1
Since I'm using Osmosis to read the OSM file, I wrote the following Sink implementation:
public class BatchInserterSinkTest implements Sink
{
    public static final Map<String, String> NEO4J_CFG = new HashMap<String, String>();

    private static File basePath = new File("/home/angelo/Scrivania/neo4j");
    private static File dbPath = new File(basePath, "db");

    private GraphDatabaseService graphDb;
    private BatchInserter batchInserter;
    // private BatchInserterIndexProvider batchIndexService;
    private SpatialDatabaseService spatialDb;
    private SimplePointLayer spl;

    static
    {
        NEO4J_CFG.put("neostore.nodestore.db.mapped_memory", "100M");
        NEO4J_CFG.put("neostore.relationshipstore.db.mapped_memory", "300M");
        NEO4J_CFG.put("neostore.propertystore.db.mapped_memory", "400M");
        NEO4J_CFG.put("neostore.propertystore.db.strings.mapped_memory", "800M");
        NEO4J_CFG.put("neostore.propertystore.db.arrays.mapped_memory", "10M");
        NEO4J_CFG.put("dump_configuration", "true");
    }

    @Override
    public void initialize(Map<String, Object> arg0)
    {
        batchInserter = BatchInserters.inserter(dbPath.getAbsolutePath(), NEO4J_CFG);
        graphDb = new SpatialBatchGraphDatabaseService(batchInserter);
        spatialDb = new SpatialDatabaseService(graphDb);
        spl = spatialDb.createSimplePointLayer("testBatch", "latitudine", "longitudine");
        //batchIndexService = new LuceneBatchInserterIndexProvider(batchInserter);
    }

    @Override
    public void complete()
    {
        // TODO Auto-generated method stub
    }

    @Override
    public void release()
    {
        // TODO Auto-generated method stub
    }

    @Override
    public void process(EntityContainer ec)
    {
        Entity entity = ec.getEntity();
        if (entity instanceof Node) {
            Node osmNodo = (Node) entity;
            org.neo4j.graphdb.Node graphNode = graphDb.createNode();
            graphNode.setProperty("osmId", osmNodo.getId());
            graphNode.setProperty("latitudine", osmNodo.getLatitude());
            graphNode.setProperty("longitudine", osmNodo.getLongitude());
            spl.add(graphNode);
        } else if (entity instanceof Way) {
            //do something with the way
        } else if (entity instanceof Relation) {
            //do something with the relation
        }
    }
}
Then I wrote the following test case:
public class BatchInserterTest
{
    private static final Log logger = LogFactory.getLog(BatchInserterTest.class.getName());

    @Test
    public void batchInserter()
    {
        File file = new File("/home/angelo/Scrivania/MilanoPiccolo.osm");
        try
        {
            boolean pbf = false;
            CompressionMethod compression = CompressionMethod.None;

            if (file.getName().endsWith(".pbf"))
            {
                pbf = true;
            }
            else if (file.getName().endsWith(".gz"))
            {
                compression = CompressionMethod.GZip;
            }
            else if (file.getName().endsWith(".bz2"))
            {
                compression = CompressionMethod.BZip2;
            }

            RunnableSource reader;

            if (pbf)
            {
                reader = new crosby.binary.osmosis.OsmosisReader(new FileInputStream(file));
            }
            else
            {
                reader = new XmlReader(file, false, compression);
            }

            reader.setSink(new BatchInserterSinkTest());

            Thread readerThread = new Thread(reader);
            readerThread.start();

            while (readerThread.isAlive())
            {
                try
                {
                    readerThread.join();
                }
                catch (InterruptedException e)
                {
                    /* do nothing */
                }
            }
        }
        catch (Exception e)
        {
            logger.error("Error creating the neo4j store with the batchInserter", e);
        }
    }
}
By executing this code, I get this exception:
Exception in thread "Thread-1" java.lang.ClassCastException: org.neo4j.unsafe.batchinsert.SpatialBatchGraphDatabaseService cannot be cast to org.neo4j.kernel.GraphDatabaseAPI
    at org.neo4j.cypher.ExecutionEngine.<init>(ExecutionEngine.scala:113)
    at org.neo4j.cypher.javacompat.ExecutionEngine.<init>(ExecutionEngine.java:53)
    at org.neo4j.cypher.javacompat.ExecutionEngine.<init>(ExecutionEngine.java:43)
    at org.neo4j.collections.graphdb.ReferenceNodes.getReferenceNode(ReferenceNodes.java:60)
    at org.neo4j.gis.spatial.SpatialDatabaseService.getSpatialRoot(SpatialDatabaseService.java:76)
    at org.neo4j.gis.spatial.SpatialDatabaseService.getLayer(SpatialDatabaseService.java:108)
    at org.neo4j.gis.spatial.SpatialDatabaseService.containsLayer(SpatialDatabaseService.java:253)
    at org.neo4j.gis.spatial.SpatialDatabaseService.createLayer(SpatialDatabaseService.java:282)
    at org.neo4j.gis.spatial.SpatialDatabaseService.createSimplePointLayer(SpatialDatabaseService.java:266)
    at it.eng.pinf.graph.batch.test.BatchInserterSinkTest.initialize(BatchInserterSinkTest.java:46)
    at org.openstreetmap.osmosis.xml.v0_6.XmlReader.run(XmlReader.java:95)
    at java.lang.Thread.run(Thread.java:744)
This is related to this code:
spl = spatialDb.createSimplePointLayer("testBatch", "latitudine", "longitudine");
So now I'm wondering: how can I use the BatchInserter in my case? I have to add the created nodes to the SimplePointLayer... so how can I create the layer when using the BatchInserter graph db service?
Is there any simple example?
Any tip is really appreciated.
Cheers,
Angelo
The OSMImporter class in the Neo4j Spatial code base has an example of using the batch inserter to import OSM data. The main thing is that the batch inserter is not really supported by Neo4j Spatial, so you need to do a few things manually. If you look at the class OSMImporter.OSMBatchWriter, you will see how it does things. It does not use the SimplePointLayer at all, since that does not support the batch inserter; it creates the graph structure it wants directly. The simple point layer is quite simple, certainly much simpler than the OSM model created by the code I'm referencing, so I think you should be able to write a batch-inserter-compatible version yourself without too much trouble.
What I would recommend is that you create the layer and nodes using the batch inserter to create the correct graph structure, then switch to the normal embedded API and use that to iterate through the nodes and add them to the spatial index.
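A rough sketch of that two-phase approach, assuming Neo4j 2.0-era APIs and the property names from the question (untested; for a real import you would also split phase 2 into several transactions):
// Phase 1: write plain nodes with the BatchInserter (no spatial classes involved),
// then shut it down so the store is flushed to disk.
BatchInserter inserter = BatchInserters.inserter(dbPath.getAbsolutePath(), NEO4J_CFG);
// ... create nodes with osmId/latitudine/longitudine properties ...
inserter.shutdown();

// Phase 2: reopen the store with the normal embedded API and index the nodes.
GraphDatabaseService db = new GraphDatabaseFactory().newEmbeddedDatabase(dbPath.getAbsolutePath());
SpatialDatabaseService spatial = new SpatialDatabaseService(db);
try (Transaction tx = db.beginTx())
{
    SimplePointLayer layer = spatial.createSimplePointLayer("testBatch", "latitudine", "longitudine");
    for (org.neo4j.graphdb.Node n : GlobalGraphOperations.at(db).getAllNodes())
    {
        if (n.hasProperty("latitudine") && n.hasProperty("longitudine"))
        {
            layer.add(n);
        }
    }
    tx.success();
}
db.shutdown();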
I use Lucene.NET for my site, and it indexes some words fine, but it doesn't index some words like "الله"!
I have inspected the indexed file with Luke, and it shows that "الله" is not indexed.
I have used ArabicAnalyzer for indexing.
You can see my site at www.qoranic.com: if you search for "مریم" it works fine, but if you search for "الله" it finds nothing.
Any idea is appreciated.
The ArabicAnalyzer does some transformation to that input; it will transform the input الله to له. This is due to the use of the ArabicStemFilter (and ArabicStemmer), which is documented as follows:
Stemming is defined as:
- Removal of attached definite article, conjunction, and prepositions.
- Stemming of common suffixes.
This shouldn't be an issue, since you should be parsing the user-provided query through the same analyzer when searching, producing the same tokens.
Here's the sample code I used to see what terms an analyzer produces from a given input.
using System;
using Lucene.Net.Analysis.AR;
using Lucene.Net.Analysis.Tokenattributes;
using System.IO;

namespace ConsoleApplication {
    public static class Program {
        public static void Main() {
            var luceneVersion = Lucene.Net.Util.Version.LUCENE_30;

            var input = "الله";

            var analyzer = new ArabicAnalyzer(luceneVersion);
            var inputReader = new StringReader(input);
            var stream = analyzer.TokenStream("fieldName", inputReader);
            var termAttribute = stream.GetAttribute<ITermAttribute>();

            while (stream.IncrementToken()) {
                Console.WriteLine("Term: {0}", termAttribute.Term);
            }

            Console.WriteLine("Done.");
            Console.ReadLine();
        }
    }
}
You can overcome this behavior (remove the stemming) by writing a custom Analyzer which uses the ArabicNormalizationFilter, just as ArabicAnalyzer does, but without the call to ArabicStemFilter.
public class CustomAnalyzer : Analyzer {
    public override TokenStream TokenStream(String fieldName, TextReader reader) {
        TokenStream result = new ArabicLetterTokenizer(reader);
        result = new LowerCaseFilter(result);
        result = new ArabicNormalizationFilter(result);
        return result;
    }
}
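The same analyzer then needs to be used at both index time and query time. A hypothetical usage sketch (the directory variable and field name are placeholders):
var analyzer = new CustomAnalyzer();

// Index time: pass the analyzer to the IndexWriter.
var writer = new IndexWriter(directory, analyzer, IndexWriter.MaxFieldLength.UNLIMITED);

// Query time: parse the user's input with the same analyzer.
var parser = new QueryParser(Lucene.Net.Util.Version.LUCENE_30, "fieldName", analyzer);
var query = parser.Parse("الله");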
I am using Entity Framework 4.0, and I am calling a stored procedure which returns an ObjectResult. I tried to use Moq and have not been able to mock ObjectResult. Has anybody been able to mock ObjectResult using Moq?
TIA
Yaz
I have this problem too; I'm using database-first design, and the EF 4.x DbContext Generator template to generate my DbContext.
I'm modifying the context generator in the following ways:
- Instead of DbSet<T> for entity sets, I am returning IDbSet<T>; this allows me to use InMemoryDbSet<T> for unit testing (Google for implementations).
- Instead of ObjectResult<T> for stored procedures, I am returning IEnumerable<T>. Inside the virtual method created for the stored procedure, I load the ObjectResult<T> into a List<T> and return that (see the sketch below).
- Finally, I extract an interface exposing the entity sets and function imports. This means I can then mock the entire DbContext for super-speedy unit tests. You should still write integration tests that test the database functionality for real, of course.
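A hedged sketch of the second change, as it might look in the generated context (the entity and procedure names are made up; the key point is that the sealed ObjectResult<T> is materialized before it escapes the method):
public virtual IEnumerable<Customer> GetCustomersByRegion(int regionId)
{
    // Execute the function import as usual, but hide ObjectResult<T> from
    // callers by loading it into a List<T> before returning.
    var regionIdParameter = new ObjectParameter("RegionId", regionId);
    return ((IObjectContextAdapter)this).ObjectContext
        .ExecuteFunction<Customer>("GetCustomersByRegion", regionIdParameter)
        .ToList();
}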
ObjectResult (according to the MSDN docs) is a sealed class, and as such you cannot mock it. The way mocking libraries like Moq work is that when you do something like
Mock<Foo> fooMock = new Mock<Foo>();
it generates (using Reflection.Emit and various other magic tricks) a class that looks a little like this:
public class FooMockingProxy : Foo {
    public override void This() {
        // Mocking interceptors to support Verify and Setup
    }

    public override string That() {
        // Mocking interceptors to support Verify and Setup
    }
}
i.e. it takes the class (or interface) you want to mock and subclasses it (or implements it, in the case of an interface). This allows it to put in instrumentation that checks whether a method has been called or returns a certain value (this supports the various Setup and Verify methods). The restrictions on this style of mocking are:
- sealed classes (they can't be subclassed),
- private members (they can't be accessed from a subclass),
- methods or properties that are not virtual (and therefore cannot be overridden).
One technique you can use when approaching sealed classes is to wrap them in some kind of interface that is mockable. Alternatively, you can try to mock an interface that the sealed class implements and that your code consumes; a sketch of the wrapping approach follows.
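For instance, a minimal sketch of the wrapping approach (all type and member names here are hypothetical):
public interface ICustomerRepository
{
    IEnumerable<Customer> GetCustomersByRegion(int regionId);
}

public class CustomerRepository : ICustomerRepository
{
    private readonly MyEntities _context;

    public CustomerRepository(MyEntities context)
    {
        _context = context;
    }

    public IEnumerable<Customer> GetCustomersByRegion(int regionId)
    {
        // The sealed ObjectResult<Customer> never leaves this method.
        return _context.GetCustomersByRegion(regionId).ToList();
    }
}

// Tests then mock ICustomerRepository instead of ObjectResult<T>.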
ObjectResult is typically used with LINQ, so it is mainly consumed as an IEnumerable. Even though the class is sealed in older versions of Entity Framework, newer versions make it mockable, and you can set up its IEnumerable behavior.
Here is some sample code where TResult is the stored procedure's result type and TDbContext is the DbContext; it will return one item.
var valueEnumerator = new TResult[] { new TResult() }.AsEnumerable().GetEnumerator(); // AsEnumerable() needs System.Linq
var mockStoredProcedure = new Mock<ObjectResult<TResult>>();
mockStoredProcedure.Setup(x => x.GetEnumerator()).Returns(valueEnumerator);

var mockEntities = new Mock<TDbContext>();
mockEntities.Setup(x => x.[stored procedure method]()).Returns(mockStoredProcedure.Object);
You can add any values to the array in the example above, or use any other collection (you only need the enumerator).
Give this code a try. It works for me with EF 6.1.2 and Moq 4.2
I could not find a way to mock a sealed class, and wanted to test that the parameters of a stored procedure matched the entity model. Here is my solution:
namespace CardiacMonitoringTest
{
    [TestClass]
    public class CardiacMonitoringDataTest
    {
        [TestMethod]
        public void TestEntityStoredProcedure()
        {
            List<string> SPExceptions = new List<string>();
            SPExceptions.Add("AfibBurdenByDay");
            SPExceptions.Add("GetEventTotalsByCategory");
            EntitiesCon db = new EntitiesCon();
            foreach (MethodInfo mi in typeof(EntitiesCon).GetMethods())
            {
                string ptype = mi.ReturnType.Name;
                if (ptype.IndexOf("ObjectResult") > -1)
                {
                    List<SqlParameter> ExtractedParameters =
                        SPListParm(ConfigurationManager.ConnectionStrings["CardiacMonitoring"].ConnectionString, mi.Name);
                    ExtractedParameters = ExtractedParameters
                        .Where(a => a.ParameterName != "@RETURN_VALUE" && a.ParameterName != "@TABLE_RETURN_VALUE")
                        .ToList();
                    ParameterInfo[] EntityParameters = mi.GetParameters();
                    if ((from b in SPExceptions where b.ToLower() == mi.Name.ToLower() select b).Count() > 0)
                    {
                        continue;
                    }
                    foreach (ParameterInfo pi in EntityParameters)
                    {
                        try
                        {
                            Assert.IsTrue(
                                (from a in ExtractedParameters
                                 where pi.Name.ToLower() == a.ParameterName.Replace("@", "").ToLower()
                                 select a).Count() == 1);
                        }
                        catch (Exception)
                        {
                            Trace.WriteLine("Failed SP:" + mi.Name + " at parameter:" + pi.Name);
                            throw;
                        }
                        try
                        {
                            Assert.IsTrue(EntityParameters.Count() == ExtractedParameters.Count());
                        }
                        catch (Exception)
                        {
                            Trace.WriteLine("Failed SP:" + mi.Name + " on parameter count:" + EntityParameters.Count() +
                                " with detected count as:" + ExtractedParameters.Count());
                            throw;
                        }
                    }
                }
            }
        }

        private List<SqlParameter> SPListParm(string ConnectionString, string SPName)
        {
            try
            {
                SqlConnection conn = new SqlConnection(ConnectionString);
                SqlCommand cmd = new SqlCommand(SPName, conn);
                cmd.CommandType = CommandType.StoredProcedure;
                conn.Open();
                SqlCommandBuilder.DeriveParameters(cmd);
                SqlParameter[] prmDetectParameters = new SqlParameter[cmd.Parameters.Count];
                cmd.Parameters.CopyTo(prmDetectParameters, 0);
                List<SqlParameter> toReturn = new List<SqlParameter>();
                toReturn.AddRange(prmDetectParameters);
                return (toReturn);
            }
            catch (Exception)
            {
                Trace.WriteLine("Failed detecting parameters for SP:" + SPName);
                throw;
            }
        }
    }
}
I have an object which has a circular reference to another object. Given the relationship between these objects, this is the right design.
To illustrate:
Machine => Customer => Machine
As expected, I run into an issue when I try to use Json to serialize a machine or customer object. What I am unsure of is how to resolve it, as I don't want to break the relationship between the Machine and Customer objects. What are the options for resolving this?
Edit
Presently I am using Json method provided by the Controller base class. So the serialization I am doing is as basic as:
Json(machineForm);
Update:
Do not try to use the NonSerializedAttribute, as the JavaScriptSerializer apparently ignores it.
Instead, use the ScriptIgnoreAttribute from System.Web.Script.Serialization.
public class Machine
{
    public Customer Customer { get; set; }

    // Other members
    // ...
}

public class Customer
{
    [ScriptIgnore]
    public Machine Machine { get; set; } // Parent reference?

    // Other members
    // ...
}
This way, when you toss a Machine into the Json method, it will traverse the relationship from Machine to Customer but will not try to go back from Customer to Machine.
The relationship is still there for your code to do as it pleases with, but the JavaScriptSerializer (used by the Json method) will ignore it.
I'm answering this despite its age because it is the third result (currently) from Google for "json.encode circular reference", and although I don't completely agree with the answers above, using the ScriptIgnoreAttribute assumes that you won't, anywhere in your code, want to traverse the relationship in the other direction for some JSON. I don't believe in locking down your model because of one use case.
It did inspire me to use this simple solution.
Since you're working in a View in MVC, you have the Model, and you simply assign the Model to ViewData.Model within your controller. So go ahead and use a LINQ query within your View to flatten the data nicely, removing the offending circular reference for the particular JSON you want, like this:
var jsonMachines = from m in machineForm
                   select new
                   {
                       m.X,
                       m.Y, // other Machine properties you desire
                       Customer = new
                       {
                           m.Customer.Id,
                           m.Customer.Name, // other Customer properties you desire
                       }
                   };
return Json(jsonMachines);
Or if the Machine -> Customer relationship is 1..* -> * then try:
var jsonMachines = from m in machineForm
                   select new
                   {
                       m.X,
                       m.Y, // other machine properties you desire
                       Customers = new List<Customer>(
                           (from c in m.Customers
                            select new Customer()
                            {
                                Id = c.Id,
                                Name = c.Name,
                                // other Customer properties you desire
                            }).Cast<Customer>())
                   };
return Json(jsonMachines);
Based on txl's answer, you have to disable lazy loading and proxy creation; then you can use the normal methods to get your data.
Example:
// Retrieve items with Json:
public JsonResult Search(string id = "")
{
    db.Configuration.LazyLoadingEnabled = false;
    db.Configuration.ProxyCreationEnabled = false;
    var res = db.Table.Where(a => a.Name.Contains(id)).Take(8);
    return Json(res, JsonRequestBehavior.AllowGet);
}
I used to have the same problem. I have created a simple extension method that "flattens" L2E objects into an IDictionary. An IDictionary is serialized correctly by the JavaScriptSerializer, and the resulting JSON is the same as when serializing the object directly.
Since I limit the depth of serialization, circular references are avoided. It also will not include 1->n linked tables (EntitySets).
private static IDictionary<string, object> JsonFlatten(object data, int maxLevel, int currLevel) {
    var result = new Dictionary<string, object>();
    var myType = data.GetType();
    var myAssembly = myType.Assembly;
    var props = myType.GetProperties();
    foreach (var prop in props) {
        // Remove EntityKey etc.
        if (prop.Name.StartsWith("Entity")) {
            continue;
        }
        if (prop.Name.EndsWith("Reference")) {
            continue;
        }
        // Do not include lookups to linked tables
        Type typeOfProp = prop.PropertyType;
        if (typeOfProp.Name.StartsWith("EntityCollection")) {
            continue;
        }
        // If the type is from my assembly (== custom type),
        // include it, but flattened
        if (typeOfProp.Assembly == myAssembly) {
            if (currLevel < maxLevel) {
                result.Add(prop.Name, JsonFlatten(prop.GetValue(data, null), maxLevel, currLevel + 1));
            }
        } else {
            result.Add(prop.Name, prop.GetValue(data, null));
        }
    }
    return result;
}

public static IDictionary<string, object> JsonFlatten(this Controller controller, object data, int maxLevel = 2) {
    return JsonFlatten(data, maxLevel, 1);
}
My Action method looks like this:
public JsonResult AsJson(int id) {
    var data = Find(id);
    var result = this.JsonFlatten(data);
    return Json(result, JsonRequestBehavior.AllowGet);
}
In Entity Framework version 4, there is an option available: ObjectContextOptions.LazyLoadingEnabled.
Setting it to false should avoid the 'circular reference' issue. However, you will have to explicitly load the navigation properties that you want to include.
See: http://msdn.microsoft.com/en-us/library/bb896272.aspx
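With an ObjectContext-based model, that could look like the following sketch (the context and entity set names are made up; Include eagerly loads the navigation property you still want serialized):
using (var context = new MyEntities()) // hypothetical generated ObjectContext
{
    context.ContextOptions.LazyLoadingEnabled = false;
    // Explicitly load the navigation properties you want in the JSON:
    var machines = context.Machines.Include("Customer").ToList();
    return Json(machines, JsonRequestBehavior.AllowGet);
}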
Since, to my knowledge, you cannot serialize object references but only copies, you could try employing a bit of a dirty hack that goes something like this:
- Customer should serialize its Machine reference as the machine's id.
- When you deserialize the JSON, you can then run a simple function on top of it that transforms those ids into proper references (see the sketch below).
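A hedged sketch of that rewiring step (all type and property names are made up for illustration):
// The deserialized DTO carries the back-reference as an id only.
public class CustomerDto
{
    public int Id { get; set; }
    public int MachineId { get; set; } // back-reference flattened to an id
}

// After deserializing, rebuild the object graph from an id lookup:
public static Customer Rewire(CustomerDto dto, IDictionary<int, Machine> machinesById)
{
    var customer = new Customer { /* copy the simple fields from dto */ };
    customer.Machine = machinesById[dto.MachineId]; // id -> proper reference
    customer.Machine.Customer = customer;           // restore the circular link
    return customer;
}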
You need to decide which is the "root" object. Say the machine is the root; then the customer is a sub-object of machine. When you serialise a machine, it will serialise the customer as a sub-object in the JSON, and when the customer is serialised, it will NOT serialise its back-reference to the machine. When your code deserialises the machine, it will deserialise the machine's customer sub-object and reinstate the back-reference from the customer to the machine.
Most serialisation libraries provide some kind of hook to modify how deserialisation is performed for each class. You'd need to use that hook to modify deserialisation for the machine class to reinstate the back-reference in the machine's customer. Exactly what that hook is depends on the JSON library you are using.
I've had the same problem this week as well, and could not use anonymous types because I needed to implement an interface asking for a List<MyType>. After making a diagram showing all relationships with navigability, I found out that MyType had a bidirectional relationship with MyObject, which caused this circular reference, since each held a reference to the other.
After deciding that MyObject did not really need to know MyType, and thereby making it a unidirectional relationship, the problem was solved.
What I have done is a bit radical, but I don't need the property that causes the nasty circular reference, so I set it to null before serializing:
SessionTickets result = GetTicketsSession();
foreach (var r in result.Tickets)
{
    r.TicketTypes = null; // those two were creating the problem
    r.SelectedTicketType = null;
}
return Json(result);
If you really need those properties, you can create a viewmodel which does not hold circular references, but perhaps keeps some id of the important element that you can use later for restoring the original value.