I'm working on visualizing several GeoJSON files with a large set of properties. I would like to use JSON-LD to add some meaning to some of these properties. I don't have a lot of experience with JSON-LD, but I successfully applied jsonld.js to expand, compact, etc. my GeoJSON file and @context. In doing so I noticed that the end result only contains the graph that is actually described in the context. I can understand that, but since it only represents a small part of all my properties, I have some difficulty using the results.
It would help me if I could somehow merge the results of the JSON-LD operation with the original GeoJSON file, e.g.:
"properties": {
"<http://purl.org/dc/terms/title>": "My Title",
"<http://purl.org/dc/terms/type>": "<http://example.com/mytype>",
"NonJSONLDPropertyKey" : "NonJSONLDPropertyValue",
etc.
I would still be able to recognize the properties with a URI, but could also work with the non-JSON-LD properties. Any suggestions on how this might work? Or is there a better approach?
You could map all other properties to blank nodes, that is, identifiers that are scoped to the document. The simplest way to do so is to add a
"#vocab": "_:"
declaration to your context.
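For example, with a context roughly like the one below (a trimmed sketch; the dcterms term definitions are only illustrative), unmapped keys such as NonJSONLDPropertyKey survive expansion as blank-node properties instead of being dropped:

{
  "@context": {
    "@vocab": "_:",
    "title": "http://purl.org/dc/terms/title",
    "type": { "@id": "http://purl.org/dc/terms/type", "@type": "@id" }
  },
  "properties": {
    "title": "My Title",
    "type": "http://example.com/mytype",
    "NonJSONLDPropertyKey": "NonJSONLDPropertyValue"
  }
}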
When I use a map constructor like:
Person p = new Person(name: "Bob")
through something that is called via a grails.gsp.PageRenderer, the field values are not populated. When I use an empty constructor and then set the fields individually like:
Person p = new Person()
p.name = "Bob"
it succeeds. When I use the map constructor via a render call, it also succeeds.
Any ideas as to why this is the case?
Sample project is here in case anyone wants to dig deeper: https://github.com/danduke/constructor-test/
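For context, the shape of the code is roughly this (a trimmed sketch; the service and template names are made up, the real code is in the repo above):

import grails.gsp.PageRenderer

class ReportService {

    PageRenderer groovyPageRenderer   // injected by Grails

    String renderPiece() {
        // The rendered GSP calls a tag library which does, in effect:
        //   Person p = new Person(name: "Bob")      // p.name comes back null here
        // whereas
        //   Person p = new Person(); p.name = "Bob" // works as expected
        return groovyPageRenderer.render(template: '/pieces/piece', model: [:])
    }
}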
Actual use case, as requested by Jeff below:
I have a computationally expensive view to render. Essentially it's a multi-thousand page (when PDF'd) view of some data.
This view can be broken into smaller pieces, and I can reliably determine when each has changed.
I render one piece at a time, submitting each piece to a fixed size thread pool to avoid overloading the system. I left this out of the example project, as it has no bearing on the results.
I cache the rendered results and evict them by key when data in that portion of the view has changed. This is why I am using a page renderer.
Each template used may make use of various tag libraries.
Some tag libraries need to load data from other applications in order to display things properly (actual use case: loading preferences from a shared repository for whether particular items are enabled in the view)
When loaded, these items need to be turned into an object. In my case, it's a GORM object. This is why I am creating a new object at all.
There are quite a few opportunities for improvement in my actual use case, and I'm open to suggestions on that. However, the simplest possible (for me) demonstration of the problem still does suggest that there's a problem. I'm curious whether it should be possible to use map constructors in something called from a PageRenderer at all. I'm surprised that it doesn't work, and it feels like a bug, but obviously a very precise and edge case one.
"Technically it is a bug" (which is the best kind of bug!), and has been reported here: https://github.com/grails/grails-core/issues/11870
I'll update this if/when additional information is available.
I am experimenting with Parse for creating the backend for my application and I need to support localized data.
I can't be the first one who has tried to do this, but I am unable to find anything about it. I was thinking of keeping the data like this:
// Post class
{
    "title": {
        "en": "Good morning!",
        "de": "Guten Tag!"
    },
    // Other properties
}
But then the queries would need to be targeted against a specific localization on the client side, since you can't query the title property directly. So I need to do some client-side magic first. Does this seem like a bad way to do it? Has this been solved better?
It depends on what the data is and how it's being added/updated. I wouldn't use a dictionary with multiple keys like that. I'd either use separate objects with a language column, so I could query for just the language I want, or use multiple language-specific columns, so I could include only what I want. The former is easier to manage and likely more efficient in the long run.
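For the first approach, a query could look roughly like this (a sketch; the class and column names PostLocalization, language, post and title are just illustrative):

// Fetch the localized row for one post in the user's language
const query = new Parse.Query("PostLocalization");
query.equalTo("language", "en");
query.equalTo("post", post);              // pointer back to the Post object
query.first().then(function (localization) {
  console.log(localization.get("title")); // "Good morning!"
});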
I'm using Neo4j (but this probably also applies to other databases). The user can provide his own key/value pairs, but I also need to define some properties for the system. How do I prevent a name clash (on the key)? I could prefix all the system properties, but that seems a bit weird. I could also put all the system properties on another node, but that might make for some difficult queries. What's a good way to solve this?
Neo4j is a property graph database.
Basically, there is no magic there. You already mentioned all possible solutions.
From my perspective, the best solution is to add a prefix to user-defined properties (for example #). This keeps queries simple enough and doesn't cause any performance problems.
Additionally, if these properties are read-only and you are never going to run queries against them, then you can look into storing the user-defined data as JSON in your nodes:
SET n.user_data = '{"key": "value"}'
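For example, a rough Cypher sketch of the prefix idea (the label and keys are illustrative; backticks let you use # in property names):

// System-managed properties keep plain names; user-supplied keys get a '#' prefix
CREATE (n:Item {
  created_at: timestamp(),
  `#color`: 'red',
  `#size`: 'XL'
})
RETURN n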
I hope this isn't too vague, but I'm stuck on a problem that has put me in an Unfortunate Position.
I'm a Flash developer getting my feet wet with AS3 and am trying to build an interior decoration tool for a client. My thinking so far has been: create the basic user interface, get the screen flow down, and then finally use a couple of simple arrays to store user selections and stuff like that.
Naturally my 'couple of simple arrays' is totally inadequate to model the many user decisions that my program has to take into account. So I find myself trying to create an enormous, multi-dimensional array with several layers of nesting, and Panic is setting in.
Here's an example of my thinking for the 'bedding' component of the application in pseudo ActionScript:
bedding['size'] = 'king':String
bedding['cover'] = cover:Array
cover['type'] = 'coverlet':String
cover['style'] = 'style_one':String
cover['variation'] = 'variation_one':String
cover['fabric'] = fabrics:Array
fabrics[0] = 'paisley':String
fabrics[1] = 'argyle':String
fabrics[2] = 'plaid':String
cover['trim'] = trims:Array
trims[0] = trim_pair:Array
trim_pair['type'] = 'trim_one':String
trim_pair['color'] = 'blue':String
trims[1] = trim_pair:Array
trims[2] = trim_pair:Array
cover['embellishments'] = embellishment_pair:Array
embellishment_pair['type'] = 'monogram':String
embellishment_pair['letters'] = 'TL':String
... keep in mind that this is just a fraction of what goes into bedding, and there are several other arrays like this that would go into a room, such as flooring and walls and furniture... all equally complex. And I'll need to frequently access different combinations, like how many options under bedding have no value associated, and things like that.
So, I realize I'm out of my league and am going to get hurt on this, but I'd like to try to get this right so that I get better and any help you guys can provide is great.
My questions are:
1) Could it be that using nested arrays like this actually isn't such a bad thing and I should just stick it out? That would surprise me, but I want to make sure I'm not already on the right path.
2) If not, where do I go from here if I want to do this right?
Off the top of my head I feel like I could maybe make everything class based. So my sheets are a class, and beds have instances of sheets, and rooms have instances of beds... etc. I think it would be complicated but might be the way to go.
Or maybe, I go the XML route and store all of the room options in nested blank XML nodes that a user then populates as they move through the application.
These are my thoughts but I'd like to hear what more experienced members of the community say.
Thank you so much for your help!
My suggestion would be to use a strongly typed model. Look into using collections and value objects to store and retrieve data. A collection could be a class that wraps an Array and provides a clean interface for fetching the value objects that it stores. Value objects are simple objects representing data that can be assembled in various ways to create more complex collections. Value objects can also be passed around to transfer data to various parts of an application. The advantage of using collections and value objects is that your code will be (potentially) more explicit and overall easier to read than if you went with a dynamic approach. For some, the downside to this approach is that you end up with too many classes. Personally, I prefer working with many small to medium size classes versus one monolithic class.
If you are not familiar with the concept of value objects: http://en.wikipedia.org/wiki/Data_transfer_object.
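As a rough sketch for the bedding example above (the class and property names are made up, and in AS3 each public class lives in its own .as file):

// TrimVO.as - a simple value object
package model {
    public class TrimVO {
        public var type:String;
        public var color:String;

        public function TrimVO(type:String = "", color:String = "") {
            this.type = type;
            this.color = color;
        }
    }
}

// TrimCollection.as - wraps an Array and only hands out TrimVO instances
package model {
    public class TrimCollection {
        private var _items:Array = [];

        public function addItem(item:TrimVO):void {
            _items.push(item);
        }

        public function getItemAt(index:int):TrimVO {
            return _items[index] as TrimVO;
        }

        public function get length():int {
            return _items.length;
        }
    }
}

A hypothetical BeddingVO could then hold a TrimCollection, a fabrics Array, and so on, instead of layers of untyped nested arrays.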
AFAIK, AS3 is not well suited to the type of complex data model you're trying to create.
You need to completely decouple the UI/Flash tier from your "inventory" system. The UI should be completely abstract, with no knowledge of, or coupling to, your data schema or content. This could be accomplished with a middle-tier webservice-styled system that handles all the business logic around searching/retrieving/updating your data.
Store everything your UI needs to handle presentation-side rendering in your product metadata. This will allow you to add new products and types without having to update the UI every time new products are introduced. For example, if a product comes with an image, store a URI to the image with the product record and load it on demand. You could extend this all the way to custom animations, I believe: just reference an external .SWF file and load it into your application on request.
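For instance, loading a presentation asset whose URI comes from the product metadata might look roughly like this (a sketch; product and imageUri are made-up names, and in practice the record would come from your middle tier):

import flash.display.Loader;
import flash.events.Event;
import flash.net.URLRequest;

var product:Object = { imageUri: "http://example.com/products/123.png" }; // from your metadata

var loader:Loader = new Loader();
loader.contentLoaderInfo.addEventListener(Event.COMPLETE, onAssetLoaded);
loader.load(new URLRequest(product.imageUri)); // URI stored with the product record

function onAssetLoaded(e:Event):void {
    addChild(loader.content); // works for images and for external .SWF files alike
}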
I am working on a large project at work that requires me to create OData services for a large variety of Remote Function Calls (RFCs). I was able to work out how to model and create OData services for simple RFCs; however, I am struggling with more complex RFCs that use multiple tables as well as simple exporting and importing parameters.
I want to output these tables, as well as the importing and exporting parameters, via GetEntity and GetEntitySet in just one call. I have done extensive searching online, but the best solutions seem to be redefining the RFCs or calling the OData service multiple times, which is not ideal.
Is there any way to combine multiple tables with several entries in the output? When I say output, I am referring to the resulting XML from GetEntity/GetEntitySet.
For example, take the below fake RFC definition that takes a PERNR, and outputs a list of direct reports and a structure of employee details.
IMPORTING
PERNR
EXPORTING
S_EMPLOYEE_DETAILS
TABLES
T_DIRECT_REPORTS
Is there a way to combine the table, structure, and importing parameters into one output?
The first thing to understand is that the OData protocol is not intended to work solely like classical function calls. It is instead based on an entity/relationship kind of model.
So in your case I'd suggest creating an entity type named 'Employee' with the appropriate properties from your structure S_EMPLOYEE_DETAILS. With this you can, for example, implement the method GET_EMPLOYEE_ENTITY to retrieve a single employee instance via PERNR.
The next thing to do would be to get the direct reports of this employee. Since this is a 1:N relation from Employee to Employee in your case, you can create a navigation property called 'DirectReports' with the appropriate cardinality. Then in your GET_EMPLOYEE_ENTITYSET you can return the contents of table T_DIRECT_REPORTS (note that the navigation property is not empty and you have to read the keys of the parent!).
Once you have this working you can move on to the 'best practice' and implement the method GET_EXPANDED_ENTITY, filling the expand clauses yourself. In my opinion this is the preferred way, as you don't need to implement two separate methods, and it is considered faster as well (when many expands happen).
Both methods of implementation can be called via
GET EmployeeSet('12345678')?$expand=DirectReports
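For what it's worth, a heavily abbreviated sketch of the GET_EXPANDED_ENTITY variant could look roughly like this (the RFC name Z_GET_EMPLOYEE_DATA, the MPC class ZCL_ZEMPLOYEE_MPC, the DDIC structure ZS_EMPLOYEE_DETAILS and the field names are all placeholders; exception handling is omitted):

METHOD /iwbep/if_mgw_appl_srv_runtime~get_expanded_entity.

  " Deep structure: Employee fields plus a table named after the DirectReports navigation property
  DATA: BEGIN OF ls_expanded_employee.
          INCLUDE TYPE zcl_zemployee_mpc=>ts_employee.
  DATA:   directreports TYPE zcl_zemployee_mpc=>tt_employee,
        END OF ls_expanded_employee.

  DATA ls_details TYPE zs_employee_details.                 " structure returned by the RFC (placeholder)
  DATA lt_reports LIKE ls_expanded_employee-directreports.  " simplification: reuse the MPC table type

  " Key of the requested Employee
  READ TABLE it_key_tab INTO DATA(ls_key) WITH KEY name = 'Pernr'.

  " One RFC call delivers both the exporting structure and the table
  CALL FUNCTION 'Z_GET_EMPLOYEE_DATA'
    EXPORTING
      pernr              = ls_key-value
    IMPORTING
      s_employee_details = ls_details
    TABLES
      t_direct_reports   = lt_reports.

  MOVE-CORRESPONDING ls_details TO ls_expanded_employee.
  ls_expanded_employee-directreports = lt_reports.

  " Return the deep structure and tell the framework the expand is already handled
  copy_data_to_ref( EXPORTING is_data = ls_expanded_employee
                    CHANGING  cr_data = er_entity ).
  APPEND 'DIRECTREPORTS' TO et_expanded_tech_clauses.

ENDMETHOD.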