Mapping to Core Data (inconsistently named web API) - iOS

I'm working on importing and exporting data between Core Data and a web API.
The web API I'm interfacing with isn't consistent in its own naming, and certainly doesn't match the naming conventions I'd use for attributes in my Core Data model. (I don't have control over changing the API's conventions.)
To illustrate the issue: in one API call, the data for a contact might look something like this:
"rows": [
    {
        "name": "Bob",
        "group": "Testing Group A",
        "email_address": "bob@fakedata.com"
    }
]
And in another call that also returns contacts, it might look something like this:
"rows": [
    {
        "Name": "Bob",
        "group_name": "Testing Group A",
        "Email": "bob@fakedata.com"
    }
]
Notice the small differences in the key naming? In the past, I've resolved issues like this by having a "mapping" for each API call. The mapping is just an NSDictionary whose keys are the Core Data attribute names I use and whose values are the API server's keys.
So resolving these two calls would require each to have an NSDictionary like the following:
dict = @{ @"name" : @"name", @"group" : @"group", @"email" : @"email_address" };
dict = @{ @"name" : @"Name", @"group" : @"group_name", @"email" : @"Email" };
This works pretty well, and it's certainly one path to solving this problem, but keeping these mappings in every API call isn't very elegant and is poor design for maintainability.
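A minimal sketch of that per-call mapping approach in Swift (key names come from the two example responses above; the function name is just illustrative):

```swift
import Foundation

// Normalize one API response row into a dictionary keyed by our own
// attribute names, given a per-call mapping of [attributeName: apiKey].
func normalize(row: [String: Any], mapping: [String: String]) -> [String: Any] {
    var normalized: [String: Any] = [:]
    for (attribute, apiKey) in mapping {
        if let value = row[apiKey] {
            normalized[attribute] = value
        }
    }
    return normalized
}

// One mapping per call, mirroring the NSDictionary approach above.
let secondCallMapping = ["name": "Name", "group": "group_name", "email": "Email"]

let row: [String: Any] = ["Name": "Bob", "group_name": "Testing Group A", "Email": "bob@fakedata.com"]
let normalized = normalize(row: row, mapping: secondCallMapping)
// normalized["email"] is "bob@fakedata.com", regardless of which key the API used
```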
So the real question here is: does anyone have a better solution for managing the mapping of web APIs to Core Data? Obviously a well-written web API is the ideal solution, but even mapping a well-written API can involve minor differences (for example, Core Data requires attributes to begin with a lowercase letter).
My proposed solution is to add the mappings to each attribute's "User Info" dictionary in the Core Data model, but I have zero experience using this feature of attributes, and I don't know if there's a far better option I've overlooked. Thanks for any help.
Additional notes: Yes, I've used RestKit extensively, and it does have convenient mappings (similar to the NSDictionary approach I explained above). But for this project, I'm eliminating the dependency on a framework that I don't control and don't completely understand. I'm pulling this data in with a simple NSURLConnection.
Update
If you go down this route (which has been very nice, by the way; the accepted answer helped a lot), I recommend not using the keyword "map", simply because it's not the default. Use "key" instead, since it doesn't require making two edits to the user info field. My particular project has many mappings, and this has been annoying. Too late to change now, but learn from my mistake.

Wow, that is one screwed up web API.
Your suggested approach is more or less how I'd deal with it. But instead of having multiple mapX keys, I'd use a single map key whose value is a comma-separated list of mappings. In this case, the key map would have a value of Company,Company_Name,company. That way you read one known key instead of repeatedly testing to see if the next one exists. You can easily convert the comma-delimited list to an array with NSString's componentsSeparatedByString: method.
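Assuming the user info values come through as strings, parsing that comma-separated map value into a reverse lookup might look like this (the plain dictionary here is a stand-in for the NSAttributeDescription.userInfo values you'd read from the model):

```swift
import Foundation

// Build a reverse lookup (API key -> attribute name) from each
// attribute's "map" user-info value, which holds a comma-separated
// list of every key variant the API uses for that attribute.
func apiKeyLookup(from userInfo: [String: String]) -> [String: String] {
    var lookup: [String: String] = [:]
    for (attribute, mapValue) in userInfo {
        for apiKey in mapValue.components(separatedBy: ",") {
            lookup[apiKey] = attribute
        }
    }
    return lookup
}

// Hypothetical user-info contents for the question's two contact calls.
let userInfo = [
    "email": "email_address,Email",
    "group": "group,group_name"
]
let lookup = apiKeyLookup(from: userInfo)
// lookup["Email"] == "email"; lookup["group_name"] == "group"
```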
A different approach would be to put all this in a property list that you can read at run time. That would be effective but I prefer keeping all of the information in one place, and the user info dictionary is ideal.
As an aside, for what it's worth, Core Data does not require that attribute names begin with a lowercase letter. However, Xcode's data model editor does enforce that restriction, forcing you to follow a guideline that you might well have cause to violate. If you're so inclined, you can edit the model file by hand and change attribute names to start with uppercase letters. The file is XML, and if the tool compatibility setting is Xcode 4.0 or higher, it's very easy to read. Once you do this you can even use Xcode's built-in class generation with those attributes.

Related

Internationalization/localization of data in Parse?

I am experimenting with Parse for creating the backend for my application and I need to support localized data.
I can't be the first one who's tried to do this, but I've been unable to find anything about it. I was thinking of keeping the data like this:
// Post class
{
    "title": {
        "en": "Good morning!",
        "de": "Guten Tag!"
    },
    // Other properties
}
But then queries would need to target a specific localization on the client side, since you can't query the title property directly, so I'd need to do some client-side magic first. Does this seem like a bad way to do it? Has this been solved better?
It depends what the data is and how it's being added and updated. I wouldn't use a dictionary with multiple keys like that. I'd either use separate objects with a language column, so I could query for just the language I want, or multiple language-specific columns, so I could include only what I want. The former is easier to manage and likely more efficient in the long run.
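The first option (separate objects with a language column) can be sketched with plain values standing in for real PFObject/PFQuery calls; fetching a locale then becomes a simple equality filter, analogous to query.whereKey("language", equalTo: "de") in Parse:

```swift
import Foundation

// One row per (post, language): the "language column" approach.
struct LocalizedPost {
    let postId: String
    let language: String
    let title: String
}

let rows = [
    LocalizedPost(postId: "p1", language: "en", title: "Good morning!"),
    LocalizedPost(postId: "p1", language: "de", title: "Guten Tag!")
]

// Filter by language, the way a Parse query would on the server.
func titles(for language: String, in rows: [LocalizedPost]) -> [String] {
    return rows.filter { $0.language == language }.map { $0.title }
}
// titles(for: "de", in: rows) == ["Guten Tag!"]
```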

How to implement a Persistent Cache in Siesta with a structured model layer

I'm using (and loving) Siesta to communicate with a REST web service in my Swift App. I have implemented a series of ResponseTransformers to map the API call responses to model classes so that the Siesta Resources are automatically parsed into object instances. This all works great.
I now want to implement a Siesta PersistentCache object to support an offline mode by having Siesta cache these objects to disk (rather than in memory) by storing them in Realm. I am not sure how to do this because the documentation says (about the EntityCache.writeEntity function):
This method can — and should — examine the entity’s content and/or headers and ignore it if it is not encodable. While they can apply type-based rules, however, cache implementations should not apply resource-based or url-based rules; use Resource.configure(...) to select which resources are cached and by whom.
In an attempt to conform to this guideline, I have created a specific PersistentCache object for each Resource type based on URL Pattern matching during Service Configuration:
class _GFSFAPI: Service {
    private init() {
        configure("/Challenge/*") { $0.config.persistentCache = SiestaRealmChallengeCache() }
    }
}
However, since the EntityCache protocol methods only include a reference to the Entity (which exposes raw content but not the typed objects), I don't see how I can call the realm write methods during the call to EntityCache.writeEntity or how to pull the objects out of Realm during EntityCache.readEntity.
Any suggestions about how to approach this would be greatly appreciated.
Excellent question. Having separate EntityCache implementations for each model could certainly work, though it seems like it might be burdensome to create all those little glue classes.
Models in the Cache
Your writeEntity() is called with whatever comes out at the end of all your response transformers. If your transformers are configured to spit out model classes, then writeEntity() sees models. If those models are Realm-friendly models, well, I don’t see any reason why you shouldn’t be able to just call realm.add(entity.content). (If that’s giving you problems, let me know with an update to the question.)
Conversely, when reading from the cache, what readEntity() returns does not go through the transformer pipeline again, so it should return exactly the same thing your transformers would have produced, i.e. models.
Cache Lookup Keys
The particular paragraph you quote from the docs is ill-written and perhaps a bit misleading. When it says you “should not apply resource-based or url-based rules,” it’s really just trying to dissuade you from parsing the forKey: parameter — which is secretly just a URL, but should remain opaque to cache implementations. However, any information you can gather from the given entity is fair game, including the type of entity.content.
The one wrinkle under the current API — and it is a serious wrinkle — is that you need to keep a mapping from Siesta’s key (which you should treat as opaque) to Realm objects of different types. You might do this by:
keeping a Realm model dedicated to keeping a polymorphic mapping from Siesta cache keys to Realm objects of various types,
by adding a siestaKey attribute and doing some kind of union query across models, or
by keeping a (cache key) → (model type, model ID) mapping outside of Realm.
I’d probably pursue the options in that order, but I believe you are in relatively unexplored (though perfectly reasonable) territory here using Realm as the backing for EntityCache. Once you’ve sussed out the options, I’d encourage you to file a Github issue for any suggested API improvements.
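For instance, the third option (a cache-key to model-type/ID index kept outside Realm) could be as small as this sketch, with plain Swift types standing in for the Realm objects and a hypothetical URL as the opaque key:

```swift
import Foundation

// A minimal index from an opaque Siesta cache key to the Realm model
// type and primary key it was stored under. Real code would persist
// this index too, so it survives relaunch.
struct CacheIndexEntry {
    let modelType: String
    let modelId: String
}

var cacheIndex: [String: CacheIndexEntry] = [:]

// writeEntity(_:forKey:) would record where the object was stored...
cacheIndex["https://api.example.com/Challenge/42"] =
    CacheIndexEntry(modelType: "Challenge", modelId: "42")

// ...and readEntity(forKey:) would use the entry to fetch the right
// object back out of Realm.
let entry = cacheIndex["https://api.example.com/Challenge/42"]
// entry?.modelType == "Challenge"
```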

Breeze JS client with dynamic objects

I'm investigating using Breeze for client side caching and querying. Unfortunately the existing web service returns (JSON) objects that for a given type may have variable number and type of fields. They will all have a unique id and a few base fields, but for example a Person may have name, age and address say, and another Person may have name, birthdate and favoriteColor.
What each Person has is described by metadata sent embedded into each object (so each Person also has a metadata field).
Querying is obviously problematic here but assume for now that we will not be querying on any field that is not on all items of a given type.
We are using AngularJS too, in case that is relevant.
My question is, how would one handle this situation using Breeze? Would we be better off just using a simple object cache and "querying" simply by iterating over the cache with a predicate function?
Perhaps you should take a look at John Papa's Pluralsight course on querying with the client cache, which is a complete demonstration of Breeze and AngularJS. You can also refer to this.
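For comparison, the simple-object-cache fallback the question mentions (iterating with a predicate function) can be sketched like this; it's shown in Swift only for illustration, and the field names are hypothetical, but the same shape applies in JavaScript:

```swift
import Foundation

// Objects share an id/type plus a variable bag of fields, matching the
// question's setup. Only base fields shared by all objects of a type
// should be used in predicates.
struct CachedObject {
    let id: String
    let type: String
    let fields: [String: String]
}

let cache = [
    CachedObject(id: "1", type: "Person", fields: ["name": "Ann", "age": "30"]),
    CachedObject(id: "2", type: "Person", fields: ["name": "Bob", "favoriteColor": "red"])
]

// "Query" by iterating over the cache with a predicate closure.
func query(_ cache: [CachedObject], where predicate: (CachedObject) -> Bool) -> [CachedObject] {
    return cache.filter(predicate)
}

let matches = query(cache) { $0.type == "Person" && $0.fields["name"] == "Bob" }
// matches.count == 1
```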

Merging JSON-LD results with original JSON

I'm working on visualizing several geojson files with a large set of properties. I would like to use JSON-LD to add some meaning to some of these properties. I don't have a lot of experience with JSON-LD, but I successfully applied jsonld.js to expand, compact, etc. my geojson file and @context. In doing so I noticed that the end result only returns the graph that is actually described in the context. I can understand that, but since that represents only a small part of all my properties, I have some difficulty using the results.
It would help me if I could somehow merge the results of the jsonld operation with the original geojson file, e.g.:
"properties": {
    "<http://purl.org/dc/terms/title>": "My Title",
    "<http://purl.org/dc/terms/type>": "<http://example.com/mytype>",
    "NonJSONLDPropertyKey": "NonJSONLDPropertyValue",
    etc.
}
I would still be able to recognize the properties with a URI, but could also work with the non-JSON-LD properties. Any suggestions on how this might work? Or is there a better approach?
You could map all other properties to blank nodes, that is, identifiers that are scoped to the document. The simplest way to do so is to add a
"@vocab": "_:"
declaration to your context.
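Spelled out, a context along these lines (the mapped term names are just illustrative) keeps the terms you care about, while every unmapped property falls back to a blank-node identifier:

```json
{
  "@context": {
    "@vocab": "_:",
    "title": "http://purl.org/dc/terms/title",
    "type": { "@id": "http://purl.org/dc/terms/type", "@type": "@id" }
  }
}
```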

What's the simplest way to encode a chosen 'root' Core Data entity together with all of its relationships?

I use Core Data within my iOS 7 app to handle the editing and creation of entities. The entities have relationships between them, which all have inverses (as Apple advises).
For the sake of this question, let's pick any one of these interrelated entities and call it the Root entity: the thing that I want to encode with; the thing that logically lives on the 'top' of the hierarchy. I will call this the 'object graph'.
The question is:
What's the easiest way of encoding and decoding such an object graph to and from NSData?
The reason I want to do this is that I'd like my Core Data object graph to be persisted onto a cloud service, without the need of writing my own NSIncrementalStore subclass (it's a bit involved...!).
AutoCoding together with HRCoder almost looks like it could do the job, but I've experimented with this combination and it doesn't quite work with NSManagedObjects at the time of writing.
Still, I'm seeking alternatives. There can't only be one way to do this, surely.
It doesn't have to be JSON, but it'd be nice. Binary would be fine.
It seems to me you do not need to subclass NSIncrementalStore. You can create records and save them to your store with a plain vanilla store created via addPersistentStoreWithType:... on an NSPersistentStoreCoordinator.
The straightforward way is to handle the incoming JSON by simply taking the data and copying it to the properties of your NSManagedObject subclasses, like this:
object.title = jsonDictionary[@"title"];
object.numericAttribute = [jsonDictionary[@"numericAttribute"] integerValue];
If you take care to name the attributes and entities exactly the same, you can maybe take some shortcuts using KVC, like
[object setValue:jsonDictionary[key] forKey:key];
I once did the above for a large legacy project where it was not feasible to repeat the old attribute names, so I used a custom property list (plist) to map around 800 attribute names.
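That plist-driven renaming can be sketched like this: a small mapping plist is parsed at run time and applied before the KVC copy. The legacy key names below ("CUST_NM", "CUST_EMAIL") are hypothetical stand-ins:

```swift
import Foundation

// A tiny key-mapping plist, inlined here for illustration; in practice
// it would be a resource file. Keys are the legacy API names, values
// are the model attribute names to use with setValue(_:forKey:).
let plistXML = """
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>CUST_NM</key><string>name</string>
    <key>CUST_EMAIL</key><string>email</string>
</dict>
</plist>
"""

let data = plistXML.data(using: .utf8)!
let mapping = try! PropertyListSerialization.propertyList(
    from: data, options: [], format: nil) as! [String: String]

// Applying it before the KVC copy would then look like:
// for (apiKey, attribute) in mapping {
//     object.setValue(json[apiKey], forKey: attribute)
// }
// mapping["CUST_NM"] == "name"
```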
