I want to use Parse (parse.com) in my app. Parse uses PFObject models. I'd like to use my own models throughout my code (so that it doesn't depend on Parse). If possible I'd like to design my app so that I can replace Parse with another cloud service as seamlessly as possible if I wanted to.
Is this a good idea? What's the best way to abstract the model storage so that there is no (or minimal) traces of Parse code in my app?
Perhaps use the adapter design pattern to map parse objects to my own objects? Should this be an independent class or part of the model logic?
If anyone has tried something like this I'd like to hear your thoughts. Should I just use Parse models directly in my code? Or perhaps a singleton factory to generate my models based on Parse objects?
Any tips/thoughts/comments?
I've found a relatively clean way to manage this.
Basically I've created a protocol called NPDictionaryRepresenting which classes can conform to in order to specify how they should be converted into a dictionary or initialized from a dictionary.
@protocol NPDictionaryRepresenting <NSObject>
- (NSDictionary *)dictionaryRepresentation;
+ (id)objectWithDictionaryRepresentation:(NSDictionary *)dictionary;
@end
Each of my models that needs to be stored in Parse conforms to this protocol and implements its own custom behaviour. Because the protocol deals only in dictionaries, it doesn't depend on Parse in any way.
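For illustration, a model adopting the protocol might look roughly like this (NPTask and its properties are just example names):

@interface NPTask : NSObject <NPDictionaryRepresenting>
@property (nonatomic, copy) NSString *title;
@property (nonatomic, assign) BOOL completed;
@end

@implementation NPTask

// Flatten the model into plain Foundation types.
- (NSDictionary *)dictionaryRepresentation {
    return @{@"title": self.title ?: @"", @"completed": @(self.completed)};
}

// Rebuild the model from a dictionary produced by the method above.
+ (id)objectWithDictionaryRepresentation:(NSDictionary *)dictionary {
    NPTask *task = [[NPTask alloc] init];
    task.title = dictionary[@"title"];
    task.completed = [dictionary[@"completed"] boolValue];
    return task;
}

@end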
Then I've implemented an NPNetworkAdapter base class to handle all network storage, along with an NPParseNetworkAdapter subclass that inherits from it. This is the only class that knows anything about Parse. Its interface deals with objects conforming to NPDictionaryRepresenting: the Parse adapter creates PFObjects by extracting dictionary representations of my objects, and conversely it fetches PFObjects and gives me back my own models by instantiating them from dictionaries.
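A rough sketch of the Parse-facing side of such an adapter (the method names and the className parameter here are only illustrative):

#import <Parse/Parse.h>

@interface NPParseNetworkAdapter : NPNetworkAdapter
- (void)saveObject:(id<NPDictionaryRepresenting>)object withClassName:(NSString *)className;
- (void)fetchObjectsWithClassName:(NSString *)className
                       modelClass:(Class<NPDictionaryRepresenting>)modelClass
                       completion:(void (^)(NSArray *models))completion;
@end

@implementation NPParseNetworkAdapter

// Turn a local model into a PFObject and save it to Parse.
- (void)saveObject:(id<NPDictionaryRepresenting>)object withClassName:(NSString *)className {
    PFObject *parseObject = [PFObject objectWithClassName:className];
    NSDictionary *values = [object dictionaryRepresentation];
    for (NSString *key in values) {
        [parseObject setObject:values[key] forKey:key];
    }
    [parseObject saveInBackground];
}

// Fetch PFObjects and hand back local models built from their key/value pairs.
- (void)fetchObjectsWithClassName:(NSString *)className
                       modelClass:(Class<NPDictionaryRepresenting>)modelClass
                       completion:(void (^)(NSArray *models))completion {
    PFQuery *query = [PFQuery queryWithClassName:className];
    [query findObjectsInBackgroundWithBlock:^(NSArray *objects, NSError *error) {
        NSMutableArray *models = [NSMutableArray array];
        for (PFObject *parseObject in objects) {
            NSMutableDictionary *values = [NSMutableDictionary dictionary];
            for (NSString *key in [parseObject allKeys]) {
                values[key] = [parseObject objectForKey:key];
            }
            [models addObject:[modelClass objectWithDictionaryRepresentation:values]];
        }
        if (completion) completion(models);
    }];
}

@end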
The drawback of this implementation is that it doesn't work very well with object relationships (but I'm working on it).
If anyone has any comments on this approach I'd love to hear them.
I realise this is an old question, but I'm busy working on a project that poses this exact same problem so I thought I'd comment. Firstly, I think you did well to identify this and to try and avoid coupling your code too tightly with Parse.
The route I have decided to take is to use protocols (interfaces) for my model classes, with the underlying implementation being the Parse objects via Parse's subclassing feature. I've combined this with factory classes to decouple object creation and implementation specifics from most of my application code. This may seem like overkill and does require a bit of extra code upfront; however, I believe it will pay dividends in testing, and if the time ever comes to change how I access back-end services.
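A rough sketch of that arrangement, using the Parse SDK's subclassing support (all the class, protocol, and factory names here are just examples):

#import <Parse/Parse.h>
#import <Parse/PFObject+Subclass.h>

// The protocol application code depends on; nothing here mentions Parse.
@protocol TaskModel <NSObject>
@property (nonatomic, copy) NSString *title;
@end

// Parse-backed implementation, hidden behind the protocol.
@interface ParseTask : PFObject <PFSubclassing, TaskModel>
@property (nonatomic, copy) NSString *title;
@end

@implementation ParseTask
@dynamic title;
+ (NSString *)parseClassName { return @"Task"; }
@end

// Factory the rest of the app calls; swapping back ends means changing only this class.
@interface TaskFactory : NSObject
+ (id<TaskModel>)createTask;
@end

@implementation TaskFactory
+ (id<TaskModel>)createTask {
    // Remember to call [ParseTask registerSubclass] before setting up Parse.
    return [ParseTask object];
}
@end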
The other alternative for me was to make use of wrapper classes which just wrapped the PFObjects. However, in my case, the wrapper classes would've just been dumb delegation classes without the added benefit Protocols provide for testing, so I stuck with the Protocols approach.
My iOS application includes several similar views that draw data from a server and visualise it.
I want to put the common networking code in one class, to ensure reusability and avoid repeated code.
Should I put the networking code in a superclass or in an associated class? I can't decide which approach to use: generalisation or association (aggregation).
What would you do if you were me?
It is not a good solution to create a view superclass to hold client-server communication code, for these reasons:
Client-server communication isn't part of data presentation (the View). Logically it is a separate entity.
If you use an associated object you can use it anywhere, not only in the views that present the loaded data. This makes your architecture more flexible.
There are more reasons not to use inheritance in your case, but I think these two points are enough to make a decision.
To my mind you should use an associated object (aggregation), as sketched below.
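To make the aggregation concrete, a rough sketch (class and method names are invented for illustration):

#import <UIKit/UIKit.h>

// A separate object that owns the client-server communication.
@interface DataFetcher : NSObject
- (void)fetchItemsWithCompletion:(void (^)(NSArray *items, NSError *error))completion;
@end

@interface ChartViewController : UIViewController
// Aggregation: the controller *has a* fetcher rather than *being* one,
// so the same fetcher can be reused by any other screen.
@property (nonatomic, strong) DataFetcher *fetcher;
@end

@implementation ChartViewController
- (void)viewDidLoad {
    [super viewDidLoad];
    [self.fetcher fetchItemsWithCompletion:^(NSArray *items, NSError *error) {
        // visualise the loaded items here
    }];
}
@end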
My approach when designing MVC components is to separate the Data Model (DB persistence) from the View Model of my component, even though they theoretically represent the same entity. I then map between the two models later.
Do you think this is a good approach? Or should I try to use only one model?
When I run into issues like this, I try to create only the classes that I actually need. This helps keep the project smaller and avoids confusion about which class or object you are supposed to be using. I always try to picture the next person coming to work on my code, what they would think of it, and where they would stumble in my logic. I would only use a ViewModel if you are creating objects from multiple Models retrieved from the database.
Based on your statement it seems that you've created a duplicate data model which is mapped to a second model and would like to know if this is an optimal approach.
Recommendation
I don't think this is necessarily an optimal solution, but a lot depends on your use case. What I typically do is create a data model that represents unique entities. Then I create a data management class that handles the interactions and use cases of the data. The data manager covers things like adding/removing custom objects from a collection. The approach I take is basically a lightweight version of Apple's own Core Data framework (docs).
So, for example, one could use a dictionary, array, or set (or some combination of these) to manage the collection of custom objects, together with a shared singleton object acting as a data manager and leveraging the built-in archiving/unarchiving capabilities to handle an app's data graph requirements. Actually, the result is about the same as a simple use of Core Data, so I'd definitely recommend you get familiar with the standard approach used by Apple (it's embedded into every project template by default).
The good news is that once you choose an approach and develop it carefully, you could end up with a shareable resource that can be reused in many different projects. For example, the data manager class might encapsulate the movement of data internally (files, local URLs, etc.) and externally (URLs, SOA, etc.) and even deal with caching, serialization, and so on.
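A minimal sketch of such a data manager, using a shared singleton plus keyed archiving (the class name, file name, and the requirement that items conform to NSCoding are illustrative assumptions):

#import <Foundation/Foundation.h>

@interface DataManager : NSObject
@property (nonatomic, strong) NSMutableArray *items; // custom objects; must conform to NSCoding to archive
+ (instancetype)sharedManager;
- (void)save;
- (void)load;
@end

@implementation DataManager

+ (instancetype)sharedManager {
    static DataManager *shared = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        shared = [[DataManager alloc] init];
        shared.items = [NSMutableArray array];
    });
    return shared;
}

// Archive file lives in the app's Documents directory.
- (NSString *)archivePath {
    NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) firstObject];
    return [docs stringByAppendingPathComponent:@"items.archive"];
}

- (void)save {
    [NSKeyedArchiver archiveRootObject:self.items toFile:[self archivePath]];
}

- (void)load {
    NSArray *stored = [NSKeyedUnarchiver unarchiveObjectWithFile:[self archivePath]];
    if (stored) self.items = [stored mutableCopy];
}

@end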
I use only classes with get/set methods that handle the mapping between the DB and the view. These are development policies. Using hybrid objects keeps the project lighter, both during development and at runtime. In some scenarios there may be redundancy in the classes. It is important to aspire to the perfection of the code :-)
Could someone tell me the main features that distinguish Magical Record from RestKit?
They're both popular and they seem complementary, but I need help seeing what the relevant differences are. Is there a typical use case in which both frameworks are needed?
Thanks!
Magical Record is a wrapper around Core Data that gives you a number of higher-level APIs for interacting with it. This means you write less code to do common tasks.
RestKit is a wrapper around Core Data (or your basic model objects) and your RESTful interface to your server. RestKit can map your external data model to your internal data model and enact all of your server interaction. This means you write less code for interacting with the server and populating your model.
So, they aren't really comparable. You could look at using both together as they could be complementary.
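As a very rough illustration of how differently they're used day to day (these calls resemble MagicalRecord 2.x and RestKit 0.20; treat the exact method names and the Note/Article classes as assumptions):

// MagicalRecord: shorthand over Core Data. Note is a hypothetical NSManagedObject subclass.
[MagicalRecord setupCoreDataStackWithStoreNamed:@"Model.sqlite"];
Note *note = [Note MR_createEntity];
note.title = @"Hello";
[[NSManagedObjectContext MR_defaultContext] MR_saveToPersistentStoreAndWait];

// RestKit: maps a JSON API onto model objects. Article is a hypothetical plain model class.
RKObjectMapping *mapping = [RKObjectMapping mappingForClass:[Article class]];
[mapping addAttributeMappingsFromArray:@[@"title"]];
RKResponseDescriptor *descriptor =
    [RKResponseDescriptor responseDescriptorWithMapping:mapping
                                                  method:RKRequestMethodGET
                                             pathPattern:@"/articles"
                                                 keyPath:@"articles"
                                             statusCodes:RKStatusCodeIndexSetForClass(RKStatusCodeClassSuccessful)];
RKObjectManager *manager = [RKObjectManager managerWithBaseURL:[NSURL URLWithString:@"https://example.com"]];
[manager addResponseDescriptor:descriptor];
[manager getObjectsAtPath:@"/articles"
               parameters:nil
                  success:^(RKObjectRequestOperation *operation, RKMappingResult *result) {
                      NSArray *articles = [result array]; // mapped Article instances
                  }
                  failure:nil];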
To give some context, I'm new to iOS/Objective-C with a web dev (Ruby/JS/C#) background. I understand how the classes work, but I don't understand why the original implementors wrote these two classes (NSKeyedArchiver and NSKeyedUnarchiver) instead of consolidating both encoding and decoding logic into a single class.
Reading the Apple documentation for the abstract class NSCoder, I see that an NSCoder has methods to both encode and decode. The only thing I can think of is that the code was long, so the original implementer split it in two... It seems to me that it'd be more convenient for the developer if only a single class were used, but maybe I'm missing something nuanced here. So are there any historical reasons for this? Was NSCoder a "convenience" in that it defines both the encoding/decoding APIs but was meant to be separated into encoders/decoders? Am I misunderstanding what an NSCoder is supposed to do?
I think that keeping archiving and unarchiving functionality in separate classes is the result of applying the Single Responsibility Principle, which says that a class should have a single, narrow responsibility that is fully encapsulated inside that class. Indeed, when you create an instance of an NSCoder subclass, you do so either to archive a group of objects or to unarchive data into a group of objects, but not both.
This design is not ideal, because now you have several pairs of classes (i.e. NSArchiver/NSUnarchiver and NSKeyedArchiver/NSKeyedUnarchiver) linked by communicational cohesion, while a single-class design would have led to this data dependency being fully encapsulated. This is a tradeoff on which the designers of the Cocoa library could have gone either way. It appears that they picked the Single Responsibility Principle, at the price of introducing a data format dependency.
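For what it's worth, the split shows up directly in everyday use: one class when you write, the other when you read. A trivial example:

// Encoding: NSKeyedArchiver carries the "write" responsibility.
NSArray *names = @[@"Alice", @"Bob"];
NSData *data = [NSKeyedArchiver archivedDataWithRootObject:names];

// Decoding: NSKeyedUnarchiver carries the "read" responsibility.
NSArray *restored = [NSKeyedUnarchiver unarchiveObjectWithData:data];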
I am building an iPhone app that has some default data inside it via a property list. This data is the source for my Model. What is the best way to manage this data so the user can read (and in a couple of cases write) data from this plist?
I am currently subclassing NSObject and mapping the plist data to properties on that object, with methods to read/write data to the object. I have read about the NSCoding protocol and NSCoder but am not sure how to implement them in my custom class.
Any help will be appreciated.
Seems the answer is in this link:
http://mojomonkeycoding.com/tag/nscoding/
I guess you do not need to worry about calling super's initWithCoder: in these cases.
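For reference, a minimal NSCoding implementation for a plain NSObject subclass looks roughly like this (the class name and keys are made-up examples); since NSObject itself doesn't adopt NSCoding, you call plain [super init] rather than [super initWithCoder:]:

@interface Settings : NSObject <NSCoding>
@property (nonatomic, copy) NSString *title;
@property (nonatomic, assign) NSInteger count;
@end

@implementation Settings

- (void)encodeWithCoder:(NSCoder *)aCoder {
    [aCoder encodeObject:self.title forKey:@"title"];
    [aCoder encodeInteger:self.count forKey:@"count"];
}

- (instancetype)initWithCoder:(NSCoder *)aDecoder {
    self = [super init]; // NSObject has no initWithCoder: to call
    if (self) {
        _title = [[aDecoder decodeObjectForKey:@"title"] copy];
        _count = [aDecoder decodeIntegerForKey:@"count"];
    }
    return self;
}

@end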
If you really want to keep the data in a plist then you can use NSString's propertyList method. It'll take the property list and parse it into the necessary structures for you. You can then use NSPropertyListSerialization to write it back out.
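A rough sketch of that round trip with NSPropertyListSerialization (the file name Defaults.plist is hypothetical, and you must write to a writable location such as Documents, not back into the app bundle):

// Read the bundled plist into Foundation objects.
NSURL *bundledURL = [[NSBundle mainBundle] URLForResource:@"Defaults" withExtension:@"plist"];
NSData *inData = [NSData dataWithContentsOfURL:bundledURL];
NSMutableDictionary *plist =
    [NSPropertyListSerialization propertyListWithData:inData
                                              options:NSPropertyListMutableContainersAndLeaves
                                               format:NULL
                                                error:NULL];

// ...modify the values the user is allowed to change...

// Serialize and write the result to the Documents directory.
NSData *outData = [NSPropertyListSerialization dataWithPropertyList:plist
                                                              format:NSPropertyListXMLFormat_v1_0
                                                             options:0
                                                               error:NULL];
NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) firstObject];
[outData writeToFile:[docs stringByAppendingPathComponent:@"Defaults.plist"] atomically:YES];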
Frankly, what you're doing is easier unless you have a ton of different entities to track. There are a lot of ways to simply load and save data, if that's all you're interested in. Besides NSCoder (which is a lot of boilerplate code for my taste) you could use Core Data and not worry about the serialization process at all; Core Data manages it all for you semi-automagically.