I migrated my app from Core Data to Realm and I'm deeply impressed by how simple things can be. I could delete a lot of code :)
One thing that makes me a bit uncomfortable is that Realm spreads all over my application, creating a big dependency: my app has an MVVM architecture, and I would prefer it if the model were the only place where Realm lives.
Right after the migration I send Results and List objects to my view models. Wouldn't it be better to hand over [Type] arrays instead?
What do you think? How do you structure your apps with Realm?
You will have to make your own decision on whether having Realm as a dependency is too much of a risk in the architecture, but there are a couple of good reasons why you should use the Realm objects/collections directly:
Realm is not built on SQLite and is not an ORM. As a result, when you access a Realm object or collection, the data is memory-mapped and lazily loaded only when accessed. This means that if you convert a Results collection into a Swift array of Objects, or worse, into copies of those Objects in classes that don't depend on Realm, you end up reading and copying all the data in the Results up front, instead of letting Realm load it efficiently for you.
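For illustration, a minimal sketch in Swift (the Dog model is hypothetical):

    import RealmSwift

    // A hypothetical model class, purely for illustration.
    class Dog: Object {
        @objc dynamic var name = ""
        @objc dynamic var age = 0
    }

    let realm = try! Realm()

    // Lazy: no Dog data is read here; objects are materialized on access.
    let dogs = realm.objects(Dog.self).filter("age > 2")

    // Eager: forces every matching object to be read and copied up front.
    let dogArray = Array(dogs)

Iterating over dogs directly only touches the objects the loop actually reaches, which is exactly the behavior you give up by converting to an array.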
By default, Realm instances auto-update. This means that by using a Realm Object or Results collection directly, you can bind your view, or in your case your view model, to changes on those objects. Realm instances send out notifications when they are updated (relevant docs), allowing you to update the view model and then the view from them. For example, if you have a table view backed by a Results collection, you can trigger reloadData on the table view from the notification, since the Results instance will already contain the latest objects. You can also use Key-Value Observing on a specific Realm object to respond to changes in its properties and update the view/view model (relevant docs).
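A minimal sketch of the table view case, reusing the hypothetical Dog type from above (this uses the collection observe API; older Realm versions call it addNotificationBlock):

    import UIKit
    import RealmSwift

    final class DogsViewController: UITableViewController {
        private var dogs: Results<Dog>!
        private var token: NotificationToken?

        override func viewDidLoad() {
            super.viewDidLoad()
            dogs = try! Realm().objects(Dog.self)

            // The token must be retained, or the notifications stop.
            token = dogs.observe { [weak self] changes in
                switch changes {
                case .initial, .update:
                    // The Results instance already holds the latest objects.
                    self?.tableView.reloadData()
                case .error(let error):
                    print("Realm notification error: \(error)")
                }
            }
        }
    }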
Hopefully this helps shape your thoughts on architecture. As of writing this post, we are working on object-level notifications that will enable further data-binding capabilities. You can follow the progress on this here.
I have a very large monolithic legacy application that I am tasked with breaking into many bounded-context applications on a different architecture. My management is pushing for the old and new applications to work in tandem until all of the legacy functionality has been migrated to the current architecture.
Unfortunately, as is the case with many monolithic applications, this one maintains a very large set of state data for each user interaction and it must be maintained as the user progresses through the functionality.
My question is: what are some ways I can responsibly support a hybrid legacy/non-legacy architecture so that, in the future state, the new individual applications are not hopelessly dependent on this shared state model?
My initial thought is to write the state data to a cache of some sort that is accessible to both the legacy application and the new applications so that they may work in harmony until the new applications have the infrastructure necessary to operate independently. I'm very skeptical about this approach so I'd love some feedback or new ways of looking at the problem.
Whenever I've dealt with this situation, I take the dual-writes approach to the data, as it's mostly a data migration problem. As you split out each piece of functionality, you will effectively have two data models until the legacy model is completely deprecated. The basic steps are:
1. Once you split out a component, start writing the data to both the old and the new database.
2. Backfill the new database with anything you need from the old one.
3. Verify both have the same data.
4. Change everything that relies on this part of the data to read from the new component/database.
5. Change everything that relies on this part of the data to write to the new component/database.
6. Deprecate that data in the old database, i.e. back it up, then remove it. This confirms that you've migrated that chunk.
The advantage is that there should be no data loss or loss of functionality, and you have time to test each data model you've chosen for a component to see whether it works with the application flow. Slicing up a monolith can be tricky: deciding where your bounded contexts lie is critical, and there's no perfect science to it. Always keep in mind where you need your application to scale and which pieces are required to perform.
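For concreteness, a minimal sketch of the dual-write phase (UserStore, DualWriteUserRepository, and the User model are all hypothetical stand-ins for your actual persistence layers):

    struct User {
        let id: String
        let name: String
    }

    // A stand-in for whatever persistence layers you actually have.
    protocol UserStore {
        func save(_ user: User) throws
        func find(id: String) throws -> User?
    }

    final class DualWriteUserRepository {
        private let legacy: UserStore
        private let modern: UserStore

        init(legacy: UserStore, modern: UserStore) {
            self.legacy = legacy
            self.modern = modern
        }

        // Step 1: write to both databases so they stay in sync.
        func save(_ user: User) throws {
            try legacy.save(user)
            try modern.save(user)
        }

        // Step 3: verification compares reads from both stores.
        func verify(id: String) throws -> Bool {
            let old = try legacy.find(id: id)
            let new = try modern.find(id: id)
            return old?.name == new?.name
        }

        // Steps 4-5: once verified, reads (and then writes) flip to the new store.
        func find(id: String) throws -> User? {
            try modern.find(id: id)
        }
    }

Once verification holds over a full cycle, the legacy copy can be backed up and dropped, completing step 6.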
I am looking for an answer to this question in the context of the VIPER architectural pattern:
If you have an application that talks to both a web API and a database, how many data managers should you have: one, two, or three?
Case
a) dataManager
b) APIDataManager and LocalDataManager
c) dataManager, APIDataManager and LocalDataManager
Where:
a) The interactor talks to a single dataManager, which talks to any services you may have (remote or local).
b) The interactor knows the difference between local and remote information and calls either the APIDataManager or the LocalDataManager, which talk to remote and local services respectively.
c) The interactor only talks to a general dataManager; the general dataManager then talks to the APIDataManager and LocalDataManager.
EDIT
There may be no definitive solution. But any input would be greatly appreciated.
Neither VIPER nor The Clean Architecture dictates that there must be only one data manager for all interactors. The referenced VIPER article uses a single manager just as an example of how the specific storage implementation is abstracted away.
The interactor objects implement the application-specific business rules. If what the app does is talk to the server, then turn around and talk to the local disk store, then it’s perfectly normal for an interactor to know about this. Even more, some of the interactors have to manage exactly this.
Don't forget that the normal object composition rules apply to the interactors as well. For example, you start with one interactor that gets data from the server and saves it to the local store. If it gets too big, you can create two new interactors, one doing the fetching and the other saving to the local store. Then your original interactor would contain these new ones and delegate all its work to them. If you follow the rules for defining the boundaries, you won't even have to change the objects that work with the new composite interactor when doing the extract-class refactoring.
Also, note that it is generally suggested not to name objects with manager or controller endings, because their roles become unclear. You might name the interface that talks to the server something like APIClient, and the one that abstracts your local storage something like EntityGateway or EntityRepository.
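To make both points concrete, here is a hypothetical sketch of the extract-class refactoring using the suggested names (Product and the interactor names are all made up):

    struct Product { let id: String }

    protocol APIClient {
        func fetchProducts(_ completion: @escaping ([Product]) -> Void)
    }

    protocol EntityGateway {
        func save(_ products: [Product])
    }

    final class FetchProductsInteractor {
        private let api: APIClient
        init(api: APIClient) { self.api = api }

        func execute(_ completion: @escaping ([Product]) -> Void) {
            api.fetchProducts(completion)
        }
    }

    final class SaveProductsInteractor {
        private let gateway: EntityGateway
        init(gateway: EntityGateway) { self.gateway = gateway }

        func execute(_ products: [Product]) {
            gateway.save(products)
        }
    }

    // The original interactor now composes the two new ones and
    // delegates all of its work to them; its callers don't change.
    final class SyncProductsInteractor {
        private let fetch: FetchProductsInteractor
        private let save: SaveProductsInteractor

        init(fetch: FetchProductsInteractor, save: SaveProductsInteractor) {
            self.fetch = fetch
            self.save = save
        }

        func execute() {
            fetch.execute { self.save.execute($0) }
        }
    }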
It depends on where the abstraction lies within your app, that is, on distinguishing what you do from how you do it. Who defines that there are two different data stores?
If local and remote data stores are part of the problem domain itself (e.g. sometimes the problem requires fetching remote data, and other times it requires fetching local data), it is sensible for the interactor to know about the two different data stores.
If the Interactor only cares about what data is requested, but it does not care about how the data is retrieved, it would make sense for a single data manager to make the determination of which data source to use.
There are two different roles at play here—the business designer, and the data designer. The interactor is responsible for satisfying the needs of the business designer, i.e. the business logic, problem domain, etc. The data layer is responsible for satisfying the needs of the data designer, i.e. the server team, IT team, database team, etc.
Who is likely to change where you look to retrieve data, the business designer, or the data designer? The answer to that question will guide you to which class owns that responsibility.
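For illustration, a minimal sketch of the second case, where the interactor sees a single entry point and the data manager owns the local-versus-remote decision (all names hypothetical):

    struct Product { let id: String }

    protocol LocalDataManager {
        func cachedProducts() -> [Product]?
    }

    protocol APIDataManager {
        func fetchProducts(_ completion: @escaping ([Product]) -> Void)
    }

    final class DataManager {
        private let local: LocalDataManager
        private let remote: APIDataManager

        init(local: LocalDataManager, remote: APIDataManager) {
            self.local = local
            self.remote = remote
        }

        // The interactor only ever calls this; the choice of data source
        // (the data designer's concern) stays hidden behind it.
        func products(_ completion: @escaping ([Product]) -> Void) {
            if let cached = local.cachedProducts() {
                completion(cached)
            } else {
                remote.fetchProducts(completion)
            }
        }
    }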
I have seen that some (REST) iOS apps use a "pure" model object (e.g. Product), a Core Data object (e.g. ProductCore), and an object representing the remote payload (e.g. ProductJSON).
I usually use this architecture myself; I think it leads to a clear separation of concerns. It also has some practical benefits: for example, there are situations in which I want to create a model object but not add it to Core Data yet, and others where I want to send the models directly to the server and not store them in Core Data.
On the other hand, it consumes more memory and I have to maintain more classes. The extra layer is also not needed as a memory cache, since Core Data has one. Temporary objects (e.g. form data that hasn't been validated yet) can likewise be deleted without performance issues, as managed objects stay in memory only until saved. There are also no portability benefits, as anything that understands Swift/ObjC also understands Core Data... and extensibility can be achieved with extensions, or maybe subclassing.
So I was wondering: is there an overall preferred way to set up model classes in applications? In which contexts does an additional layer of pure model objects make sense, and where is it overkill?
Edit: I don't consider this an "opinion based" question. The optimal architecture can differ depending on the requirements, but which one is better under which circumstances should be determinable based on facts.
I am not sure what is meant by a pure object. Here is what I am doing:
Service models represent the data sent to and received from web services, and correspond to their JSON payloads. I write adapters to map JSON to service models and vice versa.
Entity models represent persistent data. These are the Core Data classes corresponding to my data model, and inherit from NSManagedObject.
View models represent data displayed in a view. Each view has its own view model. This approach maps the view model precisely to the view. An adapter class builds the view model from entity models and/or service models (if the data to be displayed is not persistent). The adapter shapes the data to the view, and does things like formatting dates to simplify the job of the view controller.
A table view cell, for example, might display elements from several entity models. The view model would contain a class representing the data to be displayed in each cell, and the adapter would build one for each cell. Populating the table view cell in the view controller becomes a very simple task of mapping one-to-one between view model and cell fields.
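As a rough illustration of the adapter idea (all type names are hypothetical, and a tuple stands in for the entity models):

    import Foundation

    struct ProductCellViewModel {
        let title: String
        let formattedDate: String
    }

    struct ProductListViewModel {
        let cells: [ProductCellViewModel]
    }

    enum ProductListAdapter {
        private static let dateFormatter: DateFormatter = {
            let formatter = DateFormatter()
            formatter.dateStyle = .medium
            return formatter
        }()

        // Shapes the data to the view up front, so the view controller
        // maps one-to-one from view model fields to cell fields.
        static func viewModel(from products: [(name: String, addedAt: Date)]) -> ProductListViewModel {
            let cells = products.map { product in
                ProductCellViewModel(
                    title: product.name,
                    formattedDate: dateFormatter.string(from: product.addedAt)
                )
            }
            return ProductListViewModel(cells: cells)
        }
    }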
This approach might seem overkill, but I have found it to be extremely effective and worth the effort. It greatly simplifies the code in the view controllers, and makes testing easier.
I'm using Core Data in my app and I have many "categories" of data. Some are smaller and some are bigger, and I thought about splitting some of the categories into different entities.
So I wanted to ask: are there any advantages or disadvantages to using multiple entities even when not mandatory, and should I also create an entity for smaller "categories" of data? Thanks!
Here is the rule that I use:
If incorporating data into the entity itself with attributes does not result in excessively repetitive data, I would prefer to add attributes rather than new entities.
This is a subtle tradeoff in which you have to consider:
Performance
Complexity of the data model
Code legibility.
If you consider these factors carefully with the guidance above, I am sure you can make good decisions on when to create new entities rather than using attributes.
For example, if you have an entity like Story that has attributes like title, text, date, etc. and would like to add a category, in most cases it would make sense to create a to-one or even to-many relationship to a Category entity rather than using a string attribute. Presumably, there will be hundreds of stories and dozens of categories, and the flexibility to have more than one category is a definite advantage.
On the other hand, if you have an entity Story that is always one of at most three types, i.e. "report", "analysis", or "opinion", you would be better off with an enum-style attribute rather than a relationship to a new entity.
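A minimal sketch of both choices, with hypothetical NSManagedObject subclasses (the entities themselves would be defined in the model editor as usual):

    import CoreData

    // An open-ended, shared value gets its own entity and a relationship.
    final class Category: NSManagedObject {
        @NSManaged var name: String
        @NSManaged var stories: Set<Story>
    }

    final class Story: NSManagedObject {
        @NSManaged var title: String
        @NSManaged var text: String
        @NSManaged var date: Date
        @NSManaged var categories: Set<Category>   // to-many relationship

        // A small, fixed set of values stays a plain attribute,
        // wrapped in an enum for type safety in code.
        @NSManaged private var kindRawValue: Int16

        enum Kind: Int16 { case report, analysis, opinion }

        var kind: Kind {
            get { Kind(rawValue: kindRawValue) ?? .report }
            set { kindRawValue = newValue.rawValue }
        }
    }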
I'm not overly sure what you mean by categories, but obviously the bigger the entity, the more memory your app will take up when you load the entities into your NSManagedObjectContext. However, there's no real point in splitting an entity in two when you're just going to load both parts anyway.
In terms of performance, the Core Data docs mention that even 10,000 objects is a pretty small database. Just be careful with BLOBs, don't load data you don't need into memory, and release data when you can using NSManagedObjectContext's reset method.
I've been developing apps for iOS for some time and find that there are many repeating tasks. So I want to write base classes that upcoming projects will subclass, so that they take less time and it's easier to track code across projects. My main concerns are:
Writing a good base model class that supports several strategies (Core Data, archiving, ...). This model class also needs a JSON-to-property conversion technique, like Mantle, so that the model on the device and on the server stay the same
Writing a good base networking class (mostly with AFNetworking)
Writing a good base ViewController class. I see some repetitive tasks: keeping the keyboard from covering a ScrollView, logging, crash tracking, loading views from NIBs, ...
Finding and using some other good categories for UIView, UINib, Auto Layout, ...
These are just my concerns. It may seem a vague topic, and I'm not asking how to use libraries or how to make reusable components.
I just want to ask about experience with building these kinds of base classes and where I can learn from others.
You are not the only one with this problem; I've been going through the same thing on many projects. The best solution I've found is open source libraries: the good ones are usually updated often and keep up with Apple's SDK releases. I will explain what I use to keep boilerplate code to a minimum.
Base model - Since I only use network and Core Data models, I use MagicalRecord for Core Data and JSONModel for network-based models (the ones that map to API responses).
Networking classes - are coupled with AFNetworking and the previously mentioned JSONModel; I have not found the need for anything else. I can easily extend those with categories.
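For the flavor of it, here is a minimal Swift sketch of that shape, using URLSession and Codable in place of AFNetworking and JSONModel (all names are hypothetical):

    import Foundation

    struct APIClient {
        let session: URLSession = .shared

        // One generic request path that maps a JSON payload onto a typed model.
        func get<Model: Decodable>(
            _ type: Model.Type,
            from url: URL,
            completion: @escaping (Result<Model, Error>) -> Void
        ) {
            session.dataTask(with: url) { data, _, error in
                if let error = error {
                    completion(.failure(error))
                    return
                }
                do {
                    let model = try JSONDecoder().decode(Model.self, from: data ?? Data())
                    completion(.success(model))
                } catch {
                    completion(.failure(error))
                }
            }.resume()
        }
    }

    // Any Decodable model then works with the same base client.
    struct UserModel: Decodable {
        let id: Int
        let name: String
    }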
There are many libraries for keeping UITextFields clear of the keyboard in a UIScrollView, but mostly I just use custom code; when I do need one, I follow TPKeyboardAvoiding. For crash tracking I just use Crashlytics or Flurry; they provide their own SDKs, so I do not need much code. And I do not use NIBs anymore.
There are many useful categories around on the web. I created my own repository as a CocoaPod, which keeps all the useful categories in a single pod. I keep the repository up to date and add new categories and small classes when I need them. The downside is that you usually do not need all of them, so sometimes too much code is loaded, but so far I have not noticed any performance drawbacks. If you want, you can take a look on GitHub to see how it looks.
Do not forget about project initialization; I've been working on my own custom Xcode project templates to solve this problem.