iOS Application VIPER Architecture - how many dataManagers? [closed]

I am looking for an answer to this question in the context of the VIPER Architectural pattern -
If you have an application that talks to both a web API and a database, how many dataManagers should you have: one, two or three?
Case
a) dataManager
b) APIDataManager and LocalDataManager
c) dataManager, APIDataManager and LocalDataManager
Where:
a) The interactor talks to a single dataManager that talks to any services you may have (remote or local).
b) The interactor knows the difference between local and remote information - and calls either the APIDataManager or the LocalDataManager, which talk to remote and local services respectively.
c) The interactor only talks to a general dataManager; the general dataManager then talks to the APIDataManager and LocalDataManager.
EDIT
There may be no definitive solution. But any input would be greatly appreciated.

Neither VIPER nor The Clean Architecture dictates that there must be only one data manager for all interactors. The referenced VIPER article uses a single manager only as an example of abstracting away the specific storage implementation.
The interactor objects implement the application-specific business rules. If what the app does is talk to the server and then turn around and talk to the local disk store, it is perfectly normal for an interactor to know about this. Even more, some of the interactors have to manage exactly this.
Don't forget that the normal object-composition rules apply to the interactors as well. For example, you start with one interactor that gets data from the server and saves it to the local store. If it gets too big, you can create two new interactors, one doing the fetching, the other saving to the local store. Then your original interactor would contain these new ones and delegate all its work to them. If you follow the rules for defining the boundaries when doing the extract-class refactoring, you won't even have to change the objects that work with the new composite interactor.
Also, note that it is generally suggested not to name objects with manager or controller endings, because those endings leave their roles unclear. You might name the interface that talks to the server something like APIClient, and the one that abstracts your local storage something like EntityGateway or EntityRepository.
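To make the naming concrete, here is a minimal Swift sketch (all type names are hypothetical, not taken from the question): an APIClient that talks to the server, an EntityGateway that abstracts local storage, and one interactor composed from both.

```swift
import Foundation

// Hypothetical sketch: the interactor depends only on these two interfaces.
struct Product {
    let id: String
    let name: String
}

/// Talks to the server.
protocol APIClient {
    func fetchProducts(completion: @escaping (Result<[Product], Error>) -> Void)
}

/// Abstracts the local storage (Core Data, Realm, plain files, ...).
protocol EntityGateway {
    func save(_ products: [Product]) throws
}

/// An application-specific use case: fetch from the server, then persist locally.
/// If it grows too big, it can be split into a fetching interactor and a saving
/// interactor that this one composes and delegates to.
final class SyncProductsInteractor {
    private let apiClient: APIClient
    private let entityGateway: EntityGateway

    init(apiClient: APIClient, entityGateway: EntityGateway) {
        self.apiClient = apiClient
        self.entityGateway = entityGateway
    }

    func execute(completion: @escaping (Error?) -> Void) {
        apiClient.fetchProducts { result in
            switch result {
            case .success(let products):
                do {
                    try self.entityGateway.save(products)
                    completion(nil)
                } catch {
                    completion(error)
                }
            case .failure(let error):
                completion(error)
            }
        }
    }
}
```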

It depends on where the abstraction lies within your app, that is, on distinguishing what you do from how you do it. Who is defining that there are two different data stores?
If local and remote data stores are part of the problem domain itself (e.g. sometimes the problem requires fetching remote data, and other times it requires fetching local data), it is sensible for the interactor to know about the two different data stores.
If the Interactor only cares about what data is requested, but it does not care about how the data is retrieved, it would make sense for a single data manager to make the determination of which data source to use.
There are two different roles at play here—the business designer, and the data designer. The interactor is responsible for satisfying the needs of the business designer, i.e. the business logic, problem domain, etc. The data layer is responsible for satisfying the needs of the data designer, i.e. the server team, IT team, database team, etc.
Who is likely to change where you look to retrieve data, the business designer, or the data designer? The answer to that question will guide you to which class owns that responsibility.
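For the case where the interactor only cares about what data it gets, here is a rough Swift sketch of a single gateway hiding the local-versus-remote decision (again, all names are hypothetical):

```swift
import Foundation

// Hypothetical sketch: the interactor depends only on ArticleGateway; the
// implementation decides whether the data comes from the cache or the network.
struct Article {
    let id: String
    let title: String
}

protocol LocalArticleStore {
    func cachedArticles() -> [Article]
    func store(_ articles: [Article])
}

protocol ArticleAPIClient {
    func fetchArticles(completion: @escaping (Result<[Article], Error>) -> Void)
}

protocol ArticleGateway {
    func fetchArticles(completion: @escaping (Result<[Article], Error>) -> Void)
}

/// Serves cached articles when available and falls back to the network otherwise.
/// The interactor never learns which path was taken.
final class CachingArticleGateway: ArticleGateway {
    private let local: LocalArticleStore
    private let remote: ArticleAPIClient

    init(local: LocalArticleStore, remote: ArticleAPIClient) {
        self.local = local
        self.remote = remote
    }

    func fetchArticles(completion: @escaping (Result<[Article], Error>) -> Void) {
        let cached = local.cachedArticles()
        if !cached.isEmpty {
            completion(.success(cached))
            return
        }
        remote.fetchArticles { result in
            if case .success(let articles) = result {
                self.local.store(articles)
            }
            completion(result)
        }
    }
}
```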

Related

dealing with state data in an incremental migration from a monolithic legacy app [closed]

I have a very large monolithic legacy application that I am tasked with breaking into many context-bounded applications on a different architecture. My management is pushing for the old and new applications to work in tandem until all of the legacy functionality has been migrated to the current architecture.
Unfortunately, as is the case with many monolithic applications, this one maintains a very large set of state data for each user interaction and it must be maintained as the user progresses through the functionality.
My question is: what are some ways that I can support a hybrid legacy/non-legacy architecture responsibly, so that in the future state the new individual applications are not hopelessly dependent on this shared state model?
My initial thought is to write the state data to a cache of some sort that is accessible to both the legacy application and the new applications so that they may work in harmony until the new applications have the infrastructure necessary to operate independently. I'm very skeptical about this approach so I'd love some feedback or new ways of looking at the problem.
Whenever I've dealt with this situation I take the dual-writes approach to the data, as it is mostly a data migration problem. As you split out each piece of functionality you are effectively going to have two data models until the legacy model is completely deprecated. The basic steps for this (sketched in code after the list) are:
Once you split out a component, start writing the data to both the old and the new database.
Backfill the new database with anything you need from the old.
Verify both have the same data.
Change everything that relies on this part of the data to read from the new component/database.
Change everything that relies on this part of the data to write to the new component/database.
Deprecate that data in the old database, i.e. back it up and then remove it. This confirms that you've migrated that chunk.
The advantage is that there should be no data loss or loss of functionality, and you have time to test out each data model you've chosen for a component to see if it works with the application flow. Slicing up a monolith can be tricky: deciding where your bounded contexts lie is critical, and there's no perfect science to it. Always keep in mind where you need your application to scale and which pieces are required to perform.
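A minimal sketch of the dual-write step, with hypothetical names (Swift is used only to match the rest of this page; the same idea applies in whatever language and storage the monolith actually uses):

```swift
import Foundation

// Hypothetical sketch: while a piece of functionality is being split out,
// every write goes to both the legacy store and the new component's store.
struct UserState {
    let userID: String
    let payload: [String: String]
}

protocol UserStateStore {
    func save(_ state: UserState) throws
}

/// Writes every change to both stores while the migration is in flight.
/// Reads stay on the legacy store until the new one is backfilled and verified;
/// then reads, and finally writes, are switched over.
struct DualWriteUserStateStore: UserStateStore {
    let legacy: UserStateStore
    let replacement: UserStateStore

    func save(_ state: UserState) throws {
        try legacy.save(state)       // old model remains authoritative for now
        try replacement.save(state)  // new model is kept in sync for verification
    }
}
```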

Which layer should DBContext, Repository, and UnitOfWork be in? [closed]

I want to use a layered architecture with EF, and the Repository and Unit of Work patterns, in the project.
Which layer should DBContext, Repository, and UnitOfWork be in?
DAL or BLL?
I would put your DbContext implementation in your DAL (Data Access Layer). You will probably get different opinions on this, but I would not implement the repository or unit of work patterns. Technically, the DbContext is the unit of work and the IDbSet is a repository. By implementing your own, you are adding an abstraction on top of an abstraction.
More on this here and here.
DAL is an acronym for Data Access Layer. DbContext, repositories and Unit Of Work are related to working with data so you should definitely place them in DAL.
"Should" is probably not the correct word here, as there are many views on this subject.
If you want to implement these patterns out of the book, I would check out this link from the ASP.NET guys:
https://www.asp.net/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
But I actually have started to layer it like this:
Controller / Logic <- Where business logic and boundary objects are created and transformed.
Repository <- Where logic related to persistence and to transforming entities and query objects lives.
Store <- Where the actual implementations of storage mechanisms reside. This is abstracted away behind an interface.
This way both the business logic and the repository logic are testable, decoupled, and free to use whatever mechanism for persistence (or lack thereof), without the rest of the application knowing anything about it.
This is of course true with other patterns as well; this is just my take on it.
The DbContext should never cross beyond the boundary of the DAL; if you want to put your repositories or units of work there, you are free to, just do not let them leak their details or dependencies upwards. The DbContext should, in my opinion, be scoped as narrowly as possible to keep it as clean as possible - you never know where that context has been... please wear protection! Jokes aside, if you have an async, multithreaded, multi-node, big application and you pass these DbContexts around everywhere, you will run into general concurrency and data-race issues.
What I like to do is start with an in-memory store that I inject into my controller. As soon as that store starts serving multiple entities and the persistence logic gets more and more complex, I refactor it into a store with a repository on top. Once all my tests pass and I get it working like I want, I start to create database or file system based implementations of that store.
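The question here is about C#/EF, but the layering itself is language-agnostic. A rough sketch of the store-behind-an-interface idea, written in Swift only to stay consistent with the rest of this page, with hypothetical names throughout:

```swift
import Foundation

// Hypothetical sketch of Controller / Repository / Store layering.
struct Customer {
    let id: Int
    var name: String
}

/// The storage mechanism, abstracted away behind an interface.
protocol CustomerStore {
    func find(id: Int) -> Customer?
    func save(_ customer: Customer)
}

/// A first implementation that is good enough to get tests and the application
/// flow working; it can later be replaced by a database- or file-backed store.
final class InMemoryCustomerStore: CustomerStore {
    private var customers: [Int: Customer] = [:]

    func find(id: Int) -> Customer? { customers[id] }
    func save(_ customer: Customer) { customers[customer.id] = customer }
}

/// Persistence-related logic lives here, on top of whichever store is injected.
final class CustomerRepository {
    private let store: CustomerStore

    init(store: CustomerStore) {
        self.store = store
    }

    func rename(customerID: Int, to newName: String) {
        guard var customer = store.find(id: customerID) else { return }
        customer.name = newName
        store.save(customer)
    }
}
```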
Again my opinions here, because this is a pretty general question, which has few "true" answers, just a lot of opinions.
Most opinions on this are valid, they just have different strengths and weaknesses, and the important part is to figure out which strengths you need, and how you will work with the weaknesses.
Your repository should have a reference to DbSet<T> objects, and once you add, update or remove from one or more repositories, you should invoke SaveChanges from the UnitOfWork. Therefore you should place the DbContext in your Unit of Work implementation.

Realm architecture pattern [closed]

I migrated my app from CoreData and I'm deeply impressed how simple things can be. I could delete a lot of code :)
One thing that makes me feel a bit uncomfortable is that Realm spreads all over my application creating a big dependency: My app has a MVVM architecture and I would feel best if only the model would be the place where Realm lives.
Just after the migration I send Result and List objects to my view models. Wouldn't it be better to have [Type] objects instead?
What do you think? How do you structure your apps with realm?
You will have to make your own decision on whether having Realm as a dependency is too much of a risk in the architecture, but there are a couple of good reasons why you should use the Realm objects/collections directly:
Realm is not built on SQLite and is not an ORM. As a result, when you access a Realm object or a collection, that data is memory-mapped and lazily loaded only when accessed. This means that if you convert a Result into a Swift array of Objects, or worse, into copies of those Objects in a class not dependent on Realm, you end up reading and copying all the data in the Result up front, instead of the efficient, lazy access Realm gives you.
By default, Realm instances auto-update. What this means is that by using a Realm Object or Result directly you can bind your view, or in your case your view model, to changes on those objects. Realm instances send out notifications when they are updated (relevant docs), allowing you to update the view model and then the view off of this (for example, if you have a table view backed by a Result, you can trigger a reloadData on the table view off of the notification, since the Result instance will now have the latest objects). Or you can also use Key-Value Observing on a specific Realm object to respond to changes on its properties to update the view/view model (relevant docs).
Hopefully this helps shape your thoughts on architecture. As of writing this post, we are working on object-level notifications that will enable further data-binding capabilities. You can follow the progress on this here.
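As an illustration of the second point, here is a minimal Swift sketch of binding a table view to a live Results collection. Product is a hypothetical model, and the observe(_:) API shown is from later Realm Swift releases; the version current at the time of this answer exposed the same mechanism as addNotificationBlock.

```swift
import UIKit
import RealmSwift

// Hypothetical Realm model.
class Product: Object {
    @objc dynamic var name = ""
}

class ProductListViewController: UITableViewController {
    let products = try! Realm().objects(Product.self)
    var token: NotificationToken?

    override func viewDidLoad() {
        super.viewDidLoad()
        // The Results instance always reflects the latest data, so reloading
        // on each change notification keeps the table view in sync.
        token = products.observe { [weak self] _ in
            self?.tableView.reloadData()
        }
    }

    override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return products.count
    }

    override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
        cell.textLabel?.text = products[indexPath.row].name
        return cell
    }
}
```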

Is it advisable to have "pure" model objects additionally to managed objects? [closed]

I have seen in some (REST) iOS apps that they use a "pure" model object, e.g. "Product", a Core Data object, e.g. "ProductCore", and an object to represent the remote objects, e.g. "ProductJSON".
I myself usually also use this architecture, I think it leads to clear separation of concerns. It has also some practical benefits, for example there are situations in which I want to create a model object but not add it to core data yet. Or others where I want to send the models directly to the server and not store them in core data.
On the other side, it consumes more memory and I have to maintain more classes. It's also not necessary as a memory cache, since Core Data has one. Temporary objects (e.g. form data which hasn't been validated yet) can also be deleted without performance issues, as managed objects are only in memory until saved. There are also no portability benefits, as anything that understands Swift/ObjC also understands Core Data... extensibility can be achieved at least with extensions, or maybe subclassing.
So I was wondering, is there an overall preferred way to set up model classes in applications? In which context does an additional layer of pure model objects make sense, and where is it overkill?
Edit: I don't consider this an "opinion based" question. The optimal architecture can be different depending on the requirements, but which one is better under which circumstances should be something that can be determined based on facts.
I am not sure what is meant by a pure object. Here is what I am doing:
Service model represents the data sent to and received from web services, and corresponds to their JSON payloads. I write adapters to map JSON to service models and vice versa.
Entity models represent persistent data. These are the Core Data classes corresponding to my data model, and inherit from NSManagedObject.
View models represent data displayed in a view. Each view has its own view model. This approach maps the view model precisely to the view. An adapter class builds the view model from entity models and/or service models (if the data to be displayed is not persistent). The adapter shapes the data to the view, and does things like formatting dates to simplify the job of the view controller.
A table view cell, for example, might display elements from several entity models. The view model would contain a class representing the data to be displayed in each cell, and the adapter would build one for each cell. Populating the table view cell in the view controller becomes a very simple task of mapping one-to-one between view model and cell fields.
This approach might seem overkill, but I have found it to be extremely effective and worth the effort. It greatly simplifies the code in the view controllers, and makes testing easier.
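A minimal Swift sketch of that adapter approach, with hypothetical names: the entity struct stands in for an NSManagedObject subclass, and the adapter produces one ready-to-display view model per cell.

```swift
import Foundation

// Hypothetical entity model (in a real app, an NSManagedObject subclass).
struct OrderEntity {
    let customerName: String
    let placedAt: Date
}

/// What one table view cell actually displays: already-formatted strings.
struct OrderCellViewModel {
    let title: String
    let subtitle: String
}

/// The adapter shapes the data to the view, e.g. formatting dates, so the view
/// controller only maps view-model fields one-to-one onto cell fields.
struct OrderListAdapter {
    private let dateFormatter: DateFormatter = {
        let formatter = DateFormatter()
        formatter.dateStyle = .medium
        return formatter
    }()

    func makeCellViewModels(from entities: [OrderEntity]) -> [OrderCellViewModel] {
        return entities.map { entity in
            OrderCellViewModel(
                title: entity.customerName,
                subtitle: dateFormatter.string(from: entity.placedAt)
            )
        }
    }
}
```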

Which data layer / handling architecture or pattern to choose for a non-enterprise web application? (MVC / EF) [closed]

I need some help in making a design choice for my application. It’s a fairly straightforward web application, definitely not enterprise class or enterprise-anything.
The architecture is standard MVC 5 / EF 6 / C# ASP.NET. The pages talk to a back-end database in SQL Server, all the tables have corresponding entity objects generated from VS 2013 using the EF designer, and I don't see that changing anytime in the near future. Therefore creating super-abstract "what if my database changes" separations is possibly pointless. I am a one-man operation, so we're not talking huge teams etc.
What I want is a clean way to do CRUD and query operations on my database, using DbContext and LINQ operations - but I'm not good with database-related code design. Here are my approaches:
1. Static class with methods - should I create a static class (my DAL) that holds my data context and then provides functions that controllers can call directly, e.g.
e.g. MyStaticDBLib.GetCustomerById(id)
but this poses problems when we try to update records from disconnected instances (i.e. I create an object from a JSON response and need to 'update' my table). The good thing is I can centralize my operations in a Lib or DAL file. This is also quickly getting complicated and messy, because I can't create methods for every scenario, so I end up with bits of LINQ code in my controllers and bits handled by these Lib methods.
2. Class with context, held in a singleton, and called from controller
MyContext _cx = MyStaticDBLib.GetMyContext("sessionKey");
var xx = _cx.MyTable.Find(id); // and other LINQ operations
This feels a bit messy as my data query code is in my controllers now but at least I have clean context for each session. The other thinking here is LINQ-to-SQL already abstracts the data layer to some extent as long as the entities remain the same (the actual store can change), so why not just do this?
3. Use a generic repository and unit of work pattern - now we're getting fancy. I've read a bit about this pattern, and there's so much different advice, including some strongly suggesting that EF6 already builds the repository into its context and that this is therefore overkill. It does feel like overkill, but I need someone here to tell me that, given my context.
4. Something else? Some other clean way of handling basic database/CRUD
Right now I have the library-type approach (1. above) and it's getting increasingly messy. I've read many articles and I'm struggling, as there are so many different approaches, but I hope the context I've given can elicit a few responses as to what approach may suit me. I need to keep it simple, and I'm a one-man operation for the near future.
Absolutely not #1. The context is not thread safe and you certainly wouldn't want it as a static var in a static class. You're just asking for your application to explode.
Option 2 is workable as long as you ensure that your singleton is thread-safe. In other words, it'd be a singleton per-thread, not for the entire application. Otherwise, the same problems with #1 apply.
Option 3 is typical but short-sighted. The repository/unit of work patterns are pretty much replaced by having an ORM. Wrapping Entity Framework in another layer like this only removes many of the benefits of working with Entity Framework while simultaneously increasing the friction involved in developing your application. In other words, it's a lose-lose and completely unnecessary.
So, I'll go with #4. If the app is simple enough, just use your context directly. Employ a DI container to inject your context into the controller and make it request-scoped (new context per request). If the application gets more complicated or you just really, really don't care for having a dependency on Entity Framework, then apply a service pattern, where you expose endpoints for specific datasets your application needs. Inject your context into the service class(es) and then inject your service(s) into your controllers. Hint: your service endpoints should return fully-formed data that has been completely queried from the database (i.e. return lists and similar enumerables, not queryables).
While Chris's answer is a valid approach, another option is to use a very simple concrete repository/service façade. This is where you put all your data access code behind an interface layer, like IUserRepository.GetUsers(), and then in this code you have all your Entity Framework code.
The value here is separation of concerns, added testability (although EF6+ now allows mocking directly, so that's less of an issue) and, more importantly, should you decide someday to change your database code, it's all in one place, without a huge amount of overhead.
It's also a breeze to inject via dependency injection.
