I am at the beginning of an enterprise-level application with two years of development ahead of me, and I was wondering how to organize my code, keeping in mind that the project will be huge and long-lived.
I have created an architecture that I currently believe has potential, but I'm not sure, since I wasn't able to find anybody who writes code this way, and it also goes against the paradigm you will find in the redux-toolkit documentation.
Let's start with a short explanation of what I am trying to achieve. I want to make redux-toolkit easily replaceable in the future. I want to write 80% of my code decoupled from react and redux-toolkit and treat them only as external libraries, one for the presentation layer and the other for state management.
I won't go into detail about application, domain and infrastructure separation; you can easily find many articles and blog posts about DDD and clean architecture.
Let's see what I have so far; this is the plan:
This is the big picture: the UI layer (a component) fires a redux-toolkit action / custom hook / custom action / someOtherLib action. Actions are responsible for injecting the setState and getState callbacks (which the IStateManagment domain interface requires) and for calling the appropriate use cases.
After that, the use case (user story) gets a copy of the state from the getState callback, implements all of the business logic (mutating the state copy), and at the end calls the setState callback. It also depends on the domain layer, which contains the model and the logic attached to that model.
Finally, the reducer detects the action fired by the setState callback and stores the new state in the state management.
I would also like to share more about how I've implemented this with redux-toolkit and a custom-hook local store (I've also implemented it with a custom store implementation, which I won't cover here, but you can find it on GitHub).
I've implemented multiple state-management stores on purpose, to test how easily I can replace them inside my React components.
This is the implementation of dispatching (or calling the custom hook for local state):
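Roughly, the dispatching side looks something like this (a simplified sketch with hypothetical names, not the exact code from the repository); the component only fires an action creator and does not know whether redux-toolkit or a local useState-based hook implements the state management:
import React from "react";
import { useDispatch } from "react-redux";
import type { AppDispatch } from "../store/store";             // hypothetical store types
import { markTodoAsDoneAction } from "../actions/todoActions";  // hypothetical action creator

export const DoneButton: React.FC<{ todoId: string }> = ({ todoId }) => {
  const dispatch = useDispatch<AppDispatch>();
  return (
    <button onClick={() => dispatch(markTodoAsDoneAction(todoId))}>
      Mark as done
    </button>
  );
};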
Next is the domain state-management interface; it defines what an external state-management library must provide:
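A minimal sketch of what such an interface might look like (hypothetical shape; the post calls it IStateManagment):
// The domain and use cases depend only on this contract, never on redux itself.
export interface IStateManagement<TState> {
  // Returns a snapshot (copy) of the current state.
  getState(): TState;
  // Persists a new state produced by a use case.
  setState(newState: TState): void;
}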
Next is calling the use case; action creators are responsible for calling the appropriate use cases:
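A hypothetical thunk-style action creator in that spirit: it adapts redux's getState and dispatch to the state-management contract and delegates the real work to a use case (all module paths and names are made up for illustration):
import { createAsyncThunk } from "@reduxjs/toolkit";
import { markTodoAsDone } from "../useCases/markTodoAsDone"; // hypothetical use case
import { setState } from "../store/todoSlice";               // hypothetical setState action
import type { TodoState } from "../domain/todo";             // hypothetical domain model

export const markTodoAsDoneAction = createAsyncThunk(
  "todo/markAsDone",
  async (todoId: string, { getState, dispatch }) => {
    markTodoAsDone(todoId, {
      getState: () => (getState() as { todo: TodoState }).todo, // snapshot of the slice
      setState: (next: TodoState) => dispatch(setState(next)),  // forwarded to the reducer
    });
  }
);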
After that we have a use case, where all the business logic is implemented in one place:
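A hypothetical use case might look like this: pure application logic with no react or redux imports, reading a state snapshot, applying domain logic, and handing the result back through the injected callbacks:
import type { IStateManagement } from "../domain/IStateManagement"; // hypothetical interface
import { completeTodo, type TodoState } from "../domain/todo";      // hypothetical domain logic

export const markTodoAsDone = (
  todoId: string,
  stateManagement: IStateManagement<TodoState>
): void => {
  const state = stateManagement.getState();       // copy of the current state
  const nextState = completeTodo(state, todoId);  // domain logic produces the new state
  stateManagement.setState(nextState);            // hand it back to whichever store is in use
};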
And at the end, a single reducer to handle the setState action:
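With redux-toolkit, that single reducer could be sketched as a slice whose only job is to replace its state with whatever the use case produced (hypothetical names again):
import { createSlice, type PayloadAction } from "@reduxjs/toolkit";
import type { TodoState } from "../domain/todo"; // hypothetical domain model

const initialState: TodoState = { todos: [] };

const todoSlice = createSlice({
  name: "todo",
  initialState,
  reducers: {
    // The only reducer: store the new state computed by the use case.
    setState: (_state, action: PayloadAction<TodoState>) => action.payload,
  },
});

export const { setState } = todoSlice.actions;
export default todoSlice.reducer;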
You can find the whole project, with three different state-management implementations, on GitHub:
https://github.com/WingsDevelopment/react-clean-architecture
I know this is a lot, but I tried to present it as clearly as possible.
I am hesitant to commit to going down this road. What are the cons of this approach, and am I missing something? If so, what?
Related
I read about the VIPER architecture here http://www.objc.io/issue-13/viper.html (and in a few other sources), but I still can't figure out one thing: should each presenter interact with at most one Interactor?
Here is a longer discussion about it that might better explain my question: Use Case with 2 ways for the same action
As I understand it, the presenter is unique per view controller. However, when a presenter needs several interactors, it may use them.
In my opinion, the interactors are a layer of business logic; they can interact with each other, and the presenters can interact with many of them.
However, it's important to put the right logic in the right layer. For instance, be careful not to put business logic in the presenter layer; it is very tempting to do so when you have to coordinate several interactors. Keep the business logic only in the interactors.
Ideally, NO. One Presenter should know about the existence of only ONE Interactor. But the interactor itself can have many Data Managers. I usually use at least two data managers: one for API requests and one for local data management.
For more advanced tips and helpful good practices on VIPER architecture, I recommend this post: https://www.ckl.io/blog/best-practices-viper-architecture (sample project included)
Ideally no, as Marcelo points out. However I feel like adding the "D" to VIPER for data managers is also not ideal.
A big problem in VIPER is that the presenter already knows which use case to "fire off" upon receiving input from the view, so it already has some a priori knowledge of the business use cases. This fact alone calls the entire architecture into question, in my view, since if the presenter merely notified the interactor of use-case-agnostic, purely UI events on the view, it would have no reason to exist. The presenter in VIPER has to have a tiny bit of "business logic", no matter what.
So, since the presenter already must orchestrate use cases by talking to the interactor, have one interactor per business use case, and let the presenter connect them if there is more than one business use case per "module". The boundaries in VIPER are oftentimes difficult to maintain in practice without workarounds, but it is a nice architecture for forcing devs to think about separation of concerns.
According to the CheeseCake Best Practices, the only way to connect multiple VIPER modules should be through the Router layer. That way, if you remove a module, the only layer that will show errors in other related modules is the Router. Also, to avoid coupling, Entities may be put in a separate module exclusively for this purpose. Finally, to mitigate code duplication, similar entities (e.g., user and profile) can be grouped into the same data managers.
My opinion: weigh the trade-off between code replication and module decoupling. I'm refactoring my code right now, and code replication saved a lot of time when removing particular modules that had no dependencies.
A little bit late to the party, but I just Googled this question searching for something else.
It really depends on your implementation of VIPER. There is no single correct one; from what I've seen, people implement it in really different ways, and it should be adjusted to fit your specific needs.
Some projects have tightly coupled interactor per view, where the view displays data, presenter passes the data between the view and interactor (converting it in the process) and the interactor handles the business logic per view. In this case the presenter would talk to only a single interactor.
Other projects implement interactors per use case, or in other words, "minor" feature. That way you can avoid duplicating business logic between the modules. The presenter can talk to multiple interactors here.
There are also projects that implement a large interactor per "big" feature or, should we say, per area of the app. These interactors tend to be pretty large, but they also really become the "smart" layer responsible for the business-logic decisions of the app, and they tend to have access to everything they need to make those decisions.
Let's give an example here - let's say you have a log out button in the side menu of your app and also in the settings and that for the log out you need to clear your database, keychain, user defaults and the networking session. A rather common scenario.
In the first case, where you have an interactor per view, you'd obviously have duplicated business logic. I believe that's how the "original" VIPER works, but it's probably not the best approach.
In the second case you'd probably have a "user session interactor" handling just logging in and out the user.
In the third case, you'd have a "user interactor" that would not only handle the session, but also save and manage all the user's data.
My usual approach is the third option, with the big downside of the need to split the interactors over multiple files. Many people use the second option. It may also happen that for your project the first option is the best - for example if there is little overlap between screens in your project and they are tightly aligned with features.
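For illustration, the second option could be sketched roughly like this (TypeScript used purely as neutral pseudocode here; all names are hypothetical): a single "user session interactor" owns the log-out steps, and both the side-menu and settings presenters call it.
interface Database { clear(): Promise<void>; }
interface Keychain { clear(): Promise<void>; }
interface UserDefaults { clear(): Promise<void>; }
interface NetworkSession { invalidate(): Promise<void>; }

class UserSessionInteractor {
  constructor(
    private db: Database,
    private keychain: Keychain,
    private defaults: UserDefaults,
    private network: NetworkSession
  ) {}

  // Both presenters call this; the log-out logic lives in exactly one place.
  async logOut(): Promise<void> {
    await this.db.clear();
    await this.keychain.clear();
    await this.defaults.clear();
    await this.network.invalidate();
  }
}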
I'm working on an iOS app where we need different binaries for each customer based on their needs. A customer may want to change all the colors, icons, and texts; we can do that through a white-labeling process. The problem, though, is when they ask for different behavior, for instance removing the login screen and making login optional.
I thought we could use dependency injection and use different handlers for each customer if needed. For instance, we could have LoginHandler1 and LoginHandler2, both implementing ILoginHandler and inheriting from UIViewController.
However, dependency injection is costly: it slows down the app because resolving is expensive compared to normal instantiation.
The other way is to define all these behaviors in the app and enable/disable them in a plist file, like "is login optional? yes/no".
Any suggestions?
Thanks
You should create the entire object graph up-front, in the composition root. Object creation, and constructor injection, should not take much time at all as long as your constructors are not doing any actual work.
That being said, there are times when creating the entire object graph at the start of the application may take longer than is acceptable. In those cases, you can use lazy-loading to defer the costly initialization until later - while still creating the objects in the composition root.
Mark Seemann describes this approach in more detail here: Compose object graphs with confidence.
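A rough sketch of that idea (hypothetical names; TypeScript for illustration): the whole graph is wired in the composition root, and the expensive dependency is hidden behind a lazy proxy so construction itself stays cheap.
interface ReportService { generate(): string; }

class ExpensiveReportService implements ReportService {
  constructor() { /* imagine costly initialization here */ }
  generate(): string { return "report"; }
}

// Lazy proxy: defers creating the real service until it is first used.
class LazyReportService implements ReportService {
  private real?: ReportService;
  constructor(private factory: () => ReportService) {}
  generate(): string {
    this.real ??= this.factory();
    return this.real.generate();
  }
}

class DashboardController {
  // Constructor injection: no work happens here, so creation is instant.
  constructor(private reports: ReportService) {}
  show(): string { return this.reports.generate(); }
}

// Composition root: the whole object graph is created up front, cheaply.
const controller = new DashboardController(
  new LazyReportService(() => new ExpensiveReportService())
);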
I thought we could use dependency injection and use different handlers for each customer if needed.
You thought right. Flexibility is one of the main reasons people use DI.
However, dependency injection is costly: it slows down the app because resolving is expensive compared to normal instantiation.
It really doesn't cost that much at all. Have you tried it yourself? Unless the object in question (i.e. the object being injected) is very expensive to instantiate, you have no real reason to stay away from DI and Inversion of Control. Also, as @Lilshieste noted above, creating the object graph up front (see AppDelegate) will probably make this even less of a problem.
A good way of doing that is described here:
http://cocoapatterns.com/passing-data-between-view-controllers/ and here http://cocoapatterns.com/ios-view-controller-transitions-mediator-pattern/
The other way is to define all these behaviors in the app and enable/disable them in a plist file. like "is login optional? yes/no"
While less "elegant", this solution is a pretty useful one, especially if the project is not really big in terms of the number of classes and VCs. It is also the easiest one to implement if the app code is already laid out and introducing major design changes would require a lot of refactoring.
Always decide based on the task at hand; there is rarely, if ever, a single solution to a software design problem.
I am currently building an ASP.NET MVC application, which has been broken down into multiple modules (as well as a generic class library).
I have implemented a Unit Of Work pattern for my first module. This unit of work class contains a number of different repositories.
However, I was wondering whether or not it is good idea to have a separate Unit Of Work class for each module?
Well, EF supplies you with the Unit of Work and Repository patterns itself. Usually they are not exactly what you want, and it seems nice to add some methods to those native EF repositories, but in most cases it isn't worth the trouble.
Implementing your own Repository on top of EF is not a good idea if your project is simple. It adds a lot of work but not much value.
Implementing a Unit of Work on top of EF is a completely different story. The only reason I can see to do it is "to have a different UoW for different parts of the solution". Otherwise, really, avoid it.
We tried to add both of these approaches, ignoring the prebuilt ones, in our project. It was completely reasonable, because we were designing a modular solution and we didn't even know how many modules we would have in the end. We expected to add new modules to the system while it was already running and under heavy load. And I can say that it took a lot of time to develop such an application. Realizing that you need access to one more entity from some module and that this leads to changes in several places is the first evidence of an inefficient design.
So, KISS and YAGNI argue against it. If you are tangled up in the question "should I add this stuff to my project?" - just don't. You need a good reason to implement these parts yourself, not just a "nice design" bias, because it adds a lot of complexity. Even if you think you will need it someday - wait until that day. If you try to estimate which miscalculation would be more disastrous, I am pretty sure it is much easier to add something new to your project than to remove something that already exists.
Please see this and this
A unit of work is really just a way of keeping of track of a set of entities that have been loaded into memory. Once loaded, we can work with the entities in the normal way: changing state, adding new entities and removing other entities. When we are ready to save our changes we ask the unit of work to commit and it takes care of “flushing” the pending changes to the underlying database.
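As a language-agnostic illustration of that idea (hypothetical names only, not EF's actual API), a unit of work boils down to something like this:
interface Entity { id: string; }

// Track new/changed/removed entities and flush all pending changes in one commit.
class UnitOfWork<T extends Entity> {
  private newEntities: T[] = [];
  private dirtyEntities = new Map<string, T>();
  private removedIds = new Set<string>();

  registerNew(entity: T): void { this.newEntities.push(entity); }
  registerDirty(entity: T): void { this.dirtyEntities.set(entity.id, entity); }
  registerRemoved(entity: T): void { this.removedIds.add(entity.id); }

  // "Flush" the pending changes to the underlying database in one go.
  async commit(db: {
    insert(e: T): Promise<void>;
    update(e: T): Promise<void>;
    remove(id: string): Promise<void>;
  }): Promise<void> {
    for (const e of this.newEntities) await db.insert(e);
    for (const e of this.dirtyEntities.values()) await db.update(e);
    for (const id of this.removedIds) await db.remove(id);
  }
}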
Is it a good idea to have a separate Unit Of Work class for each module?
My first thought is: how would a unit of work for one module differ from that of another? If they would differ, they probably shouldn't, because the domain should be persistence-ignorant and the data layer should be business-logic-ignorant.
Take, for instance, the UoW that comes with Entity Framework itself: the context. When you create a context, do some work, call SaveChanges(), and dispose of it, it acts as a UoW. You can probably use one context class for your whole application. You're not going to put any business logic in your context class, so there is no reason to have a context class per module unless each module uses really distinct parts of the database (which is hardly ever the case). The same holds for a UoW you create yourself.
It's a bit beyond the scope of your question, but you could ask yourself whether you need your own UoW and repository classes at all, as EF offers basic implementations of both (the context and DbSets).
Just curious why ManagedObjectContexts should be passed to UIViewControllers when they are created, rather than just grabbing them from a UIApplicationDelegate?
The docs say that this makes your applications more rigid, but I am failing to see the nuances of when to use which pattern.
Thanks!
Imagine that I ask you to do some task, like painting a room. If I just tell you "go paint a room," you'll need to ask me a lot of questions, like:
Which room?
Where's the paint?
Where are the brushes?
Should I use a dropcloth?
In short, you won't be able to complete the task without help from me. If you have to depend on me every time, you won't be a very flexible painter. One way to deal with that problem is for me to give you all the stuff you need at the outset. Instead of "go paint a room," I'll say "please paint room number 348 using this bucket of paint and this brush, and don't bother with a dropcloth." Now, you've got everything you need, and you can get right to work with no further help from me. You're a much more flexible worker because you no longer depend on me.
The same thing applies to view controllers (and objects generally); it's better to give them everything they need than to have them depend on a particular object like the app delegate. It's true not just for managed object contexts, but for any information they need to do their job.
This is mainly because you want to use dependency injection with your UIViewControllers instead of just grabbing everything from UIApplication; this keeps your delegate clean instead of full of reference hacks.
This is also to keep with the MVC pattern:
Model
View Controller (Only for view logic)
Controller (For coordinating between the view and the model)
I tend not to agree with this pattern.
First of all, I try to treat Core Data as an implementation detail, and like any implementation detail it should be hidden behind a good facade. The facade is the set of interfaces I expose for my model objects. For example, say I have two model objects, Course and Student, where any course can have a number of students. I do not want the controller to take on the duty of setting up predicates and sort descriptors and jumping through all the Core Data hoops just to get the list of students for a particular class. There is a perfectly valid way to expose this in the model:
@interface Course (StudentAccess)
-(NSArray*)studentsSortedByName;
@end
Then implement the ugly stuff once and for all in the model class, hiding all the complex details of Core Data, with no need to pass around managed object contexts. But how would I find the courses; it has to start somewhere, right? Yes, it does, but you need not expose that to the controller. Adding methods such as these is perfectly reasonable as well:
@interface Course (CourseAccess)
+(Course*)courseByID:(NSString*)courseID;
+(NSArray*)allCourses;
+(NSArray*)coursesHeldByTeacher:(Teacher*)teacher;
@end
This also helps minimize dependencies between controllers and reduces the dependencies between the model and the controller. Assuming I have a CourseViewController and a StudentViewController, if I did not hide the Core Data details behind a facade and wanted to pass around the managed object context as well, I would end up with a designated initializer like this:
-(id)initWithManagedObjectContext:(NSManagedObjectContext*)moc
student:(Student*)student;
Whereas with a good facade I end up with this:
-(id)initWithStudent:(Student*)student;
Minimizing dependencies behind facades, rather than passing dependencies around, also makes it much easier to change the internal implementation. Passing around the managed object context encourages each controller to implement its own logic for basic stuff. Take, for example, the studentsSortedByName method. At first it might sort by first/last name; if that later changes to a last/first name sort, you would have to go to each and every controller that sorts students and make the change. With a good facade method you make the change in one place, and all controllers automagically get the update for free.
The Apple Docs try to foster the most widely applicable and sustainable design patterns.
Dependency injection is preferred because it allows for the most flexible, expandable, reusable and maintainable design.
As apps grow in complexity, using a quasi-singleton, like parking the context in the app delegate, breaks down. In more complex apps, you may have multiple contexts tied to multiple stores. You might want the same view-controller/view pair to display data from different contexts at different times, or you may end up with multiple contexts on different threads/operations. You can't pile all those contexts up in the app delegate.
If you have a simple app with a single context, then using the quasi-singleton with the app delegate can work well. I've used it on several smaller apps in the past without immediate issue, but I did hit scalability problems on a couple of apps when they grew over time.
Which pattern to use depends on your shipping constraints and your best guesses about the evolution of the app over its entire lifecycle. If it's a small one-shot app, then the app-delegate quasi-singleton will work fine. If the app is more complex, might grow more complex, or might spawn other related apps that will reuse existing components, then dependency injection is the way to go.
I come from an MVC background (Flex and Rails) and love the ideas of code separation, reusability, encapsulation, etc. It makes it easy to build things quickly and reuse components in other projects. However, it has been very difficult to stick with the MVC principles when trying to build complex, state-driven, asynchronous, animated applications.
I am trying to create animated transitions between many nested views in an application, and it got me thinking about whether or not I was misleading myself... Can you apply principles from MVC to principles from Artificial Intelligence (Behavior-Trees, Hierarchical State Machines, Nested States), like Games? Do those two disciplines play nicely together?
It's very easy to keep the views/graphics ignorant of anything outside of themselves when things are static, like with an HTML CMS system or whatever. But when you start adding complex state-driven transitions, it seems like everything needs to know about everything else, and the MVC almost gets in the way. What do you think?
Update:
An example: right now I am working on a website in Flex. I have come to the conclusion that in order to properly animate every nested element in the application, I have to think of them as AI agents. Each "View", then, has its own behavior tree. That is, it performs an action (shows and hides itself) based on the context (what the selected data is, etc.). In order to do that, I need a ViewController-type thing; I'm calling it a Presenter. So I have a View (the graphics laid out in MXML), a Presenter (defining the animations and actions the View can take based on the state and nested states of the application), and a Presentation Model to present the data to the View (through the Presenter). I also have Models for value objects and Controllers for handling URLs and database calls, etc... all the normal static/HTML-like MVC stuff.
For a while there I was trying to figure out how to structure these "agents" such that they could respond to their surrounding context (what's selected, etc.). It seemed like everything needed to be aware of everything else. And then I read about a Path/Navigation Table/List for games and immediately thought they have a centrally-stored table of all precalculated actions every agent can take. So that got me wondering how they actually structure their code.
All of the 3D video game stuff is a big secret, and a lot of it from what I see is done with a graphical UI/editor, like defining behavior trees. So I'm wondering if they use some sort of MVC to structure how their agents respond to the environment, and how they keep their code modular and encapsulated.
"Can you apply principles from MVC to
principles from Artificial
Intelligence (Behavior-Trees,
Hierarchical State Machines, Nested
States), like Games?"
Of course. 99.9% of the AI is purely in the Model. The Controller sends the inputs to it; the View is how you represent it on the screen to the user.
Now, if you want to start having the AI control something, you may end up nesting the concepts: your game "model" contains a Model for an entity, a Controller for the entity (which is the AI sending commands to it), and a View for the entity (which represents the perceptions of that entity that the Controller can work with). But that's a separate issue from whether it can "play nicely". MVC is about separating presentation and input from logic and state, and that aspect doesn't care what the logic and state look like.
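A rough sketch of that nesting (hypothetical names; TypeScript purely for illustration):
// The entity's "model": its state plus the commands it accepts.
interface EntityModel {
  position: { x: number; y: number };
  moveTowards(target: { x: number; y: number }): void;
}

// The entity's "view" from the AI's perspective: what the entity can perceive.
interface EntityPerception {
  nearestVisibleEnemy(): { x: number; y: number } | null;
  lastHeardGunshot(): { x: number; y: number } | null;
}

// The entity's "controller": the AI deciding what the model should do.
class EnemyAIController {
  constructor(private model: EntityModel, private perception: EntityPerception) {}

  update(): void {
    // React to sight first, then to sound.
    const target = this.perception.nearestVisibleEnemy() ?? this.perception.lastHeardGunshot();
    if (target) this.model.moveTowards(target);
  }
}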
Keep this in mind:
The things which need to react simply have to be aware of the things to which they need to react.
So if they need to know about everything, then they need to know about everything.
Otherwise, how do you make them aware? In 3D video games, say first-person shooters, the enemies react to sound and sight (footsteps/gunshots and you/dead bodies, for instance). Note that I indicated an abstract basis and parts of the decision tree.
It might be wrong in your specific case to split the whole thing between several agents; it may be simpler to leave it to one main agent that can delegate orders to separate processes (/begin babble): each view could be a process that the main agent can tell to switch to any of a number of views, depending on what data the main agent has received.
Hope that helps.. Take it all with a grain of salt :)
It sounds like you need to make more use of the Observer/Event Aggregator pattern. If multiple components need to react to arbitrary application events without introducing undue coupling, an event aggregator will help you out. Example: when an item is selected, an application event is published, the relevant controllers tell their views to run animations, and so on. Different components aren't aware of each other; they just listen for common events.
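A minimal sketch of an event aggregator (hypothetical names; TypeScript for illustration):
type Handler<T> = (payload: T) => void;

// Publishers and subscribers only know the aggregator and the event name,
// never each other.
class EventAggregator {
  private handlers = new Map<string, Handler<unknown>[]>();

  subscribe<T>(event: string, handler: Handler<T>): void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler as Handler<unknown>);
    this.handlers.set(event, list);
  }

  publish<T>(event: string, payload: T): void {
    for (const handler of this.handlers.get(event) ?? []) handler(payload);
  }
}

// Usage: one component publishes "itemSelected"; others react independently.
const events = new EventAggregator();
events.subscribe<{ id: string }>("itemSelected", ({ id }) => {
  // e.g. tell this controller's view to run its transition animation
  console.log(`animating detail view for item ${id}`);
});
events.publish("itemSelected", { id: "42" });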
Also, the code that makes the view do things (launching an animation depending on model/controller state) - that's part of the View itself, so you don't have to make your architecture weird by having both a controller and a view controller. If it's UI-specific code, then it's part of the view. I'm not familiar with Flex, but in WPF/Silverlight, stuff like that would go into the code-behind (though for the most part the Visual State Manager is more than enough to handle state animations, so you can keep everything in XAML).