Controller A calling Model B: Code smell? - ruby-on-rails

I have a toys controller which users can use to claim toys to play with. Right now, the claim method is implemented at the controller level (as this answer suggested I do).
However, now it's getting a bit fat with claiming logic that really shouldn't be there: A child can't claim a toy if they already have 3 toys, a child can't claim a toy claimed by another child, and so on. The sensible spot for that logic (in my mind) is in the child model, because I'm describing the behavior of a child (what they may and may not do).
That said, if I do this, the toys#claim controller action is going to call methods from the child model. Is this a code smell/bad practice?
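For concreteness, the refactor I have in mind looks roughly like this (just a sketch; assume Child has_many :toys, Toy belongs_to :child, and current_child is whatever lookup the app already uses):

    # app/models/child.rb
    class Child < ActiveRecord::Base
      has_many :toys

      MAX_TOYS = 3

      # Returns true if the claim succeeded, false if the rules forbid it.
      def claim(toy)
        return false if toys.count >= MAX_TOYS   # already has 3 toys
        return false if toy.child.present?       # claimed by another child

        toy.update(child: self)
      end
    end

    # app/controllers/toys_controller.rb
    class ToysController < ApplicationController
      def claim
        toy = Toy.find(params[:id])

        if current_child.claim(toy)
          redirect_to toy, notice: "Toy claimed."
        else
          redirect_to toy, alert: "You can't claim that toy."
        end
      end
    end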
(I'm guessing someone's going to suggest I use a service object for this. If you do, could you please point out a simple tutorial? The recent RailsCast about this is a bit too complicated for me.)
Thanks in advance!

In general (outside of Rails), it is not a smell at all. In fact, I'd argue that having a pure 1:1 mapping between "models" and "controllers" is a smell.
Note: I am not a ROR dev. I have no experience in ROR or how it implements things. However, I do understand design patterns quite well, and understand application architecture. With that said:
Instead of worrying about 1:1 mappings, step back and think about the structure of the application.
What is the Controller supposed to be doing? Well, in general it is supposed to route user actions to the application. It is just a plumbing step.
Then what is a Model (layer) supposed to be doing? In general, the Model is a layer that encompasses all of your business logic in the application. It will handle database interaction, access controls, business operations, etc. Therefore, the model is actually the vast majority of your application.
The View on the other hand is your presentational layer. It should handle all rendering, pulling data from the model layer.
Based on that understanding, your models, views and controllers should be able to vary independently of each other. In general, I'd expect to see a fairly 1:1 relationship between controllers and views: for each controller that exists, I'd expect to see a corresponding view. But there can be views that exist where there's no user interaction. In those cases you may need a controller (to render the view), or, depending on your architecture, you may not need one.
But the "model classes", which are a small part of the model layer (acting as proxies or adapters for the lower model functionality) may or may not be 1:1 with controllers or views. For example, you may have a view that pulls data from multiple models. You can have a controller which acts on multiple models.
Now you could step back and say that if a controller needs to act upon multiple models, then create a new model which abstracts that operation. Sometimes that is the right thing to do. Sometimes it's not. It all boils down to the specific operations and relationships involved...
At the end of the day, there's no "right" or "wrong" here. It really comes down to a design decision that you need to make as you structure your application. I wouldn't worry too much about the "smell" component, as long as it makes sense in your application...

Related

MVC4 Application with EF5 model first, without ViewModels or Repositories

I'm building an MVC4 app; I've used EF5 model first and kept it pretty simple. This isn't going to be a huge application: there will only ever be 4 or 5 people on it at once, and all users will be authenticated before being able to access any part of the application. It's very simply a "place order - dispatcher sees order - dispatcher completes order" sort of application.
Basically my question is: do I need to be worrying about repositories and ViewModels if the size and scope of my application is so small? Any view that is strongly typed to a domain entity is using all of the properties within that entity. I'm using TryUpdateModel in my controllers and have read some things saying this can cause a lot of problems, but not a lot of information on exactly what those problems can be. I don't want to use an incredibly complicated pattern for a very simple app.
Hopefully I've given enough detail, if anyone wants to see my code just ask, I'm really at a roadblock here though, and could really use some advice from the community. Thanks so much!
ViewModels: Yes
I only see bad points when passing EF entities directly to a view:
You need to do manual whitelisting or blacklisting to prevent over-posting and mass assignment
It becomes very easy to accidentally lazy-load extra data from your view, resulting in SELECT N+1 problems
In my personal opinion, a view model should closely resemble the information displayed on the view, and in most cases (except for basic CRUD stuff) a view contains information from more than one entity (see the sketch below)
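For illustration, the view-model idea is framework-agnostic even though the question is about EF/ASP.NET MVC; a rough plain-Ruby sketch with made-up names shows the shape of it:

    # The view receives an object shaped like the screen,
    # not the persistence entity itself.
    class CustomerSummaryViewModel
      attr_reader :full_name, :open_order_count

      def initialize(customer, orders)
        @full_name        = "#{customer.first_name} #{customer.last_name}"
        @open_order_count = orders.count(&:open?)
      end
    end

    # In a controller action (hypothetical):
    #   @summary = CustomerSummaryViewModel.new(customer, customer.orders.to_a)
    # The view can only read full_name and open_order_count, so it cannot
    # trigger lazy loading or post back fields you never intended to expose.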
Repositories: No
The Entity Framework DbContext already is an implementation of the Repository and Unit of Work patterns. If you want everything to be testable, just test against a separate database. If you want to make things loosely coupled, there are ways to do that with EF without using repositories too. To be honest, I really don't understand the popularity of custom repositories.
In my experience, the requirements on a software solution tend to evolve over time well beyond the initial requirement set.
By following architectural best practices now, you will be much better able to accommodate changes to the solution over its entire lifetime.
The Repository pattern and ViewModels are both powerful, and not very difficult or time-consuming to implement. I would suggest using them even for small projects.
Yes, you still want to use a repository and view models. Both of these tools let you keep code in one place instead of scattered all over, and they will save you time. More than likely, they will save you copy-paste errors too.
Moreover, having these tools in place will make it easier to expand the system in the future, instead of having to pore through all of the code, which would have poor readability.
Separating your concerns will lead to less code overall, a more efficient system, and smaller controllers / code sections. View models and a repository are not heavily intrusive to implement. It is not like you are going to implement a controller factory or dependency injection.

Map one entity to one viewmodel or to many viewmodels

I have multiple viewmodels for one customer entity, depending on the existing views: Create, Update, Get, Delete. These viewmodels share up to 75% of their properties with the entity.
Would it be better to merge all the customer viewmodels into one big viewmodel?
Then I would only and always have to map from one entity to one viewmodel and back.
Do you see any disadvantage in flexibility for scenarios I haven't thought of yet?
In the long run, keeping them separate will be better because while the data contained in each ViewModel may be similar or even identical, the intention is different. For example, the Create and Update ViewModels are certainly similar, but have a few important differences. First, the Create view model usually doesn't have the identity of the entity and having it there may be confusing since it doesn't make sense. Second, if the application supports partial updates, the update ViewModel may be a collection of changes to an existing entity, not the entity as a whole.
If you are striving for DRY, you can achieve re-usability by means other than sharing the entire ViewModel class. Instead, you can create smaller re-usable components and re-use them by composition instead of inheritance. Attempting to coerce a single ViewModel class to fulfill all requirements will be buggy because the code is more difficult to reason about. Many times, simple copy & paste gets the job done better than what OOP offers.
On one hand, splitting up the ViewModels as you describe makes your code base really clear, as you can make sure each ViewModel is exactly fit for purpose and has no unnecessary properties. On the other hand, it means that you have more code to maintain - a change to your entity may well mean changes to several ViewModels.
The one big ViewModel approach, meanwhile, has basically exactly the opposite pros and cons - less code to maintain, but ViewModels that are less fit for purpose.
There isn't really a right or wrong answer here; you've got to weigh up the pros and cons of each approach and decide what will work best for you.
One sort of halfway approach is to have a single ViewModel for Create/Update, one for Retrieve and one for Delete, as the Create/Update are likely to be very similar.
Another, more OO option for you lies in good old inheritance: define the functionality common to all actions in one class, MyVM, and extend it (inherit from it) as you see fit for the different actions: MyVMEdit, MyVMDelete, MyVMCreate, MyVMList.
This is the best of both worlds: you only maintain things once, and you extend them to fit precisely to every view.
There is no right approach here, since it's not math; any approach you take that gets the job done will get the job done :) But... sometimes we get carried away too far from our roots :) and this is pure, good old object-oriented practice.
If inheritance (or extension) poses any issues (for any reason), you can embed the MyVM portion inside every MyVM<Action> model and achieve the same balance of abstraction and functionality.
As usual - right tool for the right job.
Hope this will help you.
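To make the two shapes described above concrete, here is a minimal Ruby-flavored sketch (the field names are made up; the original question is presumably C#, but the structure is the same):

    # The fields every action shares live in the base view model...
    class MyVM
      attr_accessor :name, :email
    end

    # ...and each action-specific view model extends it with what that view needs.
    class MyVMEdit < MyVM
      attr_accessor :id
    end

    class MyVMCreate < MyVM
      # no :id -- a record being created has no identity yet
    end

    # The "embedding" alternative, if inheritance gets awkward:
    class MyVMList
      attr_accessor :common             # holds a MyVM with the shared fields
      attr_accessor :page, :sort_column
    end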

What are the best practices for Asp.net MVC page construction, allowing for parallelism and efficiency?

I found it difficult to argue with anything in this critique of ASP.net MVC's framework for page composition.
http://www.matlus.com/problems-with-asp-net-mvc-framework-design/
Particularly these points:
No access to view or partial view instances
ViewData is your loosely typed information carrier
Controller is not really in control
Child Actions have no sense of the Request Context
Views are coupled to controllers
For small applications, I don't think that a lot of these prove to be much of a problem, but in large applications where you want to reuse a lot of shared components, or even if you just have a large application that depends on multiple backend sources of information to obtain all of the information necessary to render a view, it starts to break down.
Various half-solutions to these problems have been proposed but they do not appear to scale well or have undesirable design constraints.
Here is an example application scenario:
50% of page content is common across all pages within an application (header, footer, menus, etc.)
Your application may actually be comprised of multiple areas, each with their own controllers, etc. for independent development.
A number of the page elements (menus, page header information, footers, disclosures) that are in the common page content require one or more service calls to fill out the data for rendering.
Okay, so in ASP.NET MVC 3, let's say that you decide you want to share a common Razor layout that contains the 50% common UI markup. This assists with Separation of Concerns in that application developers don't need to be concerned about the common UI and can focus on the logic and views specific to their domain of expertise.
However, this completely breaks down in the case that this shared layout needs data (some semblance of one or more model types) to render itself completely. You may have independent elements on the page that each need a particular data model, such as:
* primary menu model
* secondary menu model
* footer links model
* authorization model
* footer disclaimers model
And each of these models may have separate sources. So although you can share the template, there is not an easy way to share the logic to build each of these models -- and there is definitely not one that is generic, extensible, and performant that I have seen.
Some approaches to this problem that I have seen are:
Strongly type the common layout, which requires all view models to subclass a common base model class (but there is no general solution to populating such a meta-model, and this is limiting in design and makes models huge and harder to test). Additionally, model population still falls to every controller, violating Separation of Concerns and the Single Responsibility Principle and complicating unit testing of controllers by piling on lots of extra logic to populate the meta-model in addition to the view-specific model information.
Leave the common layout untyped, so you don't have to inherit from a common base model, but this requires you to use ViewData or ViewBag to communicate all of the disparate models that the template needs so you lose strong typing benefits and end up with a loose data contract. You still have the problem of a lack of a general solution to populating the meta-model and all that goes along with that.
Every controller has to subclass a common base controller class to support a common layout and model. Logic for building the common aspects of the meta-model goes here. But this is not always a desirable architecture or design constraint. This does at least resolve the Separation of Concerns issues.
Instead of a meta-model, use child actions via RenderAction() in your common layout to make reusable "portlet"-style widgets that each independently know how to build their data model and provide it to their view. This is really good for Separation of Concerns, but has its own litany of downsides: views effectively make service calls during rendering via the child actions; child actions are completely unaware of the original request context; and it violates the DRY principle, since each child action is unaware of what has gone before it, so each could make the same service calls over and over again within the same HTTP request; and so on. Imagine 20-30 elements of a page that all needed to invoke RenderAction() independently...
There are additional cases (some seen on stackoverflow as well) where there are other problems with RenderAction() as a solution. e.g. the fact that issuing multiple RenderAction() calls in a loop results in serial execution of all of those controller methods. There is no opportunity for parallelism with RenderAction(). I/O bound service calls in each child controller action cause the whole rendering process to wait on I/O. A controller only has knowledge of its immediate view and model and nothing has a complete picture of what is going to be inside the view in order to parallelize some operations.
The author of the above critique developed a different UI model on top of ASP.NET MVC called Quartz that allows a God Controller to have intimate knowledge of the views and hand each of them a view model, and so has the opportunity to parallelize service calls in a central place to build those view models. I don't know if this is the best design for providing hooks to overcome these problems, but it looks promising.
My question is: what is the best practice for building a complex application on top of ASP.NET MVC that cleanly solves these problems? I have thought of a couple of possibilities (although none may be practical within ASP.NET MVC; that is TBD), but someone else must have run into this already. What are the design patterns within ASP.NET MVC, or what is coming down the pike, that could make this a tractable problem?
Personally, I think that the advantages of using Child Actions via RenderAction outweigh the disadvantages.
You can create 'widget' sort of elements, and wrap up their logic in a controller action - this way the view calling the widget can remain quite ignorant of what the Child Action is doing and how it is doing it - leading to a nice separation of concerns.
You have detailed the disadvantages of this approach, however I think that the negative impact can be minimised with a reasonable caching strategy.
I'm not sure there's really much more I can contribute to this "question". I think you have a good understanding of the problems and solutions, advantages and disadvantages.
In the app I'm currently working on, we utilize a couple of these approaches by having both a base model object and a base controller. To minimize roundtrips, we store some data in session and re-populate the model by overriding OnActionExecuted in the base controller, grabbing the model out of the context, and setting properties from session.
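For comparison, the "base controller populates the shared layout data" approach has a direct analogue in Rails; a hypothetical sketch only (MenuService and FooterService are made-up stand-ins for the shared service calls):

    # app/controllers/application_controller.rb
    class ApplicationController < ActionController::Base
      before_action :load_layout_data

      private

      # Every action gets the shared layout models without repeating the lookups;
      # caching keeps the shared service calls off the per-request critical path.
      def load_layout_data
        @primary_menu = Rails.cache.fetch("layout/primary_menu", expires_in: 10.minutes) do
          MenuService.primary_items
        end
        @footer_links = Rails.cache.fetch("layout/footer_links", expires_in: 1.hour) do
          FooterService.links
        end
      end
    end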
I'd certainly like to hear any wonderful solutions as well, but I think these are just the tradeoffs to deal with.

Ruby on Rails DTO objects - Where do you store them?

Does anyone here use DTOs to transfer data from the controller to the view? If so, where would you recommend storing those files? app/dtos, and then let them mirror the views directory structure? Any recommendations on testing these animals with RSpec?
Please don't listen to the other answers; they are terrible. Rails helpers are terrible. Using Rails models everywhere is terrible. I am begging you: design your application properly and decide whether you need DTOs or not. Decide whether you actually want Rails models to handle things other than communication with the database. Decide whether you actually want to go without a layer between your app and Rails, and so on and so forth. Rails' design is suitable only for small apps or apps that have to be developed super quickly. But if it's not something trivial and you expect to develop it for some time, please invest your time into proper design. Don't be afraid to break Rails conveniences. And may the force be with you.
The Rails convention is not to use distributed tiers for controller and view layers. The separation is there, but it is logical and relatively thin/lightweight compared to the types of frameworks you see in Java land.
The basic architecture is that the controller sets instance variables that are available in the corresponding view. In the general case, the instance variables will be model instances or collections of model instances (coming from the database). Models should be the core of your business logic. Controllers coordinate flows of data. Views display it. Helpers are used to format display values in the view ... anything that takes a model value and does something just for display purposes (you may find that a helper method used repeatedly may actually be better off on the model itself).
However, if you find that a view needs knowledge of many different models, you might find it easier to wrap models into another object at a higher-level of abstraction. Nothing prevents you from creating non-active-record objects that collect and coordinate your actual AR models. You can then instantiate these objects in the controller, and have them available to the view. You generally have to be at a pretty dense level of complexity in the controller to need this type of thing.
I would tend to throw such objects into app/models - Rails already loads everything in this directory, which keeps things easy from a config/expectation point of view.
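For example, a plain (non-ActiveRecord) object that collects several models for one view might look like this (hypothetical names, a sketch only):

    # app/models/dashboard.rb -- a plain Ruby object, not an ActiveRecord model
    class Dashboard
      def initialize(user)
        @user = user
      end

      def recent_orders
        @recent_orders ||= @user.orders.order(created_at: :desc).limit(5)
      end

      def unread_messages
        @unread_messages ||= @user.messages.where(read: false)
      end
    end

    # In the controller:
    #   @dashboard = Dashboard.new(current_user)
    # The view then reads @dashboard.recent_orders, @dashboard.unread_messages, etc.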
If you're coming from a .NET or J2EE background you may be thinking about patterns like DTO. You may or may not be surprised (and possibly happy) to learn that Rails doesn't do things that way by convention.
In particular, there is no need at all to formally transfer (or store) serialized objects between the controllers and views. Instance variables (typically model attribute values) created in the controller are available within the view for free, provided by the framework with no additional programming effort needed.
What I've been told is that, generally, this is work that is handled by 'helpers'. They basically help you format your model objects for view consumption from within the view. So it's definitely not a 1:1 mapping of concepts, but that's the thinking in the Rails world.
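For example, a typical helper is just a small formatting method available to the view (claimed_at here is a hypothetical column):

    # app/helpers/toys_helper.rb
    module ToysHelper
      # Formats a toy's claim state for display.
      def claimed_status(toy)
        toy.claimed_at ? "Claimed #{time_ago_in_words(toy.claimed_at)} ago" : "Available"
      end
    end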

Does Model-View-Controller Play Nicely with Artificial Intelligence and Behavior Trees?

I come from an MVC background (Flex and Rails) and love the ideas of code separation, reusability, encapsulation, etc. It makes it easy to build things quickly and reuse components in other projects. However, it has been very difficult to stick with the MVC principles when trying to build complex, state-driven, asynchronous, animated applications.
I am trying to create animated transitions between many nested views in an application, and it got me thinking about whether or not I was misleading myself... Can you apply principles from MVC to principles from Artificial Intelligence (Behavior-Trees, Hierarchical State Machines, Nested States), like Games? Do those two disciplines play nicely together?
It's very easy to keep the views/graphics ignorant of anything outside of themselves when things are static, like with an HTML CMS system or whatever. But when you start adding complex state-driven transitions, it seems like everything needs to know about everything else, and the MVC almost gets in the way. What do you think?
Update:
An example: right now I am working on a website in Flex. I have come to the conclusion that in order to properly animate every nested element in the application, I have to think of them as AI agents. Each "View", then, has its own Behavior Tree. That is, it performs an action (shows and hides itself) based on the context (what the selected data is, etc.). In order to do that, I need a ViewController type thing; I'm calling it a Presenter. So I have a View (the graphics laid out in MXML), a Presenter (defining the animations and actions the View can take based on the state and nested states of the application), and a Presentation Model to present the data to the View (through the Presenter). I also have Models for value objects and Controllers for handling URLs and database calls, and so on: all the normal static/HTML-like MVC stuff.
For a while there I was trying to figure out how to structure these "agents" such that they could respond to their surrounding context (what's selected, etc.). It seemed like everything needed to be aware of everything else. And then I read about a Path/Navigation Table/List for games and immediately thought they have a centrally-stored table of all precalculated actions every agent can take. So that got me wondering how they actually structure their code.
All of the 3D video game stuff is a big secret, and a lot of it from what I see is done with a graphical UI/editor, like defining behavior trees. So I'm wondering if they use some sort of MVC to structure how their agents respond to the environment, and how they keep their code modular and encapsulated.
"Can you apply principles from MVC to
principles from Artificial
Intelligence (Behavior-Trees,
Hierarchical State Machines, Nested
States), like Games?"
Of course. 99.9% of the AI is purely in the Model. The Controller sends the inputs to it, the View is how you represent it on the screen to the user.
Now, if you want to start having the AI control something, you may end up nesting the concepts, and your game 'model' contains a Model for an entity, a Controller for the entity which is the AI sending commands to it, and a View for the entity which represents the perceptions of that entity that the Controller can work with. But that's a separate issue from whether it can 'play nicely'. MVC is about separating presentation and input from logic and state and that aspect doesn't care what the logic and state looks like.
Keep this in mind:
The things which need to react simply have to be aware of the things to which they need to react.
So if they need to know about everything, then they need to know about everything.
Otherwise, how do you make them aware? In 3D video game stuff, say first-person shooters, the enemies react to sound and sight (footsteps/gunshots and you/dead bodies, for instance). Note that I indicated an abstract basis, and parts of the decision tree.
It might be wrong in your specific case to separate the whole thing between several agents, and simpler to leave it to one main agent who can delegate orders to separate processes (/begin babble): each view could be a process which could be told to switch to any of a number of views by the main agent, depending on what data the main agent has received.
Hope that helps. Take it all with a grain of salt :)
It sounds like you need to make more use of the Observer/Event Aggregator pattern. If multiple components need to react to arbitrary application events without introducing undue coupling, then using an event aggregator would help you out. Example: when an item is selected, an application event is published, relevant controllers tell their view to run animations, etc. Different components aren't aware of others, they just listen for common events.
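An event aggregator is only a few lines in any language; a rough Ruby sketch of the idea (the event names are made up):

    # Components publish and subscribe to named events
    # without knowing about each other.
    class EventAggregator
      def initialize
        @subscribers = Hash.new { |hash, key| hash[key] = [] }
      end

      def subscribe(event, &handler)
        @subscribers[event] << handler
      end

      def publish(event, payload = nil)
        @subscribers[event].each { |handler| handler.call(payload) }
      end
    end

    events = EventAggregator.new
    events.subscribe(:item_selected) { |item| puts "animate detail view for #{item}" }
    events.subscribe(:item_selected) { |item| puts "update breadcrumb for #{item}" }
    events.publish(:item_selected, "Toy #42")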
Also, the code that makes the view do things (launch animation depending on model/controller state) - that's part of the View itself, so you don't have to make your architecture weird by having a controller and a viewcontroller. If it's UI specific code, then it's part of the view. I'm not familiar with Flex, but in WPF/Silverlight, stuff like that would go into the code-behind (though for the most part Visual State Manager is more than enough to deal with state animations so you can keep everything in XAML).
