Should the Controller make direct assignments on the Model objects, or just tell the Model what needs to be done?
The controller has two traditional roles:
handling the input event from the UI (registered handler or callback)
notifying the model of an action--which may or may not result in a change on the model's state
It does not perform data validation, that is on the model, nor does it have any say in how information is presented.
It depends largely on the scope of your application. If it's relatively quick and dirty, then there's no sense in over-engineering, and sure, your controllers can talk to your model objects. On the other hand, if it needs to be more "enterprisey" for whatever reason, a good pattern to use in conjunction with MVC is the so-called "Business Delegate". This is where you can compose coarse-grained methods out of one or more methods on one or more model objects; for instance deleting an object and then returning a refreshed list without that object. This layer gives two advantages. For one, it decouples the controllers from whatever ORM system is being used for model objects. Furthermore, it is the layer that finally must constructively deal with any exceptions that may have occurred instead of re-throwing them.
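To make that concrete, here is a minimal sketch of a Business Delegate along those lines; the type names (Order, IOrderRepository, OrderDelegate) are illustrative, not from any particular framework:

using System;
using System.Collections.Generic;

public class Order { public int Id { get; set; } }

public interface IOrderRepository
{
    Order GetById(int id);
    void Delete(Order order);
    IList<Order> GetAll();
}

public class OrderDelegate
{
    private readonly IOrderRepository _orders;

    public OrderDelegate(IOrderRepository orders) { _orders = orders; }

    // Coarse-grained operation composed from finer-grained model calls:
    // delete an object, then return the refreshed list without it.
    public IList<Order> DeleteAndRefresh(int orderId)
    {
        try
        {
            var order = _orders.GetById(orderId);
            if (order != null)
                _orders.Delete(order);
            return _orders.GetAll();
        }
        catch (Exception)
        {
            // This is the layer that deals with the failure constructively
            // (log, compensate) instead of re-throwing it to the controller.
            return new List<Order>();
        }
    }
}

The controller then depends only on the delegate, not on whatever ORM or repositories sit behind it.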
I don't think a controller should be dealing with model objects.
I tend to think that controller is really part of the UI tier. I prefer to inject a service layer in-between the controller and the rest of the app. The web tier accepts HTTP requests, unmarshals parameters from request objects into objects that the service interface can deal with, and marshals the response to send back. All the work with transactions, units of work, and dealing with model and persistence objects is done by the service.
This approach is more service oriented. It separates the service from the user interface, leaving open the possibility that several clients can reuse the same service. It makes the layer that marshals requests to the service "thin", so it's easy to switch out SOAP services for REST or EJB or CORBA or whatever the next new thing will be.
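As a rough sketch of that shape (ICustomerService, CustomerRequest and the other names are made up for illustration), the controller only unmarshals request data, delegates to the injected service, and marshals the result back into a view:

using System.Web.Mvc;

public class CustomerRequest { public string Name { get; set; } }
public class CustomerResult { public int Id { get; set; } public string Name { get; set; } }

public interface ICustomerService
{
    CustomerResult Register(CustomerRequest request);
}

public class CustomerController : Controller
{
    private readonly ICustomerService _service;

    public CustomerController(ICustomerService service) { _service = service; }

    [HttpPost]
    public ActionResult Register(string name)
    {
        // Unmarshal request parameters into something the service understands.
        var result = _service.Register(new CustomerRequest { Name = name });

        // Marshal the service's response back into the UI tier.
        return View(result);
    }
}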
The model services don't have to know that the controller exists; the controller can then do whatever the view needs by using the model services.
I'd like to create a good app in ASP.NET MVC 5 using EF 6 Code first concept. I want it to be well-designed i.e. having generally speaking: Presentation, Logic and Data layers separated. I want it to be testable :)
Here's my idea and some issues related with creating application
Presentation layer: It's my whole MVC - view models(not models), views, controllers
I believe that validation should be done somewhere else (in my opinion it's a part of the business logic), but it's quite convenient to use attributes from the DataAnnotations namespace in ViewModels and check validation in the controller.
Logic layer: Services - classes with their interfaces to rule business logic.
I put there functions like: AddNewPerson(PersonViewModel Person), SendMessageToPerson(...).
They will use the DB context to do their work (there's a chance that not all of them will rely on the context). There's a direct connection between the service and the DB - I mean the service class has a reference to the context.
Where should I do the mapping between ViewModel and Model? I've heard that the service is a bad place for it - so maybe in the controllers. I've heard that the service should do the work related to the DB exclusively.
Is that right? Is my picture of the service layer good?
Data layer: I've read a lot about the Repository and UoW patterns. There are some articles which suggest that EF6 already implements these two things. I don't want to create extra code if there's no need for such behavior. The question is: am I right to assume that I don't need them?
Here's my flow:
View<->Controllers(using ViewModels)<->Services(using Models)<->DB.
I'm gonna use DI in my project.
What do you think about my project structure?
There is no reason to use a Unit of Work pattern with Entity Framework if you have no need to create a generic data access mechanism. You would only do this if you were:
using a data access technology that does not natively support a Unit of Work pattern (EF does);
wanting to be able to swap out data providers sometime in the future... however, this is not as easy as it might seem, as it's very hard NOT to introduce dependencies on specific data technologies even when using a Unit of Work (maybe even BECAUSE you are); or
needing a way of unifying disparate data sources into an atomic transaction.
If none of those are the case, you most likely don't need a custom Unit of Work. A Repository, on the other hand can be useful... but with EF6 many of the benefits of a Repository are also available since EF6 provides mocking interfaces for testing. Regardless, stay away from a generic repository unless it's simply an implementation detail of your concrete repositories. Exposing generic repositories to your other layers is a huge abstraction leak...
I always use a Repository/Service/Façade pattern, though, to create a separation between my data and business layers (and my UI and business layers, for that matter). It provides a convenient way to mock without having to mock your data access itself, and it decouples your logic from the specifics introduced by the LINQ provider EF uses (LINQ is relatively generic, but there are things that are specific to EF); a façade/repository/service interface decouples that.
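A minimal sketch of such a repository interface plus an EF6-backed implementation, assuming a hypothetical Customer entity; callers depend on the interface rather than on DbContext or EF-specific LINQ behaviour:

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public class Customer { public int Id { get; set; } public string Email { get; set; } }

public interface ICustomerRepository
{
    Customer GetById(int id);
    IList<Customer> FindByEmail(string email);
    void Add(Customer customer);
}

public class EfCustomerRepository : ICustomerRepository
{
    private readonly DbContext _context;

    public EfCustomerRepository(DbContext context) { _context = context; }

    public Customer GetById(int id)
    {
        return _context.Set<Customer>().Find(id);
    }

    public IList<Customer> FindByEmail(string email)
    {
        // LINQ/EF specifics stay behind this interface
        return _context.Set<Customer>().Where(c => c.Email == email).ToList();
    }

    public void Add(Customer customer)
    {
        _context.Set<Customer>().Add(customer);
    }
}

In tests you mock ICustomerRepository rather than the EF context itself.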
In general, you're on the right path... However, let me point out that using Data Attributes on your view models is a good thing. This centralizes your validation on your model, rather than making you put validation logic all over the place.
You're correct that you need validation in your business logic as well, but your mistake is the assumption that you should only have it in the business logic. You need validation at all layers of your application, and in particular, your UI validation may have different requirements than your business logic validation.
For instance, you may implement creating a new account as a multi-step wizard in your UI, this would require different validation than your business layer because each step has only a subset of the validation of the total object. Or you might require that your mobile interface has different validation requirements from your web site (one might use a captcha, while the other might use a touch based human validation for instance).
Either way, it's important to keep in mind that validation matters at the client, at the server, and in the various layers in between...
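As a hedged sketch of those two levels (the sign-up wizard, the class names, and the service are all assumptions for illustration): the step-one view model validates only the subset of fields collected at that step, while the business layer enforces a rule the UI cannot check on its own.

using System.ComponentModel.DataAnnotations;

public class RegisterStepOneViewModel
{
    // UI-level validation for just this wizard step
    [Required, EmailAddress]
    public string Email { get; set; }

    [Required, StringLength(100, MinimumLength = 8)]
    public string Password { get; set; }
}

public class AccountService
{
    // Business-level rule the UI cannot fully enforce on its own.
    public void Register(string email, string password)
    {
        if (EmailAlreadyRegistered(email))
            throw new ValidationException("An account with this e-mail already exists.");
        // ...create the account...
    }

    private bool EmailAlreadyRegistered(string email)
    {
        return false; // stub for the sketch
    }
}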
Ok, let’s clarify a few things...
The notion of a ViewModel (or the actual wording, ViewModel) is something popularized by Microsoft's MVVM pattern, building on Martin Fowler's Presentation Model. In fact, a ViewModel is nothing more than a simple class.
In reality, your Views are strongly typed to classes. Period. To avoid confusion, the wording ViewModel came up to help people understand that
“this class, will be used by your View”
hence why we call them ViewModel.
In addition, although many books, articles and examples use the word ViewModel, let's not forget that it's nothing more than just a Model.
In fact, did you ever notice that there is a Models folder inside an MVC application and not a ViewModels folder?
Also, ever notice how at the top of a View you have an @model directive and not an @viewmodel directive?
That's because everything could be a model.
By the way, for clarity, you are more than welcome to delete (or rename) the Models folder and create a new one called ViewModels if that helps.
Regardless of what you do, you'll ultimately write @model and not @viewmodel at the top of your pages.
Another similar example would be DTO classes. DTO classes are nothing more than regular classes, but they are suffixed with DTO to help people (programmers) differentiate them from all the other classes (including ViewModels).
In a recent project I worked on, that notion wasn't fully grasped by the team, so instead of having their Views strongly typed to Models, they had their Views strongly typed to DTO classes. In theory and in practice everything worked, but they soon found out that they had properties such as IsVisible inside their DTOs when, in fact, these kinds of properties belong in your ViewModel classes, since they are used for UI logic.
So far, I haven’t answered your question but I do have a similar post regarding a quick architecture. You can read the post here
Another thing I’d like to point out is that if and only if your Service Layer plans on servicing other things such as a Winform application, Mobile web site, etc...then your Service Layer should not be receiving ViewModels.
Your Service Layer should not have the notion of what is a ViewModel. It should only accept, receive, send, etc... POCO classes.
This means that from your Controller, inside your ActionResult, once the ModelState is Valid, you need to transform your ViewModel into a POCO which in turn, will be sent to the method inside your Service Layer.
In other words, I'd install the AutoMapper NuGet package and create some extension methods that convert a ViewModel into a POCO and vice versa (a POCO into a ViewModel).
This way, your AddNewPerson() method would receive a Person object for its parameter instead of receiving a PersonViewModel parameter.
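A hand-rolled sketch of those extension methods (AutoMapper can generate the same mapping with less code); the Name and Email properties on Person/PersonViewModel are assumed for illustration:

public class Person { public string Name { get; set; } public string Email { get; set; } }
public class PersonViewModel { public string Name { get; set; } public string Email { get; set; } }

public static class PersonMappingExtensions
{
    public static Person ToModel(this PersonViewModel viewModel)
    {
        return new Person { Name = viewModel.Name, Email = viewModel.Email };
    }

    public static PersonViewModel ToViewModel(this Person person)
    {
        return new PersonViewModel { Name = person.Name, Email = person.Email };
    }
}

// In the controller action, once ModelState.IsValid:
//     _personService.AddNewPerson(viewModel.ToModel());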
Remember, this is only valid if and only if your Service Layer plans on servicing other things...
If that's not the case, then feel free to have your Service Layer receive, send, add, etc...ViewModels instead of POCOs. This is up to you and your team.
Remember, there are many ways to skin a cat.
Hope this helps.
I've been reading up on DDD a little bit, and I am confused how this would fit in when using an ORM like NHibernate.
Right now I have a .NET MVC application with fairly "fat" controllers, and I'm trying to figure out how best to fix that. Moving this business logic into the model layer would be the best way to do this, but I am unsure how one would do that.
My application is set up so that NHibernate's session is managed by an HttpModule (gets session/transaction out of my way), which is used by repositories that return the entity objects (think S#arp Architecture... it turns out I really duplicated a lot of their functionality in this). These repositories are used by DataServices, which right now are just wrappers around the repositories (a one-to-one mapping between them, e.g. UserDataService takes a UserRepository, or actually a Repository). These DataServices right now only ensure that the data annotations decorating the entity classes are checked when saving/updating.
In this way, my entities are really just data objects and do not contain any real logic. While I could put some things in the entity classes (e.g. an "Approve" method), when that action needs to do something like sending an e-mail, touching other non-related objects, or, for instance, checking whether there are any users with the same e-mail before approving, then the entity would need access to other repositories, etc. Injecting these with an IoC container wouldn't work with NHibernate, so I'm assuming you'd have to use a factory pattern to get them. I don't see how you would mock those in tests, though.
So the next most logical way to do it, I would think, would be to essentially have a service per controller, and extract all of the work being done in the controller currently into methods in each service. I would think that this is breaking with the DDD idea though, as the logic is now no longer contained in the actual model objects.
The other way of looking at it, I guess, is that each of those services forms a single model with the data object it works against (separating the data storage fields from the logic that operates on them), but I just wanted to see what others are doing to solve the "fat controller" issue with DDD while using an ORM like NHibernate that works by returning populated data objects, together with the repository model.
Updated
I guess my problem is how I'm looking at this: NHibernate seems to put business objects (entities) at the bottom of the stack, which repositories then act on. The repositories are used by services which may use multiple repositories and other services (email, file access) to do things. I.e: App > Services > Repositories > Business Objects
The pure DDD approach I'm reading about seems to reflect an Active Record bias, where the CRUD functions exist in the business objects (i.e. I call User.Delete directly instead of Repository.Delete from a service), and the actual business object handles the logic of the things that need to be done in this instance (like emailing the user, deleting files belonging to the user, etc.). I.e. App > (Services) > Business Objects > Repositories
With NHibernate, it seems I would be better off using the first approach given the way NHibernate functions, and I am looking for confirmation of my logic. Or, if I'm just confused, some clarification on how this layered approach is supposed to work. My understanding is that if I have an "Approve" method that updates the User model, persists it, and, let's say, emails a few people, then this method should go on the User entity object; but to allow for proper IoC so I can inject the messagingService, I need to do this in my service layer instead of on the User object.
From a "multiple UI" point of view this makes sense, as the logic to do things is taken out of my UI layer (MVC) and put into these services... but I'm essentially just factoring the logic out to another class instead of doing it directly in the controller, and if I am never going to have any other UIs involved, then I've just traded a "fat controller" for a "fat service", since the service is essentially going to encapsulate a method per controller action to do its work.
DDD does not have an Active Record slant. Delete is not a method that should be on an Entity (like User) in DDD.
NHibernate does support a DDD approach very well, because of how completely divorced it remains from your entity classes.
"when that action needs to do something like sending an e-mail, or touching other non-related objects"
One piece of the puzzle it seems you are missing is Domain Events. A domain entity shouldn't send an email directly. It should raise an event in the Domain that some significant event has happened. Implement a class whose purpose is to send the email when the event occurs, and register it to listen for the Domain Event.
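A minimal sketch of that idea; DomainEvents, UserApproved and the mailer registration are illustrative names rather than a specific framework:

using System;
using System.Collections.Generic;

public class UserApproved
{
    public string Email { get; set; }
}

public static class DomainEvents
{
    private static readonly List<Delegate> Handlers = new List<Delegate>();

    public static void Register<T>(Action<T> handler)
    {
        Handlers.Add(handler);
    }

    public static void Raise<T>(T domainEvent)
    {
        foreach (var handler in Handlers)
        {
            var action = handler as Action<T>;
            if (action != null)
                action(domainEvent);
        }
    }
}

public class User
{
    public virtual string Email { get; protected set; }
    public virtual bool IsApproved { get; protected set; }

    public virtual void Approve()
    {
        IsApproved = true;
        // The entity announces what happened; it does not send the e-mail itself.
        DomainEvents.Raise(new UserApproved { Email = Email });
    }
}

// Registered once at application start-up:
//     DomainEvents.Register<UserApproved>(e => mailer.SendApprovalMail(e.Email));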
"or, for instance, checking to see if there are any users that have the same e-mail before approving"
This should probably be checked before submitting the call to "approve," rather than in the function that does the approving. Push the decision up a level in calling code.
"So the next most logical way to do it, I would think, would be to essentially have a service per controller"
This can work, if it's done with the understanding that the service is an entry point for the client. The service's purpose here is to take in parameters in a DTO from the front end/client and translate them into method calls against an entity in order to perform the desired functionality.
The only limitations NHibernate imposes on your classes are that all methods/properties must be virtual and that a class must have a default constructor (which can be internal or protected). Otherwise, it does not interfere with object structure and can map to pretty complex models.
The short answer to your question is yes; in fact, I find NHibernate enhances DDD - you can focus on developing (and altering) your domain model with a code-first approach, then easily retrofit persistence later using NHibernate.
As you build out your domain model following DDD, I would expect that much of the business logic that's found its way into your MVC controllers should probably reside in your domain objects. In my first attempt at using ASP.NET MVC I quickly found myself in the same position as you - fat controllers and an anemic domain model.
To avoid this, I'm now following the approach of keeping a rich domain model that implements the business logic and using MVC's model as essentially simple data objects used by my views. This simplifies my controllers - they interact with my domain model and provide simple data objects (from the MVC model) to the views.
Updated
The pure DDD approach I'm reading about seems to reflect an Active Record bias...
To me the Active Record pattern means entities are aware of their persistence mechanism and an entity maps directly to a database table record. This is one way of using NHibernate, e.g. see Castle ActiveRecord; however, I find this pollutes domain entities with knowledge of their persistence mechanism. Instead, I'll typically have a repository per aggregate root in my domain model, each of which implements an abstract repository. The abstract repository provides basic CRUD methods such as:
public IList<TEntity> GetAll()
public TEntity GetById(int id)
public void SaveOrUpdate(TEntity entity)
public void Delete(TEntity entity)
.. which my concrete repositories can supplement/extend.
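For illustration, here is a hedged sketch of such an abstract repository implemented over an NHibernate ISession, plus one concrete repository per aggregate root adding its own query (the names RepositoryBase, Customer and CustomerRepository are made up):

using System.Collections.Generic;
using NHibernate;

public abstract class RepositoryBase<TEntity> where TEntity : class
{
    protected readonly ISession Session;

    protected RepositoryBase(ISession session)
    {
        Session = session;
    }

    public IList<TEntity> GetAll()
    {
        return Session.QueryOver<TEntity>().List();
    }

    public TEntity GetById(int id)
    {
        return Session.Get<TEntity>(id);
    }

    public void SaveOrUpdate(TEntity entity)
    {
        Session.SaveOrUpdate(entity);
    }

    public void Delete(TEntity entity)
    {
        Session.Delete(entity);
    }
}

public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Email { get; set; }
}

public class CustomerRepository : RepositoryBase<Customer>
{
    public CustomerRepository(ISession session) : base(session) { }

    // Aggregate-specific query supplementing the generic CRUD methods.
    public Customer GetByEmail(string email)
    {
        return Session.QueryOver<Customer>()
                      .Where(c => c.Email == email)
                      .SingleOrDefault();
    }
}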
See this post on the NHibernate FAQ, which I've based a lot of my stuff on. Also remember that NHibernate (depending on how you set up your mappings) will allow you to retrieve a complete object graph, i.e. your aggregate root plus all the objects hanging off it, and, once you've finished working with it, can cascade saves through your entire object graph; this certainly isn't Active Record.
"...since the service is essentially going to encapsulate a method per controller action to do its work..."
I still think you should consider which of the functionality currently in your controllers would, more logically, be implemented within your domain objects. E.g. in your approval example, I think it would be sensible for an entity to expose an Approve method which does whatever it needs to do within the entity and, if, as in your example, it needs to send emails, delegates this to a service. Services should be reserved for cross-cutting concerns. Then, once you've finished working with your domain objects, pass them back to your repository to persist the changes.
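A small sketch of that approval example, with an assumed IMessagingService passed into the entity method (double dispatch) so the entity owns the state change while the e-mail concern stays in a service:

public interface IMessagingService
{
    void SendApprovalNotification(string email);
}

public class Member
{
    public virtual string Email { get; protected set; }
    public virtual bool IsApproved { get; protected set; }

    // The entity owns the state change; the cross-cutting e-mail concern is
    // delegated to the service passed in.
    public virtual void Approve(IMessagingService messaging)
    {
        if (IsApproved)
            return;
        IsApproved = true;
        messaging.SendApprovalNotification(Email);
    }
}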
A couple of books I've found useful on these topics are:
Domain-Driven Design by Eric Evans
Applying Domain-Driven Design and Patterns by Jimmy Nilsson
Say you have an ASP.NET MVC project and are using a service layer, such as in this contact manager tutorial on the asp.net site: http://www.asp.net/mvc/tutorials/iteration-4-make-the-application-loosely-coupled-cs
If you have viewmodels for your views, is the service layer the appropriate place to provide each viewmodel? For instance, in the service layer code sample there is a method
public IEnumerable<Contact> ListContacts()
{
    return _repository.ListContacts();
}
If instead you wanted an IEnumerable of a view model type, should it go in the service layer, or is there somewhere else that is the "correct" place?
Perhaps more appropriately, if you have a separate viewmodel for each view associated with ContactController, should ContactManagerService have a separate method to return each viewmodel? If the service layer is not the proper place, where should viewmodel objects be initialized for use by the controller?
Generally, no.
View models are intended to carry information to and from views and should be specific to the application, as opposed to the general domain. Controllers should orchestrate interaction with repositories, services (I am making some assumptions about the definition of service here), etc., handle building and validating view models, and contain the logic for determining which views to render.
By leaking view models into a "service" layer, you are blurring your layers and now have application- and presentation-specific concerns mixed in with what should be focused on domain-level responsibilities.
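To illustrate that split, here is a rough sketch in which the controller, not the service layer, builds the view model from domain data; Contact, IContactRepository and ContactViewModel are assumed names:

using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

public class Contact
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class ContactViewModel
{
    public int Id { get; set; }
    public string DisplayName { get; set; }
}

public interface IContactRepository
{
    IEnumerable<Contact> ListContacts();
}

public class ContactController : Controller
{
    private readonly IContactRepository _repository;

    public ContactController(IContactRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Index()
    {
        // The controller shapes domain data into the application-specific
        // view model and decides which view to render.
        var model = _repository.ListContacts()
            .Select(c => new ContactViewModel
            {
                Id = c.Id,
                DisplayName = c.FirstName + " " + c.LastName
            })
            .ToList();

        return View(model);
    }
}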
No, I don't think so. Services should care only about the problem domain, not the view that renders results. Return values should be expressed in terms of domain objects, not views.
As per the traditional approach, or theory-wise, the ViewModel should be part of the user interface layer. At least the name says so.
But when you get down to implementing it yourself with Entity Framework, MVC, Repository etc, then you realise something else.
Someone has to map Entity/DB Models with ViewModels(DTO mentioned in the end). Should this be done in [A] the UI layer (by the Controller), or in [B] the Service layer?
I go with option B. Option A is a no-no for the simple reason that several entity models combine to form a single ViewModel, and we should not pass unnecessary data to the UI layer. With option B, the service can work with the data and pass only the required minimum to the UI layer after mapping it to the ViewModel.
But still, let us go with option A and put the ViewModel in the UI layer (and the entity model in the Service layer).
If the Service layer needs to map to the ViewModel, then the Service layer needs to access the ViewModel in the UI layer. In which library/project? The ViewModels should be in a separate project in the UI layer, and this project needs to be referenced by the Service layer. If the ViewModels are not in a separate project, then there is a circular reference, so that's a no-go. It looks awkward to have the Service layer accessing the UI layer, but we could still cope with it.
But what if there is another UI app using this service? What if there is a mobile app? How different can the ViewModel be? Should the Service access the same ViewModel project? Will all UI projects access the same ViewModel project, or will they each have their own?
After these considerations, my answer would be to put the ViewModel project in the Service layer. Every UI layer has to access the Service layer anyway! And there could be a lot of similar ViewModels that they all could use (hence mapping becomes easier for the service layer). Mappings are done through LINQ these days, which is another plus.
Lastly, there is this discussion about DTOs, and also about data annotations in ViewModels. ViewModels with data annotations (Microsoft.Web.Mvc.DataAnnotations.dll) cannot reside in the Service layer; they reside in the UI layer (but System.ComponentModel.DataAnnotations.dll can reside in the Service layer). If all projects are in one solution (.sln), then it doesn't matter which layer you put them in. In enterprise applications, each layer will have its own solution.
So a DTO actually is a ViewModel, because mostly there will be a one-to-one mapping between the two (say with AutoMapper). Again, the DTO still has the logic needed for the UI (or for multiple applications) and resides in the Service layer. And the UI-layer ViewModel (if we use Microsoft.Web.Mvc.DataAnnotations.dll) just copies the data from the DTO, with some 'behavior'/attributes added to it.
[Now this discussion is about to take an interesting turn, read on...]
And don't think data-annotation attributes are just for the UI. If you limit the validation to attributes from System.ComponentModel.DataAnnotations.dll,
then the same ViewModel can also be used for front-end and back-end validation (thus removing the UI-residing ViewModel copy of the DTO). Moreover, attributes can also be used on entity models. E.g. using .tt templates, Entity Framework data models can be auto-generated with validation attributes that perform some DB validations, like max length, before sending data to the back end. This saves round trips from the UI to the back end for validation. It also enables the back end to re-validate for security reasons. So a model is a self-reliant validator, and you can pass it around easily. Another advantage is that if back-end validation changes in the DB, then the .tt template (which reads the DB specifics and creates the attributes for the entity class) will automatically pick that up. This can force UI validation unit tests to fail as well, which is a big plus (so we can correct it and inform all UIs/consumers instead of accidentally forgetting and failing). Yes, the discussion is moving towards good framework design; as you can see, it is all related: tier-wise validation, unit test strategy, caching strategy, etc.
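As a small sketch of that idea (PersonDto and BackendValidator are hypothetical names), the same System.ComponentModel.DataAnnotations attributes drive MVC model binding on the front end and can be re-validated explicitly on the back end:

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class PersonDto
{
    [Required, StringLength(50)]
    public string Name { get; set; }

    [Required, EmailAddress]
    public string Email { get; set; }
}

public static class BackendValidator
{
    // Explicit server-side re-validation, independent of any UI framework.
    public static IList<ValidationResult> Validate(object model)
    {
        var results = new List<ValidationResult>();
        Validator.TryValidateObject(model, new ValidationContext(model), results, true);
        return results;
    }
}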
Although not directly related to the question, the 'ViewModel Façade' mentioned in this must-watch Channel 9 video is also worth exploring; it starts at exactly 11 minutes 49 seconds into the video. It would be the next step/thought once your current question is sorted out: 'How do we organize ViewModels?'
And lastly, many of these model-vs-logic issues can be resolved with REST, because every client can have the intelligence to query the data and get only the data it needs. That keeps the model in the UI; there is no server/service-layer model/logic. The only duplication then is in the automated tests that each client needs to perform. Also, if the data changes, some clients will fail if they do not adapt to the change. The question then is: are you removing the service layer altogether (and the models it carries), or pushing the service layer up into your UI project (so the model issue still persists), which then calls the REST API? In terms of the responsibilities of the service layer, though, they are the same regardless.
Also, in your example, "_repository.ListContacts()" is returning a ViewModel from the repository. This is not a mature approach. Repositories should return entity models or DB models; these get converted to view models, and it is the view models that the service layer returns.
This has become a bit of an "it depends" where I work - we have generally had a controller consuming some service(s), then combining the returned DTOs into a 'ViewModel' that would then get passed to the client - either via a JSON result, or bound in the Razor template.
Thing is, about 80% of the time the mapping of DTO to ViewModel has been 1-1. We are starting to move towards 'where needed, just consume the DTO directly, but when the DTO and what we need in our client/view don't match up, create a ViewModel and do the mapping between the objects as needed'.
Although I'm still not convinced that this is the best or right solution, as it ends up leading to some heated discussions of 'are we just adding X to the DTO to meet the needs of the view?'
I suppose that depends on what you consider the "services" to be. I've never really liked the term service in the context of a single class; it's incredibly vague and doesn't tell you much about the actual purpose of the class.
If the "service layer" is a physical layer, such as a web service, then absolutely not; services in an SOA context should expose domain/business operations, not data and not presentation logic. But if service is just being used as an abstract concept for a further level of encapsulation, I don't see any problem with using it the way you describe.
Just don't mix concepts. If your service deals with view models then it should be a presentation service and be layered over top of the actual Model, never directly touching the database or any business logic.
Hi, I see very good answers here.
For myself, I take a different approach.
I have two kinds of models: view models and shared models. The view models stay in the UI layer, and the shared models live in a separate project.
The shared models can theoretically be used anywhere, because they are standalone.
These models provide some abstraction if you want to return specific data from your service layer or if you need something specific from your repository.
I don't really know if this is a good approach, but it works very well in my projects. For example:
When I need to provide some information to create new objects in the database, I can pass the shared models directly to the service layer, which saves me some complexity.
The shared models sometimes need to be mapped, but you can ensure that your service is not leaking unnecessary data to the UI.
You can see shared models as an extension, but you should not build your UI logic with them; you should have view models for that.
You can combine view models with these shared models to save time; you can use inheritance (see the sketch below).
The shared models have to be neutral and should not contain any kind of logic.
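A minimal sketch of that setup, with made-up names: the shared model is neutral and logic-free, and the view model inherits from it to add UI-only concerns.

// Shared project: neutral, logic-free.
public class ProductSharedModel
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

// UI layer: inherits the shared shape and adds UI-only concerns.
public class ProductViewModel : ProductSharedModel
{
    public bool IsEditable { get; set; }

    public string FormattedPrice
    {
        get { return Price.ToString("C"); }
    }
}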
I have an MVC app I'm writing. There will be a need for multiple instances of the same page to be open, each accessing different records from a database; these record objects will also need to be passed through a flow of pages before finally being updated.
What's the best, and most correct, way of achieving this - should/can I create a custom model binder that links to an object via its unique ID, and then create each record object in the session, updating them as I go through each one's page flow and then finally calling the update method? Or is there a better way of dealing with this?
Cheers
MH
Technically, that would be possible, but I don't think it is advisable. When you look at the signature of IModelBinder, you will have to jump through some hoops related to the ControllerContext if you want to be able to access the rest of your application's context (such as how to dehydrate objects based on IDs).
It's possible, but so clunky that you should consider whether it's the right approach. In my opinion, a ModelBinder's responsibility is to map HTTP request data to strongly typed objects. Nothing more and nothing less - it is strictly a mapper, and trying to make it do more would be breaking the Single Responsibility Principle.
It sounds to me like you need an Application Controller - basically, a class that orchestrates the Views and the state of the underlying Model. You can read more about the Application Controller design pattern in Patterns of Enterprise Application Architecture.
Since a web application is inherently stateless, you will need a place to store the intermediate state of the application. Whether you use sessions or a custom durable store to do that depends on the application's requirements and the general complexity of the intermediate data.
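One possible sketch of that intermediate store, assuming session storage and made-up names (IFlowStateStore, SessionFlowStateStore); a controller could construct it with its own Session and key the state by the record's unique ID:

using System.Web;

public interface IFlowStateStore<T>
{
    T Get(int recordId);
    void Save(int recordId, T state);
}

// Keeps the intermediate state of a multi-page flow in ASP.NET session,
// keyed by the record's unique ID.
public class SessionFlowStateStore<T> : IFlowStateStore<T>
{
    private readonly HttpSessionStateBase _session;

    public SessionFlowStateStore(HttpSessionStateBase session)
    {
        _session = session;
    }

    private static string Key(int recordId)
    {
        return typeof(T).Name + ":" + recordId;
    }

    public T Get(int recordId)
    {
        return (T)_session[Key(recordId)];
    }

    public void Save(int recordId, T state)
    {
        _session[Key(recordId)] = state;
    }
}

Swapping the session-backed implementation for a durable one then only means providing another IFlowStateStore<T>.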
I'm basing this question on Fowler PoEAA. Given your familiarity with this text, aren't the ViewModels used in ASP.NET MVC the same as DTOs? Why or why not? Thank you.
They serve a similar purpose (encapsulating data for another layer of the application) but they do it differently and for different reasons.
The purpose of a DTO is to reduce the number of calls between tiers of an application, especially when those calls are expensive (e.g. distributed systems). DTOs are almost always trivially serializable, and almost never contain any behaviour.
For example, you're developing an e-Commerce site. CreateCustomer and AddCustomerAddress are separate operations at the database level, but you might for performance reasons want to aggregate their data into a NewCustomerWithAddressDto so that your client only needs to make one round-trip to the server, and doesn't need to care that the server might be doing a bunch of different things with the parcel of data.
The term "ViewModel" means slightly different things in different flavours of MV*, but its purpose is mainly separation of concerns. Your Model is frequently optimised for some purpose other than presentation, and it's the responsibility of the ViewModel to decouple your View from the Model's implementation details. Additionally, most MV* patterns advise making your Views as "dumb" as possible, and so the ViewModel sometimes takes responsibility for presentation logic.
For example, in the same e-Commerce application, your CustomerModel is the wrong "shape" for presentation on your "New Customer" View. For starters, your View has two form fields for your user to enter and confirm their password, and your CustomerModel doesn't contain a password field at all! Your NewCustomerViewModel will contain those fields and might, depending on your flavour of MV*, be responsible for some presentation logic (e.g. to show/hide parts of the view) and basic validation (e.g. ensuring that both password fields match).
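A sketch of what that NewCustomerViewModel might look like; the exact attributes and property names are assumptions for illustration:

using System.ComponentModel.DataAnnotations;

public class NewCustomerViewModel
{
    [Required]
    public string Name { get; set; }

    [Required, DataType(DataType.Password)]
    public string Password { get; set; }

    // Presentation-level rule: both password fields must match.
    [Required, DataType(DataType.Password), Compare("Password")]
    public string ConfirmPassword { get; set; }
}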
The purpose is different:
DTO's are used to transfer data
ViewModels are used to show data to an end user.
So normally ViewModels contain the presentation data, which is in a lot of cases similar to what is in a DTO, but with some differences. Think of the representation of enums, localization, currency, date formats, etc. This is because normally there should be no logic in your view.
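For instance, a rough sketch of the same order represented both ways (the names and formats are made up): the DTO carries raw values, while the view model carries what the view actually renders.

using System;

public enum OrderStatus { Pending = 0, Shipped = 1 }

// Transfer: raw, easily serialized values.
public class OrderDto
{
    public DateTime PlacedOn { get; set; }
    public decimal Total { get; set; }
    public int Status { get; set; }
}

// Presentation: values formatted the way the view renders them.
public class OrderViewModel
{
    public string PlacedOn { get; set; }   // e.g. a localized short date
    public string Total { get; set; }      // e.g. a currency string
    public string Status { get; set; }     // e.g. display text for the enum value

    public static OrderViewModel From(OrderDto dto)
    {
        return new OrderViewModel
        {
            PlacedOn = dto.PlacedOn.ToString("d"),
            Total = dto.Total.ToString("C"),
            Status = ((OrderStatus)dto.Status).ToString()
        };
    }
}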
DTOs in MVVM and MVP are usually Very Dumb Objects and are basically just a bunch of property setters and getters. ViewModels on the other hand can have some behaviour.
A practical positive side effect of having DTOs is that they allow easier serialization. If you have a rather complex object in, say, C#, you will often find yourself having to selectively turn off things that you don't want serialized. This can get rather ugly, and DTOs simplify the process.
A View Model and a Data Transfer object has similarities and differences.
Similar:
Transfer data in a record (object instance, perhaps serialized) to a receiver, whether a view or a service
Difference:
A View Model is intended to be sent to a View, where it will be displayed, with formatting.
A View Model also sends back data to a controller.
A DTO is usually not intended for presentation. It is intended to send raw data.
Both serve the same purpose: to model data coming to and from the backend.
View models model data hitting the backend from a front-end visual system (forms, user inputs, etc.), and in the other direction model data to be sent to the front end to fulfill some visual requirement.
Data transfer objects model data hitting the backend from some client system (background APIs, data that still needs to be worked on, a client's background services, etc.), and in the other direction model data to be sent to the client system. Even though the client system may have visual elements, the DTO itself is used for a non-visual use case.
The technical differences between the two arise from the semantic and contextual meaning of the two as mentioned above, such as view models possibly having more behaviour.
I always thought DTOs were supposed to be almost like-for-like copies of your entities, and ViewModels were containers for those DTOs when presenting to the View.
In that case you would create 'pseudo' DTOs that combine two or more other DTOs, to pass data in one 'model' to a method or API, etc.
I never came up with a naming convention for those 'pseudo' DTOs though, so I just ended up suffixing them with "DTO" but putting them in the Models folder along with the view models 🤷♂️
The ViewModel may have presentation logic based on the current user's permissions, the display type, the data in the DTOs, etc.
I've always tried to keep my views as 'dumb' as possible, with as little code in them as possible, and just bind the view to the properties in the view model.