ASP.NET Boilerplate is based on the Domain Driven Design (DDD) pattern.
I see that ASP.NET Boilerplate composes an application out of modules.
I didn't understand what a module is, so I searched for its definition in the context of Domain Driven Design and found that it serves as a container for a specific set of classes of an application.
So does that mean, for example, that a C# namespace is a module, because it can contain many classes?
But even with this definition, it's not clear in the context of ASP.NET Boilerplate. A module definition in ASP.NET Boilerplate has this structure:
public class MyBlogApplicationModule : AbpModule
{
    public override void Initialize()
    {
        IocManager.RegisterAssemblyByConvention(Assembly.GetExecutingAssembly());
    }
}
So it's just one class that has one method!
Also, what is the relationship between a module and dependency injection?
Why is there a registration of the module's classes as services in an IoC container?
An ABP module is just a way for you to organize your code under the same domain/layer while still being able to configure and interact with other modules.
E.g. your module is a separate library project that contains certain domain logic; to initialize your module correctly, you can place the initialization code in the module lifecycle hooks.
Note: registering dependencies in the lifecycle hooks is an example of interacting with the DI service (which might be configured outside of your project).
See
https://aspnetboilerplate.com/Pages/Documents/Module-System#lifecycle-methods
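For illustration, here is a minimal sketch (based on the lifecycle documentation above) of a module that uses the hooks; the comments describe when each hook runs:
public class MyBlogApplicationModule : AbpModule
{
    public override void PreInitialize()
    {
        // Runs first, before dependency registration; typically used for configuration.
    }

    public override void Initialize()
    {
        // Runs next; the usual place to register this assembly's classes by convention.
        IocManager.RegisterAssemblyByConvention(Assembly.GetExecutingAssembly());
    }

    public override void PostInitialize()
    {
        // Runs last, after all modules have been initialized;
        // a good place to resolve services and start background work.
    }
}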
ABP provides a convenient way to register classes that follow its conventions:
IocManager.RegisterAssemblyByConvention(Assembly.GetExecutingAssembly());
Note: the recommended way is to have only one ABP module per assembly.
See
https://aspnetboilerplate.com/Pages/Documents/Dependency-Injection#registering-dependencies
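For example, a class can opt into conventional registration through ABP's marker interfaces; a minimal sketch (the service names are illustrative):
public interface IPersonAppService
{
    void CreatePerson(string name);
}

// ITransientDependency is ABP's marker interface for transient lifetime;
// RegisterAssemblyByConvention picks this class up and registers it
// against IPersonAppService by naming convention.
public class PersonAppService : IPersonAppService, ITransientDependency
{
    public void CreatePerson(string name)
    {
        // ...
    }
}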
I am writing an ASP.NET Core MVC application where users should be able to upload an Excel file. When a file gets uploaded, I need to read the uploaded file and create a model of the data inside the file.
I am currently creating this model in my controller method, but this has made the method quite long.
My current solution is to create a class inside my controller which deals with creating a model from an Excel file, but I feel like this is the wrong way to do it.
So my question is: what is the right place to put the code that reads my Excel file and puts it inside a model?
You should create a new .NET Standard library and create the class that builds the model there.
The recommended way is to have an implementation class (ExcelModelBuilder) and an interface (IExcelModelBuilder) that exposes all the public methods of that class. This way you can inject the service into your controller constructor and, as a bonus, you can easily unit test it too.
You can read more about Dependency Injection in .NET Core.
You can register the service in your startup file:
// This method gets called by the runtime.
// Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
    // ... other service registrations
    services.AddTransient<IExcelModelBuilder, ExcelModelBuilder>();
}
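To make that concrete, here is a hedged sketch of what the builder and controller might look like; ExcelModel and the controller details are illustrative assumptions, and the actual worksheet parsing would be done with whatever Excel library you prefer:
using System.Collections.Generic;
using System.IO;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

// Placeholder model type for this sketch.
public class ExcelModel
{
    public List<string[]> Rows { get; } = new List<string[]>();
}

public interface IExcelModelBuilder
{
    ExcelModel Build(Stream excelFile);
}

public class ExcelModelBuilder : IExcelModelBuilder
{
    public ExcelModel Build(Stream excelFile)
    {
        var model = new ExcelModel();
        // Parse the worksheet here (e.g. with EPPlus or ClosedXML)
        // and fill model.Rows.
        return model;
    }
}

public class UploadController : Controller
{
    private readonly IExcelModelBuilder _builder;

    // The container injects the implementation registered in ConfigureServices.
    public UploadController(IExcelModelBuilder builder)
    {
        _builder = builder;
    }

    [HttpPost]
    public IActionResult Upload(IFormFile file)
    {
        var model = _builder.Build(file.OpenReadStream());
        return View(model);
    }
}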
Step 1: Create a new .NET Standard library (Services).
Step 2: Add a reference to that library in the MVC application.
Step 3: Create a class that deals with all this work if you have a limited number of tasks to perform;
but if you want to separate it and want a generic solution, then create an interface (IUpload), implement all its methods in a class (Upload), and register the service in your startup file, as in the sketch below.
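A brief sketch of that generic variant, under the same assumptions as the earlier sketch (the return type is a placeholder for your own model type):
using System.IO;

public interface IUpload
{
    object ReadFile(Stream file); // replace object with your model type
}

public class Upload : IUpload
{
    public object ReadFile(Stream file)
    {
        // Read the uploaded Excel file and build the model here.
        return null;
    }
}

// In Startup.ConfigureServices:
// services.AddTransient<IUpload, Upload>();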
I'm having trouble seeing the advantage of an IoC (DI) container like Ninject, Unity or whatever. I understand the concepts as follows:
DI: injecting a dependency into the class that requires it (preferably via constructor injection). I totally see why the looser coupling is a good thing.
public class MyClass
{
    private readonly ISomeService svc;

    public MyClass(ISomeService svc)
    {
        this.svc = svc; // without "this", the parameter would just be assigned to itself
    }

    public void DoSomething()
    {
        svc.DoSomething();
    }
}
Service Locator: when a "container" is used directly inside the class that requires a dependency, to resolve that dependency. I do get the point that this creates another dependency, and I also see that basically nothing is getting injected.
public class MyClass
{
    public MyClass() { }

    public void DoSomething()
    {
        // the class resolves its own dependency instead of having it injected
        ServiceLocator.Resolve<ISomeService>().DoSomething();
    }
}
Now, what confuses me is the concept of a "DI container". To me, it looks exactly like a service locator which, as far as I have read, should only be used in the entry point / startup method of an application to register and resolve the dependencies and inject them into the constructors of other classes, and not within a concrete class that needs a dependency (probably for the same reason why service locators are considered "bad").
What is the purpose of using the container when I could just create the dependency and pass it to the constructor?
public static void Main()
{
    DIContainer.Register<ISomeService>(new SomeService());
    // ...
    var myClass = new MyClass(DIContainer.Resolve<ISomeService>());
    myClass.DoSomething();
}
Does it really make sense to pass all the dependencies to all classes in the application initialization method? There might be 100 dependencies which will eventually be needed (or not); do you really create them all in the init method just because it's considered good practice?
What is the purpose of using the container when I could just create the dependency and pass it to the constructor?
DI containers are supposed to help you create an object graph quickly. You just tell the container which concrete implementations you want to use for which abstractions (the registration phase), and then it can create any objects you want (the resolve phase).
If you create the dependencies and pass them to the constructor (in the application initialization code), then you are actually doing Pure DI.
I would argue that Pure DI is a better approach in many cases. See my article here
Does it really make sense to pass all the dependencies to all classes in the application initialization method? There might be 100 dependencies which will eventually be needed (or not); do you really create them all in the init method just because it's considered good practice?
I would say yes. You should create the object graph when your application starts up. This is called the composition root.
If you need to create objects after your application has started, then you should use factories (mainly abstract factories). Such factories will themselves be created with the other objects in the composition root.
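For example, here is a hedged sketch of such an abstract factory (the names are illustrative, reusing ISomeService from the question):
// Created once at the composition root, used to create objects later.
public interface IReportFactory
{
    Report Create(string title);
}

public class ReportFactory : IReportFactory
{
    private readonly ISomeService service;

    public ReportFactory(ISomeService service)
    {
        this.service = service; // the factory's own dependencies come from the composition root
    }

    public Report Create(string title)
    {
        return new Report(title, service);
    }
}

public class Report
{
    public Report(string title, ISomeService service) { /* ... */ }
}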
Your classes shouldn't do much in the constructor; this keeps the cost of creating all the dependencies at the composition root low.
However, I would say that it is OK to create some types of objects using the new keyword in special cases, like when the object is a simple Data Transfer Object (DTO).
I am new to MVC application development. I want to develop a new project using MVC, and I have looked a lot for architectures that suit an MVC application.
After reading many articles and blogs, I came to know that the repository pattern can be used for this.
Based on my understanding, before starting the real project I created a dummy project structure as described below [not using an EDMX file or Entity Framework in the project; a custom DAL is defined].
The name of my dummy application is Repository. I took the Country, State and City relationship as an example to develop the dummy application:
Repository_DAL_V1 class library. This library has a class as below:
SQLHelper.cs: This class has methods to execute queries, like ExecuteNonQuery etc.
Repository_DTO_V1 class library. This library has classes as below:
CountryDTO.cs: This class inherits from CountryModel.cs [CountryDTO : CountryModel]. It will be used to move data between all layers of the application. If there is any property that is only supposed to be used in business logic, it will be created in the DTO, not in the model.
CountryDTOMapper.cs: This is used to map data from the database, in the form of a DataTable, into a collection of DTO objects.
Repository_Implementation web MVC project. This is the UI layer.
Repository_IRepositories_V1 class library. This library has a class as below:
ICountryRepository.cs: This is an interface with declared functions like SaveCountry() etc.
Repository_Models class library. This library has a class as below:
CountryModel.cs: This class has properties for exactly all the columns of the table in the database.
Repository_Repositories_V1 class library. This library has a class as below:
CountryRepository.cs: This is a repository class with defined functions like SaveCountry() etc.
Repository_ViewModel_V1 class library. This library has a class as below:
CountryViewModel.cs: This will be built for the Country view on screen.
Reference details for the above projects:
Repository_DTO_V1 has reference of Repository_Models.
Repository_IRepositories_V1 has reference of Repository_DTO_V1,Repository_Models.
Repository_Repositories_V1 has reference of Repository_IRepositories_V1,Repository_DAL_V1,Repository_DTO_V1,Repository_Models.
Repository_ViewModel_V1 has reference of Repository_Repositories_V1,Repository_IRepositories_V1,Repository_DTO_V1,Repository_Models.
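To make the structure concrete, here is a hedged sketch of how the repository pieces described above might fit together (the method bodies are illustrative):
public interface ICountryRepository
{
    void SaveCountry(CountryDTO country);
}

public class CountryRepository : ICountryRepository
{
    public void SaveCountry(CountryDTO country)
    {
        // Build the insert/update command and execute it through
        // SQLHelper (the custom DAL); data arrives as a DTO, and reads
        // would map DataTables to DTOs via CountryDTOMapper.
    }
}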
I need guidance from all of you to steer me in the right direction.
I don't see why not using EF would be any different from any other project you have done, layer-wise that is. There is nothing wrong with your structure if you are comfortable with using multiple layers. I don't know what your project is about, but you should consider better naming: like the project name (Repository), and Repository_Implementation is better named Web.
EDIT
I recommend you use Entity Framework 5 with Code First, that is, if you want to use an ORM.
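If you go that route, a minimal Code First sketch (EF 5 style, using the Country/State example from the question; the names are illustrative):
using System.Collections.Generic;
using System.Data.Entity; // EntityFramework package

public class Country
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual ICollection<State> States { get; set; }
}

public class State
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int CountryId { get; set; }
    public virtual Country Country { get; set; }
}

// EF builds the schema from these classes; no EDMX file is needed.
public class RepositoryContext : DbContext
{
    public DbSet<Country> Countries { get; set; }
    public DbSet<State> States { get; set; }
}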
I'm certainly with you on the desire to give EF a miss.
As @Dejan.S says, the architecture you'll need depends on the size of your project.
I would start off simple, perhaps with 3 projects:
A web project.
A service project that controllers use to access models and business logic.
A domain project with your models and data access.
If you need to separate things (like a DAL), you can still do that later.
Also, check out ServiceStack's MVC Power Pack. You get a great micro ORM, IoC, caching, fast serialisers and so on out of the box.
I'm simply looking for advice on the best way I should handle this situation.
Right now I've got several files in a folder called Service. The files contain several functions which do various things, of course. Each of these files needs access to the ServiceManager (SM) adapter.
My question is: should I implement the ServiceManagerAwareInterface in each of these files, OR should I just make a new class which implements the ServiceManagerAwareInterface and extend my classes from that new class?
Both ways work as they should; I'm just not sure which way would be more proper.
If you think that your system will always rely on ZF2, both approaches are equivalent.
Now, from an OO design perspective, I personally have a preference for the approach in which you extend your service and then implement the ServiceManagerAwareInterface. I would even use an interface for the dependency over the ServiceLocator to protect my classes even more. Why?
Extending your classes does not cost you a lot, and the same goes for making your classes depend on interfaces.
Let's take this example: imagine you did not use this approach during a ZF1 project, in which you had probably resolved your dependencies with the Zend_Registry.
Now, let's assume you moved to a ZF2 implementation. How much time do you think you'll spend refactoring your code from something like Zend_Registry::get($serviceX) to $this->getServiceManager()->get($serviceX) in your Service layer?
Now assume you had made the choice of protecting your classes, first by creating your own service locator interface, as simple as:
interface MyOwnServiceLocatorInterface
{
    public function get($service);
}
Under ZF1 you would have created an adapter class using the Zend_Registry:
class MyZF1ServiceLocator implements MyOwnServiceLocatorInterface
{
    public function get($service)
    {
        return Zend_Registry::get($service);
    }
}
Your Service classes are not coupled to the Zend_Registry, which makes the refactoring much easier.
Now, you decide to move to ZF2, so you'll logically use the ServiceManager. You then create this new adapter class:
class MyZF2ServiceLocator implements ServiceManagerAwareInterface, MyOwnServiceLocatorInterface
{
    private $_sm;

    public function get($service)
    {
        return $this->_sm->get($service);
    }

    public function setServiceManager($serviceManager)
    {
        $this->_sm = $serviceManager;
    }
}
Again, your Service classes are not coupled to the ZF2 ServiceManager.
Now, what would the configuration/registration of your Service layer on the ServiceManager look like? Well, you'll use your Module::getServiceConfig() method for that:
// Module.php
public function getServiceConfig()
{
    return array(
        'factories' => array(
            'My\ServiceA' => function ($sm) {
                return new My\ServiceA($sm->get('My\Service\Name\Space\MyZF2ServiceLocator'));
            },
            // Some other config
        ),
    );
}
As you can see, no refactoring is needed within your Service classes, as we protected them by relying on an interface and using adapters. Since we used a closure factory, we don't even need to extend our Service classes and implement the ServiceLocatorAwareInterface.
Now, before concluding, I have to note that in the previous example I did not treat the case in which my classes are constructed via factories; however, you can check one of my previous answers that addresses the factory topic, and also the importance of loose coupling among an application's layers.
You can add initializers to do that. They can reduce repetitive injection code for getting the service that passes the db adapter. Or you can set abstract_factories, which will reduce repetitive SM registration. I just posted an SM cheatsheet here; hope it's helpful :)
https://samsonasik.wordpress.com/2013/01/02/zend-framework-2-cheat-sheet-service-manager/
I've searched high and low for samples about using MEF for DI. I know it's not a DI container, but from what I hear (really, hear in podcasts) it can be used as such... yet I can't find any blog posts or samples.
I am already using MEF in this project (to support plugins) and thought it would be nice to leverage it for DI also.
Maybe I am barking up the wrong tree?
This can be described by an example. For instance, let's say you have a core library that you base all your bespoke applications on. Call it MyCompany.Core. Normally, every application you write has to contain a reference to MyCompany.Core, and then the application has to take care of bootstrapping and calling into MyCompany.Core to start the appropriate services, etc., in the correct order. This doesn't make much sense when you consider that the core itself probably knows better how it's supposed to be started up, etc.
To use MEF for dependency injection, your core would do this:
[Import("/Application", typeof(IBespokeApplication))]
private IBespokeApplication bespokeApplication;
The core itself would contain the application startup code, and might call something like this once it had started up all of its services:
bespokeApplication.Start();
In the bespoke application, you have to export yourself:
[Export("/Application", typeof(IBespokeApplication))]
public class MyApplication : IBespokeApplication
{
public void Start()
{
/* start app */
}
}
Now the bespoke application could contain a direct reference to MyCompany.Core, and could call services directly, or you could even expose the services as Exports and Import them into the application. For instance, in the core:
[Export("/LoggingService", typeof(ILoggingService))]
public class NLogLoggingService : ILoggingService
{
/* ... */
}
Then in the bespoke application:
[Import("/LoggingService", typeof(ILoggingService))]
private ILoggingService loggingService;
...and when you want to use it:
loggingService.LogInformation("My Message");
As far as I can tell from the literature, that's the essence of dependency injection.
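For completeness, here is a hedged sketch of the composition step that satisfies the Import/Export pairs above; the catalog choices (own assembly plus a plugin directory) and the MyCompanyCore type are assumptions:
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

var catalog = new AggregateCatalog(
    new AssemblyCatalog(Assembly.GetExecutingAssembly()), // the core's exports
    new DirectoryCatalog("Plugins"));                     // bespoke applications / plugins

var container = new CompositionContainer(catalog);

// Hypothetical core object that declares the [Import] field shown earlier.
var coreInstance = new MyCompanyCore();

// Satisfies [Import] members (such as bespokeApplication) on the given instance.
container.ComposeParts(coreInstance);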