I'm looking for a better logging approach for a large MVC project. I've looked at other people's suggestions (Log4Net, NLog, ELMAH, Entity Framework's self-tracking entities and a few others), but couldn't find a reasonable answer. By logging I don't simply mean logging a few requests or exceptions; I mean logging every little change to a property. That's why I'm looking for a generic approach that can be reused across several applications. Roughly, I need to log as:
public class someModelPerson
{
    public string Name { get; set; }
    public string Age { get; set; }
}
If some action updates the current someModelPerson object, I need to log something like this at the end of the method:
Property Name changed from 'John' to 'Don' by user 'xyz' on 12/12/12
I worked with Entity Framework self-tracking entities and implemented something similar and generic (using templates etc.), but that approach is no longer recommended by Microsoft.
I'm okay if the solution requires a little customization.
Also, please suggest a good DB schema for storing the log of a single model's modifications. (For example, an update_person method may change the name, age and height of a person. How should that be logged at the DB level? A single entry? A separate table for the changeset?)
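For reference, this is roughly the shape of change-log record I have in mind (purely illustrative; the class and property names are just placeholders):
using System;

public class PropertyChangeLog
{
    public long Id { get; set; }
    public string EntityName { get; set; }    // e.g. "someModelPerson"
    public string EntityKey { get; set; }     // key of the changed record
    public string PropertyName { get; set; }  // e.g. "Name"
    public string OldValue { get; set; }      // "John"
    public string NewValue { get; set; }      // "Don"
    public string ChangedBy { get; set; }     // user "xyz"
    public DateTime ChangedOn { get; set; }
}
An update_person call that changes three properties would then produce three rows; whether those should be grouped via a separate changeset table is exactly the part I'm unsure about.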
You can use PostSharp. As stated on its website:
PostSharp offers a solution to all of these problems. The logging
pattern library allows you to configure where logging should be
performed and the pattern library takes over the task of keeping your
log entries in sync as you add, remove and refactor your codebase.
Let's take a look at how you can add trace logging for the start and
completion of method calls.
For more information, take a look here:
http://www.postsharp.net/diagnostics/net-logging
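For a rough idea of what this looks like in code, here is a minimal sketch of a PostSharp method-boundary aspect (it assumes PostSharp's OnMethodBoundaryAspect base class and is illustrative only, not the exact setup the product docs recommend):
using System;
using PostSharp.Aspects;

// Minimal logging aspect: writes a line when any decorated method starts and finishes.
[Serializable]
public class TraceLogAspect : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        Console.WriteLine("Entering " + args.Method.Name);
    }

    public override void OnExit(MethodExecutionArgs args)
    {
        Console.WriteLine("Leaving " + args.Method.Name);
    }
}
If you only need request-level logging inside MVC itself rather than on every method, a plain action filter like the one below also works: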
public class LoggingFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        filterContext.HttpContext.Trace.Write("(Logging Filter)Action Executing: " +
            filterContext.ActionDescriptor.ActionName);

        base.OnActionExecuting(filterContext);
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        if (filterContext.Exception != null)
            filterContext.HttpContext.Trace.Write("(Logging Filter)Exception thrown");

        base.OnActionExecuted(filterContext);
    }
}
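If you go the action-filter route, you can apply it per controller/action as an attribute, or register it globally, for example (assuming MVC 3+ global filters in Global.asax):
using System.Web.Mvc;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Apply the logging filter to every action in the application
        GlobalFilters.Filters.Add(new LoggingFilterAttribute());
    }
}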
But how do you use it?
I have a Code First project set up and I'm trying out some things with the new EF6. I've been reading all kinds of posts/blogs that are at least two years old about EF4/5, but nothing whatsoever about EF6.
Let's say I have these entities:
public DbSet<Person> Persons { get; set; }
public DbSet<Order> Orders { get; set; }
public DbSet<Invoice> Invoices { get; set; }
Do I still need to create repositories for each entity? Or would a single class with some methods for custom calculations, aside from CRUD, suffice?
I know that this line:
kernel.Bind<MyDbContext>().ToSelf().InRequestScope();
would suffice for DI, and that it will be injected via constructor into the upper-layer classes where applicable.
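For example, I picture something like this in a controller (just my rough sketch of the intended usage):
using System.Linq;
using System.Web.Mvc;

public class PersonController : Controller
{
    private readonly MyDbContext _context;

    // Ninject resolves MyDbContext once per request because of InRequestScope()
    public PersonController(MyDbContext context)
    {
        _context = context;
    }

    public ActionResult Index()
    {
        return View(_context.Persons.ToList());
    }
}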
The solution has a class library and an ASP.NET MVC web project; the class library contains my entities and has Migrations enabled.
Any light on this matter is really appreciated.
I've added a Repository layer on top of EF (which utilizes both Repository and UoW patterns inherently in its construction) in a couple of projects, and I've done that with one class that utilizes generics so that I only needed one file for all of my entities. You can decide if you want to do it or not, but I've found it useful in my projects.
My repositories have typically started out like what I've shown below, with more methods added if/when I come across a need for them (obviously I'm not showing all of them; it's up to you to decide how to implement your repository).
public class Repository<T> : IRepository<T> where T : class
{
    protected IDbContext Context;
    protected DbSet<T> DbSet { get { return Context.Set<T>(); } }

    public Repository(IDbContext context = null)
    {
        // Falls back to the application's concrete context
        // (assumes MyDbContext implements IDbContext).
        Context = context ?? new MyDbContext();
    }

    public void Add(T newRecord)
    {
        DbSet.Add(newRecord);
    }

    public void Update(T record)
    {
        // Attach the (possibly detached) entity and mark it as modified
        // so SaveChanges issues an UPDATE.
        var entry = Context.Entry(record);
        DbSet.Attach(record);
        entry.State = EntityState.Modified;
    }

    public void Remove(T record)
    {
        Context.Entry(record).State = EntityState.Deleted;
        DbSet.Remove(record);
    }

    public IQueryable<T> Where(Expression<Func<T, bool>> predicate)
    {
        return DbSet.Where(predicate);
    }

    public bool Contains(Expression<Func<T, bool>> predicate)
    {
        return DbSet.Count(predicate) > 0;
    }

    public int Count(Expression<Func<T, bool>> predicate)
    {
        return DbSet.Count(predicate);
    }

    public int Save()
    {
        return Context.SaveChanges();
    }
}
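For reference, a matching IRepository<T> can be as simple as the following (this is just one possible shape that lines up with the members used above; adjust it to your needs):
using System;
using System.Linq;
using System.Linq.Expressions;

public interface IRepository<T> where T : class
{
    void Add(T newRecord);
    void Update(T record);
    void Remove(T record);
    IQueryable<T> Where(Expression<Func<T, bool>> predicate);
    bool Contains(Expression<Func<T, bool>> predicate);
    int Count(Expression<Func<T, bool>> predicate);
    int Save();
}
A controller would then work against IRepository<Person> (or whichever entity it needs) and call Where, Add, Save, etc. as required.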
I've used repositories for 2 main reasons:
Unit testing. Doing this pattern allows me to fake the underlying data without having to have bad data in my database. All I need to do is simply create another implementation of IRepository that uses an in-memory list as its data source, and I'm all set for my pages to query that repository.
Extensibility. A fair number of times I've put in some methods into my repository because I found myself constantly doing the same logic with queries in my controllers. This has been extremely useful, especially since your client-side code doesn't need to know how it's doing it, just that it is doing it (which will make it easier if you need to change the logic of one file vs. multiple files).
This is not all of it, obviously, but it should be enough for this answer. If you want to know more on this topic, I wrote a blog post on it that you can find here.
Good luck on whatever you decide to do.
Entity Framework in itself can be considered a Repository. It facilitates work with data, in this case a database. This is all that a Repository does.
If you want to build another Repository on top of what EF provides, it is completely up to you - or to your business rules.
Many complex projects use 2-3 layers of repositories with web services in between. The performance is lower, but you gain in other areas like security, resilience, separation of concerns, etc.
Your company might decide that it's in their best interest to never access data directly from front-end projects. They might force you to build a separate web-service project, which will be accessible only from localhost. So you will end up having EF as the Repository in the web-service project. On the front-end side you will obviously need to build another Repository which works with the web service.
It also depends a lot on your project. If it's a small project, it really is overkill to build a second Repository on top of EF. But then again, read above: nowadays security matters a lot more than performance.
To be politically correct I'm including the comment made by Wiktor Zychla:
"DbSet is a repository and DbContext is a Unit of Work. "Entity Framework is a Repository" could lead to unnecessary confusion."
I need to implement an MVC architecture in my company, so can anyone suggest where to keep frequently used methods that are called on all pages? Things like:
a states drop-down list, a departments drop-down list, a roles list, etc.
Please give me suggestions on where to keep them in the architecture.
Thanks
There are different solutions depending on the scale of your application. For small projects, you can simply create a set of classes in the MVC application itself. Just create a Utils folder and a DropDownLists class and away you go. For simple stuff like this, I find it's acceptable to have static methods that return the data, lists, or enumerations you require.
Another option is to create an abstract MyControllerBase class that descends from Controller and put your cross-cutting concerns in there, perhaps as virtual methods or properties. Then all your actual controllers can descend from MyControllerBase.
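A bare-bones version of that option might look like this (a sketch only; the lookup property is a placeholder for whatever shared data you need):
using System.Collections.Generic;
using System.Web.Mvc;

public abstract class MyControllerBase : Controller
{
    // Shared lookup data available to every derived controller
    protected virtual IEnumerable<string> States
    {
        get { return new[] { "A", "B", "C" }; }
    }
}

public class HomeController : MyControllerBase
{
    public ActionResult Index()
    {
        ViewBag.States = States;   // expose the shared lookup to the view
        return View();
    }
}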
For larger applications, or in situations where you might share these classes with other MVC applications, create a shared library such as MySolution.Utils and reference the library from all projects as required.
Yet another possibility for larger solutions is to use Dependency Injection to inject the requirements in at runtime. You might consider using something like Unity or Ninject for this task.
Example, as per your request (also in GitHub Gist)
// declare these in a shared library
public interface ILookupDataProvider
{
    IEnumerable<string> States { get; }
}

public class LookupDataProvider : ILookupDataProvider
{
    public IEnumerable<string> States
    {
        get
        {
            return new string[] { "A", "B", "C" };
        }
    }
}
// then inject the requirement into your controller
// in this example, the [Dependency] attribute comes from Unity (other DI containers are available!)
public class MyController : Controller
{
    [Dependency]
    public ILookupDataProvider LookupDataProvider { get; set; }

    public ActionResult Index()
    {
        var myModel = new MyModel
        {
            States = LookupDataProvider.States
        };

        return View(myModel);
    }
}
In the code above, you'll need to configure your Dependency Injection technology, but that is definitely outside the scope of this answer (check SO for help here). Once configured correctly, the concrete implementation of ILookupDataProvider will be injected at runtime to provide the data.
One final solution I would suggest, albeit very much overkill for small projects, would be to host shared services in a WCF service layer. This allows parts of your application to be separated out into highly scalable services, should the need arise in the future.
I have been using dependency injection for quite some time and I really like the technique, but I often have the problem of too many dependencies to inject (4-5), which seems too many.
I cannot find a way to make it simpler, though. For instance, I have a class with some business logic that sends messages; it accepts two other business-logic dependencies to do what's needed (one to translate data into outgoing messages, and one to translate incoming messages).
But apart from this it needs some "technical" dependencies like ILogger, ITimerFactory (because it needs to create timers inside), and IKeyGenerator (to generate unique keys).
So the whole list grows pretty big. Are there any good common ways to reduce the number of dependencies?
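To give a concrete picture, the constructor currently looks roughly like this (names simplified):
// Two business-logic dependencies plus three "technical" ones -- five in total.
public MessageSender(IOutgoingMessageTranslator outgoingTranslator,
                     IIncomingMessageTranslator incomingTranslator,
                     ILogger logger,
                     ITimerFactory timerFactory,
                     IKeyGenerator keyGenerator)
{
    // fields assigned here
}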
One way to handle those is to refactor towards Aggregates (or Facades). Mark Seemann wrote a good article on it; check it out (I highly recommend his book as well, just saying).
So say you have the following (as taken from the article):
public OrderProcessor(IOrderValidator validator,
                      IOrderShipper shipper,
                      IAccountsReceivable receivable,
                      IRateExchange exchange,
                      IUserContext userContext)
You can refactor it to:
public OrderProcessor(IOrderValidator validator,
                      IOrderShipper shipper,
                      IOrderCollector collector)
Where OrderCollector is a facade (it wraps the previous 3 dependencies):
public OrderCollector(IAccountsReceivable receivable,
                      IRateExchange exchange,
                      IUserContext userContext)
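To make the facade concrete, its body might be something like this (a sketch; IOrderCollector and the Collect method are placeholders for however you choose to group the wrapped operations):
// Facade: bundles three related dependencies behind one interface.
public class OrderCollector : IOrderCollector
{
    private readonly IAccountsReceivable _receivable;
    private readonly IRateExchange _exchange;
    private readonly IUserContext _userContext;

    public OrderCollector(IAccountsReceivable receivable,
                          IRateExchange exchange,
                          IUserContext userContext)
    {
        _receivable = receivable;
        _exchange = exchange;
        _userContext = userContext;
    }

    public void Collect(Order order)
    {
        // Use the three wrapped services together as one cohesive operation,
        // e.g. convert the order total to the current user's currency and
        // book it as receivable.
    }
}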
I hope this helps.
EDIT
In terms of the cross-cutting concerns (logging and caching, for example) and a strategy to handle them, here is a suggestion (it's what I usually do). Say you have the following:
public interface IOrderService
{
    void DoAwesome();
}

public class OrderService : IOrderService
{
    public void DoAwesome()
    {
        // do your thing here ... no logging, no nothing
    }
}
Here I'd use the decorator pattern to create an OrderService that has logging enabled:
public class OrderServiceWithLogging : IOrderService
{
    private readonly IOrderService _orderService;
    private readonly ILogger _logger;

    public OrderServiceWithLogging(IOrderService orderService, ILogger logger)
    {
        _orderService = orderService;
        _logger = logger;
    }

    public void DoAwesome()
    {
        _orderService.DoAwesome();

        _logger.Log("Awesome is done!");
    }
}
It might look like a bit of overhead but IMHO, it's clean and testable.
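Wiring the decorator up by hand is just nesting constructors (container registration varies by framework; ConsoleLogger below is a made-up ILogger implementation):
// The caller only ever sees IOrderService, but every DoAwesome() call is now logged.
IOrderService orderService = new OrderServiceWithLogging(
    new OrderService(),
    new ConsoleLogger());   // made-up ILogger implementation

orderService.DoAwesome();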
Another way would be to go into Aspect Oriented Programming and look into concepts such as interception, where basically you intercept certain method calls and perform tasks as a result. Many DI frameworks (I wanna say all?) support interception, so that might be something that you prefer.
Is there a simple way to apply a filter on action methods in ASP.NET MVC 3 against specific types of user agents? I have a hacking network chewing on us in different ways. I can play cat and mouse with IPs, subnets, etc. at the network/firewall level, but I would like to add an app-level safeguard against things like Squid, etc., as they appear to follow certain patterns. I'm not sure how this would affect performance, but I wondered if anyone has tried this approach.
thanks in advance.
doug
You would have to create a custom action filter. Here is an example of one that returns an HTTP 403 Forbidden if the requesting user agent is a Mozilla-based browser:
public class UserAgentActionFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // UserAgent can be null, so guard before inspecting it.
        var userAgent = filterContext.HttpContext.Request.UserAgent;
        if (userAgent != null && userAgent.ToLowerInvariant().Contains("mozilla"))
        {
            filterContext.Result = new HttpStatusCodeResult(403);
        }

        base.OnActionExecuting(filterContext);
    }
}
Just remember that user-agents can be faked if they realize that is how they are being blocked.
I am making an action filter to check whether a user still has a valid key. I am just wondering: what is in base.OnActionExecuting(filterContext)?
Like do I need to call it?
Also, where can I download the source for MVC 2.0? I know you could do it with 1.0, but I forgot where the files are.
Chobo2, you would only call the base if there are other actions within the framework that might impact the outcome of your method.
Take this:
public bool Override { get; set; }

public override void OnActionExecuting(ActionExecutingContext filterContext)
{
    if (!this.Override)
    {
        filterContext.Result = new RedirectResult("/Home/Index");
    }
}
Clearly this code does not need to call base, as all the code required to make the decision is contained within my method. If, however, I were to set something in the framework that might affect execution, then I'd call the base and let it decide.
I think the 90/10 rule would be that you wouldn't call base.
As for MVC 2 download