Localizing validation messages from Domain Objects (Entities)

It's not my intent to engage in a debate over validation in DDD, where the code belongs, etc. but to focus on one possible approach and how to address localization issues. I have the following behavior (method) on one of my domain objects (entities) which exemplifies the scenario:
public void ClockIn()
{
    if (WasTerminated)
    {
        throw new InvalidOperationException("Cannot clock-in a terminated employee.");
    }

    ClockedInAt = DateTime.Now;
    // ...
}
As you can see, when the ClockIn method is called, the method checks the state of the object to ensure that the Employee has not been terminated. If the Employee was terminated, we throw an exception consistent with the "don't let your entities enter an invalid state" approach.
My problem is that I need to localize the exception message. This is typically done (in this application) using an application service (ILocalizationService) that is imported using MEF in classes that require access to its methods. However, as with any DI framework, dependencies are only injected/imported if the object was instantiated by the container. This is typically not the case with DDD.
Furthermore, everything I've learned about DDD says that our domain objects should not have dependencies and those concerns should be handled external from the domain object. If that is the case, how can I go about localizing messages such as the one shown above?
This is not a novel requirement, as a great many business applications require globalization/localization. I'd appreciate some recommendations on how to make this work and still be consistent with the goals of DDD.
UPDATE
I failed to originally point out that our localization is all database driven, so we do have a Localization Service (via the injectable ILocalizationService interface). Therefore, using the static Resources class Visual Studio provides as part of the project is NOT a viable option.
ANOTHER UPDATE
Perhaps it would move the discussion along to state that the app is a RESTful service app. Therefore, the client could be a simple web browser. As such, I cannot code with any expectation that the caller can perform any kind of localization, code mapping, etc. When an exception occurs (and in this approach, attempting to put the domain object into an invalid state is an exception), an exception is thrown and the appropriate HTTP status code returned along with the exception message which should be localized to the caller's culture (Accept-Language).

Not sure how helpful this response is to you, but localization is really a front-end concern. Localizing exception messages as in your example is not common practice, as end users shouldn't see technical details such as those described in exception messages (and whoever will be troubleshooting your exceptions probably has a sufficient level of English even if it is not their native language).
Of course, if necessary you can always handle exceptions and present a localized, user-friendly message to your users in your front-end. But keeping it a front-end concern should simplify your architecture.

As Clafou said, you shouldn't use exceptions for passing messages to the UI in any way.
If you still insist on doing this, one option is to throw an error code instead of the message:
throw new InvalidOperationException("ERROR_TERMINATED_EMPLOYEE_CLOCKIN");
and then, when it happens, do whatever you need to do with the exception (log, look up localization, whatever).
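For illustration, here is a hedged sketch of that idea at the REST boundary: the domain throws the code, and the caller-facing layer translates it using the question's ILocalizationService. The GetString(key, culture) member and the surrounding resource class are assumptions, not something from the original post.

public class ClockInResource
{
    private readonly ILocalizationService _localization; // injected by the container

    public ClockInResource(ILocalizationService localization)
    {
        _localization = localization;
    }

    public HttpResponseMessage Post(Employee employee, CultureInfo acceptLanguageCulture)
    {
        try
        {
            employee.ClockIn();
            return new HttpResponseMessage(HttpStatusCode.NoContent);
        }
        catch (InvalidOperationException ex)
        {
            // ex.Message carries the error code, e.g. "ERROR_TERMINATED_EMPLOYEE_CLOCKIN";
            // translate it here, using the culture parsed from the Accept-Language header.
            string localized = _localization.GetString(ex.Message, acceptLanguageCulture);
            return new HttpResponseMessage(HttpStatusCode.Conflict)
            {
                Content = new StringContent(localized)
            };
        }
    }
}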

If localisation is an important part of the domain/application, you should make it a first-class citizen and inject it wherever it belongs. I am not sure what you mean by "DDD says that our domain objects should not have dependencies" - please explain.

You are correct in trying to avoid adding internal dependencies to your domain model objects.
A better solution would be to handle the action inside of a service method such as:
public class EmployeeServiceImpl implements EmployeeService {

    public void clockEmployeeIn(Employee employee) throws InvalidOperationException {
        if (employee.isTerminated()) {
            // Localize using a resource lookup code..
            throw new InvalidOperationException("Error_Clockin_Employee_Terminated");
        }
        employee.setClockedInAt(LocalDateTime.now());
    }
}
You can then inject the service using your DI framework at the point where you will be making the clockin call and use the service to insulate your domain objects from changes to business logic.
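In the original question's C# terms, the same service-layer idea could look roughly like the sketch below. It assumes the MEF-managed ILocalizationService from the question can be injected here, and the GetString(key, culture) member is an illustrative assumption.

public class EmployeeClockInService
{
    private readonly ILocalizationService _localization;

    public EmployeeClockInService(ILocalizationService localization)
    {
        _localization = localization;
    }

    public void ClockEmployeeIn(Employee employee, CultureInfo callerCulture)
    {
        if (employee.WasTerminated)
        {
            // The application service, not the entity, resolves the human-readable text.
            throw new InvalidOperationException(
                _localization.GetString("Error_Clockin_Employee_Terminated", callerCulture));
        }

        employee.ClockedInAt = DateTime.Now;
    }
}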

Related

How to catch exceptions in grails REST controllers

If you use a controller to implement REST APIs, you want to deal with any exception thrown and return a generic or specific well-formed REST response.
We can't use the global error URL-mapping method, as the application has a number of APIs and interfaces with different response requirements, and we also don't know which HTTP error code Grails will raise (e.g. whether it will be a 400, 422, 500, etc.). Also, if we used the error page mappings, we wouldn't be able to put relevant data into the JSON response.
E.g. this will generate a GrailsRuntimeException:
class SomeController {
    def payload = request.JSON

    def someMethod() {
        BigDecimal x = new BigDecimal(payload.notExists)
    }
}
The problem is, it seems impossible to catch any error thrown.
E.g. neither this approach:
def handleRuntimeException(RuntimeException e) {
    render("some JSON error message")
}
Nor this approach:
try {
    // ...
}
catch (GrailsRuntimeException e) {
    render("some JSON error message")
}
works - it never catches the error.
Tried GroovyRuntimeException, Exception, MissingMethodException, Throwable etc.
The only solution we can think of is to not do any work in the controller, do everything in a service, where apparently we can catch errors.
This approach:
static mappings = {
    "500"(controller: "error")
}
is not what we need, for several reasons:
We have several different APIs in different controllers which would require different response formats.
We also have UI controllers, which would want the default error system that shows the stack trace etc.
We want to handle the error in the controller where the exception happened, so we can clean up, or at least log or return the state which only the controller knows.
We have decided the only way is to move all code into services and do nothing in the controller except pass on the request and render the resulting string, i.e. all parameter handling, especially number conversion, is done in the service.
It's ironic that the solution you see as suboptimal and are settling for is exactly what you should always do. This isn't PHP - don't put logic in Controllers (or GSPs).
Services are by default transactional, so they're a great place to put code that writes to the database, since that should always happen in a transaction. They're also excellent for business logic whether it's transactional or not, and you can either annotate individual methods with @Transactional to partition the methods into ones that run in a transaction and ones that don't, or split the services into some that are fully transactional and some that aren't.
If you keep all of the HTTP-related code in the controllers, doing the data binding from params, and calling helper classes (services, domain classes, taglibs, etc.) you get a good separation of concerns, and if the service layer doesn't know anything about params, HttpServletRequest, HTTP sessions, etc. then it's easily reusable in other Grails apps and even in non-Grails apps. They'll also be easier to test, since there isn't so much inter-related code that needs to be mocked and otherwise made test-friendly.
Using this approach, the controllers basically become dumb routers, accepting requests, calling helpers to do the real work, and delegating page rendering or response writing, or redirecting or forwarding.
I am very late to answer this, but I think it might help people stumbling on this question and searching for a solution.
Here is one of my blog posts explaining custom exception handling in Grails and error responses for RESTful services.
Hope this helps someone.
I am using Grails 3.2.4 and don't see the issue you describe. I have moved all business logic into service classes and catch exceptions via a parent controller trait. The ParentExceptionController trait below is implemented by the controllers whose service classes can throw such exceptions. Example:
class UserService {
    boolean create(Map params) {
        throw new InvalidParameterException('some message')
    }
}

class UserController implements ParentExceptionController {
    UserService userService

    def save() {
        userService.create(params.userDetails)
    }
}

trait ParentExceptionController {
    Object handleInvalidParameterException(InvalidParameterException exception) {
        log.error 'log message'
        respond([message: exception.message])
    }
}
Both the first and the second approach from the question work in my case.

DI Container and custom-scoped state in legacy system

I believe I understand the basic concepts of DI / IoC containers, having written a couple of applications using them and read a lot of Stack Overflow answers as well as Mark Seemann's book. There are still some cases that I have trouble with, especially when it comes to integrating a DI container into a large existing architecture where the DI principle hasn't really been used (think big ball of mud).
I know the ideal scenario is to have a single composition root / object graph per operation, but in a legacy system this might not be possible without major refactoring (only the new parts, and some select refactored old parts of the code, can have dependencies injected through the constructor; the rest of the system uses the container as a service locator to interact with the new parts). This effectively means that a stack trace deep within an operation might include several object graphs, with calls being made back and forth between new subsystems (a single object graph until exiting into an old segment) and traditional subsystems (a service-locator call at some point to code under the DI container).
With the (potentially faulty, I might be overthinking this or be completely wrong in assuming this kind of hybrid architecture is a good idea) assumptions out of the way, here's the actual problem:
Let's say we have a thread pool executing scheduled jobs of various types defined in database (or any external place). Each separate type of scheduled job is implemented as a class inheriting a common base class. When the job is started, it gets fed the information about which targets it should write its log messages to and the configuration it should use. The configuration could probably be handled by just passing the values as method parameters to whatever class needs them but if the job implementation gets larger than say 10-20 classes, it doesn't seem very handy.
Logging is the larger problem. Subsystems the job calls probably also need to write things to the log, and usually in examples this is done by just requesting an instance of ILog in the constructor. But how does that work in this case, when we don't know the details / implementation until runtime? Since:
Due to (non-DI-container-controlled) legacy system segments in the call chain (-> there potentially being multiple separate object graphs), a child container cannot be used to inject the custom logger for a specific sub-scope
Manual property injection would basically require the complete call chain (including all legacy subsystems) to be updated
A simplified example to help better perceive the problem:
class JobXImplementation : JobBase
{
    // injected through the constructor
    readonly ILoggerFactory _loggerFactory;
    readonly JobXExtraLogic _jobXExtras;

    public void Run(JobConfig configurationFromDatabase)
    {
        ILog log = _loggerFactory.Create(configurationFromDatabase.targets);
        // if there were no legacy parts in the call chain, I would register log as an instance
        // in a child container and Resolve the next part of the call chain, and everyone
        // requesting ILog would get the correct logging targets

        // do stuff
        _jobXExtras.DoStuff(configurationFromDatabase, log);
    }
}

class JobXExtraLogic
{
    public void DoStuff(JobConfig configurationFromDatabase, ILog log)
    {
        // call to legacy sub-system
        var old = new OldClass(log, configurationFromDatabase.SomeRandomSetting);
        old.DoOldStuff();
    }
}

class OldClass
{
    public void DoOldStuff()
    {
        // moar stuff
        var old = new AnotherOldClass();
        old.DoMoreOldStuff();
    }
}

class AnotherOldClass
{
    public void DoMoreOldStuff()
    {
        // call to a new subsystem
        var newSystemEntryPoint = DIContainerAsServiceLocator.Resolve<INewSubsystemEntryPoint>();
        newSystemEntryPoint.DoNewStuff();
    }
}

class NewSubsystemEntryPoint : INewSubsystemEntryPoint
{
    public void DoNewStuff()
    {
        // want to log something...
    }
}
I'm sure you get the picture by this point.
Instantiating old classes through DI is a non-starter since many of them use (often multiple) constructors to inject values instead of dependencies and would have to be refactored one by one. The caller basically implicitly controls the lifetime of the object and this is assumed in the implementations (the way they handle internal object state).
What are my options? What other kinds of problems could you possibly see in a situation like this? Is trying to only use constructor injection in this kind of environment even feasible?
Great question. In general, I would say that an IoC container loses a lot of its effectiveness when only a portion of the code is DI-friendly.
Books like Working Effectively with Legacy Code and Dependency Injection in .NET both talk about ways to tease apart objects and classes to make DI viable in code bases like the one you described.
Getting the system under test would be my first priority. I'd pick a functional area to start with, one with few dependencies on other functional areas.
I don't see a problem with moving beyond constructor injection to setter injection where it makes sense, and it might offer you a stepping stone to constructor injection. Adding a property is usually less invasive than changing an object's constructor.
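As a minimal sketch of that stepping stone, assuming the question's ILog abstraction (the Info member, the LegacyReportJob class, and the NullLog fallback are hypothetical names):

public class LegacyReportJob
{
    // Setter (property) injection: the container, or the new composition root, can assign
    // a real logger, while untouched legacy call sites silently keep the no-op default.
    public ILog Log { get; set; } = new NullLog();

    public void Run()
    {
        Log.Info("Running legacy report job");
        // ... existing legacy logic stays as-is ...
    }
}

The appeal here is that the existing constructors stay exactly as they are, so the legacy callers that control object lifetimes don't have to change while the logging dependency is gradually made explicit.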

Exception thrown Constructor Injection - AutoFac Dependency Injection

I have an Autofac DI container and use constructor injection to inject configuration settings into my SampleClass. The ConfigurationManager class is registered as SingleInstance, so the same instance is always used.
public ConfigurationManager()
{
    // Load the configuration settings
    GetConfigurationSettings();
}

public SampleClass(IConfigurationManager configurationManager)
{
    _configurationManager = configurationManager;
}
I am loading the configuration settings from an App.config file in the constructor of the ConfigurationManager. My problem is that I am also validating the configuration settings, and if they are missing from the App.config file an exception is thrown, which causes the program to crash. This means I can't handle the exception and return a response.
Am I doing this the wrong way? Is there a better way to load the configuration settings, or is there a way to handle the exception being thrown?
Edit
ConfigurationManager configurationManager = new ConfigurationManager();
configurationManager.GetConfigurationSettings();
// Try/catch around the above for the exception thrown if config settings fail,
// then register the instance with Autofac
builder.RegisterInstance(configurationManager).As<IConfigurationManager>().SingleInstance();

// Old way of registering the ConfigurationManager
builder.Register(c => new ConfigurationManager()).As<IConfigurationManager>().SingleInstance();
You are doing absolutely the right thing. Why? You are preventing the system from starting when the application isn't configured correctly. The last thing you want to happen is that the system actually starts and fails later on. Fail fast! However, make sure that this exception doesn't get lost. You could make sure the exception gets logged.
One note though. The general advice is to do as little as possible in the constructor of a type. Just store the incoming dependencies in instance variables and that's it. This way construction of a type is really fast and can never really fail. In general, building up the dependency graph should be quick and should not fail. In your case this would not really be a problem, since you want the system to fail as soon as possible (during start-up). Still, for the sake of complying with the general advice, you might want to extract this validation process out of that type. So instead of calling GetConfigurationSettings inside the constructor, call it directly from the composition root (the code where you wire up the container) and supply the valid configuration settings object to the constructor of the ConfigurationManager. This way you not only make the ConfigurationManager simpler, but you can let the system fail even faster.
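A sketch of that composition-root approach (RegisterInstance is standard Autofac; the ConfigurationSettings type, the ConfigurationSettingsLoader helper, and the settings-taking constructor are hypothetical names for illustration):

var builder = new ContainerBuilder();

// Fail fast here, in the composition root, where a try/catch (or simply letting the
// process die with a clear log entry) is straightforward.
ConfigurationSettings settings = ConfigurationSettingsLoader.LoadAndValidate();

builder.RegisterInstance(new ConfigurationManager(settings))
       .As<IConfigurationManager>()
       .SingleInstance();

var container = builder.Build();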
The core issue is that you are mixing the composition and execution of your object graph by doing some execution during composition. In the DI style, constructors should be as simple as possible. When your class is asked to perform some meaningful work, such as when the GetConfigurationSettings method is called, that is your signal to begin in earnest.
The main benefit of structuring things in this way is that it makes everything more predictable. Errors during composition really are composition errors, and errors during execution really are execution errors.
The timing of work is also more predictable. I realize that application configuration doesn't really change during runtime, but let's say you had a class which reads a file. If you read it in the constructor during composition, the file's contents may change by the time you use that data during execution. However, if you read the file during execution, you are guaranteed to avoid the timing issues that inevitably arise with that form of caching.
If caching is a part of your algorithm, as I imagine it is for GetConfigurationSettings, it still makes sense to implement that as part of execution rather than composition. The cached values may not have the same lifetime as the ConfigurationManager instance. Even if they do, encoding that into the constructor leaves you only one option, whereas an execution-time cache offers far more flexibility and it solves your exception ambiguity issue.
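As one illustration of an execution-time cache (a sketch; the ConfigurationSettings type and the loading details are assumptions), Lazy<T> keeps the "load once" behaviour without doing any work in the constructor:

public class ConfigurationManager : IConfigurationManager
{
    private readonly Lazy<ConfigurationSettings> _settings =
        new Lazy<ConfigurationSettings>(LoadAndValidateSettings);

    public ConfigurationSettings GetConfigurationSettings()
    {
        // Any validation exception now surfaces on first use, during execution,
        // not while the container is wiring up the object graph.
        return _settings.Value;
    }

    private static ConfigurationSettings LoadAndValidateSettings()
    {
        // Read App.config here, validate, and throw if anything is missing.
        return new ConfigurationSettings();
    }
}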
I would not call throwing exceptions at composition time good practice. Composition can involve fairly complex and indirect execution logic, making reasonable exception handling virtually impossible. I doubt you could invent anything better than the awful
try
{
    var someComponent = context.Resolve<SampleClass>();
}
catch
{
    // Yeah, just stub all exceptions because you have no idea what to expect
}
I'd recommend redesigning your classes so that their constructors do not throw exceptions unless they really, really need to (e.g. if they are absolutely useless with a null-valued constructor parameter). Then you'll need some methods that initialize your app, handle errors, and possibly interact with the user to do that.

OpenRasta: Uri seems to be irrelevant for handler selection

When registering two handlers for the same resource type, but with different URIs, the handler selection algorithm doesn't seem to check the URI when it determines which handler to use.
If you run the program below, you'll notice that only HandlerOne is invoked (twice). It does not matter whether I request "/one" or "/two", the latter of which is supposed to be handled by HandlerTwo.
Am I doing something wrong or is this something to be fixed in OpenRasta? (I'm using 2.0.3.0 btw)
class Program
{
    static void Main(string[] args)
    {
        using (InMemoryHost host = new InMemoryHost(new Configuration()))
        {
            host.ProcessRequest(new InMemoryRequest
            {
                HttpMethod = "GET",
                Uri = new Uri("http://x/one")
            });

            host.ProcessRequest(new InMemoryRequest
            {
                HttpMethod = "GET",
                Uri = new Uri("http://x/two")
            });
        }
    }
}

class Configuration : IConfigurationSource
{
    public void Configure()
    {
        using (OpenRastaConfiguration.Manual)
        {
            ResourceSpace.Has.ResourcesOfType(typeof(object))
                .AtUri("/one").HandledBy(typeof(HandlerOne));

            ResourceSpace.Has.ResourcesOfType(typeof(object))
                .AtUri("/two").HandledBy(typeof(HandlerTwo));
        }
    }
}

class HandlerOne
{
    public object Get() { return "returned from HandlerOne.Get"; }
}

class HandlerTwo
{
    public object Get() { return "returned from HandlerTwo.Get"; }
}
Update
I have a feeling that I could accomplish something similar using UriNameHandlerMethodSelector as described at http://trac.caffeine-it.com/openrasta/wiki/Doc/Handlers/MethodSelection, but then I'd have to annotate each handler method and also do AtUri().Named(), which looks like boilerplate to me and I'd like to avoid it. Isn't AtUri(X).HandledBy(Y) making the connection between X and Y clear?
Eugene,
You should never have multiple registrations like that on the same resource type, and you probably never need to associate ResourcesOfType<object> with URIs at all; that will completely screw with the resolution algorithms used in OpenRasta.
If you're mapping two different things, create two resource classes. Handlers and URIs are only associated by resource class, and if you fail at designing your resources, OpenRasta will not be able to match the two - this is by design.
If you want to persist down that route, and I really don't think you should, then you can register various URIs to have a name and hint on each of your methods that the name ought to be handled, using HttpOperation(ForUriName=blah). That piece of functionality is only there for those very, very rare scenarios where you do need to opt out of the automatic method resolution.
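In rough outline, that opt-out looks something like the sketch below (based on the method-selection wiki page linked above; double-check the exact attribute usage against your OpenRasta version, and the CombinedHandler class is only illustrative):

ResourceSpace.Has.ResourcesOfType(typeof(object))
    .AtUri("/one").Named("One")
    .And.AtUri("/two").Named("Two")
    .HandledBy(typeof(CombinedHandler));

class CombinedHandler
{
    [HttpOperation(ForUriName = "One")]
    public object GetOne() { return "returned from GetOne"; }

    [HttpOperation(ForUriName = "Two")]
    public object GetTwo() { return "returned from GetTwo"; }
}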
Finally, as OpenRasta is a composable framework, you shouldn't have to go and hack around existing classes; you ought to plug yourself into the framework to ensure you override the components you don't want and replace them with things you code yourself. In this case, you could simply write a contributor that replaces the handler selection with your own model if you don't like the defaults and want an MVC-style selection system. Alternatively, if you want certain methods to be selected rather than others, you can remove the existing operation selectors and replace them (or complement them) with your own. That way you will rely on published APIs to extend OpenRasta, and your code won't be broken in the future. I can't give that guarantee if you fork and hack existing code.
As Seb explained, when you register multiple handlers with the same resource type, OpenRasta treats the handlers as one large concatenated class. It therefore guesses (the best way to describe it) which potential GET (or other HTTP verb) method to execute, whichever it thinks is most appropriate. This isn't going to be acceptable from the developer's perspective and must be resolved.
I have, in my use of OpenRasta, needed to be able to register the same resource type with multiple handlers. When retrieving data from a well-normalised relational database you are bound to get the same type of response from multiple requests. This happens when creating multiple queries (in Linq) to retrieve data from either side of a one-to-many relation, which of course is important to the whole structure of the database.
Taking advice from Seb, and hoping I've implemented his suggestion correctly, I have taken the database model class and created a derived class from it in a Resources namespace for each case where a duplicate resource type would otherwise have been introduced.
ResourceSpace.Has.ResourcesOfType<IList<Client>>()
    .AtUri("/clients").And
    .AtUri("/client/{clientid}").HandledBy<ClientsHandler>().AsJsonDataContract();

ResourceSpace.Has.ResourcesOfType<IList<AgencyClient>>()
    .AtUri("/agencyclients").And
    .AtUri("/agencyclients/{agencyid}").HandledBy<AgencyClientsHandler>().AsJsonDataContract();
Client is my Model class which I have then derived AgencyClient from.
namespace ProductName.Resources
{
    public class AgencyClient : Client { }
}
You don't even need to cast the base class received from your Linq-SQL data access layer into your derived class. The Linq Cast method isn't intended for that kind of thing anyway, and although the following code will compile, it is WRONG and you will receive the runtime exception 'LINQ to Entities only supports casting Entity Data Model primitive types.'
Context.Set<Client>().Cast<AgencyClient>().ToList(); //will receive a runtime error
More conventional casts like (AgencyClient) won't work either, as converting a base class to a subclass isn't easily possible in C# (see 'Convert base class to derived class').
Using the as operator will again compile and even run, but will give a null value in the returned list and therefore won't retrieve the data intended.
Context.Set<Client>().ToList() as IEnumerable<AgencyClient>; //will compile and run but will return null
I still don't understand how OpenRasta handles the differing return class from the handler to the ResourceType but it does, so let's take advantage of it. Perhaps Seb might be able to elaborate?
OpenRasta therefore treats these classes separately and the right handler methods are executed for the URIs.
I patched OpenRasta to make it work. These are the files I touched:
OpenRasta/Configuration/MetaModel/Handlers/HandlerMetaModelHandler.cs
OpenRasta/Handlers/HandlerRepository.cs
OpenRasta/Handlers/IHandlerRepository.cs
OpenRasta/Pipeline/Contributors/HandlerResolverContributor.cs
The main change is that now the handler repository gets the registered URIs in the initializing call to AddResourceHandler, so when GetHandlerTypesFor is called later on during handler selection, it can also check the URI. Interface-wise, I changed this:
public interface IHandlerRepository
{
    void AddResourceHandler(object resourceKey, IType handlerType);
    IEnumerable<IType> GetHandlerTypesFor(object resourceKey);
    // ...
}
to that:
public interface IHandlerRepository
{
    void AddResourceHandler(object resourceKey, IList<UriModel> resourceUris, IType handlerType);
    IEnumerable<IType> GetHandlerTypesFor(object resourceKey, UriRegistration selectedResource);
    // ...
}
I'll omit the implementation for brevity.
This change also means that OpenRasta won't waste time on further checking of handlers (their method signatures etc.) that are not relevant to the request at hand.
I'd still like to get other opinions on this issue, if possible. Maybe I just missed something.

ASP.NET MVC and IoC - Chaining Injection

Please be gentle, I'm a newb to this IoC/MVC thing but I am trying. I understand the value of DI for testing purposes and how IoC resolves dependencies at run-time and have been through several examples that make sense for your standard CRUD operations...
I'm starting a new project and cannot come up with a clean way to accomplish user permissions. My website is mostly secured with any pages with functionality (except signup, FAQ, about us, etc) behind a login. I have a custom identity that has several extra properties which control access to data... So....
Using Ninject, I've bound a concrete type to a method (Bind<MyIdentity>().ToMethod(c => MyIdentity.GetIdentity());) so that when I add MyIdentity to a constructor, it is injected based on the result of the method call.
That all works well. Is it appropriate to (from the GetIdentity() method) directly query the request cookies object (via FormsAuthentication)? In testing the controllers, I can pass in an identity, but the GetIdentity() method will be essentially untestable...
Also, in the GetIdentity() method, I will query the database. Should I manually create a concrete instance of a repository?
Or is there a better way all together?
I think you are reasonably on the right track, since you abstracted away database communication and ASP.NET dependencies from your unit tests. Don't worry that you can't test everything. There will always be lines of code in your application that are untestable. GetIdentity is a good example: somewhere in your application you need to communicate with framework-specific APIs, and that code cannot be covered by your unit tests.
There might still be room for improvement though. While an untested GetIdentity isn't a problem in itself, the fact that it is callable from anywhere in the application is: it just hangs there, waiting for someone to accidentally call it. So why not abstract the creation of identities? For instance, create an abstract factory that knows how to get the right identity for the current context. You can inject this factory instead of injecting the identity itself. This allows you to have an implementation defined near the application's composition root and outside the reach of the rest of the application. Besides that, the code communicates more clearly what is happening. Nobody has to ask "which identity do I actually get?", because it will be clear from the method on the factory they call.
Here's an example:
public interface IIdentityProvider
{
    // Bit verbose, but veeeery clear
    // (pick another name if you like).
    MyIdentity GetIdentityForCurrentUser();
}
In your composition root you can have an implementation of this:
private sealed class AspNetIdentityProvider : IIdentityProvider
{
    public MyIdentity GetIdentityForCurrentUser()
    {
        // here goes the code of the MyIdentity.GetIdentity() method.
    }
}
As a trick I sometimes have my test objects implement both the factory and the product, just for convenience during unit testing. For instance:
private sealed class FakeMyIdentity
    : MyIdentity, IIdentityProvider
{
    public MyIdentity GetIdentityForCurrentUser()
    {
        // just returning itself.
        return this;
    }
}
This way you can just inject a FakeMyIdentity in a constructor that expects an IIdentityProvider. I found out that this doesn’t sacrifice readability of the tests (which is important).
Of course you want to have as little code as possible in the AspNetIdentityProvider, because you can't test it (automatically). Also make sure that your MyIdentity class doesn't have any dependency on any framework specific parts. If so you need to abstract that as well.
I hope this makes sense.
There are two things I'd kinda do differently here...
I'd use a custom IPrincipal object with all the properties required for your authentication needs. Then I'd use that in conjunction with custom cookie creation and the AuthenticateRequest event to avoid database calls on every request.
If my IPrincipal / Identity were required inside another class, I'd pass it as a method parameter rather than have it as a dependency on the class itself.
When going down this route I use custom model binders so they are then parameters to my actions rather than magically appearing inside my action methods.
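A hedged sketch of that model-binder route, using ASP.NET MVC's IModelBinder; the MyIdentity cast, the registration, and the TimesheetController are illustrative assumptions:

public class MyIdentityModelBinder : IModelBinder
{
    public object BindModel(ControllerContext controllerContext,
                            ModelBindingContext bindingContext)
    {
        // Assumes the AuthenticateRequest stage has already attached a MyIdentity
        // (e.g. rebuilt from the custom cookie) to the current principal.
        return controllerContext.HttpContext.User.Identity as MyIdentity;
    }
}

// Registered once at application start (Global.asax):
// ModelBinders.Binders.Add(typeof(MyIdentity), new MyIdentityModelBinder());

public class TimesheetController : Controller
{
    public ActionResult ClockIn(MyIdentity identity)
    {
        // identity arrives as a plain action parameter, supplied by the binder
        return new EmptyResult();
    }
}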
NOTE: This is just the way I've been doing things, so take with a grain of salt.
Sorry, this probably throws up more questions than answers. Feel free to ask more questions about my approach.
