I'm using SpecFlow, and I'd like to write a scenario such as the following:
Scenario: Pressing add with an empty stack throws an exception
Given I have entered nothing into the calculator
When I press add
Then it should throw an exception
It's calculator.Add() that's going to throw an exception, so how do I handle this in the method marked [Then]?
Great question. I am neither a BDD nor a SpecFlow expert; however, my first bit of advice would be to take a step back and assess your scenario.
Do you really want to use the terms "throw" and "exception" in this spec? Keep in mind the idea with BDD is to use a ubiquitous language with the business. Ideally, they should be able to read these scenarios and interpret them.
Consider changing your "then" phrase to include something like this:
Scenario: Pressing add with an empty stack displays an error
Given I have entered nothing into the calculator
When I press add
Then the user is presented with an error message
The exception is still thrown in the background but the end result is a simple error message.
Scott Bellware touches on this concept in this Herding Code podcast: http://herdingcode.com/?p=176
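For what it's worth, a minimal sketch of a step definition the reworded "Then" could bind to, assuming a hypothetical view model that catches the calculator's exception and surfaces it as a user-facing message:
[Then(@"the user is presented with an error message")]
public void ThenTheUserIsPresentedWithAnErrorMessage()
{
    // _viewModel is the hypothetical object under test that translates
    // the exception into a displayable error message.
    Assert.IsFalse(string.IsNullOrEmpty(_viewModel.ErrorMessage));
}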
As a newbie to SpecFlow I won't tell you that this is the way to do it, but one way would be to use the ScenarioContext to store the exception thrown in the When:
try
{
    calculator.Add(1, 1);
}
catch (Exception e)
{
    // Store the exception so the Then step can inspect it later.
    ScenarioContext.Current.Add("Exception_CalculatorAdd", e);
}
In your Then you could check the thrown exception and do asserts on it:
var exception = ScenarioContext.Current["Exception_CalculatorAdd"];
Assert.That(exception, Is.Not.Null);
With that said, I agree with scoarescoare that you should formulate the scenario in slightly more business-friendly wording. However, when using SpecFlow to drive the implementation of your domain model, catching exceptions and asserting on them can come in handy.
Btw: Check out Rob Conery's screencast over at TekPub for some really good tips on using SpecFlow: http://tekpub.com/view/concepts/5
BDD can be practiced on feature-level and/or unit-level behavior.
SpecFlow is a BDD tool that focuses on feature level behavior.
Exceptions are not something you should specify/observe at the feature level.
Exceptions should be specified/observed at the unit level.
Think of SpecFlow scenarios as a living specification for the non-technical stakeholder. You would also not write in the specification that an exception is thrown, but how the system behaves in such a case.
If you do not have any non-technical stakeholders, then SpecFlow is the wrong tool for you! Don't waste energy creating business-readable specifications if there is nobody interested in reading them!
There are BDD tools that focus on unit level behavior. In .NET the most popular one is MSpec (http://github.com/machine/machine.specifications).
BDD at the unit level can also easily be practiced with standard unit-testing frameworks.
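For instance, the scenario from the question could be specified at the unit level like this (a sketch using NUnit; the exact exception type thrown by the calculator is an assumption):
[Test]
public void Add_WithEmptyStack_ThrowsAnException()
{
    var calculator = new Calculator();

    // Assert.Throws makes the expected exception part of the specification.
    Assert.Throws<InvalidOperationException>(() => calculator.Add());
}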
That said, you could still check for an exception in SpecFlow.
Here are some more discussions of BDD at the unit level vs. BDD at the feature level:
SpecFlow/BDD vs Unit Testing
BDD for Acceptance Tests vs. BDD for Unit Tests (or: ATDD vs. TDD)
Also have look at this blog post:
Classifying BDD Tools (Unit-Test-Driven vs. Acceptance Test Driven) and a bit of BDD history
Changing the scenario not to mention an exception is probably a good way to make it more user-oriented. However, if you still need to have it working, please consider the following:
Catch the exception (I really recommend catching specific exceptions unless you really need to catch them all) in the step that invokes the operation and pass it to the scenario context.
[When("I press add")]
public void WhenIPressAdd()
{
try
{
_calc.Add();
}
catch (Exception err)
{
ScenarioContext.Current[("Error")] = err;
}
}
Validate that the exception is stored in the scenario context:
[Then(#"it should throw an exception")]
public void ThenItShouldThrowAnException()
{
Assert.IsTrue(ScenarioContext.Current.ContainsKey("Error"));
}
P.S. This is very close to one of the existing answers. However, if you try getting the value from the ScenarioContext using syntax like the following:
var err = ScenarioContext.Current["Error"];
it will throw another exception if the "Error" key doesn't exist (and that will fail all the scenarios that perform calculations with correct parameters), so ScenarioContext.Current.ContainsKey may just be more appropriate.
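Depending on your SpecFlow version, TryGetValue combines the existence check and the lookup in a single call (a sketch):
Exception err;
if (!ScenarioContext.Current.TryGetValue("Error", out err))
{
    // Fail explicitly if the When step never recorded an exception.
    Assert.Fail("Expected an exception to be recorded in the When step.");
}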
My solution involves a couple of items to implement, but in the end it looks much more elegant:
@CatchException
Scenario: Faulty operation throws exception
Given Some Context
When Some faulty operation invoked
Then Exception thrown with type 'ValidationException' and message 'Validation failed'
To make this work, follow these 3 steps:
Step 1
Mark scenarios you expect exceptions in with some tag, e.g. @CatchException:
@CatchException
Scenario: ...
Step 2
Define an AfterStep handler that changes ScenarioContext.TestStatus back to OK. You may want to ignore errors only for When steps, so that you can still fail the test in a Then step that verifies the exception. This has to be done through reflection, as the TestStatus property is internal:
[AfterStep("CatchException")]
public void CatchException()
{
if (ScenarioContext.Current.StepContext.StepInfo.StepDefinitionType == StepDefinitionType.When)
{
PropertyInfo testStatusProperty = typeof(ScenarioContext).GetProperty("TestStatus", BindingFlags.NonPublic | BindingFlags.Instance);
testStatusProperty.SetValue(ScenarioContext.Current, TestStatus.OK);
}
}
Step 3
Validate TestError the same way you would validate anything else within the ScenarioContext:
[Then(#"Exception thrown with type '(.*)' and message '(.*)'")]
public void ThenExceptionThrown(string type, string message)
{
Assert.AreEqual(type, ScenarioContext.Current.TestError.GetType().Name);
Assert.AreEqual(message, ScenarioContext.Current.TestError.Message);
}
In case you are testing user interactions, I will only advise what has already been said about focusing on the user experience: "Then the user is presented with an error message". But in case you are testing a level below the UI, I'd like to share my experience:
I'm using SpecFlow to develop a business layer. In my case, I don't care about the UI interactions, but I still find extremely useful the BDD approach and SpecFlow.
In the business layer I don't want specs that say "Then the user is presented with an error message", but specs that actually verify that the service responds correctly to wrong input. For a while I did what has already been suggested, catching the exception in the "When" and verifying it in the "Then", but I find this option suboptimal, because if you reuse the "When" step you could swallow an exception where you didn't expect it.
Currently, I'm using explicit "Then" clauses, sometimes without the "When", this way:
Scenario: Adding with an empty stack causes an error
Given I have entered nothing into the calculator
Then adding causes an error X
This allows me to code the action and the exception detection in one specific step. I can reuse it to test as many error cases as I want, and it doesn't make me add unrelated code to the non-failing "When" steps.
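A sketch of what such a combined step can look like (NUnit assumed; the step wording, the _calculator field, and the exception type are illustrative):
[Then(@"adding causes an error (.*)")]
public void ThenAddingCausesAnError(string expectedError)
{
    // The step performs the action and verifies the failure in one place,
    // so nothing needs to be smuggled through the scenario context.
    var ex = Assert.Throws<InvalidOperationException>(() => _calculator.Add());
    StringAssert.Contains(expectedError, ex.Message);
}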
ScenarioContext.Current is deprecated in the latest versions of SpecFlow; it is now recommended to inject a POCO into the step class via its constructor to store and retrieve context between steps, e.g.:
public class ExceptionContext
{
    public Exception Exception { get; set; }
}

private ExceptionContext _context;

public TestSteps(ExceptionContext context)
{
    _context = context;
}
And in your [When] binding:
try
{
    // do something
}
catch (MyException ex)
{
    // Record the exception for the [Then] binding to assert on.
    _context.Exception = ex;
}
In your [Then] binding, assert that _context.Exception is set and of the exception type you expected.
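For example, a sketch of that [Then] binding (NUnit assumed, reusing the ExceptionContext from above; MyException stands in for whatever type you expect):
[Then(@"it should throw an exception")]
public void ThenItShouldThrowAnException()
{
    Assert.IsNotNull(_context.Exception);
    Assert.IsInstanceOf<MyException>(_context.Exception);
}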
Related
I'd like to write a hook or check-in policy for TFS that would, for example, find all occurrences of:
catch (Exception e)
{
    MySuperLogger.LogException("some msg", args, e);
    throw;
}
in the code that is being checked in, and replace those with:
catch (Exception e)
{
    MySuperLogger.LogExceptionWithoutStackTrace("some msg", args, e);
    throw;
}
Disregarding the point of doing that, I really need the ability to edit code that is part of the pending changes being checked in.
I tried googling this and reading the Team Services service hooks events docs, but they didn't help me much.
I agree with Daniel. You should not change files at check-in/check-out; this will cause too many issues along the way. It's also why the check-in policies don't have an elegant way to handle these kinds of issues. Buck Hodges from the Visual Studio/TFS team once wrote a detailed blog post on why this is a bad idea.
Resharper
It's better to invest in a bit of validation code. The easiest way to do this, if you have Resharper, is to create a search template and then save that as an inspection. With Resharper you can even do a solution-wide search and replace to very quickly fix all occurrences in the code.
Choosing Replace will search the whole solution for any occurrence and will allow you to fix this in one go. Notice that this will only hit occurrences in a Catch block.
You can save this pattern and then, from the Pattern Catalog, turn it into an inspection of the desired error level.
Using the Resharper CLI, you can add these inspections to a command-line based build, which allows you to integrate it with Visual Studio Team Services Build quite easily.
Roslyn
If you do not own Resharper, it's going to be a little more work: implementing a Roslyn analyzer is a great way to handle these issues in the IDE and in the build process, both on the client and on a continuous integration build, but it will require a bit of a learning curve.
Solve the core problem
Another alternative is to simply rename the old method in your MySuperLogger or to mark it [Obsolete("Use LogExceptionWithoutStackTrace instead.")]. You can tell it to raise a compiler error as well, or simply redirect the LogException method to the method that doesn't include the stack trace by "overloading" it.
// Will result in a compiler warning when this method is used
[Obsolete("Use MySuperLogger.LogExceptionWithoutStackTrace instead.")]
public static void LogException(string msg, Exception e)

// Add ", true" to have the compiler raise an error when this method is used
[Obsolete("Use MySuperLogger.LogExceptionWithoutStackTrace instead.", true)]
public static void LogException(string msg, Exception e)
Or mark it obsolete and redirect at the same time:
[Obsolete("Use MysuperLogger.LogExceptionWithoutStacktrace instead.")]
public static void LogException(string msg, Exception e)
{
LogExceptionWithoutStacktrace(msg, e);
}
With the method marked [Obsolete], Resharper will automatically offer a replacement if you format the error message cleverly.
This is a case where writing a custom code analysis rule (a Roslyn analyzer, FxCop, SonarQube, whatever) and enforcing it via a gated check-in or pull request is the correct course of action. Your commit/build process should never change code.
I am developing a RESTful service and I want to return 400 for all unsupported URLs.
My question is: when should I choose method 1 over method 2, and vice versa?
// method 1
public ActionResult Index()
{
    // The URL is unsupported
    throw new HttpException(400, "Bad Request");
}
This one seems to be better?
// method 2
public ActionResult Index()
{
    // The URL is unsupported
    return new HttpStatusCodeResult(HttpStatusCode.BadRequest, "Bad Request");
}
The second seems better, as it doesn't involve throwing an exception, which carries a micro-cost that the first example pays.
Being in a DevOps team, we are all in the mind-set where throwing more hardware at something to get a slightly better result is always a good cause. So I'm intentionally ignoring the micro-cost of firing a .NET exception.
If you're leveraging a telemetry framework like ApplicationInsights, then just returning the status code gives you nothing more than a "failed request". It doesn't give you any useful information that lets you compile, or even get, any information on the "why" of the failed request.
Telemetry platforms expect and want you to throw, because error telemetry is usually around .NET exceptions, so if you're not throwing you're creating a problem for operations.
I actually landed here because I'm in the process of writing a Roslyn analyser and CodeFix for a project where folks love to write try{} catch { return BadRequest("put_the_reason_here"); }, and neither DevOps nor the dev teams see anything useful in the system telemetry in ApplicationInsights.
In my view, you first need to consider what it means when a request is made to an unsupported URL: do you think of it as an exceptional situation, or do you expect it to happen? If you think of it as an exceptional situation, then create and throw an exception (method 1). If you expect to receive many requests on unsupported URLs, then treat it as a function of your application and use method 2.
That said, you will need to think about your clients again if you are expecting many requests on unsupported URLs. In general, I would prefer to throw an exception, as I don't expect to receive many such requests; if it does happen, I would like to log it as an exception and investigate the reason.
Although this question is a bit old, I figured I'd give my input since I came across it.
Errors are values. This goes for an HttpException (when unthrown) as well as an HttpStatusCodeResult. Thrown exceptions, however, create new code paths that are hidden from coworkers who may be one execution context higher up than yours, and who have to rely on documentation to know that these code paths will reach them without notice. Values, by contrast, tell you everything you need to know through their type. You can use the type in the expected execution path to tell whether an error has occurred, as well as to find information associated with that error and log it.
I sometimes use (lightly extended) Exceptions without throwing them to the next execution context, to extract the useful debug information that David Rodriguez mentioned. There's never an excuse for handing thrown exceptions to execution contexts above you when the situation isn't actually exceptional; that really only applies to things outside your code's ability to handle (StackOverflowException, other fatal system exceptions, etc.).
In a networked application, such as whatever MVC service you're running, the performance penalty from throwing exceptions is meaningless. The semantics, and the effects on maintainability, are not.
Network errors are values, and you should handle them like so.
You would throw an exception in code locations that cannot return an ActionResult, such as in a controller constructor.
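A minimal sketch of that situation (the repository dependency is invented for illustration):
public class ProductsController : Controller
{
    private readonly IProductRepository _repository; // hypothetical dependency

    public ProductsController(IProductRepository repository)
    {
        // A constructor cannot return an ActionResult, so a guard
        // clause has no choice but to throw.
        if (repository == null)
            throw new ArgumentNullException("repository");

        _repository = repository;
    }
}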
Can anyone please explain the difference between the two approaches below?
Logging in the controller's OnException method:
try
{
    // code
}
catch
{
    // rollback transactions
    throw;
}
Or, logging in the catch block:
try
{
    // code
}
catch
{
    // logging here
    // rollback transactions
    throw;
}
The Controller's OnException method is used when an unhandled exception occurs in the processing of the request. It indicates what should happen if an unexpected exception occurs. You should really only use this as a safeguard in the event that you messed up or the system failed in an unexpected, fatal way.
If you are executing some piece of code that you expect to throw a specific exception, wrap it in a try block and handle the specific exception accordingly. This defensive approach will help you debug issues as soon as they happen, rather than waiting for them to bubble up to a point where you don't know the cause.
Think about it: if you have multiple action methods and only one OnException method per controller, then you have a much more complex issue to handle, because any of the action methods or filters could have thrown the error. However, if you catch an exception thrown by a specific service call, then you already know exactly what caused the unexpected behavior, and it will be much easier to address accordingly.
Read this for greater understanding: Eric Lippert has an excellent article in which he breaks down the different categories of exceptions that we encounter and offers best practices for addressing them. It is available at http://blogs.msdn.com/b/ericlippert/archive/2008/09/10/vexing-exceptions.aspx. In case you don't know who Eric Lippert is, he is very smart and you should listen to him if you code in C#. His main points are:
Don’t catch fatal exceptions; nothing you can do about them anyway, and trying to generally makes it worse.
Fix your code so that it never triggers a boneheaded exception – an "index out of range" exception should never happen in production code.
Avoid vexing exceptions whenever possible by calling the “Try” versions of those vexing methods that throw in non-exceptional circumstances. If you cannot avoid calling a vexing method, catch its vexing exceptions.
Always handle exceptions that indicate unexpected exogenous conditions; generally it is not worthwhile or practical to anticipate every possible failure. Just try the operation and be prepared to handle the exception.
Update
Just realized I didn't explicitly address the "logging" question. It probably makes the most sense to avoid handling your fatal/exogenous errors in controller scope, because you will often end up duplicating your logic. This behavior is better handled in a global action filter.
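A minimal sketch of such a filter (ASP.NET MVC; MyLogger is a placeholder for whatever logging framework you use):
public class LogErrorsAttribute : FilterAttribute, IExceptionFilter
{
    public void OnException(ExceptionContext filterContext)
    {
        // Log once, globally, instead of duplicating catch blocks per action.
        MyLogger.Error("Unhandled exception", filterContext.Exception);
    }
}

// Registered once, e.g. in Global.asax / FilterConfig:
// GlobalFilters.Filters.Add(new LogErrorsAttribute());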
This codeproject article Exception Handling in ASP.NET MVC explains how to override the default HandleErrorAttribute and leverage an ErrorController so that it can be applied globally.
In addition, the following 5-part blog series gives an in depth analysis of the different options you have for error handling in MVC applications: http://perspectivespace.com/error-handling-in-aspnet-mvc-3-index-of-posts
It's not my intent to engage in a debate over validation in DDD, where the code belongs, etc., but to focus on one possible approach and how to address localization issues. I have the following behavior (method) on one of my domain objects (entities) which exemplifies the scenario:
public void ClockIn()
{
    if (WasTerminated)
    {
        throw new InvalidOperationException("Cannot clock-in a terminated employee.");
    }

    ClockedInAt = DateTime.Now;
    // ...
}
As you can see, when the ClockIn method is called, the method checks the state of the object to ensure that the Employee has not been terminated. If the Employee was terminated, we throw an exception consistent with the "don't let your entities enter an invalid state" approach.
My problem is that I need to localize the exception message. This is typically done (in this application) using an application service (ILocalizationService) that is imported using MEF in classes that require access to its methods. However, as with any DI framework, dependencies are only injected/imported if the object was instantiated by the container. This is typically not the case with DDD.
Furthermore, everything I've learned about DDD says that our domain objects should not have dependencies, and that those concerns should be handled externally to the domain object. If that is the case, how can I go about localizing messages such as the one shown above?
This is not a novel requirement, as a great many business applications require globalization/localization. I'd appreciate some recommendations on how to make this work and still be consistent with the goals of DDD.
UPDATE
I failed to originally point out that our localization is all database driven, so we do have a Localization Service (via the injectable ILocalizationService interface). Therefore, using the static Resources class Visual Studio provides as part of the project is NOT a viable option.
ANOTHER UPDATE
Perhaps it would move the discussion along to state that the app is a RESTful service app. Therefore, the client could be a simple web browser. As such, I cannot code with any expectation that the caller can perform any kind of localization, code mapping, etc. When an exception occurs (and in this approach, attempting to put the domain object into an invalid state is an exception), an exception is thrown and the appropriate HTTP status code returned along with the exception message which should be localized to the caller's culture (Accept-Language).
Not sure how helpful this response is to you, but localization is really a front-end concern. Localizing exception messages as per your example is not common practice, as end users shouldn't see technical details such as those described in exception messages (and whoever will be troubleshooting your exceptions probably has a sufficient level of English even if it is not their native language).
Of course, if necessary you can always handle exceptions and present a localized, user-friendly message to your users in your front-end. But keeping it as a front-end concern should simplify your architecture.
As Clafou said, you shouldn't use exceptions for passing messages to the UI in any way.
If you still insist on doing this, one option is to throw an error code instead of a message:
throw new InvalidOperationException("ERROR_TERMINATED_EMPLOYEE_CLOCKIN");
and then, when it happens, do whatever you need to do with the exception (log, look up localization, whatever).
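The lookup can then happen at the application boundary. A sketch, assuming ASP.NET Web API for the RESTful service described in the question (the GetString method on the question's ILocalizationService and the callerCulture variable are invented for illustration):
try
{
    employee.ClockIn();
}
catch (InvalidOperationException ex)
{
    // The exception message carries the error code; translate it into the
    // caller's culture (from Accept-Language) at the boundary.
    var message = _localizationService.GetString(ex.Message, callerCulture);
    return Request.CreateErrorResponse(HttpStatusCode.Conflict, message);
}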
If localisation is an important part of the domain/application, you should make it a first-class citizen and inject it wherever it belongs. I am not sure what you mean by "DDD says that our domain objects should not have dependencies" - please explain.
You are correct to try to avoid adding internal dependencies to your domain model objects.
A better solution would be to handle the action inside a service method such as:
public class EmployeeService : IEmployeeService
{
    public void ClockEmployeeIn(Employee employee)
    {
        if (employee.WasTerminated)
        {
            // Localize using a resource lookup code..
            throw new InvalidOperationException("Error_Clockin_Employee_Terminated");
        }

        employee.ClockedInAt = DateTime.Now;
    }
}
You can then inject the service using your DI framework at the point where you will be making the clockin call and use the service to insulate your domain objects from changes to business logic.
So I'm starting to catch the TDD bug but I'm wondering if I'm really doing it right... I seem to be writing A LOT of tests.
The more tests the better, sure, but I've got a feeling that I'm overdoing it. And to be honest, I don't know how long I can keep up writing these simple repetitive tests.
For instance, these are the LogOn actions from my AccountController:
public ActionResult LogOn(string returnUrl)
{
    if (string.IsNullOrEmpty(returnUrl))
        returnUrl = "/";

    var viewModel = new LogOnForm()
    {
        ReturnUrl = returnUrl
    };

    return View("LogOn", viewModel);
}

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult LogOn(LogOnForm logOnForm)
{
    try
    {
        if (ModelState.IsValid)
        {
            AccountService.LogOnValidate(logOnForm);
            FormsAuth.SignIn(logOnForm.Email, logOnForm.RememberMe);
            return Redirect(logOnForm.ReturnUrl);
        }
    }
    catch (DomainServiceException ex)
    {
        ex.BindToModelState(ModelState);
    }
    catch
    {
        ModelState.AddModelError("*", "There was a server error trying to log on, try again. If your problem persists, please contact us.");
    }

    return View("LogOn", logOnForm);
}
Pretty self-explanatory.
I then have the following suite of tests:
public void LogOn_Default_ReturnsLogOnView()
public void LogOn_Default_SetsViewDataModel()
public void LogOn_ReturnUrlPassedIn_ViewDataReturnUrlSet()
public void LogOn_ReturnUrlNotPassedIn_ViewDataReturnUrlDefaults()
public void LogOnPost_InvalidBinding_ReturnsLogOnViewWithInvalidModelState()
public void LogOnPost_InvalidBinding_DoesntCallAccountServiceLogOnValidate()
public void LogOnPost_ValidBinding_CallsAccountServiceLogOnValidate()
public void LogOnPost_ValidBindingButAccountServiceThrows_ReturnsLogOnViewWithInvalidModelState()
public void LogOnPost_ValidBindingButAccountServiceThrows_DoesntCallFormsAuthServiceSignIn()
public void LogOnPost_ValidBindingAndValidModelButFormsAuthThrows_ReturnsLogOnViewWithInvalidModelState()
public void LogOnPost_ValidBindingAndValidModel_CallsFormsAuthServiceSignIn()
public void LogOnPost_ValidBindingAndValidModel_RedirectsToReturnUrl()
Is that overkill? I haven't even shown the services tests!
Which ones (if any) can I cull?
TIA,
Charles
It all depends on how much coverage you need / want and how much dependability is an issue.
Here are the questions you should ask yourself:
Does this unit test help implement a feature / code change that I don't already have?
Will this unit test help regression test/debug this unit if I make changes later?
Is the code to satisfy this unit test non-trivial, i.e. does it deserve a unit test?
Regarding the 3rd one, I remember when I started writing unit tests (I know, not the same thing as TDD) I would have tests that would go like:
string expected = "some value";
string actual;
TypeUnderTest target = new TypeUnderTest();
target.PropertyToTest = expected;
actual = target.PropertyToTest;
Assert.AreEqual<string>(expected, actual);
I could have done something more productive with my time like choose a better wallpaper for my desktop.
I recommend this article by ASP.NET MVC book author Sanderson:
http://blog.codeville.net/2009/08/24/writing-great-unit-tests-best-and-worst-practises/
I'd say you are doing a little more than you probably have to. While it is nice to test every possible path your code can take, some paths just aren't very important or don't result in real differences in behavior.
In your example, take LogOn(string returnUrl).
The first thing you do in there is check the returnUrl parameter and re-assign it to a default value if it is null/empty. Do you really need a whole unit test just to make sure that one line of code happens as expected? It isn't a line likely to break easily.
Most changes that might break that line would be things that would throw a compile error. A change in the default value being assigned is possible (maybe you decide later that "/" isn't a good default value), but in your unit test, I bet you hard-coded it to check for "/", didn't you? So a change in the value necessitates a change in your test, which means you aren't testing your behavior; you are testing your data.
You can achieve a test for the behavior of the method by simply having one test that does NOT supply a parameter. That will hit your "set default" part of the routine while still testing that the rest of the code behaves well too.
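In other words, a single test along these lines covers the default-value behaviour without pinning the data (a sketch; the controller fixture field and assertion framework are assumed):
[Test]
public void LogOn_NoReturnUrl_FallsBackToADefault()
{
    var result = (ViewResult)controller.LogOn(string.Empty);
    var model = (LogOnForm)result.ViewData.Model;

    // Assert the behaviour (a fallback happened), not the data ("/").
    Assert.IsFalse(string.IsNullOrEmpty(model.ReturnUrl));
}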
That looks about right to me. Yes, you will write a lot of unit tests and, initially, it will seem like overkill and TBH a waste of time, but stick with it; it'll be worth it. What you should be aiming for (rather than just 100% code coverage) is 100% function coverage. However... if you find that you're writing a lot of UTs for the same method, it's possible that that method is doing too much. Try separating your concerns more. In my experience the body of an Action should do little more than new-up a class to do the real work. It's that class that you should really be targeting with UTs.
Chris
100% coverage is the ideal; it's really helpful if you have to massively refactor your code, as the tests then act as a specification that guards the code's correctness.
I am personally not 100% TDD (sometimes too lazy to be), but if you intend to do 100%, maybe you should write some test helpers to take away some of the burden of these repetitive tests. For example, a helper that tests all your CRUD actions in a standard post structure, with a callback that lets you pass in some evaluation, might save you a lot of time; see the sketch below.
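One possible shape for such a helper (purely illustrative; the names are invented):
public static class CrudTestHelper
{
    // Invokes an action and hands the typed result to a caller-supplied check,
    // so each repetitive post test reduces to one call with a custom assertion.
    public static void AssertPost<TResult>(Func<ActionResult> action, Action<TResult> verify)
        where TResult : ActionResult
    {
        var result = action() as TResult;
        Assert.IsNotNull(result, "Unexpected action result type");
        verify(result);
    }
}

// Usage:
// CrudTestHelper.AssertPost<RedirectResult>(() => controller.LogOn(form), r => Assert.AreEqual("/", r.Url));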
I'm unit testing only the code that I'm unsure about. Sure, you can never know what will backstab you, but writing tests for trivial things seems like overkill to me.
I'm not a unit-testing/TDD guru, but I think it's fine if you do NOT write tests just to have them. They must be useful. If you are experienced enough with unit testing, you start to feel when they are going to be valuable and when not.
You might like this book.
Edit:
Actually, I just found a quote about this in the chapter on isolation frameworks. It's about overspecifying one particular test, but I guess the idea holds at a more global scope:
Overspecifying the tests
If your test has too many expectations, you may create a test that breaks down with even the lightest of code changes, even though the overall functionality still works. Consider this a more technical way of not verifying the right things. Testing interactions is a double-edged sword: test it too much, and you start to lose sight of the big picture (the overall functionality); test it too little, and you'll miss the important interactions between objects.