According to the SpecFlow documentation, the [BeforeScenarioBlock] hook is called before both "Given" and "When" blocks. Is there any way to make the [BeforeScenarioBlock] hook run only before the "Given" block?
A [BeforeScenarioBlock] will run before any 'block' in the scenario, i.e. before each separate set of Given, When or Then steps. As far as I know there is no built-in way to specify that a hook should only run before a particular type of block, but it should be straightforward to check inside the hook which block is about to run and only execute your code for specific blocks. Something like this:
[BeforeScenarioBlock]
public void BeforeScenarioBlock()
{
    if (ScenarioContext.Current.CurrentScenarioBlock == ScenarioBlock.Given)
    {
        // execute the code before the Given block
    }
}
Although I have not tested this.
After my research, my own answer to my question is: unfortunately, no. But I'd like to hear your opinion as well.
I have 4 controllers, each with 5 methods. Every method has a try-catch block. In case of an exception, I want to log a message to a file with all the parameters of that controller method. So I have to write basically the same log instruction 4x5=20 times, and if I want to change the logging message, I have to do it in 20 places. On one hand, that sounds like a maintenance problem. On the other hand, every controller method has its own signature.
Would it be possible to have some base/parent/God method that logs for every controller? And of course, if I need to adapt/override the logging instruction for just one controller, that should also be possible. Is there any technique for this?
What about the case where I just need to generate a GUID for every controller method?
There are many ways to achieve a single piece of code that logs all your controller methods. The easiest implementation that comes to mind is to write a method that takes an action or function, invokes it, and wraps it in whatever logging you wish to use. Like so:
public void ExecuteWithLogging(Action actionToExecute)
{
    try
    {
        actionToExecute(); // the same as actionToExecute.Invoke()
    }
    catch (Exception e)
    {
        // code your logging here (and consider rethrowing so callers still see the failure)
    }
}
Then inside your controllers' methods you could use it like this:
ExecuteWithLogging(() =>
{
    // your controller code
});
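If a controller method needs to return something (for example the GUID you mention), the same idea works with a Func-based overload. This is only a sketch along the same lines as the method above; whether you rethrow after logging is up to you:

public T ExecuteWithLogging<T>(Func<T> funcToExecute)
{
    try
    {
        return funcToExecute();
    }
    catch (Exception e)
    {
        // code your logging here, then rethrow so the caller still sees the failure
        throw;
    }
}

A controller method that only needs a GUID could then call something like: var id = ExecuteWithLogging(() => Guid.NewGuid());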
But there are other ways. You could use attributes to mark each method as logged, or you could write some middleware that simply logs everything (like in this article: https://exceptionnotfound.net/using-middleware-to-log-requests-and-responses-in-asp-net-core/).
The options are many!
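If you go the attribute route, a rough sketch of an ASP.NET Core action filter could look like the one below (this assumes ASP.NET Core MVC; the attribute name LogActionAttribute is made up and the logging calls are placeholders):

using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.Extensions.Logging;

public class LogActionAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        var logger = (ILogger<LogActionAttribute>)context.HttpContext.RequestServices
            .GetService(typeof(ILogger<LogActionAttribute>));
        // one place to log the action name and all bound parameters
        logger?.LogInformation("Executing {Action} with arguments {Arguments}",
            context.ActionDescriptor.DisplayName, context.ActionArguments);
    }

    public override void OnActionExecuted(ActionExecutedContext context)
    {
        var logger = (ILogger<LogActionAttribute>)context.HttpContext.RequestServices
            .GetService(typeof(ILogger<LogActionAttribute>));
        if (context.Exception != null)
        {
            // one place to log exceptions instead of repeating a try-catch in every method
            logger?.LogError(context.Exception, "Action {Action} failed",
                context.ActionDescriptor.DisplayName);
        }
    }
}

Applying [LogAction] to a controller (or registering it globally via MvcOptions.Filters) gives every action the same logging without repeating it 20 times, and a single controller can still use a different attribute if it needs special handling.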
I have a method:
-(void)startTaskForResult:(long long*)result {
...
}
The method I want to unit test invokes the above method:
-(void)doWork {
    long long result = 0;
    [self startTaskForResult:&result];
}
I am using the OCMock library for unit tests. In my test case, I want to set the result argument to a mocked value, e.g. 100, without caring about the actual implementation of -(void)startTaskForResult:(long long*)result.
I tried the following way:
-(void)testDoWork {
    // try to set 100 to argument 'result'
    OCMStub([classToTest startTaskForResult:[OCMArg setToValue:OCMOCK_VALUE((long long){100})]]);
    // run the function, but it doesn't use mocked value 100 for argument 'result'
    [classToTest doWork];
    ...
}
But when I run my test, it doesn't use the mocked value 100 for the result argument. What is the right way to set a mocked value for the argument in my case?
A few points to answer your question:
Code for your problem:
- (void)testDoWork
{
    id mock = OCMPartialMock(classToTest);
    OCMStub([mock startTaskForResult:[OCMArg setToValue:OCMOCK_VALUE((long long){100})]]).andForwardToRealObject();
    // set your expectation here
    [classToTest doWork];
}
To solve your particular problem:
Your object should be a partial mock
Your method should be stubbed (you did that)
Your stub should forward to the real object (I assume you need the real startTaskForResult: implementation to be called)
However, you are facing these problems because you are using the wrong approach to testing.
There are three common strategies for writing unit tests:
Arrange-Act-Assert used to test methods
Given-When-Then used to test functions
Setup-Record-Verify used to test side effects. This usually requires mocking.
So:
If you want to test that startTaskForResult: returns a particular value, you should call just that method and assert on the return value (not your case, since the method's return type is void)
If the method changes the state of the object, you should assert on that state change, such as a property value
If calling doWork has the side effect of calling startTaskForResult:, you should stub it and expect the call, much like in the code above. However (!!!), you usually shouldn't expect things like this: it is not the kind of behaviour that makes much sense to test, because it is an internal implementation detail of the class. One possible exception is when both methods are public and the class contract explicitly states that one method should call the other with some preliminary setup; in that case you expect the method call with some state / arguments.
To keep your application code testable, you need to refactor it continuously. Some code is simply untestable, and it is usually better to adapt the application code than to try to cover it with tests anyway; otherwise you lose the original goal of tests: refactoring safety and a low cost of making changes.
In my application, some of the Geb tests are a bit flaky since we're firing off an ajax validation http request after each form field changes. If the ajax call doesn't return fast enough, the test blows up.
I wanted to test a simple solution for this, which (rightly or wrongly, let's not get into that debate here...) is to introduce a short 100ms or so pause after each field is set, so I started looking at how & where I could make this happen.
It looks like I need to add a Thread.sleep after the NonEmptyNavigator.setInputValue and NonEmptyNavigator.setSelectValue methods are invoked. I created a subclass of GebSpec, into which I added a static initializer block:
static {
    NonEmptyNavigator.metaClass.invokeMethod = { String name, args ->
        def m = delegate.metaClass.getMetaMethod(name, *args)
        def result = (m ? m.invoke(delegate, *args) : delegate.metaClass.invokeMissingMethod(delegate, name, args))
        if ("setInputValue".equals(name) || "setSelectValue".equals(name)) {
            Thread.sleep(100)
        }
        return result
    }
}
However, when I added some debug logging, I noticed that this code is never hit when I execute my spec. What am I doing wrong here...?
I know you asked not to get into a debate about putting sleeps whenever you set a form element value but I just want to assure you that this is really something you don't want to do. There are two reasons:
it will make your tests significantly slower and this will be painful in the long run as browser tests are slow in general
there will be situations (slow CI, for instance) where that 100ms will not be enough, so in essence you are not removing the flakiness, you are just limiting it somewhat
If you really insist on doing it this way then Geb allows you to use custom Navigator implementations. Your custom non-empty Navigator implementation would look like this:
class ValueSettingWaitingNonEmptyNavigator extends NonEmptyNavigator {
    Navigator value(value) {
        super.value(value)
        Thread.sleep(100)
        this
    }
}
This way there is no need to monkey-patch NonEmptyNavigator and you avoid any strange problems that patching might cause.
A proper solution would be to have a custom Module implementation that overrides the Navigator value(value) method and uses waitFor() to check whether the validation has completed, and then to wrap all your validated form elements in this module in your pages' and modules' content blocks. This would mean that you only wait where it is necessary, and for as little time as possible. I don't know how big your suite is, but as it grows those 100ms pauses will turn into minutes and you will be upset about how slow your tests are. Believe me, I've been there.
I've been working on a unit test with angular.mock.$httpBackend for an angular service that uses $http. I'm running into some issues related to injecting all the dependencies, because my test case needs to access the service, which in turn needs to access $httpBackend.
However, the specific issue that is tripping me up now is that sometimes the angular.mock.inject() convenience method executes the function it wraps immediately, and sometimes it just returns a copy of the function. I see in the source that this is based on a property called currentSpec.isRunning. What does this mean? Is this a Testacular or Jasmine property? I haven't gone that far down the rabbit hole yet...
Last I checked, the return value of angular.mock.inject() was based upon what type of Jasmine context you are in (I'm assuming they changed it up a bit in 1.2 with the addition of mocha support).
Essentially, if you're in a spec (actually inside a callback passed to beforeEach):
beforeEach(function () {
    inject(function () { });
});
Then it will execute the injection immediately; however, if you are still defining the spec:
beforeEach(inject(function () { }));
Then it will return a function. Otherwise it would execute before your tests were to run, and not be terribly useful. This seems to just be provided as a slightly more convenient/less verbose syntax.
I have some integration tests written for MsTest. The integration tests have the following structure:
[TestClass]
public class When_Doing_Some_Stuff
{
    [TestInitialize]
    public override void TestInitialize() // MSTest requires initialization methods to be public
    {
        // create the Integration Test Context
        EstablishContext();
        // trigger the Integration Test
        When();
    }

    protected void EstablishContext()
    {
        // call services to set up context
    }

    protected override void When()
    {
        // call the service method to be tested
    }

    [TestMethod]
    public void Then_Result_Is_Correct()
    {
        // assert against the result
    }
}
I need to filter the code coverage results for a method based on who is calling it: I want coverage to be counted only if the method is called from a method named "When" or from one that has a certain attribute applied to it.
At the moment, even a method that is only called in the EstablishContext part of some test ends up marked as visited.
I believe there is no filter for this and I would like to make the changes myself, as OpenCover is... well.. open. But I really have no idea where to start. Can anyone point me in the right direction?
You might be better off addressing this with the OpenCover developers; hmm... that would be me, then. If you look on the wiki you will see that coverage by test is one of the eventual aims of OpenCover.
If you look at the forks you will see a branch from mancau. He initially indicated that he was going to try to implement this feature, but I do not know how far he has progressed or whether he has abandoned the attempt (what he has submitted so far is just a small re-introduction of code to allow the tracing of calls).
OpenCover tracks coverage by emitting a visit identifier and updating the next element in an array that resides in shared memory, shared between the profiler (C++/native/32- and 64-bit) and the console (C#/managed/any-CPU). What I suggested to him, and this will be my approach when I get round to it if no one else does (it is why I emit the visit data in this way), was to add markers into the sequence to indicate that execution has entered/left a particular test method (filtered on the [TestMethod] attribute, perhaps), and then, when processing the results in the console, this information can be added to the model in some way. Threading may also be a concern, as it could cause the interleaving of visit points for tests run in parallel.
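To make the idea a little more concrete, here is a rough, hypothetical sketch (not OpenCover's actual code; the marker values and names are invented) of how a flat sequence of visit identifiers interspersed with enter/leave markers could be attributed to the test that was running when they were recorded:

using System.Collections.Generic;

static class CoverageByTestSketch
{
    // hypothetical marker values; real visit identifiers are assumed to be positive
    const int TestEnterMarker = -1; // followed by an id for the test method being entered
    const int TestLeaveMarker = -2;

    // groups visit point ids by the test method that was active when they were recorded
    public static Dictionary<int, List<int>> AttributeVisitsToTests(IEnumerable<int> sequence)
    {
        var result = new Dictionary<int, List<int>>();
        int? currentTest = null;
        using (var e = sequence.GetEnumerator())
        {
            while (e.MoveNext())
            {
                if (e.Current == TestEnterMarker && e.MoveNext())
                    currentTest = e.Current;            // the next element names the test
                else if (e.Current == TestLeaveMarker)
                    currentTest = null;                 // visits outside any test are ignored here
                else if (currentTest.HasValue)
                {
                    if (!result.TryGetValue(currentTest.Value, out var visits))
                        result[currentTest.Value] = visits = new List<int>();
                    visits.Add(e.Current);
                }
            }
        }
        return result;
    }
}

With tests run in parallel the flat sequence would interleave, so the markers would probably also need a thread identifier (or per-thread buffers) before this kind of attribution works, which is the threading concern mentioned above.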
Perhaps you will think of a different approach and I look forward to hearing your ideas.