Demonstration using Spock

I'm going to be doing a presentation on Spock next week and as part of the presentation I need to give a demonstration. I have used Spock a bit before for a project but haven't used it in about a year or so.
The demonstration needs to be more than just a "hello world" type demonstration. I'm looking for ideas of cool things I can demonstrate using Spock ... Any ideas?
The only thing I have now is the basic example that is included in the "getting started" section of the Spock website.
def "length of Spock's and his friends' names"() {
expect:
name.size() == length
where:
name << ["Kirk", "Spock", "Scotty"]
length << [4,5,6]
/*
name | length
"Spock" | 5
"Kirk" | 4
"Scotty" | 6
*/
}

You can use the same tool for end-to-end testing as well as unit testing. Since Spock is based on Groovy, you can build your own simple domain-specific-language (DSL) automation framework on top of it. I have around 5,000 automated tests running as part of CI using such a framework.
For acceptance testing:
- power asserts: focus on how easy it is to interpret the failed assertions
- BDD with given-when-then blocks
- data-driven specs and unrolling
- business-friendly reporting
- powerful UI automation by pairing Spock with Geb
For unit and integration testing:
- interaction-based testing and mocking (see the sketch after this list)
- simplified XML (and similar) testing thanks to Groovy goodies
Get more ideas from the Spock documentation.
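As a demo-sized illustration of the interaction-based testing point, here is a minimal sketch. The Publisher and Subscriber types are invented for the example; Mock() and the 1 * subscriber.receive(...) interaction syntax are standard Spock:

    import spock.lang.Specification

    // Toy collaborators, defined inline so the snippet is self-contained
    interface Subscriber { void receive(String message) }

    class Publisher {
        List<Subscriber> subscribers = []
        void send(String message) { subscribers.each { it.receive(message) } }
    }

    class PublisherSpec extends Specification {
        def subscriber = Mock(Subscriber)                         // Spock creates the mock
        def publisher = new Publisher(subscribers: [subscriber])

        def "publisher delivers each message to its subscribers"() {
            when:
            publisher.send("hello")

            then:
            1 * subscriber.receive("hello")   // demand exactly one call with "hello"
        }
    }

Changing the demanded count to 2 * and watching the resulting interaction-failure report is itself a nice demo of Spock's diagnostics.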

Related

How do I select tests to run when using `bazel test ...`?

I have a repo that uses Bazel as the build and test system. The repo contains both Python and Go code. There are two types of tests: unit tests and integration tests. I would like to run them in two separate test steps in our CI, and I would like new tests to be discovered automatically when they are added to the repo. We are currently using `bazel test ...`, but that does not let me split the unit tests from the integration tests. Is there any rule or existing mechanism to do this? Thanks.
Bazel doesn't really have a direct concept of unit vs. integration testing, but it does have the concept of a test "size", i.e. how "heavy" a test is. This docs page gives an outline of the size attribute on test rules, while the Test Encyclopedia gives a great overview.
When the tests are appropriately sized, it's then possible to use the --test_size_filters flag to run the tests of that size.
For example:

    bazel test ... --test_size_filters=small    # run the unit tests
    bazel test ... --test_size_filters=large    # run the integration tests
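For those filters to match anything, the test rules need size attributes. A minimal BUILD sketch (the target and file names here are invented for illustration):

    py_test(
        name = "parser_test",
        srcs = ["parser_test.py"],
        size = "small",    # treated as a unit test
    )

    py_test(
        name = "db_roundtrip_test",
        srcs = ["db_roundtrip_test.py"],
        size = "large",    # treated as an integration test
    )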
You may want different additional flags for unit tests vs. integration tests, so adding a new config to .bazelrc might be a good idea; you can then run via bazel test ... --config=integration, for example.
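A minimal sketch of such a .bazelrc entry (the config names and the extra flags chosen are just an assumption):

    # .bazelrc (illustrative): group test flags behind named configs
    test:unit --test_size_filters=small
    test:integration --test_size_filters=large,enormous
    test:integration --test_timeout=900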
--test_size_filters is the best approach because test sizes are a widely used convention. If you need a different separation, then tags are the way to go:
    py_test(
        name = "unit_test",
        srcs = ["unit_test.py"],
        tags = ["unit"],
    )

    py_test(
        name = "integration_test",
        srcs = ["integration_test.py"],
        tags = ["integration"],
    )
And then:

    bazel test --test_tag_filters=unit //...
    bazel test --test_tag_filters=integration //...
    bazel test --test_tag_filters=-integration,-unit //...  # every test that is neither "unit" nor "integration"

Can a Gherkin test be written which will always result in a failure?

Let me start by stating I am very wet behind the ears with Gherkin and Cucumber.
I've put together a PoC for my company of a Jenkins project that will build and execute tests when there is a check-in to a Git repository. When the tests have completed, Jenkins then updates the tests managed in Xray for Jira.
The tests are Cucumber tests written in Gherkin. I have tried in vain to make a single test fail, just so I can include a failure in the demo I am going to give to upper management.
Here are the contents of my file HelloWorld.feature:

    Feature: First Hello World

      #firsth #hello #XT-93
      Scenario Outline: First Hello World
        Given I have "<task>" task
        And Step from "<scenario>" in "<file>" feature file
        When I attempt to solve it
        Then I surely succeed

        Examples:
          | task  | scenario    | file          |
          | first | First Hello | First Feature |
Currently all the tests I have pass. I have attempted to modify that test so that it would fail but thus far have only been able to get it to show in Xray as EXECUTING or TO DO.
I have searched to see if it was possible to create a test that would always result in a test failure but have not been able to find anything.
I do not know Gherkin; I'm only using what was given to me to work with, so please forgive my question.
Thank you for any guidance anyone might be able to provide.
Cucumber assumes a step passes if no exception is thrown. Causing a test to fail is easy. Just throw an exception in the step definition.
Most unit testing frameworks give you an explicit way to fail a test. You haven't mentioned the tech stack in use, but MS Test for .NET gives you Assert.Fail("reason for failure goes here.");
Or simply throw an explicit exception: throw new Exception("fail test on purpose");
As long as the step throws an exception the entire scenario should fail.
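For instance, a step binding that always fails could look like this (a SpecFlow/C# sketch, since MSTest was mentioned above; the step text and class name are invented and would have to match a line in your feature file):

    using TechTalk.SpecFlow;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [Binding]
    public class FailingSteps
    {
        // Matches a Gherkin step such as: Then I deliberately fail
        [Then(@"I deliberately fail")]
        public void ThenIDeliberatelyFail()
        {
            // The unhandled assertion exception fails this step and the whole scenario
            Assert.Fail("Failing on purpose for the demo.");
        }
    }

In Cucumber-JVM the equivalent is simply a step definition method that throws.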

How to avoid ignored tests being generated in SpecFlow MSTest?

Here is the sample feature file; it has @ignore examples.

    @ChildTest
    Scenario Outline: Sub Report
      Given I have clicked on EmpId: '<EmpId>' to view Report
      When Loading mask is hidden
      Then I have clicked on 'Back to Results' link.

      @ignore
      Examples:
        | EmpId              | Date    |
        | CHILD_TEST_SKIPPED | dynamic |
I would like the test generator to avoid generating unit test methods for the @ignore examples.
You cannot get the test generator to ignore those tests. SpecFlow tags become [TestCategory("ignore")] attributes on the test methods that get generated.
You will need to filter out those tests in Test Explorer. Enter -trait:ignore in the Test Explorer search bar to exclude those tests.
An alternative is to set the test to "pending":

    Scenario: ...
      Given pending
Then, in the step definition for Given pending, call: Assert.Inconclusive("This test is temporarily disabled.");
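A sketch of that binding (SpecFlow with MSTest, consistent with the snippets above; the class name is arbitrary):

    using TechTalk.SpecFlow;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [Binding]
    public class PendingSteps
    {
        [Given(@"pending")]
        public void GivenPending()
        {
            // MSTest marks the scenario inconclusive: neither passed nor failed
            Assert.Inconclusive("This test is temporarily disabled.");
        }
    }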
Then the tests get executed, but report that they are neither passing nor failing. I do this quite a bit when implementing new features so I can write the tests ahead of time.

Test coverage in REST Assured

I am planning to use REST Assured for API testing in our project.
I am wondering if any mechanism is available to determine the test coverage.
I did a Google search but didn't find an answer.
First of all, what kind of coverage?
- Requirement
- Code
- Endpoint
- Method
- Bug
- Environment
- etc.
I guess it is endpoint and/or requirement coverage you want to measure. Create a coverage matrix where you mark each covered endpoint and its method (y axis) against the test cases (x axis). The endpoints can be provided up front, since you should know them; the rest can be collected at runtime with a collector utility.
I personally used a custom annotation on the tests to mark the endpoint(s) they exercise, while the test results were provided by a TestNG ITestListener. At the end, a custom reporter turned this into a coverage heat map.
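A minimal sketch of that approach in Java (the @CoversEndpoint annotation and the printed report are invented for illustration; TestListenerAdapter and its callbacks are the real TestNG API):

    import org.testng.ITestResult;
    import org.testng.TestListenerAdapter;

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;
    import java.lang.reflect.Method;

    // Hypothetical marker recording which endpoint a test exercises
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface CoversEndpoint {
        String path();
        String method() default "GET";
    }

    // Collects (endpoint, outcome) pairs as tests finish; a custom reporter
    // can then turn the collected data into a coverage matrix or heat map
    public class EndpointCoverageListener extends TestListenerAdapter {
        @Override
        public void onTestSuccess(ITestResult tr) { record(tr, "PASS"); }

        @Override
        public void onTestFailure(ITestResult tr) { record(tr, "FAIL"); }

        private void record(ITestResult tr, String outcome) {
            Method m = tr.getMethod().getConstructorOrMethod().getMethod();
            CoversEndpoint covers = m.getAnnotation(CoversEndpoint.class);
            if (covers != null) {
                System.out.printf("%s %s -> %s%n", covers.method(), covers.path(), outcome);
            }
        }
    }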

Unit Testing in ASP.NET MVC

I'm just embarking on using a test framework for writing unit tests, and also on the TDD approach. Having no prior experience, I felt it would be good to go with xUnit, although NUnit looked like the best alternative. I'm trying to translate the MSTest methods I've seen in the MVC books I have into xUnit equivalents, and am already stumbling.
Specifically the following:
Testing a list of entries for a view collection like Index:

    // from the MVC Unleashed book
    CollectionAssert.AllItemsAreInstancesOfType(
        (ICollection)result.ViewData.Model, typeof(MyObject));
How would you do this in xUnit, or can't it be done like this?
What puts me off is the lack of documentation for xUnit, and I'm wondering if NUnit is the better option.
Also, it appears that the testing code is almost its own language. Would it be fair to say that there is a common set of tests that can be run for all projects?
Regarding TDD: I understand the concept, but are the tests themselves the same as unit tests in what they contain and are testing? I'm not sure what the actual difference is, apart from when they get written!
I am a fan of MSpec (see the MSpec installer to get started). It runs on top of NUnit. There are also MVC extension methods for things like:

    result.ShouldBeAView().And().ShouldHaveModelOfType<T>()
A controller test can look like this:

    [Subject(typeof(JobsController))]
    public class when_viewing_index_page : specifications_for_jobs_controller
    {
        static ActionResult result;

        Establish context =
            () => result = controller.Index();

        It should_return_a_view_result =
            () => result.ShouldBeAView();

        It should_return_a_view_with_formviewdata =
            () => result.ShouldBeAView().And().ShouldHaveModelOfType<IList<Job>>();

        It should_contain_a_list_of_jobs =
            () => result.Model<IList<Job>>().ShouldNotBeEmpty();
    }
I don't know about xUnit, but NUnit has collection constraints. For your example you could use this code:

    Assert.That((ICollection)result.ViewData.Model, Is.All.InstanceOf<MyObject>());
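And since the question asked about xUnit: xUnit has no collection-constraint syntax, but the same check can be written with Assert.All (a sketch; the IsAssignableFrom cast assumes the model is an enumerable of a reference type):

    // xUnit: assert every item in the model is a MyObject
    var model = Assert.IsAssignableFrom<System.Collections.Generic.IEnumerable<object>>(
        result.ViewData.Model);
    Assert.All(model, item => Assert.IsType<MyObject>(item));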
I think it would be unfair for me to comment on which testing framework you should use, having only used NUnit, but it has always been enough for me.
Regarding your second and third points, however: the majority of tests are very similar, but that is the point of TDD: start from the base and keep refactoring until you have "no more, and no less".
Test-Driven Development and Test-After Development are both forms of unit testing, but within TDD the tests are driving the development, ensuring that every line you write has a purpose and is fully tested.
There are disadvantages; sometimes you have to write the code before testing (especially when starting out) so you can figure out how to test it, so it may be fair to say that your development will contain a bit of both types mentioned.
TDD, and any form of automated testing, is definitely worth the effort, if for nothing but the satisfaction you get from seeing hundreds of tests pass in your application on your final test run.
