I am facing the following problems while trying to use the Spock framework:
1. I'm forced to inherit from Specification. Is there any way to use annotations instead?
2. I cannot execute individual tests. The only option I've found on the internet is the @IgnoreRest annotation; is there any other way to do it?
ad 1. There is no way around inheriting directly or indirectly from Specification (for good reason).
ad 2. It depends on whether the environment in which you are executing tests (IDE, build tool) allows you to execute individual (JUnit) tests.
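For reference (re point 1), a minimal spec looks like this; the class and method names here are just placeholders:

    import spock.lang.Specification

    class MathSpec extends Specification {

        def "one plus one equals two"() {
            expect:
            1 + 1 == 2
        }
    }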
I have an automation project that I've recently updated to run a set of Android and a set of iOS tests in parallel. I've got my report files aggregating OK, but since they're running the same features, the reports at the end can't really identify which ran on Android and which on iOS.
It is a pretty standard Cucumber, Gherkin, Java project. I'm trying to figure out the best way to get the word Android/iOS into the Feature name field, so on the generated report it can be easily identified.
The first option I've thought about is to somehow modify the feature name during the @Before step. However, it looks like all of the fields of the Scenario object have getters only.
The second option was to reference a system property or environment variable in the .feature file itself. However, I haven't seen any way to do that.
Has anyone tried this before? I can post code as necessary; it's more of a general question of how I could dynamically change a feature name, or alter a feature file itself via an environment variable, when it runs.
Thanks
I think this similar question may have the answer you're seeking.
Basically, you're going to create your own runner type (or modify the existing one) and inside it set a custom report path.
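As a rough sketch of that idea (assuming a Cucumber-JVM 1.x JUnit runner; class names and paths here are made up), you could keep one runner per platform, each writing its report to a platform-specific path, so the aggregated output stays distinguishable:

    import org.junit.runner.RunWith;
    import cucumber.api.CucumberOptions;
    import cucumber.api.junit.Cucumber;

    @RunWith(Cucumber.class)
    @CucumberOptions(
            features = "src/test/resources/features",
            // platform-specific report path keeps the Android results separate
            plugin = {"json:target/cucumber-android.json"})
    public class AndroidTestRunner {
    }

A matching IosTestRunner would be identical apart from writing to target/cucumber-ios.json.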
I'm trying to use SpecFlow for a .NET project. I'm new to SpecFlow. The development team are using NUnit, so it would seem that SpecFlow would be a good option in conjunction with Cucumber. However, the development team have come back saying that SpecFlow cannot be used, because they do not have an API/service available to use at the level required. Currently all of their automated tests run through the UI using TestComplete; I am keen to move to API-level testing.
Can anyone explain to me why SpecFlow cannot be used? I'm sorry, it's a newbie question, but no one I've asked can answer it, and I've asked everyone I can think of. Surely the first step would be to see if we can use SpecFlow with NUnit, but perhaps not.
Can anyone give me a guide on my next steps? How can I be sure this isn't an option, rather than writing it off when it's really just being blocked?
Thank you
SpecFlow has a unit test generator that generates unit test code for a variety of unit test frameworks; in its default configuration it generates NUnit tests. The getting-started page on specflow.org explains a quick way to get up and running with SpecFlow and NUnit: http://www.specflow.org/getting-started/.
If the UI is HTTP based, SpecFlow can be used with WebDriver or another browser automation framework to test the UI. This blog post provides an overview of how to get started with SpecFlow, NUnit, and WebDriver: http://blogs.lessthandot.com/index.php/enterprisedev/application-lifecycle-management/using-specflow-to/
I am unclear on the API you want to test. If you could provide more information on the specific API and UI you are trying to test, I could possibly provide some code examples or references for you.
Is the API exposed through HTTP?
Is the UI a web, mobile, or desktop application?
Have you tried to use SpecFlow at all?
SpecFlow doesn't run tests. It simply maps readable language to tests. If their test can be written as an NUnit test, then SpecFlow is available for you to use. With no change, here is how it would look:
Scenario: Running 'testname'
Then I execute the test 'TestName'
You would map that to:
[Then(@"I execute the test '(.*)'")]
public void ExecuteSpecificTest(string testName)
{
    // Using reflection, look up and invoke the test method with this name
}
Obviously you would want to do better than that. You want a given, when, then so you clearly show the setup and the action, and then compare the expected versus the actual result, but it isn't strictly necessary. Best practices, however, are another discussion.
To sum it up, code is code, and SpecFlow simply maps to code. You can use WatiN, WebDriver, or anything else to hook into the UI or an API. SpecFlow doesn't care. It simply executes the methods without knowing what's inside.
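For what it's worth, the fuller given/when/then shape mentioned above might look something like this minimal sketch (all class, step, and variable names here are invented):

    using NUnit.Framework;
    using TechTalk.SpecFlow;

    [Binding]
    public class CalculatorSteps
    {
        private int _first;
        private int _second;
        private int _result;

        [Given(@"I have entered (\d+) and (\d+)")]
        public void GivenIHaveEntered(int first, int second)
        {
            // setup: remember the inputs
            _first = first;
            _second = second;
        }

        [When(@"I add them")]
        public void WhenIAddThem()
        {
            // action: exercise the code under test
            _result = _first + _second;
        }

        [Then(@"the result should be (\d+)")]
        public void ThenTheResultShouldBe(int expected)
        {
            // compare expected versus actual
            Assert.AreEqual(expected, _result);
        }
    }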
I've got an Erlang project comprising a bunch of different applications. I'm using Common Test to do some of the testing.
apps/foo/suites/foo_SUITE.erl
apps/bar/suites/bar_SUITE.erl
I'm starting to see duplication of utility code in those suites.
Where should I put my utility code so that it can be shared between the two suites?
I've considered adding another application:
apps/test_stuff
...but I can't make the CT suites depend on this without making the application under test depend on this (or can I?). I don't want to do that, because test_stuff is only needed when testing.
I have a similar problem with my EUnit tests, both between applications (apps/foo/test vs. apps/bar/test) and where I'm using similar functionality between the EUnit and CT tests in the same application (apps/bar/suites vs. apps/bar/test). Can I use the same solution for this case as well, or do I need to ask another question about that?
Do you think ct:require/1,2 could help, so that the foo and bar suites would require test_stuff before they get executed? For more information, see http://www.erlang.org/doc/man/ct.html#require-1
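If that route helps, the hook point would be the suite/0 info function; note that require checks for a configuration variable, so test_stuff would have to be defined in a config file handed to Common Test. A minimal sketch:

    %% in foo_SUITE.erl; 'test_stuff' is assumed to be a config variable
    %% defined in a file passed to ct_run with -config
    suite() ->
        [{require, test_stuff}].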
It depends on how you are packaging your final releases. For example, I use rebar for release management. I have Cowboy fetched along with other dependencies for testing purposes, but I omit it in my reltool.config, so it doesn't get packaged with the final product. I use rebar to run Common Test, and it's able to add Cowboy to the path without having it bundled as a lib with everything else or added as a dependency to the app I'm testing.
However, if you have another process which infers your release configuration from your dependencies, you'll have to find a way to exclude your test code when you generate a release.
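To make the pattern above concrete, it amounts to something like this (the dependency, URL, and version strings are just examples, and both files are abbreviated):

    %% rebar.config: fetch the test-only dependency like any other dep
    {deps, [
        {cowboy, ".*",
         {git, "https://github.com/ninenines/cowboy.git", {tag, "1.0.0"}}}
    ]}.

    %% reltool.config: simply leave cowboy out of the release definition
    {sys, [
        {rel, "myrel", "1", [kernel, stdlib, myapp]}  %% no cowboy here
    ]}.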
Is there a way to check which Grails plugins are active and used during application runtime?
I want to remove a plugin but I want to be absolutely sure that it is not used anymore...
Well, a brute force way would be to copy your Grails project (preferably using a source control tool like git's branching feature), remove the plugin, and make sure that:
No exceptions on a grails clean, grails compile, and grails refresh-dependencies.
All unit and integration tests pass (your team is writing those, right? ;) )
You can run the application and use it fairly normally; warning, this is the worst test, and by itself isn't sufficient, as you could end up with a BOMM.
If you're familiar with the classes in the plugin, but there are way too many Grails files to look through manually, you could use code search tools like those found in GGTS or whatever IDE/text editor you're using. Even grep could be handy for finding references to those classes or some distinctly named methods.
Conversely, if the plugin is basically a black box and your Grails app is small enough to get around, check the import statements at the top of your Controllers, Domains, and Services. If the plugin provides more client-side technology (like the jQuery plugin), check your GSPs and various items in the web-app directory (like JavaScript files) for references to it.
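For example, something along these lines from the project root (the package name is a placeholder for whatever the plugin actually uses):

    grep -rn "org.example.someplugin" grails-app/ src/ test/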
I haven't yet installed my license of NCover 3, and am still running 1.5.8 on my build server. I am trying to exclude full assemblies and specific classes that I don't want included in the report, because they are artificially lowering the coverage results.
In NCoverExplorer, I was playing around with the options because there is a coverage exclusions section in the Options tab where you can specify full namespaces. I've entered the fully qualified classes, and for some reason, only a handful of them get excluded, and I cannot figure out why. For example, when I add System.ComponentModel.Composition to the list, it never gets excluded!
Is this just a bug in 1.5.8 that I have to live with for now, since it is a beta and also no longer supported? Although I do have a new license for the server, I'd like to be able to do some coverage at home on my personal computer.
I found a really great article on using a CoverageExcludeAttribute to make NCover automatically skip those classes / methods that are marked with this attribute. Is this the best option?
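From that article, the attribute itself is just an empty marker class, roughly like this (the coverage tool is then configured separately to skip members decorated with it):

    using System;

    // Empty marker attribute; the coverage tool is configured to
    // exclude anything decorated with an attribute of this name.
    [AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
    public class CoverageExcludeAttribute : Attribute
    {
    }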
Did you try using regular expressions to include/exclude the assemblies? For example:
//ias AuctionSniper([.\w]*?)(?<!Tests)
This includes all assemblies whose names begin with AuctionSniper but don't end with Tests, e.g. AuctionSniper.Main.exe.
You can specify multiple patterns separated by semicolons, for example:
//ias .*vendorsupplied.*;.*tests
This works with NCover 3; you can give it a try to see if it works with the free/community edition.
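Wrapping the test runner looks roughly like this (the runner and assembly names are placeholders):

    NCover.Console.exe nunit-console.exe MyApp.Tests.dll //ias "AuctionSniper([.\w]*?)(?<!Tests)"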
One way to get partway there is to also use the "assemblies to include in coverage" option. This allowed me to ignore System.ComponentModel.Composition. However, excluding most of the other classes and namespaces still doesn't work. It's a little odd that when I exclude an entire class, it only excludes the methods, but not any of the contained classes.