Ant task to configure RabbitMQ

I am setting up my local build using Ant and have decided to use RabbitMQ. I would like an Ant task that I can use to configure my local installation, setting things up (stop, start, create queues, etc.) and tearing them down as part of my test suite.
Has anyone come across anything like this?

I described a scenario in this question where the OP was looking for a way to declare queues and bindings without the overhead of doing it at runtime.
In my solution I use a console utility to perform the queue declarations and have this called from a build step in my build server when running builds and tests.
During the normal course of coding and integration testing from the IDE, I simply make sure I have run the utility fairly recently, so that the queues exist as per the current XML definitions. My test setups ensure that the queues themselves are empty before running.
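A sketch of what such a build step could look like without a custom utility, using RabbitMQ's own rabbitmqctl and rabbitmqadmin command-line tools (assumed to be on the PATH; the queue name is illustrative):

    <target name="rabbitmq-setup">
        <!-- make sure the broker's app is running, then declare the queues the tests expect -->
        <exec executable="rabbitmqctl" failonerror="true">
            <arg line="start_app"/>
        </exec>
        <exec executable="rabbitmqadmin" failonerror="true">
            <arg line="declare queue name=test.orders durable=true"/>
        </exec>
    </target>

    <target name="rabbitmq-teardown">
        <exec executable="rabbitmqadmin" failonerror="true">
            <arg line="delete queue name=test.orders"/>
        </exec>
    </target>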
Hope this helps.
Steve

Ant is a build tool. While running your automated tests is generally part of a build process, the setup of your queues is part of your specification's context and should be included in your tests. If you truly need to configure your exchanges and queues once before all test runs, many test frameworks provide a facility to do this.
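For example, a sketch of a once-per-class setup using Python's unittest and the pika client (assumes a broker on localhost; the exchange, queue, and binding names are illustrative):

    import unittest
    import pika

    class OrderQueueTests(unittest.TestCase):
        @classmethod
        def setUpClass(cls):
            # declare the exchange/queue topology once for the whole class
            cls.connection = pika.BlockingConnection(
                pika.ConnectionParameters("localhost"))
            cls.channel = cls.connection.channel()
            cls.channel.exchange_declare(exchange="orders",
                                         exchange_type="topic", durable=True)
            cls.channel.queue_declare(queue="orders.incoming", durable=True)
            cls.channel.queue_bind(queue="orders.incoming",
                                   exchange="orders", routing_key="order.*")

        @classmethod
        def tearDownClass(cls):
            # remove the queue so runs stay independent
            cls.channel.queue_delete(queue="orders.incoming")
            cls.connection.close()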

Related

Executable requirements, PHPUnit and Jenkins strategy

We use Jenkins and PHPUnit in our development. For a long time I have wanted to start using executable requirements in our team. An architect/team leader/someone who defines requirements can write tests before the actual code; the actual code is then written by another team member. The executable-requirements tests are therefore committed to the repository before the actual code exists, Jenkins rebuilds the project, and it rightfully fails. The project then remains in a failed state until the new code is written, which defeats the XP rule of keeping the project in a good state at all times.
Is there any way to tell PHPUnit that such-and-such tests should not be run under Jenkins, while still letting any dev execute them locally with ease? Tweaking phpunit.xml is not really desirable: local changes to the tests are better, as they are easier to keep track of.
We tried markTestIncomplete() and markTestSkipped(), but they don't really do what we need: executable-requirements tests are genuinely complete and should not be skipped, and using these functions prevents easy execution of such tests during development.
The best approach in our case would be a PHPUnit option like --do-not-run-requirements, used by the PHPUnit run that Jenkins executes. On a dev machine this option would not be used, and the actual executable-requirements tests would carry an @executableRequirements annotation at the beginning (removed only after the actual code is created and tested). The issue is that PHPUnit does not have such functionality.
Maybe there is a better way to implement executable requirements without "false" failures in Jenkins?
With PHPUnit, tests can be filtered for execution. Either annotate tests that should not be executed in a given environment with the @group annotation and then use --exclude-group <name-of-group> (or the <group> element of PHPUnit's XML configuration file), or use the --filter <pattern> command-line option. Both approaches are covered in the documentation.
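For example, a sketch using the asker's proposed marker as the group name (the class and method names are illustrative):

    <?php
    class PasswordRequirementsTest extends PHPUnit_Framework_TestCase
    {
        /**
         * @group executableRequirements
         */
        public function testUserCanResetPassword()
        {
            // requirement body written by the requirements author
        }
    }

Jenkins would then invoke phpunit --exclude-group executableRequirements, or equivalently the phpunit.xml used by the CI job could exclude the group:

    <groups>
        <exclude>
            <group>executableRequirements</group>
        </exclude>
    </groups>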
"For a long time I have wanted to start using Test Driven Development in our team. I don't see any problem with writing tests before actual code."
This is not TDD.
To quote from Wikipedia:
"first the developer writes an (initially failing) automated test case that defines a desired improvement or new function, then produces the minimum amount of code to pass that test, ..."
Notice the test case in the singular.
Having said that, you are quite welcome to define your own development methodology whereby one developer writes tests in the plural, commits them to version control, and another developer writes code to satisfy the tests.
The solution to your dilemma is to commit the tests to a branch and have the other developer work in that branch. Once all the tests are passing, merge with trunk, and Jenkins will see the whole lot and give its verdict on whether the tests pass.
Just don't call it TDD.
I imagine it would not be very straightforward in practice to write tests without any basic framework in place. Hence, the "minimum amount of code to pass the test" approach you suggested is not a bad idea.
Not necessarily a TDD approach
Who writes the tests? If someone who works with requirements or a QA member writes the tests, you could probably simply write empty tests (so they don't fail); a sketch follows at the end of this answer. This approach makes sure the developer covers all the cases the other person has thought about. Example test methods would be public void testThatObjectUnderTestReturnsXWhenACondition and public void testThatObjectUnderTestReturnsZWhenBCondition. (I like long descriptive names so there is no confusion about what I am thinking; alternatively, use comments to describe your tests.) The devs can write the code and finish the tests, or let someone else finish the tests later. Another way of stating this is to write executable requirements; see Cucumber/Steak/JBehave as executable-requirements tools.
Having said that, we need to differentiate whether you are trying to write executable requirements or unit/integration/acceptance tests.
If you want to write executable requirements, anyone can write them, and they can be left empty to stop them from failing; the devs will then fill them in and make sure the requirements are covered. My opinion is to let the devs handle unit/integration/acceptance tests using TDD (actual TDD), and not to separate the responsibility of writing code from writing the appropriate unit/integration/acceptance tests for it.
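A minimal sketch of such an empty, requirement-naming test (assuming PHPUnit's default, non-strict configuration, in which a test without assertions passes rather than fails; the names are illustrative):

    <?php
    class OrderRequirementsTest extends PHPUnit_Framework_TestCase
    {
        public function testThatObjectUnderTestReturnsXWhenACondition()
        {
            // Intentionally empty: this acts as an executable-requirement
            // placeholder until a developer fills in the assertions.
        }
    }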

Running multiple independent JMeter jmx tests in parallel

I have a scenario where there are several independent jmx files, each with its own Thread Group, etc. Using the JMeter Ant task I can fire them all in sequence and collect the results in a jtl file. My challenge is to do the same thing but fire off the tests in parallel. Please note that the Include Controller is not an option, since I want to use (or honor) the Thread Group and User Defined Variables in each jmx file.
Thanks for your help
Perhaps the Parallel Ant task is what you're looking for.
However, the JMeter Ant task is not thread-safe, so I wouldn't recommend running it inside <parallel>; consider instead e.g. command-line mode, the Maven plugin, or a custom Java class that spawns the individual JMeter tests.
See the 5 Ways To Launch a JMeter Test without Using the JMeter GUI guide for details on these approaches; hopefully one of them matches your environment.
Yes, Ant's <parallel> solves this problem; see the sketch below.
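One way to reconcile the two answers above is to keep <parallel> but launch JMeter through plain <exec> calls to its command line rather than through the JMeter Ant task. A sketch, assuming jmeter is on the PATH (file names are illustrative):

    <target name="jmeter-parallel">
        <parallel>
            <!-- -n = non-GUI mode, -t = test plan, -l = results file -->
            <exec executable="jmeter">
                <arg line="-n -t plan-a.jmx -l results-a.jtl"/>
            </exec>
            <exec executable="jmeter">
                <arg line="-n -t plan-b.jmx -l results-b.jtl"/>
            </exec>
        </parallel>
    </target>

Each jmx keeps its own Thread Groups and User Defined Variables, since each run is a full, independent JMeter process.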

Robot Framework use cases

Robot Framework is a keyword-based testing framework. I have to test a remote server, so I need to do some prerequisite steps like:
i) copy the artifact to the remote machine
ii) start the application server on the remote machine
iii) run the tests on the remote server
Before Robot Framework we did this using an Ant script.
With Robot I can run only the test scripts. Can we do all of these tasks using Robot scripting, and if so, what is the advantage of this?
Yes, you could do all of this with Robot. You can write a keyword in Python that does all of those steps, and then call that keyword in the suite setup of a test suite; a sketch follows below.
I'm not sure what the advantages would be. What you're trying to do are two conceptually different tasks: one is deployment and one is testing, and I don't see any advantage in combining them. One distinct disadvantage is that you then can't run your tests against an already-deployed system. Though, I guess your keyword could be smart enough to first check whether the application has been deployed, and only deploy it if it hasn't.
One advantage is that you have one less tool in your toolchain, which might reduce the complexity of your system as a whole. That means people can run your tests without first having installed Ant (unless your system also needs to be built with Ant).
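As a minimal sketch of that idea, a hypothetical Python keyword library (say DeployLibrary.py; the host, file names, and paths are illustrative, and passwordless scp/ssh is assumed):

    import subprocess

    def deploy_and_start(artifact, host):
        """Copy the artifact to the remote machine and start the app server."""
        subprocess.check_call(["scp", artifact, "%s:/opt/app/" % host])
        subprocess.check_call(["ssh", host, "/opt/app/start-server.sh"])

A suite could then call it in its setup (Robot exposes deploy_and_start as the keyword Deploy And Start):

    *** Settings ***
    Library        DeployLibrary.py
    Suite Setup    Deploy And Start    build/app.war    testhost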
If you are asking why you would use Robot Framework instead of writing a script to do the testing: the framework provides all the metrics and reports you would otherwise have to script for yourself.
Choosing a framework makes your entire QA process easier to manage and saves you the effort of writing code for the parts that are common to the QA process, so you can focus on writing code that tests your product.
Furthermore, since there is an ecosystem around the framework, you can probably find existing code to do just about everything you may need, and get answers on how to do something instead of reworking your own script.
Yes, you can do this with Robot fairly easily.
The first two steps can be done easily with SSHLibrary, and the third one depends. Do you mean for the Robot Framework test case to be run locally on the other server? That can indeed be done, with configuration files defining which server to run the test case on.
Here are the keywords you can use from Robot Framework's SSHLibrary.
To copy the artifact to the remote machine:
Open Connection
Login or Login With Public Key
Put Directory or Put File
To start the application server on the remote machine:
Execute Command
To run the tests on the remote machine (assuming the setup is already there):
Execute Command (use pybot path_to_test_file)
You may experience connection losses, but once the tests are triggered they will keep running on the remote machine.
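Put together, a sketch of what such a suite could look like (the ${REMOTE_HOST}, ${REMOTE_USER}, and ${KEYFILE} variables and the paths are illustrative):

    *** Settings ***
    Library    SSHLibrary

    *** Test Cases ***
    Deploy And Run Tests On Remote Machine
        Open Connection          ${REMOTE_HOST}
        Login With Public Key    ${REMOTE_USER}    ${KEYFILE}
        Put File                 build/app.war     /opt/app/
        Execute Command          /opt/app/start-server.sh
        ${output}=               Execute Command    pybot /opt/tests/suite.robot
        Log    ${output}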

continuous integration for many languages

I want to set up a continuous integration system that, upon a commit or similar trigger, should:
run tests on Fortran/C/C++ code, if needed.
compile that code using CMake.
run tests on a Rails app.
compile the Rails app.
restart the server.
I'm looking at Jenkins. Is it the best choice for this kind of work? Also, what's the difference between using a bash script that does all of that (if possible) and using Jenkins? I'm asking not because I'm thinking about using a script, but to better understand Jenkins.
It sounds like Jenkins would certainly be a reasonable choice for this. Apart from the ability to run arbitrary scripts as build steps, there's also a large number of plugins, which provide better integration with cmake for example.
Even if you're using a single bash script to do all of this, using Jenkins on top of it would still have a number of advantages. You get a web interface, email notifications and build history for free, with all that this entails. By integrating your tests "properly" with Jenkins, you can also get things like graphs that show how many tests succeeded/failed over time.
I am using Jenkins for Java projects and have to say it is easy to configure. I used to add lots of plugins for better configuration of build steps, but I tend to go back to using scripting languages for build and deploy steps, for two main reasons. First, if I have a build script, it's easier to configure the same job on a different Jenkins server or to run the script manually if need be, and the build configuration is less cluttered (I still have one Maven job with more than 50 post-build steps). Second, it is easier to version the scripts in SVN than to version the build configuration.
So, to answer your questions: I don't know if it is the "best" tool, but it is good enough for me. Regarding scripting: use each tool for what it is built for. Jenkins is a glorified cron daemon with great options when it comes to displaying analysis. The learning curve for people to use it is minimal (i.e. starting a job and seeing whether it failed). Configuring Jenkins takes a little more learning, but it's very easy to set up simple jobs and then move on to more complicated tasks.
For the first four activities Jenkins will do the job and is arguably the best choice nowadays, but for things like restarting the server (which is really "remote execution"), have a look at:
http://saltstack.com/
or:
https://wiki.opscode.com/display/chef/Home
http://cfengine.com/
http://puppetlabs.com/
Libraries like Fabric (Python) or Capistrano (Ruby) might be useful too.

Run JUnit tests in parallel

When running unit tests, Gradle can execute multiple tests in parallel without any changes to the tests themselves (i.e. special annotations, test runners, etc.). I'd like to achieve the same thing with ant, but I'm not sure how.
I've seen this question but none of the answers really appeal to me. They either involve hacks with ant-contrib, special runners set up with the @RunWith annotation, some other special annotations, etc. I'm also aware of TestNG, but I can't make the Eclipse plug-in migrate our tests - and we have around 10,000 of them, so I'm not doing it by hand!
Gradle doesn't need any of this stuff, so how do I do it in ant? I guess Gradle uses a special runner, but if so, it's set up as part of the JUnit setup, and not mentioned on every single test. If that's the case, then that's fine. I just don't really want to go and modify c. 10,000 unit tests!
Gradle doesn't use a special JUnit runner in the strict sense of the word. It "simply" has a sophisticated test task that knows how to spin up multiple JVMs, run a subset of test classes in each of them (by invoking JUnit), and report back the results to the JVM that executes the build. There the results get aggregated to make it look like a single-JVM, single-threaded test execution. This even works for builds that define their own test listeners.
To get parallel test execution in Ant, you would need an Ant task that supports this feature (I'm not sure one exists). An alternative is to import your Ant build into Gradle (ant.importBuild "build.xml") and add a test task on the Gradle side.
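A sketch of that Gradle side (build.gradle; the fork count is just an example):

    // build.gradle
    apply plugin: 'java'

    // import the targets from the existing Ant build
    // (their names must not clash with Gradle's own task names)
    ant.importBuild 'build.xml'

    test {
        // Gradle partitions test classes across several forked JVMs;
        // the tests themselves need no annotations or special runners.
        maxParallelForks = Runtime.runtime.availableProcessors()
    }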
