FitNesse running remote test cases locally?

The background: I'm trying to implement an automated integration test solution. I want to have a FitNesse server running on which QA/users can maintain the test cases. During our nightly build, we want to have the tests run locally on the build machine. (In our build script we are going to start up Jetty, and the FitNesse test cases invoke the RESTful APIs.)
When I looked into the fitnesse-maven-plugin (http://mojo.codehaus.org/fitnesse-maven-plugin/), the description of the fitnesse:run goal said:
This goal uses the fitnesse.runner.TestRunner class for calling a remote FitNesse web page and executes the tests or suites locally into a forked JVM
However, when I use this plugin with FitNesse version 2009xxxx or 2008xxxx (with a special patch of this maven plugin), I found that the tests do not run locally. Instead, I saw new test results created on the remote FitNesse wiki server.
May I know if this is due to a change of behavior in FitNesse? (The fitnesse-maven-plugin depends on a much older version of FitNesse.) Also, with the original TestRunner being deprecated, is it still possible to get the behavior I am looking for (pages defined on the remote server, but run locally on the build machine)?
Or is this way of working no longer a recommended approach to using FitNesse? (If so, I will need to change the approach of the automated tests.)

One solution I've used is the wiki import feature. This can import the latest changes from the remote wiki to your local build server.
http://fitnesse.org/FitNesse.UserGuide.WikiImport
You can also tell it to auto-update when running the tests rather than having to re-import manually whenever they change.
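With the pages imported locally, the nightly build can then execute the suite on the build machine itself. A minimal sketch, assuming a reasonably recent standalone FitNesse jar (the jar name varies by release) and a hypothetical suite called IntegrationSuite:
java -jar fitnesse-standalone.jar -d /path/to/local/wiki -c "IntegrationSuite?suite&format=text"
The -c option runs the given page or suite once from the command line and exits, instead of starting the wiki server.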
Another possibility is to use a source control plugin to automatically commit changes by QA/Users from the remote wiki and pull them down as part of your build.

Related

Jenkins tests with Ranorex

I'm just getting started with Jenkins and I have a few doubts that must be silly, but I'm stuck on them.
After I build my project, does Jenkins save the build file in some specific path?
Using Ranorex for test automation, is it better to put my files locally on the server or push them to a repository?
Note: I've just started trying to use this; at the moment I can check for changes at BitBucket, build the file, build the Ranorex test, and run the test.
Jenkins is quite a versatile application that can be set up to match the specific needs and requirements of the test project, so I'd say go with the way that seems most logical/easiest. It's a learning process as well, so you will come to understand the working flow of Jenkins itself.
But to answer your two questions:
1) By build files I believe you mean the test reports? For this I actually use the Jenkins UserContent folder. This requires the "Copy to slave" plugin to be installed. With it you get an additional post-build action where you can specify the files to be copied over to the UserContent folder. Don't forget to specify a common naming layout for the report files through the Ranorex run parameters ("/rf"). The UserContent folder is served by Jenkins' web server, so you can link the report URLs directly in emails (e.g. "http://Jenkins-server.com/UserContent/Regression-Client-Test-#1.html").
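For reference, a sketch of how the report name could be set when launching the test suite executable; the executable name here is hypothetical:
RegressionClientTest.exe /rf:Regression-Client-Test-#1.html
/rf is the short form of Ranorex's /reportfile parameter, which sets the name and location of the generated report.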
2) This totally depends on the system setup, but I can give you an example of how our system is currently set up. We have Jenkins running on a Linux machine. It is only used to manage and run the tests; the machine itself does not hold the automation test project. Then we have the test machine, which runs Windows and holds the actual automation tests. This machine is connected to Jenkins through the slave functionality. So basically, when someone starts a test job, Jenkins on the Linux machine sends a command to the slave to start the automated tests. When the test run has finished, post-build actions take over and copy the report files from the slave machine to the Linux machine's UserContent folder.
Now, regarding test project management: it's a good idea to use a git repository, which adds another layer of somewhat "security". But if you have a small team (or you are the only test developer) then there is no actual need for it. You just copy the project to a specified folder on the test machine whenever needed/updated and you are ready to run it.
Regards,
Martin

Alternatives for QC

In my project we are using QC to execute our test cases (QTP); moving forward we will be eliminating QC (for cost reasons).
As far as I have explored, MSBuild & Jenkins would be suitable.
But MSBuild will trigger the execution when a new build is pushed to the repository, and it will automatically test the latest build.
Is there any other CI tool available to execute test cases through QTP?
I will be executing the automation once per release. Also, we install our application manually since it requires lots of configuration.
Take a look at HP Application Automation Tools.
This plugin basically replaces the need for QC, and is developed by HP.
Create a Jenkins job using this plugin on the same Jenkins installation used to build your code; you can then configure the job to run your tests as soon as the code is available (e.g. on a nightly basis).
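For the nightly case, the job's "Build periodically" trigger takes cron-style syntax; a sketch (the schedule itself is just an example):
H 2 * * *
The H token lets Jenkins pick a stable pseudo-random minute based on the job name, so multiple nightly jobs don't all fire at the same instant.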
See here for a helpful guide on how to implement a simple Jenkins job using this plugin.
They also host the code on Github, which is very useful if you need to change the behavior of the plugin to suit your needs.

TFS build fails when test projects reference the MS CRM organization service URL as localhost

I have a VM where MS CRM is installed; it can be accessed at http://localhost:5555/Orgname/main.aspx.
I have created unit test cases on my VM, referring to the organization URL as
http://localhost:5555/XRMServices/2011/OrganizationService.svc?wsdl
When I build the test project it connects to CRM and executes the test methods without any error.
Whereas when I do a check-in, the build fails due to the reference to the URL "localhost".
For builds we have a separate build server.
Can anyone let me know how to solve this?
Your tests are being executed on the build server, and it looks like some of them are of the integration kind rather than the unit kind: they look for a configured CRM instance on that server (localhost resolves to the host itself on every machine) and can't find one. This means you have a few options:
Install CRM on the build server, extend the build process to deploy CRM to the build server during build in order to run your tests. A build process like the one developed by Wael Hamze can be extremely helpful for such a solution.
Do not include a localhost address, but actually check in a location that points to a shared dev environment the build server can connect to. This is not ideal, as the build may depend on specific data being present, and concurrent builds may break due to strange interactions. If you configure the build agent to run only one concurrent build, it may work.
Disable the tests that depend on an installed version of CRM. You could put a [TestCategory("Integration")] attribute on these tests and then set a filter condition on the build to exclude this test category (see the sketch after this list).
Or you could try to improve your tests by making them independent of your configured instance, using Fakes or any other mocking framework. There are several testing frameworks specifically made for CRM workflow activities and other parts specific to CRM.
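For the filter option, the same expression can be tried locally before wiring it into the build definition's test case filter; the test assembly name below is hypothetical:
vstest.console.exe MyProject.Tests.dll /TestCaseFilter:"TestCategory!=Integration"
Tests tagged with [TestCategory("Integration")] are then skipped, while the remaining unit tests still run.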
You need to remove the integration tests from the list of unit tests that are executed as part of the build. I recommend creating a new project called [projectundertest].IntegrationTests and adding all your integration tests there. Then configure the build to only execute the unit tests.
Your build server is trying to execute all of the tests in your solution. Unfortunately, you have an integration test masquerading as a unit test. When it is executed, the test tries to access CRM and fails. This is correct behavior.

Environment configuration for tests running in NUnit

I have some integration tests that hit a webserver and verify certain functionalities. Depending on the build environment, the server will be at a different address (http://localhost:8080/, http://test-vm/, etc).
I would like to run these tests from a TFS build.
I'm wondering what's the appropriate way to configure these tests. Would I just add a setting to the config file? That's what I'm doing currently. Incidentally, we do have a separate branch per test environment, so I could have a different config file checked in for each environment. I wonder if there is a better way, though.
I'd like the build project to be able to tell the test what server to test. This seems better because then I don't have to maintain config information on a per branch basis.
I believe I'd be using NUnit for Team Build (http://nunit4teambuild.codeplex.com/) to get NUnit/TFS to play together.
I can think of a couple of options:
Edit the .config file via the command line before the test runs (a sketch of the setting follows this list).
If the setting depends on which machine the test is run from, you could put it in machine.config.
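For the first option, the tests could read the server address from an appSettings entry like the one below; the key name is hypothetical:
<appSettings>
  <add key="TestServerUrl" value="http://localhost:8080/" />
</appSettings>
The build would then rewrite the value attribute for the target environment before invoking NUnit, so no per-branch config files are needed.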

Automated Deployment in Rails

I'm working on my first Rails app and am struggling to find an efficient and clean solution for doing automated checkouts and deployments.
So far I've looked at both CruiseControl.rb (having been familiar with CruiseControl.NET) and Capistrano. Unfortunately, unless I'm missing something, each of them only does about half of what I want (and each does a different half).
From what I've seen so far:
CruiseControl
Strengths
Automated builds on repository checkouts upon commit
Also runs unit/functional tests and reports back
Weaknesses
No built-in deployment mechanisms (best I can find so far is writing your own bash scripts)
Capistrano
Strengths
Built for deployments
Weaknesses
Has to be kicked off via a command (i.e. doesn't do automated checkouts upon commit)
I've found ways to string the two together -- i.e., have CruiseControl ping the repository for changes, do a checkout on commit, run the tests, etc., and then call Capistrano when finished to do the deployment (even though Capistrano will also do its own repository checkout).
Basically, when all is said and done, I'd like to have three projects set up:
Dev: Checkout/Deployment is entirely no touch. When someone commits a file, something checks it out, runs the tests, deploys the changes, and reports back
Stage: Checkout/Deployment requires a button click
Prod: Button click does either a tagged check out or moves the files from stage
I have this working with a combination of CruiseControl.NET and MSBuild in the .NET world, and it was fairly straightforward. I would guess this is also a common pattern in the ruby deployment world, but I could easily be mistaken.
I would give Hudson a try (free and open source). I started off using CruiseControl but got sick of having to relearn the XML configuration every time I needed to change a setting or add a project. Then I started using Hudson and never looked back. Hudson is more or less completely configurable over the web. It was initially a continuous integration tool for Java but has plugins for other development stacks such as .NET and Ruby on Rails. There's a Rake plugin, and if that doesn't work, you can configure it to execute any arbitrary command line after running your Rake builds/tests.
I should also add it's extremely easy to get Hudson going:
java -jar hudson.war
Or you can drop the war in any servlet container.
I would use two systems to build and deploy anyway, for at least two reasons: you should be able to run them separately, and you should have two config files, one for deployment and one for building. But you can easily glue the two systems together.
Just create a simple Capistrano task that tests and reports back to you. You can use the "run" command to do anything you want (see the sketch below).
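A minimal sketch of such a task, assuming Capistrano 2 and a hypothetical health-check URL:
# in config/deploy.rb
task :smoke_test do
  # "run" executes the command on the remote servers
  run "curl -sf http://localhost:8080/ > /dev/null && echo OK"
end
Hooking it in with after "deploy", "smoke_test" would make it report after every deployment.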
If you don't want a command-line tool, there was also Webistrano a couple of years ago.
You could use something like http://github.com/benschwarz/gitnotify/tree/master to trigger the build and deploy if you use git as your repository.
At least for automated development deployments, check out the hook scripts available in git:
http://git-scm.com/docs/githooks
I think you'll want to focus on the post-receive hook script, since this runs after a push to a remote server.
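A minimal post-receive sketch, with hypothetical paths, that checks the pushed code out into the application directory:
#!/bin/sh
# runs in the bare repository after every push
GIT_WORK_TREE=/var/www/myapp git checkout -f
Test runs or an app-server restart can follow the checkout in the same script.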
Also worth checking out is Mislav's git-deploy on GitHub. It makes managing deployments pretty clean.
http://github.com/mislav/git-deploy
