I have a project where we have created a set of Selenium tests, using RSpec and Capybara, that run against a remote server. This means that these tests do not run in the same Rails instance/environment as the application and, therefore, do not have access to that application's Rake tasks.
What we are trying to figure out is a good method of cleaning/restoring the database before each run. We deploy the application via a Jenkins build task and then, if successful, kick off the Selenium tests. We are using Selenium 2 and the tests are run via Selenium Server (formerly Selenium Grid). We do have the capability of firing off a Cap task when we deploy the application to restore the DB.
The question is how to do the restore while minimizing the number of migrations we need to run (preferably only the most recent ones) and how to pre-seed the database with the required data.
Some interesting things to note about our setup: we have a fair bit of data to seed (not gigabytes, but more than you would want to put in a seeds file), and we have a fully partitioned database with both public and private schemas. The application is multi-tenant and uses private schemas to isolate data access.
So, what are some of the ways that other people have used to solve this problem?
I think most people use database-cleaner for this problem, but as I said at the beginning, the Selenium tests run outside of the Rails environment, so database-cleaner won't work.
If you're using Jenkins, you could build another Jenkins job that is solely responsible for resetting / refreshing your database. This could contain scripts in your flavor of choice for cleaning up the database. Then set your current Jenkins testing job as a downstream project that gets kicked off upon the successful execution of your cleanup job.
Then when you want to kick off a full test, just run the cleanup job and go make a sandwich :)
Related
I have a VM where MS CRM is installed and can be accessed using http://localhost:5555/Orgname/main.aspx.
I have created unit test cases in my VM by referring to the organization URL:
http://localhost:5555/XRMServices/2011/OrganizationService.svc?wsdl
When I build the test project it connects to CRM and executes the test methods without any error.
Whereas when I do a check-in, the build fails due to the reference to the URL "localhost".
We have a separate build server for builds.
Can anyone let me know how to solve this?
Your tests are being executed on the build server, and it looks like some of them are integration tests rather than unit tests. As such, they look for a configured CRM instance on that server (localhost resolves to the host itself on every machine) and can't find one. That means you have a few options:
Install CRM on the build server, extend the build process to deploy CRM to the build server during build in order to run your tests. A build process like the one developed by Wael Hamze can be extremely helpful for such a solution.
Do not include a localhost address, but instead check in a location that points to a shared dev environment the build server can connect to. This is not ideal, as the build may depend on specific data being present, and concurrent builds may break due to strange interactions. If you configure the build agent to run only one concurrent build, it may work.
Disable the tests that depend on an installed version of CRM. You could put a [TestCategory("Integration")] on these tests and then set a filter condition on the build to exclude this test category (see the sketch after these options).
Or you could try to improve your tests by making them independent of your configured instance, using Fakes or any other mocking framework. There are several testing frameworks specifically made for CRM workflow activities and other parts specific to CRM.
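To illustrate the category-based option, here is a minimal sketch; the class and test names are hypothetical, and the filter mentioned in the comment is the kind of test case filter you would set in the build definition (or pass to vstest.console.exe via /TestCaseFilter):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CrmOrganizationServiceTests
{
    // Sketch only: class and test names are made up for illustration.
    // Marking the test as "Integration" lets the build skip it with a
    // test case filter such as TestCategory!=Integration on servers
    // that have no CRM instance installed.
    [TestMethod]
    [TestCategory("Integration")]
    public void CreatesAccountAgainstConfiguredCrmInstance()
    {
        // ...code that talks to the configured CRM organization service...
    }
}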
You need to remove the Integration Test from the list of Unit Tests that are executed as part of the build. I recommend creating a new project called [projectundertest].IntegrationTests and adding all your integration tests there. Then configure the build to only execute UnitTests...
Your build server is trying to execute all of the tests in your solution. Unfortunately, you have an integration test masquerading as a unit test. When it is executed, the test tries to access CRM and fails. This is correct behavior.
In our TFS 2013 project we have a set of simple MSBuild-based integration tests alongside our unit tests; the integration tests exercise stored procedures and other logic that needs a database server to be present. For example:
[TestMethod]
[TestCategory("Integration")]
public void SomeTest()
{
    InitialiseData();
    var results = RunStoredProcedure();
    AssertResultIsCorrect(results);
}
As you can see we have tagged these tests as "Integration" so that we can supply a test case filter that excludes these tests when we don't want to run them. These tests are all designed to run against an installed copy of our database (we have a process which deploys an up-to-date copy of our database to a SQL Server instance).
What we would like to do is create an integration test build definition in TFS which we can schedule to deploy our database and then run all of these tests against a specific SQL Server instance.
At the moment we use the standard TFS build process with two test sources: the first runs a "unit test" which installs the database, and the second contains all of the actual integration tests. The problem we have is passing the connection string and database name into the unit tests from the build definition. Currently the connection string is in the app.config file for each of the test projects; however, this is less than ideal, as it means we constantly get failing test runs, either because developers check in the wrong connection string or because they run tests locally against the build database at the same time that a build is running. This setup also limits us to running one build at a time.
Is there a way that we can specify the connection string and database name to use as part of the build workflow template instead?
With a combination of SlowCheetah for your config transformation and VS linked files, I think you can solve this (and based on the OP you probably already have :). Make a new solution configuration in your solution for the scenario you described. This solution configuration will not be used on dev machines, only by the TFS build definition (under Process, Items to build, Configurations to build).
The Configuration Manager for the solution would then apply the new solution configuration only to the test project(s).
Add your SlowCheetah transform for the new solution configuration and put the DB connection string you need for TFS into that new transform.
Now in the test project(s), copy over all the config files as linked files. This will allow the test executions to respect the test config file that SlowCheetah will transform. You may have to adjust your configuration reading in your test project(s) to account for this.
This isolates the new solution configuration to the TFS server, since only it builds with that configuration. TFS then has a config file that points to your specific TFS database connection, and no other machine is affected.
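As a rough sketch of what the configuration reading in the test project(s) might look like once the linked, transformed config is in place (the connection string name "IntegrationDb" is an assumption made for illustration, not something from the question):

// Requires a reference to System.Configuration.
using System.Configuration;

public static class TestDatabase
{
    // Sketch only: "IntegrationDb" is a hypothetical connection string name.
    public static string ConnectionString
    {
        get
        {
            // Picks up whatever the config file provides: on the build server that
            // is the SlowCheetah-transformed file for the TFS-only solution
            // configuration, on dev machines it is the local config.
            var setting = ConfigurationManager.ConnectionStrings["IntegrationDb"];
            if (setting == null)
            {
                throw new ConfigurationErrorsException(
                    "No 'IntegrationDb' connection string is configured.");
            }
            return setting.ConnectionString;
        }
    }
}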
The background: I am trying to implement an automated integration test solution. I want to have a FitNesse server running on which QA/users can maintain the test cases. During our nightly build, we want to have the tests run locally on the build machine. (In our build script, we are going to start up Jetty, and the FitNesse test cases invoke the RESTful APIs.)
When I look into the fitnesse-maven-plugin (http://mojo.codehaus.org/fitnesse-maven-plugin/), the description of the fitnesse:run goal says:
This goal uses the fitnesse.runner.TestRunner class for calling a remote FitNesse web page and executes the tests or suites locally into a forked JVM
However, when I use this plugin with FitNesse version 2009xxxx or 2008xxxx (with a special patch of this maven plugin), I found that the tests were not running locally. Instead, I saw new test results created on the remote FitNesse wiki server.
Is this due to a change in FitNesse's behavior? (The fitnesse-maven-plugin depends on a much older version of FitNesse.) Also, with the original TestRunner being deprecated, is it possible to get the behavior I am looking for (pages defined on the remote server, but run locally on the build machine)?
Or is this way of working no longer a recommended approach for using FitNesse? (If so, I will need to change the approach of the automated tests.)
One solution I've used is the wiki import feature. This can import the latest changes from the remote wiki to your local build server.
http://fitnesse.org/FitNesse.UserGuide.WikiImport
You can also tell it to auto-update when running the tests rather than having to re-import manually whenever they change.
Another possibility is to use a source control plugin to automatically commit changes by QA/Users from the remote wiki and pull them down as part of your build.
I have some integration tests that hit a webserver and verify certain functionalities. Depending on the build environment, the server will be at a different address (http://localhost:8080/, http://test-vm/, etc).
I would like to run these tests from a TFS build.
I'm wondering what the appropriate way to configure these tests is. Would I just add a setting to the config file? I'm doing that currently. Incidentally, we do have a separate branch per test environment, so I could have a different config file checked in for each environment. I wonder if there is a better way, though?
I'd like the build project to be able to tell the tests what server to test against. This seems better because then I don't have to maintain config information on a per-branch basis.
I believe I'd be using NUnit for Team Build (http://nunit4teambuild.codeplex.com/) to get NUnit/TFS to play together.
I can think of a couple of options (a sketch of how the tests would read the value follows below):
Edit the .config file via command line before the test runs.
If the setting depends on which machine the test is run from, you could put it in machine.config.
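Either way, the tests themselves can stay ignorant of where the value came from and simply read it from configuration. A minimal sketch, assuming a hypothetical appSettings key named "TestServerBaseUrl":

// Requires a reference to System.Configuration.
using System;
using System.Configuration;

public static class TestEnvironment
{
    // Sketch only: "TestServerBaseUrl" is a made-up key name. The value may come
    // from the test project's .config (rewritten by the build before the run) or
    // from machine.config; the lookup is the same either way.
    public static Uri ServerBaseUrl
    {
        get { return new Uri(ConfigurationManager.AppSettings["TestServerBaseUrl"]); }
    }
}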
I'm working on my first rails app and am struggling trying to find an efficient and clean solution for doing automated checkouts and deployments.
So far I've looked at both CruiseControl.rb (having been familiar with CruiseControl.NET) and Capistrano. Unfortunately, unless I'm missing something, each one of them only does about half of what I want (with each one doing a different half).
From what I've seen so far:
CruiseControl
Strengths
Automated checkout and build of the repository upon commit
Also runs unit/functional tests and reports back
Weaknesses
No built-in deployment mechanisms (best I can find so far is writing your own bash scripts)
Capistrano
Strengths
Built for deployments
Weaknesses
Has to be kicked off via a command (i.e. doesn't do automated checkouts upon commit)
I've found ways that I can string the two together -- i.e. have CruiseControl ping the repository for changes, do a checkout upon commit, run the tests, etc. and then make a call to Capistrano when finished to do the deployment (even though Capistrano is also going to do a repository checkout).
Basically, when all is said and done, I'd like to have three projects set up:
Dev: Checkout/Deployment is entirely no touch. When someone commits a file, something checks it out, runs the tests, deploys the changes, and reports back
Stage: Checkout/Deployment requires a button click
Prod: Button click does either a tagged checkout or moves the files from stage
I have this working with a combination of CruiseControl.NET and MSBuild in the .NET world, and it was fairly straightforward. I would guess this is also a common pattern in the ruby deployment world, but I could easily be mistaken.
I would give Hudson a try (free and open source). I started off using CruiseControl but got sick of having to relearn the XML configuration every time I needed to change a setting or add a project. Then I started using Hudson and never looked back. Hudson is more or less completely configurable over the web. It was initially a continuous integration tool for Java but has plugins for other development stacks such as .NET and Ruby on Rails. There's a Rake plugin. If that doesn't work, you can configure it to execute any arbitrary command line after running your Rake builds/tests.
I should also add it's extremely easy to get Hudson going:
java -jar hudson.war
Or you can drop the war in any servlet container.
I would use two systems to build and deploy anyway, for at least two reasons: you should be able to run them separately, and you should have two config files, one for deployment and one for building. But you can easily glue the two systems together.
Just create a simple Capistrano task that tests and reports back to you. You can use the run command to do anything you want.
If you don't want a command-line tool, there was Webistrano two years ago.
You could use something like http://github.com/benschwarz/gitnotify/tree/master to trigger the build and deploy if you use git as your repository.
At least for development automated deployments, check out the hook scripts available in git:
http://git-scm.com/docs/githooks
I think you'll want to focus on the post-receive hook script, since this runs after a push to a remote server.
Also worth checking out is Mislav's git-deploy on GitHub. It makes managing deployments pretty clean.
http://github.com/mislav/git-deploy