I am currently working on a distributed system consisting of two Grails 3.x apps, let's call them A and B, where A depends on B. For my functional tests I am wondering: how can I automatically start B when I am running the test suite of A? I am thinking of something like JUnit rules, but I could not find any docs on how to programmatically start/manage Grails apps.
As a side note, for nice and clean IDE integration I do not want to launch B as part of my build test phase, but as part of my test suite setup.
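To illustrate what I have in mind: a JUnit ExternalResource class rule that spawns B as a separate process before the suite and tears it down afterwards. This is only a rough sketch of the idea, not something I know to work with Grails; the directory, start command, port, and health endpoint below are placeholders for whatever B actually uses.

    import java.io.File;
    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import org.junit.rules.ExternalResource;

    public class AppBRule extends ExternalResource {
        private Process appB;

        @Override
        protected void before() throws Exception {
            // Placeholder command and directory: start app B however its build does it.
            appB = new ProcessBuilder("./gradlew", "bootRun")
                    .directory(new File("../app-b"))
                    .inheritIO()
                    .start();
            waitUntilReachable("http://localhost:8081/health"); // hypothetical health endpoint
        }

        @Override
        protected void after() {
            appB.destroy();
        }

        private void waitUntilReachable(String url) throws Exception {
            // Poll until B answers, or give up after a minute.
            long deadline = System.currentTimeMillis() + 60_000;
            while (System.currentTimeMillis() < deadline) {
                try {
                    HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
                    if (c.getResponseCode() == 200) {
                        return;
                    }
                } catch (IOException ignored) {
                    // B is not accepting connections yet.
                }
                Thread.sleep(500);
            }
            throw new IllegalStateException("App B did not start in time");
        }
    }

Registered on the suite as @ClassRule public static AppBRule appB = new AppBRule();, the startup would live in the test suite setup rather than in the build, which is exactly where I want it.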
A couple of months later, and deeper into the topic of microservices, I would suggest not treating system tests as candidates for one single project: while I would still keep my unit and service-level tests (i.e. API tests with mocked collaborators) in the same project as the affected service, I would probably spin up a system landscape via Gradle and Docker and then run an end-to-end test suite in the form of UI tests.
I am new to continuous integration. I have developed a Windows application and its smoke tests using Coded UI. I want to create a build and release process in TFS and want to execute the Coded UI tests every time a new build is created. I would like to know if there are any limitations of Coded UI when it comes to continuous integration? I have not been able to find any article so far that explains the behavior of Coded UI in TFS continuous integration.
Your guidance in this regard is highly appreciated.
First off, it's worth noting that Coded UI is effectively deprecated; Selenium is recommended for web apps, and Appium is recommended for desktop apps.
That said, you need to use the Deploy Test Agent task to deploy interactive test agents to your target machines, followed by a combination of the Windows Machine File Copy task (to copy your test binaries to the test agents) and the Run Functional Tests task to kick off your tests.
I am trying to run UI tests on multiple identical instances of the web application. For example, let's say the identical version of the application is available at 3 places:
https://some1.app.com
https://some2.app.com
https://some3.app.com
The intended system should check which instance is available and run a test (one that has not already been run) on it. It should be able to run 3 tests on the 3 instances simultaneously in the Jenkins environment.
I have explored the Jenkins Matrix Configuration, but that appears to run all tests on all possible combinations in the matrix. My intention is to divide and load balance the tests, not run on all combinations. Any ideas on how this can be done?
I am using JUnit4 with Ant for running the tests on Jenkins.
One solution would be the Matrix Project Plugin. You could configure your URL as a parameter, a bit like in here: Building a matrix project
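Whichever way the work gets split across Jenkins, each test run still has to learn which instance it is targeting. A minimal sketch of how that might look with JUnit4, assuming the URL is handed to each run as a system property; the property name target.baseUrl is invented here, and each matrix cell would pass its own value, e.g. via a sysproperty element in the Ant junit task:

    import java.net.HttpURLConnection;
    import java.net.URL;
    import org.junit.BeforeClass;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertNotNull;

    public class SmokeTest {
        private static String baseUrl;

        @BeforeClass
        public static void readTargetInstance() {
            // Each Jenkins matrix cell passes its own instance,
            // e.g. -Dtarget.baseUrl=https://some2.app.com
            baseUrl = System.getProperty("target.baseUrl");
            assertNotNull("target.baseUrl must be set", baseUrl);
        }

        @Test
        public void homePageResponds() throws Exception {
            HttpURLConnection c = (HttpURLConnection) new URL(baseUrl).openConnection();
            assertEquals(200, c.getResponseCode());
        }
    }

Dividing the tests rather than repeating all of them per cell would then be a matter of giving each cell a different set of test classes, for example distinct include patterns in the Ant batchtest filesets.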
Robot Framework is a keyword-based testing framework. I have to test a remote server, so I need to do some prerequisite steps:
i) copy the artifact to the remote machine
ii) start the application server on the remote machine
iii) run the tests on the remote server
Before Robot Framework we did this using an Ant script. With Robot I can only run the test scripts. Can we do all of these tasks with Robot scripting, and if yes, what is the advantage of that?
Yes, you could do this all with Robot. You can write a keyword in Python that does all of those steps. You could then call that keyword in the suite setup of a test suite.
I'm not sure what the advantages would be. You're trying to do two conceptually different tasks: one is deployment and one is testing. I don't see any advantage in combining them. One distinct disadvantage is that you then can't run your tests against an already deployed system. Though, I guess your keyword could be smart enough to first check whether the application has been deployed, and only deploy it if it hasn't.
One advantage is that you have one less tool in your toolchain, which might reduce the complexity of your system as a whole. That means people can run your tests without first having installed ant (unless your system also needs to be built with ant).
If you are asking why you would use Robot Framework instead of writing a script to do the testing, the answer is that the framework provides all the metrics and reports you would otherwise have to script for yourself.
Choosing a framework makes your entire QA process easier to manage and saves you the effort of writing code for the parts that are common to the QA process, so you can focus on writing code that tests your product.
Furthermore, since there is an ecosystem around the framework, you can probably find existing code to do just about everything you may need, and get answers on how to do something instead of having to work it out in your own script.
Yes, you can do this with robot, decently easily.
The first two can be done easily with SSHLibrary, and the third one depends. Do you mean for the Robot Framework test case to be run locally on the other server? That can indeed be done with configuration files to define what server to run the test case on.
Here are the keywords you can use from Robot Framework's SSHLibrary.

Copy the artifact to the remote machine:

    Open Connection
    Login (or Login With Private Key)
    Put Directory (or Put File)

Start the application server on the remote machine:

    Execute Command

Run the tests on the remote machine (assuming the setup is already there):

    Execute Command    (use pybot path_to_test_file)

You may experience lost connections, but once the tests are triggered they will keep running on the remote machine.
I'm having trouble understanding something.
Unit tests are coded by developers in order to test a class (Java).
Integration tests are meant to check whether the different classes work together.
My problem is:
Regarding continuous integration: I have Subversion (SVN) linked to Jenkins, and Sonar linked to Jenkins.
How are the integration tests created? Who writes them? Are these tests already available in Sonar, or do developers have to code them? Does Sonar launch the integration tests through Jenkins? How does it work?
Integration tests are also coded by developers, to test multiple classes at one time, conceptually a "module", whatever that means in your world.
In my world, unit tests are tests that exercise one class, and have no dependencies externally. We allow file system access for mock data and logging, but that's all.
If a test exercises an actual database, or a running executable somewhere (e.g. a web service), it is an integration test. We write them with JUnit, same as a unit test.
We find it works best for us to have separate Jenkins jobs linked in a pipeline to build, execute unit tests, execute integration tests, and load Sonar. While SonarQube is able to run tests for you, we prefer the separation which allows us to manually execute either set of tests via Jenkins without updating Sonar at the same time.
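One way to keep the two sets independently runnable from the same code base is JUnit 4 categories. A minimal sketch, with the marker interface and test class invented for illustration:

    import org.junit.Test;
    import org.junit.experimental.categories.Category;

    // Marker interface used to tag tests that need real infrastructure.
    interface IntegrationTest {}

    public class OrderRepositoryTest {

        @Test
        public void mapsRowToOrder() {
            // Plain unit test: no external dependencies, runs in every build.
        }

        @Test
        @Category(IntegrationTest.class)
        public void persistsOrderToRealDatabase() {
            // Integration test: talks to an actual database,
            // so it only runs in the dedicated Jenkins job.
        }
    }

Each Jenkins job then includes or excludes the category (for instance via the groups/excludedGroups options of Maven Surefire and Failsafe, if that happens to be your build tool), so the unit and integration runs stay separate.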
We've got automated coverage builds, but they only give us numbers for our unit tests. We've also got a bunch of system tests.
This leaves us with two problems: some code looks uncovered even though it's used in the system tests (WCF endpoints, DB access, etc.); and some code looks covered even though it's only used by the unit tests.
How do I set up NCover (running on a build server) to get coverage numbers from that process (a service) while running these unit tests? All of the processes are on the same box.
In fact, we have two services talking to each other, and both communicate with an ASP.NET MVC app and an IIS-hosted WCF service; so it's actually multiple processes.
(.NET 4.0, x64. Using NUnit and MSpec. CI server is TeamCity.)
Just to clarify, are the services and the tests running on the same build server?
If so, I assume the basic issue is how to cover multiple services (sorry if I've oversimplified).
If that's true, unfortunately, NCover 3 can't profile more than one service at a time. However, you can cover each service individually (sequentially, not simultaneously) and then merge the coverage files.
I realize this means running NCover a couple of times in your build script, but from a coverage perspective, that's how it would work.
Does this help?