I am new to continuous integration. I have developed a Windows application and its smoke tests using Coded UI. I want to create a build and release process in TFS and execute the Coded UI tests every time a new build is created. I would like to know if there are any limitations of Coded UI when it comes to continuous integration. So far I have not been able to find any article that explains the behavior of Coded UI in TFS continuous integration.
Your guidance in this regard is highly appreciated.
First off, it's worth noting that Coded UI is effectively deprecated; Selenium is recommended for web apps, and Appium is recommended for desktop apps.
That said, you need to use the Deploy Test Agent task to deploy interactive test agents to your target machines, followed by the Windows Machine File Copy task (to copy your tests to the test machines) and the Run Functional Tests task to kick off your tests.
Related
We are at the initial stage of bringing DevOps into our daily activities. We currently work on .NET and Python code, so we have to plan for continuous delivery of our work.
For the Python development we are using web2py at the moment, so developers can directly access the Python files and test their web development work through web2py on a certain port.
How can Jenkins help to automate this process, so that developers get an easy GUI or workflow to test or compile their code before it gets deployed to web2py?
We also need to automate the build of the .NET code. What are the best possible ways to do this?
My managers want us to determine which tests might have to be run, based on code changes made to the application we are testing.
However, it is hard to know which tests actually need to be re-verified as a result of a code change. What we commonly do is test the entire area where the code change occurred, or even the entire project or solution.
We were told this could be achieved by TFS build or MTM tools. Could someone share the details?
PS: We are running TFS 2015 Update 4 and VS 2017.
There is a concept called Test Impact Analysis (TIA) which helps analyze the impact of development changes on existing tests. Using TIA, developers know exactly which tests need to be verified as a result of their code change.
The Test Impact Analysis (TIA) feature specifically enables this – TIA is all about incremental validation by automatic test selection. For a given code commit entering the pipeline, TIA will select and run only the relevant tests required to validate that commit. Thus, that test run is going to complete faster; if there is a failure you will get to know about it faster; and because it is all scoped by relevance, analysis will be faster as well.
Test Impact Analysis for managed automated tests is available via a checkbox in the 2.* preview version of the VSTest task.
If enabled, only the relevant set of managed automated tests that need to be run to validate a given code change will run. Test Impact Analysis requires the latest version of Visual Studio, and is presently supported in CI for managed automated tests.
However, this is only available with TFS 2017 Update 1 (it needs the 2.* preview version of the VSTest task). For more details, please refer to this blog post: Accelerated Continuous Testing with Test Impact Analysis
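To give a rough idea of what TIA does conceptually, test selection boils down to intersecting a per-test coverage map with the files changed in a commit. The following Python sketch is purely illustrative (the coverage map and test names are made up; this is not the actual TIA implementation):

```python
# Conceptual sketch of impact-based test selection (not the real TIA internals).
# Assumes a per-test coverage map collected from a previous run; names are hypothetical.

def select_impacted_tests(coverage_map, changed_files):
    """Return the tests whose covered source files overlap the commit's changes."""
    changed = set(changed_files)
    return [test for test, covered in coverage_map.items() if covered & changed]

coverage_map = {
    "OrderServiceTests.PlaceOrder": {"OrderService.cs", "Cart.cs"},
    "InvoiceTests.Totals": {"Invoice.cs"},
}

# Only tests impacted by the commit are scheduled; everything else is skipped.
print(select_impacted_tests(coverage_map, ["Cart.cs"]))
# ['OrderServiceTests.PlaceOrder']
```

The real feature builds and maintains that mapping for you from instrumented test runs; the sketch only shows why impacted-only runs finish faster.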
How do CircleCI and other CI tools help?
I am not able to fully understand the internals of these tools and how they help with faster deployment of apps.
Are these tools useful only for GitHub-based open source projects? Since the testing requirements of every app are different, how is it possible to seamlessly automate them using CI tools?
The reason you use Continuous Integration (CI) is to have a well-defined build system and to always have a releasable latest successful build.
You can also integrate unit tests or integration tests.
I think it is not only useful for GitHub-based projects, but for any project where several developers develop in parallel.
For more information: Wikipedia: Continuous Integration
CI is a developer practice that allows developers to integrate code into a shared repository several times a day, alongside the other developers on their team.
Each time a developer checks in code, it is verified by an automated build, which detects errors early.
It helps teams to:
avoid long and painful integration builds
spend less time debugging
build more features quickly
How it works
The CI server monitors the code repository for changes and kicks off a build that runs the unit and integration tests. It assigns a label to each build version and informs the team whether the build succeeded; if it fails, the team fixes the issue and starts integrating again.
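As a purely conceptual illustration of that loop (a Python sketch only; the build and test scripts are hypothetical placeholders, not any specific CI server's internals):

```python
# Conceptual sketch of a CI server's poll-build-test-notify loop.
# The build/test scripts and repository layout are placeholders.
import subprocess
import time

def get_head(repo_dir):
    """Return the current commit hash of the working copy."""
    result = subprocess.run(["git", "-C", repo_dir, "rev-parse", "HEAD"],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

def build_and_test(repo_dir):
    """Run the build, then the unit and integration tests; True means success."""
    steps = [["./build.sh"], ["./run_unit_tests.sh"], ["./run_integration_tests.sh"]]
    return all(subprocess.run(step, cwd=repo_dir).returncode == 0 for step in steps)

def monitor(repo_dir, poll_seconds=60):
    """Poll the repository, build every new commit, and report the result."""
    last_built = None
    while True:
        subprocess.run(["git", "-C", repo_dir, "pull"], check=True)
        head = get_head(repo_dir)
        if head != last_built:
            label = f"build-{head[:8]}"  # label assigned to this build version
            ok = build_and_test(repo_dir)
            print(f"{label}: {'SUCCESS' if ok else 'FAILED - fix and integrate again'}")
            last_built = head
        time.sleep(poll_seconds)
```

Real CI servers add webhooks instead of polling, distributed build agents, artifact storage and notifications, but the basic cycle is the same.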
I am currently working on a distributed system consisting of two Grails apps (3.x), let's call them A and B, where A depends on B. For my functional tests I am wondering: How can I automatically start B when I am running the test suite of A? I am thinking about something like JUnit rules, but I could not find any docs on how to programmatically start/manage Grails apps.
As a side note, for nice and clean IDE integration I do not want to launch B as part of my build test phase, but as part of my test suite setup.
A couple of months later, and now deeper into the topic of microservices, I would suggest not treating system tests as candidates for one single project. While I would still keep my unit and service-level tests (i.e. API tests with mocked collaborators) in the same project as the affected service, I would spin up a system landscape via Gradle and Docker and then run an end-to-end test suite in the form of UI tests against it.
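As a sketch of what that test-suite setup could look like, assuming the landscape is described in a docker-compose.yml and the end-to-end suite is driven from pytest (the compose file, service names and health URL are my own assumptions, not part of the original setup):

```python
# Sketch: start a dockerised system landscape before an end-to-end suite (pytest).
# Assumes a docker-compose.yml describing services A and B; names/URLs are placeholders.
import subprocess
import time
import urllib.request

import pytest

@pytest.fixture(scope="session", autouse=True)
def system_landscape():
    # Bring up the whole landscape (service A and its dependency B).
    subprocess.run(["docker", "compose", "up", "-d"], check=True)
    # Wait until the system's entry point responds before running the UI/E2E tests.
    deadline = time.time() + 120
    while time.time() < deadline:
        try:
            urllib.request.urlopen("http://localhost:8080/health", timeout=2)
            break
        except OSError:
            time.sleep(2)
    else:
        raise RuntimeError("landscape did not become healthy in time")
    yield
    subprocess.run(["docker", "compose", "down"], check=True)
```

The same pattern works from Gradle or a JUnit rule; the key point is that the landscape is started and torn down by the test suite setup itself, not by the build's test phase.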
What kind of agile tools are you using for Erlang development? What continuous integration (CI) server are you using to build Erlang code? The only reference I have found is the Quora question "How do I integrate Erlang unit tests in Jenkins (Hudson)?".
I am also interested in the nitty-gritty details of setting them up and making them talk to each other.
As a company using Erlang actively, Klarna (www.klarna.com) uses Jenkins (formerly Hudson) for daily regression testing on nearly every dev commit. It's an organization with about 80 people in R&D in total, and we use Jenkins' distributed mode, which allows us to have more than 10 build slaves mastered by a single Jenkins server. Basically, we have an Erlang code base which is version controlled with tools like SVN or Git. All the test cases are written with the Common Test framework, and everything works well under Jenkins.
Previously we tried CruiseControl, but gave it up since Jenkins does the job much better.
As Lukas mentioned, you will probably need a tool to generate the XML files, since Common Test doesn't export them directly. I haven't really tried that module though; we had an implementation of a Common Test event handler to do the job, but it was abandoned due to performance, as we have a critical requirement on test time. Right now we use a home-made script to export XML directly from the Common Test logs.
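To give a rough idea of what such a script boils down to, here is a minimal Python sketch that turns already-parsed results into JUnit-style XML that Jenkins can read (the result tuples are placeholders; parsing the Common Test logs themselves is not shown):

```python
# Minimal sketch: emit JUnit-style XML from parsed test results for Jenkins.
# The results list is a placeholder; parsing the Common Test logs is omitted.
import xml.etree.ElementTree as ET

def to_junit_xml(suite_name, results, out_path):
    """results: list of (test_name, passed, failure_message_or_None)."""
    suite = ET.Element("testsuite", name=suite_name, tests=str(len(results)))
    for name, passed, message in results:
        case = ET.SubElement(suite, "testcase", classname=suite_name, name=name)
        if not passed:
            failure = ET.SubElement(case, "failure", message=message or "failed")
            failure.text = message or ""
    ET.ElementTree(suite).write(out_path, encoding="utf-8", xml_declaration=True)

to_junit_xml("my_SUITE",
             [("happy_path", True, None), ("edge_case", False, "timeout")],
             "junit-report.xml")
```

Jenkins' JUnit plugin only needs the testsuite/testcase/failure structure shown above to produce its test statistics.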
There is a lot more you can do with Erlang and Jenkins, like code coverage analysis (if you compile properly and export formatted XML to the Cobertura plugin), GUI tests with Selenium, etc.
For setting up Jenkins, I think the Jenkins home page has a good introduction.
Regarding agile tools, I guess it's really hard to define what an agile tool is. I also believe it very much depends on the size of your organization. You will probably need a good process visualization tool (at team or department level), a good ticket tracking tool, a code review tool and a communication tool. There are a bunch of them available as open source. In our experience, none of them seems to work seamlessly with Jenkins, which means you will need to select and tweak them according to your own requirements. But that's the beauty of open source, isn't it? :)
If you want to do it using Jenkins, I have written a Common Test hook which generates JUnit XML output for your tests; Jenkins can use this output to produce test statistics.
https://github.com/garazdawi/cth_tools/blob/master/src/cth_junit.erl
We use Jenkins for our Python code, so I think you may use Jenkins with Erlang code.
We use Buildbot with our own recipes to hook in the unit tests.
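For illustration, a Buildbot recipe is just Python inside the master configuration. A minimal sketch of a builder that checks out the code and runs Erlang unit tests might look like the fragment below (the repository URL, worker name and rebar3 commands are assumptions, and the exact API depends on your Buildbot version; this shows only the builder part of master.cfg):

```python
# Fragment of a Buildbot master.cfg (0.9+ plugin API) for Erlang unit tests.
# Repository URL, worker name, and the rebar3 commands are placeholders.
from buildbot.plugins import steps, util

factory = util.BuildFactory()
factory.addStep(steps.Git(repourl="https://example.com/my_erlang_app.git",
                          mode="incremental"))
factory.addStep(steps.ShellCommand(name="eunit",
                                   command=["rebar3", "eunit"]))
factory.addStep(steps.ShellCommand(name="common test",
                                   command=["rebar3", "ct"]))

c = BuildmasterConfig = {}
c["builders"] = [
    util.BuilderConfig(name="erlang-tests",
                       workernames=["worker1"],
                       factory=factory),
]
```

A complete configuration also needs workers, schedulers and a change source, but the "recipe" part is just these build steps.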