Karate test execution - Jenkins

Our requirement for API testing is:
To deploy the test-automation module (Karate feature files, custom Java classes) into an AWS ECS Fargate cluster.
To trigger the tests via a Jenkins pipeline after every build of the actual microservice.
In addition to the above, the test-automation module should be able to run the test suite on demand and/or at scheduled intervals (say, nightly) and send reports.
I have gone through the Karate Distributed Testing and stand-alone executable JAR options, but they don't seem suitable for my case. Is Distributed Testing supported only for "Web-UI" automation testing?
Any thoughts would be helpful.

For this use case, just use a Maven + JUnit project, and then there is no difference between Karate and any other Java project in a Jenkins pipeline.
It should be Jenkins's responsibility to do the scheduled build. It is up to you how to get all this into Fargate; maybe building a Docker container is part of the answer, but I would recommend trying to keep it simple.
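For reference, a minimal sketch of what such a runner class could look like, assuming a recent Karate version with the karate-junit5 module on the classpath; the package, class name and feature location are illustrative:

// Minimal JUnit 5 entry point for a Karate suite (karate-junit5 assumed).
// Jenkins only needs to invoke "mvn test"; a cron trigger on the job covers the nightly runs.
package apitests;

import com.intuit.karate.junit5.Karate;

class ApiTestRunner {

    // Runs every feature file found relative to this class on the classpath.
    @Karate.Test
    Karate runAll() {
        return Karate.run().relativeTo(getClass());
    }
}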
Here is some Docker-related discussion that may help: https://github.com/intuit/karate/issues/396
Open a new question with specifics next time.

Related

Spinning up Karate test client in a docker container

I am setting up the integration test framework for a Java REST API in our project, and we want to run the integration tests in the GitLab pipeline. Since these tests live in the same project as the API, we are wondering about a couple of things:
We don't want to run the Karate tests during the Maven build process. We want to run them only in the integration-test stage, after the application deployment stage is complete. How do we do that, given that the Maven build runs both the JUnit unit tests and the Karate tests?
Since the API requires authentication, we need to run the Karate tests in a Docker container, because we can inject our credentials only in the container (we use HashiCorp Vault to store them). How do we launch a container with a Karate client?
There are ways to run only a subset using Maven. What I do is define a different JUnit test, and call that from the command-line. Read the docs for more: https://github.com/karatelabs/karate#command-line
As long as you can pass environment variables (which you certainly can in Docker) you are good. Refer: https://stackoverflow.com/a/52821230/143475
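To make that concrete, here is a sketch of a separate JUnit runner that only picks up tagged features, so the normal unit-test phase can skip it entirely; it assumes the karate-junit5 module, and the class name, tag and paths are illustrative:

// Dedicated runner for the integration-test stage, invoked explicitly, e.g.:
//   mvn test -Dtest=IntegrationRunner -Dkarate.env=docker
// (karate-junit5 assumed; class name, tag and feature path are illustrative)
package apitests;

import com.intuit.karate.junit5.Karate;

class IntegrationRunner {

    // Runs only features tagged @integration; credentials can be read in
    // karate-config.js via java.lang.System.getenv(), injected by the container.
    @Karate.Test
    Karate integrationSuite() {
        return Karate.run("classpath:features").tags("@integration");
    }
}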

CI/CD pipeline and build server

I have seen the following graph representing a Jenkins pipeline:
git push --> Git repo --> Jenkins CI server --> Maven build server --> Test server --> Deliver build artifacts --> Deploy
Assuming that is correct, I am struggling to understand how the above (different?) servers work under the hood, and I need some clarification so that Jenkins procedures are not just a black box.
Are the Jenkins CI server, Maven build server and Test server in reality one physical server?
If the answer to the previous question is yes, are these 3 servers different logical servers?
In my understanding, and in my case (a Java Spring project), the Maven build server executes mvn install, and since pom.xml contains the npm install and npm run test commands, it is the Maven build server that executes the UI tests and not the Test server. Am I right?
Does the Test server execute only the back-end Java acceptance tests?
It could be one physical or virtual server.
No.
To use Maven you just need to install the Maven plugin in Jenkins. Configure which version of Maven you want to use in "Manage Jenkins" -> "Global Tool Configuration" -> add the needed Maven version.
It depends on which kind of tests you want to run - you can repeat the steps from step 3 for the tool you need, or you can install the needed tools on the Jenkins server manually and add them to the PATH.
At the end, just use the needed tools in your pipeline or other kind of job in Jenkins.
Without further context, we must make some points clear:
Jenkins may work on multiple nodes (which could be real machines, virtual machines, or pods).
Jenkins is an orchestrator... This means that it uses different tools in an order given by the pipeline. This order is pretty much: git > build > test > delivery > deploy.
Test servers are used to install the software and run a variety of tests, often shared by several teams.
Build servers are used for running the commands that build the software.
With those points clear, here are the answers to your questions:
1. It depends on your infrastructure. Jenkins works with executors that may run on the master or on nodes. If your Jenkins is just one server, then yes, it is one physical server. If you use nodes, then it is more likely that you will find one node handling the build and another one handling the tests. The definitive answer lies within your Jenkins setup and your pipeline.
2. No. Even with a Jenkins master-agent infrastructure, the building and testing occur within Jenkins.
3. It depends on your concept of a test server. If you define it as a server that you use as a target for testing purposes, then the answer is no. If you define a test server as a machine that executes tests, then the answer is yes.
4. It depends on what types of tests you have automated. You can run unit tests, regression tests, smoke tests, etc. For example, you may have some unit tests for your back-end and some tests in Karma for your UI. Again, in your case you must check your pipeline and the code to see what kinds of tests you are running.

Run Test automation code from Development repository on every push through Bitbucket pipelines

I am a test automation engineer and I have developed an automation code repository to test the functional aspects of the product. I want this code to run whenever any developer pushes a feature or bug fix to the beta environment.
I have built the pipeline on the automation repository, and I am using a Docker image with Selenium and Maven for it. When I push any changes to my repository the pipeline triggers, but I want the same thing to happen from different repositories.
One solution I can think of is to trigger the automation pipeline from the developer's pipeline through the REST API (pipeline-initiated). But this is not a foolproof solution, as the automation pipeline image will not be updated after the changes made by developers.
In short: we have automation tests written in one repo and development code in another repo. As part of CI/CD/CT, I want all of these things to run automatically so that we get a bug-free build every time.
You should try Ansible for this scenario. As you already have your Docker images, just wrap them with Ansible and use it to trigger the automation on push triggers from the different repos.

Is Ansible a replacement for a CI tool like Hudson/Jenkins?

Recently, in our company, we decided to use Ansible for deployment and continuous integration. But when I started using Ansible I didn't find modules for building Java projects with Maven, or modules for running JUnit tests, or JMeter tests.
So, I'm in a doubtful state: it may be I'm using Ansible in a wrong way.
When I looked at Jenkins, it can do things like build, run tests, deploy. The missing thing in Hudson is creating/deleting an instance in cloud environments like AWS.
So, in general, for what purposes do we need to use Ansible/Jenkins? For CI do I need to use a combination of Ansible and Jenkins?
Please throw some light on correct usage of Ansible.
First, Jenkins and Hudson are basically the same project. I'll refer to it as Jenkins below. See How to choose between Hudson and Jenkins?, Hudson vs Jenkins in 2012, and What is the most notable difference between Jenkins and Hudson from a user perspective? for more.
Second, Ansible isn't meant to be a continuous integration engine. It (generally) doesn't poll git repos and run builds that fail in a sane way.
When can I simply use Jenkins?
If your machine environment and deployment process is very straightforward (such as Heroku or iron that is configured outside of your team), Jenkins may be enough. You can write a custom script that does a deploy as the final build step (or a chained step).
When can I simply use Ansible?
If you only need to "deploy" without needing to build/test, Ansible might be enough. For instance, you can run a deploy from the commandline or using Ansible Tower. This is great for small projects, static sites, etc.
How do they work together?
A good combination is to use Jenkins to build, test, and save artifacts. Add a step to call Ansible or Ansible Tower to handle the actual deployment process. That allows Ansible to handle machine configuration and lets Jenkins handle the CI process.
What are the alternatives to Jenkins?
I strongly recommend Thoughtworks Go (not to be confused with Go the language) instead of Jenkins. Others include CruiseControl, TravisCI, and Integrity.
Ansible is just a "glorified SSH loop".
CI is not only the software running, but the whole process of how success and failure are handled, who gets notified, and how the change is merged into the target version control branch.
If we focus only on the software, CI is a reactive scheduler triggered by code changes, which kicks off the typical build-validate-release-deploy sequence of "steps".
So with respect to the software, Ansible without additional "sugaring" is just a toolkit to run things (which can be those very steps), but it is not CI.
Ansible (without Tower) totally lacks this reactive nature.
If you want to marry Ansible with CI, you can.
Ansible Tower is a very Ansible-oriented scheduler, but if you need CI software, you don't necessarily need it. Any CI app capable of running a shell script can launch Ansible playbooks.
Yet unlike Ansible Tower, CI tools know how to display test reports from all the common test frameworks, trigger notifications, etc.
Ansible Tower can make sense in a complex environment with lots of groups touching Ansible code... The truth is, I haven't seen a single real reason to pay for it. But if a manager likes the web interface, nothing can stand up to the "but others use it" logic.
I suspect the concept of Ansible Tower was a response to Puppet Enterprise.
:)

Parallelizing tests with Jenkins

I am using Jenkins for integration testing.
Just to give some context: at the moment I have a separate build server which produces the build daily, and Jenkins is not used as the build server. The build server executes the unit tests in my case.
When the build process is complete it invokes the Jenkins job. In that job, Jenkins starts to deploy the build into the virtual machine. I have a script for doing this.
Following that, my plan is to run several scripts for the end-to-end testing.
Now I have several questions in this regard:
How to parallelize the execution of the end-to-end tests?
As I am adding script after script, I am getting worried about how manageable it will be.
I currently use the web interface for adding and changing the scripts. How can I do this from the command line?
Any ideas for a good tutorial? Any pointers from all of you? Thanks!
Looks like Build Flow Plugin is what I need.
https://github.com/jenkinsci/build-flow-plugin
You might want to try and see if you can use the Build Pipeline plugin before Build Flow. It gives a much better visualization of what is going on, with less scripting.
I link build and deploy jobs in one sequence and then have unit and integration test jobs linked separately off the build job. You can then use the Fail The Build plugin to have downstream jobs fail upstream ones.
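If the end-to-end tests happen to be Karate features, as in the earlier questions in this thread, note that Karate also ships its own parallel runner, so a single Jenkins job can already fan the features out over several threads. A rough sketch, assuming the Karate 1.x Runner API; the feature path and thread count are illustrative:

// Sketch of Karate's built-in parallel runner (Karate 1.x API assumed).
package apitests;

import static org.junit.jupiter.api.Assertions.assertEquals;

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.jupiter.api.Test;

class ParallelRunner {

    @Test
    void runInParallel() {
        // Runs all features under "classpath:features" on 5 threads and writes
        // JUnit-style XML reports that Jenkins can pick up and display.
        Results results = Runner.path("classpath:features")
                                .outputJunitXml(true)
                                .parallel(5);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}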
