Run tests inside Docker container with Jenkins

We want to try setting up CI/CD with Jenkins for our project. The project itself has Elasticsearch and PostgreSQL as runtime dependencies and uses Webdriver for acceptance testing.
In the dev environment, everything is set up within one docker-compose.yml file, and we have an acceptance.sh script to run the acceptance tests.
After digging through the documentation, I found that it's potentially possible to build CI with the following steps:
dockerize project
pull project from git repo
somehow pull docker-compose.yml and project Dockerfile - either:
put it in the project repo
put it in separate repo (this is how it's done now)
put it somewhere on a server and just copy it over
execute docker-compose up
project's Dockerfile will have an ONBUILD section to run tests. Unit tests are run through mix test and acceptance tests through scripts/acceptance.sh. It would be cool to run them in parallel.
shut down docker-compose, clean up containers
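The steps above could be sketched as a single shell script; this is only an illustration, and the service name app (for the container that runs mix test) is an assumption not stated in the question:

```shell
#!/usr/bin/env bash
# Sketch of the CI steps: bring the stack up, run both test suites in
# parallel, tear everything down, and fail if either suite failed.
set -u

docker-compose up --build -d

# Run unit and acceptance tests in parallel ("app" is a placeholder service).
docker-compose exec -T app mix test &
unit_pid=$!
./scripts/acceptance.sh &
acc_pid=$!

unit_rc=0; acc_rc=0
wait "$unit_pid" || unit_rc=$?
wait "$acc_pid" || acc_rc=$?

# Clean up containers and volumes regardless of the test outcome.
docker-compose down --volumes

# Non-zero exit fails the Jenkins build if either suite failed.
exit $(( unit_rc | acc_rc ))
```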
Because this is my first experience with Jenkins, a series of questions arises:
Is this a viable strategy?
How to connect tests output with Jenkins?
How to run and shut down docker-compose?
Do we need/want to write a pipeline for that? Will we need/want a pipeline when we get to CD at the next stage?
Thanks

Is this a viable strategy?
Yes it is. I think it would be better to include the docker-compose.yml and Dockerfile in the project repo. That way any changes are tied to the version of the code that uses them. If they're in an external repo it becomes a lot harder to change (unless you pin the git sha somehow, like using a submodule).
project's Dockerfile will have ONBUILD section to run tests
I would avoid this. Just set a different command to run the tests in a container, not at build time.
How to connect tests output with Jenkins?
Jenkins just uses the exit status of the build steps, so as long as the test script exits with a non-zero code on failure and zero on success, that's all you need. Test output printed to stdout/stderr will be visible in the Jenkins console.
How to run and shut down docker-compose?
I would recommend this to run Compose:
docker-compose pull # if you use images from the hub, pull the latest version
docker-compose up --build -d
In a post-build step to shutdown:
docker-compose down --volumes
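If you prefer to keep everything in one "Execute shell" build step instead of a separate post-build step, a trap guarantees the teardown runs even when a test fails; this is a sketch, and the test command (mix test in a service called app) is an assumption:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Always clean up containers and volumes, whatever the test result.
cleanup() { docker-compose down --volumes; }
trap cleanup EXIT

docker-compose pull
docker-compose up --build -d

# The exit status of the tests becomes the script's (and build step's)
# exit status; the EXIT trap does not change it.
docker-compose exec -T app mix test
```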
Do we need/want to write a pipeline for that?
No, I think just a single job is fine. Get it working with a simple setup first, and then you can figure out what you need to split into different jobs.

Related

Gitlab CI: How to configure cypress e2e tests with multiple server instances?

My goal is to run a bunch of e2e tests every night to check if the code changes made the day before break core features of our app.
Our platform is an Angular app which calls 3 separate Node.js backends (auth-backend, old- and new-backend). Also, we use MongoDB as the database.
Let's consider each of the 4 projects to have a branch called develop, which is the only branch that should be tested.
My approach would be the following:
I would run every backend plus the database in a separate Docker container.
Therefore I need to either get the latest build of each project from GitLab using SSH,
or clone the repo into the Docker container and run a build inside it.
Once all projects are running on the right ports (which I'd specify somewhere), I start the npm script for running the Cypress e2e tests.
All of that should be defined in some file. Is that even possible?
I do not have experience with GitLab CI, but I know that other CI systems provide the possibility to run e.g. bash scripts.
So I guess you can do the following:
Write a local bash script that pulls all the repos (since GitLab can provide secret keys, you can use these to authenticate against your GitLab repos)
After all of these repos are pulled, run the build commands for your different repos
Since you have repos that work with and depend on each other, you may have to add a build command for exactly this use case, so that you always have a production state, or whatever you need
After you have pulled and built your repos, start the servers for your backends
I guess your Angular app uses some kind of environment variables to define the servers to send requests to, so you also have to define them in the build command/script for your app
Then you should be able to run your tests
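Such a script might look roughly like the sketch below; the repo host, directory layout, ports, entry points, and the cypress:run script name are all assumptions, not details from the question:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: clone the develop branch of each backend, build it,
# start it, then run the Cypress suite. All names/ports are placeholders.
set -euo pipefail

# Kill the background server processes when the script exits.
trap 'kill 0' EXIT

for repo in auth-backend old-backend new-backend; do
  git clone --branch develop "git@gitlab.example.com:myorg/${repo}.git"
  (cd "$repo" && npm ci && npm run build)
done

# Start each backend on its own (assumed) port.
PORT=3001 node auth-backend/dist/server.js &
PORT=3002 node old-backend/dist/server.js &
PORT=3003 node new-backend/dist/server.js &

# Run the e2e suite against the servers started above.
npm run cypress:run
```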
Personally I think that Docker is kind of overkill for this use case. Possibly you should define and run a pipeline that always creates a new develop state of your backends and pushes the Docker images to your server. Then you can create your test pipeline, which first starts the Docker containers on your own server (so you do not have an "in-pipeline server"). That starts all your backends, so your test pipeline can then run its e2e tests against those backend servers.
I would also advise not running this pipeline every night, but whenever the develop state of one of those linked repos changes.
If you need help setting this up, feel free to contact me.

CICD Jenkins with Docker Container, Kubernetes & Gitlab

I have a workflow where
1) Team A & Team B push changes to an app in a private GitLab (running in a Docker container) on Server A.
2) The app should contain a Dockerfile or docker-compose.yml
3) GitLab should trigger a Jenkins build (Jenkins runs in a Docker container, also on Server A) and execute the usual build things like tests
4) Jenkins should build a new Docker image and deploy it.
Question:
If Team A needs packages like Maven and npm for web applications but Team B needs other packages like C++ etc., how do I solve this issue?
Because I don't think it is correct for my Jenkins container to have all these packages (mvn, npm, yarn, C++ etc.) just to execute the Jenkins build.
I was thinking that Team A should get a container with the packages it needs installed, and similarly for Team B.
I want to make use of Docker, Kubernetes, Jenkins and Gitlab. (As much container technology as possible)
Please advise me on the workflow. Thank you
I would like to share a developer's perspective, which is different from the "operation-centric" state of mind presented in the question.
For developers, Jenkins is also a tool that can trigger the build of the application. Yes, of course, a built artifact can be a Docker image as well, but this is not what developers are really concerned about. You've referred to this as "the usual build things like test", but developers have entire ecosystems around this "usual build".
For example, mvn, which you've mentioned, has great dependency resolution capabilities: it can resolve the dependencies on its own. Roughly the same holds for other build tools. I'll stick with Maven for this answer.
So you don't need to maintain dependencies yourself, but as a Jenkins maintainer you should provide the technical ability to actually build the product: running Maven, which in turn resolves/downloads all the dependencies, runs the tests, produces test results, and can even create a Docker image or deploy that image to some image repository if you wish ;)
So developers who use some build technology maintain their own scripts (declarative, as in the case of Maven, or something like Makefiles in the case of C++) and should be able to run their own tools.
Now this picture doesn't contradict with the containerization:
The Jenkins image can contain maven/make/npm, really a small number of tools, just to run the build. The actual build scripts can be part of the application source code base (maintained in git).
So when Jenkins gets the notification about the build, it should check out the source code, run some script (like mvn package), show the test results, and then, as a separate step or from Maven, create an image of your application and upload it to some repository or supply it to the Kubernetes cluster, depending on your actual devops needs.
Note that during mvn package Maven will download all the dependencies (3rd-party packages) into the Jenkins workspace and compile everything with the Java compiler, which you obviously also need to make available on the Jenkins machine.
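The flow described above, written as a single Jenkins shell step, could look roughly like this sketch; the repo URL, registry, and image names are placeholders, not from the question:

```shell
#!/usr/bin/env bash
# Sketch: compile and test with Maven, then package the result as a
# Docker image and push it. All URLs and names are placeholders.
set -euo pipefail

git clone https://gitlab.example.com/myorg/myapp.git
cd myapp

# Maven resolves all dependencies and runs the tests itself.
mvn package

# Package the built artifact and hand it to the registry / cluster.
docker build -t registry.example.com/myorg/myapp:latest .
docker push registry.example.com/myorg/myapp:latest
```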
Whoa, that's a big one and there are many ways to solve this challenge.
Here are 2 approaches which you can apply:
Solve it by using different types of Jenkins Slaves
In the long run you should consider running Jenkins workloads on slaves.
This is not only desirable for your use case but also scales much better under higher workloads, because in the worst case heavy workloads can kill your Jenkins master.
See https://wiki.jenkins.io/display/JENKINS/Distributed+builds for reference.
When defining slaves in Jenkins (e.g. with the ec2 plugin for AWS integration) you can use different slaves for different workloads. That means you prepare a slave (or slave image, in AWS this would be an AMI) for each specific purpose you've got. One for Java, one for node, you name it.
These defined slaves can then be used within your jobs by checking Restrict where this project can be run and entering the label of your slave.
Solve it by doing everything within the Docker Environment
The simplest solution in your case would be to just use the Docker infrastructure to build Docker images of any kind, which you then push to your private Docker registry (all big cloud providers have such container registry services) and download at deploy time. This saves you the pain of installing new technology stacks every time you change them.
I don't really know how familiar you are with Docker or other containerization technologies, so I'll roughly outline the solution for you.
Create a Dockerfile with the desired base image as a starting point. Just google them or look for them on Docker Hub. Here's a sample for Node.js (an LTS Node base image is assumed):
FROM node:lts
MAINTAINER devops@yourcompany.biz
ARG NODE_ENV=development
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --silent
# I recommend using a .dockerignore file to not copy unnecessary stuff into your image
COPY . .
CMD ["npm", "start"]
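Building and smoke-testing such an image locally might look like this (the image name and port are placeholders):

```shell
# Build the image with the production NODE_ENV and run it locally.
# "your/registry/your-image" and port 3000 are assumptions.
docker build --build-arg NODE_ENV=production -t your/registry/your-image .
docker run --rm -p 3000:3000 your/registry/your-image
```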
Adapt the image and create a Jenkinsfile for your CI/CD pipeline. There you should build and push your docker image like this:
stage('build docker image') {
    ...
    steps {
        script {
            docker.build('your/registry/your-image', '--build-arg NODE_ENV=production .')
            docker.withRegistry('https://yourregistryurl.com', 'credentials token to access registry') {
                docker.image('your/registry/your-image').push(env.BRANCH_NAME + '-latest')
            }
        }
    }
}
Do your deployment (in my case it's done via Ansible) by downloading and running the previously pushed image in your deployment scripts.
If you'd try to define your questions a little more in detail, you'd get better answers.

Steps to run Test framework in Docker and Jenkins

Background:
I am a newbie to docker.
I have 2 automation frameworks on my local PC: one for mobile and the other for a web application. I have integrated the test frameworks with Jenkins.
Both test frameworks have open Jar dependencies mentioned in Maven pom.xml.
Now I want that, when I click on a Jenkins job run to execute the tests, my tests run in a Docker container.
Can anyone please give me steps to
Configure Docker in this complete integrated framework
Push my dependencies into Docker
Integrate Jenkins and Docker
Run tests of the web and mobile apps in Docker on a Jenkins job click
I'm not a Jenkins professional, but from my experience, there are many possible setups here:
Assumptions:
By "Automation Framework", I understand that there is some java module (built by maven, I believe for gradle it will be pretty much the same) that has some tests that in turn call various APIs that should exist "remotely". It can be HTTP calls, working with selenium servers and so forth.
Currently, your Jenkins job probably looks like this (it doesn't really matter whether it's an "old-school" step-by-step job definition or a groovy script (pipelines)):
Checkout from GIT
run mvn test
publish test results
If so, you need to prepare a Docker image that will run your test suite (preferably with Maven) to take advantage of surefire reports.
So you'll need to build this Docker image once (see the docker build command) and make it available in a private repository / Docker Hub, depending on what your organization prefers. Technically, for this Docker image you can take a Java image as the base image, get Maven (download and unzip + configure), then issue the git pull command. You might want to pass credentials as environment variables to the docker process itself (see the '-e' flag).
The main point here is that Maven inside the Docker image will run the build, so it will resolve the dependencies automatically (you might want to configure custom repositories if you have them in Maven's settings.xml). This effectively answers the second question.
One subtle point is results that should be somehow shown in Jenkins:
You might want to share the volume of the surefire-reports folder with the Jenkins "host machine" so that the Jenkins plugins that show test results will work. The same idea applies if you're using something like Allure reports, Spock reports and so forth.
Now, when the image is ready, the integration with Jenkins might be as simple as running a docker run command and waiting until it's done. So now the Jenkins job will look like:
docker run -e <credentials for git> <pre-defined image>
show reports
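Concretely, that docker run step might look like the following sketch; the image name, report paths, and credential variable names are assumptions:

```shell
# Run the test image, passing git credentials as env vars and sharing the
# surefire-reports folder with the Jenkins workspace so report plugins
# can pick up the results. All names and paths are placeholders.
docker run --rm \
  -e GIT_USER="$GIT_USER" \
  -e GIT_TOKEN="$GIT_TOKEN" \
  -v "$WORKSPACE/target/surefire-reports:/app/target/surefire-reports" \
  myorg/test-suite:latest
```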
This is one example of possible integration.
One slightly different option is running docker build as part of the job definition. This might be beneficial if the image should be significantly different for each build, but it will make the build slower.
The following approach can be followed to achieve your goal:
Create a Dockerfile with all your setup as well as dependencies
Install the Docker plugin on Jenkins to integrate Docker support
Use the Jenkinsfile approach to pull the Docker image, or create it from the Dockerfile, and run the tests within Docker.
Below is sample code, just for reference:
node {
    checkout scm
    docker.withRegistry('https://registry.example.com', 'credentials-id') {
        def customImage = docker.build("my-image")
        customImage.inside {
            // Run the tests inside the container
            sh 'run test'
        }
    }
}

How to trigger a Jenkins job at boot

When running Jenkins as a Docker container, some advanced setup may be lost on upgrade (or restart). My typical example is downloading the wildfly-cli jar into /var/lib/jenkins/war/WEB-INF/lib/ for the wildfly deployer.
I find it easy to implement such setup thanks to a Jenkins job.
And I now face the following question: is there a way to trigger that Jenkins job only once, after a system/Jenkins boot?
I have an idea, which might be somewhat hacky: build a custom Docker image based on the original Jenkins image and add an extra step to your Dockerfile.
That extra step would trigger the job. Jenkins does have an option to start a job externally, e.g. from a script, or in your case from a Dockerfile.
You can rebuild and restart that container and it will run the build once. Would that work for you?
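Starting a job externally is typically done through Jenkins's remote build trigger. A sketch, where the host, job name, user, and API token are all placeholders:

```shell
# Trigger the setup job once via Jenkins's remote API, authenticating
# with a user + API token. Host, job name and credentials are placeholders.
curl -X POST \
  --user "admin:API_TOKEN" \
  "http://localhost:8080/job/boot-setup/build"
```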

Pipeline GitHub -> Travis CI -> Docker

I have a GitHub repository that is linked to an automated build on Docker Hub. Consequently, on each commit to the master branch, Docker Hub triggers a build of the Docker image.
Also, each commit is automatically tested by Travis CI.
My question is: is there any way to trigger the Docker build only if Travis finishes successfully? Do I need some sort of webhook or something like that for my goal?
You could trigger the Travis CI test after the repository is pushed. Then, in the deploy step, you could trigger a build on Docker Hub. Or even do the build inside Travis and just push the image to the registry you are using.
Travis has a nice overview of how to make this flow happen here.
The gist is that you're going to need sudo: required, so you'll be running in a VM instead of inside Docker, as is the standard way in Travis. You also need to add docker as a service, much like you'd add redis or postgres for an integration test. The Pushing a Docker Image to a Registry section has a lot of info on setting things up for the actual deployment. I'd use an actual deploy step with the script provider, rather than after_success, but that's up to you.
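A deploy script run by Travis's script provider might look roughly like this; the registry, image name, and environment variable names are assumptions:

```shell
#!/usr/bin/env bash
# Build and push the image only when Travis reaches the deploy step,
# i.e. after the tests passed. Names and env vars are placeholders.
set -euo pipefail

echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
docker build -t myorg/myapp:latest .
docker push myorg/myapp:latest
```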
