Building and running a docker container in a GoCD elastic agent?

I have GoCD deployed in my Kubernetes cluster with the standard Helm chart, which configures it to use the Kubernetes elastic agent plugin and provides an example elastic profile which uses the gocd-agent-docker-dind docker image to provide docker-in-docker functionality.
What I'd like to do is to have my first stage build the Dockerfile in the repo, then have another stage execute the unit tests in the previously built docker image and parse the JUnit XML test result output. I managed to get the build and test execution working, but I'm having trouble extracting the test result files afterward.
I'm running a shell command like the following inside the elastic agent's image:
docker run -v "/test-results:/test-results" \
test-image:v${GO_PIPELINE_LABEL} \
--junit-xml /test-results/results.xml
But the directory is empty after the run, suggesting an issue with mounting a volume in a docker-in-docker situation.
Has anyone tried to accomplish anything like this before, or do you have any ideas how to resolve this?
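One possible workaround, sketched here as a suggestion rather than a confirmed fix: in docker-in-docker, the -v host path refers to the inner Docker daemon's filesystem rather than the agent's workspace, so instead of bind-mounting you can copy the results out of the finished container with docker cp (the image name and /test-results path below are taken from the original command):

# run the tests in a named container, without a bind mount
docker run --name test-run "test-image:v${GO_PIPELINE_LABEL}" \
  --junit-xml /test-results/results.xml
status=$?

# copy the results out of the (now stopped) container into the agent workspace
mkdir -p test-results
docker cp test-run:/test-results/results.xml test-results/results.xml
docker rm test-run

# propagate the test exit code so GoCD still marks a failed test run as a failed job
exit $status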

Related

How to create a docker container inside docker in the GitLab CI/CD pipeline?

Since I do not have much experience with DevOps yet, I am struggling to find an answer to the following question:
I'm setting up the CI/CD pipeline for my project (Python, FastAPI, Redis), which will have test and build stages. It can be described as follows:
Before stages: install all dependencies (install Python, copy files for testing, etc.)
The test stage uses docker-compose to run the Redis server, which is necessary to launch the application for testing (unit tests).
The build stage creates a new docker image and pushes it to Docker Hub if there is a new GitLab tag.
The GitLab Runner is located on the AWS EC2 instance, the runner executor is a "docker" with an "Ubuntu:20.04" image. So, the question:
How can I run "docker-compose"/"docker build" inside the docker executor, and can it be done at all without negative consequences?
I thought about several options:
Switch from the docker executor to something else (maybe shell or docker+ssh)
Use Docker-in-Docker, but I have seen cautions that it can be dangerous and I'm not sure exactly why that would apply in my case.
What I've tried:
Using Redis as a "services" entry in the GitLab job instead of the docker-compose file, but I can't find a way to point my application (host and port) at a server that runs inside the docker executor as a service.
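For what it's worth, a minimal .gitlab-ci.yml sketch of both pieces, under a few assumptions: the application reads REDIS_HOST/REDIS_PORT from the environment (those variable names are made up here), the runner is configured to allow privileged Docker-in-Docker for the build job, and myuser/myapp plus the DOCKERHUB_* variables are placeholders:

stages:
  - test
  - build

test:
  stage: test
  image: python:3.10
  services:
    - redis:latest            # the service is reachable from the job at hostname "redis"
  variables:
    REDIS_HOST: redis          # assumed app configuration variable
    REDIS_PORT: "6379"
  script:
    - pip install -r requirements.txt
    - pytest

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind              # requires a privileged runner
  rules:
    - if: $CI_COMMIT_TAG       # only build and push when a new tag is created
  script:
    - docker build -t myuser/myapp:$CI_COMMIT_TAG .
    - docker login -u "$DOCKERHUB_USER" -p "$DOCKERHUB_PASSWORD"
    - docker push myuser/myapp:$CI_COMMIT_TAG

With a services entry, the job container and the Redis container share a network, so binding the application to the service's hostname (here "redis") is usually all that is needed.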

Spinning Docker / ECS containers from Jenkins Docker container

I had set up Jenkins using the Jenkins Docker Image on an AWS ECS Cluster with just one EC2 instance.
After the initial setup, I tried running the hello-world pipeline from Jenkins documentation. I see that I am getting "docker: not found"
I understand that this is because Docker is not installed and available within the Jenkins Docker container. However, I have a fundamental question on whether I should proceed with installing Docker inside the running Jenkins Docker container (to use that as the base image) or not. When I researched around, I found this blog post and this SO Answer.
I wanted to follow these suggestions, so I tried mounting the /usr/bin/docker binary and the /var/run/docker.sock socket from the host EC2 / ECS instance into the Jenkins container. After that, when I ran the docker version command to test the setup, I got a Linux shared-library error - docker: error while loading shared libraries: libltdl.so.7: cannot open shared object file: No such file or directory - which indicates that the setup did not go well.
Here are my questions -
How do I run Jenkins pipelines that use Docker containers when Jenkins itself is running in a Docker container? I want to be able to pull / build / run docker containers - for example, run the hello-world pipeline example referenced above.
My end goal is to create 2 types of Jenkins jobs that do the following -
Jenkins Job Type 1
Check out repository from BitBucket cloud
Run a shell script to build a docker image for a java project (possibly using the maven jib plugin)
Publish to AWS ECR. (assuming this can be done using the cloudbees plugin)
Jenkins Job Type 2
Pull the image published from Job Type 1 from AWS ECR
Create a container from the image (which essentially runs the java application)
The container itself could be run on the same Jenkins ECS cluster with slaves. But, again, should the slaves have Docker installed on them to pull and run the image from ECR?
Asking these questions after a good amount of research and not finding answers. Any guidance is appreciated. Thanks.
I Googled the docker error you included in your post and found this StackOverflow post.
You have to install libltdl-dev in order to get everything working correctly
Since the errors are identical I suggest you give it a shot. As per the post, install libltdl-dev in the docker container.
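For the broader question of running Docker from inside the Jenkins container, a common alternative (sketched here as a suggestion, not the setup from the post) is to mount only the Docker socket from the host and install the Docker CLI inside the container, instead of bind-mounting the host's /usr/bin/docker binary, which is what drags in the missing host libraries:

# start Jenkins with only the host's Docker socket mounted (no host binaries)
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts

# install a Docker CLI inside the container so pipeline steps can call "docker ..."
docker exec -u root -it jenkins bash -c "apt-get update && apt-get install -y docker.io"

# note: the jenkins user also needs read/write access to /var/run/docker.sock,
# for example by adding it to the group that owns the socket on the host.

With the socket mounted, docker build, docker pull and docker push (for example to ECR) all run against the host's Docker daemon, so the slaves only need the CLI, not a full nested Docker engine.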

Trying to use zap in a gitlab-ci workflow

I am trying to automate the use of Zap in my company's continuous integration workflow. We are using gitlab-ci and I'd like to use a docker image embedding Zap as a service and, as a first step, just run a quick scan on a legally targetable website like itsecgames.com.
I am using the docker image nhsbsa/owasp-zap that exposes zap.sh as entry point.
My question is:
How can I use this image as a service in a gitlab-ci YAML script in order to do a quick scan on itsecgames.com?
Relevant information:
Here is my gitlab-ci.yml:
image: openjdk:8-jdk

variables:
  PROJECT_NAME: "psa-preevision-viewer"

stages:
  - zap

zap-scanner:
  services:
    - nhsbsa/owasp-zap:latest
  stage: zap
  script:
    - nhsbsa__owasp-zap -cmd -quickurl http://itsecgames.com/ -quickprogress
When the gitlab runner tries to resolve this job, I get this error message:
$ nhsbsa__owasp-zap -cmd -quickurl http://itsecgames.com/ -quickprogress
/bin/bash: line 27: nhsbsa__owasp-zap: command not found
ERROR: Job failed: exit code 1
At this point I've tried different approaches like calling zap.sh directly instead of nhsbsa__owasp-zap, or nhsbsa-owasp-zap (according to gitlab-ci documentation, both names should work though).
There is probably something I'm seriously misunderstanding, but isn't using a service in gitlab-ci the same as pulling an image and calling docker run on it on my own computer? As a matter of fact, if I use
docker run nhsbsa/owasp-zap -cmd -quickurl http://itsecgames.com/ -quickprogress
I get as expected an XML with the found vulnerabilities.
If that's important:
gitlab-runner version is 1.11.1
gitlab version is Community Edition 8.7.4
When you create a service in gitlab it spins up the docker container alongside your job and gives you a hostname at which to access it. The idea is that you call your commands from the initial docker image and point them at your service image. As @Jakub-Kania mentioned, it doesn't allow you to run it as a local command.
So in terms of our nhsbsa/owasp-zap image it means we have an owasp-zap daemon running and available at nhsbsa__owasp-zap:8080. We then use Maven and the zap plugin to scan our application.
Something like this (we're also parsing the zap results in sonar):
mvn -B --non-recursive -Pzap \
  -DzapPort=$NHSBSA__OWASP_ZAP_PORT_8080_TCP_PORT \
  -DzapHost=$NHSBSA__OWASP_ZAP_PORT_8080_TCP_ADDR \
  -DzapTargetUrl=$baseUrl \
  -DsonarUrl=$SONAR_URL -Dsonar.branch=release \
  br.com.softplan.security.zap:zap-maven-plugin:analyze sonar:sonar
Depending on what your application is written in, you might want to run the docker run command as a script step rather than using a service.
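For example, a rough .gitlab-ci.yml sketch of that script-step approach, assuming the runner has Docker-in-Docker available (the dind setup itself depends on your runner configuration):

zap-scanner:
  stage: zap
  image: docker:latest
  services:
    - docker:dind
  script:
    # the same invocation that already works locally, run from the job instead of as a service
    - docker run nhsbsa/owasp-zap -cmd -quickurl http://itsecgames.com/ -quickprogress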
@Simon Bennetts, is there a way to use something like curl to pass a test request to a remote zap daemon?

Gitlab Continuous Integration on Docker

I have a Gitlab server running on a Docker container: gitlab docker
On Gitlab there is a project with a simple Makefile that runs pdflatex to build a PDF file.
On the Docker container I installed texlive and make, I also installed docker runner, command:
curl -sSL https://get.docker.com/ | sh
the .gitlab-ci.yml looks like follow:
.build:
  script: &build_script
    - make

build:
  stage: test
  tags:
    - Documentation Build
  script: *build_script
The job is stuck running and a message is shown:
This build is stuck, because the project doesn't have any runners online assigned to it
any idea?
The top comment on your link is spot on:
"Gitlab is good, but this container is absolutely bonkers."
Secondly, looking at Gitlab's own advice, you should not be using this container on Windows, ever.
If you want to use Gitlab-CI from a Gitlab server, you should actually be installing a proper Gitlab server instance on a properly supported Linux VM, with Omnibus, and should not attempt to use this container for a purpose it is manifestly unfit for: running Gitlab in production.
Gitlab-omnibus contains:
a persistent (not stateless!) data tier powered by postgres.
a chat server whose entire point of existing is to be a persistent log of your team chat.
not one, but a series of server processes that work together to give you gitlab server functionality and web admin/management frontend, in a design that does not seem ideal to me to be run in production inside docker.
an integrated CI build manager that is itself a Docker container manager. Your docker instance is going to contain a cache of other docker instances.
That this container was built by Gitlab itself is no indication you should actually use it for anything other than as a test/toy or for what Gitlab themselves actually use it for, which is probably to let people spin up Gitlab nightly builds, probably via kubernetes.
I think you're slightly confused here. Judging by this comment:
On the Docker container I installed texlive and make, I also installed
docker runner, command:
curl -sSL https://get.docker.com/ | sh
It seems you've installed docker inside docker and not actually installed any runners? This won't work if that's the case. The steps to get this running are:
Deploy a new gitlab runner. The quickest way to do this will be to deploy another docker container with the gitlab runner docker image. You can't run a runner inside the docker container you've deployed gitlab in. You'll need to make sure you select an executor (I suggest using the shell executor to get you started) and then you need to register the runner. There is more information about how to do this here. What isn't detailed here is that if you're using docker for gitlab and docker for gitlab-runner, you'll need to link the containers or set up a docker network so they can communicate with each other
Once you've deployed and registered the runner with gitlab, you will see it appear in http(s)://your-gitlab-server/admin/runners - from there you'll need to assign it to a project. You can also mark it as a "Shared" runner, which will execute jobs from all projects.
Finally, add the .gitlab-ci.yml as you already have, and the build will work as expected.
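A rough sketch of that runner setup as commands, assuming the GitLab container is named gitlab and using a registration token copied from the admin runners page (the container names, volume, and tag below are illustrative):

# put the runner on the same network as the GitLab container so it can reach it by name
docker network create gitlab-net
docker network connect gitlab-net gitlab

# start the runner container
docker run -d --name gitlab-runner \
  --network gitlab-net \
  -v gitlab-runner-config:/etc/gitlab-runner \
  gitlab/gitlab-runner:latest

# register it against your GitLab instance (token from the admin runners page)
docker exec -it gitlab-runner gitlab-runner register \
  --url http://gitlab/ \
  --registration-token YOUR_TOKEN \
  --executor shell \
  --tag-list "Documentation Build" \
  --description "latex-builder"

# with the shell executor the build runs inside the runner container,
# so texlive and make must be installed there, not in the GitLab container.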
Maybe you've set the wrong tags, like me. Make sure the tag name matches one of your available runners.
tags:
  - Documentation Build # tags is used to select specific Runners from the list of all Runners that are allowed to run this project
see: https://docs.gitlab.com/ee/ci/yaml/#tags

Test Docker cluster in Jenkins

I am having some difficulty configuring Jenkins to run tests on a dockerized application.
First, here is my setup: the project is on Bitbucket and I have a docker-compose file that runs my application, which is composed of three containers for now (one for mongo, one for redis, one for my node app).
The webhook of bitbucket works well and Jenkins is triggered when I push.
However, what I would like to do for a build is:
get the repo where my docker-compose file is, run docker-compose so that my cluster is up, then run "npm test" inside the repo (my tests use mocha), and finally have Jenkins notified whether the tests passed or not.
If someone could help me get this chain of operations applied by Jenkins, it would be awesome.
The simplest way is to use the Jenkins Pipeline plugin or a shell script.
To build the docker image and run the compose stack you can use the docker-compose command. The important thing is that you need to rebuild the docker image at the compose level (because if you only run docker-compose run, Jenkins will use the previously built image), so you need to run docker-compose build first.
Your Dockerfile should copy all the files of your application.
Next, when your service is ready, you can run a command inside the container using: docker exec {CONTAINER_ID} {COMMAND_TO_RUN_TESTS}.
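Put together, a shell build step for the Jenkins job might look like the sketch below, assuming the node service is called app in docker-compose.yml (that name is an assumption) and that npm test exits non-zero on failure:

# rebuild the images so the tests run against the latest code
docker-compose build

# start mongo, redis and the node app in the background
docker-compose up -d

# run the mocha tests inside the running app container and keep the exit code
docker-compose exec -T app npm test
status=$?

# always tear the cluster down, then let the exit code decide the build result
docker-compose down
exit $status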
