Get output from Gitlab CI using docker logs - docker

I am working on a script aggregating all job traces from pipeline’s jobs. My goal is to:
Send traces to Graylog server
Save job traces locally to make them accessible from the machine in case of Graylog shutdown.
My first thought was accessing the logs from my GitLab CI jobs using docker logs (or some other CLI tool) on my machine running Docker.
I know from this thread that it's possible to do this from Docker containers with, for example:
echo "My output" >> /proc/1/fd/1
But is that possible from GitLab Runner containers? My .gitlab-ci.yml for testing looks like this:
image: python:latest

stages:
  - test

test:
  stage: test
  tags:
    - test
  script:
    - echo "My output" >> /proc/1/fd/1
Generally, I would like to be able to get "My output" from the machine using the docker logs command, but I am not sure how to do this. I use the docker executor for my GitLab Runner.
I hope my explanation is understandable.

You cannot do this with any of the official Docker-based GitLab Runner executors. Job output is not emitted through Docker logging by the runner or the containers it starts: all output from a job container is captured and transmitted to the GitLab server in real time, so it never reaches the Docker logging driver. Therefore, you cannot use docker logs or similar utilities to obtain job logs.
You can obtain job logs either: (1) from the configured storage of the GitLab server or (2) by using the jobs API.
For example, you can run a log forwarder (like the Splunk universal forwarder, a Graylog forwarder, etc.) directly on a self-hosted GitLab instance to forward job traces to the respective external systems.
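For option (2), here is a minimal sketch of pulling a single job's trace with curl; the hostname, project ID, pipeline ID, job ID, and token are placeholders you would substitute:

# List a pipeline's jobs, then fetch one job's raw trace via the Jobs API
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.example.com/api/v4/projects/<project_id>/pipelines/<pipeline_id>/jobs"
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.example.com/api/v4/projects/<project_id>/jobs/<job_id>/trace" \
  --output job_trace.log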

GitLab 15.6 (November 2022) might help here:
GitLab Runner 15.6
We’re also releasing GitLab Runner 15.6 today! GitLab Runner is the lightweight, highly-scalable agent that runs your CI/CD jobs and sends the results back to a GitLab instance.
GitLab Runner works in conjunction with GitLab CI/CD, the open-source continuous integration service included with GitLab.
What's new:
- Service container logs
Bug fixes:
- GitLab Runner on Windows in Kubernetes: error "preparation failed"
The list of all changes is in the GitLab Runner CHANGELOG.
See Documentation.
The "Service container logs" item means:
Logs generated by applications running in service containers can be captured for subsequent examination and debugging.
Please note:
Enabling CI_DEBUG_SERVICES may result in masked variables being revealed.
When CI_DEBUG_SERVICES is enabled, service container logs and the CI job’s logs are streamed to the job’s trace log concurrently, which makes it possible for a service container log to be inserted inside a job’s masked log.
This would thwart the variable masking mechanism and result in the masked variable being revealed.
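To illustrate, a minimal sketch of turning the feature on in .gitlab-ci.yml (assuming GitLab Runner 15.6 or later, and keeping the masking caveat above in mind):

variables:
  CI_DEBUG_SERVICES: "true"   # stream service container logs into the job trace

The variable can also be set per job, or as a project-level CI/CD variable, instead of globally.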

Related

How to create a docker container inside docker in the GitLab CI/CD pipeline?

Since I do not have much experience with DevOps yet, I am struggling to find an answer to the following question:
I'm setting up the CI/CD pipeline for my project (Python, FastAPI, Redis), which will have test and build stages. It can be described as follows:
Before stages: install all dependencies (install Python, copy files for testing, etc.)
The test stage uses docker-compose to run the Redis server, which is necessary to launch the application for testing (unit tests).
The build stage creates a new Docker container and pushes it to Docker Hub if there is a new GitLab tag.
The GitLab Runner is located on the AWS EC2 instance, the runner executor is a "docker" with an "Ubuntu:20.04" image. So, the question:
How do I run docker-compose / docker build inside the docker executor, and can it be done at all without any negative consequences?
I thought about several options:
Switch from docker executor to something else (maybe to shell or docker+ssh)
Use Docker-in-Docker, but I have seen cautions that it can be dangerous, and I am not sure exactly why in my case (see the sketch after this list).
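For reference, a hedged sketch of the usual Docker-in-Docker pattern for the build stage; it assumes the runner is configured with privileged = true, and the registry path and tag rule are placeholders:

build:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  rules:
    - if: $CI_COMMIT_TAG          # only build/push when a tag is created
  script:
    - docker build -t registry.example.com/myapp:$CI_COMMIT_TAG .
    - docker push registry.example.com/myapp:$CI_COMMIT_TAG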
What I've tried:
Using Redis as a "services" entry in the GitLab job instead of the docker-compose file, but I can't find a way to point my application (host and port) at a server that runs inside the docker executor as a service.
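On that last point, with the docker executor a service container is reachable under an alias derived from its image name, so one common sketch looks like this; the environment variable names are just placeholders for however the application reads its Redis location:

test:
  stage: test
  services:
    - redis:6.2
  variables:
    REDIS_HOST: redis      # service alias derived from the image name
    REDIS_PORT: "6379"
  script:
    - pytest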

CI/CD results don't consider whether containers started successfully internally

When I bring up a Docker container in detached mode, the command usually returns quickly, since the containers' output is not printed to the console. This is a problem for me when I'm running it through GitLab's CI/CD.
So for example, when I have this command in the deployment stage of my gitlab-ci.yml:
ssh root@123.123.123.123 docker-compose up -d
This brings up all the containers in docker-compose in detached mode on my instance at the IP address. The console will usually output:
MyContainerA ... Done!
MyContainerB ... Done!
<exit>
To Gitlab CI, the deployment stage is completed successfully because there were no errors.
However, in reality, this doesn't guarantee that everything is good, because a container may not have started up successfully internally. For example, npm start may have failed and the container exits a moment later.
This makes the CI result (success/failed) unreliable. Is this normal when it comes to deploying Docker containers through CI/CD?
What is the correct way to deploy a Docker container through GitLab CI/CD (or any other CI/CD, for that matter) such that the final CI result (success/failed) reflects whether the containers actually started up successfully internally?
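One hedged sketch, assuming Docker Compose v2 on the target host and healthchecks defined in the compose file: the --wait flag makes docker compose up block until the services report running/healthy and return a non-zero exit code otherwise, which the CI job then reports as a failure.

deploy:
  stage: deploy
  script:
    # fails the job if any service does not reach a running/healthy state
    - ssh root@123.123.123.123 "docker compose up -d --wait"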

Best practice using docker inside Jenkins?

Hi, I'm learning how to use Jenkins integrated with Docker, and I don't understand what I should do to make them communicate.
I'm running Jenkins inside a Docker container and I want to build an image in a pipeline. So I need to execute some docker commands inside the Jenkins container.
So the question here is where Docker comes from. I understand that we need to bind mount the Docker host daemon socket into the Jenkins container, but this container still needs the binaries to execute Docker.
I have seen several approaches to achieve this and I'm confused about which I should use. I have seen:
bind mount the docker binary (/usr/local/bin/docker:/usr/bin/docker)
installing docker in the image
if I'm not wrong, the Blue Ocean image comes with Docker pre-installed (I have not found any documentation on this)
Also I don't understand what Docker plugins for Jenkins can do for me.
Thanks!
Docker has a client-server architecture. The server is the Docker daemon and the client is basically the command line interface that allows you to execute docker ... from the command line.
Thus, when running Jenkins inside Docker, you will need access to the daemon. This is achieved by binding /var/run/docker.sock into the container.
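As an illustration only, a minimal sketch of starting Jenkins with the host socket mounted; the container name, volume name, and image tag are placeholders:

docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts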
At this point you need something that can talk to the daemon, which is the server. You can do that by providing access to the docker binaries: either mount the docker binaries from the host, or install the client binaries inside the Jenkins container.
Alternatively, you can communicate with the daemon using the Docker REST API without having the docker client binaries inside the Jenkins container. You can, for instance, build an image using the API.
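For illustration, a hedged sketch of talking to the daemon over the mounted socket without any docker binary in the container; this just lists running containers (image builds go through the Engine API's POST /build endpoint with a tarred build context):

curl --silent --unix-socket /var/run/docker.sock http://localhost/containers/json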
Also I don't understand what Docker plugins for Jenkins can do for me
The Docker plugin for Jenkins isn't useful for the use case that you described. This plugin allows you to provision Jenkins slaves using Docker. You can for instance run a compilation inside a Docker container that gets automatically provisioned by Jenkins
It is not best practice to use Docker with Jenkins. It is also not a bad practice. The relationship between Jenkins and Docker is not determined in such a manner that having Docker is good or bad.
Jenkins is a Continuous Integration Server, which is a fancy way of saying "a service that builds stuff at various times, according to predefined rules"
If your end result is a docker image to be distributed, you have Jenkins call your docker build command, collect the output, and report on the success / failure of the docker build command.
If your end result is not a docker image, you have Jenkins call your non-docker build command, collect the output, and report on the success / failure of the non-docker build.
How you have the build launched depends on how you would build the product. Makefiles are launched with make, Apache Ant with ant, Apache Maven with mvn package, Docker with docker build, and so on. From Jenkins' perspective, it doesn't matter, provided you provide a complete set of rules to launch the build, collect the output, and report the success or failure.
Now, for the 'Docker plugin for Jenkins'. As @yamenk stated, Jenkins uses build slaves to perform the build. That plugin will launch the build slave within a Docker container. The thing built within that container may or may not be a docker image.
Finally, running Jenkins inside a docker container just means you need to bind your Docker-ized Jenkins to the external world, as @yamenk indicates, or you'll have trouble launching builds.
Bind mounting the docker binary into the Jenkins image only works if the Jenkins image is "close enough" - it has to contain the required shared libraries!
So when using a standard jenkins/jenkins:2.150.1 image on an Ubuntu 18.04 host, this unfortunately does not work. (It looked so nice and slim ;)
So the requirement is to build or find a Docker image which contains a Docker client compatible with the host's Docker service.
Many people seem to install docker in their jenkins image....

Trying to use zap in a gitlab-ci workflow

I am trying to automate the usage of ZAP in the continuous integration workflow of my company. We are using gitlab-ci and I want to use a Docker image embedding ZAP as a service and, as a first step, just run a quick scan on a legally targetable website like itsecgames.com.
I am using the docker image nhsbsa/owasp-zap that exposes zap.sh as entry point.
My question is:
How can I use this image as a service in a gitlab-ci YAML script in order to do a quick scan on itsecgames.com?
Relevant information:
Here is my gitlab-ci.yml:
image: openjdk:8-jdk

variables:
  PROJECT_NAME: "psa-preevision-viewer"

stages:
  - zap

zap-scanner:
  services:
    - nhsbsa/owasp-zap:latest
  stage: zap
  script:
    - nhsbsa__owasp-zap -cmd -quickurl http://itsecgames.com/ -quickprogress
When the gitlab runner tries to resolve this job, I get this error message:
$ nhsbsa__owasp-zap -cmd -quickurl http://itsecgames.com/ -quickprogress
/bin/bash: line 27: nhsbsa__owasp-zap: command not found
ERROR: Job failed: exit code 1
At this point I've tried different approaches like calling zap.sh directly instead of nhsbsa__owasp-zap, or nhsbsa-owasp-zap (according to gitlab-ci documentation, both names should work though).
There is probably something that I'm seriously misunderstanding, but isn't using a service in gitlab-ci the same as pulling an image and calling docker run on it on my own computer? As a matter of fact, if I use
docker run nhsbsa/owasp-zap -cmd -quickurl http://itsecgames.com/ -quickprogress
I get as expected an XML with the found vulnerabilities.
If that's important:
gitlab-runner version is 1.11.1
gitlab version is Community Edition 8.7.4
When you create a service in GitLab, it spins up the Docker container alongside the job and gives you a hostname through which to access it. The idea is that you call your commands from the initial Docker image and point them at your service image. As @Jakub-Kania mentioned, it doesn't allow you to run it as a local command.
So in terms of our nhsbsa/owasp-zap image, it means we have an owasp-zap daemon running and available at nhsbsa__owasp-zap:8080. We then use Maven and the ZAP plugin to scan our application.
Something like this (we're also parsing the zap results in sonar) :
mvn -B --non-recursive -Pzap \
  -DzapPort=$NHSBSA__OWASP_ZAP_PORT_8080_TCP_PORT \
  -DzapHost=$NHSBSA__OWASP_ZAP_PORT_8080_TCP_ADDR \
  -DzapTargetUrl=$baseUrl \
  -DsonarUrl=$SONAR_URL -Dsonar.branch=release \
  br.com.softplan.security.zap:zap-maven-plugin:analyze sonar:sonar
Depending on what your application is written in, you might want to run the docker run command as a script step rather than using a service.
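As a hedged sketch of that approach, assuming the runner allows Docker-in-Docker (privileged mode) and reusing the image from the question:

zap-scan:
  stage: zap
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker run --rm nhsbsa/owasp-zap -cmd -quickurl http://itsecgames.com/ -quickprogress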
@Simon Bennetts, is there a way to use something like curl to pass a test request to a remote ZAP daemon?

Gitlab Continuous Integration on Docker

I have a Gitlab server running on a Docker container: gitlab docker
On GitLab there is a project with a simple Makefile that runs pdflatex to build a PDF file.
On the Docker container I installed texlive and make, I also installed docker runner, command:
curl -sSL https://get.docker.com/ | sh
the .gitlab-ci.yml looks like follow:
.build:
script: &build_script
- make
build:
stage: test
tags:
- Documentation Build
script: *build
The job is stuck running and a message is shown:
This build is stuck, because the project doesn't have any runners online assigned to it
any idea?
The top comment on your link is spot on:
"Gitlab is good, but this container is absolutely bonkers."
Secondly, looking at GitLab's own advice, you should not be using this container on Windows, ever.
If you want to use GitLab CI from a GitLab server, you should install a proper GitLab server instance on a supported Linux VM with Omnibus, and should not attempt to use this container for a purpose it is manifestly unfit for: a real, production way to run GitLab.
Gitlab-omnibus contains:
a persistent (not stateless!) data tier powered by postgres.
a chat server whose entire point is to be a persistent log of your team chat.
not one, but a series of server processes that work together to give you GitLab server functionality and a web admin/management frontend, in a design that does not seem ideal to me to run in production inside Docker.
an integrated CI build manager that is itself a Docker container manager. Your docker instance is going to contain a cache of other docker instances.
That this container was built by Gitlab itself is no indication you should actually use it for anything other than as a test/toy or for what Gitlab themselves actually use it for, which is probably to let people spin up Gitlab nightly builds, probably via kubernetes.
I think you're slightly confused here. Judging by this comment:
On the Docker container I installed texlive and make, I also installed
docker runner, command:
curl -sSL https://get.docker.com/ | sh
It seems you've installed docker inside docker and not actually installed any runners? This won't work if that's the case. The steps to get this running are:
1. Deploy a new GitLab runner. The quickest way to do this is to deploy another Docker container with the gitlab-runner Docker image. You can't run a runner inside the Docker container you've deployed GitLab in. You'll need to make sure you select an executor (I suggest using the shell executor to get you started) and then you need to register the runner. There is more information about how to do this here. What isn't detailed there is that if you're using Docker for GitLab and Docker for gitlab-runner, you'll need to link the containers or set up a Docker network so they can communicate with each other.
2. Once you've deployed and registered the runner with GitLab, you will see it appear in http(s)://your-gitlab-server/admin/runners - from here you'll need to assign it to a project. You can also make it a "Shared" runner, which will execute jobs from all projects.
3. Finally, add the .gitlab-ci.yml as you already have, and the build will work as expected.
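As a rough sketch of the docker-networking part only; the network name, the GitLab container name ("gitlab"), and the config path are assumptions you would adapt:

docker network create gitlab-net
docker network connect gitlab-net gitlab       # attach the existing GitLab container

docker run -d --name gitlab-runner --restart always --network gitlab-net \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest

docker exec -it gitlab-runner gitlab-runner register   # point it at http://gitlab and set the tag used in .gitlab-ci.yml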
Maybe you've set the wrong tags, like me. Make sure the tag name matches your available runner.
tags:
  - Documentation Build  # tags is used to select specific runners from the list of all runners that are allowed to run this project
see: https://docs.gitlab.com/ee/ci/yaml/#tags
