Retrieve GitLab pipeline Docker image to debug job - docker

I've got a build script that runs in GitLab and generates some files that are used later in the build process.
The problem is that the GitLab pipeline fails and the failure is not reproducible locally. Is there a way to troubleshoot it?
As far as I know, GitLab pipelines run in Docker containers.
Is there a way to get the Docker image of the failed GitLab pipeline so I can analyze it locally (e.g. take a look at the generated files)?

When the job container exits, it is removed automatically, so this is not feasible.
However, you might have a few other options to debug your job:
Interactive web session
If you are using self-hosted runners, the best way to do this would probably be with an interactive web session, which gives you an interactive shell inside the job container. (Be aware that you may have to edit the job to sleep for some time in order to keep the container alive long enough to inspect it; see the sketch below.)
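For example, the job could sleep on failure so the container stays up for inspection while the job still ends up failing. A minimal sketch, assuming a hypothetical generate-files command:
script:
  # if generation fails, keep the container alive for an hour, then still fail the job
  - generate-files || (sleep 3600; exit 1)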
Artifact files
If you're not using self-hosted runners, another option would be to upload the generated files as artifacts on failure:
artifacts:
  when: on_failure
  paths:
    - path/to/generated-files/**/*
You can then download the artifacts and debug them locally.
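Artifacts can be downloaded from the job page in the GitLab UI, or through the jobs API. A sketch, with placeholder host, token, project, and job IDs:
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  --output artifacts.zip \
  "https://gitlab.example.com/api/v4/projects/<project_id>/jobs/<job_id>/artifacts"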
Use the job script to debug
Yet another option would be to add debugging output to the job itself.
script:
  - generate-files
  # this is just an example, you can make this more helpful,
  # depending on what information you need for debugging
  - cat path/to/generated-files/*
Because debugging output may be noisy, consider using collapsible sections to collapse the debug output by default.
script:
  - generate-files
  - echo "Starting debug section"
  # https://docs.gitlab.com/ee/ci/jobs/index.html#custom-collapsible-sections
  - echo -e "\e[0Ksection_start:`date +%s`:debugging[collapsed=true]\r\e[0KGenerated File Output"
  - cat path/to/generated-files/*
  - echo -e "\e[0Ksection_end:`date +%s`:debugging\r\e[0K"
Use the gitlab-runner locally
You can run jobs locally with more or less the same behavior as the GitLab runner by installing gitlab-runner locally and using the gitlab-runner exec command.
In this case, you could run your job locally and then docker exec into it:
In your local repo, start the job by running gitlab-runner exec, providing the executor and the name of the job
In another shell, run docker ps to find the container ID of the job started by gitlab-runner exec
exec into the container using its ID: docker exec -it <CONTAINER_ID> /bin/bash (assuming bash is available in your image)
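Concretely, the sequence might look like this (a sketch, assuming the Docker executor and a job named build):
gitlab-runner exec docker build           # terminal 1: run the job locally
docker ps                                 # terminal 2: find the job's container ID
docker exec -it <CONTAINER_ID> /bin/bash  # terminal 2: open a shell in the container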

Related

Docker container names clash in concurrent Jenkins Pipeline

I'm using Jenkins, Docker, and a Makefile to run a pipeline. The issue is that the container names clash (obviously) when two builds from the same repository are running concurrently. The container names are defined in the Makefile:
TEST_IMAGE = test-image-python
TEST_CONTAINER = test-container-python
run_tests: # using reports
	docker run -d --name $(TEST_CONTAINER) $(TEST_IMAGE)
I'm using a (declarative) Jenkinsfile to set up the stages of the Jenkins pipeline, where the make run_tests command is executed. I've tried the options { disableConcurrentBuilds() } directive, but apparently that only works for the same branch of the repository, and limiting Jenkins to run only one build at a time seems like a suboptimal solution.
What I thought of doing was adjusting the Makefile to append some kind of UUID to the container name, but I can't figure out how to do it. What I found for generating a UUID in bash is
UUID=$(cat /proc/sys/kernel/random/uuid)
which I tried to get into the Makefile with
generate_uuid:
	bash UUID=$$(cat /proc/sys/kernel/random/uuid)
	echo $(UUID)
... which doesn't work. In the end what I would like to have is either
TEST_CONTAINER = test-container-python_$(UUID)
or alternatively I could generate the UUID and append it when calling docker run
run_tests: # using reports
	docker run -d --name $(TEST_CONTAINER)_$(UUID) $(TEST_IMAGE)
I'm fine with either, I'm just not sure how to get the last step done.
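One way this could be done, sketched here as an assumption rather than an accepted answer, is to evaluate the UUID once per make invocation with make's shell function:
# assign with := so the shell command runs once per make invocation,
# giving every build its own container name
UUID := $(shell cat /proc/sys/kernel/random/uuid)
TEST_IMAGE = test-image-python
TEST_CONTAINER = test-container-python_$(UUID)

run_tests: # using reports
	docker run -d --name $(TEST_CONTAINER) $(TEST_IMAGE)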

GitLab CI/CD SSH Session Hangs in Pipeline

I am using GitLab CI/CD to build and push a docker image to my private GitLab registry.
I am able to successfully SSH into my server from the pipeline runner, but any commands passed into the SSH session don't run.
I am trying to pull the latest image from my GitLab container registry, run it, and exit the session so the data is passed back to my pipeline gracefully (successfully).
The command I am running is:
ssh -t user@123.456.789 "docker pull registry.gitlab.com/user/project:latest & docker run project:latest"
The above command connects me to my server, and I see the typical welcome message, but the session hangs and no commands are run.
I have tried using the heredoc format to pass in multiple commands at once, but I can't get a single command to work.
Any advice is appreciated.
For testing, you can try
ssh user@123.456.789 ls
To chain commands, avoid using '&', which runs the first command in the background while acting as a command separator.
Try:
ssh user@123.456.789 "ls; pwd"
If this works, then try the two docker commands, separated by ';'.
Try with docker run -td (which I mentioned here) in order to detach the docker process without requiring a tty.
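Putting both suggestions together, the command from the question would become something like the following sketch (the image reference for docker run is assumed to match the one that was pulled):
ssh user@123.456.789 "docker pull registry.gitlab.com/user/project:latest; docker run -td registry.gitlab.com/user/project:latest"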

Jenkins with make and docker

I have been playing around with Jenkins, and I'm now able to connect GitHub and set triggers. I want to build my code using make and docker, but when I execute make or docker in the shell, they are not found. How do I configure Jenkins' build step to run make and docker?
I would install make and the Docker daemon on your Jenkins server. This will allow you to build and push Docker images from within your Jenkins build pipelines using the Execute Shell build step. You will also be able to run make commands there.
docker build -t <USER>/<REPO_NAME>:<TAG> .
docker push <USER>/<REPO_NAME>:<TAG>
There are also Jenkins plugins available for building your Docker images.
I would NOT recommend running Jenkins in a Docker container and then running Docker inside that container. This is known as Docker-in-Docker (aka DinD) and should be avoided for the reasons stated in this article.
You can install Docker on the same machine where your Jenkins is running, or you can run a Docker container which contains both Jenkins and Docker.
If your purpose is to learn Jenkins, I suggest running Jenkins within Docker and the Docker daemon on your host machine.
Just have Docker installed on your host machine, then run:
docker run --rm -u root -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --name myjenkinsserver jenkinsci/blueocean
Then you are ready to go. Add a pipeline job as follows:
pipeline {
    agent { docker 'gcc:latest' }
    stages {
        stage('build') {
            steps {
                sh 'make --version'
            }
        }
    }
}
Now you can run make commands.
In general, it is better to run Jenkins jobs on Jenkins slave machines, or in other terms, Jenkins agents. You can create custom Jenkins agents which include the necessary tools, in your case make; a sketch of such an agent image follows.
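For illustration only (this Dockerfile is an assumption, not part of the original answer; it presumes the Debian-based jenkins/inbound-agent image):
FROM jenkins/inbound-agent:latest
USER root
# install make and any other build tools the jobs need
RUN apt-get update \
    && apt-get install -y --no-install-recommends make \
    && rm -rf /var/lib/apt/lists/*
USER jenkins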

jenkins pipeline docker build on docker agent

I've got a Jenkins declarative pipeline build that runs Gradle and uses a Gradle plugin to create a Docker image. I'm also using a dockerfile agent directive, so the entire thing runs inside a Docker container. This was working great with Jenkins itself installed in Docker (I know, that's a lot of Docker). I had Jenkins installed in a Docker container on Docker for Mac, with -v /var/run/docker.sock:/var/run/docker.sock (DooD) per https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/. With this setup, the pipeline Docker agent ran fine, and the docker build command within the pipeline Docker agent ran fine as well. I assumed Jenkins also mounted the Docker socket on its inner Docker container.
Now I'm trying to run this on Jenkins installed on an EC2 instance with Docker installed properly. The jenkins user has the docker group as its primary group and is able to run "docker run hello-world" successfully. My pipeline build starts the Docker agent container (based on the gradle image with various things added), but when Gradle attempts to run the docker build command, I get the following:
* What went wrong:
Execution failed for task ':docker'.
> Docker execution failed
Command line [docker build -t config-server:latest /var/lib/****/workspace/nfig-server_feature_****-HRUNPR3ZFDVG23XNVY6SFE4P36MRY2PZAHVTIOZE2CO5EVMTGCGA/build/docker] returned:
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Is it possible to build docker images inside a docker agent using declarative pipeline?
Yes, it is.
The problem is not with Jenkins' declarative pipeline, but with how you're setting up and running things.
From the error above, it looks like a permission is missing and needs to be granted.
Maybe if you share what your configuration looks like and how you're running things, more people can help. One common setup is sketched below.
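For reference, the working Docker-for-Mac setup described in the question amounts to handing the host's Docker socket to the agent container. With a dockerfile agent, that could be sketched as follows (an assumption, not the answerer's confirmed fix):
agent {
    dockerfile {
        // mount the host Docker socket so docker build inside the agent
        // talks to the host daemon (DooD, as in the question's original setup)
        args '-v /var/run/docker.sock:/var/run/docker.sock'
    }
}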

Starting new Docker container with every new Bamboo build run and using the container to run the build in

I am new to Bamboo and am trying to get the following process flow with Bamboo and Docker:
Developer commits code to a Bitbucket branch
Build plan detects the change
Build plan then starts a Docker container on a dedicated AWS instance where Docker is installed. In the Docker container a remote agent is started as well. I use the atlassian/bamboo-java-agent:latest docker container.
Remote agent registers with Bamboo
The rest of the build plan runs in the container
Container and agent gets removed when plan completes
I set up a test build plan; the first task in the plan starts a Docker container as follows:
sudo docker run -d --name "${bamboo.buildKey}_${bamboo.buildNumber}" \
  -e HOME=/root/ -e BAMBOO_SERVER=http://x.x.x.x:8085/ \
  -i -t atlassian/bamboo-java-agent:latest
The second task gets the source code and deploys; the third task runs tests and the fourth shuts down the container.
There are other agents online on Bamboo as well, and my build plan sometimes uses those instead of the Docker container that I started as part of the build plan.
Is there a way for me to do the above?
I hope it all makes sense. I am truly new to this and any help will be appreciated.
We (Atlassian Build Engineering) have created a set of plugins to run Docker-based agents in a cluster (ECS) that come online, build a single job, and then exit. We've recently open-sourced the solution.
See https://bitbucket.org/atlassian/per-build-container for more details.
First, you need to make sure the "main" Docker container does not exit when you run it.
Check with
docker ps -a
You should see it running.
Now, assuming it is running, you can execute commands inside the container.
To get a shell inside the container:
docker exec -it containerName bash
To execute a command inside the container from outside the container:
docker exec -it containerName commandToExecuteInsideTheContainer
You could, as part of the container's Dockerfile, COPY a script into it that does something.
Then you can execute that script from outside the container using the above approach, as sketched below.
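A sketch of that idea, with hypothetical file names:
# in the Dockerfile: bake the script into the image
COPY run-build.sh /usr/local/bin/run-build.sh
RUN chmod +x /usr/local/bin/run-build.sh
Then, from outside the running container:
docker exec -it containerName /usr/local/bin/run-build.sh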
Hope this gives some insight.
