GitLab CI/CD SSH Session Hangs in Pipeline - docker

I am using GitLab CI/CD to build and push a docker image to my private GitLab registry.
I am able to successfully SSH into my server from the pipeline runner, but any commands passed into the SSH session don't run.
I am trying to pull the latest image from my GitLab container registry, run it, and exit the session so the job finishes gracefully (successfully) in my pipeline.
The command I am running is:
ssh -t user@123.456.789 "docker pull registry.gitlab.com/user/project:latest & docker run project:latest"
The above command connects me to my server, and I see the typical welcome message, but the session hangs and no commands are run.
I have tried using the heredoc format to pass in multiple commands at once, but I can't get a single command to work.
Any advice is appreciated.

For testing, you can try
ssh user@123.456.789 ls
To chain commands, avoid using '&', which would make the first command run in the background while acting as a command separator.
Try:
ssh user@123.456.789 "ls; pwd"
If this works, then try the two docker commands, separated by ';'.
Try docker run -td (which I mentioned here) in order to detach the docker process without requiring a tty.
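Putting those together, a minimal sketch of the corrected command (reusing the host, registry path, and tag from the question):
# ';' runs the commands sequentially; -d detaches the container so no tty is needed
ssh user@123.456.789 "docker pull registry.gitlab.com/user/project:latest; docker run -d registry.gitlab.com/user/project:latest"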

Related

Retrieve gitlab pipeline docker image to debug job

I've got a build script that is running in Gitlab and generates some files that are used later in the build process.
The reason is that the GitLab pipeline fails and the failure cannot be reproduced locally. Is there a way to troubleshoot the failure?
As far as I know, GitLab pipelines run in Docker containers.
Is there a way to get the docker image of the failed Gitlab pipeline to analyze it locally (e.g. take a look at the generated files)?
When the job container exits, it is removed automatically, so this would not be feasible to do.
However, you might have a few other options to debug your job:
Interactive web session
If you are using self-hosted runners, the best way to do this would probably be with an interactive web session. That would allow you to have an interactive shell session inside the container. (Be aware, you may have to edit the job to sleep for some time in order to keep the container alive long enough to inspect it.)
Artifact files
If you're not using self-hosted runners, another option would be to artifact the generated files on failure:
artifacts:
  when: on_failure
  paths:
    - path/to/generated-files/**/*
You can then download the artifacts and debug them locally.
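If you prefer the command line over the job page's download button, a sketch using the GitLab jobs API (the project ID, job ID, and token below are placeholders):
# download the artifacts archive of a specific job
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.com/api/v4/projects/<project_id>/jobs/<job_id>/artifacts" \
  --output artifacts.zip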
Use the job script to debug
Yet another option would be to add debugging output to the job itself.
script:
  - generate-files
  # this is just an example, you can make this more helpful,
  # depending on what information you need for debugging
  - cat path/to/generated-files/*
Because debugging output may be noisy, you can consider using collapsible sections to collapse debug output by default.
script:
  - generate-files
  - echo "Starting debug section"
  # https://docs.gitlab.com/ee/ci/jobs/index.html#custom-collapsible-sections
  - echo -e "\e[0Ksection_start:`date +%s`:debugging[collapsed=true]\r\e[0KGenerated File Output"
  - cat path/to/generated-files/*
  - echo -e "\e[0Ksection_end:`date +%s`:debugging\r\e[0K"
Use the gitlab-runner locally
You can run jobs locally with, more or less, the same behavior as the GitLab runner by installing gitlab-runner locally and using the gitlab-runner exec command.
In this case, you could run your job locally and then docker exec into your job:
In your local repo, start the job by running the gitlab-runner exec command, providing the name of the job
In another shell, run docker ps to find the container ID of the job started by gitlab-runner exec
exec into the container using its ID: docker exec -it <CONTAINER_ID> /bin/bash (assuming bash is available in your image)
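A rough sketch of those steps, assuming a hypothetical job named build-job and the docker executor:
# from the root of your local repo, run the job with the docker executor
gitlab-runner exec docker build-job
# in another shell, find the container started for the job
docker ps
# open a shell inside it (assuming bash is available in the image)
docker exec -it <CONTAINER_ID> /bin/bash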

Jenkins pipeline: docker.withServer(...) does not execute docker commands on remote server

I'm using Docker Pipeline Plugin version 1.10.
I have my Jenkins installed in a container. I have a remote server that runs a Docker daemon. The daemon is reachable from the Jenkins machine via TCP (tested). I disabled TLS security on the Docker daemon.
I'm not able to make the docker.withServer(...) step work.
As a basic test I simply put the following content in a Jenkinsfile (if I'm correct, this is valid pipeline content):
docker.withServer('tcp://my.docker.host:2345') {
    def myImage = docker.build('myImage')
}
When the pipeline executes I get this error: script.sh: line 2: docker: command not found like the docker command was still trying to execute locally (there is no docker command installed locally) rather than on my remote Docker daemon.
Am I missing anything? Is it required to have the docker command installed locally when trying to execute Docker commands on a remote server?
Have you tried
withDockerServer('tcp://my.docker.host:2345') {
    .....
}
Documentation here
Docker needs to be installed on the Jenkins master in order for Jenkins to be able to launch the container on my.docker.host.
The first docker command runs on the Jenkins master, but with a parameter that passes the command on to my.docker.host.
The container itself will then run on my.docker.host.
Note that you only need to install docker on the Jenkins master; the daemon does not need to be running on the Jenkins master.
Check if you have set up the port correctly. The default port for the daemon is 2375. It has to be checked on both the docker daemon (option -H 0.0.0.0:2375) and on the Jenkins client.
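As a sketch (assuming you accept an unencrypted, non-TLS daemon socket), exposing the daemon and checking it from the Jenkins side could look like:
# on my.docker.host: expose the daemon on TCP in addition to the local socket (insecure without TLS)
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
# from the Jenkins machine: verify the remote daemon is reachable
docker -H tcp://my.docker.host:2375 info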

Configuring Jenkins SSH options to slave nodes

I am running Jenkins on Ubuntu 14.04 (Trusty Tahr) with slave nodes via SSH. We're able to communicate with the nodes to run most commands, but when a command requires tty input, we get the classic
the input device is not a TTY
error. In our case, it's a docker exec -it command.
So I'm searching through loads of information about Jenkins, trying to figure out how to configure the connection to the slave node to enable the -t option to force tty instances, and I'm coming up empty. How do we make this happen?
As far as I know, you cannot give -t to the ssh that Jenkins fires up (which makes sense, as Jenkins is inherently detached). From the documentation:
When the SSH slaves plugin connects to a slave, it does not run an interactive shell. Instead it does the equivalent of your running "ssh slavehost command..." a few times...
However, you can defeat this in your build scripts by...
looping back to yourself: ssh -t localhost command
using a local PTY generator: script --return -c "command" /dev/null
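For the docker exec -it case, a sketch of both workarounds (the container name and command below are placeholders):
# allocate a pseudo-tty via 'script' from util-linux
script --return -c "docker exec -it my_container my_command" /dev/null
# or loop back over ssh to force tty allocation (assumes the node can ssh to itself)
ssh -t localhost "docker exec -it my_container my_command"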

Starting new Docker container with every new Bamboo build run and using the container to run the build in

I am new to Bamboo and am trying to get the following process flow using Bamboo and Docker:
Developer commits code to a Bitbucket branch
Build plan detects the change
Build plan then starts a Docker container on a dedicated AWS instance where Docker is installed. In the Docker container a remote agent is started as well. I use the atlassian/bamboo-java-agent:latest docker container.
Remote agent registers with Bamboo
The rest of the build plan runs in the container
Container and agent get removed when the plan completes
I set up a test build plan, and in the plan my first task is to start a Docker instance as follows:
sudo docker run -d --name "${bamboo.buildKey}_${bamboo.buildNumber}" \
    -e HOME=/root/ -e BAMBOO_SERVER=http://x.x.x.x:8085/ \
    -i -t atlassian/bamboo-java-agent:latest
The second task gets the source code and deploys, the third task runs the tests, and the fourth task shuts down the container.
There are other agents online on Bamboo as well, and my build plan sometimes uses those instead of the Docker container that I started as part of the build plan.
Is there a way for me to do the above?
I hope it all makes sense. I am truly new to this and any help will be appreciated.
We (Atlassian Build Engineering) have created a set of plugins to run Docker based agents in a cluster (ECS) that comes online, builds a single job and then exits. We've recently open sourced the solution.
See https://bitbucket.org/atlassian/per-build-container for more details.
First you need to make sure the "main" docker container does not exit when you run it.
Check with:
docker ps -a
You should see that it is running.
Now, assuming it is running, you can execute commands inside the container.
To get into the container:
docker exec -it containerName bash
To execute commands inside the container from outside the container:
docker exec -it containerName commandToExecuteInsideTheContainer
You could, as part of the container's Dockerfile, COPY a script into it that does something.
Then you can execute that script from outside the container using the above approach.
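For instance (hypothetical names: the container is the one started in your first task, and /opt/run-build.sh is a script the image's Dockerfile copied in), a later task could run:
# run the script that was COPY'd into the image, inside the agent container
docker exec "${bamboo.buildKey}_${bamboo.buildNumber}" /opt/run-build.sh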
Hope this gives some insight.

Installing GitLab CI using Docker for the CI and the Runners, and making it persistent after reboot

I have a server running Gitlab. Let's say that the address is https://gitlab.mydomain.com.
Now what I want to achieve is to install a Continuous Integration system. Since I am using GitLab, I opt for GitLab CI, as it feels like the more natural way to go. So I go to the Docker repo and I find this image.
So I run the image to create a container with the following
docker run --restart=always -d -p 9000:9000 -e GITLAB_URLS="https://gitlab.mydomain.com" anapsix/gitlab-ci
I give it a minute to boot up and I can now access the CI through the URL http://gitlab.mydomain.com:9000. So far so good.
I log in the CI and I am greeted by this message:
Now you need Runners to process your builds.
So I come back to the Docker Hub and I find this other image. Apparently, to boot up this image I have to do it interactively. I follow the instructions and it creates the configuration files:
mkdir -p /opt/gitlab-ci-runner
docker run --name gitlab-ci-runner -it --rm -v /opt/gitlab-ci-runner:/home/gitlab_ci_runner/data sameersbn/gitlab-ci-runner:5.0.0-1 app:setup
The interactive setup will ask me for the proper data that it needs:
Please enter the gitlab-ci coordinator URL (e.g. http://gitlab-ci.org:3000/ )
http://gitlab.mydomain.com:9000/
Please enter the gitlab-ci token for this runner:
12345678901234567890
Registering runner with registration token: 12345678901234567890, url: http://gitlab.mydomain.com:9000/.
Runner token: aaaaaabbbbbbcccccccdddddd
Runner registered successfully. Feel free to start it!
I go to http://gitlab.mydomain.com:9000/admin/runners, and hooray, the runner appears on stage.
All seems like to work great, but here comes the problem:
If I restart the machine, due to an update or whatever reason, the runner is not there anymore. I could maybe add --restart=always to the command when I run the image of the runner, but this would be problematic because:
The setup is interactive, so the token to register runners has to be entered manually
Every time the container with GitLab CI is re-run, the token to register new runners is different.
How could I solve this problem?
I have a way of pointing you in the right direction, but I'm still trying to make it work myself; hopefully we both manage to get it up. Here's my situation:
I'm using CoreOS + Docker, trying to do exactly what you're trying to do, and in CoreOS you can set up a service that starts the CI (as well as GitLab and the others) every time you restart the machine. My problem is trying to make that same installation automatic.
After some digging I found this: https://registry.hub.docker.com/u/ubergarm/gitlab-ci-runner/
In this documentation they state that it can be done in two ways:
1. Mount a .dockercfg file containing credentials into the /root directory
2. Start your container with this info:
-e CI_SERVER_URL=https://my.ciserver.com \
-e REGISTRATION_TOKEN=12345678901234567890 \
Meaning you can set it up to auto-start the CI with your configs. I've been trying this for two days; if you manage to do it, tell me how =(
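Based on that image's documentation, a sketch of a non-interactive, restart-friendly start would be something like the following (this is only a sketch: the image name comes from the link above, and the URL and token are the placeholders from your question, so adapt them to your setup):
docker run -d --restart=always --name gitlab-ci-runner \
  -e CI_SERVER_URL=http://gitlab.mydomain.com:9000/ \
  -e REGISTRATION_TOKEN=12345678901234567890 \
  ubergarm/gitlab-ci-runner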
