How to make GitLab Runner in Docker see a custom CA Root certificate - docker

I have installed and configured:
an on-premises GitLab Omnibus on ServerA running on HTTPS
an on-premises GitLab-Runner installed as Docker Service in ServerB
ServerA certificate is generated by a custom CA Root
The Configuration
I've put the CA Root certificate on ServerB:
/srv/gitlab-runner/config/certs/ca.crt
Installed the Runner on ServerB as described in Run GitLab Runner in a container - Docker image installation and configuration:
docker run -d --name gitlab-runner --restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
Registered the Runner as described in Registering Runners - One-line registration command:
docker run --rm -t -i \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
--name gitlab-docker-runner gitlab/gitlab-runner register \
--non-interactive \
--executor "docker" \
--docker-image alpine:latest \
--url "https://MY_PRIVATE_REPO_URL_HERE/" \
--registration-token "MY_PRIVATE_TOKEN_HERE" \
--description "MyDockerServer-Runner" \
--tag-list "TAG_1,TAG_2,TAG_3" \
--run-untagged \
--locked="false"
This command gave the following output:
Updating CA certificates...
Runtime platform arch=amd64 os=linux pid=5 revision=cf91d5e1 version=11.4.2
Running in system-mode.
Registering runner... succeeded runner=8UtcUXCY
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
I checked with
$ docker exec -it gitlab-runner bash
and once in the container with
$ awk -v cmd='openssl x509 -noout -subject' '
/BEGIN/{close(cmd)};{print | cmd}' < /etc/ssl/certs/ca-certificates.crt
and the custom CA root is correctly there.
The Problem
When running GitLab-Runner from GitLab-CI, the pipeline fails miserably, telling me:
$ git clone https://gitlab-ci-token:${CI_BUILD_TOKEN}@ServerA/foo/bar/My-Project.wiki.git
Cloning into 'My-Project.wiki'...
fatal: unable to access 'https://gitlab-ci-token:xxxxxxxxxxxxxxxxxxxx@ServerA/foo/bar/My-Project.wiki.git/': server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
ERROR: Job failed: exit code 1
It does not recognize the issuer (my custom CA Root), but according to The self-signed certificates or custom Certification Authorities, point n. 1, it should work out of the box:
Default: GitLab Runner reads system certificate store and verifies the GitLab server against the CA’s stored in system.
I've then tried the solution from point n.3, editing
/srv/gitlab-runner/config/config.toml
and adding:
[[runners]]
tls-ca-file = "/srv/gitlab-runner/config/certs/ca.crt"
But it still doesn't work.
How can I make Gitlab Runner read the CA Root certificate?

You have two options:
Ignore SSL verification
Put this at the top of your .gitlab-ci.yml:
variables:
  GIT_SSL_NO_VERIFY: "1"
Point GitLab-Runner to the proper certificate
As outlined in the official documentation, you can use the tls-*-file options to set up your certificate, e.g.:
[[runners]]
...
tls-ca-file = "/etc/gitlab-runner/ssl/ca-bundle.crt"
[runners.docker]
...
As the documentation states, "this file will be read every time when runner tries to access the GitLab server."
Other options include tls-cert-file to define the certificate to be used if needed.
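Note that with the dockerized runner this has to be the container-side path. Since the question mounts /srv/gitlab-runner/config into the container at /etc/gitlab-runner, a minimal sketch of the matching entry (assuming that mount) would be:
[[runners]]
  tls-ca-file = "/etc/gitlab-runner/certs/ca.crt"
i.e. the path as seen inside the container, not the host path /srv/gitlab-runner/config/certs/ca.crt used in the question.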

While I still haven't figured out why it doesn't work out of the box, I've found the egg of Columbus:
Gitlab-Runner configuration:
[[runners]]
name = "MyDockerServer-Runner"
url = "https://MY_PRIVATE_REPO_URL_HERE/"
token = "MY_TOKEN_HERE"
executor = "docker"
...
[runners.docker]
image = "ubuntu:latest"
# The trick is the following:
volumes = ["/cache","/srv/gitlab-runner/config:/etc/gitlab-runner"]
...
.gitlab-ci.yml pipeline:
MyJob:
  image: ubuntu:latest
  script:
    - awk -v cmd='openssl x509 -noout -subject' '/BEGIN/{close(cmd)};{print | cmd}' < /etc/ssl/certs/ca-certificates.crt
    - git clone https://gitlab-ci-token:${CI_BUILD_TOKEN}@ServerA/foo/bar/My-Project.wiki.git
    - wget -O foo.png https://ServerA/foo/bar/foo.png
  before_script:
    - apt-get update -y >/dev/null
    - apt-get install -y apt-utils dialog >/dev/null
    - apt-get install -y git >/dev/null
    - apt-get install -y wget >/dev/null
    # The trick is the following:
    - cp /etc/gitlab-runner/certs/ca.crt /usr/local/share/ca-certificates/ca.crt
    - update-ca-certificates
That's it:
Mount the volume once (per Docker executor)
Update the CA certificates once (per job)
And everything will work as expected: git clone, wget https, etc...
A great workaround, until someone at GitLab fixes it or explains to me where I'm wrong (be my guest!)

Not sure it's the best approach, but at least it worked for me. You can create a customized gitlab runner image and add your root CA inside:
├── Dockerfile
└── myca.crt
# Dockerfile
FROM gitlab/gitlab-runner:latest
COPY myca.crt /usr/local/share/ca-certificates
RUN update-ca-certificates
Build it:
docker build -t custom-gitlab-runner .
And rerun all your commands, just remember to use this new image name.
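For example, the install command from the question would then become (same flags as before, only the image name swapped for the locally built one):
docker run -d --name gitlab-runner --restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
custom-gitlab-runner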
Off-topic, but related and might be useful
The dockerized gitlab-runner seems to also ignore entries in your /etc/hosts, so if you have launched GitLab on a custom domain, e.g. https://gitlab.local.net, you need to pass the values from /etc/hosts when launching/registering the gitlab-runner:
docker run -d --name gitlab-runner --restart always \
--add-host="gitlab.local.net:192.168.1.100" \
...
If you want to launch a docker:dind (Docker-in-Docker service) container to build Docker images, you also need to set these values inside /srv/gitlab-runner/config/config.toml:
[[runners]]
url = "https://gitlab.local.net/"
executor = "docker"
pre_clone_script = "echo '192.168.1.100 gitlab.local.net registry.local.net' >> /etc/hosts"
...

From the output you provided, I think that the certificate might be OK, but you are lacking the CRL file: server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
The CRL file is used to verify that, even if the certificate is valid, it hasn't been revoked by the CA owner. You would then need to:
1) Generate a CRL file based on your CA:
openssl ca -gencrl -keyfile ca.key -cert ca.crt -out crl.pem
source: https://blog.didierstevens.com/2013/05/08/howto-make-your-own-cert-and-revocation-list-with-openssl/
2) Instruct the runner to use it:
[[runners]]
...
tls-ca-file = "/etc/gitlab-runner/ssl/ca-bundle.crt"
crl-file = "/etc/gitlab-runner/ssl/ca.crl"
3) Of course, setting GIT_SSL_NO_VERIFY will work, but you will be more exposed to man-in-the-middle attacks.

Related

Cannot start keycloak in docker with letsencrypt certificates

I can run Keycloak with the following command
./bin/kc.sh start-dev \
--https-certificate-file=/etc/letsencrypt/live/$HOSTNAME/cert.pem \
--https-certificate-key-file=/etc/letsencrypt/live/$HOSTNAME/privkey.pem \
--hostname=$HOSTNAME
Works as expected
On the same computer, I try to run using Docker
docker run -p 80:8080 -p 443:8443 \
-v /etc/letsencrypt:/etc/letsencrypt:ro \
-e KEYCLOAK_ADMIN=admin \
-e KEYCLOAK_ADMIN_PASSWORD=change_me \
-e JAVA_OPTS_APPEND="$JAVA_OPTS_APPEND" \
quay.io/keycloak/keycloak:latest \
start-dev \
--https-certificate-file=/ect/letsencrypt/live/$HOSTNAME/cert.pem \
--https-certificate-key-file=/ect/letsencrypt/live/$HOSTNAME/privkey.pem \
--hostname=$HOSTNAME
It fails
2022-12-23 23:11:59,784 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (development) mode
2022-12-23 23:11:59,785 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: /ect/letsencrypt/live/keycloak.fhir-poc.hcs.us.com/cert.pem
2022-12-23 23:11:59,787 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) Key material not provided to setup HTTPS. Please configure your keys/certificates.
Any suggestions besides a reverse proxy?
The problem stems from Let's Encrypt's symlinked directory structure on Linux and the permissions required to access those files.
The Let's Encrypt linked directory structure works like this:
/etc/letsencrypt/live/<your-domain>/*.pem -> /etc/letsencrypt/archive/<your-domain>/*.pem
The problem is the link from the live to the archive folder/file.
The permissions are mostly not correct.
A quick fix is to create a cert mirror and copy the related files from /etc/letsencrypt/live/<your-domain>/*.pem
to a new cert folder like /opt/certs
change the permissions in /opt/certs to 777: chmod 777 -R /opt/certs
create a cron.monthly job in /etc/cron.monthly which copies the files to /opt/certs and fixes the permissions every month, so that your cert mirror stays up to date
This will make your example work. Please keep in mind that permissions like 777 let everyone access these files; you should use correct permissions in a production environment.
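A rough sketch of that quick fix as a monthly cron script (paths and domain are placeholders; tighten the permissions for production):
#!/bin/sh
# /etc/cron.monthly/copy-certs (illustrative)
# copy the dereferenced Let's Encrypt files into the mirror folder
mkdir -p /opt/certs
cp -L /etc/letsencrypt/live/<your-domain>/*.pem /opt/certs/
chmod -R 777 /opt/certs   # 777 only for testing, restrict this in production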
I discovered the answer:
Let's Encrypt certificates in the "live" folder are symlinks to the "archive" folder, and I needed a custom Docker image for Keycloak to mount my certificates. So I followed the Keycloak docs for creating a custom Docker image and started a container with that image.
Following
https://www.keycloak.org/server/containers
https://eff-certbot.readthedocs.io/en/stable/using.html#where-are-my-certificates
to build a custom image and change the cert permissions
Dockerfile
FROM quay.io/keycloak/keycloak:latest as builder
ENV KEYCLOAK_ADMIN=root
ENV KEYCLOAK_ADMIN_PASSWORD=change_me
WORKDIR /opt/keycloak
FROM quay.io/keycloak/keycloak:latest
COPY --from=builder /opt/keycloak/ /opt/keycloak/
COPY kc-export.json /opt/keycloak/kc-export.json
RUN /opt/keycloak/bin/kc.sh import --file /opt/keycloak/kc-export.json
VOLUME [ "/opt/keycloak/certs" ]
ENTRYPOINT ["/opt/keycloak/bin/kc.sh"]
Then start the container
docker run -p 8443:8443 \
-v /etc/letsencrypt:/etc/letsencrypt:ro \
-e KEYCLOAK_ADMIN=admin \
-e KEYCLOAK_ADMIN_PASSWORD=change_me \
-e JAVA_OPTS_APPEND="$JAVA_OPTS_APPEND" \
my-keycloak-image:latest \
start-dev \
--https-certificate-file=/opt/keycloak/certs/live/$HOSTNAME/cert.pem \
--https-certificate-key-file=/opt/keycloak/certs/live/$HOSTNAME/privkey.pem \
--hostname=$HOSTNAME

Dockerfile `RUN --mount=type=ssh` doesn't seem to work

In my Dockerfile, I'm trying to pull a Python lib from a private repo:
RUN --mount=type=ssh .venv/bin/pip install SOME_LIB --extra-index-url https://example.com/pypi/ -U
Then I tried to run the build using the following command:
docker buildx build --ssh /path/to/the/private/key/id_rsa .
For some reason, it gave me the following error:
#0 0.831 Host key verification failed.
#0 0.831 fatal: Could not read from remote repository.
I've double checked the private key is correct. Did I miss any step to use --mount=type=ssh?
The error has nothing to do with your private key; it is "host key verification failed". That means that ssh doesn't recognize the key being presented by the remote host. Its default behavior is to ask whether it should trust the host key, and when run in an environment where it can't prompt interactively, it will simply reject the key.
You have a few options to deal with this. In the following examples, I'll be cloning a GitHub private repository (so I'm interacting with github.com), but the process is the same for any other host to which you're connecting with ssh.
Inject a global known_hosts file when you build the image.
First, get the hostkey for the hosts to which you'll be connecting
and save it alongside your Dockerfile:
$ ssh-keyscan github.com > known_hosts
Configure your Dockerfile to install this where ssh will find
it:
COPY known_hosts /etc/ssh/ssh_known_hosts
RUN chmod 600 /etc/ssh/ssh_known_hosts; \
chown root:root /etc/ssh/ssh_known_hosts
Configure ssh to trust unknown host keys:
RUN sed -i '/^StrictHostKeyChecking/d' /etc/ssh/ssh_config; \
echo StrictHostKeyChecking no >> /etc/ssh/ssh_config
Run ssh-keyscan in your Dockerfile when building the image:
RUN ssh-keyscan github.com > /etc/ssh/ssh_known_hosts
All three of these solutions will ensure that ssh trusts the remote host key. The first option is the most secure (the known hosts file will only be updated by you explicitly when you run ssh-keyscan locally). The last option is probably the most convenient.
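Putting option 3 together with the RUN line from the question, a minimal Dockerfile sketch (assuming github.com is the host the pip install connects to) could look like:
# option 3: fetch and trust the host key at build time
RUN ssh-keyscan github.com > /etc/ssh/ssh_known_hosts
# the key forwarded via `docker buildx build --ssh ...` is then usable here
RUN --mount=type=ssh .venv/bin/pip install SOME_LIB --extra-index-url https://example.com/pypi/ -U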

SSH keys for Docker executor

I have created an image where I run some tasks.
I want to be able to push some files to a remote server that runs Windows Server 2022.
The gitlab-runner runs on an Ubuntu machine.
I managed to do that using shell executors. But now I want to do the same inside a docker container.
Using the following guide
https://docs.gitlab.com/ee/ci/ssh_keys/#ssh-keys-when-using-the-docker-executor
I don't understand under which user I should create the keys.
In the shell executor case I used the gitlab-runner user, for which I created a pair of keys. I added the public key to the server that I want to push files to, and it worked.
However, I added the same private key into the gitlab CI/CD variable as the guide suggests.
Then inside the job I added the following:
before_script:
  - 'command -v ssh-agent >/dev/null || ( apt-get update -y && apt-get install openssh-client -y )'
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
script:
  - scp -P <port> myfile.txt username@ip:remote_path
But the job fails with errors
Host key verification failed.
lost connection
Should I use the same private key from gitlab-runner user?
PS: The echo "$SSH_PRIVATE_KEY" works. I can see the key I added in the gitlab CI/CD variable.
I use something similar in my CI process and it works like a charm. I recall I used a base64-encoded runner key due to some formatting errors:
- echo $GITLAB_RUNNER_SSH_KEY | base64 -d > $HOME/.ssh/runner_key
- chmod -R 600 ~/.ssh
- eval $(ssh-agent -s)
- ssh-add $HOME/.ssh/runner_key
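Since the failing step in the question reports "Host key verification failed", the remote host's key also needs to be trusted; a hedged addition to the before_script (port and host are placeholders) would be:
- mkdir -p ~/.ssh && chmod 700 ~/.ssh
- ssh-keyscan -p <port> <server-ip> >> ~/.ssh/known_hosts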

Docker, how to deal with ssh keys, known_hosts and authorized_keys

In Docker, how do you cope with the requirement of configuring known_hosts, authorized_keys, and SSH connectivity in general, when a container has to talk to external systems?
For example, I'm running a Jenkins container and trying to check out a project from GitHub in a job, but the connection fails with the error: host key verification failed.
This could be solved by logging into the container, connecting to GitHub manually, and trusting the host key when prompted. However, this isn't a proper solution, as everything needs to be 100% automated (I'm building a CI pipeline with Ansible and Docker). Another (clunky) solution would be to provision the running container with Ansible, but this would make things messy and hard to maintain. The Jenkins container doesn't even have an SSH daemon, and I'm not sure how to SSH into the container from another host. A third option would be to use my own Dockerfile extending the Jenkins image, with SSH configured, but that would be hardcoding and would lock the container to this specific environment.
So what is the correct way with docker to manage (and automate) connectivity with external systems?
To trust github.com host you can issue this command when you start or build your container:
ssh-keyscan -t rsa github.com >> ~/.ssh/known_hosts
This will add github public key to your known hosts file.
If everything is done in the Dockerfile it's easy.
In my Dockerfile:
ARG PRIVATE_SSH_KEY
# Authorize SSH Host
RUN mkdir -p /root/.ssh && \
chmod 0700 /root/.ssh && \
ssh-keyscan example.com > /root/.ssh/known_hosts && \
# Add the keys and set permissions
echo "$PRIVATE_SSH_KEY" > /root/.ssh/id_rsa && \
chmod 600 /root/.ssh/id_rsa
...do stuff with private key
# Remove SSH keys
RUN rm -rf /root/.ssh/
You obviously need to pass the private key as an argument to the build (docker-compose build or docker build).
One solution is to mount host's ssh keys into docker with following options:
docker run -v /home/<host user>/.ssh:/home/<docker user>/.ssh <image>
This works perfectly for git.
There is a small trick, but your Git version should be 2.3 or newer:
export GIT_SSH_COMMAND="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
git clone git@gitlab.com:some/another/repo.git
or simply
GIT_SSH_COMMAND="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no" git clone git@...
you can even point to a private key file path like this:
GIT_SSH_COMMAND="ssh -i /path/to/private_key_file -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no" git clone git@...
This is how I do it, not sure if you will like this solution though. I have a private git repository containing authorized_keys with a collection of public keys. Then, I use ansible to clone this repository and replace authorized_keys:
- git: repo=my_repo dest=my_local_folder force=yes accept_hostkey=yes
- shell: "cp my_local_folder/authorized_keys ~/.ssh/"
Using accept_hostkey is what actually allows me to automate the process (I trust the source, of course).
Try this:
Log into the host, then:
sudo mkdir /var/jenkins_home/.ssh/
sudo ssh-keyscan -t rsa github.com >> /var/jenkins_home/.ssh/known_hosts
The Jenkins container sets its home location to the persistent mapping, so running this on the host system will produce the required result.
Detailed answer to the one provided by @Konstantin Suvorov, if you are going to use a Dockerfile.
In my Dockerfile I just added:
# copy rsa key
COPY my_rsa /root/.ssh/my_rsa
# make it accessible
RUN chmod 600 /root/.ssh/my_rsa
# install openssh
RUN apt-get -y install openssh-server
# add hostname to known_hosts
RUN ssh-keyscan my_hostname >> ~/.ssh/known_hosts
Note that "my_hostname" and "my_rsa" are your host-name and your rsa key
This made ssh work in docker without any issues, so I could connect to DBs

How can I let the gitlab-ci-runner DinD image cache intermediate images?

I have a Dockerfile that starts with installing the texlive-full package, which is huge and takes a long time. If I docker build it locally, the intermedate image created after installation is cached, and subsequent builds are fast.
However, if I push to my own GitLab install and the GitLab-CI build runner starts, this always seems to start from scratch, redownloading the FROM image, and doing the apt-get install again. This seems like a huge waste to me, so I'm trying to figure out how to get the GitLab DinD image to cache the intermediate images between builds, without luck so far.
I have tried using the --cache-dir and --docker-cache-dir for the gitlab-runner register command, to no avail.
Is this even something the gitlab-runner DinD image is supposed to be able to do?
My .gitlab-ci.yml:
build_job:
  script:
    - docker build --tag=example/foo .
My Dockerfile:
FROM php:5.6-fpm
MAINTAINER Roel Harbers <roel.harbers@example.com>
RUN apt-get update && apt-get install -qq -y --fix-missing --no-install-recommends texlive-full
RUN echo Do other stuff that has to be done every build.
I use GitLab CE 8.4.0 and gitlab/gitlab-runner:latest as runner, started as
docker run -d --name gitlab-runner --restart always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/local/gitlab-ci-runner/config:/etc/gitlab-runner \
gitlab/gitlab-runner:latest \
; \
The runner is registered using:
docker exec -it gitlab-runner gitlab-runner register \
--name foo.example.com \
--url https://gitlab.example.com/ci \
--cache-dir /cache/build/ \
--executor docker \
--docker-image gitlab/dind:latest \
--docker-privileged \
--docker-disable-cache false \
--docker-cache-dir /cache/docker/ \
; \
This creates the following config.toml:
concurrent = 1
[[runners]]
name = "foo.example.com"
url = "https://gitlab.example.com/ci"
token = "foobarsldkflkdsjfkldsj"
tls-ca-file = ""
executor = "docker"
cache_dir = "/cache/build/"
[runners.docker]
image = "gitlab/dind:latest"
privileged = true
disable_cache = false
volumes = ["/cache"]
cache_dir = "/cache/docker/"
(I have experimented with different values for cache_dir, docker_cache_dir and disable_cache, all with the same result: no caching whatsoever)
I suppose there's no simple answer to your question. Before adding some details, I strongly suggest reading this blog article from the maintainer of DinD, which was originally titled "do not use Docker in Docker for CI".
What you might try is declaring /var/lib/docker as a volume for your GitLab runner. But be warned, depending on your file-system drivers you may use AUFS in the container on an AUFS filesystem on your host, which is very likely to cause problems.
What I'd suggest is creating a separate Docker VM, only for the runner(s), and bind-mounting docker.sock from the VM into your runner container.
We are using this setup with GitLab with great success (>27.000 builds in about 12 months).
You can take a look at our runner with docker-compose support which is actually based on the shell-executor of GitLab's runner.
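A minimal sketch of that docker.sock bind-mount, assuming the runner container is started on the dedicated VM with the default socket path and the image used earlier in this thread:
docker run -d --name gitlab-runner --restart always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
gitlab/gitlab-runner:latest
and in config.toml, hand the same socket to the job containers so docker build reuses the VM's layer cache:
[runners.docker]
  volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]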
Currently you cannot cache intermediate layers in GitLab Docker-in-Docker, although there are plans to add that (mentioned in the link below). What you can do today to speed up your DinD build is to use the overlay filesystem. To do this you need to be running a Linux kernel >= 3.18 and make sure you load the overlay kernel module. Then set this variable in your .gitlab-ci.yml:
variables:
  DOCKER_DRIVER: overlay
For more information see this issue and in particular this comment on "The state of optimising Docker Builds!", see the "Using docker executor with dind" section.
https://gitlab.com/gitlab-org/gitlab-ce/issues/17861#note_12991518
For build dependencies that do not change so often, you can do a kind of manual caching with the GitLab image registry.
In the CI script you do not explicitly call docker build but rather wrap it in a shell script:
# cat build_dependencies.sh
registry=registry.example.com
project=group/project
imagebase=$registry/$project/linux
docker pull $imagebase/devbase:1.0
if [ $? -ne 0 ]; then
docker build -f devbase.dockerfile -t $imagebase/devbase:1.0 .
docker push $imagebase/devbase:1.0
fi
...
and call that script in your CI
...
script:
  - ./build_dependencies.sh
The downside to this is that when your devbase.dockerfile is updated, this goes unnoticed by CI, so you need to force the build and push of a new image. For dynamically changing images this does not work well, but for your use case it seems like a possible way to go.
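One hedged way around that downside is to derive the tag from the Dockerfile contents, so any change to devbase.dockerfile produces a new tag and forces a rebuild (hypothetical variation of the script above):
# tag derived from the Dockerfile contents
tag=$(md5sum devbase.dockerfile | cut -c1-8)
if ! docker pull $imagebase/devbase:$tag; then
    docker build -f devbase.dockerfile -t $imagebase/devbase:$tag .
    docker push $imagebase/devbase:$tag
fi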
