At work we have a bunch of internal servers that use self-signed certificates. I'm trying to install these certs into a Jupyter notebook image so it can access the servers, but for some reason they're not being found. Here is a minimal Dockerfile:
FROM jupyter/datascience-notebook:notebook-6.4.2
USER root
RUN echo 'Acquire::http::proxy "http://proxy.internal.server";' >> /etc/apt/apt.conf.d/99proxy
ENV http_proxy http://proxy.internal.server
ENV https_proxy http://proxy.internal.server
ENV NO_PROXY internal.server
COPY certificates/* /usr/local/share/ca-certificates/
RUN update-ca-certificates
After doing this, when I try to fetch a file, e.g. with curl -O https://internal.server/file, it fails with a message that the certificate is invalid. I have to add the -k flag to turn off SSL verification for it to succeed.
If I follow the same procedure but starting from a vanilla Ubuntu image, then there's no problem. (I do have to install ca-certificates and curl.)
Is there something about the Jupyter image that is messing with the cert store? What is the correct procedure for installing certs?
The reason is that the Jupyter images use conda, and conda ships its own openssl and CA certificates through the ca-certificates conda package.
You can see this in the image:
python -c "import ssl; print(ssl.get_default_verify_paths())"
# DefaultVerifyPaths(cafile='/opt/conda/ssl/cert.pem', capath=None,
# openssl_cafile_env='SSL_CERT_FILE',
# openssl_cafile='/opt/conda/ssl/cert.pem',
# openssl_capath_env='SSL_CERT_DIR',
# openssl_capath='/opt/conda/ssl/certs')
I don't have an ideal solution for using custom CA certificates, but you can try playing with the relevant environment variables:
export SSL_CERT_DIR=/etc/ssl/certs
export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
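For example, baking these into the Dockerfile from the question (a hedged sketch; the paths match the Debian layout that update-ca-certificates maintains, but I haven't verified every tag of the Jupyter image):
ENV SSL_CERT_DIR=/etc/ssl/certs
ENV SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
# sanity check: python should now report the system bundle
RUN python -c "import ssl; print(ssl.get_default_verify_paths())"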
As a last resort you can try to either:
Add the certificate to the conda ca file
openssl x509 -in /path/to/custom/ca.crt -outform PEM >> $CONDA_PREFIX/ssl/cacert.pem
Overwrite the conda CA file with a symlink to the system location (see the sketch below).
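A minimal sketch of the symlink variant, assuming the image's default conda prefix /opt/conda and the Debian bundle location:
ln -sf /etc/ssl/certs/ca-certificates.crt /opt/conda/ssl/cacert.pem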
However, those fixes will break whenever the ca-certificates package is updated.
Related
I am trying to create a simple Docker image that runs .NET Core APIs. The problem is that my environment is behind a proxy with a self-signed certificate, i.e. not trusted :(
Following is my Dockerfile:
## runtime:3.1 does not support certoc or openssl or powershell which forced me to change image to nanoserver-1809
#FROM mcr.microsoft.com/dotnet/core/runtime:3.1
FROM mcr.microsoft.com/dotnet/core/runtime:3.1-nanoserver-1809
ARG source
ARG BUILD_ENV=development
# Option - 1
# ADD z-scaler-certificate.crt /usr/local/share/ca-certificates/z-scaler-certificate.crt
# RUN certoc -addstore root /usr/local/share/ca-certificates/z-scaler-certificate.crt
# Option - 2
# RUN powershell IMPORT-CERTIFICATE -FilePath /usr/z-scaler-certificate.crt -CertStoreLocation 'Cert:\\LocalMachine\Root'
# Option - 3
# RUN CERT_DIR=$(openssl version -d | cut -f2 -d \")/certs; cp /usr/z-scaler-certificate.crt $CERT_DIR; update-ca-certificates
# Option - 4
ADD z-scaler-certificate.crt /container/cert/path
RUN update-ca-certificates
WORKDIR /app
COPY ${source:-bin/Debug/netcoreapp3.1} .
ENTRYPOINT ["dotnet", "Webjob.dll"]
I have tried almost every option I could find on the internet, but they all fail with the same error:
executor failed running [cmd /S /C update-ca-certificates]: unable to find user ContainerUser: invalid argument
I need help figuring out what I am doing wrong that keeps the certificate from being added to the store.
In order to execute admin tasks you should use the ContainerAdministrator user:
FROM mcr.microsoft.com/dotnet/core/runtime:3.1-nanoserver-1809
ARG source
ARG BUILD_ENV=development
USER ContainerAdministrator
...
When working with containers, I'd recommend keeping to standard Linux tech unless there is a good reason. This is the most standard option and will work on the MS Debian images:
COPY z-scaler-certificate.crt /usr/local/share/ca-certificates/z-scaler-certificate.crt
RUN update-ca-certificates
I am assuming here that your CRT file is a valid root certificate.
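If you're not sure, a quick sanity check on the build host (assuming openssl is available there):
openssl x509 -in z-scaler-certificate.crt -noout -subject -issuer
# for a root CA, subject and issuer should be identical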
In my Dockerfile, I'm trying to pull a Python lib from a private repo:
RUN --mount=type=ssh .venv/bin/pip install SOME_LIB --extra-index-url https://example.com/pypi/ -U
Then I tried to run the build using the following command:
docker buildx build --ssh /path/to/the/private/key/id_rsa .
For some reason, it gave me the following error:
#0 0.831 Host key verification failed.
#0 0.831 fatal: Could not read from remote repository.
I've double checked the private key is correct. Did I miss any step to use --mount=type=ssh?
The error has nothing to do with your private key; it is "host key verification failed". That means ssh doesn't recognize the key being presented by the remote host. Its default behavior is to ask whether it should trust the host key; when run in an environment where it can't prompt interactively, it will simply reject the key.
You have a few options to deal with this. In the following examples, I'll be cloning a GitHub private repository (so I'm interacting with github.com), but the process is the same for any other host to which you're connecting with ssh.
Option 1: Inject a global known_hosts file when you build the image.
First, get the hostkey for the hosts to which you'll be connecting
and save it alongside your Dockerfile:
$ ssh-keyscan github.com > known_hosts
Configure your Dockerfile to install this where ssh will find
it:
COPY known_hosts /etc/ssh/ssh_known_hosts
RUN chmod 600 /etc/ssh/ssh_known_hosts; \
chown root:root /etc/ssh/ssh_known_hosts
Option 2: Configure ssh to trust unknown host keys:
RUN sed -i '/^StrictHostKeyChecking/d' /etc/ssh/ssh_config; \
echo StrictHostKeyChecking no >> /etc/ssh/ssh_config
Option 3: Run ssh-keyscan in your Dockerfile when building the image:
RUN ssh-keyscan github.com > /etc/ssh/ssh_known_hosts
All three of these solutions will ensure that ssh trusts the remote host key. The first option is the most secure (the known hosts file will only be updated by you explicitly when you run ssh-keyscan locally). The last option is probably the most convenient.
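Putting option 3 together with the pip install from the question, a minimal sketch (the base image and cleanup steps are my assumptions, not from the question):
# syntax=docker/dockerfile:1
FROM python:3.11-slim
RUN apt-get update && apt-get install -y --no-install-recommends git openssh-client \
    && rm -rf /var/lib/apt/lists/*
# trust the remote host key at build time (option 3 above)
RUN ssh-keyscan github.com > /etc/ssh/ssh_known_hosts
# the ssh mount exposes whatever was passed via --ssh on the CLI
RUN --mount=type=ssh pip install SOME_LIB --extra-index-url https://example.com/pypi/ -U
Build it with, e.g., docker buildx build --ssh default=/path/to/the/private/key/id_rsa .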
This question has two parts. The first part, how to add a root certificate, is simple; see for example How do I add a CA root certificate inside a docker image?
The second part, which is what I actually want to ask, is: how do I keep the root certificate around only at docker build time?
Maybe we can use buildctl and RUN --mount=type=secret, but that cannot cover all cases.
Say I would like to access sites with self-signed certificates, like:
RUN curl https://x01.self-signed-site/obj01
RUN npm install --registry https://x02.self-signed-site/npm
RUN pip install -i https://x03.self-signed-site/pypi/simple
RUN mvn install
...
Thus, we need to configure the certificate for each tool (prepare the certificate, plus .npmrc, .curlrc, and so on). For curl, npm, and pip we can use environment variables, but we cannot guarantee that approach works for every other tool.
Therefore, we need to download the self-signed certificate into the image and also modify some files to apply the certificate config. How do we keep those changes only at build time, with no persistent layer in the final image?
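For the cases it does cover, the RUN --mount=type=secret approach mentioned above keeps the certificate out of every layer, since the file exists only for the duration of a single RUN. A hedged sketch (the env-var names are real for curl, Node/npm, and pip; some-package is a placeholder):
# build with: docker buildx build --secret id=site_ca,src=./ca.crt .
RUN --mount=type=secret,id=site_ca,target=/tmp/ca.crt \
    CURL_CA_BUNDLE=/tmp/ca.crt curl -O https://x01.self-signed-site/obj01 && \
    NODE_EXTRA_CA_CERTS=/tmp/ca.crt npm install --registry https://x02.self-signed-site/npm && \
    PIP_CERT=/tmp/ca.crt pip install -i https://x03.self-signed-site/pypi/simple some-package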
We resolved this problem by using docker save and docker load, but currently docker load does not work as we expect (see also: how to keep layers when doing `docker load`).
Anyway, below is our solution in pseudo-code:
docker save -o out.tar <image>
mkdir contents && cd contents
tar xf ../out.tar
open manifest.json, get config <hash>.json as config.json
remove target layers in:
- config.json[history]
- config.json[rootfs][diff_ids]
- manifest.json[0][Layers]
remove layer tarballs (get layer_hashes from manifest.json[0][Layers]):
- <layer_hash>/*
fill gap between missing layers:
- <layer_hash_next>/json[parent] = <layer_hash_prev>
tar cf ../new.tar *
docker rmi <image>
docker load -i ../new.tar
ref: https://github.com/stallpool/track-network-traffic/blob/main/bin/docker_image_cleanup.py
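For the manifest/config edits, a sketch of a single step with jq (assuming the layer to drop is the trailing one and the last history entry is not an empty_layer marker):
# drop the last layer reference from the manifest and the image config
jq '.[0].Layers |= .[:-1]' manifest.json > m.tmp && mv m.tmp manifest.json
jq '.history |= .[:-1] | .rootfs.diff_ids |= .[:-1]' config.json > c.tmp && mv c.tmp config.json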
I have installed and configured:
an on-premises GitLab Omnibus on ServerA, running on HTTPS
an on-premises GitLab Runner installed as a Docker service on ServerB
The ServerA certificate is generated by a custom CA Root
The Configuration
I've put the CA Root certificate on ServerB:
/srv/gitlab-runner/config/certs/ca.crt
Installed the Runner on ServerB as described in Run GitLab Runner in a container - Docker image installation and configuration:
docker run -d --name gitlab-runner --restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
Registered the Runner as described in Registering Runners - One-line registration command:
docker run --rm -t -i \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
--name gitlab-docker-runner gitlab/gitlab-runner register \
--non-interactive \
--executor "docker" \
--docker-image alpine:latest \
--url "https://MY_PRIVATE_REPO_URL_HERE/" \
--registration-token "MY_PRIVATE_TOKEN_HERE" \
--description "MyDockerServer-Runner" \
--tag-list "TAG_1,TAG_2,TAG_3" \
--run-untagged \
--locked="false"
This command gave the following output:
Updating CA certificates...
Runtime platform arch=amd64 os=linux pid=5 revision=cf91d5e1 version=11.4.2
Running in system-mode.
Registering runner... succeeded runner=8UtcUXCY
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
I checked with
$ docker exec -it gitlab-runner bash
and once in the container with
$ awk -v cmd='openssl x509 -noout -subject' '
/BEGIN/{close(cmd)};{print | cmd}' < /etc/ssl/certs/ca-certificates.crt
and the custom CA root is correctly there.
The Problem
When running Gitlab-Runner from GitLab-CI, the pipeline fails miserably telling me that:
$ git clone https://gitlab-ci-token:${CI_BUILD_TOKEN}@ServerA/foo/bar/My-Project.wiki.git
Cloning into 'My-Project.wiki'...
fatal: unable to access 'https://gitlab-ci-token:xxxxxxxxxxxxxxxxxxxx@ServerA/foo/bar/My-Project.wiki.git/': server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
ERROR: Job failed: exit code 1
It does not recognize the issuer (my custom CA Root), but according to The self-signed certificates or custom Certification Authorities, point n.1, it should work out of the box:
Default: GitLab Runner reads system certificate store and verifies the GitLab server against the CA’s stored in system.
I've then tried the solution from point n.3, editing
/srv/gitlab-runner/config/config.toml and adding:
[[runners]]
tls-ca-file = "/srv/gitlab-runner/config/certs/ca.crt"
But it still doesn't work.
How can I make Gitlab Runner read the CA Root certificate?
You have two options:
Ignore SSL verification
Put this at the top of your .gitlab-ci.yml:
variables:
GIT_SSL_NO_VERIFY: "1"
Point GitLab-Runner to the proper certificate
As outlined in the official documentation, you can use the tls-*-file options to set up your certificate, e.g.:
[[runners]]
...
tls-ca-file = "/etc/gitlab-runner/ssl/ca-bundle.crt"
[runners.docker]
...
As the documentation states, "this file will be read every time when runner tries to access the GitLab server."
Other options include tls-cert-file to define the certificate to be used if needed.
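Note that the path in tls-ca-file is resolved inside the runner container. With the volume mount from the question, /etc/gitlab-runner/ssl/ca-bundle.crt in the container corresponds to a host path like this (the ssl subdirectory is just a convention; any path under the mounted volume works):
sudo mkdir -p /srv/gitlab-runner/config/ssl
sudo cp ca.crt /srv/gitlab-runner/config/ssl/ca-bundle.crt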
While I still haven't figured out why it doesn't work out of the box, I've found the egg of Columbus:
Gitlab-Runner configuration:
[[runners]]
name = "MyDockerServer-Runner"
url = "https://MY_PRIVATE_REPO_URL_HERE/"
token = "MY_TOKEN_HERE"
executor = "docker"
...
[runners.docker]
image = "ubuntu:latest"
# The trick is the following:
volumes = ["/cache","/srv/gitlab-runner/config:/etc/gitlab-runner"]
...
.gitlab-ci.yml pipeline:
MyJob:
image: ubuntu:latest
script:
- awk -v cmd='openssl x509 -noout -subject' '/BEGIN/{close(cmd)};{print | cmd}' < /etc/ssl/certs/ca-certificates.crt
- git clone https://gitlab-ci-token:${CI_BUILD_TOKEN}@ServerA/foo/bar/My-Project.wiki.git
- wget -O foo.png https://ServerA/foo/bar/foo.png
before_script:
- apt-get update -y >/dev/null
- apt-get install -y apt-utils dialog >/dev/null
- apt-get install -y git >/dev/null
- apt-get install -y wget >/dev/null
# The trick is the following:
- cp /etc/gitlab-runner/certs/ca.crt /usr/local/share/ca-certificates/ca.crt
- update-ca-certificates
That's it:
Mount the volume once (per Docker executor)
Update the CA certificates once (per job)
And everything will work as expected: git clone, wget https, etc...
A great workaround, until someone at GitLab fixes it or explains to me where I'm wrong (be my guest!).
Not sure it's the best approach, but at least it worked for me. You can create a customized gitlab runner image and add your root CA inside:
├── Dockerfile
└── myca.crt
# Dockerfile
FROM gitlab/gitlab-runner:latest
COPY myca.crt /usr/local/share/ca-certificates/
RUN update-ca-certificates
Build it:
docker build -t custom-gitlab-runner .
And rerun all your commands, just remember to use this new image name.
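For example, relaunching the runner with the same flags as before, only with the new image name:
docker run -d --name gitlab-runner --restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
custom-gitlab-runner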
Off-topic, but related and might be useful
The Dockerized gitlab-runner also seems to ignore entries in your /etc/hosts, so if you have launched GitLab on a custom domain, e.g. https://gitlab.local.net, you need to pass the values from /etc/hosts when launching/registering the gitlab-runner:
docker run -d --name gitlab-runner --restart always \
--add-host="gitlab.local.net:192.168.1.100" \
...
If you want to launch a docker:dind (Docker-in-Docker) service container to build Docker images, you also need to set these values inside /srv/gitlab-runner/config/config.toml:
[[runners]]
url = "https://gitlab.local.net/"
executor = "docker"
pre_clone_script = "echo '192.168.1.100 gitlab.local.net registry.local.net' >> /etc/hosts"
...
From the output you provided, I think the certificate might be OK but you are lacking the CRL file: server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
The CRL file is used to verify that, even if the certificate is valid, it hasn't been revoked by the CA owner. You then need to:
1) Generate a CRL file based on your CA:
openssl ca -gencrl -keyfile ca.key -cert ca.crt -out crl.pem
source: https://blog.didierstevens.com/2013/05/08/howto-make-your-own-cert-and-revocation-list-with-openssl/
2) Instruct the runner to use it:
[[runners]]
...
tls-ca-file = "/etc/gitlab-runner/ssl/ca-bundle.crt"
crl-file = "/etc/gitlab-runner/ssl/ca.crl"
3) Of course, setting GIT_SSL_NO_VERIFY will work, but you will be more vulnerable to man-in-the-middle attacks.
I am trying to deploy a docker configuration with images on a private docker registry.
Now, every time I execute docker login registry.example.com, I get the following error message:
error getting credentials - err: exit status 1, out: Cannot autolaunch D-Bus without X11 $DISPLAY
The only solution I found for non-MacOS users was to run export $(dbus-launch) first, but that did not change anything.
I am running Ubuntu Server and tried with both the Ubuntu Docker package and the Docker-CE package.
How can I log in without an X11 session?
Looks like this is because it defaults to using the secretservice executable, which seems to have some sort of X11 dependency. If you install and configure pass, docker will use that instead, which seems to solve the problem.
In a nutshell (from https://github.com/docker/compose/issues/6023)
sudo apt install gnupg2 pass
gpg2 --full-generate-key
This generates a gpg2 key. After that's done you can list it with
gpg2 -k
Copy the key id (from the line labelled [uid]) and do
pass init "whatever key id you have"
Now docker login should work.
There are a couple of bugs logged on launchpad regarding this:
https://bugs.launchpad.net/ubuntu/+source/golang-github-docker-docker-credential-helpers/+bug/1794307
https://bugs.launchpad.net/ubuntu/+source/docker-compose/+bug/1796119
This works: sudo apt remove golang-docker-credential-helpers
You can remove the offending package golang-docker-credential-helpers without removing all of docker-compose.
The following worked for me on a server without X11 installed:
dpkg -r --ignore-depends=golang-docker-credential-helpers golang-docker-credential-helpers
and then
echo 'foo' | docker login mydockerrepo.com -u dockeruser --password-stdin
Source:
bug reported in debian:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=910823#39
bug reported on ubuntu:
https://bugs.launchpad.net/ubuntu/+source/docker-compose/+bug/1796119
secretservice requires a GUI. You can use pass without a GUI.
Unfortunately, Docker's documentation on how to configure Docker Credential Helpers is quite lacking. Here's a comprehensive guide on configuring pass with Docker (tested on Ubuntu 18.04):
1. Install the Docker Credential Helper for pass
Find the URL for the latest version of docker-credential-pass at https://github.com/docker/docker-credential-helpers/releases. For example:
# substitute with the latest version
url=https://github.com/docker/docker-credential-helpers/releases/download/v0.6.2/docker-credential-pass-v0.6.2-amd64.tar.gz
# download and untar the binary
wget $url
tar -xzvf $(basename $url)
# move the binary to a dir in your $PATH
sudo mv docker-credential-pass /usr/local/bin
# verify it works
docker-credential-pass list
2. Install and configure pass
apt install pass
# create a gpg2 key
gpg2 --gen-key
# if you have issues with lack of entropy, "apt install haveged" and try again
# create the password store using the gpg user id above
pass init $gpg_id
3. docker login
docker login
# You should not see any credentials stored in "auths" section.
# "credsStore": "pass" should have been automatically added.
# If the value is "secretservice", replace it with "pass".
cat ~/.docker/config.json
# verify credentials stored in `pass` store now
pass
There is a much easier answer than the ones already posted, which I found in a comment on https://github.com/docker/docker-credential-helpers/issues/105.
The solution is to move docker-credential-secretservice out of the way, e.g.:
mv /usr/bin/docker-credential-secretservice /usr/bin/docker-credential-secretservice.broken
Once you do this, docker login works regardless of whether or not docker-compose is installed. No other package additions or removals are necessary.
I resolved this issue by uninstalling the docker-compose that was installed from the Ubuntu repo and installing docker-compose following the official instructions at https://docs.docker.com/compose/install/#install-compose
What helped me on Ubuntu 18.04 was:
Following the steps in @oberstet's post and uninstalling the golang helper
Performing a login after the helper uninstall
Reinstalling docker via sudo apt-get install docker
Logging back in via sudo docker login