Keycloak SSL setup using docker image - docker

I am trying to deploy Keycloak using the Docker image (https://hub.docker.com/r/jboss/keycloak/, version 4.5.0-Final) and am facing an issue with setting up SSL.
According to the docs
Keycloak image allows you to specify both a
private key and a certificate for serving HTTPS. In that case you need
to provide two files:
tls.crt - a certificate
tls.key - a private key
Those files need to be mounted in /etc/x509/https directory. The image will automatically
convert them into a Java keystore and reconfigure Wildfly to use it.
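In other words (my understanding of the docs), a run command along these lines should be enough; the certificate folder path and admin credentials here are placeholders:
docker run -d -p 8443:8443 \
  -v /path/to/certs:/etc/x509/https \
  -e KEYCLOAK_USER=admin \
  -e KEYCLOAK_PASSWORD=admin \
  jboss/keycloak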
I followed the given steps and set up the volume mount with a folder containing the necessary files (tls.crt and tls.key), but I am facing issues with the SSL handshake, getting an
ERR_SSL_VERSION_OR_CIPHER_MISMATCH
error that blocks Keycloak from loading in the browser when I try to access it.
I used letsencrypt to generate PEM files and openssl to create the .crt and .key files.
I also tried using just openssl to create those files to narrow down the issue, and the behavior is the same. (Some additional info, in case it matters:)
By default, when I simply specify just the port binding -p 8443:8443 without the cert volume mount on /etc/x509/https, the Keycloak server generates a self-signed certificate and I have no issue viewing the app in the browser.
I guess this might be more of a certificate creation issue than anything specific to Keycloak, but I'm unsure how to get this working.
Any help is appreciated

I also faced the issue of getting an ERR_SSL_VERSION_OR_CIPHER_MISMATCH error, using the jboss/keycloak Docker image and free certificates from letsencrypt, even after considering the advice from the other comments. Now I have a working (and quite easy) setup, which might also help you.
1) Generate letsencrypt certificate
At first, I generated my letsencrypt certificate for domain sub.example.com using the certbot. You can find detailed instructions and alternative ways to gain a certificate at https://certbot.eff.org/ and the user guide at https://certbot.eff.org/docs/using.html.
$ sudo certbot certonly --standalone
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Please enter in your domain name(s) (comma and/or space separated) (Enter 'c' to cancel): sub.example.com
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for sub.example.com
Waiting for verification...
Cleaning up challenges
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/sub.example.com/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/sub.example.com/privkey.pem
Your cert will expire on 2020-01-27. To obtain a new or tweaked
version of this certificate in the future, simply run certbot
again. To non-interactively renew *all* of your certificates, run
"certbot renew"
2) Prepare docker-compose environment
I use docker-compose to run keycloak via docker. The config and data files are stored in path /srv/docker/keycloak/.
Folder config contains the docker-compose.yml
Folder data/certs contains the certificates I generated via letsencrypt
Folder data/keycloak_db is mapped to the database container to make its data persistent.
Put the certificate files to the right path
When I first had issues using the original letsencrypt certificates for keycloak, I tried the workaround of converting the certificates to another format, as mentioned in the comments of the former answers, which also failed. Eventually, I realized that my problem was caused by the permissions set on the mapped certificate files.
So, what worked for me is just to copy and rename the files provided by letsencrypt, and mount them into the container.
$ cp /etc/letsencrypt/live/sub.example.com/fullchain.pem /srv/docker/keycloak/data/certs/tls.crt
$ cp /etc/letsencrypt/live/sub.example.com/privkey.pem /srv/docker/keycloak/data/certs/tls.key
$ chmod 755 /srv/docker/keycloak/data/certs/
$ chmod 604 /srv/docker/keycloak/data/certs/*
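Since Let's Encrypt certificates expire after roughly 90 days (see the certbot output above), the same copy steps have to be repeated after each renewal. A possible sketch of a certbot deploy hook automating this, assuming the paths and container name from this setup (not part of my original setup):
#!/bin/sh
# e.g. saved as /etc/letsencrypt/renewal-hooks/deploy/keycloak.sh and made executable;
# certbot runs scripts in this directory after each successful renewal
cp /etc/letsencrypt/live/sub.example.com/fullchain.pem /srv/docker/keycloak/data/certs/tls.crt
cp /etc/letsencrypt/live/sub.example.com/privkey.pem /srv/docker/keycloak/data/certs/tls.key
chmod 604 /srv/docker/keycloak/data/certs/*
docker restart keycloak_app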
docker-compose.yml
In my case, I needed to use the host network of my docker host. This is not best practice and should not be required for your case. Please find information about configuration parameters in the documentation at hub.docker.com/r/jboss/keycloak/.
version: '3.7'
networks:
  default:
    external:
      name: host
services:
  keycloak:
    container_name: keycloak_app
    image: jboss/keycloak
    depends_on:
      - mariadb
    restart: always
    ports:
      - "8080:8080"
      - "8443:8443"
    volumes:
      - "/srv/docker/keycloak/data/certs/:/etc/x509/https" # map certificates to container
    environment:
      KEYCLOAK_USER: <user>
      KEYCLOAK_PASSWORD: <pw>
      KEYCLOAK_HTTP_PORT: 8080
      KEYCLOAK_HTTPS_PORT: 8443
      KEYCLOAK_HOSTNAME: sub.example.com
      DB_VENDOR: mariadb
      DB_ADDR: localhost
      DB_USER: keycloak
      DB_PASSWORD: <pw>
    network_mode: host
  mariadb:
    container_name: keycloak_db
    image: mariadb
    volumes:
      - "/srv/docker/keycloak/data/keycloak_db:/var/lib/mysql"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: <pw>
      MYSQL_DATABASE: keycloak
      MYSQL_USER: keycloak
      MYSQL_PASSWORD: <pw>
    network_mode: host
Final directory setup
This is what my final file and folder setup looks like.
$ cd /srv/docker/keycloak/
$ tree
.
├── config
│   └── docker-compose.yml
└── data
    ├── certs
    │   ├── tls.crt
    │   └── tls.key
    └── keycloak_db
Start container
Finally, I was able to start my software using docker-compose.
$ cd /srv/docker/keycloak/config/
$ sudo docker-compose up -d
We can doublecheck the mounted certificates within the container.
## open internal shell of keycloak container
$ sudo docker exec -it keycloak_app /bin/bash
## open directory of certificates
$ cd /etc/x509/https/
$ ll
-rw----r-- 1 root root 3586 Oct 30 14:21 tls.crt
-rw----r-- 1 root root 1708 Oct 30 14:20 tls.key
Considering the setup from the docker-compose.yml, keycloak is now available at https://sub.example.com:8443

After some research, the following method worked (for self-signed certs; I still have to figure out how to do this with a letsencrypt CA for prod):
generate a self-signed cert using the keytool
keytool -genkey -alias localhost -keyalg RSA -keystore keycloak.jks -validity 10950
convert .jks to .p12
keytool -importkeystore -srckeystore keycloak.jks -destkeystore keycloak.p12 -deststoretype PKCS12
generate .crt from .p12 keystore
openssl pkcs12 -in keycloak.p12 -nokeys -out tls.crt
generate .key from .p12 keystore
openssl pkcs12 -in keycloak.p12 -nocerts -nodes -out tls.key
Then use the tls.crt and tls.key for the volume mount at /etc/x509/https.
Also, in the application being secured, specify the following properties in the keycloak.json file:
"truststore" : "path/to/keycloak.jks",
"truststore-password" : "<jks-pwd>",

For anyone who is trying to run Keycloak with a passphrase protected private key file:
Keycloak runs the script /opt/jboss/tools/x509.sh to generate the keystore based on the files provided in /etc/x509/https, as described in https://hub.docker.com/r/jboss/keycloak under "Setting up TLS (SSL)".
Unfortunately, this script does not take a passphrase into account. But with a little modification at Docker build time, you can fix it yourself:
Within your Dockerfile add:
RUN sed -i -e 's/-out "${KEYSTORES_STORAGE}\/${PKCS12_KEYSTORE_FILE}" \\/-out "${KEYSTORES_STORAGE}\/${PKCS12_KEYSTORE_FILE}" \\\n -passin pass:"${SERVER_KEYSTORE_PASSWORD}" \\/' /opt/jboss/tools/x509.sh
This command modifies the script and appends the parameter to pass in the passphrase
-passin pass:"${SERVER_KEYSTORE_PASSWORD}"
The value of the parameter is an environment variable which you are free to set: SERVER_KEYSTORE_PASSWORD
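For example, when running an image built from that Dockerfile, the passphrase could be supplied like this (the image name and value are placeholders):
docker run -d -p 8443:8443 \
  -v /path/to/certs:/etc/x509/https \
  -e SERVER_KEYSTORE_PASSWORD=<your-passphrase> \
  my-custom-keycloak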
Tested with Keycloak 9.0.0

Related

How to make Drone Docker Plugin use self-signed certs?

I'm facing the same problem as here - I have set up a private Docker Registry with TLS certificates (generated via Certbot), and I can interact with it directly via curl etc. (thus proving that the certificate is correct), but the Docker Plugin in my Drone flow gives the error x509: certificate signed by unknown authority.
As per this StackOverflow answer, I believe that putting the certificate at /etc/docker/certs.d/<my_registry_address:port>/ca.crt should fix this problem, but it doesn't appear to (neither does adding the certificate into the standard /etc/ssl/certs/ca-certificates.crt location)
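For reference, the on-disk layout that answer describes, using this registry's address, would be:
/etc/docker/certs.d/
└── docker-registry.scubbo.org:8843/
    └── ca.crt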
Demonstration that the certificates work as-expected, having already built the Docker Drone Plugin locally as per https://github.com/drone-plugins/drone-docker:
$ docker run --rm -v <path_to_directory_containing_pems>:/custom-certs -it --entrypoint /bin/sh plugins/docker
/ # ls /custom-certs
accounts archive csr keys live renewal renewal-hooks
/ # apk add curl
...
OK: 28 MiB in 56 packages
/ # curl https://docker-registry.scubbo.org:8843/v2/_catalog
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
/ # curl https://docker-registry.scubbo.org:8843/v2/_catalog --cacert /custom-certs/live/docker-registry.scubbo.org/fullchain.pem
{"repositories":[...]}
/ # cat /custom-certs/live/docker-registry.scubbo.org/fullchain.pem >> /etc/ssl/certs/ca-certificates.crt
/ # curl https://docker-registry.scubbo.org:8843/v2/_catalog
{"repositories":[...]}
Here's my .drone.yml, for a Runner instantiated with --env=DRONE_RUNNER_VOLUMES=/var/run/docker.sock:/var/run/docker.sock,<path_to_directory_containing_pems>:/custom-certs:
kind: pipeline
name: hello-world
type: docker

platform:
  os: linux
  arch: arm64

steps:
- name: copy-cert-into-place
  image: busybox
  volumes:
  - name: docker-cert-persistence
    path: /etc/docker/certs.d/
  commands:
  # https://stackoverflow.com/a/56410355/1040915
  # Note that we need to mount the whole `custom-certs` directory into the workflow and then copy the file to `/etc/...`,
  # rather than mounting the file directly into `/etc/...`, because the original file is a symlink and it's not possible (AFAIK)
  # to instruct Docker to "mount the eventual-target-of this symlink into <location>"
  - mkdir -p /etc/docker/certs.d/docker-registry.scubbo.org:8843
  - cp -L /custom-certs/live/docker-registry.scubbo.org/fullchain.pem /etc/docker/certs.d/docker-registry.scubbo.org:8843/ca.crt
- name: check-cert-persists-between-stages
  image: alpine
  volumes:
  - name: docker-cert-persistence
    path: /etc/docker/certs.d/
  commands:
  - apk add curl
  # The command below would fail if the cert was unavailable or invalid
  - curl https://docker-registry.scubbo.org:8843/v2/_catalog --cacert /etc/docker/certs.d/docker-registry.scubbo.org:8843/ca.crt
- name: build-image
  # ...contents irrelevant to this question...
- name: push-built-image
  image: plugins/docker
  volumes:
  - name: docker-cert-persistence
    path: /etc/docker/certs.d/
  settings:
    repo: docker-registry.scubbo.org:8843/scubbo/blog_nginx
    tags: built_in_ci
    debug: true
    launch_debug: true

volumes:
- name: docker-cert-persistence
  temp: {}
giving these logs from the push-built-image step, ending in...
+ /usr/local/bin/docker tag 472d41d9c03ee60fe9c1965ad9cfd36a1cdb6cbf docker-registry.scubbo.org:8843/scubbo/blog_nginx:built_in_ci
+ /usr/local/bin/docker push docker-registry.scubbo.org:8843/scubbo/blog_nginx:built_in_ci
The push refers to repository [docker-registry.scubbo.org:8843/scubbo/blog_nginx]
Get "https://docker-registry.scubbo.org:8843/v2/": x509: certificate signed by unknown authority
exit status 1
How should I go about providing the CA Certificate to my Drone Docker Plugin step to permit it to communicate over TLS with a secure Docker registry? This answer suggests simply reverting to insecure integration, which works but is unsatisfactory.
EDIT: After re-reading this documentation, I extended the copy-cert-into-place commands to copy all 3 certificate-related files:
commands:
- mkdir -p /etc/docker/certs.d/docker-registry.scubbo.org:8843
- cp -L /custom-certs/live/docker-registry.scubbo.org/fullchain.pem /etc/docker/certs.d/docker-registry.scubbo.org:8843/ca.crt
- cp -L /custom-certs/live/docker-registry.scubbo.org/privkey.pem /etc/docker/certs.d/docker-registry.scubbo.org:8843/client.key
- cp -L /custom-certs/live/docker-registry.scubbo.org/cert.pem /etc/docker/certs.d/docker-registry.scubbo.org:8843/client.cert
but that did not resolve the problem - same x509: certificate signed by unknown authority error.
EDIT2: I confirmed directly (on a host, outside the context of a plugin or docker container) that adding the certificate at the path used above is sufficient to permit interaction with the registry:
$ docker pull docker-registry.scubbo.org:8843/scubbo/blog_nginx:built_in_ci
Error response from daemon: Get "https://docker-registry.scubbo.org:8843/v2/": x509: certificate signed by unknown authority
$ sudo cp -L <path_to_directory_containing_pems>/live/docker-registry.scubbo.org/chain.pem /etc/docker/certs.d/docker-registry.scubbo.org\:8843/ca.crt
$ docker pull docker-registry.scubbo.org:8843/scubbo/blog_nginx:built_in_ci
built_in_ci: Pulling from scubbo/blog_nginx
Digest: sha256:3a17f86f23050303d94443f24318b49fb1a5e2d0cc9228270678c8aa55b4d2c2
Status: Image is up to date for docker-registry.scubbo.org:8843/scubbo/blog_nginx:built_in_ci
docker-registry.scubbo.org:8843/scubbo/blog_nginx:built_in_ci
This isn't a complete answer, but I was able to get secure registry access working by switching from mounting a directory, to mounting the file directly:
I changed the docker run option to --env=DRONE_RUNNER_VOLUMES=/var/run/docker.sock:/var/run/docker.sock,$(readlink -f <path_to_directory_containing_pems>/live/docker-registry.scubbo.org/chain.pem):/registry_cert.crt
I changed the commands in copy-cert-into-place to:
- mkdir -p /etc/docker/certs.d/docker-registry.scubbo.org:8843
- cp /registry_cert.crt /etc/docker/certs.d/docker-registry.scubbo.org:8843/ca.crt
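Putting that together, the runner invocation looks roughly like the sketch below. The drone/drone-runner-docker image and the RPC host/secret variables are assumptions about the rest of the runner setup, not something stated above:
docker run --detach \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  --env=DRONE_RPC_HOST=<drone-server-host> \
  --env=DRONE_RPC_SECRET=<shared-secret> \
  --env=DRONE_RUNNER_VOLUMES=/var/run/docker.sock:/var/run/docker.sock,$(readlink -f <path_to_directory_containing_pems>/live/docker-registry.scubbo.org/chain.pem):/registry_cert.crt \
  drone/drone-runner-docker:1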
I don't consider this a complete answer (and would love further input or advice!), because:
I don't know why copying the file out of the mounted directory into /etc/docker/... (as in the original question) didn't work, but mounting the file directly from the host filesystem worked. (Note that the check-cert-persists-between-stages stage confirms that the certificate is correct, so it's not a mistake of copying a wrong or empty file)
I don't know how to mount the file directly into an in-stage path that contains a colon - this answer indicates how to mount a path containing a colon directly into a container, but in this case we're passing the path to DRONE_RUNNER_VOLUMES

Keycloak 17.0.1 Import Realm on Docker / Docker-Compose Startup

I am trying to find a way to import a realm in Keycloak version 17.0.1 that can be done when starting up a docker container (with docker-compose). I want to be able to do this in "start" mode and not "start-dev" mode, as in my experience so far "start-dev" in 17 forces an H2 in-memory database and does not allow me to point to an external DB, which I would like to do to more closely resemble dev/prod environments when running locally.
Things I've tried:
1) It appears, according to recent conversations on GitHub (Issue 10216 and Issue 10754, to name a couple), that the environment variable that used to allow this (KEYCLOAK_IMPORT or KC_IMPORT_REALM in some versions) is no longer a trigger for this. In my attempts it also did not work for version 17.0.1.
2) I've also tried appending the following command in my docker-compose setup for keycloak and had no luck (also tried with just "start") - It appears to just ignore the command (no error or anything):
command: ["start-dev", "-Dkeycloak.import=/tmp/my-realm.json"]
3) I tried running the kc.sh command "import" in the Dockerfile (both before and after Entrypoint/start) but got error: Unmatched arguments from index 1: '/opt/keycloak/bin/kc.sh', 'import', '--file', '/tmp/my-realm.json'
4) I've shifted gears and have tried to see if it is possible to just do it after the container starts (even with manual intervention) just to get some sanity restored. I attempted to use the admin-cli but after quite a few different attempts at different points/endpoints etc. I just get that localhost refuses to connect.
bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password adminpassword
It responds as shown when hitting the following ports:
8080: Failed to send request - Connect to localhost:8080 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
8443: Failed to send request - localhost:8443 failed to respond
I am sure there are other ways that I've tried and am forgetting - I've kind of spun my wheels at this point.
My code (largely the same as the latest docs on the Keycloak website):
Dockerfile:
FROM quay.io/keycloak/keycloak:17.0.1 as builder
ENV KC_METRICS_ENABLED=true
ENV KC_FEATURES=token-exchange
ENV KC_DB=postgres
RUN /opt/keycloak/bin/kc.sh build
FROM quay.io/keycloak/keycloak:17.0.1
COPY --from=builder /opt/keycloak/lib/quarkus/ /opt/keycloak/lib/quarkus/
WORKDIR /opt/keycloak
# for demonstration purposes only, please make sure to use proper certificates in production instead
ENV KC_HOSTNAME=localhost
RUN keytool -genkeypair -storepass password -storetype PKCS12 -keyalg RSA -keysize 2048 -dname "CN=server" -alias server -ext "SAN:c=DNS:localhost,IP:127.0.0.1" -keystore conf/server.keystore
ENTRYPOINT ["/opt/keycloak/bin/kc.sh", "start" ]
Docker-compose.yml:
version: "3"
services:
keycloak:
build:
context: .
volumes:
- ./my-realm.json:/tmp/my-realm.json:ro
env_file:
- .env
environment:
KC_DB_URL: ${POSTGRESQL_URL}
KC_DB_USERNAME: ${POSTGRESQL_USER}
KC_DB_PASSWORD: ${POSTGRESQL_PASS}
KEYCLOAK_ADMIN: admin
KEYCLOAK_ADMIN_PASSWORD: adminpassword
ports:
- 8080:8080
- 8443:8443 # <-- I've tried with only 8080 and with only 8443 as well. 8443 appears to be the only that I can get the admin console ui to even work on though.
networks:
- my_net
networks:
my_net:
name: my_net
Any suggestion on how to do this in a programmatic + "dev-opsy" way would be greatly appreciated. I'd really like to get this to work but am confused on how to get past this.
Importing a realm upon Docker initialization through configuration is not supported yet. See https://github.com/keycloak/keycloak/issues/10216. They might release this feature in the next release, v18.
The workaround people have shared in the GitHub thread is to create your own Docker image and import the realm from the JSON file when building it.
FROM quay.io/keycloak/keycloak:17.0.1
# Make the realm configuration available for import
COPY realm-and-users.json /opt/keycloak_import/
# Import the realm and user
RUN /opt/keycloak/bin/kc.sh import --file /opt/keycloak_import/realm-and-users.json
# The Keycloak server is configured to listen on port 8080
EXPOSE 8080
EXPOSE 8443
# Import the realm on start-up
CMD ["start-dev"]
As @tboom said, it was not supported yet by Keycloak 17.x. But it is now supported by Keycloak 18.x using the --import-realm option:
bin/kc.[sh|bat] [start|start-dev] --import-realm
This feature does not work the way it did before. The JSON file path must not be specified anymore: the JSON file only has to be copied into the <KEYCLOAK_DIR>/data/import directory (multiple JSON files are supported). Note that the import operation is skipped if the realm already exists, so incremental updates are not possible anymore (at least for the time being).
This feature is documented on https://www.keycloak.org/server/importExport#_importing_a_realm_during_startup.
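Applied to the compose file from the question, that would look roughly like the sketch below (assuming the stock quay.io/keycloak/keycloak image, whose entrypoint is kc.sh; the tag is an example):
services:
  keycloak:
    image: quay.io/keycloak/keycloak:18.0.0
    command: ["start-dev", "--import-realm"]
    volumes:
      # any realm JSON placed in data/import is picked up at startup
      - ./my-realm.json:/opt/keycloak/data/import/my-realm.json:ro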

ssh-agent forwarding into docker-compose environment is not working

I have been having serious trouble getting ssh-agent forwarded into the docker container (with my docker-compose installation). I have a Mac running Catalina, with Docker Engine 19.03.8 and Compose 1.24.
The following is my docker-compose file:
version: '3.7'
services:
  platform:
    build:
      context: .
      dockerfile: ./platform/compose/Dockerfile.platform.local
    working_dir: /root/platform
    ports:
      - "3000:3000"
    command: ["./compose/scripts/start_rails.sh"]
    tty: true
    stdin_open: true
    volumes:
      - type: bind
        source: /run/host-services/ssh-auth.sock
        target: /run/host-services/ssh-auth.sock
    env_file: ./platform/.env
    environment:
      TERM: xterm-256color
      SSH_AUTH_SOCK: /run/host-services/ssh-auth.sock
volumes:
The way I have configured ssh-agent forwarding is as specified in the docker-compose documentation.
The ./compose/scripts/start_rails.sh script runs bundle install && bundle exec rails s. I have a few gems that I pull from private repositories, and I thought I should be able to install these gems by forwarding the ssh-agent.
I have also tried starting the ssh-agent before I spin up docker-compose, but that doesn't seem to do anything.
{
"debug": true,
"experimental": true,
"features": {
"buildkit": true
}
}
This is what I have added inside my docker configuration file. Any help is appreciated.
UPDATE: 0
The following is my .ssh directory structure and config:
tree ~/.ssh
├── config
├── known_hosts
├── midhun
│   ├── id_rsa
│   └── id_rsa.pub
└── client
    ├── id_rsa
    └── id_rsa.pub
cat ~/.ssh/config
Host github.com
HostName github.com
User git
IdentityFile ~/.ssh/client/id_rsa
Host me.github.com
HostName github.com
User git
IdentityFile ~/.ssh/midhun/id_rsa
UPDATE: 1
Updated my config with ForwardAgent Yes and it didn't work either. I have recorded entire ssh-logs in this gist -> https://gist.github.com/midhunkrishna/8f77ebdc90c7230d2ffae0834dc477cc .
I believe the below change to your ~/.ssh/config should fix the issue:
Host github.com
HostName github.com
User git
IdentityFile ~/.ssh/client/id_rsa
ForwardAgent yes
Host me.github.com
HostName github.com
User git
IdentityFile ~/.ssh/midhun/id_rsa
ForwardAgent yes
Update 1: 5th May 2020
In your case, the reason it may not be working is that the agent on the host has no keys loaded.
You can confirm that using:
$ ssh-add -L
$ ssh-add -l
The agent will only forward the keys it has in memory, not everything on your disk; otherwise you would risk exposing every key that is there without any permission. What you need to do is make sure you add those keys to your ssh-agent at startup:
$ ssh-add ~/.ssh/client/id_rsa
$ ssh-add ~/.ssh/midhun/id_rsa
Then if you do ssh-add -L on the host and inside the docker terminal, you should see both keys, and ssh-agent forwarding will work.
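To verify this against the compose setup above, you can compare the host and container output directly (assuming the OpenSSH client is available inside the image):
$ ssh-add -l                               # on the host
$ docker-compose exec platform ssh-add -l  # inside the running platform container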

Specify cert to use for SSL in docker-compose.yml file?

I've got a small docker-compose.yaml file that looks like this (it's used to fire up a local test-environment):
version: '2'
services:
  ubuntu:
    build:
      context: ./build
      dockerfile: ubuntu.dock
    volumes:
      - ./transfer:/home/
    ports:
      - "60000:22"
  python:
    build:
      context: ./build
      dockerfile: python.dock
    volumes:
      - .:/home/code
    links:
      - mssql
      - ubuntu
  mssql:
    image: "microsoft/mssql-server-linux"
    environment:
      SA_PASSWORD: "somepassword"
      ACCEPT_EULA: "Y"
    ports:
      - "1433:1433"
The issue that I run into is that when I run docker-compose up it fails at this step (which is in the python.dock file):
Step 10/19 : RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
---> Running in e4963c91a05b
The error looks like this:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (60) server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
More details here: http://curl.haxx.se/docs/sslcerts.html
The part where it fails in the python.dock file looks like this:
# This line fails
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update -y
I've had issues with curl / docker in the past - because we use a self-signed cert for decrypting/encrypting at the firewall level (network requirement); is there a way for me to specify a self-signed cert for the containers to use?
That would allow curl to reach out and download the required files.
I've tried running something like docker-compose -f docker-compose.yaml --tlscert ~/certs/the-self-signed-ca.pem up but it fails.
How can I do this a bit more programmatically so I can just run docker-compose up?
If I understood it correctly, your firewall breaks the TLS encryption and re-encrypts it with a certificate from a local CA. I think this is generally a bad situation and you should aim for proper end-to-end encryption, but politics aside.
The --tlscert argument passed to docker-compose is used to communicate with the Docker daemon, which is potentially running remotely and is exposed on port 2376 by default. In such a scenario, your local docker-compose command orchestrates containers on a remote machine, including building the image.
In your case, the curl command runs within a container. It will use the CAs (usually) installed by the base image in python.dock. To use your custom CA, you need to either:
1) Copy the CA certificate to the correct place within the image, e.g.,
COPY the-self-signed-ca.pem /etc/ssl/certs/
The exact procedure depends on your base image (a more conventional Debian/Ubuntu variant is sketched at the end of this answer). This will make the certificate available in the container, and it will most likely be used by all subsequent processes. Other developers/users might not be aware that a custom CA is installed and that the connection is not secure!
2) Copy the CA certificate to a custom place within the image, e.g.,
COPY the-self-signed-ca.pem /some/path/the-self-signed-ca.pem
and tell curl explicitly about the custom CA using the --cacert argument:
RUN curl --cacert /some/path/the-self-signed-ca.pem https://example.com/
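As a side note on option 1: on Debian/Ubuntu based images, the more conventional way to register a custom CA system-wide is update-ca-certificates. A sketch of that variant, assuming the ca-certificates package is installed in the image (this is not what the steps above use):
COPY the-self-signed-ca.pem /usr/local/share/ca-certificates/the-self-signed-ca.crt
RUN update-ca-certificates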

Docker private registry using selfsigned certificates

I want to run a private Docker registry that is widely accessible, so that I will be able to push and pull images from other servers.
I'm following these tutorials: doc1 & doc2
I performed 3 steps:
First, I created my certificate and key (as the CN I filled in my EC2 hostname):
mkdir -p certs && openssl req \
-newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
-x509 -days 365 -out certs/domain.crt
Then I created my Docker registry, using this key:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
registry:2
Then I copied the content of domain.crt to /etc/docker/certs.d/ec2-xx-xx-xx-xx.compute.amazonaws.com:5000/ca.crt
I restarted my docker: sudo service docker restart
When I try to push an image I get the following error:
unable to ping registry endpoint https://ec2-xx-xx-xx-xx.compute.amazonaws.com:5000/v0/
v2 ping attempt failed with error: Get https://ec2-xx-xx-xx-xx.compute.amazonaws.com:5000/v2/: net/http: TLS handshake timeout
v1 ping attempt failed with error: Get https://ec2-xx-xx-xx-xx.compute.amazonaws.com:5000/v1/_ping: net/http: TLS handshake timeout
I really don't know what I'm missing or doing wrong. Can someone please help me? Thanks.
I'm not sure if you copy/pasted your pwd directly... but the file path should be /etc/docker/certs.d
You currently have etc/docker/cert.d/registry.ip:5000/domain.crt
The error message says "TLS handshake timeout". This indicates that either no process is listening on port 5000 (check using netstat) or the port is closed from the location where you are trying to push the image (open port in the AWS security group).
From what I've seen, docker login is way more sensitive to properly crafted self-signed certs than browsers are. Plus there's an interesting gotcha I'll point out at the very bottom, so read the whole thing.
According to this site:
https://jamielinux.com/docs/openssl-certificate-authority/create-the-root-pair.html
Bash# openssl x509 -noout -text -in ca.crt
X509v3 Basic Constraints: critical
CA:TRUE
^You should see something like this if you provisioned your certs right.
While following random how-to guides on the net, I was able to generate ca.crt and website.crt.
When I ran the above command I didn't see that output, but I noticed:
if I imported the cert as trusted on Mac or Windows, my browser would be happy and say it's a valid cert, but docker login on RHEL7 would complain with messages like:
x509: certificate signed by unknown authority
I tried following directions related to using: /etc/docker/certs.d/mydockerrepo.lan:5000/ca.crt
on https://docs.docker.com/engine/security/certificates/
It got me a better error message (which caused me to find the above site in the first place)
x509: certificate signed by unknown authority (possibly because of
"x509: invalid signature: parent certificate cannot sign this kind of
certificate" while trying to verify candidate authority certificate
After 2 days of messing around I figured it out:
When I was taught programming, I was taught the concept of a short self-contained example, so I'm going to try doing that here for Ansible, leveraging the built-in openssl modules. I'm running the latest Ansible 2.9, but this should work for Ansible 2.5++ in theory:
Short Self Contained Example:
#Name this file generatecertificates.playbook.yml
#Run using Bash# ansible-playbook generatecertificates.playbook.yml
#
#What to Expect:
#Run Self Contained Stand Alone Ansible Playbook --Get-->
# currentworkingdir/certs/
# ca.crt
# ca.key
# mydockerrepo.private.crt
# mydockerrepo.private.key
#
#PreReq Ansible 2.5++
#PreReq Bash# pip3 install cryptography >= 1.6 or PyOpenSSL > 0.15 (if using selfsigned provider)
---
- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    - caencryptionpassword: "myrootcaencryptionpassword"
    - dockerepodns: "mydockerrepo.private"
    - rootcaname: "My Root CA"
  tasks:
    - name: get current working directory
      shell: pwd
      register: pathvar
    - debug: var=pathvar.stdout

    - name: Make sub directory
      file:
        path: "{{pathvar.stdout}}/certs"
        state: directory
      register: certsoutputdir
    - debug: var=certsoutputdir.path

    - name: "Generate Root CA's Encrypted Private Key"
      openssl_privatekey:
        size: 4096
        path: "{{certsoutputdir.path}}/ca.key"
        cipher: auto
        passphrase: "{{caencryptionpassword}}"

    - name: "Generate Root CA's Self Signed Certificate Signing Request"
      openssl_csr:
        path: "{{certsoutputdir.path}}/ca.csr"
        privatekey_path: "{{certsoutputdir.path}}/ca.key"
        privatekey_passphrase: "{{caencryptionpassword}}"
        common_name: "{{rootcaname}}"
        basic_constraints_critical: yes
        basic_constraints: ['CA:TRUE']

    - name: "Generate Root CA's Self Signed Certificate"
      openssl_certificate:
        path: "{{certsoutputdir.path}}/ca.crt"
        csr_path: "{{certsoutputdir.path}}/ca.csr"
        provider: selfsigned
        selfsigned_not_after: "+3650d" # Note: Mac won't trust by default due to https://support.apple.com/en-us/HT210176, but you can explicitly trust it to make it work.
        privatekey_path: "{{certsoutputdir.path}}/ca.key"
        privatekey_passphrase: "{{caencryptionpassword}}"
      register: cert
    - debug: var=cert

    - name: "Generate Docker Repo's Private Key"
      openssl_privatekey:
        size: 4096
        path: "{{certsoutputdir.path}}/{{dockerepodns}}.key"

    - name: "Generate Docker Repo's Certificate Signing Request"
      openssl_csr:
        path: "{{certsoutputdir.path}}/{{dockerepodns}}.csr"
        privatekey_path: "{{certsoutputdir.path}}/{{dockerepodns}}.key"
        common_name: "{{dockerepodns}}"
        subject_alt_name: 'DNS:{{dockerepodns}},DNS:localhost,IP:127.0.0.1'

    - name: "Generate Docker Repo's Cert, signed by Root CA"
      openssl_certificate:
        path: "{{certsoutputdir.path}}/{{dockerepodns}}.crt"
        csr_path: "{{certsoutputdir.path}}/{{dockerepodns}}.csr"
        provider: ownca
        ownca_not_after: "+365d" # Cert valid 1 year
        ownca_path: "{{certsoutputdir.path}}/ca.crt"
        ownca_privatekey_path: "{{certsoutputdir.path}}/ca.key"
        ownca_privatekey_passphrase: "{{caencryptionpassword}}"
      register: cert
    - debug: var=cert
Interesting Gotcha/Final Step:
RHEL7Bash# sudo cp ca.crt /etc/pki/ca-trust/source/anchors/ca.crt
RHEL7Bash# sudo update-ca-trust
RHEL7Bash# sudo systemctl restart docker
The gotcha is that you have to restart Docker for docker login to recognize CAs newly added to the system.
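To tie this back to the original question: the files generated by the playbook can then be fed to the same docker run command used there, e.g. (a sketch reusing the file names produced by the playbook):
docker run -d -p 5000:5000 --restart=always --name registry \
  -v `pwd`/certs:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/mydockerrepo.private.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/mydockerrepo.private.key \
  registry:2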
