Copying SSL certificate with Ansible - docker

I'm trying to copy an SSL certificate from an Ansible host VM to a Docker host VM, and I keep getting the following error:
FAILED! => {"changed": false, "msg": "Template source files must be utf-8 encoded"}
The playbook is simple and has only two steps:
- name: Create directory for SSL certificate
  file: path=/etc/ssl/certs/pm state=directory

- name: Copy SSL certificate from Ansible host to Docker host
  template:
    src: inventories/staging/files/pm.jks
    dest: /etc/ssl/certs/pm/pm.jks
    owner: root
    mode: 0755
  ignore_errors: true
When I replace pm.jks with an empty file with the same name and extension, the copying works fine, so clearly there's a problem with the format of its content, but I'm not sure how to fix it.
I'm using this command to generate the certificate:
keytool -genkey -alias pm -storetype PKCS12 -keyalg RSA -keysize 2048 -keystore pm.p12 -validity 3650

Why not use the copy module of Ansible instead of the template module? The error is quite clear: you cannot use the template module, because the source file you are referencing is not UTF-8 encoded.
If you examine the file you created with the command you posted, you will notice that it is a binary file:
test@toor:~$ file pm.p12
pm.p12: data
test@toor:~$ less pm.p12
"pm.p12" may be a binary file. See it anyway?
Either try using a different Ansible module or try saving the file in a plain text format.

Just because I stumbled over this while searching for the same solution: here is an excerpt from my playbook, where I used the built-in copy module.
# playbook.yaml
- name: Create path for SSL cert
  file: path=/etc/ssl/certs/custom state=directory

- name: Copy SSL cert
  ansible.builtin.copy:
    src: /path/to/local/my_cert.p12
    dest: /etc/ssl/certs/custom/my_cert.p12
    owner: root
    mode: 0644
Rolled out on an Ubuntu VM.
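To verify that the binary survived the copy intact, you can list the keystore on the target host (a quick sanity check; the store password is whatever you set at generation time):
keytool -list -keystore /etc/ssl/certs/custom/my_cert.p12 -storetype PKCS12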

Related

ASP.NET Core: "asn1 encoding routines:asn1_d2i_read_bio:not enough data" error for certificate

When running my ASP.NET Core application locally in my Linux Docker container, the following error occurs:
Unhandled exception. Interop+Crypto+OpenSslCryptographicException: error:0D06B08E:asn1 encoding routines:asn1_d2i_read_bio:not enough data
at Internal.Cryptography.Pal.OpenSslX509CertificateReader.FromBio(SafeBioHandle bio, SafePasswordHandle password)
at Internal.Cryptography.Pal.OpenSslX509CertificateReader.FromFile(String fileName, SafePasswordHandle password, X509KeyStorageFlags keyStorageFlags)
... when instantiating an X509Certificate2 object in my Startup.cs:
services.AddDefaultIdentity<ApplicationUser>(options => options.SignIn.RequireConfirmedAccount = true).AddEntityFrameworkStores<ApplicationDbContext>();
var identityServerBuilder = services.AddIdentityServer().AddApiAuthorization<ApplicationUser, ApplicationDbContext>();
var certificate = new X509Certificate2(@"abc.pfx", "abc"); // This is where the exception is thrown.
identityServerBuilder.AddSigningCredential(certificate);
I need this self-signed certificate to run IdentityServer4.
When debugging in Visual Studio I have no problems and I can perfectly evaluate all of the pfx's properties.
I generated the pfx file on Linux as follows:
openssl genrsa -out rsa.private 1024
openssl req -new -key rsa.private > rsa.csr
openssl x509 -req -in rsa.csr -signkey rsa.private -out rsa.crt
openssl pkcs12 -export -in rsa.crt -inkey rsa.private -out abc.pfx
...and verifying its integrity:
openssl pkcs12 -nokeys -info -nocerts -in abc.pfx
... revealed no problems:
MAC: sha1, Iteration 2048
MAC length: 20, salt length: 8
PKCS7 Encrypted data: pbeWithSHA1And40BitRC2-CBC, Iteration 2048
Certificate bag
PKCS7 Data
Shrouded Keybag: pbeWithSHA1And3-KeyTripleDES-CBC, Iteration 2048
I also used the Microsoft Management Console to generate the pfx, but that results in the same error.
I'm mounting my pfx file by executing this Docker run command:
docker run -d=false -p 8080:80 -v abc.pfx:/app/abc.pfx --name mijncont mijncont:dev
My appsettings.json:
"IdentityServer": {
  "Clients": {
    "TEST.Client": {
      "Profile": "IdentityServerSPA"
    }
  },
  "Key": {
    "Type": "Store",
    "StoreName": "My",
    "StoreLocation": "LocalMachine",
    "Name": "CN=mijnsubject"
  }
}
When running:
docker exec -it mijncont /bin/bash
... the following prompt appears:
root@3815c63cb5c4:/app#
When executing ls -la I get this:
drwxr-xr-x 2 root root 4096 Jul 13 16:04 abc.pfx
However, when I include this line in my Startup.cs:
Console.WriteLine("Size: "+ new System.IO.FileInfo("abc.pfx").Length);
... .NET throws an exception saying that the file doesn't exist.
I included the directory containing the pfx file in Docker -> Settings -> Resources -> File sharing
Anyone?
The problem lay in the way I mounted my abc.pfx file.
This:
docker run -d=false -p 8080:80 -v abc.pfx:/app/abc.pfx --name mijncont mijncont:dev
...mounts a directory called abc.pfx in the container (in place of the abc.pfx file).
If I specify an absolute path:
docker run -d=false -p 8080:80 -v C:\mysolution\abc.pfx:/app/abc.pfx --name mijncont mijncont:dev
...only the file itself is mounted into the app directory, which is what I want.
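A quick way to see the difference from outside, using the container name from above: with the relative-name form the path inside the container shows up as a directory (a named volume), while with the absolute path it's a regular file:
docker exec mijncont ls -la /app/abc.pfx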

Docker TLS - How to create key on local machine

Background:
So I started using docker myself and installed it on my server and enabled TLS. I followed this tutorial: https://docs.docker.com/engine/security/https/
This tutorial will eventually give you 6 files:
-r-------- ca-key.pem
-r--r--r-- ca.pem
-r--r--r-- cert.pem
-r-------- key.pem
-r--r--r-- server-cert.pem
-r-------- server-key.pem
The owner of these files is root. I copied the ca.pem, cert.pem and key.pem and used them to connect from my local Portainer instance. (Actually I only use cert.pem and key.pem, since I only have client verification on.)
DOCKER HOST:
{
  "hosts": [
    "unix:///var/run/docker.sock",
    "tcp://0.0.0.0:2376"
  ],
  "storage-driver": "overlay2",
  "tls": true,
  "tlscacert": "/etc/docker/certs/ca.pem",
  "tlscert": "/etc/docker/certs/server-cert.pem",
  "tlskey": "/etc/docker/certs/server-key.pem",
  "tlsverify": true
}
My problem:
The company where I work installed docker for me, enabled TLS, and put all the pem files in a directory which I can access... The problem is, I cannot download key.pem since the owner is root and I won't get access to it.
I can download the next files:
ca.pem
cert.pem
server-cert.pem
Is it possible for me, with access to those files ONLY and without changing anything on the server, to access docker over TLS? How can I create my own key.pem, or is there another way?
Sorry if this is common knowledge, I just could not find my answer anywhere, or I did not know what I was exactly searching for...
Yes, you can work against the docker-daemon on that server and you don't need to create another key and certificate for the server.
Download the server-cert.pem and export the following environment variables in your local session:
DOCKER_TLS_VERIFY="1"
DOCKER_CERT_PATH="server-cert.pem"
DOCKER_HOST="tcp://HOST:2376"
Now you can use your local docker-client and work against the docker-daemon on your server. e.g. docker ps should display containers running on the remote docker.
Private keys are used to create certificates; you can't derive a key from a cert. If your docker daemon wants two-way authentication, you will need access to the private key. It cannot be done without it.
You'll need the following files (for client-server authentication):
ca.pem
cert.pem
key.pem
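For reference, if you do manage to obtain all three files, the client-side setup looks roughly like this; note that DOCKER_CERT_PATH must point at a directory containing ca.pem, cert.pem and key.pem (the directory name below is just an example):
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/docker-remote-certs   # must contain ca.pem, cert.pem, key.pem
export DOCKER_HOST=tcp://HOST:2376
docker ps   # should now list the containers on the remote host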

Keycloak SSL setup using docker image

I am trying to deploy keycloak using docker image (https://hub.docker.com/r/jboss/keycloak/ version 4.5.0-Final) and facing an issue with setting up SSL.
According to the docs:
Keycloak image allows you to specify both a private key and a certificate for serving HTTPS. In that case you need to provide two files:
tls.crt - a certificate
tls.key - a private key
Those files need to be mounted in the /etc/x509/https directory. The image will automatically convert them into a Java keystore and reconfigure Wildfly to use it.
I followed the given steps and provided the volume mount with a folder containing the necessary files (tls.crt and tls.key), but I am facing issues with the SSL handshake: the browser shows an
ERR_SSL_VERSION_OR_CIPHER_MISMATCH
error, which blocks Keycloak from loading.
I have used letsencrypt to generate the pem files and used openssl to create the .crt and .key files.
I also tried using just openssl to create those files, to narrow down the issue, and the behavior is the same. (Some additional info, in case it matters:)
By default, when I simply specify just the port binding -p 8443:8443 without the cert volume mount /etc/x509/https, the Keycloak server generates a self-signed certificate and I see no issue viewing the app in the browser.
I guess this might be more of a certificate creation issue than anything specific to Keycloak, but I'm unsure how to get this working.
Any help is appreciated.
I also faced the ERR_SSL_VERSION_OR_CIPHER_MISMATCH error, using the jboss/keycloak Docker image and free certificates from letsencrypt, even after considering the advice from the other comments. Now I have a working (and quite easy) setup, which might also help you.
1) Generate letsencrypt certificate
At first, I generated my letsencrypt certificate for domain sub.example.com using the certbot. You can find detailed instructions and alternative ways to gain a certificate at https://certbot.eff.org/ and the user guide at https://certbot.eff.org/docs/using.html.
$ sudo certbot certonly --standalone
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Please enter in your domain name(s) (comma and/or space separated) (Enter 'c' to cancel): sub.example.com
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for sub.example.com
Waiting for verification...
Cleaning up challenges
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/sub.example.com/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/sub.example.com/privkey.pem
Your cert will expire on 2020-01-27. To obtain a new or tweaked
version of this certificate in the future, simply run certbot
again. To non-interactively renew *all* of your certificates, run
"certbot renew"
2) Prepare docker-compose environment
I use docker-compose to run keycloak via docker. The config and data files are stored in path /srv/docker/keycloak/.
Folder config contains the docker-compose.yml
Folder data/certs contains the certificates I generated via letsencrypt
Folder data/keycloak_db is mapped to the database container to make its data persistent.
Put the certificate files to the right path
When I first had issues using the original letsencrypt certificates for keycloak, I tried the workaround of converting the certificates to another format, as mentioned in the comments of the former answers, which also failed. Eventually, I realized that my problem was caused by the permissions set on the mapped certificate files.
So, what worked for me was simply to copy and rename the files provided by letsencrypt, and mount them into the container.
$ cp /etc/letsencrypt/live/sub.example.com/fullchain.pem /srv/docker/keycloak/data/certs/tls.crt
$ cp /etc/letsencrypt/live/sub.example.com/privkey.pem /srv/docker/keycloak/data/certs/tls.key
$ chmod 755 /srv/docker/keycloak/data/certs/
$ chmod 604 /srv/docker/keycloak/data/certs/*
docker-compose.yml
In my case, I needed to use the host network of my docker host. This is not best practice and should not be required for your case. Please find information about configuration parameters in the documentation at hub.docker.com/r/jboss/keycloak/.
version: '3.7'

networks:
  default:
    external:
      name: host

services:
  keycloak:
    container_name: keycloak_app
    image: jboss/keycloak
    depends_on:
      - mariadb
    restart: always
    ports:
      - "8080:8080"
      - "8443:8443"
    volumes:
      - "/srv/docker/keycloak/data/certs/:/etc/x509/https"  # map certificates to container
    environment:
      KEYCLOAK_USER: <user>
      KEYCLOAK_PASSWORD: <pw>
      KEYCLOAK_HTTP_PORT: 8080
      KEYCLOAK_HTTPS_PORT: 8443
      KEYCLOAK_HOSTNAME: sub.example.com
      DB_VENDOR: mariadb
      DB_ADDR: localhost
      DB_USER: keycloak
      DB_PASSWORD: <pw>
    network_mode: host
  mariadb:
    container_name: keycloak_db
    image: mariadb
    volumes:
      - "/srv/docker/keycloak/data/keycloak_db:/var/lib/mysql"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: <pw>
      MYSQL_DATABASE: keycloak
      MYSQL_USER: keycloak
      MYSQL_PASSWORD: <pw>
    network_mode: host
Final directory setup
This is what my final file and folder setup looks like.
$ cd /srv/docker/keycloak/
$ tree
.
├── config
│   └── docker-compose.yml
└── data
    ├── certs
    │   ├── tls.crt
    │   └── tls.key
    └── keycloak_db
Start container
Finally, I was able to start my software using docker-compose.
$ cd /srv/docker/keycloak/config/
$ sudo docker-compose up -d
We can double-check the mounted certificates within the container.
## open internal shell of keycloak container
$ sudo docker exec -it keycloak_app /bin/bash
## open directory of certificates
$ cd /etc/x509/https/
$ ll
-rw----r-- 1 root root 3586 Oct 30 14:21 tls.crt
-rw----r-- 1 root root 1708 Oct 30 14:20 tls.key
Considering the setup from the docker-compose.yml, Keycloak is now available at https://sub.example.com:8443.
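You can also verify from outside the container that the letsencrypt certificate (and not a generated self-signed one) is actually being served, for example with openssl:
openssl s_client -connect sub.example.com:8443 -servername sub.example.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates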
After some research, the following method worked (for self-signed certs; I still have to figure out how to do it with a letsencrypt CA for prod).
generate a self-signed cert using the keytool
keytool -genkey -alias localhost -keyalg RSA -keystore keycloak.jks -validity 10950
convert .jks to .p12
keytool -importkeystore -srckeystore keycloak.jks -destkeystore keycloak.p12 -deststoretype PKCS12
generate .crt from .p12 keystore
openssl pkcs12 -in keycloak.p12 -nokeys -out tls.crt
generate .key from .p12 keystore
openssl pkcs12 -in keycloak.p12 -nocerts -nodes -out tls.key
Then use the tls.crt and tls.key for the /etc/x509/https volume mount.
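A minimal run command for this setup might look like the following (a sketch; the cert path and admin credentials are placeholders):
docker run -p 8443:8443 \
  -v /path/to/certs:/etc/x509/https \
  -e KEYCLOAK_USER=admin \
  -e KEYCLOAK_PASSWORD=<pw> \
  jboss/keycloak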
Also, on the app being secured, specify the following properties in the keycloak.json file:
"truststore" : "path/to/keycloak.jks",
"truststore-password" : "<jks-pwd>",
For anyone who is trying to run Keycloak with a passphrase-protected private key file:
Keycloak runs the script /opt/jboss/tools/x509.sh to generate the keystore based on the provided files in /etc/x509/https as described in https://hub.docker.com/r/jboss/keycloak - Setting up TLS(SSL).
Unfortunately, this script takes no passphrase into account. But with a little modification at Docker build time, you can fix it yourself:
Within your Dockerfile add:
RUN sed -i -e 's/-out "${KEYSTORES_STORAGE}\/${PKCS12_KEYSTORE_FILE}" \\/-out "${KEYSTORES_STORAGE}\/${PKCS12_KEYSTORE_FILE}" \\\n -passin pass:"${SERVER_KEYSTORE_PASSWORD}" \\/' /opt/jboss/tools/x509.sh
This command modifies the script and appends the parameter to pass in the passphrase
-passin pass:"${SERVER_KEYSTORE_PASSWORD}"
The value of the parameter is an environment variable which you are free to set: SERVER_KEYSTORE_PASSWORD
Tested with Keycloak 9.0.0
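Assuming an image built from such a Dockerfile (the tag below is arbitrary), the passphrase can then be supplied at run time:
docker build -t keycloak-passphrase .
docker run -p 8443:8443 \
  -v /path/to/certs:/etc/x509/https \
  -e SERVER_KEYSTORE_PASSWORD=<passphrase> \
  keycloak-passphrase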

GitLab with Docker on Synology NAS - HTTPS

I would like to set up GitLab with https on Synology DS918+.
I am using Docker in DSM. I downloaded the latest GitLab Community docker image.
I used PuTTY to SSH into the NAS and created keys using openssl.
1) Create a key in the certs folder:
mkdir /volume1/docker/gitlab/certs
cd /volume1/docker/gitlab/certs
openssl genrsa -out gitlab.key 2048
openssl req -new -key gitlab.key -out gitlab.csr
openssl x509 -req -days 3650 -in gitlab.csr -signkey gitlab.key -out gitlab.crt
openssl dhparam -out dhparam.pem 2048
chmod 400 gitlab.key
2) I added two additional environment variables to the custom image to set up HTTPS.
3) In the last part, I removed port 80, which was first set in the default image, and added ports 30000/30001 for the 22/443 port bindings that were set to auto in the default image.
When I go to https://synologyip.com:30000 in the browser, GitLab can't be reached.
Any guesses on what I have missed or done wrong?
Thanks!
I don't know about you, but I had to create the cert in the following folder:
/volume1/docker/gitlab/gitlab/certs
Note the repeated gitlab directory
A good, well-written tutorial can be found here: GitHub tutorial, with a letsencrypt cert too!
I shortened the cert copy part as follows:
cat /usr/syno/etc/certificate/_archive/*/privkey.pem > /volume1/docker/gitlab/gitlab/certs/gitlab.key
cat /usr/syno/etc/certificate/_archive/*/fullchain.pem > /volume1/docker/gitlab/gitlab/certs/gitlab.crt
and then continued with the dhparam.pem.
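If GitLab still can't be reached after this, it's worth checking that the copied key and certificate actually belong together; the two commands below should print the same hash (a standard check for RSA keys, using the file names from above):
openssl x509 -noout -modulus -in gitlab.crt | openssl md5
openssl rsa -noout -modulus -in gitlab.key | openssl md5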

Docker private registry using selfsigned certificates

I want to run a private docker registry which is widely available, so I will be able to push and pull images from other servers.
I'm following these tutorials: doc1 & doc2
I performed 3 steps:
First I created my certificate and key (as the Common Name I filled in my ec2 hostname):
mkdir -p certs && openssl req \
-newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
-x509 -days 365 -out certs/domain.crt
Then I created my docker registry, using this key:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
registry:2
Then I copied the content of domain.crt to /etc/docker/certs.d/ec2-xx-xx-xx-xx.compute.amazonaws.com:5000/ca.crt
I restarted my docker: sudo service docker restart
When I try to push an image I get the following error:
unable to ping registry endpoint https://ec2-xx-xx-xx-xx.compute.amazonaws.com:5000/v0/
v2 ping attempt failed with error: Get https://ec2-xx-xx-xx-xx.compute.amazonaws.com:5000/v2/: net/http: TLS handshake timeout
v1 ping attempt failed with error: Get https://ec2-xx-xx-xx-xx.compute.amazonaws.com:5000/v1/_ping: net/http: TLS handshake timeout
I really don't know what I'm missing or doing wrong. Can someone please help me? Thanks!
I'm not sure if you copy/pasted your pwd directly... but the file path should be /etc/docker/certs.d
You currently have etc/docker/cert.d/registry.ip:5000/domain.crt
The error message says "TLS handshake timeout". This indicates that either no process is listening on port 5000 (check using netstat) or the port is closed from the location where you are trying to push the image (open port in the AWS security group).
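To narrow it down, you could check both (keeping the placeholder hostname from the question; substitute your real EC2 hostname):
# on the registry host: confirm something is listening on port 5000
sudo netstat -tlnp | grep :5000
# from the client: test the TLS endpoint directly
curl -v --cacert /etc/docker/certs.d/ec2-xx-xx-xx-xx.compute.amazonaws.com:5000/ca.crt https://ec2-xx-xx-xx-xx.compute.amazonaws.com:5000/v2/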
From what I've seen, docker login is way more sensitive to properly crafted self-signed certs than browsers are, plus there's an interesting gotcha I'll point out at the very bottom, so read the whole thing.
According to this site:
https://jamielinux.com/docs/openssl-certificate-authority/create-the-root-pair.html
Bash# openssl x509 -noout -text -in ca.crt
X509v3 Basic Constraints: critical
CA:TRUE
^You should see something like this if you provisioned your certs right.
While following random how-to guides on the net I was able to generate ca.crt and website.crt, but when I ran the above command I didn't see that output. What I noticed:
If I imported the cert as trusted on Mac or Windows, my browser would be happy and say yep, valid cert, but docker login on RHEL7 would complain with messages like:
x509: certificate signed by unknown authority
I tried following the directions related to using /etc/docker/certs.d/mydockerrepo.lan:5000/ca.crt
on https://docs.docker.com/engine/security/certificates/
It got me a better error message (which caused me to find the above site in the first place):
x509: certificate signed by unknown authority (possibly because of
"x509: invalid signature: parent certificate cannot sign this kind of
certificate" while trying to verify candidate authority certificate
After 2 days of messing around I figured it out:
When I was taught programming, I was taught the concept of a short self-contained example, so I'll try doing that here for Ansible, leveraging the built-in openssl modules. I'm running the latest Ansible 2.9, but in theory this should work for Ansible 2.5++:
Short Self Contained Example:
#Name this file generatecertificates.playbook.yml
#Run using Bash# ansible-playbook generatecertificates.playbook.yml
#
#What to Expect:
#Run Self Contained Stand Alone Ansible Playbook --Get-->
# currentworkingdir/certs/
# ca.crt
# ca.key
# mydockerrepo.private.crt
# mydockerrepo.private.key
#
#PreReq Ansible 2.5++
#PreReq Bash# pip3 install cryptography >= 1.6 or PyOpenSSL > 0.15 (if using selfsigned provider)
---
- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    - caencryptionpassword: "myrootcaencryptionpassword"
    - dockerepodns: "mydockerrepo.private"
    - rootcaname: "My Root CA"
  tasks:
    - name: get current working directory
      shell: pwd
      register: pathvar
    - debug: var=pathvar.stdout
    - name: Make sub directory
      file:
        path: "{{pathvar.stdout}}/certs"
        state: directory
      register: certsoutputdir
    - debug: var=certsoutputdir.path
    - name: "Generate Root CA's Encrypted Private Key"
      openssl_privatekey:
        size: 4096
        path: "{{certsoutputdir.path}}/ca.key"
        cipher: auto
        passphrase: "{{caencryptionpassword}}"
    - name: "Generate Root CA's Self Signed Certificate Signing Request"
      openssl_csr:
        path: "{{certsoutputdir.path}}/ca.csr"
        privatekey_path: "{{certsoutputdir.path}}/ca.key"
        privatekey_passphrase: "{{caencryptionpassword}}"
        common_name: "{{rootcaname}}"
        basic_constraints_critical: yes
        basic_constraints: ['CA:TRUE']
    - name: "Generate Root CA's Self Signed Certificate"
      openssl_certificate:
        path: "{{certsoutputdir.path}}/ca.crt"
        csr_path: "{{certsoutputdir.path}}/ca.csr"
        provider: selfsigned
        selfsigned_not_after: "+3650d"  # Note: Mac won't trust by default due to https://support.apple.com/en-us/HT210176, but you can explicitly trust to make it work.
        privatekey_path: "{{certsoutputdir.path}}/ca.key"
        privatekey_passphrase: "{{caencryptionpassword}}"
      register: cert
    - debug: var=cert
    - name: "Generate Docker Repo's Private Key"
      openssl_privatekey:
        size: 4096
        path: "{{certsoutputdir.path}}/{{dockerepodns}}.key"
    - name: "Generate Docker Repo's Certificate Signing Request"
      openssl_csr:
        path: "{{certsoutputdir.path}}/{{dockerepodns}}.csr"
        privatekey_path: "{{certsoutputdir.path}}/{{dockerepodns}}.key"
        common_name: "{{dockerepodns}}"
        subject_alt_name: 'DNS:{{dockerepodns}},DNS:localhost,IP:127.0.0.1'
    - name: "Generate Docker Repo's Cert, signed by Root CA"
      openssl_certificate:
        path: "{{certsoutputdir.path}}/{{dockerepodns}}.crt"
        csr_path: "{{certsoutputdir.path}}/{{dockerepodns}}.csr"
        provider: ownca
        ownca_not_after: "+365d"  # Cert valid 1 year
        ownca_path: "{{certsoutputdir.path}}/ca.crt"
        ownca_privatekey_path: "{{certsoutputdir.path}}/ca.key"
        ownca_privatekey_passphrase: "{{caencryptionpassword}}"
      register: cert
    - debug: var=cert
Interesting Gotcha/Final Step:
RHEL7Bash# sudo cp ca.crt /etc/pki/ca-trust/source/anchors/ca.crt
RHEL7Bash# sudo update-ca-trust
RHEL7Bash# sudo systemctl restart docker
The gotcha is that you have to restart docker for docker login to recognize updates to CAs newly added to the system.
