IBM MQ docker: add personal cert to .kdb

I have created a kdb file in my IBM MQ (docker) container using the command below:
runmqakm -keydb -create -db key.kdb -stash -pw password -type cms
And I've created a self-signed cert by:
openssl genrsa -out ca.key 2048
openssl req -new -x509 -key ca.key -subj "$prefix/CN=ca" -out ca.crt
openssl pkcs12 -export -out ca.pfx -in ca.crt -nokeys
openssl pkcs12 -export -in ca.crt -inkey ca.key -out ca.p12 -name myca -CAfile ca.crt -passin pass:mypass -passout pass:mypass
Now I want to add my own ca.crt to the kdb as a personal cert, so the listing would look something like this:
runmqakm -cert -list -db key.kdb -stashed
Certificates found
* default, - personal, ! trusted, # secret key
- CAlabel
I've tried these commands:
runmqckm -cert -import -file ca.pfx -pw mypass -type pkcs12 -target filename -target_pw password -target_type cms -label CAlabel
runmqckm -cert -import -file ca.p12 -pw mypass -type pkcs12 -target filename -target_pw password -target_type cms -label CAlabel
But I keep facing this error (logged in as root in docker: docker exec -it -u 0 containerid sh):
The database doesn't contain an entry with label 'CAlabel'.
Check the label and try again.
And also this one (logged in normally: docker exec -ti containerid /bin/bash):
Dec 19, 2021 7:48:57 AM java.util.prefs.FileSystemPreferences$1 run
WARNING: Couldn't create user preferences directory. User preferences are unusable.
Dec 19, 2021 7:48:57 AM java.util.prefs.FileSystemPreferences$1 run
WARNING: java.io.IOException: No such file or directory
The input file '/mnt/mqm/data/qmgrs/QM1/ssl/ca.pfx' could not be found.
Check the database path.
Does anyone have a suggestion for how I can solve this problem?

This command will import all certs contained in the p12 file into the kdb:
runmqakm -cert -import -file ca.p12 -pw mypass -type pkcs12 -target key.kdb -target_stashed -target_type cms
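Before importing, it can help to confirm what the p12 actually contains, since the import pulls in every entry in the bundle. A minimal sketch using only openssl (file names and the "myca" label mirror the question; the values are illustrative):

```shell
# Recreate a throwaway CA key and self-signed cert, then bundle them into a p12
openssl genrsa -out ca.key 2048
openssl req -new -x509 -key ca.key -subj "/CN=ca" -out ca.crt
openssl pkcs12 -export -in ca.crt -inkey ca.key -out ca.p12 \
  -name myca -passout pass:mypass

# Everything listed here is what the runmqakm import will place into the kdb
openssl pkcs12 -in ca.p12 -info -nokeys -passin pass:mypass
```

If the friendly name shown here differs from the label you expect, the kdb listing will show that name, not the -label you passed to the failed import attempts.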

Related

Generating a self-signed cert with dockerfile not actually generating a self-signed cert

First, I'm fairly new to docker, but this seems pretty straightforward.
I am working off of this dockerfile. I made some very basic modifications, like installing openssl and generating some self-signed certs so I can use SSL in Apache. Here is the section I added to the linked dockerfile:
RUN mkdir /ssl-certs
RUN openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -subj \
"/C=../ST=../L=..../O=LAB/CN=....." \
-keyout /ssl-certs/ssl.key -out /ssl-certs/ssl.crt
RUN mkdir -p /etc/apache2/ssl/
COPY /ssl-certs/ssl.key /etc/apache2/ssl/ssl.key
COPY /ssl-certs/ssl.crt /etc/apache2/ssl/ssl.crt
However, when I build this I get the following output:
=> CACHED [ 8/19] RUN openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -subj "/C=../ST=../L=.... 0.0s
=> CACHED [ 9/19] RUN mkdir -p /etc/apache2/ssl/ 0.0s
=> ERROR [10/19] COPY /ssl-certs/ssl.key /etc/apache2/ssl/ssl.key 0.0s
=> ERROR [11/19] COPY /ssl-certs/ssl.crt /etc/apache2/ssl/ssl.crt 0.0s
------
> [10/19] COPY /ssl-certs/ssl.key /etc/apache2/ssl/ssl.key:
------
------
> [11/19] COPY /ssl-certs/ssl.crt /etc/apache2/ssl/ssl.crt:
------
This basically tells me openssl isn't actually doing anything, or docker isn't waiting for openssl to finish, which doesn't seem likely. I've looked around and can't find anyone with a similar problem. Any pointers are appreciated.
COPY /ssl-certs/ssl.key /etc/apache2/ssl/ssl.key
COPY /ssl-certs/ssl.crt /etc/apache2/ssl/ssl.crt
The COPY instruction reads /ssl-certs from the build context on the host, not from inside the image being built. You may try:
RUN cp /ssl-certs/ssl.key /etc/apache2/ssl/ssl.key \
&& cp /ssl-certs/ssl.crt /etc/apache2/ssl/ssl.crt
Edit: regardless, I consider it bad practice to
build secrets (a private key) into the image; rather, mount the secrets at run-time
create non-deterministic builds (generating a new random private key on every build)
I guess, or rather hope, it's for dev/education purposes, but when doing SSL, let's do it properly, even for self-signed certificates.
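Since the generated files never need to leave the image, the generate-and-copy steps can also collapse into a single RUN that writes straight to the target directory. A plain-shell sketch of what such a RUN would execute (the subject fields and the temp path standing in for /etc/apache2/ssl are placeholders):

```shell
mkdir -p /tmp/apache2-ssl   # stands in for /etc/apache2/ssl inside the image
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 \
  -subj "/C=US/ST=Lab/L=Lab/O=LAB/CN=example.local" \
  -keyout /tmp/apache2-ssl/ssl.key -out /tmp/apache2-ssl/ssl.crt
# Both files now land in one filesystem layer; no COPY from the host is involved
openssl x509 -in /tmp/apache2-ssl/ssl.crt -noout -subject
```

This sidesteps the host-vs-image confusion entirely, though the caveat about baking private keys into images still applies.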

How to convert a pkcs12 certificate string, taken from Azure Key Vault in a docker container, to pem format?

I fetch a certificate in a docker container via managed identity, as described in the Microsoft docs here (Example 1): https://learn.microsoft.com/en-us/azure/container-instances/container-instances-managed-identity#example-1-use-a-user-assigned-identity-to-access-azure-key-vault
When the certificate was in pem format, the output of the command:
curl https://mykeyvault.vault.azure.net/secrets/SampleSecret/?api-version=2016-10-01 -H "Authorization: Bearer $token"
was like:
{"value":"-----BEGIN PRIVATE
KEY-----\nMIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDBkelEEzvwXiaW\nX4sPt052w/5tahn6OAy+lasH4Lq1xvU/G+z9Ra0rBs2NGhPr7smu8iAxACfr74I5\nCHENM4kvmM{too
many symbols}KkrjDMmf5Om\n-----END PRIVATE KEY-----\n-----BEGIN
CERTIFICATE-----\nMIIDMDCCAhigAw{too many symbols}4GMgUQ==\n-----END
CERTIFICATE-----\n","contentType":"application/x-pem-file","id":"myid","managed":true,"attributes":{"enabled":true,"nbf":1600276258,"exp":1631812858,"created":1600276858,"updated":1600276858,"recoveryLevel":"Recoverable+Purgeable"},"kid":"https://cert_url"}
Parsing that into cert.pem and private_key.pem files is easy.
But if the certificate is in pkcs12 format, the output is just one string:
{"value":"MIIKPAIBAzCCCfwGCSqGSIb3DQEHAaCCCe0EggnpMIIJ5TCCBhYGCSqGSIb3DQEHA{only
many
symbols}8O3VaP5TOUaZMQ=","contentType":"application/x-pkcs12","id":"myid","managed":true,"attributes":{"enabled":true,"nbf":1600275456,"exp":1631812056,"created":1600276056,"updated":1600276056,"recoveryLevel":"Recoverable+Purgeable"},"kid":"https://cert_url"}
So I can't convert that string to cert.pem and private_key.pem files the way explained above.
I put the value into a cert.cer file via:
curl https://testigorcert.vault.azure.net/secrets/SampleSecret/?api-version=2016-10-01 -H "Authorization: Bearer $token" | jq '.value' > cert.cer
And tried a command like:
openssl pkcs12 -in cert.cer -out cert.pem -nodes
Error:
139876006393152:error:0D0680A8:asn1 encoding
routines:asn1_check_tlen:wrong tag:../crypto/asn1/tasn_dec.c:1130:
139876006393152:error:0D07803A:asn1 encoding
routines:asn1_item_embed_d2i:nested asn1
error:../crypto/asn1/tasn_dec.c:290:Type=PKCS12
Tried:
openssl pkcs12 -in cert.cer -nocerts -nodes -out key.pem
Error:
140021099644224:error:0D0680A8:asn1 encoding
routines:asn1_check_tlen:wrong tag:../crypto/asn1/tasn_dec.c:1130:
140021099644224:error:0D07803A:asn1 encoding
routines:asn1_item_embed_d2i:nested asn1
error:../crypto/asn1/tasn_dec.c:290:Type=PKCS12
Tried:
openssl x509 -in cert.cer -text
Error:
139665046693184:error:0909006C:PEM routines:get_name:no start
line:../crypto/pem/pem_lib.c:745:Expecting: TRUSTED CERTIFICATE
So. How can I convert this value of pkcs12 certificate format to two files cert.pem and private_key.pem?
The problem was the encoding of the downloaded string: curl returns the .pfx value as a base64 string, which still has to be decoded back to binary before openssl can read it. So I just used another way (Example 2):
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-managed-identity#example-2-use-a-system-assigned-identity-to-access-azure-key-vault
where I just download the certificate .pfx via:
az keyvault secret download --file cert.pfx --name {cert_name} --vault-name {vault_name} -e base64
And then convert it to the two needed files with:
openssl pkcs12 -in cert.pfx -nocerts -out key.rsa -nodes -passin pass:
openssl pkcs12 -in cert.pfx -clcerts -nokeys -out cert.crt -passin pass:
Another (even better) option is to base64-decode the downloaded value directly, with a pipeline like:
token=$(curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net' -H Metadata:true | jq -r '.access_token')
curl https://myvault.vault.azure.net/secrets/mycert/?api-version=2016-10-01 -H "Authorization: Bearer $token" |
jq -r ".value" | base64 -d | openssl pkcs12 -nocerts -out /etc/ssl/private-key.pem -nodes -passin pass:
curl https://myvault.vault.azure.net/secrets/mycert/?api-version=2016-10-01 -H "Authorization: Bearer $token" |
jq -r ".value" | base64 -d | openssl pkcs12 -clcerts -nokeys -out /etc/ssl/cert.pem -passin pass:
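The decode-and-split pipeline can be exercised locally without Key Vault by base64-encoding a throwaway pfx first. A sketch (all file names are made up; a real run substitutes the curl | jq output for the encoded value):

```shell
# Build a throwaway cert + key and bundle them as a pfx with an empty password
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout demo.key -out demo.crt
openssl pkcs12 -export -inkey demo.key -in demo.crt -out demo.pfx -passout pass:

# Simulate the Key Vault "value" field: base64 of the raw pfx bytes
base64 -w0 demo.pfx > value.b64   # -w0 is GNU base64; omit on macOS

# The same decode + split the answer runs against the curl output
base64 -d value.b64 | openssl pkcs12 -nocerts -nodes -out private-key.pem -passin pass:
base64 -d value.b64 | openssl pkcs12 -clcerts -nokeys -out cert.pem -passin pass:
```

If this round trip works locally, any remaining failure against the real vault points at the string handling (e.g. jq without -r leaving quotes in place), not at openssl.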

UWSGI for https configuration is not working

UWSGI Version- 2.0.18
Openssl- 1.0.2k-fips
Python 2.7
Getting Error:
uwsgi: error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such file or directory
Whenever you pip install uWSGI, it automatically links against the openssl libs.
But make sure you have installed the openssl and openssl-devel packages.
I tried with following versions:
Python- 3.6
UWSGI- 2.0.18
Commands:
Create Virtual Env and install flask and uWSGI:
virtualenv -p /usr/bin/python3.6 testing
source testing/bin/activate
pip install flask
pip install uWSGI
Create Certs:
openssl genrsa -out foobar.key 2048
openssl req -new -key foobar.key -out foobar.csr
openssl x509 -req -days 365 -in foobar.csr -signkey foobar.key -out foobar.crt
Create Sample Python file: foobar.py
def application(env, start_response):
    start_response('200 OK', [('Content-Type','text/html')])
    return [b"Hello World"]
Run uWSGI:
uwsgi --shared-socket 0.0.0.0:443 --uid roberto --gid roberto --https =0,foobar.crt,foobar.key --wsgi-file foobar.py
Make sure you do not confuse a uWSGI installed in a virtual environment with one installed as the root user.
Follow the documentation:
https://uwsgi-docs.readthedocs.io/en/latest/WSGIquickstart.html
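As a sanity check on the certs created above, it is worth confirming that foobar.crt really was signed with foobar.key before handing the pair to uWSGI; a mismatched pair fails the TLS handshake with unhelpful errors. A sketch comparing the public keys (re-creating the files first so it runs standalone):

```shell
openssl genrsa -out foobar.key 2048
openssl req -new -key foobar.key -subj "/CN=foobar" -out foobar.csr
openssl x509 -req -days 365 -in foobar.csr -signkey foobar.key -out foobar.crt

# A cert and key belong together exactly when they carry the same public key
openssl x509 -in foobar.crt -noout -pubkey > cert.pub
openssl pkey -in foobar.key -pubout > key.pub
cmp -s cert.pub key.pub && echo "pair matches"
```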

cannot validate certificate for <ip-address> because it doesn't contain any IP SANs

I'm working on a GitLab CI pipeline that will deploy my docker stack. I'm trying to set $DOCKER_HOST to tcp://DROPLET_IP:2377, but I'm getting an error saying that my certificate doesn't contain any IP SANs. I'm testing with a Digital Ocean Droplet, so I haven't set a domain name for my droplet yet.
deploy:
  stage: deploy
  image: docker:stable
  services:
    - docker:dind
  variables:
    DOCKER_HOST: "tcp://$DROPLET_IP:2377"
    DOCKER_TLS_VERIFY: 1
  before_script:
    - mkdir -p ~/.docker
    - echo "$TLS_CA_CERT" > ~/.docker/ca.pem
    - echo "$TLS_CERT" > ~/.docker/cert.pem
    - echo "$TLS_KEY" > ~/.docker/key.pem
  script:
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker info
    - docker stack deploy --with-registry-auth --compose-file=docker-stack.yml mystack
Here's the error I'm getting in the output of my GitLab CI job:
$ docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
error during connect: Post https://<ip-address>:2377/v1.39/auth: x509: cannot validate certificate for <ip-address> because it doesn't contain any IP SANs
I'm using the following set of commands to generate my certs (ca.pem, server-cert.pem and server-key.pem) that I'm trying to use in my deploy stage described above. I have saved TLS_CA_CERT, TLS_CERT and TLS_KEY to variables that are being used in GitLab CI.
openssl genrsa -aes256 -out ca-key.pem 4096
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
openssl genrsa -out server-key.pem 4096
openssl req -subj "/CN=<ip-address>" -sha256 -new -key server-key.pem -out server.csr
echo subjectAltName = IP:<ip-address> >> extfile.cnf
echo extendedKeyUsage = serverAuth >> extfile.cnf
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem \
-CAcreateserial -out server-cert.pem -extfile extfile.cnf
I see you have included the IP address in the subjectAltName:
echo subjectAltName = IP:<ip-address> >> extfile.cnf
Check, as in here, if this is a configuration issue:
I put subjectAltName in the wrong section. Working method: Basically I edited openssl.cnf, in section [v3_ca] I added 'subjectAltName = IP:192.168.2.107'.
Produced new certificate and added to server + client.
You need to make sure your extension is declared in the v3_ca part, as shown here.
As of OpenSSL 1.1.1, providing subjectAltName directly on the command line becomes much easier, with the introduction of the -addext flag to openssl req.
Example:
export HOST="my.host"
export IP="127.0.0.1"
openssl req -newkey rsa:4096 -nodes -keyout ${HOST}.key -x509 -days 365 -out ${HOST}.crt -addext "subjectAltName = IP:${IP}" -subj "/C=US/ST=CA/L=SanFrancisco/O=MyCompany/OU=RND/CN=${HOST}/"
(Note the double quotes: with single quotes the shell would not expand ${IP} and ${HOST}.)
Inspired by link
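Whichever way the SAN gets in (config-file section or -addext), the finished certificate can be checked before deployment; openssl prints the extension directly. A sketch with a throwaway cert (the IP is a documentation placeholder):

```shell
IP="203.0.113.10"
openssl req -newkey rsa:2048 -nodes -keyout san.key -x509 -days 365 -out san.crt \
  -addext "subjectAltName = IP:${IP}" -subj "/CN=san-demo"

# The client-side check: the IP must appear under Subject Alternative Name
openssl x509 -in san.crt -noout -ext subjectAltName
```

If the IP is missing here, docker's TLS verification will fail with exactly the "doesn't contain any IP SANs" error above.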

Allow access to private dependencies before install

I have a github project being tracked by Travis.
Currently, I have a new dependency, which is a private repo.
For now, I just need to use the simple Deploy Key approach.
This is my understanding of the steps that are needed:
generate the public/private ssh key pair
encrypt it using travis cli
ship the encrypted key.enc to the repository
Then the CLI enlightens us with the command we can use to decrypt the file:
before_install:
- openssl aes-256-cbc -K $encrypted_X_key -iv $encrypted_Y_iv -in key.enc -out key -d
I can decrypt the key now.
But how do I add it to the ssh-agent at build time?
This is the required step to add the key before installing the private dependencies:
before_install:
- openssl aes-256-cbc -K $encrypted_X_key -iv $encrypted_Y_iv -in .travis/key.enc -out .travis/key -d
- chmod 600 .travis/key
- eval "$(ssh-agent -s)"
- ssh-add .travis/key
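The aes-256-cbc step above can be exercised end-to-end without travis, which is a handy local check that a key survives the round trip. The hex key/iv below are generated stand-ins for the $encrypted_X_key / $encrypted_Y_iv values travis exports, and the key file is a dummy:

```shell
# Stand-ins for the repository variables travis would provide
key=$(openssl rand -hex 32)   # 256-bit key as 64 hex chars
iv=$(openssl rand -hex 16)    # 128-bit IV as 32 hex chars

printf 'fake-private-key-material\n' > deploy_key

# Encrypt (roughly what `travis encrypt-file` does), then decrypt
# (the before_install step)
openssl aes-256-cbc -e -K "$key" -iv "$iv" -in deploy_key -out key.enc
openssl aes-256-cbc -d -K "$key" -iv "$iv" -in key.enc -out deploy_key.dec

cmp -s deploy_key deploy_key.dec && echo "round trip ok"
```

The chmod 600 before ssh-add still matters on the real key: ssh-add refuses keys with group/world-readable permissions.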