uWSGI HTTPS configuration is not working - uwsgi

uWSGI version: 2.0.18
OpenSSL: 1.0.2k-fips
Python: 2.7
Getting this error:
uwsgi: error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such file or directory

Whenever you pip install uWSGI, the build automatically links against the OpenSSL libraries, so make sure the openssl and openssl-devel packages are installed first.
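For example, on a RHEL/CentOS host (a sketch; on Debian/Ubuntu the equivalent dev package is libssl-dev):
sudo yum install -y openssl openssl-devel
# force a rebuild so the uWSGI binary links against the now-present OpenSSL
pip install --no-cache-dir --force-reinstall uWSGI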
I tried with the following versions:
Python: 3.6
uWSGI: 2.0.18
Commands:
Create Virtual Env and install flask and uWSGI:
virtualenv -p /usr/bin/python3.6 testing
source testing/bin/activate
pip install flask
pip install uWSGI
Create Certs:
openssl genrsa -out foobar.key 2048
openssl req -new -key foobar.key -out foobar.csr
openssl x509 -req -days 365 -in foobar.csr -signkey foobar.key -out foobar.crt
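To confirm the self-signed certificate came out as expected, you can optionally inspect it:
openssl x509 -in foobar.crt -noout -subject -dates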
Create a sample Python file, foobar.py:
def application(env, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    return [b"Hello World"]
Run uWSGI:
uwsgi --shared-socket 0.0.0.0:443 --uid roberto --gid roberto --https =0,foobar.crt,foobar.key --wsgi-file foobar.py
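Once uWSGI is up, you can test it from another shell; -k is needed because curl won't trust a self-signed certificate:
curl -k https://127.0.0.1/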
Make sure you don't confuse a uWSGI binary installed inside the virtual environment with one installed system-wide as root.
Follow the documentation:
https://uwsgi-docs.readthedocs.io/en/latest/WSGIquickstart.html

Related

IBM MQ docker add personal cert to .kdb

I have created a kdb file in my IBM MQ (docker) container using the command below:
runmqakm -keydb -create -db key.kdb -stash -pw password -type cms
And I've created a self-signed cert by:
openssl genrsa -out ca.key 2048
openssl req -new -x509 -key ca.key -subj "$prefix/CN=ca" -out ca.crt
openssl pkcs12 -export -out ca.pfx -in ca.crt -nokeys
openssl pkcs12 -export -in ca.crt -inkey ca.key -out ca.p12 -name myca -CAfile ca.crt -passin pass:mypass -passout pass:mypass
Now I want to add my own ca.crt to the kdb as a personal cert, I mean something like below:
runmqakm -cert -list -db key.kdb -stashed
Certificates found
* default, - personal, ! trusted, # secret key
- CAlabel
I've tried these commands:
runmqckm -cert -import -file ca.pfx -pw mypass -type pkcs12 -target filename -target_pw password -target_type cms -label CAlabel
runmqckm -cert -import -file ca.p12 -pw mypass -type pkcs12 -target filename -target_pw password -target_type cms -label CAlabel
But I keep facing this error (logged in to docker as root: docker exec -it -u 0 containerid sh):
The database doesn't contain an entry with label 'CAlabel'.
Check the label and try again.
And also this (logged in to docker normally: docker exec -ti containerid /bin/bash):
Dec 19, 2021 7:48:57 AM java.util.prefs.FileSystemPreferences$1 run
WARNING: Couldn't create user preferences directory. User preferences are unusable.
Dec 19, 2021 7:48:57 AM java.util.prefs.FileSystemPreferences$1 run
WARNING: java.io.IOException: No such file or directory
The input file '/mnt/mqm/data/qmgrs/QM1/ssl/ca.pfx' could not be found.
Check the database path.
Does anyone have a suggestion for how I can solve this problem?
This command will import all certs contained in the p12 file into the kdb:
runmqakm -cert -import -file ca.p12 -pw mypass -type pkcs12 -target key.kdb -target_stashed -target_type cms
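After the import, re-run the listing command from the question to verify the personal cert is now present:
runmqakm -cert -list -db key.kdb -stashed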

Generating a self-signed cert with dockerfile not actually generating a self-signed cert

First, I'm fairly new to Docker, but this seems pretty straightforward.
I am working off of this Dockerfile. I made some very basic modifications, like installing openssl and generating some self-signed certs, so I can use SSL in Apache. Here is the section I added to the linked Dockerfile:
RUN mkdir /ssl-certs
RUN openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -subj \
"/C=../ST=../L=..../O=LAB/CN=....." \
-keyout /ssl-certs/ssl.key -out /ssl-certs/ssl.crt
RUN mkdir -p /etc/apache2/ssl/
COPY /ssl-certs/ssl.key /etc/apache2/ssl/ssl.key
COPY /ssl-certs/ssl.crt /etc/apache2/ssl/ssl.crt
However, when I build this I get the following output:
=> CACHED [ 8/19] RUN openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -subj "/C=../ST=../L=.... 0.0s
=> CACHED [ 9/19] RUN mkdir -p /etc/apache2/ssl/ 0.0s
=> ERROR [10/19] COPY /ssl-certs/ssl.key /etc/apache2/ssl/ssl.key 0.0s
=> ERROR [11/19] COPY /ssl-certs/ssl.crt /etc/apache2/ssl/ssl.crt 0.0s
------
> [10/19] COPY /ssl-certs/ssl.key /etc/apache2/ssl/ssl.key:
------
------
> [11/19] COPY /ssl-certs/ssl.crt /etc/apache2/ssl/ssl.crt:
------
This basically tells me openssl isn't actually doing anything, or Docker doesn't wait for openssl to finish, which doesn't seem likely. I've looked around and I can't seem to find anyone with a similar problem. Any pointers are appreciated.
COPY /ssl-certs/ssl.key /etc/apache2/ssl/ssl.key
COPY /ssl-certs/ssl.crt /etc/apache2/ssl/ssl.crt
The COPY instruction reads from the build context on the host, not from the image's filesystem, so it cannot see files created by an earlier RUN step. You may try
RUN cp /ssl-certs/ssl.key /etc/apache2/ssl/ssl.key \
 && cp /ssl-certs/ssl.crt /etc/apache2/ssl/ssl.crt
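Alternatively, you could skip the copy step entirely and have openssl write straight to the final location (a sketch that just reuses the question's own RUN line):
RUN mkdir -p /etc/apache2/ssl \
 && openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -subj \
 "/C=../ST=../L=..../O=LAB/CN=....." \
 -keyout /etc/apache2/ssl/ssl.key -out /etc/apache2/ssl/ssl.crt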
Edit: regardless, I consider it bad practice to
build secrets (the private key) into the container; rather, mount the secrets at run-time (see the sketch after this list)
create non-deterministic builds (generating a new random private key on every build)
I guess, or rather hope, it's for dev/education purposes, but when doing SSL, let's do it properly, even for self-signed certificates.
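A run-time mount could look like this (hypothetical image name and host path):
docker run -v "$(pwd)/ssl-certs:/etc/apache2/ssl:ro" my-apache-image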

Installing certs into a Jupyter notebook docker image

At work we have a bunch of internal servers that use self-signed certificates. I'm trying to install these certs into a Jupyter notebook image so it can access the servers, but for some reason they're not being found. Here is a minimal Dockerfile:
FROM jupyter/datascience-notebook:notebook-6.4.2
USER root
RUN echo 'Acquire::http::proxy "http://proxy.internal.server";' >> /etc/apt/apt.conf.d/99proxy
ENV http_proxy http://proxy.internal.server
ENV https_proxy http://proxy.internal.server
ENV NO_PROXY internal.server
COPY certificates/* /usr/local/share/ca-certificates/
RUN update-ca-certificates
After doing this, when I try to download a file, e.g. with curl -O https://internal.server/file, it fails with a message that the cert is invalid. I have to add the -k flag to turn off SSL verification for it to succeed.
If I follow the same procedure but starting from a vanilla Ubuntu image, then there's no problem. (I do have to install ca-certificates and curl.)
Is there something about the Jupyter image that is messing with the cert store? What is the correct procedure for installing certs?
The reason is that the Jupyter images use conda, and conda ships with its own openssl and its own CA certificates through the ca-certificates package.
You can see this inside the image:
python -c "import ssl; print(ssl.get_default_verify_paths())"
# DefaultVerifyPaths(cafile='/opt/conda/ssl/cert.pem', capath=None,
# openssl_cafile_env='SSL_CERT_FILE',
# openssl_cafile='/opt/conda/ssl/cert.pem',
# openssl_capath_env='SSL_CERT_DIR',
# openssl_capath='/opt/conda/ssl/certs')
I don't have an ideal solution for using custom CA certificates. You can try playing with the various environment variables:
export SSL_CERT_DIR=/etc/ssl/certs
export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
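In the question's Dockerfile, those could be baked in with ENV so they apply to every process in the image (an untested sketch):
ENV SSL_CERT_DIR /etc/ssl/certs
ENV SSL_CERT_FILE /etc/ssl/certs/ca-certificates.crt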
As a last resort you can try to either:
Add the certificate to the conda CA file:
openssl x509 -in /path/to/custom/ca.crt -outform PEM >> $CONDA_PREFIX/ssl/cacert.pem
Overwrite the conda CA file with a symlink to the system location (see the sketch below).
However, those fixes will break whenever the ca-certificates package is updated.
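The symlink variant could look like this (a sketch; the exact conda path may differ between image versions):
ln -sf /etc/ssl/certs/ca-certificates.crt "$CONDA_PREFIX/ssl/cacert.pem"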

cannot validate certificate for <ip-address> because it doesn't contain any IP SANs

I'm working on a GitLab CI pipeline that will deploy my docker stack. I'm trying to set $DOCKER_HOST to tcp://DROPLET_IP:2377, but I'm getting an error saying that my certificate doesn't contain any IP SANs. I'm testing with a DigitalOcean Droplet, so I haven't set a domain name for my droplet yet.
deploy:
  stage: deploy
  image: docker:stable
  services:
    - docker:dind
  variables:
    DOCKER_HOST: "tcp://$DROPLET_IP:2377"
    DOCKER_TLS_VERIFY: 1
  before_script:
    - mkdir -p ~/.docker
    - echo "$TLS_CA_CERT" > ~/.docker/ca.pem
    - echo "$TLS_CERT" > ~/.docker/cert.pem
    - echo "$TLS_KEY" > ~/.docker/key.pem
  script:
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker info
    - docker stack deploy --with-registry-auth --compose-file=docker-stack.yml mystack
Here's the error I'm getting in the output of my GitLab CI job:
$ docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
error during connect: Post https://<ip-address>:2377/v1.39/auth: x509: cannot validate certificate for <ip-address> because it doesn't contain any IP SANs
I'm using the following set of commands to generate my certs (ca.pem, server-cert.pem and server-key.pem) that I'm trying to use in my deploy stage described above. I have saved TLS_CA_CERT, TLS_CERT and TLS_KEY to variables that are being used in GitLab CI.
openssl genrsa -aes256 -out ca-key.pem 4096
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
openssl genrsa -out server-key.pem 4096
openssl req -subj "/CN=<ip-address>" -sha256 -new -key server-key.pem -out server.csr
echo subjectAltName = IP:<ip-address> >> extfile.cnf
echo extendedKeyUsage = serverAuth >> extfile.cnf
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem \
-CAcreateserial -out server-cert.pem -extfile extfile.cnf
I see you have included the IP address in the subjectAltName
echo subjectAltName = IP:<ip-address> >> extfile.cnf
Check, as in here, whether this is a configuration issue:
I put subjectAltName in the wrong section. Working method: basically, I edited openssl.cnf and, in the [v3_ca] section, added 'subjectAltName = IP:192.168.2.107'.
Produced a new certificate and added it to the server + client.
You need to make sure your extension is declared in the v3_ca part, as shown here.
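For reference, the relevant openssl.cnf fragment would look like this (IP taken from the quote above):
[ v3_ca ]
subjectAltName = IP:192.168.2.107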
As of OpenSSL 1.1.1, providing subjectAltName directly on the command line becomes much easier, with the introduction of the -addext flag to openssl req.
Example:
export HOST="my.host"
export IP="127.0.0.1"
openssl req -newkey rsa:4096 -nodes -keyout ${HOST}.key -x509 -days 365 -out ${HOST}.crt -addext "subjectAltName = IP:${IP}" -subj "/C=US/ST=CA/L=SanFrancisco/O=MyCompany/OU=RND/CN=${HOST}/"
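You can check whether the SAN actually made it into the issued certificate with:
openssl x509 -in ${HOST}.crt -noout -text | grep -A1 'Subject Alternative Name'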
Inspired by link

Allow access to private dependencies before install

I have a github project being tracked by Travis.
Currently, I have a new dependency, which is a private repo.
For now, I just need to use the simple Deploy Key approach.
This is my understanding of the steps that are needed:
generate the public/private ssh key pair
encrypt it using travis cli
ship the encrypted key.enc to the repository
Then the CLI shows us the command we can use to decrypt the file:
before_install:
  - openssl aes-256-cbc -K $encrypted_X_key -iv $encrypted_Y_iv -in key.enc -out key -d
I can decrypt the key now.
But how do I add it to the ssh-agent at build time?
These are the required steps to decrypt the key and add it to the agent before installing the private dependencies:
before_install:
  - openssl aes-256-cbc -K $encrypted_X_key -iv $encrypted_Y_iv -in .travis/key.enc -out .travis/key -d
  - chmod 600 .travis/key
  - eval "$(ssh-agent -s)"
  - ssh-add .travis/key
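If you want to confirm the key actually made it into the agent, an optional extra step (not required for the build) is:
  - ssh-add -l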
