Background
I set up and configured a VerneMQ broker. The broker runs in a Docker container and I start it using docker-compose.yml. This is how my docker-compose file looks:
version: '3.3'
services:
  db:
    image: erlio/docker-vernemq
    container_name: vernemq1
    network_mode: docker_mysql_default
    restart: always
    environment:
      DOCKER_VERNEMQ_ALLOW_ANONYMOUS: 'off'
      DOCKER_VERNEMQ_PLUGINS.vmq_diversity: 'on'
      DOCKER_VERNEMQ_PLUGINS.vmq_passwd: 'off'
      DOCKER_VERNEMQ_PLUGINS.vmq_acl: 'off'
      DOCKER_VERNEMQ_VMQ_DIVERSITY.auth_mysql.enabled: 'on'
      DOCKER_VERNEMQ_VMQ_DIVERSITY.mysql.host: 'docker_mysql'
      DOCKER_VERNEMQ_VMQ_DIVERSITY.mysql.port: '3306'
      DOCKER_VERNEMQ_VMQ_DIVERSITY.mysql.user: 'vernemq'
      DOCKER_VERNEMQ_VMQ_DIVERSITY.mysql.password: 'vernemq'
      DOCKER_VERNEMQ_VMQ_DIVERSITY.mysql.database: 'vernemq_db'
      DOCKER_VERNEMQ_VMQ_DIVERSITY.mysql.password_hash_method: 'md5'
      DOCKER_VERNEMQ_LISTENER__SSL__CAFILE: '/vernemq/etc/ssl/chain.pem'
      DOCKER_VERNEMQ_LISTENER__SSL__CERTFILE: '/vernemq/etc/ssl/cert.pem'
      DOCKER_VERNEMQ_LISTENER__SSL__KEYFILE: '/vernemq/etc/ssl/privkey.pem'
      DOCKER_VERNEMQ_LISTENER__SSL__DEFAULT: '0.0.0.0:8081'
      DOCKER_VERNEMQ_LISTENER__SSL__DEFAULT__USE_IDENTITY_AS_USERNAME: 'off'
      DOCKER_VERNEMQ_LISTENER__SSL__DEFAULT__REQUIRE_CERTIFICATE: 'off'
    ports:
      # <Port exposed> : <Port running inside container>
      - '1883:1883'
      - '8081:8081'
    expose:
      # Opens port 1883 on the container
      - '1883'
      - '8081'
    # Where our data will be persisted
    volumes:
      - /var/lib/
      - /home/ubuntu/etc/ssl:/vernemq/etc/ssl
# Name our volume
volumes:
  my-db:
I am using a MySQL database for authentication.
I am trying to use TLS certificates, based on the provided documentation ( https://docs.vernemq.com/configuration/listeners#sample-ssl-config ).
This setup is fully functional when I am not trying to accept SSL connections (that is, when I remove the following lines from docker-compose.yml):
DOCKER_VERNEMQ_LISTENER__SSL__CAFILE: '/vernemq/etc/ssl/chain.pem'
DOCKER_VERNEMQ_LISTENER__SSL__CERTFILE: '/vernemq/etc/ssl/cert.pem'
DOCKER_VERNEMQ_LISTENER__SSL__KEYFILE: '/vernemq/etc/ssl/privkey.pem'
DOCKER_VERNEMQ_LISTENER__SSL__DEFAULT: '0.0.0.0:8081'
DOCKER_VERNEMQ_LISTENER__SSL__DEFAULT__USE_IDENTITY_AS_USERNAME: 'off'
DOCKER_VERNEMQ_LISTENER__SSL__DEFAULT__REQUIRE_CERTIFICATE: 'off'
I tested/verified the TLS connection using the openssl client:
openssl s_client -connect 172.18.0.4:8081 -key privkey.pem -cert cert.pem
I executed this from the server's localhost; 172.18.0.4 is the IP address of the VerneMQ Docker container, 8081 is the expected default SSL listener port, and the key/cert are provided.
The handshake completed, which I take to mean the TLS listener works.
Question
How can I test this using a mosquitto client or any other MQTT client?
I want to use TLS based connection when publishing and subscribing.
When I don't use TLS, this is how I execute mosquitto_sub (subscription client):
mosquitto_sub -h <ip_address> -p 1883 -t topic -d -u user -P password -i client-id
This is the response:
VerneMQ Subscription
When I try to use TLS, I add the --key and --cert options to supply the private key and certificate:
mosquitto_sub -h <ip_address> -p 1883 -t topic -d -u user -P password -i client-id --key privkey.pem --cert cert.pem
I only get
Client user sending CONNECT
repeatedly. What am I doing wrong?
There are a few things you need to do. First, give correct permissions on your certificate directory: make sure it is readable by the user running VerneMQ (in my case "vernemq"). Set the permissions on the certificate folder like this:
chown -R vernemq:vernemq /etc/letsencrypt/live
All the certificate files referenced in the configuration should be in .pem format:
listener.ssl.cafile = /etc/letsencrypt/live/mqtts.domain.com/chain.pem
listener.ssl.certfile = /etc/letsencrypt/live/mqtts.domain.com/cert.pem
listener.ssl.keyfile = /etc/letsencrypt/live/mqtts.domain.com/privkey.pem
The client must use fullchain.pem to connect to the server. If you do not have it: the domain certificate is issued by the intermediate "Let's Encrypt Authority X3", and this intermediate is cross-signed by "DST Root CA X3" (from IdenTrust). Since IdenTrust is widely trusted by most OSes and applications, we will use "DST Root CA X3" as the root CA.
If you are not on a too-old OS, you can build it from your local machine:
cat /etc/ssl/certs/DST_Root_CA_X3.pem /etc/letsencrypt/live/$domain/chain.pem > ca.pem
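With that ca.pem in hand, a subscribe attempt against the SSL listener from the question would look roughly like this (a sketch: host, credentials, topic and port 8081 come from the question; the ca.pem path is assumed, and since require_certificate is 'off' no client certificate is needed):
mosquitto_sub -h <ip_address> -p 8081 -t topic -d -u user -P password -i client-id --cafile ca.pem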
From the mosquitto_sub man page:
Encrypted Connections
mosquitto_sub supports TLS encrypted connections. It is strongly
recommended that you use an encrypted connection for anything more
than the most basic setup.
To enable TLS connections when using x509 certificates, one of either
--cafile or --capath must be provided as an option.
--capath
Define the path to a directory containing PEM encoded CA certificates that are trusted. Used to enable SSL communication.
For --capath to work correctly, the certificate files must have ".crt" as the file ending and you must run "openssl rehash " each time you add/remove a certificate.
So to use the mosquitto_sub command over TLS you must supply either a file containing the trusted CA certificate (--cafile) or a directory holding a collection of trusted CA certificates (--capath).
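If you would rather use --capath than --cafile, the directory has to be prepared as the man page describes; a sketch, with the directory name and the CA file name assumed:
mkdir -p ~/mqtt-cas
cp ca.pem ~/mqtt-cas/ca.crt    # --capath only picks up files ending in .crt
openssl rehash ~/mqtt-cas      # rerun this whenever a certificate is added or removed
mosquitto_sub -h <ip_address> -p 8081 -t topic -u user -P password --capath ~/mqtt-cas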
Related
I get the following error when I try to subscribe to a topic using my certs:
Command:
mosquitto_sub -d -v --capath <path_to_file>/xxx.pem --cert <path_to_file>/yyy.pem.crt --key <path_to_file>/zzz.pem.key -h "<my_endpoint>" -p 8883 -t "<my_topic>"
Client (null) sending CONNECT
OpenSSL Error[0]: error:0A000086:SSL routines::certificate verify failed
Error: A TLS error occurred.
I have checked the permissions of the certificates and provided the correct paths, but I am still not sure why I am hitting this error.
As pointed out in the comments:
--capath is used to point to a directory full of CA certificates
--cafile is used to point to a single certificate file
From the man page
--cafile
Define the path to a file containing PEM encoded CA certificates that are trusted. Used to enable SSL communication.
See also --capath
--capath
Define the path to a directory containing PEM encoded CA certificates that are trusted. Used to enable SSL communication.
For --capath to work correctly, the certificate files must have ".crt"
as the file ending and you must run "openssl rehash "
each time you add/remove a certificate.
See also --cafile
I have an OpenLDAP Docker instance from Osixia and am trying to query it securely from the client using TLS. The query works without encryption using $ ldapwhoami -H ldap://localhost -x, but it does not work when using the -ZZ flag to start a TLS operation: $ ldapwhoami -H ldap://localhost -x -ZZ returns ldap_start_tls: Can't contact LDAP server (-1). How can I make this work? Below are all the steps I took:
Run LDAP server in docker:
$ docker run -p 389:389 -p 636:636 --name ldap-service --hostname ldap-service \
--env LDAP_ADMIN_PASSWORD="password" --env LDAP_BASE_DN="dc=example,dc=org" --detach osixia/openldap:1.4.0
Test connectivity - this succeeds and returns anonymous:
$ ldapwhoami -H ldap://localhost -x
anonymous
Preparations for TLS connectivity - configure the client to trust the SERVER Certificate Authority (CA):
SERVER DOCKER CONTAINER: TLS certs are auto-configured at runtime in the osixia/openldap image. Copy the contents of the CA from /container/service/slapd/assets/certs/ca.crt.
CLIENT: Paste the copied server ca.crt into the client folder /usr/local/share/ca-certificates/ca.crt, then run sudo update-ca-certificates to add it. Confirm it was added by checking that the CA is inside /etc/ssl/certs/ca-certificates.crt (these steps are sketched as commands below).
CLIENT: In the file /etc/ldap/ldap.conf I added the line TLS_CACERT /etc/ssl/certs/ca-certificates.crt
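A sketch of those copy/trust steps as shell commands (the container name and paths come from the steps above; it assumes a Debian/Ubuntu client where update-ca-certificates is available):
# copy the auto-generated CA out of the container, then add it to the client trust store
$ sudo docker cp ldap-service:/container/service/slapd/assets/certs/ca.crt /usr/local/share/ca-certificates/ca.crt
$ sudo update-ca-certificates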
Test TLS connectivity from CLIENT via -ZZ flag to start TLS operation:
$ ldapwhoami -H ldap://localhost -x -ZZ
ldap_start_tls: Can't contact LDAP server (-1)
additional info: The TLS connection was non-properly terminated.
Further logs from inside LDAP docker:
5ff42195 conn=1079 fd=12 ACCEPT from IP=172.17.0.1:39420 (IP=0.0.0.0:636)
TLS: can't accept: No certificate was found..
5ff42195 conn=1079 fd=12 closed (TLS negotiation failure)
Test TLS connectivity from CLIENT via LDAP Secure URI scheme ldaps://
$ ldapwhoami -H ldaps://localhost -x
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)
Things I tried out:
I read through https://www.openldap.org/doc/admin24/tls.html and subsequently installed the Server CA on the client.
I read through this post: "ldapsearch over ssl/tls doesn't work". I changed the settings in /etc/ldap/ldap.conf to include the items below, but to no avail.
TLS_CACERT /etc/ssl/certs/ca-certificates.crt
TLS_REQCERT ALLOW
PORT 636
HOST localhost // i also tried 'ldap-service'
I found the solution:
Add --env LDAP_TLS_VERIFY_CLIENT=try to the docker run command. Source
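Applied to the docker run command from the question, that looks like this (same image and settings as before, with only the extra environment variable added):
$ docker run -p 389:389 -p 636:636 --name ldap-service --hostname ldap-service \
--env LDAP_ADMIN_PASSWORD="password" --env LDAP_BASE_DN="dc=example,dc=org" \
--env LDAP_TLS_VERIFY_CLIENT=try --detach osixia/openldap:1.4.0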
For Googlers:
Presto does not supply client certificates (client certificate verification, i.e. two-way verification) when connecting to an LDAP service, so you will need --env LDAP_TLS_VERIFY_CLIENT=try or never if you use osixia/openldap, or edit ldap.conf, set TLS_REQCERT never, and restart the LDAP service.
First post here so apologies if my etiquette isn't quite on point!
I'm having a bit of trouble understanding how certificate authorities work between separate machines, specifically when it comes to MQTT and the Mosquitto broker.
I have generated my server and client certificates using this link and got them working absolutely fine on localhost. For the server I used the common name RPi-Host, i.e. the hostname, and for the clients I used 'localhost'. An example of the commands I use to generate a certificate for a client is given below, where %NAME is just the name of the cert:
Generate Key with:
$ openssl genrsa -out <%NAME>.key 2048
Generate certificate request with:
$ openssl req -out <%NAME>.csr -key <%NAME>.key -new
Link to main CA:
$ openssl x509 -req -in <%NAME>.csr -CA ../ca/ca.crt -CAkey ../ca/ca.key -CAcreateserial -out <%NAME>.crt -days 365
Let's say I've generated client and client2 certificates; I can then run the commands below in two different terminals on the RPi-Host and connect with no problem at all:
Subscribe to MQTT broker:
$ mosquitto_sub -p 8883 --cafile ca.crt --cert client2.crt --key client2.key -h localhost -t /world
Publish to MQTT broker:
$ mosquitto_pub -p 8883 --cafile ../ca/ca.crt --cert client.crt --key client.key -h localhost -m hello! -t /world
However, if I change the -h localhost to 192.168.0.190, i.e. the IP address, I immediately get:
Error: A TLS Error occurred.
...which isn't very helpful!
The aim is to connect to this from a separate machine; however, I'm stumped just trying to do this on the same machine via its own IP address! Do I need to do something fancy in the common name when generating the certificate? Sadly I have not yet found a tutorial that covers connecting with mosquitto and TLS across two separate machines.
Any pointers appreciated, and terribly sorry if I'm missing the obvious!
Alex
The hostname (or IP address*) that you use to connect to the remote machine MUST match the CN/SAN value in the certificate presented by that machine.
localhost shouldn't really ever be used in certificates as it is just a placeholder which says "This machine". Using TLS/SSL with localhost doesn't do anything useful. You should always generate certificates with the externally used hostname of the broker.
If you can't set up proper hostnames in a DNS server then you should probably add suitable entries to the /etc/hosts file on all the client machines with the hostname for the broker.
The temporary workaround to the error is probably to add --insecure to the mosquitto_pub and mosquitto_sub command lines. This tells them to ignore any mismatch between the hostname and the name in the certificate. But this is only a workaround, as it basically negates one of TLS/SSL's two key features (1. proving the machine you are connecting to is the machine it claims to be, 2. enabling encryption of the messages passing back and forth between the client/broker).
* Using raw IP addresses for TLS is possible, but it adds another level of difficulty getting the entries in the certificates right.
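As a sketch of what that means in practice, the server certificate could be re-issued with a SAN that covers both the broker's hostname and its LAN IP (the file names are assumptions, following the CA generated earlier):
$ printf "subjectAltName=DNS:RPi-Host,IP:192.168.0.190\n" > san.ext
$ openssl genrsa -out server.key 2048
$ openssl req -out server.csr -key server.key -new -subj "/CN=RPi-Host"
$ openssl x509 -req -in server.csr -CA ../ca/ca.crt -CAkey ../ca/ca.key -CAcreateserial -out server.crt -days 365 -extfile san.ext
Clients could then use -h RPi-Host (after adding an /etc/hosts entry pointing at 192.168.0.190) or -h 192.168.0.190 directly, and the name check should pass.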
I have this setup. On the remote server I tried:
mosquitto_sub -h 127.0.0.1 -t 'myTopic' -i 'myId'
On my computer I tried:
mosquitto_pub -h 'remote_ip_here' -t 'myTopic' -m 'the message'
The remote server was able to get the message I published from my computer.
The remote server has these keys:
certificate file = cert.pem
certificate key file = privkey.pem
certification chain file = chain.pem
If I want to have SSL/TLS communication between my computer and the remote computer:
- How do I use those keys?
- Am I supposed to copy those keys from the remote computer and also put them on my computer?
- Can someone please help with the proper commands to execute in order to have SSL/TLS communication?
On the remote server I tried:
mosquitto_sub -h 127.0.0.1 -t 'myTopic' -i 'myId' --capath /etc/myPemPath -p 1883
While on my computer, I tried:
mosquitto_pub -h remote_ip -t 'myTopic' -m 'the message' --capath /etc/localPemPath -p 1883
It didn't work, so how do I do this?
You seem to have misunderstood how MQTT works. Both mosquitto_sub and mosquitto_pub are MQTT clients which communicate with an MQTT broker (mosquitto); it is not a direct client/server relationship.
In order to have a TLS-secured MQTT connection you first need to configure the broker to use the certificates to identify itself, then configure the clients to verify that certificate as part of the TLS handshake.
The mosquitto documentation on how to configure TLS is available here. You need to add either a cafile or capath, plus certfile and keyfile options, to your mosquitto.conf file. Be aware that TLS settings apply to the last listener configured, so you will probably need to set up a new listener on a port other than 1883.
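A minimal sketch of such a listener block in mosquitto.conf, assuming the three files listed in the question are copied to /etc/mosquitto/certs and that 8883 is used for TLS:
# plain MQTT stays on 1883; TLS gets its own listener
listener 8883
cafile /etc/mosquitto/certs/chain.pem
certfile /etc/mosquitto/certs/cert.pem
keyfile /etc/mosquitto/certs/privkey.pem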
As for the clients, assuming you are not doing mutually authenticated TLS, you only need to pass the --cafile/--capath option to mosquitto_pub and mosquitto_sub to enable a TLS session.
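For example, from your computer (a sketch; it assumes the CA chain that signed the broker's certificate has been copied to the client as ca.pem and that the TLS listener is on port 8883 as above):
mosquitto_pub -h remote_ip -p 8883 --cafile ca.pem -t 'myTopic' -m 'the message'
Only the CA certificate needs to be copied to your computer; cert.pem and privkey.pem stay on the remote server for the broker's use.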
I'm trying to get SSL working on my JHipster app.
I'm using docker and docker-compose, and have the following:
app.yml
ports:
  - 443:443
Dockerfile
ADD /keystore.p12 /keystore.p12
EXPOSE 443
application-prod.yml
server:
  port: 443
  ssl:
    key-store: /keystore.p12
    key-store-password: <password>
    keyStoreType: PKCS12
    keyAlias: <alias>
I generated a self-signed certificate via keytool -genkey as referenced in application-prod.yml and copied it into the app image (using ADD in the Dockerfile). I'm aware this probably isn't best practice, but it is for dev purposes.
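For reference, a keytool invocation along these lines produces such a PKCS12 keystore (a sketch; the alias, password and DN are placeholders):
keytool -genkeypair -alias <alias> -keyalg RSA -keysize 2048 -storetype PKCS12 -keystore keystore.p12 -storepass <password> -validity 365 -dname "CN=localhost"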
./mvnw package -Pprod docker:build and then docker-compose -f src/main/docker/app.yml up runs without error.
When I try to connect via https://localhost:443 I get connection refused.
I should mention that when the ssl entry is removed from application-prod.yml everything works as expected, i.e. the site loads OK over HTTP.
Thanks,
As noted in the comments, this was a user misconfiguration...