Graylog TLS client authentication Unknown beats protocol version - docker

I want to make the Beats input work with TLS client authentication (it works fine without TLS).
So I made a custom Graylog image with self-signed certificates:
FROM graylog/graylog:4.3
USER root
ADD beat.crt /usr/local/share/ca-certificates/beat.crt
RUN chmod 644 /usr/local/share/ca-certificates/beat.crt && update-ca-certificates
Next I created a Beats input with TLS client authentication required:
bind_address: 0.0.0.0
no_beats_prefix: true
number_worker_threads: 80
override_source: <empty>
port: 5044
recv_buffer_size: 1048576
tcp_keepalive: false
tls_cert_file: <empty>
tls_client_auth: required
tls_client_auth_cert_file: /usr/local/share/ca-certificates/beat.crt
tls_enable: false
tls_key_file: <empty>
tls_key_password: ********
And set up Filebeat on another machine. The folder "tls" is added as a volume when running Filebeat in Docker: --volume="/home/filebeat/:/tls"
output.logstash:
hosts: ["graylog_ip_here:5044"]
ssl.certificate_authorities: ["/tls/beat.pem"]
ssl.certificate: "/tls/beat.crt"
ssl.key: "/tls/beat.key"
beat.crt looks like this inside (beat.pem is the same file):
-----BEGIN CERTIFICATE-----
MIIFVzCCAz+gAwIBAgIJALJI6zP
After all this had been set up, I'm getting this error on the Graylog server:
ERROR: org.graylog2.plugin.inputs.transports.AbstractTcpTransport - Error in Input cause io.netty.handler.codec.DecoderException: java.lang.IllegalStateException: Unknown beats protocol version: 3

As the documentation linked here says, you should apply both steps to make it work. The first step sets up the TLS exchange; the second authenticates specific clients.
TLS Beats Input
To enable TLS on the input, a certificate (and private key file) is needed. It can be the same or
a different certificate as the one of your REST/web interface, as long as it matches all hostnames
of your input. Just reference the files TLS cert file and TLS private key file in the Beats Input
configuration and restart the input.
The ingesting client will verify the presented certificate against its known CA certificates;
if that is successful, communication will be established using TLS.
Add client authentication to Beats input
Create a directory (/etc/graylog/server/trusted_clients) that will hold all client certificates
you allow to connect to the Beats input. This directory must be available on all Graylog servers
that have the input enabled. Enter that path in the Beats input configuration under
TLS Client Auth Trusted Certs and select required for the option TLS client authentication.
After this setting is saved only clients that provide a certificate that is trusted by the CA
and is placed inside the configured directory (/etc/graylog/server/trusted_clients)
can deliver messages to Graylog.
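As a sketch of the certificate setup the two steps above need, a CA, a server certificate for the input, and a client certificate for Filebeat can be generated with openssl roughly like this (all filenames, CNs, and the graylog.example.org hostname are placeholders, not from the original post):

```shell
# 1. Self-signed CA
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=Example CA" -keyout ca.key -out ca.crt

# 2. Server key + CSR, signed by the CA (the CN must match the hostname
#    Filebeat connects to)
openssl req -newkey rsa:2048 -nodes -subj "/CN=graylog.example.org" \
  -keyout server.key -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out server.crt

# 3. Client key + CSR for Filebeat, signed by the same CA
openssl req -newkey rsa:2048 -nodes -subj "/CN=filebeat-client" \
  -keyout beat.key -out beat.csr
openssl x509 -req -in beat.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out beat.crt
```

Following the documentation quoted above, server.crt/server.key would go into the input's TLS cert file / TLS private key file fields (with tls_enable on), beat.crt into the trusted_clients directory, and Filebeat would use beat.crt/beat.key with ca.crt as its certificate authority.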

Related

What is the proper way of adding trust certificates to confluent kafka connect docker image

I have a kafka connect cluster (cp_kafka_connect_base) on docker, and I need to include a .pem file in order to connect to a source over TLS. It seems there are already a number of trusted certificates included in connect, so how would I add a new trusted certificate without invalidating the old ones?
Specific problem
I want to use MongoDB Source Connector, alongside a number of other connectors. As per documentation, I have imported my .pem certificate in a .jks store, and added the following envvars to my kafka connect containers:
KAFKA_OPTS="
-Djavax.net.ssl.trustStore=mystore.jks
-Djavax.net.ssl.trustStorePassword=mypass"
This lets me connect to my data source, but invalidates other TLS connections, unless I add them all to my .jks. Since all other TLS connections work out of the box, I shouldn't need to manually import every single one of them to a .jks, just to make one connector implementation happy.
I have also tried setting:
CONNECT_SSL_TRUSTSTORE_TYPE: "PEM"
CONNECT_SSL_TRUSTSTORE_LOCATION: "myloc"
but the truststore location config isn't known, and TLS doesn't work:
WARN The configuration 'ssl.truststore.location' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:384)

I'm able to connect to MQTT broker running in a container with any client certificate

I have created an MQTT broker (environment: Docker container, base image: Ubuntu 18.04) with self-signed certificates, with the common name set to localhost.
But I am able to connect to the MQTT broker with any client certificate. How do I stop this?
Here is the mosquitto configuration :
# A full description of the configuration file is at
# /usr/share/doc/mosquitto/examples/mosquitto.conf.example
pid_file /var/run/mosquitto.pid
persistence true
persistence_location /var/lib/mosquitto/
log_dest file /var/log/mosquitto/mosquitto.log
include_dir /etc/mosquitto/conf.d
cafile /etc/mosquitto/ca_certificates/ca.crt
keyfile /etc/mosquitto/certs/server.key
certfile /etc/mosquitto/certs/server.crt
require_certificate false
password_file /etc/mosquitto/passwd
If you want to force clients to supply a certificate then you need to have
require_certificate true
Client certificates will need to be signed by a CA that is in cafile or capath to be accepted.
Since the certificate will be used to assert the user's identity, the password_file will not be used. If you want to use ACLs (with the acl_file option) to control what topics a given user can use, you will need to add use_identity_as_username true or use_subject_as_username true to set which item in the certificate is used as the username.
From the man page:
When using certificate based encryption there are three options that
affect authentication. The first is require_certificate, which may be
set to true or false. If false, the SSL/TLS component of the client
will verify the server but there is no requirement for the client to
provide anything for the server: authentication is limited to the MQTT
built in username/password. If require_certificate is true, the client
must provide a valid certificate in order to connect successfully. In
this case, the second and third options, use_identity_as_username and
use_subject_as_username, become relevant. If set to true,
use_identity_as_username causes the Common Name (CN) from the client
certificate to be used instead of the MQTT username for access control
purposes. The password is not used because it is assumed that only
authenticated clients have valid certificates. This means that any CA
certificates you include in cafile or capath will be able to issue
client certificates that are valid for connecting to your broker. If
use_identity_as_username is false, the client must authenticate as
normal (if required by password_file) through the MQTT options. The
same principle applies for the use_subject_as_username option, but the
entire certificate subject is used as the username instead of just the
CN.
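Putting the answer together, the TLS-related part of the broker config would look something like this (paths are the ones from the question; a sketch, not a full configuration):

```conf
cafile /etc/mosquitto/ca_certificates/ca.crt
keyfile /etc/mosquitto/certs/server.key
certfile /etc/mosquitto/certs/server.crt
require_certificate true
use_identity_as_username true
```

With this in place, only clients presenting a certificate signed by ca.crt can connect, and the certificate's CN is used as the username for ACL purposes.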

How to generate certs for secured connection from Celery to Redis

I'm following this tutorial, and adjusting the Celery-background related code to my project.
In my case I am operating in a Docker environment, and I have a secured site (i.e. https://localhost) which requires secured ssl communication.
The documentation here shows an example of how to provide cert-related files (keyfile, certfile, ca_certs),
but it is not clear to me how to create these files in the first place.
The tutorial in here shows how to create a custom certificate authority, and how to sign a certificate with it.
I followed the steps and created the 3 files:
keyfile - dev.mergebot.com.key - private key for the signed certificate
certfile - dev.mergebot.com.crt - the signed certificate (signed by myCA.pem)
ca_certs - myCA.pem - the "self-trusted CA" certificate (filename in the tutorial: myCA.pem)
Note that I created these 3 files completely unrelated to Celery or Redis or Docker.
They were created on my local machine, outside Docker. The files don't include the name of the Redis container, and the Common Name in the cert was set to "foo".
When I use these files in my webapp, there is no connection from Celery to Redis.
Without ssl I do get a connection, so the overall environment aside from ssl is OK - see here
Is there any specific requirements to create the cert related files? (e.g. should the Common Name in the cert have the container name "redis", etc... )
Is there a way to test the validity of the certs without the app, e.g. by issuing a command from the container shell?
Thanks
I was able to generate the cert related files (keyfile, certfile, ca_certs) using the tutorial in here
I first tested that I can connect from my localhost to the "redis with ssl" docker container.
and I described the details here
Then I tested that I can connect from Celery docker container to the "redis with ssl" docker container
and I described the details here
Yes, the certificate common name should match the host name, and the issuer of the certificate should be trusted by the client.
In your case, since you are using a custom CA and generating the certs, the public cert of the CA should be in the trusted root store of the client.
Additionally, the certificate should be issued to the hostname; in your case that is localhost. Please note that if you access the site from a remote machine using either the FQDN or the IP, the browser will flag the certificate as invalid.
Also, to verify the certificates, you can use the OpenSSL verify option.
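The OpenSSL verify step can be exercised without the app at all; a minimal sketch (filenames are placeholders, and a throwaway CA is generated on the spot to make it self-contained):

```shell
# Create a throwaway CA and a cert signed by it, CN set to the Redis hostname
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=My Test CA" -keyout myCA.key -out myCA.pem
openssl req -newkey rsa:2048 -nodes -subj "/CN=redis" \
  -keyout redis.key -out redis.csr
openssl x509 -req -in redis.csr -CA myCA.pem -CAkey myCA.key \
  -CAcreateserial -days 1 -out redis.crt

# Check the chain: prints "redis.crt: OK" on success
openssl verify -CAfile myCA.pem redis.crt
```

From inside the container shell, `openssl s_client -connect redis:6379 -CAfile myCA.pem` would additionally test the live TLS handshake against the Redis container.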

mosquitto_sub with TLS enabled

I am new to MQTT and I have a frustrating problem.
I have been using MQTT.fx to subscribe to a topic; I have set the:
Broker Address
Port
Client ID
Enable SSL/TLS
Topic
This works well, however I would like to use mosquitto_sub.
I am attempting to subscribe to the same topic in the following way:
mosquitto_sub -h host -p 8883 -t topic -i clientid
This is not working for me. I am using it on a Ubuntu VM.
My powers of observation tell me that I should enable TLS; however, I'm not quite sure how to do that. I have fiddled with certificates and enabling TLS in many ways but have not got the right combination. I know it is required because if I uncheck the SSL/TLS box in MQTT.fx I am unable to connect.
I would really like to replicate what I have in MQTT.fx with mosquitto.
In the mosquitto_sub command, use the --capath argument to point to /etc/ssl/certs. It needs a pointer to the trusted certificates.
To enable SSL with mosquitto_sub you need to specify a CA certificate.
This can be done in 1 of 2 ways.
--cafile /path/to/a/file where the file contains the required trusted CA certificate (either on its own or as part of a concatenated set)
--capath /path/to/directory where the directory contains a collection of files ending in .crt which contain the CA certificates to be trusted. The CA certs should also be indexed with the c_rehash utility.
Both these options are mentioned in the mosquitto_sub man page as ways to enable SSL,
e.g.
mosquitto_sub -h host -p 8883 --cafile ca.crt -t topic -i clientid
I am aware of a third way (a shortcut): using the flag --tls-use-os-certs.
As a side note, mosquitto_sub/pub also sends SNI in the TLS connection request, which is great news if you are using SNI-based routing on the broker side.
I don't know if the MQTT standard actually prescribes this, but at least the mosquitto client's implementation does support it.
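The --capath variant mentioned above needs the directory to be hash-indexed first; a sketch (the directory name is a placeholder, and a throwaway CA is generated just so the example is self-contained):

```shell
mkdir -p mqtt-cacerts

# Stand-in for your broker's real CA certificate (must end in .crt)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=Broker CA" -keyout ca.key -out mqtt-cacerts/ca.crt

# Index the directory by subject hash so OpenSSL can find the certs
c_rehash mqtt-cacerts 2>/dev/null || openssl rehash mqtt-cacerts

# Then subscribe over TLS, e.g.:
# mosquitto_sub -h host -p 8883 --capath mqtt-cacerts -t topic -i clientid
```

After rehashing, the directory contains symlinks named by subject hash (e.g. ab12cd34.0), which is what the OpenSSL lookup expects.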

How to renew letsencrypt cert in AWS Load Balancer?

I used Let's Encrypt to generate an SSL cert with the standalone option, and the cert was generated successfully.
I went to the AWS Load Balancer to configure a listener on port 443 and used the SSL cert that I generated before, importing it at this kind of popup:
Then everything worked. Now I want to renew this SSL cert. I followed this instruction to renew my cert.
I tried:
./certbot-auto renew --standalone
=> Checking for new version...
Requesting root privileges to run certbot...
/root/.local/share/letsencrypt/bin/letsencrypt renew --standalone
No renewals were attempted.
Or I tried to obtain the cert again: ./certbot-auto certonly --standalone
Failed authorization procedure. www.atoha.com (tls-sni-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Incorrect validation certificate for TLS-SNI-01 challenge. Requested ef39667c9d782884f8157f30f3e85e81.fb4436208f9bc7c8bdeb19356bb090f2.acme.invalid from 54.179.140.152:443. Received certificate containing 'www.my_domain.com'
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: www.my_domain.com
Type: unauthorized
Detail: Incorrect validation certificate for TLS-SNI-01 challenge.
Requested ef39667c9d782884f8157f30f3e85e81.fb4436208f9bc7c8bdeb1935
6bb090f2.acme.invalid from 54.179.140.152:443. Received certificate
containing 'www.my_domain.com'
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A record(s) for that domain
contain(s) the right IP address.
This means my cert was generated correctly before, for now I want to know how to renew it, since it is nearly expired!
Thanks!
You can try this:
bash /opt/letsencrypt/letsencrypt-auto -t --renew-by-default --server https://acme-v01.api.letsencrypt.org/directory certonly --agree-tos --email 'your#email.com' --webroot --webroot-path 'yourwebdirectory_publichtml' -d yourdomain.com -d www.yourdomain.com
where /opt/letsencrypt/ is your letsencrypt directory location.
Then paste the contents of the .pem files into your AWS ELB (I usually use cat on Linux to display them):
private key ---> privkey.pem
public key certificate ---> fullchain.pem
certificate chain ---> no need to fill this
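If you prefer the CLI to pasting PEM contents into the console, a renewed cert for a classic ELB can be uploaded and attached roughly like this (the certificate name, load balancer name, account ID, and paths are placeholders; certbot writes cert.pem, chain.pem, and privkey.pem under /etc/letsencrypt/live/yourdomain.com/):

```shell
# Upload the renewed cert to IAM (classic ELBs reference IAM server certificates)
aws iam upload-server-certificate \
  --server-certificate-name my-renewed-cert \
  --certificate-body file://cert.pem \
  --certificate-chain file://chain.pem \
  --private-key file://privkey.pem

# Point the HTTPS listener at the new certificate ARN
aws elb set-load-balancer-listener-ssl-certificate \
  --load-balancer-name my-elb \
  --load-balancer-port 443 \
  --ssl-certificate-id arn:aws:iam::123456789012:server-certificate/my-renewed-cert
```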
I use https://github.com/alex/letsencrypt-aws to automatically handle automatic renewal on AWS. The only thing it doesn't currently do is remove old certs.
