Neo4j Docker image (VPS managed with Plesk): cannot assign Let's Encrypt certificate for a secure Bolt connection

I'm trying to run Neo4j Community on a VPS via a Docker image managed with Plesk.
However, I'm having trouble configuring the SSL certificate so that I can connect to it securely from Node.js.
Currently, the error I'm getting in Node is quite straightforward:
Neo4jError: Failed to connect to server.
Please ensure that your database is listening on the correct host and port and that you have
compatible encryption settings both on Neo4j server and driver. Note that the default encryption
setting has changed in Neo4j 4.0. Caused by: Server certificate is not trusted. If you trust the
database you are connecting to, use TRUST_CUSTOM_CA_SIGNED_CERTIFICATES and add the signing
certificate, or the server certificate, to the list of certificates trusted by this driver using
`neo4j.driver(.., { trustedCertificates:['path/to/certificate.crt']})`. This is a security measure
to protect against man-in-the-middle attacks. If you are just trying Neo4j out and are not
concerned about encryption, simply disable it using `encrypted="ENCRYPTION_OFF"` in the driver
options. Socket responded with: DEPTH_ZERO_SELF_SIGNED_CERT
I've mapped the volumes as follows:
/certificates to the Let's Encrypt live folder for the domain db.example.com
Then I'm trying to connect to it via: bolt://db.example.com:32771
When I check via the browser, the certificate being served is self-signed. I have tried adding this certificate to the trusted certificates in Windows, but that didn't change anything.
I also added the path to the trusted certificates when instantiating the driver:
this._driver = neo4j.driver(process.env.Neo4jUri, token, {
  encrypted: true,
  trustedCertificates: ['ssl/neo4j.crt'],
});
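For reference, the fuller configuration the error message itself suggests would look something like this (a sketch only; the trust option and CA path here are placeholders I'm adding, not something from the original setup):

// Sketch: with TRUST_CUSTOM_CA_SIGNED_CERTIFICATES the driver validates the
// server certificate against the listed CA file instead of the system store.
this._driver = neo4j.driver(process.env.Neo4jUri, token, {
  encrypted: 'ENCRYPTION_ON',
  trust: 'TRUST_CUSTOM_CA_SIGNED_CERTIFICATES',
  // placeholder path; point this at the signing CA or server certificate
  trustedCertificates: ['ssl/fullchain.pem'],
});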
I've also tried copying the files within that certificate folder so that the appropriate files have the names mentioned in this article.
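For context, a sketch of what the server side would need for this to work in Neo4j 4.x, expressed as a docker run command (the volume path, file names, port mapping, and image tag are assumptions based on the renaming approach above):

docker run -d --name neo4j \
  -p 7687:7687 \
  -v /etc/letsencrypt/live/db.example.com:/certificates \
  -e NEO4J_dbms_ssl_policy_bolt_enabled=true \
  -e NEO4J_dbms_ssl_policy_bolt_base__directory=/certificates \
  -e NEO4J_dbms_ssl_policy_bolt_private__key=privkey.pem \
  -e NEO4J_dbms_ssl_policy_bolt_public__certificate=fullchain.pem \
  -e NEO4J_dbms_connector_bolt_tls__level=REQUIRED \
  neo4j:4.4-community

(The Neo4j Docker image maps environment variables to neo4j.conf settings: dots become single underscores and underscores in setting names are doubled, so NEO4J_dbms_ssl_policy_bolt_base__directory sets dbms.ssl.policy.bolt.base_directory.)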

Related

What is the proper way of adding trusted certificates to the Confluent Kafka Connect Docker image?

I have a Kafka Connect cluster (cp_kafka_connect_base) on Docker, and I need to include a .pem file in order to connect to a source over TLS. It seems there are already a number of trusted certificates included in Connect, so how would I add a new trusted certificate without invalidating the old ones?
Specific problem
I want to use the MongoDB Source Connector, alongside a number of other connectors. As per the documentation, I have imported my .pem certificate into a .jks store, and added the following envvars to my Kafka Connect containers:
KAFKA_OPTS="
  -Djavax.net.ssl.trustStore=mystore.jks
  -Djavax.net.ssl.trustStorePassword=mypass"
This lets me connect to my data source, but invalidates other TLS connections, unless I add them all to my .jks. Since all other TLS connections work out of the box, I shouldn't need to manually import every single one of them to a .jks, just to make one connector implementation happy.
I have also tried setting:
CONNECT_SSL_TRUSTSTORE_TYPE: "PEM"
CONNECT_SSL_TRUSTSTORE_LOCATION: "myloc"
but the truststore location config isn't known, and TLS doesn't work:
WARN The configuration 'ssl.truststore.location' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:384)
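One workaround I'm aware of (an assumption on my part, not something confirmed in this thread) is to build the .jks by starting from a copy of the JVM's bundled cacerts store, so all the default CAs stay trusted alongside the new certificate:

# sketch: the cacerts path varies by image and Java version, and the
# default store password is conventionally "changeit"
cp $JAVA_HOME/lib/security/cacerts mystore.jks
keytool -importcert -alias mongodb-ca -file my.pem \
  -keystore mystore.jks -storepass changeit -noprompt

KAFKA_OPTS can then point at mystore.jks as before, without invalidating the default TLS connections.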

SSL local/remote cert for a .NET Core API

I'm a newbie when it comes to certificates.
I'm building a Linux Docker image with a .NET Core REST Web API app that will host the backend for a game. I plan to host this backend on Azure using a Container Instance.
I'd like all communication to be via SSL. I've created a self-signed cert for local communication from my Windows machine to the container. Once I registered it in my hosts file, the self-signed cert worked fine locally.
Now I'm ready to host on Azure. I'm prepared to obtain a CA cert, but I'm trying to work out how to maintain both local access and public access without cert errors, and without modifying the container between my local/debug sessions and the production/remote sessions. I'd prefer a single certificate, if possible.
Can anyone give me guidance on how to setup a cert for this situation? Seems like a common need, but I'm not finding resources to walk me through it. Thanks!
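For what it's worth, one common pattern (my assumption, not something stated in this thread) is a single CA-issued certificate whose subject alternative names cover both the public hostname and the local development hostname, loaded by Kestrel from configuration so the container itself never changes between environments. A sketch of the appsettings.json shape (paths and names are placeholders; depending on the .NET Core version, the Kestrel section may need to be bound explicitly in Program.cs):

{
  "Kestrel": {
    "Endpoints": {
      "Https": {
        "Url": "https://0.0.0.0:443",
        "Certificate": {
          "Path": "/https/backend.pfx",
          "Password": "placeholder"
        }
      }
    }
  }
}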

How to generate certs for a secured connection from Celery to Redis

I'm following this tutorial, and adapting the Celery background-task code to my project.
In my case I am operating in a Docker environment, and I have a secured site (i.e. https://localhost) which requires SSL communication.
The documentation in here shows an example of how to provide the cert-related files (keyfile, certfile, ca_certs).
But it is not clear to me how to create these files in the first place.
The tutorial in here shows how to create a custom certificate authority, and how to sign a certificate with it.
I followed the steps and created the 3 files:
keyfile - dev.mergebot.com.key - the private key used to create a signed cert with the "self-trusted CA"
certfile - dev.mergebot.com.crt - the signed certificate (signed by myCA.pem)
ca_certs - myCA.pem - the "self-trusted CA" certificate
Note that I created these 3 files completely independently of Celery, Redis, and Docker.
They were created on my local machine outside Docker. The files don't bear the name of the Redis container, and the Common Name in the cert was set to "foo".
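For reference, the kind of OpenSSL sequence such tutorials use looks roughly like this (a sketch; key sizes, validity days, and subject names are placeholders, with the CN set to the Redis hostname as the answer below recommends):

# create a private CA
openssl genrsa -out myCA.key 2048
openssl req -x509 -new -nodes -key myCA.key -sha256 -days 365 \
  -out myCA.pem -subj "/CN=my-test-ca"
# create a key and CSR for the server, CN matching the Redis hostname
openssl genrsa -out redis.key 2048
openssl req -new -key redis.key -out redis.csr -subj "/CN=redis"
# sign the server CSR with the CA
openssl x509 -req -in redis.csr -CA myCA.pem -CAkey myCA.key \
  -CAcreateserial -out redis.crt -days 365 -sha256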
When I use these files in my webapp, there is no connection from Celery to Redis.
Without SSL I do get a connection, so the overall environment aside from SSL is OK - see here
Are there any specific requirements for creating the cert-related files? (e.g. should the Common Name in the cert be the container name "redis", etc.)
Is there a way to test the validity of the certs without the app, e.g. by issuing a command from the container shell?
Thanks
I was able to generate the cert-related files (keyfile, certfile, ca_certs) using the tutorial in here.
I first tested that I can connect from my localhost to the "redis with ssl" Docker container,
and I described the details here.
Then I tested that I can connect from the Celery Docker container to the "redis with ssl" Docker container,
and I described the details here.
Yes, the certificate common name should match the host name, and the issuer of the certificate should be trusted by the client.
In your case, since you are using a custom CA and generating the certs, the public cert of the CA should be in the trusted root of the client.
Additionally, the certificate should be issued to the hostname, which in your case will be localhost. Please do note that if you access the site from a remote machine using either the FQDN or the IP, the browser will flag the certificate as invalid.
Also, to verify the certificates, you can use OpenSSL's verify option.
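For example (file names taken from the question; the redis:6379 address assumes you run this from a container on the same Docker network):

# check that the cert chains to the custom CA
openssl verify -CAfile myCA.pem dev.mergebot.com.crt
# check the live TLS handshake against the Redis container
openssl s_client -connect redis:6379 -CAfile myCA.pem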

How to specify SSL certificate for WSO2 Identity Server in Docker?

I'm using the WSO2 Identity Server Docker image and want to specify my own SSL certificate. The documentation I have found so far revolves around Identity Server being installed directly on the target machine rather than in a container.
One way to approach this is to set up a proxy in front of WSO2. I did this using NGINX:
https://medium.com/@oliver.zampieri/self-signed-ssl-reverse-proxy-with-docker-dbfc78c05b41
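A minimal sketch of such an NGINX server block (the upstream container name, server name, and certificate paths are assumptions):

server {
    listen 443 ssl;
    server_name idp.example.com;
    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;
    location / {
        # WSO2 Identity Server listens on 9443 (HTTPS) by default
        proxy_pass https://identity-server:9443;
        proxy_set_header Host $host;
    }
}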

Trusting a certificate signed by a local CA from an ASP.NET Core API inside Docker

I have the following scenario:
I want to run three services (intranet only) in Windows Docker containers on a Windows host:
an IdentityServer4
an API (which uses the IdSvr for authorization)
a web client (which uses the API as its data layer and the IdSvr for authorization)
All three services run on ASP.NET Core 2.1 (with microsoft/dotnet:2.1-aspnetcore-runtime as the base image) and use certificates signed by a local CA.
The problem I'm facing now is that I cannot get the API or the web client to trust these certificates.
E.g. if I call the API, the authentication middleware tries to call the IdSvr but gets an error on GET '~/.well-known/openid-configuration' because of an untrusted SSL certificate.
Is there any way to get the services to trust every certificate issued by the local CA? I've already tried this way, but either I'm doing it wrong or it just doesn't work.
IMHO a Docker container must have its own cert store, otherwise no trusted HTTPS connection would be possible. So my idea is to get the root certificate from the Docker host's cert store (which trusts the CA) into the container, but I don't know how to achieve this.
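A sketch of that idea for a Windows-based image (the certificate file name and paths are assumptions; Import-Certificate is the stock PowerShell PKI cmdlet):

FROM microsoft/dotnet:2.1-aspnetcore-runtime
# copy the local CA's public certificate into the image and add it to the
# container's own trusted root store
COPY ./certs/local-ca.crt C:/certs/local-ca.crt
RUN powershell -Command "Import-Certificate -FilePath C:\certs\local-ca.crt -CertStoreLocation Cert:\LocalMachine\Root"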
