How to properly set up HTTPS for OnlyOffice - docker

Following the instructions in OnlyOffice's help center leads to the creation of a security certificate that browsers declare invalid, because it is self-signed.
The intention is to use the OnlyOffice Document Server on Docker with Nextcloud, which already runs properly on another server.
Currently, the certificates have been created in the directory suggested by the instructions:
/app/onlyoffice/DocumentServer/data/certs# ls
dhparam.pem onlyoffice.crt onlyoffice.csr onlyoffice.key
I have followed all the given steps, and it does not work.
Is there a way to use Let's Encrypt instead of self-signed certificates?
I am not an IT management person; I am a simple developer trying to use OSS rather than Google Docs etc. Please take this into consideration when providing guidance, as what you may take for granted, I may not.

The problem you are having is that a self-signed certificate is not trusted by anyone (or anything).
You can create your own certificate using Let's Encrypt. You will either need to create a special file on your server (HTTP challenge) or create a TXT record on your domain's DNS server (DNS challenge).
Certbot Download
The following is for Linux. If you are running on Windows, try using ManuaLE (more info below).
Go to Let's Encrypt and download Certbot. Then from the command line:
certbot certonly --manual --preferred-challenges dns -d mydomain.com
This command will prompt you to create a TXT record on your DNS server. After creating the record, wait a few minutes before pressing ENTER to continue.
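The record certbot asks for looks roughly like this (illustrative only; certbot prints the exact name and value to use):
_acme-challenge.mydomain.com.  300  IN  TXT  "value-printed-by-certbot"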
After your SSL certificate is created, copy and rename the files to the location listed in your question.
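For example, assuming certbot placed the files under /etc/letsencrypt/live/mydomain.com/ and that the container expects the file names shown in the question (a sketch; adjust the paths to your setup):
# copy the Let's Encrypt files to the names the Document Server looks for
cp /etc/letsencrypt/live/mydomain.com/fullchain.pem /app/onlyoffice/DocumentServer/data/certs/onlyoffice.crt
cp /etc/letsencrypt/live/mydomain.com/privkey.pem /app/onlyoffice/DocumentServer/data/certs/onlyoffice.key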
Instructions for ManuaLE for Windows.
ManuaLE Download
manuale authorize mydomain.com
manuale issue mydomain.com
As above, after your SSL certificate is created, copy and rename the files to the location listed in your question.

Then restart the container; the Document Server will switch to its HTTPS configuration.
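For example (the container name is an assumption; use the name shown by docker ps):
docker restart onlyoffice-documentserver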
Open the address of the Document Server in your browser. If it is reachable over HTTPS, it can be connected to your Nextcloud instance.

Related

Add Letsencrypt Certificate to Keycloak Trusted Certificates

We have the following setup:
A Keycloak server on a VM, installed as a Docker container.
Server certificate via Let's Encrypt.
Two realms a and b.
Realm b is integrated into Realm a as an identity provider.
To make this work, we had to import the certificate of the Keycloak server into the Java truststore. Now the login works and we can choose in realm a whether we want to log in with realm b. Unfortunately, the process of importing the certificate involves a lot of manual effort (copying the certificate into the container, splitting the chain into several files with only one certificate each, calling a function), and the certificates are only valid for 90 days. Of course we can automate this, but the question is: is there an "official way" of doing this? Like mounting the Let's Encrypt certificate folder into the container and "done"? We are using the official jboss/keycloak container image.
The Docker container should support this by setting the X509_CA_BUNDLE variable accordingly. See the docs here.
This creates the truststore for you and configures it in WildFly. Details can be found in this and that script.
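A minimal sketch of what this could look like, assuming the Let's Encrypt directory is mounted into the container (the mount path, certificate file, and container name are illustrative):
# mount the certificates read-only and point X509_CA_BUNDLE at the PEM file(s) to trust
docker run -d --name keycloak \
  -v /etc/letsencrypt/live/example.com:/etc/x509/ca:ro \
  -e X509_CA_BUNDLE=/etc/x509/ca/fullchain.pem \
  jboss/keycloak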

What is the proper way of adding trust certificates to confluent kafka connect docker image

I have a kafka connect cluster (cp_kafka_connect_base) on docker, and I need to include a .pem file in order to connect to a source over TLS. It seems there are already a number of trusted certificates included in connect, so how would I add a new trusted certificate without invalidating the old ones?
Specific problem
I want to use the MongoDB Source Connector alongside a number of other connectors. As per the documentation, I have imported my .pem certificate into a .jks store and added the following environment variables to my Kafka Connect containers:
KAFKA_OPTS="
-Djavax.net.ssl.trustStore=mystore.jks
-Djavax.net.ssl.trustStorePassword=mypass"
This lets me connect to my data source, but it invalidates other TLS connections unless I add them all to my .jks. Since all other TLS connections work out of the box, I shouldn't need to manually import every single one of them into a .jks just to make one connector implementation happy (one possible approach is sketched after the log output below).
I have also tried setting:
CONNECT_SSL_TRUSTSTORE_TYPE: "PEM"
CONNECT_SSL_TRUSTSTORE_LOCATION: "myloc"
but the truststore location config isn't known, and TLS doesn't work:
WARN The configuration 'ssl.truststore.location' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:384)
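One way to keep the defaults trusted while adding the extra certificate, a sketch rather than an answer from the original thread, is to build mystore.jks on top of the JVM's bundled cacerts and only then import the extra PEM (the cacerts path and the "changeit" password are JDK defaults and may differ in the image):
# start from a copy of the default truststore so the bundled CAs stay trusted
cp "$JAVA_HOME/lib/security/cacerts" mystore.jks
# import the extra certificate; the alias is arbitrary
keytool -importcert -alias mongodb-source -file my-cert.pem \
        -keystore mystore.jks -storepass changeit -noprompt
Then point javax.net.ssl.trustStorePassword at whatever password the copied store uses (changeit by default).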

Need Help setting up Certificate Chain for a LAMP stack with nginx proxy docker

I have a LAMP Stack docker set up with:
Nginx proxy Docker https://hub.docker.com/r/jwilder/nginx-proxy
Apache Stack
Everything is working fine except that when we check our SSL on ssllabs.com, we get a report that the certificate chain is missing.
If you are familiar with this nginx proxy Docker image and LAMP stack, I need your advice to get this certificate chain set up.
Just wondering if you can help me with this.
When you get a certificate from a certificate authority (for example Let's Encrypt), they will give you some files. Among those files you have a key, a cert, and a bundle file. All you need to do is append the contents of your bundle file to your cert file, and you will end up with a file known as the fullchain file. Then replace the path of your cert file in your configuration with the path of this file and restart. (Browsers will often complete the chain automatically even if you don't provide it, but tools like curl want your server to explicitly present the full chain.)
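A minimal sketch, assuming your CA gave you separate cert and bundle files (the file names are illustrative):
# build the fullchain file and point nginx's ssl_certificate directive at it
cat example.com.crt ca-bundle.crt > example.com.fullchain.crt
With Let's Encrypt/certbot, a ready-made fullchain.pem is already generated next to the other files.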

How to generate certs for secured connection from Celery to Redis

I'm following this tutorial and adapting the Celery background-task code to my project.
In my case I am operating in a Docker environment, and I have a secured site (i.e. https://localhost) which requires secure SSL communication.
The documentation in here shows an example on how to provide cert related files (keyfile, certfile, ca_certs).
But it is not clear to me how to create these files in the first place.
The tutorial in here shows how to create a custom certificate authority, and how to sign a certificate with it.
I followed the steps and created the 3 files:
keyfile - dev.mergebot.com.key - the private key used to create the signed cert
certfile - dev.mergebot.com.crt - the signed certificate (signed by myCA.pem)
ca_certs - myCA.pem - the "self-trusted CA" certificate (filename in the tutorial: myCA.pem)
Note that I created these 3 files completely unrelated to Celery or Redis or Docker.
They were created on my local machine outside Docker. The files don't have the name of the Redis container, and the Common Name in the cert was set to "foo".
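For reference, the custom-CA steps in such tutorials look roughly like this. This is a sketch, not the exact tutorial commands, and it sets the CN to the Redis hostname instead of "foo" (which relates to the first question below):
# 1. create the CA key and self-signed CA certificate
openssl genrsa -out myCA.key 2048
openssl req -x509 -new -key myCA.key -days 365 -out myCA.pem -subj "/CN=my-test-ca"
# 2. create the server key and a CSR whose CN matches the Redis hostname (e.g. "redis")
openssl genrsa -out dev.mergebot.com.key 2048
openssl req -new -key dev.mergebot.com.key -out server.csr -subj "/CN=redis"
# 3. sign the CSR with the CA
openssl x509 -req -in server.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial \
        -days 365 -out dev.mergebot.com.crt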
When I use these files in my webapp, there is no connection from Celery to Redis.
Without ssl I do get a connection, so the overall environment aside from ssl is OK - see here
Are there any specific requirements for creating the cert-related files? (e.g. should the Common Name in the cert have the container name "redis", etc.)
Is there a way to test the validity of the certs without the app, e.g. by issuing a command from the container shell?
Thanks
I was able to generate the cert related files (keyfile, certfile, ca_certs) using the tutorial in here
I first tested that I could connect from my localhost to the "redis with ssl" Docker container, and I described the details here.
Then I tested that I could connect from the Celery Docker container to the "redis with ssl" Docker container, and I described the details here.
Yes, the certificate common name should match the host name, and the issuer of the certificate should be trusted by the client.
In your case, since you are using a custom CA and generating the certs, the public cert of the CA should be in the trusted root store of the client.
Additionally, the certificate should be issued to the hostname; in your case that will be localhost. Please note that if you access the site from a remote machine using either the FQDN or the IP, the browser will flag the certificate as invalid.
Also, to verify the certificates, you can use the OpenSSL verify option.
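For example (file names follow the question; the redis:6379 host/port is an assumption about the container name):
# check that the signed cert chains to the custom CA
openssl verify -CAfile myCA.pem dev.mergebot.com.crt
# test the TLS handshake against the Redis container from another container's shell
openssl s_client -connect redis:6379 -CAfile myCA.pem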

How to provide separate SSL certificate for specified path

I have a Rails app running behind Nginx using Passenger. SSL is configured on the Nginx side in the server block and works fine. Now I need to specify a separate certificate for a certain path, say for https://example.com/blablabla.
I need this because of constraints of a system I am working with.
A certificate identifies a server, not a path inside the server. The path is only known after the SSL handshake is done, that is, after the certificate has already been provided. Thus it is not possible to have one certificate for a specific path and another certificate for another path on the same hostname.
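This is also reflected in the Nginx configuration: the certificate directives live in the server block, not in a location block (a sketch; paths are illustrative):
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/example.com.fullchain.pem;  # applies to every path
    ssl_certificate_key /etc/ssl/example.com.key;

    location /blablabla {
        # no ssl_certificate directive is allowed here
    }
}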
