I want to use an OpenLDAP Docker container with Fabric CA, and I've been searching the internet for a week now. Has anyone in the community tried or implemented this?
I can't say I have done it myself, but you can configure the Fabric CA to use LDAP.
Inside your CA Server Configuration file there is a section related to LDAP. More specifically, you would start by enabling LDAP and pointing to the URL where it is running:
ldap:
enabled: true
url: ldap://<adminDN>:<adminPassword>@<host>:<port>/<base>
If you have enabled TLS using self-signed certificates on the LDAP server then you would need to also configure TLS to trust the signing certificate.
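A minimal sketch of that part of the configuration, assuming the signing certificate has been copied to /etc/hyperledger/fabric-ca/ldap-ca-cert.pem (the path and the mutual-TLS client section are placeholders; check your fabric-ca-server-config.yaml for the exact layout):

ldap:
  enabled: true
  url: ldaps://<adminDN>:<adminPassword>@<host>:<port>/<base>
  tls:
    # CA certificate(s) that signed the LDAP server's TLS certificate
    certfiles:
      - /etc/hyperledger/fabric-ca/ldap-ca-cert.pem
    # Only needed if the LDAP server requires client certificates (mutual TLS)
    client:
      certfile:
      keyfile: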
The Fabric CA documentation has a section on configuring LDAP; see that for more elaborate configurations. It includes the minimum configuration you would need to get started with the OpenLDAP Docker container osixia/openldap:
ldap:
enabled: true
url: ldap://cn=admin,dc=example,dc=org:admin@localhost:10389/dc=example,dc=org
userfilter: (uid=%s)
Finally, this Medium post discusses the steps needed to configure Fabric CA to use LDAP. I believe the author is using OpenLDAP. Good luck!
I am working on Strimzi Kafka. I want to deploy Kafka with self-signed certs issued by cert-manager instead of the self-signed certs provided by the Strimzi operator/Kafka.
I have gone through the Strimzi documentation, but I didn't find a solution for integrating cert-manager with Strimzi Kafka/the operator.
When we deploy Kafka, we can see many secrets (with certs) being created in the namespace. If I want all those secrets/certs to be issued by cert-manager and work with Kafka, how can I do it?
Thank you !!
You can use Cert Manager to provide a listener certificate as described in this blog post. But there is currently no easy way to use it for the internal CAs. You can follow this proposal, which might make it possible in the future.
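A rough sketch of what the listener approach can look like (all names, namespaces and hostnames below are placeholders, and field names may vary between Strimzi/cert-manager versions): cert-manager issues a certificate into a Kubernetes secret, and the Kafka listener references that secret via brokerCertChainAndKey.

# Hypothetical cert-manager Certificate that produces the secret "kafka-listener-cert"
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: kafka-listener-cert
  namespace: kafka
spec:
  secretName: kafka-listener-cert
  dnsNames:
    - kafka.example.com
  issuerRef:
    name: my-issuer
    kind: ClusterIssuer
---
# Relevant part of the Kafka custom resource (rest of the spec unchanged)
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: kafka
spec:
  kafka:
    listeners:
      - name: external
        port: 9094
        type: loadbalancer
        tls: true
        configuration:
          brokerCertChainAndKey:
            secretName: kafka-listener-cert
            certificate: tls.crt
            key: tls.key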
We have the following setup:
A Keycloak Server on a VM installed as a docker container.
Server certificate via Let's Encrypt.
Two realms a and b.
Realm b is integrated into Realm a as an identity provider.
To make this work, we had to import the certificate of the Keycloak server into the Java truststore. Now the login works and we can choose in realm a whether we want to log in with realm b. Unfortunately, importing the certificate involves a lot of manual effort (copy the certificate into the container, split the chain into several files with one certificate each, call a function) and the certificates are only valid for 90 days. Of course we can automate this, but the question is: is there an "official" way of doing this? Like mounting the Let's Encrypt certificate folder into the container and "done"? We are using the official jboss/keycloak container image.
The docker container should support this by setting the X509_CA_BUNDLE variable accordingly. See the docs here.
This creates the truststore for you and configures it in WildFly. Details can be found in this and that script.
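For example (the hostname and paths here are assumptions based on the standard certbot layout), mounting the Let's Encrypt files and pointing X509_CA_BUNDLE at the chain might look like this:

docker run -d --name keycloak \
  -p 8443:8443 \
  -e KEYCLOAK_USER=admin \
  -e KEYCLOAK_PASSWORD=admin \
  -v /etc/letsencrypt:/etc/letsencrypt:ro \
  -e X509_CA_BUNDLE=/etc/letsencrypt/live/keycloak.example.com/fullchain.pem \
  jboss/keycloak

Since the truststore is built by the entrypoint at startup, a container restart after a certificate renewal should be enough to pick up the new chain.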
I have installed Alfresco 6.2 using the Docker-based installation and it's working fine with HTTP.
Now I have to run the same setup over HTTPS, and I have to apply a self-signed certificate for this.
Can someone please provide the steps to generate this self-signed certificate and explain how to apply it inside the Docker image?
Any help will be appreciated.
I already did the same thing for Alfresco 5.2 without Docker, but I am quite new to Docker and don't understand how to do this here.
Instead of changing the Tomcat certificate, I would recommend setting up SSL on nginx or another reverse proxy. The Tomcat certificate is also used to authenticate Solr, and configuration errors there can easily cause search to stop working.
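To cover the self-signed part of the question as well, here is a rough sketch (the hostname, paths and the container names alfresco/share are assumptions matching a typical ACS docker-compose setup). First generate a key and certificate with OpenSSL:

# Generate a self-signed key and certificate, valid for one year
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout alfresco.key -out alfresco.crt \
  -subj "/CN=alfresco.mycompany.com"

An nginx server block terminating SSL in front of the repository and Share could then look roughly like this:

server {
    listen 443 ssl;
    server_name alfresco.mycompany.com;

    ssl_certificate     /etc/nginx/ssl/alfresco.crt;
    ssl_certificate_key /etc/nginx/ssl/alfresco.key;

    # Pass requests through to the Alfresco and Share containers over plain HTTP
    location /alfresco/ {
        proxy_pass http://alfresco:8080;
    }
    location /share/ {
        proxy_pass http://share:8080;
    }
}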
When using a reverse proxy, don't forget to set your external connection in alfresco-global.properties to avoid problems with the CSRF Token Filter, e.g.:
alfresco.context=alfresco
alfresco.host=alfresco.mycompany.com
alfresco.port=443
alfresco.protocol=https
share.context=share
share.host=${alfresco.host}
share.port=${alfresco.port}
share.protocol=${alfresco.protocol}
I'm following this tutorial, and adjusting the Celery-background related code to my project.
In my case I am operating in a Docker environment, and I have a secured site (i.e. https://localhost) which requires SSL communication.
The documentation in here shows an example on how to provide cert related files (keyfile, certfile, ca_certs).
But it is not clear to me how to create these files in the first place.
The tutorial in here shows how to create a custom certificate authority, and how to sign a certificate with it.
I followed the steps and created the 3 files:
keyfile - dev.mergebot.com.key - the private key used to create a signed cert with the "self-trusted CA"
certfile - dev.mergebot.com.crt - the signed certificate (signed by myCA.pem)
ca_certs - myCA.pem - the "self-trusted CA" certificate (filename in the tutorial: myCA.pem)
Note that I created these 3 files completely unrelated to Celery or Redis or Docker.
They were created on my local machine outside Docker. The files don't reference the name of the Redis container, and the Common Name in the cert was set to "foo".
When I use these files in my webapp, there is no connection from Celery to Redis.
Without ssl I do get a connection, so the overall environment aside from ssl is OK - see here
Is there any specific requirements to create the cert related files? (e.g. should the Common Name in the cert have the container name "redis", etc... )
Is there a way to test the validity of the certs without the app, e.g. by issuing a command from the container shell?
Thanks
I was able to generate the cert related files (keyfile, certfile, ca_certs) using the tutorial in here
I first tested that I could connect from my localhost to the "redis with ssl" Docker container, and I described the details here.
Then I tested that I could connect from the Celery Docker container to the "redis with ssl" Docker container, and I described the details here.
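As a quick sanity check from inside the Celery container (the hostname redis, port 6379 and the /certs paths are assumptions matching a typical docker-compose setup), you can run the TLS handshake by hand:

openssl s_client -connect redis:6379 \
  -CAfile /certs/myCA.pem \
  -cert /certs/dev.mergebot.com.crt \
  -key /certs/dev.mergebot.com.key

If the handshake works, you should see the server's certificate chain and "Verify return code: 0 (ok)" in the output.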
Yes, the certificate Common Name should match the hostname, and the issuer of the certificate should be trusted by the client.
In your case, since you are using a custom CA to generate the certs, the public cert of the CA should be in the client's trusted roots.
Additionally, the certificate should be issued to the hostname, which in your case will be localhost. Please note that if you access the site from a remote machine using either the FQDN or the IP, the browser will flag the certificate as invalid.
Also, to verify the certificates, you can use the OpenSSL verify option.
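For example, to check that the signed certificate actually chains back to your custom CA:

openssl verify -CAfile myCA.pem dev.mergebot.com.crt

and to inspect the Common Name and any Subject Alternative Names in the certificate:

openssl x509 -in dev.mergebot.com.crt -noout -text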
I have used LDAP-based Camunda auth to log in to the application using the HttpBasicAuthenticationProvider provided by Camunda. How can I implement HTTPS login? Is it supported by Camunda, or do we need to use Spring Security?
Please share any related links or configuration for a Camunda HTTPS implementation.
I am not sure I understood you correctly: you want to set up Camunda to have TLS, and additionally you want LDAP authorization?
To set up TLS, you need to configure it directly on the Tomcat server.
First you need to obtain/generate certificates.
Then you need to point to those certificates in the server.xml configuration file.
Just google "TLS on Tomcat"; there are plenty of tutorials showing how to do this step by step.
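As an illustration only (the keystore path and password are placeholders; the pre-packaged Camunda 7.x distribution is based on Tomcat, where a JSSE HTTPS connector in conf/server.xml looks roughly like this):

<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
           keystoreFile="conf/keystore.jks" keystorePass="changeit"
           clientAuth="false" sslProtocol="TLS" />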
When it comes to LDAP integration - follow documentation:
https://docs.camunda.org/manual/7.8/installation/full/tomcat/configuration/#ldap