Importing an existing certificate from a k8s Secret into a cert-manager Certificate CR

Let me start with the why:
My cert-manager manages dozens of certs issued by a private ACME CA server. I use them for ingress and egress traffic (mTLS). Now I have to use certs issued by a public CA for some use cases. cert-manager cannot issue those for me, but it could still help with their life cycle (Prometheus expiry metrics, to be precise).
Let's say we have some cert in a K8s Secret:
kubectl create secret generic my-cert \
--from-file=tls.key=cert.key \
--from-file=tls.crt=cert.crt
Is there a way to create a cert-manager Certificate from that Secret?

According to the issue "Allow upload of external certificates", there is no option to do that right now.
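The closest you can get today is storing the pair as a kubernetes.io/tls Secret, which is the shape cert-manager uses for the Secrets it manages. A minimal sketch with the same file names as above (note this still creates no Certificate resource, and therefore no expiry metrics):

kubectl create secret tls my-cert \
--key=cert.key \
--cert=cert.crt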

Related

Cert-manager issuing certs for a Strimzi Kafka deployment

I am working on Strimzi Kafka. I want to deploy Kafka with self-signed certs issued by cert-manager instead of the self-signed certs provided by the Strimzi operator/Kafka.
I have gone through the Strimzi documentation but didn't find a solution for integrating cert-manager with Strimzi Kafka/operator.
When we deploy Kafka, we can see many secrets (with certs) being created in the namespace. If I want all those secrets/certs to be issued by cert-manager and work with Kafka, how can I do it?
Thank you!
You can use Cert Manager to provide a listener certificate, as described in this blog post. But there is currently no easy way to use it for the internal CAs. You can follow this proposal, which might make it possible in the future.
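For illustration, the listener part of the Kafka custom resource would then reference the cert-manager-managed Secret roughly like the fragment below (the secret name my-listener-cert is an assumption, and the exact listener schema depends on your Strimzi version):

listeners:
  - name: external
    port: 9094
    type: loadbalancer
    tls: true
    configuration:
      brokerCertChainAndKey:
        secretName: my-listener-cert   # Secret issued by cert-manager
        certificate: tls.crt
        key: tls.key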

Add Let's Encrypt Certificate to Keycloak Trusted Certificates

We have the following setup:
A Keycloak server on a VM, installed as a Docker container.
A server certificate via Let's Encrypt.
Two realms, a and b.
Realm b is integrated into realm a as an identity provider.
To make this work, we had to import the certificate of the Keycloak server into the Java truststore. Now the login works and we can choose in realm a whether we want to log in with realm b. Unfortunately, importing the certificate involves a lot of manual effort (copy the certificate into the container, split the chain into several files with only one certificate each, call a function), and the certificates are only valid for 90 days. Of course we can automate this, but the question is: is there an "official way" of doing this? Like mounting the Let's Encrypt certificate folder into the container and "done"? We are using the official jboss/keycloak container image.
The Docker container should support this if you set the X509_CA_BUNDLE variable accordingly. See the docs here.
This creates the truststore for you and configures it in WildFly. Details can be found in this and that script.
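A minimal sketch of what that could look like, assuming the Let's Encrypt files live under /etc/letsencrypt/live/example.com on the host (the domain and mount path are assumptions):

docker run -d \
-v /etc/letsencrypt/live/example.com:/etc/x509/ca:ro \
-e X509_CA_BUNDLE=/etc/x509/ca/fullchain.pem \
jboss/keycloak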

Azure IoT Edge Certificate Requirements

We are running the Azure IoT Edge runtime on commodity servers inside a corporate intranet. I understand the Microsoft documentation recommends installing certificates for production IoT Edge deployments.
We are using basic edge modules only, no gateway configurations, passthroughs, etc...
For our intranet scenario are self-signed certs suitable for production? If so can a single certificate be used for all devices?
Thanks
Yes, you can use self-signed CA certificates. Check here.
Every IoT Edge device in production needs a device certificate authority (CA) certificate installed on it. That CA certificate is then declared to the IoT Edge runtime in the config.yaml file. For development and testing scenarios, the IoT Edge runtime creates temporary certificates if no certificates are declared in the config.yaml file. However, these temporary certificates expire after three months and aren't secure for production scenarios. For production scenarios, you should provide your own device CA certificate, either from a self-signed certificate authority or purchased from a commercial certificate authority.
Regarding using the same CA cert on multiple Edge devices: logically you should be able to, since the identity cert is the one that differs per device (based on its CN). But I think you can easily check this by doing a quick POC.
Here is the link describing how to generate the CA cert.
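For reference, the declaration mentioned above is a certificates section in config.yaml along these lines (the paths are placeholders; the key names follow the config.yaml format the quoted docs describe):

certificates:
  device_ca_cert: "/var/secrets/aziot/device-ca.cert.pem"
  device_ca_pk: "/var/secrets/aziot/device-ca.key.pem"
  trusted_ca_certs: "/var/secrets/aziot/trusted-ca.cert.pem"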

How to generate certs for a secured connection from Celery to Redis

I'm following this tutorial and adapting its Celery background-task code to my project.
In my case I am operating in a Docker environment, and I have a secured site (i.e. https://localhost), which requires secure SSL communication.
The documentation here shows an example of how to provide the cert-related files (keyfile, certfile, ca_certs).
But it is not clear to me how to create these files in the first place.
The tutorial in here shows how to create a custom certificate authority, and how to sign a certificate with it.
I followed the steps and created the 3 files:
keyfile - dev.mergebot.com.key - the private key used to create a signed cert with the "self-trusted CA"
certfile - dev.mergebot.com.crt - the signed certificate (signed by myCA.pem)
ca_certs - myCA.pem - the "self-trusted CA" certificate (filename in the tutorial: myCA.pem)
Note that I created these 3 files completely unrelated to Celery, Redis, or Docker.
They were created on my local machine, outside Docker. The files don't carry the name of the Redis container, and the Common Name in the cert was set to "foo".
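For concreteness, the kind of openssl sequence the tutorial walks through looks roughly like this; the CN value is exactly the detail that matters below, and the hostname "redis" is an assumption for the name clients use to reach the container:

# create the self-trusted CA (file names follow the tutorial: myCA.key / myCA.pem)
openssl genrsa -out myCA.key 2048
openssl req -x509 -new -nodes -key myCA.key -days 365 \
-subj "/CN=my-test-ca" -out myCA.pem

# create a key and CSR for the Redis host; the CN should match the hostname
# the client uses to reach Redis (assumed here to be "redis")
openssl genrsa -out redis.key 2048
openssl req -new -key redis.key -subj "/CN=redis" -out redis.csr

# sign the CSR with the CA
openssl x509 -req -in redis.csr -CA myCA.pem -CAkey myCA.key \
-CAcreateserial -days 365 -out redis.crt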
When I use these files in my webapp, there is no connection from Celery to Redis.
Without SSL I do get a connection, so the overall environment aside from SSL is OK - see here.
Are there any specific requirements for creating the cert-related files? (e.g. should the Common Name in the cert be the container name "redis", etc.)
Is there a way to test the validity of the certs without the app, e.g. by issuing a command from the container shell?
Thanks
I was able to generate the cert-related files (keyfile, certfile, ca_certs) using the tutorial here.
I first tested that I can connect from my localhost to the "redis with ssl" Docker container, and I described the details here.
Then I tested that I can connect from the Celery Docker container to the "redis with ssl" Docker container, and I described the details here.
Yes, the certificate common name should match the host name, and the issuer of the certificate should be trusted by the client.
In your case, since you are using a custom CA and generating the certs, the public cert of the CA should be in the trusted root store of the client.
Additionally, the certificate should be issued to the hostname; in your case that will be localhost. Please do note that if you access the site from a remote machine using either the FQDN or the IP, the browser will flag the certificate as invalid.
Also, to verify the certificates, you can use the OpenSSL verify option.
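A minimal sketch of both checks, using the file names from the question (the hostname "redis" and port 6379 are assumptions for the container):

# check that the server cert chains up to the custom CA
openssl verify -CAfile myCA.pem dev.mergebot.com.crt

# exercise the TLS handshake against the running Redis container
openssl s_client -connect redis:6379 -CAfile myCA.pem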

Kubernetes pod fails while making a call to Google Cloud Pub/Sub with unknown certificate authority

I have a Kubernetes cluster set up where I am trying to publish a message to Google Cloud Pub/Sub from my pod. When the POST call (created by the API behind the scenes) is made by the pod, it fails with the error below:
2016/07/21 10:31:24 Publish failed, Post https://pubsub.googleapis.com/v1/projects/<project-name>/topics/MyTopic:publish?alt=json: x509: certificate signed by unknown authority
I have already put a self-signed certificate in /etc/ssl/certs of my Debian Docker image. Do I need to purchase an SSL certificate signed by some certificate authority, or will a self-signed one do the job and I am missing something here?
Self-signed certificates will not work. The certificate needs to be signed by a certificate authority that the client trusts.
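In practice, an "unknown authority" error from a slim Debian image often just means the standard root CA bundle is missing, so the certificate presented by pubsub.googleapis.com cannot be verified. A sketch of the usual Dockerfile fix, assuming a Debian-based base image:

# install the standard root CA bundle so certificates issued by
# public CAs (including Google's) can be verified
RUN apt-get update && \
apt-get install -y --no-install-recommends ca-certificates && \
rm -rf /var/lib/apt/lists/*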
