SSL local/remote Cert for DotNetCore API - docker

I'm a newbie when it comes to certificates.
I'm building a Linux Docker image with a .NET Core REST Web API app that will host the backend for a game. I plan to host this backend on Azure using a Container Instance.
I'd like all communication to be via SSL. I've created a self-signed cert for local communication from my Windows machine to the container. Once I registered it in my hosts file, the self-signed cert worked fine locally.
Now I'm ready to host on Azure. I'm prepared to obtain a CA cert, but am trying to work out how to maintain both local access and public access without cert errors, and without modifying the container between my local/debug sessions and the production/remote sessions. I'd prefer to have a single certificate, if possible.
Can anyone give me guidance on how to set up a cert for this situation? It seems like a common need, but I'm not finding resources to walk me through it. Thanks!
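For illustration: a single certificate could cover both the local name and the Azure name via Subject Alternative Names. A minimal openssl sketch, where game.local (the hosts-file entry) and mygame.westus.azurecontainer.io (the Container Instance FQDN) are hypothetical placeholders, and -addext requires OpenSSL 1.1.1+:

```sh
# One cert, two SANs: the local hosts-file name and the public Azure name.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout game.key -out game.crt \
  -subj "/CN=game.local" \
  -addext "subjectAltName=DNS:game.local,DNS:mygame.westus.azurecontainer.io"

# Bundle key + cert into a PFX that Kestrel can load.
openssl pkcs12 -export -out game.pfx -inkey game.key -in game.crt -passout pass:changeit
```

Being self-signed, such a cert still has to be trusted on every client; only a CA-issued cert for the public name avoids that for outside players.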

Related

Add Letsencrypt Certificate to Keycloak Trusted Certificates

We have the following setup:
A Keycloak server installed as a Docker container on a VM.
A server certificate via Let's Encrypt.
Two realms, a and b.
Realm b is integrated into realm a as an identity provider.
To get this working, we had to import the certificate of the Keycloak server into the Java truststore. Now the login works and we can choose in realm a whether we want to log in with realm b. Unfortunately, the process of importing the certificate involves a lot of manual effort (copy the certificate into the container, split the chain into several files with only one certificate each, call a function), and the certificates are only valid for 90 days. Of course we can automate this, but the question is: is there an "official" way of doing this? Like mounting the Let's Encrypt certificate folder into the container and "done"? We are using the official jboss/keycloak container image.
The Docker container should support this if you set the X509_CA_BUNDLE variable accordingly (see the image docs).
This creates the truststore for you and configures it in WildFly. The details are in the image's entrypoint scripts.
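A sketch of what that could look like, assuming the standard certbot layout under /etc/letsencrypt/live/<domain> (domain, paths, and container name are placeholders):

```sh
# Mount the Let's Encrypt directory read-only and point X509_CA_BUNDLE
# at the chain file so the entrypoint builds the truststore from it.
docker run -d --name keycloak \
  -v /etc/letsencrypt/live/example.com:/etc/x509/ca:ro \
  -e X509_CA_BUNDLE=/etc/x509/ca/fullchain.pem \
  jboss/keycloak
```

Since the certificates are only valid for 90 days, the container still has to pick up each renewal, e.g. via a restart hook in certbot.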

How to handle https for a containerized OIDC server in local development?

I have an OpenID Connect server (OpenIddict) and an ASP.NET Core web app in containers behind a TLS termination proxy. In production, all communication between the web app and the OIDC server can go through the 'outside', based on their public names. However, in development, I'm using self-signed certificates that are trusted only by my host PC, not by the containers running the apps. Because of that, in development, the web app can redirect the browser to the OIDC server just fine, but when it needs to call the token endpoint, for instance, it fails because the certificate isn't trusted.
A possible solution would be to have the server-to-server communication go through the internal container network, but I haven't been able to get that to work. Is there a way to make the ASP.NET Core OpenID Connect middleware use a different URL (and protocol) for server-to-server communication?
Another solution would be to install the self-signed certificates in the containers, but since that's only needed in development, it seems like bad practice to burden the images with it. Is that assessment correct?
I'm hoping I'm missing the most obvious solution. Any ideas?
This is what I ended up doing:
I added a custom domain to the hosts file of my PC, pointing to itself.
Using openssl, I created a rootDevCA.crt and added it to the trusted root store on my PC and in all the container images.
With that root certificate, I signed a new certificate for the custom domain and supplied it (including its key) to the proxy.
As long as I keep the key file for the root certificate far away from my source code, there should be no security issues.
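A minimal openssl sketch of those steps, assuming a hypothetical custom domain myapp.dev.local; the final trust step is for Debian/Ubuntu-based Linux images (typically a RUN step in the Dockerfile):

```sh
# 1) One-off root CA. Keep rootDevCA.key away from source control!
openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
  -keyout rootDevCA.key -out rootDevCA.crt -subj "/CN=Local Dev Root CA"

# 2) Key and CSR for the custom domain.
openssl req -newkey rsa:2048 -nodes \
  -keyout myapp.key -out myapp.csr -subj "/CN=myapp.dev.local"

# 3) Sign the CSR with the root CA, adding the SAN modern clients require.
openssl x509 -req -in myapp.csr -CA rootDevCA.crt -CAkey rootDevCA.key \
  -CAcreateserial -days 365 -out myapp.crt \
  -extfile <(printf "subjectAltName=DNS:myapp.dev.local")

# 4) Inside each container image: trust the root certificate.
cp rootDevCA.crt /usr/local/share/ca-certificates/rootDevCA.crt
update-ca-certificates
```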

Asp.net APIs have wrong certificate, Blazor website refuses to connect

I have the following setup: a Blazor Server website, an ASP.NET API gateway using Ocelot, and a few microservices, also built on ASP.NET. All of these run in individual Docker containers.
First off, everything works and I can fetch data, but only over HTTP. I have the dev-certs enabled for ASP.NET, and they also "work", but the problem is that they are signed for the wrong host.
When I navigate to a gateway with the browser, it works as long as I call "localhost" plus the port. The problem is that, because the services run in containers, localhost does not mean the same thing to each of them. That means I have to use my local IP plus the port to reach the correct service. But the certificate is issued for "localhost", so it is rejected as invalid once the host is no longer "localhost".
This means that my Blazor app sees a certificate that does not match the host and gets an exception because it can't validate the cert. I have looked up a lot of material and found nothing, but basically I need a way to either change the certificates in the containers or tell the Blazor server to accept those certificates.
I have not found a thing on this topic, so I would really appreciate some help.
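One possible way out, sketched under assumptions (the compose service name gateway, the LAN IP 192.168.1.50, and the image name mygateway are placeholders): issue a certificate whose SANs cover every name the services are reached by, and point Kestrel at it through its standard configuration environment variables:

```sh
# Cert valid for localhost, the container/service name, and the LAN IP.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout gateway.key -out gateway.crt -subj "/CN=gateway" \
  -addext "subjectAltName=DNS:gateway,DNS:localhost,IP:192.168.1.50"
openssl pkcs12 -export -out gateway.pfx -inkey gateway.key -in gateway.crt \
  -passout pass:changeit

# Mount the PFX and tell Kestrel to use it instead of the dev-cert.
docker run -d -p 443:443 \
  -v "$PWD/gateway.pfx:/https/gateway.pfx:ro" \
  -e ASPNETCORE_URLS="https://+:443" \
  -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/gateway.pfx \
  -e ASPNETCORE_Kestrel__Certificates__Default__Password=changeit \
  mygateway:latest
```

The cert (or its issuing root) then still has to be trusted by the Blazor container, e.g. as in the answer above.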

Trusting a certificate signed by a local CA from an AspNetCore-Api inside docker

I have the following scenario:
I want to run three services (intranet only) in Windows Docker containers on a Windows host:
an IdentityServer4
an API (which uses the IdSvr for authorization)
a web client (which uses the API as data layer and the IdSvr for authorization)
All three services run on ASP.NET Core 2.1 (with microsoft/dotnet:2.1-aspnetcore-runtime as base image) and use certificates signed by a local CA.
The problem I'm facing is that I cannot get the API or the web client to trust these certificates.
E.g. if I call the API, the authentication middleware tries to call the IdSvr but gets an error on GET '~/.well-known/openid-configuration' because of an untrusted SSL certificate.
Is there any way to get the services to trust every certificate issued by the local CA? I've already tried one suggested approach, but either I'm doing it wrong or it just doesn't work.
IMHO a Docker container must have its own cert store, otherwise no trusted HTTPS connection would be possible at all. So my idea is to get the root certificate from the Docker host's cert store (which trusts the CA) into the container, but I don't know how to achieve this.
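That idea can work. A sketch for the Linux variant of the aspnetcore-runtime base image (for Windows-based images the equivalent step is importing the exported root into the container's Cert:\LocalMachine\Root store); the file name localCA.crt and the container name api are placeholders:

```sh
# Export the CA's root certificate from the host in PEM format as
# localCA.crt, then either bake it into the image in the Dockerfile:
#
#   COPY localCA.crt /usr/local/share/ca-certificates/localCA.crt
#   RUN update-ca-certificates
#
# ...or push it into an already running container:
docker cp localCA.crt api:/usr/local/share/ca-certificates/localCA.crt
docker exec api update-ca-certificates
```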

How to connect my docker with SSL in local network (without domain)?

I wanted to know if there is a way to secure the traffic between my Raspberry Pi, which runs Docker, and my computer on the same local network using SSL/TLS. I just want to be able to connect to my containers over HTTPS, locally, with Let's Encrypt (I use the raspberrypi.local domain).
Thanks,
You can use SSL to connect to a local (unregistered) domain, but not with Let's Encrypt.
Let's Encrypt (and any other service that issues certificates) needs to verify ownership of the domain before it will issue one.
This is done in various ways (out of scope for this question), but in every case an existing, publicly resolvable domain name is required.
That is obviously not your case.
What you can do is generate a self-signed certificate and use that to connect over SSL.
This is a tutorial for generating an SSL certificate using docker only: https://codefresh.io/blog/using-docker-generate-ssl-certificates/
Once you have a certificate, you have to deploy it in your docker app stack, but I guess this is off-topic for your question.
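For completeness, a minimal self-signed sketch for the .local name (-addext requires OpenSSL 1.1.1+):

```sh
# Self-signed cert for raspberrypi.local, valid for roughly two years.
openssl req -x509 -newkey rsa:2048 -nodes -days 825 \
  -keyout raspberrypi.key -out raspberrypi.crt \
  -subj "/CN=raspberrypi.local" \
  -addext "subjectAltName=DNS:raspberrypi.local"
```

Trust raspberrypi.crt on the computer and hand the key/cert pair to whatever reverse proxy fronts the containers.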
