ASP.NET APIs have the wrong certificate, Blazor website refuses to connect - Docker

I have the following setup: a Blazor Server website, an ASP.NET API gateway using Ocelot, and a few microservices, also built with ASP.NET. All of them run in individual Docker containers.
First off, everything works: the connections work and I can fetch data, but only over HTTP. I have the dev certs enabled for ASP.NET, and they also "work", but the problem is that they are issued for the wrong host.
When I navigate to the gateway in the browser, it works as long as I use "localhost" and the port. The problem is that, because the services run in containers, "localhost" does not mean the same thing to them, so I have to use my machine's local IP and the port to reach the correct service. But the certificate is issued for "localhost", so it is considered invalid as soon as the host is no longer "localhost".
This means my Blazor app sees a certificate that does not match the host and gets an exception because it cannot validate it. Basically, I need a way to either change the certificates in the containers or tell the Blazor server to accept them.
I have not found anything on this topic, so I would really appreciate some help.
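To make the second option concrete, by "tell the Blazor server to accept those certificates" I mean something like relaxing certificate validation on the HttpClient that talks to the gateway, but only in Development. This is just a rough sketch of the idea, not my actual code; the client name "gateway" and the IP/port are placeholders:

```csharp
// Program.cs of the Blazor Server project -- a sketch, not my real setup.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddRazorPages();
builder.Services.AddServerSideBlazor();

var acceptDevCert = builder.Environment.IsDevelopment();

builder.Services.AddHttpClient("gateway", client =>
{
    // Placeholder: the host's LAN IP and the published gateway port.
    client.BaseAddress = new Uri("https://192.168.0.10:5001");
})
.ConfigurePrimaryHttpMessageHandler(() =>
{
    var handler = new HttpClientHandler();

    // Development only: accept the ASP.NET Core dev certificate even though
    // it was issued for "localhost" and we are calling an IP address.
    if (acceptDevCert)
    {
        handler.ServerCertificateCustomValidationCallback =
            HttpClientHandler.DangerousAcceptAnyServerCertificateValidator;
    }

    return handler;
});

var app = builder.Build();
app.MapBlazorHub();
app.MapFallbackToPage("/_Host");
app.Run();
```

I realize this only papers over the hostname mismatch during development, so if there is a cleaner way to get a matching certificate into the containers, I would prefer that.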

Related

How to handle https for a containerized OIDC server in local development?

I have an OpenID Connect server (OpenIddict) and an ASP.NET Core web app in containers behind a TLS termination proxy. In production, all communication between the web app and the OIDC server can go through the 'outside', based on their public names. In development, however, I'm using self-signed certificates that are only trusted by my host PC, not by the containers running the apps. Because of that, the web app can redirect the browser to the OIDC server just fine, but when it needs to call the token endpoint, for instance, the call fails because the certificate isn't trusted.
A possible solution would be to have the server-to-server communication go through the internal container network, but I haven't been able to get that to work. Is there a way to make the ASP.NET Core OpenID Connect middleware use a different URL (and protocol) for server-to-server communication?
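To make that concrete, these are the kinds of knobs I was hoping to find on the middleware. This is only a sketch of the idea, not a working configuration; the public name, the internal container name, and the client id are placeholders:

```csharp
// Sketch only (needs the Microsoft.AspNetCore.Authentication.OpenIdConnect package).
// "https://auth.example.dev", "http://oidc-server:8080" and "webapp" are placeholders.
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddAuthentication(options =>
    {
        options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
    })
    .AddCookie()
    .AddOpenIdConnect(options =>
    {
        options.ClientId = "webapp";
        options.ResponseType = "code";

        // Browser redirects keep using the public name.
        options.Authority = "https://auth.example.dev";

        // Idea 1: fetch the discovery document over the internal container
        // network. Note that the endpoint URLs inside that document still
        // come from the OIDC server's own configuration, so this alone may
        // not reroute the token call.
        options.MetadataAddress =
            "http://oidc-server:8080/.well-known/openid-configuration";
        options.RequireHttpsMetadata = false; // development only

        // Idea 2: keep the public URLs, but let the backchannel (token,
        // userinfo) calls accept the self-signed development certificate.
        options.BackchannelHttpHandler = new HttpClientHandler
        {
            ServerCertificateCustomValidationCallback =
                HttpClientHandler.DangerousAcceptAnyServerCertificateValidator
        };
    });

var app = builder.Build();
app.UseAuthentication();
app.Run();
```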
Another solution would be to install the self-signed certificates in the containers, but because that's only needed in development, it seems bad practice to burden the images with it. Is that assessment correct?
I'm hoping I'm missing the most obvious solution. Any ideas?
This is what I ended up doing:
I added a custom domain to the hosts file of my PC, pointing to itself.
Using openssl, I created a rootDevCA.crt and added it to the trusted root on my PC and in all the container images.
With that root certificate, I signed a new certificate for the custom domain and supplied that (including its key) to the proxy.
As long as I keep the key file for the root certificate far away from my source code, there should be no security issues.
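For anyone who would rather script those certificate steps from .NET instead of openssl, roughly the same thing can be done with CertificateRequest. A sketch under that assumption; the domain, file names and PFX password are placeholders:

```csharp
// Rough .NET equivalent of the openssl steps above (a sketch, not the exact
// commands used). "myapp.local.dev", the file names and the password are placeholders.
using System;
using System.IO;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

// 1. Create a self-signed root CA (rootDevCA).
using var rootKey = RSA.Create(4096);
var rootReq = new CertificateRequest(
    "CN=rootDevCA", rootKey, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
rootReq.CertificateExtensions.Add(
    new X509BasicConstraintsExtension(
        certificateAuthority: true, hasPathLengthConstraint: false,
        pathLengthConstraint: 0, critical: true));
using var rootCert = rootReq.CreateSelfSigned(
    DateTimeOffset.UtcNow.AddDays(-1), DateTimeOffset.UtcNow.AddYears(5));

// The public part goes into the trusted root store on the host and in the
// development container images.
File.WriteAllBytes("rootDevCA.crt", rootCert.Export(X509ContentType.Cert));

// 2. Create a certificate for the custom domain, signed by the root CA.
using var siteKey = RSA.Create(2048);
var siteReq = new CertificateRequest(
    "CN=myapp.local.dev", siteKey, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
var san = new SubjectAlternativeNameBuilder();
san.AddDnsName("myapp.local.dev");
siteReq.CertificateExtensions.Add(san.Build());
using var siteCert = siteReq.Create(
    rootCert,
    DateTimeOffset.UtcNow.AddDays(-1), DateTimeOffset.UtcNow.AddYears(1),
    serialNumber: new byte[] { 0x01, 0x02, 0x03, 0x04 });

// 3. Bundle the site certificate with its private key for the TLS proxy.
File.WriteAllBytes("myapp.local.dev.pfx",
    siteCert.CopyWithPrivateKey(siteKey).Export(X509ContentType.Pfx, "changeit"));
```

As above, only the root .crt (without its key) needs to be distributed; the .pfx with the site key stays with the proxy.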

Serving dockerized microservices over HTTPS

I'm currently struggling with Docker and SSL. Let me give you an overview of what I'm trying to do.
I built a microservice-based architecture composed of a React web application and some "backend" services written in Python, exposed with Gunicorn in Docker containers. I need to serve it over SSL because Auth0 requires HTTPS communication. So I set up the server, bought a domain, and got an SSL certificate for the domain with Let's Encrypt.
Now, here is the trouble: my services communicate with each other over a Docker network, say services-network. For this reason they refer to each other with URLs like `service:port/example`.
At the moment I'm able to connect to my web app over HTTPS, but whenever it tries to contact the "backend" services the connection is refused because it comes from a non-secure resource (I used http://service:port/endpoint).
I tried to use the Let's Encrypt certificate generated for the web app, but the communication is blocked with the message: requests.exceptions.SSLError: HTTPSConnectionPool(host='service', port=8081): Max retries exceeded with url: /endpoint (Caused by SSLError(CertificateError("hostname 'service' doesn't match 'domain.com'",),))
I understand that a possible workaround for this error is to have the services communicate over the external network instead of the Docker network. However, I think that is not good practice and that communication among containers should go through the Docker network.
Finally, my question is: what is the best way to make the containers communicate over HTTPS through the Docker network?
I personally like to use nginx as a reverse proxy. You would configure it normally and set it to proxy_pass <dockerIp:port>.
Many people like to use traefik.io, which has many features, including Let's Encrypt integration.

Azure Cloud Service microservice to K8s Migration

I am in the process of evaluating moving a very large Azure Cloud Service (Web Role) microservice architecture to AKS and have been working through the necessary code and build changes to support it.
In order to replicate the production environment locally for developers, we run nginx on the host with SSL offloading and DNS A records (hosted in Azure) pointing to 127.0.0.1. When running in the Azure Emulator, the net effect is that a developer can both visit the various web front ends in their browser (i.e. https://myapp.mydomain.dev) and hit the various APIs in the solution (Web API 2) in Postman/cURL, etc.
Additionally, due to how the networking of the Azure Emulator works, the apps themselves can resolve each other through nginx on the host (i.e. the MVC app at https://myapp.mydomain.dev can obtain a token from the IdP Web API at https://identity.mydomain.dev and then use that token against the API at https://api.mydomain.dev). This is the critical piece and the source of my question.
All attempts at getting the containers themselves to resolve each other the same way the host OS can (browser/Postman, SSL offloading via nginx) have failed. Many of the instructions out there are understandably for Linux containers, and adapting the various docker-compose networking settings to their Windows container equivalents has not yet yielded any success. To keep the development environments aligned with the real systems, which are tenantized and make use of the default mapping in nginx to catch all incoming traffic and route it to a specific user-facing app/container, it is not as simple as determining a "static" method of addressing these on startup; that is why the effort was put into producing the development environments we have today.
Right now, when one service (container) attempts to communicate with another, it ultimately results in a resolution error, as all requests resolve to https://127.0.0.1 due to the DNS A records hosted in Azure for the domain. Since this migration will be a longer-term project, the environments need to co-exist, so changing the way DNS is resolved (real DNS A records pointing to 127.0.0.1, the host running nginx and handling SSL offloading to the various web roles normally running in the Azure Emulator) is not an option.
Is there a way (with Windows containers) to either:
Allow the container to utilize nginx on the host OS transparently (the app must still call the API at https://api.mydomain.dev), so that traffic is routed properly to the correct container/port defined in the docker-compose file?
OR
Run nginx in each container, allowing each container to resolve and route appropriately without knowing the IP of the other container, possibly through an alias that could be added to the container's nginx.conf before the service starts?
The platform utilizes OAuth2/OIDC, and it is critical to maintain the full URL to the other services from the application's perspective. Beyond mirroring the production and sandbox environments, these URLs are used for redirect URL and post-logout redirect URL validation, among other things, so using "https://myContainerNameForOtherContainerAlias" is not a workable solution.
Will I have the same problem when setting up the AKS environment as well?

Accessing local https site over LAN

I am trying to debug my ASP.NET MVC website when hosted over HTTPS. I am hosting it via the IIS instance that Visual Studio launches. I need to access the website from a mobile device, so I am attempting to access the site over the LAN.
I have enabled HTTPS via Visual Studio by going to the project properties and setting SSL Enabled to true. Now, when I debug the site, it starts two local instances: one for HTTP (as usual) and one for HTTPS. The internal ports for these are listed when I right-click the IIS icon in the task tray.
I do not have an SSL certificate installed. My browser gives me insecure connection warnings, but I just dismiss them - I don't believe these are a concern as I am just testing locally (I could be wrong).
I am using a program called Sharp Proxy to translate these internal ports to external ones, making the site accessible from other machines. 64312 is my internal port for accessing the site over HTTP; Sharp Proxy translates this to 4567. 44300 is my internal port for accessing the site over HTTPS; Sharp Proxy translates this to 5678.
Successful permutations:
http://localhost:64312 - SUCCESS
https://localhost:44300 - SUCCESS
http://192.168.0.72:4567 - SUCCESS
Failed permutations:
https://192.168.0.72:5678 - FAIL
Bad Request - Invalid Hostname
The issue is clearly not to do with accessing my machine over the LAN, as I am able to successfully hit 192.168.0.72 over HTTP.
My question is: what do I need to do to be able to access my HTTPS site over the LAN while debugging with Visual Studio?
In order to verify the identity of the server you're connecting to, HTTPS needs a hostname to check the certificate against.
Therefore, you generally cannot connect to a bare IP address over HTTPS unless the certificate explicitly lists that IP.
You need to use a domain name that the certificate covers.

Jenkins Server - Issues with setting URL

I am trying to set up an internal Jenkins server for our QA team and am facing some issues with the server URL. This is inside a corporate network and all sorts of firewall and proxy settings are in place; however, we only need to access the server within our internal network. The server runs on a Mac mini. I was able to install and access the server without any issues using localhost:8080.
I tried to set a custom URL (something like testjenkins.local:8080) under the Manage Jenkins options and was never able to access the server. The only option that worked for me was the IP address (IP:8080); I was able to access the server from other machines on the network using that URL.
The real problem with the above setup is that the machine's IP changes (I am not able to make it static), so I won't be able to get a URL that always works.
I would highly appreciate it if anyone could guide me in the right direction.
Given that your server has a dynamic IP, a good alternative would be to use ngrok. ngrok can expose port 8080 of that server to the internet via a secure tunnel, and you can access it via a URL, so changes in the IP won't affect it.
However, ngrok exposes the server to the whole Internet. To make it accessible only to your team, you can add authentication on both the ngrok tunnel and the Jenkins server (would that work for you?).
