I'm trying to deploy a WCF service implemented with CoreWCF + .NET 6 to Azure Container Apps, exposing HTTPS endpoints from a Linux container.
If I run the same service over plain HTTP, everything works correctly.
I also expose a gRPC service, but the difference from WCF is the binding configuration: WCF requires the same protocol scheme on both client and server. So I suppose it's not possible to redirect an HTTPS request to a container that exposes the WCF service on port 80, whereas that works fine for a REST or gRPC service.
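For illustration, this is roughly how that constraint shows up on the client side (a sketch only; the contract name and address are placeholders): with Transport security the endpoint address must be https, with None it must be http.

```csharp
using System.ServiceModel;

// Transport security => the endpoint address must use https://
var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
// var binding = new BasicHttpBinding(BasicHttpSecurityMode.None); // => http:// instead

var factory = new ChannelFactory<IMyService>(
    binding,
    new EndpointAddress("https://myapp.example.com/MyService.svc"));
IMyService client = factory.CreateChannel();

// Placeholder contract, just so the sketch compiles.
[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string Echo(string value);
}
```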
I enabled ingress in Azure Container Apps and set the target port to 443; when testing the HTTP endpoint I set port 80 instead.
The WCF binding is BasicHttpBinding with security mode Transport; when testing the HTTP endpoint I set the security mode to None.
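The hosting code looks roughly like this (a simplified sketch, assuming the standard CoreWCF hosting extensions; the service and contract names are placeholders):

```csharp
using CoreWCF;
using CoreWCF.Configuration;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddServiceModelServices();

var app = builder.Build();

app.UseServiceModel(serviceBuilder =>
{
    serviceBuilder.AddService<MyService>();

    // HTTPS endpoint: BasicHttpBinding with Transport security.
    // For the plain HTTP test I switch this to BasicHttpSecurityMode.None.
    serviceBuilder.AddServiceEndpoint<MyService, IMyService>(
        new BasicHttpBinding(BasicHttpSecurityMode.Transport), "/MyService.svc");
});

app.Run();

// Placeholder contract and implementation.
[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string Echo(string value);
}

public class MyService : IMyService
{
    public string Echo(string value) => value;
}
```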
In the Dockerfile I expose ports 80 and 443.
On my local machine I can get things working because I can use a self-signed certificate, but in a production environment this doesn't seem to work. I deploy the self-signed certificate with the container image, but presumably no certificate authority trusts it.
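Locally, Kestrel picks up the self-signed certificate along these lines (again a sketch; the PFX path and password are placeholders):

```csharp
var builder = WebApplication.CreateBuilder(args);

// HTTP on 80 and HTTPS on 443, using a self-signed PFX copied into the image.
// The certificate path and password below are placeholders.
builder.WebHost.ConfigureKestrel(options =>
{
    options.ListenAnyIP(80);
    options.ListenAnyIP(443, listenOptions =>
        listenOptions.UseHttps("/app/certs/selfsigned.pfx", "placeholder-password"));
});

var app = builder.Build();
app.Run();
```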
I read that for Azure Container Instances it's possible to configure HTTPS by running Nginx in a sidecar container, but in that case the request is forwarded internally over port 80, so it doesn't work for me.
What can I do to get my service working over HTTPS?
Related
I have a Google Cloud VM that runs a Docker image. The image runs a specific Java app on port 1024. I have pointed my domain's DNS at the VM's public IP.
This works: I can go to mydomain.com:1024 and access my app, since Google Cloud exposes the Docker port directly as a public port. However, I want to access the app through https://example.com (port 443), so essentially map port 443 to port 1024 on my VM.
Note that my Docker image starts an nginx service. Previously I configured the Java app to run on port 443; nginx listened on 443 and Google Cloud exposed this HTTPS port, so everything worked fine. But for specific reasons I can no longer use port 443 for my app.
Any ideas? Can I configure nginx somehow to map to this port? Or do I set up a load balancer to proxy the traffic (which seems rather complex, as this is all pretty new to me)?
P.S. On Google Cloud you cannot use "docker run -p 443:1024 ...", which would basically do the same thing if I'm right, but the containerized VMs do not allow this.
Container-Optimized OS maps ports one to one: port 1000 in the container is mapped to port 1000 on the public interface. I am not aware of a way to change that.
For your case, use Compute Engine with Docker or a load balancer to proxy connections.
Note: if you use a load balancer, your app does not need to manage SSL/TLS. Offload SSL/TLS to the load balancer and just publish HTTP within your application. Google can then manage your SSL certificate issuance and renewal for you. You will find that managing SSL certificates for containers is a deployment pain.
Is it possible to make a serverless Icecast server?
I'm trying to make an internet radio with Icecast on Google's serverless Cloud Run platform. I've put the Docker image in Container Registry and created a Cloud Run service with the default Icecast port 8000. It all seems to work when visiting Cloud Run's provided URL: I can get to the default Icecast and admin pages.
The problem is connecting to the server with a source client (I tried Mixxx and butt). I think the problem is with ports: setting the port to 8000 in Mixxx gives a "Socket is busy" error, while butt simply doesn't connect. Setting the port to 443 in Mixxx gives a "Socket error", while butt reports "connect: server answered with 411!".
I tried the same thing on Compute Engine, just installing Icecast rather than running a Docker image, and everything works as intended. As I understand it, Cloud Run provides a URL for the container (https://example.app) with the port given at setup (8000 for Icecast), but the source client tries to connect to that URL on its own configured port (http://example.app:SOURCE_CLIENT_PORT). So I'm not sure whether there's a problem with HTTPS or whether I just need to configure the ports differently.
With Cloud Run you can expose only one port externally. By default it's port 8080, but you can override this when you deploy your revision.
This port sits behind a front layer of Google Cloud infrastructure, the Google Front End, and is exposed through a DNS name (*.run.app) on port 443 (HTTPS).
Thus you can reach your service only through that single exposed port, wrapped behind port 443; any other port will fail.
With Compute Engine you don't have this limitation, which is why you see no issues there. Simply open the correct port with firewall rules and enjoy.
I created an ASP.NET service exposing a web service on port 3188. It uses the Kestrel server.
I published this as a Linux container on my local machine, with the port mapping 5188:3188.
I cannot access it at http://localhost:5188/diagnostics.
If I publish on the local machine with the port mapping 3188:3188, it works, and the URL http://localhost:3188/diagnostics yields a response.
How do I fix this and publish specifically on port 5188, so that I can run similar services on different ports?
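For reference, the service's Kestrel setup looks roughly like this (a simplified sketch; the /diagnostics handler here is a placeholder):

```csharp
var builder = WebApplication.CreateBuilder(args);

// Bind Kestrel to 0.0.0.0:3188 inside the container. The host-side port
// (5188 in `-p 5188:3188`) is purely a Docker mapping and never appears here.
builder.WebHost.UseUrls("http://0.0.0.0:3188");

var app = builder.Build();
app.MapGet("/diagnostics", () => Results.Ok("ok")); // placeholder endpoint
app.Run();
```

As far as I understand, the host port in the mapping is just Docker NAT onto the container port, which is why I expected 5188:3188 to work as well.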
I'm running a TCP server (a Docker instance written in Go) on Kubernetes. It's working and clients can connect and do the intended stuff. I would like to secure the TCP connection with an SSL certificate. I already got SSL working for an HTTP REST API service running on the same Kubernetes cluster by using ingress controllers, but I'm not sure how to set it up for a plain TCP connection. Can anyone point me in the right direction?
As you can read in the documentation:
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
Depending on the platform you are using, different kinds of LoadBalancers are available which you can use to terminate your SSL traffic. If you have an on-premises cluster, you can set up an additional nginx or HAProxy server in front of your Kubernetes cluster to handle the SSL traffic.
I have multiple Docker containers exposing their respective ports, which I bring up using docker-compose. One of the services runs on port 80.
I need to add an SSL certificate for all those containers, so that the application running on port 80 is served over HTTPS.
You can either keep the certificate on a reverse proxy or propagate it through service configuration tools; which one depends on your infrastructure. One example of the latter might be Vault.