SSL with TCP connection on Kubernetes?

I'm running a TCP server (Docker instance / Go) on Kubernetes. It's working and clients can connect and do their intended work. I would like to secure the TCP connection with an SSL certificate. I already got SSL working with an HTTP REST API service running on the same Kubernetes cluster by using ingress controllers, but I'm not sure how to set it up for a plain TCP connection. Can anyone point me in the right direction?

As you can read in the documentation:
An Ingress does not expose arbitrary ports or protocols. Exposing
services other than HTTP and HTTPS to the internet typically uses a
service of type Service.Type=NodePort or Service.Type=LoadBalancer.
Depending on the platform you are using, you have different kinds of load balancers available which you can use to terminate your SSL traffic. If you have an on-premises cluster, you can set up an additional nginx or HAProxy server in front of your Kubernetes cluster to handle SSL termination.
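If you control the server code, a third option is to terminate TLS inside the Go application itself and keep the Service as a plain TCP LoadBalancer or NodePort. A minimal sketch using the standard library's crypto/tls; the port and certificate paths are placeholders (in Kubernetes the certificate and key would typically be mounted from a Secret):

```go
package main

import (
	"crypto/tls"
	"log"
)

func main() {
	// Placeholder paths; in Kubernetes these would typically be
	// mounted into the container from a Secret.
	cert, err := tls.LoadX509KeyPair("/etc/tls/tls.crt", "/etc/tls/tls.key")
	if err != nil {
		log.Fatalf("loading key pair: %v", err)
	}

	// tls.Listen wraps a plain TCP listener, so the rest of the
	// server keeps working with ordinary net.Conn values.
	ln, err := tls.Listen("tcp", ":9000", &tls.Config{
		Certificates: []tls.Certificate{cert},
	})
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	defer ln.Close()

	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Printf("accept: %v", err)
			continue
		}
		go func() {
			defer conn.Close()
			// ... existing connection handling goes here ...
		}()
	}
}
```

With this approach the load balancer just passes the encrypted TCP stream through, and no Ingress is involved.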

Related

Configure WCF service in Azure Container Apps

I'm trying to deploy a WCF service implemented using CoreWCF + .NET 6 to Azure Container Apps, exposing HTTPS endpoints from a Linux container.
I tried the same service using the HTTP protocol and everything works correctly.
I also expose a gRPC service, but the difference with WCF is the binding configuration. WCF needs the same protocol scheme on both client and server, so I suppose it's not possible to redirect an HTTPS request to a container that exposes a WCF service on port 80. This can be done with a REST or gRPC service instead.
I enabled ingress in Azure Container Apps and set the port to 443. When testing the HTTP endpoint I set the port to 80 instead.
The WCF binding is BasicHttpBinding with security mode Transport. When I test the HTTP endpoint I set security mode None.
In the Dockerfile I expose ports 80 and 443.
On my local machine I'm able to get things working because I can use a self-signed certificate, but in a production environment this doesn't seem to work. I deploy the self-signed certificate with the container image, but maybe there is no certificate authority that trusts this certificate.
I read that for Azure Container Instances it's possible to configure HTTPS by running Nginx in a sidecar container. But in that case the request is redirected internally to port 80, so it doesn't work for me.
What can I do to get my service working over HTTPS?

Connecting a docker container through a transparent proxy in a second container

I have two containers, a client container and a proxy container. My goal is to get all of the client's outgoing network traffic (TCP and UDP) to be sent to the proxy container. The proxy container has a local socket that receives all traffic from the client, does some processing on it, then forwards the traffic to its original destination (using a new socket).
I have been able to implement this with real hardware (using two Raspberry Pis), but I'm trying to get this working on Docker now.
Currently, I'm trying to do this by creating two networks, an internal and an external network. The client is connected to the internal network, and the proxy is connected to both the internal and external networks. I then set the default route for the client to send all traffic to the proxy. On the proxy, I have iptables rules that should be sending traffic to a local proxy running on the system (following these instructions: https://www.kernel.org/doc/html/latest/networking/tproxy.html). Unfortunately, no connections are made to the proxy socket.
I'm hoping someone can point me in the right direction for getting this to work. I'm happy to describe more about what I've tried, but I worry that might just confuse the issue.
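One detail that is easy to miss with the TPROXY approach: the kernel only delivers redirected packets to a socket that has the IP_TRANSPARENT option set, so a plain listener never sees the connections. A minimal sketch (in Go, using golang.org/x/sys/unix) of how the proxy's listening socket could be created; the port is a placeholder that must match the --on-port value of the iptables TPROXY rule, and setting the option typically requires CAP_NET_ADMIN in the proxy container:

```go
package main

import (
	"context"
	"log"
	"net"
	"syscall"

	"golang.org/x/sys/unix"
)

func main() {
	// TPROXY only hands traffic to sockets with IP_TRANSPARENT set,
	// so configure the option before the socket starts listening.
	lc := net.ListenConfig{
		Control: func(network, address string, c syscall.RawConn) error {
			var sockErr error
			err := c.Control(func(fd uintptr) {
				sockErr = unix.SetsockoptInt(int(fd), unix.SOL_IP, unix.IP_TRANSPARENT, 1)
			})
			if err != nil {
				return err
			}
			return sockErr
		},
	}

	// Placeholder port; it must match --on-port in the TPROXY rule.
	ln, err := lc.Listen(context.Background(), "tcp", ":9040")
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	defer ln.Close()

	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Printf("accept: %v", err)
			continue
		}
		// With TPROXY, LocalAddr() is the client's original
		// destination, which the proxy can dial on a new socket.
		log.Printf("intercepted %s -> %s", conn.RemoteAddr(), conn.LocalAddr())
		conn.Close()
	}
}
```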

How to implement liveness and readiness endpoints for a gRPC service?

I have a gRPC service which listens on a port using a tcp listener. This service is Dockerized and eventually I want to run it in a Kubernetes cluster.
I was wondering what is the best way to implement liveness and readiness probes for checking the health of my service?
Should I run a separate HTTP server in another goroutine and respond to the /health and /ready paths?
Or should I also have gRPC calls for the liveness and readiness of my service and use a gRPC client to query these endpoints?
Previously I've run a separate HTTP server inside the app just for health checks (because AWS Application Load Balancers only support HTTP health checks; I don't know about Kubernetes).
If you run the HTTP server in a separate goroutine and the gRPC server on the main goroutine, you should avoid the situation where the gRPC server goes down while HTTP still returns 200 OK (assuming you don't yet have a way for the HTTP handler to health-check the gRPC server).
You could also use a heartbeat pattern between goroutines, controlled by the HTTP server, which accepts heartbeats from the gRPC server to make sure everything is OK.
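A minimal sketch of that layout, with the HTTP health server in its own goroutine and the gRPC server blocking on the main one (the ports and readiness logic are placeholders):

```go
package main

import (
	"log"
	"net"
	"net/http"

	"google.golang.org/grpc"
)

func main() {
	// Health endpoints on a separate port, served from a goroutine.
	go func() {
		mux := http.NewServeMux()
		mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
			w.WriteHeader(http.StatusOK) // liveness: the process is up
		})
		mux.HandleFunc("/ready", func(w http.ResponseWriter, r *http.Request) {
			// Placeholder: check real dependencies (DB, caches, ...) here.
			w.WriteHeader(http.StatusOK)
		})
		log.Fatal(http.ListenAndServe(":8081", mux))
	}()

	// gRPC server on the main goroutine.
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	srv := grpc.NewServer()
	// ... register gRPC services here ...
	log.Fatal(srv.Serve(lis))
}
```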
If you run two servers, they will need to run on different ports, which can be an issue for some schedulers (like ECS) that expect one port per service. There are examples and packages that allow you to multiplex multiple protocols onto the same port. Kubernetes supports multi-port services, so this might not be a problem.
Link to example of multiplexing:
https://github.com/gdm85/grpc-go-multiplex/blob/master/greeter_multiplex_server/greeter_multiplex_server.go
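For the single-port variant, one option is the third-party cmux package (github.com/soheilhy/cmux), which splits a single listener by protocol. A rough sketch under the same assumptions as above:

```go
package main

import (
	"log"
	"net"
	"net/http"

	"github.com/soheilhy/cmux"
	"google.golang.org/grpc"
)

func main() {
	lis, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatalf("listen: %v", err)
	}

	// Match gRPC (HTTP/2 with the gRPC content-type) first, then
	// fall back to plain HTTP/1 for the health endpoints.
	m := cmux.New(lis)
	grpcL := m.Match(cmux.HTTP2HeaderField("content-type", "application/grpc"))
	httpL := m.Match(cmux.HTTP1Fast())

	grpcSrv := grpc.NewServer()
	// ... register gRPC services here ...

	httpMux := http.NewServeMux()
	httpMux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	go grpcSrv.Serve(grpcL)
	go http.Serve(httpL, httpMux)

	log.Fatal(m.Serve()) // serve both protocols on one port
}
```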

IPv6-only client connecting to Docker container on Rancher using an Application Load Balancer?

We are developing an application for a platform that uses only IPv6 addresses on their client. We have built out our infrastructure using Rancher Server and Rancher hosts for the application containers.
Rancher does not support IPv6, so to allow the IPv6-only client to connect to the application, I have put an Application Load Balancer (ALB) that supports IPv6 in front of the Rancher load balancer, which uses HAProxy.
In Route53, my A and AAAA records point to the Application Load Balancer, which forwards traffic to the HAProxy load balancer in Rancher, which then distributes traffic to the application.
When we test the client, we get an error stating that the host is unreachable via IPv6. However, since the ALB (Application Load Balancer) is accessible via IPv6, it should be able to forward to an IPv4-only host when it receives a connection from an IPv6-only client.
I actually solved the issue - I realized that in my Terraform security group script I was not adding IPv6 CIDR blocks, so IPv6 traffic was not permitted to any of my instances at all. I updated my security group settings via Terraform and voila - it works as expected. So for anyone looking for IPv6 support for applications using Rancher on AWS: you can accomplish this with a dual-stack Application Load Balancer (ALB). Make sure your VPC is configured with IPv6 for this to work.
Make sure your security groups allow IPv6 traffic by configuring an IPv6 CIDR block on the respective ports.

Which web service ports should be allowed in the firewall?

Due to a virus in the system, one of our clients has restricted internet access on their server. We use two web services on this server (both use a SOAP API).
The client company is asking me which ports on the firewall they should leave open so that only those web services can be used. I'm not good with networks, so how can I find this information?
I need the ports or addresses of my two services so that they can be allowed through the firewall.
I'm not sure I understand the question, but most web services receive connections on one of:
80 (http)
443 (https)
8080 (http)
Most client firewalls should be configured to allow outbound connections to these ports.
The server firewall MUST be configured to allow inbound connections on one of these (or some other pre-arranged non-standard port).
To work out what ports your existing web server is actually using:
how to investigate ports opened by a certain process in linux?
It's most likely 80, 443, or 8080.
