How to implement liveness and readiness endpoints for a gRPC service? - docker

I have a gRPC service which listens on a port using a tcp listener. This service is Dockerized and eventually I want to run it in a Kubernetes cluster.
I was wondering what is the best way to implement liveness and readiness probes for checking the health of my service?
Should I run a separate http server in another goroutine and respond to /health and /ready paths?
Or, should I instead add gRPC calls for liveness and readiness to my service and use a gRPC client to query those endpoints?

Previously I've run a separate HTTP server inside the app just for health checks (this was because AWS application load balancers only support HTTP checks; I'm not sure what Kubernetes offers).
If you run the HTTP server in a separate goroutine and the gRPC server on the main goroutine, you avoid the situation where the gRPC server has gone down but the HTTP endpoint still answers 200 OK: when the main goroutine exits, the whole process, including the HTTP server, exits with it (assuming you don't yet have a way for the HTTP handler to actually health-check the gRPC server).
You could also use a heartbeat pattern: goroutines controlled by the HTTP server that accept heartbeats from the gRPC server to confirm everything is still OK.
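For illustration, here is a minimal sketch of the separate-goroutine layout in Go; the ports and the commented-out service registration are assumptions, not taken from the question:

package main

import (
    "log"
    "net"
    "net/http"

    "google.golang.org/grpc"
)

func main() {
    // HTTP health endpoints served from a separate goroutine (port is an assumption).
    go func() {
        mux := http.NewServeMux()
        mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
            w.WriteHeader(http.StatusOK) // liveness: the process is up
        })
        mux.HandleFunc("/ready", func(w http.ResponseWriter, r *http.Request) {
            w.WriteHeader(http.StatusOK) // readiness: check dependencies here
        })
        log.Fatal(http.ListenAndServe(":8081", mux))
    }()

    // gRPC server on the main goroutine: if it exits, the process
    // (and the HTTP health endpoint) goes down with it.
    lis, err := net.Listen("tcp", ":50051")
    if err != nil {
        log.Fatal(err)
    }
    s := grpc.NewServer()
    // pb.RegisterYourServiceServer(s, &yourServer{}) // hypothetical registration
    log.Fatal(s.Serve(lis))
}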
If you run 2 servers they will need to listen on different ports, which can be an issue for some schedulers (like ECS) that expect one port per service. There are examples and packages that will allow you to multiplex multiple protocols onto the same port. I guess Kubernetes supports multi-port services, so this might not be a problem there.
Link to example of multiplexing:
https://github.com/gdm85/grpc-go-multiplex/blob/master/greeter_multiplex_server/greeter_multiplex_server.go
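If you do need a single port, a rough sketch using the cmux package (one of the packages that multiplex several protocols over one listener) might look like the following; the port and the HTTP handler are assumptions:

package main

import (
    "log"
    "net"
    "net/http"

    "github.com/soheilhy/cmux"
    "google.golang.org/grpc"
)

func main() {
    lis, err := net.Listen("tcp", ":8080") // single shared port (assumed)
    if err != nil {
        log.Fatal(err)
    }
    m := cmux.New(lis)
    // gRPC traffic is HTTP/2 with this content-type header.
    grpcL := m.Match(cmux.HTTP2HeaderField("content-type", "application/grpc"))
    // Everything else (plain HTTP/1.1, e.g. kubelet probes) goes to the HTTP server.
    httpL := m.Match(cmux.HTTP1Fast())

    grpcSrv := grpc.NewServer()
    httpSrv := &http.Server{
        Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.WriteHeader(http.StatusOK) // answer OK for /health and /ready
        }),
    }

    go func() { log.Fatal(grpcSrv.Serve(grpcL)) }()
    go func() { log.Fatal(httpSrv.Serve(httpL)) }()
    log.Fatal(m.Serve())
}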

Related

WCF service call failure in a Linux container

I'm running multiple docker containers on the same docker network and it's routed through traefik.
Calls from clients arrive at traefik and are routed to the appropriate container using the traefik rule.
Sometimes, inter-container communication also happens.
Container 1 has a CoreWCF application.
Container 2 has a WCF client that communicates with Container 1.
External legacy WCF clients call the service in Container 1 over HTTPS, with the wsHttpBinding security set to Transport mode.
Currently, external requests fail if Container 1 does not have the binding security enabled, but if I set Container 1's binding security to Transport mode, inter-container communication fails with a certificate RemoteCertificateNameMismatch error.
How can a CoreWCF app support both cross-container and external communication?

Docker host multiple containers with different ip address but on same port

I have three Tomcat containers running on different bridge networks with different subnets and gateways.
For example:
container1 172.16.0.1 bridge1
container2 192.168.0.1 bridge2
container3 192.168.10.1 bridge3
These containers are running on different ports like 8081, 8082, 8083
Is there any way to run all three containers on the same port 8081?
If it is possible, how can I do it in Docker?
You need to set up a reverse proxy. As the name suggests, this is a proxy that works in the opposite way from a standard proxy: while a standard (forward) proxy takes requests from the internal network and serves them from external networks (the internet), a reverse proxy takes requests from the external network and serves them by fetching information from the internal network.
There are multiple applications that can serve as a reverse proxy, but the most used are:
NginX
Apache
HAProxy (mainly as a load balancer)
Envoy
Traefik
The majority of reverse proxies can run as another container in your Docker setup. Some of these tools are easy to get started with, since there are plenty of tutorials.
A reverse proxy is more than just exposing a single port and forwarding traffic to back-end ports. It can manage and distribute the load (load balancing), change the URI arriving from the client into a URI the back-end understands (URL rewriting), change the response from the back-end (content rewriting), etc.
Reverse HTTP/HTTPS traffic
Assuming you have HTTP services, what you need to do to set up a reverse proxy in your example is the following:
Decide which tool to use. If you're a beginner, I suggest NginX.
Create a configuration file for the proxy that takes requests on port 80 and distributes them to ports 8081, 8082, 8083 (a minimal config sketch follows these steps). Since the containers are on different networks, you will need to decide whether to forward the traffic to their IP addresses (which I don't recommend, since the IPs can change) or to publish the ports on the host and use the host IP. Another alternative is to run all of them on the same network.
Depending on the case, set up the X-Forwarded-* headers and/or URL rewriting and content rewriting.
Run the proxy container and publish its port 80 as host port 8080 (if you expose the back-end containers on the host, 8081 will already be taken).
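For step 2, a rough NginX configuration sketch could look like the one below. The host names and container names are assumptions, and it presumes the proxy can reach all three containers by name on one shared network:

events {}
http {
    server {
        listen 80;
        server_name app1.example.com;          # hypothetical host name
        location / {
            proxy_pass http://container1:8081; # forward to the first Tomcat
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    server {
        listen 80;
        server_name app2.example.com;
        location / {
            proxy_pass http://container2:8082;
        }
    }
    server {
        listen 80;
        server_name app3.example.com;
        location / {
            proxy_pass http://container3:8083;
        }
    }
}

The same idea works with path-based routing (one server block with several location blocks) instead of separate host names.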
Reverse TCP/UDP traffic
If you have non-HTTP services (raw TCP or UDP services), you can use HAProxy. The steps are the same apart from configuration step #2: the configuration differs due to the non-HTTP nature of the traffic, and you can find an example in this SO post.

Routing all net traffic from a k8s container through another in the same pod

I'm using GKE for deployments.
Edit: I need to access a customer's API endpoint which is only accessible when using their VPN. So far I can run a container which connects to this VPN and I can cURL the endpoint successfully.
For the above, I have configured a Debian docker image which successfully connects to a VPN (specifically, using Kerio Control VPN) when deployed. Whenever I make a net request from this container, it runs through the VPN connection, as expected.
I have another image which runs a .NET Core program which makes necessary HTTP requests.
From this guide I know it is possible to route a container's traffic through another using pure Docker, specifically with the --net=container:something option (example trimmed):
docker run \
--name=jackett \
--net=container:vpncontainer \
linuxserver/jackett
However, I have to use Kubernetes for this deployment so I think it would be good to use a 2-container pod. I want to keep the VPN connection logic and the program separated.
How can I achieve this?
Each container in a pod shares the same network namespace with the other containers in that pod. If you run the VPN client in one container, then all containers in the pod will have network access via the VPN.
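A rough two-container pod spec along those lines is sketched below; the image names and the NET_ADMIN capability are assumptions (a VPN client usually needs extra privileges to create the tunnel device):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-vpn
spec:
  containers:
  - name: vpn-client
    image: my-kerio-vpn-image          # hypothetical image with the VPN client
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]             # typically required to set up the tunnel
  - name: app
    image: my-dotnet-app               # hypothetical image with the .NET Core program
    # No --net=container:... equivalent is needed: both containers share the
    # pod's network namespace, so the app's outbound traffic follows the
    # routes created by the vpn-client container.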
Based on your comment, I think I can suggest two methods.
Private GKE Cluster with CloudNAT
In this setup you should use a private GKE cluster with CloudNAT for external communication, and you would need to use a manual external IP.
This scenario uses a specific external IP for the VPN connection, but your customer will have to whitelist access for that IP.
Site to site VPN using CloudVPN
You can configure your VPN to forward packets to your cluster. For details you should check other Stackoverflow threads:
Google Container Engine and VPN
Why can't I access my Kubernetes service via its IP?
I'm using a similar approach. I have a Django app whose static files need to be served by nginx. I want the app to be accessible through a VPN, for which I'm using OpenVPN.
Both the nginx container and the django container are in the same pod. My limited understanding is that it would be enough to run VPN in the background in the nginx container and it should successfully route requests to the backend using localhost because they're in the same pod.
But this doesn't seem to be working. I get a 504 Time-Out in the browser and the nginx logs confirm that the upstream timed out. Have you done anything extra to make this work in your case?

SSL with TCP connection on Kubernetes?

I'm running a TCP server (Docker instance / Go) on Kubernetes. It's working and clients can connect and do the intended stuff. I would like to secure the TCP connection with an SSL certificate. I already have SSL working for an HTTP REST API service running on the same Kubernetes cluster by using ingress controllers, but I'm not sure how to set it up for a plain TCP connection. Can anyone point me in the right direction?
As you can read in the documentation:
"An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer."
Depending on the platform you are using, different kinds of load balancers are available which you can use to terminate your SSL traffic. If you have an on-premises cluster, you can set up an additional nginx or HAProxy server in front of your Kubernetes cluster to handle the SSL traffic.
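As a rough illustration of the Service route, a sketch like the following exposes the raw TCP port; the name, selector, and ports are assumptions, and whether the load balancer itself can terminate TLS depends on your platform (otherwise the Go server, or a proxy in front of it, has to terminate TLS):

apiVersion: v1
kind: Service
metadata:
  name: tcp-server
spec:
  type: LoadBalancer        # or NodePort for on-premises setups
  selector:
    app: tcp-server         # hypothetical pod label
  ports:
  - name: tcp
    protocol: TCP
    port: 9000              # port exposed externally (assumed)
    targetPort: 9000        # port the Go TCP server listens on (assumed)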

AWS Service discovery, nginx and node issue

I am running 2 services in AWS ECS Fargate. One runs nginx containers behind an application load balancer, and the other runs a Node.js application. The Node application uses service discovery, and the nginx containers proxy to the "service discovery endpoint" of the Node application containers.
My issue is :
After scaling the node application containers up from 1 to 2, nginx is unable to send requests to the newly spawned container; it only sends requests to the old container. After a restart/redeploy of the nginx containers, it is able to send requests to the new containers.
I tried with "0" DNS ttl for service discovery endpoint. But facing the same issue.
Nginx does not resolve DNS at runtime if your server is specified as part of an upstream group, or in certain other situations; see this SF post for more details. This means that Nginx never becomes aware of new containers being registered for service discovery.
You haven't posted your Nginx config, so it's hard to say exactly what you can do there. For proxy_pass directives, some people suggest using variables to force runtime resolution.
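A rough sketch of that variable trick is below; the resolver address, hostname, and port are assumptions (169.254.169.253 is the link-local address of the Amazon-provided DNS resolver in a VPC):

resolver 169.254.169.253 valid=10s;   # override DNS TTL: cache answers for 10 seconds

server {
    listen 80;
    location / {
        # Using a variable forces Nginx to resolve the name at request time
        # instead of once at startup.
        set $backend "node-app.local";      # hypothetical service discovery name
        proxy_pass http://$backend:3000;
    }
}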
Another idea might be to expose an HTTP endpoint from the Nginx container that listens for connections and reloads the Nginx config. This endpoint could then be triggered by a Lambda when new containers are registered (the Lambda in its turn triggered by CloudWatch events). Disclaimer: I haven't tried this in practice, but it might work.
