How to use Kubernetes effectively for 2 distant nodes - docker

I have wanted to move all of my operations over to K8s for a long time, but am still hesitant to do so. This question will likely be broad, but bear with me. Let me first describe the existing system.
I host a lot of different websites (>30). Many of them are for my own experimentation, but some are for actual clients. I have 1 VM in New York (I'm using DigitalOcean) with multiple Docker containers, frequently managed using docker-compose. There is 1 container for every site. A request first comes in to a front container running HAProxy. This terminates SSL, then forwards the request to 2 proxy containers running Nginx. These 2 containers then forward the request to all the other containers for their services. All of my certificates come from Let's Encrypt and have to be renewed every 3 months. To do so, I stop front, run certbot --apache so it can bind to port 80, let it obtain the certificates, stop Apache, then recreate the front container.
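Roughly, the compose file looks something like this (a simplified sketch; the image and site names here are made up):

    version: "3"
    services:
      front:                      # HAProxy, terminates SSL, runs "forever"
        image: haproxy:2.8
        ports: ["80:80", "443:443"]
        volumes:
          - ./haproxy:/usr/local/etc/haproxy:ro
          - ./certs:/etc/ssl/private:ro
      proxy1:                     # Nginx, recreated whenever site wiring changes
        image: my-nginx-proxy     # hypothetical custom image
      proxy2:
        image: my-nginx-proxy
      site-example:               # one container per site, >30 of these
        image: my-site-example    # hypothetical site image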
There are several reasons why I do it this way:
I change site configs a lot, and how all of them are wired together. So front is expected to run forever, except when I'm getting certificates, while the proxies are expected to change a lot. I change the proxy image, then stop and recreate the 1st container, then stop and recreate the 2nd container, so that there is no downtime at all.
I really don't know how to get certificates when there are multiple nodes. In fact, I'm a total noob at the whole certificate thing, and Let's Encrypt is pretty much the only way I know of to do this.
I want to directly edit files on the remote server. I have a bad habit of editing production code directly, mainly because I get impatient with setting up dev, staging and production environments. It takes too much time, and the gains feel small. My clients are typically small businesses with <10 employees, and they regularly want some aesthetic changes to their websites. I can have a video call with them, they tell me exactly what they want, I code it in, it gets uploaded to the server immediately, and they see the changes right away. Then they can critique the design, and we can iterate back and forth. If I were to set up different environments, they couldn't see changes right away, and there would be this long process of committing to git, deploying to staging, then to production. That takes a long time, and I don't think it is justified.
I realize that my systems are not that well maintained. Images are not getting security updates, and I don't know whether containers are still running unless I check manually, which is tedious, so I don't do it at all. Furthermore, I have an Asian background, which means I have clients from both the US and Asia, pretty much the farthest places possible from each other, and that increases latency by a lot. A client in Asia has to wait around 1-2 seconds for a page to actually load, which is an eternity. I also moved to Asia in the past week, so now accessing the New York server via ssh is incredibly slow, and my productivity just plummets. So now might be the best time to revamp everything and move to K8s once and for all. However, there are major problems in the planning process, and currently K8s seems to lack a lot of things that are deal breakers for me. So please criticize my plans and improve them however you see fit.
What I plan to do now is this:
There will be 2 servers, 1 in New York and 1 in Singapore. These 2 servers will have 2 different IP addresses. Both will be running K8s Pods. Preferably, they should have exactly the same configs, website containers, database containers, etc. Then for each website's DNS record, I will modify the A and AAAA records so that they contain the 2 IP addresses of the 2 servers.
My questions are:
Will DNS always route to Singapore if the user is in China, and always route to New York if the user is in England?
How do I actually get certificates for 2 nodes? My understanding is that when certbot issues a certificate, it associates the domain name with the node's IP address. That means 2 nodes can't have the same certificate for the same domain name. Is this correct? If you can get certificates for 2 nodes, how do you do that?
How do I keep files in sync between servers? Say I edit the file tree on the Singapore server; I want those files to also be modified in New York several seconds later. For databases, I can have a master database in either Singapore or New York, then have slave databases at both locations that update whenever the master updates, and the slaves can serve as a low-latency database for each server.
How do I actually route requests from the servers to the containers inside? I initially planned to use NodePort to direct requests to front Pods, which could then distribute requests to the other Pods, but I was heartbroken to learn that NodePort can't bind to ports below 30000. The only other option I am aware of is an external load-balancing service that directs traffic to the 2 servers. But that costs around $15/site/month, and because I have >30 sites, doing so would bankrupt me. I could also have 4 servers in total, 2 for the K8s cluster and 2 serving as load balancers that forward to the NodePorts. Will this plan work? How would automatic renewal of certificates even work here?
Please note that maybe my questions are the wrong questions to ask (like, maybe I shouldn't use A and AAAA records for directing traffic) and there's a different way to do this entirely, so feel free to ask the right questions.

I read your question, hats off for writing the whole thing down, though half of it isn't needed to answer it.
Answers to your questions:
Can you add the same or multiple entries in DNS, i.e. example.com with multiple A records? Yes, that is possible, but plain round-robin A records will not, on their own, reliably route users by geography.
You might need to set up a regional K8s cluster with regional ingress support. You can use cert-manager with Let's Encrypt, which will manage your certificates at the LB level and terminate TLS at the front.
If you are looking to use two VMs, put one LB in front of both and terminate SSL there.
If you are using K8s with stateless Pods, editing files directly inside a container is not an option. It is better to manage updates through GitHub and have the containers deployed to both clusters at the same time; for that you can set up CI/CD. You are right about the database: with a master-slave setup you can use read replicas.
To route traffic from the server to the internal applications in K8s, you can use an internal LB, or expose services with NodePorts (above 30000, but you can change the target port in the SVC) and redirect requests arriving on a specific port using the target port.
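A minimal sketch of such a NodePort Service, assuming a site pod labelled app: site1 that listens on port 80 (the names are examples only):

    apiVersion: v1
    kind: Service
    metadata:
      name: site1
    spec:
      type: NodePort
      selector:
        app: site1          # hypothetical pod label
      ports:
      - port: 80            # cluster-internal port
        targetPort: 80      # container port
        nodePort: 30080     # must be in the 30000-32767 range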
Still, I am not following the part about "I could also have 4 servers in total, 2 for the K8s cluster and 2 serving as load balancers that forward to the NodePorts. Will this plan work? How would automatic renewal of certificates even work here?" Which servers will be in front and which ones in the backend?

If all your services are websites (running over HTTP), you could use a k8s Ingress to route traffic to pods based on the Host header (domain name) and use only one LB with one IP address. The most popular ingress controller seems to be the Nginx Ingress Controller.
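As a rough sketch, one Ingress can fan out many domains to their respective Services (the host and service names here are placeholders):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: sites
    spec:
      ingressClassName: nginx
      rules:
      - host: site1.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: site1        # hypothetical Service for site1
                port:
                  number: 80
      - host: site2.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: site2
                port:
                  number: 80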
If you don't want to use an LB, you can use hostPort to expose the nginx ingress, but as soon as you have a k8s cluster with more than one node, use an LB; hostPort is generally not advised unless you have a very good reason to use it.
Speaking of DNS, you can use something like AWS Route 53 routing policies for geolocation routing. You don't necessarily need to use AWS; I just want to show you that there are solutions to this problem, so use whatever you like.
For certificates, use cert-manager with the DNS-01 challenge.
From the Let's Encrypt docs about the DNS-01 challenge:
It works well even if you have multiple web servers.
cert-manager will also handle certificate renewal for you.
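A minimal sketch of such an issuer, assuming cert-manager is installed and your DNS is hosted at DigitalOcean (the email and secret names are placeholders, and cert-manager supports many other DNS providers):

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-dns
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: admin@example.com          # placeholder
        privateKeySecretRef:
          name: letsencrypt-dns-account-key
        solvers:
        - dns01:
            digitalocean:                 # pick the solver matching your DNS host
              tokenSecretRef:
                name: digitalocean-dns    # Secret holding a DO API token
                key: access-token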
About keeping files in sync between servers: it depends on the files, but for static content it might be best to use a CDN that will replicate content from one source to other locations.
For simultaneous deploys to 2 separate clusters you can use some CI/CD pipeline, e.g. GitHub Actions.

Related

Sending requests from Google Kubernetes Engine, multiple deployments, under one external IP address

The Google Cloud Platform Kubernetes Engine based backend deployment I work on has between 4 and 60 nodes running at all times, spanning two different services.
However, I want to interface with an API that employs IP whitelisting, which means that all outgoing requests would have to be funneled through one single IP address.
How do I do this? The deployment uses an Nginx Ingress controller, which doesn't allow many options when it comes to the egress part of things.
I tried setting up a VM outside of the deployment, but still on GCP in the same region, and was unable to set up a forward proxy. At least, not one that I could connect to from my local device. Not sure if this was because of GCP's firewall or anything of that sort. This was using Squid, as well as Apache, with no success in either.
I also looked at the Cloud NAT option, but it seems like I would have to recreate all the services, CI/CD pipelines, and DNS settings etc. I would ideally avoid that, as it would be a few days worth of work and would call for some downtime of the systems as well.
Ideally I would have a working forward proxy. I tried looking for Docker images that would function as one, but that does not seem to be a thing, sadly. SSHing into a VM to set up such a proxy hasn't led to success yet, either.
You have already found the solution: you have to rebuild things using either Cloud NAT or an equivalent solution you build yourself. Even that is relatively recent and I've not actually tried it myself; as recently as 6 months ago we were told this was not supported for GKE. Our solution was the proxy idea you mentioned: an HTTP proxy running outside of GKE, with traffic directed through it at the app-code level rather than at the infrastructure level. It was not fun.
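If you go the external-proxy route, one low-effort way to point app code at it is the standard proxy environment variables, assuming your HTTP client libraries honor them (the image and proxy address below are hypothetical):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: backend
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: backend
      template:
        metadata:
          labels:
            app: backend
        spec:
          containers:
          - name: backend
            image: example/backend:latest     # hypothetical image
            env:
            - name: HTTPS_PROXY               # honored by most HTTP clients
              value: http://10.128.0.5:3128   # hypothetical proxy VM address
            - name: NO_PROXY                  # keep cluster-internal traffic direct
              value: .svc,.cluster.local,10.0.0.0/8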

Docker: redirect traffic to containers

We are a small design company; I'm the only one who "codes" (making small scripts/tools for the creatives).
I have a server on a local network.
On this server, I installed docker and docker-compose.
On this server I want to have a few containers running, one per service (gitlab, taiga, wiki.js, mattermost, wekan)
When setting up the docker-compose.yml, how should I manage ports (and/or any other settings) so that:
First (case study): (let's say I have just one container running) when typing the host IP address in a web browser, it redirects to my service and displays, for example, /var/www/ if my service is a website.
Second: when typing subdomain.myhostname in a web browser, it redirects to one specific service.
It's a very broad question, strongly dependent on one's experience. For something I consider fast and reliable, as far as small environments are concerned, you may want to take Rancher for a spin.
It's super easy to start with. What's more, there's a range of services like Gitlab or DokuWiki you can start with just one click. On top of that, you can configure a load balancer that can perform the redirections you mentioned. I think it's one of the fastest options to get a functional and scalable stack. Definitely not the most stable one compared to enterprise-grade OpenShift, but I think it'll do just fine.
I will not go through all the setup details, as I believe that's not what the question is about, but you can start by setting up a Rancher 1.6 docker server, going step by step through the official doc guide. It's pretty straightforward: one bash command and you are up and running.
OpenShift is a platform competing with Rancher. To the best of my knowledge, it's harder to work with, especially if you have no experience. It's more stable, that's for sure, but it requires more effort in general.
I intentionally omitted a few options, as I assumed the OP wants it working ASAP while still being easily re-configurable, stable, and GUI-manageable.
-- edit a few years later --
Rancher and OpenShift are still actively developed and attract new users. Rancher has released a stable v2 since my original answer, so I no longer recommend looking at v1.6.
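If you'd rather stay with plain docker-compose instead of Rancher, the usual pattern is a single reverse-proxy container that owns port 80 and routes by host name. A minimal sketch using Traefik (the service names and domains are examples, not a recommendation over the options above):

    version: "3"
    services:
      proxy:
        image: traefik:v2.10
        command:
          - --providers.docker=true
          - --providers.docker.exposedbydefault=false
          - --entrypoints.web.address=:80
        ports:
          - "80:80"                      # only the proxy publishes a host port
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
      wiki:
        image: requarks/wiki:2
        labels:
          - traefik.enable=true
          - traefik.http.routers.wiki.rule=Host(`wiki.myhostname`)
          - traefik.http.services.wiki.loadbalancer.server.port=3000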

Kubernetes scaling pods using custom algorithm

Our cloud application consists of 3 tightly coupled Docker containers: Nginx, Web and Mongo. Currently we run these containers on a single machine. However, as our users are increasing, we are looking for a solution to scale. Using Kubernetes we would form a multi-container pod. If we are to replicate, we need to replicate all 3 containers as a unit. Our cloud application is consumed by mobile app users. Our app can only handle approx. 30000 users per worker node, and we intend to place a single pod on a single worker node. Once a mobile device is connected to a worker node, it must continue to use only that machine (unique IP address).
We plan on using Kubernetes to manage the containers. Load balancing doesn't work for our use case, as a mobile device needs to be tied to a single machine once assigned, and each Pod works independently with its own persistent volume. However, we need a way of spinning up new Pods on worker nodes if the number of users goes over 30000, and so on.
The idea is that we have some sort of custom scheduler which assigns a mobile device to a worker node (domain / IP address) depending on the number of users on that node.
Is Kubernetes a good fit for this design, and how could we implement a custom pod scaling algorithm?
Thanks
Piggy-Backing on the answer of Jonah Benton:
While this is technically possible, your problem is not with Kubernetes, it's with your application! Let me point out the problem:
Our cloud application consists of 3 tightly coupled Docker containers, Nginx, Web, and Mongo.
Here is your first problem: if you can only deploy these three containers together and not independently, you cannot scale one without the others!
While MongoDB can be scaled to insane loads, if it's bundled with your web server and web application, it won't be able to...
So the first step for you is to break up these three components so they can be managed independently of each other. Next:
Currently we run these containers on a single machine.
While not strictly a problem, I have serious doubts about what it would mean to scale your application and about the challenges that come with scalability!
Once a mobile device is connected to worker node it must continue to only use that machine ( unique IP address )
Now, this IS a problem. You're looking to run an application on Kubernetes, but I do not think you understand the consequences of doing that: Kubernetes orchestrates your resources. This means it will move pods (by killing and recreating them) between nodes (and, if necessary, back to the same node). It does this fully autonomously (which is awesome and gives you a good night's sleep). If you're relying on clients sticking to a single node's IP, you're going to get up in the middle of the night because Kubernetes tried to correct for a node failure and moved your pod, which is now gone, and your users can't connect anymore. You need to leverage the load-balancing features (Services) in Kubernetes. Only they are able to handle the dynamic changes that happen in Kubernetes clusters.
Using Kubernetes we would form a multi container pod.
And we have another winner: no! You're trying to treat Kubernetes as if it were your on-premises infrastructure! If you keep doing so, you're going to fail and curse Kubernetes in the process!
Now that I've told you some of the things you're thinking about wrong, what kind of person would I be if I did not offer some advice on how to make this work:
In Kubernetes your three applications should not run in one pod! They should run in separate pods:
your web server's work should be done by an Ingress, and since you're already familiar with nginx, this is probably the ingress controller you are looking for!
Your web application should be a simple Deployment and be exposed to the Ingress through a Service.
your database should be a separate deployment, which you can either do manually through a StatefulSet or (more advanced) through an operator, and also be exposed to the web application through a Service; see the sketch below.
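A minimal sketch of the web tier as a Deployment plus Service (the image and names are placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: example/web:latest   # hypothetical web application image
            ports:
            - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: web
      ports:
      - port: 80
        targetPort: 8080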
Feel free to ask if you have any more questions!
Building a custom scheduler and running multiple schedulers at the same time is supported:
https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
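Once a custom scheduler is running, a pod opts into it via spec.schedulerName; a minimal sketch (the scheduler name and image are hypothetical):

    apiVersion: v1
    kind: Pod
    metadata:
      name: device-group-1
    spec:
      schedulerName: my-custom-scheduler   # hypothetical custom scheduler
      containers:
      - name: app
        image: example/app:latest          # hypothetical image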
That said, to the question of whether Kubernetes is a good fit for this design, my answer is: not really.
K8s can be difficult to operate, with the payoff being the level of automation and resiliency that it provides out of the box for whole classes of workloads.
This workload is not one of those. In order to gain any benefit you would have to write a scheduler to handle the edge failure and error cases this application has (what happens when you lose a node for a short period of time...) in a way that makes sense for k8s. And you would have to come up to speed with normal k8s operations.
With the information provided, I am hard pressed to see why one would use k8s for this workload rather than just running Docker on some VMs and scripting some of the automation.

On-prem docker swarm deployment with HA

I’m doing on-prem deployments using docker swarm and I need application and DB high availability.
As far as application HA is concerned, it works great within Docker (service discovery and load balancing), but I'm not sure how to use it on my network. I mean, how can I assign a virtual IP to all of my Docker managers so that if any of them goes down, that virtual IP automatically points to another Docker manager in the cluster? I don't want to have a single point of failure in my architecture, which is why I'm not inclined to use any (single) reverse-proxy solution in front of my swarm cluster (because, to my understanding, if nginx/HAProxy goes down, the whole system goes into the abyss. I would love to learn that I'm wrong).
Secondly, I use WebSockets in my application for push notifications, which don't behave normally with all the load-balancing stuff because socket handshakes get distorted.
I want a solution to these problems without writing anything in code (HA-specific and non-generic like hard coding IPs etc). Any suggestions? I hope I explained my problem correctly.
Docker Flow Proxy or Traefik can be placed on the set of swarm nodes that you want to receive incoming connections, with DNS routing used to get packets to the correct containers. Both have a sticky-sessions option (I know Docker Flow Proxy does, not sure about Traefik).
Then you can either:
If your incoming connections are just client HTTP/S requests, you can use DNS Round Robin with multiple A records, which works great, or
Buy an expensive hardware fault-tolerant reverse proxy like F5, or
Use some network-layer IP failover that works at the OS and physical network level (not really related to Docker), but I'm not sure how well that would work with Swarm.
Number 2 is the typical solution in private datacenters that need full HA at all layers.
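For the Docker Flow Proxy / Traefik option, a minimal sketch of running the proxy as a global swarm service on the manager nodes (the image version and options are examples, not a full configuration):

    # deploy with: docker stack deploy -c proxy-stack.yml proxy
    version: "3.7"
    services:
      traefik:
        image: traefik:v2.10
        command:
          - --providers.docker.swarmMode=true
          - --entrypoints.web.address=:80
        ports:
          - target: 80
            published: 80
            mode: host               # bind port 80 on every node running a replica
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
        deploy:
          mode: global               # one proxy replica per matching node
          placement:
            constraints:
              - node.role == manager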

How to improve Kubernetes security especially inter-Pods?

TL;DR: Kubernetes allows all containers to access all other containers in the entire cluster, which seems to greatly increase the security risks. How can this be mitigated?
Unlike Docker, where one would usually only allow network connections between containers that need to communicate (via --link), each Pod on Kubernetes can access all other Pods in the cluster.
That means that for a standard Nginx + PHP/Python + MySQL/PostgreSQL stack running on Kubernetes, a compromised Nginx would be able to access the database.
People used to run all of this on a single machine, but that machine would receive serious periodic updates (more often than containers do), plus SELinux/AppArmor for serious people.
One can mitigate the risks a bit by having each project (if you have various independent websites, for example) run on its own cluster, but that seems wasteful.
The current Kubernetes security seems to be very incomplete. Is there already a way to have a decent security for production?
In the not-too-distant future we will introduce controls for network policy in Kubernetes. As of today that is not integrated, but several vendors (e.g. Weave, Calico) have policy engines that can work with Kubernetes.
As @tim-hockin says, we do plan to have a way to partition the network.
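(A sketch of what such partitioning looks like with the NetworkPolicy API available in current Kubernetes, enforced by network plugins such as Calico; the labels here are hypothetical. It only lets PHP pods reach the database:)

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: db-allow-php-only
    spec:
      podSelector:
        matchLabels:
          app: mysql            # hypothetical database pod label
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector:
            matchLabels:
              app: php          # only PHP pods may connect
        ports:
        - protocol: TCP
          port: 3306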
But, IMO, for systems with more moving parts (which is where Kubernetes should really shine), I think it will be better to focus on application security.
Taking your three-layer example: the PHP pod should be authorized to talk to the database, but the Nginx pod should not. So, if someone figures out a way to execute an arbitrary command in the Nginx pod, they might be able to send a request to the database Pod, but it should be rejected as not authorized.
I prefer the application-security approach because:
I don't think the --links approach will scale well to tens of different microservices or more. It will be too hard to manage all the links.
I think as the number of devs in your org grows, you will need fine grained app-level security anyhow.
As for being like docker-compose: it looks like docker-compose currently only works on single machines, according to this page:
https://github.com/docker/compose/blob/master/SWARM.md
