What is the "proper" way to migrate from Docker Compose to Kubernetes? - docker

My organization manages systems where each client is provisioned a VPS and then their tech stack is spun up on that system via Docker Compose.
Data is stored on-system, using Docker Compose volumes. None of the fancy named storage - just good old direct path volumes.
While this solution is workable, the problem is that this method does not scale. We can always give the VPS more CPU/Memory but that does not fix the underlying issues.
Staging / development environments must be brought up manually - and there is no service redundancy. Hot swapping is impossible with our current system.
Kubernetes has been pitched to me to solve our problems, but honestly I have no idea where to begin - most of the documentation is obtuse and I have failed to find somebody with our particular predicament.
The end goal would be to have just a few high-spec machines running Kubernetes - with redundancy, staging, and the ability to spin up new clients as necessary (without having to provision additional machines or external IPs).
What specific tools would my organization need to use to achieve this goal?
Are there any tools that would allow us to bring over our existing Docker Compose stacks into Kubernetes?

Where to begin: given what you're telling us, I would first look into options for implementing some SDS (software-defined storage).
You're currently using local volumes, which you probably won't be able to keep with Kubernetes - or at least shouldn't, if you don't want to pin your containers to a single node.
The easiest way - though not necessarily the one I would recommend - would be to use some NFS servers. Even better: add some DRBD with Pacemaker/Corosync, using a VIP for failover -- or the FreeBSD way: hastd, CARP, ifstated, maybe some ZFS. You would probably have to deploy several distinct systems as your Kubernetes cluster scales, distributing I/O among them; a single NFS server doesn't last long before its load climbs past 50 and iowait spikes.
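For illustration, here is a minimal sketch of what consuming such an NFS server from Kubernetes could look like - the server address, export path and sizes below are placeholders, not details from the question:

    # Hypothetical NFS-backed PersistentVolume, plus a claim that pods can mount.
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: client-data-nfs
    spec:
      capacity:
        storage: 10Gi
      accessModes: ["ReadWriteMany"]
      nfs:
        server: 10.0.0.10            # placeholder: the NFS server or VIP
        path: /exports/client-data   # placeholder export
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: client-data
    spec:
      accessModes: ["ReadWriteMany"]
      storageClassName: ""           # bind to a statically provisioned PV such as the one above
      resources:
        requests:
          storage: 10Gi

A pod mounting that claim can be rescheduled to any node without losing access to its data, which is the main point of moving away from direct host paths.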
A better way would be to look into actual SDS solutions. One I can recommend is Ceph, though there are a lot of newer solutions I'm less familiar with - and there's GlusterFS, which I would definitely avoid. An easy way to deploy Ceph is ceph-ansible.
Depending on what corporate hardware you have at your disposal, you may already have a NetApp or equivalent - something that can serve NFS shares and/or act as an iSCSI gateway.
Now, those are all solutions you could run on the side. Note that you will also find "CNS" (container-native storage) solutions, which are meant to be deployed on top of Kubernetes; Ceph clusters, for example, can be managed with Rook. These can be interesting, but in terms of maintenance and operations they require good knowledge of both the storage solution you operate and of Kubernetes/containers in general: troubleshooting issues and fixing outages may not be as easy as on a good old bare-metal/VM setup. For a first Kubernetes experience, I would hold off; once you feel comfortable enough, go ahead.
In any case, another critical consideration before deploying your cluster is the network that will host your installation. Keep in mind that Kubernetes should not be deployed directly on public instances: you will probably want a private VLAN, maybe an internal DNS, a local registry (which could be Kubernetes-hosted), and other tools such as an LDAP server, an SMTP relay, HTTP caches/proxies, and load balancers to put in front of your API servers.
Once you've made up your mind on those issues, you can look into deploying a Kubernetes cluster using tools such as Kubespray (Ansible-based) or Kops (which drives a cloud API such as AWS, and can also generate Terraform). Both projects are part of the Kubernetes project and maintained by its community. Kubespray covers all scenarios (IaaS and bare metal), integrates with popular SDS out of the box, can ship with various ingress controllers, and overall offers good defaults and lots of variables to customize your installation.
Start with a 3-master, 2-worker cluster and make sure the resulting cluster matches what you expect.
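For a rough idea of what that looks like with Kubespray, here is a hedged sketch of an inventory for such a cluster. Hostnames and addresses are invented, and the group names depend on the Kubespray release (older releases used kube-master / kube-node instead of kube_control_plane / kube_node):

    # inventory/mycluster/hosts.yaml (sketch)
    all:
      hosts:
        master1: {ansible_host: 10.0.0.11}
        master2: {ansible_host: 10.0.0.12}
        master3: {ansible_host: 10.0.0.13}
        worker1: {ansible_host: 10.0.0.21}
        worker2: {ansible_host: 10.0.0.22}
      children:
        kube_control_plane:
          hosts:
            master1:
            master2:
            master3:
        etcd:
          hosts:
            master1:
            master2:
            master3:
        kube_node:
          hosts:
            worker1:
            worker2:
        k8s_cluster:
          children:
            kube_control_plane:
            kube_node:
    # Then, roughly: ansible-playbook -i inventory/mycluster/hosts.yaml --become cluster.yml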
Before going to prod, take your time to properly translate your existing configurations. Sometimes, refactoring code or images can be worth it.
When going to prod, consider adding a group of "infra" nodes if you want to host a logging solution or other internal services that are somewhat critical to users and shouldn't suffer outages caused by end-user workloads (e.g. ingress routers, monitoring, logging, an integrated registry, ...).
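To make that concrete, here is a hedged sketch of how an infra workload can be pinned to such nodes - the label, taint, workload name and image are assumptions, not something from the original setup:

    # Assumes the infra nodes carry the label node-role=infra and the taint
    # node-role=infra:NoSchedule, so regular end-user workloads stay off them.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cluster-logging
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: cluster-logging
      template:
        metadata:
          labels:
            app: cluster-logging
        spec:
          nodeSelector:
            node-role: infra          # only schedule onto infra nodes
          tolerations:
            - key: node-role
              value: infra
              effect: NoSchedule      # tolerate the taint that keeps other pods away
          containers:
            - name: logging
              image: example.org/logging:latest   # placeholder image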
Kubespray: https://github.com/kubernetes-sigs/kubespray/
Kops: https://github.com/kubernetes/kops
Ceph: https://ceph.com/en/discover/
Ceph Ansible: https://github.com/ceph/ceph-ansible
Rook (Ceph CNS): https://github.com/rook/rook

Related

Question regarding Monolithic vs. Microservice Architecture

I'm currently rethinking an architecture I was planning.
So suppose I have a system where there are about 8 different services interacting with a single database. Some services listen and react to database events and do stuff like sending SMS.
Then there's an API layer sitting on top of the database and a frontend connected to this API. So in my understanding this is rather monolithic.
In fact I don't see any advantage of using containers in this scenario. Their real advantage is that they can be swapped out, right? My intuition tells me there is often no purpose in doing that, except maybe some load balancing at the API level. Instead, many companies just seem to blindly jump on the hype train of containerizing everything.
Now the question arises: is Docker the right tool in this context? In every forum, people advise against using Docker solely as a more resource-efficient "VM" that aggregates all services within a single container. However, this is the only scenario in which I'd see any real advantage in using Docker (the environment, e.g. Alpine Linux, is the same on all customers' computers when rolling out the system).
Even docker-compose does not "group" containers together into a complete system that only exposes port 443; instead it starts an infrastructure of multiple interacting containers. Oftentimes services like Kubernetes are then used to deploy these infrastructures on "nodes", i.e. VMs.
However, in my opinion it would be great to have a single self-contained container, without putting the services into a VM. This container would include every necessary service and expose only one port, e.g. 443.
Since I'm rather confused now, I'd really appreciate your help here.
Thanks in advance!
Kubernetes does many things and has many useful features. But Kubernetes also requires that you architect your apps to follow the Twelve-Factor App principles. An important point here is that your apps should be stateless.
When the app is stateless, it is easy to scale out horizontally - this can also be done automatically when the load increases.
When the app is stateless, it is easy to do Rolling Deployments that upgrade the app to a new version without downtime.
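As a small illustration of both points, here is a sketch of a stateless Deployment that upgrades with a rolling strategy and scales automatically on CPU load - names, image and thresholds are made up:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: api
    spec:
      replicas: 3
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0   # keep full capacity while new pods roll in
          maxSurge: 1
      selector:
        matchLabels:
          app: api
      template:
        metadata:
          labels:
            app: api
        spec:
          containers:
            - name: api
              image: example.org/api:1.2.0   # placeholder image/tag
              ports:
                - containerPort: 8080
    ---
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: api
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: api
      minReplicas: 3
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70

Changing the image tag then triggers a zero-downtime rollout, and the autoscaler adds replicas when average CPU usage passes the threshold.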
You can run containers on bare-metal Linux servers, but that mostly means very big servers. If you use a cloud, you probably want more, smaller VM instances, distributed across 3 Availability Zones for increased availability.
"Self-contained container - exposing one port". With Kubernetes, you typically use a private network and you only expose services via a single load balancer - typically on a port, but different URLs send traffic to different services.
Some services listen and react to database events and do stuff like sending SMS.
As I said, many things are easier when an app is horizontally scalable, but this kind of app - one that listens for events and reacts - is one of the few cases where you cannot scale horizontally. It is, however, a good fit for a serverless architecture instead, possibly on Kubernetes using Knative.
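For a rough idea, a Knative Service that scales to zero when nothing is happening could look like the sketch below; the name and image are hypothetical, and wiring the database events to it (for example via Knative Eventing) is not shown:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: sms-notifier                             # hypothetical service
    spec:
      template:
        spec:
          containers:
            - image: example.org/sms-notifier:latest # placeholder image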
Now the question arises, is docker the right tool for this context?
My opinion is that most workloads will run in containers. It is more a question of how they should be run in Kubernetes - with one or multiple replicas, as a stateless Deployment, a stateful StatefulSet, or some other way.
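To show the contrast with the stateless Deployment sketched earlier, here is a sketch of a stateful workload as a StatefulSet with its own volume claim - again, names, image and size are assumptions:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: postgres
    spec:
      serviceName: postgres          # assumes a matching headless Service exists
      replicas: 1
      selector:
        matchLabels:
          app: postgres
      template:
        metadata:
          labels:
            app: postgres
        spec:
          containers:
            - name: postgres
              image: postgres:16
              volumeMounts:
                - name: data
                  mountPath: /var/lib/postgresql/data
      volumeClaimTemplates:          # each replica gets its own persistent volume
        - metadata:
            name: data
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 10Gi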

Where should I put shared services for multiple kubernetes-clusters?

Our company is developing an application which runs in 3 separate Kubernetes clusters in different versions (production, staging, testing).
We need to monitor our clusters and the applications over time (metrics and logs). We also need to run a mailserver.
So basically we have 3 different environments with different versions of our application. And we have some shared services that just need to run and we do not care much about them:
Monitoring: We need to install InfluxDB and Grafana. In every cluster there's a pre-installed Heapster that needs to send data to our tools.
Logging: We didn't decide yet.
Mailserver (https://github.com/tomav/docker-mailserver)
Independent services: Sentry, Gitlab
I am not sure where to run these external shared services. I found these options:
1. Inside each cluster
We need to install the tools 3 times for the 3 environments.
Cons:
We don't have one central point to analyze our systems.
If the whole cluster is down, we cannot look at anything.
Installing the same tools multiple times does not feel right.
2. Create an additional cluster
We install the shared tools in an additional kubernetes-cluster.
Cons:
Cost for an additional cluster
It's probably harder to send ongoing data to external cluster (networking, security, firewall etc.).
3. Use an additional root-server
We run docker-containers on an oldschool-root-server.
Cons:
It feels contradictory to use a root server instead of cutting-edge k8s.
Single point of failure.
We need to control the docker-containers manually (or attach the machine to rancher).
I tried to google for the problem but I cannot find anything about the topic. Can anyone give me a hint or some links on this topic?
Or is it simply not a relevant problem that a cluster might go down?
To me, the second option sounds less evil, but I cannot yet estimate whether it's hard to transfer data from one cluster to another.
The important questions are:
Is it a problem to have monitoring-data in a cluster because one cannot see the monitoring-data if the cluster is offline?
Is it common practice to have an additional cluster for shared services that should not have an impact on other parts of the application?
Is it (easily) possible to send metrics and logs from one kubernetes-cluster to another (we are running kubernetes in OpenTelekomCloud which is basically OpenStack)?
Thanks for your hints,
Marius
That is a very complex and philosophical topic, but I will give you my view on it and some facts to support it.
I think the best way is the second one - create an additional cluster - and here's why:
You need a point which is accessible from any of your environments. With a separate cluster, you can set the same firewall rules, routes, etc. in all your environments without affecting your current workloads.
Yes, you need to pay a bit more. However, you need resources to run your shared applications anyway, and the overhead of a Kubernetes infrastructure is not high in comparison with the applications themselves.
With a separate cluster, you can set up a real HA solution, which you might not need for staging and development clusters, so you will not pay for that multiple times.
Technically, it is also OK. You can use Heapster to collect data from multiple clusters; almost any logging solution can also work with multiple clusters. All other applications can be just run on the separate cluster, and that's all you need to do with them.
Now, about your questions:
Is it a problem to have monitoring-data in a cluster because one cannot see the monitoring-data if the cluster is offline?
No, it is not a problem with a separate cluster.
Is it common practice to have an additional cluster for shared services that should not have an impact on other parts of the application?
I think yes. At least I have done it several times, and I know some other projects with a similar architecture.
Is it (easily) possible to send metrics and logs from one kubernetes-cluster to another (we are running kubernetes in OpenTelekomCloud which is basically OpenStack)?
Yes, nothing complex there. Usually, it does not depend on the platform.

Container technologies: docker, rkt, orchestration, kubernetes, GKE and AWS Container Service

I'm trying to get a good understanding of container technologies but am somewhat confused. It seems like certain technologies overlap different portions of the stack and different pieces of different technologies can be used as the DevOps team sees fit (e.g., can use Docker containers but don't have to use the Docker engine, could use engine from cloud provider instead). My confusion lies in understanding what each layer of the "Container Stack" provides and who the key providers are of each solution.
Here's my layman's understanding; would appreciate any corrections and feedback on holes in my understanding
1. Containers: a self-contained package including the application, runtime environment, system libraries, etc.; like a mini-OS with an application.
It seems like Docker is the de-facto standard. Any others that are notable and widely used?
2. Container Clusters: groups of containers that share resources
3. Container Engine: groups containers into clusters, manages resources
4. Orchestrator: is this any different from a container engine? How?
Where do Docker Engine, rkt, Kubernetes, Google Container Engine, AWS Container Service, etc. fall between #s 2-4?
This may be a bit long and present some oversimplification but should be sufficient to get the idea across.
Physical machines
Some time ago, the best way to deploy simple applications was to buy a new web server, install your favorite operating system on it, and run your applications there.
The cons of this model are:
The processes may interfere with each other (because they share CPU and file system resources), and one may affect the other's performance.
Scaling this system up/down is difficult as well, taking a lot of effort and time in setting up a new physical machine.
There may be differences in the hardware specifications, OS/kernel versions and software package versions of the physical machines, which make it difficult to manage these application instances in a hardware-agnostic manner.
Applications, being directly affected by the physical machine specifications, may need specific tweaking, recompilation, etc, which means that the cluster administrator needs to think of them as instances at an individual machine level. Hence, this approach does not scale. These properties make it undesirable for deploying modern production applications.
Virtual Machines
Virtual machines solve some of the problems of the above:
They provide isolation even while running on the same machine.
They provide a standard execution environment (the guest OS) irrespective of the underlying hardware.
They can be brought up on a different machine (replicated) quite quickly when scaling (order of minutes).
Applications typically do not need to be rearchitected for moving from physical hardware to virtual machines.
But they introduce some problems of their own:
They consume large amounts of resources in running an entire instance of an operating system.
They may not start/go down as fast as we want them to (order of seconds).
Even with hardware assisted virtualization, application instances may see significant performance degradation over an application running directly on the host.
(This may be an issue only for certain kinds of applications)
Packaging and distributing VM images is not as simple as it could be.
(This is not as much a drawback of the approach, as it is of the existing tooling for virtualization.)
Containers
Then, somewhere along the line, cgroups (control groups) were added to the Linux kernel. This feature lets us isolate processes in groups, decide what other processes and file systems they can see, and perform resource accounting at the group level.
Various container runtimes and engines came along which make it very easy to create a "container": an environment within the OS, like a namespace, with limited visibility, resources, etc. Common examples of these include Docker, rkt, runC, LXC, etc.
Docker, for example, includes a daemon which provides interactions like creating an "image", a reusable entity that can be launched into a container instantly. It also lets one manage individual containers in an intuitive way.
The advantages of containers:
They are light-weight and run with very little overhead, as they do not have their own instance of the kernel/OS and are running on top of a single host OS.
They offer some degree of isolation between the various containers and the ability to impose limits on various resources consumed by them (using the cgroup mechanism).
The tooling around them has evolved rapidly to allow easy building of reusable units (images), repositories for storing image revisions (container registries) and so on, largely due to docker.
It is encouraged that a single container run a single application process, so that it can be maintained and distributed independently. The lightweight nature of a container makes this preferable, and leads to faster development due to decoupling.
There are some cons as well:
The level of isolation provided is less than in the case of VMs.
They are easiest to use with stateless, 12-factor applications built afresh, and a slight struggle if one tries to deploy legacy applications, clustered distributed databases, and so on.
They need orchestration and higher level primitives to be used effectively and at scale.
Container Orchestration
When running applications in production, as complexity grows, an application tends to have many different components, some of which need to scale up or down as required. The containers themselves do not solve all our problems. We need a system that solves problems associated with real large-scale applications, such as:
Networking between containers
Load balancing
Managing storage attached to these containers
Updating containers, scaling them, spreading them across nodes in a multi-node cluster and so on.
When we want to manage a cluster of containers, we use a container orchestration engine. Examples of these are Kubernetes, Mesos, Docker Swarm etc. They provide a host of functionality in addition to those listed above and the goal is to reduce the effort involved in dev-ops.
GKE (Google Container Engine) is hosted Kubernetes on Google Cloud Platform. It lets a user simply specify that they need an n-node kubernetes cluster and exposes the cluster itself as a managed instance. Kubernetes is open source and if one wanted to, one could also set it up on Google Compute Engine, a different cloud provider, or their own machines in their own data-center.
ECS is a proprietary container management/orchestration system built and operated by Amazon and available as part of the AWS suite.
To answer your questions specifically:
1. Docker engine: A tool to manage the lifecycle of docker containers and docker images. Create, restart, delete docker containers. Create, rename, delete docker images.
2. rkt: Analogous to the docker engine, but a different implementation.
3. Kubernetes: A collection of tools to manage the lifecycle of a distributed application that uses containers. Contains tooling to manage containers, groups of containers, configuration for containers, orchestrating containers, scheduling them on actual instances, and tooling to help developers write and maintain other services/tools to deal with containers.
4. Google Container Engine: Instead of getting VMs, installing "docker-engine" on them, installing Kubernetes on them, and getting it all to work with things like the right permissions to your infrastructure, imagine if it all came together so that you could choose the types of machines and the size of your cluster and have all of it just working. Things like pulling images from your project-specific docker repository (Google Container Registry), claiming persistent volumes, or provisioning load balancers just work, without worrying about service accounts, permissions and what not.
5. ECS: Analogous to GKE (4), but without Kubernetes.
To address the points in your understanding: you are loosely right about things (except container engine, I think). The only really important thing to understand is what a container is; the rest is just marketing/product names. It's also important to realize that today's understanding of containers is heavily shaped by what Docker containers are and by the opinions enforced by Docker and the tooling around it. Containers have been around for a long time.
So once you understand what a (docker) container is, a container engine is just a tool to manage them, a container cluster is just a group of containers, and an orchestrator is just a tool to manage where containers run based on some parameters. IMHO, you really don't need to worry too much about the rest of the tooling once you understand and build a solid mental model around containers. The rest will just fit in automatically.
The best way to understand all of this? Build & deploy a decently complex application with Docker (persist data/use a database in your app) and everything will make sense.

How to improve Kubernetes security especially inter-Pods?

TL;DR: Kubernetes allows all containers to access all other containers in the entire cluster, which seems to greatly increase the security risks. How can this be mitigated?
Unlike Docker, where one would usually only allow network connections between containers that need to communicate (via --link), each Pod on Kubernetes can access all other Pods on the cluster.
That means that for a standard Nginx + PHP/Python + MySQL/PostgreSQL stack running on Kubernetes, a compromised Nginx would be able to access the database.
People used to run all of those on a single machine, but that machine would get serious periodic updates (more so than containers), and SELinux/AppArmor for serious people.
One can mitigate the risks a bit by having each project (if you have various independent websites, for example) run on its own cluster, but that seems wasteful.
The current Kubernetes security seems to be very incomplete. Is there already a way to have decent security for production?
In the not-too-distant future we will introduce controls for network policy in Kubernetes. As of today that is not integrated, but several vendors (e.g. Weave, Calico) have policy engines that can work with Kubernetes.
As #tim-hockin says, we do plan to have a way to partition the network.
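Those policy engines consume what became the NetworkPolicy API; as a minimal sketch for the Nginx + PHP + database case (the pod labels and port are assumptions), a policy like the following only admits traffic to the database from the PHP pods:

    # Assumes database pods are labeled app=db and PHP pods app=php, in the same namespace.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: db-allow-php-only
    spec:
      podSelector:
        matchLabels:
          app: db
      policyTypes: ["Ingress"]
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: php
          ports:
            - protocol: TCP
              port: 5432    # assumed PostgreSQL port

With that in place, a compromised Nginx pod cannot even open a connection to the database, independent of application-level credentials.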
But, IMO, for systems with more moving parts, (which is where Kubernetes should really shine), I think it will be better to focus on application security.
Taking your three-layer example, the PHP pod should be authorized to talk to the database, but the Nginx pod should not. So, if someone figures out a way to execute an arbitrary command in the Nginx pod, they might be able to send a request to the database Pod, but it should be rejected as not authorized.
I prefer the application-security approach because:
I don't think the --links approach will scale well to 10s of different microservices or more. It will be too hard to manage all the links.
I think as the number of devs in your org grows, you will need fine grained app-level security anyhow.
In terms of being like docker compose, it looks like docker compose currently only works on single machines, according to this page:
https://github.com/docker/compose/blob/master/SWARM.md

Is OpenStack overkill for an HA website stack?

Some background:
I'm building a pretty involved website (as far as the stack used is concerned). Components, among some other smaller stuff, include:
Elasticsearch
Redis
ZeroMQ
Couchbase
RethinkDB
traffic through Nginx -> Node
The intention is to have a highly available website while staying pretty lean (and low cost) at the same time.
Current topology I'm considering:
2 webservers in an active/active config with DNS load balancing (Nginx, static asset serving, etc.) + load balancing to the second tier:
2 appservers in active/active. Most of the components, like Elasticsearch, can do sharding/replication themselves, so this should not be as hard to set up (fingers crossed)
session handling in replicated Redis
Naturally I want monitoring and alerting when something goes wrong, and ideally the system should handle failures automatically - stuff like promoting Redis from slave to master, or even initializing a new EC2 instance (if I were on EC2, that is).
However, I want to be free from any particular hosting provider, which I believe (please correct me if I'm wrong) is where OpenStack comes in.
Is it correct that:
- OpenStack allows me to control the entire lifecycle of my website stack (covering multiple boxes / virtual machines)?
- Does it allow me (with work on config, of course) to spin up instances, monitor, alert when something goes wrong, take appropriate actions in those scenarios, etc.?
Or is OpenStack just entirely the wrong tool for the job? Is there anything else that would fit better as a sort of "management layer" on top of my entire website?
Thanks
OpenStack isn't VMware ESX. It's not a very good straight-up, simple virtual machine hosting environment. If what you want is a way to easily manage virtual machines, I might suggest Ganeti. It even has HA failover of virtual machines. In a two-physical-host environment, this is probably the way to go.
What OpenStack gives you that Ganeti won't is RESTful APIs. It has AWS-compatible APIs, but it also has OpenStack APIs that are even better. If you want to automate elasticity or self-healing, this is huge. Being able to hook in with Python using the existing client APIs and just write scripts that spin up instances as needed is what Joe DevOps is all about.
So I guess it comes down to your level of commitment and what you need. For 2 physical machines, OpenStack probably isn't the best solution. But down the line, when you've got more apps and more VMs than you can manage manually, OpenStack will be there to help you write code that makes your datacenter dance to your melodic tunes.

Resources