I would like to request expert opinion from SMEs on the following:
What would it take to build a CaaS using physical/bare-metal servers?
a) Suggested options to implement concerns like resource monitoring, utilization, etc.
b) Suggested approaches to build a robust, scalable, resilient underlying infrastructure that marries well with a Kubernetes/Docker/etcd/flannel-based CaaS layer.
Kubernetes as a solution likes to leverage an underlying "cloud provider"; any suggestions for creating that from scratch? (Of course I am not looking at elaborate solutions like OpenStack, since the idea is to create a lean/low-cost/lightweight solution.) What I am looking for is a way to create an IaaS out of, let's say, 10 production servers. I am not looking for data-center-scale solutions.
For your second question, you should check out the Getting started from Scratch guide in the Kubernetes docs.
I have implemented a similar setup on bare metal.
You don't need etcd on each machine.
Set up a 3-node etcd cluster for high availability.
If you have spare machines, keep the etcd cluster separate from the control plane.
Set up a 3-node control plane.
Use nginx for load balancing across the masters/control plane.
Run the keepalived service on each control-plane node.
The virtual IP should be routable across the nodes in the cluster; choose the VIP from the same subnet.
keepalived provides failover in case the primary control plane/master goes down. A minimal sketch of how these pieces fit together follows below.
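As a sketch of how those pieces could fit together with kubeadm, assuming a keepalived VIP of 192.168.1.100 in front of the API servers and three external etcd members (all addresses and the Kubernetes version are placeholders):

```yaml
# kubeadm-config.yaml -- minimal sketch, all addresses are placeholders
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
# API traffic goes through the keepalived VIP, which fails over
# between the nginx load balancers / control-plane nodes
controlPlaneEndpoint: "192.168.1.100:6443"
etcd:
  external:                     # the separate 3-node etcd cluster
    endpoints:
      - https://192.168.1.11:2379
      - https://192.168.1.12:2379
      - https://192.168.1.13:2379
    caFile: /etc/etcd/ca.crt
    certFile: /etc/etcd/apiserver-etcd-client.crt
    keyFile: /etc/etcd/apiserver-etcd-client.key
networking:
  podSubnet: "10.244.0.0/16"    # flannel's default pod CIDR
```

Each additional control-plane node would then be joined against the VIP with `kubeadm join ... --control-plane`.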
We run an AKS cluster with about 35 microservices and now need HiveMQ.
We are unsure how to proceed. We could just deploy HiveMQ on our AKS cluster via Helm, as we did on the dev cluster.
But we are unsure whether this is a good idea. Our concern is that the scaling of the services and of HiveMQ would not be independent if we do so.
On the other hand, setting up a whole CI/CD process for a separate AKS cluster for each stage and market comes with some work and cost.
We have around 20 million end devices using our backend, and scaling is a big issue. But I have found nothing that explicitly says we need a separate cluster, which would justify planning that work.
What is your opinion?
Thanks!
Robin
The decision depends on your resources: if you have a big cluster composed of multiple reliable nodes with plenty of CPU/memory, run it on the same cluster, but in a dedicated namespace.
Moving it to another cluster amounts to the same thing, you will just have other nodes. Are they more reliable than those of the first cluster? That depends, but it would not be a bad idea, especially if you deploy that cluster in another AKS region; one could argue that this is "safer".
So it is absolutely up to you. Personally, if my first cluster were well managed and well supported, I would deploy it there.
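If you stay on the same cluster, a dedicated namespace with a ResourceQuota is a cheap way to keep the broker's consumption bounded and its scaling concerns separate from the microservices. A minimal sketch; the namespace name and the limits are placeholders:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: hivemq                 # placeholder name
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: hivemq-quota
  namespace: hivemq
spec:
  hard:
    requests.cpu: "8"          # caps what the broker can request in this namespace
    requests.memory: 32Gi
    limits.cpu: "16"
    limits.memory: 64Gi
```

You would then point your Helm release at that namespace (e.g. `helm install ... --namespace hivemq`).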
My organization manages systems where each client is provisioned a VPS and then their tech stack is spun up on that system via Docker Compose.
Data is stored on-system, using Docker Compose volumes. None of the fancy named storage - just good old direct path volumes.
While this setup is workable, it does not scale. We can always give a VPS more CPU/memory, but that does not fix the underlying issues.
Staging/development environments must be brought up manually, and there is no service redundancy. Hot swapping is impossible with our current system.
Kubernetes has been pitched to me as a solution to these problems, but honestly I have no idea where to begin: most of the documentation is obtuse, and I have failed to find anyone in our particular predicament.
The end goal would be to have just a few high-spec machines running Kubernetes - with redundancy, staging, and the ability to spin up new clients as necessary (without having to provision additional machines or external IPs).
What specific tools would my organization need to use to achieve this goal?
Are there any tools that would allow us to bring over our existing Docker Compose stacks into Kubernetes?
Where to begin: given what you're telling us, I would first look into options for implementing some SDS (software-defined storage).
You're currently using local volumes, which you probably won't be able to keep with Kubernetes - or at least shouldn't, if you don't want to bind your containers to a single node.
The easiest way - though not necessarily the one I would recommend - would be to use some NFS servers. Even better: with some DRBD and pacemaker/corosync, using a VIP for failover -- or the FreeBSD way: hastd, carp, ifstated, maybe some ZFS. You would probably have to deploy several such systems as your Kubernetes cluster scales, to distribute the I/O ... a single NFS server doesn't last long without its load going over 50 and its iowait spiking ...
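For illustration, consuming such an NFS export from Kubernetes would look roughly like this, using a statically provisioned PersistentVolume; the server address, export path and sizes are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: client-a-data
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany              # NFS can be mounted from several nodes at once
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.10            # placeholder: the NFS VIP (keepalived/pacemaker)
    path: /exports/client-a
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: client-a-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""           # bind to the static PV above, skip dynamic provisioning
  resources:
    requests:
      storage: 50Gi
```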
A better way would be to look into actual SDS solutions. One I can recommend is Ceph, though there are a lot of newer solutions I'm less familiar with ... and there's GlusterFS, which I would definitely avoid. An easy way to deploy Ceph is ceph-ansible.
Depending on what corporate hardware you have at your disposal, you may already have a NetApp or equivalent: something that can serve NFS shares and/or provide iSCSI gateways.
Now, those are all solutions you could run on the side, although note that you will also find "CNS" (container-native storage) solutions, which are meant to be deployed on top of Kubernetes; Ceph clusters, for example, can be managed using Rook. These can be interesting, though in terms of maintenance and operations they require good knowledge of both the storage solution you operate and of Kubernetes/containers in general: troubleshooting issues and fixing outages may not be as easy as with a good old bare-metal/VM setup. For a first Kubernetes experience, I would hold off; once you feel comfortable enough, go ahead.
In any case, another critical consideration before deploying your cluster is the network that will host your installation. Kubernetes should not be deployed directly on public instances: you will probably want a private VLAN, maybe an internal DNS, a local registry (which could be Kubernetes-hosted), or other tools such as an LDAP server, an SMTP relay, HTTP caches/proxies, load balancers to put in front of your API, ...
Once you've made up your mind regarding those issues, you can look into deploying a Kubernetes cluster using tools such as Kubespray (Ansible-based) or Kops (which drives a cloud API, e.g. AWS, and can optionally output Terraform). Both projects are part of the Kubernetes organization and maintained by its community. Kubespray covers all scenarios (IaaS and bare metal), integrates with popular SDS out of the box, can ship with various ingress controllers, ... overall it offers good defaults and lots of variables to customize your installation.
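As an illustration of that last point, most of the customization happens in the inventory's group_vars. A minimal sketch with a few commonly used variables (names as found in Kubespray's sample inventory; paths and defaults vary between versions, so double-check against the release you deploy):

```yaml
# group_vars/k8s_cluster/k8s-cluster.yml  (path varies by Kubespray version)
kube_network_plugin: calico        # or flannel, cilium, ...
kube_proxy_mode: ipvs

# group_vars/k8s_cluster/addons.yml
helm_enabled: true
ingress_nginx_enabled: true        # deploy the nginx ingress controller
metrics_server_enabled: true
```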
Start with a cluster of 3 masters and 2 workers, and make sure the resulting cluster matches what you expect.
Before going to prod, take your time to properly translate your existing configurations. Sometimes, refactoring code or images can be worth it.
Going to prod, consider adding a group of "infra" nodes to host logging or other internal services that are somewhat critical to users and shouldn't suffer outages caused by end-user workloads (e.g. ingress routers, monitoring, logging, integrated registry, ...).
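To dedicate such "infra" nodes, the usual pattern is a taint on the nodes plus a matching toleration and node selector on the infra workloads. A minimal sketch, with arbitrary label/taint keys and a hypothetical logging Deployment:

```yaml
# Label and taint applied to the infra nodes beforehand, e.g.:
#   kubectl label node infra-1 node-role.kubernetes.io/infra=""
#   kubectl taint node infra-1 node-role.kubernetes.io/infra=true:NoSchedule
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-logging            # hypothetical infra workload
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cluster-logging
  template:
    metadata:
      labels:
        app: cluster-logging
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""   # only land on infra nodes
      tolerations:
        - key: node-role.kubernetes.io/infra
          operator: Exists
          effect: NoSchedule                # tolerate the taint above
      containers:
        - name: logging
          image: example/logging:latest     # placeholder image
```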
Kubespray: https://github.com/kubernetes-sigs/kubespray/
Kops: https://github.com/kubernetes/kops
Ceph: https://ceph.com/en/discover/
Ceph Ansible: https://github.com/ceph/ceph-ansible
Rook (Ceph CNS): https://github.com/rook/rook
I'm new to service mesh with Consul.
I found a lot of documentation about using Consul and Envoy for service mesh on Kubernetes, but I'm not finding much about using them on Docker Swarm (Enterprise Edition).
My question is: is it possible to implement this on Docker Swarm EE? If not, what are the technical reasons that prevent it or make it inadvisable?
I have wondered the same thing.
The main problem with Docker Swarm, it seems, is that it lacks the concept of "sidecar" containers. Kubernetes, for example, has "pods". I haven't used Kubernetes, but my understanding is that you can group containers into a unit called a "pod". This has benefits and really enables the mesh-style architecture: containers in the same pod can all communicate through "localhost" on different port bindings, i.e. they are "local" to each other. When you want a "companion" service, this is what you need, since you know communication with it will be fast because it is essentially co-located with your app.
Now consider Swarm. You can add services to your stack, but you don't necessarily know where they will be placed: your sidecar proxy service could end up on node 2 while your app is on node 1. This is not very efficient, as it means there are now network hops to route traffic between your app and its "sidecar" proxy, which could be on the other side of the data centre when it should really be local.
So you start thinking of creative workarounds. What if I use placement settings to put my service and the sidecar service on the same node? Then you lose the ability for Swarm to reschedule them onto a different node if that node goes down, because your placement options have confined them to a single node. What if you deploy the sidecar proxy as a "global" service so that it is available on each node? Then your apps should all be able to reach the proxy via the IP address of whatever node they are on, but how do you configure that IP address per task (container)? I'm exploring that option, but it gives you a single sidecar instance per node (one instance potentially serving many services), which has implications for how you scale that sidecar. One other possible solution is to embed these "sidecar" services into your own service's Docker image so that they are truly running locally with your app. However, I haven't seen anyone really advocate that approach, so it's most likely fraught with hurdles.
Most documentation is for Kubernetes, and nothing covers Swarm, for these sorts of reasons. If only Swarm had added this ability in its usual style of simplicity, it would have extended its reach so much.
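To make those trade-offs concrete, here is a rough sketch of the two workarounds in a Swarm stack file; image names, node names and ports are placeholders, and this is not a recommendation:

```yaml
version: "3.8"
services:
  app:
    image: example/app:latest           # placeholder
    deploy:
      placement:
        constraints:
          - node.hostname == node1      # pins app and proxy together, but to one node
  app-proxy:
    image: envoyproxy/envoy:v1.27.0     # for example
    deploy:
      placement:
        constraints:
          - node.hostname == node1      # same pinning, same loss of rescheduling

  # Alternative: one proxy per node, shared by every service on that node.
  node-proxy:
    image: envoyproxy/envoy:v1.27.0
    deploy:
      mode: global                      # exactly one task on every node
    ports:
      - target: 15001
        published: 15001
        mode: host                      # reachable via the node's own IP
```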
Our cloud application consists of 3 tightly coupled Docker containers: Nginx, Web and Mongo. Currently we run these containers on a single machine. However, as our user base grows, we are looking for a way to scale. Using Kubernetes we would form a multi-container pod; if we replicate, we need to replicate all 3 containers as a unit. Our cloud application is consumed by mobile app users. Our app can only handle approximately 30,000 users per worker node, and we intend to place a single pod on each worker node. Once a mobile device is connected to a worker node, it must continue to use only that machine (unique IP address).
We plan on using Kubernetes to manage the containers. Load balancing doesn't work for our use case, as a mobile device needs to stay tied to a single machine once assigned, and each pod works independently with its own persistent volume. However, we need a way of spinning up new pods on worker nodes if the number of users goes over 30,000, and so on.
The idea is to have some sort of custom scheduler that assigns a mobile device a worker node (domain/IP address) depending on the number of users on that node.
Is Kubernetes a good fit for this design, and how could we implement a custom pod-scaling algorithm?
Thanks
Piggy-Backing on the answer of Jonah Benton:
While this is technically possible, your problem is not with Kubernetes, it's with your application! Let me point out the problem:
Our cloud application consists of 3 tightly coupled Docker containers, Nginx, Web, and Mongo.
Here is your first problem: if you can only deploy these three containers together and not independently, you cannot scale one without the others!
While MongoDB can be scaled to handle insane loads, if it's bundled with your web server and web application it won't be able to...
So the first step for you is to break up these three components so they can be managed independently of each other. Next:
Currently we run these containers on a single machine.
While not strictly a problem in itself, I have serious doubts about what it would mean to scale your application, and about the challenges that come with scalability!
Once a mobile device is connected to worker node it must continue to only use that machine ( unique IP address )
Now, this IS a problem. You're looking to run an application on Kubernetes, but I do not think you understand the consequences of doing that: Kubernetes orchestrates your resources. This means it will move pods (by killing and recreating them) between nodes (and, if necessary, back onto the same node). It does this fully autonomously (which is awesome and gives you a good night's sleep). If you rely on clients sticking to a single node's IP, you will be woken up in the middle of the night because Kubernetes corrected for a node failure and moved your pod, which is now gone, and your users can't connect anymore. You need to leverage the load-balancing features (Services) in Kubernetes. Only they are able to handle the dynamic changes that happen in Kubernetes clusters.
Using Kubernetes we would form a multi container pod.
And we have another winner: no! You're trying to treat Kubernetes as if it were your on-premise infrastructure! If you keep doing so, you're going to fail and curse Kubernetes in the process!
Now that I've told you some of the things you're getting wrong, what kind of person would I be if I did not offer some advice on how to make this work:
In Kubernetes your three applications should not run in one pod! They should run in separate pods:
your web server's work should be done by an Ingress, and since you're already familiar with nginx, the nginx ingress controller is probably the one you are looking for;
your web application should be a simple Deployment, exposed to the ingress through a Service (see the sketch after this list);
your database should be a separate deployment, which you can run either manually through a StatefulSet or (more advanced) through an operator, also exposed to the web application through a Service.
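A minimal sketch of what the Deployment/Service/Ingress combination could look like, assuming a hypothetical web image and hostname (the MongoDB StatefulSet or operator is left out):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                    # the web tier now scales on its own
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0   # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com       # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```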
Feel free to ask if you have any more questions!
Building a custom scheduler and running multiple schedulers at the same time is supported:
https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
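Pods opt in to a custom scheduler simply by naming it in their spec; a minimal sketch, where the scheduler name and image are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tenant-pod
spec:
  schedulerName: device-aware-scheduler   # hypothetical custom scheduler
  containers:
    - name: app
      image: example/app:latest           # placeholder image
```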
That said, to the question of whether Kubernetes is a good fit for this design, my answer is: not really.
K8s can be difficult to operate, with the payoff being the level of automation and resiliency that it provides out of the box for whole classes of workloads.
This workload is not one of those. To gain any benefit, you would have to write a scheduler that handles this application's edge-failure and error cases (what happens when you lose a node for a short period of time...) in a way that makes sense for k8s. And you would have to come up to speed with normal k8s operations.
With the information provided, I am hard pressed to see why one would use k8s for this workload rather than just running Docker on some VMs and scripting some of the automation.
Reading the documentation about Istio, I came up with these questions:
In which respects do Istio and Docker Swarm work the same way?
Also, which one is better in which scenarios?
It's true that descriptions of both Istio and Docker Swarm refer to the term "service mesh".
However, the service mesh in Docker Swarm is more comparable to the Services model seen in Kubernetes, and the two orchestrators are broadly comparable with respect to most of the features each of them has. In both orchestrators, service routing only touches the network layers and has no visibility into, e.g., the HTTP protocol.
Please note that the Kubernetes Ingress API should be considered separately: it sits above the Services model and is in fact implemented by an external controller, e.g. Traefik or HAProxy; Istio brings yet another implementation of an ingress controller.
Istio sits (roughly) one level above an orchestrator. Right now it runs only on Kubernetes, but it will very likely support Docker Swarm and other popular orchestrators in the future.
More specifically, Istio's service mesh is much more advanced than what Docker Swarm offers (and, by analogy, what Kubernetes Services offer): for example, it enables fault injection and transparent TLS, among many other features.
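To give one example of what "more advanced" means: an Istio VirtualService can inject faults at the HTTP layer, something Swarm's routing mesh has no equivalent for. A minimal sketch, with hypothetical service names:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ratings-delay
spec:
  hosts:
    - ratings                    # hypothetical in-mesh service
  http:
    - fault:
        delay:
          percentage:
            value: 50            # delay half of the requests
          fixedDelay: 5s
      route:
        - destination:
            host: ratings
```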
Docker Engine swarm mode makes it easy to publish ports for services to make them available to resources outside the swarm. All nodes participate in an ingress routing mesh. The routing mesh enables each node in the swarm to accept connections on published ports for any service running in the swarm, even if there’s no task running on the node. The routing mesh routes all incoming requests to published ports on available nodes to an active container.
Istio is an open platform to connect, manage, and secure services. Essentially, it's an open service mesh, where we would like developers and operators not to have headaches about how to connect services, how to make them resilient, how to secure them, and how to manage the runtime. We would like Istio to be able to do that for developers and operators across all environments and clouds. And when I say services, I really mean all kinds of services, not necessarily just microservices: it could be anything, like a MySQL API service, or a really small microservice within your application for payment or shopping, in any language. So Istio takes the approach of working with a polyglot environment: no matter which language you write your services in and where you deploy them, Istio would like to give you a uniform substrate between your application and the network, which can take care of connectivity and resiliency between services. Resiliency includes things like retries, failover, all of the good stuff, and securing services in a distributed system. We think internal services should be as secure as external ones, so security by default. And you get complete observability and visibility of metrics all the way from L3 to L7 across all your services.
Essentially, think of a layer (some people call it L5) that sits between your application and the network. When you think about it, we are basically injecting a proxy next to every service, and those proxies are in the data path of all service-to-service communication. They are all interconnected and also connected to a common control plane. That interconnected set of proxies living next to every service is what is typically called a service mesh. The reason this is so interesting is that once you think of the mesh as a layer that exists the way the network does, you can offload things like connectivity, resiliency and visibility from the application layer to that layer. Historically you could do this either in application libraries (if you build in Java, Python or Go, there are plenty of libraries in each of these languages that you could import and write that logic against), or at the L3 layer with security and policy such as IP whitelisting, firewall rules, VPN networking, VPN peering, and so on. We think the service mesh is a space between the two that can offload a few things from L7 and give you policy-driven contracts to operate your network. So, Istio's service mesh is much better than the Docker Swarm service mesh.
It's an apples-to-oranges comparison in many respects. Istio (currently) runs on top of Kubernetes, which, like Docker Swarm, is a container orchestrator.