Cassandra servers running on a Kubernetes cluster?

Dear Cassandra Admins,
I am wondering whether Cassandra is appropriate to install on a Kubernetes cluster, as I have never implemented it that way before.
I would appreciate it if you could share some thoughts. Here are my questions:
I know there are solutions to install Cassandra on K8s platforms. But in terms of pros vs. cons, is Kubernetes really a good platform for Cassandra servers (assuming I need to install Cassandra servers)?
If I want to install Cassandra in an "on-prem" Kubernetes cluster (NOT a public cloud like Azure, AWS or Google), which "on-prem" Kubernetes solution would you choose? For example:
OpenShift
Charmed Kubernetes (from Canonical, the company behind Ubuntu)
HPE Ezmeral
Vanilla K8s (completely open source)
Minikube
MicroK8s
Any other K8s solution you would choose?
I would appreciate any insights and thoughts!

If I want to install Cassandra in an "on-prem" Kubernetes cluster (NOT a public cloud like Azure, AWS or Google), which "on-prem" Kubernetes solution would you choose?
Minikube, MicroK8s and the like are not meant for production use; if you are just setting up a development environment, you can use any of them.
If you are setting up a production-grade cluster, you can use kOps, kubeadm, etc.
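If kubeadm is the route you take, bootstrapping a cluster is fairly short. A minimal sketch (the pod CIDR and the choice of Flannel as the CNI plugin are just example assumptions):
# on the control-plane node
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# make kubectl usable for your user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# install a pod network add-on (Flannel used here as an example)
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
# on each worker node, run the join command that kubeadm init printed
sudo kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
Everything else (monitoring, storage classes, ingress) still has to be layered on top, which is where the real effort goes on-prem.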
I know there are solutions to install Cassandra on K8s platforms. But in terms of pros vs. cons, is Kubernetes really a good platform for Cassandra servers (assuming I need to install Cassandra servers)?
Yes, it is a good option. On the pros side, you can manage things easily and scale them as needed.
Keep in mind that running Cassandra is not just the initial setup: you also need to take care of regular backups, monitoring and logging, so with a K8s setup you still have to handle log collection and database backups (a small sketch of the latter follows below).
Read this nice article: https://medium.com/flant-com/running-cassandra-in-kubernetes-challenges-and-solutions-9082045a7d93
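For the backup part, one simple approach is to trigger nodetool snapshots from outside the pods. A rough sketch, assuming a three-node StatefulSet named cassandra in the default namespace (the names and replica count are assumptions, not anything standard):
# take a snapshot on every Cassandra pod of the assumed 3-node StatefulSet
for i in 0 1 2; do
  kubectl exec "cassandra-$i" -- nodetool snapshot -t "daily-$(date +%F)"
done
# snapshots land under /var/lib/cassandra/data/<keyspace>/<table>/snapshots/,
# so they still have to be copied off each pod's volume (e.g. to object storage)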

Related

Which is the better way to install Jenkins: Docker or Kubernetes?

I am new to DevOps and want to install Jenkins. Out of all the installation options provided in the official documentation, which one should I use? I have narrowed it down to Docker or Kubernetes. The parameters I am looking at for the decision are below:
Portability - can be installed on any major OS or cloud provider.
Minimal changes to move to production.
Kubernetes is a container orchestrator that may use Docker as its container runtime. So, they are quite different things—essentially, different levels of abstraction.
You could theoretically run an application at both of these abstraction levels. Here's a comparison:
Docker
You can run an application as a Docker container on any machine that has Docker installed (i.e. any OS or cloud provider instance that supports Docker). However, you would need to implement any operations-related features that are relevant for production, such as health checks, replication, load balancing, etc. yourself.
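For illustration, running Jenkins with plain Docker is a single command (the volume name below is just an example), and everything beyond that one container is then up to you:
# run the official Jenkins LTS image with a named volume for its home directory
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts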
Kubernetes
Running an application on Kubernetes requires a Kubernetes cluster. You can run a Kubernetes cluster either on-premises, in the cloud, or use a managed Kubernetes service (such as Amazon EKS, Google GKE, or Azure AKS). The big advantage of Kubernetes is that it provides all the production-relevant features mentioned above (health checks, replication, load balancing, etc.) as part of the platform. So, you don't need to implement them yourself but just use the primitives that Kubernetes provides to you.
Regarding your two requirements, Kubernetes provides both of them, while using Docker alone does not provide easy production-readiness (requirement 2). So, if you're opting for production stability, setting up a Kubernetes cluster is certainly worth the effort.
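As a minimal sketch of the Kubernetes side, using only the imperative kubectl shortcuts (a real setup would add a PersistentVolumeClaim for the Jenkins home and would probably use the official Helm chart instead):
# create a single-replica Jenkins deployment and expose it inside the cluster
kubectl create deployment jenkins --image=jenkins/jenkins:lts
kubectl expose deployment jenkins --port=8080 --target-port=8080
# check that the pod is healthy and reach the UI from your machine
kubectl get pods -l app=jenkins
kubectl port-forward deployment/jenkins 8080:8080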

Cassandra on Google Cloud

I am new to GCP and I want to deploy Cassandra nodes on Google Cloud. What are the advantages of using Cassandra containers over directly deploying Cassandra on these nodes?
We tried these scenarios:
Running cassandra in kubernetes
Running cassandra in docker on VM instances
Running cassandra on VMs without docker
Short version:
We decided to run on VMs (docker)
Long version
Building a working Kubernetes setup takes some time. You need to figure out how to get IP addresses right, how to pick the right disk types, and how to access the machines.
When it comes to installing sidecars like Cassandra Reaper, we found the configuration to be easier on a dedicated VM.
Same story with disaster recovery. We back up the attached disks daily and keep the backups for a certain period. There were cases where we needed to attach a disk from a backup alongside a running instance. That again was easier than in a Kubernetes environment. Remember - when we are talking about disaster recovery, you are most likely under stress because things just got f... up ;)
In the end both solutions work, but a dedicated VM per node is easier to manage.
So:
Docker: yes (or better, docker-compose), because you don't have to worry about the VM setup - see the sketch after this list.
Kubernetes: rather no (but this is a question of personal taste)
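For reference, the per-VM Docker setup described here can be as small as the following (the data path, cluster name and seed address are example assumptions; the environment variables are the ones documented for the official cassandra image):
# run Cassandra on a VM with host networking and data on an attached disk
docker run -d --name cassandra \
  --network host \
  -v /mnt/cassandra-data:/var/lib/cassandra \
  -e CASSANDRA_CLUSTER_NAME=my-cluster \
  -e CASSANDRA_SEEDS=10.0.0.2 \
  cassandra:4.1
In practice you would put the same thing in a docker-compose.yml per VM so the setup is versioned and reproducible.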

Does it make sense to manage Docker containers on one or a few single hosts with Kubernetes?

I'm using Docker on a bare metal server. I'm pretty happy with docker-compose for configuring and setting up applications.
Still, some features are missing, like configuration management and monitoring. Maybe there are other solutions to these issues, but I'm a bit overwhelmed by the feature set of Kubernetes and can't judge whether it would help me here.
I'm also open for recommendations to solve the requirements separately:
Configuration / Secret management
Monitoring of my Docker-hosted applications (e.g. having some kind of dashboard)
Remote container control (SSH is okay with only one server)
Being ready to scale my environment (based on multiple different Dockerized applications) to more than one server in the future - I'm already thinking about networking/service discovery issues with a pure docker-compose setup
I'm sure Kubernetes covers some of these features, but I have the feeling that it's too focused on cloud platforms where machines are created on the fly (since I only have at most a few bare metal servers).
I hope the question's scope is not too broad; otherwise please use the comment section and help me narrow it down.
Thanks.
I think Kubernetes absolutely matches your requirements and is what you need.
Let's go through them one by one.
I have the feeling that it's too focused on cloud platforms where machines are created on the fly (since I only have at most a few bare metal servers)
No, it is not focused on clouds. Kubernetes can be installed on almost any bare-metal platform (including ARM), and there are many tools and instructions to help you do it. It is also easy to run it on your local PC using Minikube, which will prepare a local cluster for you inside a VM or directly in your OS (Linux only).
Configuration / Secret management
Kubernetes has powerful configuration and secret management based on special objects (ConfigMaps and Secrets) which can be attached to your containers. You can read more about configuration management in that article.
Moreover, tools like Helm can give you more automation and a range of preconfigured applications which you can install with a single command. And you can prepare your own charts for it.
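For example, installing a packaged application with Helm is a one-liner once a chart repository has been added (the Bitnami repository and the PostgreSQL chart below are just a common example, not something your setup requires):
# add a chart repository and install a packaged application from it
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-db bitnami/postgresql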
Monitoring of my Docker-hosted applications (e.g. having some kind of dashboard)
Kubernetes has its own dashboard where you can see many kinds of information: current application status, configuration, statistics and more. Kubernetes also integrates well with Heapster, which can be used with Grafana for powerful visualization of almost anything.
Remote container control (SSH is okay with only one server)
The Kubernetes control tool kubectl can fetch logs and connect to containers in the cluster without any problems. For example, to get a shell in a container "myapp" you just call kubectl exec -it myapp sh. You can also reach any application inside your cluster with the kubectl port-forward command, which forwards a port you need to your PC.
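A few more concrete examples of what that looks like in practice (the pod name myapp is assumed, as above):
# stream the logs of a container
kubectl logs -f myapp
# open a shell inside the container
kubectl exec -it myapp -- sh
# forward local port 8080 to port 80 of the pod
kubectl port-forward myapp 8080:80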
Being ready to scale my environment (based on multiple different Dockerized applications) to more than one server in the future - I'm already thinking about networking/service discovery issues with a pure docker-compose setup
Kubernetes can be scaled up to thousands of nodes, or it can have only one - it is your choice. Independent of the cluster size, you get production-grade networking, service discovery and load balancing.
So, don't be afraid: just try it locally with Minikube. It will make many operational tasks simpler, not more complex.

What is the simplest reasonable Kubernetes setup?

I'm interested in getting started with Kubernetes, but my needs are simple and it does not look simple. I have a number of containerized applications that I deploy to container servers. I use nginx as a reverse proxy to expose these applications.
As far as I can tell, Kubernetes is meant to simplify management of setups like this. But I'm not sure the setup investment is worth it, given that I only realistically need one instance of each app running.
What is the simplest reasonable Kubernetes setup that I can deploy a few containerized applications to?
EDIT: If I start using Kubernetes, it will be using only on-site servers. The applications in question are ones I’ve developed for my employer, who requires that everything stays on-site.
On a developer's machine, you should use Minikube.
On Azure / Google / Amazon etc., you should use a managed Kubernetes service.
On-prem, you should deploy Kubernetes with your own setup:
3.1. the hard way: https://github.com/kelseyhightower/kubernetes-the-hard-way
3.2. with kubeadm
3.3. with Ansible scripts like Kubespray
If you choose a kubeadm installation, you should also use kubeadm again when you upgrade the cluster. The best ways to deploy on-prem are kubeadm, Kubespray, or automating it with Pivotal's BOSH scripts.
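To illustrate the Kubespray route: the whole installation is driven by a single Ansible playbook. A sketch, assuming you follow Kubespray's sample inventory layout (and have installed its Ansible/Python requirements first):
# fetch Kubespray and run its cluster playbook against your own inventory
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
cp -r inventory/sample inventory/mycluster
# edit inventory/mycluster/hosts.yaml to list your machines, then:
ansible-playbook -i inventory/mycluster/hosts.yaml --become cluster.yml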
As you want to get started with Kubernetes, I assume you want to set it up for local development. I think minikube is the best candidate for this purpose. You can also take a look at the interactive tutorials on the official Kubernetes website - I find them very helpful.
Take a look at this opinionated cluster setup k8s-snowflake and deploy it somewhere like Azure, or Google Compute.
It's a great exercise to figure out how Kubernetes clusters work at a low level, but when you're ready to go to production, take a look at Google's Kubernetes Engine (GKE) or AWS's managed Kubernetes offering (EKS). These ease the management of clusters immensely and expose all the other benefits of the cloud platform to your Kubernetes workloads.
Well, according to the previous answers, you should start with minikube on your machine.
Regarding the further dev/test/staging/prod deployment, it depends. There are a couple of solutions:
use managed clusters provided by Google, Azure or AWS (AWS EKS is not ready yet - https://aws.amazon.com/eks/)
the hard way - set up your own cluster on EC2 machines or similar
use tools like Rancher - some additional abstraction over k8s for an easy start; from version 2.0, k8s will be Rancher's default orchestration mechanism
Update 31-01-2018:
regarding the hard way - there are of course some tools that help with that approach, like Helm
The Kubernetes docs provide an excellent page for choosing between the different options to set up Kubernetes; it is found under Picking the Right Solution.
If you want a simple way to set up a Kubernetes cluster on-premises, check out kubeadm.

Kubernetes for a Development Environment

Good day
We have a development environment that consists of 6 virtual machines. Currently we are using Vagrant and Ansible with VirtualBox.
As you can imagine, hosting this environment is a maintenance nightmare, particularly as versions of software/OS change. Not to mention the resource load on developer machines.
We have started migrating some virtual machines to Docker, but this itself poses problems around orchestration, correct configuration, communication, etc. This led me to Kubernetes.
Would someone be so kind as to provide some reasoning as to whether Kubernetes would or wouldn't be the right tool for the job? That is managing and orchestrating 'development' docker containers.
Thanks
This is quite a complex topic, and many things have to be considered to decide whether it's worth using k8s as a local dev environment. In my case, I used it when I wanted my local developer environment to be very close to the production one, which was running on Kubernetes. This helped avoid many configuration bugs.
In my opinion, Kubernetes (k8s) will provide everything you need for a development environment.
It gives you a lot of flexibility and handles much of the configuration itself. A few examples:
An easy way to deploy a new version into the local Kubernetes stack
You prepare k8s replication controller files for each of your application modules (keep in mind that they need to be stateless modules).
In the replication controller you specify the Docker image, and that's it.
Using this approach you can push new Docker images to the local Docker registry and then control the lifecycle of your application using kubectl.
Easy way to scale your application modules
For example:
kubectl scale rc your_application_service --replicas=3
This way k8s will check how many pods you have running for your service, and if it sees that the number is smaller than the replicas value, it will create new pods to satisfy the replica count.
It's an endless topic and many other things come to mind, but I would suggest you just try it out.
There is a project for running a k8s cluster in Vagrant: https://github.com/kubernetes/kubernetes/blob/master/docs/devel/developer-guides/vagrant.md
Of course, you have to remember that if you have many services, all of them have to be pushed to the local registry and run by k8s. This will take some time, but if you automate the local deploy with some custom scripts you won't regret it.
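A rough sketch of that local build-push-deploy loop, assuming a local registry at localhost:5000 and a replication controller named myapp whose pods carry the label app=myapp (all of these names are assumptions):
# build the new image and push it to the local registry
docker build -t localhost:5000/myapp:v2 .
docker push localhost:5000/myapp:v2
# point the replication controller's pod template at the new image;
# existing pods keep the old image, so delete them and let the RC recreate them
kubectl set image rc/myapp myapp=localhost:5000/myapp:v2
kubectl delete pods -l app=myapp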
As wsl mentioned before, it is quite a complex topic, but I'm doing this as well at the moment, so let me summarize some things for you:
With Kubernetes (k8s) you're going to orchestrate your SaaS application. In the best case, it is a cloud-native application. The properties/requirements for a cloud-native application are formulated by the Cloud Native Computing Foundation (CNCF), which basically formed around k8s after Google donated it to the Linux Foundation.
So the properties/requirements for a cloud-native application are: container packaged, dynamically managed and micro-services oriented (cncf.io/about/charter). You will benefit most from k8s if your applications are micro-service based and every service has its own container.
With micro-service based applications, every service can be developed independently. The developer only needs to follow the 12-factor methodology (12factor.net), for example using env vars instead of hard-coded IP addresses, etc.
In the next step, the developer builds the container for a service and pushes it to a container registry. For a local development environment, you may need to run a container registry inside the cluster as well, so developers can push and test their code locally.
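Running such a registry is itself just one container. A minimal sketch (unauthenticated, so only suitable for a local development setup; the image name myservice is hypothetical):
# start a local Docker registry, then tag and push an image into it
docker run -d -p 5000:5000 --name registry registry:2
docker tag myservice:latest localhost:5000/myservice:latest
docker push localhost:5000/myservice:latest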
Then you're able to define your k8s replication controllers, services, PetSets, etc. with ports, port mappings, env vars, container images... and create and run them inside the cluster.
The k8s documentation recommends Minikube for running k8s locally (kubernetes.io/docs/getting-started-guides/minikube/). With Minikube you get features like DNS, NodePorts, ConfigMaps and Secrets, and dashboards.
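Getting that running locally is only a couple of commands (the VM driver Minikube picks depends on your machine, so no flags are shown here):
# start a local single-node cluster, check it, and open the dashboard
minikube start
kubectl get nodes
minikube dashboard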
But I chose the multi-node CoreOS Kubernetes with Vagrant cluster for my development environment, as Puja Abbassi mentions in the blog post "Finding The Right Local Kubernetes Development Environment" (https://deis.com/blog/2016/local-kubernetes-development-environment/), because it is closer to my production environment (12-factor: 10 - dev/prod parity).
With the Vagrant environment you get features like:
Networking with flannel
Service Discovery with etcd
DNS names for a set of containers with SkyDNS
internal load balancing
If you want to know how everything works, look inside this GitHub repo: github.com/coreos/coreos-kubernetes/tree/master/multi-node (the vagrant and generic folders).
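If you want to try that setup, the entry point is roughly the following (I'm only sketching it here; check the repo's README for the exact configuration steps, such as copying the sample config for the number of nodes you want):
# bring up the multi-node CoreOS Kubernetes cluster with Vagrant
git clone https://github.com/coreos/coreos-kubernetes.git
cd coreos-kubernetes/multi-node/vagrant
vagrant up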
So you have to ask yourself whether you or your developers really need to run a complete "cloud environment" locally. In many cases a developer can develop a service (based on micro-services and containers) independently.
But sometimes it is necessary to have multiple or even all services running on your local machine as a dev environment.
