Difference between Rancher and other container orchestration - docker

I read on Rancher Official Page
Rancher is an open source software platform that enables organizations
to run containers in production. With Rancher, organizations no longer
have to build a container services platform from scratch using a
distinct set of open source technologies. Rancher supplies the entire
software stack needed to manage containers in production.
Based on this description, I think Rancher is a container orchestration tool like docker-compose. But as I read on the same page:
Many users choose to run containerized applications using a container
orchestration and scheduling framework. Rancher includes a
distribution of all popular container orchestration and scheduling
frameworks today, including Docker Swarm, Kubernetes, and Mesos.
This paragraph makes me think Rancher is not a container orchestration tool itself but something that controls those tools. Please tell me what the difference is between Rancher and other container orchestration tools.

[Rancher Labs employee]
Basically we are unopinionated about what orchestration system you want to use. Rancher contains our own container orchestration system called Cattle, which has a full UI, API, and supports YAML syntax that matches docker-compose plus additional things not available in compose.
Obviously we like our own but are aware that other choices exist in the ecosystem and many people want to use them. And managing their installations is not a trivial task... so Rancher contains (Cattle) templates for Kubernetes, Mesos, and Swarm.
When you create an environment you choose an orchestration system, and if you pick e.g. K8s we use Cattle to orchestrate the installation and configuration of K8s, plus integrating other Rancher services like access controls, load balancing, etc. You can then use the standard tools like kubectl, their API, or we have a fairly complete custom UI for K8s built-in.
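As a rough illustration of the docker-compose-compatible syntax mentioned above (the service names and images here are placeholders, not taken from Rancher's docs), a Cattle environment consumes files along these lines:

version: '2'
services:
  web:
    image: nginx:alpine            # placeholder front-end image
    ports:
      - "80:80"
  api:
    image: example/api:latest      # placeholder application image
    depends_on:
      - db
  db:
    image: postgres:9.6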

Related

Which is the better way to install Jenkins: Docker or Kubernetes?

I am new to DevOps and I want to install Jenkins. Out of all the installation options provided in the official documentation, which one should I use? I have narrowed it down to Docker or Kubernetes. The parameters I am basing the decision on are below.
Portability - can be installed on any major OS or cloud provider.
Minimal changes to move to production.
Kubernetes is a container orchestrator that may use Docker as its container runtime. So, they are quite different things—essentially, different levels of abstraction.
You could theoretically run an application at both of these abstraction levels. Here's a comparison:
Docker
You can run an application as a Docker container on any machine that has Docker installed (i.e. any OS or cloud provider instance that supports Docker). However, you would need to implement any operations-related features that are relevant for production, such as health checks, replication, load balancing, etc. yourself.
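For instance, running Jenkins as a single Docker container might look like the sketch below (the official jenkins/jenkins:lts image is assumed); a restart policy and a named volume are roughly all the production hardening Docker alone gives you, and anything beyond that is up to you:

# Run Jenkins as one container; the named volume keeps job data across restarts
docker run -d --name jenkins \
  --restart unless-stopped \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts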
Kubernetes
Running an application on Kubernetes requires a Kubernetes cluster. You can run a Kubernetes cluster either on-premises, in the cloud, or use a managed Kubernetes service (such as Amazon EKS, Google GKE, or Azure AKS). The big advantage of Kubernetes is that it provides all the production-relevant features mentioned above (health checks, replication, load balancing, etc.) as part of the platform. So, you don't need to implement them yourself but just use the primitives that Kubernetes provides to you.
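As a hedged sketch of the Kubernetes side (the object names and probe path are assumptions), a minimal Deployment for the same Jenkins image declares replication and health checking in the manifest instead of leaving them to you:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins                    # hypothetical name
spec:
  replicas: 1                      # Kubernetes keeps this many pods running
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        ports:
        - containerPort: 8080
        livenessProbe:             # built-in health check; the pod is restarted if it fails
          httpGet:
            path: /login           # assumed health endpoint
            port: 8080
          initialDelaySeconds: 60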
Regarding your two requirements, Kubernetes provides both of them, while using Docker alone does not provide easy production-readiness (requirement 2). So, if you're opting for production stability, setting up a Kubernetes cluster is certainly worth the effort.

Does a simple monolith application need Kubernetes to manage it?

I'm very new to infrastructure, so I built a simple monolith application, and I use Docker to build a container and deploy it on my Linux server. My question is: do I need to install Kubernetes for a single container, and if not, how can I scale or do load balancing?
"... do I need to install kubernetes for a single container" - No, it is not mandatory. One can use docker to manage applications. Kubernetes is a platform that can be used to orchestrate containerized applications. It offers tools and concepts like autoscaling based on load, isolation through namespaces, network access management through services and ingresses, and much more. But Kubernetes is not the only platform for orchestration. There are others, for example OpenShift, docker swarm, rancher. All those are optional platforms with additional tooling and concepts that can be used if necessary.
"how can I scale or do the load balancing." - We can, for example, define the replicas through the replicas variable in a docker-compose file. All containers defined under a service are accessed through this service's name. How exactly the balancing is done can also be configured through the endpoint_mode configuration. If we need even more control, we can deploy a separate load balancer, e.g. nginx. A possible configuration is described in this medium article.
For future posts, please limit yourself to one question per post.

Container orchestration and some Docker functions

I am familiarizing myself with the architecture and practices for packaging, building and deploying software, or at least small pieces of software.
If I am mixing up concepts with specific tools (sometimes that is unavoidable), please let me know if I am wrong.
Along the way, I have been reading and learning about the terms image and container and how they relate to each other, in order to start building software workflows in the best possible way.
And I have a question about service orchestration in the context of Docker:
Containers are lightweight and portable encapsulations of an environment in which we have all the binaries and dependencies we need to run our application. OK
I can set up communication between containers using container links (the --link flag).
I can replace the use of container links with docker-compose in order to automate my service workflow and run multiple containers using a .yaml configuration file.
And I am reading about the term container orchestration, which describes the relationship between containers when we have distinct "software pieces" separate from each other, and how these containers interact as a system.
Well, I suppose that I've read the documentation well :P
My question is:
At the Docker level, are container links and docker-compose a form of container orchestration?
Or, with Docker, if I want to do container orchestration ... should I use docker-swarm?
You should forget you ever read about container links. They've been obsolete in pure Docker for years. They're also not especially relevant to the orchestration question.
Docker Compose is a simplistic orchestration tool, but I would in fact class it as one. It can start up multiple containers together, and if parts of the stack change it can restart the individual containers whose configuration changed. It is fairly oriented towards Docker's native capabilities.
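For example, a two-service compose file like the sketch below (service names and images are placeholders) starts both containers on a shared network, where each service can reach the other by its service name, with no --link flags involved:

version: "3.8"
services:
  web:
    image: example/web:latest     # placeholder application image
    ports:
      - "8080:8080"
    environment:
      DB_HOST: db                 # the db service name resolves via Compose's built-in DNS
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example  # throwaway credential for local use only

Running docker compose up -d brings the stack up; after editing the file, re-running the same command recreates only the containers whose configuration changed.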
Docker Swarm is mostly just a way to connect multiple physical hosts together in a way that docker commands can target them as a connected cluster. I probably wouldn't call that capability on its own "orchestration", but it does have some amount of "scheduling" or "placement" ability (Swarm, not you, decides which containers run on which hosts).
Of the other things I might call "orchestration" tools, I'd probably divide them into two camps:
General-purpose system automation tools that happen to have some Docker capabilities. You can use both Ansible and Salt Stack to start Docker containers, for instance, but you can also use these tools for a great many other things. They have the ability to say "run container A on system X and container B on system Y", but if you need inter-host communication or other niceties then you need to set them up as well (probably using the same tool).
Purpose-built Docker automation tools like Docker Compose, Kubernetes, and Nomad. These tend to have a more complete story around how you'd build up a complete stack with a bunch of containers, service replication, rolling updates, and service discovery, but you mostly can't use them to manage tasks that aren't already in Docker.
Some other functions you might consider (a brief sketch of how a couple of them look in practice follows the list):
Orchestration: How can you start multiple connected containers all together?
Networking: How can one container communicate with another, within the cluster? How do outside callers connect to the system?
Scheduling: Which containers run on which system in a multi-host setup?
Service discovery: When one container wants to call another, how does it know who to call?
Management plane: As an operator, how do you do things like change the number of replicas of some specific service, or cause an update to a newer image for a service?
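To make a couple of these concrete in Kubernetes terms (a hedged sketch; the names are placeholders): a Service answers the networking and service-discovery questions by giving a set of pods a stable in-cluster DNS name, and the management-plane question is a kubectl command away.

apiVersion: v1
kind: Service
metadata:
  name: backend              # other pods can now reach this as http://backend
spec:
  selector:
    app: backend             # traffic is routed to pods carrying this label
  ports:
  - port: 80                 # port exposed on the service name
    targetPort: 8080         # port the container actually listens on

kubectl scale deployment backend --replicas=5    # management plane: change the replica count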

Kubernetes for a Development Environment

Good day
We have a development environment that consists of 6 virtual machines. Currently we are using Vagrant and Ansible with VirtualBox.
As you can imagine, hosting this environment is a maintenance nightmare, particularly as versions of software/OS change. Not to mention the resource load on developer machines.
We have started migrating some virtual machines to docker. But this itself poses problems around orchestration, correct configurations, communication etc. This led me to Kubernetes.
Would someone be so kind as to provide some reasoning as to whether Kubernetes would or wouldn't be the right tool for the job? That is, managing and orchestrating 'development' Docker containers.
Thanks
This is quite a complex topic, and many things have to be considered to decide whether it's worth using k8s as a local dev environment. In particular, I used it when I wanted my local developer environment to be very close to the production one, which was running on Kubernetes. This helped avoid many configuration bugs.
In my opinion, Kubernetes (k8s) will provide you with all you need for a development environment.
It gives you a lot of flexibility and handles much of the configuration itself. A few examples:
An easy way to deploy a new version into the local Kubernetes stack
You prepare k8s replication controller files for each of your application modules (keep in mind that they need to be stateless modules)
In the replication controller you specify the docker image and that's it.
Using this approach you can push new docker images to the local docker registry and then use kubectl to control the lifecycle of your application; a minimal sketch of such a file follows.
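To be clear about what is assumed in this sketch: the module name, registry address, image tag, and port are illustrative placeholders, not taken from the answer above.

apiVersion: v1
kind: ReplicationController
metadata:
  name: your-application-service          # roughly matches the service scaled with kubectl below
spec:
  replicas: 1                             # k8s keeps this many pods of the module running
  selector:
    app: your-application-service
  template:
    metadata:
      labels:
        app: your-application-service
    spec:
      containers:
      - name: app
        image: localhost:5000/your-application-service:dev   # image pushed to the local docker registry
        ports:
        - containerPort: 8080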
Easy way to scale your application modules
For example:
kubectl scale rc your_application_service --replicas=3
This way k8s will check how many pods you have running for your service, and if it recognises that the number is smaller than the replicas value, it will create new ones to satisfy the replicas count.
It's an endless topic and many other things come to mind, but I would suggest you try it out.
There is a project for running the k8s cluster in Vagrant: https://github.com/kubernetes/kubernetes/blob/master/docs/devel/developer-guides/vagrant.md
Of course you have to remember that if you have many services, all of them have to be pushed to the local repository and run by k8s. This will require some time, but if you automate the local deploy with some custom scripts you won't regret it.
As wsl mentioned before, it is quite a complex topic, but I'm doing this as well at the moment, so let me summarize some things for you:
With Kubernetes (k8s) you're going to orchestrate your SaaS application. In the best case, it is a cloud-native application. The properties/requirements for a cloud-native application are formulated by the Cloud Native Computing Foundation (CNCF), which was basically formed around k8s after Google donated it to the Linux Foundation.
So the properties/requirements for a cloud-native application are: container packaged, dynamically managed and micro-services oriented (cncf.io/about/charter). You will benefit most from k8s if your applications are micro-service based and every service has a separate container.
With micro-service based applications, every service can be developed independently. The developer only needs to follow the 12-factor methodology (12factor.net), for example using env vars instead of hard-coded IP addresses, etc.
In the next step the developer builds the container for a service and pushes it to a container registry. For a local development environment, you may need to run a container registry inside the cluster as well, so the developer can push and test code locally.
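As a hedged sketch of that step, assuming the registry is reachable at localhost:5000 (running the standard registry:2 image is one way to get there):

# Start a throwaway local registry (one possible approach)
docker run -d -p 5000:5000 --name registry registry:2

# Build the service image and push it to the local registry
docker build -t localhost:5000/my-service:dev .
docker push localhost:5000/my-service:dev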
Then you're able to define your k8s replication controllers, services, PetSets, etc. with ports, port mappings, env vars, container images, and create and run them inside the cluster.
The k8s documentation recommends Minikube for running k8s locally (kubernetes.io/docs/getting-started-guides/minikube/). With Minikube you get features like DNS, NodePorts, ConfigMaps and Secrets, and Dashboards.
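For example (a hedged sketch with a current Minikube CLI; the deployment name and image are placeholders), the NodePort feature is what lets you reach a workload inside Minikube from your host:

minikube start
kubectl create deployment hello --image=nginx:alpine
kubectl expose deployment hello --type=NodePort --port=80
minikube service hello --url          # prints a URL reachable from the host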
But I chose the multi-node CoreOS Kubernetes with Vagrant cluster for my development environment, as Puja Abbassi mentioned in the blog post "Finding The Right Local Kubernetes Development Environment" (https://deis.com/blog/2016/local-kubernetes-development-environment/), because it is closer to my production environment (12-factor: 10 - dev/prod parity).
With the Vagrant Environment you got features like:
Networking with flannel
Service Discovery with etcd
DNS names for a set of containers with SkyDNS
internal load balancing
If you want to know how everything works, look inside this GitHub repo: github.com/coreos/coreos-kubernetes/tree/master/multi-node (vagrant and generic folders).
So you have to ask yourself whether you or your developers really need to run a complete "cloud environment" locally. In many cases a developer can develop a service (based on micro-services and containers) independently.
But sometimes it is necessary to have multiple or all services run on your local machine as a dev-environment.

What is the difference between Docker Swarm and Kubernetes/Mesosphere?

From what I understand, Kubernetes/Mesosphere is a cluster manager and Docker Swarm is an orchestration tool. I am trying to understand how they are different. Is Docker Swarm analogous to the POSIX API in the Docker world, while Kubernetes/Mesosphere are different implementations? Or are they different layers?
Disclosure: I'm a lead engineer on Kubernetes
Kubernetes is a cluster orchestration system inspired by the container orchestration that runs at Google, built by many of the same engineers who built that system. It was designed from the ground up to be an environment for building distributed applications from containers. It includes replication and service discovery as core primitives, whereas such things are added via frameworks in Mesos. The primary goal of Kubernetes is to be a system for building, running and managing distributed systems.
Swarm is an effort by Docker to extend the existing Docker API to make a cluster of machines look like a single Docker API. Fundamentally, our experience at Google and elsewhere indicates that the node API is insufficient for a cluster API. You can see a bunch of discussion on this here: https://github.com/docker/docker/pull/8859 and here: https://github.com/docker/docker/issues/8781
Swarm is a very simple add-on to Docker. It currently does not provide all the features of Kubernetes. It is currently hard to predict how the ecosystem of these tools will play out; it's possible that Kubernetes will make use of Swarm.
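As a rough illustration with today's Swarm mode CLI (which post-dates this answer, so treat it as an assumption about current tooling rather than what the answer described), the idea of making a cluster of machines look like a single Docker API boils down to commands like:

docker swarm init                                        # turn the current host into a Swarm manager
docker swarm join --token <worker-token> <manager-ip>:2377   # run on each additional node
docker service create --name web --replicas 3 -p 80:80 nginx:alpine   # schedule 3 replicas across the cluster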
