I have been having great success so far using Mesos, Marathon, and Docker to manage a fleet of servers and the containers I'm placing on them.
However, due to some problems I had to destroy my MesosMaster machine, and now that I have started it again and reconnected it to the ZooKeeper cluster, I no longer see any Docker containers in Marathon.
Those Docker containers are in fact still running, since they run on the MesosSlaves, and I can see them when I SSH into those slave servers. But they no longer appear in the Marathon GUI.
I could go back and start all the containers again manually from the Marathon GUI (which takes a lot of time), but I think there has to be a smarter way to do it.
Can somebody help me out here? My understanding of how Mesos and ZooKeeper work tells me that my actions above should not have had these implications, and that the Mesos cluster should be smart enough to see the Docker containers.
Related
I am new to cluster container management, and this question is probably the starting point for all the freshers over here.
I have read some documentation, but my understanding is still not clear, so any leads that help me understand would be appreciated.
Somewhere it is mentioned that Minikube is used to run Kubernetes locally. So if I want to manage a cluster across my four Raspberry Pi nodes, Minikube is not an option?
Does Minikube support only single-node systems?
Docker Compose is a set of instructions and a YAML file to configure and start multiple Docker containers. Can we use it to start containers on different hosts? Then, for simple orchestration where I need to call a container on a second host, I don't need any cluster management, right?
What is the link between Docker Swarm and Kubernetes? Both are independent cluster-management systems. Is it efficient to use Kubernetes on a Raspberry Pi? Are there any issues? I was told that Kubernetes on a single node takes up all the memory and CPU. Is that true?
Are there other cluster-management options for Raspberry Pi?
I think answers to this set of 4-5 questions will help me best.
Presuming that your goal here is to run a set of containers over a number of different Raspberry Pi-based nodes:
Minikube isn't really appropriate. It starts a single virtual machine on a Windows, macOS, or Linux host and installs a Kubernetes cluster into it. It's generally used by developers to quickly start up a cluster on their laptops or desktops for development and testing purposes.
Docker Compose is a system for managing sets of related containers. So, for example, if you had a web server and a database that you wanted to manage together, you could put them in a single Docker Compose file.
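A minimal sketch of that idea; the service names and images (nginx, postgres) are illustrative stand-ins, not anything specific to the question:

# docker-compose.yml for a hypothetical web server + database pair
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine      # stand-in for your web server
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:13       # stand-in for your database
    environment:
      POSTGRES_PASSWORD: example
EOF
docker-compose up -d         # starts both containers together on this one host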
Docker Swarm is a system for managing sets of containers across multiple hosts. It's essentially an alternative to Kubernetes. It has fewer features than Kubernetes, but it is much simpler to set up.
If you want a really simple multi-node container cluster, I'd say that Docker Swarm is a reasonable choice. If you explicitly want to experiment with Kubernetes, I'd say that kubeadm is a good option here. Kubernetes in general has higher resource requirements than Docker Swarm, so it could be somewhat less suited to the Pi, although I know people have successfully run Kubernetes clusters on Raspberry Pis.
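To give a feel for how little setup Swarm needs on a handful of Pis, a hedged sketch (the IP address is a placeholder):

# On the Pi you pick as the manager:
docker swarm init --advertise-addr 192.168.1.10
# The command prints a "docker swarm join --token ..." line;
# run that on each remaining Pi to add it as a worker.
# Back on the manager, confirm the nodes joined:
docker node ls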
Docker Compose
A utility to start multiple Docker containers on a single host using a single docker-compose up. This makes it easier to start multiple containers at once, rather than having to issue multiple docker run commands.
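For comparison, a small sketch (the images are placeholders): the same two containers started by hand versus with Compose:

# Without Compose: one docker run per container, in the right order
docker run -d --name db -e POSTGRES_PASSWORD=example postgres:13
docker run -d --name web -p 8080:80 nginx:alpine
# With Compose: one command brings up everything defined in docker-compose.yml
docker-compose up -d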
Docker swarm
A native container orchestrator for Docker. Docker Swarm allows you to create a cluster of Docker containers running on multiple machines. It provides features such as replication, scaling, and self-healing (i.e., starting a new container when one dies).
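A sketch of those features in practice, assuming nginx as a stand-in image:

# Run 3 replicas of a service spread across the cluster
docker service create --name web --replicas 3 -p 8080:80 nginx:alpine
# Scaling is one command
docker service scale web=5
# Self-healing: if a task dies, Swarm starts a replacement; watch with
docker service ps web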
Kubernetes
Also a container orchestrator. Kubernetes and Docker Swarm can be considered alternatives to one another. They both handle the scheduling and management of containers running in a cluster.
Minikube
Creating a real Kubernetes cluster requires multiple machines, either on premises or on a cloud platform. This is not always convenient for someone who is new to Kubernetes and just trying to learn by playing around with it. To solve that, Minikube lets you start a very basic Kubernetes cluster consisting of a single VM on your machine, which you can use to play around with Kubernetes.
Minikube is not for production or multi-node clusters. There are many tools that can be used to create a multi-node Kubernetes cluster, such as kubeadm.
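For a sense of the difference, a minimal sketch (the kubeadm placeholders are left as placeholders):

# Minikube: a one-VM playground cluster
minikube start
kubectl get nodes          # shows a single node
# kubeadm: a real multi-node cluster - init the control plane,
# then run the printed join command on each worker:
# kubeadm init
# kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash <hash>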
Containers are the future of application deployment. A container is the smallest unit of deployment in Docker. There are three components in Docker: the Docker Engine, to run a single container; Docker Compose, to run a multi-container application on a single host; and Docker Swarm, to run a multi-container application across hosts, which is also an orchestration tool.
In Kubernetes, the smallest unit of deployment is a Pod (which can be composed of multiple containers). Minikube gives you a single-node cluster that you can install locally to try, test, and get a feel for Kubernetes features. But you can't scale it beyond a single machine. Kubernetes is an orchestration tool like Docker Swarm, but more capable than Docker Swarm with respect to features, scaling, resiliency, and security.
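A minimal sketch of a Pod, with nginx as a stand-in image:

# pod.yaml - the smallest deployable unit in Kubernetes
cat > pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  containers:
  - name: web
    image: nginx:alpine
EOF
kubectl apply -f pod.yaml
kubectl get pods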
You can do the analysis and think about which tool fits your requirements. Each one has its own pros and cons: Docker Swarm is good and easy for managing small clusters, whereas Kubernetes is much better for larger ones. There is another orchestration tool, Mesos, which is also popular and used in the largest clusters.
Check out Choose Your Own Adventure, but it's just a general analogy, intended only to aid understanding, because all three technologies are evolving rapidly.
I get the impression you're mostly looking for confirmation, and I'm happy to help with that if I can.
Yes, minikube is local-only
Yes, minikube is intended to be single-node
Docker Compose isn't really an orchestration system like Swarm and Kubernetes are. It helps with running related containers on a single host, but it is not used for multi-host deployments.
Kubernetes and Docker Swarm are both container orchestration systems. These systems are good at managing scaling, but they have an overhead associated with them, so they're better suited to multi-node setups.
I don't know the range of orchestration options for Raspberry Pi, but there are Kubernetes examples out there such as Build Your Own Cloud with Kubernetes and Some Raspberry Pi.
For the Pi, you can use Docker Swarm Mode on one or more Pis. You can even run ARM emulation for testing on Docker for Windows/Mac before trying to get it all working directly on a Pi. The same goes for Kubernetes, as it's built into Docker for Windows/Mac now (no Minikube needed).
Alex Ellis has a good blog on Pi and Docker, and this post may help too.
I've been playing around with orchestrating Docker containers on a subnet of Raspberry Pis (3Bs).
I found Docker Swarm the easiest to set up and work with, and adequate for my purposes. Guide: https://docs.docker.com/engine/swarm/swarm-tutorial/
For Kubernetes there are two main options: k3s and microk8s. Some guides (quick-start commands are sketched after the links):
k3s
https://bryanbende.com/development/2021/05/07/k3s-raspberry-pi-initial-setup
microk8s
https://ubuntu.com/tutorials/how-to-kubernetes-cluster-on-raspberry-pi#1-overview
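At the time of writing, the quick-start commands from those guides look roughly like this (the server IP and token are placeholders):

# k3s: install the server on one Pi...
curl -sfL https://get.k3s.io | sh -
# ...then join each remaining Pi as an agent; the token is on the server
# in /var/lib/rancher/k3s/server/node-token
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -

# microk8s: install via snap on each Pi, then join them together
sudo snap install microk8s --classic
microk8s add-node          # prints the join command for the other Pis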
We are using Jenkins and Docker for CI/CD. Our Jenkins is set up in master/slave style, where slaves are distributed across different data centers. When a new build needs to happen, the Jenkins master identifies a slave in one of the DCs, spins up an ephemeral container, and tears it down once done.
Due to firewall limitations, we only have about 10 ports open for the slaves in some of the DCs, for example the port range 8000-8010. In general, Docker uses the Linux ephemeral port range, 32768 to 61000. The problem is that the Jenkins master cannot talk to the containers if the host port is bound outside 8000-8010. The Jenkins Docker plugin has a limitation where you cannot bind multiple ports (maybe I am wrong here). I would like to know if there is any way we can configure this on the Docker end or in the Jenkins Docker plugin.
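To illustrate, with a hypothetical slave image, what I would want is the host port pinned explicitly inside the allowed range rather than randomly assigned:

# Random host port (default behavior) - lands in 32768-61000:
docker run -d -P my/jenkins-slave
# What we need - an explicit host port inside the open range:
docker run -d -p 8000:22 my/jenkins-slave   # next slave would get 8001, and so on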
After researching in many forums and talking to people: this is not possible, and it is not even recommended to try. The recommended way to overcome this issue is to move to Docker Swarm, where you have a single virtual Docker cloud that takes care of spinning up containers behind the scenes and keeps them ready for consumption even before the need arises. The configuration options are flexible.
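Sketched in today's Swarm-mode syntax (note that the link below is to classic Swarm; the image and ports here are illustrative): a service publishes one fixed, firewall-friendly port on every node, instead of a random ephemeral port per container:

docker service create --name jenkins-slave --publish 8000:22 my/jenkins-slave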
Read more about Swarm here
https://docs.docker.com/swarm/
I have a silly question regarding Docker Swarm.
I am thinking I can start a web application image in two containers, either on the same server or on two VM servers, and then start a load-balancer container pointing to the two web app containers through their IPs and ports.
In this case, why do I need Docker Swarm for cluster management? What benefits can Docker Swarm bring?
I have read the Docker documentation, but it only introduces what Swarm is and how to use it. I cannot find an answer to why I should use Swarm.
Thanks
What is Swarm managing? It turns a pool of Docker hosts into a single, virtual Docker host.
Can Swarm auto-restart a container if the container dies? Yes, it can; so can the Docker daemon on each host, via restart policies (see the sketch after this list).
Can Swarm auto-create more nodes if resources run short? No, it cannot. It does not aim to provide this service. Nevertheless, you can write your own automation that starts a node and runs containers when needed.
Which means, if traffic grows fast, do we still manually create more nodes and deploy more containers? Yes, unfortunately.
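For the per-host restart mentioned above, a restart policy on the container is enough, no Swarm required; a minimal sketch with nginx as a stand-in image:

# The local Docker daemon restarts this container whenever it dies
docker run -d --restart=always --name web nginx:alpine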
Update
If needed, here is an answer that details how to deploy a Swarm cluster.
Looking at Rancher, what is the performance like? I guess my main question is: is everything deployed in Rancher Docker-in-Docker? After reading http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/ I'm trying to stay away from that idea. It looks like the Rancher CI pipeline with Docker/Jenkins is Docker-in-Docker, but what about the rest? If I set up a docker-compose or deploy something from their catalog, is it all Docker-in-Docker? I've read through their documentation, and this simple question has still just flown over my head. Any guidance would be much appreciated.
Thank you
Rancher itself is not deployed with Docker in Docker (DinD). The main components of Rancher, rancher/server and rancher/agent, are both normal containers. The server, in a normal deployment, runs the orchestration piece and a few other key services for the catalog, Docker Machine provisioning, websocket-proxy, and MySQL. All of these can be broken out if desired, but for simplicity of getting started, it's all in one. We use s6 to manage the orchestration and database processes.
The rancher/agent container is privileged and requires the user to bind-mount the host's Docker socket. We package a Docker binary in the container and use it to communicate with the host on startup. It is similar to the way a Mac talks to Boot2Docker: the binary is just a client talking to a remote Docker daemon. Once the agent is bootstrapped, it communicates back to the Rancher server container over a websocket connection. When containers and stacks are deployed, Rancher server sends events to the agents, which then call the host's Docker daemon for deployment. The deployed containers run as normal Docker containers on the host, just as if the user had typed docker run .... In fact, a neat feature of Rancher is that if you do type docker run ... on the host, the resulting container will show up in the Rancher UI.
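The agent launch described above looks roughly like this; the version tag, server URL, and registration token are placeholders, not exact values:

sudo docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/rancher:/var/lib/rancher \
  rancher/agent:<version> http://<rancher-server>:8080/v1/scripts/<token>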
The Jenkins entry in the Rancher catalog, when using the Swarm plugin, does a host bind mount of the Docker socket as well. We have some early experiments that used DinD to test out some concepts with Jenkins, but those were not released.
We're running a Mesos cluster and Jenkins for our continuous integration workflow.
Jenkins is configured with the Mesos plugin.
Previously we built our Docker images in Mesos containers. Now we are switching to Docker containers for building our Docker images.
I've been searching for the advantage of building our Docker images inside a Docker container with a DinD image like "dind-jenkins-slave", found on Docker Hub.
With DinD you lose the caching opportunities you get when sharing the host's docker.sock. And with DinD you also have to pass the privileged parameter.
What is the downside of just sharing the host's docker.sock?
I'm using the shared-docker.sock approach. The only downside I see is security: you can do anything you want to the host once you can run arbitrary Docker containers. But if you trust the people who create the jobs, or can control which Docker containers with which options can be run from Jenkins, then giving access to the main Docker daemon is an easy solution.
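A minimal sketch of that approach, using the official docker CLI image:

# Bind-mount the host's socket; "docker ps" inside now lists the HOST's containers
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker ps

That last point is exactly the security caveat: a container holding the socket fully controls the host's daemon.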
It depends on what you're doing, really. To get our Jenkins jobs truly isolated so that we can run as many as we want in parallel, we switched to DinD. If you share the host socket you still have only a single Docker daemon: port conflicts, pulling/pushing multiple images from separate jobs, and one job relying on an image or build that is also being messed with by another job are all issues.
To get around the caching issue, I create the DinD container and leave it around. I run
docker start -a dindslave || docker run --privileged --name dindslave -v ${WORKSPACE}:/data my/dindimage jenkinscommands.sh
Then Jenkins just writes its commands to jenkinscommands.sh and restarts the container every time. When you remove the container you remove your cache as well, and you don't share caches between jobs, if that is something you want... but you don't have to think about jobs interfering with one another, or about making sure they are running on the same host.