What does container churn mean in Docker?
I am watching this video. At 9:46 they talk about how fast containers can churn. What does churn mean here?
I Googled the definition of "churn" and it says a churn is a machine or container in which butter is made by agitating milk or cream. So does that mean Docker containers can make real butter and milk?
I can find no other definition of container churning online.
I just want to know: is there any such facility available in Docker now? I have already gone through some of the Docker documentation regarding multi-host facilities, such as:
Docker swarm
Docker service (with replicas)
I am also aware of the volume problems in swarm mode, and that the maximum resources (RAM and CPU) available to a container vary depending on which machine the swarm manager assigns it to. So here is my question:
How can I run a single container instance across multiple machines (not as a service)? That is, can a single container acquire all the resources [RAM1 + RAM2 + ... + RAMn] of the connected machines?
Is there any way to achieve this?
My question may be naive, but I am curious to know how this could be achieved.
The answer is no. Containerization technologies cannot pool compute, network, and storage resources across a cluster as one unit; they can only orchestrate them.
Docker and co. are built on cgroups, namespaces, layered filesystems, virtual networks, etc. All of these are tied to a specific machine and its running processes, and additional services (for example Mesos, Kubernetes, or Swarm) are required to manage containers across a cluster rather than on a single machine.
You can look at products such as Hadoop, Spark, Cassandra, the Akka framework, and other distributed computation implementations for examples of how to manage cluster resources as one unit.
P.S. Always keep in mind that system complexity grows as components become more distributed.
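To make the per-machine binding concrete: resource limits in a compose file always apply to one container on one host. The service name, image, and values below are purely illustrative:

```yaml
services:
  app:
    image: nginx:alpine        # illustrative image
    deploy:
      resources:
        limits:
          cpus: "2.0"          # capped on whichever single node runs this container
          memory: 4G           # can never exceed one host's RAM, let alone the cluster's
```

However many nodes you add, this container's ceiling is always the RAM and CPU of the one machine it lands on.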
I am familiarizing myself with the architecture and practices for packaging, building, and deploying software, or at least small pieces of software.
I may be mixing concepts with specific tools (sometimes that is unavoidable); please let me know if I am wrong.
Along the way I have been reading and learning about the terms image and container and how they relate, in order to start building software workflows in the best possible way.
And I have a question about service orchestration in the context of Docker:
Containers are lightweight, portable encapsulations of an environment containing all the binaries and dependencies we need to run our application. OK.
I can set up communication between containers using container links (the --link flag).
I can replace container links with docker-compose to automate my service workflow, running multiple containers from a .yaml configuration file.
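For example, as I understand it, a minimal docker-compose.yml like the following (the service names and images are just placeholders) lets web reach db by its service name, with no --link needed:

```yaml
version: "3"
services:
  web:
    image: nginx:alpine        # placeholder image
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:15         # placeholder image
    environment:
      POSTGRES_PASSWORD: example
```

On the default network Compose creates, web can connect to the database simply at the hostname db.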
And I am reading about the term container orchestration, which describes the relationships between containers when we have distinct pieces of software separated from each other, and how those containers interact as a system.
Well, I suppose I have read the documentation well :P
My question is:
At the Docker level, are container links and docker-compose a form of container orchestration?
Or, if I want to do container orchestration with Docker, should I use docker-swarm?
You should forget you ever read about container links. They've been obsolete in pure Docker for years. They're also not especially relevant to the orchestration question.
Docker Compose is a simplistic orchestration tool, but I would in fact class it as one. It can start multiple containers together, and it can restart individual containers of the stack if their configurations change. It is fairly oriented towards Docker's native capabilities.
Docker Swarm is mostly just a way to connect multiple physical hosts together in a way that docker commands can target them as a connected cluster. I probably wouldn't call that capability on its own "orchestration", but it does have some amount of "scheduling" or "placement" ability (Swarm, not you, decides which containers run on which hosts).
Of the other things I might call "orchestration" tools, I'd probably divide them into two camps:
General-purpose system automation tools that happen to have some Docker capabilities. You can use both Ansible and Salt Stack to start Docker containers, for instance, but you can also use these tools for a great many other things. They have the ability to say "run container A on system X and container B on system Y", but if you need inter-host communication or other niceties then you need to set them up as well (probably using the same tool).
Purpose-built Docker automation tools like Docker Compose, Kubernetes, and Nomad. These tend to have a more complete story around how you'd build up a complete stack with a bunch of containers, service replication, rolling updates, and service discovery, but you mostly can't use them to manage tasks that aren't already in Docker.
Some other functions you might consider:
Orchestration: How can you start multiple connected containers all together?
Networking: How can one container communicate with another, within the cluster? How do outside callers connect to the system?
Scheduling: Which containers run on which system in a multi-host setup?
Service discovery: When one container wants to call another, how does it know who to call?
Management plane: As an operator, how do you do things like change the number of replicas of some specific service, or cause an update to a newer image for a service?
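As a concrete sketch of several of these functions in one place, a Swarm stack file might look like the following (the service names and images are hypothetical):

```yaml
# Deployed with: docker stack deploy -c stack.yml myapp
services:
  api:
    image: example/api:1.2     # hypothetical image
    deploy:
      replicas: 3              # scheduling/placement: Swarm spreads 3 tasks across nodes
    networks:
      - backend
  worker:
    image: example/worker:1.2  # hypothetical image
    networks:
      - backend                # service discovery: worker reaches the api tasks at hostname "api"
networks:
  backend:
```

The management plane then looks like running docker service scale myapp_api=5 to change the replica count, or bumping the image tag and redeploying the stack for a rolling update.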
Given that I have only one machine (a high-configuration laptop), can I run the entire DC/OS on my laptop (purely for simulation/learning purposes)? The way I was thinking to set this up was with some number N of Docker containers (with networking enabled between them), where some of those N would be masters, some slaves, maybe one ZooKeeper, and one container to run the scheduler/application. So basically one Docker container would be synonymous with a machine instance in this case (since I don't have multiple machines, and using multiple VMs on one machine would be overkill).
Has this already been done, so that I can try it out directly, or am I completely missing something in my understanding?
We're running such a development configuration, where ZooKeeper, the Mesos masters and slaves, as well as Marathon, run fully dockerized (but on a 3-machine bare-metal cluster) on the latest stable CoreOS. It has some known downsides; for example, when a slave dies, its running tasks AFAIK cannot be recovered by the restarted slave.
I think it also depends on the OS you're running on your laptop. If it's non-Windows, you should normally be fine. If your system supports systemd, you can have a look at tobilg/coreos-setup to see how I start the Mesos services via Docker.
Still, I would recommend using a Vagrant/VirtualBox solution if you just want to test how Mesos works/"feels"... it will probably save you some headaches compared to a from-scratch setup. The tobilg/coreos-mesos-cluster project runs the services via Docker on CoreOS within Vagrant.
Also, you can have a look at dharmeshkakadia/awesome-mesos and especially the Vagrant based setup section to get some references.
Have a look at https://github.com/dcos/dcos-docker; it is quite young, but it enables you to do exactly what you want.
It starts a DC/OS cluster with masters and agents on a single node in docker containers.
I'm looking for a way to combine the power of 4 HPC nodes. The nodes are identical: 32 cores and 128 GB RAM each. The idea is to give students at our university the ability to deploy their containers and run their programs on them.
I've read something about Docker Machine and Swarm, but those are more about high availability. Any ideas? :)
Kind regards,
Sub
Be aware that the Docker daemon runs as root, which effectively grants users the ability to mount the host filesystem as root...
see
https://stackoverflow.com/a/30754910/771372
Suppose I want to run several Docker containers at the same time.
Is there any formula I can use to find out in advance on how many containers can be run at the same time by a single Docker host? I.e., how much CPU, memory & co. do I have to take into account for the containers themselves?
It's not a formula per se, but you can gather information about resource usage in the container by examining Linux control groups in /sys/fs/cgroup.
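That said, here is a rough back-of-envelope estimate (my own sketch, not anything Docker provides): divide the host's resources by each container's expected footprint, reserving some headroom for the daemon and the OS. The reservation values are assumptions you should tune for your host.

```python
def max_containers(host_mem_gb, host_cpus, mem_per_container_gb, cpus_per_container,
                   reserved_mem_gb=2.0, reserved_cpus=1.0):
    """Rough upper bound on concurrently runnable containers on one host.

    The real limit also depends on I/O, kernel limits, and workload bursts,
    so treat the result as a starting point, not a guarantee.
    """
    by_mem = (host_mem_gb - reserved_mem_gb) // mem_per_container_gb
    by_cpu = (host_cpus - reserved_cpus) // cpus_per_container
    # The scarcer resource decides the bound
    return int(min(by_mem, by_cpu))

# e.g. a 64 GB / 16-CPU host, containers needing ~2 GB and ~0.5 CPU each
print(max_containers(64, 16, 2, 0.5))  # → 30 (CPU-bound: 15 spare CPUs / 0.5 each)
```

Measure real per-container usage (e.g. with docker stats or the cgroup data above) before trusting the per-container numbers you plug in.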
Links
See this excellent post by Jérôme Petazzoni of Docker, Inc on the subject.
See also Google's cAdvisor tool to view container resource usage.
This IBM research paper documents that Docker performance is higher than KVM in every measurement.
docker stats is also useful for getting a rough idea of how much CPU and memory your containers use.
cAdvisor will provide resource usage and other interesting stats about all containers on a host. We have a preliminary setup for root usage, but we are adding a lot more this week.