Docker: Monitor logical units of linked containers

I'm running several micro-services in docker containers. Each service consists of 2 or more containers that I start/stop using docker-compose.
As the number of containers grows I'd like to monitor them, but so far I haven't found a method or (self-hosted) tool that monitors linked containers as a logical unit (a service) rather than as individual containers.
Do you have any recommendations?

That capability is built into a project like Kubernetes, which groups containers into pods.
You can then run Heapster, a cluster-wide aggregator of monitoring and event data. The Heapster pod discovers all nodes in the cluster and queries usage information from the nodes’ Kubelets, the on-machine Kubernetes agent.
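For illustration, here is a minimal sketch of a pod manifest that groups two containers into one logical unit; the names and images are hypothetical:

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-service        # the logical unit that monitoring reports on
      labels:
        app: my-service
    spec:
      containers:
      - name: app             # hypothetical application container
        image: example/my-app:latest
      - name: worker          # hypothetical companion container
        image: example/my-worker:latest

Heapster then aggregates usage data per pod, which should give you the per-service view rather than the per-container one.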

You can try Ruxit, which provides service-oriented monitoring for Docker containers. It automatically detects the services running in the containers and shows them as separate entities on the dashboard, even if a service is provided by multiple containers across hosts.

Related

What are a cluster and a node in the context of containers?

Sorry for this question, but I just started with Docker and Docker Compose, and I really didn't need any of this until I read that I need to use Docker Swarm or Kubernetes to have more stability in production. I started reading about Docker Swarm, and they mentioned nodes and clusters.
I was happy enough with my understanding of docker-compose: I could manage my services/containers from a single file, and only had to run a few commands to launch, build, or delete all my services based on the docker-compose configuration.
But now nodes and clusters have come up and I've gotten a bit lost, so I'd appreciate help understanding this next step in the life of containers. I've been googling, but it's still not very clear to me.
I hope you can help me and explain it to me in a way that I can understand.
Thank you!
A node is just a physical or virtual machine.
In a Kubernetes/Docker Swarm context, each node must have the relevant binaries installed (Docker Engine, kubelet, etc.).
A cluster is a grouping of one or more nodes.
If you have just been testing on your local machine you have a single node.
If you were to add a second machine and link the two together using Docker Swarm/Kubernetes, you would have created a 2-node cluster.
You can then use docker swarm/kubernetes to run your services/containers on any or all nodes in your cluster. This allows your services to be more resilient and fault tolerant.
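As a concrete sketch, turning two machines into a 2-node swarm cluster takes only a couple of commands (the token and manager IP are placeholders printed by the first command):

    # on the first machine: create the cluster (this node becomes a manager)
    docker swarm init

    # on the second machine: join using the token printed above
    docker swarm join --token <worker-token> <manager-ip>:2377

    # back on the manager: list both nodes in the cluster
    docker node ls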
By default Docker Compose runs a set of containers on a single system. If you need to run more containers than fit on one system, or you're just afraid of that system crashing, you need more than one system to do it. The cluster is the group of all of the systems (physical computers, virtual machines, cloud instances) that are working together to run the containers. Each of those individual systems is a node.
The other important part of cluster container setups is that you can generally run multiple replicas of a given container, and you don't care where in the cluster they run. Say you have five nodes and a web server container, and you'd like to run three copies of it for redundancy. Instead of having to pick a node, ssh to it, and manually docker run there, you just tell the cluster manager "run me three of these", and it chooses a node and launches the container for you. You can also scale the containers up and down at runtime, or potentially set the cluster to do the scaling on its own based on load.
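In Docker Swarm, for instance, "run me three of these" really is a one-liner; the service name and image here are only illustrative:

    # ask the cluster manager for three replicas; it picks the nodes for you
    docker service create --name web --replicas 3 nginx

    # scale up or down later at runtime
    docker service scale web=5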
If your workload is okay running a single copy of containers on a single server, you don't need a cluster setup. (You might have some downtime during updates or if the single server dies.) Swarm has the advantages of being bundled with Docker and being able to use Docker-native tools (docker-compose can deploy to a Swarm cluster). Kubernetes is much more complex, but at this point most public cloud providers will sell you a preconfigured Kubernetes cluster, and it has better stories around security, storage management, and autoscaling. There are also a couple of other less-prominent alternatives like Nomad and Mesos out there.

Difference between Docker container and service

I'm wondering whether there are any differences between the following docker setups.
Administering two separate docker engines via the remote API.
Administering two docker swarm nodes via one single docker engine.
I'm wondering: if you can administer a swarm and still run a container on a specific node, are there any use cases for having separate docker engines?
The difference between the two is swarm mode. When a docker engine is running services in swarm mode you get:
Orchestration from the manager to continuously try to correct any differences between the current state and the target state. This can also include HA using the quorum model (as long as a majority of the managers are reachable to make decisions).
Overlay networking which allows containers on different hosts to talk to each other on their own container network. That can also involve IPSEC for security.
Mesh networking for published ports, plus a VIP for each service that doesn't change the way container IPs do. The mesh has every node in the swarm publish the port and route traffic to a container providing that service; the stable VIP avoids problems with DNS caching.
Rolling upgrades to avoid any downtime with replicated services.
Load balancing across multiple nodes when scaling up a service.
More details on swarm mode are available from docker's documentation.
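As a rough sketch of several of these features at once (the names, image, and ports are illustrative): the service below gets a stable VIP, has its port published on every node via the routing mesh, and is load-balanced across its replicas; the update is then rolled out one task at a time:

    # requires a swarm: docker swarm init
    docker service create --name api --replicas 2 --publish 8080:80 nginx

    # rolling upgrade: tasks are replaced incrementally, avoiding downtime
    docker service update --image nginx:alpine api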
The downside of swarm mode is that you are one layer removed from the containers when they run on a remote node. You can't run an exec command on a task to investigate it; you need to run it against the underlying container, from the node where that container is currently scheduled. Docker also removed some options from services, like --volumes-from, which don't apply when containers may be running on different machines.
If you think you may grow beyond running containers on a single node, need to communicate between the containers on different nodes, or simply want the orchestration features like rolling upgrades, then I would recommend swarm mode. I'd only manage containers directly on the hosts if you have a specific requirement that prevents swarm mode from being an option. And you can always do both, manage some containers directly and others as a service or stack inside of swarm, on the same nodes.

When to use Docker-Compose and when to use Docker-Swarm

I'm trying to understand the differences or similarities between Docker-Compose and Docker-Swarm.
By reading the documentation I have understood that docker-compose provides a mechanism to bind different containers together so they work in collaboration, as a single service (I'm guessing it uses the same functionality as the --link option used to link two containers).
Also, my understanding of docker-swarm is that it allows you to manage a cluster of different docker-hosts, each of which is running several container instances of some docker-images. We could define connections as overlay-networks between different containers in the swarm (even if they across two docker-hosts in the swarm) to connect them as a unit.
What I'm trying to understand is: has docker-swarm superseded docker-compose, and are overlay networks the new (recommended) way to connect containers?
Or is docker-compose still an integral part of the Docker family, and is it expected and advisable to use it to connect containers to work in collaboration? If so, does docker-compose work with containers across different nodes in the swarm?
Or are overlay networks for connecting containers across different hosts in the swarm, while docker-compose is for creating internal links?
Besides, I also see it mentioned in the Docker documentation that --link is no longer recommended and will soon be obsolete.
I'm a bit confused.
Thanks a lot!
It will probably help to start with a few definitions:
docker-compose: Command used to configure and manage a group of related containers. It is a frontend to the same APIs used by the docker CLI, so you can reproduce its behavior with commands like docker run.
docker-compose.yml: Definition file for a group of containers, used by docker-compose and now also by swarm mode.
swarm mode: Used to manage a group of docker engines as a single entity and provide orchestration (constantly trying to correct any differences between the current state and the target state).
service: One or more containers running the same image and configuration within the swarm; multiple containers provide scalability.
stack: One or more services within a swarm; these may be defined using a DAB or a docker-compose.yml file.
bridge network: Network managed by a single docker engine where multiple containers may communicate with each other. You may have multiple networks managed by an engine, and containers can be attached to zero or more networks.
overlay network: Similar to a bridge network but spanning multiple docker engines. These require a key/value store to maintain their state. Swarm mode provides this, but if swarm mode is disabled, you may also use etcd, consul, or zookeeper.
links: a method to connect containers together that predates the bridged network. Its usage is no longer recommended.
classic swarm: A predecessor to the integrated swarm mode that runs as a container, allows multiple engines to appear as one, but does not provide orchestration or include its own k/v store.
To answer the questions:
has docker-swarm superseded docker-compose, and are overlay networks the new (recommended) way to connect containers?
Or is docker-compose still an integral part of the Docker family, and is it expected and advisable to use it to connect containers to work in collaboration? If so, does docker-compose work with containers across different nodes in the swarm?
They provide different functionality and will continue to both serve a purpose. docker-compose cannot start containers inside swarm mode, but a newer version of the docker-compose.yml file (version 3) can be used to define a stack directly in swarm mode without using docker-compose itself. docker-compose is needed to manage containers outside of swarm mode, on a single docker engine or with classic swarm.
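For example, a minimal version 3 file like this sketch (image and names are illustrative) works both ways: docker-compose up runs it on a single engine, while docker stack deploy -c docker-compose.yml mystack runs it as a stack in swarm mode, where the deploy: section takes effect:

    version: "3"
    services:
      web:
        image: nginx
        ports:
          - "8080:80"
        deploy:             # honored by swarm mode, ignored by docker-compose
          replicas: 2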
Or are overlay networks for connecting containers across different hosts in the swarm, while docker-compose is for creating internal links?
Besides, I also see it mentioned in the Docker documentation that --link is no longer recommended and will soon be obsolete.
Starting with version 2 of the yml file, docker-compose connects multiple containers together by default with a new bridged network per project (the project name defaults to the directory name). With classic swarm, that would default to an overlay network using an external k/v store. And with a swarm mode stack, this would be an overlay network.
Using docker networks is the preferred way to have containers communicate with each other. You want a network per group of containers you wish to isolate from the rest of your docker environment. docker-compose automates this network creation, but you can also do it from the command line with docker network create.
Linking has been largely replaced by docker networks with built-in DNS discovery. When you remove links from your docker-compose.yml, you may need to replace them with a depends_on section to enforce container startup order. Otherwise, there are very few scenarios where linking makes sense, and all the usage I've seen is from someone following outdated documentation.
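A minimal sketch of that migration, with hypothetical web and db services: in a version 2 file both services sit on the project's default network, so web reaches db by its service name through the built-in DNS, and depends_on preserves the startup ordering that links used to imply:

    version: "2"
    services:
      web:
        image: nginx
        depends_on:
          - db            # startup ordering only; discovery is via DNS ("db")
      db:
        image: postgres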
compose or swarm or swarm overlay networks
You would find that you need to use all of the above if you're doing anything other than a demo on your laptop etc.
I deliberately separated out swarm & swarm overlay networks, because you need not use both, but you cannot get an overlay network without having a swarm underneath it.
Compose is for bringing up multiple containers together. Now it makes sense that they are related to each other, although they may not be. But let's suppose the typical case where the containers are for services that are related to each other; then you would want them to talk to each other in some way, yet control how they talk to each other using networks.
For example, take a 3-tier app that has a web server, app server, and db. Let's say all three components are dockerized and you are using compose to bring them up together, instead of running docker run .. three times with different parameters. All three would come up, but you would want to control how they connect to each other. You want the web server to be able to talk to the app server, but not to the db directly. And you would want the app server to be able to talk to (ping) both the db container and the web server. All connections are two-way, but restricted to only those services that you want to be able to communicate with each other.
For such an arrangement, you would typically set up two networks, say frontend and backend. The web and app containers are connected to the frontend network; the app and db containers are connected to the backend network. Because there is no common network between the db and web containers, they cannot touch (ping) each other, which is your intent.
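A minimal docker-compose.yml sketch of that arrangement, with hypothetical images:

    version: "2"
    services:
      web:
        image: example/web
        networks:
          - frontend        # can reach app, but not db
      app:
        image: example/app
        networks:
          - frontend
          - backend         # app bridges the two tiers
      db:
        image: postgres
        networks:
          - backend         # unreachable from web
    networks:
      frontend:
      backend: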
Now, if you want these 3 services to be able to run on your cluster of hundreds of machines, and you also want to scale across them, you need a network that spans multiple hosts. That is where overlay networking (in swarm) comes into the picture. Overlay networking is nothing but multi-host networking built on VXLAN technology. You do not have to know about VXLAN, except that it is a standard network topology supported in almost all modern networking infrastructure.
I hope that clarifies.
Edit: I did not see that you got an answer already!
I think you have most of the understanding correct as to what each is, but some tweaking is required.
You're correct: docker-compose is for bringing up multi-container applications. Earlier you would run docker run .. to start every container. Modern applications embracing the micro-services paradigm can be made up of dozens of services, and using docker run .. gets very tiresome very quickly. docker-compose lets you express all the containers, their properties, and how they connect to each other in a YAML or JSON file, so you can manage them in an easier fashion.
So, docker-compose is the container orchestration part in the docker ecosystem.
Links are different; they are just a part of docker-compose or docker run commands, and are deprecated in favor of software-defined networks, of which overlay networks are just one kind.
Swarm is the scheduling component of Docker. What is scheduling? It is nothing but figuring out where to "place" your containers in your cluster of Docker hosts. You can have a cluster of hundreds of servers, and you may have hundreds of containers, each encapsulating a service for a dozen different applications. Now, how should these containers be distributed across your cluster of hundreds of servers? Should some containers be placed only on certain hosts because they satisfy particular criteria, or perhaps closer to (or farther from) other containers that are somehow related? All of that is part of the scheduling component, which is performed by Docker Swarm.
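In Swarm, such placement criteria are expressed as constraints; a small sketch, assuming a hypothetical node label:

    # label a node that satisfies your criteria (e.g. it has SSDs)
    docker node update --label-add storage=ssd node-1

    # the scheduler will only place this service on matching nodes
    docker service create --name db \
      --constraint 'node.labels.storage == ssd' \
      postgres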
I suggest you go through the getting started documentation on docker.com here: https://docs.docker.com/engine/getstarted-voting-app/

How can I schedule tasks to CPUs?

I have some tasks defined as Docker containers, each one will consume one full CPU whilst it runs.
I'd like to run them as cost-efficiently as possible, so that VMs are only active while the tasks are running.
What's the best way to do this on Google Cloud Platform?
It seems Kubernetes and other cluster managers assume you will be running some service and have spare capacity in your cluster to schedule the containers.
Is there any program or system that I can use to define my tasks, and then start/stop VMs on a schedule to run those tasks?
I think the best way is to just use the GCP/Docker APIs and script it myself?
You're right, all the major cloud container services provide you a cluster for running containers - GCP Container Engine, EC2 Container Service, and Azure Container Service.
In all those cases, the compute is charged by the VMs in the cluster, so you'll pay while the VMs are running. If you have an occasional workload you'll need to script creating or starting the VMs before running your containers, and stopping or deleting them when you're done.
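A minimal sketch of that scripting with the gcloud CLI; the instance name, zone, and image are placeholders:

    # start the pre-created VM only when a task needs to run
    gcloud compute instances start task-runner --zone us-central1-a

    # run the task to completion on the VM
    gcloud compute ssh task-runner --zone us-central1-a \
      --command 'docker run --rm example/my-task'

    # stop the VM so you stop paying for compute
    gcloud compute instances stop task-runner --zone us-central1-a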
An exception is Joyent's cloud, which lets you run Docker containers and charges per container - that could fit your scenario.
Disclaimer - I don't work for Google, Amazon, Microsoft or Joyent. Or Samsung.

Docker Compose: deploying different services from docker-compose.yml to different sets of hosts

Let's say I have a docker-compose.yml file with 3 different services (s1, s2, s3).
Then if I deploy them on, say, an AWS ECS cluster (just for example) with one host, all three containers will go to that host. If I scale the cluster to 2 hosts, the second host will also get all three containers.
Ideally, I'd want to have different clusters for different services, so that they can be scaled independently. I'd not want to have my database container on the same cluster as my backend container as both of them have different scaling needs.
How would I achieve this kind of behaviour with docker-compose?
Kubernetes has the concept of pods, which kind of provides this abstraction; however, since that's not part of Docker, I want to know *how one would develop a multi-service application in Docker in which each service (as defined in docker-compose.yml) can be scaled independently.*
For ECS you would need to either create multiple clusters (i.e. a cluster for each piece of the infrastructure, if you're sticking with deploying via compose), or just create multiple tasks. Each task should be a layer in your stack (e.g. api, web, etc.). Then you can scale the layers independently, as sketched below.
The big difference you'll find between ECS and K8S is that since ECS uses host port mappings, you can't have 2 different tasks expose the same port and run on the same host.
Check out this aws article as well: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/application_architecture.html
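A sketch with the AWS CLI, assuming you have registered a separate task definition per layer; the names and counts are illustrative:

    # one ECS service per layer, each scaled independently
    aws ecs create-service --cluster my-cluster \
      --service-name web --task-definition web-task --desired-count 4

    aws ecs create-service --cluster my-cluster \
      --service-name api --task-definition api-task --desired-count 2

    # later, scale just one layer
    aws ecs update-service --cluster my-cluster \
      --service web --desired-count 8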
Let's get the terminology straight here:
docker-compose deploys docker containers on a single host.
docker-swarm is the tool to use to deploy containers on multiple hosts.
A cluster in general, is a set of hosts (or nodes), either physical machines or VMs, that work together.
A Pod is not a cluster: it is a set of containers that are guaranteed to run on a single node, grouped together and communicating via localhost.
In Kubernetes, a deployment will schedule containers on all available nodes based on replication policies, node resources, and affinity, so you don't define where a container goes: Kubernetes manages that for you.
You then scale in 2 ways:
scale the number of instances of a container, by simply increasing its replica count (or you can also use auto-scaling with a defined policy)
scale the cluster, by adding new nodes (physical or VMs), therefore adding resources to the cluster.
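Both kinds of scaling are single commands in Kubernetes; the deployment name here is hypothetical:

    # scale one service's deployment by hand
    kubectl scale deployment web --replicas=5

    # or attach an autoscaling policy driven by CPU usage
    kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80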
