First of all:
I have little hands-on experience with Docker Swarm in a production environment. I have read a lot about it, and I know concepts like veth, overlay, label pinning, VTEP, bridge and so on. I also know that Docker Swarm uses a distributed key-value store to manage the cluster.
However, there is something I don't understand: service discovery / DNS / resolving service names.
How does it work? Where is this DNS server placed? Who cares to resolve service names?
Is it possible to read the content of distributed key-value storage?
How does it work? Where is this DNS server placed? Who cares to resolve service names?
It's embedded in the Docker daemon itself. Every container that is part of a user-defined network sends its name resolution requests to 127.0.0.11.
https://docs.docker.com/v17.09/engine/userguide/networking/configure-dns/
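A minimal sketch of how to see this, assuming example names ("mynet", "web") and using nicolaka/netshoot only because it bundles DNS tools:
docker network create --driver overlay --attachable mynet
docker service create --name web --network mynet --replicas 2 nginx
docker run --rm --network mynet nicolaka/netshoot cat /etc/resolv.conf   # prints "nameserver 127.0.0.11"
docker run --rm --network mynet nicolaka/netshoot nslookup web           # answered by the daemon's embedded DNS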
Is it possible to read the content of distributed key-value storage?
Docker uses libkv, but I'm not sure whether it's possible to bypass the Docker daemon and access the store directly.
https://github.com/docker/libkv
DNS in Docker Swarm overlay networks works just like in docker-compose bridge networks: services on the same network resolve each other by service name. Other features like the Swarm VIP and the routing mesh make the whole solution slightly different, but they don't directly affect DNS resolution.
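For instance, reusing the example "web" service and "mynet" network from the sketch above, you can compare the VIP against the individual task IPs:
docker run --rm --network mynet nicolaka/netshoot dig +short web         # a single address: the service VIP
docker run --rm --network mynet nicolaka/netshoot dig +short tasks.web   # one address per running task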
The Swarm raft log isn't meant to be read directly, but it contains little more than the definitions of the services and networks you create in Swarm. I've never needed to look at it directly in a production system.
Here's a 3.5-hour training video on all things Swarm (including lots of networking details) from former Docker engineer Jerome Petazzoni and AJ Bowen.
Also, Laura Frank gave a talk at last year's DockerCon on how the raft log and consensus work, which might point you to some tools if you want to look under the hood.
I have a couple of Docker swarm questions (Sorry for not splitting them up but they are all closely related):
Do all instances in a swarm have to run on different machines, or can they all run on the same one? (if you have a limited amount of hardware and just want to try swarm mode)
Do I have to run swarm mode to be able to communicate between instances?
What is the key difference between swarm mode and just running a number of containers as regular?
What are the options of communication between instances of containers? (in swarm and in regular mode) http? named pipes? other?
If using HTTP communication between containers on the same machine, will it be roughly as fast as named pipes?
Is there any built in support for a message bus or similar in Docker?
Is there support for any consensus protocol in Docker?
Are there any GUIs for designing, managing, testing and/or debugging Docker swarms?
Can a container list other containers, stop/restart some and start new ones? (to be able to function as a manager for other containers)
Can a container be given access to OS-features (Linux in my case) to configure for instance a reverse proxy or port forwarding on the WAN?
Background: What I'm trying to figure out is how I should go about building a micro-service mesh using Docker. The containers will be running .NET Core. I'm not too keen on relying too heavily on Docker specifically, since it may not be the preferred tech in a couple of years. What can/should I do with Docker, and what can/should I do inside the containers? That's what I'm trying to figure out.
I've copied your questions and tried to answer them.
Do all instances in a swarm have to run on different machines, or can they all run on the same one? (if you have a limited amount of hardware and just want to try swarm mode)
You can have just one machine in a swarm and run multiple tasks of the same service on it; in other words, the scale of a service can be higher than the number of actual machines. I have one testing swarm with a single machine and one with three, and they work the same way.
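A minimal sketch of that (the service name and image are just placeholders):
docker swarm init
docker service create --name demo --replicas 5 nginx
docker service ps demo   # all five tasks are scheduled on the one and only node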
Do I have to run swarm mode to be able to communicate between instances?
You have to run Docker in swarm mode in order to create a service; please see this link
What is the key difference between swarm mode and just running a number of containers as regular?
The key difference, AFAIK, is that when a task goes down, Docker automatically starts another one. You can also easily scale your services, which means you can have multiple tasks just by scaling the service up or down. With a regular container, when it goes down you have to start another one manually.
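Continuing the sketch above (the "demo" name is still just a placeholder), scaling and self-healing look roughly like this:
docker service scale demo=10   # swarm starts the extra tasks for you
docker service ps demo         # if a task's container dies, swarm lists the failed task and the replacement it started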
What are the options of communication between instances of containers? (in swarm and in regular mode) http? named pipes? other?
I've currently only tested with a couple of WildFly servers in a swarm, which are on the same network. I'm not sure about the other options, but would love to find out. I've only read about RabbitMQ, but can't seem to find the link at the moment.
If using HTTP communication between containers on the same machine, will it be roughly as fast as named pipes?
I can't say.
Is there any built in support for a message bus or similar in Docker?
I can't say.
Are there any GUIs for designing, managing, testing and/or debugging Docker swarms?
I've tested Rancher and Portainer.io; for a list of them I found this link
Can a container list other containers, stop/restart some and start new ones?
I'm not sure why you would want to do that, but I guess it's possible; see this link
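The usual approach, as a rough illustration, is to bind-mount the host's Docker socket into the container; the image and commands below are just examples, and note that this gives the container full control of the host's engine:
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker sh
# inside that container, the CLI now talks to the host's engine:
docker ps                      # list the other containers
docker restart <container-id>  # hypothetical id: restart one of them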
Can a container be given access to OS-features (Linux in my case) to configure for instance a reverse proxy or port forwarding on the WAN?
I can't say.
@namokarm did a great job, and I'm filling in the gaps:
Benefits of Swarm over docker run or docker-compose.
All communication between containers has to be TCP/UDP, etc. You could force two containers to run only on a single machine and then bind-mount their socket so they skip the network, but that would be a bit of an anti-pattern. Swarm is designed for everything to be distributed over TCP/UDP.
In a few cases, such as PHP-FPM + Nginx, I recommend bundling both in the same container (against Docker best practices, but trust me, it's easier than separate containers). This ensures they scale together (a 1-to-1 relationship) and stay fast, since they use local sockets to communicate. I only recommend this for a few setups like this, the other being ColdFusion + Nginx, because they are two parts of the same tool that produce an HTTP response. I don't recommend bundling images together in nearly all other cases, but I'm open to ideas :).
Rancher is no longer supporting Swarm. Portainer and SwarmPit are GUI options.
Yes, a container running something like Portainer/SwarmPit, or one controlling the Docker socket through a bind-mount or TCP, can control the whole Swarm. This is how all Docker management works :)
For a reverse proxy, you would run a container-based proxy like Traefik or Docker Flow Proxy, which sets up HAProxy for Docker and Swarm.
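A rough sketch of that idea with Traefik v1.x; the flags and labels are version-specific, so treat this as an outline rather than a recipe:
docker network create --driver overlay proxy
docker service create --name traefik --network proxy \
  --publish 80:80 \
  --constraint node.role==manager \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
  traefik:1.7 --docker --docker.swarmMode --docker.watch
# backend services join the "proxy" network and advertise themselves via service labels, e.g.:
docker service create --name whoami --network proxy \
  --label traefik.port=80 \
  --label traefik.frontend.rule=Host:whoami.example.com \
  emilevauge/whoami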
Many of these topics are discussed in my DockerCon talks: https://www.bretfisher.com/dockercon18/
I want to find a way to deploy an etcd cluster as a Docker Swarm service that would automatically configure itself without any interaction. Basically, I think of something in spirit of this command:
docker service create --name etcd --replicas 3 my-custom-image/etcd
I'm assuming that the overlay network is configured to be secure and provides both encryption and authentication, so I believe I don't need TLS, not even --auto-tls. I don't want the extra headache of finding a way to provision certificates when this can be solved at another layer.
I need a unique --name for each instance, but I can get that from an entrypoint script that would use export ETCD_NAME=$(hostname --short).
The problem is, I'm stuck on initial configuration. Based on the clustering guide there are three options, but none seems to fit:
The DNS discovery scenario is closest to what I'm looking for, but Docker doesn't support DNS SRV record discovery at the moment. I can look up etcd and get all the IPs of my nodes' containers, but there are no _etcd-server._tcp records.
I cannot automatically build ETCD_INITIAL_CLUSTER because, while I know the IPs, I don't know the names of the other nodes, and I'm not aware of any way to figure those out. (I'm not going to expose the Docker API socket to the etcd container for this.)
There is no pre-existing etcd cluster, and while supplying an initial configuration URI from discovery.etcd.io is a possible workaround, I'd rather not do that. I'm aiming for a no-brainer scenario: "just deploy a stack from this docker-compose.yml and it'll automatically do the right thing, no questions asked".
Is there any trick I can pull?
As you have correctly said, you know the IPs of your nodes' containers, so the suggested trick is to simply build the required etcd names as derivatives of each node's IP address:
Inside each container, etcd is named using that particular container's IP, i.e. etcd-$ip.
ETCD_INITIAL_CLUSTER is populated using the other containers' IPs in the same way.
The names could be as simple as etcd-$ip, or even better, we could use the netmask to calculate each node's address within this network and make the names prettier. In that case, in a simple 3-node configuration, one could end up with names like etcd-02, etcd-03, etc.
No specific requirements exist for the name attribute; it just needs to be unique and human-readable. Although it does indeed look like a trick, it should work.
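A rough entrypoint sketch of that trick, assuming the service is named etcd (as in the docker service create command above), that tasks.etcd resolves to every task's IP, and that getent is available in the image; the retry/wait logic a real bootstrap needs is omitted:
#!/bin/sh
set -e
# this container's IP on the overlay network (may need filtering if attached to several networks)
MY_IP=$(hostname -i | awk '{print $1}')
export ETCD_NAME="etcd-${MY_IP}"
# tasks.<service> resolves to the IPs of all task containers of the service
PEER_IPS=$(getent hosts tasks.etcd | awk '{print $1}')
CLUSTER=""
for ip in $PEER_IPS; do
  CLUSTER="${CLUSTER}${CLUSTER:+,}etcd-${ip}=http://${ip}:2380"
done
export ETCD_INITIAL_CLUSTER="$CLUSTER"
exec etcd \
  --listen-peer-urls "http://${MY_IP}:2380" \
  --initial-advertise-peer-urls "http://${MY_IP}:2380" \
  --listen-client-urls "http://0.0.0.0:2379" \
  --advertise-client-urls "http://${MY_IP}:2379"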
I am new to Docker Swarm and I have ambitions to deploy my application with it.
Docker Swarm has its own discovery service, but I googled around and found people mentioning Consul as a discovery service.
My question is: what is the advantage of Consul? Why don't we just use the default discovery service?
Thanks,
Consul was used as the service discovery module in standalone Swarm (prior to Docker 1.12). However, since Docker 1.12, swarm mode comes with a default discovery service, so you don't need an external store.
The key point to note is that if you already had a swarm with an external store like Consul, it would still have some data/metadata that needs to be preserved. Hence the use of Consul still exists.
Let us first look at the scope of service discovery provided by both swarm and Consul.
Swarm only facilitates service discovery on your Docker network/infrastructure, while Consul can be used with almost anything if you know how to use it, be it a monolithic application or a microservice; Consul gives you all of that in one place.
Secondly, even though Swarm is great at handling small infrastructure loads, it doesn't really handle high production loads on a resource-heavy infrastructure as well. This is why other tools exist, for example Kubernetes, ECS, etc.
So, considering that you have an application which you know is going to grow, I would rather go for a solution that works well with whatever I may try in the future without having to change too much, and that scales well on any IaaS provider. Hope that helps.
I'm trying to understand the differences or similarities between Docker-Compose and Docker-Swarm.
By reading the documentation, I have understood that docker-compose provides a mechanism to bind different containers together so they work in collaboration, as a single service (I'm guessing it uses the same functionality as the --link option used to link two containers).
Also, my understanding of docker swarm is that it allows you to manage a cluster of different Docker hosts, each of which runs several container instances of some Docker images. We can define connections as overlay networks between different containers in the swarm (even if they are on two different Docker hosts in the swarm) to connect them as a unit.
What I'm trying to understand is: has docker swarm superseded docker-compose, and are overlay networks the new (recommended) way to connect containers?
Or is docker-compose still an integral part of the entire Docker family, and is it expected and advisable to use it to connect containers so they work in collaboration? If so, does docker-compose work with containers across different nodes in the swarm?
Or are overlay networks for connecting containers across different hosts in the swarm, while docker-compose is for creating internal links?
Besides, I also see the Docker documentation mention that --link is not recommended anymore and will soon be obsolete.
I'm a bit confused.
Thanks a lot!
It will probably help to start with a few definitions:
docker-compose: Command used to configure and manage a group of related containers. It is a frontend to the same APIs used by the docker CLI, so you can reproduce its behavior with commands like docker run.
docker-compose.yml: Definition file for a group of containers, used by docker-compose and now also by swarm mode.
swarm mode: Used to manage a group of docker engines as a single entity and provide orchestration (constantly trying to correct any differences between the current state and the target state).
service: One or more containers for the same image and configuration within swarm, multiple containers provide scalability.
stack: One or more services within a swarm, these may be defined using a DAB or a docker-compose.yml file.
bridge network: Network managed by a single docker engine where multiple containers may communicate with each other. You may have multiple networks managed by an engine, and containers can be attached to zero or more networks.
overlay network: Similar to a bridge network but spanning multiple docker engines. These require a key/value store to maintain their state. Swarm mode provides this, but if swarm mode is disabled, you may also use etcd, consul, or zookeeper.
links: a method to connect containers together that predates the bridged network. Its usage is no longer recommended.
classic swarm: A predecessor to the integrated swarm mode that runs as a container, allows multiple engines to appear as one, but does not provide orchestration or include its own k/v store.
To answer the questions:
has docker swarm superseded docker-compose, and are overlay networks the new (recommended) way to connect containers?
Or is docker-compose still an integral part of the entire Docker family, and is it expected and advisable to use it to connect containers so they work in collaboration? If so, does docker-compose work with containers across different nodes in the swarm?
They provide different functionality and will both continue to serve a purpose. docker-compose cannot start containers inside swarm mode, but a newer version of the docker-compose.yml file (version 3) can be used to define a stack directly in swarm mode without using docker-compose itself. docker-compose is needed to manage containers outside of swarm mode, on a single docker engine or with classic swarm.
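For example (the stack name is a placeholder), deploying a version-3 compose file into swarm mode versus running it on a single engine looks like:
docker stack deploy -c docker-compose.yml mystack   # swarm mode: creates services from the compose file
docker stack services mystack                       # list the services the stack created
docker-compose up -d                                # single engine or classic swarm: managed by docker-compose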
Or are overlay networks for connecting containers across different hosts in the swarm, while docker-compose is for creating internal links?
Besides, I also see the Docker documentation mention that --link is not recommended anymore and will soon be obsolete.
docker-compose, starting with version 2 of the yml file, connects multiple containers together by default with a new bridged network per project (the project name defaults to the directory name). With classic swarm, that would default to an overlay network using an external k/v store. And with a swarm mode stack, this would be an overlay network.
Using docker networks is the preferred way to have containers communicate with each other. You want a network per group of containers you wish to isolate from the rest of your docker environment. docker-compose automates this network creation, but you can also do it from the command line with docker network create.
Linking has been largely replaced by docker networks with built-in DNS discovery. When you remove links from your docker-compose.yml, you may need to replace them with a depends_on section to enforce container startup order. Otherwise, there are very few scenarios where linking makes sense, and all the usage I've seen is from people following outdated documentation.
compose or swarm or swarm overlay networks
You would find that you need to use all of the above if you're doing anything other than a demo on your laptop etc.
I deliberately separated out swarm & swarm overlay networks, because you need not use both, but you cannot get an overlay network without having a swarm underneath it.
Compose is for bringing up multiple containers together. It usually makes sense that they are related to each other, although they may not be. Let's suppose the typical case where the containers are for services that are related to each other; then you would want them to talk to each other in some way, yet control how they talk to each other using networks.
For example, take a 3-tier app that has a webserver, an appserver and a db. Let's say all three components are dockerized and you are using compose to bring them up together, instead of running docker run .. three times with different parameters. All three would come up, but you would want to control how they connect to each other. You want the webserver to be able to talk to the appserver, but not to the db directly. And you would want the appserver to be able to talk to (ping) the db server container and also ping the web server. All connections are two way, but restricted to only those services that you want to be able to communicate with each other.
For such an arrangement, you would typically set up two networks, say frontend and backend. The web and app containers are connected to the frontend network. The app and db containers are connected to the backend network. Because there is no common network between the db and web containers, they cannot touch (ping) each other, which is your intent.
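A minimal sketch of that two-network arrangement with plain docker commands (the image names and password are illustrative; compose creates the equivalent networks for you):
docker network create frontend
docker network create backend
docker run -d --name web --network frontend nginx
docker run -d --name db --network backend -e POSTGRES_PASSWORD=example postgres
docker run -d --name app --network frontend my-appserver   # "my-appserver" is a placeholder image
docker network connect backend app
# web <-> app can talk over "frontend", app <-> db over "backend"; web and db share no network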
Now, if you want these 3 services to be able to run on your cluster of hundreds of machines, and you also want to scale across them, you need a network that spans multiple hosts. That is where overlay networking (in swarm) comes into the picture. Overlay networking is nothing but multi-host networking built on VXLAN technology. You do not have to know about VXLAN, except that it is a standard network technology supported by almost all modern networking infrastructure.
I hope that clarifies.
Edit: I did not see that you got an answer already!
I think you have most of the understanding correct as to what each is, but some tweaking is required.
You're correct that docker-compose is for bringing up multi-container applications. Earlier, you used to run docker run .. to start every container. Modern applications embracing the micro-services paradigm can be made up of dozens of services, and using docker run .. gets very tiresome very quickly. docker-compose lets you express all the containers, their properties, and how they connect to each other in a yaml or json file, so you can manage them more easily.
So, docker-compose is the container orchestration part in the docker ecosystem.
Links are different; they are just a part of docker-compose or docker run commands and are deprecated in favor of software-defined networks, of which overlay networks are just one.
Swarm is the scheduling component in docker. What is scheduling? It is nothing but figuring out where to "place" your containers in your cluster of docker hosts. You can have a cluster of hundreds of servers and hundreds of containers, each encapsulating a service for a dozen different applications. Now, how should these containers be distributed across your cluster of hundreds of servers? Should some containers be placed only on certain hosts because they satisfy particular criteria, or maybe they should be close to (or away from) other containers which are somehow related? All of this is part of the scheduling component, which is performed by docker Swarm.
I suggest you go through the getting started documentation on docker.com here: https://docs.docker.com/engine/getstarted-voting-app/
I would like to hear about your experiences with service discovery in a Docker environment.
We plan to have a multi-host Docker environment with Swarm.
The latest version of Docker provides an internal DNS and a round-robin feature.
Our idea is to use Docker overlay network.
I believe that one overlay network per application is the way to go, so every environment will be segmented into a specific subnet. Or is one big subnet for all applications better?
Internal service discovery (inside the overlay network) from one service to another is easy; Docker's internal DNS solves it, and we just need to use the --net-alias parameter.
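For example, something like this is what I have in mind for the internal part (the network, alias and image names are made up; --network-alias is the newer spelling of --net-alias):
docker network create --driver overlay --attachable app-net
docker run -d --network app-net --network-alias billing my-billing-image
docker run --rm --network app-net nicolaka/netshoot nslookup billing   # resolves via the alias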
But how can external service discovery be done, i.e. from another machine/service outside the overlay network?
Can you share your experience or your thoughts about it?
Regards