I run a web container on server A, and a DB container on server B.
How can I connect these two containers?
I only know how to connect two containers that run on the same host.
You need:
a service discovery system that records your containers' hosts (like Consul, a KV -- Key/Value -- store)
A good way to think of Consul is to break it into 3 layers.
The middle layer is the actual config store, which is not that different from etcd or Zookeeper.
The layers above and below are pretty unique to Consul.
The killer feature of Consul is its service catalog. Instead of using the key-value store to arbitrarily model your service directory as you would with etcd or Zookeeper, Consul exposes a specific API for managing services.
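To make that concrete, here is a minimal sketch of that API, assuming a local Consul agent on the default port 8500 and a hypothetical service named "web":

    # Register a service with the local Consul agent (name and port are illustrative)
    curl -X PUT http://127.0.0.1:8500/v1/agent/service/register \
      -d '{"Name": "web", "Port": 8080}'

    # Query the catalog for all instances of that service
    curl http://127.0.0.1:8500/v1/catalog/service/web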
a registration mechanism (you can use Swarm, even though it is a bit overkill, or gliderlabs' Registrator)
Registrator is a single, host-level service you run as a Docker container.
It watches for new containers, inspects them for service information, and registers them with a service registry. It also deregisters them when the container dies.
It has a pluggable registry system, meaning it can work with a number of service discovery systems.
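For instance, a minimal sketch of running Registrator on a Docker host, assuming a Consul agent is already reachable on localhost:8500:

    # Watch the local Docker daemon and register containers in Consul
    docker run -d \
      --name=registrator \
      --net=host \
      --volume=/var/run/docker.sock:/tmp/docker.sock \
      gliderlabs/registrator:latest \
      consul://localhost:8500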
I built two Docker applications that communicate with each other over a Docker network, but I ran into a problem when I tried to run them using Nomad: Nomad gives each container a random name that is not configurable, so I can't add those containers to a Docker network and have them know each other by their specific names.
So how can I run two or more Docker containers in the same Docker network using Nomad?
I'm aware of a few approaches. The first one works with Nomad alone; the others assume that Consul is deployed as well.
1. Place both containers in the same task group. Nomad will always locate them on the same node, and each can reach the other's address via the Nomad environment variables NOMAD_IP_<label>, NOMAD_PORT_<label> or NOMAD_ADDR_<label>.
2. Register the server application (Docker container) in the Consul service registry with the Nomad service stanza. You can then use the Nomad template stanza in the "client" application to render its config (see the sketch after this list). Example/doc is here.
3. Set up Consul Connect (service mesh) in your deployment.
4. Use the Consul DNS interface. Consul can work as a DNS server, and every service is resolvable at <service_name>.service.<dc>.consul (doc), but you have to configure your servers to use Consul DNS (doc).
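A minimal sketch of approach 2, with hypothetical job, image, and service names; the service stanza registers the server in Consul, and the template stanza renders its current address into the client's environment:

    job "demo" {
      datacenters = ["dc1"]

      group "server" {
        network {
          port "http" {}
        }
        task "server" {
          driver = "docker"
          config {
            image = "my-server:latest"   # hypothetical image
            ports = ["http"]
          }
          service {
            name = "my-server"           # name registered in Consul
            port = "http"
          }
        }
      }

      group "client" {
        task "client" {
          driver = "docker"
          config {
            image = "my-client:latest"   # hypothetical image
          }
          template {
            # Re-rendered (and the task restarted) whenever the server's
            # address or port changes in Consul
            data        = "SERVER_ADDR={{ range service \"my-server\" }}{{ .Address }}:{{ .Port }}{{ end }}"
            destination = "local/server.env"
            env         = true
          }
        }
      }
    }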
Approach 1 is the easiest, but has a big limitation: both containers must run on the same node. Approach 2 worked well for me for several years. Nomad is smart enough to re-render the template and reload/restart your client should the server IP/port change.
I have two instances of Keycloak, each running in a container on its own node.
The nodes are bare-metal nodes inside my company network.
Keycloak uses TCPPING as its discovery protocol.
Since the two containers run on different nodes, and each instance is pinging inside Docker's default network, they are not able to find each other.
I say Docker's default network because I didn't specify a special network for the two containers.
Any idea how I can make the two instances discover each other in this architectural design?
I was thinking about Docker Swarm as a solution.
Assuming the two nodes are on the same network and can connect to each other, you can get the two containers to discover each other using Docker host networking.
It would be as easy as docker run --net=host.
Docker host networking makes the container share the networking stack of the host node, so it uses the host's IP address and, for all practical purposes, looks like another host on that network.
This allows the two containers to discover each other using TCPPING.
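A sketch of what that could look like on each node, assuming the legacy jboss/keycloak image, which reads its JGroups discovery settings from environment variables; the node names and port are illustrative:

    # Run on each node; initial_hosts lists both nodes' host names/IPs
    docker run -d --net=host \
      -e JGROUPS_DISCOVERY_PROTOCOL=TCPPING \
      -e JGROUPS_DISCOVERY_PROPERTIES='initial_hosts="node-a[7600],node-b[7600]",port_range=0' \
      jboss/keycloak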
Docker Swarm would also enable this. Swarm basically abstracts multiple host nodes so that you can run containers on them as if you were running Docker on a single host. But that would require docker-machine and a whole new setup.
I'm wondering whether there are any differences between the following Docker setups:
Administering two separate Docker engines via the remote API.
Administering two Docker Swarm nodes via one single Docker engine.
Given that you can administer a swarm and still run a container on a specific node, are there any use cases for having separate Docker engines?
The difference between the two is swarm mode. When a Docker engine is running services in swarm mode you get:
Orchestration from the manager, which continuously tries to reconcile any differences between the current state and the target state. This can also include HA using the quorum model (as long as a majority of the managers are reachable to make decisions).
Overlay networking, which allows containers on different hosts to talk to each other on their own container network. That can also involve IPsec for security.
Mesh networking for published ports, and a VIP for each service that stays stable, unlike container IPs. The VIP avoids problems caused by DNS caching; the mesh has every node in the swarm publish the port and route incoming traffic to a container providing the service.
Rolling upgrades to avoid any downtime with replicated services.
Load balancing across multiple nodes when scaling up a service.
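As a quick sketch of several of these features in action (image and service names are illustrative):

    # Turn an engine into a swarm manager
    docker swarm init

    # Replicated service, published on every node via the routing mesh
    docker service create --name web --replicas 3 --publish 8080:80 nginx

    # Rolling upgrade across the replicas
    docker service update --image nginx:1.25 web

    # Scale out; the service VIP load-balances across all tasks
    docker service scale web=5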
More details on swarm mode are available from docker's documentation.
The downside of swarm mode is that you are one layer removed from the containers when they run on a remote node. You can't run an exec command on a service's task directly; you need to run it against the underlying container, on the node where that container is currently scheduled. Docker also removed some options from services, like --volumes-from, which don't apply when containers may be running on different machines.
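For example, investigating a task looks something like this (service name hypothetical):

    # On a manager: find which node is running the service's tasks
    docker service ps web

    # Then, on that node, locate and exec into the underlying container
    docker ps --filter name=web
    docker exec -it <container-id> sh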
If you think you may grow beyond running containers on a single node, need the containers on different nodes to communicate with each other, or simply want the orchestration features like rolling upgrades, then I would recommend swarm mode. I'd only manage containers directly on the hosts if there is a specific requirement that prevents swarm mode from being an option. And you can always do both: manage some containers directly and others as a service or stack inside of swarm, on the same nodes.
I'm new to Docker and microservices. I've started to decompose my web app into microservices, and currently I'm doing manual configuration.
After some study, I came across Docker swarm mode, which allows service discovery. I also came across other tools for service discovery, such as Eureka and Consul.
My main aim is to replace the IP addresses in curl calls with service names, and to load-balance between multiple instances of the same service.
i.e. turn curl http://192.168.0.11:8080/ into curl http://my-service
I have to keep my services language-independent.
Do I need to use Consul with Docker swarm for service discovery, or can I do it without Consul? What are the advantages?
With the new "swarm mode", you can use docker services to create clustered services across multiple swarm nodes. You can then access those same services, load-balanced, by using the service name rather than the node name in your requests.
This only applies to nodes within the swarm's overlay network. If your client systems are part of the same swarm, then discovery should work out-of-the-box with no need for any external solutions.
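A minimal sketch, with hypothetical image and service names:

    # Attachable overlay network shared by the services
    docker network create --driver overlay --attachable my-net

    docker service create --name my-service --network my-net --replicas 3 my-image
    docker service create --name client --network my-net my-client-image

    # Inside any container on my-net, the service name resolves to a VIP
    # that load-balances across the replicas:
    #   curl http://my-service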
On the other hand, if you want to be able to discover the services from systems outside the swarm, you have a few options:
For stateless services, you could use docker's routing mesh, which will make the service port available across all swarm nodes. That way you can just point at any node in the swarm, and docker will direct your request to a node that is running the service (regardless of whether the node you hit has the service or not).
Use an actual load balancer in front of your swarm services if you need to control routing or deal with different states. This could either be another Docker service (e.g. haproxy, nginx) launched with the --mode global option to ensure it runs on all nodes, or a separate load balancer like a Citrix NetScaler. You would need to have your service containers reconfigure the LB through their startup scripts or via provisioning tools (or add them manually).
Use something like Consul for external service discovery, possibly in conjunction with Registrator to add services automatically. In this scenario you just configure your external clients to use the Consul server/cluster for DNS resolution, or use the API; see the sketch after this list.
You could of course just move your service consumers into the swarm as well. If you're separating the clients from the services in different physical VLANs (or VPCs etc.), though, you would need to launch your client containers in separate overlay networks to ensure you don't effectively defeat any physical network segregation already in place.
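For the Consul option, external clients can resolve services through Consul's DNS interface; a sketch assuming an agent on localhost and a service registered as "my-service":

    # Consul answers DNS on port 8600 by default; SRV records include the port
    dig @127.0.0.1 -p 8600 my-service.service.consul SRV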
Service discovery (via DNS) is built into Docker since version 1.12. When you create a custom network (like a bridge, or an overlay if you have multiple hosts), containers can simply talk to each other by name as long as they are part of the same network. You can also give several containers the same alias, and DNS lookups of that alias will round-robin across the containers that share it. For a simple example see:
https://linuxctl.com/docker-networking-options-bridge
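A quick sketch of both behaviors (network, alias, and image names are illustrative):

    # User-defined network with Docker's built-in DNS
    docker network create my-net
    docker run -d --name db --network my-net redis

    # Two containers sharing the alias "api"; DNS round-robins between them
    docker run -d --network my-net --network-alias api my-api-image
    docker run -d --network my-net --network-alias api my-api-image

    # Any container on the same network can resolve the others by name
    docker run --rm --network my-net alpine ping -c 1 db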
As long as you are using a user-defined bridge network and creating your containers inside that network, service discovery is available to you out of the box.
You will need help from other tools once your infrastructure starts to span multiple servers with microservices distributed across them.
Swarm is a good tool to start with; however, for production loads on an IaaS provider like Amazon I would stick with Consul.
I am having trouble understanding the need for a separate service discovery server when we could register the slave node with the master node at slave start-up through whatever protocol. Hosting another service seems redundant to me.
Docker Swarm is there to create a cluster of hosts running Docker and to schedule containers across the cluster.
It does not include service discovery, which is provided by a backend service such as etcd, Consul or ZooKeeper.
The first problem: service registration and discovery is an infrastructure concern, not an application concern.
The second problem: implementing service registration and discovery when infrastructure and application implementation are mutually agnostic is tough.
DockerCon made that distinction clear this morning (Nov. 16th, 2015), with the "Docker Stack":
(Graphics from #laurelcomics)
Docker networking solves these problems by backing an interface (DNS) with pluggable infrastructure components that adhere to a common KV interface.
You can see consul.io used in:
"Easy routing and service discovery with Docker, Consul and nginx"
"Docker Overlay Networks: That was Easy"
"Docker DNS getaddrinfo ENOTFOUND"
That means:
Consul is a KV (Key/Value) store which can be plugged into Swarm in order to manage the service discovery aspect.
Swarm is the access layer, which is usually the layer that contains a gateway or routing component that allows others to actually reach your services.
(Image from the "Easy routing and service discovery with Docker, Consul and nginx" article written by Ladislav Gazo)
The goal is to isolate what is an infrastructure concern (Discovery service) in its own container, separate from a dev tool concern (Swarm).
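As a sketch of how the two fit together in that (pre-swarm-mode, classic Swarm) architecture, with placeholder addresses; Swarm is pointed at Consul as its KV/discovery backend:

    # A Consul agent as the KV/discovery backend (dev mode, one node shown)
    docker run -d -p 8500:8500 consul

    # Classic Swarm manager pointed at Consul for discovery
    docker run -d -p 4000:4000 swarm manage -H :4000 consul://<consul-ip>:8500

    # Each Docker host joins the cluster through the same Consul backend
    docker run -d swarm join --advertise=<node-ip>:2375 consul://<consul-ip>:8500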