I have come across the word "service" in a lot of the Docker documentation. Does a service mean a container? By definition, a container is supposed to perform only one task. Are "service" and "container" then to be understood as synonyms?
The term "service" is originally linked with Docker Swarm
A service is the definition of the tasks to execute on the manager or worker nodes.
It is the central structure of the swarm system and the primary root of user interaction with the swarm.
When you create a service, you specify which container image to use and which commands to execute inside running containers.
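For illustration, a minimal sketch of such a definition on the command line (the image, command and replica count are arbitrary):

    # Create a service from an image, with a command to run and a desired number of replicas
    docker service create --name pinger --replicas 3 alpine ping docker.com

    # Swarm then schedules 3 tasks (containers) for this service across the nodes
    docker service ps pinger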
As mentioned in the Container Network Model:
Endpoint represents a Service Endpoint. It provides the connectivity for services exposed by a container in a network with other services provided by other containers in the network.
Since Endpoint represents a Service and not necessarily a particular container, Endpoint has a global scope within a cluster.
That illustrates that a service is not a container, but rather a way to describe and manage a set of containers (0 to n).
From "How services work":
I have a Docker swarm mode orchestration on my servers, and as part of my business requirements I have a custom service discovery (it is run by swarm too).
After starting, every service calls the register method of the service discovery and introduces its contact information.
The service discovery can then reverse-proxy traffic and balance load between instances based on the registered IP and port.
My problem is that when an instance (running in a container) calls the discovery's register method, its remote address is not the real one (i.e. it does not equal hostname -i), so the service discovery cannot find it on the network.
Any ideas?
One option would be for the service discovery to also participate in the swarm. Then it should be able to find the instances which are in containers that are in the swarm.
Another would be to run the containers with --net=host, though this may defeat the purpose of having them in a swarm in the first place.
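A rough sketch of the first option, assuming hypothetical images and an overlay network named app-net: run the discovery as a swarm service on the same overlay network as the services that register with it, so the addresses it sees are the same overlay IPs the instances report.

    docker network create --driver overlay app-net

    # Hypothetical images; the point is that both services share the overlay network
    docker service create --name discovery --network app-net my-registry/discovery
    docker service create --name api --network app-net my-registry/api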
So, I'm starting to play with Docker; so far so good, but I have this question on my mind.
Keeping these two statements in mind (please also correct me if I'm misunderstanding something):
1) Docker Swarm provides out-of-the-box service discovery, meaning microservices can talk to each other on the same network by service name without actually knowing which hosts the other services are allocated on.
2) Service instances are ephemeral, so a service can be hosted by different machines over the swarm's lifespan.
How should I know which IP address to expose for a central API gateway service, for instance?
You can expose the IP address of any node in the cluster, since Docker's swarm load balancer (the ingress routing mesh) runs on every node and forwards requests to a node that is running the service.
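As a sketch (image and ports are placeholders): publishing a port puts it on the ingress routing mesh of every node, so any node's IP works as the entry point.

    # Publish port 8080 through the ingress routing mesh
    docker service create --name gateway --publish 8080:80 nginx

    # Any node's IP now answers, even on nodes not running a gateway task
    curl http://<ip-of-any-swarm-node>:8080/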
I'm new to docker and microservices. I've started to decompose my web-app into microservices and currently, I'm doing manual configuration.
After some study, I came across docker swarm mode which allows service discovery. Also, I came across other tools for service discovery such as Eureka and Consul.
My main aim is to replace IP addresses in curl calls with service names and to load-balance between multiple instances of the same service,
e.g. turning curl http://192.168.0.11:8080/ into curl http://my-service.
I have to keep my services language independent.
Please suggest: do I need to use Consul with Docker swarm for service discovery, or can I do it without Consul? What are the advantages?
With the new "swarm mode", you can use docker services to create clustered services across multiple swarm nodes. You can then access those same services, load-balanced, by using the service name rather than the node name in your requests.
This only applies to nodes within the swarm's overlay network. If your client systems are part of the same swarm, then discovery should work out-of-the-box with no need for any external solutions.
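For example (image names and the port are placeholders), with both services attached to the same user-defined overlay network, the service name resolves through Docker's internal DNS:

    docker network create --driver overlay my-net

    docker service create --name my-service --network my-net my-service-image
    docker service create --name my-client --network my-net my-client-image

    # From inside any my-client container, this resolves and load-balances across my-service tasks
    curl http://my-service:8080/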
On the other hand, if you want to be able to discover the services from systems outside the swarm, you have a few options:
For stateless services, you could use docker's routing mesh, which will make the service port available across all swarm nodes. That way you can just point at any node in the swarm, and docker will direct your request to a node that is running the service (regardless of whether the node you hit has the service or not).
Use an actual load balancer in front of your swarm services if you need to control routing or deal with different states. This could either be another Docker service (e.g. HAProxy, NGINX) launched with the --mode global option to ensure it runs on all nodes, or a separate load balancer like a Citrix NetScaler. You would need to have your service containers reconfigure the LB through their startup scripts or via provisioning tools (or add them manually).
Use something like Consul for external service discovery, possibly in conjunction with registrator to add services automatically. In this scenario you just configure your external clients to use the Consul server/cluster for DNS resolution (or use the API); see the sketch at the end of this answer.
You could of course just move your service consumers into the swarm as well. If you're separating the clients from the services in different physical VLANs (or VPCs etc) though, you would need to launch your client containers in separate overlay networks to ensure you don't effectively defeat any physical network segregation already in place.
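As a sketch of the Consul option from the list above (the Consul hostname and service name are placeholders): external clients resolve services through Consul's DNS interface on port 8600, or through its HTTP API.

    # Ask Consul's DNS interface for the healthy instances of a service
    dig @consul.example.internal -p 8600 my-service.service.consul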
Service discovery (via DNS) has been built into Docker since version 1.12. When you create a custom network (like bridge, or overlay if you have multiple hosts), you can simply have the containers talk to each other by name as long as they are part of the same network. You can also give containers an alias; DNS lookups of that alias round-robin over the list of containers that share it. For a simple example see:
https://linuxctl.com/docker-networking-options-bridge
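A short sketch (image names are placeholders) of single-host, name-based discovery with a round-robin alias:

    docker network create my-bridge

    # Two containers sharing the alias "api"; DNS for "api" round-robins between them
    docker run -d --network my-bridge --network-alias api --name api1 my-api-image
    docker run -d --network my-bridge --network-alias api --name api2 my-api-image

    # From any container on the same network, both the container names and the alias resolve
    docker run --rm --network my-bridge alpine nslookup api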
As long as you are using a user-defined bridge network for your Docker containers and creating them inside that network, service discovery is available to you out of the box.
You will need help from other tools once your infrastructure starts to span multiple servers with microservices distributed across them.
Swarm is a good tool to start with; however, I would stick to Consul when it comes to an IaaS provider like Amazon for my production workloads.
Elasticsearch is designed to run in cluster mode; all I have to do is define the relevant node IPs in the cluster via an environment variable, and as long as network connectivity is available it will connect and join the other nodes in the cluster.
I have 3 nodes, 1 is acting as the docker swarm manager and the other two are workers. I have initialized the manager and joined the worker nodes and everything looks ok from that standpoint.
Now I'm trying to run the Elasticsearch container in a way that will allow me to join all nodes to the same Elasticsearch cluster. However, I want the nodes to join using their overlay network interface, which means I need to know the containers' internal IP addresses at the time I run the docker service create command. How can I do this? Do I have to use something like Consul to achieve this?
Some clarifications:
I need to know, at the time of service creation, the IP addresses (or DNS names) of all Elasticsearch participants so I can start the cluster correctly. This has to be at creation time and not afterwards. Also, as I understand it, I could expose ports 9200/9300 for all services, work with the external machine IPs, and get it to work, but I would like the overlay network to carry all of this communication (I thought that is what swarm mode is for).
Only a partial solution here.
So, when attaching your services to a custom overlay network, you indeed have access to Docker's service discovery feature. I'll detail the networking features of Docker Swarm mode before trying to tie them to your problem.
I'll be using the distinct terms service and task, where a service could be elasticsearch, whereas a task is a single instance of that elasticsearch service.
Docker networking
The idea is that for each service you create, Docker assigns a virtual IP (VIP) and a custom DNS alias. You can retrieve this VIP using the docker service inspect myservice command.
But there are two modes for attaching a service to an overlay network: dnsrr and VIP. You can select between them using the --endpoint-mode option of docker service create.
The VIP mode (I believe it is the default, or at least the most commonly used) assigns the virtual IP to the service's DNS alias. This means that an nslookup servicename would return a single VIP that, behind the scenes, is linked to one of your containers in a round-robin fashion. But there is also a special DNS alias that lets you access all of your instances' IPs (all of your tasks' IPs): tasks.myservice.
So in VIP mode you can retrieve all of your tasks' IPs with a simple nslookup tasks.myservice, where myservice is the service name.
The other mode is dnsrr. It simply gets rid of the VIP and connects the DNS alias to the different tasks (i.e. service instances) in a round-robin way. This way, you simply do an nslookup myservice to retrieve the different service instances' IPs.
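A sketch of the two modes, assuming an existing overlay network named my-net and that the nslookup calls are run from a container attached to that network:

    # VIP mode (the default): the service name resolves to one virtual IP,
    # while tasks.<name> lists every task's IP
    docker service create --name myservice --network my-net --replicas 3 nginx
    nslookup myservice          # returns the single VIP
    nslookup tasks.myservice    # returns the IP of each task

    # dnsrr mode: no VIP, the service name itself resolves to all task IPs
    docker service create --name myservice2 --network my-net --replicas 3 --endpoint-mode dnsrr nginx
    nslookup myservice2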
Elasticsearch clustering
OK, so first of all, I'm not really familiar with the way Elasticsearch lets you cluster. From what I understood of your question, when running the elasticsearch binary you need to give it, as a parameter, the addresses of all the other nodes it needs to cluster with.
So what I would do is create a custom Elasticsearch image, probably based on the one from the official library, to which I would add a custom entrypoint that first runs a script to retrieve the other tasks' IPs.
I believe staying in VIP mode is suitable for you, since the tasks.myservice DNS alias is there. You'll then need to parse the output to retrieve the tasks' IPs (and probably remove your own). Then you'll be able to save them in a config file or an environment variable, or use them as a runtime option for your elasticsearch binary.
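As a rough sketch of such an entrypoint, assuming the service is named elasticsearch and using the discovery.zen.ping.unicast.hosts setting (which applies to older Elasticsearch versions; both are assumptions on my part):

    #!/bin/sh
    # Resolve every task of the "elasticsearch" service via the tasks.<service> alias,
    # drop this task's own IP, and pass the rest to Elasticsearch as the unicast host list.
    MY_IP=$(hostname -i)
    PEERS=$(getent hosts tasks.elasticsearch | awk '{print $1}' | grep -v "^${MY_IP}$" | paste -sd, -)
    exec elasticsearch -E discovery.zen.ping.unicast.hosts="${PEERS}"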
Edit: to create a custom overlay network, you will need to use the docker network create command, and then attach your service to it with the --network option of docker service create.
This answer is mainly based on the Swarm mode networking documentation.
I run a web container on server A and a DB container on server B.
How can I connect these two containers?
I only know how to connect two containers that run on the same host.
You need:
a service discovery backend, which will record your containers' hosts (like Consul, a KV -- key/value -- store)
A good way to think of Consul is broken into 3 layers.
The middle layer is the actual config store, which is not that different from etcd or Zookeeper.
The layers above and below are pretty unique to Consul.
The killer feature of Consul is its service catalog. Instead of using the key-value store to arbitrarily model your service directory as you would with etcd or Zookeeper, Consul exposes a specific API for managing services.
a registrator (you could use Swarm for this, even though it is a bit overkill, or the registrator tool itself)
Registrator is a single, host-level service you run as a Docker container.
It watches for new containers, inspects them for service information, and registers them with a service registry. It also deregisters them when the container dies.
It has a pluggable registry system, meaning it can work with a number of service discovery systems.
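As a sketch of how the two pieces fit together (the Consul address and the service name web are placeholders; the invocation roughly follows registrator's documented quickstart):

    # Run registrator on each host, pointing it at your Consul agent/cluster
    docker run -d \
      --name=registrator \
      --net=host \
      --volume=/var/run/docker.sock:/tmp/docker.sock \
      gliderlabs/registrator:latest \
      consul://consul.example.internal:8500

    # Containers that publish ports are then registered automatically; query them through Consul
    curl http://consul.example.internal:8500/v1/catalog/service/web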