How to call all containers/replicas behind a docker swarm service

Consider a docker service with 5 replicas. I want to make a REST call to all 5 replicas, and if any replica fails, the whole request should fail. I want this because sometimes the code inside a container stops running and no longer responds to REST calls. Is it possible to make a single REST call to a service such that if any container fails to return a response, the whole request fails?

For future reference, for new viewers: starting with Compose file version 3.3, a service's deploy section supports an 'endpoint_mode' option. In addition to the default 'vip' (virtual IP/proxy load balancer), the option 'dnsrr' is now available:
DNS round-robin (DNSRR) service discovery does not use a single virtual IP. Docker sets up DNS entries for the service such that a DNS query for the service name returns a list of IP addresses, and the client connects directly to one of these. DNS round-robin is useful in cases where you want to use your own load balancer, or for Hybrid Windows and Linux applications.
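A minimal sketch of this in a compose file (the service name and image are placeholders):

version: "3.3"
services:
  myservice:
    image: myorg/myimage:latest    # placeholder image
    deploy:
      replicas: 5
      endpoint_mode: dnsrr         # default is 'vip'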
This means a command like 'nslookup'/'dig +short'/etc. on your service name (from within the network) now resolves to a list of container IPs instead of the proxy load balancer in front of them.
nslookup <yourservice> | awk '/^Address: / { print $2 }' | xargs | sed -e 's/ /,/g'
can be used to build a comma-separated IP address string. A Java alternative:
Arrays.asList(java.net.InetAddress.getAllByName(<yourservice>));
You can adjust your application's code accordingly.
Based on this, you can implement your own behavior on awaiting each container's response and how to handle situations like not all containers replying within a specified time.
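For illustration, a minimal shell sketch of that behavior, assuming dnsrr mode, a hypothetical service called myservice listening on port 8080, and a hypothetical /health endpoint; it fails as soon as any replica does not respond:

for ip in $(dig +short myservice); do
    # curl -f exits non-zero on HTTP errors; --max-time bounds the wait per replica
    curl -fsS --max-time 5 "http://$ip:8080/health" >/dev/null || {
        echo "replica $ip failed" >&2
        exit 1
    }
done
echo "all replicas responded"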
For compose file version 3.3+ compatibility, Docker Engine 17.06.0+ is required.
Another approach is to resolve tasks.<service-name>. More information can be found in this service discovery question.
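For example, from any container on the same overlay network (myservice is a placeholder name), the following returns one IP per task even when the service itself uses the default vip mode:

dig +short tasks.myservice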

Docker's load balancer proxies your request to one of the five tasks; that's what it's intended to do. If you want to send a message to the service and collect the results from all tasks, you will have to implement that yourself: put a proxy in front, or implement some cluster message distribution in your application.
For your case, look at Docker healthchecks. You define a command that is periodically run inside your container; if it fails, Docker considers the container unhealthy and kills it. You need to write a short script that sends your REST call and returns a non-zero exit code if it fails.
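For example, a sketch of such a check in a Dockerfile, assuming curl is available in the image and a hypothetical /health endpoint on port 8080:

HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -fsS http://localhost:8080/health || exit 1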

Related

Periodic cron-like Functions Across Containers in a Docker Project

I have implemented the LAMP stack for a third-party forum application on its own dedicated virtual server. One of my aims was to use a composed docker project (under Git) to encapsulate the application fully. I wanted to keep this as simple as possible to understand for the other sysadmins supporting the forum, so this really ruled out using S6 etc., and this in turn meant I had to stick to the standard of one container per daemon service, using the docker runtime to implement the daemon functionality.
I had one particular design challenge that doesn't seem to be addressed cleanly by the Docker runtime system: I need to run periodic housekeeping activities that interact across various docker containers, for example:
The forum application requires a per-minute PHP housekeeping task to be run using php-cli, and I only have php-cli and php-fpm (which runs as the foreground daemon process) installed in the php container.
Let's Encrypt certificate renewal needs a weekly certbot script to be run in the apache container's file hierarchy.
I use conventional /var/log-based logging for the high-volume Apache access logs, as these generate GB-sized files that I want to retain for ~7 days in case I need to analyse a hack, but that are otherwise ignored.
Yes, I could use the host's crontab to run docker exec commands, but this exposes application internals to the host system, and IMO that breaks one of my design rules. What follows is my approach to addressing this. My Q is really to ask for comments and better alternative approaches; if there are none, then perhaps this can serve as a template for others searching for an approach to this challenge.
All of my containers contain two container-specific scripts: docker-entrypoint.sh, which is a well-documented convention, and docker-service-callback.sh, which is the action mechanism that implements the tasking system.
I have one application-agnostic systemd host service, docker-callback-reader.service, which runs the bash script docker-callback-reader. This services requests on a /run named pipe that is volume-mapped into any container that needs to request such event processing.
In practice I have only one such housekeeping container (see here) that implements Alpine crond and runs all of the cron-based events. So, for example, the following entry makes the per-minute PHP tasking call:
* * * * * echo ${VHOST} php task >/run/host-callback.pipe
In this case the env variable VHOST identifies the relevant docker stack, as I can have multiple instances (forum and test) running on the server; the next parameter (php in this case) identifies the destination service container; the final parameter (task) plus any optional parameters are passed as arguments to a docker exec of the php container's docker-service-callback.sh, and the magic happens as required.
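To make the mechanism concrete, here is a hypothetical sketch of the reader side (the real docker-callback-reader is in the repo; the compose-style container name is an assumption):

while true; do
    # each line written to the pipe is: <stack> <service> <action> [args...]
    if read -r stack service action args < /run/host-callback.pipe; then
        docker exec "${stack}_${service}_1" docker-service-callback.sh "$action" $args
    fi
done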
I feel that the strengths of the system are that:
Everything is suitably encapsulated. The host knows nothing of the internals of the app other than any receiving container must have a docker-service-callback.sh on its execution path. The details of each request are implemented internally in the executing container, and are hidden from the tasking container.
The whole implementation is simple, robust and has minimal overhead.
Anyone is free to browse my Git repo and cherry-pick whatever of this they wish.
Comments?

Dynamically set proxy for docker pull

I'm trying to pull an image from a server, in an environment with multiple proxies.
Which proxy is the right one depends on which zone the machine doing the docker pull is in.
For the record: adding the one relevant proxy in /etc/systemd/system/docker.service.d/http-proxy.conf on the machine that is pulling the image works fine.
But the image is supposed to be downloaded in multiple zones, which require different proxies depending on where the machine is.
I tried two things:
Passing a list of proxies in http-proxy.conf, like this:
[Service]
Environment="HTTP_PROXY=http://proxy_1:port/,http://proxy_2:port/"
Environment="HTTPS_PROXY=http://proxy_1:port/,http://proxy_2:port/"
Environment="NO_PROXY=localhost"
Some machines require http://proxy_1:port/, and for those this works fine.
But on a machine that requires http://proxy_2:port/ to pull, it does not work: Docker does not fall back to the next proxy in the list. It returns this error:
Error response from daemon: Get HTTP:<ip>:<proxy_1> proxyconnect tcp: dial tcp <ip>:<proxy_1>: connect: no route to host
Of course, if I provide only the second (working) proxy in the configuration, it works.
Passing the proxy as a parameter to docker pull, as with docker build/run; but that is not supported, as per the documentation.
I am looking for a way to set up proxies in such a way that either
Docker falls back to trying other provided alternate proxies
OR
I can provide proxy dynamically at the time of pull. (This will be part of an automated process which determines relevant proxy to pass.)
I do not want to constantly change the http-proxy file and restart docker for obvious reasons.
What are my options?
If you're using a sufficiently recent docker (i.e. 17.07 or higher), you can put this configuration on the client side. Refer to the official documentation for the configuration details.
You still need multiple configuration files for the various proxy configurations you need, but you can switch between them without restarting the docker daemon.
To do something similar (not exactly proxy-related), I use a shell script that wraps the invocation of the docker client, pointing it at a custom configuration file via the --config option.
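A sketch of such a wrapper, assuming the zone can be derived from the host's FQDN and that each ~/.docker-<zone> directory holds a config.json with the right proxy settings (all names hypothetical):

#!/bin/sh
case "$(hostname -f)" in
    *.zone1.example.com) cfg="$HOME/.docker-zone1" ;;  # zone behind proxy_1
    *)                   cfg="$HOME/.docker-zone2" ;;  # zone behind proxy_2
esac
exec docker --config "$cfg" "$@"

Invoked e.g. as ./dockerw pull myimage, so the automated process never has to touch the daemon's http-proxy file.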

Docker swarm load balancing - How to give common name to the service?

I read about the swarm routing mesh.
I created a simple service which uses a Tomcat server and listens on 8080.
docker swarm init: I created the manager at node1.
docker swarm join (with the token provided by the manager): I joined node2 and node3 as workers.
docker service create <image>: I created the service.
docker service scale <serviceid>=5: I scaled it to 5 replicas.
docker service ps <service> shows the 5 instances of my service: 3 running at node1, 1 at node2, and 1 at node3.
My application uses an atomic counter which is maintained at the JVM level.
If I hit http://node1:8080/service 25 times, all requests go to node1. How does it balance across nodes?
If I hit http://node2:8080/service, it goes to node2.
Why is it not using round-robin?
Doubts:
Is anything wrong in the above steps?
Did I miss something?
I feel I am missing something, like a common service name (http://domain:8080/service) under which swarm would work in round-robin fashion.
I would like to understand swarm mode only; I am not interested in an external load balancer as of now.
How do I see swarm load balance in action?
Docker does round-robin load balancing per connection to the port. As long as a connection is up, it will continue to go to the same instance.
HTTP allows a connection to be kept alive and reused, and browsers take advantage of this to speed up later requests by leaving connections open. To test the round-robin load balancing, you'd need to either disable that keep-alive behavior or switch to a command-line tool like curl or wget.
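For example, forcing a new connection on every request makes the round-robin visible (assuming each task returns something instance-specific, such as its hostname):

for i in $(seq 1 10); do
    # 'Connection: close' disables keep-alive, so each request opens a new
    # connection and can be routed to a different task
    curl -s -H 'Connection: close' http://node1:8080/service
done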

Container Orchestration for provisioning single containers based on user action

I'm pretty new to Docker orchestration and managing a fleet of containers. I want to build an app that gives the user a container when they run a command. What is the best tool and the best way to accomplish this?
I plan on having a pool of CoreOS servers to run the containers on and I'm imagining the scheduler to have an API that I can just call to create the container.
Most of what I have seen with Nomad, Kubernetes, Docker Swarm, etc. is how to provision multiple clusters of containers all doing the same thing. I want to be able to create a single container based on a user's command and then communicate with an API on that container. Does anyone have experience with this?
I'd look at Kubernetes + the Jobs API (short-lived) or Deployments (long-lived).
I'm not sure exactly what you mean by command, but I'll assume it's some sort of dev environment triggered by a CLI, say make-dev.
User triggers make-dev, which sends a webhook to your app sitting in front of the Jobs API, ideally doing rate-limiting and/or auth.
Your app takes the command, sanity checks it, then fires off a Job/Deployment request + an Ingress rule + Service
Kubernetes will schedule it out across your fleet of machines
Your app waits for the pod to start, then returns the address of the API with a unique identifier (the same one used in the ingress rule), like devclusters.com/foobar123.
The user now accesses their service at that address. Internally, Kubernetes uses the ingress and service to route the requests to your pod.
This should scale well, and if your different environments use the same base container image, they should start really fast.
Plug: If you want an easy CoreOS + Kubernetes cluster plus a UI try https://coreos.com/tectonic
I plan on having a pool of CoreOS servers to run the containers on and I'm imagining the scheduler to have an API that I can just call to create the container
Kubernetes comes with a RESTful API that you can use to directly create pods (the unit of work in Kubernetes, which contains one or more containers) within your cluster.
The command-line utility kubectl also interacts with the cluster in exactly the same way, via the API. There are client libraries written in Go, Java, and Python at the moment, with others on the way, to help communicate with the cluster.
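For illustration, a minimal sketch of creating a pod directly through that API (pod name and image are placeholders; assumes kubectl proxy is exposing the API server on localhost:8001):

kubectl proxy --port=8001 &
curl -X POST http://localhost:8001/api/v1/namespaces/default/pods \
    -H 'Content-Type: application/json' \
    -d '{
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": { "name": "user-env-123" },
        "spec": { "containers": [ { "name": "main", "image": "myorg/dev-env" } ] }
    }'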
If you later want a higher-level abstraction to manage pods, update them, and manage their lifetimes, look at one of the controllers (ReplicaSet, ReplicationController, Deployment, StatefulSet).

HAProxy for service discovery with Marathon/Mesos docker linked containers

This is not asked anywhere that I have checked. Here is what I have done: I am able to deploy single instances of Mesos, Marathon, and Docker. As the next step, I want to have 2 Mesos slaves (docker containers) linked to each other. Using just Docker, the same can be achieved with the docker link feature, but when using the orchestration (Mesos) and scheduler (Marathon), it seems you need to use service discovery.
My setup is simple and running on a single host. I will have 2 docker containers, one running a simple pub/sub client and one running RabbitMQ. How can I use HAProxy in this setup? I have seen some documents provided by Mesosphere,
http://mesosphere.com/docs/getting-started/service-discovery/, but it is not clear how to go about it.
The canonical approach for service discovery with Mesos + Marathon + Docker is currently what is described in the document you linked.
I'm assuming you're able to get the two applications running in Marathon already.
Typically what happens is:
1) Configure your application definition to include the ports that your application requires.
2) You set up the provided haproxy-marathon-bridge script to run periodically using a utility like cron. This script scrapes Marathon's API to figure out what host and port the application instances are running on and what the known "friendly" port is.
In the example in the service discovery article, the first application has friendly ports of 80 and 443, whilst the second has a friendly port of 8081.
The script then generates a haproxy.cfg configuration with rules mapping localhost:friendly_port to actual_host:actual_port (see the sketch after this list).
3) Configure your applications to look for each other on localhost:friendly_port. HAProxy will route connections appropriately.
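For illustration, a generated haproxy.cfg fragment for a friendly port of 8081 might look like this (task hosts and ports are made up):

listen app_8081
    bind 127.0.0.1:8081
    mode tcp
    # one server line per running Marathon task, discovered via the Marathon API
    server task_1 10.0.1.5:31672 check
    server task_2 10.0.1.6:31951 check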
Hope this helps your understanding!
I created an HAProxy service-discovery docker container that you can run in Mesos. It's not production-ready, but I am using it in my development environment to do exactly what you're trying to do. The reason I prefer it over what comes with Marathon is that I haven't found a good way to do complicated HAProxy configurations with haproxy-marathon-bridge. With Spiderweb you can create a template for the HAProxy configuration, which enables you to do things such as ACL routing etc. It doesn't support health checks yet, which is something that will need to be done before it's production-ready. You can see the project here: https://github.com/SBRDevelopment/spiderweb.
We have combined Mesos and Marathon with Consul and Registrator,
so in the end you have the haproxy configuration auto-generated with consul-template.
try https://github.com/eBayClassifiedsGroup/PanteraS
All in one container.
With Mesos-DNS you can also do the following:
Set up mesos-dns as in this guide: http://programmableinfrastructure.com/guides/service-discovery/mesos-dns-haproxy-marathon/ (you can skip the HAProxy steps; they are not required).
When you start your docker containers, make sure that they have "nameserver %slave_ip_with_mesos_dns%" (replace the placeholder with the IP address) in their /etc/resolv.conf files.
If, let's say, the name of an app is "peek", it should then be reachable from other applications at peek.marathon.mesos.
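A quick way to verify this from inside one of the containers ("peek" as in the example above):

dig +short peek.marathon.mesos    # should print the IP address(es) of the app's tasks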
