docker-compose.yml to start containers on multiple VMs - docker

I have created Docker containers using docker-compose.yml on a single host.
Can anybody tell me whether a docker-compose.yml file can be used to start Docker containers on multiple VMs? If yes, how?

The compose file cannot be used with the new Docker "Swarm Mode" introduced in June 2016 (Docker 1.12). The "legacy" Docker Swarm accepts compose files, but you should really focus on learning Docker "Swarm Mode", not the old Docker Swarm. It's much simpler too, except for the missing support for compose files.
"swarm mode" accepts dab files and there is a way to convert compose files to dab, but it's experimental (which means that a lot of what you have put in your compose file won't translate). So the current best way is to create bash scripts with the CLI commands eg: docker service create --name nginx nginx:1.10-alpine.
Also have a look at @Matt's link about learning the basics of Docker Swarm Mode: http://docs.docker.com/engine/swarm/key-concepts
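For example, a minimal deploy script could look like the sketch below (the overlay network name, the redis service, and the image tags are just placeholders for illustration, not something from your setup):
#!/bin/sh
# Create an overlay network so the services can reach each other across nodes
docker network create --driver overlay appnet
# Create one service per component of the app
docker service create --name nginx --network appnet --publish 80:80 nginx:1.10-alpine
docker service create --name redis --network appnet redis:3.2-alpine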

You can quickly spin up a legacy standalone swarm from containers and a node list (by IP like 10.0.0.1 or hostname like nodeb):
docker run -d -P --restart=always --name swarm-manager swarm manage \
"nodes://10.0.0.1:2376,nodeb:2376,nodec:2376"
export DOCKER_HOST=$(docker port swarm-manager 2375)
docker-compose up
Before running this, you'd need to configure the engines to listen on 2376 with TLS configured, a client key/certificate, and the appropriate network access. See docker's documentation on TLS for more details on configuring this.
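For reference, the docker-compose.yml used in this flow is just an ordinary Compose file; a minimal sketch (the service name, image, and port are placeholders):
version: '2'
services:
  nginx:
    image: nginx:1.10-alpine
    ports:
      - "80:80"
The legacy Swarm manager then schedules the containers across the nodes in the list above.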

Related

Getting a list of services from inside a Docker container in Swarm Mode

I am trying to find a list of all running Docker Swarm Mode services that have a specific tag - but from inside a container.
I know about the hack to mount docker.sock into the container, but I'm looking for a more elegant/secure way to retrieve this information.
Essentially, I want my app to "auto-discover" other available services. Since the docker swarm manager node already has this information, this would eliminate the need for a dedicated service registry like Consul.
You can query the Docker REST API from within the container.
For example, on macOS, run this on the host to list Docker images:
curl --unix-socket /var/run/docker.sock http://localhost/v1.40/images/json
To run the same from inside the container, first install socat on the host.
Then use socat to establish a relay between the host's Unix socket /var/run/docker.sock and the host's port 2375:
socat TCP-LISTEN:2375,reuseaddr,fork UNIX-CONNECT:/var/run/docker.sock
Then query the host's port 2375 from within a container:
curl http://host.docker.internal:2375/v1.40/images/json
You should see the same result.
Notes:
I don't have a Docker swarm initialized, so the examples list Docker images.
Refer to the Docker docs for the service-listing API.
You can find the API version in the output of docker version.
Refer to "What is the Linux equivalent of host.docker.internal" if you don't use macOS. The latest Linux Docker versions should support host.docker.internal.
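To get back to the original question (finding services with a specific tag/label), the same relay can be pointed at the service-listing endpoint; a sketch, where my.tag=frontend is a placeholder label and the request has to go to a swarm manager node:
curl -G http://host.docker.internal:2375/v1.40/services \
  --data-urlencode 'filters={"label":["my.tag=frontend"]}'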

Installing 2-node cluster of MarkLogic in docker

I'd like to install MarkLogic in Docker and form a cluster, i.e. two or more MarkLogic node instances running on the same machine. How can I achieve that?
The Building a MarkLogic Docker Container blog entry describes how to create and initialize a Docker image running MarkLogic.
Near the bottom of the article, it describes how to link multiple containers using the --link switch, and how to use docker-compose to help manage a cluster of Docker containers:
Linking Containers
You are the one who tells Docker how containers should communicate! When using the docker run command, you can also pass in a --link flag.
Consider the following examples:
docker run -d --name=marklogic1 --hostname=marklogic1.local -p 8000-8002:8000-8002 marklogic:8.05-preinitialized
docker run -d --name=marklogic2 --hostname=marklogic2.local --link marklogic1:marklogic1 -p 18000-18002:8000-8002 marklogic:8.05-preinitialized
The above creates two MarkLogic containers. The second has the --link flag. Docker networking sets environment variables and updates the /etc/hosts file inside the containers being linked as well as the linking container. This sets up the ability for Docker containers to communicate over the internal Docker network. The --hostname flag is used to be consistent with MarkLogic, which uses the full domain name when contacting other MarkLogic servers in the cluster, so we simply add the .local domain to the name of the container.
Finally, note that the -p flag on the second container publishes MarkLogic's ports 8000 to 8002 on the host computer's ports 18000 to 18002. Why not use the host computer's ports 8000 to 8002? Because the first container is already using them. Remember, Docker shares networking with the host computer! But of course, you can choose any range of open ports on your host computer to map the container's MarkLogic ports to.
Now, simply point your browser to port 8001 in the first container (marklogic1) and go through the post-installation steps. Skip joining a cluster. When finished, point your browser to port 18001 for the second container (marklogic2) and go through the post-installation steps. When asked to join a cluster, simply use the host name of localhost and leave the port number at 8001. MarkLogic in the second container will contact MarkLogic in the first container. The configuration will be updated such that the marklogic2 joins the cluster with marklogic1. Create and add a third MarkLogic container, also linking it to marklogic1:marklogic1 and marklogic2:marklogic2 and you’ll soon have a proper 3-node MarkLogic cluster!
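A sketch of that third container (the host port range 28000-28002 is just an assumption so it doesn't clash with the first two containers):
docker run -d --name=marklogic3 --hostname=marklogic3.local \
  --link marklogic1:marklogic1 --link marklogic2:marklogic2 \
  -p 28000-28002:8000-8002 marklogic:8.05-preinitialized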
Using docker-compose
Docker has created another tool to aid in managing clusters of Docker containers. docker-compose has commands to create multiple containers and network them together, and you can then create, start and stop them using docker-compose commands. Docker uses a file called Dockerfile to build images; docker-compose uses a file called docker-compose.yml to build networks of containers.
docker-compose is available as a separate download.
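A minimal docker-compose.yml sketch of the two-container setup above could look like this (same image tag as in the docker run examples; adjust to whatever you named your image):
version: '2'
services:
  marklogic1:
    image: marklogic:8.05-preinitialized
    hostname: marklogic1.local
    ports:
      - "8000-8002:8000-8002"
  marklogic2:
    image: marklogic:8.05-preinitialized
    hostname: marklogic2.local
    links:
      - marklogic1
    ports:
      - "18000-18002:8000-8002"
Running docker-compose up -d then brings up both containers, and the post-installation steps are the same as above.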

best practices for deploying nginx

I am totally new to cloud stuff. I want to deploy my application, which uses Node, MongoDB and Redis. All of these parts run as Docker containers and work well together.
Now I want to set up nginx. What is the best practice for deploying a load balancer? Should I run nginx as a Docker container, or just install it at the system level?
I think it depends on how many services you want to serve with your nginx instance. For example, since you can have only one nginx instance bound to ports 80 and 443, if you want to share that single access point between different domains I would go for nginx running on the host (or in a dedicated stack, but that looks complex). If the access point serves a single domain, then it makes perfect sense to have it inside the stack.
If you are running the other components of the stack in containers, then it makes sense to run nginx as a container as well.
But it depends on your environment and what tools are available. You can scale nginx easily on Kubernetes, as well as on Docker Swarm or any other tool of your choice.
Ideally you should run each component in a separate container so that you can manage, scale and troubleshoot them independently.
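For example, with Docker Swarm Mode, scaling an existing nginx service (assuming it is named nginx) is a one-liner; Kubernetes has an equivalent kubectl scale command:
docker service scale nginx=3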
It's a really good idea to embed nginx in your Docker network. As a Docker container in a Docker network, it can connect to the other containers by their service/container names, while you define a port-forwarding rule only on the nginx service.
For example :
docker network create --driver overlay --attachable demo
docker run -d -p 80:80 --network demo --name nginx nginx
docker run -it --network demo --name alpine alpine
Your shell should now be in the alpine container. Do a "ping nginx"; you should be able to ping it. The opposite works too.
So now you have nginx reachable at localhost:80 from your host machine, and it can call other containers by their container/service names. This is really useful as an access point to the web APIs deployed in your Docker network.
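As a sketch of that last point: assuming a hypothetical backend service called api listening on port 3000 in the same network (not something from your setup), nginx can proxy to it by name with a small config file mounted into the container:
cat > default.conf <<'EOF'
server {
    listen 80;
    location / {
        proxy_pass http://api:3000;   # "api" is resolved by Docker's internal DNS
    }
}
EOF
# Remove the plain nginx container started above first, or pick another name
docker run -d -p 80:80 --network demo --name nginx \
  -v "$PWD/default.conf:/etc/nginx/conf.d/default.conf:ro" nginx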

docker swarm container communication

I create a swarm and join a node; very nice, all works fine:
docker swarm init --advertise-addr 192.168.99.1
docker swarm join --token verylonggeneratedtoken 192.168.99.1:2377
I create 3 services on the swarm manager
docker service create --replicas 1 --name nginx --publish published=80,target=80 nginx
docker service create --replicas 1 --name php --publish published=9000,target=9000 php:7.1-fpm
docker service create --replicas 1 --name postgres --publish published=5432,target=5432 postgres:9.5
All services boot up just fine, but when I customize the php image with my app and configure nginx to talk to the php-fpm socket, I can't find a way to make these three services communicate. It doesn't work even if I enter the services with "docker exec -it service-id bash" and try to ping the container names or host names (I even tried to curl them).
What I am trying to say is that I don't know how to configure nginx to connect to fpm, since I don't know how one container communicates with another when using swarm. With docker-compose or docker run it is as simple as using the links option. I've read all the documentation around this and spent hours on trial and error, and I just couldn't wrap my head around it. I have read about the routing mesh, which will get the ports published, and it really does for the outside world, but I couldn't figure out on which IP they are published for the internal containers; also, that can't be a random IP, as that would cause problems for managing my apps' configuration, including the nginx configuration.
To have multiple containers communicate with each other, they need to be running on a user-created network. With swarm mode, you want to use an overlay network so containers can run on multiple hosts.
docker network create -d overlay mynet
Then run the services with that network:
docker service create --network mynet ...
The easier solution is to use a compose.yml file to define each of the services. By default, the services in a stack are deployed on their own network:
docker stack deploy -c compose.yml stack-name
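A minimal sketch of such a compose.yml for this case (the php image name is a placeholder for your custom image built from php:7.1-fpm):
version: '3.3'
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
  php:
    image: my-php-app
  postgres:
    image: postgres:9.5
In the nginx configuration you can then point at php-fpm simply as php:9000, because all services in the stack resolve each other by service name.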
Or you can just write one docker-compose file and create a Docker stack from it.
It's easier and more reliable to combine php-fpm and nginx in the same image. I know this goes against the official single-app-per-image approach, but for cases like php-fpm + nginx, where you must have both to serve a request, it's the best option. I have a WIP sample here: https://github.com/BretFisher/php-docker-good-defaults

Why can't I curl one docker container from another via the host

I really don't understand what's going on here. I simply want to perform an HTTP request from inside one Docker container to another Docker container, via the host, using the host's public IP, on a published port.
Here is my setup: I have my dev machine, and I have a Docker host machine with two containers. CONT_A listens on and publishes a web service on port 3000.
DEV-MACHINE
HOST (Public IP = 111.222.333.444)
CONT_A (Publish 3000)
CONT_B
On my dev machine (a completely different machine)
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I SSH into the HOST
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I execute inside CONT_B
Not possible, just timeout. Ping is fine though...
docker exec -it CONT_B bash
$ curl http://111.222.333.444:3000 --> TIMEOUT
$ ping 111.222.333.444 --> OK
Why?
Ubuntu 16.04, Docker 1.12.3 (default network setup)
I know this isn't strictly an answer to the question, but there's a more Docker-ish way of solving your problem. I would forget about publishing the port for inter-container communication altogether. Instead, create an overlay network using Docker swarm. You can find the full guide here, but in essence you do the following:
# create the network
docker network create --driver overlay --subnet=10.0.9.0/24 my-net
# start container A
docker run -d --name=A --network=my-net producer:latest
# start container B
docker run -d --name=B --network=my-net consumer:latest
# magic has occurred
docker exec -it B /bin/bash
> curl A:3000   # MIND BLOWN!
Then inside container B you can just curl hostname A and it will resolve for you (even when you start doing scaling etc.).
If you're not keen on using Docker swarm you can still use Docker legacy links as well:
docker run -d --name B --link A:A consumer:latest
which would link any exposed (not published) ports in your A container.
And finally, if you start moving to production... forget about links and overlay networks altogether... use Kubernetes :-) It's a bit more difficult to set up initially, but it introduces a bunch of concepts and tools that make linking and scaling clusters of containers a lot easier! But that's just my personal opinion.
By running your container B with the --network host argument, you can simply access your container A using localhost; no public IP needed.
> docker run -d --name containerB --network host yourimagename:version
After you run container B with above command then you can try curl container A from container B like this
> docker exec -it containerB /bin/bash
> curl http://localhost:3000
None of the current answers explain why the Docker containers behave as described in the question.
Docker is there to provide lightweight isolation of the host's resources to one or several containers.
The Docker network is isolated from the host network by default, and uses a bridge network (again, by default; you can also have an overlay network) for inter-container communication.
Here is how to fix the problem without Docker networks.
From "How to connect to the Docker host from inside a Docker container?"
As of Docker version 18.03, you can use the host.docker.internal hostname to connect to your Docker host from inside a Docker container.
This works fine on Docker for Mac and Docker for Windows, but unfortunately it was not supported on Linux until Docker 20.10.0 was released in December 2020.
Starting with version 20.10, the Docker Engine now also supports communicating with the Docker host via host.docker.internal on Linux.
Unfortunately, this won't work out of the box on Linux because you need to add the extra --add-host run flag:
--add-host=host.docker.internal:host-gateway
This is for development purposes and will not work in a production environment outside of Docker Desktop for Windows/Mac.
That way, you don't have to change your network driver to --network=host, and you still can access the host through host.docker.internal.
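Applied to the question's setup, a sketch could look like this (CONT_B and port 3000 come from the question; the alpine image and installing curl are just assumptions for the demo):
docker run -d --name CONT_B --add-host=host.docker.internal:host-gateway alpine sleep 3600
docker exec -it CONT_B sh -c 'apk add --no-cache curl && curl http://host.docker.internal:3000'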
I had a similar problem: I have an nginx server in one container (let's call it web) with several server blocks, and cron installed in another container (let's call it cron). I use Docker Compose. I wanted to use curl from cron to web from time to time to execute some PHP script on one of the applications. It should look as follows:
curl http://app1.example.com/some_maintance.php
But I always got "host unreachable" after some time.
The first solution was to update /etc/hosts in the cron container and add:
1.2.3.4 app1.example.com
where 1.2.3.4 is the IP of the web container. That worked, but this is a hack, and as far as I know such manual updates are not encouraged. You should instead use extra_hosts in Docker Compose, which requires an explicit IP address rather than a container name.
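For reference, a minimal extra_hosts sketch in docker-compose.yml (the image name is a placeholder; the IP and domain are the same example values as above):
services:
  cron:
    image: my-cron-image
    extra_hosts:
      - "app1.example.com:1.2.3.4"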
I tried the custom-networks solution, which from what I have seen is the correct way to deal with this, but I never succeeded here. If I ever learn how to do it, I promise to update this answer.
Finally I used curl's ability to send the request to the web container directly while passing the domain name as a Host header:
curl -H'Host: app1.example.com' web/some_maintance.php
Not very beautiful, but it works (here web is the name of my nginx container).
