Need explanation for the Docker documentation on Swarm

I finished this documentation:
https://docs.docker.com/swarm/install-w-machine/
It works fine.
Now I am trying to set up EC2 instances by following this documentation:
https://docs.docker.com/swarm/install-manual/
I am at Step 4, "Set up a discovery backend", and I cannot understand what I need to do next.
I created 5 nodes in EC2: manager0, manager1, consul0, node0, node1. Now I need to know how to setup service discovery with swarm.
In that document they ask us to connect to manager0 and consul0 and then run ifconfig, but the example output they show refers to an etc0 instance, and I don't know where that is coming from.
Ultimately I need to know where (in which node?) to run this command:
$ docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap
Any suggestions on how to get past this step?

Consul will run on the consul0 server you created. So basically you first need to be able to run Docker remotely on worker0 and worker1; this is step 3. A better way of doing this is to edit the daemon options directly. Note that sudo echo '...' > /etc/default/docker does not work as intended, because the redirection runs as your normal user, not as root; use tee instead:
echo 'DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"' | sudo tee /etc/default/docker
Then restart Docker. Afterwards you will find that you can run Docker remotely from master0, master1, or any other instance behind your firewall, using commands that start with:
docker -H $WORKER0_IPADDRESS:2375
For example, if your worker's IP address were 1.2.3.4, this would run the docker ps command remotely:
docker -H 1.2.3.4:2375 ps
This is what Swarm runs on. Then start your Consul server with the command you quoted; you got that one right. You won't do anything else with the consul0 server except use its IP address when you run your Swarm commands.
So, if $CONSUL0 is the IP address of your Consul server, this is how you would set up the rest of Swarm, running each command locally on the corresponding node:
On consul0:
docker run -d -p 8500:8500 --restart=unless-stopped --name=consul progrium/consul -server -bootstrap
On master0 and master1:
docker run --name=master -d -p 4000:4000 swarm manage -H :4000 --replication --advertise $(hostname -i):4000 consul://$CONSUL0:8500
On worker0 and worker1:
docker run -d --name=worker swarm join --advertise=$(hostname -i):2375 consul://$CONSUL0:8500/
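Once those containers are up, you can sanity-check the cluster from any machine that can reach a manager. This is a sketch: MANAGER0_IP is a placeholder for your manager's address, and the :4000 port comes from the manage command above.

```shell
# Ask the Swarm manager for cluster state; it should list both workers under "Nodes"
docker -H tcp://MANAGER0_IP:4000 info

# Containers started through the manager get scheduled onto one of the workers
docker -H tcp://MANAGER0_IP:4000 run -d nginx
```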

Can I Run Docker Exec from an external VM?

I have a group of docker containers running on a host (172.16.0.1). Because of restrictions of the size of the host running the docker containers, I'm trying to set up an auto-test framework on a different host (172.16.0.2). I need my auto-test framework to be able to access the docker containers. I've looked over the docker documentation and I don't see anything that says how to do this.
Is it possible to run a docker exec and point it at the Docker host? I was hoping to do something like the following, but there isn't an option to specify the host:
docker exec -h 172.16.0.1 -it my_container bash
Should I be using a different command?
Thank you!
Not sure why you need to do docker exec remotely, but anyway, it is achievable.
You need to make sure the Docker daemon on the host where your containers are running is listening on a socket.
Something like this:
# Running docker daemon which listens on tcp socket
$ sudo dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
Now interact with the docker daemon remotely from external VM using:
$ docker -H tcp://<machine-ip>:2375 exec -it my-container bash
OR
$ export DOCKER_HOST="tcp://<machine-ip>:2375"
$ docker exec -it my-container bash
Note: exposing the Docker socket on your network carries serious security risks. There are safer ways to expose it, over an encrypted HTTPS (TLS) socket or over the SSH protocol.
Please go through these docs carefully, before attempting anything:
https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-socket-option
https://docs.docker.com/engine/security/https/
If you have SSH on both machines you can easily execute commands on remote daemon like that:
docker -H "ssh://username@remote_host" <your normal docker command>
# for example:
docker -H "ssh://username@remote_host" exec ...
docker -H "ssh://username@remote_host" ps
# and so on
Another way to do the same is to store -H key value into DOCKER_HOST environment variable:
export DOCKER_HOST=ssh://username@remote_host
# now you can talk to remote daemon with your regular commands
# these will be executed on remote host:
docker ps
docker exec ...
Without SSH you can make Docker listen for TCP. This requires some preparation to maintain security: the HTTPS guide linked above walks through creating certificates and basic usage. After that, usage looks like:
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
-H=172.16.0.1:2376
Lastly, you can use docker context to save remote hosts and their configuration. Contexts let you talk to various remote hosts with ease via the --context <name> option; see the context documentation.
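As a sketch, creating and using a context might look like this (the context name remote and the SSH address are placeholders):

```shell
# Save the remote daemon's connection details under a named context
docker context create remote --docker "host=ssh://username@remote_host"

# Use it per command...
docker --context remote ps

# ...or switch to it as the default for subsequent commands
docker context use remote
docker exec -it my-container bash   # now runs against the remote daemon
```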

What goes in "some-network" placeholder in dockerized redis cli?

I'm looking at documentation here, and see the following line:
$ docker run -it --network some-network --rm redis redis-cli -h some-redis
What should go in the --network some-network field? My docker run command from the preceding section of the documentation just did default port mapping (docker run -d -p 6379:6379, etc.).
I'm starting my redis server with default docker network configuration, and see this is in use:
$ docker container ls
CONTAINER ID   IMAGE   COMMAND                  CREATED          STATUS          PORTS                    NAMES
abcfa8a32de9   redis   "docker-entrypoint.s…"   19 minutes ago   Up 19 minutes   0.0.0.0:6379->6379/tcp   some-redis
However, using the default bridge network produces:
$ docker run -it --network bridge --rm redis redis-cli -h some-redis
Could not connect to Redis at some-redis:6379: Name or service not known
Ignore the --network bridge command and use:
docker exec -it some-redis redis-cli
Docker includes support for networking containers through the use of network drivers. By default, Docker provides two network drivers for you: bridge and overlay. You can also write a network driver plugin to create your own drivers, but that is an advanced task.
Read more here.
https://docs.docker.com/engine/tutorials/networkingcontainers/
https://docs.docker.com/v17.09/engine/userguide/networking/
You need to run
docker network create some-network
It doesn't matter what name some-network is, just so long as the Redis server, your special CLI container, and any clients talking to the server all use the same name. (If you're using Docker Compose this happens for you automatically and the network will be named something like directoryname_default; use docker network ls to find it.)
If your Redis server is already running, you can use docker network connect to attach the existing container to the new network. This is one of the few settings you're able to change after you've created a container.
If you're just trying to run a client to talk to this Redis, you don't need Docker for this at all. You can install the Redis client tools locally and run redis-cli, pointing at your host's IP address and the first port in the docker run -p option. The Redis wire protocol is simple enough that you can also use primitive tools like nc or telnet as well.
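Putting that together, a minimal sketch of the documented setup looks like this (as noted above, the names some-network and some-redis are arbitrary, as long as they are used consistently):

```shell
# Create a user-defined bridge network
docker network create some-network

# Run the Redis server attached to that network, with the port mapping from the question
docker run -d --name some-redis --network some-network -p 6379:6379 redis

# Run the CLI container on the same network; "some-redis" resolves by container name
docker run -it --network some-network --rm redis redis-cli -h some-redis
```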

No such image or container error

I want to setup a rancher server and a rancher agent on the same server.
Here is what I have done to create the server:
docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:stable
Then, I have opened my web-browser on 8080 port.
I have chosen a login/password and enabled access control.
Then I wanted to create a host (agent). The Rancher web interface tells me to type this command:
docker run -e CATTLE_AGENT_IP=x.x.x.x --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.10 http://nsxxx.ovh.net:8080/v1/scripts/yyyy:1514678400000:zzzz
I have no error message, but I do not see any entry in host section (in rancher web interface).
So I tried to execute a shell on the agent docker container:
docker exec -ti xxxxx /bin/bash
I tried to manually run run.sh script and here is what I see:
Error: No such image or container: nsxxx
I suppose this is because the docker containers cannot communicate with each other, but I have done exactly what is in the documentation...
Thanks for your help
For docker exec you need to replace the xxxxx string with the container ID or the name of the container. You can get both from the docker ps command.
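For example (the ID and name shown will of course differ on your machine):

```shell
# List running containers; the ID is the first column, the name the last
docker ps

# Then exec into the agent container by ID or by name
docker exec -ti <container-id-or-name> /bin/bash
```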

How to connect to server on Docker from host machine?

Ok, I am pretty new to Docker world. So this might be a very basic question.
I have a container running in Docker, which is running RabbitMQ. Let's say the name of this container is "Rabbit-container".
RabbitMQ container was started with this command:
docker run -d -t -i --name rmq -p 5672:5672 rabbitmq:3-management
Python script command with 2 args:
python ~/Documents/myscripts/migrate_data.py amqp://rabbit:5672/ ~/Documents/queue/
Now, I am running a Python script from my host machine, which is creating some messages. I want to send these messages to my "Rabbit-container". Hence I want to connect to this container from my host machine (Mac OSX).
Is this even possible? If yes, how?
Please let me know if more details are needed.
So, I solved it by simply mapping the RMQ listening port to host OS:
docker run -d -t -i --name rmq -p 15672:15672 -p 5672:5672 rabbitmq:3-management
I previously had only -p 15672:15672 in my command. This is mapping the Admin UI from Docker container to my host OS. I added -p 5672:5672, which mapped RabbitMQ listening port from Docker container to host OS.
If you're running this container in your local OSX system then you should find your default docker-machine ip address by running:
docker-machine ip default
Then you can change your python script to point to that address and mapped port on <your_docker_machine_ip>:5672.
That happens because docker runs in a virtualization engine on OSX and Windows, so when you map a port to the host, you're actually mapping it to the virtual machine.
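Combining this with the script invocation from the question, you could resolve the VM's address at run time and substitute it into the AMQP URL (paths are the ones from the question):

```shell
# Point the script at the docker-machine VM instead of localhost
python ~/Documents/myscripts/migrate_data.py "amqp://$(docker-machine ip default):5672/" ~/Documents/queue/
```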
You'd need to run the container with port 5672 exposed, perhaps 15672 as well if you want the web UI, and 5671 if you use SSL, or any other port for which you add a TCP listener in rabbitmq.
It would be also easier if you had a specific IP and a host name for the rabbitmq container. To do this, you'd need to create your own docker network
docker network create --subnet=172.18.0.0/16 mynet123
After that, start the container like so (note the space between --net mynet123 and --ip):
docker run -d --net mynet123 --ip 172.18.0.11 --hostname rmq1 --name rmq_container_name -p 15673:15672 rabbitmq:3-management
Note that with the rabbitmq:3-management image, port 5672 is (well, was when I used it) already exposed, so there is no need to do that. --name sets the container name, and --hostname obviously the host name.
So now, from your host you can connect to rmq1 rabbitmq server.
You said that you have never used docker-machine before, so I assume you are using the Docker Beta for Mac (you should see the Docker icon in the menu bar at the top).
Your docker run command for rabbit is correct. If you now want to connect to rabbit, you have two options:
Wrap your python script in a new container and link it to rabbit:
docker run -it --rm --name migration --link rmq:rabbit -v ~/Documents/myscripts:/app -w /app python:3 python migrate_data.py
Note that we have to link rmq:rabbit, because you name your container rmq but use rabbit in the script.
Execute your python script on your host machine and use localhost:5672
python ~/Documents/myscripts/migrate_data.py amqp://localhost:5672/ ~/Documents/queue/

Should Swarm Master Join As Node in a Single Node Cluster?

We are building a small cluster, and a (strange) requirement is to set everything up on one machine, which other machines can join in the future.
I set up consul with:
docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap
and the master with:
docker run -d -p 4000:4000 swarm manage -H :4000 --advertise <ip_here>:4000 consul://<ip_here>:8500
where docker is run with:
sudo docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
At this stage, docker -H :4000 info lists the Nodes as 0, and I cannot run any images with docker -H :4000 run <image> because there is "No healthy node available in the cluster".
When I join the master node to the cluster with:
docker run -d swarm join --advertise=<ip_here>:2375 consul://<ip_here>:8500
Then docker -H :4000 info lists the Nodes as 1, and I can run containers.
Please note that <ip_here> refers all to the same ip of the machine.
Is this the intended behaviour? If not, what am I doing wrong?
After seeing Docker Machine's way of creating a Swarm cluster, as well as using Swarm that is integrated into Docker v1.12.0, I wanted to post an update. Swarm master does join Swarm cluster, by running two containers, an agent and a master.
As for me, I use the Swarm master as the Consul server; this answer may help you. So, to answer the question: no, the Swarm master does not join the cluster by itself in a single-node setup.
You can't deploy Swarm on a single node; that's not what it's for, and it cannot work that way. Swarm turns a pool of Docker hosts into a single, virtual Docker host, so if the pool of Docker hosts contains zero hosts, there is no Docker agent to host containers.