I have a group of docker containers running on a host (172.16.0.1). Because of restrictions of the size of the host running the docker containers, I'm trying to set up an auto-test framework on a different host (172.16.0.2). I need my auto-test framework to be able to access the docker containers. I've looked over the docker documentation and I don't see anything that says how to do this.
Is it possible to run a docker exec and point it at the Docker host? I was hoping to do something like the following, but there isn't an option to specify the host:
docker exec -h 172.16.0.1 -it my_container bash
Should I be using a different command?
Thank you!
I'm not sure why you need to run docker exec remotely, but anyway it is achievable.
You need to make sure the Docker daemon on the host where your containers are running is listening on a TCP socket.
Something like this:
# Running docker daemon which listens on tcp socket
$ sudo dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
Now interact with the Docker daemon remotely from the external host using:
$ docker -H tcp://<machine-ip>:2375 exec -it my-container bash
OR
$ export DOCKER_HOST="tcp://<machine-ip>:2375"
$ docker exec -it my-container bash
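Before wiring up your test framework, a quick sanity check that the daemon is actually reachable from the other host is to hit its version endpoint, which needs no containers:
$ docker -H tcp://<machine-ip>:2375 version
# or, without a local docker CLI:
$ curl http://<machine-ip>:2375/version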
Note: Exposing the Docker socket publicly on your network carries serious security risks, although there are safer ways to expose it, over an encrypted HTTPS socket or over the SSH protocol.
Please go through these docs carefully before attempting anything:
https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-socket-option
https://docs.docker.com/engine/security/https/
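On hosts where the daemon is managed by systemd, starting dockerd by hand as above usually conflicts with the packaged unit file. A common pattern (a sketch, assuming a standard systemd installation with dockerd at /usr/bin/dockerd; adjust the path for your distribution) is a drop-in override:
# /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375

# then reload units and restart the daemon
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker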
If you have SSH on both machines, you can easily execute commands on the remote daemon like this:
docker -H "ssh://username#remote_host" <your normal docker command>
# for example:
docker -H "ssh://username#remote_host" exec ...
docker -H "ssh://username#remote_host" ps
# and so on
Another way to do the same is to store the -H value in the DOCKER_HOST environment variable:
export DOCKER_HOST=ssh://username@remote_host
# now you can talk to remote daemon with your regular commands
# these will be executed on remote host:
docker ps
docker exec ...
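The ssh:// transport requires a reasonably recent Docker CLI (18.09 or newer) on the client, and an account on the remote host that is allowed to use Docker. Key-based authentication avoids a password prompt on every command; a typical one-time setup might be:
# copy your public key to the remote host (generate one first with ssh-keygen if needed)
$ ssh-copy-id username@remote_host

# sanity check: the remote user must be able to talk to the daemon,
# e.g. by being in the docker group on the remote host
$ ssh username@remote_host docker ps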
Without SSH you can make Docker listen on TCP. This will require some preparation to maintain security. This guide walks through creating certificates and some basic usage. After that your invocations will look something like this:
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
    -H=172.16.0.1:2376 version
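If you don't want to repeat the TLS flags on every call, the Docker CLI also picks them up from environment variables; for example (assuming ca.pem, cert.pem, and key.pem live in ~/.docker):
$ export DOCKER_HOST=tcp://172.16.0.1:2376
$ export DOCKER_TLS_VERIFY=1
$ export DOCKER_CERT_PATH=~/.docker
$ docker exec -it my_container bash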
Finally, you can use docker context to save external hosts and their configuration. Using contexts lets you communicate with various remote hosts easily via the --context <name> option. Read the context documentation here.
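For example, a context wrapping the SSH transport from earlier might look like this (the context name test-host is arbitrary):
$ docker context create test-host --docker "host=ssh://username@remote_host"
$ docker --context test-host ps

# or make it the default for subsequent commands:
$ docker context use test-host
$ docker exec -it my_container bash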
Related
I want to create a Docker container on one machine (say, running CentOS) and then access that container from another machine (which may be CentOS or a Mac). How can we do that? Is it possible with macvlan networking? If yes, what are the steps? If not, what is the way?
It depends on your final goal. Here are some approaches, depending on what you want to achieve:
Manage container and execute bash in the container on a remote host:
The easiest way is to use the environment variable DOCKER_HOST:
export DOCKER_HOST=ssh://vagrant#192.168.5.178
docker exec -ti centos_remote /bin/bash
You can find more information in this answer https://stackoverflow.com/a/51897942/2816703
Use the container as a form of virtual machine which users can SSH into:
First you will need a container that is running sshd. You will expose port 22 on another port on the host network. Finally you will use ssh with -p to connect to that port. Here is a working example:
$ sudo docker run -d -P --name test_sshd rastasheep/ubuntu-sshd:14.04
$ sudo docker port test_sshd 22
0.0.0.0:49154
$ ssh root@localhost -p 49154
# The password is `root`
root@test_sshd $
Or, if you are on a remote machine, use the host's IP address xxx.xxx.xxx.xxx to connect to the container:
$ ssh root@xxx.xxx.xxx.xxx -p 49154
# The password is `root`
You can also pre-select a port (in this case port 22000) and test from the host:
~# docker run -d -p 22000:22 --name test_sshd rastasheep/ubuntu-sshd:14.04
~# ssh root@<ipaddress> -p 22000
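If you would rather build such an image yourself than pull rastasheep/ubuntu-sshd, a minimal sketch looks like this (based on the classic sshd example from the Docker docs; the base image and the root password are placeholders you should change):
$ mkdir sshd-image && cd sshd-image
$ cat > Dockerfile <<'EOF'
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server \
 && mkdir /var/run/sshd \
 && echo 'root:THEPASSWORD' | chpasswd \
 && sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
EOF
$ docker build -t my-sshd .
$ docker run -d -p 22000:22 --name test_sshd my-sshd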
Set up a network layer (L2/L3) between the hosts:
Using macvlan is one approach; another is ipvlan. In both cases you are converting the host network adapter into a virtual router, after which you need to set up the routes. You can find a detailed explanation at http://networkstatic.net/configuring-macvlan-ipvlan-linux-networking/
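As a rough illustration of the macvlan approach (the subnet, gateway, and parent interface below are assumptions; substitute your own LAN values, and note that by default the host itself cannot reach containers on its own macvlan network):
# on the container host: create a macvlan network bridged onto eth0
$ docker network create -d macvlan \
    --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
    -o parent=eth0 lan_net

# the container gets its own address on the LAN, reachable from other machines
$ docker run -d --network lan_net --ip 192.168.1.50 --name web nginx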
I'm looking at documentation here, and see the following line:
$ docker run -it --network some-network --rm redis redis-cli -h some-redis
What should go in the --network some-network field? My earlier docker run command used default port mapping, i.e. docker run -d -p 6379:6379, etc.
I'm starting my Redis server with the default Docker network configuration, and I can see it is in use:
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
abcfa8a32de9 redis "docker-entrypoint.s…" 19 minutes ago Up 19 minutes 0.0.0.0:6379->6379/tcp some-redis
However, using the default bridge network produces:
$ docker run -it --network bridge --rm redis redis-cli -h some-redis
Could not connect to Redis at some-redis:6379: Name or service not known
Ignore the --network bridge option and use:
docker exec -it some-redis redis-cli
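That runs redis-cli inside the already-running some-redis container, where the server is reachable on the container's own loopback address, so no extra network setup is needed:
$ docker exec -it some-redis redis-cli
127.0.0.1:6379> ping
PONG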
Docker includes support for networking containers through the use of network drivers. By default, Docker provides two network drivers for you, the bridge and the overlay drivers. You can also write a network driver plugin so that you can create your own drivers but that is an advanced task.
Read more here.
https://docs.docker.com/engine/tutorials/networkingcontainers/
https://docs.docker.com/v17.09/engine/userguide/networking/
You need to run
docker network create some-network
It doesn't matter what name some-network is, just so long as the Redis server, your special CLI container, and any clients talking to the server all use the same name. (If you're using Docker Compose this happens for you automatically and the network will be named something like directoryname_default; use docker network ls to find it.)
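Put together, a minimal end-to-end sequence looks like this (names match the ones used above; the -p mapping is only needed if you also want to reach Redis from the host):
$ docker network create some-network
$ docker run -d --network some-network --name some-redis -p 6379:6379 redis
$ docker run -it --network some-network --rm redis redis-cli -h some-redis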
If your Redis server is already running, you can use docker network connect to attach the existing container to the new network. This is one of the few settings you're able to change after you've created a container.
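For example, to attach the some-redis container from your docker container ls output above:
$ docker network connect some-network some-redis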
If you're just trying to run a client to talk to this Redis, you don't need Docker for this at all. You can install the Redis client tools locally and run redis-cli, pointing at your host's IP address and the first port in the docker run -p option. The Redis wire protocol is simple enough that you can also use primitive tools like nc or telnet as well.
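For instance, with your original -p 6379:6379 mapping, either of these works from the host (redis-cli here is the locally installed client; some nc variants need a flag like -q 1 to exit after the reply):
$ redis-cli -h 127.0.0.1 -p 6379 ping
PONG
$ printf 'PING\r\n' | nc -q 1 127.0.0.1 6379
+PONG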
I want to set up a Rancher server and a Rancher agent on the same server.
Here is what I have done to create the server:
docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:stable
Then I opened my web browser on port 8080.
I chose a login/password and enabled access control.
Then I wanted to create a host (agent). The Rancher web interface tells me to type this command:
docker run -e CATTLE_AGENT_IP=x.x.x.x --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.10 http://nsxxx.ovh.net:8080/v1/scripts/yyyy:1514678400000:zzzz
I have no error message, but I do not see any entry in host section (in rancher web interface).
So I tried to execute a shell on the agent docker container:
docker exec -ti xxxxx /bin/bash
I tried to manually run the run.sh script and here is what I see:
Error: No such image or container: nsxxx
I suppose this is because the Docker containers cannot communicate with each other, but I have done exactly what is in the documentation...
Thanks for your help
For docker exec you need to replace the xxxxx string with the container ID or the name of the container. You can get both from the docker ps command.
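For example (the ID and name below are from a hypothetical listing; yours will differ):
$ docker ps --format '{{.ID}}\t{{.Names}}'
b3a6c8f21e77    rancher-agent
$ docker exec -ti rancher-agent /bin/bash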
I finished this documentation:
https://docs.docker.com/swarm/install-w-machine/
It works fine.
Now I am trying to set up EC2 instances by following this documentation:
https://docs.docker.com/swarm/install-manual/
I am at Step 4, "Set up a discovery backend".
I cannot understand what I need to do next.
I created 5 nodes in EC2: manager0, manager1, consul0, node0, and node1. Now I need to know how to set up service discovery with Swarm.
In that document they ask us to connect to manager0 and consul0 and run ifconfig, and then they refer to an eth0 interface; I don't know where this comes from.
Ultimately I need to know where (in which node?) to run this command:
$ docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap
Any suggestions on how to get through this step?
Consul will run on the consul0 server you created. So basically you first need to be able to run Docker on worker0 and worker1 remotely; this is Step 3. A better way of doing this is to edit the daemon options directly:
echo 'DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"' | sudo tee /etc/default/docker
Then restart Docker. Afterwards you will find that you can run Docker remotely from master0, master1, or any other instance behind your firewall, with commands that start with:
docker -H $WORKER0_IPADDRESS:2375
For example, if your worker's IP address were 1.2.3.4, this would run the docker ps command remotely:
docker -H 1.2.3.4:2375 ps
This is what Swarm runs on. Then start up your Consul server with the command you want to run; you got that one right. That's it: you won't do anything else with the consul0 server except use its IP address when you run your Swarm commands.
So if $CONSUL0 represents the IP address of your Consul server, this is how you would set up the rest of Swarm, running each command locally on the corresponding node:
On consul0:
docker run -d -p 8500:8500 --restart=unless-stopped --name=consul progrium/consul -server -bootstrap
On master0 and master1:
docker run --name=master -d -p 4000:4000 swarm manage -H :4000 --replication --advertise $(hostname -i):4000 consul://$CONSUL0:8500
On worker0 and worker1:
docker run -d --name=worker swarm join --advertise=$(hostname -i):2375 consul://$CONSUL0:8500/
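Once everything is up, you can check the cluster from any machine that can reach a manager, using the port 4000 endpoint configured above; the info output should list both workers as nodes:
$ docker -H <manager0-ip>:4000 info
$ docker -H <manager0-ip>:4000 run hello-world   # gets scheduled onto one of the workers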
Is it possible to control (list/start/stop/delete) Docker containers from a Docker container running on the same machine?
The idea/intent is to have a Docker container which monitors/controls its neighbours.
Both low/high level details would be useful.
Thanks!
Yes, the easiest way is to mount the Docker socket from the host inside the container, e.g.:
$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker debian /bin/bash
root#dcd3b64945ed:/# docker ps -q
dcd3b64945ed
3178d5269041
e59d5e37e0f6
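Note that bind-mounting the docker binary from the host (as in the example above) can break if the client's shared-library dependencies aren't present in the container's base image. A sketch of an alternative is to use an image that already ships the CLI, such as the docker image on Docker Hub:
$ docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock docker sh
/ # docker ps -q
dcd3b64945ed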
Mounting the Docker socket is the easiest approach; however, it is insecure, as it gives root access to everyone who has access to docker.sock.
I'd suggest using the Docker Remote API to do the list/start/stop/etc. from a program, which hides the remote (in your case local) Docker daemon from its users.
Ref: https://docs.docker.com/articles/basics/
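As a quick sketch of what talking to that API looks like over the local socket (the endpoints are from the documented Engine API; my_container is a placeholder name):
# list running containers (same data as `docker ps`)
$ curl --unix-socket /var/run/docker.sock http://localhost/containers/json

# stop and start a container by name or ID
$ curl -X POST --unix-socket /var/run/docker.sock http://localhost/containers/my_container/stop
$ curl -X POST --unix-socket /var/run/docker.sock http://localhost/containers/my_container/start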