I put the docker host on one machine and the client on another. Then I try to run something like this from machine 2 (the client):
docker -H tcp://machine1:port run -v ./dummy-folder:/dummy-folder alpine sh
Is that dummy-folder going to work over the TCP connection?
Is the same valid for docker-compose too?
Is the same valid for docker swarm mode?
The volume mount happens locally on the docker host where the container runs; nothing for the volume mount travels over the TCP connection. (At build time, by contrast, the build context is packaged on the client and sent to the server.) Swarm is unchanged: if you mount a volume, it will mount on whatever host the container happens to run on.
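To make that concrete, here is a minimal sketch (the host name and port are hypothetical): the bind-mount path is resolved on machine1's filesystem, even though the command is typed on machine 2.

# hypothetical host/port; /data is machine1's /data, not the client's
docker -H tcp://machine1:2375 run --rm -v /data:/data alpine ls /data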
If you can't replicate your data across the hosts, then you'll want to use a volume mount over the network to a shared storage location, or use a volume driver that does the replication for you (e.g. nfs, infinit, glusterfs, flocker).
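For example, a shared-storage setup with the local driver's NFS support might look like the sketch below (the NFS server address and export path are assumptions for illustration):

# assumed: an NFS server at 192.168.1.100 exporting /exports/dummy-folder
docker -H tcp://machine1:2375 volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.100,rw \
  --opt device=:/exports/dummy-folder \
  dummy-folder
# the named volume now points at the same data on whichever host mounts it
docker -H tcp://machine1:2375 run --rm -v dummy-folder:/dummy-folder alpine ls /dummy-folder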
I was following the youtube video linked in the article below, which allows a docker container to get root access on the host. There are a few steps which are unclear; can someone please explain how they work?
https://www.lvh.io/posts/dont-expose-the-docker-socket-not-even-to-a-container.html
Step 1> Bind mount /var/run/docker.sock from the host into the container.
Step 2> Install docker in the container. <<< At this stage I see that docker ps -a shows all the containers which are present on the host.
**QUESTION:** How can the container see the containers present on the host? Is it because dockerd in the new container is using /var/run/docker.sock from the host? netstat/ss in the new container doesn't show anything...
Step 3> Run another container from the 1st container. Pass the following parameters to it:
docker run -dit -v /:/host ubuntu
The intention of this is to mount / from the host filesystem to /host in the 2nd container being created.
**QUESTION:** How does the 1st container have access to / (the filesystem of the host)?
Thanks.
Docker runs as a service on the host machine. This service communicates with clients via a socket which, by default, is the unix socket unix:///var/run/docker.sock.
When you share this socket with any container, that container gets full access to the docker daemon. From there, the container can start other containers, delete containers/volumes/etc., or even map volumes at will from the host into a new container, as described in your question with -v /:/host. Doing that gives the container root access to the host file system under /host/.
In short: be careful sharing this precious socket with any container you don't trust. In some cases the shared socket makes sense (for example Portainer, a container that serves as a management GUI for docker).
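A minimal sketch of the whole chain, assuming the official docker:cli client image (the exact image is an illustration; any image with a docker client works):

# On the host: share the socket with a container that has a docker client
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli sh

# Inside that container: the client talks to the HOST daemon through the
# socket, so this lists the host's containers; there is no nested daemon,
# which is why netstat/ss shows nothing listening
docker ps -a

# Still inside: ask the host daemon to start a sibling container with the
# host's root filesystem mounted at /host
docker run -dit -v /:/host ubuntu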
I have a Docker Ubuntu bionic container on an Ubuntu server host. From the container I can see the host drive mounted at /etc/hosts, which is not a directory. I tried unmounting and remounting it at a different location, but that throws a permission denied error, even when I try as root.
So how do you access the contents of your host system?
Firstly, /etc/hosts is a networking file present on all linux systems; it is not related to drives or docker.
Secondly, if you want to access part of the host filesystem inside a Docker container, you need to use volumes. Using the -v flag in a docker run command, you can specify a directory on the host to mount into the container, in the format:
-v /path/on/host:/path/inside/container
for example:
docker run -v /path/on/host:/path/inside/container <image_name>
Example with docker cp, using container id 32162f4ebeb0:
# On the host: copy a file from the container to the host
docker cp 32162f4ebeb0:/dir_inside_container/image1.jpg /dir_inside_host/image1.jpg
# On the host: copy a file from the host into the container
docker cp /dir_inside_host/image1.jpg 32162f4ebeb0:/dir_inside_container/image1.jpg
Docker directly manages the /etc/hosts files in containers. You can't bind-mount a file there.
Hand-maintaining mappings of host names to IP addresses in multiple places can be tricky to keep up to date. Consider running a DNS server such as BIND or dnsmasq, or using a hosted service like Amazon's Route 53, or a service-discovery system like Consul (which incidentally provides a DNS interface).
If you really need to add entries to a container's /etc/hosts file, the docker run --add-host option or Docker Compose extra_hosts: setting will do it.
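For instance (the hostname and address below are made up for illustration):

# docker run: add a static hosts entry, then show the generated file
docker run --rm --add-host db.internal:10.0.0.5 alpine cat /etc/hosts

# the docker-compose equivalent in docker-compose.yml:
#   services:
#     app:
#       image: alpine
#       extra_hosts:
#         - "db.internal:10.0.0.5"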
As a general rule, a container can't access the host's filesystem, except to the extent that the docker run -v option maps specific directories into a container. Also as a general rule you can't directly change mount points in a container; stop, delete, and recreate it with different -v options.
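The recreate cycle looks like this (container and image names are placeholders):

# stop and remove the old container, then recreate it with the new mount
docker stop my-app
docker rm my-app
docker run -d --name my-app -v /new/host/path:/data my-image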
Run this command to link a local folder into a docker container:
docker run -it -v "$(pwd)":/src centos
pwd: the present working directory (we can use any directory), and
src: the path inside the container that we link pwd to
I have an image that I'm using to run my CI/CD builds (using GitLab CE). I'd like to deploy my app doing something like this from within the container:
eval "$(docker-machine env manager)"
sudo docker stack deploy --compose-file docker-stack.yml web
However, I'd like docker-machine to access machines defined on the host system, since the container will be destroyed and I don't want to include access details in the image.
I've tried a few things:
Accessing the Remote Host via docker-machine
Create the docker-machine on the host and mount the MACHINE_STORAGE_PATH so that it is available to the container
Connect to the remote docker-machine manually from within the container and setting the MACHINE_STORAGE_PATH equal to a mounted volume
Mounting the docker socket
In both cases, I can see the machine storage is persisted, but whenever I create a new container and run docker-machine ls, none of the machines are listed.
Accessing the Remote Host via DOCKER_HOST
Forward the remote machine's docker port to the local docker port: docker-machine ssh manager-1 -N -L 2376:localhost:2376
export DOCKER_HOST=:2376
Tell docker to use the same certs that are used by docker-machine: export DOCKER_TLS_VERIFY=1 and export DOCKER_CERT_PATH=/Users/me/.docker/machine/machines/manager-1
Test with docker info
This gives me error during connect: Get https://localhost:2376/v1.26/info: x509: certificate signed by unknown authority
Any ideas on how I can perform a remote deployment from within a container?
Thanks
EDIT
Here is a diagram to try and help better communicate the scenario.
Don't use docker-machine for this.
Docker-machine stores files in $HOME/.docker/machine, so when you restart with a fresh copy of this folder, all previously defined machines will be removed. You could store this folder as a volume, but there's a much easier way for your purposes.
The solution is to mount the docker socket and run your docker ... commands as normal, either as root or as a user with the same gid as the docker socket (note that group names inside and outside the container may not match, so the gid is what matters). You can skip the docker-machine eval completely since you are running the commands against the local docker socket.
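A sketch of the gid-matched variant (the CI image name is a placeholder):

# give the container's user the gid that owns the host's docker socket
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --group-add "$(stat -c '%g' /var/run/docker.sock)" \
  my-ci-image docker ps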
If you need to run commands remotely, I find it easier to define the DOCKER_HOST and DOCKER_TLS_VERIFY variables manually rather than using docker-machine.
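Setting those variables by hand might look like this (the manager address and cert path are assumptions; the certs need to be copied or mounted into the CI container):

export DOCKER_HOST=tcp://manager-1:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/certs
docker info   # should now report the remote daemon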
In case you want to communicate from your CI container to the Docker host you can simply mount the Docker socket when starting the CI container:
docker run -v /var/run/docker.sock:/var/run/docker.sock <gitlab-image>
Now you can run docker commands on the host from within the CI container.
the scenario: I have a host that has a running docker daemon and a working docker client and socket. I have 1 docker container that was started from the host and has the docker socket mounted within it. It also has the docker client mounted from the host. So I'm able to issue docker commands at will from within this docker container using the aforementioned mechanism.
the need: I want to start another docker container from within this docker container; in other words, I want to start a sibling docker container from another sibling docker container.
the problem: A problem arises when I want to mount files that live in the host filesystem into the sibling container I want to spin up from the other sibling. It is a problem because when issuing docker run, the docker daemon behind the mounted socket is really looking at the host filesystem. So I need access to the host file system from within the docker container which is trying to start the sibling.
In other words, I need something along the lines of:
# running from within another docker container:
docker run --name another_sibling \
-v {DockerGetHostPath: path_to_host_file}:path_inside_the_sibling \
bash -c 'some_exciting_command'
Is there a way to achieve that? Thanks in advance.
Paths are always resolved on the host; it doesn't matter that you are running the client remotely (or in a container).
Remember: the docker client is just a REST client; "-v" always refers to the daemon's file system.
There are multiple ways to achieve this.
You can always make sure that each container mounts the correct host directory
You can use --volumes-from, e.g.:
docker run -it --volumes-from=keen_sanderson --entrypoint=/bin/bash debian
--volumes-from Mount volumes from the specified container(s)
You can use named volumes (see the sketch after this list)
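A minimal sketch of the named-volume approach (volume and file names are placeholders): the volume is resolved by the host daemon, so sibling containers share the same data regardless of where the client runs.

# create a named volume on the host daemon
docker volume create shared-data
# one sibling writes into it...
docker run --rm -v shared-data:/data alpine sh -c 'echo hello > /data/msg'
# ...and another sibling reads the same data
docker run --rm -v shared-data:/data alpine cat /data/msg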
I am using the kafka docker image from wurstmeister.
The docker-compose file defines a volume such as /var/run/docker.sock:/var/run/docker.sock
What is the purpose of the above unix socket?
When should a docker image declare the above volume?
The kafka-docker project is making (questionable, see below) use of the docker command run inside the kafka container in order to introspect your docker environment. For example, it will determine the advertised kafka port like this:
export KAFKA_ADVERTISED_PORT=$(docker port `hostname` $KAFKA_PORT | sed -r "s/.*:(.*)/\1/g")
There is a broker-list.sh script that looks for kafka brokers like this:
CONTAINERS=$(docker ps | grep 9092 | awk '{print $1}')
In order to run the docker cli inside the container, it needs access to the /var/run/docker.sock socket on your host.
Okay, that's it for the facts. The following is just my personal opinion:
I think this is frankly a terrible idea and that the only containers that should ever have access to the docker socket are those that are explicitly managing containers. There are other mechanisms available for performing container configuration and discovery that do not involve giving the container root access to your host, which is exactly what you are doing when you give something access to the docker socket.
By default, the Docker daemon listens on unix:///var/run/docker.sock, allowing only local connections by the root user. So, generally speaking, if we can access this socket from somewhere else, we can talk to the Docker daemon or extract information about other containers.
If we want some process inside our container to access information about other containers managed by the Docker daemon (running on our host), we can declare the volume like above.
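For reference, the declaration in a docker-compose file looks roughly like this (a minimal fragment, not the project's full compose file):

services:
  kafka:
    image: wurstmeister/kafka
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock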
Let's see an example from the wurstmeister docker image.
The Dockerfile:
At the end of the file, it calls:
CMD ["start-kafka.sh"]
start-kafka.sh
Let's take a look at line 6:
if [[ -z "$KAFKA_ADVERTISED_PORT" ]]; then
export KAFKA_ADVERTISED_PORT=$(docker port `hostname` $KAFKA_PORT | sed -r "s/.*:(.*)/\1/g")
fi
When the Kafka container starts, it wants to execute the command below inside the container (to find the host port that maps to the Kafka port inside the container):
docker port `hostname` $KAFKA_PORT
Note that the volume above has to be mounted for a command like this to work.
Reference from the Docker website (search for the keyword "socket").
What is the purpose of the above unix socket?
Mounting the /var/run/docker.sock socket in a container provides access to the Docker Remote API hosted by the docker daemon. Anyone with access to this socket has complete control of docker and the host running docker (essentially root access).
When should a docker image declare the above volume?
Very rarely. If you are running a docker admin tool that requires API access inside a container then it needs to be mounted (or accessible via TCP) so the tool can manage the hosting docker daemon.
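For example, an admin tool like Portainer (mentioned in an earlier answer) is typically started along these lines (the published port is the tool's default web UI port; check the tool's docs):

# Portainer manages the host daemon through the mounted socket
docker run -d -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  portainer/portainer-ce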
As larsks mentioned, docker-kafka's use of the socket for config discovery is very questionable.
It's not necessary to mount the docker.sock file; this can be avoided by commenting out the appropriate lines in kafka-docker/Dockerfile and start-kafka.sh. There's no need to add broker-list.sh to the kafka container.