I have docker on my host machine with a container running. I was wondering if it's possible, and what the best approach would be, to "trigger" a container creation from the running container.
Let's say my machine is host and I have a container called app (with id 123456789) running on host.
root#host $ docker container ls
123456789 app_mage .... app
I would like to create a container on host from within app
root#123456789 $ docker run --name app2 ...
root#host $ docker container ls
123456789 app_mage .... app
12345678A app_mage .... app2
What I need is for my app to be running on docker and to run arbitrary applications in an isolated environment (but I'd rather avoid docker-in-docker)
Much of the Docker community veers away from designs like this; however, it is very doable.
Similar to Starting and stopping docker container from other container, you can simply mount the docker.sock file from the host machine into the container, giving it the privilege to access the Docker daemon.
To make things more automated, you could use the docker-py SDK to start containers from inside a container, which in turn accesses the Docker daemon on the host machine hosting the container you are spawning more containers from.
For example:
docker run --name test1 -v /var/run/docker.sock:/var/run/docker.sock image1
----
import docker

def create_container():
    # from_env() talks to the host daemon via the mounted docker.sock
    docker.from_env().containers.run("image2", name="test2")
This example starts container test1; calling that function inside the newly created container in turn creates a new container test2 running on the same host as test1.
Related
I was following the YouTube video linked in the article below, which shows how a docker container can get root access on the host.
There are a few steps which are unclear, can someone please explain how they work further?
https://www.lvh.io/posts/dont-expose-the-docker-socket-not-even-to-a-container.html
Step 1> Bind mount /var/run/docker.sock from host to container
Step 2> Install docker in the container <<< at this stage I see that docker ps -a shows all the containers which are present on the host.
**QUESTION:** How can the container see the containers present on the host? Is it because dockerd in the new container is using /var/run/docker.sock from the host? netstat/ss in the new container doesn't show anything.
Step 3> Run another container from the 1st container. Pass the following parameters to it:
docker run -dit -v /:/host ubuntu
The intention of this is to mount / from the host filesystem to /host in the 2nd container being created.
**QUESTION:** How does the 1st container have access to / (the filesystem of the host)?
Thanks.
Docker runs as a service on the host machine. This service communicates with clients via a socket which, by default, is the unix socket unix:///var/run/docker.sock.
When you share this socket with any container, that container gets full access to the docker daemon. From there, the container can start other containers, delete containers/volumes/etc., or even map volumes at will from the host into a new container, as described in your question with -v /:/host. Doing that gives the container root access to the host file system under /host/.
In short: you should be careful sharing this precious socket with any container you don't trust. In some cases the shared socket makes sense (for example portainer: a container that serves as a management GUI for Docker).
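To see this concretely, here is a minimal docker-py sketch (assuming the docker Python package is installed in a container that has /var/run/docker.sock mounted); it lists the host's containers, which is exactly what docker ps -a shows in Step 2:

import docker

# from_env() connects through the mounted /var/run/docker.sock,
# so this client talks to the host's daemon, not a nested one.
client = docker.from_env()

# These are the host's containers -- the same list `docker ps -a` prints
for container in client.containers.list(all=True):
    print(container.short_id, container.name)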
I am using docker-compose for deployment.
I want to restart my "centos-1" container from "centos-2" container. Both containers are running on the same host.
Please suggest how I could achieve this in the simplest and most automated way.
I followed How to run shell script on host from docker container? and tried to run a script on the host from the "centos-2" container, but the script executes inside the container and not on the host.
Script:
#!/bin/bash
sudo docker container restart centos-1
Error:
line 2: docker: command not found
(Docker isn't installed inside the centos-2 container)
You need:
Install the docker CLI (command line interface) in the second container. Don't confuse this with a full-scale installation: you don't need the docker daemon, only the command line tool (the docker executable).
Share your host's docker daemon (service) to make it accessible in the second container. That is achieved by simply sharing /var/run/docker.sock when launching the 2nd container, for example:
docker run ... -v "/var/run/docker.sock:/var/run/docker.sock" container2 ...
Now you can execute any docker command, like docker stop, from the second container, and these commands are happily passed to your main (and only) docker daemon.
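If you would rather not install the docker CLI at all, a minimal docker-py sketch does the same thing (assuming Python and the docker package are available inside centos-2, and the socket is mounted as above):

import docker

# Talks to the host daemon through the mounted /var/run/docker.sock
client = docker.from_env()

# Equivalent to `docker container restart centos-1` on the host
client.containers.get("centos-1").restart()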
There is an approach from the CI context for controlling the host's Docker daemon from a running container, called Docker-out-of-Docker (DooD):
You have to install docker inside your container
Map your system's docker installation into your container using volumes:
-v /var/run/docker.sock:/var/run/docker.sock
Now each docker command inside your container is executed against the system's docker installation. E.g. if you type docker image list inside your container, you should see the same list as if you typed the command on your system.
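As a quick sanity check, a docker-py sketch (assuming the package is installed inside the container) should print the same images the host sees:

import docker

# The client uses the mounted socket, so this is the host's image list
for image in docker.from_env().images.list():
    print(image.tags)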
Is it possible to spawn Docker services from within a container running on a Docker swarm? This would allow containers to dynamically maintain the components running in the swarm.
Currently I am able to run containers within other containers on the host machine by mounting the /var/run/docker.sock into the container while using the docker-py SDK.
docker run -v /var/run/docker.sock:/var/run/docker.sock master
Inside the container I have a python script that runs the following:
container = docker.from_env().containers.run('worker', detach=True, tty=True, volumes=volumes, network='backend-network', mem_limit=worker.memory_limit)
Is something similar to this possible in Docker Swarm, not just vanilla Docker?
You can mount the Docker socket and use the docker module as you're doing now, but create a service instead, assuming you're on a manager node.
some_service = docker.from_env().services.create(…)
https://docker-py.readthedocs.io/en/stable/services.html
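For example, a sketch under stated assumptions (the image, service name and network below are placeholders, and the client must be talking to a manager node's daemon):

import docker

client = docker.from_env()

# Placeholders: adjust the image, name and network to your stack
service = client.services.create(
    "worker",                     # image
    name="worker-service",
    networks=["backend-network"],
)

# Services can also be scaled from here
service.scale(3)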
How can I access a path of a container from docker-machine? I have the docker-machine IP and I want to connect remotely to a docker image, e.g.:
when I connect with ssh docker@5.5.5.5, all the files are the docker-machine's, but I want to connect to a docker image via ssh.
When I use the command docker exec -u 0 -it test bash, all the files from the image are there, but I want to access them over ssh using docker-machine.
How can I do it?
This is tricky, as Docker is designed to run a single process in the foreground, and the container dies when that process completes. This means Docker containers don't run anything additional other than what you define in the Dockerfile or docker-compose.yml.
What you can try is using a docker-compose.yml file to expose port 22 to the outside world (this can also be done on the command line with a Dockerfile). This is NOT guaranteed to work, as it requires the image to run an SSH daemon, and in most cases an image runs a single process.
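If you do go the SSH route, a minimal docker-py sketch would publish port 22 like this ("my-sshd-image" is hypothetical and must actually run an SSH daemon as its main process):

import docker

client = docker.from_env()

# "my-sshd-image" is a placeholder -- its entrypoint must start sshd.
# Container port 22 is published on host port 2222.
container = client.containers.run(
    "my-sshd-image",
    detach=True,
    ports={"22/tcp": 2222},
)

You could then connect with ssh -p 2222 user@<docker-machine-ip>.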
If you're looking to persist files used by containers, so that when a container is re-deployed it starts where it left off, you can mount a folder from the host machine into the container as a volume.
the scenario: I have a host with a running docker daemon and a working docker client and socket. I have 1 docker container that was started from the host and has the docker socket mounted within it. It also has the docker client mounted from the host. So I'm able to issue docker commands at will from within this docker container using the aforementioned mechanism.
the need: I want to start another docker container from within this docker container; in other words, I want to start a sibling docker container from another sibling docker container.
the problem: A problem arises when I want to mount files that live on the host filesystem into the sibling container that I want to spin up from the other sibling container. It is a problem because when issuing docker run, the docker daemon reached through the socket mounted inside the container is really looking at the host filesystem. So I need access to the host file system from within the docker container which is trying to start another sibling.
In other words, I need something along the lines of:
# running from within another docker container:
docker run --name another_sibling \
  -v {DockerGetHostPath: path_to_host_file}:path_inside_the_sibling \
  bash -c 'some_exciting_command'
Is there a way to achieve that? Thanks in advance.
Paths are always on the host; it doesn't matter that you are running the client remotely (or in a container).
Remember: the docker client is just a REST client, and "-v" always refers to the daemon's file system.
There are multiple ways to achieve this.
You can always make sure that each container mounts the correct host directory
You can use --volumes-from, e.g. (see the docker-py sketch after this list):
docker run -it --volumes-from=keen_sanderson --entrypoint=/bin/bash debian
--volumes-from Mount volumes from the specified container(s)
You can use named volumes.
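As a docker-py sketch of the --volumes-from variant (reusing the keen_sanderson container name from the example above; the image and command are placeholders):

import docker

client = docker.from_env()

# volumes_from reuses the volumes of an existing container
container = client.containers.run(
    "debian",
    command="ls /",                      # placeholder command
    volumes_from=["keen_sanderson"],
    detach=True,
)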