Is it possible to share a single Docker volume with multiple Docker containers, using /var/lib/docker as the destination?
A minimal reproducible example is below:
$ docker volume create --name lib
$ docker run --privileged -v lib:/var/lib/docker --name c1 -d docker:dind
$ docker run --privileged -v lib:/var/lib/docker --name c2 -d docker:dind
I want to work with Docker inside the c1 and c2 containers simultaneously. But if you wait a moment, you'll see that this doesn't work and the second container (c2) stops. I checked the error logs:
$ docker logs -f c2
...
failed to start containerd: timeout waiting for containerd to start
Also, I cannot create multiple volumes, because storage is limited and the images are large.
UPDATE:
Maybe I'm facing an XY problem! What I actually want is to share my images: all of the Docker images on my host machine should be available inside every DinD container, and the containers should also be able to build a new Docker image that immediately becomes accessible to the other containers.
To answer the question in the title: yes, multiple containers can mount the same volume. However, each of your containers is a Docker engine, and the second engine fails to start because there is already a running engine using the /var/lib/docker directory. This isn't a volume-mounting issue so much as a Docker engine design challenge.
Given your requirements (sharing the host engine's image store with various DinD instances, without sharing the host's Docker engine itself via docker.sock or mTLS), I don't believe there's a good answer. You're left with two options:
Run your own local registry server. This keeps the layers from being sent outside your network, and the registry could even be on the same host. However, the layers will be copied for each engine, and you'll need to manage GC policies on that registry. This gives you the desired isolation without the desired deduplication of image layers (see the sketch after these options).
Share the docker.sock between the host and trusted containers. The containers would then have direct access to the host engine, effectively root on the host (unless you have set up the engine as rootless), so only do this in environments where you trust it. This gives you the layer deduplication, but none of the isolation.
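As a minimal sketch of the first option, a local registry that the host pushes to and each DinD engine pulls from (the image name and registry hostname are illustrative; a plain-HTTP registry must either be given TLS certificates or be listed as an insecure registry on each engine):
$ docker run -d -p 5000:5000 --restart=always --name registry registry:2
$ docker tag myimage:latest localhost:5000/myimage:latest
$ docker push localhost:5000/myimage:latest
# inside each DinD container, pull through the registry instead of sharing /var/lib/docker
$ docker pull <registry-host>:5000/myimage:latest
The second option is just a bind mount of the host's socket, as shown further down this page:
$ docker run -v /var/run/docker.sock:/var/run/docker.sock ...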
The reason it's difficult is that Docker is designed to manage its own copy of /var/lib/docker, so all the state can be tracked in memory and periodically written out as JSON metadata files on disk to handle restarts. Locking is done with mutexes inside that one process, so the engine doesn't need to worry about multiple writers modifying layers, or a reader running while a writer is still creating a layer.
Take a look at this document:
https://docs.docker.com/storage/bind-mounts/
Is it possible for a second Docker container to natively access the internal file system of another container if they're running on the same system?
Without mapping volumes, I think it is not possible.
When you run a container, Docker creates a namespace for it. This namespace provides a layer of isolation for that container's processes: their PID sequence, hostname, filesystem, and so on are isolated, and from their point of view they are the only processes on the machine.
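You can see this isolation with a couple of quick commands (the alpine image is just an example):
$ docker run --rm alpine ps
# only the container's own processes are listed, with ps itself as PID 1
$ docker run --rm alpine hostname
# prints the container's ID, not the host's hostname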
If you need more information, refer to this book: https://www.manning.com/books/kubernetes-in-action
You can use a shared filesystem mounted as a volume in two separate containers.
The following command will create a directory called nginxlogs in your current user's home directory and bind-mount it to /var/log/nginx in the container:
docker run --name=nginx -d -v ~/nginxlogs:/var/log/nginx -p 5000:80 nginx
Then you can mount the same directory into another container and perform the same operations there.
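A minimal sketch of a second container sharing the same host directory (mounted read-only here to avoid write conflicts; the alpine image is just an example):
$ docker run --rm -v ~/nginxlogs:/logs:ro alpine ls /logs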
Finally, you have to remember that if two separate processes from different containers try to access the same files, it can cause conflicts.
I have a docker image which runs tomcat. Whenever I deploy a detached container and log in to it using docker exec, I am logged in as root by default. However, whenever I try commands like mount/umount, the container shell keeps returning an error saying I must be superuser.
What is this error and how do I fix it?
Even as root, the set of things you can do inside a Docker container is limited. There's some discussion of this under "Runtime privilege and Linux capabilities" in the docker run documentation. Among the things you can't do in a container without additional configuration is mount(8) additional filesystems.
In general, though, it's not good Docker practice to docker exec into containers and start making changes. You usually want to set things up so that you can run a single docker run (or docker-compose up) command, and everything is automatically configured for you. This is especially important when you start looking at things like restart policies or clustered environments like Docker Swarm or Kubernetes: manually tweaking things after startup doesn't work well when you have multiple copies of a container, potentially on different hosts, that might restart on their own.
Docker has some built-in support for managing filesystems in the container and it's better to use that:
If you're trying to mount --bind a host directory for things like publishing logs out, Docker has its own bind mount system, so you can
docker run -v $PWD/host/directory/path:/container/path ...
If you're trying to mount a physical device for external storage, you can mount(8) it on the host and then bind-mount it into the container as above.
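For instance (the device and mount point are illustrative):
# on the host
$ sudo mount /dev/sdX /mnt/external
$ docker run -v /mnt/external:/container/path ...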
Or, you can manually configure a Docker named volume to mount a physical device. The docker volume create command takes extended options that let you manually specify most of the mount options (you will typically also give the filesystem type), so you can
docker volume create disk --driver local --opt type=ext4 --opt device=/dev/sdX
docker run -v disk:/container/path ...
If you need to unmount a volume, stop the container, delete it, and re-run it with one fewer -v option. (Stopping and recreating containers for config changes like this is extremely routine.)
I'm running Jenkins inside a Docker container. I wonder if it's ok for the Jenkins container to also be a Docker host? What I'm thinking about is to start a new docker container for each integration test build from inside Jenkins (to start databases, message brokers etc). The containers should thus be shutdown after the integration tests are completed. Is there a reason to avoid running docker containers from inside another docker container in this way?
Running Docker inside Docker (a.k.a. dind), while possible, should be avoided, if at all possible. (Source provided below.) Instead, you want to set up a way for your main container to produce and communicate with sibling containers.
Jérôme Petazzoni — the author of the feature that made it possible for Docker to run inside a Docker container — actually wrote a blog post saying not to do it. The use case he describes matches the OP's exact use case of a CI Docker container that needs to run jobs inside other Docker containers.
Petazzoni lists two reasons why dind is troublesome:
It does not cooperate well with Linux Security Modules (LSM).
It creates a mismatch in file systems that creates problems for the containers created inside parent containers.
From that blog post, he describes the following alternative,
[The] simplest way is to just expose the Docker socket to your CI container, by bind-mounting it with the -v flag.
Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Now this container will have access to the Docker socket, and will therefore be able to start containers. Except that instead of starting "child" containers, it will start "sibling" containers.
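For example, once the socket is mounted and a Docker CLI is installed in the CI container, its commands talk straight to the host engine (the container and image names below are illustrative):
$ docker ps                                  # lists the host's containers, including this one
$ docker run -d --name test-db postgres:15   # starts a sibling container on the host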
I answered a similar question before on how to run a Docker container inside Docker.
Running docker inside docker is definitely possible. The main thing is to run the outer container with extra privileges (using --privileged=true) and then install docker in that container.
Check this blog post for more info: Docker-in-Docker.
One potential use case for this is described in this entry. The blog describes how to build docker containers within a Jenkins docker container.
However, Docker inside Docker is not the recommended approach for this type of problem. Instead, the recommended approach is to create "sibling" containers, as described in this post.
So, running Docker inside Docker was once considered by many to be a good solution for this type of problem. Now, the trend is to use "sibling" containers instead. See the answer by #predmijat on this page for more info.
It's OK to run Docker-in-Docker (DinD) and in fact Docker (the company) has an official DinD image for this.
The caveat however is that it requires a privileged container, which depending on your security needs may not be a viable alternative.
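For reference, a minimal way to start the official image and talk to the inner engine (a sketch; the container name is illustrative, and the inner daemon can take a few seconds to come up):
$ docker run --privileged -d --name dind docker:dind
$ docker exec dind docker version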
The alternative solution of running Docker using sibling containers (aka Docker-out-of-Docker or DooD) does not require a privileged container, but has a few drawbacks that stem from the fact that you are launching the container from within a context that is different from the one in which it's running (i.e., you launch the container from within a container, yet it runs at the host's level, not inside the container).
I wrote a blog describing the pros/cons of DinD vs DooD here.
Having said this, Nestybox (a startup I just founded) is working on a solution that runs true Docker-in-Docker securely (without using privileged containers). You can check it out at www.nestybox.com.
Yes, we can run docker in docker; we'll need to attach the unix socket /var/run/docker.sock, on which the docker daemon listens by default, as a volume into the parent container using -v /var/run/docker.sock:/var/run/docker.sock.
Sometimes, permission issues may arise on the docker daemon socket, in which case you can run sudo chmod 757 /var/run/docker.sock.
It also requires running docker in privileged mode, so the commands would be:
sudo chmod 757 /var/run/docker.sock
docker run --privileged=true -v /var/run/docker.sock:/var/run/docker.sock -it ...
I was trying my best to run containers within containers, just like you, for the past few days, and wasted many hours. So far, most people advised me either to use Docker's DinD image, which is not applicable in my case because I need the main container to run Ubuntu, or to run some privileged command and map the daemon socket into the container (which never worked for me).
The solution I found was to use Nestybox on my Ubuntu 20.04 system, and it works best. It's also extremely simple to set up, provided your local system is Ubuntu (which they support best), as the container runtime is specifically designed for such applications. It also has the most flexible options. The free edition of Nestybox is perhaps the best method as of Nov 2022. I highly recommend you try it instead of bothering with all the tedious setup other people suggest. They have many pre-built solutions that address such specific needs with a simple command line.
Nestybox provides a special runtime environment for newly created docker containers; they also provide some Ubuntu/common OS images with Docker and systemd built in.
Their goal is to make the main container function exactly like a virtual machine, securely. You can literally SSH into your Ubuntu main container without being able to access anything on the host machine. From your main container you can create all kinds of containers, just like a normal local system does. The systemd part is very important for conveniently setting up Docker inside the container.
One simple, common command to run a container with the sysbox runtime:
docker run --runtime=sysbox-runc -it any_image
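Inside that container, assuming an image with Docker preinstalled (such as Nestybox's prebuilt images), Docker then works as it would on a normal host, for example:
$ docker run hello-world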
If you think that's what you are looking for, you can find out more on their GitHub:
https://github.com/nestybox/sysbox
Quick link to instructions on how to deploy a simple sysbox runtime environment container: https://github.com/nestybox/sysbox/blob/master/docs/quickstart/README.md
Currently I have a container created with
docker run --detach --name gitlab_app --restart=always --publish 192.168.0.200:80:80 --publish 192.168.0.200:22:22 --volumes-from gitlab_data gitlab_image
I want to remove both port bindings, 80 and 22, from the container. Is it possible to remove a port binding from an existing docker container?
NB: It is okay to take the container offline for removing the binding.
If it's OK for the container to be offline, why not just remove it and run it again without the port switches?
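A rough sketch of that, based on the command in the question (the second run simply drops the --publish flags; the data lives in the gitlab_data volumes, so it survives the recreation):
$ docker stop gitlab_app && docker rm gitlab_app
$ docker run --detach --name gitlab_app --restart=always --volumes-from gitlab_data gitlab_image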
If you do need to do this without deleting the container, you could just modify the underlying iptables rules.
# Will list the rules
iptables -L
# Will delete the rule you want to remove
iptables --delete [chain] <Rule definition>
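Docker's published ports are implemented as NAT rules in the DOCKER chain, so in practice it looks something like this (a sketch; rule numbers depend on your system, and the engine may re-create these rules when it or the container restarts):
# list the DOCKER chain of the nat table with rule numbers
iptables -t nat -L DOCKER --line-numbers
# delete the DNAT rule for the port you no longer want published
iptables -t nat -D DOCKER <rule number>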
In general, your data should always be in one of three places:
A data only container that can be linked with a restarted service container.
A volume defined in your service container that can be linked with a new container to take backups. See here for an example.
A host-mounted volume, so that you can restart containers and mount the same location into new containers, for example:
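A minimal sketch of that last approach (the paths and image name are illustrative):
$ docker run -d --name app -v /srv/appdata:/var/lib/app myapp:latest
# the container can be removed and re-created, and /srv/appdata survives on the host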
With one of these three approaches, restarting services becomes easy, and this should be standard practice, as micro-services should be designed so that they can go down and recover often. These approaches will also speed up your application, because the default union filesystem is slower than the normal filesystems used for volumes.
If you need to recover data from a container where you did not plan volumes properly, you can use the docker export functionality to export the state of your container. Then import it into a new container with a host-mounted volume and copy your critical data from inside the container to the volume.
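A rough sketch of that recovery flow (all names and paths are illustrative; note that an imported image loses its original CMD/ENTRYPOINT, so a command such as sh has to be given explicitly):
$ docker export broken_container > broken.tar
$ docker import broken.tar recovered:latest
$ docker run -it -v /srv/backup:/backup recovered:latest sh
# then, from that shell, copy the critical files into the host-mounted volume
cp -r /path/to/critical/data /backup/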