How to assign separate volumes to each container? [Docker-swarm] [Hyperledger]

Generally, Hyperledger uses an internal /var/hyperledger/ directory to store the database for each container, and we need to mount this directory outside of the container.
When running a plain docker run or docker-compose command, we can specify this volume for each container separately, or define it in the Compose file.
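For example, with plain docker run it might look like this (a sketch only; the volume and image names are just placeholders):
$ docker volume create peer0-data
$ docker run -d -v peer0-data:/var/hyperledger/ my-hyperledger-image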
Question:
I need to try Hyperledger with Docker swarm mode (Docker 1.12), and each Hyperledger container must not share a volume with any other container. How can I assign a separate volume to each container when using Docker swarm mode?
$ docker service create ...
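I imagine the command might look something like the following (just a sketch; it assumes an engine newer than 1.12, where the --mount source accepts Go template placeholders such as {{.Task.Slot}}, and the volume/image names are placeholders):
$ docker service create \
    --name peer \
    --replicas 3 \
    --mount type=volume,source=hyperledger-{{.Task.Slot}},destination=/var/hyperledger/ \
    my-hyperledger-image
This would give each task its own named volume on whichever node it runs.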

Related

How to persist docker Images through multiple dind containers? [duplicate]

Is it possible to share a single docker volume to multiple docker containers with /var/lib/docker destination?
A minimal reproducible example is shown below:
$ docker volume create --name lib
$ docker run --privileged -v lib:/var/lib/docker --name c1 -d docker:dind
$ docker run --privileged -v lib:/var/lib/docker --name c2 -d docker:dind
I want to work with Docker inside the c1 and c2 containers simultaneously, but after a moment you'll see that it's not possible and the second container (c2) stops. I checked the error logs:
$ docker logs -f c2
.
.
.
failed to start containerd: timeout waiting for containerd to start
And I cannot create multiple volumes, because storage is limited and the images are large.
UPDATE:
Maybe I'm facing an XY problem! What I actually want is to share my images: all the Docker images on my host machine should be available inside every DinD container, and the containers should also be able to build a new image that is immediately accessible from the other containers.
To answer the question in the title: yes, multiple containers can mount the same volume. However, each of your containers is a Docker engine, and the second engine fails to start because another running engine already owns the /var/lib/docker directory. This isn't a volume-mounting issue so much as a Docker engine design constraint.
Given your requirements (an image store from the host engine, shared with various DinD instances, without sharing the host's own Docker engine via docker.sock or mTLS), I don't believe there's a good answer. You're left with two options:
Run your own local registry server. This keeps the layers from being sent outside your network, and the registry could even run on the same host (see the sketch after these two options). However, the layers will be copied for each engine, and you'll need to manage GC policies on that registry. This gives you the desired isolation without the desired deduplication of image layers.
Share the docker.sock between the host and trusted containers. The containers would then have direct access to the host engine, which is effectively root on the host (unless you have set up the engine as rootless), so only do this in environments where you trust them. This gives you the layer deduplication, but none of the isolation.
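A minimal sketch of the registry option, assuming the standard registry:2 image and that the DinD engines can reach the host on port 5000 (the hostname and image names are placeholders, and the engines would still need an insecure-registry entry or TLS to pull over plain HTTP):
# On the host: run a local registry and push an image into it
docker run -d -p 5000:5000 --name registry registry:2
docker tag myimage:latest registry.example.com:5000/myimage:latest
docker push registry.example.com:5000/myimage:latest
# Inside each DinD container: pull the image from that registry
docker pull registry.example.com:5000/myimage:latest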
The reason it's difficult is that Docker is designed to manage its own copy of /var/lib/docker, so all the state can be tracked in memory and periodically pushed out as JSON metadata files on disk to handle restarts. Mutexes live within the one process, and it doesn't need to worry about multiple writers modifying layers, or a reader running while a writer is still creating a layer.
Take a look at this document:
https://docs.docker.com/storage/bind-mounts/

Docker container A access file system of Docker container B without host volume

Is it possible for a second Docker container to natively access the internal file system of another container if they're running on the same system?
Without mapping volumes, I think it is not possible.
When you run a container, Docker creates namespaces for it, and these namespaces isolate that container's processes: their PID space, hostname, filesystem, and so on are separate, so from their point of view they are the only processes on the machine.
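A quick way to see this isolation (a sketch using the alpine image):
docker run --rm alpine ps
Only the ps process itself is listed, because the container has its own PID namespace.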
If you need more information, refer to this book: https://www.manning.com/books/kubernetes-in-action
You can use a shared filesystem mounted as a volume in two separate containers.
The following command will create a directory called nginxlogs in your current user’s home directory and bind-mount it to /var/log/nginx in the container:
docker run --name=nginx -d -v ~/nginxlogs:/var/log/nginx -p 5000:80 nginx
Then you can perform the same operation with another container.
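For example (a sketch; the image and command are just illustrative), a second container can mount the same host directory, here read-only:
docker run --rm -v ~/nginxlogs:/var/log/nginx:ro alpine ls -l /var/log/nginx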
Finally, remember that if two separate processes from different containers try to access the same files, it can cause conflicts.

Docker keeps saying must be a superuser

I have a Docker image which runs Tomcat. Whenever I deploy a detached container and log in to it using docker exec, I am logged in as root by default. However, whenever I try commands like mount/umount, the container shell keeps returning an error saying "must be superuser".
What is this error and how do I fix it?
Even as root, the set of things you can do inside a Docker container is limited. There's some discussion of this under "Runtime privilege and Linux capabilities" in the docker run documentation. Among the things you can't do in a container without additional configuration is mount(8) additional filesystems.
In general, though, it's not good Docker practice to docker exec into containers and start making changes. You usually want to set things up so that you can run a single docker run (or docker-compose up) command, and everything is automatically configured for you. This is especially important when you start looking at things like restart policies or clustered environments like Docker Swarm or Kubernetes: manually tweaking things after startup doesn't work well when you have multiple copies of a container, potentially on different hosts, that might restart on their own.
Docker has some built-in support for managing filesystems in the container and it's better to use that:
If you're trying to mount --bind a host directory for things like publishing logs out, Docker has its own bind mount system, so you can
docker run -v $PWD/host/directory/path:/container/path ...
If you're trying to mount a physical device for external storage, you can mount(8) it on the host and then bind-mount it into the container as above (see the sketch after these options).
Or, you can manually configure a Docker named volume to mount a physical device. The docker volume create command takes extended options that let you manually specify most of the mount options, so you can
docker volume create disk --driver local --opt device=/dev/sdX
docker run -v disk:/container/path ...
If you need to unmount a volume, stop the container, delete it, and re-run it with one fewer -v option. (Stopping and recreating containers for config changes like this is extremely routine.)
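A sketch of that host-mount approach (the device name and paths are just examples):
# On the host: mount the device somewhere
sudo mkdir -p /mnt/external
sudo mount /dev/sdX1 /mnt/external
# Then bind-mount the mounted directory into the container
docker run -v /mnt/external:/container/path ...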

Is it possible to provide secret to docker run?

I am just wondering whether it's possible to provide a Docker secret (created from a file) to docker run as an argument, or to mount a Docker secret during docker run.
I know it's possible with docker service, where we can specify --secret when creating the service, but I didn't see such an option for docker run.
The docker secrets functionality is implemented only in swarm mode. You can make a single-node swarm cluster very easily (docker swarm init) and run your container as a service. Alternatively, some will simply mount a file containing the secret into one-off containers as a single-file, read-only host volume, e.g.:
docker run -v "$(pwd)/your_secret.txt:/run/secrets/your_secret.txt:ro" image_name
This has less security than a swarm mode secret, but the real value of swarm secrets are in multi-node clusters where you don't want to deploy and manage a directory of sensitive data on worker nodes.
As of the docker-compose v3.1 file format, it's possible to use Docker secrets with docker-compose. https://docs.docker.com/compose/compose-file/compose-file-v3/#secrets
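A minimal sketch of such a Compose file, reusing the placeholder names from the example above (with docker stack deploy this becomes a real swarm secret, while plain docker-compose up emulates it with a bind mount):
version: "3.1"
services:
  app:
    image: image_name
    secrets:
      - your_secret
secrets:
  your_secret:
    file: ./your_secret.txt
Inside the container, the secret then appears at /run/secrets/your_secret.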

Is there a better way to run a command or shell on Docker swarm

Let's say I want to edit a config file for an NGINX Docker service that is replicated across 3 nodes.
Currently I list the services using docker service ls.
Then get the details to find a node running a container for that service using docker service ps servicename.
Then ssh to a node where one of the containers is running.
Finally, docker exec -it containername bash. Then I edit the config file.
Two questions:
Is there a better way to do this rather than ssh to a node running a container? Maybe there is a swarm or service command to do so?
If I were to edit that config file on one container would that change be replicated to the other 2 containers in the swarm?
The purpose of this exercise would be to edit configuration without shutting down a service.
You should not be exec'ing into containers to change their configuration, and so docker has not created an easy way to do this within Swarm Mode. You could use classic swarm to avoid the need to ssh into the other host, but I still don't recommend this.
The correct way to do this is to migrate your configuration file into a docker config entry. Version your config name. Then when you want to update it, you create a new version with the desired changes, and do a rolling update of your service to use that new configuration.
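A sketch of that workflow, using the NGINX example from the question (the config and file names are just examples):
# Create the initial, versioned config and attach it to the service
docker config create nginx_conf_v1 ./nginx.conf
docker service create --name nginx \
  --config source=nginx_conf_v1,target=/etc/nginx/nginx.conf \
  -p 80:80 nginx
# To change the configuration, create a new version and roll the service over to it
docker config create nginx_conf_v2 ./nginx.conf
docker service update \
  --config-rm nginx_conf_v1 \
  --config-add source=nginx_conf_v2,target=/etc/nginx/nginx.conf \
  nginx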
Unless the config is mounted from an external source like NFS, changes to the config in one container will not apply to containers running on other nodes. If that config is stored locally inside your container as part of its internal copy-on-write filesystem, then no changes from one container will be visible in any other container.

Resources