Deploy docker container to production preserving mounted volumes - docker

On my dev machine I have an app container with a mounted code directory, e.g. -v /host/code:/app/code
What is the best practice for deploying such containers to production?
How should I pack this bind mount inside the container so that in prod I would only execute "docker load"... and have everything work?

The best practice is to use a data volume container (a container which is only docker create'd, not docker run, because it does not run any process).
See "Creating and mounting a data volume container".
That way, you can easily export and deploy that container alongside the other one, instead of relying on a local host path which might not be available on the production host.
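For example, a minimal sketch of that pattern (the image and container names here are illustrative, not taken from the question):
docker create -v /app/code --name app-code busybox   # data volume container, created but never started
docker run --rm --volumes-from app-code -v /host/code:/src busybox cp -a /src/. /app/code/   # seed the volume from the dev checkout
docker run -d --volumes-from app-code my-app-image   # the app container reuses that volume
docker run --rm --volumes-from app-code -v "$PWD":/backup busybox tar cf /backup/code.tar /app/code   # archive the volume so it can be shipped to production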

Related

On Windows, where are the files of my CMS running with Docker?

I already found a command line to get to the files of my CMS (Prestashop) that runs with Docker, i.e.:
docker exec -it <mycontainer> bash
But it brings me to:
root@4c3cae74d5b1:/var/www/html#
which looks like a Linux path. So, do you know how to find out where the files are located on my Windows file system?
Thanks a lot!
Aymeric
Unless you have specified otherwise, the files exist only inside that one container's filesystem, not on your host filesystem at all. The files are on your Windows file system only if you used bind mounts when running your container and mapped host files/directories into the container.
In general, files in Docker can exist in three places:
layered container filesystem (default)
volumes (persistent volumes in your Docker host, volumes can be shared between multiple containers running on the same host)
bind mounts (files or directories in your Docker host filesystem)
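For illustration only, a Prestashop container started with a bind mount on Windows might look like this (the host path, port, and image tag are hypothetical):
docker run -d -v C:\Users\aymeric\prestashop:/var/www/html -p 8080:80 prestashop/prestashop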
You did not provide the actual docker run command you used to run your Prestashop; this would reveal which option your setup uses. More info on Docker volumes can be found here: https://docs.docker.com/storage/
Whichever way you have stored the data, you can use the docker cp command to copy files between your container and the host operating system.
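For example, to copy the Prestashop files out of the container into the current directory on the host (the destination folder name is just an example):
docker cp <mycontainer>:/var/www/html ./prestashop-files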
Technically, of course, the container filesystems and volumes are also stored on your host disk, but they are not meant to be accessed directly. Doing so is not recommended, and different versions of Docker have different restrictions. Some info on where to find them on Docker for Windows is in the answers to this question: Locating data volumes in Docker Desktop (Windows)

Docker container A access file system of Docker container B without host volume

Is it possible for a second Docker container to natively access the internal file system of another container if they're running on the same system?
Without mapping volumes, I think it is not possible.
When you run a container, Docker creates namespaces for it, and these namespaces create a layer of isolation for that container's processes: their PID sequence, hostname, filesystem and so on are isolated, so to them it is as if they are the only processes on the machine.
If you need more information, refer to this book: https://www.manning.com/books/kubernetes-in-action
You can use a shared filesystem mounted as a volume in two separate containers.
The following command will create a directory called nginxlogs in your current user's home directory and bind-mount it to /var/log/nginx in the container:
docker run --name=nginx -d -v ~/nginxlogs:/var/log/nginx -p 5000:80 nginx
Then you can do the same for another container.
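For example, a second container could mount the same host directory (the image and command here are only illustrative):
docker run --name=logreader -d -v ~/nginxlogs:/logs:ro busybox tail -f /logs/access.log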
Finally, you have to remember that if two separate processes from different containers try to access the same files, it can cause conflicts.

Docker keeps saying must be a superuser

I have a Docker image which runs Tomcat. Whenever I deploy a detached container and log in to the container using docker exec, I usually get logged in by default as root. However, whenever I try commands like mount/umount, the container shell keeps returning an error saying "must be superuser".
What is this error and how do I fix it?
Even as root, the set of things you can do inside a Docker container is limited. There's some discussion of this under "Runtime privilege and Linux capabilities" in the docker run documentation. Among the things you can't do in a container without additional configuration is mount(8) additional filesystems.
In general, though, it's not good Docker practice to docker exec into containers and start making changes. You usually want to set things up so that you can run a single docker run (or docker-compose up) command, and everything is automatically configured for you. This is especially important when you start looking at things like restart policies or clustered environments like Docker Swarm or Kubernetes: manually tweaking things after startup doesn't work well when you have multiple copies of a container, potentially on different hosts, that might restart on their own.
Docker has some built-in support for managing filesystems in the container and it's better to use that:
If you're trying to mount --bind a host directory for things like publishing logs out, Docker has its own bind mount system, so you can
docker run -v $PWD/host/directory/path:/container/path ...
If you're trying to mount a physical device for external storage, you can mount(8) it on the host and then bind-mount it into the container as above.
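For example (the device name and mount point are illustrative):
sudo mount /dev/sdb1 /mnt/external
docker run -v /mnt/external:/container/path ...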
Or, you can manually configure a Docker named volume to mount a physical device. The docker volume create command takes extended options that let you manually specify most of the mount options, so you can
docker volume create disk --driver local --opt device=/dev/sdX
docker run -v disk:/container/path ...
If you need to unmount a volume, stop the container, delete it, and re-run it with one fewer -v option. (Stopping and recreating containers for config changes like this is extremely routine.)
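For example (container and image names are hypothetical):
docker stop mycontainer
docker rm mycontainer
docker run -d --name mycontainer myimage   # the same run command as before, minus the -v option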

Volumes in Docker inside Docker?

I am running Buildbot, which is a CI tool, on an EC2 machine. It's currently running as Docker containers, one for the buildbot master and one for the buildbot worker. Inside the buildbot worker, I have to again run Docker for building images and running containers.
After doing some research on how to best do this, I have mounted the docker sock file from the host machine to the buildbot worker container. Now from inside the buildbot worker, I am able to connect to the host docker daemon and use the build cache.
The main problem now is that inside the buildbot worker I have a docker-compose file in which, for one service, I am mounting a file like this:
./configs/my.cnf:/etc/my.cnf
but it is failing. After some more research, it turns out this is because configs/my.cnf is relative to the buildbot worker's directory, and since I am using the host Docker daemon, which resolves files using host paths, it is not able to find the file.
I am not able to figure out how best to do this. There were some suggestions about using data volumes for this, but I am not sure how best to use those.
Any idea on how we can do this?
Do you have any control over the creation of the buildbot worker? Can you control the buildbot worker directory?
export BUILD_BOT_DIR=$(mktemp -d) &&
docker container create -v /var/run/docker.sock:/var/run/docker.sock -v ${BUILD_BOT_DIR}:${BUILD_BOT_DIR} -e BUILD_BOT_DIR ...
In this scenario, the path './configs/my.cnf' points to the same file both inside the container and on the host.
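For example, a compose project checked out under that directory then resolves its relative paths to the same absolute path for both the worker and the host daemon (the project name is illustrative):
cd "${BUILD_BOT_DIR}/myproject"
docker-compose up -d   # ./configs/my.cnf now maps to ${BUILD_BOT_DIR}/myproject/configs/my.cnf on the host as well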

Is there a Better way to run a command or shell on Docker swarm

Let's say I want to edit a config file for an NGINX Docker service that is replicated across 3 nodes.
Currently I list the services using docker service ls.
Then get the details to find a node running a container for that service using docker service ps servicename.
Then ssh to a node where one of the containers is running.
Finally, docker exec -it containername bash. Then I edit the config file.
Two questions:
Is there a better way to do this rather than ssh to a node running a container? Maybe there is a swarm or service command to do so?
If I were to edit that config file on one container would that change be replicated to the other 2 containers in the swarm?
The purpose of this exercise would be to edit configuration without shutting down a service.
You should not be exec'ing into containers to change their configuration, and so docker has not created an easy way to do this within Swarm Mode. You could use classic swarm to avoid the need to ssh into the other host, but I still don't recommend this.
The correct way to do this is to migrate your configuration file into a docker config entry. Version your config name. Then when you want to update it, you create a new version with the desired changes, and do a rolling update of your service to use that new configuration.
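For example, a rotation might look like this (the config and service names are illustrative):
docker config create nginx_conf_v2 ./nginx.conf
docker service update --config-rm nginx_conf_v1 --config-add source=nginx_conf_v2,target=/etc/nginx/nginx.conf nginx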
Unless the config is mounted from an external source like NFS, changes to the config in one container will not apply to other containers running on other nodes. If that config is stored locally inside your container as part of its internal copy-on-write filesystem, then no changes from one container will be visible in any other container.
