Edit docker volume by multiple users

I've searched a while on the internet for a solution.
My setup is as following:
I have a php-apache docker image (basically apache with PHP support). I used a named volume to store the webroot (all web files, mostly PHP files).
This is working fine so far, I can see my files in the browser.
Because it's a multi-user project (multiple devs), I want multiple users to be able to edit the webroot.
The named volume can be edited under /var/lib/docker/volumes/apache_webroot, but that requires root access, which is not good practice.
How can I manage the permissions of this volume without using root? I thought about creating a container that just mounts the named volume and forwards it to a path that all users can access. Or can I somehow change the permissions of /var/lib/docker/volumes/apache_webroot?
Has anyone run into the same situation? Should I just mount it to a path on the host machine and not use named volumes at all?

An alternative would be to create a container for each user and bind them all to this volume (Docker containers can share volumes). That would be a particularly good fit here. For example (mounting the volume at /var/www/html inside an ubuntu container; any image with a shell works):
docker run -d --name some_users_container --volume my_webroot_shared_volume_name:/var/www/html ubuntu /bin/bash -c "while true; do sleep 10; done"
Then all you have to do is have the other users SSH into the container.
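In practice, docker exec can be simpler than SSH for getting a shell in such a helper container. A minimal sketch, assuming the volume is mounted at /var/www/html as above:
# Open an interactive shell in the user's helper container
docker exec -it some_users_container /bin/bash
# Any edits under /var/www/html land in the shared named volume,
# with no root access to /var/lib/docker needed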

Related

How to work with the files from a docker container

I need to work with all the files from a docker container; my approach is to copy all of the files from the container to my host.
I'm using the following docker commands, for example with the postgres image:
docker create -ti --name dummy_1 postgres bash
docker cp dummy_1:/. Documents/docker/dockerOne
With this I have all the container folders and files in my host.
The idea is then to traverse all the files with the Java API, work with them, and finally delete the local files and folders. But I would like to know if there is a better approach, maybe accessing the container files directly from Java instead of creating a local copy of them on my host.
Any ideas?
You can build a small server app inside your docker container which feeds you the information you need at an exposed port. That's how I would do it.
Maybe I don't understand the question, but you can mount a volume when you run the container, instead of creating it and copying files out:
docker run -v /host/path:/container/path your_image
Any code in the container (e.g. Java) that modifies files at /container/path will be reflected on the host, and not need to be copied back in/out. Similarly, any modifications on the host filesystem will be seen in the container.
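A quick demonstration of that round trip (a sketch; the paths and file name are placeholders):
# A file written inside the container appears directly on the host
docker run --rm -v "$PWD"/shared:/container/path alpine sh -c 'echo hello > /container/path/from-container.txt'
cat shared/from-container.txt   # prints "hello" on the host, no docker cp needed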
I don't think I can implement an API in the docker container
Yes, you can. You bind a TCP port using the -p flag.
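A sketch of that idea, serving the files over HTTP with Python's built-in http.server (the image, port, and paths here are illustrative assumptions):
# Expose a tiny read-only file listing on port 8000
docker run -d --name file_api -p 8000:8000 \
-v /host/path:/container/path:ro \
python:3-alpine \
python3 -m http.server 8000 --directory /container/path
# Then from the host (or from Java): curl http://localhost:8000/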

Access data inside volume

I successfully set up a bind mount for my blog; however, I think a managed volume would be a better choice than a bind mount. The question is: if I need to edit the theme through SFTP or vim, or simply add some files to the volume, how do I do that? Right now the bind mount lets me edit the files, but how would I add/edit files on the volume, or get those files out later if I wanted to?
For example: docker volume create --name test-volume
How can I add/edit data there or access via SFTP?
As the official documentation says:
Volumes are stored in a part of the host filesystem which is managed by Docker (/var/lib/docker/volumes/ on Linux). Non-Docker processes should not modify this part of the filesystem.
So the idea is to set up a new container that bind-mounts your working directory and also mounts the volume, and then manage files between them.
For example, let's say your working dir is /app:
docker run \
-v "$PROJECT":/tmp/project \
-v test-volume:/app \
alpine \
/bin/sh -c "cp -r /tmp/project/* /app"
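To get the files back out of the volume later, the same trick works in reverse (a sketch; the backup path is a placeholder):
# Copy the volume's contents into a directory on the host
docker run --rm \
-v test-volume:/app \
-v "$PWD"/backup:/tmp/backup \
alpine \
/bin/sh -c "cp -r /app/. /tmp/backup"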
Sync tools can also be used for this.
To manage the data in your volume over SFTP through a container, you need to make sure the image you are using supports SSH connections and map port 22.
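For instance, a minimal sketch using the community atmoz/sftp image (the user name, password, and host port here are illustrative):
# Expose the volume over SFTP on host port 2222
docker run -d -p 2222:22 \
-v test-volume:/home/foo/upload \
atmoz/sftp foo:pass:::upload
# Connect with: sftp -P 2222 foo@localhost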

Docker: using a bind mount locally with swarm

Docker newcomer here.
I have a simple image of a django website with a volume defined for the app directory.
I can bind this volume to the actual folder where I do the development with this command:
docker container run --rm -p 8000:8000 --mount type=bind,src=$(pwd)/wordcount-project,target=/usr/src/app/wordcount-project wordcount-django
This works fairly well.
Now I tried to push that simple example in a swarm. Note that I have set up a local registry for the image to be available.
So to start my service I'd do:
docker service create -p 8000:8000 --mount type=bind,source=$(pwd)/wordcount-project,target=/usr/src/app/wordcount-project 127.0.0.1:5000/wordcount-django
It will work after some tries, but only because it runs on the local node (where the actual folder is) and not on a remote node (where there is no wordcount-project folder).
Any idea how to solve this so that the folder is accessible from all nodes, yet still accessible locally for development?
Thanks!
Using bind mounts in docker swarm is not recommended, as you can read in the docs. In particular:
Important: Bind mounts can be useful but they can also cause problems. In most cases, it is recommended that you architect your application such that mounting paths from the host is unnecessary.
However, if you still want to use bind mounts, you have two possibilities:
Make sure the folder exists on all the nodes. The main problem here is that you'll have to update it every time on every node.
Use a shared filesystem (such as sshfs, for example) and mount it on a directory on each node. However, once you have a shared filesystem, you can just use a docker data volume and change the driver (see the sketch below).
You can find some documentation on changing the volume data driver here
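For example, a sketch using the built-in local driver's NFS support (the NFS server address and export path are placeholders, and the volume must be created on each node):
# Create a volume backed by a shared NFS export
docker volume create \
--driver local \
--opt type=nfs \
--opt o=addr=192.168.1.100,rw \
--opt device=:/exports/wordcount-project \
wordcount-data

# Mount it as a volume instead of a bind mount
docker service create -p 8000:8000 \
--mount type=volume,source=wordcount-data,target=/usr/src/app/wordcount-project \
127.0.0.1:5000/wordcount-django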

Run commands on host from container command prompt

I use portainer to manage containers and it works great.
https://portainer.io/
But when I connect to the console, I get the command prompt of the container. Is there any way to run simple commands like ls /home/ that will list the files on the host?
In other words is there any image that will mount the file system of host server "as-is"?
Here's an example using docker command line:
$ docker run --rm -it -v ~/Desktop:/Desktop alpine:latest /bin/sh
/ # ls /Desktop/
You can extend the approach as far as you need to. Experiment with it. Learn about the different mount options.
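For instance, to see the host's filesystem "as-is", you can mount the host's root read-only (a sketch; be mindful of the security implications):
$ docker run --rm -it -v /:/host:ro alpine:latest /bin/sh
/ # ls /host/home/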
I know the Docker app on macOS provides a way to set default volume mounts. Portainer also claims to provide a volume management screen; I have yet to use it.
Hope this helps.
If you're dealing with services, or an existing, running container, you can in most cases access the shell directly. Let's say you have a container called "meow". You can run:
docker exec -it meow bash
and it will drop you into the bash shell. You'll need to know whether bash is installed; if it isn't, try calling sh instead.
The "i" option indicates it should be interactive, and the "t" option indicates it should emulate a TTY terminal. When you're done, you can hit Ctrl+D to exit out of the container.
First of all: You never ever want to do so.
Volumes mounted into containers are used to persist the container's data, because containers are designed to be volatile (the container itself shouldn't persist its state, so restarting the container any number of times should result in the same container state each time it starts). So think of the volume as the database where all the data (the state of the container) should be stored.
Seeing volumes this way makes it easier to decide against sharing the host's entire filesystem, as such a container would have read/write permissions over the host OS files themselves, which is a huge security threat.
Sharing volumes across containers is already considered bad container architecture, let alone sharing the entire host filesystem.
I would propose simple SSH (or remote desktop) to your host if you need to run commands or tasks on it.
OR, if your container requires access to a specific folder for some reason, then you should consider mounting or binding that folder to the container:
docker run -d --name devtest --mount source=myvol2,target=/app nginx:latest
I would recommend copying the content of that folder into a Docker-managed volume (a folder under the docker/volumes tree) and binding the container to this volume instead of the original folder, to minimize the impact of your container on your host's OS.
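A sketch of that seeding step (the host path and volume name are placeholders):
# Create a managed volume and copy the host folder's contents into it
docker volume create myvol2
docker run --rm \
-v /path/on/host:/from:ro \
-v myvol2:/to \
alpine \
cp -a /from/. /to/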

File storage options with Docker

We plan to use Docker with our new asp.net core project and one of the requirements is that app will upload files and we need to have them stored permanently.
We have read that Docker creates a filesystem/volume (I might be imprecise in terminology here) per container, and that if a container is recreated for whatever reason, the filesystem/volume exposed to the container is lost.
We would like to avoid storing files in our database (mongodb).
What is the usual, best-practice way to have files stored permanently and reliably with Docker?
Keeping non-ephemeral data on external storage servers is one solution. A more recent approach is to use S3, or a local equivalent like minio, to store shared or private data that needs to outlive the lifetime of the container.
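For example, a sketch running minio locally with its data directory on a named volume (the port and credentials are illustrative):
# Run a local S3-compatible object store; uploads outlive any app container
docker run -d --name minio -p 9000:9000 \
-e MINIO_ROOT_USER=admin \
-e MINIO_ROOT_PASSWORD=changeme123 \
-v minio-data:/data \
minio/minio server /data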
Refer to this similar question.
It's possible to create data volumes in the docker image/container.
$ docker run -d -P --name web -v /webapp training/webapp python app.py
This will create a new anonymous volume inside the container at /webapp, but the files stored there are effectively lost once the container is destroyed (the anonymous volume is removed along with it by docker rm -v, or left orphaned).
On the other hand, we can mount a host directory into a container. The host directory will then be accessible inside the container.
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp.
The files stored by the docker container in this mounted directory will be available even if the container is destroyed. If you are planning to persist the files beyond the lifetime of the container, this is a good option.
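A named volume is a third option that also survives container removal, without tying the data to a specific host path (a sketch using the same example image):
# Create and use a named volume; it persists until explicitly removed
$ docker volume create webdata
$ docker run -d -P --name web -v webdata:/webapp training/webapp python app.py
# Removing the container leaves the data in place
$ docker rm -f web
$ docker volume inspect webdata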
