I have an image that writes into /temp/config, and I want to map that data to a shared volume on my host.
docker-compose.yml:
version: '2'
services:
  service-test:
    image: service-test:latest
    container_name: service-test
    volumes:
      - source_data:/temp/config/
volumes:
  source_data:
When my service-test:latest image tries to write into the /temp/config, I am getting a Permission Denied error.
Question: how do I make this host shared volume writable?
I checked the shared volume using
docker volume inspect source_data
and I noticed that it is not writable.
This is a Linux-based distro.
UPDATE 2:
To verify this, I checked the permissions on the shared volume
and noticed that it also has no write permissions.
bash-4.2$ docker volume inspect service-test_source_data
[
    {
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/scratch/docker/volumes/service-test_source_data/_data",
        "Name": "configservice-test_config_data",
        "Options": {},
        "Scope": "local"
    }
]
bash-4.2$ ls -l /scratch/docker/volumes/service-test_source_data/
drwxr-xr-x 1 root root 0 Apr 18 01:43 _data
I believe your container is running as some specific user other than root.
In your docker-compose.yml you can add user: root
See docker-compose-reference
You can try it like:
    user: root
    volumes:
      - source_data:/temp/config/
...
or fix the ownership of the mount point in the image's Dockerfile, e.g.:
RUN chown -R <container_user> /temp/config
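A minimal sketch of the compose-level fix, assuming the image runs as a non-root user and the only writable location needed is /temp/config (user: root simply makes the container process run as root so it can write to the volume):

version: '2'
services:
  service-test:
    image: service-test:latest
    container_name: service-test
    user: root
    volumes:
      - source_data:/temp/config/
volumes:
  source_data:

If running as root is not acceptable, the Dockerfile chown approach above is the alternative: create /temp/config in the image and chown it to the uid the image actually runs as.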
I just started learning Docker and docker-compose and I want to try out ScyllaDB (database). I want to start a single instance of ScyllaDB in Docker through docker-compose with persistent storage. The persistent storage should be saved in the folder 'target' relative to my docker-compose file. The problem is that I don't see any folder being created, yet docker-compose does seem to persist the data, and I am not sure where I can locate the files that ScyllaDB created. Step by step reproduction path:
Create a docker-compose.yml with the following content (/var/lib/scylla should be correct according to https://docs.scylladb.com/operating-scylla/procedures/tips/best_practices_scylla_on_docker/):
docker-compose.yml
version: '3'
services:
  b-scylla:
    image: "scylladb/scylla:4.3.1"
    container_name: b-scylla
    volumes:
      - ./target:/var/lib/scylla
    ports:
      - "127.0.0.1:9042:9042"
      - "127.0.0.1:9160:9160"
This does not give any result: $ docker volume ls
Start up docker-compose and wait a minute for ScyllaDB to start up: $ docker-compose up -d
This still does not give any result: $ docker volume ls. I expected Docker to have created a volume (./target/).
Persist some data in ScyllaDB to verify that the data is saved somewhere:
Run the following commands:
$ docker exec -it b-scylla cqlsh
cqlsh> create keyspace somekeyspace with replication = {
    'class': 'NetworkTopologyStrategy',
    'replication_factor': 2
};
The created keyspace is saved somewhere, but I don't know where. I would expect it is just in the target folder, but that folder isn't even created. When I restart docker-compose, the keyspace is still present, so the data is saved somewhere, but where?
You are using the "short syntax" for data mounting (https://docs.docker.com/compose/compose-file/compose-file-v3/#short-syntax-3), which creates a bind mount. Bind mounts are not volumes and cannot be listed with docker volume ls. You can find out about your mounts with docker inspect {container}.
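For instance, a sketch using the container name from the question:

docker inspect b-scylla --format '{{json .Mounts}}'

This prints every mount attached to the container, including the ./target bind mount.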
However, the Scylla image does not start correctly for me with the bind mount. I saw constant file system errors when writing sstables into the mounted directory:
version: '3'
services:
  b-scylla:
    image: "scylladb/scylla:4.3.1"
    container_name: b-scylla
    volumes:
      - ./target:/var/lib/scylla
$ docker compose -f .\test.yaml up
b-scylla | INFO 2021-03-04 07:24:53,132 [shard 0] init - initializing batchlog manager
b-scylla | INFO 2021-03-04 07:24:53,135 [shard 0] legacy_schema_migrator - Moving 0 keyspaces from legacy schema tables to the new schema keyspace (system_schema)
b-scylla | INFO 2021-03-04 07:24:53,136 [shard 0] legacy_schema_migrator - Dropping legacy schema tables
b-scylla | ERROR 2021-03-04 07:24:53,168 [shard 0] table - failed to write sstable /var/lib/scylla/data/system/truncated-38c19fd0fb863310a4b70d0cc66628aa/mc-8-big-Data.db: std::system_error (error system:2, No such file or directory)
I did not find out what causes this, but the dir is writable and contains most of the normal initial data - reserved commitlog segments and system ks data folders.
What actually works is using Volumes:
version: '3'
services:
  b-scylla:
    image: "scylladb/scylla:4.3.1"
    container_name: b-scylla
    volumes:
      - type: volume
        source: target
        target: /var/lib/scylla
        volume:
          nocopy: true
volumes:
  target:
$ docker compose -f .\test.yaml up
$ docker volume ls
DRIVER VOLUME NAME
local 6b57922b3380d61b960110dacf8d180e663b1ce120494d7a005fc08cee475234
local ad220954e311ea4503eb3179de0d1162d2e75b73d1d9582605b4e5c0da37502d
local projects_target
$ docker volume inspect projects_target
[
    {
        "CreatedAt": "2021-03-04T07:20:40Z",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "projects",
            "com.docker.compose.version": "1.0-alpha",
            "com.docker.compose.volume": "target"
        },
        "Mountpoint": "/var/lib/docker/volumes/projects_target/_data",
        "Name": "projects_target",
        "Options": null,
        "Scope": "local"
    }
]
And Scylla starts successfully in this mode.
You can of course mount this volume to any other container with:
$ docker run -it --mount source=projects_target,target=/app --entrypoint bash scylladb/scylla:4.3.1
or accessing it via WSL (Locating data volumes in Docker Desktop (Windows)):
$ \\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\projects_target\_data
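On a plain Linux host (not Docker Desktop), the same data can be reached through the volume's mountpoint; a sketch using the volume name from above (reading it usually needs root):

$ docker volume inspect projects_target --format '{{.Mountpoint}}'
/var/lib/docker/volumes/projects_target/_data
$ sudo ls /var/lib/docker/volumes/projects_target/_data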
Server: Docker on Ubuntu, 18.06.3-ce
Local: Docker for Mac, 19.03.13
I have created a volume in the swarm manually, to a remote nfs server. When I try to mount this volume in a service it appears to work, but the contents are empty and any writes seem to succeed (calling code doesn't crash), but the bytes are gone. Maybe even to /dev/null.
When I declare a similar volume inside the compose file it works. The only difference I can find is the label "com.docker.stack.namespace".
docker volume create --driver local \
--opt type=nfs \
--opt o=addr=10.0.1.100 \
--opt device=:/data/ \
my_nfs
version: "3.5"
services:
my-api:
volumes:
- "compose_nfs:/data1/" # works fine
- "externl_nfs:/data2/" # empty contents, forgotten writes
volumes:
externl_nfs:
external: true
compose_nfs:
driver: local
driver_opts:
type: nfs
o: addr=10.0.1.100
device: ":/data/"
When inspecting the volumes they are identical, except for that label.
{
    "CreatedAt": "2020-20-20T20:20:20Z",
    "Driver": "local",
    "Labels": {
        # label missing on the manually created one
        "com.docker.stack.namespace": "stackie"
    },
    "Mountpoint": "/var/lib/docker/volumes/externl_nfs/_data",
    "Name": "compose_nfs",
    "Options": {
        "device": ":/data/",
        "o": "addr=10.0.1.100",
        "type": "nfs"
    },
    "Scope": "local"
}
If you use an external volume, swarm is deferring the creation of that volume to you. Volumes are also local to the node they are created on, so you must create that volume on every node where swarm could schedule this job. For this reason, many will delegate the volume creation to swarm mode itself and put the definition in the compose file. So in your example, before scheduling the service, on each node run:
docker volume create --driver local \
--opt type=nfs \
--opt o=addr=10.0.1.100 \
--opt device=:/data/ \
external_nfs
Otherwise, when the service gets scheduled on a node where the volume is not defined, it appears that swarm will create the container, and that create generates a default named volume that stores the contents on that local node (I could also see swarm failing to schedule the service because of a missing volume, but your example shows otherwise).
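A sketch of that delegation, reusing the names and NFS address from the question so swarm creates the volume itself on whichever node the service lands on:

version: "3.5"
services:
  my-api:
    volumes:
      - "externl_nfs:/data2/"
volumes:
  externl_nfs:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.1.100
      device: ":/data/"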
Answering this, since it is an older version of docker and probably not relevant to most people, considering the NFS part.
It appears to be a bug of some sort in docker/swarm.
Create a NFS volume on the swarm (via api, from remote)
Volume is correct on the manager node which was contacted
Volume is missing the options on all other worker nodes
As some strange side effect, the volume seems to work. It can be mounted, writes succeed without issue, but all bytes written disappear. Reads work but every file is "not found", which is logical considering the writes disappearing.
On manager:
> docker volume inspect externl_nfs
[{
    "CreatedAt": "2020-11-03T15:56:44+01:00",
    "Driver": "local",
    "Labels": {},
    "Mountpoint": "/var/lib/docker/volumes/externl_nfs/_data",
    "Name": "externl_nfs",
    "Options": {
        "device": ":/data/",
        "o": "addr=10.0.1.100",
        "type": "nfs"
    },
    "Scope": "local"
}]
On worker:
> docker volume inspect externl_nfs
[{
    "CreatedAt": "2020-11-03T16:22:16+01:00",
    "Driver": "local",
    "Labels": {},
    "Mountpoint": "/var/lib/docker/volumes/externl_nfs/_data",
    "Name": "externl_nfs",
    "Options": null,
    "Scope": "local"
}]
I have been frustrated by this issue for a while because this has been asked multiple times here, such as in How to deal with persistent storage (e.g. databases) in Docker and What is the (best) way to manage permissions for Docker shared volumes?, but the answers do not address the issue at all.
The first "answer" says to just use named volumes instead of traditional bind mounts. That solves nothing because when the named volume is mounted on the host, for instance at the default location /var/lib/docker/volumes/<volume name>/_data, then that mount point will have the uid/gid of the mount point inside the container.
The other "answer" given, before docker had named volumes, was to use a data-only container. This exhibits the same exact problem.
The reason this is a huge problem for me is that I have many embedded machines on which I want to run the docker host, and the user may have a different uid/gid on each of these. Therefore I cannot hardcode a uid/gid in a Dockerfile for the mount points for my persistent volumes, to achieve matching ids.
Here's an example of the problem: Say my user is foo on the host, with uid 1001 and gid 1001, and the user writing files to the volume inside the container has uid 1002. When I run the container, docker will chown 1002:1002 the mount point dir on the host, and write files with this uid, which I can't even read/write with my user foo.
Visually (all these operations on the host):
$ docker volume create --driver local --opt type=volume --opt device=/home/<my_host_user>/logs --opt o=bind logs
logs
$ docker volume inspect logs
[
    {
        "CreatedAt": "2020-08-26T16:26:08+01:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/logs/_data",
        "Name": "logs",
        "Options": {
            "device": "/home/<myhostuser>/logs",
            "o": "bind",
            "type": "volume"
        },
        "Scope": "local"
    }
]
$ pwd
/home/foo
$ mkdir logs && ls -ld logs
drwxr-xr-x 2 foo foo 4096 Aug 26 17:24 logs
Then running the container:
$ docker run --rm --name <cont_name> -it --net="host" --mount src=logs,target=/home/<container_user>/logs <my docker image>
And now the mount point:
$ ls -ld logs
drwxr-xr-x 2 1002 1002 4096 Aug 26 17:30 logs
$ ls -l logs/
total 4
-rw-r----- 1 1002 1002 0 Aug 26 17:30 log
-rw-r----- 1 1002 1002 2967 Aug 26 17:27 log.1
As you can see, the logs written to the volume have a uid/gid which doesn't correspond to something that exists on the host and which I can't access without root/sudo.
Now then, is there ANY way that docker can be told to map uid/gids in the container to uid/gids on the host, or even simpler to just use the specified uid/gid for the host mount point?
My environment:
Ubuntu 22.04
Docker version 20.10.17, build 100c701
Create the mount point path with suitable permissions.
# Dockerfile ($volumeDir, $userName and $userGroup are placeholders for your values)
RUN mkdir --parents "$volumeDir" && chown --recursive "$userName":"$userGroup" "$volumeDir"
Next, create the container and mount the volume.
# terminal
docker run --name=containerName --interactive \
    --user="$userName:$userGroup" \
    --mount="source=volumeName,target=$volumeDir,readonly=false" \
    imageName /bin/bash
You will get suitable permissions.
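A related sketch, not from the answer above, that avoids hardcoding ids by passing the host uid/gid as build arguments (the base image, user names and the /data path are illustrative):

# Dockerfile (sketch)
FROM ubuntu:22.04
ARG USER_ID=1000
ARG GROUP_ID=1000
# create a user matching the host ids and give it the mount point
RUN groupadd --gid "$GROUP_ID" appgroup \
 && useradd --uid "$USER_ID" --gid "$GROUP_ID" --create-home appuser \
 && mkdir --parents /data \
 && chown --recursive appuser:appgroup /data
USER appuser

Build it with the host user's ids, so files written to a volume mounted at /data carry a uid/gid the host user can read:

docker build --build-arg USER_ID=$(id -u) --build-arg GROUP_ID=$(id -g) -t myimage .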
With the below docker-compose.yml file:
test:
  build: ../../
  dockerfile: docker/dev/Dockerfile
  volumes_from:
    - cachev

cachev:
  build: ../../
  dockerfile: docker/dev/Dockerfile
  volumes:
    - /build
  entrypoint: "true"
The cachev service in the above file launches a volume container that creates an anonymous volume in the /var/lib/docker/ folder on the docker host and creates the mount point /cache within the volume container (xx_cachev).
Does the volumes_from instruction under the test service create a /build mount point in the xx_test container that points to the /build mount point of the xx_cachev container?
From volumes_from docs:
Mount all of the volumes from another service or container...
So the short answer is yes:
volumes_from mounts the /build volume defined by the cachev service inside the test service.
Long answer:
To answer your question let's run the test service:
docker compose up test
Before answering your question, let's make sure the description is clear:
The cachev service in the above file launches a volume container...
It's just a regular container which exits immediately because of entrypoint: "true".
docker ps -a should show:
ac68a33abe59 cache "true" 16 hours ago Exited (0) 4 minutes ago cache_1
But before it exits, it creates the volumes specified in volumes:. So we can call it a volume container if its volumes are used by another service, for caching for instance.
that creates an anonymous volume in the /var/lib/docker/ folder on the docker host
Agreed. - /build is an anonymous volume. This can be verified by viewing all container mounts:
docker inspect [cachev_container_id] --format '{{json .Mounts}}' | jq
should show something like:
{
    "Type": "volume",
    "Name": "1ec7ff7c72bfb5a3259ed54be5b156ea694be6c8d932bcb3fa6e657cbcaea378",
    "Source": "/var/lib/docker/volumes/1ec7ff7c72bfb5a3259ed54be5b156ea694be6c8d932bcb3fa6e657cbcaea378/_data",
    "Destination": "/build",
    "Driver": "local",
    "Mode": "",
    "RW": true,
    "Propagation": ""
}
jq is a great utility for working with JSON in bash. Install it for the above command to work.
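For example, on Debian/Ubuntu (other distros ship jq in their package repositories as well):

sudo apt-get install -y jq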
and creates the mount point /cache within the volume container (xx_cachev).
I don't see any evidence of such a mount in the cachev: service spec you provided.
If you add the mapping - /tmp/cache:/cache to its volumes section, run docker compose up test again, and inspect the exited container, you should see:
{
    "Type": "bind",
    "Source": "/tmp/cache",
    "Destination": "/cache",
    "Mode": "rw",
    "RW": true,
    "Propagation": "rprivate"
}
Please note that docker inspect [cachev_container_id] --format '{{json .Mounts}}' | jq will show all container mounts, including those specified in docker/dev/Dockerfile using the VOLUME instruction.
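For reference, an image-level volume of that kind is declared like this in a Dockerfile (illustrative, not taken from the question's docker/dev/Dockerfile):

FROM alpine
# every container created from this image gets an anonymous volume mounted at /build
VOLUME /build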
To answer your question, we need to inspect the test service container:
docker inspect [test_container_id] --format '{{json .Mounts}}' | jq:
would show all the volumes specified in docker/dev/Dockerfile, if any, and all the volumes of cachev thanks to the volumes_from instruction.
You can see that both test and cache containers have:
{
    "Type": "volume",
    "Name": "1ec7ff7c72bfb5a3259ed54be5b156ea694be6c8d932bcb3fa6e657cbcaea378",
    "Source": "/var/lib/docker/volumes/1ec7ff7c72bfb5a3259ed54be5b156ea694be6c8d932bcb3fa6e657cbcaea378/_data",
    "Destination": "/build",
    "Driver": "local",
    "Mode": "",
    "RW": true,
    "Propagation": ""
}
in their mounts, and this volume survives subsequent runs of docker compose up test.
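A quick way to confirm that (a sketch; the container lookup and jq filter are illustrative):

# find the exited test container and print the name of the volume backing /build
docker ps -a --filter name=test --format '{{.ID}}'
docker inspect <test_container_id> --format '{{json .Mounts}}' | jq '.[] | select(.Destination=="/build") | .Name'
# run docker compose up test again: the printed volume name stays the same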
Yes. You can verify this by executing a command inside both containers: if you create a file in the test container with touch /build/fromtest.txt, it will be visible in the cachev container at the same path /build/fromtest.txt.
volumes_from
Mount all of the volumes from another service or container
compose-file-volumes_from
A demo you can try
test:
  image: alpine
  command: sh -c "touch /build/fromtest.txt && echo hell from test-container && ls /build/"
  volumes_from:
    - cachev

cachev:
  image: node:alpine
  command: sh -c "touch /build/fromcache.txt && echo hello from cache-container && ls /build/"
  volumes:
    - /build
The log will be:
Recreating compose-volume_cachev_1 ... done
Recreating compose-volume_test_1 ... done
Attaching to compose-volume_cachev_1, compose-volume_test_1
cachev_1 | hello from cache-container
test_1 | hell from test-container
test_1 | fromcache.txt
test_1 | fromtest.txt
cachev_1 | fromcache.txt
cachev_1 | fromtest.txt
I have a docker-compose.yml which looks like this:
version: '3'
services:
  tomcat:
    container_name: tomcat8
    restart: always
    image: tomcat:8-jdk8
    ports:
      - 80:8080
    volumes:
      - /var/docker/myservice/tomcat/data/webapps:/usr/local/tomcat/webapps:Z
I want to mount the tomcat/webapps folder inside the container to the host so that I don't have to enter the docker container to modify the applications.
However, when this container starts up, the /usr/local/tomcat/webapps folder becomes empty. The ROOT/, docs/, examples/, host-manager/, manager/ folders that should have been created when tomcat starts up are all gone.
I originally thought this was because the container does not have permission to write to the volume on the host machine. But I've followed this post's instruction to add a Z at the end of the volume.
What's wrong with my configuration? Why does /usr/local/tomcat/webapps folder inside the container become empty?
Is there any way to let the data in /usr/local/tomcat/webapps in the container to overwrite the data in /var/docker/myservice/tomcat/data/webapps in the host machine?
For tomcat:8-jdk8, we can see the following:
$ docker inspect tomcat:8-jdk8 | grep Entrypoint
"Entrypoint": null,
"Entrypoint": null,
Also, see tomcat:8-jdk8 Dockerfile:
CMD ["catalina.sh", "run"]
To sum up, the only start script for the container is catalina.sh, so if we override it like this:
$ docker run -it --rm tomcat:8-jdk8 ls /usr/local/tomcat/webapps
ROOT docs examples host-manager manager
We can see that even though we did not run the start script catalina.sh, we still see ROOT, docs, etc. in /usr/local/tomcat/webapps.
This means the above folders are baked into the image tomcat:8-jdk8, not dynamically generated by catalina.sh. So, when you use - /var/docker/myservice/tomcat/data/webapps:/usr/local/tomcat/webapps as a bind mount, the empty folder on the host overrides everything in the container folder /usr/local/tomcat/webapps, which is why you see an empty folder in the container.
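You can reproduce the effect directly (a sketch; the host path is just an example):

# an empty host directory hides the image's contents at the mount point
mkdir -p /tmp/empty-webapps
docker run --rm -v /tmp/empty-webapps:/usr/local/tomcat/webapps tomcat:8-jdk8 ls /usr/local/tomcat/webapps
# prints nothing, while the same command without the bind mount lists ROOT, docs, examples, ...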
UPDATE:
Is there any way to let the data in /usr/local/tomcat/webapps in the container to overwrite the data in /var/docker/myservice/tomcat/data/webapps in the host machine?
The closest solution is to use a named volume:
docker-compose.yaml:
version: '3'
services:
  tomcat:
    container_name: tomcat8
    restart: always
    image: tomcat:8-jdk8
    ports:
      - 80:8080
    volumes:
      - my-data:/usr/local/tomcat/webapps:Z
volumes:
  my-data:
Then see the next command list (NOTE: the volume is named 99_my-data; here 99 is the folder where you store your docker-compose.yaml):
shubuntu1@shubuntu1:~/99$ docker-compose up -d
Creating tomcat8 ... done
shubuntu1@shubuntu1:~/99$ docker volume inspect 99_my-data
[
    {
        "CreatedAt": "2019-07-15T15:09:32+08:00",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "99",
            "com.docker.compose.version": "1.24.0",
            "com.docker.compose.volume": "my-data"
        },
        "Mountpoint": "/var/lib/docker/volumes/99_my-data/_data",
        "Name": "99_my-data",
        "Options": null,
        "Scope": "local"
    }
]
shubuntu1@shubuntu1:~/99$ sudo -s -H
root@shubuntu1:/home/shubuntu1/99# cd /var/lib/docker/volumes/99_my-data/_data
root@shubuntu1:/var/lib/docker/volumes/99_my-data/_data# ls -alh
total 28K
drwxr-xr-x 7 root root 4.0K 7月 15 15:09 .
drwxr-xr-x 3 root root 4.0K 7月 15 15:09 ..
drwxr-xr-x 14 root root 4.0K 7月 15 15:09 docs
drwxr-xr-x 6 root root 4.0K 7月 15 15:09 examples
drwxr-xr-x 5 root root 4.0K 7月 15 15:09 host-manager
drwxr-xr-x 5 root root 4.0K 7月 15 15:09 manager
drwxr-xr-x 3 root root 4.0K 7月 15 15:09 ROOT
This is the closest way to get the contents out to the host.
Another solution: mount /var/docker/myservice/tomcat/data/webapps to a different container folder, e.g. /tmp/abc, instead of /usr/local/tomcat/webapps, then customize your CMD to copy the contents of /usr/local/tomcat/webapps to /tmp/abc; the host's /var/docker/myservice/tomcat/data/webapps will then contain them as well.
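A sketch of that idea (the command override is illustrative and not part of the original answer; it copies the default webapps into the bind mount once, then starts Tomcat as usual):

version: '3'
services:
  tomcat:
    container_name: tomcat8
    restart: always
    image: tomcat:8-jdk8
    ports:
      - 80:8080
    volumes:
      - /var/docker/myservice/tomcat/data/webapps:/tmp/abc
    command: bash -c "cp -rn /usr/local/tomcat/webapps/. /tmp/abc/ && catalina.sh run"

Note that Tomcat still serves from /usr/local/tomcat/webapps here; the copy only exposes the default contents on the host.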