volumes_from instruction - docker compose - docker

With the below docker-compose.yml file:
test:
  build: ../../
  dockerfile: docker/dev/Dockerfile
  volumes_from:
    - cachev
cachev:
  build: ../../
  dockerfile: docker/dev/Dockerfile
  volumes:
    - /build
  entrypoint: "true"
The cachev service in the above file launches a volume container that creates an anonymous volume under the /var/lib/docker/ folder on the Docker host, and creates the mount point /cache within the volume container (xx_cachev).
Does the volumes_from instruction under the test service create a /build mount point in the xx_test container that points to the /build mount point of the xx_cachev container?

From the volumes_from docs:
Mount all of the volumes from another service or container...
So the short answer is yes:
volumes_from mounts the /build volume defined by the cachev service inside the test service.
Long answer:
To answer your question, let's run the test service:
docker compose up test
Before answering your question, let's make sure the description is clear:
The cachev service in the above file launches a volume container...
It's just a regular container which exits immediately because of entrypoint: "true".
docker ps -a should show:
ac68a33abe59 cache "true" 16 hours ago Exited (0) 4 minutes ago cache_1
But before it exits, it creates the volumes specified in volumes:. So we can call it a volume container if its volumes are used by another service, for caching for instance.
that creates an anonymous volume under the /var/lib/docker/ folder on the Docker host
Agreed. - /build is an anonymous volume. This can be verified by viewing all of the container's mounts:
docker inspect [cachev_container_id] --format '{{json .Mounts}}' | jq
should show something like:
{
  "Type": "volume",
  "Name": "1ec7ff7c72bfb5a3259ed54be5b156ea694be6c8d932bcb3fa6e657cbcaea378",
  "Source": "/var/lib/docker/volumes/1ec7ff7c72bfb5a3259ed54be5b156ea694be6c8d932bcb3fa6e657cbcaea378/_data",
  "Destination": "/build",
  "Driver": "local",
  "Mode": "",
  "RW": true,
  "Propagation": ""
}
jq is a great utility for working with JSON in the shell. Install it for the above command to work.
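On a Debian/Ubuntu host, for instance, it can be installed like this (a sketch; use your distro's package manager otherwise):
sudo apt-get update && sudo apt-get install -y jq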
and creates the mount point /cache within the volume container (xx_cachev).
I don't see any evidence of a /cache mount point in the cachev: service spec you provided.
If you add the mapping - /tmp/cache:/cache to its volumes section, run docker compose up test again and inspect the exited container, you should see:
{
  "Type": "bind",
  "Source": "/tmp/cache",
  "Destination": "/cache",
  "Mode": "rw",
  "RW": true,
  "Propagation": "rprivate"
}
Please note that docker inspect [cachev_container_id] --format '{{json .Mounts}}' | jq will show all container mounts, including those specified in docker/dev/Dockerfile using the VOLUME instruction.
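For example, if docker/dev/Dockerfile (not shown in the question) contained a hypothetical line like:
VOLUME /cache
then the resulting anonymous volume would appear in that list as well.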
To answer your question, we need to inspect the test service container:
docker inspect [test_container_id] --format '{{json .Mounts}}' | jq
This shows all the volumes specified in docker/dev/Dockerfile (if any) and all the volumes of cachev, thanks to the volumes_from instruction.
You can see that both the test and cachev containers have:
{
  "Type": "volume",
  "Name": "1ec7ff7c72bfb5a3259ed54be5b156ea694be6c8d932bcb3fa6e657cbcaea378",
  "Source": "/var/lib/docker/volumes/1ec7ff7c72bfb5a3259ed54be5b156ea694be6c8d932bcb3fa6e657cbcaea378/_data",
  "Destination": "/build",
  "Driver": "local",
  "Mode": "",
  "RW": true,
  "Propagation": ""
}
in their mounts, and this volume survives subsequent runs of docker compose up test.
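A quick way to verify that (a sketch; the name filter assumes your project names the container with a cachev suffix, and that /build is its only mount):
docker compose up test
docker inspect $(docker ps -aqf name=cachev) --format '{{ (index .Mounts 0).Name }}'
docker compose up test
docker inspect $(docker ps -aqf name=cachev) --format '{{ (index .Mounts 0).Name }}'
Both inspect calls should print the same volume name.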

Yes, and you can verify it by executing a command inside both containers: if you create a file in the test container with touch /build/fromtest.txt, it will be visible in the cachev container at the same path /build/fromtest.txt.
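A minimal sketch of that check (assuming the test container is still running; cachev exits immediately because of entrypoint: "true", so override its entrypoint for a one-off listing):
docker compose exec test touch /build/fromtest.txt
docker compose run --rm --entrypoint ls cachev /build
The second command should list fromtest.txt.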
From the volumes_from docs (compose-file-volumes_from):
"Mount all of the volumes from another service or container."
A demo you can try:
test:
  image: alpine
  command: sh -c "touch /build/fromtest.txt && echo hell from test-container && ls /build/"
  volumes_from:
    - cachev
cachev:
  image: node:alpine
  command: sh -c "touch /build/fromcache.txt && echo hello from cache-container && ls /build/"
  volumes:
    - /build
The log will be:
Recreating compose-volume_cachev_1 ... done
Recreating compose-volume_test_1 ... done
Attaching to compose-volume_cachev_1, compose-volume_test_1
cachev_1 | hello from cache-container
test_1 | hell from test-container
test_1 | fromcache.txt
test_1 | fromtest.txt
cachev_1 | fromcache.txt
cachev_1 | fromtest.txt
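Note that volumes_from was removed in Compose file format v3; the equivalent there is sharing a named volume between the services. A sketch of the same demo in v3 syntax:
version: "3"
services:
  test:
    image: alpine
    command: sh -c "touch /build/fromtest.txt && ls /build/"
    volumes:
      - build:/build
  cachev:
    image: node:alpine
    command: sh -c "touch /build/fromcache.txt && ls /build/"
    volumes:
      - build:/build
volumes:
  build: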

Related

vscode containerEnv not working in mounts

I'm using the vscode command Remote-Containers: Open Folder in Container...
I'm trying to bind mount a file into the docker container:
~/.config/dart/pub-tokens.json
The host file is under my HOME directory and I need it mounted in the same location within the container's HOME directory.
Here is my mount command from the vscode devcontainer.json
"mounts": [
"source=${localEnv:HOME}/.config/dart/pub-tokens.json,target=${containerEnv:HOME}/.config/dart/pub-tokens.json,type=bind,consistency=cached",
]
Note the 'containerEnv' in the target clause.
Launching the container via the vscode Remote-Containers: Open Folder in Container... command
produces the following error (for readability I've added some newlines):
Start: Run: docker run --sig-proxy=false -a STDOUT -a STDERR
--mount type=bind,source=/home/bsutton/git/onepub/onepub,target=/workspaces/onepub
--mount source=/home/bsutton/.config/dart/pub-tokens.json,target=${containerEnv:HOME}/.config/dart/pub-tokens.json,type=bind,consistency=cached
--mount source=/home/bsutton/.onepub/onepub.yaml,target=${containerEnv:HOME}/.onepub/onepub.yaml,type=bind,consistency=cached
--mount type=volume,src=vscode,dst=/vscode -l devcontainer.local_folder=/home/bsutton/git/onepub/onepub
--entrypoint /bin/sh vsc-onepub-7ff341664d5755895634c2f74983ff45-uid -c echo Container started
docker: Error response from daemon:
invalid mount config for type "bind": invalid mount path: '${containerEnv:HOME}/.config/dart/pub-tokens.json' mount path must be absolute.
It would appear that vscode isn't expanding containerEnv.
If I replace containerEnv with localEnv it does get expanded (but to the wrong path).
i.e. the following works:
"mounts": [
"source=${localEnv:HOME}/.config/dart/pub-tokens.json,target=${localEnv:HOME}/.config/dart/pub-tokens.json,type=bind,consistency=cached",
]
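A workaround sketch (assuming the container home is /home/vscode, which follows from "remoteUser": "vscode" in the config below): hard-code the in-container target path, since mounts are evaluated before the container exists and containerEnv can't be resolved at that point.
"mounts": [
  "source=${localEnv:HOME}/.config/dart/pub-tokens.json,target=/home/vscode/.config/dart/pub-tokens.json,type=bind,consistency=cached"
]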
Here is the complete devcontainer.json:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.245.0/containers/ubuntu
{
  "name": "Ubuntu",
  "build": {
    "dockerfile": "Dockerfile",
    // Update 'VARIANT' to pick an Ubuntu version: jammy / ubuntu-22.04, focal / ubuntu-20.04, bionic / ubuntu-18.04
    // Use ubuntu-22.04 or ubuntu-18.04 on local arm64/Apple Silicon.
    "args": { "VARIANT": "ubuntu-22.04" }
  },
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],
  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "uname -a",
  // Comment out to connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
  "remoteUser": "vscode",
  "features": {
    "git": "latest",
    "github-cli": "latest"
  },
  "mounts": [
    "source=${localEnv:HOME}/.config/dart/pub-tokens.json,target=${containerEnv:HOME}/.config/dart/pub-tokens.json,type=bind,consistency=cached",
    "source=${localEnv:HOME}/.onepub/onepub.yaml,target=${containerEnv:HOME}/.onepub/onepub.yaml,type=bind,consistency=cached"
  ]
}

Volume isn't created in Scylla when using docker-compose in Windows 10

I just started learning Docker and docker-compose, and I want to try out ScyllaDB (a database). I want to start a single instance of ScyllaDB in Docker through docker-compose, with persistent storage saved in the folder 'target' relative to my docker-compose file. The problem is that I don't see any folder being created, yet docker-compose seems to persist the data; I am just not sure where I can locate the files that ScyllaDB created. Step by step reproduction path:
Create a docker-compose.yml with the following content (/var/lib/scylla should be correct according to https://docs.scylladb.com/operating-scylla/procedures/tips/best_practices_scylla_on_docker/):
docker-compose.yml
version: '3'
services:
  b-scylla:
    image: "scylladb/scylla:4.3.1"
    container_name: b-scylla
    volumes:
      - ./target:/var/lib/scylla
    ports:
      - "127.0.0.1:9042:9042"
      - "127.0.0.1:9160:9160"
This does not give any result: $ docker volume ls
Start up docker-compose and wait a minute for ScyllaDB to start up: $ docker-compose up -d
This still does not give any result: $ docker volume ls. I expected Docker to create a volume (./target/).
Persist some data in ScyllaDB to verify that the data is saved somewhere:
Run the following commands:
$ docker exec -it b-scylla cqlsh
cqlsh> create keyspace somekeyspace with replication = {
    'class': 'NetworkTopologyStrategy',
    'replication_factor': 2
};
The created keyspace is saved somewhere, but I don't know where. I would expect it to be in the target folder, but that folder isn't even created. When I restart docker-compose the keyspace is still present, so the data is saved somewhere, but where?
You are using the "short syntax" for data mounting (https://docs.docker.com/compose/compose-file/compose-file-v3/#short-syntax-3) that is creating a mount point binding. Bindings are not volumes. They can't be checked with the docker volume ls. You can find out about your mounts with docker inspect {container}.
However, the Scylla image does not start correctly for me with the bind mount. I saw constant file system errors for writing sstables in the mounted directory:
version: '3'
services:
  b-scylla:
    image: "scylladb/scylla:4.3.1"
    container_name: b-scylla
    volumes:
      - ./target:/var/lib/scylla
$ docker compose -f .\test.yaml up
b-scylla | INFO 2021-03-04 07:24:53,132 [shard 0] init - initializing batchlog manager
b-scylla | INFO 2021-03-04 07:24:53,135 [shard 0] legacy_schema_migrator - Moving 0 keyspaces from legacy schema tables to the new schema keyspace (system_schema)
b-scylla | INFO 2021-03-04 07:24:53,136 [shard 0] legacy_schema_migrator - Dropping legacy schema tables
b-scylla | ERROR 2021-03-04 07:24:53,168 [shard 0] table - failed to write sstable /var/lib/scylla/data/system/truncated-38c19fd0fb863310a4b70d0cc66628aa/mc-8-big-Data.db: std::system_error (error system:2, No such file or directory)
I did not find out what causes this, but the dir is writable and contains most of the normal initial data: reserved commitlog segments and system keyspace data folders.
What actually works is using Volumes:
version: '3'
services:
  b-scylla:
    image: "scylladb/scylla:4.3.1"
    container_name: b-scylla
    volumes:
      - type: volume
        source: target
        target: /var/lib/scylla
        volume:
          nocopy: true
volumes:
  target:
$ docker compose -f .\test.yaml up
$ docker volume ls
DRIVER VOLUME NAME
local 6b57922b3380d61b960110dacf8d180e663b1ce120494d7a005fc08cee475234
local ad220954e311ea4503eb3179de0d1162d2e75b73d1d9582605b4e5c0da37502d
local projects_target
$ docker volume inspect projects_target
[
  {
    "CreatedAt": "2021-03-04T07:20:40Z",
    "Driver": "local",
    "Labels": {
      "com.docker.compose.project": "projects",
      "com.docker.compose.version": "1.0-alpha",
      "com.docker.compose.volume": "target"
    },
    "Mountpoint": "/var/lib/docker/volumes/projects_target/_data",
    "Name": "projects_target",
    "Options": null,
    "Scope": "local"
  }
]
And Scylla starts successfully in this mode.
You of course can mount this volume to any other container with:
$ docker run -it --mount source=projects_target,target=/app --entrypoint bash scylladb/scylla:4.3.1
or accessing it via WSL (Locating data volumes in Docker Desktop (Windows)):
\\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\projects_target\_data

NFS volume created manually mounts but shows empty contents

server: docker ubuntu, 18.06.3-ce
local : docker for mac, 19.03.13
I have created a volume in the swarm manually, pointing to a remote NFS server. When I try to mount this volume in a service it appears to work, but the contents are empty and any writes seem to succeed (the calling code doesn't crash), yet the bytes are gone. Maybe even to /dev/null.
When I declare a similar volume inside the compose file it works. The only difference I can find is the label "com.docker.stack.namespace".
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=10.0.1.100 \
  --opt device=:/data/ \
  my_nfs
version: "3.5"
services:
my-api:
volumes:
- "compose_nfs:/data1/" # works fine
- "externl_nfs:/data2/" # empty contents, forgotten writes
volumes:
externl_nfs:
external: true
compose_nfs:
driver: local
driver_opts:
type: nfs
o: addr=10.0.1.100
device: ":/data/"
When inspecting the volumes they are identical, except for that label.
{
  "CreatedAt": "2020-20-20T20:20:20Z",
  "Driver": "local",
  "Labels": {
    # label missing on the manually created one
    "com.docker.stack.namespace": "stackie"
  },
  "Mountpoint": "/var/lib/docker/volumes/externl_nfs/_data",
  "Name": "compose_nfs",
  "Options": {
    "device": ":/data/",
    "o": "addr=10.0.1.100",
    "type": "nfs"
  },
  "Scope": "local"
}
If you use an external volume, swarm defers the creation of that volume to you. Volumes are also local to the node they are created on, so you must create that volume on every node where swarm could schedule this job. For this reason, many delegate the volume creation to swarm mode itself and put the definition in the compose file. So in your example, before scheduling the service, run on each node:
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=10.0.1.100 \
  --opt device=:/data/ \
  external_nfs
Otherwise, when the service gets scheduled on a node without the volume defined, it appears that swarm will create the container, and that create command generates a default named volume, storing the contents on that local node (I could also see swarm failing to schedule the service because of a missing volume, but your example shows otherwise).
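A sketch of that pre-creation step (assuming SSH access and hypothetical hostnames node1..node3; use the volume name your compose file actually references):
for node in node1 node2 node3; do
  ssh "$node" docker volume create --driver local \
    --opt type=nfs \
    --opt o=addr=10.0.1.100 \
    --opt device=:/data/ \
    externl_nfs
done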
Answering this myself, since it is an older version of docker and probably not relevant to most people, considering the NFS part.
It appears to be a bug of some sort in docker/swarm:
Create an NFS volume on the swarm (via the API, from remote).
The volume is correct on the manager node which was contacted.
The volume is missing the options on all other worker nodes.
As a strange side effect, the volume seems to work: it can be mounted, writes succeed without issue, but all bytes written disappear. Reads work, but every file is "not found", which is logical considering the writes disappearing.
On the manager:
> docker volume inspect externl_nfs
[{
  "CreatedAt": "2020-11-03T15:56:44+01:00",
  "Driver": "local",
  "Labels": {},
  "Mountpoint": "/var/lib/docker/volumes/externl_nfs/_data",
  "Name": "externl_nfs",
  "Options": {
    "device": ":/data/",
    "o": "addr=10.0.1.100",
    "type": "nfs"
  },
  "Scope": "local"
}]
On a worker:
> docker volume inspect externl_nfs
[{
  "CreatedAt": "2020-11-03T16:22:16+01:00",
  "Driver": "local",
  "Labels": {},
  "Mountpoint": "/var/lib/docker/volumes/externl_nfs/_data",
  "Name": "externl_nfs",
  "Options": null,
  "Scope": "local"
}]
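If you hit this, a workaround consistent with the other answer here (a sketch, to run on each affected worker): remove the half-defined volume and recreate it locally with the full NFS options.
docker volume rm externl_nfs
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=10.0.1.100 \
  --opt device=:/data/ \
  externl_nfs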

Docker Compose Make Shared Volume Writable Permission Denied

I have this image that writes into /temp/config and I wanted to map that data into a shared volume on my host:
version: '2'
services:
  service-test:
    image: service-test:latest
    container_name: service-test
    volumes:
      - source_data:/temp/config/
volumes:
  source_data:
When my service-test:latest image tries to write into /temp/config, I am getting a Permission Denied error.
Question: how do I make this host shared volume writable?
I checked the shared volume using
docker volume inspect source_data
and I noticed that it has no write permissions.
This is a Linux-based distro.
UPDATE 2:
To verify this, I checked the permissions on the shared volume and noticed that it has no write permissions either:
bash-4.2$ docker volume inspect service-test_source_data
[
  {
    "Driver": "local",
    "Labels": null,
    "Mountpoint": "/scratch/docker/volumes/service-test_source_data/_data",
    "Name": "configservice-test_config_data",
    "Options": {},
    "Scope": "local"
  }
]
bash-4.2$ ls -l /scratch/docker/volumes/service-test_source_data/
drwxr-xr-x 1 root root 0 Apr 18 01:43 _data
I believe your container is running as some specific user other than root.
In your docker-compose.yml you can add user: root.
See the docker-compose reference.
You can try it like:
service-test:
  image: service-test:latest
  user: root
  volumes:
    - source_data:/temp/config/
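Alternatively, a Dockerfile-side sketch (appuser is hypothetical, standing in for whatever non-root user the image actually runs as): fix ownership at build time, since a fresh named volume copies the ownership of the image directory it mounts over.
FROM service-test:latest
USER root
# Give the app user ownership of the directory the volume will mount over (appuser is hypothetical).
RUN mkdir -p /temp/config && chown -R appuser:appuser /temp/config
USER appuser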

Docker volumes-from blank, from network share

I have two containers; one is set up as a data volume container. I can go inside the data container and explore the files that are mounted from a network share without any issues.
However, on the second docker instance, when I go to the folder with the mounted volumes, the folder exists but all the files and directories that should be there are not visible.
This used to work, so I can only assume it's due to docker 1.9. I am seeing this on both a linux and a mac box.
Any ideas as to the cause? Is this a bug, or is there something else I can investigate?
Output of inspect:
"Volumes": {
"/mnt/shared_app_data": {},
"/srv/shared_app_data": {}
},
"Mounts": [
{
"Name": "241d3e495f312c79abbeaa9495fa3b32110e9dca8442291d248cfbc5acca5b53",
"Source": "/var/lib/docker/volumes/241d3e495f312c79abbeaa9495fa3b32110e9dca8442291d248cfbc5acca5b53/_data",
"Destination": "/mnt/shared_app_data",
"Driver": "local",
"Mode": "",
"RW": true
},
{
"Name": "061f16c066b59f31baac450d0d97043d1fcdceb4ceb746515586e95d26c91b57",
"Source": "/var/lib/docker/volumes/061f16c066b59f31baac450d0d97043d1fcdceb4ceb746515586e95d26c91b57/_data",
"Destination": "/srv/shared_app_data",
"Driver": "local",
"Mode": "",
"RW": true
}
],
The files are mounted in the Dockerfile in this manner:
RUN echo '/srv/path ipaddress/255.255.255.0(rw,no_root_squash,subtree_check,fsid=0)' >> /etc/exports
RUN echo 'ipaddress:/srv/path /srv/shared_app_data nfs defaults 0 0' >> /etc/fstab
RUN echo 'ipaddress:/srv/path /mnt/shared_app_data nfs defaults 0 0' >> /etc/fstab
and then, when the container starts, it runs:
service rpcbind start
mount -a
You need to be sure that the second container does mount the VOLUME declared in the first one:
docker run --volumes-from first_container second_image
Make sure the first container does have the right files: see "Locating a volume"
docker inspect first_container
# more precisely
sudo ls $(docker inspect -f '{{ (index .Mounts 0).Source }}' first_container)
