I am working on an application written in Node.js and deployed with Docker and docker-compose.
I have an endpoint for uploading files. When I upload a file, the application stores it in the "files" folder (the files folder is at the root of the project). I don't want to lose files when the container restarts, so I created a volume named "myfiles" and mounted it at the "/files" path.
But when I check the myfiles volume path, I don't see any of the files created by the API. And if I restart the container, the uploaded files disappear.
Here is my docker-compose.yaml file for the api service:
version: '3.1'
services:
  api:
    image: api-image
    restart: always
    volumes:
      - myfiles:/files
volumes:
  myfiles:
After docker-compose up -d, I upload some files and see them in the container by calling
docker exec -it container_name ls
files node_modules package.json
index.js package-lock.json src
docker exec -it container_name ls files
d5a3455a39d8185153308332ca050ad8.png
The files are created successfully.
I checked that the container mounts the volume correctly with
docker inspect container_name, and the result is:
"Mounts": [
    {
        "Type": "bind",
        "Source": "/var/lib/docker/volumes/myfiles/_data",
        "Destination": "/files",
        "Mode": "rw",
        "RW": true,
        "Propagation": "rslave"
    }
]
You are creating a named volume; you are not using a folder. Try using the myfiles folder from the current directory instead:
version: '3.1'
services:
  api:
    image: api-image
    restart: always
    volumes:
      - ./myfiles:/files
I'm using Docker for Windows on WSL2. When I do this in a Dockerfile, the contents of my directory on the host are copied correctly to the new image after building:
FROM node:latest
WORKDIR /app
COPY . /app
However, this docker compose YAML doesn't mount the same directory in the same location:
version: "2.0"
services:
  node:
    image: node
    user: node
    environment:
      - NODE_ENV=production
    volumes:
      - ./:/home/node/app
    tty: true
After accessing the container, I'm not able to see the contents of my host directory in it. I've also tried with the full host path, but it didn't work.
When I run docker inspect on my container, I see this, which contains the correct information:
"Mounts": [
    {
        "Type": "bind",
        "Source": "/mnt/c/Users/my-user/my-dir",
        "Destination": "/home/node/app",
        "Mode": "rw",
        "RW": true,
        "Propagation": "rprivate"
    }
],
What am I missing?
I've created a simple Docker image with a Node.js server.
FROM node:12.16.1-alpine
WORKDIR /usr/src
COPY ./app/package.json .
RUN yarn
COPY ./app ./app
This works great and the service is running.
Now I'm trying to run the container with a volume for local development using docker compose:
version: "3.4"
services:
  web:
    image: my-node-app
    volumes:
      - ./app:/usr/src/app
    ports:
      - "8080:8080"
    command: ["yarn", "start"]
    build:
      context: .
      dockerfile: ./app/Dockerfile
This is my folder structure in the host:
The service works without the volume. When I add the volume, /usr/src/app is empty (even though the corresponding host folder is full, as shown in the folder structure).
Inspecting the docker container I get the following mount config:
"Mounts": [
    {
        "Type": "bind",
        "Source": "/d/development/dockerNCo/app",
        "Destination": "/usr/src/app",
        "Mode": "rw",
        "RW": true,
        "Propagation": "rprivate"
    }
],
But still, browsing to the folder via a shell or VS Code shows it as empty.
In addition, the command docker volume ls shows an empty list.
I'm running Docker 18.09.3 on Windows 10.
Is there anything wrong with the configuration? How is it supposed to work?
Adding the volume to your service hides all the files in /usr/src/app and mounts the content of ./app from your host machine over it. This also means that all files generated by running yarn during the image build are lost, because they exist only in the docker image. This is the expected behaviour of adding a volume in docker; it is not a bug.
volumes:
  - ./app:/usr/src/app
Usually, for non-development environments, you don't need the volume here at all.
If you would like to see the files on your host, you need to run the yarn command from docker-compose (you can use an entrypoint).
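A sketch of that approach (assuming the image's WORKDIR is /usr/src/app and yarn is available in the image; the command override is illustrative, not from the original project):

```yaml
services:
  web:
    image: my-node-app
    volumes:
      - ./app:/usr/src/app
    # Run yarn at container start-up, after the bind mount is in place,
    # so node_modules is created inside the mounted host folder too.
    command: sh -c "yarn install && yarn start"
```

This trades a slower container start for dependencies that survive on the host, which is usually acceptable for local development.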
I am trying to store Nexus data in a persistent volume. To do that I am using this compose file:
version: '3.5'
services:
  nexus:
    image: sonatype/nexus3
    volumes:
      - ./nexus-data:/nexus-data sonatype/nexus3
    ports:
      - "8081:8081"
    networks:
      - devops
    extra_hosts:
      - "my-proxy:my-proxy-address"
    restart: on-failure
networks:
  devops:
    name: devops
    driver: bridge
Before running docker-compose up, I created the nexus-data folder and gave the required permissions to uid/gid 200 as suggested here:
https://github.com/sonatype/docker-nexus3/blob/master/README.md#persistent-data.
root@master-node:~/docker# ll
total 16
drwxr-xr-x  3 root root 4096 Jan  8 13:37 ./
drwx------ 22 root root 4096 Jan  8 13:40 ../
-rw-r--r--  1 root root  319 Jan  8 13:36 docker-compose.yml
drwxr-xr-x  2  200  200 4096 Jan  8 13:37 nexus-data/
And here is docker's volume list before running the compose file (it's empty):
root@master-node:~/docker# docker volume ls
DRIVER VOLUME NAME
After the docker-compose up command, docker created the volume shown below:
root@master-node:~/docker# docker volume ls
DRIVER VOLUME NAME
local 7b7b6517e5ed0e286a8fc7caef756141b5bbdb6e074ef93a657850da3dd78b2b
root@master-node:~/docker# docker volume inspect 7b7b6517e5ed0e286a8fc7caef756141b5bbdb6e074ef93a657850da3dd78b2b
[
    {
        "CreatedAt": "2020-01-08T13:42:34+03:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/7b7b6517e5ed0e286a8fc7caef756141b5bbdb6e074ef93a657850da3dd78b2b/_data",
        "Name": "7b7b6517e5ed0e286a8fc7caef756141b5bbdb6e074ef93a657850da3dd78b2b",
        "Options": null,
        "Scope": "local"
    }
]
root@master-node:~/docker# ls /var/lib/docker/volumes/7b7b6517e5ed0e286a8fc7caef756141b5bbdb6e074ef93a657850da3dd78b2b/_data
admin.password cache db elasticsearch etc generated-bundles instances javaprefs karaf.pid keystores lock log orient port restore-from-backup tmp
but the folder I specified in the compose file (nexus-data) is still empty:
root@master-node:~/docker# ls nexus-data/
root@master-node:~/docker#
So, what am I doing wrong here? Why is nexus-data empty, and why is docker creating the volume in another path?
You defined a volume, but a bind mount is what you want. Read the documentation about it.
Basically, your configuration makes docker create a volume, which maps to a randomly named directory somewhere under /var/lib/docker/volumes.
If you want a specific directory that you control, you have to create a bind mount. That's the reason you don't have data in your chosen directory: docker ignores it, because it plays no role for a volume.
For it to work, set it like:
volumes:
  - type: bind
    source: ./nexus-data
    target: /nexus-data
as explained in the compose documentation. (I also removed the image name from that configuration.)
You've created a host volume, aka bind mount (which doesn't show in docker volume ls, since it's not a named volume), from ./nexus-data on the host to "/nexus-data sonatype/nexus3" inside the container. This looks like a copy-and-paste error from a docker run command, since you are appending the image name to the path being mounted inside the container. You should be able to exec into the container and see your files with:
docker exec ... ls "/nexus-data sonatype/nexus3"
You should remove the image name from the volume path to mount the typical location inside the container:
version: '3.5'
services:
  nexus:
    image: sonatype/nexus3
    volumes:
      - ./nexus-data:/nexus-data
    ports:
      - "8081:8081"
    networks:
      - devops
    extra_hosts:
      - "my-proxy:my-proxy-address"
    restart: on-failure
networks:
  devops:
    name: devops
    driver: bridge
The volume you did see was an anonymous volume. This would be from the image itself defining a volume that you did not include in your container spec. Inspecting the container using that volume would show where it's mounted, most likely at /nexus-data.
The below docker-compose.yaml works as expected:
version: '3.5'
services:
  nexus:
    image: sonatype/nexus3
    volumes:
      - ./nexus-data:/nexus-data
    ports:
      - "8081:8081"
    networks:
      - devops
    extra_hosts:
      - "my-proxy:my-proxy-address"
    restart: on-failure
networks:
  devops:
    name: devops
    driver: bridge
The trailing sonatype/nexus3 in your original docker-compose.yaml spec:
...
volumes:
  - ./nexus-data:/nexus-data sonatype/nexus3
...
... throws docker off: the randomly named volume is created because of the VOLUME instruction in the sonatype/nexus3 Dockerfile and is mounted into the running container, which is why the intended locally mounted directory is empty.
I am pretty new to docker, so I might be doing something truly wrong.
I need to share some files between docker containers, using a docker compose file.
I have already created a volume like this:
docker volume create shared
After that I can check the created volume:
docker volume inspect shared
[
    {
        "CreatedAt": "2019-03-08T14:54:57-05:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/shared/_data",
        "Name": "shared",
        "Options": {},
        "Scope": "local"
    }
]
My docker-compose.yaml file looks like this:
version: '3.1'
services:
  service1:
    build:
      context: Service1
      dockerfile: Dockerfile
    restart: always
    container_name: server1-server
    volumes:
      - shared:/shared
  service2:
    build:
      context: Service2
      dockerfile: Dockerfile
    restart: always
    container_name: server2-server
    volumes:
      - shared:/shared
volumes:
  shared:
    external: true
And the Dockerfile looks like this (just for testing purposes):
FROM microsoft/dotnet:2.2-sdk AS build-env
RUN echo "test" > /shared/test.info
When I issue a docker-compose up command, I get this error:
/bin/sh: 1: cannot create /shared/test.info: Directory nonexistent
If I modify the Dockerfile to this:
FROM microsoft/dotnet:2.2-sdk AS build-env
WORKDIR /app
COPY *.csproj ./
RUN cp *.csproj /shared/
I get this error:
cp: cannot create regular file '/shared/': Not a directory
Any ideas how to achieve this?
A Dockerfile contains instructions to create an image. After the image is built, the image can be run as a container.
A volume is attached when launching containers.
It thus makes no sense to use Dockerfile instructions to copy a file into a volume while building an image.
Volumes are generally used to share runtime data between containers, or to keep data after a container is stopped.
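By way of illustration, the echo could instead run at container start, when the shared volume is actually mounted (the command override and the dotnet invocation below are illustrative assumptions, not the original services):

```yaml
services:
  service1:
    build:
      context: Service1
      dockerfile: Dockerfile
    volumes:
      - shared:/shared
    # The volume exists at run time, so write to it from the container's
    # command (or an entrypoint script), not from the Dockerfile.
    command: sh -c 'echo "test" > /shared/test.info && dotnet run'
volumes:
  shared:
    external: true
```

Any other container mounting the shared volume would then see test.info once service1 has started.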
I have some containers for a web app (nginx, gunicorn, postgres, and node, which builds static files from source and does React server-side rendering). In the Dockerfile for the node container I have two stages: build and run (Dockerfile.node). It ends up with two directories inside the container: bundle_client, the static files for nginx, and bundle_server, used in the node container itself to start an express server.
Then I need to share the built static folder (bundle_client) with the nginx container. To do so, according to the docker-compose reference, my docker-compose.yml has the following services (see the full docker-compose.yml):
node:
  volumes:
    - vnode:/usr/src
nginx:
  volumes:
    - vnode:/var/www/tg/static
  depends_on:
    - node
and volumes:
volumes:
  vnode:
Running docker-compose build completes with no errors. Running docker-compose up runs everything OK: I can open localhost:80 and nginx, gunicorn, and the node express SSR are all working great, and I can see a web page, but all static files return a 404 not found error.
If I check volumes with docker volume ls, I can see two newly created volumes named tg_vnode (the one we consider here) and tg_vdata (see the full docker-compose.yml).
If I go into an nginx container with docker run -ti -v /tmp:/tmp tg_node /bin/bash, I can't see my www/tg/static folder, which should map my static files from the node volume. I also tried to create an empty /var/www/tg/static folder via the nginx container's Dockerfile.nginx, but it stays empty.
If I instead map the bundle_client folder from the host machine in docker-compose.yml, in the nginx volumes section, as - ./client/bundle_client:/var/www/tg/static, it works OK and I can see all the static files served by nginx in the browser.
What am I doing wrong, and how do I make my container share the built static content with the nginx container?
PS: I read all the docs, all the github issues and stackoverflow Q&As, and as I understood it, this has to work; there is no info on what to do when it does not.
UPD: Result of docker volume inspect vnode:
[
    {
        "CreatedAt": "2018-05-18T12:24:38Z",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "tg",
            "com.docker.compose.version": "1.21.1",
            "com.docker.compose.volume": "vnode"
        },
        "Mountpoint": "/var/lib/docker/volumes/tg_vnode/_data",
        "Name": "tg_vnode",
        "Options": null,
        "Scope": "local"
    }
]
Files:
Dockerfile.node,
docker-compose.yml
Nginx dockerfile: Dockerfile.nginx
UPD: I have created a simplified repo to reproduce a question: repo
(there are some warnings on npm install; never mind, it installs and builds OK). Eventually, when we open localhost:80, we see an empty page and 404 messages for the static files (vendor.js and app.js) in Chrome dev tools, but there should be a message "React app: static loaded" generated by a react script.
You need two changes. In your node service, add the volume like:
volumes:
  - vnode:/usr/src/bundle_client
Since you want to share /usr/src/bundle_client, you should NOT be using /usr/src/, because that would share the full folder and its structure too.
And then in your nginx service add the volume like:
volumes:
  - type: volume
    source: vnode
    target: /var/www/test/static
    volume:
      nocopy: true
The nocopy: true makes our intention clear: when the container is first mapped, the content of the mapped folder should not be copied into the volume. By default, the first container mapped to the volume populates it with the contents of the mapped folder; in your case you want this to be the node container.
Also before testing make sure you run below command to kill the cached volumes:
docker-compose down -v
You can see that during my test the container had the files.
Explanation of what happens step by step
Dockerfile.node
...
COPY ./client /usr/src
...
docker-compose.yml
services:
  ...
  node:
    ...
    volumes:
      - ./server/nginx.conf:/etc/nginx/nginx.conf:ro
      - vnode:/usr/src
  ...
volumes:
  vnode:
With this Dockerfile.node and this docker-compose section, docker-compose up creates a named volume whose data is seeded from /usr/src.
Dockerfile.nginx
FROM nginx:latest
COPY ./server/nginx.conf /etc/nginx/nginx.conf
RUN mkdir -p /var/www/tg/static
EXPOSE 80
EXPOSE 443
CMD ["nginx", "-g", "daemon off;"]
This means that nginx containers created with docker-compose will have an empty /var/www/tg/static/.
docker-compose.yml
...
nginx:
build:
context: .
dockerfile: ./Dockerfile.nginx
container_name: tg_nginx
restart: always
volumes:
- ./server/nginx.conf:/etc/nginx/nginx.conf:ro
- vnode:/var/www/tg/static
ports:
- "80:80"
- "443:443"
depends_on:
- node
- gunicorn
networks:
- nw_web_tg
volumes:
vdata:
vnode:
docker-compose up creates the vnode named volume; the nginx service then mounts the existing vnode at /var/www/tg/static (empty by now).
So, at this point:
- the nginx container has /var/www/tg/static empty, because it was created empty (see the mkdir in Dockerfile.nginx)
- the node container has the /usr/src dir with the client files (copied in Dockerfile.node)
- vnode has the content of /usr/src from node and of /var/www/tg/static from nginx
Ultimately, to pass data from /usr/src in your node container to /var/www/tg/static in the nginx container, you need to do something that is not very pretty, because Docker hasn't provided another way yet: you need to combine a named volume on the source folder with a bind mount on the destination:
nginx:
  build:
    context: .
    dockerfile: ./Dockerfile.nginx
  container_name: tg_nginx
  restart: always
  volumes:
    - ./server/nginx.conf:/etc/nginx/nginx.conf:ro
    - /var/lib/docker/volumes/vnode/_data:/var/www/tg/static
  ports:
    - "80:80"
    - "443:443"
  depends_on:
    - node
    - gunicorn
  networks:
    - nw_web_tg
That is, in docker-compose, just change - vnode:/var/www/tg/static to - /var/lib/docker/volumes/vnode/_data:/var/www/tg/static.