Docker volume clears folder in container on Windows 10

I've created a simple Docker image with a Node.js server:
FROM node:12.16.1-alpine
WORKDIR /usr/src
COPY ./app/package.json .
RUN yarn
COPY ./app ./app
This works great and the service runs.
Now I'm trying to run the container with a volume for local development, using docker-compose:
version: "3.4"
services:
web:
image: my-node-app
volumes:
- ./app:/usr/src/app
ports:
- "8080:8080"
command: ["yarn", "start"]
build:
context: .
dockerfile: ./app/Dockerfile
This is my folder structure on the host (screenshot omitted; the project root contains an app folder with the source and the Dockerfile).
The service works without the volume. When I add the volume, /usr/src/app in the container is empty, even though the corresponding host folder is full (as shown in the folder structure).
Inspecting the Docker container, I get the following mount config:
"Mounts": [
{
"Type": "bind",
"Source": "/d/development/dockerNCo/app",
"Destination": "/usr/src/app",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
But still, browsing to the folder via a shell in VS Code shows it as empty.
In addition, the command docker volume ls shows an empty list.
I'm running Docker 18.09.3 on Windows 10.
Is there anything wrong with the configuration? How is it supposed to work?

Adding the volume to your service hides everything in /usr/src/app and mounts the content of ./app from your host machine in its place. This also means that any files generated by running yarn during the image build are lost, because they exist only in the Docker image. This is the expected behaviour of mounting a volume in Docker; it is not a bug.
volumes:
  - ./app:/usr/src/app
Usually, for non-development environments, you don't need the volume here at all.
If you want the generated files to also appear on your host, you need to run the yarn command from docker-compose instead (you can use an entrypoint), as sketched below.
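A minimal sketch of that idea, assuming the image and paths from the question (an illustration, not a drop-in fix):
version: "3.4"
services:
  web:
    image: my-node-app
    volumes:
      - ./app:/usr/src/app
    ports:
      - "8080:8080"
    # install dependencies at container start so node_modules lands in the
    # bind-mounted ./app folder (and therefore on the host), then start the app
    command: ["sh", "-c", "cd /usr/src/app && yarn && yarn start"]
    build:
      context: .
      dockerfile: ./app/Dockerfile
Startup is slower because yarn runs every time the container is created, but the mounted folder and the container now see the same files.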

Related

How can I add a file to my volume without writing a new file to the host?

I'm trying to run a Next.js project inside docker-compose. To take advantage of hot-reloading, I'm mounting in the entire project to the Docker image as a volume.
So far, so good!
This is where things are starting to get tricky: For this particular project, it turns out Apple Silicon users need a .babelrc file included in their dockerized app, but NOT in the files on their computer.
All other users do not need a .babelrc file at all.
To sum up, this is what I'd like to be able to do:
hot reload project (hence ./:/usr/src/app/)
have an environment variable write content to /usr/src/app/.babelrc.
not have a .babelrc in the host's project root.
My attempt at solving this was to keep the file at ci-cd/.babelrc in the host file system.
Then I tried mounting the file as a volume, like - ./ci-cd/.babelrc:/usr/src/app/.babelrc. But then a .babelrc file gets written back to the root of the project in the host filesystem.
I also tried COPY ./ci-cd/.babelrc /usr/src/app/.babelrc in the Dockerfile, but it seems to be overwritten by docker-compose's volumes property.
Here's my Dockerfile:
FROM node:14
WORKDIR /usr/src/app/
COPY package.json .
RUN npm install
And the docker-compose.yml:
version: "3.8"
services:
# Database image
psql:
image: postgres:13
restart: unless-stopped
ports:
- 5432:5432
# image for next.js project
webapp:
build: .
command: >
bash -c "npm run dev"
ports:
- 3002:3002
expose:
- 3002
depends_on:
- testing-psql
volumes:
- ./:/usr/src/app/
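One possible way out (a sketch under assumptions, not an answer from the original thread): keep COPY ./ci-cd/.babelrc /usr/src/app/.babelrc in the Dockerfile, and mount only the subdirectories you actually edit instead of the whole project root. Then /usr/src/app itself stays container-local, so the copied .babelrc is neither shadowed by the volume nor written back to the host:
webapp:
  build: .
  command: >
    bash -c "npm run dev"
  ports:
    - 3002:3002
  volumes:
    # hot reload still works for these mounted folders, while
    # /usr/src/app/.babelrc (COPY'd into the image) stays container-only;
    # the folder names pages/public/styles are assumptions about the layout
    - ./pages:/usr/src/app/pages
    - ./public:/usr/src/app/public
    - ./styles:/usr/src/app/styles
The trade-off is that files outside the mounted folders (package.json and so on) are baked into the image and require a rebuild when they change.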

Nx Monorepo - NestJs Docker development reloading not working

I want to run an Nx workspace containing a NestJS project in a Docker container, in development mode. The problem is that I am unable to configure docker-compose + Dockerfile so that the project reloads on save. I'm a bit confused about why this is not working, as I configured a small NestJS project (without Nx) in Docker and it had no issues reloading on save.
Surely I am not mapping the ports correctly or something.
version: "3.4"
services:
nx-app:
container_name: nx-app
build: .
ports:
- 3333:3333
- 9229:9229
volumes:
- .:/workspace
FROM node:14.17.3-alpine
WORKDIR /workspace
COPY . .
RUN ["npm", "i", "-g", "#nrwl/cli"]
RUN ["npm", "i"]
EXPOSE 3333
EXPOSE 9229
ENTRYPOINT ["nx","serve","main"]
I also tried adding an Angular application to the workspace, and I was able to reload it on save in the container without issues...
Managed to solve it by adding "poll": 500 to the project.json of the NestJS app/library. (File-change events from the host often don't reach the container through a mounted volume, so the watcher has to poll for changes instead.)
"targets": {
"build": {
"executor": "#nrwl/node:webpack",
...
"options": {
...
"poll": 500

Docker container cannot copy file into volume

I am pretty new to Docker, so I might be doing something truly wrong.
I need to share some files between Docker containers, using a docker-compose file.
I have already created a volume like this:
docker volume create shared
After that I can check the created volume:
docker volume inspect shared
[
    {
        "CreatedAt": "2019-03-08T14:54:57-05:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/shared/_data",
        "Name": "shared",
        "Options": {},
        "Scope": "local"
    }
]
My docker-compose.yaml file looks like this
version: '3.1'
services:
  service1:
    build:
      context: Service1
      dockerfile: Dockerfile
    restart: always
    container_name: server1-server
    volumes:
      - shared:/shared
  service2:
    build:
      context: Service2
      dockerfile: Dockerfile
    restart: always
    container_name: server2-server
    volumes:
      - shared:/shared
volumes:
  shared:
    external: true
And the Dockerfile looks like this (just for testing purposes)
FROM microsoft/dotnet:2.2-sdk AS build-env
RUN echo "test" > /shared/test.info
When I issue a docker-compose up command, I get this error:
/bin/sh: 1: cannot create /shared/test.info: Directory nonexistent
If I modify the Dockerfile to this:
FROM microsoft/dotnet:2.2-sdk AS build-env
WORKDIR /app
COPY *.csproj ./
RUN cp *.csproj /shared/
I get this error
cp: cannot create regular file '/shared/': Not a directory
Any ideas how to achieve this?
A Dockerfile contains instructions to create an image. Once the image is built, it can be run as a container.
A volume is attached when a container is launched.
It thus makes no sense to use Dockerfile instructions to copy a file into a volume while building an image; the volume simply does not exist at build time.
Volumes are generally used to share runtime data between containers, or to keep data after a container is stopped. If you need to seed the volume, do it when the container starts, as sketched below.
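A minimal sketch of the runtime alternative, reusing the compose file above (the application command is a hypothetical placeholder):
services:
  service1:
    build:
      context: Service1
      dockerfile: Dockerfile
    restart: always
    container_name: server1-server
    volumes:
      - shared:/shared
    # the volume is attached by the time the container runs, so this copy
    # succeeds; "dotnet YourApp.dll" stands in for the real start command
    command: sh -c "echo test > /shared/test.info && dotnet YourApp.dll"
volumes:
  shared:
    external: true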

Docker container can't mount folder

I am working on an application, using Node.js as the language and Docker with docker-compose for deployment.
I have an endpoint for uploading files. When I upload a file, the application keeps it in the "files" folder (at the root of the project). When the container restarts I don't want to lose the files, so I created a volume named "myfiles" and mounted it at the "/files" path.
But when I check the myfiles volume path, I don't see any of the files created by the API. And if I restart the container, the uploaded files disappear.
Here is my docker-compose.yaml file for the api:
version: '3.1'
services:
  api:
    image: api-image
    restart: always
    volumes:
      - myfiles:/files
volumes:
  myfiles:
After docker-compose up -d, I upload some files and see them in the container by calling:
docker exec -it container_name ls
files node_modules package.json
index.js package-lock.json src
docker exec -it container_name ls files
d5a3455a39d8185153308332ca050ad8.png
The files are created successfully.
I checked whether the container mounts correctly with docker inspect container_name, and got this result:
"Mounts": [
{
"Type": "bind",
"Source": "/var/lib/docker/volumes/myfiles/_data",
"Destination": "/files",
"Mode": "rw",
"RW": true,
"Propagation": "rslave"
}
]
You are creating a named volume, not using a host folder. If you want to see the files next to your project, use the folder myfiles from the current directory instead:
version: '3.1'
services:
  api:
    image: api-image
    restart: always
    volumes:
      - ./myfiles:/files
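If you keep the named volume instead, the uploaded files do persist; they just live under Docker's own storage area (the Mountpoint shown by docker volume inspect) rather than next to your project. You can list them without knowing the host path (a generic check, not part of the original answer):
# list the contents of the myfiles volume via a throwaway container
docker run --rm -v myfiles:/files alpine ls /files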

Docker - Can't share data between containers within a volume (docker-compose 3)

I have some containers for a web app (nginx, gunicorn, postgres, and node to build static files from source plus React server-side rendering). In the Dockerfile for the node container I have two stages: build and run (Dockerfile.node). It ends up with two directories inside the container: bundle_client, the static files for nginx, and bundle_server, used by the node container itself to start an Express server.
Then I need to share the built static folder (bundle_client) with the nginx container. To do so, according to the docker-compose reference, my docker-compose.yml has the following services (see the full docker-compose.yml):
node:
  volumes:
    - vnode:/usr/src
nginx:
  volumes:
    - vnode:/var/www/tg/static
  depends_on:
    - node
and volumes:
volumes:
  vnode:
Running docker-compose build completes with no errors. Running docker-compose up runs everything OK: I can open localhost:80, and nginx, gunicorn, and the node Express SSR are all working great; I can see the web page, but all static files return a 404 Not Found error.
If I check volumes with docker volume ls I can see two newly created volumes named tg_vnode (which we consider here) and tg_vdata (see the full docker-compose.yml).
If I go into the nginx container with docker run -ti -v /tmp:/tmp tg_node /bin/bash, I can't see my /var/www/tg/static folder, which should map the static files from the node volume. I also tried to create an empty /var/www/tg/static folder via the nginx container's Dockerfile.nginx, but it stays empty.
If I map the bundle_client folder from the host machine in the nginx volumes section of docker-compose.yml, as - ./client/bundle_client:/var/www/tg/static, it works OK and I can see all the static files served by nginx in the browser.
What am I doing wrong, and how do I make my container share the built static content with the nginx container?
PS: I read all the docs, all the GitHub issues, and the Stack Overflow Q&As, and as I understood it, this has to work; there is no info on what to do when it does not.
UPD: Result of docker volume inspect vnode:
[
    {
        "CreatedAt": "2018-05-18T12:24:38Z",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "tg",
            "com.docker.compose.version": "1.21.1",
            "com.docker.compose.volume": "vnode"
        },
        "Mountpoint": "/var/lib/docker/volumes/tg_vnode/_data",
        "Name": "tg_vnode",
        "Options": null,
        "Scope": "local"
    }
]
Files:
Dockerfile.node,
docker-compose.yml
Nginx dockerfile: Dockerfile.nginx
UPD: I have created a simplified repo to reproduce the question: repo
(there are some warnings on npm install; never mind, it installs and builds OK). Eventually, when we open localhost:80, we see an empty page and 404 messages for the static files (vendor.js and app.js) in Chrome dev tools, but there should be a message "React app: static loaded" generated by a React script.
You need two changes. In your node service, add the volume like this:
volumes:
  - vnode:/usr/src/bundle_client
Since you want to share /usr/src/bundle_client, you should NOT mount /usr/src/, because that would share the full folder and its structure too.
And then in your nginx service add the volume like:
volumes:
  - type: volume
    source: vnode
    target: /var/www/tg/static
    volume:
      nocopy: true
The nocopy: true keeps our intention clear: on the initial mapping of the container, the content of the mapped folder should not be copied into the volume. By default, the first container mapped to the volume populates it with the contents of the mount path; in your case you want that container to be the node one.
Also, before testing, make sure you run the command below to kill the cached volumes (the -v flag also removes the named volumes):
docker-compose down -v
You can see that during my test the container had the files (screenshot omitted).
Explanation of what happens, step by step:
Dockerfile.node
...
COPY ./client /usr/src
...
docker-compose.yml
services:
  ...
  node:
    ...
    volumes:
      - ./server/nginx.conf:/etc/nginx/nginx.conf:ro
      - vnode:/usr/src
  ...
volumes:
  vnode:
With this Dockerfile.node and this docker-compose section, docker-compose up creates a named volume with the data saved in /usr/src.
Dockerfile.nginx
FROM nginx:latest
COPY ./server/nginx.conf /etc/nginx/nginx.conf
RUN mkdir -p /var/www/tg/static
EXPOSE 80
EXPOSE 443
CMD ["nginx", "-g", "daemon off;"]
This means that nginx containers created with docker-compose will have an empty /var/www/tg/static/.
docker-compose.yml
...
  nginx:
    build:
      context: .
      dockerfile: ./Dockerfile.nginx
    container_name: tg_nginx
    restart: always
    volumes:
      - ./server/nginx.conf:/etc/nginx/nginx.conf:ro
      - vnode:/var/www/tg/static
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - node
      - gunicorn
    networks:
      - nw_web_tg
volumes:
  vdata:
  vnode:
docker-compose up then creates the vnode named volume and maps the (by now empty) /var/www/tg/static onto the existing vnode.
So, at this point:
- the nginx container has an empty /var/www/tg/static, because it was created empty (see the mkdir in Dockerfile.nginx)
- the node container has a /usr/src dir with the client files (copied in Dockerfile.node)
- vnode has the content of /usr/src from node and of /var/www/tg/static from nginx
Ultimately, to pass data from /usr/src in your node container to /var/www/tg/static in the nginx container, you need to do something that is not very pretty, because Docker hasn't provided another way yet: combine the named volume on the source side with a bind mount of the volume's data directory on the destination side:
nginx:
  build:
    context: .
    dockerfile: ./Dockerfile.nginx
  container_name: tg_nginx
  restart: always
  volumes:
    - ./server/nginx.conf:/etc/nginx/nginx.conf:ro
    - /var/lib/docker/volumes/vnode/_data:/var/www/tg/static
  ports:
    - "80:80"
    - "443:443"
  depends_on:
    - node
    - gunicorn
  networks:
    - nw_web_tg
In short: in docker-compose.yml, change - vnode:/var/www/tg/static to - /var/lib/docker/volumes/vnode/_data:/var/www/tg/static.
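To verify the share worked (a generic check, not part of the original answer), list the static folder inside the running nginx container:
# after docker-compose up, confirm the built files reached nginx
docker-compose exec nginx ls /var/www/tg/static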
