I'm trying to use a mounted volume directory in the build process, but it's either not mounted at that point or mounted incorrectly.
docker-compose.yml
version: '2'
services:
assoi:
restart: on-failure
build:
context: ./assoi
expose:
- "4129"
links:
- assoi-redis
- assoi-postgres
- assoi-mongo
- assoi-rabbit
volumes:
- ./ugmk:/www
command: pm2 start /www/ugmk.json
...
Dockerfile
...
WORKDIR /www
RUN ls -la
RUN npm i
RUN node install.js
...
Output of sudo docker-compose build:
...
Step 12 : WORKDIR /www
---> Using cache
---> 73504ed64194
Step 13 : RUN ls -al
---> Running in 37bb9f70d4ac
total 8
drwxr-xr-x 2 root root 4096 Aug 22 13:31 .
drwxr-xr-x 65 root root 4096 Aug 22 14:05 ..
---> be1ac6edce56
...
During the build you do not mount anything; more specifically, you cannot mount any volume at build time.
What you do instead is COPY, so in your case:
COPY ./ugmk /www
WORKDIR /www
RUN ls -la
RUN npm i
RUN node install.js
Volumes are for containers, not for images: volumes are meant to store persistent, user-generated data, which by definition can only happen at runtime, i.e. in containers.
Nevertheless, the COPY above is the standard practice for what you want to achieve: building an image with the application pre-deployed / assets compiled.
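To verify what actually ends up in the image, you can run a one-off container from the built image itself rather than through docker-compose (a compose run would mount ./ugmk over /www again and hide what COPY put there). A minimal sketch; the image tag is an assumption, since Compose names built images <project>_<service>:
sudo docker-compose build assoi
sudo docker run --rm <project>_assoi ls -la /www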
Related
As far as I know, commands in a Dockerfile affect only the built image, not the container where it runs. But in this scenario it looks like the build is writing files to the attached volume, which should be impossible, because the volume is not attached while the image is being built:
Dockerfile:
FROM alpine
RUN mkdir /data
RUN date > /data/timestamp
RUN echo "Content in image:" && cat /data/timestamp
docker-compose.yml:
version: "3"
services:
my-service:
build: .
image: my-image
entrypoint: cat /data/timestamp
volumes:
- my-volume:/data
other-service:
image: alpine
entrypoint: cat /data/timestamp
volumes:
- my-volume:/data
volumes:
my-volume:
Output:
$ docker-compose build --no-cache; docker-compose up
other-service uses an image, skipping
Building my-service
Step 1/4 : FROM alpine
---> e66264b98777
Step 2/4 : RUN mkdir /data
---> Running in 969f72e0e71c
Removing intermediate container 969f72e0e71c
---> 21277c5b67b6
Step 3/4 : RUN date > /data/timestamp
---> Running in 09b2e14d742a
Removing intermediate container 09b2e14d742a
---> ba94d6c58c1f
Step 4/4 : RUN echo "Content in image:" && cat /data/timestamp
---> Running in 985e8e48bd80
Content in image:
Fri Jul 15 11:20:14 UTC 2022
Removing intermediate container 985e8e48bd80
---> adcbeac42123
Successfully built adcbeac42123
Successfully tagged my-image:latest
Creating volume "docker-test_my-volume" with default driver
Creating docker-test_other-service_1 ... done
Creating docker-test_my-service_1 ... done
Attaching to docker-test_other-service_1, docker-test_my-service_1
my-service_1 | Fri Jul 15 11:20:14 UTC 2022
other-service_1 | Fri Jul 15 11:20:14 UTC 2022
docker-test_other-service_1 exited with code 0
docker-test_my-service_1 exited with code 0
The other-service should not be able to read the contents of /data/timestamp, because it uses a different image (alpine) and the file exists only in my-image, not in the volume. How is it possible that the file ends up in the volume? Also, nothing seems to change if I use VOLUME /data instead of RUN mkdir /data in the Dockerfile; what should I expect from that instruction?
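One way to see what is actually stored in the named volume, independently of either service, is to mount it into a throwaway container. A minimal sketch, assuming the volume name docker-test_my-volume shown in the output above:
docker run --rm -v docker-test_my-volume:/data alpine ls -la /data
docker run --rm -v docker-test_my-volume:/data alpine cat /data/timestamp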
I'm trying to dockerize a simple create-react-app project. (It is the initial project right after running npx create-react-app test; no files were changed.)
The problem seems to be that newer versions of React moved the annoying .eslintcache from the project root to node_modules/.cache, which causes problems when the container tries to run the application via docker-compose.
Dockerfile
FROM node:alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install
RUN chown -R node.node /usr/app/node_modules
COPY . ./
CMD ["npm", "start"]
docker-compose
version: '3'
services:
test:
stdin_open: true
build:
context: .
dockerfile: Dockerfile
environment:
- CHOKIDAR_USEPOLLING=true
volumes:
- /usr/app/node_modules
- .:/usr/app
ports:
- '3000:3000'
The container is logging this error message:
test_1 | Failed to compile.
test_1 |
test_1 | EACCES: permission denied, mkdir '/usr/app/node_modules/.cache
As you can see, I tried to set the node_modules folder's owner to the node user (the default user for node:alpine), but it is not working; exploring the container, node_modules is still owned by root:
drwxrwxr-x 5 node node 4096 Apr 14 07:04 .
drwxr-xr-x 1 root root 4096 Apr 14 07:08 ..
-rw-rw-r-- 1 node node 310 Apr 14 06:56 .gitignore
-rw-rw-r-- 1 node node 192 Apr 14 07:30 Dockerfile
-rw-rw-r-- 1 node node 3369 Apr 14 06:56 README.md
drwxrwxr-x 1061 root root 36864 Apr 14 07:12 node_modules
-rw-rw-r-- 1 node node 692936 Apr 14 06:56 package-lock.json
-rw-rw-r-- 1 node node 808 Apr 14 06:56 package.json
drwxrwxr-x 2 node node 4096 Apr 14 06:56 public
drwxrwxr-x 2 node node 4096 Apr 14 06:56 src
I also tried creating the folder with RUN mkdir -p /usr/app and using USER node, but that ended up with npm being unable to create the node_modules folder.
Is there any workaround where either .eslintcache is disabled or node_modules is owned by the node user?
Update
Apparently, this is occurring because I'm using Ubuntu and docker mounts volumes as root on Linux systems.
Adding this line just after RUN npm install in your Dockerfile would solve the issue:
RUN mkdir -p node_modules/.cache && chmod -R 777 node_modules/.cache
Final Dockerfile
FROM node:alpine
WORKDIR /usr/app
COPY package.json .
RUN npm install
RUN mkdir node_modules/.cache && chmod -R 777 node_modules/.cache
COPY . .
CMD ["npm", "run", "start"]
Then you don't need to copy the node_modules folder from your local directory into the container; you can safely bind-mount ("bookmark") the project directory.
If you have a local node_modules folder, delete it before running docker or docker-compose, because there is another situation that can cause
EACCES: permission denied, mkdir '/usr/app/node_modules/.cache'
This can happen if you previously ran the react app with a local node_modules folder and then deleted it in order to use the container's node_modules.
After deleting the folder, the app may recreate it and keep a .cache folder inside, which is generally not visible.
So when you bind-mount ("bookmark") the whole project using ".:/usr/app" in docker-compose.yml or -v "$(pwd):/usr/app", it tries to mount that cache folder from the leftover local node_modules and causes all the fuss.
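A minimal sketch of that clean-up, run from the project root on the host (assumes you then want Compose to rebuild the image):
rm -rf node_modules
docker-compose up --build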
This is caused by a long-standing issue with npm changing its process's UID, which is fixed in npm 9.
Specifically, it is described in this NPM RFC comment:
Docker volume mounts use the UID:GID of the host machine. The changes
npm has made to infer the execution user from the UID:GID effectively
break docker setups where the host users UID:GID does not match the
node user on the container. Setting UID:GID in a .env file for each
developer in our application is cumbersome and overall ridiculous.
Stop trying to infer best security practices when it comes to running
scripts, your job is to be a package manager
This fix is called out prominently in the v9.0.0 changelog announcement.
Related changes in NPM tracker:
https://github.com/npm/cli/blob/v9.1.2/CHANGELOG.md#900-pre6-2022-10-19
https://github.com/npm/rfcs/issues/546
https://github.com/npm/statusboard/issues/540
https://github.com/npm/cli/pull/5703
https://github.com/npm/cli/pull/5704
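If your base image still ships an older npm, one way to pick up the npm 9 behaviour is to upgrade npm inside the image before installing dependencies. A minimal sketch based on the Dockerfile from the question; the explicit upgrade line, and pinning to npm@9, are assumptions:
FROM node:alpine
WORKDIR /usr/app
# Upgrade to npm 9, which no longer infers the execution user from the
# UID:GID of the mounted files (the fix referenced above)
RUN npm install -g npm@9
COPY package*.json ./
RUN npm install
COPY . ./
CMD ["npm", "start"]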
I've just stumbled across the same issue. If you're wondering how I dealt with it: I simply added two lines before the last line, and it worked like a charm.
FROM node:16.13.0-alpine
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
RUN npm config set unsafe-perm true
RUN npm install --silent
COPY . .
RUN chown -R node /app/node_modules
USER node
CMD ["npm", "start"]
In the docker-compose file, remove the volume entry you are using to exclude node_modules.
It will work.
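For reference, a minimal sketch of the compose file from the question with that anonymous node_modules volume removed (everything else left as-is):
version: '3'
services:
  test:
    stdin_open: true
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - CHOKIDAR_USEPOLLING=true
    volumes:
      - .:/usr/app
    ports:
      - '3000:3000'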
Well, this clearly shows you don't have access to node_modules inside your Docker container. I tried a lot of things, but the form below works for me.
RUN mkdir -p /usr/src/app
RUN chmod +rwx /usr/src/app
WORKDIR /usr/src/app
First give read/write/execute permission to your working directory via the commands above; then it will work fine.
The solution I came up with was installing node_modules locally and then modifying the Dockerfile to copy everything into the container and bind-mount it, instead of installing node_modules during the build and excluding it with an anonymous volume. I know it's not the best approach to this problem, but it's an easy solution for something I've been trying to solve for days.
Dockerfile
FROM node:alpine
WORKDIR /usr/app
COPY . ./
CMD ["npm", "start"]
docker-compose
version: '3'
services:
test:
stdin_open: true
build:
context: .
dockerfile: Dockerfile.dev
environment:
- CHOKIDAR_USEPOLLING=true
volumes:
- .:/usr/app
ports:
- '3000:3000'
Well, this clearly shows that you don't have read, write, and execute permission on your node_modules folder; you will probably see a lock icon on the node_modules folder.
FOR UBUNTU any version
sudo chmod a+rwx <path>/your-project-folder/node_modules
EXAMPLE
sudo chmod a+rwx Desktop/react-app/node_modules
explanation
chmod - to change permission
a - all
rwx - read, write and execute
To fix this issue in Kubernetes, you have to mount an emptyDir volume at the path /app/.next/cache. See the example:
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deploy
spec:
template:
metadata:
labels:
type: my-label
spec:
containers:
image: ghcr.io/myimage
imagePullPolicy: Always
name: my-site
ports:
- containerPort: 3000
name: http
protocol: TCP
volumeMounts:
- mountPath: /app/.next/cache
name: cache
volumes:
- emptyDir: {}
name: cache
Move 'USER node' higher
I was only using Docker (no docker-compose), and the solution was to move USER node to before the COPY package* lines, as sketched below.
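A minimal sketch of that reordering, based on the Dockerfile from the question; the mkdir/chown line, the /home/node/app path, and the --chown flags are additions I'm assuming in order to keep the sketch self-contained:
FROM node:alpine
# Create a directory the unprivileged node user owns
RUN mkdir -p /home/node/app && chown -R node:node /home/node/app
# Switch to the node user before any npm install runs
USER node
WORKDIR /home/node/app
COPY --chown=node:node package*.json ./
RUN npm install
COPY --chown=node:node . ./
CMD ["npm", "start"]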
I had the same issue.
The thing is, I used vue create inside /tmp, and then moved the content of the created folder inside my volume.
Which apparently npm is not too fond of.
So I removed node_modules and recreated it using npm install; now it works like a charm.
I'm trying to build and run a docker image with docker-compose up
However, I get the error can't open /config/config.template: no such file
My Dockerfile is as follows:
FROM quay.io/coreos/clair-git
COPY config.template /config/config.template
#Set Defaults
ENV USER=clair PASSWORD=johnnybegood INSTANCE_NAME=postgres PORT=5432
RUN apk add gettext
CMD envsubst < /config/config.template > /config/config.yaml && rm -f /config/config.template && exec /clair -config=/config/config.yaml
ENTRYPOINT []
When adding the line RUN ls -la /config/, the following is returned after running docker-compose up --build:
drwxr-xr-x 2 root root 4096 Sep 16 06:46 .
drwxr-xr-x 1 root root 4096 Sep 16 06:46 ..
-rw-rw-r-- 1 root root 306 Sep 6 05:55 config.template
Here is the error:
clair_1_9345a64befa1 | /bin/sh: can't open /config/config.template: no such file
I've tried changing line endings and checking the Docker version. It seems to work on a different machine running a different OS.
I'm using Ubuntu 18.04 with docker-compose version 1.23.1, build b02f1306.
My docker-compose.yml file:
version: '3.3'
services:
clair:
build:
context: clair/
dockerfile: Dockerfile
environment:
- PASSWORD=johnnybegood
- USER=clair
- PORT=5432
- INSTANCE=postgres
ports:
- "6060:6060"
- "6061:6061"
depends_on:
- postgres
postgres:
build:
context: ../blah/postgres
dockerfile: Dockerfile
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=johnnybegood
- POSTGRES_USER=clair
- POSTGRES_DB=clair
Docker's CMD is designed to run a single process, following the Docker philosophy of one process per container. Try using a start script that renders your template and then launches clair.
FROM quay.io/coreos/clair-git
COPY config.template /config/config.template
COPY start.sh /start.sh
#Set Defaults
ENV USER=clair PASSWORD=johnnybegood INSTANCE_NAME=postgres PORT=5432
RUN apk add gettext
ENTRYPOINT ["/start.sh"]
and have a start script (with executable permissions) copied into the container by your Dockerfile:
#!/bin/sh
envsubst < /config/config.template > /config/config.yaml
exec /clair -config=/config/config.yaml
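If the script is not already executable in your source tree, a sketch of one way to make sure it is (run on the host before building; alternatively add RUN chmod +x /start.sh after the COPY in the Dockerfile):
chmod +x start.sh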
Edit: changed the answer after a comment from David Maze.
Good day everyone.
My task here is to use a volume that is shared among several services. The volume is populated by an ADD or COPY command in one service's Dockerfile. The problem I encountered is that the volume is not updated when the services are started via docker-compose up.
Consider following setup:
# docker-compose.yml
version: "3"
services:
service-a:
build:
context: service-a
dockerfile: Dockerfile
volumes:
- shared:/test/dir
service-b:
build:
context: service-b
dockerfile: Dockerfile
volumes:
- shared:/another-test/dir
volumes:
shared:
# ./service-a/Dockerfile
FROM alpine:3.8
COPY test-dir /test/dir
CMD [ "tail", "-f", "/dev/null" ]
# ./service-b/Dockerfile
FROM alpine:3.8
CMD [ "tail", "-f", "/dev/null" ]
We also create the folder ./service-a/test-dir. Now let's build it:
> docker-compose build
Building service-a
Step 1/3 : FROM alpine:3.8
---> 196d12cf6ab1
Step 2/3 : COPY test-dir /test/dir
---> ac66ed92b442
Step 3/3 : CMD [ "tail", "-f", "/dev/null" ]
---> Running in 932eb32b6184
Removing intermediate container 932eb32b6184
---> 7e0385d17f96
Successfully built 7e0385d17f96
Successfully tagged docker-compose-test_service-a:latest
Building service-b
Step 1/2 : FROM alpine:3.8
---> 196d12cf6ab1
Step 2/2 : CMD [ "tail", "-f", "/dev/null" ]
---> Running in 59a8b91c6b2d
Removing intermediate container 59a8b91c6b2d
---> 4e2c16ea5a80
Successfully built 4e2c16ea5a80
Successfully tagged docker-compose-test_service-b:latest
And start services:
> docker-compose up --no-build -d
Creating network "docker-compose-test_default" with the default driver
Creating volume "docker-compose-test_shared" with default driver
Creating docker-compose-test_service-a_1 ... done
Creating docker-compose-test_service-b_1 ... done
Let's check mapped directories:
> docker-compose exec service-a ls -lah /test/dir
total 8
drwxr-xr-x 2 root root 4.0K Dec 12 06:14 .
drwxr-xr-x 3 root root 4.0K Dec 12 06:14 ..
> docker-compose exec service-b ls -lah /another-test/dir
total 8
drwxr-xr-x 2 root root 4.0K Dec 12 06:14 .
drwxr-xr-x 3 root root 4.0K Dec 12 06:14 ..
Now let's put a few text files in ./service-a/test-dir on the host machine and build again:
> docker-compose build
Building service-a
Step 1/3 : FROM alpine:3.8
---> 196d12cf6ab1
Step 2/3 : COPY test-dir /test/dir
---> bd168b0fc8cc
Step 3/3 : CMD [ "tail", "-f", "/dev/null" ]
---> Running in 6e81b32243e1
Removing intermediate container 6e81b32243e1
---> cc28fc6de9ac
Successfully built cc28fc6de9ac
Successfully tagged docker-compose-test_service-a:latest
Building service-b
Step 1/2 : FROM alpine:3.8
---> 196d12cf6ab1
Step 2/2 : CMD [ "tail", "-f", "/dev/null" ]
---> Using cache
---> 4e2c16ea5a80
Successfully built 4e2c16ea5a80
Successfully tagged docker-compose-test_service-b:latest
As you can see, the cache is not used for the COPY step in service-a, meaning the changes are baked into the image. Now let's start the services:
> docker-compose up --no-build -d
Recreating docker-compose-test_service-a_1 ...
Recreating docker-compose-test_service-a_1 ... done
Once again service-b remains untouched; only service-a gets recreated. Let's check the actual services (this is where the problem shows up):
> docker-compose exec service-a ls -lah /test/dir
total 8
drwxr-xr-x 2 root root 4.0K Dec 12 06:17 .
drwxr-xr-x 3 root root 4.0K Dec 12 06:20 ..
> docker-compose exec service-b ls -lah /another-test/dir
total 8
drwxr-xr-x 2 root root 4.0K Dec 12 06:17 .
drwxr-xr-x 3 root root 4.0K Dec 12 06:14 ..
So the files are not reflected... However, if we launch a temporary container based on the service-a image, it shows the proper list:
> docker run --rm docker-compose-test_service-a ls -lah /test/dir
total 8
drwxr-xr-x 2 root root 4.0K Dec 12 06:20 .
drwxr-xr-x 3 root root 4.0K Dec 12 06:20 ..
-rwxr-xr-x 1 root root 0 Dec 12 06:20 rrrrrr.txt
-rwxr-xr-x 1 root root 0 Dec 12 06:16 test.txt
Any ideas for a workaround? So far it seems that only a complete shutdown via docker-compose down with volume destruction (-v) helps. Not the best solution, though, as with a real project it would cause serious downtime.
I hope the configuration is readable, but I can put it into a small git repo if needed.
Thanks in advance.
Docker will only auto-populate a volume when it is first created. In the workflow you describe, when you delete and recreate the "first" container with its new image, you need to delete and recreate the volume too, which probably means deleting and recreating the whole stack.
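For completeness, a sketch of what deleting and recreating the whole stack looks like with Compose; the -v flag removes the named volumes declared in the compose file, so the shared volume is re-created (and re-populated from the new image) on the next up:
docker-compose down -v
docker-compose up --build -d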
There are a couple of ways around this issue:
Rearchitect your application so it doesn't need to share files. (Obviously the most work.) If there's some sort of semi-static content, you might need to "bake it into" the consuming image, or set up the two parts of your system to communicate via HTTP. (This sort of data sharing across containers is not something Docker is great at, and it gets worse when you start looking at multi-host solutions like Swarm or Kubernetes.)
If there are only two parts involved, build a single image that runs both processes. I've seen other SO questions that do this with a PHP-FPM component serving dynamic content plus an nginx server that serves static content and forwards some requests to PHP-FPM; all of the static and dynamic content is "part of the application", and HTTP via nginx is the single entry point into the container. supervisord is the de facto default control process when this is necessary.
Inside the "producing" container, write some startup-time code that copies data from something in the image into a shared-data location
#!/bin/sh
# Run me as the image's ENTRYPOINT.
if [ -d /data ]; then
  cp -r /app/static /data
fi
exec "$@"
This will repopulate the data volume on every startup.
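A sketch of how such an entrypoint could be wired into the "producing" image from the question (the entrypoint.sh name and the /app/static source path are assumptions taken from the script above; the CMD stays whatever the container normally runs):
FROM alpine:3.8
# Bake the content into the image
COPY test-dir /app/static
# Copy-on-startup entrypoint from the snippet above
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["tail", "-f", "/dev/null"]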
I want to copy new static files from a Docker container, via a named volume, to an nginx container that has the old static files.
Prerequisites:
Host machine directory tree:
.
├── data
│ ├── bar.2.css
│ └── foo.2.js
├── docker-compose.yml
├── Dockerfile
Dockerfile:
FROM busybox:latest
COPY data /data
docker-compose.yml:
version: '3'
services:
static:
image: 'myimage'
volumes:
- 'myvolume:/data'
nginx:
image: 'nginx'
volumes:
- 'myvolume:/data'
volumes:
myvolume:
Directory tree of named volume myvolume with old static:
.
├── bar.1.css
└── foo.1.js
Sequence of steps:
Build myimage with Dockerfile: docker build -t myimage .
Check new static files in myimage: docker run myimage ls /data
bar.2.css
foo.2.js
Run: docker-compose up -d --build static
In my mind it should rebuild the static service and overwrite the old static files. But it didn't. Why, and how do I fix it? Also, what is a better approach?
I think you are just copying the new files alongside the old files with docker build -t myimage .
Maybe you can delete the previous data before you insert the new data, by running a one-time command in the container:
docker exec -it static sh -c 'rm -rf /data/*'
and then just copy the new data, or build the new image:
docker cp ./data/. static:/data
You can also implement the build step inside the docker-compose file:
version: '3'
services:
static:
build: .
image: 'myimage'
volumes:
- 'myvolume:/data'
nginx:
image: 'nginx'
volumes:
- 'myvolume:/data'
volumes:
myvolume:
Why -- I believe that you are mounting the pre-existing volume myvolume atop your /data folder of the static container. This is because your myvolume already exists. If myvolume did not exist, the content of /data would be copied to the volume.
See: Docker-Volume-Docs -- "If you start a container which creates a new volume, as above, and the container has files or directories in the directory to be mounted (such as /app/ above), the directory’s contents will be copied into the volume."
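Given that behaviour, one workaround is to remove the volume so that it is re-created, and therefore re-populated from the rebuilt image, on the next up. A sketch; the volume name is an assumption, since Compose prefixes it with the project name (shown here as <project>_myvolume):
docker-compose down                  # stop the containers so the volume is not in use
docker volume rm <project>_myvolume  # or simply: docker-compose down -v
docker-compose up -d --build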
Sample Solution
Give this a shot. With the structure and content below do a:
docker-compose up --build
This is additive, so if you update/add content to the newdata folder and re-run your compose, then the new content will be present in the shared volume.
You can mount and inspect the shared volume, like this:
docker run -it --rm --mount type=volume,src={docker-volume-name},target=/shared busybox sh
Environment
Folder structure:
.
├── dockerfile
├── docker-compose.yml
├── newdata/
│   ├── apple.txt
│   └── banana.txt
dockerfile
FROM busybox:latest
# From host machine to image
COPY newdata/* /newdata/
# At runtime, copy from the image to where a shared volume could be mounted.
ENTRYPOINT [ "cp", "-r", "/newdata/", "/shared" ]
docker-compose.yml
version: '3.2'
services:
data-provider:
image: data-provider
build: .
volumes:
- type: volume
source: so
target: /shared
destination:
image: busybox:latest
volumes:
- type: volume
source: so
target: /shared-data
depends_on:
- data-provider
command: ls -la /shared-data/newdata
volumes:
so:
Sample Output:
$ docker-compose up --build
Creating volume "sodockervol_so" with default driver
Building data-provider
Step 1/3 : FROM busybox:latest
---> c75bebcdd211
Step 2/3 : COPY newdata/* /newdata/
---> bc85fc19ed7b
Removing intermediate container 2a39f4be8dd2
Step 3/3 : ENTRYPOINT cp -r /newdata/ /shared
---> Running in e755c3179b4f
---> 6e79a32bf668
Removing intermediate container e755c3179b4f
Successfully built 6e79a32bf668
Successfully tagged data-provider:latest
Creating sodockervol_data-provider_1 ...
Creating sodockervol_data-provider_1 ... done
Creating sodockervol_destination_1 ...
Creating sodockervol_destination_1 ... done
Attaching to sodockervol_data-provider_1, sodockervol_destination_1
destination_1 | total 16
destination_1 | drwxr-xr-x 2 root root 4096 Oct 9 17:50 .
destination_1 | drwxr-xr-x 3 root root 4096 Oct 9 17:50 ..
destination_1 | -rwxr-xr-x 1 root root 25 Oct 9 17:50 apple.txt
destination_1 | -rwxr-xr-x 1 root root 28 Oct 9 17:50 banana.txt
sodockervol_data-provider_1 exited with code 0
sodockervol_destination_1 exited with code 0