I'm trying to dockerize a simple create-react-app project. (It is the initial project right after running npx create-react-app test; no files were changed.)
The problem seems to be that newer versions of create-react-app moved the annoying .eslintcache file from the project root to node_modules/.cache, which causes problems when the container tries to run the application via docker-compose.
Dockerfile
FROM node:alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install
RUN chown -R node.node /usr/app/node_modules
COPY . ./
CMD ["npm", "start"]
docker-compose
version: '3'
services:
test:
stdin_open: true
build:
context: .
dockerfile: Dockerfile
environment:
- CHOKIDAR_USEPOLLING=true
volumes:
- /usr/app/node_modules
- .:/usr/app
ports:
- '3000:3000'
The container is logging this error message:
test_1 | Failed to compile.
test_1 |
test_1 | EACCES: permission denied, mkdir '/usr/app/node_modules/.cache
As you can see, I tried to set the owner of the node_modules folder to the node user (the default user in node:alpine), but it is not working; exploring the container, you can see that node_modules is still owned by root:
drwxrwxr-x 5 node node 4096 Apr 14 07:04 .
drwxr-xr-x 1 root root 4096 Apr 14 07:08 ..
-rw-rw-r-- 1 node node 310 Apr 14 06:56 .gitignore
-rw-rw-r-- 1 node node 192 Apr 14 07:30 Dockerfile
-rw-rw-r-- 1 node node 3369 Apr 14 06:56 README.md
drwxrwxr-x 1061 root root 36864 Apr 14 07:12 node_modules
-rw-rw-r-- 1 node node 692936 Apr 14 06:56 package-lock.json
-rw-rw-r-- 1 node node 808 Apr 14 06:56 package.json
drwxrwxr-x 2 node node 4096 Apr 14 06:56 public
drwxrwxr-x 2 node node 4096 Apr 14 06:56 src
I also tried to create the folder with RUN mkdir -p /usr/app and use USER node, but that ended up in an issue where npm wasn't able to create the node_modules folder.
Is there any workaround where either .eslintcache is disabled or node_modules is owned by the node user?
Update
Apparently, this occurs because I'm using Ubuntu, and Docker mounts volumes as root on Linux systems.
Adding this line just after RUN npm install in the Dockerfile solves the issue:
RUN mkdir -p node_modules/.cache && chmod -R 777 node_modules/.cache
Final Dockerfile
FROM node:alpine
WORKDIR /usr/app
COPY package.json .
RUN npm install
RUN mkdir node_modules/.cache && chmod -R 777 node_modules/.cache
COPY . .
CMD ["npm", "run", "start"]
Then you don't need to copy the node_modules folder from your local directory into the container; you can safely keep masking it with the anonymous volume (- /usr/app/node_modules) in docker-compose.
If you have a local node_modules folder, delete it before running docker or docker-compose, because there is another situation that can cause
EACCES: permission denied, mkdir '/usr/app/node_modules/.cache
This can happen if you previously ran this React app with a local node_modules folder and then deleted that folder to use the container's node_modules.
After deleting the whole folder, the app still recreates it and keeps a .cache directory inside it, which is generally not visible.
So when you bind mount everything using ".:/usr/app" in docker-compose.yml or
-v "$(pwd):/usr/app", it tries to mount that cache directory from the local node_modules folder as well, and that causes all the fuss.
This is caused by a long-standing issue with npm changing its process's UID, which is now fixed in npm 9.
Specifically, it is described in this npm RFC comment:
Docker volume mounts use the UID:GID of the host machine. The changes
npm has made to infer the execution user from the UID:GID effectively
break docker setups where the host users UID:GID does not match the
node user on the container. Setting UID:GID in a .env file for each
developer in our application is cumbersome and overall ridiculous.
Stop trying to infer best security practices when it comes to running
scripts, your job is to be a package manager
This fix is called out prominently in the v9.0.0 changelog announcement.
Related changes in NPM tracker:
https://github.com/npm/cli/blob/v9.1.2/CHANGELOG.md#900-pre6-2022-10-19
https://github.com/npm/rfcs/issues/546
https://github.com/npm/statusboard/issues/540
https://github.com/npm/cli/pull/5703
https://github.com/npm/cli/pull/5704
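If your base image still ships an npm older than 9, one possible workaround (my own sketch, not part of the linked issues) is to upgrade npm inside the image before installing dependencies, based on the question's Dockerfile:
FROM node:alpine
WORKDIR /usr/app
# Upgrade npm first so installs no longer run with the inferred UID:GID behaviour
RUN npm install -g npm@9
COPY package*.json ./
RUN npm install
COPY . ./
CMD ["npm", "start"]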
I've just stumbled across the same issue. If you're wondering how I dealt with it: I simply added two lines before the last line of the Dockerfile. It worked like a charm.
FROM node:16.13.0-alpine
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
RUN npm config set unsafe-perm true
RUN npm install --silent
COPY . .
RUN chown -R node /app/node_modules
USER node
CMD ["npm", "start"]
In the docker-compose file, remove the volume entry where you are trying to exclude node_modules.
It will work.
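For example, the volumes section from the question would become something like this (a trimmed sketch; other settings unchanged):
services:
  test:
    build: .
    volumes:
      # the anonymous /usr/app/node_modules entry has been removed
      - .:/usr/app
    ports:
      - '3000:3000'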
Well, it's clear that you don't have access to node_modules in your Docker container. I tried a lot of things, but the form given below works for me.
RUN mkdir -p /usr/src/app
RUN chmod +rwx /usr/src/app
WORKDIR /usr/src/app
First give read, write and execute permission to your work directory via the commands above; then it will work fine.
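A minimal sketch of where those lines could sit in a Dockerfile for this project (the npm steps are an assumption based on the question, not part of this answer):
FROM node:alpine
# Create the work directory and open up its permissions before installing anything
RUN mkdir -p /usr/src/app
RUN chmod +rwx /usr/src/app
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]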
The solution I came up with was installing node_modules locally and then modifying the Dockerfile to copy everything into the container and bind mount it as a volume, instead of installing node_modules during the build and masking it with an anonymous volume. I know it's not the best approach to this problem, but it's an easy solution for something I've been trying to solve for days.
Dockerfile
FROM node:alpine
WORKDIR /usr/app
COPY . ./
CMD ["npm", "start"]
docker-compose
version: '3'
services:
test:
stdin_open: true
build:
context: .
dockerfile: Dockerfile.dev
environment:
- CHOKIDAR_USEPOLLING=true
volumes:
- .:/usr/app
ports:
- '3000:3000'
Well, it clearly shows that you don't have read, write and execute permission on your node_modules folder, and you will probably see a lock icon on the node_modules folder.
FOR UBUNTU any version
sudo chmod a+rwx <path>/your-project-folder/node_modules
EXAMPLE
sudo chmod a+rwx Desktop/react-app/node_modules
explanation
chmod - to change permission
a - all
rwx - read, write and execute
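To confirm it worked, list the folder again and check that the mode now shows rwx for everyone (path from the example above):
ls -ld Desktop/react-app/node_modules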
To fix that issue in Kubernetes, you have to mount an emptyDir volume at the cache path (here /app/.next/cache). Check the example below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
spec:
  selector:
    matchLabels:
      type: my-label
  template:
    metadata:
      labels:
        type: my-label
    spec:
      containers:
        - image: ghcr.io/myimage
          imagePullPolicy: Always
          name: my-site
          ports:
            - containerPort: 3000
              name: http
              protocol: TCP
          volumeMounts:
            - mountPath: /app/.next/cache
              name: cache
      volumes:
        - emptyDir: {}
          name: cache
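Once the pod is running, you can check that the mount is in place (a hedged example, assuming the Deployment name above):
kubectl exec deploy/my-deploy -- ls -ld /app/.next/cache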
Move 'USER node' higher
I was only using Docker (no docker-compose) and the solution was to move USER node to before COPY package*...
I had the same issue.
The thing is, I used vue create inside /tmp, and then moved the content of the created folder inside my volume.
Which apparently npm is not too fond of.
So I removed node_modules and recreated it using npm install; now it works like a charm.
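In other words, something like this from the project root (a sketch of the same recovery steps):
# throw away the node_modules that was moved from /tmp and let npm rebuild it in place
rm -rf node_modules
npm install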
I have the following Dockerfile content:
FROM python:3.7-slim
# Install packages needed to run your application (not build deps):
RUN mkdir /code/
WORKDIR /code/
COPY Pipfile Pipfile.lock /code/
# Install build deps, then run `pip install`, then remove unneeded build deps all in a single step.
COPY ./src scripts /code/
EXPOSE 8000
ENTRYPOINT ["/code/entrypoint.sh"]
Removed other lines not related to the question.
My directory structure in development is
\
|- scripts
|- entrypoint.sh
|- src
|- # Application files
|- Dockerfile
|- docker-compose.yml
The entrypoint.sh file is in the scripts/ directory and is copied to /code/ in the image. This is verified by executing
docker run -it my_image ls -la
Which lists files as
drwxr-xr-x 1 root root 4096 Dec 28 19:57 .
drwxr-xr-x 1 root root 4096 Dec 28 19:59 ..
-rw-r--r-- 1 root root 545 Dec 28 19:24 Pipfile
-rw-r--r-- 1 root root 21517 Dec 28 19:24 Pipfile.lock
-rwxr-xr-x 1 root root 499 Dec 28 18:47 entrypoint.sh
-rwxr-xr-x 1 root root 540 Jan 3 2019 manage.py
But when I run the image using a docker-compose.yml file (docker-compose up) with the following content:
version: '3.7'
services:
web:
build:
context: .
dockerfile: Dockerfile
image: my_image:latest
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code/
It gives this error:
ERROR: for myproj_py_web_1 Cannot start service web: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"/code/entrypoint.sh\": stat /code/entrypoint.sh: no such file or directory": unknown
The contents of entrypoint.sh
#!/bin/sh
set -e
if [ "x$DJANGO_MANAGE_COLLECTSTATIC" = 'xon' ]; then
echo "Collecting static files"
python manage.py collectstatic --noinput
echo "Done: Collecting static files"
fi
if [ "x$DJANGO_MANAGE_MIGRATE" = 'xon' ]; then
echo "Migrating database"
python manage.py migrate --noinput
echo "Done: Migrating database"
fi
exec "$#"
Your volumes: declaration hides the contents of /code inside the image, including the /code/entrypoint.sh script. When you launch a container Docker constructs a single command from both the entrypoint and command parts combined, so your two containers have combined commands like
/code/entrypoint.sh ls -la
/code/entrypoint.sh python manage.py runserver 0.0.0.0:8000
(Entrypoint scripts typically end with a line like exec "$@" that launches the command part to facilitate this pattern.)
In particular you have this problem because the filesystem layouts don't match; you rearrange things in your Dockerfile. If you were to run
docker run --rm -it --entrypoint /bin/bash -v $PWD:/code image
you'd see a /code/scripts/entrypoint.sh; if you ran it without the -v option you'd see /code/entrypoint.sh.
The straightforward solution to this is to delete the volumes: directive. Then Docker Compose will run the code that's actually built into the image, and you won't have this conflict.
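A sketch of the compose file with that directive removed (everything else as in the question):
version: '3.7'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    image: my_image:latest
    command: python manage.py runserver 0.0.0.0:8000
    # no volumes: block, so the container uses the /code layout baked into the image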
I want to copy new static files from a Docker container, via a named volume, to an nginx container that has the old static files.
Prerequisites:
Host machine directory tree:
.
├── data
│ ├── bar.2.css
│ └── foo.2.js
├── docker-compose.yml
├── Dockerfile
Dockerfile:
FROM busybox:latest
COPY data /data
docker-compose.yml:
version: '3'
services:
static:
image: 'myimage'
volumes:
- 'myvolume:/data'
nginx:
image: 'nginx'
volumes:
- 'myvolume:/data'
volumes:
myvolume:
Directory tree of the named volume myvolume with the old static files:
.
├── bar.1.css
└── foo.1.js
Sequence of steps:
Build myimage with Dockerfile: docker build -t myimage .
Check new static files in myimage: docker run myimage ls /data
bar.2.css
foo.2.js
Run: docker-compose up -d --build static
In my mind, it should rebuild the static service and overwrite the old static files. But it didn't. Why, and how do I fix it? Also, what is a better approach?
I think that you are just copying the new files alongside the old files with docker build -t myimage .
Maybe you can delete the previous data before you insert the new data, by running a one-off command in the container:
docker exec -it static rm -rf /data/*
and then just copy the new data in (or rebuild the image):
docker cp ./data/. static:/data
You can also implement the build step inside the docker-compose file:
version: '3'
services:
  static:
    build: .
    image: 'myimage'
    volumes:
      - 'myvolume:/data'
  nginx:
    image: 'nginx'
    volumes:
      - 'myvolume:/data'
volumes:
  myvolume:
Why -- I believe that you are mounting the pre-existing volume myvolume atop your /data folder of the static container. This is because your myvolume already exists. If myvolume did not exist, the content of /data would be copied to the volume.
See: Docker-Volume-Docs -- "If you start a container which creates a new volume, as above, and the container has files or directories in the directory to be mounted (such as /app/ above), the directory’s contents will be copied into the volume."
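So one way to pick up the new files is to remove the stale volume and let Docker repopulate it from the rebuilt image; a sketch (Compose usually prefixes the volume name with the project name, so check docker volume ls first):
docker-compose down
docker volume rm <project>_myvolume
docker-compose up -d --build static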
Sample Solution
Give this a shot. With the structure and content below do a:
docker-compose up --build
This is additive, so if you update/add content to the newdata folder and re-run your compose, then the new content will be present in the shared volume.
You can mount and inspect the shared volume, like this:
docker run -it --rm --mount type=volume,src={docker-volume-name},target=/shared busybox sh
Environment
Folder structure:
.
├── dockerfile
├── docker-compose.yml
├── newdata/
│   ├── apple.txt
│   └── banana.txt
dockerfile
FROM busybox:latest
# From host machine to image
COPY newdata/* /newdata/
# At runtime, copy from the image to where a shared volume could be mounted.
ENTRYPOINT [ "cp", "-r", "/newdata/", "/shared" ]
docker-compose.yml
version: '3.2'
services:
data-provider:
image: data-provider
build: .
volumes:
- type: volume
source: so
target: /shared
destination:
image: busybox:latest
volumes:
- type: volume
source: so
target: /shared-data
depends_on:
- data-provider
command: ls -la /shared-data/newdata
volumes:
so:
Sample Output:
$ docker-compose up --build
Creating volume "sodockervol_so" with default driver
Building data-provider
Step 1/3 : FROM busybox:latest
---> c75bebcdd211
Step 2/3 : COPY newdata/* /newdata/
---> bc85fc19ed7b
Removing intermediate container 2a39f4be8dd2
Step 3/3 : ENTRYPOINT cp -r /newdata/ /shared
---> Running in e755c3179b4f
---> 6e79a32bf668
Removing intermediate container e755c3179b4f
Successfully built 6e79a32bf668
Successfully tagged data-provider:latest
Creating sodockervol_data-provider_1 ...
Creating sodockervol_data-provider_1 ... done
Creating sodockervol_destination_1 ...
Creating sodockervol_destination_1 ... done
Attaching to sodockervol_data-provider_1, sodockervol_destination_1
destination_1 | total 16
destination_1 | drwxr-xr-x 2 root root 4096 Oct 9 17:50 .
destination_1 | drwxr-xr-x 3 root root 4096 Oct 9 17:50 ..
destination_1 | -rwxr-xr-x 1 root root 25 Oct 9 17:50 apple.txt
destination_1 | -rwxr-xr-x 1 root root 28 Oct 9 17:50 banana.txt
sodockervol_data-provider_1 exited with code 0
sodockervol_destination_1 exited with code 0
I want to build Drupal from a Dockerfile and install a module into the container directory /var/www/html/sites/all/modules using that Dockerfile.
When I build the image with docker-compose build, the module extracts correctly,
but as soon as I run docker-compose up, the files are gone, even though the volume is mapped.
Please look at both the docker-compose file and the Dockerfile.
Dockerfile
FROM drupal:7
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
ENV DRUPAL_VERSION 7.36
ENV DRUPAL_MD5 98e1f62c11a5dc5f9481935eefc814c5
ADD . /var/www/html/sites/all/modules
WORKDIR /var/www/html
RUN chown -R www-data:www-data sites
WORKDIR /var/www/html/sites/all/modules
# Install drupal-chat
ADD "http://ftp.drupal.org/files/projects/{drupal-module}.gz {drupal-module}.tar.gz"
RUN tar xzvf {drupal-module} \
&& rm {drupal-module} \
docker-compose file
# PHP Web Server
version: '2'
drupal_box:
build: .
ports:
- "3500:80"
external_links:
- docker_mysqldb_1
volumes:
- ~/Desktop/mydockerbuild/drupal/modules:/var/www/html/sites/all/modules
- ~/Desktop/mydockerbuild:/var/log/apache2
networks:
- default
- docker_default
environment:
MYSQL_USER: root
MYSQL_PASSWORD: root
MYSQL_DATABASE: drupal
restart: always
#entrypoint: ./Dockerfile
networks:
docker_default:
external: true
executing:
sudo docker-compose build
sudo docker-compose up
On executing both of the commands above, the directory in the container does not contain the {drupal-module} folder, although I can see it being extracted successfully in the console (due to the xzvf flags in the tar command in the Dockerfile).
However, this setup does map the host directory to the container directory, and files added or deleted can be seen on both sides.
But as soon as I remove the first volume mapping (i.e. the ~/Desktop... line), the module is extracted into the directory, but the mapping is not done.
My main aim is to extract the {drupal-module} folder into /var/www/html/sites/all/modules and map that same folder to the host directory.
Please help!
So yes.
The answer is that you cannot get the extracted contents of the container folder into a host folder specified as a bind mount in the docker-compose volumes mapping; i.e. (./modules:/var/www/html/sites/all/modules) will not work for the Drupal container.
I did it with named volumes, where you can achieve this.
Eg - modules:/var/www/html/sites/all/modules
This will create a volume under /var/lib/docker/volumes... (you can get the exact path with "docker inspect ") and that volume will contain the same data as extracted in your container.
Note: the difference lies in the ./ prefix!
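For illustration, a trimmed sketch of the compose file with a named volume instead of the bind mount (other settings from the question omitted):
version: '2'
services:
  drupal_box:
    build: .
    ports:
      - "3500:80"
    volumes:
      - modules:/var/www/html/sites/all/modules
volumes:
  modules: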
I'm trying to use a mounted volume directory in the build process, but it's either not mounted at that point, or mounted incorrectly.
docker-compose.yml
version: '2'
services:
assoi:
restart: on-failure
build:
context: ./assoi
expose:
- "4129"
links:
- assoi-redis
- assoi-postgres
- assoi-mongo
- assoi-rabbit
volumes:
- ./ugmk:/www
command: pm2 start /www/ugmk.json
...
Dockerfile
...
WORKDIR /www
RUN ls -la
RUN npm i
RUN node install.js
...
Output of sudo docker-compose build:
...
Step 12 : WORKDIR /www
---> Using cache
---> 73504ed64194
Step 13 : RUN ls -al
---> Running in 37bb9f70d4ac
total 8
drwxr-xr-x 2 root root 4096 Aug 22 13:31 .
drwxr-xr-x 65 root root 4096 Aug 22 14:05 ..
---> be1ac6edce56
...
During the build, you do not mount (or, more specifically, you cannot mount) any volume.
What you do instead is COPY, so in your case:
COPY ./ugmk /www
WORKDIR /www
RUN ls -la
RUN npm i
RUN node install.js
Volumes are for containers, not for images: volumes should store persistent, user-generated data. By definition, this can only happen at runtime, and thus for containers.
Nevertheless, the COPY above is the standard practice for what you want to achieve: building an image with the application pre-deployed and its assets compiled.