Docker EACCES permission denied mkdir

My friend gave me a project with a Dockerfile which seems to work just fine for him, but I get a permission error.
FROM node:alpine
RUN mkdir -p /usr/src/node-app && chown -R node:node /usr/src/node-app
WORKDIR /usr/src/node-app
COPY package.json yarn.lock ./
COPY ./api/package.json ./api/
COPY ./iso/package.json ./iso/
USER node
RUN yarn install --pure-lockfile
COPY --chown=node:node . .
EXPOSE 3000
error An unexpected error occurred: "EACCES: permission denied, mkdir '/usr/src/node-app/node_modules/<project_name>/api/node_modules'".
Could it be a docker version error?

COPY normally copies things into the image owned by root, and it will create directories inside the image if they don't exist. In particular, when you COPY ./api/package.json ./api/, it creates the api subdirectory owned by root, and when you later try to run yarn install, it can't create the node_modules subdirectory because you've switched users.
I'd recommend copying files into the container and running the build process as root. Don't chown anything; leave all of these files owned by root. Switch to an alternate USER only at the very end of the Dockerfile, where you declare the CMD. This means that the non-root user running the container won't be able to modify the code or libraries in the container, intentionally or otherwise, which is a generally good security practice.
FROM node:alpine
# Don't RUN mkdir; WORKDIR creates the directory if it doesn't exist
WORKDIR /usr/src/node-app
# All of these files and directories are owned by root
COPY package.json yarn.lock ./
COPY ./api/package.json ./api/
COPY ./iso/package.json ./iso/
# Run this installation command still as root
RUN yarn install --pure-lockfile
# Copy in the rest of the application, still as root
COPY . .
# RUN yarn build
# Declare how to run the container -- _now_ switch to a non-root user
EXPOSE 3000
USER node
CMD yarn start
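With a Dockerfile like this, a build and run might look like the following (the image name node-app is just an illustrative choice):
docker build -t node-app .
docker run -p 3000:3000 node-app
Because USER node comes only at the end, yarn install and the COPY steps run as root, while the yarn start process in the running container is executed by the unprivileged node user.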

Related

docker image is not rebuilt automatically on file change

I am running Docker containers with WSL2. When I make changes to my files in the /client directory, the changes are not reflected and I have to run docker compose stop client, docker compose build client and docker compose start client. If I cat a file after changing something, I can see the change.
Here is my Dockerfile:
FROM node:16.17.0-alpine
RUN mkdir -p /client/node_modules
RUN chown -R node:node /client/node_modules
RUN chown -R node:node /root
WORKDIR /client
# Copy Files
COPY . .
# Install Dependencies
COPY package.json ./
RUN npm install --force
USER root
I also have a /server directory with the following Dockerfile, and there the automatic image rebuild happens on file change just fine:
FROM node:16.17.0-alpine
RUN mkdir -p /server/node_modules
RUN chown -R node:node /server/node_modules
WORKDIR /server
COPY . .
# Install Dependencies
COPY package.json ./
RUN npm install --force --verbose
USER root
Any help is appreciated.
Solved by adding the following to my docker-compose.yml:
environment:
  WATCHPACK_POLLING: "true"
Docker itself does not take care of hot reloading.
You should look into the hot-reload documentation of the tools you are building with.
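The WATCHPACK_POLLING variable goes under the service's environment block in docker-compose.yml. A minimal sketch, assuming the service is named client as in the compose commands above (the build path ./client is an assumption):
services:
  client:
    build: ./client
    environment:
      WATCHPACK_POLLING: "true"
With polling enabled, the webpack/watchpack file watcher checks files for changes instead of relying on filesystem events, which often do not propagate to the container under WSL2.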

How to mount NextJS cache in Docker

Does anyone know a way to mount a Next cache in Docker?
I thought this would be relatively simple. I found buildkit had a cache mount feature and tried to add it to my Dockerfile.
COPY --chown=node:node . /code
RUN --mount=type=cache,target=/code/.next/cache npm run build
However I found that I couldn't write to the cache as node.
Type error: Could not write file '/code/.next/cache/.tsbuildinfo': EACCES: permission denied, open '/code/.next/cache/.tsbuildinfo'.
Apparently you need root permissions for using the buildkit cache mount. This is an issue because I cannot build Next as root.
My workaround was to make a cache somewhere else and then copy the files to and from .next/cache. For some reason the cp command does not work in Docker (as node you get a permission error, and as root you get no error but it still doesn't work). I eventually came up with this:
# syntax=docker/dockerfile:1.3
FROM node:16.15-alpine3.15 AS nextcache
#create cache and mount it
RUN --mount=type=cache,id=nxt,target=/tmp/cache \
mkdir -p /tmp/cache && chown node:node /tmp/cache
FROM node:16.15-alpine3.15 as builder
USER node
#many lines later
# Build next
COPY --chown=node:node . /code
#copy mounted cache into actual cache
COPY --chown=node:node --from=nextcache /tmp/cache /code/.next/cache
RUN npm run build
FROM builder as nextcachemount
USER root
#update mounted cache
RUN mkdir -p /tmp/cache
COPY --from=builder /code/.next/cache /tmp/cache
RUN --mount=type=cache,id=nxt,target=/tmp/cache \
cp -R /code/.next/cache /tmp
I managed to store something inside the mounted cache, but I have not noticed any performance boost. (I am trying to implement this mounted cache for Next in order to save time on every build. Right now the next build step takes ~160 seconds, and I'm hoping to bring that down a bit.)
If you are using the node user from the official Node image, which happens to have uid=1000 and the same gid, I think you should specify that when mounting the cache so that you have permission to write to it:
RUN --mount=type=cache,target=/code/.next/cache,uid=1000,gid=1000 npm run build
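Put together with the stages from the question, the builder stage might then look roughly like this. This is a sketch only, assuming uid/gid 1000 for the node user and the /code paths used above:
# syntax=docker/dockerfile:1.3
FROM node:16.15-alpine3.15 AS builder
USER node
# copy the source in as the node user (this also creates /code owned by node)
COPY --chown=node:node . /code
WORKDIR /code
# mount the Next.js cache with the node user's uid/gid so the build can write to it
RUN --mount=type=cache,id=nxt,target=/code/.next/cache,uid=1000,gid=1000 npm run build
This removes the need for the separate nextcache and nextcachemount stages, because the build writes directly into the mounted cache.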

Dockerizing python project Dockerfile creation

This question has been asked before, yet after reviewing the answers I am still not able to apply the solution.
I am still new to Docker, and after watching tutorials and following articles I was able to create a Dockerfile for an existing GitHub repository.
I started by using the nearest available image as a base then adding what I need.
From what I read, the problem is in the WORKDIR and CMD commands.
This is the error message:
python: can't open file 'save_model.py': [Errno 2] No such file or directory
This is my Dockerfile:
# syntax=docker/dockerfile:1
FROM tensorflow/serving:2.3.0-rc0-devel-gpu
WORKDIR app
COPY requirements-gpu.txt .
# install dependencies
RUN pip install -r requirements-gpu.txt
# copy the content of the local src directory to the working directory
COPY /home/pc/Desktop/yolo4_deep .
# command to run on container start
CMD ["python","./app/save_model.py","./app/object_tracker.py" ]
src/
  save_model.py
  object_tracker.py
  ...
requirements.txt
Dockerfile
I tried using the WORKDIR command to set the absolute path: WORKDIR /home/pc/Desktop/yolo4_Deep_sort_nojupitor. The result was the same error.
I see multiple issues in your Dockerfile.
COPY /home/pc/Desktop/yolo4_deep .
The COPY command copies files from your local machine into the container. The source path must be a path relative to your build context. The build context is the path you pass in when you run docker build .; in this case the . (the current directory) is the build context. Also, the source path can only reference files located under the build context, i.e. paths containing .. (parent directory) or absolute host paths such as /home/... are not allowed.
WORKDIR app
WORKDIR sets the working directory inside the container, not on your local machine. So WORKDIR /app means that all following commands (RUN, CMD, ENTRYPOINT) will be executed from the /app directory.
CMD ["python","./app/save_model.py","./app/object_tracker.py" ]
As mentioned above, WORKDIR /app causes all operations to be executed from the /app directory. So ./app/save_model.py actually resolves to /app/app/save_model.py, which does not exist.
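Based on those points, a corrected Dockerfile might look roughly like the sketch below. This assumes the build is run from the project root shown in the question (so src/ and requirements-gpu.txt sit next to the Dockerfile) and that save_model.py is the script to run:
# syntax=docker/dockerfile:1
FROM tensorflow/serving:2.3.0-rc0-devel-gpu
WORKDIR /app
COPY requirements-gpu.txt .
# install dependencies
RUN pip install -r requirements-gpu.txt
# copy from a path relative to the build context, not an absolute host path
COPY src/ .
# the script now lives directly in /app, so no ./app/ prefix is needed
CMD ["python", "save_model.py"]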
Thanks for the help, everyone.
As I mentioned earlier, I'm a beginner in the Docker world. I solved the issue by editing the COPY command.
# syntax=docker/dockerfile:1
FROM tensorflow/serving:2.3.0-rc0-devel-gpu
WORKDIR /home/pc/Desktop/yolo4_deep
COPY requirements-gpu.txt .
# install dependencies
RUN pip install -r requirements-gpu.txt
# copy the content of the local src directory to the working directory
COPY src/ .
# command to run on container start
ENTRYPOINT ["./start.sh"]
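Assuming the Dockerfile sits in the project root as shown above, the image can then be built and run with something like the following (the tag yolo4-deep is illustrative, and --gpus all assumes the NVIDIA container toolkit is installed):
docker build -t yolo4-deep .
docker run --gpus all yolo4-deep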

How do I restrict which directories and files are copied by Docker?

I have a Dockerfile that explicitly defines which directories and files from the context directory are copied to the app directory. But regardless of this, Docker tries to copy all files in the context directory.
The Dockerfile is in the context directory.
My test code and data files are in directories directly below the context directory. Docker attempts to copy everything in the context directory, not just the directories and files specified by my COPY commands, so I get a few hundred of the following ERROR messages, one for seemingly each and every file in every directory and subdirectory:
ERRO[0043] Can't add file /home/david/gitlab/etl/testdata/test_s3_fetched.csv to tar: archive/tar: missed writing 12029507 bytes
...
ERRO[0043] Can't close tar writer: archive/tar: missed writing 12029507 bytes
Sending build context to Docker daemon 1.164GB
Error response from daemon: Error processing tar file(exit status 1): unexpected EOF
My reading of the reference documentation is that it should only copy all files and directories if there are no ADD or COPY directives.
I have tried the following COPY patterns:
COPY ./name/ /app/name
COPY name/ /app/name
COPY name /app/name
WORKDIR /app
COPY ./name/ /name
WORKDIR /app
COPY name/ /name
WORKDIR /app
COPY name /name
My Dockerfile:
FROM python:3.7.3-alpine3.9
RUN apk update && apk upgrade && apk add bash
# Copy app
WORKDIR /app
COPY app /app
COPY configfiles /configfiles
COPY logs /logs/
COPY errorfiles /errorfiles
COPY shell /shell
COPY ./*.py .
WORKDIR ../
COPY requirements.txt /tmp/
RUN pip install -U pip && pip install -U sphinx && pip install -r /tmp/requirements.txt
EXPOSE 22 80 8887
I expect it to only copy my files, without the errors associated with trying to copy files I have not specified in COPY commands. Because the Docker output scrolls off my terminal window due to all the error messages, I cannot see whether it succeeded with my COPY commands.
All files at and below the build directory are copied into the build context that is sent to the Docker daemon.
Consider using a .dockerignore file to exclude files and directories from the build.
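For example, a .dockerignore placed next to the Dockerfile at the root of the build context might look like this (the entries are illustrative; the testdata/ path comes from the error message above):
# .dockerignore
.git
testdata/
**/__pycache__
Everything listed there is never sent to the daemon, so the 1.164GB build context should shrink considerably and the tar errors for those files should disappear.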
Try to copy the files in the following manner:
# set working directory
WORKDIR /usr/src/app
# add and install requirements
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
# add app
COPY ./errorfiles /usr/src/app
Also, you will have to make sure that your docker-compose.yml file is set up correctly:
version: "3.6"
services:
  users:
    build:
      context: ./app
      dockerfile: Dockerfile
    volumes:
      - "./app:/usr/src/app"
Here, I'm assuming that your docker-compose.yml file is inside the parent directory of your app.
See if this works. :)
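A directory layout consistent with that assumption would look something like this (names other than docker-compose.yml and Dockerfile are illustrative):
project/
  docker-compose.yml
  app/
    Dockerfile
    requirements.txt
    errorfiles/
    ...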

Is there a way to avoid pushing node_modules on every push using Docker?

I've got a node_modules folder which is 120MB+ and I'm wondering if we can somehow only push the node_modules folder if it has changed?
This is what my docker file looks like at the moment:
FROM node:6.2.0
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
CMD export NODE_ENV=production
EXPOSE 80:7000
# EXPOSE 7000
CMD [ "npm", "start" ]
So what I want to do is only push the node_modules folder if it has changed! I don't mind manually specifying when the node_modules folder has changed, whether that's by passing a flag and using an if statement; I don't know.
Use case:
I only made changes to my application code and didn't add any new packages.
I added some packages and require the node_modules folder to be pushed.
Edit:
So I tried the following Dockerfile, which brought in some logic from
http://bitjudo.com/blog/2014/03/13/building-efficient-dockerfiles-node-dot-js/
When I run docker build -t <name> . with the below Dockerfile and then gcloud docker -- push <url>, it still tries to push my whole directory to the registry?!
FROM node:6.2.0
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
# Create app directory
RUN mkdir -p /usr/src/app && cp -a /tmp/node_modules /usr/src/app/
WORKDIR /usr/src/app
# Install app dependencies
# COPY package.json /usr/src/app/
# RUN npm install
# Bundle app source
ADD . /usr/src/app
CMD export NODE_ENV=production
EXPOSE 80:7000
# EXPOSE 7000
CMD [ "npm", "start" ]
Output from running gcloud docker -- push etc...:
f614bb7269f3: Pushed
658140f06d81: Layer already exists
be42b5584cbf: Layer already exists
d70c0d3ee1a2: Layer already exists
5f70bf18a086: Layer already exists
d0b030d94fc0: Layer already exists
42d0ce0ecf27: Layer already exists
6ec10d9b4afb: Layer already exists
a80b5871b282: Layer already exists
d2c5e3a8d3d3: Layer already exists
4dcab49015d4: Layer already exists
f614bb7269f3 is always being pushed and I can't figure out why (I'm new to Docker). It's trying to push the whole directory my app is in!?
Any ideas?
This blog post explains how to cache your dependencies in subsequent builds of your image by creating a layer that can be cached as long as the package.json file hasn't changed: http://bitjudo.com/blog/2014/03/13/building-efficient-dockerfiles-node-dot-js/
This is a link to the gist code snippet - https://gist.github.com/dweinstein/9468644
Worked wonders for our node app in my organization.
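The core idea from that post is to copy package.json and run npm install before copying the rest of the source, so the dependency layer is reused (and not re-pushed) until package.json actually changes. A minimal sketch along those lines, based on the Dockerfile in the question:
FROM node:6.2.0
WORKDIR /usr/src/app
# copy only package.json first so the npm install layer below stays cached
# until package.json actually changes
COPY package.json ./
RUN npm install
# copy the application source; only changes from here down invalidate the cache
COPY . /usr/src/app
# set NODE_ENV at build time instead of in a CMD, so it applies to the container
ENV NODE_ENV=production
EXPOSE 7000
CMD [ "npm", "start" ]
With this layering, a push after a code-only change re-uploads only the small final layers; the layer containing node_modules should show up as "Layer already exists", as in the output above.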
