Docker COPY recursive --chmod --chown

I am trying to recursively copy some files and directories into a Docker image. The source directory contains files and a sub-directory with some files in it.
src/
├─ subdir/
│ ├─ sub_file_1
│ ├─ sub_file_2
├─ file_1
├─ file_2
...
├─ file_n
I am running the following command in my Dockerfile:
COPY --chown=user:user --chmod=600 src/ /dst/
The permissions are correctly applied for all top level files (file_1 to file_n) and the sub directory itself, but not for the files in subdir (e.g. sub_file_1).
Entering the container and running ls, the output is:
user@container:/$ ls -la /dst/subdir
ls: cannot access '/dst/subdir/sub_file_1': Permission denied
ls: cannot access '/dst/subdir/sub_file_2': Permission denied
total 0
d????????? ? ? ? ? ? .
d????????? ? ? ? ? ? ..
-????????? ? ? ? ? ? sub_file_1
-????????? ? ? ? ? ? sub_file_2
Is there a way to recursively apply --chmod and --chown options of the COPY command?

Short answer:
You have to give the directory (and the scripts) execute permission by replacing your --chmod=600 with --chmod=700. You can find more details below, or just search for "Linux file permissions".
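You can reproduce the effect outside Docker. On a directory, the read bit only lets you list entry names, while the execute bit is what lets you actually access the entries, which is exactly the ls output you are seeing. A minimal shell sketch with made-up names:
$ mkdir d && touch d/f
$ chmod 600 d
$ ls d        # read bit: listing names still works
f
$ cat d/f     # no execute bit: entries cannot be accessed
cat: d/f: Permission denied
$ chmod 700 d
$ cat d/f     # accessible again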
Long answer: I recreated your example. As you can see below, I have some scripts that run in various stages before my server actually starts. Like you, I follow good production Docker practices, and I use the same method to change permissions on the script files.
# App files
COPY --chown=node:node --from=builder /usr/src/app/build ./build
# Migration files
COPY --chown=node:node ./migrations ./migrations
# Scripts
COPY --chown=node:node --chmod=600 ./scripts ./scripts
# Client
COPY --chown=node:node ./client ./build/client
EXPOSE 80
And, no surprise, I get a similar error.
sh: ./scripts/wait-for.sh: Permission denied
npm ERR! code ELIFECYCLE
npm ERR! errno 126
npm ERR! webpage-server@0.0.1 preprod: `./scripts/wait-for.sh page-db:5432 -- npm run migrate -- --config build/src/config/database.json -e production `
npm ERR! Exit status 126
The problem here is the mode you are giving to your chmod. Your goal is to make the scripts executable, and for security reasons executable only for the default node Docker user. The digits we give to chmod are octal shorthand: each digit is three bits, one each for read (4), write (2) and execute (1), and there are three digits, one for the owning user, one for the group, and one for everyone else (the leading d or - you see in ls output marks the file type; it is not part of the mode). Two things follow from this. First, COPY --chmod applies the same mode to the directory and to all of its child files/folders. Second, on a directory the execute bit is what allows entering it and accessing its entries, which is exactly why 600 left your sub-directory's contents unreadable. You want read, write and execute for the current user, i.e. rwx = 111 in binary = 7, so the final chmod will look more like this:
# App files
COPY --chown=node:node --from=builder /usr/src/app/build ./build
# Migration files
COPY --chown=node:node ./migrations ./migrations
# Scripts
COPY --chown=node:node --chmod=700 ./scripts ./scripts
# Client
COPY --chown=node:node ./client ./build/client
EXPOSE 80
And as you can see, the problem no longer occurs:
> webpage-server@0.0.1 preprod /usr/src/app
> ./scripts/wait-for.sh page-db:5432 -- npm run migrate -- --config build/src/config/database.json -e production
> webpage-server@0.0.1 migrate /usr/src/app
> db-migrate up "--config" "build/src/config/database.json" "-e" "production"
[INFO] No migrations to run
[INFO] Done
> webpage-server@0.0.1 prod /usr/src/app
> npm run start
> webpage-server@0.0.1 start /usr/src/app
> node build/src/server.js
18:28:17 info: Initializing thanos webpage Server version: 0.0.1. Environment: development
18:28:17 info: /usr/src/app/build/src
18:28:17 info: Thanos web page server is listening on port -> 80
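One more note: --chmod applies a single mode to everything it copies. If you genuinely need different modes for files and directories (say 600 for data files but 700 for the directories that hold them, as in the original question), you can follow an un-chmodded COPY with a RUN that uses find. A sketch, reusing the paths from the question:
COPY --chown=user:user src/ /dst/
RUN find /dst -type d -exec chmod 700 {} + && \
    find /dst -type f -exec chmod 600 {} +
This costs an extra image layer, but it is the usual way to get per-type modes that a single --chmod value cannot express.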


How to mount NextJS cache in Docker

Does anyone know a way to mount a Next cache in Docker?
I thought this would be relatively simple. I found buildkit had a cache mount feature and tried to add it to my Dockerfile.
COPY --chown=node:node . /code
RUN --mount=type=cache,target=/code/.next/cache npm run build
However I found that I couldn't write to the cache as node.
Type error: Could not write file '/code/.next/cache/.tsbuildinfo': EACCES: permission denied, open '/code/.next/cache/.tsbuildinfo'.
Apparently you need root permissions for using the buildkit cache mount. This is an issue because I cannot build Next as root.
My workaround was to make a cache somewhere else and then copy the files to and from .next/cache. For some reason the cp command does not work in Docker (as node you get a permission error; as root you get no error, but it still doesn't work). I eventually came up with this:
# syntax=docker/dockerfile:1.3
FROM node:16.15-alpine3.15 AS nextcache
#create cache and mount it
RUN --mount=type=cache,id=nxt,target=/tmp/cache \
mkdir -p /tmp/cache && chown node:node /tmp/cache
FROM node:16.15-alpine3.15 as builder
USER node
#many lines later
# Build next
COPY --chown=node:node . /code
#copy mounted cache into actual cache
COPY --chown=node:node --from=nextcache /tmp/cache /code/.next/cache
RUN npm run build
FROM builder as nextcachemount
USER root
#update mounted cache
RUN mkdir -p /tmp/cache
COPY --from=builder /code/.next/cache /tmp/cache
RUN --mount=type=cache,id=nxt,target=/tmp/cache \
cp -R /code/.next/cache /tmp
I managed to store something inside the mounted cache, but I have not noticed any performance boost. (I am trying to implement this mounted cache for Next in order to save time on every build. Right now the build step takes ~160 seconds, and I'm hoping to bring that down a bit.)
If you are using the node user in an official node image, which happens to have uid=1000 and the same gid, I think you should specify that when mounting the cache so that you have permission to write to it:
RUN --mount=type=cache,target=/code/.next/cache,uid=1000,gid=1000 npm run build
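For context, here is a minimal sketch of how that flag slots into a build. The base image and paths are reused from the question; npm ci is an assumption about how dependencies are installed:
# syntax=docker/dockerfile:1.3
FROM node:16.15-alpine3.15 AS builder
WORKDIR /code
COPY --chown=node:node . /code
USER node
RUN npm ci
# uid/gid 1000 match the node user in the official images,
# so the cache mount is created writable without root
RUN --mount=type=cache,id=nxt,target=/code/.next/cache,uid=1000,gid=1000 \
    npm run build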

Docker EACCES permission denied mkdir

My friend gave me a project with a Dockerfile that seems to work just fine for him, but I get a permission error.
FROM node:alpine
RUN mkdir -p /usr/src/node-app && chown -R node:node /usr/src/node-app
WORKDIR /usr/src/node-app
COPY package.json yarn.lock ./
COPY ./api/package.json ./api/
COPY ./iso/package.json ./iso/
USER node
RUN yarn install --pure-lockfile
COPY --chown=node:node . .
EXPOSE 3000
error An unexpected error occurred: "EACCES: permission denied, mkdir '/usr/src/node-app/node_modules/<project_name>/api/node_modules'".
Could it be a docker version error?
COPY normally copies things into the image owned by root, and it will create directories inside the image if they don't exist. In particular, when you COPY ./api/package.json ./api/, it creates the api subdirectory owned by root, and when you later try to run yarn install, it can't create the node_modules subdirectory because you've switched users.
I'd recommend copying files into the container and running the build process as root. Don't chown anything; leave all of these files owned by root. Switch to an alternate USER only at the very end of the Dockerfile, where you declare the CMD. This means that the non-root user running the container won't be able to modify the code or libraries in the container, intentionally or otherwise, which is a generally good security practice.
FROM node:alpine
# Don't RUN mkdir; WORKDIR creates the directory if it doesn't exist
WORKDIR /usr/src/node-app
# All of these files and directories are owned by root
COPY package.json yarn.lock ./
COPY ./api/package.json ./api/
COPY ./iso/package.json ./iso/
# Run this installation command still as root
RUN yarn install --pure-lockfile
# Copy in the rest of the application, still as root
COPY . .
# RUN yarn build
# Declare how to run the container -- _now_ switch to a non-root user
EXPOSE 3000
USER node
CMD yarn start
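That said, if you do want yarn install to run as node the way your original Dockerfile does, the fix is to make sure every directory yarn writes into is owned by node before you switch users. A sketch of that variant:
FROM node:alpine
# create the workdir owned by node up front
RUN mkdir -p /usr/src/node-app && chown node:node /usr/src/node-app
WORKDIR /usr/src/node-app
# --chown here also makes the created api/ and iso/ directories node-owned,
# which is what the original Dockerfile was missing
COPY --chown=node:node package.json yarn.lock ./
COPY --chown=node:node ./api/package.json ./api/
COPY --chown=node:node ./iso/package.json ./iso/
USER node
RUN yarn install --pure-lockfile
COPY --chown=node:node . .
EXPOSE 3000
CMD yarn start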

Could not find a required file

I'm trying to run a Docker container with a create-react-app app. The app works fine, and here's what my Dockerfile looks like.
# base image
FROM node:12.2.0-alpine
# set working directory
WORKDIR ./
# add `./node_modules/.bin` to $PATH
ENV PATH ./node_modules/.bin:$PATH
# install and cache dependencies
COPY package.json ./package.json
COPY ./build/* ./public/
RUN npm install --silent
RUN npm install react-scripts@3.0.1 -g
# start
CMD ["npm", "start"]
When I run the container, I'm getting this error:
> my-app@0.1.0 start /
> react-scripts start
Could not find a required file.
Name: index.js
Searched in: /src
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! my-app@0.1.0 start: `react-scripts start`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the my-app@0.1.0 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2019-07-14T08_29_30_761Z-debug.log
Does anybody have any idea?
npm start is for webpack, which serves as the dev server. You are still directly using the src files, not the minified build (dist), which will only be used in production.
#Dockerfile.dev:
# base image
FROM node:12.2.0-alpine
# set working directory
WORKDIR ./
# add `./node_modules/.bin` to $PATH
ENV PATH ./node_modules/.bin:$PATH
COPY package.json ./package.json
#use the minified build file for production, not now - npm start is for development.
#COPY ./build/* ./public/
#install dependencies:
RUN npm install --silent
RUN npm install react-scripts@3.0.1 -g
#copy your project files (also bad for development; use a volume (https://docs.docker.com/storage/volumes/) instead)
COPY . .
# start
CMD ["npm", "start"]
(This builds on @EfratLevitan's answer, but is a little more production-oriented. Their answer will be better if you want to use Docker as a core part of your development flow.)
If you have a working Webpack setup already, its output is static files that can be served up by any Web server. Once you've successfully run npm run build, you can use anything to serve the resulting build directory – serve it as static content from something like a Flask application, put it in a cloud service like Amazon S3 that can serve it for you, directly host it yourself. Any of the techniques described on the CRA Deployment page will work just fine in conjunction with a Docker-based backend.
If you'd like to serve this yourself via Docker, you don't need Node to serve the build directory, so a plain Web server like nginx will work fine. The two examples from the nginx image's Docker Hub description work for you here:
# Just use the image and inject the content as data
docker run -v $PWD/build:/usr/share/nginx/html -p 80:80 nginx
# Build an image with the content "baked in"
cat >Dockerfile <<EOF
FROM nginx
COPY ./build /usr/share/nginx/html
EOF
# Run it
docker build -t me/nginx .
docker run -p 80:80 me/nginx
The all-Docker equivalent to this is to use a multi-stage build to run the Webpack build inside Docker, then copy it out to a production Web server image.
FROM node:12.2.0-alpine AS build
WORKDIR /app
COPY package.json yarn.lock ./
RUN npm install --silent
RUN npm install react-scripts@3.0.1 -g
COPY . .
RUN npm run build
FROM nginx
COPY --from=build /app/build /usr/share/nginx/html
In this model you'd develop your front-end code locally. (The Webpack/CRA stack has pretty minimal host dependencies, and since the application runs in the user's browser, it can't depend on Docker-specific networking features.) You'd only build this Dockerfile once you wanted to run an end-to-end test with all of the parts running together before you actually pushed out to production.
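If you go the multi-stage route, trying it end to end is just (the tag name is arbitrary):
docker build -t me/frontend .
docker run -p 80:80 me/frontend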

How do I restrict which directories and files are copied by Docker?

I have a Dockerfile that explicitly defines which directories and files from the context directory are copied to the app directory. But regardless of this, Docker tries to copy all files in the context directory.
The Dockerfile is in the context directory.
My test code and data files are in directories directly below the context directory. Docker attempts to copy everything in the context directory, not just the directories and files specified by my COPY commands. So, short of specifying each and every file in every directory and sub-directory, I get a few hundred of the following ERROR messages:
ERRO[0043] Can't add file /home/david/gitlab/etl/testdata/test_s3_fetched.csv to tar: archive/tar: missed writing 12029507 bytes
...
ERRO[0043] Can't close tar writer: archive/tar: missed writing 12029507 bytes
Sending build context to Docker daemon 1.164GB
Error response from daemon: Error processing tar file(exit status 1): unexpected EOF
My reading of the reference is that it only copies all files and directories if there are no ADD or COPY directives.
I have tried with the following COPY patterns
COPY ./name/ /app/name
COPY name/ /app/name
COPY name /app/name
WORKDIR /app
COPY ./name/ /name
WORKDIR /app
COPY name/ /name
WORKDIR /app
COPY name /name
My Dockerfile:
FROM python:3.7.3-alpine3.9
RUN apk update && apk upgrade && apk add bash
# Copy app
WORKDIR /app
COPY app /app
COPY configfiles /configfiles
COPY logs /logs/
COPY errorfiles /errorfiles
COPY shell /shell
COPY ./*.py .
WORKDIR ../
COPY requirements.txt /tmp/
RUN pip install -U pip && pip install -U sphinx && pip install -r /tmp/requirements.txt
EXPOSE 22 80 8887
I expect it to copy only the files I have specified, without the errors that come from trying to copy files not named in my COPY commands. Because the Docker output scrolls off my terminal window due to all the error messages, I cannot see whether my COPY commands succeeded.
All files at and below the build directory are sent to the Docker daemon as the build context, regardless of which files your COPY commands actually reference.
Consider using a .dockerignore file to exclude files and directories from the build.
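For example, a .dockerignore next to the Dockerfile along these lines would keep the gigabyte of test data out of the context (the entries are guesses based on the paths in your error messages; adjust them to your tree):
# .dockerignore
.git
testdata/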
Try copying the files in the following manner:
# set working directory
WORKDIR /usr/src/app
# add and install requirements
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
# add app
COPY ./errorfiles /usr/src/app
Also, you will have to make sure that your docker-compose.yml file is correctly set up:
version: "3.6"
services:
users:
build:
context: ./app
dockerfile: Dockerfile
volumes:
- "./app:/usr/src/app"
Here, I'm assuming that your docker-compose.yml file is inside the parent directory of your app.
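Then, from the directory holding docker-compose.yml, rebuilding should pick up the change (assuming the standalone docker-compose CLI; with newer Docker it's docker compose):
docker-compose up --build users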
See if this works. :)

Docker Add every file in current directory

I have a simple web application that I would like to place in a Docker container. The Angular application exists in the frontend/ folder, which is within the application/ folder.
When the Dockerfile is in the application/ folder and reads as follows:
FROM node
ADD frontend/ frontend/
RUN (cd frontend/; npm install;)
CMD (cd frontend/; npm start;)
everything runs correctly.
However, when I move the Dockerfile into the frontend/ folder and change it to read
FROM node
ADD . frontend/
RUN (cd frontend/; npm install;)
CMD (cd frontend/; npm start;)
no files are copied and the project does not run.
How can I add every file and folder recursively in the current directory to my docker image?
The Dockerfile that ended up working was
FROM node
ADD . / frontend/
RUN (cd frontend/; npm install;)
CMD (cd frontend/; npm start;)
Shoutout to @Matt for the lead on . / ./, but I think the only reason that didn't work was because for some reason my application will only run when it is inside a directory, not in the 'root'. This might have something to do with @VonC's observation that the node image doesn't have a WORKDIR.
First, try COPY just to test if the issue persists.
Second, verify whether any files were actually copied by changing your CMD to ls frontend.
I do not see a WORKDIR in node/7.5/Dockerfile, so frontend could be in /frontend: check ls /frontend too.
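For what it's worth, the more conventional layout uses WORKDIR, which creates the directory, makes the cd subshells unnecessary, and sidesteps the missing-WORKDIR issue in the node image. A sketch, untested against your app:
FROM node
WORKDIR /frontend
COPY . .
RUN npm install
CMD npm start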
