Docker COPY not working and permissions issues

I have read countless posts and can't figure out why COPY isn't working or why permissions/ownership aren't changing.
My Dockerfile is as follows (simplified):
FROM somealpineimage as prod
USER root
RUN mkdir -p /home/node/app/test
WORKDIR /home/node/app
COPY package*.json ./
COPY ./src/folder/tests /home/node/app/test # <- nothing happens here, tried multiple variations
RUN npm install
COPY . . # I assumed this command would copy everything from project directory to image but doesn't
FROM prod
RUN npm prune --production
COPY . .
USER node
CMD ["npm", "run", "start"]
I'm trying to copy this particular folder to resolve a permissions issue.
My initial thought was to copy the contents of the tests folder into test with --chown=node:node added to set the correct ownership (sketched below), but I can't seem to get the ownership to change.
I've also tried chmod -R 777 as the root user; that didn't work either.
Like so:
USER root
RUN chmod -R 777 /home/node/app/tests
# or with a higher folder
RUN chmod -R 777 /home/node # -R should recursively change permissions, but it did nothing
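For clarity, this is the --chown variant I had in mind (a minimal sketch; it assumes the tests folder sits next to the Dockerfile and that the node user exists in the base image):
FROM somealpineimage AS prod
WORKDIR /home/node/app
# set ownership at copy time rather than fixing it up afterwards with chown/chmod
COPY --chown=node:node tests ./test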
The Dockerfile is in the root directory of the project:
project
├── Dockerfile
├── src
│   ├── begin-with-the-crazy-ideas.textile
│   └── isn't-docker-supposed-to-make-it-easier.markdown
├── tests
│   ├── test1.test
│   └── test2.test
The reason I need these files under different ownership is that my Node app can't edit/open/unzip/zip them: they're owned by root, and the Node.js app runs as the node user. Running as root isn't an option, and it would be bad practice anyway.
I'd appreciate any help. For now I'll keep researching.
Note: the project is written and tested with Docker on an M1 macOS machine, but the official container runs on Kubernetes/Linux; I only see the permission issues when deployed.

Related

PNPM monorepo deployment with docker compose

I currently have a monorepo which has the following structure:
.
├── services/
│   └── api/
│       ├── src/
│       │   └── ...
│       └── Dockerfile
├── apps/
│   └── frontend
├── packages/
│   └── ...
├── .npmrc
├── docker-compose.yaml
└── pnpm-lock.yaml
The Dockerfile contains the following commands:
FROM node:18-alpine AS base
RUN npm i -g pnpm
FROM base AS dependencies
WORKDIR /usr/src/app
COPY package.json ../../pnpm-lock.yaml ./
COPY ../../.npmrc ./
RUN pnpm install --frozen-lockfile --prod
...
For the API I load the Dockerfile via the docker compose file, with a context of ./services/api. When I try this it cannot find the files in the parent directory; this is because Docker restricts a build to its build context. I could change the context and adjust the commands accordingly (see the sketch below), but this would pull the entire codebase into the build for the API image. That slows down building and deploying, so my question is: is there any other way to structure the monorepo to support pnpm with Docker? I can't find any resources on the topic.
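For reference, this is the kind of context change I mean, as a sketch of the relevant docker-compose.yaml section (paths match the tree above):
services:
  api:
    build:
      context: . # repo root, so pnpm-lock.yaml and .npmrc are inside the build context
      dockerfile: services/api/Dockerfile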
Best Regards
I now implemented the Dockerfile as below. It is specific to my monorepo so it might not be useful, but I wanted to post it anyway in case anyone ever runs into the same issue:
FROM node:18.10.0-alpine AS base
ARG SERVICE_PATH
ARG PACKAGE_NAME
ARG PNPM_VERSION
# Install package manager
RUN --mount=type=cache,id=pnpm-store,target=/root/.pnpm-store \
    npm i --global --no-update-notifier --no-fund pnpm@${PNPM_VERSION}
# Use the node user from the image (instead of the root user)
USER node
# Get all dependencies and install
FROM base AS dependencies
WORKDIR /usr/app
COPY --chown=node:node pnpm-lock.yaml pnpm-workspace.yaml package.json .npmrc ./
COPY --chown=node:node ${SERVICE_PATH}/package.json ./${SERVICE_PATH}/package.json
RUN --mount=type=cache,id=pnpm-store,target=/root/.pnpm-store \
    pnpm install --frozen-lockfile --filter ${PACKAGE_NAME} \
    | grep -v "cross-device link not permitted\|Falling back to copying packages from store"
# Build application using all dependencies, copy necessary files
FROM dependencies AS build
WORKDIR /usr/app/${SERVICE_PATH}
COPY --chown=node:node ${SERVICE_PATH} ./
ENV NODE_ENV production
RUN pnpm build
RUN rm -rf node_modules src \
&& pnpm -r exec -- rm -rf node_modules
# Use base image for correct context, get built files from build stage
# Install only production dependencies
FROM base AS deploy
WORKDIR /usr/app
ENV NODE_ENV production
COPY --chown=node:node --from=build /usr/app .
RUN --mount=type=cache,id=pnpm-store,target=/root/.pnpm-store \
    pnpm install --frozen-lockfile --filter ${PACKAGE_NAME} --prod \
    | grep -v "cross-device link not permitted\|Falling back to copying packages from store"
ENV EXEC_PATH=${SERVICE_PATH}/dist/main.js
CMD node ${EXEC_PATH}
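For completeness: the RUN --mount cache requires BuildKit, so the image is built roughly like this (the build-arg values are illustrative; adjust them to your service):
DOCKER_BUILDKIT=1 docker build \
  --build-arg SERVICE_PATH=services/api \
  --build-arg PACKAGE_NAME=api \
  --build-arg PNPM_VERSION=7.13.4 \
  -t api .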
I also added a .dockerignore as suggested by bogdanoff; it looks like the following:
# Files
/**/*/.env
/**/*/.env.*
/**/*/README.md
/**/*/.git
/**/*/.gitignore
/**/*/npm-debug.log
/**/*/.dockerignore
/**/*/Dockerfile
# Directories
/**/*/node_modules
/**/*/test
/**/*/dist

Deploying Next.js in Docker container with custom dependencies

I am new to Next.js and Docker. What I am trying to achieve is essentially to deploy a Next.js project with Docker. I am in the process of creating the Dockerfile and docker-compose.yml files. However, the project uses some custom packages outside of the source folder (at the root level). My build step is failing because it cannot resolve these packages.
ModuleNotFoundError: Module not found: Error: Can't resolve '@custompackage/themes/src/Simply/containers' in '/opt/app/src/pages'
This is what the imports look like
import Theme, { theme } from '@custompackage/themes/src/Simply';
import {
  Navbar,
  Copyright,
  Welcome,
  Services,
  About,
  Pricing,
  Clients,
  Contact,
} from '@custompackage/themes/src/Simply/containers';
import preview from '@custompackage/themes/src/Simply/assets/preview.jpg';
This is my Dockerfile
# Install dependencies only when needed
FROM node:16-alpine AS deps
WORKDIR /opt/app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# Rebuild the source code only when needed.
# This matters because you might try to rebuild based on some X_TAG
# (a Git commit hash in my case) even though the code hasn't changed.
FROM node:16-alpine AS builder
ENV NODE_ENV=production
WORKDIR /opt/app
COPY . .
COPY --from=deps /opt/app/node_modules ./node_modules
RUN yarn build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
ARG X_TAG
WORKDIR /opt/app
ENV NODE_ENV=production
COPY --from=builder /opt/app/next.config.js ./
COPY --from=builder /opt/app/public ./public
COPY --from=builder /opt/app/.next ./.next
COPY --from=builder /opt/app/node_modules ./node_modules
CMD ["node_modules/.bin/next", "start"]
Folder structure (screenshot not reproduced here)
I have tried to use the COPY command in the Dockerfile before the build step to copy the packages' content into the /opt/app folder so that they can be resolved. However, I wasn't exactly sure whether I was doing it right, and I kept getting nowhere.
You could surely find a way to make your app work without changing the directory structure, but I don't think you should.
Modules imported with the import keyword should fall into one of these two categories:
- Application code, located in your source folder
- Dependencies, usually in a node_modules folder
If you want to have multiple packages in one repository, you should look at the monorepo pattern.
Since you are using yarn, you can take a look at Yarn Workspaces, which is the solution provided by Yarn for building a monorepo.
You might want to slightly change your directory structure to something like this:
├── app
│   ├── src
│   └── package.json (next.js)
├── packages
│   ├── custom-package-1
│   │   ├── src
│   │   └── package.json
│   └── custom-package-2
│       ├── src
│       └── package.json
└── package.json (main)
In the app's package.json you add custom-package-1 to the dependencies, and your monorepo tool will do some magic to make it resolvable (mainly by creating symlinks in node_modules).
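A minimal sketch of that wiring with Yarn Workspaces (package names and version ranges are illustrative):
package.json (main):
{
  "private": true,
  "workspaces": ["app", "packages/*"]
}
app/package.json:
{
  "dependencies": {
    "custom-package-1": "*"
  }
}
After yarn install, custom-package-1 is symlinked into node_modules, so the imports resolve by package name.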

Specific `build` directory not added to Docker image

I have a repository with a directory structure like this
.
├── Dockerfile
├── README.md
├── frontend/
├── backend/
├── docs/
├── examples/
└── build/
The Dockerfile is a simple ADD with no entrypoint:
FROM python:3.6-slim
WORKDIR /app
# Copy and install requirements.txt first for caching
ADD . /app
RUN pip install --no-cache-dir --trusted-host pypi.python.org -r backend/requirements.txt
EXPOSE 8200
WORKDIR /app/backend
My issue is that after docker build -t myimage ., the build folder is missing from the image.
I just ran an ls when verifying the image contents with docker run -it myimage /bin/bash, and the build folder is missing!
.
├── frontend/
├── backend/
├── docs/
├── examples/
Does anyone know why? How can I modify my Dockerfile to add this folder to my image? All resources online say that ADD . <dest> should duplicate my current directory tree inside the image, but the build folder is missing...
I missed that there's a .dockerignore file in the repo that contains this folder. Whooooops, thank you @David Maze.
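For anyone else who hits this, the fix is a one-line edit to .dockerignore; a sketch of the pattern involved (illustrative):
# .dockerignore
build
# either delete the line above, or re-include the folder with an exception; later rules win:
!build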

Building docker where does `RUN mkdir` create a directory - cannot find it when running container

I'm new to Docker and I'm building the Dockerfile below with docker build -t control . and it builds successfully with no errors; specifically, it says that it makes the control directory. Then I try to run the image with docker run control, but it gives an error saying that it can't find control/control_file/job.py.
Where does Docker create the control directory? Is it in a container that I cannot see? I can't see it being created anywhere and I'm unsure how to debug this.
FROM python:2
RUN pip install requests \
    && pip install pymongo
RUN mkdir control
COPY control_file/ /control
ENV PYTHONPATH="/control:$PYTHONPATH"
RUN export PYTHONPATH=/control:$PYTHONPATH
CMD ["python","/control/job.py"]
This is the directory structure:
├── control_file
│   ├── insert_to_container.py
│   ├── ip_path
│   ├── job.py
│   └── read_info.py
└── Dockerfile
The job.py is now at /control/job.py within your Docker image.
With the COPY command you copy all the contents of control_file/ into the new directory /control.
Change the last line to:
CMD ["python", "control/job.py"]
Your Dockerfile has mistakes; please find the corrected one below. The control_file directory should be available in the build directory (where you are building the Docker image), and job.py should have execute permission:
FROM python:2
RUN pip install requests\
&& pip install pymongo
RUN mkdir -p /control/control_file
COPY control_file/ /control/control_file
CMD [ "python" , "/control/control_file/job.py" ]

Docker ADD is failing with relative directory

My Dockerfile has the following entries:
ENV SCPATH /etc/supervisor/conf.d
RUN apt-get -y update
# The daemons
RUN apt-get -y install supervisor
RUN mkdir -p /var/log/supervisor
# Supervisor Configuration
ADD ./supervisord/conf.d/* $SCPATH/
The directory structure looks like this
├── .dockerignore
├── .gitignore
├── Dockerfile
├── Makefile
├── README.md
├── Vagrantfile
├── index.js
├── package.json
└── supervisord
    └── conf.d
        ├── node.conf
        └── supervisord.conf
As per my understanding this should work fine, as
ADD ./supervisord/conf.d/* $SCPATH/
points to a relative path within the Dockerfile build context.
Still it fails with
./supervisord/conf.d : no such file or directory exists.
I am new to Docker, so it might be a very basic thing I am missing. I'd really appreciate some help.
What are your .dockerignore file contents? Are you sure you did not accidentally exclude something below your supervisord directory that the docker daemon needs to build your image?
And: in which folder are you executing the docker build command? Make sure you execute it within the folder that holds the Dockerfile so that the relative paths match.
Update: I tried to reproduce your problem. What I did from within a temp folder:
mkdir -p a/b/c
echo "test" > a/b/c/test.txt
cat <<'EOF' > Dockerfile
FROM debian
ENV MYPATH /newdir
RUN mkdir $MYPATH
ADD ./a/b/c/* $MYPATH/
CMD cat $MYPATH/test.txt
EOF
docker build -t test .
docker run --rm -it test
That prints test as expected. The important part works: the ADD ./a/b/c/* $MYPATH/. The file is found, as its content test is displayed at runtime.
When I now change the path ./a/b/c/* to something else, I get the no such file or directory exists error. When I leave the path as is and invoke docker build from a different folder than the temp folder where I placed the files, the error is shown, too.
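If you need to invoke docker build from elsewhere, keep the context pointing at the folder that holds the files and pass -f for the Dockerfile location (paths here are illustrative):
docker build -t test -f /path/to/temp/Dockerfile /path/to/temp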
