I currently have a monorepo which has the following structure:
.
├── services/
│   └── api/
│       ├── src/
│       │   └── ...
│       └── Dockerfile
├── apps/
│   └── frontend
├── packages/
│   └── ...
├── .npmrc
├── docker-compose.yaml
└── pnpm-lock.yaml
The Dockerfile contains the following commands:
FROM node:18-alpine AS base
RUN npm i -g pnpm
FROM base AS dependencies
WORKDIR /usr/src/app
COPY package.json ../../pnpm-lock.yaml ./
COPY ../../.npmrc ./
RUN pnpm install --frozen-lockfile --prod
...
For the API, I load the Dockerfile via the docker-compose file with a context of ./services/api. When I try this, it cannot find the files in the parent directory; this is due to a security feature of Docker (the build context). I could change the context and adjust the commands accordingly, but that would pull the entire codebase into the build context for the API image, which slows down building and deploying. My question is: is there any other way to structure a monorepo to support pnpm with Docker? I can't find any resources on the topic.
Best Regards
I now implemented the Dockerfile as below. It is specific to my monorepo, so it might not be useful to everyone, but I wanted to post it anyway in case anyone runs into the same issue:
FROM node:18.10.0-alpine AS base
ARG SERVICE_PATH
ARG PACKAGE_NAME
ARG PNPM_VERSION
# Install package manager
RUN --mount=type=cache,id=pnpm-store,target=/root/.pnpm-store \
npm i --global --no-update-notifier --no-fund pnpm@${PNPM_VERSION}
# Use the node user from the image (instead of the root user)
USER node
# Get all dependencies and install
FROM base AS dependencies
WORKDIR /usr/app
COPY --chown=node:node pnpm-lock.yaml pnpm-workspace.yaml package.json .npmrc ./
COPY --chown=node:node ${SERVICE_PATH}/package.json ./${SERVICE_PATH}/package.json
RUN --mount=type=cache,id=pnpm-store,target=/root/.pnpm-store \
pnpm install --frozen-lockfile --filter ${PACKAGE_NAME} \
| grep -v "cross-device link not permitted\|Falling back to copying packages from store"
# Build application using all dependencies, copy necessary files
FROM dependencies AS build
WORKDIR /usr/app/${SERVICE_PATH}
COPY --chown=node:node ${SERVICE_PATH} ./
ENV NODE_ENV production
RUN pnpm build
RUN rm -rf node_modules src \
&& pnpm -r exec -- rm -rf node_modules
# Use base image for correct context, get built files from build stage
# Install only production dependencies
FROM base AS deploy
WORKDIR /usr/app
ENV NODE_ENV production
COPY --chown=node:node --from=build /usr/app .
RUN --mount=type=cache,id=pnpm-store,target=/root/.pnpm-store \
pnpm install --frozen-lockfile --filter ${PACKAGE_NAME} --prod \
| grep -v "cross-device link not permitted\|Falling back to copying packages from store"
ENV EXEC_PATH=${SERVICE_PATH}/dist/main.js
CMD node ${EXEC_PATH}
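For completeness, the Dockerfile above can be built from the repository root by pointing the compose build context at the root and passing the build args. This is a sketch; the service name, paths, and the pnpm version value are illustrative, not taken from the original setup:

```yaml
# docker-compose.yaml (sketch; service name and values are examples)
services:
  api:
    build:
      context: .                              # repo root, so pnpm-lock.yaml and .npmrc are inside the context
      dockerfile: services/api/Dockerfile
      args:
        SERVICE_PATH: services/api
        PACKAGE_NAME: api
        PNPM_VERSION: "7.13.4"
```

With the context at the root, the COPY instructions no longer need to reach into parent directories, and the .dockerignore keeps the context small.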
I also added a .dockerignore, as suggested by bogdanoff; it looks like the following:
# Files
/**/*/.env
/**/*/.env.*
/**/*/README.md
/**/*/.git
/**/*/.gitignore
/**/*/npm-debug.log
/**/*/.dockerignore
/**/*/Dockerfile
# Directories
/**/*/node_modules
/**/*/test
/**/*/dist
I have read countless posts and can't figure out why COPY isn't working or why permissions/ownership aren't changing.
My Dockerfile is as follows (simplified):
FROM somealpineimage AS prod
USER root
RUN mkdir -p /home/node/app/test
WORKDIR /home/node/app
COPY package*.json ./
# <- this guy here: nothing happens, tried multiple variations
COPY ./src/folder/tests /home/node/app/test
RUN npm install
# I assumed this command would copy everything from the project directory to the image, but it doesn't
COPY . .

FROM prod
RUN npm prune --production
COPY . .
USER node
CMD ["npm", "run", "start"]
This particular folder I'm trying to copy is meant to resolve a permissions issue.
My initial thought was to copy the contents of the tests folder to test with --chown=node:node added, to set the correct ownership. But I can't seem to get the ownership to change.
I've tried chmod -R 777 as the root user; that didn't work either.
Like so:
USER root
RUN chmod -R 777 /home/node/app/tests
# or with a higher folder
RUN chmod -R 777 /home/node # with -R it should recursively change permissions but it did nothing
The Dockerfile is in the root directory of the project:
project
├── Dockerfile
├── src
│   ├── begin-with-the-crazy-ideas.textile
│   └── isn't-docker-supposed-to-make-it-easier.markdown
└── tests
    ├── test1.test
    └── test2.test
The reason I need these files under different ownership is that my Node app can't edit/open/unzip/zip them, since they're owned by root and the Node.js app runs as the user node. Running as root is not an option and would be bad practice anyway.
Any help is appreciated. For now I'll keep researching.
Note: the project is written/tested with Docker on an M1 macOS machine, but the official container runs on Kubernetes/Linux; I only see the permissions issues when deployed.
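For reference, one pattern that sidesteps root-owned files entirely is to apply --chown on every COPY, including the final COPY . . (a plain COPY later in the build re-copies the tree as root and masks any earlier chmod/chown work). This is only a sketch, with the image and path names carried over from the question:

```dockerfile
FROM somealpineimage AS prod
WORKDIR /home/node/app
# Copy with node ownership up front instead of chown-ing afterwards.
COPY --chown=node:node package*.json ./
RUN npm install
# Without --chown here, this COPY would re-create the files as root
# and undo any ownership set in earlier instructions.
COPY --chown=node:node . .
USER node
CMD ["npm", "run", "start"]
```

Note also that in a Dockerfile, `#` only starts a comment at the beginning of a line; a trailing `# ...` on the same line as a COPY is passed to the instruction as extra arguments.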
I am new to Next.js and Docker. What I am trying to achieve is essentially to deploy a Next.js project with Docker. I am in the process of creating the Dockerfile and docker-compose.yml files. However, the project uses some custom packages outside of the source folder (at the root level). My build step is failing because it cannot resolve these packages.
ModuleNotFoundError: Module not found: Error: Can't resolve '@custompackage/themes/src/Simply/containers' in '/opt/app/src/pages'
This is what the imports look like:
import Theme, { theme } from '@custompackage/themes/src/Simply';
import {
  Navbar,
  Copyright,
  Welcome,
  Services,
  About,
  Pricing,
  Clients,
  Contact,
} from '@custompackage/themes/src/Simply/containers';
import preview from '@custompackage/themes/src/Simply/assets/preview.jpg';
This is my Dockerfile
# Install dependencies only when needed
FROM node:16-alpine AS deps
WORKDIR /opt/app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# Rebuild the source code only when needed
# This matters because you might try to rebuild the app based on some
# `X_TAG` (in my case, a Git commit hash) even though the code hasn't changed.
FROM node:16-alpine AS builder
ENV NODE_ENV=production
WORKDIR /opt/app
COPY . .
COPY --from=deps /opt/app/node_modules ./node_modules
RUN yarn build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
ARG X_TAG
WORKDIR /opt/app
ENV NODE_ENV=production
COPY --from=builder /opt/app/next.config.js ./
COPY --from=builder /opt/app/public ./public
COPY --from=builder /opt/app/.next ./.next
COPY --from=builder /opt/app/node_modules ./node_modules
CMD ["node_modules/.bin/next", "start"]
Folder structure
I have tried using the COPY command in the Dockerfile before the build step to copy the packages' contents into the /opt/app folder so that they can be resolved. However, I wasn't sure I was doing it right and kept getting nowhere.
You could surely find a way to make your app work without changing the directory structure, but I don't think you should.
Modules imported with the import keyword should fall into one of these two categories:
Application code, located in your source folder
Dependencies, usually in a node_modules folder
If you want to have multiple packages in one repository, you should look at the monorepo pattern.
Since you are using yarn, you can take a look at Yarn Workspaces, which is the solution provided by Yarn for building a monorepo.
You might want to slightly change your directory structure to something like this:
├── app
│   ├── src
│   └── package.json (next.js)
├── packages
│   ├── custom-package-1
│   │   ├── src
│   │   └── package.json
│   └── custom-package-2
│       ├── src
│       └── package.json
└── package.json (main)
In the app's package.json you will add custom-package-1 to your dependencies, and your monorepo tool will do some magic to include custom-package-1 among your dependencies (mainly by creating symlinks).
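As a sketch of what that wiring can look like with Yarn Workspaces (the package names are the placeholders from the tree above, not real packages), the main package.json declares the workspaces:

```json
{
  "private": true,
  "workspaces": ["app", "packages/*"]
}
```

and app/package.json then lists the local package as an ordinary dependency:

```json
{
  "dependencies": {
    "custom-package-1": "*"
  }
}
```

After yarn install, custom-package-1 is symlinked into node_modules, so imports resolve the same way inside and outside Docker as long as the build context includes the whole workspace.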
I have the following project separated into packages as follows:
libs/one/package.json
libs/one/index.js
libs/two/package.json
libs/two/index.js
package.json
I am looking for a way to copy all libs/*/package.json files in one line. In addition, I want to avoid reinstalling all packages and instead read them from the cache.
This is my Dockerfile:
FROM node:16.14-alpine as base
RUN npm install -g lerna@4.0.0
WORKDIR /usr/app
FROM base as builder
ARG SERVICE_DIR
ARG TSCONFIG_BUILD_FILE
COPY .npmrc .
COPY lerna.json .
COPY ${TSCONFIG_BUILD_FILE} ./tsconfig.build.json
COPY package*.json ./
# copy package dependencies
COPY libs/service-http-handler/package.json ./libs/service-http-handler/
COPY libs/utils/package.json ./libs/utils/
COPY libs/commons/package.json ./libs/commons/
COPY libs/cache/package.json ./libs/cache/
COPY libs/internal-dto/package.json ./libs/internal-dto/
COPY libs/clients/package.json ./libs/clients/
COPY ${SERVICE_DIR}/package.json ./${SERVICE_DIR}/
# install service and package dependencies
RUN lerna bootstrap -- --production --no-optional --ignore-scripts --include-dependencies --scope ${SERVICE_DIR}
# Copy all libs and build by dependency order
COPY libs/ ./libs
COPY ${SERVICE_DIR}/ ./${SERVICE_DIR}
RUN lerna run build --include-dependencies --scope $(echo ${SERVICE_DIR} | cut -d'/' -f 2)
# copy recursive and follow symbolic link
RUN cp -RL ./node_modules/ /tmp/node_modules/
# Runner
FROM base
ARG SERVICE_DIR
# copy runtime dependencies
COPY --from=builder /tmp/node_modules/ ./node_modules/
# Copy runtime service
COPY --from=builder /usr/app/${SERVICE_DIR}/dist/ ./dist/
COPY ./${SERVICE_DIR}/package.json ./
EXPOSE 5001
#TODO - modify start script in production mode
CMD ["npm", "run", "start"]
Any idea how I can do this without needing to rerun the step below on every change to the libs source files?
RUN lerna bootstrap -- --production --no-optional --ignore-scripts --include-dependencies --scope ${SERVICE_DIR}
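One way to avoid listing every libs/*/package.json COPY by hand, while keeping the layer cache keyed only on the manifests, is BuildKit's --parents flag for COPY, which preserves the directory structure of glob matches. Note this is a sketch: --parents lives in the labs channel of the Dockerfile syntax (around docker/dockerfile:1.7-labs), so check availability in your Docker version first:

```dockerfile
# syntax=docker/dockerfile:1.7-labs
FROM node:16.14-alpine AS base
WORKDIR /usr/app
# --parents keeps the libs/<name>/ path of each matched file, so this one
# instruction replaces the six per-package COPY lines above.
COPY --parents libs/*/package.json ./
```

Because only package.json files land in this layer, the lerna bootstrap layer stays cached until a manifest actually changes, not when lib source files change.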
Currently, in my Dockerfile I am using multiple COPY commands to copy directories from my repository:
COPY requirements.txt requirements.txt
COPY validation /opt/validation
COPY templates /opt/templates
COPY goss /opt/goss
COPY newman /opt/newman
COPY conftest.py /opt/validation/conftest.py
How can I achieve the same result as above using a single COPY command? Is there a way?
There is a little hack with the scratch image:
FROM scratch as tmp
COPY foo /opt/some/path/foo
COPY bar /usr/share/tmp/bar
FROM debian:buster
COPY --from=tmp / /
CMD bash -c "ls /opt/some/path /usr/share/tmp"
❯ docker build -t test . && docker run --rm test
/opt/some/path:
foo
/usr/share/tmp:
bar
scratch is a pseudo-image; it is much like an empty directory. The hack is to copy everything into it laid out as it should be in the final image, then merge the root directories. The merge produces a single layer.
❯ docker inspect --format '{{.RootFS}}' test
{layers [
sha256:c2ddc1bc2645ab5d982c60434d8bbc6aecee1bd4e8eee0df7fd08c96df2d58bb
sha256:fd35279adf8471b9a168ec75e3ef830046d0d7944fe11570eef4d09e0edde936
] }
If you just want to copy things into the same folder /opt, maybe simply use the following:
Folder structure:
.
├── conftest.py
├── Dockerfile
├── .dockerignore
├── goss
├── newman
├── requirements.txt
├── templates
└── validation
Dockerfile:
FROM alpine
COPY . /opt
#RUN mv /opt/conftest.py /opt/validation
RUN ls /opt
.dockerignore:
Dockerfile
Execution:
$ docker build -t abc:1 . --no-cache
Sending build context to Docker daemon 6.144kB
Step 1/3 : FROM alpine
---> 28f6e2705743
Step 2/3 : COPY . /opt
---> 8beb53be958c
Step 3/3 : RUN ls /opt
---> Running in cfc9228124fb
conftest.py
goss
newman
requirements.txt
templates
validation
Removing intermediate container cfc9228124fb
---> 4cdb9275d6f4
Successfully built 4cdb9275d6f4
Successfully tagged abc:1
Here, we use COPY . /opt to copy everything in the current folder to /opt/ in the container. We use .dockerignore to exclude the files/folders we don't want to copy.
Additionally, I'm not sure whether the rule COPY conftest.py /opt/validation/conftest.py is intended; if it is, you may have to use RUN mv to move the file into the specified folder.
I have a repository with a directory structure like this
.
├── Dockerfile
├── README.md
├── frontend/
├── backend/
├── docs/
├── examples/
└── build/
The Dockerfile is a simple ADD with no entrypoint:
FROM python:3.6-slim
WORKDIR /app
# Copy and install requirements.txt first for caching
ADD . /app
RUN pip install --no-cache-dir --trusted-host pypi.python.org -r backend/requirements.txt
EXPOSE 8200
WORKDIR /app/backend
My issue is that after docker build -t myimage ., the build folder is missing from the image.
I ran ls while verifying the image contents with docker run -it myimage /bin/bash, and the build folder isn't there:
.
├── frontend/
├── backend/
├── docs/
└── examples/
Does anyone know why? How can I modify my Dockerfile to add this folder to my image? All resources online say that ADD . <dest> should duplicate my current directory tree inside the image, but the build folder is missing...
I missed that there's a .dockerignore file in the repo that lists this folder. Whoops, thank you @David Maze.
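A quick way to see what actually ends up in the build context (and therefore what .dockerignore filters out) is a throwaway Dockerfile that just lists the copied tree. This is a generic debugging sketch, not part of the original project:

```dockerfile
# Dockerfile.debug-context — build with: docker build -f Dockerfile.debug-context .
FROM busybox
COPY . /ctx
# Lists everything the daemon received after .dockerignore filtering
RUN find /ctx -maxdepth 2
```

Any directory missing from the output never reached the daemon, which points straight at a .dockerignore rule rather than at the ADD/COPY instruction itself.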