I have a dist folder containing config.yaml, configuration.d.ts, configuration.js and configuration.map. The issue is that all of these files are copied into the container except config.yaml.
On debugging I found that if I put COPY dist dist before the line FROM abcd.com/baseos/node:buster-14.15.4-1, config.yaml is copied correctly. But if I put COPY dist dist after that FROM line, config.yaml is not copied, while configuration.d.ts, configuration.js and configuration.map are.
The command I use to build the image is below:
docker build -t drs:1.0.0 -f . /srs/sync-data
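As written, -f . points the Dockerfile flag at a directory, which docker build rejects; the usual shape is docker build -f <path-to-Dockerfile> <context>, so the actual invocation was presumably something like this (hypothetical path):
docker build -t drs:1.0.0 -f /srs/sync-data/Dockerfile /srs/sync-data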
Below is my Dockerfile
FROM abcd.com/baseos/node:buster-14.15.4-1 AS buildcontainer
COPY src src
COPY config config
COPY package*.json ./
COPY tsconfig.json .
COPY tsconfig.build.json .
COPY .eslintrc.js .
COPY .prettierrc .
RUN npm ci && \
npm run build && \
rm -rf node_modules && \
npm ci --production
FROM abcd.com/baseos/node:buster-14.15.4-1
ARG SERVICEVERSION=0.0.0-snapshot
ENV SERVICEVERSION=$SERVICEVERSION
COPY --from=buildcontainer dist dist
COPY --from=buildcontainer node_modules node_modules
COPY package.json .
CMD npm run start:prod
Since I'm using NestJS, I hadn't copied nest-cli.json over in the Dockerfile. My nest-cli.json has the following config:
"compilerOptions": {
"assets": [{"include": "../config/*.yaml", "outDir": "./dist/config"}]
}
which tells the Nest compiler to copy the YAML files into the dist folder.
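After a successful npm run build, dist should then look roughly like this (a sketch based on the files listed above; config.yaml lands under dist/config because of the outDir setting):
dist/
├── config/
│   └── config.yaml
├── configuration.d.ts
├── configuration.js
└── configuration.map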
Once I added the line below to the Dockerfile (alongside the other COPY lines in the build stage), it worked.
COPY nest-cli.json .
I have the following project separated into packages as follows:
libs/one/package.json
libs/one/index.js
libs/two/package.json
libs/two/index.js
package.json
I am looking for a way to copy all libs/*/package.json files in one line. In addition, I want to avoid reinstalling all packages on every build and instead read them from the cache.
This is my Dockerfile:
FROM node:16.14-alpine as base
RUN npm install -g lerna@4.0.0
WORKDIR /usr/app
FROM base as builder
ARG SERVICE_DIR
ARG TSCONFIG_BUILD_FILE
COPY .npmrc .
COPY lerna.json .
COPY ${TSCONFIG_BUILD_FILE} ./tsconfig.build.json
COPY package*.json ./
# copy package dependencies
COPY libs/service-http-handler/package.json ./libs/service-http-handler/
COPY libs/utils/package.json ./libs/utils/
COPY libs/commons/package.json ./libs/commons/
COPY libs/cache/package.json ./libs/cache/
COPY libs/internal-dto/package.json ./libs/internal-dto/
COPY libs/clients/package.json ./libs/clients/
COPY ${SERVICE_DIR}/package.json ./${SERVICE_DIR}/
# install service and package dependencies
RUN lerna bootstrap -- --production --no-optional --ignore-scripts --include-dependencies --scope ${SERVICE_DIR}
# Copy all libs and build by dependency order
COPY libs/ ./libs
COPY ${SERVICE_DIR}/ ./${SERVICE_DIR}
RUN lerna run build --include-dependencies --scope $(echo ${SERVICE_DIR} | cut -d'/' -f 2)
# copy recursively, dereferencing symbolic links
RUN cp -RL ./node_modules/ /tmp/node_modules/
# Runner
FROM base
ARG SERVICE_DIR
# copy runtime dependencies
COPY --from=builder /tmp/node_modules/ ./node_modules/
# Copy runtime service
COPY --from=builder /usr/app/${SERVICE_DIR}/dist/ ./dist/
COPY ./${SERVICE_DIR}/package.json ./
EXPOSE 5001
#TODO - modify start script in production mode
CMD ["npm", "run", "start"]
Any idea how I can do this without re-running the step below on every change to the libs source files?
RUN lerna bootstrap -- --production --no-optional --ignore-scripts --include-dependencies --scope ${SERVICE_DIR}
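One option worth checking for the one-line copy (my suggestion, not part of the build above) is BuildKit's --parents flag for COPY, which at the time of writing requires the labs syntax channel; it preserves the directory structure of each glob match. A plain COPY libs/*/package.json ./libs/ would flatten every match into the same directory, which is why the Dockerfile above spells them out one by one.
# syntax=docker/dockerfile:1.7-labs
FROM node:16.14-alpine as base
WORKDIR /usr/app
# copies each libs/<name>/package.json while keeping the libs/<name>/ layout
COPY --parents libs/*/package.json ./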
This is different from questions about copying from the host into a container.
I'm trying to copy the build folder to the nginx html folder. I don't know what copy command I should use, as cp didn't work.
My Dockerfile segment (see the RUN copy command, which isn't working):
# If syntax is off, please ignore
FROM node:12.14.1
WORKDIR /app
COPY . /app
RUN rm -rf node_modules &&\
npm ci &&\
npm run build # MAKES BUILD IN /app/build
# set up html files
COPY config/nginx.conf /etc/nginx/conf.d/default.conf
RUN cp /app/build /usr/share/nginx/html # BROKEN - HOW TO COPY STATIC FILES HERE?
Have you tried using a recursive copy?
RUN cp -r /app/build /usr/share/nginx/html
(Assuming the destination folder already exists. Add a RUN mkdir -p /usr/share/nginx/html line before copying the files if it does not.)
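As an aside, node:12.14.1 does not include nginx, so nothing in that image will actually serve /usr/share/nginx/html. A common alternative (a sketch under that assumption, not necessarily the poster's intended setup) is a multi-stage build that hands the build output to an nginx image:
FROM node:12.14.1 AS build
WORKDIR /app
COPY . /app
RUN rm -rf node_modules &&\
    npm ci &&\
    npm run build

FROM nginx:1.17
COPY config/nginx.conf /etc/nginx/conf.d/default.conf
# the destination directory already exists in the nginx image
COPY --from=build /app/build /usr/share/nginx/html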
I'm trying to create a Docker container to act as a test environment for my application. I am using the following Dockerfile:
FROM node:14.4.0-alpine
WORKDIR /test
COPY package*.json ./
RUN npm install .
CMD [ "npm", "test" ]
As you can see, it's pretty simple. I only want to install all dependencies but NOT copy the code, because I will run that container with the following command:
docker run -v `pwd`:/test -t <image-name>
But the problem is that the node_modules directory is gone when I mount the volume with -v. Any workaround to fix this?
When you bind mount the test directory with $PWD, the container's test directory is overridden by the mount, so node_modules will no longer be visible inside it.
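You can see this directly: listing /test inside the container shows only the host files from $PWD, not the image's contents (image name as in the question):
docker run -v `pwd`:/test -t <image-name> ls /test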
There are two ways to fix this.
You can run npm install in a separate directory such as /node, mount your code at /test, and point Node at the installed modules by setting NODE_PATH=/node/node_modules.
The Dockerfile will then look like:
FROM node:14.4.0-alpine
WORKDIR /node
COPY package*.json ./
RUN npm install .
# let Node resolve modules from /node even though /test is shadowed by the mount
ENV NODE_PATH=/node/node_modules
WORKDIR /test
CMD [ "npm", "test" ]
Or you can write an Entrypoint.sh script that copies the node_modules folder into the test directory at container runtime.
FROM node:14.4.0-alpine
WORKDIR /node
COPY package*.json ./
RUN npm install .
WORKDIR /test
COPY Entrypoint.sh ./
# make sure the script is executable inside the image
RUN chmod +x Entrypoint.sh
ENTRYPOINT ["./Entrypoint.sh"]
and Entrypoint.sh is something like:
#!/bin/sh
# the alpine base image doesn't ship bash, so use sh
cp -r /node/node_modules /test/.
npm test
Approach 1
A workaround is to use:
CMD npm install && npm run dev
Approach 2
Have Docker install node_modules during docker-compose build and run the app on docker-compose up.
docker-compose.yml
version: '3.5'
services:
  api:
    container_name: /$CONTAINER_FOLDER
    build: ./$LOCAL_FOLDER
    hostname: api
    volumes:
      # map local to remote folder, exclude node_modules
      - ./$LOCAL_FOLDER:/$CONTAINER_FOLDER
      - /$CONTAINER_FOLDER/node_modules
    expose:
      - 88
Dockerfile
FROM node:14.4.0-alpine
WORKDIR /test
COPY ./package.json .
RUN npm install
# run command
CMD npm run dev
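With this approach node_modules is installed into the image at build time, and the anonymous node_modules volume in the compose file keeps the bind mount from hiding it:
docker-compose build
docker-compose up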
I have a Dockerfile that explicitly defines which directories and files from the context directory are copied to the app directory. But regardless of this, Docker tries to copy all files in the context directory.
The Dockerfile is in the context directory.
My test code and data files are in directories directly below the context directory. The build attempts to copy everything in the context directory, not just the directories and files specified by my COPY commands, so I get a few hundred of the following ERROR messages, one for seemingly every file in every directory and subdirectory:
ERRO[0043] Can't add file /home/david/gitlab/etl/testdata/test_s3_fetched.csv to tar: archive/tar: missed writing 12029507 bytes
...
ERRO[0043] Can't close tar writer: archive/tar: missed writing 12029507 bytes
Sending build context to Docker daemon 1.164GB
Error response from daemon: Error processing tar file(exit status 1): unexpected EOF
My reading of the reference is that it only copies all files and directories if there are no ADD or COPY directives.
I have tried the following COPY patterns:
COPY ./name/ /app/name
COPY name/ /app/name
COPY name /app/name

WORKDIR /app
COPY ./name/ /name

WORKDIR /app
COPY name/ /name

WORKDIR /app
COPY name /name
My Dockerfile:
FROM python:3.7.3-alpine3.9
RUN apk update && apk upgrade && apk add bash
# Copy app
WORKDIR /app
COPY app /app
COPY configfiles /configfiles
COPY logs /logs/
COPY errorfiles /errorfiles
COPY shell /shell
COPY ./*.py .
WORKDIR ../
COPY requirements.txt /tmp/
RUN pip install -U pip && pip install -U sphinx && pip install -r /tmp/requirements.txt
EXPOSE 22 80 8887
I expect it to copy only the files I specified, without the errors from trying to copy files not named in any COPY command. Because the Docker output scrolls off my terminal window due to all the error messages, I cannot see whether my COPY commands succeeded.
All files at and below the build directory are copied into the build context that is sent to the Docker daemon.
Consider using a .dockerignore file to exclude files and directories from the build.
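For example, a minimal .dockerignore for the layout above (illustrative; the testdata entry matches the CSV path in the error messages, and nothing listed here is referenced by a COPY command):
# keep the large test data and VCS metadata out of the build context
testdata
.git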
Try to copy the files in the following manner:
# set working directory
WORKDIR /usr/src/app
# add and install requirements
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
# add app
COPY ./errorfiles /usr/src/app
Also, you will have to make sure that your docker-compose.yml file is correctly configured:
version: "3.6"
services:
users:
build:
context: ./app
dockerfile: Dockerfile
volumes:
- "./app:/usr/src/app"
Here, I'm assuming that your docker-compose.yml file is inside the parent directory of your app.
See if this works. :)
I am using yarn workspaces and I have this workspaces entry in my package.json:
"workspaces": ["packages/*"]
I am trying to create a docker image to deploy and I have the following Dockerfile:
# production dockerfile
FROM node:9.2
# add code
COPY ./packages/website/dist /cutting
WORKDIR /cutting
COPY package.json /cutting/
RUN yarn install --pure-lockfile && yarn cache clean --production
CMD npm run serve
But I get the following error:
error An unexpected error occurred:
"https://registry.yarnpkg.com/#cutting%2futil: Not found"
#cutting/util is the name of one of my workspace packages.
So the problem is that there is no source code in the Docker image, and yarn is therefore trying to install the package from the yarnpkg registry.
What is the best way to handle workspaces when deploying to a Docker image?
This setup wouldn't work outside of Docker either, so it will fail inside Docker too.
The problem is that you built the code and copied only the bundled output. Yarn workspaces looks for a package.json that doesn't exist in the dist folder. Workspaces just create links in a common node_modules folder pointing at the other workspaces you use, so their source code has to be present. (BTW, why don't you build the code inside the Docker image? That way both the source code and dist would be available.)
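For illustration, with a hypothetical package living at packages/util, a workspace install creates a link like this in the root node_modules, which is why the linked package's source has to exist in the image:
node_modules/@cutting/util -> ../../packages/util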
Here is my Dockerfile. I use yarn workspaces and lerna, but without lerna it should be similar. You want to build your shared libraries, then verify the build works locally by running the code from your dist folder.
###############################################################################
# Step 1 : Builder image
FROM node:11 AS builder
WORKDIR /usr/src/app
ENV NODE_ENV production
RUN npm i -g yarn
RUN npm i -g lerna
COPY ./lerna.json .
COPY ./package* ./
COPY ./yarn* ./
COPY ./.env .
COPY ./packages/shared/ ./packages/shared
COPY ./packages/api/ ./packages/api
# Install dependencies and build whatever you have to build
RUN yarn install --production
RUN lerna bootstrap
RUN cd /usr/src/app/packages/shared && yarn build
RUN cd /usr/src/app/packages/api && yarn build
###############################################################################
# Step 2 : Run image
FROM node:11
LABEL maintainer="Richard T"
LABEL version="1.0"
LABEL description="This is our dist docker image"
RUN npm i -g yarn
RUN npm i -g lerna
ENV NODE_ENV production
ENV NPM_CONFIG_LOGLEVEL error
ARG PORT=3001
ENV PORT $PORT
WORKDIR /usr/src/app
COPY ./package* ./
COPY ./lerna.json ./
COPY ./.env ./
COPY ./yarn* ./
COPY --from=builder /usr/src/app/packages/shared ./packages/shared
COPY ./packages/api/package* ./packages/api/
COPY ./packages/api/.env* ./packages/api/
COPY --from=builder /usr/src/app/packages/api ./packages/api
RUN yarn install
CMD cd ./packages/api && yarn start-production
EXPOSE $PORT
###############################################################################