My git repo structure:
app
- app.server
  - server files
- app.client
  - node_modules
  - public
  - src
  - .dockerignore
  - Dockerfile
  - package.json
  - package-lock.json
I've set up CI/CD with GitHub Actions, but something is wrong with the Docker image for my client application (React).
Error message: COPY failed: file not found in build context or excluded by .dockerignore: stat package.json: file does not exist
My .dockerignore file:
node_modules
build
.dockerignore
Dockerfile
Dockerfile.prod
My Dockerfile:
# pull official base image
FROM node:13.12.0-alpine
# set working directory
WORKDIR /app.client
# add `/app.client/node_modules/.bin` to $PATH
ENV PATH /app.client/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm install --silent
RUN npm install react-scripts@3.4.1 -g --silent
# add app
COPY ./ ./
# start app
CMD ["npm", "start"]
My GitHub action command for invoking the Dockerfile:
docker build app.client/ -t mycontainerregistry.azurecr.io/appdb:${{ github.sha }}
This is part of a publish to Azure Container Registry that I'm trying to learn. I assume the Dockerfile itself is being picked up, because it gets through the first steps before failing at step 4/9:
Step 1/9 : FROM node:13.12.0-alpine
13.12.0-alpine: Pulling from library/node
aad63a933944: Pulling fs layer
... (and so on)
Step 4/9 : COPY package.json ./
COPY failed: file not found in build context or excluded by .dockerignore: stat package.json: file does not exist
Error: Process completed with exit code 1.
WORKDIR refers to the working directory in the container. The working directory on the host is still the root of your repo.
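If app/ here is a folder inside the repository (rather than the checkout root itself), the build context passed to docker build has to include that folder. A hedged sketch of the action command with the path adjusted to the layout above (the exact path is an assumption based on that layout):
docker build app/app.client/ -t mycontainerregistry.azurecr.io/appdb:${{ github.sha }}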
I'm running
docker build --build-arg npm_token=//NPM TOKEN HERE// -t test .
and the build is failing with an error message of
error Command "dotenv" not found even though the dotenv nom package is included in the package.json and the yarn.lock files. Here is my docker file:
# image has Cypress npm module installed globally in /root/.npm/node_modules
# and Cypress binary cached in /root/.cache/Cypress folder
FROM cypress/included:9.7.0
WORKDIR /usr/src/app
# Set up NPM token to access private GitHub packages
ARG npm_token
ENV NPM_TOKEN=$npm_token
COPY .npmrc ./
RUN npm config set //npm.pkg.github.com/:_authToken $NPM_TOKEN
COPY config cypress .env package.json cypress.json yarn.lock tsconfig.json ./
RUN yarn \
dotenv -- node e2e-tests.js
Does anyone know why this is happening?
You should install dotenv-cli to run .env commands:
yarn add dotenv-cli
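For the Dockerfile above, dotenv-cli then has to end up in the image's node_modules before the test command runs. A minimal sketch of the last steps, assuming dotenv-cli has been added to package.json and yarn.lock (splitting the install from the test run is my own assumption, not from the original):
# install dependencies, including dotenv-cli
RUN yarn install
# dotenv now resolves from node_modules/.bin
RUN yarn dotenv -- node e2e-tests.js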
Hi everyone, I'm facing a strange issue when creating a Docker image for a remix.run app and using it inside a GitHub job.
I have this Dockerfile:
FROM node:16-alpine as deps
WORKDIR /app
ADD package.json yarn.lock ./
RUN yarn install
# Build the app
FROM node:16-alpine as build
ENV NODE_ENV=production
WORKDIR /app
COPY --from=deps /app/node_modules /app/node_modules
COPY . .
RUN yarn run build
# Build production image
FROM node:16-alpine as runner
ENV NODE_ENV=production
ENV PORT=80
WORKDIR /app
COPY --from=deps /app/node_modules /app/node_modules
COPY --from=build /app/build /app/build
COPY --from=build /app/public /app/public
COPY --from=build /app/api /app/api
COPY . .
EXPOSE 80
CMD ["npm", "run", "start"]
If I build the image on my local machine, everything works fine, and I'm able to run the container and point at it.
I made a GitHub workflow to build the same image and push it to my Docker Hub.
But when the GitHub job runs, it always fails with this error:
Step 16/21 : COPY --from=build /app/build /app/build
COPY failed: stat app/build: file does not exist
My remix.run config is:
/**
 * @type {import('@remix-run/dev/config').AppConfig}
 */
module.exports = {
  appDirectory: "app",
  assetsBuildDirectory: "public/build",
  publicPath: "/build/",
  serverBuildDirectory: "api/_build",
  devServerPort: 8002,
  ignoredRouteFiles: [".*"],
};
Thanks in advance for any help
You are using the config option serverBuildDirectory: "api/_build", so when you run the remix build, your server build files end up in api/_build/ and not in the build/ directory.
In your Docker image, the last stage tries to copy the content of the build/ directory from the previous stage named build. But that directory does not exist there; the server content is built in api/_build/ instead.
So you just don't need that line:
COPY --from=build /app/build /app/build
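For reference, the last stage would then look like this (the Dockerfile from the question, with only that COPY line removed):
# Build production image
FROM node:16-alpine as runner
ENV NODE_ENV=production
ENV PORT=80
WORKDIR /app
COPY --from=deps /app/node_modules /app/node_modules
COPY --from=build /app/public /app/public
COPY --from=build /app/api /app/api
COPY . .
EXPOSE 80
CMD ["npm", "run", "start"]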
One possible reason why it works for you locally:
You have a local directory named build/ which contains your local server build files, maybe because you built it before changing the serverBuildDirectory option, and it's probably ignored by your git repository.
When you build your Docker image, the stage named build copies everything from your local directory into its own environment with COPY . ., and so it gets your local build/ directory. Then, during the last stage, that directory is copied to the final environment with COPY --from=build /app/build /app/build.
When you do the same thing in a fresh environment like GitHub Actions, the build/ directory does not exist in the environment that runs the docker command, so it's not copied from stage to stage, and you finally get an error trying to copy something that does not exist.
This artifact copying is probably not what you want: if you do the build in your Docker image, you don't also want to copy it in from your local environment.
To avoid these unwanted copies, you can add a .dockerignore file with at least the following things:
node_modules
build
api/_build
public/build
I have a Dockerfile, and when I run it locally everything works fine; however, my build through GitHub Actions fails. The error I am getting is:
error An unexpected error occurred: "ENOENT: no such file or directory, stat '/home/runner/work/akira/akira/README.md'".
I tried removing the yarn.lock but without success. A full log of the failing build can be found here; my Dockerfile is below:
Dockerfile:
FROM node:14.0.0 AS base
WORKDIR /usr/src/app
FROM base as builder
COPY ./lerna.json .
COPY ./package.json .
COPY ./tsconfig.json .
COPY ./yarn.lock .
COPY ./packages/akira/prisma ./packages/akira/prisma
COPY ./packages/akira/src ./packages/akira/src
COPY ./packages/akira/types ./packages/akira/types
COPY ./packages/akira/package*.json ./packages/akira/
COPY ./packages/akira/tsconfig.json ./packages/akira
RUN yarn install --frozen-lockfile
RUN yarn build
FROM builder as migrate
RUN yarn workspace akira prisma migrate up --experimental
FROM base AS app
COPY --from=builder /usr/src/app/yarn.lock .
COPY --from=builder /usr/src/app/packages/akira/dist ./dist
COPY --from=builder /usr/src/app/packages/akira/prisma ./prisma
COPY --from=builder /usr/src/app/packages/akira/package.json .
RUN yarn install --production
USER node
ENV NODE_ENV=production
EXPOSE 4000
CMD ["node", "dist/index.js"]
If you look at your GitHub Actions workflow, or the log from the failing build that you linked, it seems to be running yarn commands outside of Docker.
It looks like yarn is struggling with the README symlink (not sure why), but as it seems you want to build with Docker, I would try the following:
Replace this part of the yaml:
- name: Use Node.js
  uses: actions/setup-node@master
  with:
    node-version: 14.4.0
- name: Install dependencies
  run: yarn --frozen-lockfile
- name: Build packages
  run: yarn build
with something like:
- name: Build docker image
  run: docker build .
Edit:
As pointed out in the comment below, the Dockerfile includes a side effect of deploying database migrations.
If you don't want to run everything from the Dockerfile in the build pipeline, you can leverage multi-stage builds and stop at a specific stage, i.e. move the migrations into their own stage:
FROM node:14.0.0 AS base
WORKDIR /usr/src/app
FROM base as builder
COPY ./lerna.json .
<< lines omitted >>
RUN yarn install --frozen-lockfile
RUN yarn build
FROM builder AS migr
RUN yarn workspace akira prisma migrate up --experimental
FROM base AS app
COPY --from=builder /usr/src/app/yarn.lock .
<< lines omitted >>
Then you can stop after the builder stage with
docker build --target builder .
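In the GitHub Actions workflow that would be a step along these lines (the step name is my own):
- name: Build docker image (builder stage only)
  run: docker build --target builder .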
Edit 2:
Or you could keep the build pipeline and Dockerfile as they are, and instead fix the broken symlink, i.e. revert commit 0c87fa3.
I have the following file structure:
...
- public
- app
- docker/
  - node-js/Dockerfile
- docker-compose.yml
- package.json
In my Dockerfile I have logic to copy package.json and run npm install:
FROM node:12.0.0-alpine
MAINTAINER Bogdan Dubyk <bogdan.dubyk@gmail.com>
COPY package.json /var/www/frontend/
RUN npm install
CMD [ "npm", "start" ]
but I'm getting the error ERROR: Service 'node-js' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builder184577258/package.json: no such file or directory while building the image. It looks like the Dockerfile can only see files inside its own folder? Is it possible to copy files from outside that folder?
I tried COPY ../../package.json /var/www/frontend/ but I also get the error ERROR: Service 'node-js' failed to build: COPY failed: Forbidden path outside the build context: ../../package.json ().
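One common way to handle this, sketched under the assumption that the docker-compose.yml at the project root is what triggers this build: keep the build context at the project root and only point the service at the nested Dockerfile, so COPY package.json resolves against the root:
services:
  node-js:
    build:
      # build context stays at the project root, where package.json lives
      context: .
      # the Dockerfile itself can live in a subfolder
      dockerfile: docker/node-js/Dockerfile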
I have made the following Dockerfile to containerize my Node.js application, but an error appears when building it:
Sending build context to Docker daemon 2.048kB
Step 1/7 : FROM node:10
---> 0d5ae56139bd
Step 2/7 : WORKDIR /usr/src/app
---> Using cache
---> 5bfc0405d8fa
Step 3/7 : COPY package.json ./
COPY failed: stat /var/lib/docker/tmp/docker-builder803334317/package.json: no such file or directory
This is my Dockerfile:
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "npm", "start" ]
I executed the command:
sudo docker build - < Dockerfile
from my project's root folder.
My project folder is simple, like this:
- Project
  - app.js
  - Dockerfile
  - package.json
  - package-lock.json
  - README.md
Am I doing something wrong?
When you use the Dockerfile-on-stdin syntax
sudo docker build - < Dockerfile
the build sequence runs in a context that only has the Dockerfile, and no other files on disk.
The directory layout you show is pretty typical, and pointing docker build at that directory should work better
sudo docker build .
(This is the same rule as the "Dockerfiles can't access files in parent directories" rule, but instead of giving the current directory as the base directory to Docker, you're giving no directory at all, so it can't even access files in the current directory.)
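If you do want to name the Dockerfile explicitly while keeping the current directory as the build context, the -f flag does that (shown as a variant, not something the answer above requires):
sudo docker build -f Dockerfile .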