In my Dockerfile, located in the docker/my-project-node/ folder, I have RUN npm i --legacy-peer-deps --only=production --no-optional.
I want to run docker build locally and see if everything works fine.
When I run docker build -t my-project-image . I get this npm error:
#9 0.913 npm ERR! code ENOENT
#9 0.913 npm ERR! syscall open
#9 0.914 npm ERR! path /app/package.json
#9 0.914 npm ERR! errno -2
#9 0.916 npm ERR! enoent ENOENT: no such file or directory, open '/app/package.json'
#9 0.917 npm ERR! enoent This is related to npm not being able to find a file.
#9 0.917 npm ERR! enoent
Dockerfile:
FROM node:16-alpine
RUN apk update && apk add --no-cache ca-certificates bash
WORKDIR '/app'
COPY . .
RUN npm i --legacy-peer-deps --only=production --no-optional
CMD ["npm","run", "start:prod"]
What's wrong in my docker file?
You are passing the wrong build context to Docker.
The build context is the directory Docker builds from; all COPY commands resolve their source paths relative to it.
There are two ways to specify it properly (see the layout sketch below):
Run the command from the root of your project and specify the Dockerfile path explicitly, since it is not at the root: docker build -t my-web-app -f docker/my-project-node/Dockerfile .
Or run it from the Dockerfile's folder and pass the root you want as the context, still pointing -f at the Dockerfile: docker build -t my-web-app -f Dockerfile ../.. (as your root is 2 folders up)
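For reference, here is the layout this assumes (folder names other than docker/my-project-node are just placeholders) together with the first command, run from the project root:

-my-project (run docker build from here)
  -package.json
  -src
  -docker
    -my-project-node
      -Dockerfile

docker build -t my-project-image -f docker/my-project-node/Dockerfile .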
Finally, please look at docker-compose. All these pesky docker commands can be beautifully represented in a docker-compose file like this:
# docker-compose.yaml placed at root of the project
version: '3'
services:
  web:
    container_name: "example1"
    build:
      context: .
      dockerfile: docker/my-project-node/Dockerfile
    ports:
      - "8000:8000"
Once the file is created, simply run docker-compose up -d to create and run the containers, and docker-compose down to shut down and remove them :)
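One thing to keep in mind: once the image exists, docker-compose up will not rebuild it after you change the Dockerfile or package.json, so force a rebuild when needed:

docker-compose up -d --build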
Dockerfile
# Dockerfile
FROM node:12.11.1
# update and install dependency
RUN apt-get update
# copy / target Main Folder
COPY . /opt/frontend
# work in folder
WORKDIR /opt/frontend
# install dependency
RUN npm install
# set app serving to permissive / assigned
# if you do not specify host, you cannot connect from the host
ENV HOST 0.0.0.0
# output port
EXPOSE 3000
CMD [ "npm", "run", "dev" ]
Error:
npm ERR! code ENOENT
npm ERR! syscall open
npm ERR! path /opt/frontend/package.json
npm ERR! errno -2
npm ERR! enoent ENOENT: no such file or directory, open '/opt/frontend/package.json'
npm ERR! enoent This is related to npm not being able to find a file.
npm ERR! enoent
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2021-11-15T14_31_01_250Z-debug.log
The build was successful, and I read that this error means package.json was not copied over. But doesn't this line already copy everything in the main frontend folder over?
# copy / target Main Folder
COPY . /opt/frontend
What did I miss? My GitHub: https://github.com/differentMonster/nuxt3-troisjs
You are building the image with the wrong context.
Try this:
cd frontend
docker build . -f ./docker/dev/Dockerfile -t your_app_name
Or
docker build ./frontend -f ./docker/dev/Dockerfile -t your_app_name
You need to specify the context path.
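If it still fails, one quick way to see what COPY actually brought into the image is a temporary debug step in the Dockerfile, just before RUN npm install (diagnostic only; remove it once package.json shows up in the listing):

# temporary diagnostic: list what COPY . /opt/frontend actually copied
RUN ls -la /opt/frontend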
I am using Symfony ApiPlatform 2.5 on Docker, with a "client" service for the ReactJS frontend. I don't really know what happened, but I can't do anything with npm anymore; I always get this error:
npm ERR! cb() never called!
npm ERR! This is an error with npm itself. Please report this error at:
npm ERR! <https://npm.community>
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2021-04-04T09_55_47_791Z-debug.log
I then tried "npm install --no-package-lock" and got this error:
npm ERR! code EBUSY
npm ERR! syscall rmdir
npm ERR! path /usr/src/client/node_modules/.webpack-dev-server.DELETE/ssl
npm ERR! errno -16
npm ERR! EBUSY: resource busy or locked, rmdir '/usr/src/client/node_modules/.webpack-dev-server.DELETE/ssl'
When I try to run "rm -rf node_modules" in the container, I get the same kind of error:
rm: can't remove 'node_modules/.webpack-dev-server.DELETE/ssl': Resource busy
Here is the docker-compose part :
client:
  build:
    context: ./client
    target: api_platform_client_development
    cache_from:
      - ${CLIENT_IMAGE:-quay.io/api-platform/client}
  image: ${CLIENT_IMAGE:-quay.io/api-platform/client}
  tty: true # https://github.com/facebook/create-react-app/issues/8688
  environment:
    - API_PLATFORM_CLIENT_GENERATOR_ENTRYPOINT=http://api
    - API_PLATFORM_CLIENT_GENERATOR_OUTPUT=src
  depends_on:
    - dev-tls
  volumes:
    - ./client:/usr/src/client:rw,cached
    - dev-certs:/usr/src/client/node_modules/webpack-dev-server/ssl:rw,nocopy
  ports:
    - target: 3000
      published: 443
      protocol: tcp
And the associated Dockerfile :
# https://docs.docker.com/develop/develop-images/multistage-build/#stop-at-a-specific-build-stage
# https://docs.docker.com/compose/compose-file/#target
# https://docs.docker.com/engine/reference/builder/#understand-how-arg-and-from-interact
ARG NODE_VERSION=13
ARG NGINX_VERSION=1.17
# "development" stage
FROM node:${NODE_VERSION}-alpine AS api_platform_client_development
WORKDIR /usr/src/client
RUN yarn global add @api-platform/client-generator
RUN apk --no-cache --virtual build-dependencies add \
    python \
    make \
    build-base
# prevent the reinstallation of node modules at every changes in the source code
COPY package.json yarn.lock ./
RUN set -eux; \
yarn install
COPY . ./
VOLUME /usr/src/client/node_modules
ENV HTTPS true
CMD ["yarn", "start"]
# "build" stage
# depends on the "development" stage above
FROM api_platform_client_development AS api_platform_client_build
ARG REACT_APP_API_ENTRYPOINT
RUN set -eux; \
yarn build
# "nginx" stage
# depends on the "build" stage above
FROM nginx:${NGINX_VERSION}-alpine AS api_platform_client_nginx
COPY docker/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf
WORKDIR /usr/src/client/build
COPY --from=api_platform_client_build /usr/src/client/build ./
Is there a way to completely RESET (i.e. clean install) only this service, without touching the others? I know we can remove all volumes with options, but I didn't find how to act on only one service. I have a database in another service that I don't want to lose. :/
Thanks !
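Not a definitive answer, but these are the commands I would try to recreate just the client service, leaving the database service and its volume alone (service name taken from the compose file above):

docker-compose stop client
docker-compose rm -f -v client      # -v also removes the anonymous node_modules volume attached to the container
docker-compose build --no-cache client
docker-compose up -d client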
I'm trying to dockerize my create-react-app development environment while preserving hot reload. According to most guides (and this guy), the most direct way is docker run -p 3000:3000 -v "$(pwd):/var/www" -w "/var/www" node npm start in the project folder.
However, I'm getting this error instead:
$ docker run -p 3000:3000 -v "$(pwd):/var/www" -w "/var/www" node npm start
> my-app#0.1.0 start /var/www
> react-scripts start
sh: 1: react-scripts: Input/output error
npm ERR! code ELIFECYCLE
npm ERR! syscall spawn
npm ERR! file sh
npm ERR! errno ENOENT
npm ERR! my-app#0.1.0 start: `react-scripts start`
npm ERR! spawn ENOENT
npm ERR!
npm ERR! Failed at the my-app#0.1.0 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2020-04-02T06_55_22_257Z-debug.log
I'm running on Windows. I believe mounting the volume might have some permission issues leading to the input/output error, but testing various settings didn't work out. I'm honestly stumped. All I want is to run my app in Docker with hot reload for development.
As it turns out, setting up create-react-app in Docker takes a little more work.
The primary issue is that mounted volumes are not available during the build step, so when the node npm start command runs, the mounted project files technically don't exist yet.
As such, you need to copy the project in and install it first, so it can run the first time before the volume mounts. Hot reloading works normally afterwards.
Here's my final working setup:
docker-compose.yml:
create-react-app:
  build:
    context: create-react-app
  ports:
    - 3000:3000
  environment:
    - NODE_PATH=/node_modules
    - CHOKIDAR_USEPOLLING=true
  volumes:
    - ./create-react-app:/create-react-app
Dockerfile:
FROM node:alpine
# Extend PATH
ENV PATH=$PATH:/node_modules/.bin
# Set working directory
WORKDIR /client
# Copy project files for build
ADD . .
# Install dependencies
RUN npm install
# Run create-react-app server
CMD ["npm", "run", "start"]
I am trying to create and run a Docker image of my Angular application. It works perfectly fine locally; however, I am having problems when it comes to the docker run command.
My Dockerfile is:
FROM node:current
#FROM node:current-slim
#FROM node:12.13.1-alpine
WORKDIR /usr/src/app
COPY package.json .
COPY proxy.conf.json .
RUN npm install
EXPOSE 4200
CMD [ "npm", "run", "dev" ]
In my package.json the scripts are:
"dev": "concurrently \"npm start\" \"ng serve --proxy-config proxy.conf.json\""
and the two Docker commands I am running are:
docker build --tag fuel-consumption-front:0.0.0 .
docker run --publish 8000:4200 --detach --name fuel fuel-consumption-front:0.0.0
The Docker logs say (this is the output in my Docker Desktop application):
> fuel-consumption-front#0.0.0 dev /usr/src/app
> concurrently "npm start" "ng serve --proxy-config proxy.conf.json"
sh: 1: concurrently: not found
npm ERR! code ELIFECYCLE
npm ERR! syscall spawn
npm ERR! file sh
npm ERR! errno ENOENT
npm ERR! fuel-consumption-front#0.0.0 dev: `concurrently "npm start" "ng serve --proxy-config proxy.conf.json"`
npm ERR! spawn ENOENT
npm ERR!
npm ERR! Failed at the fuel-consumption-front#0.0.0 dev script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2020-03-25T15_30_19_063Z-debug.log
It is the first line that makes me think this has something to do with concurrently not being able to run.
I have tried different node images (commented out in the Dockerfile) and also npm install -g within the Dockerfile. There is no entry for concurrently in the package.json file either. All of these methods throw the same error.
I am pretty new to Docker and was using the example from the Docker documentation as a template. Their example worked perfectly fine.
I had another play with this after leaving it for a while. It turned out I hadn't saved my Dockerfile when I added npm install concurrently. The Dockerfile now reads (with a few more tweaks):
FROM node:current-slim
WORKDIR /usr/src/app
COPY package.json .
COPY proxy.conf.json .
RUN npm install
RUN npm install concurrently
COPY . .
EXPOSE 4200
CMD [ "npm", "run", "dev" ]
Now onto the next issue... I need to make the image smaller, and although the logs say it compiles fine, I can't connect on localhost:8000 (another problem for another day!)
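As an aside, instead of the extra RUN npm install concurrently line, recording it as a dev dependency lets the existing RUN npm install pick it up (just a suggestion, not what the Dockerfile above does):

npm install --save-dev concurrently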
I dockerized my MEAN application with docker-compose. This works fine.
Now I am trying to use "volumes" so that my Angular app (with ng serve) and my Express app (with nodemon) auto-restart while coding.
But an identical error appears for both the angular and express containers:
angular_1 |
angular_1 | up to date in 1.587s
angular_1 | found 0 vulnerabilities
angular_1 |
angular_1 | npm ERR! path /usr/src/app/package.json
angular_1 | npm ERR! code ENOENT
angular_1 | npm ERR! errno -2
angular_1 | npm ERR! syscall open
angular_1 | npm ERR! enoent ENOENT: no such file or directory, open '/usr/src/app/package.json'
angular_1 | npm ERR! enoent This is related to npm not being able to find a file.
angular_1 | npm ERR! enoent
angular_1 |
angular_1 | npm ERR! A complete log of this run can be found in:
angular_1 | npm ERR! /root/.npm/_logs/2019-04-07T20_51_38_933Z-debug.log
harmonie_angular_1 exited with code 254
See my folder hierarchy :
-project
  -client
    -Dockerfile
    -package.json
  -server
    -Dockerfile
    -package.json
  -docker-compose.yml
Here's my Dockerfile for angular :
# Create image based on the official Node 10 image from dockerhub
FROM node:10
# Create a directory where our app will be placed
RUN mkdir -p /usr/src/app
# Change directory so that our commands run inside this new directory
WORKDIR /usr/src/app
# Copy dependency definitions
COPY package*.json /usr/src/app/
# Install dependencies
RUN npm install
# Get all the code needed to run the app
COPY . /usr/src/app/
# Expose the port the app runs in
EXPOSE 4200
# Serve the app
CMD ["npm", "start"]
My Dockerfile for express :
# Create image based on the official Node 6 image from the dockerhub
FROM node:6
# Create a directory where our app will be placed
RUN mkdir -p /usr/src/app
# Change directory so that our commands run inside this new directory
WORKDIR /usr/src/app
# Copy dependency definitions
COPY package*.json /usr/src/app/
# Install dependencies
RUN npm install
# Get all the code needed to run the app
COPY . /usr/src/app/
# Expose the port the app runs in
EXPOSE 3000
# Serve the app
CMD ["npm", "start"]
And finally my docker-compose.yml
version: '3' # specify docker-compose version

# Define the services/containers to be run
services:
  angular: # name of the first service
    build: client # specify the directory of the Dockerfile
    ports:
      - "4200:4200" # specify port forwarding
    # WHEN ADDING VOLUMES, ERROR APPEARS!!!!!!
    volumes:
      - ./client:/usr/src/app
  express: # name of the second service
    build: server # specify the directory of the Dockerfile
    ports:
      - "3000:3000" # specify ports forwarding
    links:
      - database
    # WHEN ADDING VOLUMES, ERROR APPEARS!!!!!!
    volumes:
      - ./server:/usr/src/app
  database: # name of the third service
    image: mongo # specify image to build container from
    ports:
      - "27017:27017" # specify port forwarding
I also had this error; it turned out to be an issue with my version of docker-compose. I'm running WSL on Windows 10, and the version of docker-compose installed inside WSL did not handle volume binding correctly. I fixed this by removing /usr/local/bin/docker-compose and then adding an alias to the Windows docker-compose executable: alias docker-compose="/mnt/c/Program\ Files/Docker/Docker/resources/bin/docker-compose.exe"
If the above does not apply to you, then try updating your version of docker-compose.
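To check which docker-compose binary and version you are actually running inside WSL:

which docker-compose
docker-compose version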
Your volumes section should look like this:
volumes:
  - .:/usr/app
  - /usr/app/node_modules
After mounting the source folder, node_modules inside the Docker container gets 'overwritten' by the bind mount, so you need the extra '/usr/app/node_modules' entry to preserve it. Full tutorial with a proper docker-compose.yml: https://all4developer.blogspot.com/2019/01/docker-and-nodemodules.html
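Applied to the compose file from the question, the angular service would look roughly like this (the express service gets the same treatment; paths come from the question, so treat it as a sketch):

  angular:
    build: client
    ports:
      - "4200:4200"
    volumes:
      - ./client:/usr/src/app
      - /usr/src/app/node_modules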