Docker: no such file package.json when running npm install

I am running Docker on Windows 10, and when I run docker-compose up -d I get this error, but I don't know why.
npm WARN saveError ENOENT: no such file or directory, open '/var/www/package.json'
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN enoent ENOENT: no such file or directory, open '/var/www/package.json'
npm WARN www No description
npm WARN www No repository field.
npm WARN www No README data
npm WARN www No license field.
Here is my docker-compose.yaml file:
version: '3'
services:
  # Nginx client app server
  nginx-client:
    container_name: nginx-client
    build:
      context: ./docker/nginx-client
      dockerfile: Dockerfile
    restart: unless-stopped
    ports:
      - 28874:3000
    volumes:
      - ./client:/var/www
    networks:
      - app-network

# Networks
networks:
  app-network:
    driver: bridge
And here is my Dockerfile:
FROM node:12
WORKDIR /var/www
RUN npm install
CMD ["npm", "run", "serve"]

This is happening because you are first building an image from your Dockerfile, which contains these commands:
WORKDIR /var/www
RUN npm install
But at image build time this directory is empty; the bind mount only takes place after the container is created. You can read more about it in the docs, which state:
A service definition contains configuration that is applied to each
container started for that service, much like passing command-line
parameters to docker run. Likewise, network and volume definitions are
analogous to docker network create and docker volume create.
If you need to have this file available at image build time, I'd suggest using the COPY instruction, like:
COPY ./client/package*.json ./
RUN npm install
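Put together, a minimal sketch of a fixed Dockerfile might look like this. Note this assumes the compose build context is changed so that ./client is inside it (in the original compose file the context is ./docker/nginx-client, which does not contain the client folder); the serve command is taken from the original Dockerfile:

```dockerfile
FROM node:12
WORKDIR /var/www
# Copy the dependency manifests into the image so npm install
# can run at build time, before any bind mount exists
COPY ./client/package*.json ./
RUN npm install
# At run time the compose bind mount overlays /var/www with ./client
CMD ["npm", "run", "serve"]
```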

Related

How to sync node_modules directory from Docker container to Host machine?

I use Ubuntu 22.04 as a host machine and Docker 20.10.17 for the container environment.
The problem is that when I install the node packages inside the container, I do not see these packages in the node_modules folder on the host machine; it's empty, so the IDE (Visual Studio Code) gives errors in every TypeScript file because of the missing node packages.
However, in the container I can see that the node packages are installed under the node_modules folder, so the application is up and live. It works there; it's just not available locally on my host machine.
I added node_modules as a volume in the docker-compose.yml, as I have seen in other posts, but that trick does not work for me.
version: '3.8'
services:
  web:
    container_name: web
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/app'
      - '/app/node_modules'
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
      - CHOKIDAR_USEPOLLING=true
    stdin_open: true
Here is the Dockerfile:
FROM node:lts-alpine
WORKDIR /app
COPY package*.json ./
# Fix Permissions for Packages
# RUN npm config set unsafe-perm true
RUN npm install -g npm
RUN npm install
COPY . .
# Fix Cache Permissions for node_modules
RUN chown -R node /app
USER node
EXPOSE 3000
CMD ["npm", "run", "dev"]
The project structure looks like this on the host machine:
/
- node_modules/
- public/
- src/
- Dockerfile
- docker-compose.yml
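One possible workaround, not from the original post: copy the installed packages out of the running container once, just so the IDE can resolve them. This is a one-off copy rather than a live sync; the container name web and the /app path are taken from the compose file above:

```shell
# bring the stack up so the container's /app/node_modules is populated
docker compose up -d
# one-time copy of the installed packages to the host for the IDE
docker cp web:/app/node_modules ./node_modules
```

The copied folder will drift out of date whenever dependencies change inside the container, so the copy has to be repeated after each npm install.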

docker build IMAGE results in error but docker-compose up -d works fine

I am new to Docker. I am trying to create a Docker image for a Node.js project which I will upload/host on a Docker repository. When I execute docker-compose up -d, everything works fine and I can access the Node.js server that is hosted inside the Docker containers. After that, I stopped all containers and tried to create a Docker image from the Dockerfile using the following commands:
docker build -t adonis-app .
docker run adonis-app
The first command executes without any error but the second command throws this error:
> adonis-fullstack-app@4.1.0 start /app
> node server.js
internal/modules/cjs/loader.js:983
throw err;
^
Error: Cannot find module '/app/server.js'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:980:15)
at Function.Module._load (internal/modules/cjs/loader.js:862:27)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12)
at internal/main/run_main_module.js:17:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! adonis-fullstack-app@4.1.0 start: `node server.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the adonis-fullstack-app@4.1.0 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /app/.npm/_logs/2020-02-09T17_33_22_489Z-debug.log
Can someone help me with this error and tell me what is wrong with it?
Dockerfile I am using:
FROM node
ENV HOME=/app
RUN mkdir /app
ADD package.json $HOME
WORKDIR $HOME
RUN npm i -g @adonisjs/cli && npm install
CMD ["npm", "start"]
docker-compose.yaml
version: '3.3'
services:
  adonis-mysql:
    image: mysql:5.7
    ports:
      - '3306:3306'
    volumes:
      - $PWD/data:/var/lib/mysql
    environment:
      MYSQL_USER: ${DB_USER}
      MYSQL_DATABASE: ${DB_DATABASE}
      MYSQL_PASSWORD: ${DB_PASSWORD}
      MYSQL_RANDOM_ROOT_PASSWORD: 1
    networks:
      - api-network
  adonis-api:
    container_name: "${APP_NAME}-api"
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - "3333:3333"
    depends_on:
      - adonis-mysql
    networks:
      - api-network
networks:
  api-network:
Your Dockerfile is missing a COPY step to actually copy your application code into the image. When you docker run the image, there's no actual source code to run, and you get the error you're seeing.
Your Dockerfile should look more like:
FROM node
# WORKDIR creates the directory; no need to set $HOME
WORKDIR /app
COPY package.json package-lock.json ./
# all of your dependencies are in package.json
RUN npm install
# actually copy the application in
COPY . .
CMD ["npm", "start"]
Now that your Docker image is self-contained, you don't need the volumes: entries that try to inject host content into it. You can also safely rely on several of Docker Compose's defaults (the default network and the generated container_name: are both fine to use). A simpler docker-compose.yml looks like:
version: '3.3'
services:
  adonis-mysql:
    image: mysql:5.7
    # As you have it, except delete the networks: block
  adonis-api:
    build: .  # this directory is context:, use default dockerfile:
    ports:
      - "3333:3333"
    depends_on:
      - adonis-mysql
There are several key problems in the set of artifacts and commands you show:
docker run and docker-compose ... are separate commands. The docker run command you show runs the image as-is, with its default command, with no volumes mounted, and with no ports published. docker run doesn't know about the docker-compose.yml file, so whatever options you have specified there won't have any effect. You probably mean docker-compose up, which will also start the database. (In your application, remember to retry the database connection several times; it can often take 30-60 seconds to come up.)
If you're planning to push the image, you need to include the source code. You're essentially creating two separate artifacts in this setup: a Docker image with Node and some libraries, and also a JavaScript application on your host. If you docker push the image, it won't include the application (because you're not COPYing it in), so you'll also have to separately distribute the source code. At that point there's not much benefit to using Docker; an end user may as well install Node, clone your repository, and run npm install themselves.
You're preventing Docker from seeing library updates. Putting node_modules in an anonymous volume seems to be a popular setup, and Docker will copy content from the image into that directory the first time you run the application. The second time you run the application, Docker sees the directory already exists, assumes it to contain valuable user data, and refuses to update it. This leads to SO questions along the lines of "I updated my package.json file but my container isn't updating".
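If you do keep the anonymous node_modules volume, you can force Compose to refresh it from a newly built image; the --renew-anon-volumes flag (short form -V, available in docker-compose 1.24+) exists for exactly this:

```shell
# rebuild the image so the updated package.json is installed into it
docker-compose build
# recreate containers, replacing anonymous volumes instead of reusing them
docker-compose up -d --renew-anon-volumes
```

Without the flag, the stale volume contents survive `docker-compose up`, which is what produces the "my container isn't updating" symptom described above.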
Your docker-compose.yaml file has two services:
adonis-mysql
adonis-api
Only the second service uses the current Dockerfile, as can be seen in the following section:
build:
  context: .
  dockerfile: Dockerfile
The command docker build . will only build the image from the current Dockerfile, i.e. adonis-api, and then it is run.
So most probably it is the missing mysql service that is giving you the error. You can verify by running
docker ps -aq
to check whether the MySQL container is also running. Hope it helps.
Conclusion: Use docker-compose.

No such file or directory: docker-compose up

I dockerized my MEAN application with docker-compose. This works fine.
Now I am trying to use "volumes" so that my Angular app (with ng serve) and my Express app (with nodemon.js) auto-restart while I am coding.
But an identical error appears for both the Angular and the Express containers:
angular_1 |
angular_1 | up to date in 1.587s
angular_1 | found 0 vulnerabilities
angular_1 |
angular_1 | npm ERR! path /usr/src/app/package.json
angular_1 | npm ERR! code ENOENT
angular_1 | npm ERR! errno -2
angular_1 | npm ERR! syscall open
angular_1 | npm ERR! enoent ENOENT: no such file or directory, open '/usr/src/app/package.json'
angular_1 | npm ERR! enoent This is related to npm not being able to find a file.
angular_1 | npm ERR! enoent
angular_1 |
angular_1 | npm ERR! A complete log of this run can be found in:
angular_1 | npm ERR! /root/.npm/_logs/2019-04-07T20_51_38_933Z-debug.log
harmonie_angular_1 exited with code 254
See my folder hierarchy:
-project
  -client
    -Dockerfile
    -package.json
  -server
    -Dockerfile
    -package.json
  -docker-compose.yml
Here's my Dockerfile for angular:
# Create image based on the official Node 10 image from dockerhub
FROM node:10
# Create a directory where our app will be placed
RUN mkdir -p /usr/src/app
# Change directory so that our commands run inside this new directory
WORKDIR /usr/src/app
# Copy dependency definitions
COPY package*.json /usr/src/app/
# Install dependencies
RUN npm install
# Get all the code needed to run the app
COPY . /usr/src/app/
# Expose the port the app runs in
EXPOSE 4200
# Serve the app
CMD ["npm", "start"]
My Dockerfile for express:
# Create image based on the official Node 6 image from the dockerhub
FROM node:6
# Create a directory where our app will be placed
RUN mkdir -p /usr/src/app
# Change directory so that our commands run inside this new directory
WORKDIR /usr/src/app
# Copy dependency definitions
COPY package*.json /usr/src/app/
# Install dependencies
RUN npm install
# Get all the code needed to run the app
COPY . /usr/src/app/
# Expose the port the app runs in
EXPOSE 3000
# Serve the app
CMD ["npm", "start"]
And finally my docker-compose.yml
version: '3'  # specify docker-compose version

# Define the services/containers to be run
services:
  angular:  # name of the first service
    build: client  # specify the directory of the Dockerfile
    ports:
      - "4200:4200"  # specify port forwarding
    # WHEN ADDING VOLUMES, ERROR APPEARS!!!!!!
    volumes:
      - ./client:/usr/src/app

  express:  # name of the second service
    build: server  # specify the directory of the Dockerfile
    ports:
      - "3000:3000"  # specify ports forwarding
    links:
      - database
    # WHEN ADDING VOLUMES, ERROR APPEARS!!!!!!
    volumes:
      - ./server:/usr/src/app

  database:  # name of the third service
    image: mongo  # specify image to build container from
    ports:
      - "27017:27017"  # specify port forwarding
I also had this error; it turned out to be an issue with my version of docker-compose. I'm running WSL on Windows 10, and the version of docker-compose installed inside WSL did not handle volume binding correctly. I fixed this by removing /usr/local/bin/docker-compose and then adding an alias to the Windows docker-compose executable: alias docker-compose="/mnt/c/Program\ Files/Docker/Docker/resources/bin/docker-compose.exe"
If the above does not apply to you, try updating your version of docker-compose.
Your volumes section should look like this:
volumes:
  - .:/usr/app
  - /usr/app/node_modules
After mounting the source folder, node_modules inside the Docker container is 'overwritten', so you need to add the '/usr/app/node_modules' entry. Full tutorial with a proper docker-compose.yml: https://all4developer.blogspot.com/2019/01/docker-and-nodemodules.html
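Applied to the compose file from the question, the angular service would look something like this (the express service would get the same treatment with ./server):

```yaml
services:
  angular:
    build: client
    ports:
      - "4200:4200"
    volumes:
      - ./client:/usr/src/app
      # anonymous volume so the image's installed node_modules
      # is not hidden by the bind mount above
      - /usr/src/app/node_modules
```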

Docker and private packages with .npmrc

I am using a .npmrc file to configure a private repo (font-awesome-pro).
It works well without docker.
But with docker, the npm install fails:
npm ERR! code E401
npm ERR! 404 401 Unauthorized: @fortawesome/fontawesome-pro-light@https://npm.fontawesome.com/7D46BEC2-1565-40B5-B5FC-D40C724E60C6/@fortawesome/fontawesome-pro-light/-/fontawesome-pro-light-5.0.12.tgz
I have read the doc from npm, Docker and private packages, but I don't know how to apply it with docker-compose.yml, and I'm not sure that passing variables is the solution.
Is it possible that the .npmrc file is not read during installation inside the Docker instance? Am I missing something?
Here is my docker-compose.yaml:
version: '2.1'
services:
  app:
    image: node:8.9.4
    # restart: always
    container_name: jc-vc
    environment:
      - APP_ENV=${JC_ENV:-dev}
      - HOST=0.0.0.0
      - BASE_URL=${JC_BASE_URL}
      - BROWSER_BASE_URL=${JC_BROWSER_BASE_URL}
    working_dir: /usr/src/app
    volumes:
      - ${DOCKER_PATH}/jc/vc/app:/usr/src/app
    command: npm install
    # command: npm run dev
    # command: npm run lintfix
    # command: npm run build
    # command: npm start
    expose:
      - 3000
  nginx:
    image: nginx
    logging:
      driver: none
    # restart: always
    volumes:
      - ${DOCKER_PATH}/jc/vc/nginx/www:/usr/share/nginx/html
      - ${DOCKER_PATH}/jc/vc/nginx/default.${JC_ENV:-dev}.conf:/etc/nginx/nginx.conf
      - ${DOCKER_PATH}/jc/vc/nginx/.htpasswd:/etc/nginx/.htpasswd
      - ${DOCKER_PATH}/jc/letsencrypt:/etc/letsencrypt
    container_name: jc-nginx-vc
    depends_on:
      - app
    ports:
      - ${PORT_80:-4020}:${PORT_80:-4020}
      - ${PORT_443:-4021}:${PORT_443:-4021}
and my .npmrc (with replaced token):
@fortawesome:registry=https://npm.fontawesome.com/
//npm.fontawesome.com/:_authToken=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXX
The correct way to fix this, as documented in the link you reference, is to use arg variables in the Dockerfile. I think the bit you're missing is how to do this in compose:
version: "3"
services:
  myapp:
    build:
      context: "."
      args:
        NPM_TOKEN: "s3kr!t"
You need to reference this argument in your Dockerfile and create a .npmrc file in the root of your project:
//registry.npmjs.org/:_authToken=${NPM_TOKEN}
I like to generate this in the Dockerfile to minimise the risk of exposure (but be aware that the token is still stored in the image's layers), so it would look something like this:
FROM node:current-buster-slim
ARG NPM_TOKEN
WORKDIR /app
COPY package.json /app/package.json
RUN echo "//registry.npmjs.org/:_authToken=${NPM_TOKEN}" > /app/.npmrc && \
    npm install && \
    rm -f /app/.npmrc
COPY . /app/
CMD npm start
You can then run docker-compose build myapp and get a good result. This solution still suffers from having the secret in your compose file and in the docker images, but this is only a sketch for SO. In the real world you'd not want to put your secrets in your source-files so realistically you'd replace the secret with a dynamic secret that has a short Time To Live (TTL) and a single-use policy (and you'd probably want to use Hashicorp Vault to help with that).
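The token-to-.npmrc step from the RUN line above can be sketched in isolation; the token value here is a placeholder, not a real secret:

```shell
# simulate what the RUN instruction does with the NPM_TOKEN build arg
NPM_TOKEN="XXXX-PLACEHOLDER"
echo "//registry.npmjs.org/:_authToken=${NPM_TOKEN}" > .npmrc
cat .npmrc   # prints //registry.npmjs.org/:_authToken=XXXX-PLACEHOLDER
# in the Dockerfile this is followed by: npm install && rm -f .npmrc
```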
In the root directory of your project, create a custom .npmrc file with the following contents:
//registry.npmjs.org/:_authToken=${NPM_TOKEN}
Now add these commands to Dockerfile
COPY .npmrc .npmrc
COPY package.json package.json
RUN npm install
RUN rm -f .npmrc
That should fix the issue; hope that helps.
package-lock.json needs to be re-generated with the new .npmrc file. Delete package-lock.json, recreate it with npm install, then redeploy the image.

Docker + node_modules: receiving error for local dependency while trying to run Dockerfile

I am working on creating a docker container for a node.js microservice and am running into an issue with a local dependency from another folder.
I added the dependency to the node_modules folder using:
npm install -S ../dependency1 (the module name).
This also added an entry in the package.json as follows:
"dependency1": "file:../dependency1".
When I run the docker-compose up -d command, I receive an error indicating the following:
npm ERR! Could not install from "../dependency1" as it does not contain a package.json file.
Dockerfile:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm install
CMD [ "npm", "start" ]
EXPOSE 3000
docker-compose.yml:
customer:
  container_name: "app_customer"
  build:
    context: .
    dockerfile: Dockerfile
  volumes:
    - .:/usr/src/app/
    - /usr/src/app/node_modules
  ports:
    - "3000:3000"
  depends_on:
    - mongo
    - rabbitmq
I found articles outlining an issue with symlinks in a node_modules folder and Docker, and a few outlining this exact issue, but none seem to provide a solution. I am looking for a solution to this problem or a really good workaround.
A Docker build can't reference files outside of the build context, which is the . defined in the docker-compose.yml file.
docker build creates a tar bundle of all the files in a build context and sends that to the Docker daemon for the build. Anything outside the context directory doesn't exist to the build.
You could move your build context with context: ../ to the parent directory and adjust all the paths you reference in the Dockerfile to match. Just be careful not to make the build context too large, as it can slow down the build process.
The other option is to publish the private npm modules to a scope, possible on a private npm registry that you and the build server have access to and install the dependencies normally.
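A sketch of the first option, assuming the service's code lives in a customer/ directory next to dependency1/ (the directory names here are assumptions based on the layout described):

```yaml
customer:
  build:
    # the parent directory, so ../dependency1 is inside the build context
    context: ..
    # the Dockerfile path is now relative to that context
    dockerfile: customer/Dockerfile
```

Inside the Dockerfile, COPY paths would then also be relative to the parent directory, e.g. COPY customer/ /usr/src/app and COPY dependency1/ /usr/src/dependency1, so that the file:../dependency1 reference in package.json resolves inside the image.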