I'm trying to create a Docker container to act as a test environment for my application. I am using the following Dockerfile:
FROM node:14.4.0-alpine
WORKDIR /test
COPY package*.json ./
RUN npm install .
CMD [ "npm", "test" ]
As you can see, it's pretty simple. I only want to install all dependencies but NOT copy the code, because I will run that container with the following command:
docker run -v `pwd`:/test -t <image-name>
But the problem is that the node_modules directory disappears when I mount the volume with -v. Is there a workaround for this?
When you bind-mount $PWD onto the container's /test directory, the mount hides everything that was in /test, so the node_modules installed there at build time is no longer visible.
There are two options to work around this.
First, you can run npm install in a separate directory such as /node, mount your code into /test, and point Node at the installed modules via the NODE_PATH environment variable, e.g. ENV NODE_PATH=/node/node_modules.
The Dockerfile then looks like:
FROM node:14.4.0-alpine
WORKDIR /node
COPY package*.json ./
RUN npm install
ENV NODE_PATH=/node/node_modules
WORKDIR /test
CMD [ "npm", "test" ]
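For example, building and running it might look like this (the image tag here is only an illustration, not from the question):
docker build -t test-image .
docker run -v $(pwd):/test -t test-image
Since NODE_PATH points at /node/node_modules, require() can still resolve the installed packages even though the bind mount hides whatever was in /test.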
Alternatively, you can write an entrypoint.sh script that copies the node_modules folder into the test directory at container runtime.
FROM node:14.4.0-alpine
WORKDIR /node
COPY package*.json ./
RUN npm install
WORKDIR /test
COPY entrypoint.sh ./
RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
and entrypoint.sh is something like:
#!/bin/sh
# the alpine base image ships without bash, so use sh
cp -r /node/node_modules /test/.
npm test
Approach 1
A workaround is to install the dependencies when the container starts, so they land in the mounted directory:
CMD npm install && npm run dev
Approach 2
Have Docker install node_modules during docker-compose build and run the app on docker-compose up.
Folder Structure
docker-compose.yml
version: '3.5'
services:
  api:
    container_name: $CONTAINER_FOLDER
    build: ./$LOCAL_FOLDER
    hostname: api
    volumes:
      # map the local folder into the container...
      - ./$LOCAL_FOLDER:/$CONTAINER_FOLDER
      # ...but shadow node_modules with an anonymous volume so the bind
      # mount does not hide the modules installed at build time
      - /$CONTAINER_FOLDER/node_modules
    expose:
      - 88
Dockerfile
FROM node:14.4.0-alpine
WORKDIR /test
COPY ./package.json .
RUN npm install
# run command
CMD npm run dev
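With this setup, dependencies are baked into the image at build time and the source code comes from the bind mount at run time:
docker-compose build
docker-compose up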
Related
I keep having an issue where I get the error: Cannot find module '/mfa/main.js'.
However, main.js is inside /mfa/dist/apps/api.
This is the latest Dockerfile configuration I have:
FROM node:14
WORKDIR /mfa/
COPY package.json .
COPY decorate-angular-cli.js .
COPY yarn.lock .
# Configure NPM with the group access token
ENV GROUP_NPM_TOKEN="asdfghjkiuy"
RUN npm config set @my-web:registry http://git.hoosiers.com/api/v4/packages/npm
RUN npm config set //git.hoosiers.com/api/v4/packages/npm/:_authToken=${GROUP_NPM_TOKEN}
RUN npm config set //git.hoosiers.com/api/v4/packages/projects/:_authToken=${GROUP_NPM_TOKEN}
RUN yarn add typescript
RUN yarn install --frozen-lockfile
COPY ./dist .
CMD ["node", "apps/api/main.js"]
So now docker run <image-hash> runs just fine, but when I attempt docker-compose up, I once again get Cannot find module '/mfa/main.js'.
This is my docker-compose.yml file:
version: '3.9'
services:
  web-app:
    build:
      context: .
      dockerfile: mostly-failed-apps.Dockerfile
    ports:
      - "3000:3000"
You have defined your WORKDIR as /mfa, but you execute your main.js from apps/api/main.js.
Also, a tip for COPY: it's not mandatory to write /mfa/; you can just write a dot (.) because you are already in the WORKDIR.
FROM node:14
# WORKDIR creates /mfa if it doesn't exist and switches into it
WORKDIR /mfa/
# The dot copies into the current directory, i.e. /mfa/; the same applies to every COPY below
COPY package.json .
COPY decorate-angular-cli.js .
COPY yarn.lock .
# Configure NPM with the group access token
ENV GROUP_NPM_TOKEN="token"
RUN npm config set @my-web:registry http://git.hoosiers.com/api/v4/packages/npm
RUN npm config set //git.hoosiers.com/api/v4/packages/npm/:_authToken=${GROUP_NPM_TOKEN}
RUN npm config set //git.hoosiers.com/api/v4/packages/projects/:_authToken=${GROUP_NPM_TOKEN}
RUN yarn add typescript
RUN yarn install --frozen-lockfile
# COPY ./dist . places the *contents* of dist directly under /mfa
COPY ./dist .
# The image was built with WORKDIR /mfa, so execute the file by its full path there
CMD ["node", "/mfa/apps/api/main.js"]
I'm trying to copy my ./dist folder after building my Angular app.
Here is my Dockerfile:
# Create image based off of the official Node 12 image
FROM node:12-alpine
RUN apk update && apk add --no-cache make git
RUN mkdir -p /home/project/frontend
# Change directory so that our commands run inside this new directory
WORKDIR /home/project/frontend
# Copy dependency definitions
COPY package*.json ./
RUN npm cache verify
## installing packages
RUN npm install
COPY ./ ./
RUN npm run build --output-path=./dist
COPY /dist /var/www/front
but when I run docker-compose build dashboard I get this error
Service 'dashboard' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builderxxx/dist: no such file or directory
I don't know why. Is there something wrong?
In case you need to check it, here is my docker-compose file:
...
  dashboard:
    container_name: dashboard
    build: ./frontend
    image: dashboard
    restart: unless-stopped
    networks:
      - app-network
...
The Dockerfile COPY directive copies content from the build context (the host-system directory in the build: line) into the image. If you're just trying to move around content within the image, you can RUN cp or RUN mv to use the ordinary Linux shell commands instead.
RUN npm run build --output-path=./dist \
 && mkdir -p /var/www \
 && cp -a dist /var/www/front
Quick question regarding Dockerfile.
I've got a folder structure like so:
docker-compose.yml
client
src
package.json
Dockerfile
...etc
The client folder contains a React application, and the root is a Node.js server with TypeScript. I've created a Dockerfile like so:
FROM node
RUN mkdir -p /server/node_modules && chown -R node:node /server
WORKDIR /server
USER node
COPY package*.json ./
RUN npm install
COPY --chown=node:node . ./dist
RUN npm run build-application
COPY /src/views ./dist/src/views
COPY /src/public ./dist/src/public
EXPOSE 4000
CMD node dist/src/index.js
The npm run build-application command builds the client (npm run build --prefix ./client) and the server (rimraf dist && mkdir dist && tsc -p .). The problem is that Docker cannot find the client folder; the error is:
npm ERR! enoent ENOENT: no such file or directory, open '/server/client/package.json'
npm ERR! enoent This is related to npm not being able to find a file.
Can someone explain why? And how to fix this?
Docker compose file:
...
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
    image: mazosios-pedutes-server
    container_name: mazosios-pedutes-server
    restart: unless-stopped
    networks:
      - app-network
    env_file:
      - ./server/.env
    ports:
      - "4000:4000"
Since the error says there is no client/package.json in /server, my question is the following one:
Is your ./client directory actually located within ./server (the build context)?
The Dockerfile WORKDIR instruction makes every command that follows it execute within the directory passed to WORKDIR as a parameter.
I guess if you add RUN tree -d (lists only nested directories) after the last COPY instruction, you will be able to see where your client directory ended up and fix the path to it.
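A minimal sketch of that debugging step, assuming the default Debian-based node image where tree has to be installed first (drop these lines again once the path is fixed):
# temporary debugging aid: print the directory tree after the last COPY
RUN apt-get update && apt-get install -y tree && tree -d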
I want to create a volume for the "public" folder of my Express app on Docker. When users upload pictures, I save them to "public/uploads", but when I change the code and have to rebuild with docker-compose run --build, I lose all these images.
I tried to find a way to create a volume, but I don't know how to link it.
My Dockerfile only consist of these:
FROM node:8.10.0-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
# RUN npm ci --only=production
COPY . .
CMD [ "npm", "start" ]
My goal is to serve uploaded images from "public/uploads" and not have them removed by docker-compose run --build.
According to the official documentation, you can use the --mount flag:
# Dockerfile
FROM node:8.10.0-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
# RUN npm ci --only=production
# note: a RUN --mount bind is only available during this single build step (requires BuildKit)
RUN --mount=type=bind,source=public/uploads,target=/some_location_in_file_system ls /some_location_in_file_system
COPY . .
CMD [ "npm", "start" ]
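That said, a RUN --mount is only visible while the image is being built. For uploads that must survive rebuilds, a named volume declared in docker-compose is a common approach; here is a minimal sketch, assuming the service is called app (a guess) and using the /usr/src/app workdir from the Dockerfile above:
version: '3'
services:
  app:
    build: .
    volumes:
      - uploads:/usr/src/app/public/uploads
volumes:
  uploads:
The named volume uploads is managed by Docker and is not deleted when the image is rebuilt, so files written to public/uploads persist across rebuilds.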
QUESTION: (edited: solution is added at the end of this post)
I have a VueJS project (developed with webpack) that I want to dockerize.
My Dockerfile looks like:
FROM node:8.11.1 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "run", "dev"]
which is basically following the flow from this post.
I also have a .dockerignore file, where I copied the same entries from my .gitignore; it looks like:
.DS_Store
node_modules/
/dist/
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Editor directories and files
.idea
.vscode
*.suo
*.ntvs*
*.njsproj
*.sln
.git/
I have created a docker image with the command:
docker build -t test/my-image-name .
and then run it into a container with the command:
docker run -it -p 8080:8080 --rm --name my-container-name test/my-image-name
As a result of this last command, I got the same output in the terminal (the output webpack / vuejs normally shows while debugging) as when I run the app locally.
BUT: in the end, the app does not load in the browser window.
If I run the commands docker images and docker ps, I can see that the image and the container are there, and while creating them I did not get any error messages.
I found this post and made a few attempts at changing the Dockerfile:
Option 1
FROM node:8.11.1 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
ENTRYPOINT ["ng", "serve", "-H", "0.0.0.0"]
Option 2
FROM node:8.11.1 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
ENTRYPOINT ["ng", "serve", "-H", "0.0.0.0"]
EXPOSE 8080
CMD ["npm", "run", "dev"]
But it seems neither of them works.
By the way, my package.json scripts look like:
"scripts": {
"dev": "webpack-dev-server --inline --progress --config build/webpack.dev.conf.js",
"start": "npm run dev",
"build": "node build/build.js"
}
So I'm wondering: how can I get the app to open in the browser from the Docker image?
SOLUTION: I'm not sure exactly which of these was the fix, but I did two things. As mentioned, I'm working with VueJS and webpack, so inside the file config/index.js, which initially looked like:
module.exports = {
  dev: {
    // Paths
    assetsSubDirectory: 'static',
    assetsPublicPath: '/',
    proxyTable: {},

    // Various Dev Server settings
    host: 'localhost', // <---- this one
    port: 8080,
I changed the host property from 'localhost' to '0.0.0.0', removed the EXPOSE 8080 line from the Dockerfile (the initial Dockerfile from my question above) since I noticed that the port from the config file is used by default, and also restarted the Docker tool installed on my local machine.
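For reference, a sketch of the edited block in config/index.js (the comment is mine, not from the original file):
// Various Dev Server settings
host: '0.0.0.0', // listen on all interfaces so the port published by docker run -p 8080:8080 is reachable
port: 8080,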