QUESTION: (edited: solution is added at the end of this post)
I have a VueJS project (built with webpack) which I want to dockerize.
My Dockerfile looks like:
FROM node:8.11.1 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "run", "dev"]
which is basically following the flow from this post.
I also have a .dockerignore file, into which I copied the same entries as my .gitignore; it looks like:
.DS_Store
node_modules/
/dist/
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Editor directories and files
.idea
.vscode
*.suo
*.ntvs*
*.njsproj
*.sln
.git/
I have created a docker image with the command:
docker build -t test/my-image-name .
and then run it into a container with the command:
docker run -it -p 8080:8080 --rm --name my-container-name test/my-image-name
As a result of this last command, I got the same terminal output (the usual webpack / VueJS dev-server output) as when I run the app locally.
BUT: in the end, the app does not load in the browser window.
If I run the commands docker images and docker ps I can see that the image and the container are there, and while creating them I did not get any error messages.
I found this post and made a few attempts at changing the Dockerfile:
Option 1
FROM node:8.11.1 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
ENTRYPOINT ["ng", "serve", "-H", "0.0.0.0"]
Option 2
FROM node:8.11.1 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
ENTRYPOINT ["ng", "serve", "-H", "0.0.0.0"]
EXPOSE 8080
CMD ["npm", "run", "dev"]
But neither of them seems to work.
btw. my package.json file looks like:
"scripts": {
"dev": "webpack-dev-server --inline --progress --config build/webpack.dev.conf.js",
"start": "npm run dev",
"build": "node build/build.js"
}
So I'm wondering: how to make the app to be opened in the browser from the docker image?
SOLUTION: I'm not sure exactly which change did the trick, but I did two things. As mentioned, I'm working with VueJS and webpack, so inside the file config/index.js, which initially looked like:
module.exports = {
  dev: {
    // Paths
    assetsSubDirectory: 'static',
    assetsPublicPath: '/',
    proxyTable: {},
    // Various Dev Server settings
    host: 'localhost', // <---- this one
    port: 8080,
I changed the host property from 'localhost' to '0.0.0.0', removed the EXPOSE 8080 line from the Dockerfile (the initial Dockerfile from my question above) since I noticed that the port from the config file is used by default, and also restarted the Docker tool installed on my local machine. The resulting dev block is sketched below.
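A minimal sketch of what the config/index.js dev block looks like after that change (only host differs from the snippet above; the remaining options are assumed unchanged):
module.exports = {
  dev: {
    // Paths
    assetsSubDirectory: 'static',
    assetsPublicPath: '/',
    proxyTable: {},
    // Various Dev Server settings
    host: '0.0.0.0', // bind to all interfaces so Docker's port mapping can reach the dev server
    port: 8080,
    // ... remaining dev options unchanged
  },
  // ... rest of the config unchanged
};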
Related
I am having an issue with my docker-compose configuration file. My goal is to run a Next.js app with a docker-compose file and enable hot reload.
Running the Next.js app from its Dockerfile works but hot reload does not work.
Running the Next.js app from the docker-compose file triggers the error /bin/sh: next: not found and I was not able to figure out what's wrong...
Dockerfile: (taken from Next.js' documentation website)
[Notice it's a multistage build however, I am only referencing the builder stage in the docker-compose file.]
# Install dependencies only when needed
FROM node:18-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install # --frozen-lockfile
# Rebuild the source code only when needed
FROM node:18-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
ENV NEXT_TELEMETRY_DISABLED 1
RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build
# Production image, copy all the files and run next
FROM node:18-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# You only need to copy next.config.js if you are NOT using the default configuration
# COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/package.json ./package.json
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3001
ENV PORT 3001
CMD ["node", "server.js"]
docker-compose.yml:
version: "3.9"
services:
db:
image: postgres
volumes:
- ./tmp/db:/var/lib/postgresql/data
environment:
POSTGRES_PASSWORD: ${POSTGRESQL_PASSWORD}
backend:
build: .
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- .:/myapp
ports:
- "3000:3000"
depends_on:
- db
environment:
DATABASE_USERNAME: ${MYAPP_DATABASE_USERNAME}
DATABASE_PASSWORD: ${POSTGRESQL_PASSWORD}
frontend:
build:
context: ./frontend
dockerfile: Dockerfile
target: builder
command: yarn dev
volumes:
- ./frontend:/app
expose:
- "3001"
ports:
- "3001:3001"
depends_on:
- backend
environment:
FRONTEND_BUILD: ${FRONTEND_BUILD}
PORT: 3001
package.json:
{
  "private": true,
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start"
  },
  "dependencies": {
    "next": "latest",
    "react": "^18.1.0",
    "react-dom": "^18.1.0"
  }
}
When calling yarn dev from docker-compose.yml it actually calls next dev and that's when it triggers the error /bin/sh: next: not found. However, running the container straight from the Dockerfile works and does not lead to this error.
[Update]:
If I remove the volumes attribute from my docker-compose.yml file, I don't get the /bin/sh: next: not found error and the container runs; however, I then don't get the hot reload feature I am looking for. Any idea why the volume is messing up the /bin/sh next command?
This is happening because your local filesystem is being mounted over what is in the docker container. Your docker container does build the node modules in the builder stage, but I'm guessing you don't have the node modules available in your local file system.
To see if this is what is happening, on your local file system, you can do a yarn install. Then try running your container via docker again. I'm predicting that this will work, as yarn will have installed next locally, and it is actually your local file system's node modules that will be run in the docker container.
One way to fix this is to volume mount everything except the node modules folder. Details on how to do that: Add a volume to Docker, but exclude a sub-folder
So in your case, I believe you can add a line to your compose file:
frontend:
  ...
  volumes:
    - ./frontend:/app
    - /app/node_modules # <-- try adding this! (an anonymous volume at the container path)
  ...
That declares an anonymous volume at /app/node_modules, so the container's node_modules are not overwritten by the bind mount.
I was trying to dockerize my existing simple Vue app, following this tutorial from the Vue website: https://v2.vuejs.org/v2/cookbook/dockerize-vuejs-app.html. I successfully created the image and the container. My problem is that when I edit my code, e.g. a "hello world" in App.vue, it does not automatically update (is this what they call hot reload?). Or should I migrate to the latest Vue so that it works?
docker run -it --name=mynicevue -p 8080:8080 mynicevue/app
FROM node:lts-alpine
# install simple http server for serving static content
RUN npm install -g http-server
# make the 'app' folder the current working directory
WORKDIR /app
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
# install project dependencies
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# build app for production with minification
# RUN npm run build
EXPOSE 8080
CMD [ "http-server", "serve" ]
EDIT:
Still no luck. I commented out the npm run build step. I also set up vue.config.js and added this code:
module.exports = {
  devServer: {
    watchOptions: {
      ignored: /node_modules/,
      aggregateTimeout: 300,
      poll: 1000,
    },
  }
};
then I run the container like this:
docker run -it --name=mynicevue -v %cd%:/app -p 8080:8080 mynicevue/app
When the app launches in the browser, I get this error in the terminal and the browser shows a white screen:
"GET /" Error (404): "Not found"
Can someone tell me what is wrong or missing in my Dockerfile so that I can run my Vue app using Docker?
Thank you in advance.
Okay, I tried your project locally and here's how to do it.
Dockerfile
FROM node:lts-alpine
# bind your app to the gateway IP
ENV HOST=0.0.0.0
# make the 'app' folder the current working directory
WORKDIR /app
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
# install project dependencies
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
EXPOSE 8080
ENTRYPOINT [ "npm", "run", "dev" ]
Use this command to run the docker image after you build it:
docker run -v ${PWD}/src:/app/src -p 8080:8080 -d mynicevue/app
Explanation
It seems that the Vue dev server needs to be bound to 0.0.0.0 (all interfaces, not just localhost) when it is served from within a container, hence ENV HOST=0.0.0.0 inside the Dockerfile.
You need to mount your src directory to the running container's /app/src directory so that changes in your local filesystem are directly reflected and visible in the container itself.
The file watching / hot reload comes from running npm run dev, hence ENTRYPOINT [ "npm", "run", "dev" ] in the Dockerfile.
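For completeness, the image used in that run command is built the usual way from the project root first (assuming the Dockerfile above is saved there):
docker build -t mynicevue/app .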
If you tried the previous answers and it still doesn't work, try adding watch: { usePolling: true } to your vite.config.js file (file-change events from the host often do not reach the container, so the dev server has to poll for changes):
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'

// https://vitejs.dev/config/
export default defineConfig({
  plugins: [vue()],
  server: {
    host: true,
    port: 4173,
    watch: {
      usePolling: true
    }
  }
})
OK, I have run out of ideas now. I am trying to get nodemon to work in a dockerized (Docker Toolbox, Windows 8.1) simple Node.js app.
file structure
Dockerfile
FROM node:latest
USER root
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
RUN npm install -g nodemon
COPY . .
EXPOSE 3000
CMD [ "npm", "run", "devStart" ]
script
"devStart": "nodemon --ext ejs,js,json,css --watch server --watch views server/server.js",
docker run -it output
Everything seems to match, but when I edit a partial/view nodemon does not restart.
I tried with nodemon as a production (i.e. not dev) dependency and then npx nodemon... that wouldn't work either.
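One direction worth trying, in the same spirit as the polling workarounds used elsewhere in this thread (not verified for this exact setup): Docker Toolbox shares the project folder through VirtualBox, and file-change events often do not propagate into the container, so nodemon's default watcher never fires. nodemon's --legacy-watch (-L) flag switches it to polling; the devStart script above would then become (only the flag is new, everything else is copied from the question):
"devStart": "nodemon --legacy-watch --ext ejs,js,json,css --watch server --watch views server/server.js",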
I'm trying to create a Docker container to act as a test environment for my application. I am using the following Dockerfile:
FROM node:14.4.0-alpine
WORKDIR /test
COPY package*.json ./
RUN npm install .
CMD [ "npm", "test" ]
As you can see, it's pretty simple. I only want to install all dependencies but NOT copy the code, because I will run that container with the following command:
docker run -v `pwd`:/test -t <image-name>
But the problem is that the node_modules directory is deleted when I mount the volume with -v. Any workaround to fix this?
When you bind-mount the test directory with $PWD, your container's test directory is overridden/mounted with $PWD, so you no longer have node_modules in the test directory.
To fix this issue, you have two options.
You can run npm install in a separate directory like /node, mount your code into the test directory, and point Node at the modules with an environment variable such as export NODE_PATH=/node/node_modules.
The Dockerfile will then look like:
FROM node:14.4.0-alpine
WORKDIR /node
COPY package*.json ./
RUN npm install .
WORKDIR /test
# point Node at the modules installed under /node, as described above
ENV NODE_PATH=/node/node_modules
CMD [ "npm", "test" ]
Or you can write an Entrypoint.sh script that copies the node_modules folder into the test directory at container runtime.
FROM node:14.4.0-alpine
WORKDIR /node
COPY package*.json ./
RUN npm install .
WORKDIR /test
COPY Entrypoint.sh ./
# run the script through sh so it does not need the executable bit set
ENTRYPOINT ["sh", "./Entrypoint.sh"]
and Entrypoint.sh is something like
#!/bin/sh
# alpine images ship /bin/sh rather than bash
cp -r /node/node_modules /test/.
npm test
Approach 1
A workaround is to use:
CMD npm install && npm run dev
Approach 2
Have Docker install node_modules during docker-compose build and run the app with docker-compose up; the files and commands are shown below.
Folder Structure
docker-compose.yml
version: '3.5'
services:
  api:
    container_name: /$CONTAINER_FOLDER
    build: ./$LOCAL_FOLDER
    hostname: api
    volumes:
      # map local to remote folder, exclude node_modules
      - ./$LOCAL_FOLDER:/$CONTAINER_FOLDER
      - /$CONTAINER_FOLDER/node_modules
    expose:
      - 88
Dockerfile
FROM node:14.4.0-alpine
WORKDIR /test
COPY ./package.json .
RUN npm install
# run command
CMD npm run dev
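With those two files in place, the flow described in Approach 2 is (assuming the compose file sits next to the Dockerfile and the referenced environment variables are set):
docker-compose build
docker-compose up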
I want to create a volume for the "public" folder of my Express app in Docker, because when users upload pictures I save them to "public/uploads", but when I make code changes and have to rebuild with docker-compose run --build, I lose all of those images.
I tried to find a way to create a volume but I don't know how to link it.
My Dockerfile only consist of these:
FROM node:8.10.0-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
# RUN npm ci --only=production
COPY . .
CMD [ "npm", "start" ]
My goal is to serve the uploaded images from "public/uploads" and not have them removed by docker-compose run --build.
According to the official documentation, you can use the --mount flag:
//Dockerfile
FROM node:8.10.0-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
# RUN npm ci --only=production
# RUN --mount needs a command to execute while the mount is in place, e.g.:
RUN --mount=target=/some_location_in_file_system,type=bind,source=public/uploads ls /some_location_in_file_system
COPY . .
CMD [ "npm", "start" ]