I'm trying to set up a Docker container running a Next.js app for development purposes using Docker Compose, so it can be used alongside other containers (such as an API or other services). What I need is for the Next.js app to recompile in real time whenever the source code is updated, so I created a volume for the container mapped to the source code on my host machine. The problem is that the app in the container does not update after changes are applied to the source code. The container does receive the updated files (the volume is mapped correctly), but the application does not apply the changes and does not recompile. The Docker Compose console output says that @next/swc-linux-x64 is not installed:
I believe there is some issue with the image, because when I run npm run dev on my local machine it does not show that warning and it works well (after a change is made to the source code, the app recompiles and applies the changes):
So the questions are: Is it possible to create a development container for a Next.js app using Docker? Will it be able to recompile after a change is made to the source code on my host machine? What could be the reason that the app does not pick up changes made to the source code?
My host OS is Windows 11, and my Node version is 18.14.0.
My project structure is:
The files that I'm using are the following:
Dockerfile.dev:
FROM node:18.14.0
WORKDIR /app
COPY package.json ./
RUN npm install
COPY /public .
COPY /src .
EXPOSE 3000
ENV NODE_ENV=development
CMD ["npm", "run", "dev"]
docker-compose.dev.yml:

services:
  frontend:
    environment:
      NODE_ENV: development
    container_name: frontend
    build:
      context: ./frontend
      dockerfile: Dockerfile.dev
    restart: always
    ports:
      - 3000:3000
    volumes:
      - ./frontend:/app
    command: npm run dev
    networks:
      - my_network

networks:
  my_network:
next.config.js:

/**
 * @type {import('next').NextConfig}
 */
module.exports = {
  output: process.env.NODE_ENV === 'production' ? 'standalone' : 'module',
};
package.json:

{
  "private": true,
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start"
  },
  "dependencies": {
    "@popperjs/core": "^2.11.6",
    "bootstrap": "^5.2.1",
    "date-fns": "^2.29.2",
    "eslint-config-next": "^13.1.6",
    "gray-matter": "^4.0.3",
    "next": "^13.1.6",
    "react": "^18.2.0",
    "react-dom": "^18.2.0",
    "remark": "^14.0.2",
    "remark-html": "^15.0.1"
  }
}
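A pattern that often addresses both symptoms (a hedged sketch, not from the original post): keep the image's own node_modules out of the bind mount with an anonymous volume, so the Linux-specific @next/swc-linux-x64 installed in the image is not hidden by the Windows host folder, and force the file watcher into polling mode, which webpack generally needs to notice changes through a Windows bind mount. WATCHPACK_POLLING is the polling switch for the webpack 5 watcher used by recent Next.js versions; older setups used CHOKIDAR_USEPOLLING instead:

```yaml
# Sketch only: the frontend service from the question, with two additions.
services:
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile.dev
    environment:
      NODE_ENV: development
      WATCHPACK_POLLING: "true"   # poll for changes through the bind mount
    ports:
      - 3000:3000
    volumes:
      - ./frontend:/app
      - /app/node_modules         # keep the image's Linux node_modules
```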
I tried to find a solution for a long time: reloading nodemon in Docker while updating, e.g., index.js. I'm on Windows 10.
I have a Node project with Docker:
proj/backend/src/index.js:

const express = require('express')
const app = express()

app.get('/', (req, res) => {
  res.send('Hello world.')
})

const port = process.env.PORT || 3001
app.listen(port, () => {
  console.log(`Example app listening at http://localhost:${port}`)
})
proj/backend/package.json:

{
  "scripts": {
    "start": "node ./bin/www",
    "start:legacy": "nodemon --legacy-watch -L --watch src src/index.js"
  },
  "dependencies": {
    "express": "^4.17.2"
  },
  "devDependencies": {
    "nodemon": "^2.0.15"
  }
}
proj/backend/dev.Dockerfile:
FROM node:lts-alpine
RUN npm install --global nodemon
WORKDIR /usr/src/app
COPY . .
RUN npm ci
EXPOSE 3001
ENV DEBUG=playground:*
CMD npm run start:legacy
proj/docker-compose.dev.yml:

version: '3.8'
services:
  backend:
    image: backend-img
    build:
      context: ./backend
      dockerfile: ./dev.Dockerfile
    ports:
      - 3001:3001
    environment:
      - PORT=3001
If I am not wrong, a Docker container is designed to kill itself when its main process ends. When using nodemon (and updating the code), the process is stopped and restarted, and the container will stop. You could make npm start not be the main process, but this is not good practice.
Probably it's already late, but I will write it anyway.
There are misconceptions in your configuration.
In the "start:legacy" command you should use only one option to enable legacy watch, --legacy-watch or -L, not both, because these options are equivalent. According to the nodemon docs: "Via the CLI, use either --legacy-watch or -L for short".
Your Dockerfile configuration seems fine. But to synchronize your local machine's files and directories with the Docker container, you should use volumes in docker-compose. Your docker-compose file will then look something like:
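With only one of the equivalent flags kept, the script could read, for example:

```json
{
  "scripts": {
    "start:legacy": "nodemon --legacy-watch --watch src src/index.js"
  }
}
```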
version: '3.8'
services:
  backend:
    image: backend-img
    build:
      context: ./backend
      dockerfile: ./dev.Dockerfile
    volumes:
      - ./your_project_dir:/usr/src/app
      - /usr/src/app/node_modules
    ports:
      - 3001:3001
    environment:
      - PORT=3001
I believe that if you define the volumes, you will be able to make changes locally and the container will also see the changes.
I want to run an Nx workspace containing a NestJS project in a Docker container, in development mode. The problem is that I am unable to configure docker-compose + Dockerfile to make the project reload on save. I'm a bit confused about why this is not working, as I configured a small NestJS project (without Nx) in Docker and it had no issues reloading on save.
Surely I am not mapping the ports correctly or something.
version: "3.4"
services:
  nx-app:
    container_name: nx-app
    build: .
    ports:
      - 3333:3333
      - 9229:9229
    volumes:
      - .:/workspace
FROM node:14.17.3-alpine
WORKDIR /workspace
COPY . .
RUN ["npm", "i", "-g", "@nrwl/cli"]
RUN ["npm", "i"]
EXPOSE 3333
EXPOSE 9229
ENTRYPOINT ["nx", "serve", "main"]
I also tried adding an Angular application to the workspace and was able to reload it on save in the container without issues...
Managed to solve it by adding "poll": 500 to the project.json of the NestJS app/library.
"targets": {
  "build": {
    "executor": "@nrwl/node:webpack",
    ...
    "options": {
      ...
      "poll": 500
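For orientation, the surrounding structure of that project.json fragment looks roughly like this (a sketch with the poster's elided fields omitted, not the full file):

```json
{
  "targets": {
    "build": {
      "executor": "@nrwl/node:webpack",
      "options": {
        "poll": 500
      }
    }
  }
}
```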
I've noticed this is quite a common issue when working with containerized Cypress.
I found one topic here, but resetting settings isn't really a proper solution. In some cases it may be.
I'm using docker-compose to manage the build of my containers:

...
other services
...
  nginx:
    build:
      context: ./services/nginx
      dockerfile: Dockerfile
    restart: always
    ports:
      - 80:80
    depends_on:
      - users
      - client
  cypress:
    build:
      context: ./services/cypress
      dockerfile: Dockerfile
    depends_on:
      - nginx
Here's my cypress.json:

{
  "baseUrl": "http://172.17.0.1",
  "video": false
}
I know it's recommended to refer to the service directly, like "http://nginx", but that never worked for me, while referring to it by IP worked when I used non-containerized Cypress. Now I'm using Cypress in a container to make it consistent with all the other services, but Cypress is giving me a hard time. I'm not including volumes because so far I haven't seen a reason to include them; I don't need to persist any data at this point.
Dockerfile:
FROM cypress/base:10.18.0
RUN mkdir /app
WORKDIR /app
ADD cypress_dev /app
RUN npm i --save-dev cypress@4.9.0
RUN $(npm bin)/cypress verify
RUN ["npm", "run", "cypress:e2e"]
package.json:

{
  "scripts": {
    "cypress:e2e": "cypress run"
  }
}
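One detail worth flagging in the Dockerfile above (an observation, not from the original thread): RUN npm run cypress:e2e executes while the image is being built, before docker-compose has started nginx, so the tests cannot reach the site at that point. Moving the test run into CMD defers it to container start, when the compose network (and usually the http://nginx service name) is available. A sketch under that assumption:

```dockerfile
FROM cypress/base:10.18.0
WORKDIR /app
ADD cypress_dev /app
RUN npm i --save-dev cypress@4.9.0
RUN $(npm bin)/cypress verify
# Run the tests when the container starts, not while the image builds
CMD ["npm", "run", "cypress:e2e"]
```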
I'd very much appreciate any guidance. Please do ask for more info if I haven't provided enough.
I've created a simple docker with a nodejs server.
FROM node:12.16.1-alpine
WORKDIR /usr/src
COPY ./app/package.json .
RUN yarn
COPY ./app ./app
This works great and the service is running.
Now I'm trying to run the docker with a volume for local development using docker compose:
version: "3.4"
services:
  web:
    image: my-node-app
    volumes:
      - ./app:/usr/src/app
    ports:
      - "8080:8080"
    command: ["yarn", "start"]
    build:
      context: .
      dockerfile: ./app/Dockerfile
This is my folder structure in the host:
The service works without the volume. When I add the volume, /usr/src/app is empty (even though it is populated on the host, as shown in the folder structure).
Inspecting the docker container I get the following mount config:
"Mounts": [
  {
    "Type": "bind",
    "Source": "/d/development/dockerNCo/app",
    "Destination": "/usr/src/app",
    "Mode": "rw",
    "RW": true,
    "Propagation": "rprivate"
  }
],
But still, browsing to the folder via shell of vscode show it as empty.
In addition, the command: docker volume ls shows an empty list.
I'm running docker 18.09.3 on windows 10.
Is there anything wrong with the configuration? How is it supposed to work?
Adding the volume to your service will hide all the files in /usr/src/app and mount the content of ./app from your host machine in their place. This also means that all files generated by running yarn in the Docker image will be lost, because they exist only in the image. This is the expected behaviour of adding a volume in Docker and is not a bug.

volumes:
  - ./app:/usr/src/app

Usually, and for non-development environments, you don't need the volume here at all.
If you would like to see the files on your host, you need to run the yarn command from docker-compose (you can use an entrypoint).
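As a sketch of that idea (the command override is an assumption, using the names from the question): run yarn at container start so the dependencies are written into the bind mount, where the host can see them, and only then start the app:

```yaml
services:
  web:
    image: my-node-app
    volumes:
      - ./app:/usr/src/app
    # Install into the mounted directory at startup, then launch the app
    command: ["sh", "-c", "yarn && yarn start"]
    ports:
      - "8080:8080"
```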
I have a little Vue.js app running in Docker.
When I run the app via yarn serve it runs fine, and it also does in Docker.
My problem is that hot reloading will not work.
My Dockerfile:
FROM node:12.2.0-alpine
WORKDIR /app
COPY package.json /app/package.json
RUN npm install
RUN npm install @vue/cli -g
CMD ["npm", "run", "serve"]
My docker-compose.yml:
version: '3.7'
services:
  client:
    container_name: client
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/app'
      - '/app/node_modules'
    ports:
      - '8082:8080'
Can anyone see the mistake I made?
I found a solution:
I added the following to my compose file:
environment:
  - CHOKIDAR_USEPOLLING=true
What has worked for me in the past is to use this in the docker-compose.yml file:
frontend:
  build:
    context: .
    dockerfile: vuejs.Dockerfile
  # command to start the development server
  command: npm run serve
  # ------------------ #
  volumes:
    - ./frontend:/app
    - /app/node_modules # <---- this enables a much faster start/reload
  ports:
    - "8080:8080"
  environment:
    - CHOKIDAR_USEPOLLING=true # <---- this enables the hot reloading
Also expose port 8080 in the Dockerfile (note that Dockerfile comments must be on their own line):

FROM node:12.2.0-alpine
# add this line in the Dockerfile:
EXPOSE 8080
WORKDIR /app
COPY package.json /app/package.json
RUN npm install
RUN npm install @vue/cli -g
CMD ["npm", "run", "serve"]
And the docker-compose file:

version: '3.7'
services:
  client:
    container_name: client
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/app'
      - '/app/node_modules'
    ports:
      - '8080:8080'
The server will be running at localhost:8080.
One of the answers above suggests setting an environment variable for chokidar polling. According to this issue, you can also set the polling option to true in vue.config.js.
module.exports = {
  configureWebpack: {
    devServer: {
      port: 3000,
      // https://github.com/vuejs-templates/webpack/issues/378
      watchOptions: {
        poll: true,
      },
    },
  }
};
Additionally, make sure that the volume you are mounting is correct as per your working dir, etc. to ensure that the files are watched correctly.
For me, the problem was working on Windows + Docker Desktop. After switching to WSL2 + Docker Desktop, hot reload worked again without needing any additional work or variables.