I have three services sharing one common package.
I've set up a docker-compose.yaml file to run these services together (each service has its own container), and I've configured hot reload for each service using a bind mount.
I also have one shared package, common, that is used by each service.
What I want is for a change in the common package to trigger a reload of each service: if I change some code in the common package, all three services should reload.
This is my docker-compose.yaml file:
version: '3.8'
services:
mongo_launcher:
container_name: mongo_launcher
image: mongo:6.0.2
restart: on-failure
networks:
- dashboard_network
volumes:
- ./docker/scripts/mongo-setup.sh:/scripts/mongo-setup.sh
entrypoint: ['sh', '/scripts/mongo-setup.sh']
mongo_replica_1:
container_name: mongo_replica_1
image: mongo:6.0.2
ports:
- 27017:27017
restart: always
entrypoint:
[
'/usr/bin/mongod',
'--bind_ip_all',
'--replSet',
'dbrs',
'--dbpath',
'/data/db',
'--port',
'27017',
]
volumes:
- ./.volumes/mongo/replica1:/data/db
- ./.volumes/mongo/replica1/configdb:/data/configdb
networks:
- dashboard_network
mongo_replica_2:
container_name: mongo_replica_2
image: mongo:6.0.2
ports:
- 27018:27018
restart: always
entrypoint:
[
'/usr/bin/mongod',
'--bind_ip_all',
'--replSet',
'dbrs',
'--dbpath',
'/data/db',
'--port',
'27018',
]
volumes:
- ./.volumes/mongo/replica2:/data/db
- ./.volumes/mongo/replica2/configdb:/data/configdb
networks:
- dashboard_network
mongo_replica_3:
container_name: mongo_replica_3
image: mongo:6.0.2
ports:
- 27019:27019
restart: always
entrypoint:
[
'/usr/bin/mongod',
'--bind_ip_all',
'--replSet',
'dbrs',
'--dbpath',
'/data/db',
'--port',
'27019',
]
volumes:
- ./.volumes/mongo/replica3:/data/db
- ./.volumes/mongo/replica3/configdb:/data/configdb
networks:
- dashboard_network
backend:
container_name: backend
build:
context: .
dockerfile: ./docker/Dockerfile.backend-dev
env_file:
- ./apps/backend/envs/.env.development
- ./docker/envs/.env.development
ports:
- 3000:3000
restart: always
depends_on:
- mongo_replica_1
- mongo_replica_2
- mongo_replica_3
networks:
- dashboard_network
volumes:
- type: bind
source: ./apps/backend/src
target: /dashboard/apps/backend/src
cli-backend:
container_name: cli-backend
build:
context: .
dockerfile: ./docker/Dockerfile.cli-backend-dev
env_file:
- ./apps/cli-backend/envs/.env.development
- ./docker/envs/.env.development
ports:
- 4000:4000
restart: always
depends_on:
- mongo_replica_1
- mongo_replica_2
- mongo_replica_3
networks:
- dashboard_network
volumes:
- type: bind
source: ./apps/cli-backend/src
target: /dashboard/apps/cli-backend/src
frontend:
container_name: frontend
build:
context: .
dockerfile: ./docker/Dockerfile.frontend-dev
env_file:
- ./apps/frontend/.env.development
ports:
- 8080:8080
restart: always
depends_on:
- backend
- cli-backend
networks:
- dashboard_network
volumes:
- type: bind
source: ./apps/frontend/src
target: /dashboard/apps/frontend/src
networks:
dashboard_network:
driver: bridge
This is a typical Dockerfile for a service (it may differ for each service, but the idea is the same):
FROM node:18
RUN curl -f https://get.pnpm.io/v6.16.js | node - add --global pnpm
WORKDIR /dashboard
COPY ./package.json ./pnpm-workspace.yaml ./.npmrc ./
COPY ./apps/backend/package.json ./apps/backend/
COPY ./packages/common/package.json ./packages/common/
COPY ./prisma/schema.prisma ./prisma/
RUN pnpm i -w
RUN pnpm --filter backend --filter common i
COPY ./tsconfig.base.json ./nx.json ./
COPY ./apps/backend/ ./apps/backend/
COPY ./packages/common/ ./packages/common/
CMD ["pnpm", "exec", "nx", "start:dev:docker", "backend"]
My pnpm-workspace.yaml file:
packages:
- 'apps/*'
- 'packages/*'
I also use the nx package; my nx.json file is:
{
"workspaceLayout": {
"appsDir": "apps",
"libsDir": "packages"
},
"tasksRunnerOptions": {
"default": {
"runner": "nx/tasks-runners/default",
"options": {
"cacheableOperations": ["build", "lint", "type-check", "depcheck", "stylelint"]
}
}
},
"namedInputs": {
"source": ["{projectRoot}/src/**/*"],
"jsSource": ["{projectRoot}/src/**/*.{ts,js,cjs}"],
"reactTsSource": ["{projectRoot}/src/**/*.{ts,tsx}"],
"scssSource": ["{projectRoot}/src/**/*.scss"]
},
"targetDefaults": {
"build": {
"inputs": ["source", "^source"],
"dependsOn": ["^build"]
},
"lint": {
"inputs": ["jsSource", "{projectRoot}/.eslintrc.cjs", "{projectRoot}/.eslintignore"],
"outputs": []
},
"type-check": {
"inputs": [
"reactTsSource",
"{projectRoot}/tsconfig.json",
"{projectRoot}/tsconfig.base.json",
"{workspaceRoot}/tsconfig.base.json"
],
"dependsOn": ["^build"],
"outputs": []
},
"depcheck": {
"inputs": ["{projectRoot}/.depcheckrc.json", "{projectRoot}/package.json"],
"outputs": []
},
"stylelint": {
"inputs": ["scssSource", "{projectRoot}/stylelint.config.cjs"]
},
"start:dev": {
"dependsOn": ["^build"]
},
"start:dev:docker": {
"dependsOn": ["^build"]
}
}
}
For each service, I installed the shared package in package.json#devDependencies:
"common": "workspace:1.0.0",
As you can see, my start:dev:docker script depends on the build script of the shared package, so the common package gets built inside the containers. These are the scripts in common/package.json:
"build": "rimraf ./dist && tsc --project ./tsconfig.build.json",
"start:dev": "tsc --project ./tsconfig.build.json --watch",
So I need to use start:dev somehow, but surely not via NX's dependsOn.
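One possible direction (a sketch, not a tested setup: the extra bind mount and the parallel watcher are assumptions building on the compose file above) is to bind-mount the shared package's source into each app container, so host edits to common reach the containers:

```yaml
# Hypothetical addition for the backend service (repeat for the others):
backend:
  volumes:
    - type: bind
      source: ./apps/backend/src
      target: /dashboard/apps/backend/src
    # NEW: mount the shared package's source so host edits reach the container
    - type: bind
      source: ./packages/common/src
      target: /dashboard/packages/common/src
```

Each container would then need to run the common package's start:dev watcher (tsc --watch) alongside the service's own dev command, for example from a small entrypoint script, so that common's dist output is rebuilt on change and each service's file watcher can pick it up and restart.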
Related
I have created a simple app connected to PostgreSQL and pgAdmin, as well as a web server, running in Docker containers.
My question is how I can make it reload, like with nodemon on a local server, without needing to delete the container every time.
I have been trying different solutions and methods I have seen around, but I haven't been able to make it work.
I have already tried adding command: ["npm", "run", "start:dev"] to the docker-compose file as well...
My files are:
Dockerfile
FROM node:latest
WORKDIR /
COPY package*.json ./
COPY . .
COPY database.json .
COPY .env .
EXPOSE 3000
CMD [ "npm", "run", "watch" ]
Docker-compose.file
version: '3.7'
services:
postgres:
image: postgres:latest
environment:
- POSTGRES_USER=test
- POSTGRES_PASSWORD=tes
- POSTGRES_DB=test
ports:
- 5432:5432
logging:
options:
max-size: 10m
max-file: "3"
pgadmin:
image: dpage/pgadmin4
environment:
- PGADMIN_DEFAULT_EMAIL=test@gmail.com
- PGADMIN_DEFAULT_PASSWORD=pasword123test
ports:
- "5050:80"
web:
build: .
# command: ["npm", "run", "start:dev"]
links:
- postgres
image: prueba
depends_on:
- postgres
ports:
- '3000:3000'
env_file:
- .env
Nodemon.json file:
{
"watch": ["dist"],
"ext": ".ts,.js",
"ignore": [],
"exec": "ts-node ./dist/server.js"
}
Package.json file:
"scripts": {
"start:dev": "nodemon",
"build": "rimraf ./dist && tsc",
"start": "npm run build && node dist/server.js",
"watch": "tsc-watch --esModuleInterop src/server.ts --outDir ./dist --onSuccess \"node ./dist/server.js\"",
"jasmine": "jasmine",
"test": "npm run build && npm run jasmine",
"db-test": "set ENV=test&& db-migrate -e test up && npm run test && db-migrate -e test reset",
"lint": "eslint . --ext .ts",
"prettier": "prettier --config .prettierrc src/**/*.ts --write",
"prettierLint": "prettier --config .prettierrc src/**/*.ts --write && eslint . --ext .ts --fix"
},
Thanks
The COPY . . command only runs when the image is built, which happens only the first time you run docker compose up (or when you explicitly rebuild). For the container to be aware of changes, the code changes on your host machine need to be synchronized with the code inside the container even after the build is complete.
Below I've added the volume mount to the web container in your docker-compose file and uncommented the command that should support hot reloading. I assumed that the source code you want to change lives in a src directory, but feel free to update this to reflect how your source code is organized.
version: '3.7'
services:
postgres:
image: postgres:latest
environment:
- POSTGRES_USER=test
- POSTGRES_PASSWORD=tes
- POSTGRES_DB=test
ports:
- 5432:5432
logging:
options:
max-size: 10m
max-file: "3"
pgadmin:
image: dpage/pgadmin4
environment:
- PGADMIN_DEFAULT_EMAIL=test@gmail.com
- PGADMIN_DEFAULT_PASSWORD=pasword123test
ports:
- "5050:80"
web:
build: .
command: ["npm", "run", "start:dev"]
links:
- postgres
image: prueba
depends_on:
- postgres
ports:
- '3000:3000'
env_file:
- .env
volumes:
# <host-path>:<container-path>
- ./src:/src/
If that isn't clear, here's an article that might help:
https://www.freecodecamp.org/news/how-to-enable-live-reload-on-docker-based-applications/
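A related gotcha: file-change events from a bind mount don't always propagate into the container (commonly on macOS/Windows hosts). If the mount is in place but reloads still don't fire, switching nodemon to polling may help. For example, a variant of the nodemon.json above (legacyWatch is nodemon's polling mode, equivalent to the -L CLI flag):

```json
{
  "watch": ["dist"],
  "ext": "ts,js",
  "legacyWatch": true,
  "exec": "node ./dist/server.js"
}
```

Note that this sketch also runs the compiled dist/server.js with plain node rather than ts-node, since the files in dist are already JavaScript.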
I want to ask why my Docker build can't find the axios module. I can't find a solution on the internet or in the documentation. This is my Dockerfile:
Dockerfile
# build stage
FROM node:lts-alpine as build-stage
# Create app directory
WORKDIR /app
# Install all dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
I have installed it with npm install axios --save, and also without the flag.
My package.json
{
"name": "frontend",
"version": "0.0.0",
"scripts": {
"dev": "vite --host 0.0.0.0",
"build": "vite build",
"preview": "vite preview --port 5050",
"lint": "eslint . --ext .vue,.js,.jsx,.cjs,.mjs --fix --ignore-path .gitignore"
},
"dependencies": {
"axios": "^0.27.2",
"primeicons": "^5.0.0",
"primevue": "^3.12.5",
"vee-validate": "^4.5.11",
"vue": "^3.2.31",
"vue-router": "^4.0.13",
"vuex": "^4.0.2"
},
"devDependencies": {
"#rushstack/eslint-patch": "^1.1.0",
"#vitejs/plugin-vue": "^2.3.1",
"#vue/eslint-config-prettier": "^7.0.0",
"eslint": "^8.5.0",
"eslint-plugin-vue": "^8.2.0",
"prettier": "^2.5.1",
"vite": "^2.9.5"
}
}
Output of docker-compose:
frontedMatol | Failed to resolve import "axios" from "src/services/auth.service.js". Does the file exist?
frontedMatol | 2:13:05 PM [vite] Internal server error: Failed to resolve import "axios" from "src/services/auth.service.js". Does the file exist?
frontedMatol | Plugin: vite:import-analysis
frontedMatol | File: /app/src/services/auth.service.js
frontedMatol | 1 | import axios from 'axios';
frontedMatol | | ^
frontedMatol | 2 | const API_URL = 'http://localhost:8000/api/v1/'
frontedMatol | 3 | class AuthService {
frontedMatol | at formatError (/app/node_modules/vite/dist/node/chunks/dep-27bc1ab8.js:36435:46)
frontedMatol | at TransformContext.error (/app/node_modules/vite/dist/node/chunks/dep-27bc1ab8.js:36431:19)
frontedMatol | at normalizeUrl (/app/node_modules/vite/dist/node/chunks/dep-27bc1ab8.js:59840:26)
frontedMatol | at async TransformContext.transform (/app/node_modules/vite/dist/node/chunks/dep-27bc1ab8.js:59989:57)
frontedMatol | at async Object.transform (/app/node_modules/vite/dist/node/chunks/dep-27bc1ab8.js:36672:30)
frontedMatol | at async doTransform (/app/node_modules/vite/dist/node/chunks/dep-27bc1ab8.js:55662:29)
I have to add docker-compose too:
version: '3.9'
services:
postgres:
container_name: postgresxx
image: postgres
environment:
POSTGRES_USER: xx
POSTGRES_PASSWORD: xx
PGDATA: /data/postgres
volumes:
- postgres:/data/postgres
ports:
- "5432:5432"
networks:
- postgres
restart: unless-stopped
pgadmin:
container_name: pgadminxx
image: dpage/pgadmin4
environment:
PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL:-pgadmin4@pgadmin.org}
PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD:-xx}
PGADMIN_CONFIG_SERVER_MODE: 'False'
volumes:
- pgadmin:/var/lib/pgadmin
ports:
- "5050:80"
networks:
- postgres
restart: unless-stopped
maildev:
container_name: mailDevXx
image: maildev/maildev
ports:
- "1080:1080"
- "1025:1025"
networks:
- spring
backend:
container_name: backendxx
image: xx/xx-backend:latest
ports:
- "8000:8000"
environment:
- SPRING_PROFILES_ACTIVE=docker
networks:
- postgres
- spring
depends_on:
- postgres
frontend:
container_name: frontedXx
build:
context: ../frontend
dockerfile: Dockerfile
working_dir: /app
command: [ "npm", "run", "dev" ]
env_file:
- ../config/env/server-developer.env
environment:
- CHOKIDAR_USEPOLLING=true
volumes:
- type: bind
source: ../frontend
target: /app
- type: volume
source: node_modules
target: /app/node_modules
ports:
- 8080:8080
depends_on:
- backend
networks:
postgres:
driver: bridge
spring:
driver: bridge
volumes:
postgres:
pgadmin:
node_modules:
I changed the names to xx to keep them private.
As I said, I am building the project with docker-compose -f name_file up/build, and the server can't import axios. All of the "tutorials" say to change from '../axios' to 'axios'; this doesn't help.
I'm also working in Visual Studio Code with the Docker extension; I went inside the container and checked its package.json and lock file. Both have an axios dependency, so I'm at a loss.
MOST IMPORTANTLY: when I run only Vite from the frontend dir with npm run dev, axios works.
I can share the repository flow if you need it.
I have a simple, working Laravel app which uses Docker and I am trying to add Vue.
Here is my docker-compose.yml:
version: '3.1'
services:
### THIS IS WHAT I ADDED
# Frontend service
frontend:
image: node:current-alpine
build: ./sandbox
container_name: my_app_frontend
ports:
- 8080:8080
volumes:
- "/app/node_modules"
- ".:/app"
command: "npm run serve"
###
#PHP Service
my_app:
build:
context: .
dockerfile: 'Dockerfile'
image: digitalocean.com/php
container_name: my_app
restart: unless-stopped
tty: true
environment:
SERVICE_NAME: app
SERVICE_TAGS: dev
working_dir: /var/www
volumes:
- ./:/var/www
- ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
networks:
- app-network
#NPM Service
my_app_npm:
image: node:latest
container_name: my_app_npm
volumes:
- ./:/var/www/
working_dir: /var/www/
tty: true
networks:
- app-network
#Nginx Service
my_app_server:
image: nginx:alpine
container_name: my_app_webserver
restart: unless-stopped
tty: true
ports:
- "82:80"
- "4443:443"
volumes:
- ./:/var/www
- ./nginx/conf.d/:/etc/nginx/conf.d/
networks:
- app-network
#MySQL Service
my_app_db:
image: mysql:5.7
command: --default-authentication-plugin=mysql_native_password --innodb-use-native-aio=0
container_name: my_app_db
restart: always
tty: true
ports:
- "8082:3306"
environment:
MYSQL_PASSWORD: xxxxx
MYSQL_ROOT_PASSWORD: xxxxx
volumes:
- ./mysql/my.cnf:/etc/mysql/my.cnf
networks:
- app-network
#Phpmyadmin Service
my_app_phpmyadmin:
image: phpmyadmin/phpmyadmin
container_name: my_app_phpmyadmin
restart: always
links:
- my_app_db
depends_on:
- my_app_db
ports:
- 8083:80
environment:
PMA_HOST: my_app_db
networks:
- app-network
#Docker Networks
networks:
app-network:
driver: bridge
Here is my Dockerfile in my Vue directory:
FROM node:lts-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
EXPOSE 8080
CMD ["npm", "run", "serve"]
The file structure looks like this:
-app
-database
-mysql
-nginx
-sandbox(vue)
-node_modules
-public
-src
Dockerfile
package.json
...
docker-compose.yml
Dockerfile
package.json
...
docker-compose build //works
docker-compose up //fails with this error
This relative module was not found:
my_app_frontend |
my_app_frontend | * ./src/main.js in multi (webpack)-dev-server/client/index.js (webpack)/hot/dev-server.js ./src/main.js
my_app_frontend | [webpack.Progress] 100%
What am I missing?
Figured it out, here is my updated Dockerfile.
FROM node:lts-alpine
# install simple http server for serving static content
WORKDIR /app
COPY package*.json ./
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
# THIS LINE WAS MISSING
COPY . .
EXPOSE 8080
CMD ["npm", "run", "serve"]
For those who are using Vue webpack in Laravel and trying to dockerize:
You just need to add the two lines below to your Dockerfile.
FROM node:10-alpine
RUN npm i -g webpack webpack-cli
Reference:
https://hub.docker.com/r/91dave/webpack-cli
I am running dev containers on my project, which uses docker-compose for multiple containers.
My issue is that I cannot view my docker-compose logs, and I am not sure how to access them.
Inside the folder .devcontainer I have two files:
devcontainer.json:
{
"name": "TrendR",
"dockerComposeFile": [
"../docker-compose.yml",
"docker-compose.yml"
],
"service": "api",
"workspaceFolder": "/workspace",
"settings": {
"python.pythonPath": "/usr/local/bin/python",
"python.linting.enabled": true,
"python.linting.pylintEnabled": true,
},
"extensions": ["ms-python.python","ms-azuretools.vscode-docker"]
}
docker-compose.yml:
version: '3.8'
services:
api:
volumes:
- .:/workspace:cached
- /var/run/docker.sock:/var/run/docker.sock
command: /bin/sh -c "while sleep 1000; do :; done"
This is the main docker-compose.yml inside the project folder.
version: "3.8"
services:
db:
container_name: db
image: postgres:13
ports:
- "5433:5432"
environment:
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_DB=${POSTGRES_DB}
volumes:
- /var/lib/postgresql/data
api:
build:
context: ./api/
dockerfile: Dockerfile
volumes:
- ./api/app:/app/app
ports:
- "1000:80"
depends_on:
- db
env_file:
- .env
command: ["/start-reload.sh"]
labels:
- "traefik.enable=true"
- "traefik.http.routers.${API_SUBDOMAIN}.rule=Host(`${API_SUBDOMAIN}.${DOMAIN}`)"
frontend:
build:
context: ./frontend/
dockerfile: Dockerfile
ports:
- "3000:3000"
volumes:
- /app/node_modules
- ./frontend:/app
environment:
- NODE_ENV=development
stdin_open: true
links:
- api
labels:
- "traefik.enable=true"
- "traefik.http.routers.${CLIENT_SUBDOMAIN}.rule=Host(`${CLIENT_SUBDOMAIN}.${DOMAIN}`)"
redis:
container_name: trendr_redis
image: "redis:alpine"
ports:
- "6379:6379"
traefik:
image: traefik:v2.4
ports:
- "80:80"
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock:ro"
- "$PWD/traefik/traefik.dev.toml:/etc/traefik/traefik.toml"
If you open a terminal and run docker-compose up in the same location as the docker-compose file, you should see your logs.
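Beyond watching the foreground output of docker-compose up, the docker-compose CLI can also fetch logs on demand from another terminal (the service name api here is taken from the compose file above):

```shell
# Follow logs for every service in the compose file
docker-compose logs -f

# Follow only the api service, starting from the last 100 lines
docker-compose logs -f --tail=100 api
```

This works whether the stack was started in the foreground or detached with docker-compose up -d.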
I have an application built in React running on Docker, and I am looking for a way to debug it. I am using Visual Studio Code. Here are my Dockerfile and docker-compose file:
FROM node:boron
ARG build_env
RUN mkdir /usr/share/unicode && cd /usr/share/unicode && wget ftp://ftp.unicode.org/Public/UNIDATA/UnicodeData.txt
COPY package.json /tmp/package.json
RUN cd /tmp && npm install
COPY ./shim/RelayDefaultNetworkLayer.js /tmp/node_modules/react-relay/lib/RelayDefaultNetworkLayer.js
COPY ./shim/buildRQL.js /tmp/node_modules/react-relay/lib/buildRQL.js
RUN mkdir -p /var/www && cp -a /tmp/node_modules /var/www/
WORKDIR /var/www
COPY . ./
RUN if [ "$build_env" != "development" ]; then npm run build-webpack && npm run gulp; fi
EXPOSE 8080
CMD ["npm", "run", "--debug=5858 prod"]
My docker-compose file looks like this:
version: '2'
services:
nginx:
container_name: nginx
image: openroad/nginx
build:
context: nginx
ports:
- "80:80"
volumes:
- ./nginx/nginx.development.conf:/etc/nginx/nginx.conf
networks:
- orion-network
graphql:
container_name: graphql
image: openroad/graphql
build:
context: integration_api
volumes:
- ./integration_api:/var/www
environment:
- NODE_ENV=development
command: npm run dev
working_dir: /var/www
networks:
orion-network:
ipv4_address: 172.16.238.10
pegasus:
container_name: pegasus
image: openroad/pegasus
build:
context: pegasus
args:
build_env: development
expose:
- "3000"
volumes:
- ./pegasus:/var/www/public
environment:
- NODE_ENV=development
command: npm run dev
working_dir: /var/www/public
extra_hosts:
- "local.pegasus.com:192.168.99.100"
networks:
orion-network:
ipv4_address: 172.16.238.11
frontend:
container_name: orion-frontend
image: openroad/orion-frontend
build:
context: orion-frontend
args:
build_env: development
expose:
- "3000"
ports:
- "5858:5858"
volumes:
- ./orion-frontend:/var/www/public
environment:
- NODE_ENV=development
command: npm run --debug=5858 dev
working_dir: /var/www/public
networks:
orion-network:
ipv4_address: 172.16.238.12
admin:
container_name: orion-admin
image: openroad/orion-admin
build:
context: orion-admin
args:
build_env: development
expose:
- "3000"
volumes:
- ./orion-admin:/var/www/
environment:
- NODE_ENV=development
command: npm run dev
working_dir: /var/www/
networks:
orion-network:
ipv4_address: 172.16.238.13
uploads:
container_name: orion-uploads
image: openroad/orion-uploads
build:
context: orion-uploads
volumes:
- ./orion-uploads:/var/www/
working_dir: /var/www/
networks:
orion-network:
ipv4_address: 172.16.238.14
dashboard:
container_name: orion-dashboard
image: openroad/orion-dashboard
build:
context: orion-dashboard
args:
build_env: development
volumes:
- ./orion-dashboard/src:/var/www/src
- ./orion-dashboard/package.json:/var/www/package.json
- ./orion-dashboard/webpack.config.babel.js:/var/www/webpack.config.babel.js
- ./orion-dashboard/node_modules:/var/www/node_modules
- ./orion-dashboard/data/babelRelayPlugin.js:/var/www/data/babelRelayPlugin.js
working_dir: /var/www
environment:
- NODE_ENV=development
- GRAPHQLURL=http://172.16.238.10:8080/graphql
- PORT=8080
command: npm run dev
networks:
orion-network:
ipv4_address: 172.16.238.15
networks:
orion-network:
driver: bridge
driver_opts:
com.docker.network.bridge.enable_ip_masquerade: "true"
ipam:
driver: default
config:
- subnet: 172.16.238.0/24
gateway: 172.16.238.1
I want the ability to debug the application running in the orion-frontend container. I have tried various options without any success, including https://codefresh.io/docker-tutorial/debug_node_in_docker/ and https://blog.docker.com/2016/07/live-debugging-docker/.
I may be wrong about the command syntax for npm run (I didn't find this flag in the npm docs), but you may need to separate the --debug=5858 and prod arguments, like this:
CMD ["npm", "run", "--debug=5858", "prod"]