I am starting a project using Turborepo with PNPM as the package manager, and the project is to be Dockerised. I installed Turborepo using npx create-turbo@latest (selecting PNPM during the setup) and have added a docker-compose.yaml file and a Dockerfile.local for each of my apps, which are docs, web and server. The apps/web Dockerfile.local can be found below, as that appears to be where the error is coming from; as you can see in the error logs, it's running through the web dependencies/devDependencies.
When I run my make local command (below) I get the following error in my terminal.
Terminal error
#0 0.732 @types/node is linked to /app/node_modules from /node_modules/.pnpm/@types+node@17.0.45/node_modules/@types/node
#0 0.733 @types/react is linked to /app/node_modules from /node_modules/.pnpm/@types+react@18.0.26/node_modules/@types/react
#0 0.733 @types/react-dom is linked to /app/node_modules from /node_modules/.pnpm/@types+react-dom@18.0.10/node_modules/@types/react-dom
#0 0.733 eslint is linked to /app/node_modules from /node_modules/.pnpm/eslint@7.32.0/node_modules/eslint
#0 0.733 typescript is linked to /app/node_modules from /node_modules/.pnpm/typescript@4.9.4/node_modules/typescript
#0 0.734 next is linked to /app/node_modules from /node_modules/.pnpm/next@13.0.0_pjwopsidmaokadturxaafygjp4/node_modules/next
#0 0.734 react is linked to /app/node_modules from /node_modules/.pnpm/react@18.2.0/node_modules/react
#0 0.734 react-dom is linked to /app/node_modules from /node_modules/.pnpm/react-dom@18.2.0_react@18.2.0/node_modules/react-dom
#0 0.734 react-query is linked to /app/node_modules from /node_modules/.pnpm/react-query@3.39.2_biqbaboplfbrettd7655fr4n2y/node_modules/react-query
#0 0.741 ERR_PNPM_NO_MATCHING_VERSION_INSIDE_WORKSPACE In : No matching version found for eslint-config-custom@* inside the workspace
#0 0.741
#0 0.741 This error happened while installing a direct dependency of /app
------
failed to solve: executor failed running [/bin/sh -c pnpm install]: exit code: 1
make: *** [local] Error 17
apps/web package.json
"name": "web",
"version": "0.0.0",
"private": true,
"scripts": {
"dev": "next dev --port 3002",
"build": "next build",
"start": "next start",
"lint": "next lint"
},
"dependencies": {
"next": "13.0.0",
"react": "18.2.0",
"react-dom": "18.2.0",
"react-query": "^3.39.2",
"ui": "workspace:*"
},
"devDependencies": {
"#babel/core": "^7.0.0",
"#types/node": "^17.0.12",
"#types/react": "^18.0.22",
"#types/react-dom": "^18.0.7",
"eslint": "7.32.0",
"eslint-config-custom": "workspace:*",
"tsconfig": "workspace:*",
"typescript": "^4.5.3"
}
}
Makefile
local:
	@docker-compose stop && docker-compose up --build --remove-orphans;
docker-compose.yaml
version: "3.9"
services:
frontend:
container_name: frontend
build:
context: ./apps/web
dockerfile: Dockerfile.local
restart: always
env_file: .env
ports:
- "${FRONTEND_PORT}:${FRONTEND_PORT}"
networks:
- gh-network
command: "npm start"
backend:
container_name: backend
build:
context: ./apps/server
dockerfile: Dockerfile.local
restart: always
env_file: .env
volumes:
- ./apps/server:/svr/app
- "./scripts/wait.sh:/wait.sh"
- /svr/app/node_modules
networks:
- gh-network
ports:
- "${BACKEND_PORT}:${BACKEND_PORT}"
depends_on:
- gh-pg-db
links:
- gh-pg-db
gh-pg-db:
image: postgres:12-alpine
restart: always
container_name: gh-pg-db
env_file:
- .env
environment:
POSTGRES_PASSWORD: ${DB_PASSWORD}
PGDATA: /var/lib/postgresql/data
POSTGRES_USER: ${DB_USER}
POSTGRES_DB: ${DB_NAME}
ports:
- "${DB_PORT}:${DB_PORT}"
volumes:
- pgdata:/var/lib/postgresql/data
networks:
- gh-network
pgadmin-portal:
image: dpage/pgadmin4
restart: always
container_name: pgadmin-portal
env_file:
- .env
environment:
PGADMIN_DEFAULT_PASSWORD: "${PGADMIN_DEFAULT_PASSWORD}"
PGADMIN_DEFAULT_EMAIL: "${PGADMIN_DEFAULT_EMAIL}"
volumes:
- pgadmin:/root/.pgadmin
ports:
- "${PGADMIN_PORT}:80"
depends_on:
- gh-pg-db
networks:
- gh-network
volumes:
pgdata:
pgadmin:
networks:
gh-network:
driver: bridge
apps/web Dockerfile.local
FROM node:14-alpine
RUN npm i -g pnpm
RUN pnpm --version
WORKDIR /app
COPY . .
RUN pnpm install
I have tried checking the eslint versions: in the web package.json it's 7.32.0, whereas in the shared eslint-config-custom package it's 7.23.0. I amended these to match, but it did not work.
My expected outcome is for the make local command to successfully boot up the project so I can see the initial Next/React files from Turborepo in my browser at port 3002.
I have looked at the following question (pnpm workspace:* dependencies), but unfortunately the question asker solved the problem by switching build tools, which I cannot do as I'm using Turborepo.
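For reference: because the build context is ./apps/web, pnpm inside the container cannot see the sibling workspace packages (eslint-config-custom, tsconfig, ui), which matches the ERR_PNPM_NO_MATCHING_VERSION_INSIDE_WORKSPACE error. A minimal sketch of a Dockerfile.local built from the monorepo root instead — assuming the shared packages live under packages/, and that the compose service is changed to context: . with dockerfile: apps/web/Dockerfile.local:

```dockerfile
# Sketch: build with the monorepo root as the context so workspace:* links resolve
FROM node:14-alpine
RUN npm i -g pnpm
WORKDIR /app
# Copy the workspace manifests and lockfile first so pnpm can see every package
COPY pnpm-workspace.yaml pnpm-lock.yaml package.json ./
COPY packages ./packages
COPY apps ./apps
# Install web plus its workspace dependencies
RUN pnpm install --filter web...
```

This is only one way to structure it; Turborepo's docs also describe `turbo prune --scope=web --docker`, which generates a pruned copy of the workspace to copy into the image.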
Related
I have 3 services sharing 1 common package.
I've set up a docker-compose.yaml file to run these services together (each service has its own container). I've also configured hot reload for each service using bind mounts.
Now, I also have one shared package, common, that is used by each service.
What I want is for a change in this package to trigger a reload of each service. By this I mean: if I change some code in the common package, all three services should reload.
This is my docker-compose.yaml file:
version: '3.8'
services:
mongo_launcher:
container_name: mongo_launcher
image: mongo:6.0.2
restart: on-failure
networks:
- dashboard_network
volumes:
- ./docker/scripts/mongo-setup.sh:/scripts/mongo-setup.sh
entrypoint: ['sh', '/scripts/mongo-setup.sh']
mongo_replica_1:
container_name: mongo_replica_1
image: mongo:6.0.2
ports:
- 27017:27017
restart: always
entrypoint:
[
'/usr/bin/mongod',
'--bind_ip_all',
'--replSet',
'dbrs',
'--dbpath',
'/data/db',
'--port',
'27017',
]
volumes:
- ./.volumes/mongo/replica1:/data/db
- ./.volumes/mongo/replica1/configdb:/data/configdb
networks:
- dashboard_network
mongo_replica_2:
container_name: mongo_replica_2
image: mongo:6.0.2
ports:
- 27018:27018
restart: always
entrypoint:
[
'/usr/bin/mongod',
'--bind_ip_all',
'--replSet',
'dbrs',
'--dbpath',
'/data/db',
'--port',
'27018',
]
volumes:
- ./.volumes/mongo/replica2:/data/db
- ./.volumes/mongo/replica2/configdb:/data/configdb
networks:
- dashboard_network
mongo_replica_3:
container_name: mongo_replica_3
image: mongo:6.0.2
ports:
- 27019:27019
restart: always
entrypoint:
[
'/usr/bin/mongod',
'--bind_ip_all',
'--replSet',
'dbrs',
'--dbpath',
'/data/db',
'--port',
'27019',
]
volumes:
- ./.volumes/mongo/replica3:/data/db
- ./.volumes/mongo/replica3/configdb:/data/configdb
networks:
- dashboard_network
backend:
container_name: backend
build:
context: .
dockerfile: ./docker/Dockerfile.backend-dev
env_file:
- ./apps/backend/envs/.env.development
- ./docker/envs/.env.development
ports:
- 3000:3000
restart: always
depends_on:
- mongo_replica_1
- mongo_replica_2
- mongo_replica_3
networks:
- dashboard_network
volumes:
- type: bind
source: ./apps/backend/src
target: /dashboard/apps/backend/src
cli-backend:
container_name: cli-backend
build:
context: .
dockerfile: ./docker/Dockerfile.cli-backend-dev
env_file:
- ./apps/cli-backend/envs/.env.development
- ./docker/envs/.env.development
ports:
- 4000:4000
restart: always
depends_on:
- mongo_replica_1
- mongo_replica_2
- mongo_replica_3
networks:
- dashboard_network
volumes:
- type: bind
source: ./apps/cli-backend/src
target: /dashboard/apps/cli-backend/src
frontend:
container_name: frontend
build:
context: .
dockerfile: ./docker/Dockerfile.frontend-dev
env_file:
- ./apps/frontend/.env.development
ports:
- 8080:8080
restart: always
depends_on:
- backend
- cli-backend
networks:
- dashboard_network
volumes:
- type: bind
source: ./apps/frontend/src
target: /dashboard/apps/frontend/src
networks:
dashboard_network:
driver: bridge
This is a typical Dockerfile for a service (it may differ slightly per service, but the idea is the same):
FROM node:18
RUN curl -f https://get.pnpm.io/v6.16.js | node - add --global pnpm
WORKDIR /dashboard
COPY ./package.json ./pnpm-workspace.yaml ./.npmrc ./
COPY ./apps/backend/package.json ./apps/backend/
COPY ./packages/common/package.json ./packages/common/
COPY ./prisma/schema.prisma ./prisma/
RUN pnpm i -w
RUN pnpm --filter backend --filter common i
COPY ./tsconfig.base.json ./nx.json ./
COPY ./apps/backend/ ./apps/backend/
COPY ./packages/common/ ./packages/common/
CMD ["pnpm", "exec", "nx", "start:dev:docker", "backend"]
My pnpm-workspace.yaml file:
packages:
- 'apps/*'
- 'packages/*'
I also use the nx package; my nx.json file is:
{
"workspaceLayout": {
"appsDir": "apps",
"libsDir": "packages"
},
"tasksRunnerOptions": {
"default": {
"runner": "nx/tasks-runners/default",
"options": {
"cacheableOperations": ["build", "lint", "type-check", "depcheck", "stylelint"]
}
}
},
"namedInputs": {
"source": ["{projectRoot}/src/**/*"],
"jsSource": ["{projectRoot}/src/**/*.{ts,js,cjs}"],
"reactTsSource": ["{projectRoot}/src/**/*.{ts,tsx}"],
"scssSource": ["{projectRoot}/src/**/*.scss"]
},
"targetDefaults": {
"build": {
"inputs": ["source", "^source"],
"dependsOn": ["^build"]
},
"lint": {
"inputs": ["jsSource", "{projectRoot}/.eslintrc.cjs", "{projectRoot}/.eslintignore"],
"outputs": []
},
"type-check": {
"inputs": [
"reactTsSource",
"{projectRoot}/tsconfig.json",
"{projectRoot}/tsconfig.base.json",
"{workspaceRoot}/tsconfig.base.json"
],
"dependsOn": ["^build"],
"outputs": []
},
"depcheck": {
"inputs": ["{projectRoot}/.depcheckrc.json", "{projectRoot}/package.json"],
"outputs": []
},
"stylelint": {
"inputs": ["scssSource", "{projectRoot}/stylelint.config.cjs"]
},
"start:dev": {
"dependsOn": ["^build"]
},
"start:dev:docker": {
"dependsOn": ["^build"]
}
}
}
For each service, I installed the shared package in the devDependencies of its package.json:
"common": "workspace:1.0.0",
As you can see, my start:dev:docker script depends on the build script of the shared package, so in the containers the common package will be built. These are the scripts of common/package.json:
"build": "rimraf ./dist && tsc --project ./tsconfig.build.json",
"start:dev": "tsc --project ./tsconfig.build.json --watch",
So I need to use start:dev somehow, but surely not with dependsOn of NX.
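One direction this could take — a sketch under assumptions, not a verified setup: bind-mount packages/common into every container and run common's start:dev watcher in its own service, so tsc --watch rebuilds dist/ on the shared mount and each app's own file watcher picks up the change. The service name common-watcher and the reuse of Dockerfile.backend-dev are hypothetical:

```yaml
services:
  common-watcher:
    build:
      context: .
      dockerfile: ./docker/Dockerfile.backend-dev # any image containing pnpm and the workspace
    command: ['pnpm', '--filter', 'common', 'start:dev'] # runs tsc --watch from common/package.json
    volumes:
      - type: bind
        source: ./packages/common
        target: /dashboard/packages/common
  backend:
    volumes:
      # in addition to the existing src bind mount
      - type: bind
        source: ./packages/common
        target: /dashboard/packages/common
```

The same common mount would be repeated for cli-backend and frontend; each service's dev watcher then sees the regenerated dist/ files without needing nx's dependsOn.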
I have created a simple app connected with PostgreSQL and pgAdmin, as well as a web server, in a Docker image running in a container.
My question is how I can make it reload, like with nodemon on a local server, without needing to delete the container every time.
I have been trying different solutions and methods I have seen around, but I haven't been able to make it work.
I have already tried inserting command: ["npm", "run", "start:dev"] in the docker-compose file as well...
My files are:
Dockerfile
FROM node:latest
WORKDIR /
COPY package*.json ./
COPY . .
COPY database.json .
COPY .env .
EXPOSE 3000
CMD [ "npm", "run", "watch ]
docker-compose file
version: '3.7'
services:
postgres:
image: postgres:latest
environment:
- POSTGRES_USER=test
- POSTGRES_PASSWORD=tes
- POSTGRES_DB=test
ports:
- 5432:5432
logging:
options:
max-size: 10m
max-file: "3"
pgadmin:
image: dpage/pgadmin4
environment:
- PGADMIN_DEFAULT_EMAIL=test@gmail.com
- PGADMIN_DEFAULT_PASSWORD=pasword123test
ports:
- "5050:80"
web:
build: .
# command: ["npm", "run", "start:dev"]
links:
- postgres
image: prueba
depends_on:
- postgres
ports:
- '3000:3000'
env_file:
- .env
nodemon.json file:
{
"watch": ["dist"],
"ext": ".ts,.js",
"ignore": [],
"exec": "ts-node ./dist/server.js"
}
package.json file:
"scripts": {
"start:dev": "nodemon",
"build": "rimraf ./dist && tsc",
"start": "npm run build && node dist/server.js",
"watch": "tsc-watch --esModuleInterop src/server.ts --outDir ./dist --onSuccess \"node ./dist/server.js\"",
"jasmine": "jasmine",
"test": "npm run build && npm run jasmine",
"db-test": "set ENV=test&& db-migrate -e test up && npm run test && db-migrate -e test reset",
"lint": "eslint . --ext .ts",
"prettier": "prettier --config .prettierrc src/**/*.ts --write",
"prettierLint": "prettier --config .prettierrc src/**/*.ts --write && eslint . --ext .ts --fix"
},
Thanks
The COPY . . command only runs when the image is built, which only happens when you first run docker compose up. In order for the container to be aware of changes, you need the code changes on your host machine to be synchronized with the code inside the container, even after the build is complete.
Below I've added the volume mount to the web container in your docker compose and uncommented the command that should support hot-reloading. I assumed that the source code you wanted to change lives in a src directory, but feel free to update to reflect how you've organized your source code.
version: '3.7'
services:
postgres:
image: postgres:latest
environment:
- POSTGRES_USER=test
- POSTGRES_PASSWORD=tes
- POSTGRES_DB=test
ports:
- 5432:5432
logging:
options:
max-size: 10m
max-file: "3"
pgadmin:
image: dpage/pgadmin4
environment:
- PGADMIN_DEFAULT_EMAIL=test@gmail.com
- PGADMIN_DEFAULT_PASSWORD=pasword123test
ports:
- "5050:80"
web:
build: .
command: ["npm", "run", "start:dev"]
links:
- postgres
image: prueba
depends_on:
- postgres
ports:
- '3000:3000'
env_file:
- .env
volumes:
# <host-path>:<container-path>
- ./src:/src/
If that isn't clear, here's an article that might help:
https://www.freecodecamp.org/news/how-to-enable-live-reload-on-docker-based-applications/
I want to ask why my Docker container can't find the axios module. I can't find a solution on the internet or in the documentation. This is my Dockerfile:
Dockerfile
# build stage
FROM node:lts-alpine as build-stage
# Create app directory
WORKDIR /app
# Install all dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
I have installed it with "npm install axios --save" and also without the flag.
My package.json
{
"name": "frontend",
"version": "0.0.0",
"scripts": {
"dev": "vite --host 0.0.0.0",
"build": "vite build",
"preview": "vite preview --port 5050",
"lint": "eslint . --ext .vue,.js,.jsx,.cjs,.mjs --fix --ignore-path .gitignore"
},
"dependencies": {
"axios": "^0.27.2",
"primeicons": "^5.0.0",
"primevue": "^3.12.5",
"vee-validate": "^4.5.11",
"vue": "^3.2.31",
"vue-router": "^4.0.13",
"vuex": "^4.0.2"
},
"devDependencies": {
"#rushstack/eslint-patch": "^1.1.0",
"#vitejs/plugin-vue": "^2.3.1",
"#vue/eslint-config-prettier": "^7.0.0",
"eslint": "^8.5.0",
"eslint-plugin-vue": "^8.2.0",
"prettier": "^2.5.1",
"vite": "^2.9.5"
}
}
Output of docker-compose:
frontedMatol | Failed to resolve import "axios" from "src/services/auth.service.js". Does the file exist?
frontedMatol | 2:13:05 PM [vite] Internal server error: Failed to resolve import "axios" from "src/services/auth.service.js". Does the file exist?
frontedMatol | Plugin: vite:import-analysis
frontedMatol | File: /app/src/services/auth.service.js
frontedMatol | 1 | import axios from 'axios';
frontedMatol | | ^
frontedMatol | 2 | const API_URL = 'http://localhost:8000/api/v1/'
frontedMatol | 3 | class AuthService {
frontedMatol | at formatError (/app/node_modules/vite/dist/node/chunks/dep-27bc1ab8.js:36435:46)
frontedMatol | at TransformContext.error (/app/node_modules/vite/dist/node/chunks/dep-27bc1ab8.js:36431:19)
frontedMatol | at normalizeUrl (/app/node_modules/vite/dist/node/chunks/dep-27bc1ab8.js:59840:26)
frontedMatol | at async TransformContext.transform (/app/node_modules/vite/dist/node/chunks/dep-27bc1ab8.js:59989:57)
frontedMatol | at async Object.transform (/app/node_modules/vite/dist/node/chunks/dep-27bc1ab8.js:36672:30)
frontedMatol | at async doTransform (/app/node_modules/vite/dist/node/chunks/dep-27bc1ab8.js:55662:29)
I should add my docker-compose file too:
version: '3.9'
services:
postgres:
container_name: postgresxx
image: postgres
environment:
POSTGRES_USER: xx
POSTGRES_PASSWORD: xx
PGDATA: /data/postgres
volumes:
- postgres:/data/postgres
ports:
- "5432:5432"
networks:
- postgres
restart: unless-stopped
pgadmin:
container_name: pgadminxx
image: dpage/pgadmin4
environment:
PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL:-pgadmin4@pgadmin.org}
PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD:-xx}
PGADMIN_CONFIG_SERVER_MODE: 'False'
volumes:
- pgadmin:/var/lib/pgadmin
ports:
- "5050:80"
networks:
- postgres
restart: unless-stopped
maildev:
container_name: mailDevXx
image: maildev/maildev
ports:
- "1080:1080"
- "1025:1025"
networks:
- spring
backend:
container_name: backendxx
image: xx/xx-backend:latest
ports:
- "8000:8000"
environment:
- SPRING_PROFILES_ACTIVE=docker
networks:
- postgres
- spring
depends_on:
- postgres
frontend:
container_name: frontedXx
build:
context: ../frontend
dockerfile: Dockerfile
working_dir: /app
command: [ "npm", "run", "dev" ]
env_file:
- ../config/env/server-developer.env
environment:
- CHOKIDAR_USEPOLLING=true
volumes:
- type: bind
source: ../frontend
target: /app
- type: volume
source: node_modules
target: /app/node_modules
ports:
- 8080:8080
depends_on:
- backend
networks:
postgres:
driver: bridge
spring:
driver: bridge
volumes:
postgres:
pgadmin:
node_modules:
I changed the names to xx to keep them private.
So as I said, I am building the project with docker-compose -f name_file up/build, and the server can't import axios. All of the "tutorials" say to change the import from '../axios' to 'axios'; this doesn't help.
Also, I am working in Visual Studio Code with the Docker extension. I went inside the container and checked its package.json and lockfile; both have the axios dependency. So I am in a black hole.
Most importantly: when I run only Vite, from the frontend dir with npm run dev, axios works.
I can put the repository flow here if you need it.
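One plausible cause — offered as an assumption rather than something the logs confirm: the named node_modules volume in the frontend service was first populated from an image built before axios was added to package.json, and named volumes keep their old contents across image rebuilds, so the container keeps seeing a stale /app/node_modules. The mapping in question, with the suspect entry marked:

```yaml
volumes:
  - type: bind
    source: ../frontend
    target: /app
  - type: volume          # this named volume shadows /app/node_modules; if it
    source: node_modules  # was created before axios was installed, it stays stale
    target: /app/node_modules
```

If that is the cause, running docker-compose -f name_file down -v (the -v flag removes named volumes) before docker-compose -f name_file up --build should force the volume to be repopulated from the freshly built image.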
I have an application built in React running on Docker, and I am looking for a way to debug it. I am using Visual Studio Code. Here are my Dockerfile and docker-compose files.
FROM node:boron
ARG build_env
RUN mkdir /usr/share/unicode && cd /usr/share/unicode && wget ftp://ftp.unicode.org/Public/UNIDATA/UnicodeData.txt
COPY package.json /tmp/package.json
RUN cd /tmp && npm install
COPY ./shim/RelayDefaultNetworkLayer.js /tmp/node_modules/react-relay/lib/RelayDefaultNetworkLayer.js
COPY ./shim/buildRQL.js /tmp/node_modules/react-relay/lib/buildRQL.js
RUN mkdir -p /var/www && cp -a /tmp/node_modules /var/www/
WORKDIR /var/www
COPY . ./
RUN if [ "$build_env" != "development" ]; then npm run build-webpack && npm run gulp; fi
EXPOSE 8080
CMD ["npm", "run", "--debug=5858 prod"]
My docker-compose file looks like:
version: '2'
services:
nginx:
container_name: nginx
image: openroad/nginx
build:
context: nginx
ports:
- "80:80"
volumes:
- ./nginx/nginx.development.conf:/etc/nginx/nginx.conf
networks:
- orion-network
graphql:
container_name: graphql
image: openroad/graphql
build:
context: integration_api
volumes:
- ./integration_api:/var/www
environment:
- NODE_ENV=development
command: npm run dev
working_dir: /var/www
networks:
orion-network:
ipv4_address: 172.16.238.10
pegasus:
container_name: pegasus
image: openroad/pegasus
build:
context: pegasus
args:
build_env: development
expose:
- "3000"
volumes:
- ./pegasus:/var/www/public
environment:
- NODE_ENV=development
command: npm run dev
working_dir: /var/www/public
extra_hosts:
- "local.pegasus.com:192.168.99.100"
networks:
orion-network:
ipv4_address: 172.16.238.11
frontend:
container_name: orion-frontend
image: openroad/orion-frontend
build:
context: orion-frontend
args:
build_env: development
expose:
- "3000"
ports:
- "5858:5858"
volumes:
- ./orion-frontend:/var/www/public
environment:
- NODE_ENV=development
command: npm run --debug=5858 dev
working_dir: /var/www/public
networks:
orion-network:
ipv4_address: 172.16.238.12
admin:
container_name: orion-admin
image: openroad/orion-admin
build:
context: orion-admin
args:
build_env: development
expose:
- "3000"
volumes:
- ./orion-admin:/var/www/
environment:
- NODE_ENV=development
command: npm run dev
working_dir: /var/www/
networks:
orion-network:
ipv4_address: 172.16.238.13
uploads:
container_name: orion-uploads
image: openroad/orion-uploads
build:
context: orion-uploads
volumes:
- ./orion-uploads:/var/www/
working_dir: /var/www/
networks:
orion-network:
ipv4_address: 172.16.238.14
dashboard:
container_name: orion-dashboard
image: openroad/orion-dashboard
build:
context: orion-dashboard
args:
build_env: development
volumes:
- ./orion-dashboard/src:/var/www/src
- ./orion-dashboard/package.json:/var/www/package.json
- ./orion-dashboard/webpack.config.babel.js:/var/www/webpack.config.babel.js
- ./orion-dashboard/node_modules:/var/www/node_modules
- ./orion-dashboard/data/babelRelayPlugin.js:/var/www/data/babelRelayPlugin.js
working_dir: /var/www
environment:
- NODE_ENV=development
- GRAPHQLURL=http://172.16.238.10:8080/graphql
- PORT=8080
command: npm run dev
networks:
orion-network:
ipv4_address: 172.16.238.15
networks:
orion-network:
driver: bridge
driver_opts:
com.docker.network.bridge.enable_ip_masquerade: "true"
ipam:
driver: default
config:
- subnet: 172.16.238.0/24
gateway: 172.16.238.1
I want the ability to debug the application running in the orion-frontend container. I have tried various options without any success, including https://codefresh.io/docker-tutorial/debug_node_in_docker/ and https://blog.docker.com/2016/07/live-debugging-docker/.
I may be wrong about the argument syntax for npm run (I didn't find this flag in the npm docs), but you may need to separate the --debug=5858 and prod args, like this:
CMD ["npm", "run", "--debug=5858", "prod"]
I'm setting up a docker stack with PHP, PostgreSQL, Nginx, Laravel-Echo-Server and Redis and having some issues with Redis and the echo-server connecting. I'm using a docker-compose.yml:
version: '3'
networks:
app-tier:
driver: bridge
services:
app:
build:
context: .
dockerfile: .docker/php/Dockerfile
networks:
- app-tier
ports:
- 9002:9000
volumes:
- .:/srv/app
nginx:
build:
context: .
dockerfile: .docker/nginx/Dockerfile
networks:
- app-tier
ports:
- 8080:80
volumes:
- ./public:/srv/app/public
db:
build:
context: .docker/postgres/
dockerfile: Dockerfile
restart: unless-stopped
networks:
- app-tier
ports:
- 5433:5432
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: secret
volumes:
- .docker/postgres/data:/var/lib/postgresql/data
laravel-echo-server:
build:
context: .docker/laravel-echo-server
dockerfile: Dockerfile
restart: unless-stopped
networks:
- app-tier
ports:
- 6001:6001
links:
- 'redis:redis'
redis:
build:
context: .docker/redis
dockerfile: Dockerfile
restart: unless-stopped
networks:
- app-tier
volumes:
- .docker/redis/data:/var/lib/redis/data
My echo-server Dockerfile:
FROM node:10-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN apk add --update \
python \
python-dev \
py-pip \
build-base
RUN npm install
COPY laravel-echo-server.json /usr/src/app/laravel-echo-server.json
EXPOSE 3000
CMD [ "npm", "start" ]
Redis Dockerfile:
FROM redis:latest
LABEL maintainer="maintainer"
COPY . /usr/src/app
COPY redis.conf /usr/src/app/redis/redis.conf
VOLUME /data
EXPOSE 6379
CMD ["redis-server", "/usr/src/app/redis/redis.conf"]
My laravel-echo-server.json:
{
"authHost": "localhost",
"authEndpoint": "/broadcasting/auth",
"clients": [],
"database": "redis",
"databaseConfig": {
"redis": {
"port": "6379",
"host": "redis"
}
},
"devMode": true,
"host": null,
"port": "6001",
"protocol": "http",
"socketio": {},
"sslCertPath": "",
"sslKeyPath": ""
}
The redis.conf is the default right now. The error I am getting from the laravel-echo-server is:
[ioredis] Unhandled error event: Error: connect ECONNREFUSED 172.20.0.2:6379 at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1163:14)
Redis is up and running fine, using the configuration file, and is ready to accept connections. docker ps shows both redis and the echo-server are up, so they're just not connecting, as the error indicates. If I change the final line in the Redis Dockerfile to just CMD ["redis-server"], it appears to connect and automatically uses the default config (which is the same as the one I have in my .docker directory), but I get this error: Possible SECURITY ATTACK detected. It looks like somebody is sending POST or Host: commands to Redis. This is likely due to an attacker attempting to use Cross Protocol Scripting to compromise your Redis instance. Connection aborted.
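Since the redis.conf contents aren't shown, this is a guess: the stock redis.conf shipped with Redis binds to 127.0.0.1 (which yields exactly an ECONNREFUSED from other containers) and enables protected mode, whereas running redis-server with no config file listens on all interfaces by default, matching the behaviour described above. A minimal sketch of the lines to check in redis.conf for container-to-container use; loosening them is only reasonable because app-tier is a private bridge network:

```
# redis.conf fragment (illustrative)
bind 0.0.0.0        # listen on the container's network interface, not just loopback
protected-mode no   # allow non-loopback clients without requiring a password
port 6379
```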