NestJS cannot find any module when run from a Dockerfile - docker

I have this Dockerfile that is supposed to run my nest.js project:
# syntax=docker/dockerfile:1
#base
FROM node:16.14.2-alpine AS base
ENV NODE_ENV=production
WORKDIR /app
RUN npm i -g npm@8.19.3 @nestjs/cli
COPY ["./package.json", "./"]
#dev
FROM base AS dev
RUN npm i
COPY . .
CMD ["npm", "run", "start:dev"]
#prod
FROM base AS prod
COPY . .
RUN npm i --frozen-lockfile --production
CMD ["npm", "run", "start:prod"]
and when I run it with this docker-compose.yml:
version: '3.9'
services:
  backend: #THE BACKEND SIDE OF OUR APP ITSELF :
    build:
      context: ./back
      #dockerfile: ./back/Dockerfile
      target: dev #for multi stage
    env_file:
      - ./env/back.env
    ports:
      - 2000:2000
    volumes:
      - ./back:/app
    networks:
      - backend
    depends_on:
      - database
    restart: unless-stopped
  database: #THE POSTGRESQL DATABASE
    image: postgres:latest
    restart: unless-stopped
    ports:
      - 5432:5432
    # env_file:
    #   - ./env/postgres.env
    environment:
      POSTGRES_USER: transcendeur
      POSTGRES_PASSWORD: bigData
      POSTGRES_DB: transcendb
    networks:
      - backend
    volumes:
      - postgres:/var/lib/postgresql/data
  databaseadmin: #THE DATABASE 'ADMIN GRAPHICAL INTERFACE'
    image: dpage/pgadmin4:latest
    restart: unless-stopped
    ports:
      - 5050:80
    env_file:
      - ./env/pgadmin.env
    environment:
      PGADMIN_DEFAULT_EMAIL: nidma@nidma.com
      PGADMIN_DEFAULT_PASSWORD: dimnagp4
    networks:
      - backend
    volumes:
      - ./docker/pgadmin_servers.json:/pgadmin4/servers.json
    depends_on:
      - database
    logging:
      driver: none
  # frontend: #THE FRONTEND SIDE OF OUR APP ITSELF :
  #   build:
  #     dockerfile: front/Dockerfile
  #   env_file:
  #     - ./env/front.env
  #   ports:
  #     - 8080:8080
  #   restart: unless-stopped
  #   networks:
  #     - backend
  #   volumes:
  #     - ./front:/app
  #   depends_on:
  #     - backend
volumes:
  postgres:
  back:
networks:
  backend:
    driver: bridge
Everything builds fine until npm run start:dev; at compilation, the files in my src folder can't resolve any modules (for example, @nestjs/common is not found).
Here's my package.json:
{
  "name": "back",
  "version": "0.0.1",
  "description": "",
  "author": "",
  "private": true,
  "license": "UNLICENSED",
  "scripts": {
    "prebuild": "rimraf dist",
    "build": "nest build",
    "format": "prettier --write \"src/**/*.ts\" \"test/**/*.ts\"",
    "start": "DB_ENVFILE=db nest start",
    "start:dev": "DB_ENVFILE=db nest start --watch",
    "start:debug": "DB_ENVFILE=db nest start --debug --watch",
    "start:prod": "DB_ENVFILE=db node dist/main",
    "lint": "eslint \"{src,apps,libs,test}/**/*.ts\" --fix",
    "test": "DB_ENVFILE=db jest",
    "test:watch": "DB_ENVFILE=db jest --watch",
    "test:cov": "DB_ENVFILE=db jest --coverage",
    "test:debug": "DB_ENVFILE=db node --inspect-brk -r tsconfig-paths/register -r ts-node/register node_modules/.bin/jest --runInBand",
    "test:e2e": "DB_ENVFILE=db jest --config ./test/jest-e2e.json"
  },
  "dependencies": {
    "@nestjs/cli": "^9.0.0",
    "@nestjs/common": "^9.0.0",
    "@nestjs/config": "^2.2.0",
    "@nestjs/core": "^9.0.0",
    "@nestjs/mapped-types": "*",
    "@nestjs/platform-express": "^9.0.0",
    "@nestjs/swagger": "^6.1.3",
    "@nestjs/typeorm": "^9.0.1",
    "pg": "^8.8.0",
    "reflect-metadata": "^0.1.13",
    "rimraf": "^3.0.2",
    "rxjs": "^7.2.0",
    "swagger-ui-express": "^4.5.0",
    "typeorm": "^0.3.10"
  },
  "devDependencies": {
    "@nestjs/schematics": "^9.0.0",
    "@nestjs/testing": "^9.0.0",
    "@types/express": "^4.17.13",
    "@types/jest": "28.1.8",
    "@types/multer": "^1.4.7",
    "@types/node": "^16.0.0",
    "@types/supertest": "^2.0.11",
    "@typescript-eslint/eslint-plugin": "^5.0.0",
    "@typescript-eslint/parser": "^5.0.0",
    "eslint": "^8.0.1",
    "eslint-config-prettier": "^8.3.0",
    "eslint-plugin-prettier": "^4.0.0",
    "jest": "28.1.3",
    "prettier": "^2.3.2",
    "source-map-support": "^0.5.20",
    "supertest": "^6.1.3",
    "ts-jest": "28.0.8",
    "ts-loader": "^9.2.3",
    "ts-node": "^10.0.0",
    "tsconfig-paths": "4.1.0",
    "typescript": "^4.7.4"
  },
  "jest": {
    "moduleFileExtensions": [
      "js",
      "json",
      "ts"
    ],
    "rootDir": "src",
    "testRegex": ".*\\.spec\\.ts$",
    "transform": {
      "^.+\\.(t|j)s$": "ts-jest"
    },
    "collectCoverageFrom": [
      "**/*.(t|j)s"
    ],
    "coverageDirectory": "../coverage",
    "testEnvironment": "node"
  }
}
I've been stuck on this since this morning and I can't find a solution in other posts; my Dockerfile looks fine to me, so I don't see where the problem could be.
EDIT: I forgot to mention that it works perfectly fine without Docker when running the same commands.
The error message I get is the same as if I ran npm run start:dev without a node_modules folder.

In your Dockerfile you first copy package.json and then your whole source tree into the image's working directory (/app), and you run npm i in that directory at build time.
But in your docker-compose.yml you bind-mount the ./back/ directory over the container's /app/, which makes that earlier npm install irrelevant: the directory now shows the contents of ./back/ on the host, not the files baked into the image.
So as long as there is no node_modules directory in ./back/ on your host, the modules cannot be found.
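One common workaround (a sketch, not part of the original answer) is to add an anonymous volume for /app/node_modules so the bind mount does not hide the dependencies installed in the image:

# docker-compose.yml, backend service (fragment, hypothetical adjustment)
  backend:
    build:
      context: ./back
      target: dev
    volumes:
      - ./back:/app          # host source code for hot reload
      - /app/node_modules    # anonymous volume keeps the image's node_modules visible

The anonymous volume is seeded from the image's /app/node_modules the first time the container is created, so the packages installed during the build stay available (it is not refreshed on a rebuild unless you recreate it, e.g. with docker-compose up --renew-anon-volumes). Alternatively, running npm i once inside ./back on the host also makes the modules appear under the bind mount.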

Related

Turborepo (Docker), PNPM workspace dependency package version mismatch

I am starting a project using Turborepo with PNPM as the package manager, and I want it to be Dockerised. I installed Turborepo using npx create-turbo@latest (selecting PNPM during setup) and have added a docker-compose.yaml file and a Dockerfile.local for each of my apps, which are docs, web and server. The apps/web Dockerfile.local can be found below, as that appears to be where the error comes from; as you can see in the error logs, it is running through the web dev/dependencies.
When I run my make local command (below), I get the following error in my terminal.
Terminal error
#0 0.732 @types/node is linked to /app/node_modules from /node_modules/.pnpm/@types+node@17.0.45/node_modules/@types/node
#0 0.733 @types/react is linked to /app/node_modules from /node_modules/.pnpm/@types+react@18.0.26/node_modules/@types/react
#0 0.733 @types/react-dom is linked to /app/node_modules from /node_modules/.pnpm/@types+react-dom@18.0.10/node_modules/@types/react-dom
#0 0.733 eslint is linked to /app/node_modules from /node_modules/.pnpm/eslint@7.32.0/node_modules/eslint
#0 0.733 typescript is linked to /app/node_modules from /node_modules/.pnpm/typescript@4.9.4/node_modules/typescript
#0 0.734 next is linked to /app/node_modules from /node_modules/.pnpm/next@13.0.0_pjwopsidmaokadturxaafygjp4/node_modules/next
#0 0.734 react is linked to /app/node_modules from /node_modules/.pnpm/react@18.2.0/node_modules/react
#0 0.734 react-dom is linked to /app/node_modules from /node_modules/.pnpm/react-dom@18.2.0_react@18.2.0/node_modules/react-dom
#0 0.734 react-query is linked to /app/node_modules from /node_modules/.pnpm/react-query@3.39.2_biqbaboplfbrettd7655fr4n2y/node_modules/react-query
#0 0.741  ERR_PNPM_NO_MATCHING_VERSION_INSIDE_WORKSPACE  In : No matching version found for eslint-config-custom@* inside the workspace
#0 0.741
#0 0.741 This error happened while installing a direct dependency of /app
------
failed to solve: executor failed running [/bin/sh -c pnpm install]: exit code: 1
make: *** [local] Error 17
apps/web package.json
"name": "web",
"version": "0.0.0",
"private": true,
"scripts": {
"dev": "next dev --port 3002",
"build": "next build",
"start": "next start",
"lint": "next lint"
},
"dependencies": {
"next": "13.0.0",
"react": "18.2.0",
"react-dom": "18.2.0",
"react-query": "^3.39.2",
"ui": "workspace:*"
},
"devDependencies": {
"#babel/core": "^7.0.0",
"#types/node": "^17.0.12",
"#types/react": "^18.0.22",
"#types/react-dom": "^18.0.7",
"eslint": "7.32.0",
"eslint-config-custom": "workspace:*",
"tsconfig": "workspace:*",
"typescript": "^4.5.3"
}
}
Makefile
local:
	@docker-compose stop && docker-compose up --build --remove-orphans;
docker-compose.yaml
version: "3.9"
services:
frontend:
container_name: frontend
build:
context: ./apps/web
dockerfile: Dockerfile.local
restart: always
env_file: .env
ports:
- "${FRONTEND_PORT}:${FRONTEND_PORT}"
networks:
- gh-network
command: "npm start"
backend:
container_name: backend
build:
context: ./apps/server
dockerfile: Dockerfile.local
restart: always
env_file: .env
volumes:
- ./apps/server:/svr/app
- "./scripts/wait.sh:/wait.sh"
- /svr/app/node_modules
networks:
- gh-network
ports:
- "${BACKEND_PORT}:${BACKEND_PORT}"
depends_on:
- gh-pg-db
links:
- gh-pg-db
gh-pg-db:
image: postgres:12-alpine
restart: always
container_name: gh-pg-db
env_file:
- .env
environment:
POSTGRES_PASSWORD: ${DB_PASSWORD}
PGDATA: /var/lib/postgresql/data
POSTGRES_USER: ${DB_USER}
POSTGRES_DB: ${DB_NAME}
ports:
- "${DB_PORT}:${DB_PORT}"
volumes:
- pgdata:/var/lib/postgresql/data
networks:
- gh-network
pgadmin-portal:
image: dpage/pgadmin4
restart: always
container_name: pgadmin-portal
env_file:
- .env
environment:
PGADMIN_DEFAULT_PASSWORD: "${PGADMIN_DEFAULT_PASSWORD}"
PGADMIN_DEFAULT_EMAIL: "${PGADMIN_DEFAULT_EMAIL}"
volumes:
- pgadmin:/root/.pgadmin
ports:
- "${PGADMIN_PORT}:80"
depends_on:
- gh-pg-db
networks:
- gh-network
volumes:
pgdata:
pgadmin:
networks:
gh-network:
driver: bridge
apps/web Dockerfile.local
FROM node:14-alpine
RUN npm i -g pnpm
RUN pnpm --version
WORKDIR /app
COPY . .
RUN pnpm install
I have tried checking the eslint versions: in the web package.json it's 7.32.0, whereas in the shared eslint-config-custom package it's 7.23.0. I amended these to match, but it did not work.
My expected outcome is for the make local command to successfully boot up the project so I can see the initial Next/React files from Turborepo in my browser at port 3002.
I have looked at the following question (pnpm workspace:* dependencies), but unfortunately the asker solved the problem by switching build tools, which I cannot do since I'm using Turborepo.
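For reference, eslint-config-custom (like ui and tsconfig) is declared with the workspace: protocol, so pnpm can only resolve it if the workspace root (pnpm-workspace.yaml plus the packages directory) is inside the Docker build context; a build whose context is only ./apps/web cannot see those packages. A minimal sketch of a Dockerfile built with the repository root as context (the copied paths and the final command are assumptions based on the default Turborepo layout, not a verified fix):

# hypothetical Dockerfile, built from the repository root
FROM node:14-alpine
RUN npm i -g pnpm
WORKDIR /app
# copy the workspace manifests first so pnpm can resolve workspace:* dependencies
COPY pnpm-workspace.yaml package.json pnpm-lock.yaml ./
COPY apps ./apps
COPY packages ./packages
RUN pnpm install
CMD ["pnpm", "--filter", "web", "dev"]

turbo prune --scope=web --docker is another documented way to produce a trimmed copy of the workspace for the image, but the key point is that the workspace packages must be present when pnpm install runs.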

Running Hot Reload for shared package in monorepo environment

I have 3 services sharing 1 common package.
I've set up a docker-compose.yaml file to run these services together (each service has its own container). I've also configured hot reload for each service using bind mounts.
Now, I also have one shared package, common, that is used by each service.
What I want is for a change in this package to trigger a reload of each service. By this I mean that if I change some code in the common package, all 3 services should reload.
This is my docker-compose.yaml file:
version: '3.8'
services:
  mongo_launcher:
    container_name: mongo_launcher
    image: mongo:6.0.2
    restart: on-failure
    networks:
      - dashboard_network
    volumes:
      - ./docker/scripts/mongo-setup.sh:/scripts/mongo-setup.sh
    entrypoint: ['sh', '/scripts/mongo-setup.sh']
  mongo_replica_1:
    container_name: mongo_replica_1
    image: mongo:6.0.2
    ports:
      - 27017:27017
    restart: always
    entrypoint:
      [
        '/usr/bin/mongod',
        '--bind_ip_all',
        '--replSet',
        'dbrs',
        '--dbpath',
        '/data/db',
        '--port',
        '27017',
      ]
    volumes:
      - ./.volumes/mongo/replica1:/data/db
      - ./.volumes/mongo/replica1/configdb:/data/configdb
    networks:
      - dashboard_network
  mongo_replica_2:
    container_name: mongo_replica_2
    image: mongo:6.0.2
    ports:
      - 27018:27018
    restart: always
    entrypoint:
      [
        '/usr/bin/mongod',
        '--bind_ip_all',
        '--replSet',
        'dbrs',
        '--dbpath',
        '/data/db',
        '--port',
        '27018',
      ]
    volumes:
      - ./.volumes/mongo/replica2:/data/db
      - ./.volumes/mongo/replica2/configdb:/data/configdb
    networks:
      - dashboard_network
  mongo_replica_3:
    container_name: mongo_replica_3
    image: mongo:6.0.2
    ports:
      - 27019:27019
    restart: always
    entrypoint:
      [
        '/usr/bin/mongod',
        '--bind_ip_all',
        '--replSet',
        'dbrs',
        '--dbpath',
        '/data/db',
        '--port',
        '27019',
      ]
    volumes:
      - ./.volumes/mongo/replica3:/data/db
      - ./.volumes/mongo/replica3/configdb:/data/configdb
    networks:
      - dashboard_network
  backend:
    container_name: backend
    build:
      context: .
      dockerfile: ./docker/Dockerfile.backend-dev
    env_file:
      - ./apps/backend/envs/.env.development
      - ./docker/envs/.env.development
    ports:
      - 3000:3000
    restart: always
    depends_on:
      - mongo_replica_1
      - mongo_replica_2
      - mongo_replica_3
    networks:
      - dashboard_network
    volumes:
      - type: bind
        source: ./apps/backend/src
        target: /dashboard/apps/backend/src
  cli-backend:
    container_name: cli-backend
    build:
      context: .
      dockerfile: ./docker/Dockerfile.cli-backend-dev
    env_file:
      - ./apps/cli-backend/envs/.env.development
      - ./docker/envs/.env.development
    ports:
      - 4000:4000
    restart: always
    depends_on:
      - mongo_replica_1
      - mongo_replica_2
      - mongo_replica_3
    networks:
      - dashboard_network
    volumes:
      - type: bind
        source: ./apps/cli-backend/src
        target: /dashboard/apps/cli-backend/src
  frontend:
    container_name: frontend
    build:
      context: .
      dockerfile: ./docker/Dockerfile.frontend-dev
    env_file:
      - ./apps/frontend/.env.development
    ports:
      - 8080:8080
    restart: always
    depends_on:
      - backend
      - cli-backend
    networks:
      - dashboard_network
    volumes:
      - type: bind
        source: ./apps/frontend/src
        target: /dashboard/apps/frontend/src
networks:
  dashboard_network:
    driver: bridge
This is a typical Dockerfile for one of the services (it may differ slightly per service, but the idea is the same):
FROM node:18
RUN curl -f https://get.pnpm.io/v6.16.js | node - add --global pnpm
WORKDIR /dashboard
COPY ./package.json ./pnpm-workspace.yaml ./.npmrc ./
COPY ./apps/backend/package.json ./apps/backend/
COPY ./packages/common/package.json ./packages/common/
COPY ./prisma/schema.prisma ./prisma/
RUN pnpm i -w
RUN pnpm --filter backend --filter common i
COPY ./tsconfig.base.json ./nx.json ./
COPY ./apps/backend/ ./apps/backend/
COPY ./packages/common/ ./packages/common/
CMD ["pnpm", "exec", "nx", "start:dev:docker", "backend"]
My pnpm-workspace.yaml file:
packages:
  - 'apps/*'
  - 'packages/*'
I also use the nx package; the nx.json file is:
{
  "workspaceLayout": {
    "appsDir": "apps",
    "libsDir": "packages"
  },
  "tasksRunnerOptions": {
    "default": {
      "runner": "nx/tasks-runners/default",
      "options": {
        "cacheableOperations": ["build", "lint", "type-check", "depcheck", "stylelint"]
      }
    }
  },
  "namedInputs": {
    "source": ["{projectRoot}/src/**/*"],
    "jsSource": ["{projectRoot}/src/**/*.{ts,js,cjs}"],
    "reactTsSource": ["{projectRoot}/src/**/*.{ts,tsx}"],
    "scssSource": ["{projectRoot}/src/**/*.scss"]
  },
  "targetDefaults": {
    "build": {
      "inputs": ["source", "^source"],
      "dependsOn": ["^build"]
    },
    "lint": {
      "inputs": ["jsSource", "{projectRoot}/.eslintrc.cjs", "{projectRoot}/.eslintignore"],
      "outputs": []
    },
    "type-check": {
      "inputs": [
        "reactTsSource",
        "{projectRoot}/tsconfig.json",
        "{projectRoot}/tsconfig.base.json",
        "{workspaceRoot}/tsconfig.base.json"
      ],
      "dependsOn": ["^build"],
      "outputs": []
    },
    "depcheck": {
      "inputs": ["{projectRoot}/.depcheckrc.json", "{projectRoot}/package.json"],
      "outputs": []
    },
    "stylelint": {
      "inputs": ["scssSource", "{projectRoot}/stylelint.config.cjs"]
    },
    "start:dev": {
      "dependsOn": ["^build"]
    },
    "start:dev:docker": {
      "dependsOn": ["^build"]
    }
  }
}
For each service, I installed the shared package in the devDependencies of its package.json:
"common": "workspace:1.0.0",
As you can see, my start:dev:docker target depends on the build target of the shared package, so inside the containers the common package gets built. These are the relevant scripts of common/package.json:
"build": "rimraf ./dist && tsc --project ./tsconfig.build.json",
"start:dev": "tsc --project ./tsconfig.build.json --watch",
So I need to use start:dev somehow, but surely not via NX's dependsOn.
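One possible direction (a rough sketch under the assumptions of the compose file above, not a verified setup) is to also bind-mount packages/common/src into each container and run the common package's start:dev watcher alongside the service's own watcher, so a change in common rebuilds its dist and the service reloads:

# docker-compose.yaml, backend service (fragment, hypothetical changes)
  backend:
    # ...existing build/env/network settings...
    volumes:
      - type: bind
        source: ./apps/backend/src
        target: /dashboard/apps/backend/src
      # hypothetical extra mount so edits to the shared package reach the container
      - type: bind
        source: ./packages/common/src
        target: /dashboard/packages/common/src
    # hypothetical override: run the common watcher in the background next to the service
    command: sh -c "pnpm --filter common start:dev & pnpm exec nx start:dev:docker backend"

The same pattern would apply to cli-backend and frontend.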

How to make my server in my Docker container reload with changes

I have created a simple app connected to PostgreSQL and pgAdmin, as well as a web server, all in Docker images running in containers.
My question is how I can make it reload, like with nodemon on a local server, without having to delete the container every time.
I have been trying different solutions and methods I have seen around, but I haven't been able to make any of them work.
I have already tried putting command: ["npm", "run", "start:dev"] in the docker-compose file as well...
My files are:
Dockerfile
FROM node:latest
WORKDIR /
COPY package*.json ./
COPY . .
COPY database.json .
COPY .env .
EXPOSE 3000
CMD [ "npm", "run", "watch ]
Docker-compose.file
version: '3.7'
services:
  postgres:
    image: postgres:latest
    environment:
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=tes
      - POSTGRES_DB=test
    ports:
      - 5432:5432
    logging:
      options:
        max-size: 10m
        max-file: "3"
  pgadmin:
    image: dpage/pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=test@gmail.com
      - PGADMIN_DEFAULT_PASSWORD=pasword123test
    ports:
      - "5050:80"
  web:
    build: .
    # command: ["npm", "run", "start:dev"]
    links:
      - postgres
    image: prueba
    depends_on:
      - postgres
    ports:
      - '3000:3000'
    env_file:
      - .env
Nodemon.json file:
{
  "watch": ["dist"],
  "ext": ".ts,.js",
  "ignore": [],
  "exec": "ts-node ./dist/server.js"
}
Package.json file:
"scripts": {
"start:dev": "nodemon",
"build": "rimraf ./dist && tsc",
"start": "npm run build && node dist/server.js",
"watch": "tsc-watch --esModuleInterop src/server.ts --outDir ./dist --onSuccess \"node ./dist/server.js\"",
"jasmine": "jasmine",
"test": "npm run build && npm run jasmine",
"db-test": "set ENV=test&& db-migrate -e test up && npm run test && db-migrate -e test reset",
"lint": "eslint . --ext .ts",
"prettier": "prettier --config .prettierrc src/**/*.ts --write",
"prettierLint": "prettier --config .prettierrc src/**/*.ts --write && eslint . --ext .ts --fix"
},
Thanks
The COPY . . command only runs when the image is built, which only happens when you first run docker compose up. In order for the container to be aware of changes, you need the code changes on your host machine to be synchronized with the code inside the container, even after the build is complete.
Below I've added the volume mount to the web container in your docker compose and uncommented the command that should support hot-reloading. I assumed that the source code you wanted to change lives in a src directory, but feel free to update to reflect how you've organized your source code.
version: '3.7'
services:
  postgres:
    image: postgres:latest
    environment:
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=tes
      - POSTGRES_DB=test
    ports:
      - 5432:5432
    logging:
      options:
        max-size: 10m
        max-file: "3"
  pgadmin:
    image: dpage/pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=test@gmail.com
      - PGADMIN_DEFAULT_PASSWORD=pasword123test
    ports:
      - "5050:80"
  web:
    build: .
    command: ["npm", "run", "start:dev"]
    links:
      - postgres
    image: prueba
    depends_on:
      - postgres
    ports:
      - '2000:2000'
    env_file:
      - .env
    volumes:
      # <host-path>:<container-path>
      - ./src:/src/
If that isn't clear, here's an article that might help:
https://www.freecodecamp.org/news/how-to-enable-live-reload-on-docker-based-applications/

Docker-compose with vue-vite + axios

I want to ask why my Docker setup can't find the axios module. I can't find a solution on the internet or in the documentation. This is my Dockerfile:
Dockerfile
# build stage
FROM node:lts-alpine as build-stage
# Create app directory
WORKDIR /app
# Install all dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
I have installed it with "npm install axios --save" and also without that flag.
My package.json
{
  "name": "frontend",
  "version": "0.0.0",
  "scripts": {
    "dev": "vite --host 0.0.0.0",
    "build": "vite build",
    "preview": "vite preview --port 5050",
    "lint": "eslint . --ext .vue,.js,.jsx,.cjs,.mjs --fix --ignore-path .gitignore"
  },
  "dependencies": {
    "axios": "^0.27.2",
    "primeicons": "^5.0.0",
    "primevue": "^3.12.5",
    "vee-validate": "^4.5.11",
    "vue": "^3.2.31",
    "vue-router": "^4.0.13",
    "vuex": "^4.0.2"
  },
  "devDependencies": {
    "@rushstack/eslint-patch": "^1.1.0",
    "@vitejs/plugin-vue": "^2.3.1",
    "@vue/eslint-config-prettier": "^7.0.0",
    "eslint": "^8.5.0",
    "eslint-plugin-vue": "^8.2.0",
    "prettier": "^2.5.1",
    "vite": "^2.9.5"
  }
}
Error output from docker-compose:
frontedMatol | Failed to resolve import "axios" from "src/services/auth.service.js". Does the file exist?
frontedMatol | 2:13:05 PM [vite] Internal server error: Failed to resolve import "axios" from "src/services/auth.service.js". Does the file exist?
frontedMatol | Plugin: vite:import-analysis
frontedMatol | File: /app/src/services/auth.service.js
frontedMatol | 1 | import axios from 'axios';
frontedMatol | | ^
frontedMatol | 2 | const API_URL = 'http://localhost:8000/api/v1/'
frontedMatol | 3 | class AuthService {
frontedMatol | at formatError (/app/node_modules/vite/dist/node/chunks/dep-27bc1ab8.js:36435:46)
frontedMatol | at TransformContext.error (/app/node_modules/vite/dist/node/chunks/dep-27bc1ab8.js:36431:19)
frontedMatol | at normalizeUrl (/app/node_modules/vite/dist/node/chunks/dep-27bc1ab8.js:59840:26)
frontedMatol | at async TransformContext.transform (/app/node_modules/vite/dist/node/chunks/dep-27bc1ab8.js:59989:57)
frontedMatol | at async Object.transform (/app/node_modules/vite/dist/node/chunks/dep-27bc1ab8.js:36672:30)
frontedMatol | at async doTransform (/app/node_modules/vite/dist/node/chunks/dep-27bc1ab8.js:55662:29)
I should add my docker-compose too:
version: '3.9'
services:
  postgres:
    container_name: postgresxx
    image: postgres
    environment:
      POSTGRES_USER: xx
      POSTGRES_PASSWORD: xx
      PGDATA: /data/postgres
    volumes:
      - postgres:/data/postgres
    ports:
      - "5432:5432"
    networks:
      - postgres
    restart: unless-stopped
  pgadmin:
    container_name: pgadminxx
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL:-pgadmin4@pgadmin.org}
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD:-xx}
      PGADMIN_CONFIG_SERVER_MODE: 'False'
    volumes:
      - pgadmin:/var/lib/pgadmin
    ports:
      - "5050:80"
    networks:
      - postgres
    restart: unless-stopped
  maildev:
    container_name: mailDevXx
    image: maildev/maildev
    ports:
      - "1080:1080"
      - "1025:1025"
    networks:
      - spring
  backend:
    container_name: backendxx
    image: xx/xx-backend:latest
    ports:
      - "8000:8000"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    networks:
      - postgres
      - spring
    depends_on:
      - postgres
  frontend:
    container_name: frontedXx
    build:
      context: ../frontend
      dockerfile: Dockerfile
    working_dir: /app
    command: [ "npm", "run", "dev" ]
    env_file:
      - ../config/env/server-developer.env
    environment:
      - CHOKIDAR_USEPOLLING=true
    volumes:
      - type: bind
        source: ../frontend
        target: /app
      - type: volume
        source: node_modules
        target: /app/node_modules
    ports:
      - 8080:8080
    depends_on:
      - backend
networks:
  postgres:
    driver: bridge
  spring:
    driver: bridge
volumes:
  postgres:
  pgadmin:
  node_modules:
I changed the names to xx to keep them private.
So as I said, I am building the project with docker-compose -f name_file up/build, and the server can't import axios. All of the "tutorials" say to change the import from '../axios' to 'axios'; that doesn't help.
Also, I am working in Visual Studio Code with the Docker extension; I went inside the container and checked its package.json and lock file. Both have an axios dependency. So I'm in a black hole.
THE MOST IMPORTANT part is that when I run only Vite from the frontend dir with npm run dev, axios works.
I can share the repository layout if you need it.
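One likely culprit here (an assumption, not something confirmed in the thread) is the named node_modules volume mounted over /app/node_modules: a named volume is only populated from the image the first time it is created, so if it was created before axios was added to package.json it will keep masking the node_modules that a later npm install baked into the image. Recreating the volume is a quick way to test that:

# remove containers plus the named volumes, rebuild the frontend image, then start again
docker-compose -f name_file down -v
docker-compose -f name_file build --no-cache frontend
docker-compose -f name_file up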

Docker container in CircleCI not showing files even though volume appears mounted

This is the docker-compose command and the results:
$ docker-compose -f docker-compose-base.yml -f docker-compose-test.yml run api sh -c 'pwd && ls'
Starting test-db ... done
/usr/src/api
node_modules
I then inspected the most recent container id:
$ docker inspect --format='{{json .Mounts}}' e150beeef85c
[
  {
    "Type": "bind",
    "Source": "/home/circleci/project",
    "Destination": "/usr/src/api",
    "Mode": "rw",
    "RW": true,
    "Propagation": "rprivate"
  },
  {
    "Type": "volume",
    "Name": "4f86174ca322af6d15489da91f745861815a02f5b4e9e879ef5375663b9defff",
    "Source": "/var/lib/docker/volumes/4f86174ca322af6d15489da91f745861815a02f5b4e9e879ef5375663b9defff/_data",
    "Destination": "/usr/src/api/node_modules",
    "Driver": "local",
    "Mode": "",
    "RW": true,
    "Propagation": ""
  }
]
Which means these files, present on the host, are not appearing in the container:
$ ls /home/circleci/project
Dockerfile docker-compose-base.yml docker-compose-prod.yml migrations nodemon-debug.json package-lock.json src test-db.env tsconfig.build.json tslint.json
README.md docker-compose-dev.yml docker-compose-test.yml nest-cli.json nodemon.json package.json test test.env tsconfig.json
Why could this be?
Update: I should mention that all this works fine in my local dev environment. The above is failing on CircleCI.
When I inspect the differences between the environments, the only major one I see is that my dev environment runs Docker 19 with the overlay2 graph driver, while the failing environment runs Docker 17 with the aufs graph driver.
Update 2: Actual docker-compose files:
# docker-compose-base.yml
version: '3'
services:
  api:
    build: .
    restart: on-failure
    container_name: api
# docker-compose-test.yml
version: '3'
networks:
  default:
    external:
      name: lb_lbnet
services:
  test-db:
    image: postgres:11
    container_name: test-db
    env_file:
      - ./test-db.env # uses POSTGRES_DB and POSTGRES_PASSWORD to create a fresh db with a password when first run
  api:
    restart: 'no'
    env_file:
      - test.env
    volumes:
      - ./:/usr/src/api
      - /usr/src/api/node_modules
    depends_on:
      - test-db
    ports:
      - 9229:9229
      - 3000:3000
    command: npm run start:debug
And finally the Dockerfile:
FROM node:11
WORKDIR /usr/src/api
COPY package*.json ./
RUN npm install
COPY . .
# not using an execution list here so we get shell variable substitution
CMD npm run start:$NODE_ENV
As @allisongranemann pointed out, CircleCI states:
It is not possible to mount a volume from your job space into a
container in Remote Docker (and vice versa).
The original reason I wanted to mount the project directory into Docker was so that, in the development environment, I could change code quickly and run tests without rebuilding the container.
Given this limitation, the solution I went with was to remove the volume mounts from docker-compose-test.yml as follows:
version: '3'
services:
  test-db:
    image: postgres:11
    container_name: test-db
    env_file:
      - ./test-db.env # uses POSTGRES_DB and POSTGRES_PASSWORD to create a fresh db with a password when first run
  api:
    restart: 'no'
    env_file:
      - test.env
    depends_on:
      - test-db
    ports:
      - 9229:9229
      - 3000:3000
    command: npm run start:debug
And I also added docker-compose-test-dev.yml that adds the volumes for the dev environment:
version: '3'
services:
  api:
    volumes:
      - ./:/usr/src/api
Finally, to run tests on the dev environment, I run:
docker-compose -f docker-compose-base.yml -f docker-compose-test.yml -f docker-compose-test-dev.yml run api npm run test:e2e
