When I build the following Dockerfile, the process stops in the middle: after prisma is installed, the RUN db-migrate up step fails. But when I ran the same command inside the running container via docker exec with /bin/bash, it worked without any problem. I don't think I can run the app before serving it. There is a workaround, though: putting the migration commands into a service in docker-compose.yml. How can I achieve that? Or is there any way to run those migration RUN commands in this Dockerfile?
Dockerfile
FROM node:16.15.0-alpine
WORKDIR /app
COPY package*.json ./
# generated prisma files
COPY prisma ./prisma/
# COPY ENV variable
COPY .env ./
# COPY
COPY . .
RUN npm install
RUN npm install -g db-migrate
RUN npm install -g prisma
RUN db-migrate up
RUN prisma db pull
RUN prisma generate
EXPOSE 3000
CMD ["npm", "run", "dev"]
docker-compose.yml
version: '3.8'
services:
  mysqldb:
    image: mysql:5.7
    restart: unless-stopped
    env_file: ./.env
    environment:
      - MYSQL_ROOT_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - MYSQL_DATABASE=$MYSQLDB_DATABASE
    ports:
      - $MYSQLDB_LOCAL_PORT:$MYSQLDB_DOCKER_PORT
    volumes:
      - db:/var/lib/mysql
  auth:
    depends_on:
      - mysqldb
    build: ./auth
    restart: unless-stopped
    env_file: ./.env
    ports:
      - $NODE_LOCAL_PORT:$NODE_DOCKER_PORT
    environment:
      - DB_HOST=mysqldb
      - DB_USER=$MYSQLDB_USER
      - DB_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - DB_NAME=$MYSQLDB_DATABASE
      - DB_PORT=$MYSQLDB_DOCKER_PORT
    stdin_open: true
    tty: true
volumes:
  db:
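For reference, a minimal sketch of the workaround mentioned above (untested; it reuses the auth service and the globally installed db-migrate/prisma CLIs from the Dockerfile) would be to drop the migration RUN steps from the Dockerfile and run them when the container starts, once mysqldb is actually reachable:

  auth:
    depends_on:
      - mysqldb
    build: ./auth
    env_file: ./.env
    # run migrations at container start (when the database service exists)
    # instead of at image build time, where RUN db-migrate up has no database to reach
    command: sh -c "db-migrate up && prisma db pull && prisma generate && npm run dev"

Note that depends_on only controls start order, not readiness, so a wait/retry step before db-migrate up may still be needed.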
Related
I have a main domain and 3 subdomains running on the server, but when I decided to add a new subdomain, for some reason I get an error (Error: P1001: Can't reach database server at db-subdomen:5436 or 5432). The project is Next.js + Prisma + Docker.
As I understand it, the problem is either in the container's networking or in the .env file.
The other 3 subdomains have the same code in their Dockerfile and .env. The docker-compose file, which is shared by all projects, is also the same, but there is still an error. There may be a typo somewhere, I don't know anymore, because I did everything the same as always.
My docker-compose (shared by all projects; this example shows the main domain and one subdomain):
  #subdomen
  app-subdomen:
    container_name: app-subdomen
    image: subdomen-image
    build:
      context: subdomen
      dockerfile: Dockerfile
    restart: always
    environment:
      NODE_ENV: production
    networks:
      - subdomen-net
    env_file: subdomen/.env
    ports:
      - 7000:3000
    depends_on:
      - "db-subdomen"
    command: sh -c "sleep 13 && npx prisma migrate deploy && npm start"
  db-subdomen:
    container_name: db-subdomen
    env_file:
      - subdomen/.env
    image: postgres:latest
    restart: always
    volumes:
      - db-subdomen-data:/var/lib/postgresql/data
    networks:
      - subdomen-net
  #main domen
  app-main:
    image: main-image
    build:
      context: main
      dockerfile: Dockerfile
    restart: always
    environment:
      NODE_ENV: production
    env_file: main/.env
    ports:
      - 3000:3000
    depends_on:
      - "db-main"
    command: sh -c "sleep 3 && npx prisma migrate deploy && npm start"
    networks:
      - main-net
  db-main:
    env_file:
      - main/.env
    image: postgres:latest
    restart: always
    volumes:
      - db-main-data:/var/lib/postgresql/data
    networks:
      - main-net
volumes:
  db-main-data: {}
  db-subdomen-data: {}
networks:
  main-net:
    name: main-net
  subdomen-net:
    name: subdomen-net
.env for the subdomain:
POSTGRES_USER=subdomenUser
POSTGRES_PASSWORD=subdomen
POSTGRES_DB=subdomen-db
SECRET=88xU_X8yfsfdsfsdfsdfsdfsdfdsdc
HOST=https://subdomen.domen.ru
DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}#db-arkont:5436/${POSTGRES_DB}?schema=public
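For comparison, a Prisma Postgres URL for a service on the same Compose network normally has the form postgresql://USER:PASSWORD@HOST:PORT/DB, with the Compose service name as HOST and the container-side port (5432 for the stock postgres image). Whether that is the issue here I can't say, but with the services above it would look roughly like:

DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db-subdomen:5432/${POSTGRES_DB}?schema=public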
The subdomain's Dockerfile (the other 3 projects/subdomains have the same one and there is no problem):
FROM node:lts-alpine AS builder
# Create app directory
WORKDIR /app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
COPY prisma ./prisma/
# Install app dependencies
RUN npm install
RUN npx prisma generate
COPY . .
RUN npm run build
FROM node:lts-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package*.json ./
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/public ./public
COPY --from=builder /app/prisma ./prisma
ENV NODE_ENV=production
EXPOSE 3000
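Given the reasoning above that the problem is either the container networking or the .env file, one way to separate the two (a sketch only, using the service names from the compose file; adjust as needed) is to check whether the app container can open a TCP connection to the database service at all:

# run from the host, next to docker-compose.yml
docker-compose exec app-subdomen node -e "require('net').connect(5432, 'db-subdomen').on('connect', () => { console.log('db reachable'); process.exit(0); }).on('error', (e) => { console.log('db NOT reachable:', e.code); process.exit(1); })"

If this fails, it points at networking (wrong network or service name); if it succeeds, the DATABASE_URL or credentials become the more likely culprit.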
I Dockerized a MENN (Next.js) stack app, and now everything works fine. I run into issues when I need to install npm packages. Let me first show you the structure.
src/server/Dockerfile
FROM node:14-alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install -qyg nodemon@2.0.7
RUN npm install -qy
COPY . .
CMD ["npm", "run", "dev"]
src/client/Dockerfile
FROM node:14-alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install -qy
COPY . .
CMD ["npm", "run", "dev"]
src/docker-compose.yml
version: "3"
services:
client:
build:
context: ./client
dockerfile: Dockerfile
ports:
- 3000:3000
networks:
- mern-network
volumes:
- ./client/src:/usr/app/src
- ./client/public:/usr/app/public
depends_on:
- server
environment:
- REACT_APP_SERVER=http://localhost:5000
- CHOKIDAR_USEPOLLING=true
command: npm run dev
stdin_open: true
tty: true
server:
build:
context: ./server
dockerfile: Dockerfile
ports:
- 5000:5000
networks:
- mern-network
volumes:
- ./server/src:/usr/app/src
depends_on:
- db
environment:
- MONGO_URL=mongodb://db:27017
- CLIENT=http://localhost:3000
command: /usr/app/node_modules/.bin/nodemon -L src/index.js
db:
image: mongo:latest
ports:
- 27017:27017
networks:
- mern-network
volumes:
- mongo-data:/data/db
networks:
mern-network:
driver: bridge
volumes:
mongo-data:
driver: local
Now, if I install a package on the host machine, package.json is updated there as expected, and if I run
docker-compose build
the package.json inside the container is also updated, which is fine. But I feel like this kind of defeats the point of having the app Dockerized: if multiple developers need to work on it, they all still need node/npm installed on their machines, so what's the point of using Docker other than for deployments? So what I do right now is
sudo docker exec -it cebc4bcd9af6 sh   # log into the server container
and then run a command, e.g.
npm i express
That installs the package and updates package.json inside the container, but the host package.json is not updated, and if I run the build command again all the changes are lost, since the Dockerfile copies the host's source code into the container. Is there a way to synchronize the container and the host, so that installing a package inside the container also updates the host files? That way I wouldn't need node/npm installed locally, which fulfills the purpose of having the app Dockerized.
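One pattern that gets close to this (a sketch against the server service above; the /usr/app path matches the Dockerfile's WORKDIR) is to bind-mount the whole project directory instead of just src, and keep node_modules in a named volume so the container's install is the one actually used:

  server:
    build:
      context: ./server
      dockerfile: Dockerfile
    volumes:
      - ./server:/usr/app                        # whole project, including package.json
      - server_node_modules:/usr/app/node_modules
    command: /usr/app/node_modules/.bin/nodemon -L src/index.js
    # ...other settings unchanged...

volumes:
  server_node_modules:

With the whole directory mounted, npm i express run inside the container writes to the host's package.json as well (it is the same file), while node_modules stays container-side in the named volume.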
I used docker-compose to create Docker containers for my React, Node.js, and Postgres project.
After I created the Dockerfile and docker-compose.yml, I ran docker-compose up --build.
Then I wasn't able to create the containers and got errors.
The errors I get are attached as screenshots (Error 1, Error 2, Error 3).
How can I fix this and successfully build the containers?
Here is a docker-compose.yml file in './'
version: '3'
services:
  server:
    container_name: mylivingcity_server
    build: ./server
    expose:
      - 3001
    ports:
      - 3001:3001
    volumes:
      - ./server/config:/usr/src/app/server/config
      - ./server/controllers:/usr/src/app/server/controllers
      - ./server/db:/usr/src/app/server/db
    command: npm run start
  postgres:
    image: postgres:12
    container_name: mylivingcity_postgres
    ports:
      - 5432:5432
    volumes:
      - ./postgres/data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=mylivingcity
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  frontend:
    container_name: mylivingcity_frontend
    build: ./frontend
    expose:
      - 3000
    ports:
      - 3000:3000
    volumes:
      - ./frontend/src:/usr/src/app/frontend/src
      - ./frontend/public:/usr/src/app/frontend/public
    command: npm start
    stdin_open: true
Here is a Dockerfile in './frontend'
FROM node:12
# Create frontend directory
RUN mkdir -p /usr/src/app/frontend/
WORKDIR /usr/src/app/frontend/
# Install dependencies
COPY package*.json /usr/src/app/frontend/
RUN npm install
COPY . /usr/src/app/frontend/
CMD [ "npm" , "start" ]
Here is a Dockerfile in './server'
FROM node:12
# Create server directory
RUN mkdir -p /usr/src/app/server/
WORKDIR /usr/src/app/server/
# Install dependencies
COPY package*.json /usr/src/app/server/
RUN npm install
COPY . /usr/src/app/server/
CMD [ "npm" , "run" , "start" ]
I'm trying to get nodemon and docker-compose to work with each other, and I'm having a problem: every time I run docker-compose up I get the following error: sh: missing ]. Here are the relevant files:
Dockerfile
FROM node:11.1.0-alpine
MAINTAINER TileHalo
WORKDIR /usr/src/app
RUN npm install -g nodemon
COPY package*.json ./
RUN npm install --save
COPY . .
EXPOSE 3000
CMD [ "nodemon", "./bin/www"]
and docker-compose.yml
version: "2"
services:
web:
build: .
ports:
- "3000:3000"
depends_on:
- postgres
volumes:
- .:/usr/src/app
postgres:
image: "postgres:alpine"
environment:
POSTGRES_PASSWORD: supersalainen
POSTGRES_USER: kipa
EDIT: It works on pure Docker after the comma fix, but still fails under docker-compose.
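One detail that may matter here (not certain it is the cause, but it fits "works on plain Docker, fails under compose"): docker-compose up reuses an image it has already built, so a Dockerfile fix only reaches the compose-managed container after an explicit rebuild:

docker-compose build web
docker-compose up
# or in one step: docker-compose up --build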
Can you help with running gulp tasks inside a Docker container with docker-compose, so that it compiles SCSS files?
My file structure at host machine:
/application
--/config
--/models
--/public
----/scss
----/css
----index.html
----gulpfile.js
--/routes
.dockerignore
.gitignore
Dockerfile
docker-compose.yml
package.json
server.js
/application/public/gulpfile.js:
var gulp = require('gulp');
var sass = require('gulp-sass');
gulp.task('sass', function() {
gulp.src('./scss/styles.scss')
.pipe(sass().on('error', sass.logError))
.pipe(gulp.dest('./css'));
});
gulp.task('sass:watch', function() {
gulp.watch('./scss/styles.scss', ['sass']);
});
Dockerfile:
FROM node:6
RUN mkdir -p /app
WORKDIR /app
COPY . /app
RUN npm install nodemon -g && npm install bower -g && npm install gulp -g
RUN cd /app
RUN npm install
RUN cd /app/public && bower install --allow-root
RUN cd /app
COPY . /app
docker-compose.yml:
version: '2'
services:
  web:
    build: .
    command: nodemon /app/server.js
    volumes:
      - .:/app/
      - /app/node_modules
      - /app/public
      - /app/config
      - /app/models
      - /app/routes
    ports:
      - "3000:3000"
    depends_on:
      - mongo
  mongo:
    image: mongo:latest
    ports:
      - "27018:27017"
    restart: always
Ideally, it would be good to run the command from the docker-compose.yml file to start watching SCSS files inside the /application/public folder, but I have wasted a couple of days trying to solve this problem.
I also tried running gulp inside the container. It actually works OK, but the changes were not reflected on the host machine.
Please do not suggest using ready-made Docker Hub images; I have used them and they did not solve my issue.
I'll be thankful for any help, links, info, or a ready-made solution.
You can add a separate service in your compose file to run a separate command in a separate container by reusing the web service definition through a YAML anchor (see the web: &web line and the gulp service below):
version: '2'
services:
  web: &web
    build: .
    command: nodemon /app/server.js
    volumes:
      - .:/app/
      - /app/node_modules
      - /app/public
      - /app/config
      - /app/models
      - /app/routes
    ports:
      - "3000:3000"
    depends_on:
      - mongo
  gulp:
    <<: *web
    ports: []    # don't publish port 3000 a second time
    command: <gulp command to watch files>
  mongo:
    image: mongo:latest
    ports:
      - "27018:27017"
    restart: always
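With the gulpfile from the question, that placeholder could presumably become something like command: sh -c "cd /app/public && gulp sass:watch" (assuming gulp resolves inside the container, which the question's Dockerfile attempts via npm install gulp -g). One caveat: the anonymous volumes such as /app/public inherited from the web definition shadow the .:/app/ bind mount at those paths, so CSS compiled there inside the container will not show up on the host unless those entries are removed or the output goes somewhere covered by the bind mount.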