How to run docker-compose in production

I have built a MEAN stack application with an nginx front end.
I have two Dockerfiles - one for the front end and one for the back end - and a docker-compose file that pulls them together along with the database.
This works great on my development machine.
I then push the images to my Docker Hub repository, and on my production Ubuntu machine I pull the images I want from that repository.
But how should I run them?
I transfer my docker-compose file to the server and try to run it:
docker-compose -f docker-compose.prod.yml up
but it complains that the folder structure isn't what I have on my dev machine:
ERROR: build path /home/demo/api either does not exist, is not accessible, or is not a valid URL.
I don't want to put all the code on the server and rebuild it - surely that defeats the purpose of using Docker Hub images?
I also need the docker-compose file to pull in the .prod.env file for database credentials etc.
I know I'm missing something here.
How do I run my images without rebuilding them from scratch?
Do I need another service for this?
Thanks in advance.
docker-compose.prod.yml:
version: '3'
services:
  # Database
  database:
    env_file:
      - .prod.env
    image: mongo
    restart: always
    environment:
      # MONGO_INITDB_ROOT_USERNAME: root
      # MONGO_INITDB_ROOT_PASSWORD: $DB_ADMIN_PASSWORD
      # Create a new database. Please note, the
      # /docker-entrypoint-initdb.d/init.js has to be executed
      # in order for the database to be created
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: $MONGO_INITDB_ROOT_PASSWORD
      DB_NAME: $DB_NAME
      DB_USER: $DB_USER
      DB_PASSWORD: $DB_PASSWORD
      MONGO_INITDB_DATABASE: $DB_NAME
    volumes:
      # Add the db-init.js file to the Mongo DB container
      - ./mongo-init.sh:/docker-entrypoint-initdb.d/mongo-init.sh:ro
      - /data/db
    ports:
      - '27017-27019:27017-27019'
    networks:
      - backend-net
  # Database management
  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: $MONGO_INITDB_ROOT_PASSWORD
      ME_CONFIG_MONGODB_SERVER: database
    depends_on:
      - database
    networks:
      - backend-net
  # Nodejs API
  backend:
    depends_on:
      - database
    env_file:
      - .prod.env
    build:
      context: ./api
      dockerfile: Dockerfile-PROD-API
    # Note: put this container name into proxy.conf.json for local angular CLI development instead of localhost
    container_name: node-api-prod
    networks:
      - backend-net
  # Nginx and compiled angular app
  frontend:
    build:
      context: ./ui
      dockerfile: Dockerfile-PROD-UI
    ports:
      - "8180:80"
    container_name: nginx-ui-prod
    networks:
      - backend-net
networks:
  backend-net:
    driver: bridge
Dockerfile-PROD-API:
#SERVER ========================================
FROM node:10-alpine as server
WORKDIR /app
# Install dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
#RUN ls -lha
EXPOSE 3000
CMD ["npm", "run", "start"]
Dockerfile-PROD-UI:
#APP ========================================
FROM node:10-alpine as build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install @angular/cli && npm install
COPY . .
RUN npm run build
#RUN ls -lha
#FINAL ========================================
FROM nginx:1.18.0-alpine
COPY --from=build /usr/src/app/dist /usr/share/nginx/html
COPY ./nginx.conf /etc/nginx/conf.d/default.conf

Using full image names, including the Docker Hub path, resolved the issue for me.
Working solution shown below:
Dockerfile-PROD-UI
#GET ANGULAR ========================================
FROM node:10-alpine as base
WORKDIR /usr/src/app
COPY ui/package*.json ./
RUN npm install @angular/cli && npm install
COPY ui/. .
#BUILD ANGULAR ========================================
FROM base as build
RUN npm run build
#RUN ls -lha
#NGINX ========================================
FROM nginx:1.18.0-alpine
COPY --from=build /usr/src/app/dist /usr/share/nginx/html
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
Dockerfile-PROD-API
#SERVER ========================================
FROM node:10-alpine as server
WORKDIR /app
# Install dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
#RUN ls -lha
EXPOSE 3000
CMD ["npm", "run", "start"]
docker-compose.yml
version: '3.5'
services:
  # Database
  database:
    image: mongo
    restart: always
    env_file:
      - .prod.env
    environment:
      # MONGO_INITDB_ROOT_USERNAME: root
      # MONGO_INITDB_ROOT_PASSWORD: $DB_ADMIN_PASSWORD
      # Create a new database. Please note, the
      # /docker-entrypoint-initdb.d/init.js has to be executed
      # in order for the database to be created
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: $MONGO_INITDB_ROOT_PASSWORD
      DB_NAME: $DB_NAME
      DB_USER: $DB_USER
      DB_PASSWORD: $DB_PASSWORD
      MONGO_INITDB_DATABASE: $DB_NAME
    volumes:
      # Add the db-init.js file to the Mongo DB container
      - ./mongo-init.sh:/docker-entrypoint-initdb.d/mongo-init.sh:ro
      - db-data:/data/db
    ports:
      - '27017-27019:27017-27019'
    networks:
      - backend-net
  # Nodejs API
  backend:
    image: DOCKERHUBHUSER/DOCKERHUB_REPO:prod-backend
    restart: always
    depends_on:
      - database
    env_file:
      - .prod.env
    build:
      context: ./api
      dockerfile: Dockerfile-PROD-API
    container_name: backend
    networks:
      - backend-net
  # Nginx and compiled angular app
  frontend:
    image: DOCKERHUBHUSER/DOCKERHUB_REPO:prod-frontend
    restart: always
    depends_on:
      - backend
    build:
      context: .
      dockerfile: Dockerfile-PROD-UI
    ports:
      - "8180:80"
    container_name: frontend
    networks:
      - backend-net
networks:
  backend-net:
    driver: bridge
volumes:
  db-data:
    name: db-data
    external: true
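With the image: names in place, the server only needs the compose file, the .prod.env file and mongo-init.sh; the images themselves come from Docker Hub instead of being built. A rough sketch of the commands on the production box (assuming the file is saved there as docker-compose.prod.yml and the external volume does not exist yet):
# one-off: create the named volume the compose file declares as external
docker volume create db-data
# fetch the prebuilt images from Docker Hub
docker-compose -f docker-compose.prod.yml pull
# start the stack in the background without trying to build anything locally
docker-compose -f docker-compose.prod.yml up -d --no-build
Note that the $VARIABLES referenced directly in the compose file (such as $MONGO_INITDB_ROOT_PASSWORD) are substituted by docker-compose from the shell environment or from a .env file next to the compose file; env_file: only injects variables into the containers themselves.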

Related

Docker subdomain (Error: P1001: Can't reach database server at `db-subdomen`:`5436`)

I have a main domain and 3 subdomains already running on the server, but when I decided to add 3 more subdomains I started getting an error (Error: P1001: Can't reach database server at db-subdomen:5436 or 5432). The project is Next.js + Prisma + Docker.
As I understand it, the problem is either in the container's network scope or in the .env file.
The other 3 subdomains have the same Dockerfile and .env contents, and the docker-compose file (which is shared by all projects) is the same as well, yet this one still errors. There may be a typo somewhere, but I don't see it; I did everything the same as always.
My docker-compose (shared by all projects; this excerpt shows the main domain and one subdomain):
  #subdomen
  app-subdomen:
    container_name: app-subdomen
    image: subdomen-image
    build:
      context: subdomen
      dockerfile: Dockerfile
    restart: always
    environment:
      NODE_ENV: production
    networks:
      - subdomen-net
    env_file: subdomen/.env
    ports:
      - 7000:3000
    depends_on:
      - "db-subdomen"
    command: sh -c "sleep 13 && npx prisma migrate deploy && npm start"
  db-subdomen:
    container_name: db-subdomen
    env_file:
      - subdomen/.env
    image: postgres:latest
    restart: always
    volumes:
      - db-subdomen-data:/var/lib/postgresql/data
    networks:
      - subdomen-net
  #main domen
  app-main:
    image: main-image
    build:
      context: main
      dockerfile: Dockerfile
    restart: always
    environment:
      NODE_ENV: production
    env_file: main/.env
    ports:
      - 3000:3000
    depends_on:
      - "db-main"
    command: sh -c "sleep 3 && npx prisma migrate deploy && npm start"
    networks:
      - main-net
  db-main:
    env_file:
      - main/.env
    image: postgres:latest
    restart: always
    volumes:
      - db-main-data:/var/lib/postgresql/data
    networks:
      - main-net
volumes:
  db-main-data: {}
  db-subdomen-data: {}
networks:
  main-net:
    name: main-net
  subdomen-net:
    name: subdomen-net
.env for the subdomain:
POSTGRES_USER=subdomenUser
POSTGRES_PASSWORD=subdomen
POSTGRES_DB=subdomen-db
SECRET=88xU_X8yfsfdsfsdfsdfsdfsdfdsdc
HOST=https://subdomen.domen.ru
DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db-arkont:5436/${POSTGRES_DB}?schema=public
Subdomain Dockerfile (the other 3 projects/subdomains use the same one and have no problem):
FROM node:lts-alpine AS builder
# Create app directory
WORKDIR /app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
COPY prisma ./prisma/
# Install app dependencies
RUN npm install
RUN npx prisma generate
COPY . .
RUN npm run build
FROM node:lts-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package*.json ./
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/public ./public
COPY --from=builder /app/prisma ./prisma
ENV NODE_ENV=production
EXPOSE 3000
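No answer is included in this excerpt, but one general point is worth lining up against the compose file above: containers on subdomen-net reach Postgres by its service name (db-subdomen) on Postgres's in-network port 5432; published host ports such as 5436 do not apply to container-to-container traffic, and the host in the DATABASE_URL above (db-arkont) does not match any service shown. A sketch of what the line would normally look like for the services defined above:
# host = the compose service name, port = Postgres's in-network port
DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db-subdomen:5432/${POSTGRES_DB}?schema=public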

Docker compose - build property and volumes

I created this docker-compose file based on my two images (front end and back end); both run fine when I run docker-compose up.
The problem is that I want hot reload to work properly for my frontend, which is built with React.
I tried many things, but the volumes and build properties do not seem to be able to find my directories.
The directories for the two projects are:
C:\Users\pwdie\Downloads\projects\typescript-api (api)
C:\Users\pwdie\Downloads\projects\frontend-api-consumer (front)
with "projects" as the root for both of them, and frontend-api-consumer containing the docker-compose.yml.
I didn't understand that part of the docker-compose system; explanations will be well received.
Front Dockerfile
FROM node:17
WORKDIR /app
COPY package.json ./
RUN yarn
COPY ./ ./
EXPOSE 3000:3000
CMD ["yarn", "dev"]
Api Dockerfile
FROM node:17
WORKDIR /app
COPY package.json ./
RUN yarn
COPY . .
ENV DB_HOST=host.docker.internal
EXPOSE 5000:5000
CMD ["yarn", "dev"]
docker-compose.yml
version: "3.8"
services:
  api:
    image: api
    build: ../typescript_api
    volumes:
      - ../typescript_api:/var/app
    environment:
      DB_USER: postgres
      DB_PASSWORD: postgres
      DB_HOST: host.docker.internal
      DB_PORT: 5432
      DB_NAME: typescript_api
      ACCESS_TOKEN_SECRET: 1c6c3296f699e051220674d329e040dee0abae986f62d62eb89f55dfa95bff1ac9b52731177e664e25bff9b6ce0eba6ec7a9b6cf7d03e94487dc03179dc31c7e
    ports:
      - "5000:5000"
  front:
    image: front
    build: .
    volumes:
      - ./:/var/app
    environment:
      REACT_APP_MAPBOX_TOKEN: pk.eyJ1Ijoic291c2FkaWVnbzExIiwiYSI6ImNrdHEwbDRrdTBycTEycXBtbXZ5eXEzMm4ifQ.U58H7S1um_WcRC2rBoEuNw
      REACT_APP_API_URL: http://localhost:5000
    depends_on:
      - api
    ports:
      - "3000:3000"

docker-compose is giving the error: backend exited with code 2; backend | /bin/sh: syntax error: unterminated quoted string

I am stuck building the Docker containers for a MERN stack e-commerce app. These are my Dockerfiles and docker-compose file; the error I get is quoted below them.
# Dockerfile for React client
# Build react client
FROM node:lts-buster-slim
# Working directory be app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
COPY package-lock.json /usr/src/app
### Installing dependencies
RUN npm ci
# copy local files to app folder
COPY . /usr/src/app
EXPOSE 3000
CMD ["npm","start"]
# Dockerfile for Node Express Backend
FROM node:lts-buster-slim
# Create App Directory
WORKDIR /usr/src/app
# Install Dependencies
COPY package.json /usr/src/app/package.json
COPY package-lock.json /usr/src/app/package-lock.json
RUN npm ci
COPY . /usr/src/app
# Exports
EXPOSE 5000
CMD ["npm", "run", "dev"]
version: "3.7"
services:
frontend:
build: frontend
ports:
- 3000:3000
stdin_open: true
volumes:
- ./frontend:/usr/src/app
- /usr/src/app/node_modules
container_name: frontend
restart: always
networks:
- react-express
depends_on:
- backend
backend:
container_name: backend
restart: always
build: backend
volumes:
- ./backend:/usr/src/app
- /usr/src/app/node_modules
depends_on:
- mongo
networks:
- express-mongo
- react-express
ports:
- 5000:5000
mongo:
container_name: mongo
restart: always
image: mongo:4.2.0
volumes:
- ./data:/data/db
networks:
- express-mongo
ports:
- 27017:27017
networks:
react-express:
express-mongo:
I get this error when I execute docker-compose up:
backend | /bin/sh: syntax error: unterminated quoted string
backend exited with code 2
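No answer is shown here, so this is only a hedged guess: both images use the exec form of CMD, so the /bin/sh in the error is most likely the shell that npm itself spawns to run the dev script, which points at an unbalanced quote somewhere in the backend's package.json scripts (or in a file that script invokes). A purely hypothetical illustration of the shape of the mistake:
# what "npm run dev" effectively executes if the script value were, say, nodemon server.js --ignore 'data/*
/bin/sh -c "nodemon server.js --ignore 'data/*"
# the unclosed single quote makes sh report: syntax error: unterminated quoted string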

How to include volumes in my docker configuration?

I have a docker-compose configuration for a MEAN app that's working fine.
I would like my Angular (ng serve) and Express (nodemon) servers to reload automatically when I hit Ctrl+S, as if I were running the app locally.
For that, my containers need to be aware that the files changed.
How can I do that?
Angular's Dockerfile:
FROM node:10
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . ./
EXPOSE 4200
CMD ["npm", "start"]
Express's Dockerfile:
FROM node:6
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . ./
EXPOSE 3000
CMD ["npm", "start"]
docker-compose.yml:
version: '3'
services:
  angular: # name of the first service
    build: client # specify the directory of the Dockerfile
    ports:
      - "4200:4200"
  express: # name of the second service
    build: server # specify the directory of the Dockerfile
    ports:
      - "3000:3000"
    links:
      - database
  database: # name of the third service
    image: mongo
    ports:
      - "27017:27017"
Both Angular and Express have a .dockerignore for node_modules.
If you are in a dev environment, you can add a volumes section to your docker-compose.yml as below:
services:
  angular: # name of the first service
    build: client # specify the directory of the Dockerfile
    ports:
      - "4200:4200"
    volumes:
      - /path/in/host/machine:/path/in/container
  express: # name of the second service
    build: server # specify the directory of the Dockerfile
    ports:
      - "3000:3000"
    links:
      - database
    volumes:
      - /path/in/host/machine:/path/in/container
  database: # name of the third service
    image: mongo
    ports:
      - "27017:27017"
Reference:
volumes in docker-compose
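Concretely, with the client/ and server/ layout from the question and the /usr/src/app WORKDIR both Dockerfiles use, the mounts would look roughly like this (a sketch; the anonymous node_modules volumes are a common extra so the host folders, which exclude node_modules via .dockerignore, don't hide the modules installed in the image):
services:
  angular:
    build: client
    ports:
      - "4200:4200"
    volumes:
      - ./client:/usr/src/app
      - /usr/src/app/node_modules
  express:
    build: server
    ports:
      - "3000:3000"
    links:
      - database
    volumes:
      - ./server:/usr/src/app
      - /usr/src/app/node_modules
  database:
    image: mongo
    ports:
      - "27017:27017"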

"Error: Cannot find module" with Nodemon and Docker, even with volumes mounted

I keep getting errors that my modules don't exist when I run nodemon inside Docker and save the Node files. It takes a couple of saves before it throws the error. I have the volumes mounted the way the answers here suggested, but I'm still getting the error and I'm not sure what's causing it.
Here is my docker-compose.yml file.
version: "3.7"
services:
api:
container_name: api
build:
context: ./api
target: development
restart: on-failure
ports:
- "3000:3000"
- "9229:9229"
volumes:
- "./api:/home/node/app"
- "node_modules:/home/node/app/node_modules"
depends_on:
- db
networks:
- backend
db:
container_name: db
command: mongod --noauth --smallfiles
image: mongo
restart: on-failure
volumes:
- "mongo-data:/data/db"
- "./scripts:/scripts"
- "./data:/data/"
ports:
- "27017:27017"
networks:
- backend
networks:
backend:
driver: bridge
volumes:
mongo-data:
node_modules:
Here is my Dockerfile:
# Get the current Node Alpine Linux image.
FROM node:alpine AS base
# Expose port 3000 for node.
EXPOSE 3000
# Set working directory.
WORKDIR /home/node/app
# Copy project content.
COPY package*.json ./
# Development environment.
FROM base AS development
# Set environment of node to development to trigger flag.
ENV NODE_ENV=development
# Express flag.
ENV DEBUG=app
# Run NPM install.
RUN npm install
# Copy source code.
COPY . /home/node/app
# Run the app.
CMD [ "npm", "start" ]
# Production environment.
FROM base AS production
# Set environment of node to production to trigger flag.
ENV NODE_ENV=production
# Run NPM install.
RUN npm install --only=production --no-optional && npm cache clean --force
# Copy source code.
COPY . /home/node/app
# Set user to node for better security.
USER node
# Run the app.
CMD [ "npm", "run", "start:prod" ]
Turns out I didn't put my .dockerignore in the proper folder. You're supposed to put it in the build context folder.
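For reference, the .dockerignore is read from the root of the build context (./api in the compose file above, which is also where the Dockerfile lives), and a minimal one for this setup would typically contain just:
node_modules
npm-debug.log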
