Dockerfile and docker-compose.yaml for different environments - docker

docker-compose for prod:
version: '2'
services:
  db:
    image: mongo:3
    ports:
      - "27017:27017"
  api-server:
    build: .
    ports:
      - "443:443"
    links:
      - db
    volumes:
      - /www/node_modules
Dockerfile for prod:
FROM alpine:3.4
LABEL authors="John Doe"
RUN apk add --update nodejs bash git
COPY package.json /www/package.json
RUN cd /www; apk --no-cache add --virtual builds-deps build-base python && npm install && npm rebuild bcrypt --build-from-source && apk del builds-deps
COPY . /www
WORKDIR /www
ENV PORT 8080
EXPOSE 8080
CMD ["npm", "start"]
docker-compose for dev:
version: '2'
services:
  db:
    image: mongo:3
    ports:
      - "27017:27017"
  api-server:
    build: .
    ports:
      - "8080:8080"
    links:
      - db
    volumes:
      - .:/www
      - /www/node_modules
Dockerfile for dev:
FROM alpine:3.4
LABEL authors="John Doe"
RUN apk add --update nodejs bash git
COPY package.json /www/package.json
RUN cd /www; apk --no-cache add --virtual builds-deps build-base python && npm install && npm rebuild bcrypt --build-from-source && apk del builds-deps
WORKDIR /www
ENV PORT 8080
EXPOSE 8080
CMD ["npm", "run", "dev"]
I'm running it with docker-compose up.
Right now I have to manually make changes to the files in order to change environments, which is, of course, the wrong way to do this.
I assume there should be a way to avoid these manual changes. How do I do that?

You can specify environment variables per service in the docker-compose.yml file.
Example:
services:
  api-server:
    environment:
      NODE_ENV: "development"
      APP_PORT: 5000
      DB_URI: "<DB URI>"
And in your code you can read these values via process.env.NODE_ENV, process.env.APP_PORT, and so on.

The Dockerfile should contain the commands for creating the image. That image, when used by your docker-compose api-server service, will run the server as required.
For example, in your case:
Your Dockerfile should look something like this.
FROM alpine:3.4
LABEL authors="John Doe"
RUN apk add --update nodejs bash git
RUN mkdir /www
WORKDIR /www
ADD package.json /www/package.json
RUN apk --no-cache add --virtual builds-deps build-base python && npm install && npm rebuild bcrypt --build-from-source && apk del builds-deps
This will create your image.
Regarding your docker-compose.yml file, use two separate docker-compose files, one for production and one for development. Use env files to separate out the development and production variables. You can check the development docker-compose.yml and development env file into your repository; the production docker-compose and env files will be specific to your production server.
Your sample docker-compose.yml file should look something like this:
version: '2'
services:
  db:
    image: mongo:3
    ports:
      - "27017:27017"
  api-server:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    links:
      - db
    env_file:
      - development.env
    volumes:
      - ./:/www
      - /www/node_modules # I really don't understand this statement
    command: >
      /bin/ash -c "npm run dev"
This will be running your development server.
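For reference, the development.env file referenced above holds the per-environment variables. A minimal sketch, reusing the illustrative variable names from the environment example earlier (the values are placeholders, not taken from your project):
NODE_ENV=development
PORT=8080
DB_URI=<DB URI>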
Similarly, a second docker-compose.yml file with a different port mapping ("443:443" in your case), env_file set to production.env, and command set to /bin/ash -c "npm start" will run your production server.
version: '2'
services:
  db:
    image: mongo:3
    ports:
      - "27017:27017"
  api-server:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "443:443"
    links:
      - db
    env_file:
      - production.env
    volumes:
      - ./:/www
      - /www/node_modules # I really don't understand this statement
    command: >
      /bin/ash -c "npm start"
In case you are running the development and production servers on the same machine (never advisable), you can create two files named docker-compose-development.yml and docker-compose-production.yml, and then start each one with:
sudo docker-compose -f docker-compose-development.yml up
sudo docker-compose -f docker-compose-production.yml up
for the development and production systems respectively.

You can also use environment variables for this.
For example, set the variable in your local machine's terminal with export env="prod", and then reference it in the docker-compose file as
image: container_image_${env} or image: container_image:${env}
which resolves to container_image_prod or container_image:prod respectively.
You can also set the service name for db as db_${env}, so that the service name follows the environment (db_prod in this case); you can do the same for the other services if required.
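As a minimal sketch of that substitution (assuming env has been exported in the shell that runs docker-compose; only the image tag is shown being substituted here):
version: '2'
services:
  api-server:
    build: .
    # ${env} is read from the shell, e.g. export env="prod"
    image: container_image:${env}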

Related

dial tcp 127.0.0.1:8080: connect: connection refused. go docker app

I have two apps written in Go: user_management, which I run first (docker-compose up --build), and then sport_app (docker-compose up --build). sport_app depends on user_management.
The sport_app Dockerfile is below.
FROM golang:alpine
RUN apk update && apk upgrade && apk add --no-cache bash git openssh curl
WORKDIR /go-sports-entities-hierarchy
COPY . /go-sports-entities-hierarchy/
RUN rm -rf /go-sports-entities-hierarchy/.env
RUN go mod download
RUN chmod +x /go-sports-entities-hierarchy/scripts/*
RUN ./scripts/build.sh
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.2.1/wait /wait
RUN chmod +x /wait
ENV GIN_MODE="debug" \
GQL_SERVER_HOST="localhost" \
GQL_SERVER_PORT=7777 \
ALLOWED_ORIGINS=* \
USER_MANAGEMENT_SERVER_URL="http://localhost:8080/user/me" \
# GQLGen config
GQL_SERVER_GRAPHQL_PATH="graphql" \
GQL_SERVER_GRAPHQL_PLAYGROUND_ENABLED=true \
GQL_SERVER_GRAPHQL_PLAYGROUND_PATH="playground"
# Export necessary port
EXPOSE 7777
CMD /wait && ./scripts/run.sh
The sport_app docker-compose.yml file is below.
version: '3'
volumes:
  postgres_data:
    driver: local
services:
  go-sports-entities-hierarchy:
    restart: always
    build:
      dockerfile: Dockerfile
      context: .
    environment:
      WAIT_HOSTS: postgres:5432
      # Web framework config
      GIN_MODE: debug
      GQL_SERVER_HOST: go-sports-entities-hierarchy
      GQL_SERVER_PORT: 7777
      ALLOWED_ORIGINS: "*"
      USER_MANAGEMENT_SERVER_URL: http://localhost:8080/user/me
      # GQLGen config
      GQL_SERVER_GRAPHQL_PATH: graphql
      GQL_SERVER_GRAPHQL_PLAYGROUND_ENABLED: "true"
      GQL_SERVER_GRAPHQL_PLAYGROUND_PATH: playground
    ports:
      - 7777:7777
    depends_on:
      - postgres
      - redisearch
  go-sports-events-workflow:
    restart: always
    build:
      dockerfile: Dockerfile
      context: .
    environment:
      WAIT_HOSTS: postgres:5432
      # Web framework config
      GIN_MODE: debug
      GQL_SERVER_HOST: go-sports-events-workflow
      GQL_SERVER_PORT: 7778
      ALLOWED_ORIGINS: "*"
      # GQLGen config
      GQL_SERVER_GRAPHQL_PATH: graphql
      GQL_SERVER_GRAPHQL_PLAYGROUND_ENABLED: "true"
      GQL_SERVER_GRAPHQL_PLAYGROUND_PATH: playground
    depends_on:
      - postgres
      - redisearch
      - go-sports-entities-hierarchy
The user_management app Dockerfile is below:
FROM golang:alpine
RUN apk update && apk add --no-cache git ca-certificates && update-ca-certificates
# Set necessary environmet variables needed for our image
ENV GO111MODULE=on \
CGO_ENABLED=0 \
GOOS=linux \
GOARCH=amd64
# Move to working directory /build
WORKDIR /build
# Copy and download dependency using go mod
COPY go.mod .
COPY go.sum .
RUN go mod download
# Copy the code into the container
COPY . .
# Build the application
RUN go build -o main .
# Move to /dist directory as the place for resulting binary folder
WORKDIR /dist
# Copy binary from build to main folder
RUN cp -r /build/html .
RUN cp /build/main .
# Environment Variables
ENV DB_HOST="127.0.0.1" \
APP_PROTOCOL="http" \
APP_HOST="localhost" \
APP_PORT=8080 \
ALLOWED_ORIGINS="*"
# Export necessary port
EXPOSE 8080
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.2.1/wait /wait
RUN chmod +x /wait
# Command to run when starting the container
CMD /wait && /dist/main
The user_management app docker-compose.yml file is below:
version: '3'
volumes:
  postgres_data:
    driver: local
services:
  postgres:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - 5432:5432
  go-user-management:
    restart: always
    build:
      dockerfile: Dockerfile
      context: .
    environment:
      # Postgres Details
      DB_PORT: 5432
      # APP details
      APP_PROTOCOL: http
      APP_HOST: localhost
      APP_PORT: 8080
      # System Configuration Details
      ALLOWED_ORIGINS: "*"
    ports:
      - 8080:8080
    depends_on:
      - postgres
In sport_app I call the user_management API with the code below and get an error:
client := resty.New()
resp, err := client.R().SetHeader("Content-Type", "application/json").SetHeader("Authorization", "Bearer "+token).Get("http://localhost:8080/user/me")
The error is: Get "http://localhost:8080/user/me": dial tcp 127.0.0.1:8080: connect: connection refused
This API (http://localhost:8080/user/me) is implemented in the user_management app and it works when I check it with Postman.
I have already read the answers to this question, but I cannot solve my problem.
I am new to docker, please help.
For containers started from separate docker-compose projects to communicate, you need to make sure they are attached to the same Docker network.
For example (edited for brevity), here is one of the docker-compose.yml files:
# sport_app docker-compose.yml
version: '3'
services:
  go-sports-entities-hierarchy:
    ...
    networks:
      - some-net
  go-sports-events-workflow:
    ...
    networks:
      - some-net
networks:
  some-net:
    driver: bridge
And the other docker-compose.yml
# user_management app docker-compose.yml
version: '3'
services:
  postgres:
    ...
    networks:
      - some-net
  go-user-management:
    ...
    networks:
      - some-net
networks:
  some-net:
    external: true
Note: Your app's network is given a name based on the project name, which is based on the name of the directory it lives in; in this case a user_ prefix was added.
The containers can then talk to each other using the service name, e.g. go-user-management.
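As a concrete illustration (a sketch showing only the relevant part), once both stacks share the network, the sport_app service can point at the user_management service by its service name instead of localhost:
# sport_app docker-compose.yml (excerpt)
services:
  go-sports-entities-hierarchy:
    environment:
      # the other compose project's service is reachable by its service name
      USER_MANAGEMENT_SERVER_URL: http://go-user-management:8080/user/me
    networks:
      - some-net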
After running the docker-compose up --build commands, you can run docker network ls to see the networks, and then docker network inspect <network name> to check which containers are attached to them.

Prisma deploy - authentication error for local deployment

I am trying to run prisma deploy against a local Prisma server running on port 4466, but when I run prisma deploy I get this message:
Authenticating...
Opening https://app.prisma.io/cli-auth?secret=$2a$08$u3VSbu6GSxSV8l86BFs24O in the browser
Could not open the authentication link, maybe this is an environment without a browser. Please open this url in your browser to authenticate: https://app.prisma.io/cli-auth?secret=$2a$08$u3VSbu6GSxSV8l86BFs24O
This is the Prisma server compose file:
mongodb:
  image: mongo:4.2
  container_name: mongodb
  volumes:
    - ./mongo-volume:/data/db
  ports:
    - "27017:27017"
prisma-server:
  image: prismagraphql/prisma:1.34.10
  container_name: prisma-server
  restart: always
  ports:
    - "4466:4466"
  environment:
    PRISMA_CONFIG: |
      port: 4466
      managementApiSecret: password#123
      databases:
        default:
          connector: mongo
          uri: mongodb://mongodb
This is my prisma.yml file. I am running prisma deploy within another Dockerfile.
endpoint: ${env:PRISMA_ENDPOINT}
datamodel: datamodel.prisma
secret: ${env:PRISMA_SECRET}
databaseType: document
generate:
  - generator: javascript-client
    output: ./src/generated/prisma-client
hooks:
  post-deploy:
    - prisma generate
    - npx nexus-prisma-generate --client ./src/generated/prisma-client --output ./src/generated/nexus-prisma
This is my .env file:
PRISMA_SECRET=password#123
PRISMA_ENDPOINT=http://prisma-server:4466/app/dev
API_SECRET=password#123
This is what helped me run prisma deploy within a Dockerfile:
FROM node:9-alpine
WORKDIR /app
COPY . .
# To handle 'not get uid/gid' error in alpine linux set unsafe-perm true
RUN apk update && apk upgrade && apk add bash \
&& npm config set unsafe-perm true \
&& chmod +x ./docker-scripts/entrypoint.sh \
&& yarn install \
&& yarn global add prisma
EXPOSE 4000
CMD ["./docker-scripts/entrypoint.sh"]
entrypoint.sh
#!/bin/bash
# prisma deploy
cd /prisma
prisma deploy
# go into the project...
cd /app
npm run start
docker-compose file
services:
  prisma-client:
    image: image-name-here
    container_name: prisma-client
    restart: always
    ports:
      - "4000:4000"
    environment:
      PRISMA_ENDPOINT: http://prisma-server:4466
    networks:
      - prisma
Now, once I ran docker-compose up, the prisma-client container was also created.
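One detail worth noting: the prisma.yml above reads ${env:PRISMA_SECRET} as well as ${env:PRISMA_ENDPOINT}, so the container that runs prisma deploy needs both variables available. A sketch of passing them through the .env file shown earlier (assuming that file sits next to the docker-compose file):
services:
  prisma-client:
    image: image-name-here
    env_file:
      - .env # supplies PRISMA_ENDPOINT and PRISMA_SECRET to the container
    ports:
      - "4000:4000"
    networks:
      - prisma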

Docker-compose: How to share data between services without using named volumes or multi-stage builds

Are there any ways to share data between containers? There is the following docker-compose file:
version: '3'
services:
  app_build_prod:
    container_name: 'app'
    build:
      context: ../
      dockerfile: docker/Dockerfile
      args:
        command: build:prod
  nginx:
    container_name: 'nginx'
    image: nginx:alpine
    ports:
      - "80:80"
    depends_on:
      - app_build_prod
Dockerfile content is:
FROM node:10-alpine as builder
## Installing missing packages, fixing git self signed certificate issue
RUN apk update && apk upgrade && \
apk add --no-cache bash git openssh && \
rm -rf /var/cache/apk/* && \
git config --global http.sslVerify false
## Defining app directory
WORKDIR /usr/app
## Copying files. Files listed in .dockerignore are omitted
COPY . .
## Keeping node_modules on a separate intermediate image prevents unnecessary npm installs at each build
RUN npm ci
## Declaring arguments and environment variables. It is important to declare the env var in order to consume it at the run stage
ARG command=build:prod
ENV command=$command
ENTRYPOINT npm run ${command}
Tried with @Robert's solution, but couldn't make it work - the app container crashes because of:
Error: EBUSY: resource busy or locked, rmdir '/usr/app/dist'
My assumption is that the /usr/app/dist directory is mounted with read-only access, therefore when Angular attempts to remove it prior to the build, it throws an error.
I need to send data in the following direction:
app_build_prod:/usr/app/dist => nginx:/usr/share/nginx/html
I had the same problem and changed the sharing to use a multi-stage build:
FROM alpine:latest AS builder
...build app_build_prod
FROM nginx:alpine
COPY --from=builder /usr/app/dist /usr/share/nginx/html
and changed docker-compose to:
version: '3'
services:
  nginx:
    container_name: 'nginx'
    build:
      ...
    ports:
      - "80:80"

Bitnami/Express 4.16.4 - npm install

I need to install additional Node.js modules in the Bitnami Docker container.
I would like to install the body-parser module. I've started the container with sudo docker-compose up and it runs fine. I tried to modify the Dockerfile and docker-compose.yml to install body-parser, but I get an EACCES: permission denied, access '/app/node_modules' error. Can you help?
TIA,
Thomas
**** UPDATE 4/23/2019 ***
This is the Dockerfile.
I added the body-parser line.
## Dockerfile for building production image
FROM bitnami/express:4.16.4-debian-9-r166
LABEL maintainer "John Smith <john.smith@acme.com>"
ENV DISABLE_WELCOME_MESSAGE=1
ENV NODE_ENV=production \
PORT=3000
# Skip fetching dependencies and database migrations for production image
ENV SKIP_DB_WAIT=0 \
SKIP_DB_MIGRATION=1 \
SKIP_NPM_INSTALL=1 \
SKIP_BOWER_INSTALL=1
COPY . /app
RUN sudo chown -R bitnami: /app
RUN npm install
RUN npm install --save body-parser
EXPOSE 3000
CMD ["npm", "start"]
docker-compose.yml
version: '2'
services:
  mongodb:
    image: 'bitnami/mongodb:latest'
  express:
    tty: true # Enables debugging capabilities when attached to this container.
    image: 'bitnami/express:4'
    command: npm start
    environment:
      - PORT=3000
      - NODE_ENV=development
      - DATABASE_URL=mongodb://mongodb:27017/myapp
      - SKIP_DB_WAIT=0
      - SKIP_DB_MIGRATION=0
      - SKIP_NPM_INSTALL=0
      - SKIP_BOWER_INSTALL=0
    depends_on:
      - mongodb
    ports:
      - 3000:3000
    volumes:
      - .:/app

Docker container communication - "Could not translate host name \"mydbalias\" to address: Temporary failure in name resolution"

I have a PostgreSQL container and a Swift server container. I need to pass the DB IP to the server to start it, so I created an alias for the DB in my custom bridge network. Have a look at my docker-compose.yml:
version: '3'
services:
  db:
    build: database
    image: postgres
    networks:
      mybridgenet:
        aliases:
          - mydbalias
  web:
    image: mywebserver:latest
    ports:
      - "8000:8000"
    depends_on:
      - db
    networks:
      - mybridgenet
    environment:
      WAIT_HOSTS: db:5432
networks:
  mybridgenet:
    driver: bridge
Dockerfile to build webserver.
FROM swift:4.2.1
RUN apt-get update && apt-get install --no-install-recommends -y libpq-dev uuid-dev && rm -rf /var/lib/apt/lists/*
EXPOSE 8000
WORKDIR /app
COPY client ./client
COPY Package.swift ./
COPY Package.resolved ./
COPY Sources ./Sources
RUN swift build
COPY pkg-swift-deps.sh ./
RUN chmod +x ./pkg-swift-deps.sh
RUN ./pkg-swift-deps.sh ./.build/debug/bridgeOS
FROM busybox:glibc
COPY --from=0 /app/swift_libs.tar.gz /tmp/swift_libs.tar.gz
COPY --from=0 /app/.build/debug/bridgeOS /usr/bin/
RUN tar -xzvf /tmp/swift_libs.tar.gz && \
rm -rf /tmp/*
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.2.1/wait /wait
RUN chmod +x /wait
CMD /wait && mywebserver db "10.0.2.2"
Database Dockerfile
FROM postgres
COPY init.sql /docker-entrypoint-initdb.d/
The server is started using mybinary mydbalias. Like I said earlier, I pass the alias to start the server. While doing this, I get the following error:
message: "could not translate host name \"mydbalias\" to address: Temporary failure in name resolution\n"
What could be the problem?
UPDATE
After 4 days of a grueling raid, I finally found the rat: it was the busybox container. I changed it to ubuntu:16.04 and it's a breeze. Feeling so good about this whole conundrum. Thanks to everyone who helped.
Simplify. There is no need for your explicit network declaration (a default network is created automatically by docker-compose), nor for the aliases (services get their hostnames from their service names).
docker-compose.yml
version: '3'
services:
  db:
    build: database
    image: postgres
  web:
    image: mywebserver:latest
    ports:
      - "8000:8000"
    depends_on:
      - db
    environment:
      WAIT_HOSTS: db:5432
Then just use db as the hostname to connect to the database from web.
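For example (a sketch only, assuming the binary takes the database host as its first argument, as in the original CMD), the command could simply pass the service name db:
services:
  web:
    image: mywebserver:latest
    depends_on:
      - db
    # override the image CMD so the plain service name is used as the DB host
    command: /bin/sh -c "/wait && mywebserver db"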
