Assume my current public IP is 101.15.14.71. I have a domain called example.com, which I configured using Cloudflare, and I created multiple DNS entries pointing to my public IP.
E.g.:
1) new1.example.com - 101.15.14.71
2) new2.example.com - 101.15.14.71
3) new3.example.com - 101.15.14.71
Now, here's my example project structure:
├── myapp
│ ├── app
│ │ └── main.py
│ ├── docker-compose.yml
│ └── Dockerfile
├── myapp1
│ ├── app
│ │ └── main.py
│ ├── docker-compose.yml
│ └── Dockerfile
└── traefik
├── acme.json
├── docker-compose.yml
├── traefik_dynamic.toml
└── traefik.toml
Here I have two FastAPI apps (i.e., myapp and myapp1).
Here's the example code I have in main.py in both myapp and myapp1. It's exactly the same in both; only the return statement differs:
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_main():
    return {"message": "Hello world for my project myapp"}
Here's my Dockerfile for myapp and myapp1. Both are exactly the same, except that I deploy myapp on port 7777 and myapp1 on port 7778, in different containers.
FROM ubuntu:latest
ARG DEBIAN_FRONTEND=noninteractive
RUN apt update && apt upgrade -y
RUN apt install -y -q build-essential python3-pip python3-dev
# python dependencies
RUN pip3 install -U pip setuptools wheel
RUN pip3 install gunicorn fastapi uvloop httptools "uvicorn[standard]"
# copy required files
RUN bash -c 'mkdir -p /app'
COPY ./app /app
ENTRYPOINT /usr/local/bin/gunicorn \
    -b 0.0.0.0:7777 \
    -w 1 \
    -k uvicorn.workers.UvicornWorker app.main:app \
    --chdir /app
# (the only change in the myapp1 Dockerfile: -b 0.0.0.0:7778)
Here's my docker-compose.yml file for myapp and myapp1. Here too they are exactly the same; only the service name, ports, and labels change:
services:
  myapp: # "myapp1:" in the myapp1 docker-compose file
    build: .
    restart: always
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=traefik_public"
      - "traefik.backend=myapp" # "myapp1" in the myapp1 docker-compose file
      - "traefik.frontend.rule=Host:new2.example.com" # "new3.example.com" in the myapp1 docker-compose file
      - "traefik.port=7777" # "7778" in the myapp1 docker-compose file
    networks:
      - traefik_public

networks:
  traefik_public:
    external: true
Now, coming to the traefik folder:
acme.json: I created it with the nano acme.json command with nothing in it,
but did chmod 600 acme.json for proper permissions.
traefik_dynamic.toml
[http]
  [http.routers]
    [http.routers.route0]
      entryPoints = ["web"]
      middlewares = ["my-basic-auth"]
      service = "api@internal"
      rule = "Host(`new1.example.com`)"
      [http.routers.route0.tls]
        certResolver = "myresolver"
  [http.middlewares.test-auth.basicAuth]
    users = [
      ["admin:your_encrypted_password"]
    ]
traefik.toml
[entryPoints]
  [entryPoints.web]
    address = ":80"
    [entryPoints.web.http]
      [entryPoints.web.http.redirections]
        [entryPoints.web.http.redirections.entryPoint]
          to = "websecure"
          scheme = "https"
  [entryPoints.websecure]
    address = ":443"

[api]
  dashboard = true

[certificatesResolvers.myresolver.acme]
  email = "reallygoodtraefik@gmail.com"
  storage = "acme.json"
  [certificatesResolvers.myresolver.acme.httpChallenge]
    entryPoint = "web"

[providers]
  [providers.docker]
    watch = true
    network = "web"
  [providers.file]
    filename = "traefik_dynamic.toml"
docker-compose.yml
services:
  traefik:
    image: traefik:latest
    ports:
      - 80:80
      - 443:443
      - 8080:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik.toml:/traefik.toml
      - ./acme.json:/acme.json
      - ./traefik_dynamic.toml:/traefik_dynamic.toml
    networks:
      - web

networks:
  web:
These are the details of my files. What I am trying to achieve here is:
I want to set up Traefik and the Traefik dashboard with basic authentication, and deploy my two FastAPI services:
myapp on 7777: I need to access this app via new2.example.com
myapp1 on 7778: I need to access this app via new3.example.com
the traefik dashboard: I need to access this via new1.example.com
All of these should be served over HTTPS and have certificate auto-renewal enabled.
I got all of this from online articles for the latest version of Traefik, but it is not working. I used docker-compose to build and deploy Traefik, and when I open the API dashboard it asks for a user and password (the basic auth I set up). I entered the user details I configured in traefik_dynamic.toml, but they are not accepted.
Where did I go wrong? Please help me correct the mistakes in my configuration. I am really interested in learning more about this.
Error Update:
traefik_1 | time="2021-06-16T01:51:16Z" level=error msg="Unable to obtain ACME certificate for domains \"new1.example.com\": unable to generate a certificate for the domains [new1.example.com]: error: one or more domains had a problem:\n[new1.example.com] acme: error: 403 :: urn:ietf:params:acme:error:unauthorized :: Invalid response from http://new1.example.com/.well-known/acme-challenge/mu85LkYEjlvnbDI-wM2xMaRFO1QsPDNjepTDb47dWF0 [2606:4700:3032::6815:55c4]: 404\n" rule="Host(`new1.example.com`)" routerName=api@docker providerName=myresolver.acme
traefik_1 | time="2021-06-16T01:51:19Z" level=error msg="Unable to obtain ACME certificate for domains \"new2.example.com\": unable to generate a certificate for the domains [new2.example.com]: error: one or more domains had a problem:\n[new2.example.com] acme: error: 403 :: urn:ietf:params:acme:error:unauthorized :: Invalid response from http://new2.example.com/.well-known/acme-challenge/ykiCAEpJeQ1qgVdeFtSRo3q-ATTwgKdRdGHUs2kgIsY [2606:4700:3031::ac43:d1e9]: 404\n" providerName=myresolver.acme routerName=myapp1@docker rule="Host(`new2.example.com`)"
traefik_1 | time="2021-06-16T01:51:20Z" level=error msg="Unable to obtain ACME certificate for domains \"new3.example.com\": unable to generate a certificate for the domains [new3.example.com]: error: one or more domains had a problem:\n[new3.example.com] acme: error: 403 :: urn:ietf:params:acme:error:unauthorized :: Invalid response from http://new3.example.com/.well-known/acme-challenge/BUZWuWdNd2XAXwXCwkeqe5-PHb8cGV8V6UtzeLaKryE [2606:4700:3031::ac43:d1e9]: 404\n" providerName=myresolver.acme routerName=myapp@docker rule="Host(`new3.example.com`)"
You only need one docker-compose file for all the services; there is no need to define one for each container.
The project structure you should be using should be something like:
├── docker-compose.yml
├── myapp
│ ├── .dockerignore
│ ├── Dockerfile
│ └── app
│ └── main.py
├── myapp1
│ ├── .dockerignore
│ ├── Dockerfile
│ └── app
│ └── main.py
└── traefik
├── acme.json
└── traefik.yml
When creating containers, unless they are to be used for development purposes, it is recommended not to use a full-blown image like ubuntu. Specifically for your purposes, I would recommend a Python image, such as python:3.7-slim.
I'm not sure whether you are using this for development or production, but you could also use volumes to mount the app directories inside the containers (especially useful for development), and use only one Dockerfile for both myapp and myapp1, customizing it via environment variables.
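As a sketch of that approach (assuming the Dockerfile shown further down, which reads the bind port from a PORT environment variable; the bind-mount paths are illustrative and assume the COPY . /app layout), each compose service could be parameterized like this:

```yaml
services:
  myapp:
    build: ./myapp
    environment:
      - PORT=7777              # gunicorn binds to this port inside the container
    volumes:
      - ./myapp/app:/app/app   # mount the source for live editing during development
  myapp1:
    build: ./myapp1
    environment:
      - PORT=7778
    volumes:
      - ./myapp1/app:/app/app
```

With this, the two Dockerfiles can be byte-for-byte identical, and the port difference lives only in the compose file.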
Since you are already using traefik's dynamic configuration, I will do most of the setup for the container configuration via docker labels in the docker-compose.yml file.
Your Dockerfiles for myapp and myapp1 will be very similar at this point, but I've kept them separate, since you may need to change them independently depending on the requirements of your apps in the future. I've used an environment variable for the port, which allows you to change the port from your docker-compose.yml file.
You can use the following Dockerfile (./myapp/Dockerfile and ./myapp1/Dockerfile):
FROM python:3.7-slim
ARG DEBIAN_FRONTEND=noninteractive
ENV PYTHONUNBUFFERED=1
RUN pip3 install -U pip setuptools wheel && \
    pip3 install gunicorn fastapi uvloop httptools "uvicorn[standard]"
COPY . /app
# set PORT to 7778 in the myapp1 Dockerfile
ENV PORT=7777
ENTRYPOINT /usr/local/bin/gunicorn -b 0.0.0.0:$PORT -w 1 -k uvicorn.workers.UvicornWorker app.main:app --chdir /app
Note: you should really be using something like poetry or a requirements.txt file for your app dependencies.
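For instance (a sketch only; installing from a requirements.txt next to the Dockerfile, with the file's contents left up to you), the install step could become:

```dockerfile
FROM python:3.7-slim
ENV PYTHONUNBUFFERED=1
# requirements.txt would pin gunicorn, fastapi, uvloop, httptools, uvicorn[standard]
COPY requirements.txt /tmp/requirements.txt
RUN pip3 install -U pip setuptools wheel && \
    pip3 install -r /tmp/requirements.txt
COPY . /app
```

This keeps your dependency versions reproducible across rebuilds.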
The .dockerignore file (./myapp/.dockerignore and ./myapp1/.dockerignore) should contain:
Dockerfile
This is because the whole directory is copied inside the container, and you don't need the Dockerfile to be in there.
Your main traefik config (./traefik/traefik.yml) can be something like:
providers:
  docker:
    exposedByDefault: false

global:
  checkNewVersion: false
  sendAnonymousUsage: false

api: {}

accessLog: {}

entryPoints:
  web:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: "websecure"
          scheme: "https"
  websecure:
    address: ":443"

ping:
  entryPoint: "websecure"

certificatesResolvers:
  myresolver:
    acme:
      caServer: "https://acme-staging-v02.api.letsencrypt.org/directory"
      email: "example@example.com"
      storage: "/etc/traefik/acme.json"
      httpChallenge:
        entryPoint: "web"
Note: The above acme config uses the Let's Encrypt staging server. Make sure all the details are correct, and remove caServer after you've verified that everything works, in order to switch to the Let's Encrypt production server.
Your ./docker-compose.yml file should be something like:
version: "3.9"
services:
  myapp:
    build:
      context: ./myapp
      dockerfile: ./Dockerfile
    image: myapp
    depends_on:
      - traefik
    expose:
      - 7777
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.myapp.tls=true"
      - "traefik.http.routers.myapp.tls.certResolver=myresolver"
      - "traefik.http.routers.myapp.entrypoints=websecure"
      - "traefik.http.routers.myapp.rule=Host(`new2.example.com`)"
      - "traefik.http.services.myapp.loadbalancer.server.port=7777"
  myapp1:
    build:
      context: ./myapp1
      dockerfile: ./Dockerfile
    image: myapp1
    depends_on:
      - traefik
    expose:
      - 7778
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.myapp1.tls=true"
      - "traefik.http.routers.myapp1.tls.certResolver=myresolver"
      - "traefik.http.routers.myapp1.entrypoints=websecure"
      - "traefik.http.routers.myapp1.rule=Host(`new3.example.com`)"
      - "traefik.http.services.myapp1.loadbalancer.server.port=7778"
  traefik:
    image: traefik:v2.4
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik/traefik.yml:/etc/traefik/traefik.yml
      - ./traefik/acme.json:/etc/traefik/acme.json
    ports:
      - 80:80
      - 443:443
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.api.tls=true"
      - "traefik.http.routers.api.tls.certResolver=myresolver"
      - "traefik.http.routers.api.entrypoints=websecure"
      - "traefik.http.routers.api.rule=Host(`new1.example.com`)"
      - "traefik.http.routers.api.service=api@internal"
      - "traefik.http.routers.api.middlewares=myAuth"
      - "traefik.http.middlewares.myAuth.basicAuth.users=admin:$$apr1$$4zjvsq3w$$fLCqJddLvrIZA.CCoGE2E." # generate with htpasswd; replace $ with $$
You can generate the password by using the command:
htpasswd -n admin | sed 's/\$/\$\$/g'
Note: If you need a literal dollar sign in the docker-compose file, you need to use $$, as documented here.
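To see what that escaping does, here is a small demonstration on a made-up hash string (a real hash comes from htpasswd -n admin): the sed step simply doubles every dollar sign so Compose does not treat the hash as variable interpolation.

```shell
# hypothetical htpasswd-style hash, for illustration only
hash='admin:$apr1$abcdefgh$123456789'
escaped=$(printf '%s' "$hash" | sed 's/\$/$$/g')
printf '%s\n' "$escaped"   # prints admin:$$apr1$$abcdefgh$$123456789
```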
Issuing docker-compose up in the directory should bring all the services up and working as expected.
The above should work based on the details you have provided, but it can be further improved at multiple points, depending on your needs.
Moreover, keeping the credentials for the traefik dashboard in the docker-compose.yml file is probably not ideal; you may want to use docker secrets for them. You can also add healthchecks, and consider placing myapp and myapp1 in a separate internal network.
If you want to get further into it, I suggest you start with Get started with Docker Compose, and also read the Dockerfile reference and the Compose file version 3 reference.
System Information
docker compose version
Docker Compose version v2.5.0
Design
I am using a Makefile that combines multiple compose files via the -f flag of docker compose, together with the config command, to build a single docker-compose.yml. The Makefile can build the compose file, run it, and also bring it down and remove the file.
Structure
├── conf
│   └── grafana
│       ├── config
│       │   └── grafana.ini
│       └── .env
├── docker-compose.base.yml
└── services
    └── docker-compose.grafana.yml
The conf directory holds the .env file where I pass the necessary admin credentials for Grafana:
conf/grafana/.env
## Grafana Admin Credentials
GF_SECURITY_ADMIN_USER=admin
GF_SECURITY_ADMIN_PASSWORD=supersecretpass
docker-compose.base.yml
networks:
  internal:

services:
  grafana:
    env_file:
      - ./conf/grafana/.env
    volumes:
      - grafana:/var/lib/grafana
      - ./conf/grafana/config:/usr/local/etc/grafana
services/docker-compose.grafana.yml
services:
  grafana:
    image: grafana/grafana:8.4.5
    container_name: my-grafana
    environment:
      - GF_SERVER_ROOT_URL=/grafana
      - GF_SERVER_SERVE_FROM_SUB_PATH=true
      - GF_PATHS_CONFIG=/usr/local/etc/grafana/grafana.ini
    logging:
      options:
        max-size: "1m"
    networks:
      - internal
    ports:
      - "3000:3000"
    security_opt:
      - "no-new-privileges:true"
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
Makefile
.PHONY: help build compose clean down run
.SILENT: help

SERVICES_DIR=services
COMPOSE_FILES:=-f docker-compose.base.yml

# STEP 1: add the service name here
define OPTIONS
- grafana -
endef
export OPTIONS

ARGS=$(wordlist 2, $(words $(MAKECMDGOALS)),$(MAKECMDGOALS))
$(eval $(ARGS):;@:)

ifeq (grafana, $(filter grafana,$(ARGS)))
COMPOSE_FILES:=$(COMPOSE_FILES) -f $(SERVICES_DIR)/docker-compose.grafana.yml
endif

SERVICES:=$(filter-out ${OPTIONS},$(ARGS))
.PHONY: $(OPTIONS)

help:
	echo "refer to the README.md file"

build:
	make compose grafana

compose:
	docker compose $(COMPOSE_FILES) config > docker-compose.yml

clean: down
	rm -rf ./docker-compose.yml

down:
	docker compose down

run: build
	docker compose up -d $(SERVICES)
Problem Reproduction
I run the following:
make run
which builds the docker-compose.yml file in the root using the docker compose config command, combining the base file with -f services/docker-compose.grafana.yml.
Once the container is up and reachable on localhost:3000, I check by entering the password, and it works.
Now I change the password in conf/grafana/.env to supersecretpass2 and run make run again.
This rewrites the docker-compose.yml file with the newly updated environment variables for the grafana service and re-runs docker compose up, which should pick up the new configuration, i.e., the new password.
Problem
Even though docker-compose.yml is updated and the CLI states that the container is recreated and restarted, the Grafana UI does not pick up the changed environment variable when I enter the new password.
Inspection
Upon running
docker inspect my-grafana
I can clearly see:
"StdinOnce": false,
"Env": [
"GF_SECURITY_ADMIN_PASSWORD=supersecretpass2",
"GF_SERVER_SERVE_FROM_SUB_PATH=true",
"GF_SECURITY_ADMIN_USER=admin",
"GF_PATHS_CONFIG=/usr/local/etc/grafana/grafana.ini",
"GF_SERVER_ROOT_URL=/grafana",
"PATH=/usr/share/grafana/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"GF_PATHS_DATA=/var/lib/grafana",
"GF_PATHS_HOME=/usr/share/grafana",
"GF_PATHS_LOGS=/var/log/grafana",
"GF_PATHS_PLUGINS=/var/lib/grafana/plugins",
"GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
],
By executing:
docker compose exec -it grafana /bin/bash env
I can see that the updated password env var is passed into the container; however, Grafana does not pick up the change.
The UI says the password is invalid, but accepts the original password.
Repo for Bug Reproduction
In order to reproduce this bug I have the following repo
I have a Golang project I am working on, with multiple micro-services in the same code repository. My directory structure is roughly as follows:
├── pkg
├── cmd
│ ├── servicea
│ └── serviceb
├── internal
│ ├── servicea
│ └── serviceb
├── Makefile
├── scripts
│ └── protogen.sh
├── vendor
│ └── ...
├── go.mod
├── go.sum
└── readme.md
The main.go files for the respective services are in cmd/servicex/main.go
I've put the individual Dockerfiles for the services in cmd/servicex.
Roughly, this is what my Dockerfile looks like:
FROM golang:1.15.6
ARG version
COPY go.* <repo-path>
COPY pkg/ <repo-path>/pkg/
COPY internal/servicea internal/servicea
COPY vendor/ <repo-path>/vendor/
COPY cmd/servicea/ <repo-path>/cmd/servicea/
WORKDIR <repo-path>/cmd/servicea/
RUN GO111MODULE=on GOFLAGS=-mod=vendor CGO_ENABLED=0 GOOS=linux go build -v -ldflags "-X <repo-path>/cmd/servicea/main.version=$version" -a -installsuffix cgo -o servicea .
FROM alpine:3.12
RUN apk --no-cache add ca-certificates
WORKDIR /servicea/
COPY --from=0 <repo-path>/cmd/servicea .
EXPOSE 50051
ENTRYPOINT ["/servicea/servicea"]
I am using Scylla as my DB for this service and gRPC is the protocol for communication.
This is my docker-compose.yml for this service.
version: '3'
services:
  db:
    container_name: servicedb
    image: scylladb/scylla
    hostname: db
    environment:
      GET_HOST_FROM: dns
      SCYLLA_USER: <user>
      SCYLLA_PASS: <password>
    ports:
      - 9042:9042
    networks:
      - serviceanet
  servicea:
    container_name: servicea
    image: servicea-production:latest
    hostname: servicea
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      GET_HOSTS_FROM: dns
    networks:
      - serviceanet
    volumes:
      - .:<repo-path>
    ports:
      - 50051:50051
    depends_on:
      - db
    links:
      - db
    labels:
      kompose.service.type: LoadBalancer
networks:
  serviceanet:
    driver: bridge
I am using kompose to generate the corresponding kubernetes yaml files.
However, when I run the compose setup locally or try to deploy it on minikube/GKE, my service instance is not able to connect to my DB, and I get an error like this:
failed to create scylla session, gocql: unable to create session: control: unable to connect to initial hosts: dial tcp 127.0.0.1:9042: connect: connection refused
Otherwise, if I run a local scylla docker instance with the following command:
docker run --name some-scylla -p 9042:9042 -d scylladb/scylla --broadcast-address 127.0.0.1 --listen-address 0.0.0.0 --broadcast-rpc-address 127.0.0.1
and then do a go run cmd/servicea/main.go my service seems to be running and the API endpoints are working(verified with Evans).
127.0.0.1 (localhost) refers to the host/container on which your service is running. In the case of multiple containers (whether using docker-compose or k8s), they have different IP addresses, and 127.0.0.1 resolves to a different host depending on where you are connecting from. In your gocql initialization, provide the DB address via a configuration/environment variable. docker-compose automatically configures a hostname for the db container, and with k8s you can use its service discovery mechanism.
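For example (a sketch; the DB_HOST variable name is made up for illustration), you could pass the compose service name of the database to the application container and read it when building the gocql cluster, e.g. gocql.NewCluster(os.Getenv("DB_HOST")):

```yaml
services:
  db:
    image: scylladb/scylla
  servicea:
    depends_on:
      - db
    environment:
      # "db" is the compose service name; Docker's embedded DNS resolves it to
      # the db container's address on the shared network
      DB_HOST: db
```

In Kubernetes, the same variable would instead carry the Service name created for the database.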
I'm using a VPS with Debian 10 on it. I also have one domain name.
My goal is to self-host a few services, like FreshRSS or Nextcloud. To deploy these services, I'm using Docker and Docker Compose. I have one folder per service.
Because I would like to have a reverse proxy and assign subdomains to services (for example: cloud.domainname.com to Nextcloud), I installed Traefik yesterday. However, I cannot get the service to work. This is probably bad configuration on my part, as I'm a total beginner at setting up reverse proxies. For example, I'm trying to get it to work with ArchiveBox, which runs on port 8000. I would like Traefik to map my subdomain archive.domainname.com to port 8000 of my VPS.
These are the steps I did yesterday:
Installed ArchiveBox on my VPS with Docker Compose and configured it. It's working successfully.
Created a new network for traefik: sudo docker network create --driver=bridge --subnet=192.168.0.0/16 traefik_lan
Installed Traefik with Docker Compose, added dynamic configuration by following tutorials.
Added the labels and the network to the Docker Compose file of ArchiveBox.
Started both. However, ArchiveBox creates a new network and does not seem to use the Traefik one. I can also still access ArchiveBox directly at domainname.com:8000.
Below are the config files.
ArchiveBox
ArchiveBox docker-compose.yml
# Usage:
# docker-compose up -d
# docker-compose run archivebox init
# echo "https://example.com" | docker-compose run archivebox archivebox add
# docker-compose run archivebox add --depth=1 https://example.com/some/feed.rss
# docker-compose run archivebox config --set PUBLIC_INDEX=True
# Documentation:
# https://github.com/ArchiveBox/ArchiveBox/wiki/Docker#docker-compose
version: '3.7'
services:
  archivebox:
    # build: .
    image: ${DOCKER_IMAGE:-archivebox/archivebox:latest}
    command: server 0.0.0.0:8000
    stdin_open: true
    tty: true
    ports:
      - 8000:8000
    environment:
      - USE_COLOR=True
      - SHOW_PROGRESS=False
      - SEARCH_BACKEND_ENGINE=sonic
      - SEARCH_BACKEND_HOST_NAME=sonic
      - SEARCH_BACKEND_PASSWORD=SecretPassword
    volumes:
      - ./data:/data
    depends_on:
      - sonic
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=traefik_lan"
      - "traefik.http.routers.archiveboxnotls.rule=Host(`${ARCHIVE_URL}`)"
      - "traefik.http.routers.archiveboxnotls.entrypoints=webinsecure"
      - "traefik.http.routers.archiveboxnotls.middlewares=tlsredir@file"
      - "traefik.http.routers.archivebox.rule=Host(`${ARCHIVE_URL}`)"
      - "traefik.http.routers.archivebox.entrypoints=websecure"
      - "traefik.http.routers.archivebox.tls=true"
      - "traefik.http.routers.archivebox.tls.certresolver=letsencrypt"
    networks:
      - traefik_lan

  # Run sonic search backend
  sonic:
    image: valeriansaliou/sonic:v1.3.0
    ports:
      - 1491:1491
    environment:
      - SEARCH_BACKEND_PASSWORD=SecretPassword
    volumes:
      - ./etc/sonic/config.cfg:/etc/sonic.cfg
      - ./data:/var/lib/sonic/store/

networks:
  traefik_lan:
    external: true
I then run it like so:
sudo ARCHIVE_URL=archive.mydomain.com docker-compose up -d
Traefik
This is the structure of my /traefik folder in /home.
.
├── config
│   ├── acme.json
│   ├── dynamic-conf
│   │   ├── dashboard.toml
│   │   ├── tlsredir.toml
│   │   └── tls.toml
│   └── traefik.toml
└── docker-compose.yml
docker-compose.yml
version: '3'
services:
  reverse-proxy:
    container_name: traefik
    image: traefik:v2.4
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    networks:
      - traefik_lan
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./config:/etc/traefik:ro
      - ./config/acme.json:/acme.json

networks:
  traefik_lan:
    external: true
traefik.toml
[api]
  dashboard = true

[providers]
  [providers.docker]
    exposedByDefault = false
  [providers.file]
    directory = "/etc/traefik/dynamic-conf"
    watch = true

[entryPoints]
  [entryPoints.websecure]
    address = ":443"
  [entryPoints.webinsecure]
    address = ":80"
  [entryPoints.dot]
    address = ":853"

[certificatesResolvers.letsencrypt.acme]
  email = "myemail@gmail.com"
  caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
  #caServer = "https://acme-v02.api.letsencrypt.org/directory"
  storage = "acme.json"
  [certificatesResolvers.letsencrypt.acme.tlsChallenge]

[accessLog]
  format = "json"
  [accessLog.fields]
    defaultMode = "drop"
    [accessLog.fields.names]
      "ClientAddr" = "keep"
      "RequestAddr" = "keep"
      "RequestMethod" = "keep"
      "RequestPath" = "keep"
      "DownstreamStatus" = "keep"
dashboard.toml
[http.routers.api]
  rule = "Host(`traefik.domain.tld`)"
  entrypoints = ["webinsecure"]
  service = "api@internal"
  middlewares = ["tlsredir@file"]

[http.routers.api-secure]
  rule = "Host(`traefik.domain.tld`)"
  entrypoints = ["websecure"]
  service = "api@internal"
  middlewares = ["secured"]
  [http.routers.api-secure.tls]
    certResolver = "letsencrypt"
tlsredir.toml
[http.middlewares]
  [http.middlewares.tlsredir.redirectScheme]
    scheme = "https"
    permanent = true
tls.toml
[tls]
[tls.options]
[tls.options.default]
minVersion = "VersionTLS12"
sniStrict = true
cipherSuites = [
"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305",
]
curvePreferences = ["CurveP521","CurveP384"]
Thank you in advance for your help.
So I'm learning Docker and how to containerize apps into services. I'm not sure I get it all the way yet.
I have three services:
web-server: an nginx server that binds ports 80 and 443 to the outside. Basically the "frontend" of the app.
app1: a NodeJS app that serves content on port 3000.
app2: a NodeJS app that serves content on port 3000; basically a clone of app1 that I use just for learning purposes.
Now my idea of running this "app" in prod would be to launch docker-compose up, which launches all the services. Is this the correct way to launch multi-container apps?
My repo structure is this:
.
├── Dockerfile
├── app1
│ ├── Dockerfile
│ ├── index.js
│ ├── package-lock.json
│ └── package.json
├── app2
│ ├── Dockerfile
│ ├── index.js
│ ├── package-lock.json
│ └── package.json
├── docker-compose.yml
├── index.html
└── web-server.conf
The root Dockerfile is for web-server service. My docker-compose.yml looks like this:
version: '3'
services:
  web-server:
    container_name: web-server
    build: .
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - app1
      - app2
  app1:
    container_name: app1
    build: ./app1
    restart: always
  app2:
    container_name: app2
    build: ./app2
    restart: always
The docker-compose up command builds the app1 and app2 images; however, it fails on the web-server image because nginx throws an error:
host not found in upstream "app1" in /etc/nginx/conf.d/default.conf:11
nginx: [emerg] host not found in upstream "app1" in /etc/nginx/conf.d/default.conf:11
web-server service Dockerfile
FROM nginx:alpine
EXPOSE 80
COPY ./web-server.conf /etc/nginx/conf.d/default.conf
COPY ./index.html /var/www/html/
RUN apk add bash
RUN nginx
Contents of web-server.conf:
server {
    listen *:80;
    server_name web_server;
    root /var/www/html;

    location / {
        index index.html;
    }

    location /app1 {
        proxy_pass http://app1:3000;
        proxy_redirect off;
    }

    location /app2 {
        proxy_pass http://app2:3000;
        proxy_redirect off;
    }
}
To me it looks like the nginx config won't recognize the http://app1 hostname at build time. I tried experimenting and replaced the values of the proxy_pass directives with localhost. This builds the image, which I can run along with the other images. If I do docker exec -it web-server-image /bin/bash and try curl http://app1:3000, it works. If I then edit the nginx config to point back to these URLs, it starts to work.
So I think I'm getting there, but it seems the host names aren't recognized when running docker-compose up.
Any ideas? Is my approach correct?
If you just delete the two RUN lines at the end of the web-server Dockerfile, this will work fine.
If you look at the nginx image's Dockerfile, it ends with
CMD ["nginx", "-g", "daemon off;"]
which is inherited by your image; you don't need to do anything special to make nginx start when the container launches.
Meanwhile, at the point the Dockerfile runs, the build sequence runs in a fairly isolated environment: none of the Compose network environment is set up, other containers aren't necessarily running, volumes aren't attached, etc. The only custom settings that take effect are things specifically in the build: block in the docker-compose.yml file.
So: when the Dockerfile runs RUN nginx, it tries to start nginx inside the build environment, but that environment is not attached to the Compose network, so it fails. You don't need to start nginx yourself at all, because the base image already has a CMD that runs it; just deleting those lines will fix the build.
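With those two RUN lines removed, the resulting web-server Dockerfile would simply be:

```dockerfile
FROM nginx:alpine
EXPOSE 80
COPY ./web-server.conf /etc/nginx/conf.d/default.conf
COPY ./index.html /var/www/html/
# no RUN nginx needed: the base image's CMD starts nginx in the foreground
```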
I am using docker-compose to create a multi-container environment with one mongodb instance and two python applications. I am having trouble: when I change my files locally, docker-compose up doesn't reflect the changes I made. What am I doing wrong?
My project structure:
.
├── docker-compose.yml
├── form
│ ├── app.py
│ ├── Dockerfile
│ ├── requirements.txt
│ ├── static
│ └── templates
│ ├── form_action.html
│ └── form_sumbit.html
├── notify
│ ├── app.py
│ ├── Dockerfile
│ ├── requirements.txt
└── README
The Dockerfiles are pretty similar for the two apps. One is given below:
FROM python:2.7
ADD . /notify
WORKDIR /notify
RUN pip install -r requirements.txt
Here is my docker-compose.yml file:
version: '3'
services:
  db:
    image: mongo:3.0.2
    container_name: mongo
    networks:
      db_net:
        ipv4_address: 172.16.1.1
  web:
    build: form
    command: python -u app.py
    ports:
      - "5000:5000"
    volumes:
      - form:/form
    environment:
      MONGODB_HOST: 172.16.1.1
    networks:
      db_net:
        ipv4_address: 172.16.1.2
  notification:
    build: notify
    command: python -u app.py
    volumes:
      - notify:/notify
    environment:
      MONGODB_HOST: 172.16.1.1
    networks:
      db_net:
        ipv4_address: 172.16.1.3
networks:
  db_net:
    external: true
volumes:
  form:
  notify:
Here is my output for docker volume ls:
local form
local healthcarereminder_form
local healthcarereminder_notify
local notify
[My understanding so far: you can see there are two instances each of form and notify, one with the project folder name appended. So docker might be looking for changes in a different place. I am not sure.]
If you're trying to mount a host directory in the docker-compose file, do not declare notify in the top-level volumes: section.
Instead, treat it like a local folder:
notification:
  build: notify
  command: python -u app.py
  volumes:
    # this points to a relative ./notify directory
    - ./notify:/notify
  environment:
    ....
volumes:
  form:
  # do not declare the volume here
  # notify:
When you declare a volume in the top-level volumes: node at the bottom of the docker-compose file, docker creates a special internal directory meant to be shared between docker containers. Here are more details: https://docs.docker.com/engine/tutorials/dockervolumes/#add-a-data-volume
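Putting it together for both of your app services (a sketch based on your file; the db service and the networks sections are unchanged and omitted here for brevity), the compose file would use relative bind mounts and drop the top-level volume declarations:

```yaml
services:
  web:
    build: form
    command: python -u app.py
    ports:
      - "5000:5000"
    volumes:
      - ./form:/form        # bind mount: local edits are visible in the container
    environment:
      MONGODB_HOST: 172.16.1.1
  notification:
    build: notify
    command: python -u app.py
    volumes:
      - ./notify:/notify
    environment:
      MONGODB_HOST: 172.16.1.1
# no top-level "volumes:" section is needed for bind mounts
```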