So here is a rough template of my docker-compose file:
services:
  proxy_main:
    image: jwilder/nginx-proxy:1.0.4-alpine
    .....
    profiles: ["proxy"]
    networks:
      - proxy_network
      - project_1
      - project_2
  proxy_encrypt:
    image: nginxproxy/acme-companion:2.2
    .....
    profiles: ["proxy"]
    networks:
      - proxy_network
  project1_web:
    .....
    profiles: ["p1"]
    networks:
      - project_1
  project1_db:
    .....
    profiles: ["p1"]
    networks:
      - project_1
  project2_web:
    .....
    profiles: ["p2"]
    networks:
      - project_2
networks:
  proxy_network:
  project_1:
  project_2:
I frequently use profiles to spin up and stop certain containers, but because all of my containers need to go through a reverse proxy, the nginx proxy is attached to all of the networks, so that each project can get the right certificates.
When I stop a container, however, I don't want to stop the proxy container, because that would affect the containers I want to keep running.
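For context, bringing the proxy and one project up looks something like this (using the profile names from the template above):
docker compose --profile proxy --profile p1 up -d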
So when I do a command like this:
docker compose --profile p2 down
I get the following message:
[+] Running 1/1
⠿ Container docker-project2_web-1 Removed 0.6s
⠿ Network docker_p2_network Error 0.0s
failed to remove network docker_p2_network: Error response from daemon: error while removing network: network docker_p2_network id cd9dc8e29b3af6e1f6ade9bfbdfc2ffceb0fd5af365cf82c27a96b02eb252035 has active endpoints
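Checking which containers are still attached to the network should confirm it (I would expect only the proxy here):
docker network inspect docker_p2_network --format '{{range .Containers}}{{.Name}} {{end}}'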
Now I think this is because the proxy is the container keeping those endpoints active, so I get that - but error messages annoy me, so my question is:
Can I set up my proxy container differently so it is not linked to all the networks?
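For reference, a plain stop avoids the network removal altogether, at the cost of leaving the stopped containers around until an explicit rm:
docker compose --profile p2 stop
docker compose --profile p2 rm -f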
Related
I have 2 different directories, each containing docker containers for different purposes, both spun up with docker compose.
Dir A has the Traefik config and container (and other containers) as well as environment variables, whereas Dir B is a bunch of containers.
I now want to include Traefik labels in Dir B containers, but when I run compose in Dir B, I'm facing:
WARN[0000] The "DOMAIN_NAME" variable is not set. Defaulting to a blank string.
service "[service name]" refers to undefined network traefik_proxy: invalid compose project
I'm guessing this is because services in Dir B can't see traefik_proxy since it's part of a different stack, and the same goes for the DOMAIN_NAME variable.
How can I have Dir B 'reach across' to Dir A? Is it even possible with my current config?
If you want to have multiple compose projects share a single Traefik frontend, that's certainly possible, but you need to place Traefik on a shared network. For this model, I would suggest starting with a docker-compose.yaml that only deploys Traefik, e.g.:
version: "3"
services:
traefik:
image: docker.io/traefik:latest
command:
- --api.insecure=true
- --providers.docker
- --accesslog=true
- --accesslog.filepath=/dev/stderr
- --providers.docker.exposedByDefault=false
ports:
- "80:80"
- "443:443"
- "127.0.0.2:8080:8080"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
networks:
services:
external: true
Start by creating the shared network:
docker network create services
And then start the Traefik project:
pushd traefik; docker-compose up -d; popd
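At this point you should be able to sanity-check Traefik by querying the insecure API that the flags above expose on 127.0.0.2:8080:
curl http://127.0.0.2:8080/api/rawdata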
Now for every project you want to make available via Traefik, put your services on the services network. For example, let's say we have this in app1/docker-compose.yaml:
version: "3"
services:
app1:
image: docker.io/containous/whoami
networks:
- services
labels:
- "traefik.enable=true"
- "traefik.http.routers.app1.rule=PathPrefix(`/app1`)"
networks:
services:
external: true
Then I can run:
pushd app1; docker-compose up -d; popd
And now my app1 service is available at http://localhost/app1/.
We can add as many services as we want like this; the only requirement is that the containers are attached to the services network.
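For example, a hypothetical app2 project would be identical apart from the router name and rule:
version: "3"
services:
  app2:
    image: docker.io/containous/whoami
    networks:
      - services
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app2.rule=PathPrefix(`/app2`)"
networks:
  services:
    external: true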
I have a docker compose file set up with 3 separate containers (Flask, Nginx and Solr).
After starting up, all 3 run successfully, but my Flask application can't connect to my Solr instance, and when I run:
wget -S http://localhost:8983/solr/CORE_NAME/select
I get the error "Connecting to localhost (localhost)|127.0.0.1|:8983... failed: Connection refused."
I am fairly new to docker and have been around a few different forums looking at this issue, but nothing has worked so far. I have also tried creating a network, but ran into the same issue.
Here is my docker-compose.yml:
version: "2.7"
services:
nginx:
build:
context: .
dockerfile: Dockerfile-nginx
container_name: nginx
ports:
- "80:80"
- "8181:8181"
volumes:
- ./:/opt/ee1
- ee1-logs-volume:/var/log/ee1
- ./:/usr/local/websites/ee1
- sockets-volume:/tmp
depends_on:
- flask
flask:
build:
context: .
dockerfile: Dockerfile-flask
entrypoint: ["/bin/bash", "./system/start-uwsgi-docker.bash"]
container_name: flask
user: root
restart: always
volumes:
- ./:/opt/ee1
- ./ee1config.ini:/opt/ee1config.ini
- ee1jobs-logs-volume:/var/log/ee1
- ./:/usr/local/websites/ee1
- sockets-volume:/tmp
links:
- solr
solr:
build:
context: .
dockerfile: Dockerfile-solr
container_name: solr
volumes:
- data:/var/solr
entrypoint:
- bash
- "-c"
- "precreate-core ee1_1; precreate-core ee1_2; exec solr -f"
ports:
- "8983:8983"
volumes:
sockets-volume: {}
ee1-logs-volume: {}
data:
Every docker container is - network-wise - a separate host with its own IP.
Traffic to localhost or 127.0.0.1 will definitely never leave that container.
So what you need to find out is the IP of the server container (solr) you actually want to talk to, and then configure the client container (flask) accordingly. This can be done with e.g. docker inspect. Be aware that the IPs can change upon container restart, so you will want to use something like DNS rather than raw IPs.
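For example, assuming the container is literally named solr (as container_name: solr sets above):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' solr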
Since you use docker compose, each container for a service joins the same network, and is both reachable by other containers on that network and discoverable by them at a hostname identical to the container name.
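So from the flask container, addressing Solr by name should work where localhost does not (CORE_NAME left as your placeholder):
wget -S http://solr:8983/solr/CORE_NAME/select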
For more details check out
https://docs.docker.com/compose/networking/
https://docs.docker.com/network/
I am starting to use portainer.io to manage my docker images, instead of the Synology DSM Docker GUI.
Background information:
I've used MacVLAN to give my Pi-hole container its own IP address; overall, everything regarding this Pi-hole is running fine with these settings, made via the DSM GUI.
(screenshot: the container's environment, network, volumes and ports settings in the DSM GUI)
Problem:
I now would like to use portainer.io to manage my Docker installation, including the Stack option, which should be docker compose.
I am now struggling to get my PiHole image up with this compose file:
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    networks: docker
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp"
      - "80:80/tcp"
    environment:
      TZ: 'Europe/Berlin'
      WEBPASSWORD: 'password'
      ServerIP: "0.0.0.0"
    # Volumes store your data between container upgrades
    volumes:
      - '/pihole/pihole/:/etc/pihole/'
      - '/pihole/dnsmasq/:/etc/dnsmasq.d/'
    # Recommended but not required (DHCP needs NET_ADMIN)
    # https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
    cap_add:
      - NET_ADMIN
    restart: unless-stopped
Does anyone have an idea why I get "Unable to deploy stack" as an error message?
You are telling the service to use a network called "docker", but the network is not defined in the compose file. Is this the complete docker-compose file?
If yes, then you are missing the networks section:
networks:
  docker:
    external: true
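Note as well that networks: under a service expects a list (or a mapping) rather than a bare string, so the service side should then look something like this:
services:
  pihole:
    # ... rest of the service as above
    networks:
      - docker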
I have a Docker Swarm web application that can deploy fine and can be reached from outside by a browser.
But it was not showing the client IP address to the HTTP service.
So I decided to add a Traefik service to the Docker Compose file to expose the client IP to the HTTP service.
I use mode: host and driver: overlay for this reason.
The complete configuration is described in two Docker Compose files that I run in sequence.
First I run the docker stack deploy --compose-file docker-compose-dev.yml common command on the file:
version: "3.9"
services:
traefik:
image: traefik:v2.5
networks:
common:
ports:
- target: 80
published: 80
mode: host
- target: 443
published: 443
mode: host
command:
- "--providers.docker.endpoint=unix:///var/run/docker.sock"
- "--providers.docker.swarmMode=true"
- "--providers.docker.exposedbydefault=false"
- "--providers.docker.network=common"
- "--entrypoints.web.address=:80"
# Set a debug level custom log file
- "--log.level=DEBUG"
- "--log.filePath=/var/log/traefik.log"
- "--accessLog.filePath=/var/log/access.log"
# Enable the Traefik dashboard
- "--api.dashboard=true"
deploy:
placement:
constraints:
- node.role == manager
labels:
# Expose the Traefik dashboard
- "traefik.enable=true"
- "traefik.http.routers.dashboard.service=api#internal"
- "traefik.http.services.traefik.loadbalancer.server.port=888" # A port number required by Docker Swarm but not being used in fact
- "traefik.http.routers.dashboard.rule=Host(`traefik.learnintouch.com`)"
- "traefik.http.routers.traefik.entrypoints=web"
# Basic HTTP authentication to secure the dashboard access
- "traefik.http.routers.traefik.middlewares=traefik-auth"
- "traefik.http.middlewares.traefik-auth.basicauth.users=stephane:$$apr1$$m72sBfSg$$7.NRvy75AZXAMtH3C2YTz/"
volumes:
# So that Traefik can listen to the Docker events
- "/var/run/docker.sock:/var/run/docker.sock:ro"
- "~/dev/docker/projects/common/volumes/logs/traefik.service.log:/var/log/traefik.log"
- "~/dev/docker/projects/common/volumes/logs/traefik.access.log:/var/log/access.log"
networks:
common:
name: common
driver: overlay
Then I run the docker stack deploy --compose-file docker-compose.yml www_learnintouch command on the file:
version: "3.9"
services:
www:
image: localhost:5000/www.learnintouch
networks:
common:
volumes:
- "~/dev/docker/projects/learnintouch/volumes/www.learnintouch/account/data:/usr/local/learnintouch/www/learnintouch.com/account/data"
- "~/dev/docker/projects/learnintouch/volumes/www.learnintouch/account/backup:/usr/local/learnintouch/www/learnintouch.com/account/backup"
- "~/dev/docker/projects/learnintouch/volumes/engine:/usr/local/learnintouch/engine"
- "~/dev/docker/projects/common/volumes/letsencrypt/certbot/conf/live/thalasoft.com:/usr/local/learnintouch/letsencrypt"
- "~/dev/docker/projects/common/volumes/logs:/usr/local/apache/logs"
- "~/dev/docker/projects/common/volumes/logs:/usr/local/learnintouch/logs"
user: "${CURRENT_UID}:${CURRENT_GID}"
deploy:
replicas: 1
restart_policy:
condition: any
delay: 5s
max_attempts: 3
window: 10s
labels:
- "traefik.enable=true"
- "traefik.http.routers.www.rule=Host(`dev.learnintouch.com`)"
- "traefik.http.routers.www.entrypoints=web"
- "traefik.http.services.www.loadbalancer.server.port=80"
healthcheck:
test: curl --fail http://127.0.0.1:80/engine/ping.php || exit 1
interval: 10s
timeout: 3s
retries: 3
networks:
common:
external: true
name: common
Here are the networks:
stephane@stephane-pc:~$ docker network ls
NETWORK ID     NAME                      DRIVER    SCOPE
6beaf0c3a518   bridge                    bridge    local
ouffqdmdesuy   common                    overlay   swarm
17et43c5tuf0   docker-registry_default   overlay   swarm
1ae825c8c821   docker_gwbridge           bridge    local
7e6b4b7733ca   host                      host      local
2ui8s1yomngt   ingress                   overlay   swarm
460aad21ada9   none                      null      local
tc846a14ftz5   verdaccio                 overlay   swarm
The docker ps command shows that all containers are healthy.
But a request to http://dev.learnintouch.com/ responds with a Bad Gateway error MOST of the time; only rarely does it not, and then the web application displays fine.
As a side note, I would like any unhealthy service to be restarted and seen by Traefik: just like Docker Swarm restarts unhealthy services, I would like Traefik to restart unhealthy services too.
The service log:
{"level":"debug","msg":"Configuration received from provider docker: {\"http\":{\"routers\":{\"dashboard\":{\"service\":\"api#internal\",\"rule\":\"Host(`traefik.learnintouch.com`)\"},\"nodejs\":{\"entryPoints\":[\"web\"],\"service\":\"nodejs\",\"rule\":\"Host(`dev.learnintouch.com`)\"},\"traefik\":{\"entryPoints\":[\"web\"],\"middlewares\":[\"traefik-auth\"],\"service\":\"traefik\",\"rule\":\"Host(`common-reverse-proxy`)\"},\"www\":{\"entryPoints\":[\"web\"],\"service\":\"www\",\"rule\":\"Host(`dev.learnintouch.com`)\"}},\"services\":{\"nodejs\":{\"loadBalancer\":{\"servers\":[{\"url\":\"http://10.0.14.17:9001\"}],\"passHostHeader\":true}},\"traefik\":{\"loadBalancer\":{\"servers\":[{\"url\":\"http://10.0.14.8:888\"}],\"passHostHeader\":true}},\"www\":{\"loadBalancer\":{\"servers\":[{\"url\":\"http://10.0.14.18:80\"}],\"passHostHeader\":true}}},\"middlewares\":{\"traefik-auth\":{\"basicAuth\":{\"users\":[\"stephane:$apr1$m72sBfSg$7.NRvy75AZXAMtH3C2YTz/\"]}}}},\"tcp\":{},\"udp\":{}}","providerName":"docker","time":"2021-07-04T10:25:01Z"}
{"level":"info","msg":"Skipping same configuration","providerName":"docker","time":"2021-07-04T10:25:01Z"}
I also tried to have Docker Swarm do the load balancing by adding the "traefik.docker.lbswarm=true" label to my service, but the Bad Gateway error remained.
I also restarted the Swarm manager:
docker swarm leave --force
docker swarm init
but the Bad Gateway error remained.
I also added the two labels:
- "traefik.backend.loadbalancer.sticky=true"
- "traefik.backend.loadbalancer.stickiness=true"
but the Bad Gateway error remained.
It feels like Traefik hits the web service before that one has a chance to be ready. Would there be any way to tell Traefik to wait a given number of seconds before hitting the web service?
UPDATE: I found, not a solution, but a workaround for the issue, by splitting the first common stack file above into two files, with one dedicated to the traefik stack. I could then start the 3 stacks in the following order: common, www_learnintouch and traefik.
The important thing was to start the traefik stack after the others. If I then had to remove and start the www_learnintouch stack again, for example, I had to follow this by removing and starting the traefik stack again.
Also, if I removed the container of www_learnintouch with a docker rm -f CONTAINER_ID command, then I also needed to remove and start the traefik stack again.
I have a Redis - Elasticsearch - Logstash - Kibana stack in docker which I am orchestrating using docker compose.
Redis will receive the logs from a remote location and forward them to Logstash, and then on to the customary Elasticsearch and Kibana.
In the docker-compose.yml, I am confused about the order of "links".
Elasticsearch links to no one, while logstash links to both redis and elasticsearch:
elasticsearch:
redis:
logstash:
  links:
    - elasticsearch
    - redis
kibana:
  links:
    - elasticsearch
Is this order correct? What is the rationale behind choosing the "link" direction?
Why don't we say elasticsearch is linked to logstash?
Instead of using the legacy container linking method, you could use Docker user-defined networks. Basically, you define a network for your services and then indicate in the docker-compose file that you want the containers to run on that network. If your containers all run on the same network, they can access each other via their container names (DNS records are added automatically).
1) Create a user-defined network
docker network create pocnet
2) Update the docker-compose file
You want to add your containers to the network you just created. Your docker-compose file would look something along the lines of this:
version: '2'
services:
  elasticsearch:
    image: elasticsearch
    container_name: elasticsearch
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  redis:
    image: redis
    container_name: redis
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  logstash:
    image: logstash
    container_name: logstash
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  kibana:
    image: kibana
    container_name: kibana
    ports:
      - "5601:5601"
    networks:
      - pocnet
networks:
  pocnet:
    external: true
3) Start the services
docker-compose up
Note: you might want to open a new shell window to run step 4.
4) Test
Go into the Kibana container and see if you can ping the elasticsearch container.
your__Machine:/ docker exec -it kibana bash
kibana@123456:/# ping elasticsearch
First of all, links in Docker are unidirectional.
More info on links: there are legacy links, and links in user-defined networks.
The legacy link provided 4 major functionalities to the default bridge network:
- name resolution
- name alias for the linked container using --link=CONTAINER-NAME:ALIAS
- secured container connectivity (in isolation via --icc=false)
- environment variable injection
Comparing the above 4 functionalities with non-default user-defined networks, without any additional config a docker network provides:
- automatic name resolution using DNS
- an automatically secured, isolated environment for the containers in a network
- the ability to dynamically attach and detach to multiple networks
- support for the --link option to provide a name alias for the linked container
In your case, automatic DNS on a user-defined network will help you. First create a new network:
docker network create ELK -d bridge
With this approach you don't need to link containers on the same user-defined network; you just have to put your ELK stack + redis containers in the ELK network and remove the link directives from the compose file.
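For example, the logstash service then needs no links block at all - a minimal sketch, assuming the ELK network created above:
version: '2'
services:
  logstash:
    image: logstash
    networks:
      - ELK
networks:
  ELK:
    external: true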
Your order looks fine to me. If you have any problem regarding the order, or waiting for services to come up in dependent containers, you can use something like the following:
version: "2"
services:
web:
build: .
ports:
- "80:8000"
depends_on:
- "db"
entrypoint: ./wait-for-it.sh db:5432
db:
image: postgres
This will make the web container wait until it can connect to the db.
You can get the wait-for-it script from here.
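One caveat: overriding entrypoint replaces what the image would normally run, so wait-for-it is usually chained to the real command after --, something like this (the command here is a hypothetical placeholder):
entrypoint: ./wait-for-it.sh db:5432 -- python app.py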