Traefik configuration always ends with Bad Gateway with The Lounge - Docker

I'm trying to run a Docker container on my personal server and make it accessible through Traefik (it works if I expose the port directly).
Here is the command I tried:
# This is not working, and always ends in Bad Gateway
sudo docker run --detach \
  --name thelounge \
  --volume ~/.thelounge:/var/opt/thelounge \
  --restart always \
  --label traefik.enable=true \
  --label 'traefik.http.routers.thelounge.rule=Host(`irc.example.fr`)' \
  --label 'traefik.http.routers.thelounge.priority=10' \
  --label 'traefik.http.routers.thelounge.entryPoints=websecure' \
  --label 'traefik.http.routers.thelounge.tls=true' \
  --label 'traefik.http.routers.thelounge.tls.certresolver=example' \
  thelounge/thelounge:latest
Notice: the example certResolver works for every other domain, and I also have this configuration for it:
[http.routers.Router-Example-To-Legacy]
  # won't listen to entry point web
  entryPoints = ["websecure"]
  # https://docs.traefik.io/routing/routers/#rule
  # rule = "Host(`localhost`)"
  rule = "HostRegexp(`example.fr`, `{subdomain:.*}.example.fr`)"
  service = "legacy-webserver-service"
  priority = 2
  [http.routers.Router-Example-To-Legacy.tls]
    certResolver = "example"
    [[http.routers.Router-Example-To-Legacy.tls.domains]]
      main = "example.fr"
      sans = ["*.example.fr"]
Problem: I get a Bad Gateway on curl https://irc.example.fr

This is basically about running a container directly inside Traefik's network. That last word matters.
I needed to add Traefik's network to my command. To find the network's name, I simply used docker network list.
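Concretely, the first change the original command needed was a --network flag pointing at Traefik's network. A sketch (the network name traefikexample_default is the one from my setup; substitute whatever docker network list reports for yours, and the remaining labels from the original command still apply):

```shell
# Find the name of the network Traefik is attached to
docker network list

# Same container as before, now attached to Traefik's network
sudo docker run --detach \
  --name thelounge \
  --network traefikexample_default \
  --volume ~/.thelounge:/var/opt/thelounge \
  --restart always \
  --label traefik.enable=true \
  --label 'traefik.http.routers.thelounge.rule=Host(`irc.example.fr`)' \
  thelounge/thelounge:latest
```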
I also changed my mind about using just the command line, and created a complete docker-compose file:
version: '3'
services:
  homeirc:
    image: thelounge/thelounge:latest
    volumes:
      - ./thelounge:/var/opt/thelounge
    restart: always
    networks:
      - traefik
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.homeirc.rule=Host(`irc.example.fr`)"
      - "traefik.http.routers.homeirc.priority=10"
      - "traefik.http.routers.homeirc.entryPoints=websecure"
      - "traefik.http.routers.homeirc.tls=true"
      - "traefik.http.routers.homeirc.tls.certresolver=example"
      - "traefik.http.services.homeirc.loadbalancer.server.port=9000"
      # Http redirect to https
      - "traefik.http.routers.homeirc-non-secure.rule=Host(`irc.example.fr`)"
      - "traefik.http.routers.homeirc-non-secure.priority=10"
      - "traefik.http.routers.homeirc-non-secure.entryPoints=web"
      - "traefik.http.routers.homeirc-non-secure.middlewares=home-irc-https"
      - "traefik.http.middlewares.home-irc-https.redirectscheme.scheme=https"
      - "traefik.http.middlewares.home-irc-https.redirectscheme.permanent=true"
networks:
  traefik:
    external:
      name: traefikexample_default

Related

How to set up alertmanager.service for running in docker container

I am running Prometheus in a docker container, and I want to configure an Alertmanager to send me an email when the service is down. I created the alert_rules.yml and the prometheus.yml, and I run everything with the following command, mounting both yml files into the docker container at the path /etc/prometheus:
docker run -d -p 9090:9090 --add-host host.docker.internal:host-gateway \
  -v "$PWD/prometheus.yml":/etc/prometheus/prometheus.yml \
  -v "$PWD/alert_rules.yml":/etc/prometheus/alert_rules.yml \
  prom/prometheus
Now, I also want prometheus to send me an email when an alert comes up, and that's where I encounter some problems. I configured my alertmanager.yml as follows:
route:
  group_by: ['alertname']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 1h
  receiver: email-me
receivers:
  - name: 'email-me'
    email_configs:
      - to: 'my_email@gmail.com'
        from: 'askonlinetraining@gmail.com'
        smarthost: smtp.gmail.com:587
        auth_username: 'my_email@gmail.com'
        auth_identity: 'my_email@gmail.com'
        auth_password: 'the_password'
I actually don't know if the smarthost parameter is configured correctly, since I can't find any documentation about it and don't know which values it should contain.
I also created an alertmanager.service file:
[Unit]
Description=AlertManager Server Service
Wants=network-online.target
After=network-online.target

[Service]
User=root
Group=root
Type=simple
ExecStart=/usr/local/bin/alertmanager \
  --config.file /etc/alertmanager.yml

[Install]
WantedBy=multi-user.target
I think something here is messed up: the first parameter I pass to ExecStart is a path that doesn't exist in the container, but I have no idea how I should replace it.
I tried mounting the last two files into the docker container in the same directory where I mount the first two yml files, using the following command:
docker run -d -p 9090:9090 --add-host host.docker.internal:host-gateway \
  -v "$PWD/prometheus.yml":/etc/prometheus/prometheus.yml \
  -v "$PWD/alert_rules.yml":/etc/prometheus/alert_rules.yml \
  -v "$PWD/alertmanager.yml":/etc/prometheus/alertmanager.yml \
  -v "$PWD/alertmanager.service":/etc/prometheus/alertmanager.service \
  prom/prometheus
But the mailing alert is not working, and I don't know how to fix the configuration to run all of this smoothly in a docker container. As I said, I suppose the main problem resides in the ExecStart command in alertmanager.service, but maybe I'm wrong. I can't find anything helpful online, so I would really appreciate some help.
The best practice with containers is to aim to run a single process per container.
In your case, this suggests one container for prom/prometheus and another for prom/alertmanager.
You can run these using docker as:
docker run \
  --detach \
  --name=prometheus \
  --volume=${PWD}/prometheus.yml:/etc/prometheus/prometheus.yml \
  --volume=${PWD}/rules.yml:/etc/alertmanager/rules.yml \
  --publish=9090:9090 \
  prom/prometheus:v2.26.0 \
  --config.file=/etc/prometheus/prometheus.yml
docker run \
  --detach \
  --name=alertmanager \
  --volume=${PWD}/alertmanager.yml:/etc/alertmanager/alertmanager.yml \
  --publish=9093:9093 \
  prom/alertmanager:v0.21.0
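As for the smarthost question above: in Alertmanager's email_configs it is simply the host:port of the SMTP relay, and for Gmail over STARTTLS that is port 587. A minimal sketch (the addresses and password are placeholders, and Gmail generally requires an app password rather than the account password):

```yaml
receivers:
  - name: 'email-me'
    email_configs:
      - to: 'you@example.com'            # placeholder
        from: 'you@example.com'          # placeholder
        smarthost: 'smtp.gmail.com:587'  # host:port of the SMTP server
        auth_username: 'you@example.com'
        auth_identity: 'you@example.com'
        auth_password: 'app-password'    # placeholder; use a Gmail app password
```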
A good tool when you run multiple containers is Docker Compose, in which case your docker-compose.yml could be:
version: "3"
services:
  prometheus:
    restart: always
    image: prom/prometheus:v2.26.0
    container_name: prometheus
    command:
      - --config.file=/etc/prometheus/prometheus.yml
    volumes:
      - ${PWD}/prometheus.yml:/etc/prometheus/prometheus.yml
      - ${PWD}/rules.yml:/etc/alertmanager/rules.yml
    expose:
      - "9090"
    ports:
      - 9090:9090
  alertmanager:
    restart: always
    depends_on:
      - prometheus
    image: prom/alertmanager:v0.21.0
    container_name: alertmanager
    volumes:
      - ${PWD}/alertmanager.yml:/etc/alertmanager/alertmanager.yml
    expose:
      - "9093"
    ports:
      - 9093:9093
and you could then simply run:
docker-compose up
In either case, you can then browse:
Prometheus on the host's port 9090 i.e. localhost:9090
Alert Manager on the host's port 9093, i.e. localhost:9093
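One piece the two-container split still needs is telling Prometheus where Alertmanager lives. In prometheus.yml that is the alerting section; a sketch, assuming the Compose service name alertmanager from above (under plain docker run you would use the container's address instead), with the rules path matching the mount used above:

```yaml
# Fragment of prometheus.yml (assumed, not from the question)
rule_files:
  - /etc/alertmanager/rules.yml

alerting:
  alertmanagers:
    - static_configs:
        - targets: ['alertmanager:9093']  # Compose service name + port
```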

Prometheus cAdvisor with docker swarm

I have set up cAdvisor in a Docker swarm cluster and need to dynamically monitor the nodes of the cluster using active service discovery.
If I start cAdvisor through the docker service command, it works fine and I am able to discover the swarm nodes dynamically. But if I pass the same parameters in a docker compose file, I cannot see any nodes. Following is the docker compose configuration of cAdvisor:
cadvisor:
  image: google/cadvisor
  container_name: cadvisor
  ports:
    - target: 8080
      mode: host
      published: 8040
  network_mode: "host"
  deploy:
    mode: replicated
  command:
    - --docker_only=true
  labels:
    - "prometheus-job=cadvisor"
  volumes:
    - /:/rootfs:ro
    - /var/run:/var/run
    - /sys:/sys:ro
    - /var/lib/docker:/var/lib/docker:ro
    - /var/run/docker.sock:/var/run/docker.sock:rw
Docker service command:
docker service create --name cadvisor -l prometheus-job=cadvisor \
--mode=global --publish published=8040,target=8080,mode=host \
--mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock,ro \
--mount type=bind,src=/,dst=/rootfs,ro \
--mount type=bind,src=/var/run,dst=/var/run \
--mount type=bind,src=/sys,dst=/sys,ro \
--mount type=bind,src=/var/lib/docker,dst=/var/lib/docker,ro \
google/cadvisor -docker_only
Any help in this regard will be appreciated.
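One visible difference between the two variants above: the working docker service command uses --mode=global (one cAdvisor task per node, which is what per-node discovery needs) and sets the label with -l at the service level, while the compose file declares mode: replicated with a container-level labels key. A hedged sketch of a deploy section aligned with the service command; whether this alone fixes the discovery depends on the rest of the setup:

```yaml
# Sketch: aligning the compose deploy section with
# "docker service create --mode=global -l prometheus-job=cadvisor ..."
deploy:
  mode: global                        # one task per swarm node, like --mode=global
  labels:
    - "prometheus-job=cadvisor"       # service-level label, as -l sets it
```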

docker run vs docker-compose one of these things is not like the other

I have an nginx proxy set up with a shell script that looks something like this:
docker run --detach --name nginx-proxy --publish 80:80 --publish 443:443 \
  --volume /etc/nginx/certs \
  --volume /etc/nginx/vhost.d \
  --volume /usr/share/nginx/html \
  --volume /var/run/docker.sock:/tmp/docker.sock:ro \
  --restart unless-stopped jwilder/nginx-proxy:alpine
echo proxy up
docker run --detach --name nginx-proxy-letsencrypt --volumes-from nginx-proxy \
  --volume /var/run/docker.sock:/var/run/docker.sock:ro \
  --restart unless-stopped jrcs/letsencrypt-nginx-proxy-companion
echo ssl companion up
docker run -d \
  -e "VIRTUAL_HOST=[domain]" \
  -e "LETSENCRYPT_HOST=[domain]" \
  -e "LETSENCRYPT_EMAIL=[emailaddress]" \
  --name [domain] \
  --expose 80 \
  --restart always \
  -v /code/[domain]:/var/www/html \
  fauria/lamp
echo test site up at [domain]
and this site works properly and functions as expected.
I then stop the web server container and use the following docker-compose.yaml, and it fails with a 502.
version: '3.3'
services:
  lamp:
    restart: always
    image: fauria/lamp
    container_name: [domain]
    expose:
      - "80"
    volumes:
      - /code/[domain]:/var/www/html
    environment:
      - VIRTUAL_HOST=[domain]
      - LETSENCRYPT_HOST=[domain]
      - LETSENCRYPT_EMAIL=[emailaddress]
Why? Aren't they the same? What am I missing?
When you use docker-compose, it creates a Docker network for you in which all of the services can communicate with each other. Since you simply stopped the container and started it again with docker-compose, it no longer shares a network with the proxy containers, which is why you get the 502 error. What you need to do is add the other containers to your docker-compose file, and make sure you connect to them using the proper service name (instead of localhost, use http://service_name:443). Alternatively, you could somehow give the containers in your Docker network access to your localhost, but I'm not sure how to do that; maybe you need to use 0.0.0.0 instead of 127.0.0.1?
The problem was that I was not connecting my docker-compose service to the bridge network used by default by the proxy image:
version: '3.3'
services:
  lamp:
    restart: always
    image: fauria/lamp
    network_mode: bridge
    container_name: [domain]
    expose:
      - "80"
    volumes:
      - /code/[domain]:/var/www/html
    environment:
      - VIRTUAL_HOST=[domain]
      - LETSENCRYPT_HOST=[domain]
      - LETSENCRYPT_EMAIL=[emailaddress]
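To confirm the Compose container actually landed on the same network as the proxy, one can inspect the default bridge network (a hypothetical check, not from the original posts):

```shell
# List the names of all containers attached to the default bridge network;
# both nginx-proxy and the lamp container should appear.
docker network inspect bridge --format '{{range .Containers}}{{.Name}} {{end}}'
```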

Docker gitlab container healthy but not accessible

Hello,
I have the following problem on docker 18.06.1-ce.
I have an owncloud container that works with the following configurations:
Image : owncloud/server:10.0
Status healthy
Ports : 0.0.0.0:4090->80/tcp, 0.0.0.0:4093->443/tcp
So far, so good, this container is functional.
Now, I want to add a gitlab container with the following configurations:
Image : gitlab/gitlab-ce:latest
Status : healthy
Ports : 0.0.0.0:2222->22/tcp, 0.0.0.0:8080->80/tcp, 0.0.0.0:4443->443/tcp
The problem is that I can't access the containers with the ports listed above (connection failed).
I tried to install the container in a different way:
By docker run command:
docker run --detach --hostname nsXXXXX.ip-XX-XXX-XX.eu \
  --env GITLAB_OMNIBUS_CONFIG="external_url 'https://nsXXXXX.ip-XX-XXX-XX.eu:4443'; gitlab_rails['lfs_enabled'] = true;" \
  --publish 4443:443 --publish 8080:80 --publish 2222:22 \
  --name gitlab --restart always \
  --volume /srv/gitlab/config:/etc/gitlab \
  --volume /srv/gitlab/logs:/var/log/gitlab \
  --volume /srv/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest
And by docker-compose:
web:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  hostname: 'nsXXXXXXX.ip-XX-XXX-XX.eu'
  privileged: true
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'https://nsXXXXXXX.ip-XX-XXX-XX.eu:4443/'
      gitlab_rails['gitlab_shell_ssh_port'] = 4182
  ports:
    - '4180:80'
    - '4443:443'
    - '4182:22'
  volumes:
    - '/srv/gitlab/config:/etc/gitlab'
    - '/srv/gitlab/logs:/var/log/gitlab'
    - '/srv/gitlab/data:/var/opt/gitlab'
My Docker runs on a dedicated Debian Stretch server hosted by Kimsufi.
Do you have any ideas to help me? Thank you very much.
Solved : https://forum.gitlab.com/t/docker-gitlab-container-healthy-but-not-accessible/20042/5
It was necessary to map the port of the external URL to the internal port... Beginner's error:)
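In Compose terms: since external_url ends in :4443, the bundled nginx listens on 4443 inside the container, so the published port has to map onto that same internal port. A sketch of the corrected fragment (not the poster's exact final file):

```yaml
# external_url 'https://nsXXXXXXX.ip-XX-XXX-XX.eu:4443/' makes nginx listen
# on 4443 *inside* the container, so publish 4443 -> 4443:
ports:
  - '4443:4443'   # was '4443:443'
  - '4180:80'
  - '4182:22'
```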

Traefik cannot reach backend when docker-compose has port mapping

My docker is in swarm mode.
I am puzzled about why Traefik is no longer able to reach my Nexus backend as soon as I set up a port mapping in its compose file: I get a 504 (timeout) error instead. Without the mapping, Traefik works fine.
Traefik is deployed on the swarm, as a service, with the following commands:
docker network create --driver=overlay traefik-net
docker service create \
--name traefik \
--constraint=node.role==manager \
--publish 80:80 --publish 8088:8080 \
--with-registry-auth \
--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
--mount type=bind,source=/var/opt/data/flat/gerdce/shared/servers/traefik/out/,target=/out/ \
--mount type=bind,source=/var/opt/data/flat/gerdce/shared/servers/traefik/traefik.toml,target=/traefik.toml \
--network traefik-net \
dvckzasc03.rouen.francetelecom.fr:5000/pli/traefik \
--docker \
--docker.domain=docker.localhost \
--docker.swarmMode=true \
--docker.watch=true \
--api
(I also tried running Traefik from a docker-compose file, but with no more success.)
The Nexus stack:
version: '3.3'
services:
  nexus:
    image: some_nexus:5000/sonatype/nexus3
    volumes:
      - /var/opt/data/flat/gerdce/shared/repositories/nexus/data:/nexus-data
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      labels:
        - "traefik.enable=true"
        - "traefik.static.frontend.rule=PathPrefix:/static/rapture"
        - "traefik.serviceext.frontend.rule=PathPrefix:/service/extdirect"
        - "traefik.serviceout.frontend.rule=PathPrefix:/service/outreach"
        - "traefik.nexus.frontend.rule=PathPrefixStrip:/nexus"
        - "traefik.port=8081"
    networks:
      - traefik-net
    #ports:
    #  - "5050:5050"
networks:
  traefik-net:
    external: true
Everything works fine this way: Traefik redirects every call to /nexus (and so on) correctly... until I uncomment the port mapping!
I really need this port mapping in order to log in / push / pull from my VM.
Any idea on:
why this is happening (have I missed something in the docs)?
what may be the fix or workaround here?
Versions :
Docker version 18.03.0-ce, build 0520e24
docker-compose version 1.22.0, build f46880fe
Traefik 1.6.5
First, I would recommend sticking this into a docker-stack.yml like your Nexus stack file, as it will be easier to maintain.
Here's an example of a Traefik proxy I deployed yesterday which works with port mappings:
version: "3.4"
services:
  traefik:
    image: traefik:latest
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
Eventually, I had it working by adding a missing label:
- "traefik.docker.network=traefik-net"
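In context, that label goes alongside the other Traefik labels in the Nexus stack file. Publishing a port attaches the service to the swarm's ingress network as well, so the container ends up with more than one network, and this label tells Traefik which network's IP to use to reach the backend. A sketch of the relevant fragment:

```yaml
deploy:
  labels:
    - "traefik.enable=true"
    - "traefik.port=8081"
    - "traefik.docker.network=traefik-net"  # reach the backend over this network
```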
