Nginx reverse proxy fails to access prometheus container - docker

I've created this docker compose file, which builds and runs without errors:
version: '3.7'

volumes:
  grafana:
    driver: local
  prometheus-data:
    driver: local

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    expose:
      - 9090
    volumes:
      - prometheus-data:/prometheus
      - ./prometheus:/etc/prometheus
    restart: unless-stopped
    command:
      - "--config.file=/etc/prometheus/prometheus.yml"

  grafana:
    image: grafana/grafana-oss:latest
    container_name: grafana
    expose:
      - 3000
    volumes:
      - grafana:/etc/grafana
    restart: unless-stopped

  nginx:
    image: nginx
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /opt/certs/:/etc/nginx/certs/
      - ./nginx/gw-web/:/usr/share/nginx/html:ro
      - ./nginx/nginx_proxy.conf:/etc/nginx/conf.d/default.conf
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
    command: [nginx-debug, '-g', 'daemon off;']
nginx has the following reverse-proxy configuration:
server {
    listen 80;
    listen [::]:80;
    server_name 10.23.225.72;

    location /prometheus {
        proxy_pass http://prometheus:9090;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_max_temp_file_size 0;
        proxy_read_timeout 3000;
    }

    location /grafana {
        proxy_pass http://grafana:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_max_temp_file_size 0;
        proxy_read_timeout 3000;
    }

    location / {
        root /usr/share/nginx/html;
    }
}
The root page is accessible and works as expected, but when I try to go to any of the containers I just get a '404: not found' error.
What I've also tried:
location /prometheus {
    proxy_pass http://prometheus:9090/;

and

location /prometheus/ {
    proxy_pass http://prometheus:9090/;

and

location /prometheus/ {
    proxy_pass http://prometheus:9090;
I've also tried running nginx as a service on the host machine, with no success there either, and in any case my project requires me to containerize this.
For shits and giggles, I tried adding another nginx container to the docker compose file:
nginx2:
  image: nginx
  container_name: webserver2
  restart: unless-stopped
  expose:
    - 8080
and nginx config:
location /webserver {
    proxy_pass http://webserver2:8080/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_max_temp_file_size 0;
    proxy_read_timeout 3000;
}
But that also returns 404.
Is there something going on with the docker networking maybe?
I checked the network configuration JSON by inspecting it from VS Code:
{
  "Name": "frp-6_default",
  "Id": "940872d1cf151f8d1bb060edbf95c1bcfe29a544c7ea9561091b79a6c5588510",
  "Created": "2023-01-25T11:52:01.520893117Z",
  "Scope": "local",
  "Driver": "bridge",
  "EnableIPv6": false,
  "IPAM": {
    "Driver": "default",
    "Options": null,
    "Config": [
      {
        "Subnet": "172.18.0.0/16",
        "Gateway": "172.18.0.1"
      }
    ]
  },
  "Internal": false,
  "Attachable": false,
  "Ingress": false,
  "ConfigFrom": {
    "Network": ""
  },
  "ConfigOnly": false,
  "Containers": {
    "4a2c2b6ca8aec52f801ad554c6b2c14abbe616078393f1e8ef40bd82ad12aa2a": {
      "Name": "webserver2",
      "EndpointID": "86228e6678d82a9d6735b5440618be7fc281127e6d610d281d9ffc5bcb4f256f",
      "MacAddress": "02:42:ac:12:00:05",
      "IPv4Address": "172.18.0.5/16",
      "IPv6Address": ""
    },
    "9e13cbc4805fedf4db05b5de989f726bc110b97596b99b933a93e701641294a9": {
      "Name": "webserver",
      "EndpointID": "1bbcfe058373b225e5f53e8ff4d907d39bed578d9ac14d514ccf8da2d3c9628a",
      "MacAddress": "02:42:ac:12:00:04",
      "IPv4Address": "172.18.0.4/16",
      "IPv6Address": ""
    },
    "b846cc374fa09e90589c982613224f31135660a17a5f4585b24d876ea2abd53c": {
      "Name": "grafana",
      "EndpointID": "3c0dac6d612a48dc067a5c208334af4f90c8c1d9122e263ca464186c1cf19f35",
      "MacAddress": "02:42:ac:12:00:03",
      "IPv4Address": "172.18.0.3/16",
      "IPv6Address": ""
    },
    "dbe81d7b88413f78a06ce94f86781242299a3f6eee0f96f607ebaf8fb0b17be6": {
      "Name": "prometheus",
      "EndpointID": "4792bbd19cc7b782ec522d148eef7dcb5e31b5e38d99e886ff728a3f519b4973",
      "MacAddress": "02:42:ac:12:00:02",
      "IPv4Address": "172.18.0.2/16",
      "IPv6Address": ""
    }
  },
  "Options": {},
  "Labels": {
    "com.docker.compose.network": "default",
    "com.docker.compose.project": "frp-6",
    "com.docker.compose.version": "2.15.1"
  }
}
The following stand out to me:
"Internal": false,
"Attachable": false,
Can those be the culprits? If so, how do I change them?
For the record, I also tried putting the containers' IP addresses in the nginx config, like so:
location /prometheus {
    proxy_pass http://172.18.0.2:9090/;
still unsuccessful...

The problem here is in your prometheus configuration. Because you have:
location /prometheus {
    proxy_pass http://prometheus:9090;
    ...
}
When you access http://yourhost/prometheus/, that request gets proxied to http://prometheus:9090/prometheus/, and by default Prometheus doesn't know what to do with that /prometheus path.
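As background (an illustration of nginx's documented `proxy_pass` behavior, not part of the original answer): whether `proxy_pass` carries a URI part decides what happens to the matched location prefix.

```nginx
# No URI part: the request path is passed through unchanged
location /prometheus {
    proxy_pass http://prometheus:9090;    # /prometheus/graph -> /prometheus/graph
}

# With a URI part: the matched prefix is replaced by that URI
location /prometheus/ {
    proxy_pass http://prometheus:9090/;   # /prometheus/graph -> /graph
}
```

The second form strips the prefix before the request reaches Prometheus, but Prometheus still generates links and redirects relative to /, which is why prefix-stripping alone doesn't fix the problem.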
You need to tell it that it's being served from a non-root path using the --web.external-url command line option. That might look something like:
services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus:/etc/prometheus
      - prometheus-data:/prometheus
    restart: unless-stopped
    command:
      - --config.file=/etc/prometheus/prometheus.yml
      - --storage.tsdb.path=/prometheus
      - --web.console.libraries=/usr/share/prometheus/console_libraries
      - --web.console.templates=/usr/share/prometheus/consoles
      - --web.external-url=http://localhost:8080/prometheus/
(I've preserved all the command line options that are used by default in the prometheus image; the only new thing here is the --web.external-url option.)
Before making this change:
$ curl -i http://localhost:8080/prometheus
HTTP/1.1 404 Not Found
Server: nginx/1.23.3
Date: Wed, 25 Jan 2023 13:17:42 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 19
Connection: keep-alive
X-Content-Type-Options: nosniff
404 page not found
After making this change:
$ curl -i http://localhost:8080/prometheus
HTTP/1.1 302 Found
Server: nginx/1.23.3
Date: Wed, 25 Jan 2023 13:18:19 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 40
Connection: keep-alive
Location: /prometheus/graph
Found.
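The original question proxies Grafana under a /grafana prefix as well, and Grafana needs an analogous hint about its external URL. A minimal sketch, assuming a Grafana version recent enough (6.3+) to support serving from a sub-path; the hostname in the URL is illustrative:

```yaml
  grafana:
    image: grafana/grafana-oss:latest
    environment:
      # Tell Grafana its externally visible URL and to serve from that sub-path
      - GF_SERVER_ROOT_URL=http://localhost/grafana/
      - GF_SERVER_SERVE_FROM_SUB_PATH=true
```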

Related

502 Error Nginx in docker proxy to backend server

Help me please. I am trying to set up a configuration with a backend server in a docker container and an nginx container which proxies requests to the backend server.
Here is my configuration:
docker network inspect
[
  {
    "Name": "note",
    "Id": "b58827aea0e606437a6be690f68c2f28226775da1cf060e5f3d66e8a7a5ecd2b",
    "Created": "2023-01-14T21:09:25.741085061+05:00",
    "Scope": "local",
    "Driver": "bridge",
    "EnableIPv6": false,
    "IPAM": {
      "Driver": "default",
      "Options": {},
      "Config": [
        {
          "Subnet": "172.19.0.0/16",
          "Gateway": "172.19.0.1"
        }
      ]
    },
    "Internal": false,
    "Attachable": false,
    "Ingress": false,
    "ConfigFrom": {
      "Network": ""
    },
    "ConfigOnly": false,
    "Containers": {
      "4afa74e27d45b9a8aba8f2f9135ee60ea1dc8900cf8eaacd7b23edb9524fbc47": {
        "Name": "backend",
        "EndpointID": "5d3f7920df2db22bbd4d62f1a362642660931626524d097b5c3c9f0fbcecd464",
        "MacAddress": "02:42:ac:13:00:02",
        "IPv4Address": "172.19.0.2/16",
        "IPv6Address": ""
      },
      "cc60ab137301a32c6b5daee5e185eb732460616e35a85b140900d388b305366b": {
        "Name": "mongo",
        "EndpointID": "6e2ba2aeed86c09e10ee60c32fbc9977fce3d50ee927ecaa081be8a906be6709",
        "MacAddress": "02:42:ac:13:00:03",
        "IPv4Address": "172.19.0.3/16",
        "IPv6Address": ""
      },
      "fddcabdf6d5a676d381e08edf45315697940e05a38b9ebc36728820fa778a6eb": {
        "Name": "nginx",
        "EndpointID": "a25f40ad8967844d26ab247e81c17e5316d1774441d544157e7742bc3dd115a9",
        "MacAddress": "02:42:ac:13:00:04",
        "IPv4Address": "172.19.0.4/16",
        "IPv6Address": ""
      }
    },
    "Options": {},
    "Labels": {}
  }
]
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fddcabdf6d5a nginx "/docker-entrypoint.…" 11 minutes ago Up 11 minutes 0.0.0.0:8080->80/tcp, :::8080->80/tcp nginx
cc60ab137301 mongo:latest "docker-entrypoint.s…" 55 minutes ago Up 55 minutes 0.0.0.0:27017->27017/tcp, :::27017->27017/tcp mongo
4afa74e27d45 registry.gitlab.com/noteit/backend:master "/bin/sh -c 'npm run…" 55 minutes ago Up 55 minutes 0.0.0.0:3002->3001/tcp, :::3002->3001/tcp backend
nginx.conf
events {
    worker_connections 1024;
}

http {
    include mime.types;

    upstream backend {
        server backend:3002;
    }

    server {
        listen 80;

        location /api {
            proxy_pass http://backend;
        }

        location /auth {
            proxy_pass http://backend;
        }

        location / {
            return 500;
        }
    }
}
So, a request to http://localhost:8080/auth/login receives a 502 response from nginx.
In nginx logs I see:
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/01/15 15:20:37 [error] 31#31: *2 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: , request: "POST /auth/login/ HTTP/1.1", upstream: "http://172.19.0.2:3002/auth/login/", host: "localhost:8080"
172.19.0.1 - - [15/Jan/2023:15:20:37 +0000] "POST /auth/login/ HTTP/1.1" 502 157 "-" "PostmanRuntime/7.29.2"
Please explain how I can solve this. I tried to search for a solution, but all my attempts were unsuccessful.
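One thing worth checking, given the `docker ps` output above (`0.0.0.0:3002->3001/tcp`): the published port mapping applies only on the host. Container-to-container traffic on the docker network goes straight to the port the process listens on inside the container, which matches the "connection refused" on 172.19.0.2:3002 in the log. A sketch of the corrected upstream, not a confirmed fix:

```nginx
upstream backend {
    # 3002 is the host-published port; inside the docker network the app
    # listens on 3001, so nginx must target that container port directly
    server backend:3001;
}
```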

Jenkins with nginx using docker Port 50000 config

I am using Jenkins and Nginx, both in Docker.
From the Jenkins docker documentation, it seems that Jenkins needs 2 ports, 50000 and 8080. Reference: https://github.com/jenkinsci/docker/blob/master/README.md
Nginx acting as reverse proxy has this configuration right now
server {
    listen 80;
    server_name jenkins.kryptohive.com
                www.jenkins.kryptohive.com;
    server_tokens off;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://jenkins.kryptohive.com$request_uri;
    }
}

server {
    listen 443 ssl http2;
    server_name www.jenkins.kryptohive.com;
    server_tokens off;
    include /etc/nginx/conf.d/ssl.kryptohive;
    return 301 https://jenkins.kryptohive.com$request_uri;
}

# configuration of the server
server {
    listen 443 ssl http2;
    server_name jenkins.kryptohive.com;
    access_log /var/log/nginx/jenkins_access.log;
    error_log /var/log/nginx/jenkins_error.log;
    include /etc/nginx/conf.d/ssl.kryptohive;
    include /etc/nginx/conf.d/gzip_conf;
    server_tokens off;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # Fix the "It appears that your reverse proxy set up is broken" error.
        proxy_pass http://jenkins:8080;
        proxy_read_timeout 90;
        proxy_redirect http://jenkins:8080 https://jenkins.kryptohive.com;
        # Required for new HTTP-based CLI
        proxy_http_version 1.1;
        proxy_request_buffering off;
        proxy_buffering off;
        add_header 'X-SSH-Endpoint' 'jenkins.kryptohive.com:50000' always;
    }
}
Reference for nginx config : http://web.archive.org/web/20190723112236/https://wiki.jenkins.io/display/JENKINS/Jenkins+behind+an+NGinX+reverse+proxy
and this works perfectly fine to serve the Jenkins website, but I get the error
SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get local issuer certificate)
Request headers:
Content-Type: application/json
User-Agent: GitLab/14.0.0-pre
X-Gitlab-Event: Push Hook
Request body:
{
  "object_kind": "push",
  "event_name": "push",
  "before": "e7f7c62995e68446fb1c519fb7f2316eb083bb83",
  "after": "9d62e92972ed32ca67c09108395ebad9a20f3e92",
  "ref": "refs/heads/master",
  "checkout_sha": "9d62e92972ed32ca67c09108395ebad9a20f3e92",
  "message": null,
  "user_id": 4642285,
  "user_name": "Abstract Aesthetics",
  "user_username": "4bstractanimation",
  "user_email": "",
  "user_avatar": "https://secure.gravatar.com/avatar/99a9286c8eaf9b7335f91c3ddbdad7fd?s=80&d=identicon",
  "project_id": 26279073,
  "project": {
    "id": 26279073,
    "name": "ucurs-default-shop",
    "description": "",
    "web_url": "https://gitlab.com/4bstractanimation/django-standard-shop",
    "avatar_url": null,
    "git_ssh_url": "git@gitlab.com:4bstractanimation/django-standard-shop.git",
    "git_http_url": "https://gitlab.com/4bstractanimation/django-standard-shop.git",
    "namespace": "Abstract Aesthetics",
    "visibility_level": 0,
    "path_with_namespace": "4bstractanimation/django-standard-shop",
    "default_branch": "master",
    "ci_config_path": "",
    "homepage": "https://gitlab.com/4bstractanimation/django-standard-shop",
    "url": "git@gitlab.com:4bstractanimation/django-standard-shop.git",
    "ssh_url": "git@gitlab.com:4bstractanimation/django-standard-shop.git",
    "http_url": "https://gitlab.com/4bstractanimation/django-standard-shop.git"
  },
  "commits": [
    {
      "id": "9d62e92972ed32ca67c09108395ebad9a20f3e92",
      "message": "theme updated\n",
      "title": "theme updated",
      "timestamp": "2021-05-31T20:45:11+05:00",
      "url": "https://gitlab.com/4bstractanimation/django-standard-shop/-/commit/9d62e92972ed32ca67c09108395ebad9a20f3e92",
      "author": {
        "name": "Abstract Aesthetics",
        "email": "4bstractanimation@gmail.com"
      },
      "added": [],
      "modified": [
        "public/index.html"
      ],
      "removed": []
    },
    {
      "id": "6eaf6296ce2a7215431ae2e641fd64159fd26be0",
      "message": "theme updated\n",
      "title": "theme updated",
      "timestamp": "2021-05-31T20:44:57+05:00",
      "url": "https://gitlab.com/4bstractanimation/django-standard-shop/-/commit/6eaf6296ce2a7215431ae2e641fd64159fd26be0",
      "author": {
        "name": "Abstract Aesthetics",
        "email": "4bstractanimation@gmail.com"
      },
      "added": [
        "src/components/admin-view/images/logo_feild.js"
      ],
      "modified": [
        "src/StateStore/reducer.js",
        "src/components/admin-view/images/index.js",
        "src/components/admin-view/information/TextView.js",
        "src/components/admin-view/information/index.js",
        "src/components/layout/footer/index.js",
        "src/components/layout/header/index.js"
      ],
      "removed": []
    },
    {
      "id": "e7f7c62995e68446fb1c519fb7f2316eb083bb83",
      "message": "theme updated\n",
      "title": "theme updated",
      "timestamp": "2021-05-31T19:38:49+05:00",
      "url": "https://gitlab.com/4bstractanimation/django-standard-shop/-/commit/e7f7c62995e68446fb1c519fb7f2316eb083bb83",
      "author": {
        "name": "Abstract Aesthetics",
        "email": "4bstractanimation@gmail.com"
      },
      "added": [
        "src/components/customers-view/filter-product/price_filter.js"
      ],
      "modified": [
        "public/index.html",
        "src/components/customers-view/filter-product/index.js",
        "src/components/customers-view/populated-view/index.js",
        "src/components/customers-view/single-product-card/index.js",
        "src/components/customers-view/single-product-view/index.js"
      ],
      "removed": []
    }
  ],
  "total_commits_count": 3,
  "push_options": {},
  "repository": {
    "name": "ucurs-default-shop",
    "url": "git@gitlab.com:4bstractanimation/django-standard-shop.git",
    "description": "",
    "homepage": "https://gitlab.com/4bstractanimation/django-standard-shop",
    "git_http_url": "https://gitlab.com/4bstractanimation/django-standard-shop.git",
    "git_ssh_url": "git@gitlab.com:4bstractanimation/django-standard-shop.git",
    "visibility_level": 0
  }
}
when I try to connect GitLab to Jenkins.
How can I configure nginx to also serve port 50000 of Jenkins over SSL?
My docker compose environment:
version: "3.4"

services:
  # JENKINS
  jenkins:
    image: jenkins/jenkins:lts-jdk11
    volumes:
      - ${PWD}/Jenkins:/var/jenkins_home

  # NGINX SERVER
  nginx_server:
    image: webdevops/php-nginx:7.3
    volumes:
      - ${PWD}/config/nginx/conf.d:/etc/nginx/conf.d
      - ${PWD}/log/nginx:/var/log/nginx
      - ${PWD}/../get-cert/data/certbot/conf:/certs
    ports:
      - 80:80 # app port
      - 443:443
      - 50000:50000
    container_name: nginx_server
#####################################################
EDIT
The problem is actually with SSL in general, as I tried running
curl jenkins.kryptohive.com
and it gave the following error
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
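A common cause of exactly this asymmetry (curl fails to verify while browsers are happy) is serving only the leaf certificate: browsers can often fetch missing intermediates on their own, but curl cannot. A hedged sketch of what the certbot-style directives would look like; the paths are assumptions based on the /certs volume in the compose file:

```nginx
# Serve the full chain (leaf + intermediates), not cert.pem alone
ssl_certificate     /certs/live/jenkins.kryptohive.com/fullchain.pem;
ssl_certificate_key /certs/live/jenkins.kryptohive.com/privkey.pem;
```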
I recreated the SSL certificate, yet still got the same error, although in the browser my SSL certificate seems to validate.
It was probably some cache issue, as it worked when I commented out some code in nginx for proxy headers and restarted the server.
After that I uncommented that code again and restarted the server; it still worked.
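As for the original question about port 50000: that is Jenkins' inbound agent (JNLP) port, which speaks plain TCP rather than HTTP, so an nginx `location` block can't proxy it; nginx's `stream` module can. A hedged sketch (a top-level block, outside `http {}`; the certificate paths are assumptions, and agents would then need to connect over TLS):

```nginx
stream {
    server {
        listen 50000 ssl;
        ssl_certificate     /certs/live/jenkins.kryptohive.com/fullchain.pem;
        ssl_certificate_key /certs/live/jenkins.kryptohive.com/privkey.pem;
        # Forward raw TCP to the Jenkins container's agent port
        proxy_pass jenkins:50000;
    }
}
```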

docker compose: No assignment of host names when bridge is used

In order to make every container part of the default bridge, I added network_mode: bridge to every service. They became part of the bridge, but the containers are not reachable by their hostnames. Below is the config.
docker-compose.yml
version: '2'

services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    network_mode: bridge
    hostname: elasticsearch

  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5000:5000"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    network_mode: bridge
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - ./kibana/config/:/usr/share/kibana/config:ro
    ports:
      - "5601:5601"
    network_mode: bridge
    depends_on:
      - elasticsearch
Docker compose up
$ docker-compose up -d
Creating docker-elk_elasticsearch_1 ... done
Creating docker-elk_kibana_1 ... done
Creating docker-elk_logstash_1 ... done
Docker network inspect
$ docker network inspect bridge
[
  {
    "Name": "bridge",
    "Id": "f561a85fb2b22bbf251545c7021d57020cf152bd3a5c3c061c7d6b0cb4e267e5",
    "Created": "2018-09-19T07:02:49.36259364Z",
    "Scope": "local",
    "Driver": "bridge",
    "EnableIPv6": false,
    "IPAM": {
      "Driver": "default",
      "Options": null,
      "Config": [
        {
          "Subnet": "172.17.0.0/16",
          "Gateway": "172.17.0.1"
        }
      ]
    },
    "Internal": false,
    "Attachable": false,
    "Ingress": false,
    "ConfigFrom": {
      "Network": ""
    },
    "ConfigOnly": false,
    "Containers": {
      "0aedc2ce900b8a51f028e58a85c8db9480fd2816874a608540a899a4daab32fd": {
        "Name": "docker-elk_kibana_1",
        "EndpointID": "df3af338e0accb880ccc44323e5581064ee8ef84574485f1928d12dc415b598e",
        "MacAddress": "02:42:ac:11:00:05",
        "IPv4Address": "172.17.0.5/16",
        "IPv6Address": ""
      },
      "3f2088847bd8e958a047093b1af879c91c4071f57f0105bb7bf80fb8df832d41": {
        "Name": "docker-elk_logstash_1",
        "EndpointID": "6588b7eece43144833ae2f9ffe753e3cc6c70d0891a587c3e9a4e9ca84993532",
        "MacAddress": "02:42:ac:11:00:06",
        "IPv4Address": "172.17.0.6/16",
        "IPv6Address": ""
      },
      "ace35bb6fadd50823f64e9075b5972e6e3b24e8b73273a41e7a48f9eeff89da1": {
        "Name": "roach",
        "EndpointID": "dd058e3e9f46b2459f14a2e5bdf96eae277e81dcf7ac2e6ac1c97d8220ead30d",
        "MacAddress": "02:42:ac:11:00:03",
        "IPv4Address": "172.17.0.3/16",
        "IPv6Address": ""
      },
      "f90378063d2a0157110b77af39f2526347f1ea9634839e0d2c0c584fb14ff957": {
        "Name": "docker-elk_elasticsearch_1",
        "EndpointID": "294a2f67196788135f370bbf83526395ba4401afb25db9eb0b59fba7fd358912",
        "MacAddress": "02:42:ac:11:00:04",
        "IPv4Address": "172.17.0.4/16",
        "IPv6Address": ""
      },
      "f954c218e5ab15c83c2a0e2c848549c18879613f6f46d07f7ebf71cc89b6e55b": {
        "Name": "rabbitmq",
        "EndpointID": "e675ddc6076fe2256553e8b367a82aa36f488457e06ae6cf969c2e04feeb9fb8",
        "MacAddress": "02:42:ac:11:00:02",
        "IPv4Address": "172.17.0.2/16",
        "IPv6Address": ""
      }
    },
    "Options": {
      "com.docker.network.bridge.default_bridge": "true",
      "com.docker.network.bridge.enable_icc": "true",
      "com.docker.network.bridge.enable_ip_masquerade": "true",
      "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
      "com.docker.network.bridge.name": "docker0",
      "com.docker.network.driver.mtu": "1500"
    },
    "Labels": {}
  }
]
Docker inspect elasticsearch
$ docker inspect docker-elk_elasticsearch_1
"NetworkSettings": {
  "Bridge": "",
  "SandboxID": "31a438f8fcb3dd8efca37260e77d346f21239b36d8bb30f5f08db4b79880a5c9",
  "HairpinMode": false,
  "LinkLocalIPv6Address": "",
  "LinkLocalIPv6PrefixLen": 0,
  "Ports": {
    "9200/tcp": [
      {
        "HostIp": "0.0.0.0",
        "HostPort": "9200"
      }
    ],
    "9300/tcp": [
      {
        "HostIp": "0.0.0.0",
        "HostPort": "9300"
      }
    ]
  },
  "SandboxKey": "/var/run/docker/netns/31a438f8fcb3",
  "SecondaryIPAddresses": null,
  "SecondaryIPv6Addresses": null,
  "EndpointID": "294a2f67196788135f370bbf83526395ba4401afb25db9eb0b59fba7fd358912",
  "Gateway": "172.17.0.1",
  "GlobalIPv6Address": "",
  "GlobalIPv6PrefixLen": 0,
  "IPAddress": "172.17.0.4",
  "IPPrefixLen": 16,
  "IPv6Gateway": "",
  "MacAddress": "02:42:ac:11:00:04",
  "Networks": {
    "bridge": {
      "IPAMConfig": null,
      "Links": null,
      "Aliases": null,
      "NetworkID": "f561a85fb2b22bbf251545c7021d57020cf152bd3a5c3c061c7d6b0cb4e267e5",
      "EndpointID": "294a2f67196788135f370bbf83526395ba4401afb25db9eb0b59fba7fd358912",
      "Gateway": "172.17.0.1",
      "IPAddress": "172.17.0.4",
      "IPPrefixLen": 16,
      "IPv6Gateway": "",
      "GlobalIPv6Address": "",
      "GlobalIPv6PrefixLen": 0,
      "MacAddress": "02:42:ac:11:00:04",
      "DriverOpts": null
    }
  }
}
Kibana logs where Elasticsearch is inaccessible:
$ docker logs docker-elk_kibana_1
{"type":"log","@timestamp":"2018-09-20T05:27:05Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","@timestamp":"2018-09-20T05:27:05Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
However, everything works fine with the config below, where I haven't provided any network-related settings.
version: '2'

services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"

  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5000:5000"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - ./kibana/config/:/usr/share/kibana/config:ro
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
Containers on the default bridge network cannot refer to each other by host name; they can only refer to each other by IP. You can find this in the docs here https://docs.docker.com/network/bridge/#differences-between-user-defined-bridges-and-the-default-bridge.
Containers on the default bridge network can only access each other by IP addresses, unless you use the --link option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.
The solution is to define your custom bridge network in the Compose file with networks as described here https://docs.docker.com/compose/compose-file/#networks and add every container to this user defined network. On this network containers can resolve each other by name.
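Applied to the compose file above, that might look something like this (a sketch; the network name elk is arbitrary, and every service would list it the same way):

```yaml
networks:
  elk:
    driver: bridge

services:
  elasticsearch:
    # ...build, volumes, ports, environment as before...
    networks:
      - elk

  kibana:
    # ...
    networks:
      - elk
```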

Docker isolated network receives packets from outside

I set up a docker bridge network (on Linux) to test what the network traffic of individual applications (containers) looks like. A key requirement for the network is therefore that it is completely isolated from traffic originating from other applications or devices.
A simple example I created with compose is a ping-container that sends ICMP-packets to another one, with a third container running tcpdump to collect the traffic:
version: '3'

services:
  ping:
    image: 'detlearsom/ping'
    environment:
      - HOSTNAME=blank
      - TIMEOUT=2
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=1
    networks:
      - capture

  blank:
    image: 'alpine'
    command: sleep 300
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=1
    networks:
      - capture

  tcpdump:
    image: 'detlearsom/tcpdump'
    volumes:
      - '$PWD/data:/data'
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=1
    network_mode: 'service:ping'
    command: -v -w "/data/dump-011-ping2-${CAPTURETIME}.pcap"

networks:
  capture:
    driver: "bridge"
    internal: true
Note that I have set the network to internal and disabled IPv6. However, when I run it and collect the traffic, in addition to the expected ICMP packets I get IPv6 packets:
10:42:40.863619 IP6 fe80::42:2aff:fe42:e303 > ip6-allrouters: ICMP6, router solicitation, length 16
10:42:43.135167 IP6 fe80::e437:76ff:fe9e:36b4.mdns > ff02::fb.mdns: 0 [2q] PTR (QM)? _ipps._tcp.local. PTR (QM)? _ipp._tcp.local.
10:42:37.875646 IP6 fe80::e437:76ff:fe9e:36b4.mdns > ff02::fb.mdns: 0*- [0q] 2/0/0 (Cache flush) PTR he...F.local., (Cache flush) AAAA fe80::e437:76ff:fe9e:36b4 (161)
What is even stranger is that I receive UDP packets from port 57621:
10:42:51.868199 IP 172.25.0.1.57621 > 172.25.255.255.57621: UDP, length 44
This port corresponds to Spotify traffic and most likely originates from the Spotify application running on the host machine.
My question: Why do I see this traffic in my network that is supposed to be isolated?
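Since the host's bridge interface (the gateway, 172.25.0.1 here) is itself attached to the network, one pragmatic way to keep host-originated broadcasts out of the capture file is to narrow the capture filter; tcpdump accepts a BPF filter expression after its options. A sketch under that assumption:

```yaml
  tcpdump:
    image: 'detlearsom/tcpdump'
    # Capture only traffic within the subnet, excluding the host gateway
    command: -v -w "/data/dump.pcap" "net 172.25.0.0/16 and not host 172.25.0.1"
```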
For anyone interested, here is the network configuration:
[
  {
    "Name": "capture-011-ping2_capture",
    "Id": "35512f852332351a9f677f75b522982aa6bd288e813a31a3c36477baa005c0fd",
    "Created": "2018-08-07T10:42:31.610178964+01:00",
    "Scope": "local",
    "Driver": "bridge",
    "EnableIPv6": false,
    "IPAM": {
      "Driver": "default",
      "Options": null,
      "Config": [
        {
          "Subnet": "172.25.0.0/16",
          "Gateway": "172.25.0.1"
        }
      ]
    },
    "Internal": true,
    "Attachable": true,
    "Ingress": false,
    "ConfigFrom": {
      "Network": ""
    },
    "ConfigOnly": false,
    "Containers": {
      "dac25cb8810b2c786735a76c9b8387d1cfb4d6006dbb7549f5c7c3f381d884c2": {
        "Name": "capture-011-ping2_tcpdump_1",
        "EndpointID": "2463a46cf00a35c8c77ff9f224ff052aea7f061684b7a24b41dab150496f5c3d",
        "MacAddress": "02:42:ac:19:00:02",
        "IPv4Address": "172.25.0.2/16",
        "IPv6Address": ""
      }
    },
    "Options": {},
    "Labels": {
      "com.docker.compose.network": "capture",
      "com.docker.compose.project": "capture-011-ping2",
      "com.docker.compose.version": "1.22.0"
    }
  }
]

Docker swarm network not recognizing service/container on worker node. Using Traefik

I'm trying to test out a Traefik load balanced Docker Swarm and added a blank Apache service to the compose file.
For some reason I'm unable to place this Apache service on a worker node: I get a 502 bad gateway error unless it's on the manager node. Did I configure something wrong in the YML file?
networks:
  proxy:
    external: true

configs:
  traefik_toml_v2:
    file: $PWD/infra/traefik.toml

services:
  traefik:
    image: traefik:1.5-alpine
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 5s
      labels:
        - traefik.enable=true
        - traefik.docker.network=proxy
        - traefik.frontend.rule=Host:traefik.example.com
        - traefik.port=8080
        - traefik.backend.loadbalancer.sticky=true
        - traefik.frontend.passHostHeader=true
      placement:
        constraints:
          - node.role == manager
      restart_policy:
        condition: on-failure
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - $PWD/infra/acme.json:/acme.json
    networks:
      - proxy
    ports:
      - target: 80
        protocol: tcp
        published: 80
        mode: ingress
      - target: 443
        protocol: tcp
        published: 443
        mode: ingress
      - target: 8080
        protocol: tcp
        published: 8080
        mode: ingress
    configs:
      - source: traefik_toml_v2
        target: /etc/traefik/traefik.toml
        mode: 444

  server:
    image: bitnami/apache:latest
    networks:
      - proxy
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == worker
      restart_policy:
        condition: on-failure
      labels:
        - traefik.enable=true
        - traefik.docker.network=proxy
        - traefik.port=80
        - traefik.backend=nerdmercs
        - traefik.backend.loadbalancer.swarm=true
        - traefik.backend.loadbalancer.sticky=true
        - traefik.frontend.passHostHeader=true
        - traefik.frontend.rule=Host:www.example.com
You'll see I've enabled swarm and everything
The proxy network is an overlay network and I'm able to see it in the worker node:
ubuntu#staging-worker1:~$ sudo docker network ls
NETWORK ID NAME DRIVER SCOPE
f91525416b42 bridge bridge local
7c3264136bcd docker_gwbridge bridge local
7752e312e43f host host local
epaziubbr9r1 ingress overlay swarm
4b50618f0eb4 none null local
qo4wmqsi12lc proxy overlay swarm
ubuntu#staging-worker1:~$
And when I inspect that network ID
$ docker network inspect qo4wmqsi12lcvsqd1pqfq9jxj
[
  {
    "Name": "proxy",
    "Id": "qo4wmqsi12lcvsqd1pqfq9jxj",
    "Created": "2018-02-06T09:40:37.822595405Z",
    "Scope": "swarm",
    "Driver": "overlay",
    "EnableIPv6": false,
    "IPAM": {
      "Driver": "default",
      "Options": null,
      "Config": [
        {
          "Subnet": "10.0.0.0/24",
          "Gateway": "10.0.0.1"
        }
      ]
    },
    "Internal": false,
    "Attachable": false,
    "Ingress": false,
    "ConfigFrom": {
      "Network": ""
    },
    "ConfigOnly": false,
    "Containers": {
      "1860b30e97b7ea824ffc28319747b23b05c01b3fb11713fa5a2708321882bc5e": {
        "Name": "proxy_visualizer.1.dc0elaiyoe88s0mp5xn96ipw0",
        "EndpointID": "d6b70d4896ff906958c21afa443ae6c3b5b6950ea365553d8cc06104a6274276",
        "MacAddress": "02:42:0a:00:00:09",
        "IPv4Address": "10.0.0.9/24",
        "IPv6Address": ""
      },
      "3ad45d8197055f22f5ce629d896236419db71ff5661681e39c50869953892d4e": {
        "Name": "proxy_traefik.1.wvsg02fel9qricm3hs6pa78xz",
        "EndpointID": "e293f8c98795d0fdfff37be16861afe868e8d3077bbb24df4ecc4185adda1afb",
        "MacAddress": "02:42:0a:00:00:18",
        "IPv4Address": "10.0.0.24/24",
        "IPv6Address": ""
      },
      "735191796dd68da2da718ebb952b0a431ec8aa1718fe3be2880d8110862644a9": {
        "Name": "proxy_portainer.1.xkr5losjx9m5kolo8kjihznvr",
        "EndpointID": "de7ef4135e25939a2d8a10b9fd9bad42c544589684b30a9ded5acfa751f9c327",
        "MacAddress": "02:42:0a:00:00:07",
        "IPv4Address": "10.0.0.7/24",
        "IPv6Address": ""
      }
    },
    "Options": {
      "com.docker.network.driver.overlay.vxlanid_list": "4102"
    },
    "Labels": {},
    "Peers": [
      {
        "Name": "be4fb35c80f8",
        "IP": "manager IP"
      },
      {
        "Name": "4281cfd9ca73",
        "IP": "worker IP"
      }
    ]
  }
]
You'll see Traefik, Portainer, and Visualizer all present but not the apache container on the worker node
Inspecting the network on the worker node
$ sudo docker network inspect qo4wmqsi12lc
[
  {
    "Name": "proxy",
    "Id": "qo4wmqsi12lcvsqd1pqfq9jxj",
    "Created": "2018-02-06T19:53:29.104259115Z",
    "Scope": "swarm",
    "Driver": "overlay",
    "EnableIPv6": false,
    "IPAM": {
      "Driver": "default",
      "Options": null,
      "Config": [
        {
          "Subnet": "10.0.0.0/24",
          "Gateway": "10.0.0.1"
        }
      ]
    },
    "Internal": false,
    "Attachable": false,
    "Ingress": false,
    "ConfigFrom": {
      "Network": ""
    },
    "ConfigOnly": false,
    "Containers": {
      "c5725a332db5922a16b9a5e663424548a77ab44ab021e25dc124109e744b9794": {
        "Name": "example_site.1.pwqqddbhhg5tv0t3cysajj9ux",
        "EndpointID": "6866abe0ae2a64e7d04aa111adc8f2e35d876a62ad3d5190b121e055ef729182",
        "MacAddress": "02:42:0a:00:00:3c",
        "IPv4Address": "10.0.0.60/24",
        "IPv6Address": ""
      }
    },
    "Options": {
      "com.docker.network.driver.overlay.vxlanid_list": "4102"
    },
    "Labels": {},
    "Peers": [
      {
        "Name": "be4fb35c80f8",
        "IP": "manager IP"
      },
      {
        "Name": "4281cfd9ca73",
        "IP": "worker IP"
      }
    ]
  }
]
It shows up in the network's container list but the manager node containers are not there either.
Portainer is unable to see the apache site when it's on the worker node as well.
This problem is related to this: Creating new docker-machine instance always fails validating certs using openstack driver
Basically the answer is:

It turns out my hosting service locked down everything other than 22, 80, and 443 in the OpenStack Security Group Rules. I had to add 2376 TCP Ingress for docker-machine's commands to work.
It helps explain why docker-machine ssh worked but not docker-machine env.

You should also look at https://docs.docker.com/datacenter/ucp/2.2/guides/admin/install/system-requirements/#ports-used and make sure all the required ports are open.
