Docker container (nginx) could not resolve host.docker.internal via extra_hosts

I'm trying to forward requests from nginx to port 9100 (Node Exporter) on the Linux host.
This is my docker-compose.yml:
version: '3.3'
services:
  nginx:
    image: nginx:1.21.4-perl
    ports:
      - 80:80
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    extra_hosts:
      - 'host.docker.internal:10.187.1.52'
This is my nginx.conf
worker_processes auto;
events {
    worker_connections 1024;
}
http {
    server {
        listen 80;
        server_name localhost;
        resolver 127.0.0.11 ipv6=off;
        location ~ ^/node(/?.*) {
            proxy_pass http://host.docker.internal:9100$1;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_read_timeout 300s;
        }
    }
}
This is my docker version
docker version
Client: Docker Engine - Community
Version: 20.10.10
API version: 1.41
Go version: go1.16.9
Git commit: b485636
Built: Mon Oct 25 07:44:50 2021
OS/Arch: linux/amd64
Context: default
Experimental: true
I'm reverse-proxying to Node Exporter on port 9100, which runs on the Linux host machine.
It works well when I put the IP address ("10.187.1.52") in nginx.conf directly.
However, it fails when I try to use the hostname "host.docker.internal".
I also tried defining it in the "extra_hosts" section of docker-compose.yml, but it still fails. I get the same error: '[error] 24#24: *1 no resolver defined to resolve host.docker.internal, client: 10.186.110.106, server: localhost, request: "GET /node/metrics HTTP/1.1"'
Could you please give me any suggestions to fix this?
Note: I'm creating an example for monitoring with load testing on GitHub. This is a snippet from my project, so you can see the full source code at this link.

Docker Compose by default exposes each service's name as a hostname for inter-container networking. In your docker-compose.yml you have a service called appcadvisor, so your hostname should be appcadvisor instead of cadvisor.
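As a minimal sketch of that service-name addressing (assuming, hypothetically, that the exporter ran as a compose service named node-exporter instead of directly on the host), nginx could target the service name with no extra_hosts and no resolver directive, because a static proxy_pass is resolved through Docker's embedded DNS at startup:
services:
  nginx:
    image: nginx:1.21.4-perl
  node-exporter:
    image: prom/node-exporter
and in nginx.conf:
location /node/ {
    proxy_pass http://node-exporter:9100/;
}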

How to use service name as URL in Docker?

What I want to do:
My docker-compose file contains several services, each of which has a service name. All services use the exact same image. I want them to be reachable via the service name (like curl service-one, curl service-two) from the host. The use case is that these are microservices that should be reachable from a host system.
services:
  service-one:
    container_name: container-service-one
    image: customimage1
  service-two:
    container_name: container-service-two
    image: customimage1
What's the problem
Lots of tutorials say that this is the way to build microservices, but it's usually done with ports, whereas I need service names instead of ports.
What I've tried
There are lots of very old answers (5-6 years), but not a single one gives a working solution. There are ideas like parsing the IP of each container and then using that, or just using hostnames internally between Docker containers, or complex third-party tools like building your own DNS.
It feels weird that I'm the only one who needs several APIs reachable from a host system; this feels like a standard use case, so I think I'm missing something here.
Can somebody tell me where to go from here?
I'll go from basic to advanced, as far as I know.
For starters, every service that's part of the Docker network (by default, every service in the compose file) can already reach the others by service name "for free".
If you want to use the service names from the host itself, you can set up a reverse proxy like nginx and, based on the server name (which in your case would equal the service name), route to the appropriate port on the host running the Docker containers.
The basic idea is to intercept all traffic to port 80 on the server and route it by the incoming DNS name.
Here's an example:
compose file:
version: "3.9"
services:
nginx-router:
image: "nginx:latest"
volumes:
- type: bind
source: ./nginx.conf
target: /nginx/nginx.conf
ports:
- "80:80"
service1:
image: "nginx:latest"
ports:
- "8080:80"
service2:
image: "httpd:latest"
ports:
- "8081:80"
nginx.conf
worker_processes auto;
pid /tmp/nginx.pid;
events {
    worker_connections 8000;
    multi_accept on;
}
http {
    server {
        listen 80;
        server_name 127.0.0.1;
        location / {
            proxy_set_header Host $host;
            proxy_pass http://service1:80;
        }
    }
    server {
        listen 80;
        server_name localhost;
        location / {
            proxy_set_header Host $host;
            proxy_pass http://service2:80;
        }
    }
}
In this example, if my server name is localhost, I'm routing to service2, which is an httpd image (Apache HTTP Server), and we get the default Apache image HTML page.
When we access it through the 127.0.0.1 server name, we should see nginx, and indeed that is what we get.
In your case you'd use the service names instead, after setting them up as DNS records and using those records to route to the appropriate service.
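A minimal sketch of that last step, reusing the service names from the question: point each name at the Docker host in /etc/hosts (or a real DNS record), so that curl service-one from the host reaches nginx on port 80, which then routes by server_name:
# /etc/hosts on the host machine (sketch)
127.0.0.1 service-one
127.0.0.1 service-two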

Nginx Reverse Proxy Problem: Using Docker-compose and Rundeck

I'm setting up my Rundeck application in a Docker container and using nginx as a reverse proxy. I presume my problem originates from how the proxied response is received back by the server.
When I access the desired URL (https://vmName.Domain.corp/rundeck) I am able to see the login page, even though it doesn't have any UI. Once I enter the default admin:admin credentials I am directed to a 404 page. I pasted below one of the error logs from the docker-compose logs. You'll notice it's looking in /etc/nginx for Rundeck's assets.
I can't determine whether the problem is in my docker-compose file or in nginx's config file.
Example of error log:
production_nginx | 2021-02-04T08:17:50.770544192Z 2021/02/04 08:17:50 [error] 29#29: *8 open() "/etc/nginx/html/assets/jquery-aafa4de7f25b530ee04ba20028b2d154.js" failed (2: No such file or directory), client: 10.243.5.116, server: vmName.Domain.corp, request: "GET /assets/jquery-aafa4de7f25b530ee04ba20028b2d154.js HTTP/1.1", host: "vmName.Domain.corp", referrer: "https://vmName.Domain.corp/rundeck/user/login"
If curious, I can access that asset if I go to: https://vmName.Domain.corp/rundeck/assets/jquery-aafa4de7f25b530ee04ba20028b2d154.js
Here's more information on my set-up
/nginx/sites-enabled/docker-compose.yml (main machine)
rundeck:
  image: ${RUNDECK_IMAGE:-jordan/rundeck:latest}
  container_name: production_rundeck
  ports:
    - 4440:4440
  environment:
    RUNDECK_GRAILS_SERVER_URL: "https://vmName.Domain.corp/rundeck"
    RUNDECK_GRAILS_URL: "https://vmName.Domain.corp/rundeck"
    RUNDECK_SERVER_FORWARDED: "true"
    RDECK_JVM_SETTINGS: "-Xmx1024m -Xms256m -XX:MaxMetaspaceSize=256m -server -Dfile.encoding=UTF-8 -Drundeck.jetty.connector.forwarded=true -Dserver.contextPath=/rundeck -Dserver.https.port:4440"
    #RUNDECK_SERVER_CONTEXTPATH: "https://vmName.Domain.corp/rundeck"
    RUNDECK_MAIL_FROM: "rundeck@vmName.Domain.corp"
    EXTERNAL_SERVER_URL: "https://vmName.Domain.corp/rundeck"
    SERVER_URL: "https://vmName.Domain.corp/rundeck"
  volumes:
    - /etc/rundeck:/etc/rundeck
    - /var/rundeck
    - /var/lib/mysql
    - /var/log/rundeck
    - /opt/rundeck-plugins
nginx:
  image: nginx:latest
  container_name: production_nginx
  links:
    - rundeck
  volumes:
    - /etc/nginx/sites-enabled:/etc/nginx/conf.d
  depends_on:
    - rundeck
  ports:
    - 80:80
    - 443:443
  restart: always
networks:
  default:
    external:
      name: vmName
nginx/sites-enabled/default.conf (main machine)
# Route all HTTP traffic through HTTPS
# ====================================
server {
    listen 80;
    server_name vmName;
    return 301 https://vmName$request_uri;
}
server {
    listen 443 ssl;
    server_name vmName;
    ssl_certificate /etc/nginx/conf.d/vmName.Domain.corp.cert;
    ssl_certificate_key /etc/nginx/conf.d/vmName.Domain.corp.key;
    return 301 https://vmName.Domain.corp$request_uri;
}
# ====================================
# Main webserver route configuration
# ====================================
server {
    listen 443 ssl;
    server_name vmName.Domain.corp;
    ssl_certificate /etc/nginx/conf.d/vmName.Domain.corp.cert;
    ssl_certificate_key /etc/nginx/conf.d/vmName.Domain.corp.key;
    #===========================================================================#
    ## MAIN PAGE
    location /example-app {
        rewrite ^/example-app(.*) /$1 break;
        proxy_pass http://example-app:5000/;
        proxy_set_header Host $host/example-app;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
    # #Rundeck
    location /rundeck/ {
        # rewrite ^/rundeck(.*) /$1 break;
        proxy_pass http://rundeck:4440/;
        proxy_set_header Host $host/rundeck;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
[image container] /etc/rundeck/rundeck-config.properties:
# change hostname here
grails.serverURL=https://vmName.Domain.corp/rundeck
grails.mail.default.from = rundeck@vmName.Domain.corp
server.useForwardHeaders = true
[image container] /etc/rundeck/framework.properties:
framework.server.name = vmName.Domain.corp
framework.server.hostname = vmName.Domain.corp
framework.server.port = 443
framework.server.url = https://vmName.Domain.corp/rundeck
It seems related to the Rundeck image/network configuration. I made a working example with the official image; take a look:
nginx.conf (located in the config folder; see the docker-compose file's volumes section):
server {
    listen 80 default_server;
    server_name rundeck-cl;
    location / {
        proxy_pass http://rundeck:4440;
    }
}
docker-compose:
version: "3.7"
services:
rundeck:
build:
context: .
args:
IMAGE: ${RUNDECK_IMAGE:-rundeck/rundeck:3.3.9}
container_name: rundeck-nginx
ports:
- 4440:4440
environment:
RUNDECK_GRAILS_URL: http://localhost
RUNDECK_SERVER_FORWARDED: "true"
nginx:
image: nginx:alpine
volumes:
- ./config/nginx.conf:/etc/nginx/conf.d/default.conf:ro
ports:
- 80:80
Dockerfile:
ARG IMAGE
FROM ${IMAGE}
Build with docker-compose build and run with docker-compose up.
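To sanity-check the stack once it's up (a hypothetical verification step, not part of the original answer), an unauthenticated request through nginx should be redirected to the Rundeck login page:
curl -sI http://localhost/ | head -n 3
# expect an HTTP redirect toward /user/login once Rundeck has finished starting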
rundeck-config.properties content:
#loglevel.default is the default log level for jobs: ERROR,WARN,INFO,VERBOSE,DEBUG
loglevel.default=INFO
rdeck.base=/home/rundeck
#rss.enabled if set to true enables RSS feeds that are public (non-authenticated)
rss.enabled=false
# Bind address and server URL
server.address=0.0.0.0
server.servlet.context-path=/
grails.serverURL=http://localhost
server.servlet.session.timeout=3600
dataSource.dbCreate = update
dataSource.url = jdbc:h2:file:/home/rundeck/server/data/grailsdb;MVCC=true
dataSource.username =
dataSource.password =
#Pre Auth mode settings
rundeck.security.authorization.preauthenticated.enabled=false
rundeck.security.authorization.preauthenticated.attributeName=REMOTE_USER_GROUPS
rundeck.security.authorization.preauthenticated.delimiter=,
# Header from which to obtain user name
rundeck.security.authorization.preauthenticated.userNameHeader=X-Forwarded-Uuid
# Header from which to obtain list of roles
rundeck.security.authorization.preauthenticated.userRolesHeader=X-Forwarded-Roles
# Redirect to upstream logout url
rundeck.security.authorization.preauthenticated.redirectLogout=false
rundeck.security.authorization.preauthenticated.redirectUrl=/oauth2/sign_in
rundeck.api.tokens.duration.max=30d
rundeck.log4j.config.file=/home/rundeck/server/config/log4j.properties
rundeck.gui.startpage=projectHome
rundeck.clusterMode.enabled=true
rundeck.security.httpHeaders.enabled=true
rundeck.security.httpHeaders.provider.xcto.enabled=true
rundeck.security.httpHeaders.provider.xxssp.enabled=true
rundeck.security.httpHeaders.provider.xfo.enabled=true
rundeck.security.httpHeaders.provider.csp.enabled=true
rundeck.security.httpHeaders.provider.csp.config.include-xcsp-header=false
rundeck.security.httpHeaders.provider.csp.config.include-xwkcsp-header=false
rundeck.storage.provider.1.type=db
rundeck.storage.provider.1.path=keys
rundeck.projectsStorageType=db
framework.properties file content:
# framework.properties -
#
# ----------------------------------------------------------------
# Server connection information
# ----------------------------------------------------------------
framework.server.name = 85845cd30fe9
framework.server.hostname = 85845cd30fe9
framework.server.port = 4440
framework.server.url = http://localhost
# ----------------------------------------------------------------
# Installation locations
# ----------------------------------------------------------------
rdeck.base=/home/rundeck
framework.projects.dir=/home/rundeck/projects
framework.etc.dir=/home/rundeck/etc
framework.var.dir=/home/rundeck/var
framework.tmp.dir=/home/rundeck/var/tmp
framework.logs.dir=/home/rundeck/var/logs
framework.libext.dir=/home/rundeck/libext
# ----------------------------------------------------------------
# SSH defaults for node executor and file copier
# ----------------------------------------------------------------
framework.ssh.keypath = /home/rundeck/.ssh/id_rsa
framework.ssh.user = rundeck
# ssh connection timeout after a specified number of milliseconds.
# "0" value means wait forever.
framework.ssh.timeout = 0
# ----------------------------------------------------------------
# System-wide global variables.
# ----------------------------------------------------------------
# Expands to ${globals.var1}
#framework.globals.var1 = value1
# Expands to ${globals.var2}
#framework.globals.var2 = value2
rundeck.server.uuid = a14bc3e6-75e8-4fe4-a90d-a16dcc976bf6

Docker: "Cannot assign requested address while connecting to upstream"

I have Docker running on a machine with the IP address fd42:1337::31. One container is an nginx reverse proxy with the port mapping 443:443; in its configuration file it proxy_passes, depending on the server name, to other ports on the same machine, e.g.
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name plex.mydomain.tld;
    location / {
        proxy_pass http://[fd42:1337::31]:32400;
    }
}
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name file.mydomain.tld;
    location / {
        proxy_pass http://[fd42:1337::31]:2020;
    }
}
These other ports refer to Bottle (Python) servers or other containers with mapped ports.
I've started this container with the command
docker run -d -p 443:443 (volume mappings) --name reverseproxy nginx
and it has served me well for a year.
I've now decided to work with docker-compose and have the following configuration file:
version: '3'
services:
  reverseproxy:
    image: "nginx"
    ports:
      - "443:443"
    volumes:
      (volume mappings)
When I shut down the original container and start my new one with docker-compose up, it starts, but every request gives me something like this:
2019/02/13 17:04:43 [crit] 6#6: *1 connect() to [fd42:1337::31]:32400 failed (99: Cannot assign requested address) while connecting to upstream, client: 192.168.178.126, server: plex.mydomain.tld, request: "GET / HTTP/1.1", upstream: "http://[fd42:1337::31]:32400/", host: "plex.mydomain.tld"
Why is the new container behaving differently? What do I have to change?
(I know I can have a virtual network mode to connect to other containers directly, but my proxy is supposed to connect to some services that are not inside containers (but on the same metal).)

nginx reverse proxy upstream fails in docker-compose with connection refused message

I have a docker-compose.yaml similar to this (shortened for simplicity):
# ...
services:
  my-ui:
    # ...
    ports:
      - 5402:8080
    networks:
      - my-net
networks:
  my-net:
    external:
      name: my-net
and I'm trying to set up nginx as a reverse proxy with this configuration:
upstream client {
    server my-ui:5402;
}
server {
    listen 80;
    location / {
        proxy_pass http://client;
    }
}
and this is the docker-compose.yaml I have for nginx:
# ...
services:
  reverse-proxy:
    # ...
    ports:
      - 5500:80
    networks:
      - my-net
networks:
  my-net:
    external:
      name: my-net
What happens now is that when I run my-ui and reverse-proxy (each using its own docker-compose up) and go to http://localhost:5500, I get a Bad Gateway message, and my nginx log says this:
connect() failed (111: Connection refused) while connecting to
upstream, client: 172.19.0.1, server: , request: "GET / HTTP/1.1",
upstream: "http://172.19.0.5:5402/", host: "localhost:5500"
If I exec into my nginx container and use ping:
ping my-ui
ping 172.19.0.5
Both are successful, but if I try, for example, to curl:
curl -L http://my-ui
curl -L http://my-ui:5402
curl -L http://172.19.0.1
All of them fail with a connection-refused message. What am I missing here?
PS: I'm not sure, but it might be useful to add that my-ui is a basic Vue.js application running on the Webpack dev server.
PS2: I also tried passing host headers etc., but got the same result.
The name of the container (my-ui) resolves to the IP of the container. Therefore you have to put the container's port in the upstream, not the port you have mapped to the host.
upstream client {
    server my-ui:8080;
}
server {
    listen 80;
    location / {
        proxy_pass http://client;
    }
}
You could also configure your upstream with the name of your host machine and use the mapped port (server <name of host>:5402), but this could get quite messy, and you would lose the advantage of isolating services with Docker networks.
Furthermore, you could remove the port mapping unless you need to access the web service without the reverse proxy:
# ...
services:
  my-ui:
    # ...
    # ports:
    #   - 5402:8080
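To see the distinction in action (a hypothetical check, reusing the my-net network and service name from the question), curl the service from a throwaway container on the same network; the container port answers while the host-mapped port does not exist inside the network:
docker run --rm --network my-net curlimages/curl -s http://my-ui:8080   # succeeds
docker run --rm --network my-net curlimages/curl -s http://my-ui:5402   # connection refused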

docker-compose --scale X nginx.conf configuration

My nginx.conf file currently has the routes defined directly:
worker_processes auto;
events { worker_connections 1024; }
http {
    upstream wordSearcherApi {
        least_conn;
        server api1:61370 max_fails=3 fail_timeout=30s;
        server api2:61370 max_fails=3 fail_timeout=30s;
        server api3:61370 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 80;
        server_name 0.0.0.0;
        location / {
            proxy_pass http://wordSearcherApi;
        }
    }
}
Is there any way to define just one service in docker-compose.yml so that, when I run docker-compose up --scale api=3, nginx load-balances automatically?
Nginx
Dynamic upstreams are possible in Nginx (normal, sans Plus), but with tricks and limitations.
You give up the upstream directive and use plain proxy_pass.
This gives round-robin load balancing and failover, but none of the directive's extra features like weights, failure modes, timeouts, etc.
Your upstream hostname must be passed to proxy_pass via a variable, and you must provide a resolver.
This forces Nginx to re-resolve the hostname (against the Docker network's DNS).
You lose the location/proxy_pass behaviour related to the trailing slash.
In the case of reverse-proxying to bare / as in the question, it does not matter. Otherwise you have to manually rewrite the path (see the references below and the sketch right after this).
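For the trailing-slash caveat, here is a minimal sketch of such a manual rewrite (the /api/ prefix is a hypothetical example, not from the question); the rewrite strips the prefix before the variable-based proxy_pass, which cannot strip it for you:
location /api/ {
    rewrite ^/api/(.*)$ /$1 break;
    proxy_pass http://$upstream:80;
}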
Let's see how it works.
docker-compose.yml
version: '2.2'
services:
  reverse-proxy:
    image: nginx:1.15-alpine
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - 8080:8080
  app:
    # A container that exposes an API to show its IP address
    image: containous/whoami
    scale: 4
nginx.conf
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    access_log /dev/stdout;
    error_log /dev/stderr;
    server {
        listen 8080;
        server_name localhost;
        resolver 127.0.0.11 valid=5s;
        set $upstream app;
        location / {
            proxy_pass http://$upstream:80;
        }
    }
}
Then...
docker-compose up -d
seq 10 | xargs -I -- curl -s localhost:8080 | grep "IP: 172"
...produces something like the following, which indicates that the requests are distributed across the 4 app containers:
IP: 172.30.0.2
IP: 172.30.0.2
IP: 172.30.0.3
IP: 172.30.0.3
IP: 172.30.0.6
IP: 172.30.0.5
IP: 172.30.0.3
IP: 172.30.0.6
IP: 172.30.0.5
IP: 172.30.0.5
References:
Nginx with dynamic upstreams
Using Containers to Learn Nginx Reverse Proxy
Dynamic Nginx configuration for Docker with Python
Traefik
Traefik relies on the Docker API directly and may be a simpler and more configurable option. Let's see it in action.
docker-compose.yml
version: '2.2'
services:
reverse-proxy:
image: traefik
# Enables the web UI and tells Traefik to listen to docker
command: --api --docker
ports:
- 8080:80
- 8081:8080 # Traefik's web UI, enabled by --api
volumes:
# So that Traefik can listen to the Docker events
- /var/run/docker.sock:/var/run/docker.sock
app:
image: containous/whoami
scale: 4
labels:
- "traefik.frontend.rule=Host:localhost"
Then...
docker-compose up -d
seq 10 | xargs -I -- curl -s localhost:8080 | grep "IP: 172"
...also produces output which indicates that the requests are distributed across the 4 app containers:
IP: 172.31.0.2
IP: 172.31.0.5
IP: 172.31.0.6
IP: 172.31.0.4
IP: 172.31.0.2
IP: 172.31.0.5
IP: 172.31.0.6
IP: 172.31.0.4
IP: 172.31.0.2
IP: 172.31.0.5
In the Traefik UI (http://localhost:8081/dashboard/ in the example) you can see that it recognised the 4 app containers.
References:
The Traefik Quickstart (Using Docker)
It's not possible with your current config since it's static. You have two options:
1. Use Docker Engine swarm mode - you can define replicas, and the swarm's internal DNS will automatically balance the load across those replicas (a sketch follows below the reference).
Ref - https://docs.docker.com/engine/swarm/
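A minimal swarm-mode sketch (the image name is hypothetical): deploy.replicas creates three tasks, and the swarm's internal DNS gives the service name a virtual IP that load-balances across them, so nginx can keep a plain proxy_pass http://api:
version: '3'
services:
  api:
    image: myorg/word-searcher-api   # hypothetical image name
    deploy:
      replicas: 3
# deploy onto a swarm (plain docker-compose up ignores the deploy: key):
# docker swarm init
# docker stack deploy -c docker-compose.yml words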
2. Use the well-known jwilder nginx-proxy - this image listens on the Docker socket and uses Go templates to dynamically rewrite your nginx config when you scale your containers up or down (a sketch follows below the reference).
Ref - https://github.com/jwilder/nginx-proxy
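A minimal nginx-proxy sketch (the VIRTUAL_HOST value is hypothetical); the proxy watches the Docker socket and regenerates its nginx config whenever containers with a VIRTUAL_HOST environment variable start or stop:
version: '2.2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro   # watched for container events
  api:
    image: containous/whoami
    environment:
      - VIRTUAL_HOST=api.local
# docker-compose up -d --scale api=3
# curl -H "Host: api.local" localhost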
