How to configure server names for nginx container running in WSL2? - docker

I have a setup that works successfully in Linux and MacOS, in which I run a docker nginx container to route all of my different services running locally.
The issue is that this same setup throws nginx Bad Gateway errors when the docker container runs inside Windows' WSL2, presumably because I'm missing some additional routing configuration between Windows and WSL2.
A simplified version of my setup:
docker-compose.yml
nginx:
  image: nginx:alpine
  container_name: nginx
  volumes:
    - ./config/nginx.conf:/etc/nginx/nginx.conf
  ports:
    - 80:80
    - 443:443
  networks:
    - backend
/config/nginx.conf
server {
    listen 80;
    server_name test.localhost;
    location / {
        set $test_localhost host.docker.internal:3001;
        proxy_pass http://$test_localhost;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
server {
    listen 80;
    server_name test2.localhost;
    location / {
        set $test2_localhost host.docker.internal:3002;
        proxy_pass http://$test2_localhost;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
Windows hosts file
127.0.0.1 test.localhost
127.0.0.1 test2.localhost
WSL2 Debian /etc/hosts file
127.0.0.1 test.localhost
127.0.0.1 test2.localhost
Both services are running inside WSL2 at ports 3001 and 3002.
Browsing to localhost:3001 and localhost:3002 provides the expected result, but if I go to test.localhost or test2.localhost I get 502 Bad Gateway errors from nginx.
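One quick check (a sketch; it assumes the container name nginx from the compose file above and the busybox wget that ships in nginx:alpine) is whether nginx can reach the services at all:
# Run from the WSL2 shell:
docker exec nginx wget -qO- http://host.docker.internal:3001
# If this fails, the 502 points at container-to-host routing,
# not at the server_name handling.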
Any idea of what I may be missing, or any guidance, would be greatly appreciated.

Maybe you could try the workaround below, seen here:
127.0.0.1 test.localhost
::1 test.localhost localhost

You can use WSL2HOST which will automatically update your Windows hosts file with the WSL2 VM's IP address.
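For reference, the address WSL2HOST keeps in sync is the one you would otherwise have to look up by hand after every reboot, e.g. from PowerShell:
# Ask the default WSL2 distro for its current VM address:
wsl hostname -I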

Related

Nginx run from docker-compose returns "host not found in upstream"

I'm trying to create a reverse proxy towards an app by using nginx with this docker-compose:
version: '3'
services:
  nginx_cloud:
    build: './nginx-cloud'
    ports:
      - 443:443
      - 80:80
    networks:
      - mynet
    depends_on:
      - app
  app:
    build: './app'
    expose:
      - 8000
    networks:
      - mynet
networks:
  mynet:
And this is my nginx conf (shortened):
server {
    listen 80;
    server_name reverse.internal;
    location / {
        # checks for static file, if not found proxy to app
        try_files $uri @to_app;
    }
    location @to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app:8000;
    }
}
When I run it, nginx returns:
[emerg] 1#1: host not found in upstream "app" in /etc/nginx/conf.d/app.conf:39
I tried several other proposed solutions without any success. Curiously, if I run nginx manually via shell access from inside the container, it works; I can ping app, etc. But running it from docker-compose, or directly via docker itself, doesn't work.
I tried setting up a separate upstream, adding the docker internal resolver, waiting a few seconds to be sure the app is already running, etc., with no luck. I know this question has been asked several times, but nothing seems to work so far.
Can you try the following server definition?
server {
    listen 80;
    server_name reverse.*;
    location / {
        resolver 127.0.0.11 ipv6=off;
        set $target http://app:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass $target;
    }
}
The app service may not start in time.
To diagnose the issue, try a two-step approach:
docker-compose up -d app
wait 15-20 seconds (or whatever it takes for the app to be up and ready)
docker-compose up -d nginx_cloud
If that works, then you have to update the entrypoint in the nginx_cloud service to wait for the app service, as in the sketch below.
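A minimal sketch of such an entrypoint (it assumes a shell and busybox nc are available in the nginx_cloud image; the wait-for-app.sh file name is made up for illustration):
#!/bin/sh
# wait-for-app.sh: block until the app service accepts TCP connections,
# then hand off to nginx in the foreground.
until nc -z app 8000; do
  echo "waiting for app:8000..."
  sleep 1
done
exec nginx -g 'daemon off;'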

failed (113: No route to host) while connecting to upstream

I want to use nginx (in a docker container) as a reverse proxy. However, I have run into some problems.
Issue context:
CentOS version: 7.4.1708
nginx version: 1.13.12
docker version: 1.13.1
With the firewall open and port 80 exposed:
nginx reverse proxy in a docker container: failed (113: No route to host) while connecting to upstream
nginx reverse proxy on the host: functions normally
nginx configuration:
server {
    listen 80;
    server_name web.pfneo.geo;
    location / {
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://172.18.0.249:88;
    }
    access_log logs/web.tk_access.log;
}
With the firewall closed:
nginx reverse proxy in a docker container: functions normally
nginx reverse proxy on the host: functions normally
With the firewall open and the port not exposed:
nginx app service in a docker container (port 88): functions normally
It seems that this problem is caused by docker? Can docker ignore the host firewall?
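Docker manages iptables rules directly, so it can indeed bypass rules added through the host firewall. A commonly cited workaround on CentOS 7 (a sketch; verify that docker0 is actually your bridge interface before applying it) is to put the docker bridge in firewalld's trusted zone so containers can reach host-side ports:
# Trust traffic on the docker0 bridge, then reload firewalld.
sudo firewall-cmd --permanent --zone=trusted --add-interface=docker0
sudo firewall-cmd --reload
# Reloading firewalld flushes the rules docker installed, so restart it:
sudo systemctl restart docker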

Docker nginx reverse proxy gives "502 Bad Gateway"

I'm trying to have a docker container with nginx work as a reverse proxy to other docker containers, and I keep getting "Bad Gateway" on locations other than the base location '/'.
I have the following server block:
server {
    listen 80;
    location / {
        proxy_pass "http://game2048:8080";
    }
    location /game {
        proxy_pass "http://game:9999";
    }
}
It works for http://localhost but not for http://localhost/game which gives "Bad Gateway" in the browser and this on the nginx container:
[error] 7#7: *6 connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.0.1, server: , request: "GET /game HTTP/1.1", upstream: "http://172.17.0.4:9999/game", host: "localhost"
I use the official nginx docker image and put my own configuration on it. You can test it and see all details here:
https://github.com/jollege/ngprox1
Any ideas what goes wrong?
NB: I have set local hostname entries on docker host to match those names:
127.0.1.1 game2048
127.0.1.1 game
I fixed it! I set the server name in different server blocks in the nginx config. Remember to use the docker port, not the host port.
server {
    listen 80;
    server_name game2048;
    location / {
        proxy_pass "http://game2048:8080";
    }
}
server {
    listen 80;
    server_name game;
    location / {
        # Remember to refer to the docker port, not the host port,
        # which is 9999 in this case:
        proxy_pass "http://game:8080";
    }
}
The github repo has been updated to reflect the fix, the old readme file is there under ./README.old01.md.
Typical that I find the answer when I carefully phrase the question to others. Do you know that feeling?
I had the same "502 Bad Gateway" error, but the solution was to tune proxy_buffer_size following this post instructions:
proxy_buffering off;
proxy_buffer_size 16k;
proxy_busy_buffers_size 24k;
proxy_buffers 64 4k;
See the nginx error log
sudo tail -n 100 /var/log/nginx/error.log
If you see a Permission denied error in the log like the one below:
2022/03/28 03:51:09 [crit] 1140954#0: *141 connect() to xxx.xxx.68.xx:8080 failed (13: Permission denied) while connecting to upstream, client: xxx.xx.xxx.25, server: www.example.com
Check whether httpd_can_network_connect is enabled by running: sudo getsebool -a | grep httpd
If the value of httpd_can_network_connect is off, then this is the cause of your issue.
Solution:
Set httpd_can_network_connect to on by running: sudo setsebool httpd_can_network_connect on -P
Hope this resolves your problem.
I had the same error, but for a web application that was just not serving at the IP and port mentioned in the config.
So say you have this:
location /game {
    proxy_pass "http://game:9999";
}
Then make sure the web application that you expect at http://game:9999 is really serving from within a docker container named 'game' and the code is set to serve the app at port 9999.
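A quick way to check both of those assumptions (a sketch; netstat may be missing from minimal images, in which case wget or curl against localhost:9999 inside the container works too):
# Is a container named "game" actually running?
docker ps --filter name=game
# Is anything listening on 9999 inside it?
docker exec game netstat -tln | grep 9999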
For me, this line did the trick: proxy_set_header Host $http_host;
location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_redirect off;
    proxy_pass http://myserver;
}
In my case, after 4 hours, the only thing I had missed was registering the port with the semanage command.
location / {
    proxy_pass http://A.B.C.D:8090/test;
}
The solution was to add port 8090, after which it worked:
semanage port -a -t http_port_t -p tcp 8090
You have to declare an external network if the container you are pointing to is defined in another docker-compose.yml file:
version: "3"
services:
webserver:
image: nginx:1.17.4-alpine
container_name: ${PROJECT_NAME}-webserver
depends_on:
- drupal
restart: unless-stopped
ports:
- 80:80
volumes:
- ./docroot:/var/www/html
- ./nginx-conf:/etc/nginx/conf.d
- certbot-etc:/etc/letsencrypt
networks:
- internal
- my-passwords
networks:
my-passwords:
external: true
name: my-passwords_default
nginx.conf:
server {
    listen 80;
    server_name test2.com www.test2.com;
    location / {
        proxy_pass http://my-passwords:3000/;
    }
}
You may need to telnet to the upstream machine to check whether it's reachable; tracing /var/log/nginx/error.log would also help.
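For example, with the service names from the compose file above (a sketch; busybox nc ships with the alpine-based nginx image):
# Check the upstream port is reachable from inside the nginx container:
docker exec webserver nc -z my-passwords 3000 && echo reachable
# The official nginx image sends error.log to stderr, so follow it with:
docker logs -f webserver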

nginx does not automatically pick up dns changes in swarm

I'm running nginx via lets-nginx in the default nginx configuration (as per the lets-nginx project) in a docker swarm:
services:
  ssl:
    image: smashwilson/lets-nginx
    networks:
      - backend
    environment:
      - EMAIL=sas@finestructure.co
      - DOMAIN=api.finestructure.co
      - UPSTREAM=api:5000
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - letsencrypt:/etc/letsencrypt
      - dhparam_cache:/cache
  api:
    image: registry.gitlab.com/project_name/image_name:0.1
    networks:
      - backend
    environment:
      - APP_SETTINGS=/api.cfg
    configs:
      - source: api_config
        target: /api.cfg
    command:
      - run
      - -w
      - tornado
      - -p
      - "5000"
api is a flask app that runs on port 5000 on the swarm overlay network backend.
When services are initially started up everything works fine. However, whenever I update the api in a way that makes the api container move between nodes in the three node swarm, nginx fails to route traffic to the new container.
I can see in the nginx logs that it sticks to the old internal ip, for instance 10.0.0.2, when the new container is now on 10.0.0.4.
In order to make nginx 'see' the new IP I need to either restart the nginx container or docker exec into it and kill -HUP the nginx process.
Is there a better and automatic way to make the nginx container refresh its name resolution?
Thanks to @Moema's pointer I've come up with a solution to this. The default configuration of lets-nginx needs to be tweaked as follows to make nginx pick up IP changes:
resolver 127.0.0.11 ipv6=off valid=10s;
set $upstream http://${UPSTREAM};
proxy_pass $upstream;
This uses docker swarm's resolver with a TTL and sets a variable, forcing nginx to refresh name lookups in the swarm.
Remember that when you use set you need to generate the entire URL by yourself.
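Putting the pieces together, a minimal server block for the setup in the question might look like this (a sketch using the question's api:5000 upstream; the TLS directives that lets-nginx generates are omitted):
server {
    listen 80;
    server_name api.finestructure.co;

    # Docker's embedded DNS; valid=10s caps how long answers are cached.
    resolver 127.0.0.11 ipv6=off valid=10s;

    location / {
        # A variable forces nginx to re-resolve "api" at request time
        # instead of pinning the IP it resolved at startup.
        set $upstream http://api:5000;
        proxy_pass $upstream;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}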
I was using nginx in a compose file to proxy a zuul gateway:
location /api/v1/ {
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_pass http://rs-gateway:9030/api/v1/;
}
location /zuul/api/v1/ {
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_pass http://rs-gateway:9030/zuul/api/v1/;
}
Now with Swarm it looks like this:
location ~ ^(/zuul)?/api/v1/(.*)$ {
    set $upstream http://rs-gateway:9030$1/api/v1/$2$is_args$args;
    proxy_pass $upstream;
    # Set headers
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
}
Regexes are handy, but don't forget to insert the GET params into the generated URL yourself.

Subdomains, Nginx-proxy and Docker-compose

I'm looking for a way to configure Nginx to access hosted services through a subdomain of my server. Those services and Nginx are instantiated with Docker-compose.
In short, when typing jenkins.192.168.1.2, I should reach Jenkins hosted on 192.168.1.2, redirected through the Nginx proxy.
Here is a quick look at what I currently have. It doesn't work without a top-level domain name, so it works fine on play-with-docker.com, but not locally with, for example, 192.168.1.2.
server {
    server_name jenkins.REVERSE_PROXY_DOMAIN_NAME;
    location / {
        proxy_pass http://jenkins:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host:$server_port;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
To have a look of what I want: https://github.com/Ivaprag/devtools-compose
My overall goal is to access remote docker containers without modifying clients' DNS service.
Unfortunately nginx doesn't support sub-domains on IP addresses like that.
You would either have to modify the clients' hosts files (which you said you didn't want to do)...
Or you can just set up your nginx to redirect like so:
location /jenkins {
    proxy_pass http://jenkins:8080;
    ...
}
location /other-container {
    proxy_pass http://other-container:8080;
}
which would allow you to access jenkins at 192.168.1.2/jenkins
Or you can try serving your different containers through different ports, e.g.:
server {
    listen 8081;
    location / {
        proxy_pass http://jenkins:8080;
        ...
    }
}
server {
    listen 8082;
    location / {
        proxy_pass http://other-container:8080;
    }
}
And then access jenkins from 192.168.1.2:8081/
If you are already using docker-compose I recommend using the jwilder nginx-proxy container.
https://github.com/jwilder/nginx-proxy
This allows you to add an unlimited number of web service containers to the backend of the defined nginx proxy, for example:
nginx-proxy:
  image: jwilder/nginx-proxy
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - "/etc/nginx/vhost.d"
    - "/usr/share/nginx/html"
    - "/var/run/docker.sock:/tmp/docker.sock:ro"
    - "nginx_certs:/etc/nginx/certs:rw"
nginx:
  build:
    context: ./docker/nginx/
    dockerfile: Dockerfile
  volumes_from:
    - data
  environment:
    VIRTUAL_HOST: www.host1.com
nginx_2:
  build:
    context: ./docker/nginx_2/
    dockerfile: Dockerfile
  volumes_from:
    - data
  environment:
    VIRTUAL_HOST: www.host2.com
apache_1:
  build:
    context: ./docker/apache_1/
    dockerfile: Dockerfile
  volumes_from:
    - data
  environment:
    VIRTUAL_HOST: www.host3.com
The nginx-proxy container mounts the host's docker socket file in order to get information about the other running containers; if any of them has the env variable VIRTUAL_HOST set, it adds that host to its configuration.
I was trying to configure subdomains in nginx (on the host), for two virtual hosts in one LXC container.
The way it worked for me:
For apache (in the container), I created two virtual hosts: one on port 80 and the other on port 90.
To enable port 90 in apache2 (in the container), it was necessary to add the line "Listen 90" below "Listen 80" in /etc/apache2/ports.conf.
For NGINX (on the host machine), I configured two domains, both on port 80, by creating independent .conf files in /etc/nginx/sites-available and a symbolic link for each to /etc/nginx/sites-enabled.
In the first NGINX file, myfirstdomain.conf, proxy to http://my.contai.ner.ip:80.
In the second NGINX file, myseconddomain.conf, proxy to http://my.contai.ner.ip:90.
That was it for me!
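In config form, the two host-side files would look roughly like this (a sketch; the domain names are placeholders, and my.contai.ner.ip stands for the container's address, as in the description above):
# /etc/nginx/sites-available/myfirstdomain.conf
server {
    listen 80;
    server_name myfirstdomain.example;
    location / {
        proxy_pass http://my.contai.ner.ip:80;
    }
}
# /etc/nginx/sites-available/myseconddomain.conf
server {
    listen 80;
    server_name myseconddomain.example;
    location / {
        proxy_pass http://my.contai.ner.ip:90;
    }
}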
