I'm encountering a very specific problem with my NGINX/RabbitMQ setup in which the desired result is only accessible via a mobile device. I hope someone can shed a light on what I'm doing wrong :). I have the following setup:
Two droplets on DigitalOcean:
Droplet A with rancher server installed on it
Droplet B, which acts as a host controlled by Rancher. For this example, assume its IP address is 123.45.678.90
Two images on docker-hub:
myaccount/customnginx
myaccount/customrabbitmq
myaccount/customnginx
Dockerfile:
FROM nginx:latest
COPY nginx.conf /etc/nginx/nginx.conf
nginx.conf (in which http://123.45.678.90:15672 = Droplet B + RabbitMQ port)
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    log_format compression '$remote_addr - $remote_user [$time_local] '
                           '"$request" $status $upstream_addr '
                           '"$http_referer" "$http_user_agent" "$gzip_ratio"';

    server {
        listen 80 default_server;
        server_name www.mydomain.nl mydomain.nl;
        access_log /dev/stdout;

        location /rabbitmq/ {
            proxy_pass http://123.45.678.90:15672/;
            rewrite ^/rabbitmq$ /rabbitmq/ permanent;
            rewrite ^/rabbitmq/(.*)$ /$1 break;
            proxy_buffering off;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
myaccount/customrabbitmq
I can provide the RabbitMQ configuration upon request, but I don't think it is of much importance at the moment.
Both images are built into a stack on Rancher via the following docker-compose.yml:
version: '2'
services:
  rabbitmq:
    image: myaccount/customrabbitmq
    ports:
      - 5672:5672
      - 15672:15672
  nginx:
    image: myaccount/customproxy
    ports:
      - 80:80
When I try to access my RabbitMQ manager via www.mydomain.nl/rabbitmq on a mobile device, everything works properly. When I try to do the same with any browser on my desktop (or laptop), nothing works. I don't even see the attempt logged on Rancher (nginx container). I also tried this in incognito mode and/or with Adblock Plus/Disconnect disabled, but to no avail.
What's wrong with this configuration?
Thanks in advance.
OK, I think I managed to fix this. Either or both of the following had something to do with it:
I enabled connections over IPv6 on the DigitalOcean droplet and added the IPv6 address as an AAAA record (for both www.mydomain.nl and mydomain.nl) in the DNS records with the domain registrar. I don't know much about this subject, but thought the mobile device might have connected over IPv4, while the desktop tried to connect over IPv6 (which wasn't set up properly). I also went into the Firefox config (type about:config in the address bar) and set network.dns.disableIPv6 to true. This seemed to help.
I waited a day. Maybe it took a little longer for the DNS (normal A records) to propagate properly.
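If the IPv4/IPv6 split was indeed the culprit, it may also be worth checking that nginx itself accepts IPv6 connections; a bare listen 80 only binds IPv4. A minimal sketch (the dual-stack listen directives are an assumption about this setup, not part of the original config):

```nginx
server {
    # bind both IPv4 and IPv6 so clients resolving the AAAA record can connect
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name www.mydomain.nl mydomain.nl;
}
```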
Related
I have a very thin app on a server and I just set up Unleash (a feature-flag management tool) on it with Docker.
So I just opened port 4242 on both the host and the container (docker-compose segment below).
services:
  custom-unleash:
    container_name: custom_unleash
    image: unleashorg/unleash-server:latest
    command: docker-entrypoint.sh /bin/sh -c 'node index.js'
    ports:
      - "4242:4242"
    environment:
      - DATABASE_HOST=foo
      - DATABASE_NAME=bar
      - DATABASE_USERNAME=foo
      - DATABASE_PASSWORD=bar
      - DATABASE_SSL=false
      - DATABASE_PORT=5432
Then I added the following configuration to my nginx configs:
location /unleash {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://localhost:4242;
    access_log /var/log/nginx/unleash-access.log main;
}
But when I simply enter http://SERVER_IP:4242/ in my browser, the Unleash login page appears; when I try to access the Unleash panel via https://SERVER_DNS/unleash instead, I get a blank page.
I think this is because the browser tries to fetch the static/index.1f5d6bc3.js file from https://SERVER_DNS/ (i.e. GET https://SERVER_DNS/static/index.1f5d6bc3.js),
whereas in the first scenario, when I enter http://SERVER_IP:4242/, the browser does GET http://SERVER_IP:4242/static/index.1f5d6bc3.js, which works because the Unleash server serves it.
Why does this happen? How can I prevent the Unleash server from referencing https://SERVER_DNS/static/index.1f5d6bc3.js when it does not exist on my host server? Is there something wrong with my nginx config?
I'm not sure about the nginx configuration, but since you're deploying under a subpath, you may need to set the UNLEASH_URL environment variable as specified in the docs: https://docs.getunleash.io/reference/deploy/configuring-unleash#unleash-url
If that doesn't help, let me know and I'll get help from someone else from the team.
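A sketch of how that might look in the compose file above (the exact URL value is an assumption; per the linked docs it should be the public base address the UI is reached at, including the subpath):

```yaml
services:
  custom-unleash:
    image: unleashorg/unleash-server:latest
    environment:
      # tell Unleash its public base URL so generated asset links
      # include the /unleash prefix instead of pointing at the site root
      - UNLEASH_URL=https://SERVER_DNS/unleash
```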
I've been trying to fix an issue which is when I try to login to pgAdmin (in docker container) behind Nginx Proxy I'm getting an error that The CSRF tokens do not match.
See https://en.wikipedia.org/wiki/Cross-site_request_forgery
Frankly, I'm not sure whether the problem is related to nginx or not, but the configuration files are below:
Docker Swarm Service :
pgAdmin:
  image: dpage/pgadmin4
  networks:
    - my-network
  ports:
    - 9102:80
  environment:
    - PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
    - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
    - PGADMIN_CONFIG_SERVER_MODE=True
  volumes:
    - /home/docker-container/pgadmin/persist-data:/var/lib/pgadmin
    - /home/docker-container/pgadmin/persist-data/servers.json:/pgadmin4/servers.json
  deploy:
    placement:
      constraints: [node.hostname == my-host-name]
Nginx Configuration:
server {
    listen 443 ssl;
    server_name my-server-name;

    location / {
        proxy_pass http://pgAdmin/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-CSRF-Token $http_x_pga_csrftoken;
    }

    ssl_certificate /home/nginx/ssl/certificate.crt;
    ssl_certificate_key /home/nginx/ssl/private.key;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_prefer_server_ciphers on;
}

server {
    listen 80;
    server_name my-server-name;
    return 301 https://my-server-name$request_uri;
}
I can access pgAdmin in two ways:
The first way is via the direct host IP, like 172.23.53.2:9102.
The second way is via the Nginx proxy.
When I access pgAdmin via the direct host IP there is no error, but when I access it via DNS (like my-server.pgadmin.com) I get an error after logging in to the pgAdmin dashboard.
The error is :
Bad Request. The CSRF tokens do not match.
My first thought about this error was that nginx does not pass the CSRF token header to pgAdmin. For this reason I've changed the nginx configuration file many times, but I'm still getting this error.
What could be source of this error and how could I solve this problem?
Try using the default ports "5050:80". That solved the same issue on my side.
Using strings is also recommended.
Cf: https://docs.docker.com/compose/compose-file/compose-file-v3/#ports
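A sketch of that change in the service definition above (the 5050 host port comes from this answer; quoting the mapping as a string follows the linked Compose docs):

```yaml
pgAdmin:
  image: dpage/pgadmin4
  ports:
    # quoted string avoids YAML misparsing; host port 5050 -> container port 80
    - "5050:80"
```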
I deployed pgadmin4 behind Apache httpd;
the deployment method is similar, and I had the same problem.
My solution was to make sure Apache httpd loaded the APR/APR-util/PCRE libraries, which Apache httpd uses for token handling.
I'm trying to create a reverse proxy towards an app by using nginx with this docker-compose:
version: '3'
services:
  nginx_cloud:
    build: './nginx-cloud'
    ports:
      - 443:443
      - 80:80
    networks:
      - mynet
    depends_on:
      - app
  app:
    build: './app'
    expose:
      - 8000
    networks:
      - mynet
networks:
  mynet:
And this is my nginx conf (shortened):
server {
    listen 80;
    server_name reverse.internal;

    location / {
        # checks for static file, if not found proxy to app
        try_files $uri @to_app;
    }

    location @to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app:8000;
    }
}
When I run it, nginx returns:
[emerg] 1#1: host not found in upstream "app" in /etc/nginx/conf.d/app.conf:39
I tried several other proposed solutions without any success. Curiously, if I start nginx manually via shell access from inside the container, it works, and I can ping app etc. But when it is started by docker-compose, or directly via docker itself, it doesn't work.
I tried setting up a separate upstream, adding the Docker internal resolver, waiting a few seconds to be sure the app is already running, etc., with no luck. I know this question has been asked several times, but nothing has worked so far.
Can you try the following server definition?
server {
    listen 80;
    server_name reverse.*;

    location / {
        resolver 127.0.0.11 ipv6=off;
        set $target http://app:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass $target;
    }
}
The app service may not be starting in time.
To diagnose the issue, try a 2-step approach:
docker-compose up -d app
wait 15-20 seconds (or whatever it takes for the app to be up and ready)
docker-compose up -d nginx_cloud
If that works, then you have to update the entrypoint of the nginx_cloud service to wait for the app service.
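The same ordering can also be expressed declaratively with a healthcheck, so nginx only starts once the app actually answers. A sketch under the assumptions that the app image contains curl and serves HTTP on port 8000 (note that the long-form depends_on condition works with docker-compose but is ignored in swarm mode):

```yaml
services:
  app:
    build: './app'
    healthcheck:
      # hypothetical probe; point it at an endpoint the app actually serves
      test: ["CMD", "curl", "-f", "http://localhost:8000/"]
      interval: 5s
      timeout: 3s
      retries: 5
  nginx_cloud:
    build: './nginx-cloud'
    depends_on:
      app:
        condition: service_healthy
```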
I'm having issues with the Jenkins proxy. The Jenkins container is behind my NGINX proxy. I access it at http://localhost:8000. After I log in, I get kicked to http://localhost. Some links on Jenkins also do the same and remove the port, which breaks the screen. I get the error from the title on my Manage Jenkins page, and I tried adding the proxy_pass URL as well, but nothing works.
My NGINX conf file is like so...
server {
    listen 8000;
    server_name "";
    access_log off;

    location / {
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Host $host;
        proxy_pass http://jenkins_master_1:8080;
        proxy_redirect http://jenkins_master_1:8080 http://localhost:8000;
        proxy_max_temp_file_size 0;
        proxy_connect_timeout 150;
        proxy_send_timeout 100;
        proxy_read_timeout 100;
        proxy_buffer_size 8k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}
my docker-compose.yml file is like so...
version: '3'
# Services are the names of each container
services:
  master:
    # Where to build the container from a Dockerfile
    build: ./jenkins-master
    # Which ports to open
    ports:
      - "50000:50000"
    # Volumes to connect to in a container
    volumes:
      - jenkins-log:/var/log/jenkins
      - jenkins-data:/var/jenkins_home
    # Adding the service to a network
    networks:
      - jenkins-net
  nginx:
    build: ./jenkins-nginx
    ports:
      - "8000:8000"
    networks:
      - jenkins-net
# List of volumes to create
volumes:
  jenkins-data:
  jenkins-log:
# List of networks to create
networks:
  jenkins-net:
I'm trying to learn Docker and Jenkins and was following a tutorial; the jenkins_master_1 name comes from the docker-compose setup. Any help or guidance would be really appreciated.
Thanks
Assumption 1: NGINX is in front of your app, accepting connections on port 80, then passing to backend port 8080.
Assumption 2: the Jenkins application and NGINX are on the same server here.
You should be accessing it originally from port 80, not 8080, if you are using the proxy.
NGINX gets the request on 80, then passes it to the backend on 8080. From the browser you shouldn't see 8080 if you are using the proxy. If you are using 8080 and it's doing something, then you're going directly to the app... aka, bypassing the proxy.
So, how to start addressing it:
(1.) Navigate to http://localhost, which should go through your proxy (if it's set up properly).
(2.) In Manage Jenkins -> Configure System -> Jenkins URL, make sure the URL is set to http://localhost.
(3.) Better to use an FQDN for the server name in the NGINX configuration, then make sure Jenkins is only listening for connections on localhost in the Jenkins.xml configuration. Jenkins.xml should have the listen address set to 127.0.0.1. Then external requests to that FQDN will not be able to bypass the proxy, as Jenkins will only be allowing connections from localhost (from NGINX, or from you playing with a browser on the localhost).
Then, ideally, you have:
http://fqdn->NGINX listening on port 80 -> Jenkins on 127.0.0.1:8080. The user with their browser (safely outside of your server) never sees the 8080 port.
Try adding the proxy_redirect directive in the location block. It instructs the web server to return different 301/302 redirect locations than those calculated by the server itself. Sometimes the web server is unable to properly determine its own address, as in Docker, where the container has no information about the outside world or the fact that the connection is proxied/forwarded.
location / {
    proxy_pass http://jenkins_master_1:8080;
    proxy_redirect http://jenkins_master_1:8080 http://localhost:8080;
}
SRC: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect
Adding the X-Forwarded-* headers is the right solution.
Without such headers, I got a lot of errors: for example, I was redirected to https://jenkinsci:8080 after I set the initial password and clicked the continue button. Many times when I visited https://jenkins.mydomain.com and clicked the links on the webpage, I was redirected to https://jenkinsci:8080, which obviously cannot be visited. I don't know why; maybe Tomcat needs the X-Forwarded-* header information.
The article Jenkins behind an NGinX reverse proxy is highly recommended for those who want to run Jenkins behind nginx, even if both Jenkins and nginx are created as Docker containers. And once again, you'd better add those X-Forwarded-* headers.
An example nginx vhost configuration file:
server {
    charset utf8;
    access_log /var/log/nginx/jenkins.yourdomain.com.access_log main;
    listen 443 ssl http2;
    server_name jenkins.yourdomain.com;

    ssl_certificate /etc/nginx/ssl/yourdomain.com.crt;
    ssl_certificate_key /etc/nginx/ssl/yourdomain.com.key;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_request_buffering off;
        proxy_buffering off;
        proxy_pass http://jenkinsci:8080; # jenkinsci is the service/container name specified in the docker-compose.yml file
    }
}
I'm running nginx via lets-nginx in the default nginx configuration (as per the lets-nginx project) in a docker swarm:
services:
  ssl:
    image: smashwilson/lets-nginx
    networks:
      - backend
    environment:
      - EMAIL=sas@finestructure.co
      - DOMAIN=api.finestructure.co
      - UPSTREAM=api:5000
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - letsencrypt:/etc/letsencrypt
      - dhparam_cache:/cache
  api:
    image: registry.gitlab.com/project_name/image_name:0.1
    networks:
      - backend
    environment:
      - APP_SETTINGS=/api.cfg
    configs:
      - source: api_config
        target: /api.cfg
    command:
      - run
      - -w
      - tornado
      - -p
      - "5000"
api is a Flask app that runs on port 5000 on the swarm overlay network backend.
When the services are initially started up, everything works fine. However, whenever I update the api service in a way that makes its container move between nodes in the three-node swarm, nginx fails to route traffic to the new container.
I can see in the nginx logs that it sticks to the old internal IP, for instance 10.0.0.2, when the new container is now on 10.0.0.4.
In order to make nginx 'see' the new IP, I need to either restart the nginx container or docker exec into it and kill -HUP the nginx process.
Is there a better, automatic way to make the nginx container refresh its name resolution?
Thanks to @Moema's pointer, I've come up with a solution to this. The default configuration of lets-nginx needs to be tweaked as follows to make nginx pick up IP changes:
resolver 127.0.0.11 ipv6=off valid=10s;
set $upstream http://${UPSTREAM};
proxy_pass $upstream;
This uses docker swarm's resolver with a TTL and sets a variable, forcing nginx to refresh name lookups in the swarm.
Remember that when you use set you need to generate the entire URL by yourself.
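To illustrate that last point, a minimal sketch (the backend service name and port are hypothetical): when proxy_pass takes a variable, nginx does not rewrite the URI for you, so append the original request URI yourself:

```nginx
location / {
    resolver 127.0.0.11 ipv6=off valid=10s;
    set $upstream http://backend:8000;
    # with a variable target, build the full URL explicitly;
    # $request_uri carries the original path and query string
    proxy_pass $upstream$request_uri;
}
```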
I was using nginx in a compose file to proxy a Zuul gateway:
location /api/v1/ {
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_pass http://rs-gateway:9030/api/v1/;
}

location /zuul/api/v1/ {
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_pass http://rs-gateway:9030/zuul/api/v1/;
}
Now with Swarm it looks like that :
location ~ ^(/zuul)?/api/v1/(.*)$ {
    set $upstream http://rs-gateway:9030$1/api/v1/$2$is_args$args;
    proxy_pass $upstream;

    # Set headers
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
}
Regexes are good, but don't forget to insert the GET params into the generated URL yourself.