I've been trying to fix an issue: when I try to log in to pgAdmin (running in a Docker container) behind an Nginx proxy, I get the error "The CSRF tokens do not match."
See https://en.wikipedia.org/wiki/Cross-site_request_forgery
Frankly, I'm not sure whether the problem is related to nginx or not, but the configuration files are below:
Docker Swarm Service:
pgAdmin:
  image: dpage/pgadmin4
  networks:
    - my-network
  ports:
    - 9102:80
  environment:
    - PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
    - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
    - PGADMIN_CONFIG_SERVER_MODE=True
  volumes:
    - /home/docker-container/pgadmin/persist-data:/var/lib/pgadmin
    - /home/docker-container/pgadmin/persist-data/servers.json:/pgadmin4/servers.json
  deploy:
    placement:
      constraints: [node.hostname == my-host-name]
Nginx Configuration:
server {
    listen 443 ssl;
    server_name my-server-name;

    location / {
        proxy_pass http://pgAdmin/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-CSRF-Token $http_x_pga_csrftoken;
    }

    ssl_certificate /home/nginx/ssl/certificate.crt;
    ssl_certificate_key /home/nginx/ssl/private.key;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_prefer_server_ciphers on;
}

server {
    listen 80;
    server_name my-server-name;
    return 301 https://my-server-name$request_uri;
}
I can access pgAdmin in two ways:
The first way is via the host IP directly, e.g. 172.23.53.2:9102.
The second way is via the Nginx proxy.
When I access pgAdmin via the host IP directly there is no error, but when I access it via DNS (like my-server.pgadmin.com) I get an error once I log into the pgAdmin dashboard.
The error is :
Bad Request. The CSRF tokens do not match.
My first thought was that nginx does not pass the CSRF token header to pgAdmin. For this reason I've changed the nginx configuration file many times, but I'm still getting this error.
What could be the source of this error and how could I solve this problem?
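For what it's worth, pgAdmin's reverse-proxy documentation forwards the scheme and standard headers rather than a CSRF header; a minimal sketch along those lines (verify the exact header names against the current pgAdmin docs before relying on them) would be:

```nginx
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Scheme $scheme;   # lets pgAdmin see the request arrived over https
    proxy_pass http://pgAdmin/;
}
```

The idea is that the CSRF token travels in the request body/cookies on its own; what pgAdmin usually needs from the proxy is a consistent Host and scheme so its session cookies match.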
Try using the default ports, "5050:80". That solved the same issue on my side.
Using strings for the port mapping is also recommended.
Cf: https://docs.docker.com/compose/compose-file/compose-file-v3/#ports
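A sketch of the suggested change; quoting the mapping as a string sidesteps YAML's base-60 number parsing quirk for `xx:yy` values:

```yaml
services:
  pgAdmin:
    image: dpage/pgadmin4
    ports:
      - "5050:80"   # quoted string form of the default mapping
```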
I used pgadmin4 deployed behind Apache httpd; the deployment method is similar and I had the same problem.
My solution was to make sure Apache httpd loaded the apr/apr-util/pcre libraries, since Apache httpd uses them for token handling.
I want to deploy a Growthbook (A/B testing tool) container along with an Nginx reverse proxy handling SSL/TLS encryption (i.e. SSL termination) on AWS ECS. I was trying to deploy using a Docker Compose file (i.e. the Docker ECS context). The problem is, it creates all the necessary resources like Network Load Balancers, Target Groups, ECS task definitions etc., but then abruptly fails creating the ECS service and deletes all the resources it created when I run docker compose --project-name growthbook up. The reason it gives is "Nginx sidecar container exited".
Here is my docker-compose.yml file:
# docker-compose.yml
version: "3"
x-aws-vpc: "vpc-*************"

services:
  growthbook:
    image: "growthbook/growthbook:latest"
    ports:
      - 3000:3000
      - 3100:3100
    environment:
      - MONGODB_URI=<mongo_db_connection_string>
      - JWT_SECRET=<jwt_secret>
    volumes:
      - uploads:/usr/local/src/app/packages/back-end/uploads
  nginx-tls-sidecar:
    image: <nginx_sidecar_image>
    ports:
      - 443:443
    links:
      - growthbook

volumes:
  uploads:
And here is the Dockerfile used to build the nginx sidecar image:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
COPY ssl.key /etc/nginx/ssl.key
COPY ssl.crt /etc/nginx/ssl.crt
In the above Dockerfile, the SSL key and certificate are self-signed, generated using openssl, and are in order.
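A typical openssl invocation that produces a key/cert pair like the one the Dockerfile copies in might look like this (the CN and validity period are assumptions):

```shell
# Generate a self-signed certificate and key (hypothetical CN, 365-day validity)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ssl.key -out ssl.crt -days 365 \
  -subj "/CN=localhost"
```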
And Here is my nginx.conf file:
# nginx Configuration File
# https://wiki.nginx.org/Configuration

# Run as a less privileged user for security reasons.
user nginx;
worker_processes auto;

events {
    worker_connections 1024;
}

pid /var/run/nginx.pid;

http {
    # Redirect to https, using 307 instead of 301 to preserve post data

    server {
        listen [::]:443 ssl;
        listen 443 ssl;
        server_name localhost;

        # Protect against the BEAST attack by not using SSLv3 at all. If you need to support
        # older browsers (IE6) you may need to add SSLv3 to the list of protocols below.
        ssl_protocols TLSv1.2;

        # Ciphers set to best allow protection from Beast, while providing forwarding secrecy,
        # as defined by Mozilla - https://wiki.mozilla.org/Security/Server_Side_TLS#Nginx
        ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:AES128:AES256:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK;
        ssl_prefer_server_ciphers on;

        # Optimize TLS/SSL by caching session parameters for 10 minutes. This cuts down on the number of expensive TLS/SSL handshakes.
        # The handshake is the most CPU-intensive operation, and by default it is re-negotiated on every new/parallel connection.
        # By enabling a cache (of type "shared between all Nginx workers"), we tell the client to re-use the already negotiated state.
        # Further optimization can be achieved by raising keepalive_timeout, but that shouldn't be done unless you serve primarily HTTPS.
        ssl_session_cache shared:SSL:10m; # a 1mb cache can hold about 4000 sessions, so we can hold 40000 sessions
        ssl_session_timeout 24h;

        # Use a higher keepalive timeout to reduce the need for repeated handshakes
        keepalive_timeout 300; # up from 75 secs default

        # remember the certificate for a year and automatically connect to HTTPS
        add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains';

        ssl_certificate /etc/nginx/ssl.crt;
        ssl_certificate_key /etc/nginx/ssl.key;

        location / {
            proxy_pass http://localhost:3000; # TODO: replace port if app listens on port other than 80
            proxy_set_header Connection "";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
        }

        location / {
            proxy_pass http://localhost:3100; # TODO: replace port if app listens on port other than 80
            proxy_set_header Connection "";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
Basically, Growthbook exposes its service on http://localhost:3000 and http://localhost:3100, and the Nginx sidecar container only listens on port 443. I need to be able to proxy to both endpoints exposed by Growthbook from Nginx's port 443.
Help is much appreciated if you find any mistakes in my configuration :)
By default, the Growthbook service does not provide TLS encryption, so I used Nginx as a sidecar handling SSL termination. As the end result, I need to be able to run the Growthbook service with TLS encryption, hosted on AWS ECS.
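One note: nginx refuses to start when a server block contains two identical `location /` blocks ("duplicate location"), which would match the sidecar exiting immediately. A hedged sketch that splits the two backends over distinct paths might look like the following; the `/api/` prefix is an assumption (adjust to however Growthbook actually routes between 3000 and 3100), and the upstream host depends on the platform: under plain docker-compose the sidecar must use the service name (`growthbook`), while in an ECS task with awsvpc networking the containers share localhost.

```nginx
server {
    listen 443 ssl;
    server_name localhost;

    ssl_certificate /etc/nginx/ssl.crt;
    ssl_certificate_key /etc/nginx/ssl.key;

    # Front-end on 3000
    location / {
        proxy_pass http://growthbook:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # API on 3100 (path prefix is an assumption)
    location /api/ {
        proxy_pass http://growthbook:3100/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```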
I'm running gitlab and nginx in docker. I had to use an unbound version of nginx because there are other applications running and I can only publish ports 80 and 443. So far I've managed to access gitlab with my.domain.com/gitlab/ and can also log in, upload projects, etc...
I wanted to use the container registry for uploading and storing images from my projects, which is why I uncommented gitlab_rails['registry_enabled'] = true in gitlab.rb. Now the container registry is visible for my projects; however, I get a Docker Connection Error when trying to access the page.
My question is: are there any other settings I have to tweak to get the built-in container registry running, or did I already mess things up in the way I've set this up? Especially since I need gitlab to run on a relative URL, and my project's URL for cloning/pulling is something like https://<server.ip.address>/gitlab/group-name/test-project, even though browser tabs show https://my.domain.com/gitlab/group-name/test-project
So far, this is my setup:
nginx
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name my.domain.com www.my.domain.com;
    server_tokens off;

    ssl_certificate /etc/nginx/ssl/live/my.domain.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/live/my.domain.com/privkey.pem;

    ##############
    ##  GITLAB  ##
    ##############

    location /gitlab/ {
        root /home/git/gitlab/public;
        proxy_http_version 1.1;
        proxy_pass http://<server.ip.address>:8929/gitlab/;
        gzip off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
gitlab docker-compose.yml
services:
  git:
    image: gitlab/gitlab-ce:14.10.0-ce.0
    restart: unless-stopped
    container_name: gitlab
    hostname: 'my.domain.com'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        gitlab_rails['gitlab_shell_ssh_port'] = 2224
        external_url 'http://<server.ip.address>:8929/gitlab/'
        gitlab_rails['registry_enabled'] = true
    ports:
      - "8929:8929"
      - "2224:2224"
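For reference, the Omnibus registry typically needs its own externally reachable URL in addition to gitlab_rails['registry_enabled'] — the registry cannot be served from a relative path like /gitlab/. A hedged sketch of the extra GITLAB_OMNIBUS_CONFIG lines (port 5050 and the hostname are assumptions; check the current Omnibus registry docs):

```ruby
# Hypothetical additions to GITLAB_OMNIBUS_CONFIG / gitlab.rb:
registry_external_url 'https://my.domain.com:5050'   # registry needs its own host:port
gitlab_rails['registry_enabled'] = true
```

The extra port would then also need to be published in docker-compose and passed through (or terminated) by the front nginx.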
I'm having issues with the Jenkins proxy. The Jenkins container is behind my NGINX proxy. I access it at http://localhost:8000. After I log in, I get kicked to http://localhost. Some links in Jenkins do the same and drop the port, which breaks the page. I get the error from the title on my Manage Jenkins page, and I tried adding the proxy_pass URL there too, but nothing works.
My NGINX conf file is like so...
server {
    listen 8000;
    server_name "";
    access_log off;

    location / {
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Host $host;
        proxy_pass http://jenkins_master_1:8080;
        proxy_redirect http://jenkins_master_1:8080 http://localhost:8000;
        proxy_max_temp_file_size 0;
        proxy_connect_timeout 150;
        proxy_send_timeout 100;
        proxy_read_timeout 100;
        proxy_buffer_size 8k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}
my docker-compose.yml file is like so...
version: '3'

# Services are the names of each container
services:
  master:
    # Where to build the container from a Dockerfile
    build: ./jenkins-master
    # Which ports to open
    ports:
      - "50000:50000"
    # Connecting volumes to in a container
    volumes:
      - jenkins-log:/var/log/jenkins
      - jenkins-data:/var/jenkins_home
    # Adding the service to a network
    networks:
      - jenkins-net
  nginx:
    build: ./jenkins-nginx
    ports:
      - "8000:8000"
    networks:
      - jenkins-net

# List of volumes to create
volumes:
  jenkins-data:
  jenkins-log:

# List of networks to create
networks:
  jenkins-net:
I'm trying to learn Docker and Jenkins and was following a tutorial; the jenkins_master_1 name comes from docker-compose. Any help or guidance would be really appreciated.
Thanks
Assumption 1: NGINX is in front of your app, accepting connections on port 80, then passing to backend port 8080.
Assumption 2: the Jenkins application and NGINX are on the same server here.
You should be accessing it originally on port 80, not 8080, if you are using the proxy.
NGINX gets the request on 80, then passes it to backend 8080. From the browser you shouldn't see 8080 if you are using the proxy. If you are using 8080 and it's doing something, then you're going directly to the app, i.e. bypassing the proxy.
So, how to start addressing it:
(1.) Navigate to http://localhost, which should go through your proxy (if it’s set up properly)
(2.) In Manage Jenkins-> Configure System -> Jenkins URL, make sure the URL is set to http://localhost
(3.) Better to use a FQDN for the server name in the NGINX configuration, then make sure Jenkins is only listening for connections on localhost in the Jenkins.xml configuration. Jenkins.xml should have listen address set to 127.0.0.1. Then external requests to that FQDN will not be able to bypass the proxy, as Jenkins will only be allowing connections from localhost (from NGINX, or you playing with the browser on the localhost).
Then, ideally, you have:
http://fqdn->NGINX listening on port 80 -> Jenkins on 127.0.0.1:8080. The user with their browser (safely outside of your server) never sees the 8080 port.
Try adding the proxy_redirect directive in the location block. It instructs the web server to return different 301/302 HTTP redirect targets than those calculated by the server itself. Sometimes the web server is unable to properly calculate its own address, like in Docker, where the container has no information about the outside world or that the connection is proxied/forwarded.
location / {
    proxy_pass http://jenkins_master_1:8080;
    proxy_redirect http://jenkins_master_1:8080 http://localhost:8000;
}
SRC: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect
Adding the X-Forwarded-* headers is the right solution.
Without such headers, I got a lot of errors: for example, I was redirected to https://jenkinsci:8080 after I set the initial password and clicked the continue button, and many times when I visited https://jenkins.mydomain.com and clicked links on the page, I was redirected to https://jenkinsci:8080, which obviously cannot be visited. I don't know why; maybe Tomcat needs that X-Forwarded-* header information.
This article - Jenkins behind an NGinX reverse proxy - is highly recommended for anyone who wants to run jenkins behind nginx, even if both jenkins and nginx are created through docker containers. And once again, you'd better add those X-Forwarded-* headers.
An example nginx vhost configuration file:
server {
    charset utf8;
    access_log /var/log/nginx/jenkins.yourdomain.com.access_log main;
    listen 443 ssl http2;
    server_name jenkins.yourdomain.com;

    ssl_certificate /etc/nginx/ssl/yourdomain.com.crt;
    ssl_certificate_key /etc/nginx/ssl/yourdomain.com.key;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_request_buffering off;
        proxy_buffering off;
        proxy_pass http://jenkinsci:8080; # jenkinsci is the service/container name specified in the docker-compose.yml file
    }
}
I'm encountering a very specific problem with my NGINX/RabbitMQ setup in which the desired result is only accessible via a mobile device. I hope someone can shine a light on what I'm doing wrong :). I have the following setup:
Two droplets on DigitalOcean:
Droplet A with rancher server installed on it
Droplet B, which acts as a host controlled by rancher. For this example, assume its IP address is 123.45.678.90
Two images on docker-hub:
myaccount/customnginx
myaccount/customrabbitmq
myaccount/customnginx
Dockerfile:
FROM nginx:latest
COPY nginx.conf /etc/nginx/nginx.conf
nginx.conf (in which http://123.45.678.90:15672 = Droplet B + RabbitMQ port)
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    log_format compression '$remote_addr - $remote_user [$time_local] '
                           '"$request" $status $upstream_addr '
                           '"$http_referer" "$http_user_agent" "$gzip_ratio"';

    server {
        listen 80 default_server;
        server_name www.mydomain.nl mydomain.nl;
        access_log /dev/stdout;

        location /rabbitmq/ {
            proxy_pass http://123.45.678.90:15672/;
            rewrite ^/rabbitmq$ /rabbitmq/ permanent;
            rewrite ^/rabbitmq/(.*)$ /$1 break;
            proxy_buffering off;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
myaccount/customrabbitmq
I can provide the rabbitMQ configuration upon request, but I don't think it is of much importance at the moment.
Both images are built into a stack on Rancher via the following docker-compose.yml:
version: '2'
services:
rabbitmq:
image: myaccount/customrabbitmq
ports:
- 5672:5672
- 15672:15672
nginx:
image: myaccount/customproxy
ports:
- 80:80
Now
When I try to access my RabbitMQ manager via www.mydomain.nl/rabbitmq on a mobile device, everything works properly. When I try the same from any browser on my desktop (or laptop), nothing works. I don't even see the attempt logged on Rancher (nginx container). I also tried this in incognito mode and/or with ad-block-plus/Disconnect disabled, but to no avail.
What's wrong with this configuration?
Thanks in advance.
OK, I think I managed to fix this. Either or both of the following had something to do with it:
I enabled IPv6 on the DigitalOcean droplet and added the IPv6 address as an AAAA record (for both www.mydomain.nl and mydomain.nl) in the DNS records with the domain registrar. I don't know much about this subject, but I thought the mobile device might have connected over IPv4 while the desktop tried to connect over IPv6 (which wasn't set up properly). I went into the Firefox config (type about:config in the address bar) and set network.dns.disableIPv6 to true. This seemed to help.
I waited a day. Maybe it took a little longer for the DNS (normal A records) to propagate properly.
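To check whether an IPv4/IPv6 mismatch like this is in play, you can inspect which address families a hostname actually resolves to; a small sketch (the hostname in the comment is a placeholder):

```python
import socket

def address_families(host):
    """Return the set of address families ('IPv4'/'IPv6') that `host` resolves to."""
    families = set()
    for family, *_ in socket.getaddrinfo(host, None):
        if family == socket.AF_INET:
            families.add("IPv4")
        elif family == socket.AF_INET6:
            families.add("IPv6")
    return families

# e.g. address_families("www.mydomain.nl") — if 'IPv6' is missing while an
# AAAA record is expected, IPv6-preferring desktop clients may be the ones failing.
```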
I'm trying to have a docker container with nginx work as a reverse proxy to other docker containers, and I keep getting "Bad Gateway" on locations other than the base location '/'.
I have the following server block:
server {
    listen 80;

    location / {
        proxy_pass "http://game2048:8080";
    }

    location /game {
        proxy_pass "http://game:9999";
    }
}
It works for http://localhost but not for http://localhost/game which gives "Bad Gateway" in the browser and this on the nginx container:
[error] 7#7: *6 connect() failed (111: Connection refused)
while connecting to upstream, client: 172.17.0.1, server: ,
request: "GET /game HTTP/1.1", upstream: "http://172.17.0.4:9999/game",
host: "localhost"
I use the official nginx docker image and put my own configuration on it. You can test it and see all details here:
https://github.com/jollege/ngprox1
Any ideas what goes wrong?
NB: I have set local hostname entries on docker host to match those names:
127.0.1.1 game2048
127.0.1.1 game
I fixed it! I set the server name in different server blocks in the nginx config. Remember to use the docker port, not the host port.
server {
    listen 80;
    server_name game2048;

    location / {
        proxy_pass "http://game2048:8080";
    }
}

server {
    listen 80;
    server_name game;

    location / {
        # Remember to refer to the docker port (8080 here),
        # not the host port (9999 in this case):
        proxy_pass "http://game:8080";
    }
}
The github repo has been updated to reflect the fix, the old readme file is there under ./README.old01.md.
Typical that I find the answer when I carefully phrase the question to others. Do you know that feeling?
I had the same "502 Bad Gateway" error, but the solution was to tune proxy_buffer_size following this post's instructions:
proxy_buffering off;
proxy_buffer_size 16k;
proxy_busy_buffers_size 24k;
proxy_buffers 64 4k;
See the nginx error log
sudo tail -n 100 /var/log/nginx/error.log
If you see Permission denied error in the log like below -
2022/03/28 03:51:09 [crit] 1140954#0: *141 connect() to
xxx.xxx.68.xx:8080 failed (13: Permission denied) while connecting to
upstream, client: xxx.xx.xxx.25, server: www.example.com
Check whether httpd_can_network_connect is enabled by running: sudo getsebool -a | grep httpd
If httpd_can_network_connect is off, then this is the cause of your issue.
Solution:
Set httpd_can_network_connect to on by running sudo setsebool httpd_can_network_connect on -P
Hope it resolves your problem.
I had the same error, but for a web application that was just not serving at the IP and port mentioned in the config.
So say you have this:
location /game {
    proxy_pass "http://game:9999";
}
Then make sure the web application you expect at http://game:9999 is really serving from within a docker container named 'game', and that the code is set to serve the app on port 9999.
For me, this line helped: proxy_set_header Host $http_host;
location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_redirect off;
    proxy_pass http://myserver;
}
In my case, after 4 hours, the only thing I had missed was adding the port with the semanage command.
location / {
    proxy_pass http://A.B.C.D:8090/test;
}
The solution was to add port 8090, and it works:
semanage port -a -t http_port_t -p tcp 8090
You have to declare an external network if the container you are pointing to is defined in another docker-compose.yml file:
version: "3"
services:
  webserver:
    image: nginx:1.17.4-alpine
    container_name: ${PROJECT_NAME}-webserver
    depends_on:
      - drupal
    restart: unless-stopped
    ports:
      - 80:80
    volumes:
      - ./docroot:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
      - certbot-etc:/etc/letsencrypt
    networks:
      - internal
      - my-passwords

networks:
  my-passwords:
    external: true
    name: my-passwords_default
nginx.conf:
server {
    listen 80;
    server_name test2.com www.test2.com;

    location / {
        proxy_pass http://my-passwords:3000/;
    }
}
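For context, the external network comes from the other project's compose file. A hypothetical docker-compose.yml for that project, following Compose's <project>_default naming convention (the image, command, and port here are all assumptions):

```yaml
# Hypothetical second project; running `docker compose -p my-passwords up`
# here creates the network "my-passwords_default" that the first file joins.
services:
  my-passwords:
    image: node:18-alpine          # assumed app image
    command: ["node", "server.js"] # assumed entrypoint
    expose:
      - "3000"                     # reachable as my-passwords:3000 across the shared network
```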
You may need to telnet from the nginx machine to the upstream to check whether it can connect.
Tailing /var/log/nginx/error.log would also help.
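The telnet-style check can also be scripted; a minimal sketch of a TCP reachability probe (the host/port in the comment are placeholders from the question):

```python
import socket

def is_listening(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds within `timeout` seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. is_listening("172.17.0.4", 9999) run from inside the nginx container's network
```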