I'm seeing some particularly odd behaviour with my docker-compose and nginx setup. I am trying to have nginx proxy_pass requests to a backend web service. The web service is Spring Boot, but I don't believe that's relevant. The web service issues a 302 redirect to unauthenticated users, sending them to a /login page. This all works as expected except for an indeterminate period after I bring the docker-compose stack up: early requests that result in 302 responses time out, whilst requests made directly to the /login page return immediately as expected. After an indeterminate period, usually minutes, something seems to stabilise and everything works as expected. I've verified the behaviour using Chrome from a client machine and with curl directly on the host running the compose stack. I believe the 302 responses are somehow getting dropped, but I'm not sure.
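The curl checks look roughly like this ("docker-host" stands in for the real hostname of the machine running the compose stack):

# A protected page: the backend answers with a 302 to /login,
# but during the bad period this request hangs until it times out
curl -vk https://docker-host/

# The login page requested directly returns immediately, even during the bad period
curl -vk https://docker-host/login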
Can anyone spot a problem?
version: '3.5'
services:
proxy:
image: nginx:latest
container_name: "proxy"
restart: always
volumes:
- blah:/etc/nginx
ports:
- 80:80
- 443:443
webservice:
image: "webservice:latest"
container_name: "webservice"
restart: always
ports:
- 8080:8080
Nginx Config:
worker_processes auto;
events { }
http {
server {
listen 80 default_server;
listen 443 default_server ssl;
ssl_certificate /etc/nginx/cert;
ssl_certificate_key /etc/nginx/key;
if ($scheme = http) {
return 301 https://$host:443$request_uri;
}
gzip on;
gzip_types text/plain application/xml application/json application/javascript;
location / {
proxy_http_version 1.1;
proxy_pass http://webservice:8080/;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host:$server_port;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Port $server_port;
access_log /etc/nginx/log/access.log combined;
error_log /etc/nginx/log/error.log warn;
}
}
}
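For what it's worth, the WebSocket proxying example in the nginx documentation derives the Connection header from $http_upgrade with a map at http level, instead of always sending "upgrade"; I don't know whether that is related to my problem, but for comparison it looks roughly like this:

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    # ... listen/ssl directives as above ...
    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_pass http://webservice:8080/;
    }
}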
Related
I'm building an e2e test suite inside a docker container for CI.
It has 3 services (admin, platform, dashboard) which all connect to a postgres instance.
I use nginx as a reverse proxy to direct the traffic to the correct service based on the subdomain.
Dashboard (dashboard.localtest.me) and Admin (admin.localtest.me) each have their own subdomain, and everything else goes to Platform (e.g. accounts.localtest.me, public.localtest.me); there are hundreds of subdomains. Each service also runs on its own port, which is part of how the product is designed.
Currently I can bring all this up in docker-compose, make requests to <foo>.localtest.me:8080 from my browser and nginx will direct all the traffic to the correct endpoints. Works great.
But when the Cypress service tests start making requests (from inside the docker host), they don't resolve. My assumption is that since the hostname isn't resolving, the request never reaches the nginx service, which means it can't be routed to the correct product service (admin, platform, or dashboard).
If I can route everything to nginx with a DNS wildcard (*.localtest.me) I think that would work, but I can't figure out which modules or tools allow for that. Everything I've found is for handling a reverse proxy that connects to the docker host, not for containers making URL requests internally.
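One idea I have looked at, though I'm not sure it's the right tool, is giving the nginx service extra network aliases on the compose network so that other containers (like Cypress) can resolve those hostnames. A rough sketch:

  nginx:
    # ... build and ports as below ...
    networks:
      default:
        aliases:
          - dashboard.localtest.me
          - admin.localtest.me
          - accounts.localtest.me
          - public.localtest.me

Inside the network the requests would hit nginx on container port 80 rather than the published 8080, and every hostname has to be listed by hand, which is why I'm hoping for something wildcard-based.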
TL;DR
How can I allow the Cypress container to make wildcard GET requests (*.localtest.me) to my nginx reverse-proxy container?
This is roughly what my docker-compose file looks like; I've removed the internal env vars that aren't relevant.
services:
postgres:
build:
context: ./
dockerfile: postgres.dockerfile
ports:
- 5432:5432
dashboard:
build:
context: ./
dockerfile: web.dockerfile
ports:
- 9001:9000
platform:
build:
context: ./
dockerfile: web.dockerfile
ports:
- 8001:8000
admin:
build:
context: ./
dockerfile: web.dockerfile
ports:
- 7001:7000
nginx:
restart: always
image: nginx
build:
context: ./
dockerfile: nginx.dockerfile
ports:
- 8080:80
- 443:443
- 8000:8000
- 9000:9000
- 7000:7000
cypress:
image: cypress-testing
build:
context: ./
dockerfile: cypress-tests.dockerfile
ports:
- 6000:6000
Here is my nginx.conf
server {
listen 80;
listen 9000;
server_name dashboard.localtest.me;
location / {
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_pass http://dashboard:9000/;
proxy_redirect off;
proxy_http_version 1.1;
proxy_buffering off;
chunked_transfer_encoding off;
}
}
server {
listen 80;
listen 7000;
server_name admin.localtest.me;
location / {
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_set_header Connection "";
proxy_pass http://admin:7000/;
proxy_redirect off;
proxy_http_version 1.1;
proxy_buffering off;
chunked_transfer_encoding off;
}
}
server {
listen 80;
listen 8000;
server_name ~(^|\.)localtest\.me$;
location / {
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_pass http://platform:8000/;
proxy_redirect off;
proxy_http_version 1.1;
proxy_buffering off;
chunked_transfer_encoding off;
}
}
I have a simple dockerized Flask backend that listens on 0.0.0.0:8080 and a simple dockerized React frontend that sends a request to localhost:8080/api/v1.0/resource.
Now I want to run those containers with docker-compose and issue the request to the service name backend.
The compose file looks like this:
version: '3'
services:
backend:
ports:
- "8080:8080"
image: "tobiaslocker/simple-dockerized-flask-backend:v0.1"
frontend:
ports:
- "80:80"
image: "tobiaslocker/simple-dockerized-react-frontend:v0.1"
The NGINX configuration that works for requests to localhost:
server {
listen 80;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
}
The frontend sends the request axios.get('http://localhost:8080/api/v1.0/resource')
My questions:
How do I have to configure NGINX to be able to use the service name (e.g. backend)?
How do I have to issue the request to match the configuration?
I am not sure how proxy_pass takes effect when the request is sent from the frontend, and I have found it hard to debug.
Regards
My Answers:
How do I have to configure NGINX to be able to use the service name (e.g. backend)?
server {
listen 80;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
location /api {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://backend:8080;
proxy_ssl_session_reuse off;
proxy_set_header Host $http_host;
proxy_cache_bypass $http_upgrade;
proxy_redirect off;
}
}
Taken from here. I'm not sure all of the settings are relevant, but setting only proxy_pass didn't work for me.
How do I have to issue the request to match the configuration?
Same as before: axios.get('http://localhost:8080/api/v1.0/resource'), which makes sense, since it works both locally and proxied through NGINX.
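For anyone debugging something similar, a quick way to sanity-check both hops with curl might be the following (service names as in the compose file above; assumes curl is available inside the frontend image):

# From the host: goes through the frontend's NGINX, which proxies /api to the backend
curl -v http://localhost/api/v1.0/resource

# From inside the frontend container: talks to the backend service directly by name
docker-compose exec frontend curl -v http://backend:8080/api/v1.0/resource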
I'm new to Docker and I'm setting up a new application with two services in my docker-compose file:
# Contains all my API servers
api_load_balancer:
build: ./microservices/load_balancer
restart: always
ports:
- "8080:80"
# Contains all my client servers
server_client:
build: ./microservices/client
ports:
- "80:80"
....
My microservices/load_balancer nginx.conf looks like this:
events { worker_connections 1024; }
http{
include /etc/nginx/mime.types;
default_type application/octet-stream;
upstream api_nodes {
server api_1:9000;
}
upstream socket_nodes {
ip_hash;
server socketio_1:5000;
}
# SERVER API
server {
location /socket.io/ {
proxy_set_header Access-Control-Allow-Origin *;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_pass http://socket_nodes;
# enable WebSockets
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
location /api/ {
proxy_set_header Access-Control-Allow-Origin *;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_pass http://api_nodes;
}
}
}
My load_balancer/Dockerfile looks like this:
FROM nginx:alpine
COPY ./nginx.conf /etc/nginx/nginx.conf
COPY ./ssl ./etc/nginx/ssl
EXPOSE 80
When I connect from my client's server I can reach the API using the api_load_balancer connection string, since they're on the same docker network. But when I make a call from the browser, I have to change the connection string to something like localhost:8080 or some.ip.in.public.server:8080.
I don't like the idea of exposing either my port or my API configuration like that, so is there any way to implement a more transparent connection between those microservices? I don't know if it's even possible.
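One direction I've been considering, though I'm not sure it's the right approach, is to let the client's own web server forward API calls to the load balancer over the docker network, so the browser only ever talks to the client's origin using relative paths like /api/.... Assuming the client container also serves through nginx, a rough sketch of the extra location block would be:

location /api/ {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # api_load_balancer listens on container port 80 inside the compose network
    proxy_pass http://api_load_balancer/api/;
}

The same idea would presumably apply to /socket.io/, together with the WebSocket headers already used in the load balancer config.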
I am trying to set up SSL for my homepage (www.myhomepage.com) using LetsEncrypt on an nginx reverse-proxy. I have an additional host without SSL running, for testing proxying to multiple hosts (www.myotherhomepagewithoutssl.com).
The reverse-proxy and two hosts are running in three separate docker containers.
I got both hosts to work without SSL, but the encrypted one does not work when I try to use SSL. The LetsEncrypt certificates appear to be set up/obtained correctly and are persisted in a docker volume.
I am trying to follow and adapt this tutorial to set up the LetsEncrypt SSL encryption:
http://tom.busby.ninja/letsecnrypt-nginx-reverse-proxy-no-downtime/
When trying to connect to the SSL-encrypted host at www.myhomepage.com using Firefox, I get this error:
Unable to connect
The other, non-encrypted host at www.myotherhomepagewithoutssl.com works. And as I stated above, when I have www.myhomepage.com set up without SSL (in the same way as www.myotherhomepagewithoutssl.com), it is also reachable.
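In case it helps, these are roughly the checks I can run from outside (domain names replaced with the placeholders used in this post):

# Plain HTTP vhost; per the config below this should answer with the 302 to https
curl -vI http://www.myhomepage.com/

# HTTPS, where Firefox reports "Unable to connect"
curl -vkI https://www.myhomepage.com/

# List which host ports the proxy container actually publishes
docker ps --format '{{.Names}}: {{.Ports}}'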
My complete setup is listed below and consists of:
* reverse_proxy_testing.sh: Bash script to clean-up, build and start the containers.
* compose_reverse_proxy.yaml: Docker-Compose file.
* reverse_proxy.docker: Dockerfile for setting up the reverse-proxy with nginx.
* nginx.conf: nginx config-file for the reverse-proxy.
I suspect that my error is located somewhere inside nginx.conf, but I cannot find it.
Any help is much appreciated!
nginx.conf:
worker_processes 1;
events { worker_connections 1024; }
http {
sendfile on;
server {
deny all;
}
upstream myhomepage {
server myhomepage_blog:80;
}
upstream docker-apache {
server apache:80;
}
server {
listen 80;
listen [::]:80;
server_name www.myhomepage.com myhomepage.com;
return 302 https://$server_name$request_uri;
}
server {
listen 443 ssl;
listen [::]:443;
server_name www.myhomepage.com myhomepage.com;
ssl_certificate /etc/letsencrypt/live/myhomepage.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/myhomepage.com/privkey.pem;
location /.well-known {
root /var/www/ssl-proof/myhomepage.com/;
}
location / {
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://myhomepage;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 900s;
}
}
server {
listen 80;
server_name www.myotherhomepagewithoutssl.com myotherhomepagewithoutssl.com;
location / {
proxy_pass http://docker-apache;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
}
}
reverse_proxy.docker:
FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
RUN mkdir -p /var/www/ssl-proof/myhomepage.com/.well-known
RUN apk update && apk add certbot
compose_reverse_proxy.yaml:
version: '3.3'
services:
reverseproxy:
image: reverseproxy
ports:
- 80:80
restart: always
volumes:
- proxy_letsencrypt_ssl_proof:/var/www/ssl-proof
- proxy_letsencrypte_certificates:/etc/letsencrypt
apache:
depends_on:
- reverseproxy
image: httpd:alpine
restart: always
myhomepage_blog:
image: wordpress
links:
- myhomepage_db:mysql
environment:
- WORDPRESS_DB_PASSWORD=somepassword
- VIRTUAL_HOST=myhomepage.com
volumes:
- myhomepage_code:/code
- myhomepage_html:/var/www/html
restart: always
myhomepage_db:
image: mariadb
environment:
- MYSQL_ROOT_PASSWORD=somepassword
- MYSQL_DATABASE=wordpress
volumes:
- myhomepage_dbdata:/var/lib/mysql
restart: always
volumes:
myhomepage_dbdata:
myhomepage_code:
myhomepage_html:
proxy_letsencrypt_ssl_proof:
proxy_letsencrypte_certificates:
reverse_proxy_testing.sh:
#!/bin/bash
docker rm testreverseproxy_apache_1 testreverseproxy_myhomepage_blog_1 testreverseproxy_myhomepage_db_1 testreverseproxy_reverseproxy_1
docker build -t reverseproxy -f reverse_proxy.docker .
docker-compose -f reverse_proxy_compose.yml up
Sometimes the nginx server forwards a request to the wrong docker-compose service.
I have this docker-compose config:
version: "3.0"
services:
proj-reader:
image: repositry:5000/my-company.com/proj-reader
ports:
- "28090:8080"
- "28095:5005"
proj-helpdesk:
image: repository:5000/my-company.com/proj-helpdesk
ports:
- "29080:8080"
- "29085:5005"
proj-frontend:
image: repository:5000/my-company.com/proj-frontend
ports:
- "80:80"
links:
- "proj-helpdesk:backend"
- "proj-reader:reader"
...
The frontend is an nginx container with a NodeJS application, and we have the following nginx configuration:
upstream backend {
server backend:8080;
keepalive 30;
}
upstream reader {
server reader:8080;
keepalive 30;
}
server {
listen 80;
client_max_body_size 2m;
server_name localhost;
root /usr/share/nginx/html;
location /api/ {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://backend;
proxy_read_timeout 600;
}
location /web-callback/ {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://reader;
}
}
And sometimes I can see that a request for /web-callback/ is received by another service, one that isn't even referenced in the links section of the frontend service. At first I thought it happened after I had restarted the reader service, but yesterday the situation repeated and I know the reader had not been restarted.
What can it be? And how can I prevent this situation in the future?
You should try to use:
location ^~ /api/ {
# Matches queries beginning with /api/ and then stops searching.
}
location ^~ /web-callback/ {
# Matches queries beginning with /web-callback/ and then stops searching.
}
https://www.keycdn.com/support/nginx-location-directive/
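Applied to the configuration in the question, that would look roughly like this (keeping the existing proxy settings):

location ^~ /api/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://backend;
    proxy_read_timeout 600;
}
location ^~ /web-callback/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://reader;
}

With ^~, the longest matching prefix is used immediately and nginx skips checking any regex locations.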