I have an nginx & docker-compose setup with the following nginx config file. Here api and kibana are Docker containers running on ports 8080 and 5601, respectively:
user nobody;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name my-domain.com www.my-domain.com;
        server_tokens off;

        location /.well-known/acme-challenge/ {
            root /var/www/certbot;
        }

        location / {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen 443 ssl;
        server_name my-domain.com www.my-domain.com;
        server_tokens off;

        ssl_certificate /etc/letsencrypt/live/all/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/all/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

        location ^~ / {
            proxy_pass http://api:8080/;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Host $http_host;
        }

        location ^~ /monitoring {
            proxy_pass http://kibana:5601/;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Host $http_host;
            rewrite /monitoring/(.*)$ /$1 break;
        }
    }
}
All of my containers are up and running and everything seems fine, but when I visit https://my-domain.com I get "This site can't be reached", and if I go to the non-secure http://my-domain.com/ I get an nginx 404 error with the following log in the container:
[error] 17#17: *13 open() "/etc/nginx/html/index.html" failed (2: No such file or directory), client: 123.456.789.101, server: my-domain.com, request: "GET / HTTP/1.1", host: "my-domain.com"
Why is it looking for a file? Is there something wrong with my nginx config? Please help.
Found it! 🤦🏽♂️ It should have been
proxy_set_header Host $host;
instead of
proxy_set_header Host $http_host;
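In other words, the proxy locations end up like this (only the Host header changes, everything else stays the same):

location ^~ / {
    proxy_pass http://api:8080/;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;   # was $http_host
}

location ^~ /monitoring {
    proxy_pass http://kibana:5601/;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;   # was $http_host
    rewrite /monitoring/(.*)$ /$1 break;
}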
Edit:
Apparently I also had to stop the Docker containers after running them for the first time and start them again to get it to work.
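Something along these lines (the exact commands may vary; this assumes the stack is managed with docker-compose, as above):

# recreate the containers so nginx picks up the change
docker-compose down
docker-compose up -d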
Related
Hi
My problem is that I get a 502 error when trying to connect to localhost:8090.
The setup is a running Docker container with MariaDB (MySQL) in it.
Ports 80 and 8080 work great: the database is running (Alpine Linux - MariaDB), and localhost on ports 80 and 8080 shows what it should.
I haven't had anything to do with nginx configuration before.
In the error log I have this:
2022/08/04 20:55:53 [emerg] 302#302: open() "/conf/nginx/nginx.conf" failed (2: No such file or directory)
In the conf file:
user root;
worker_processes 2;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    include /etc/nginx/sites-enabled/*;
}

daemon off;
In sites-enabled:

server {
    listen 8090;
    root /usr/bin;
    server_name localhost;
    access_log /dev/null;
    error_log /dev/null;

    location / {
        proxy_pass http://127.0.0.0:7001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Fowarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Fowarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        try_files $uri $uri/ =404;
    }

    location ~ \.(gif) {
        root /var/lib;
    }
}
What should I do?
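One diagnostic sketch that may help (the container name is a placeholder): the [emerg] above says nginx was started against /conf/nginx/nginx.conf, which doesn't exist, so it's worth checking which config path the binary is actually using:

# show the compiled-in conf-path (compare it with any -c flag in the start command)
docker exec <nginx-container> nginx -V 2>&1 | tr ' ' '\n' | grep conf
# test the config nginx would load and print its full path
docker exec <nginx-container> nginx -t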
Hi there. I have a Docker nginx reverse proxy configured with SSL using a configuration like this:
server {
    listen 80;
    server_name example.com;

    location / {
        return 301 https://$host$request_uri;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}

server {
    listen 443 ssl;
    server_name example.com;
    root /var/www/example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    # some locations: location ... { ... }
}
The certificates are configured with certbot and are working fine.
All the containers are up and running, but I have multiple websites running on my server, and they are managed by a local nginx. So I set the ports for the Docker nginx like this:
nginx:
  image: example/example_nginx:latest
  container_name: example_nginx
  ports:
    - "8123:80"
    - "8122:443"
  volumes:
    - ${PROJECT_ROOT}/data/certbot/conf:/etc/letsencrypt
    - ${PROJECT_ROOT}/data/certbot/www:/var/www/certbot
  command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
Docker port 80 maps to local port 8123 (HTTP), and Docker port 443 maps to local port 8122 (HTTPS). To pass requests from the local nginx to the Docker container's nginx I use the following config:
server {
    listen 80;
    server_name example.com;

    location / {
        access_log off;
        proxy_pass http://localhost:8123;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_redirect off;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}

server {
    listen 443 ssl;
    server_name example.com;

    location / {
        access_log off;
        proxy_pass https://localhost:8122;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_redirect off;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
When I open the website it works, but the certificate seems to be broken and my WebSockets crash.
My question is: how can I pass SSL processing from the local nginx to the Docker nginx so that it works as expected?
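One approach that is often used for this (a sketch, not from the original setup, and it assumes the local nginx was built with the stream module): forward port 443 at the TCP level so that TLS is terminated only once, inside the Docker nginx:

# in the local (host) nginx.conf, outside the http {} block
stream {
    server {
        listen 443;
        # pass the raw TLS stream to the Docker nginx published on host port 8122
        proxy_pass 127.0.0.1:8122;
    }
}

With passthrough the host nginx never decrypts the traffic, so the browser sees the certificate served by the container, and the WebSocket Upgrade/Connection handling only has to be configured there. If the host nginx also terminates TLS for other sites on port 443, a stream listener cannot share that port with the http listeners, so SNI-based routing with ssl_preread (or moving the other sites) would be needed.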
I have a backend container on an internal Docker network, which is not accessible from the internet.
Through an nginx proxy I want to send a request (a webhook to Slack) from the backend server to the outside world. Is that possible at all?
I have this config for nginx:
server {
    listen 80 default_server;
    server_name localhost;
    client_max_body_size 100M;
    charset utf-8;
    ... # setup for server containers
}

server {
    listen 443;
    server_name hooks.slack.com;

    location / {
        proxy_pass https://hooks.slack.com/;
        proxy_redirect off;
        proxy_set_header Host $http_host;
        #proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # gets CSS working
        #proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
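For what it's worth, a minimal sketch of how such an outbound proxy location is usually written when nginx runs inside Docker (the 127.0.0.11 resolver address is Docker's embedded DNS and is an assumption about the environment):

location / {
    # resolve hooks.slack.com at request time (the resolver is only used because proxy_pass gets a variable)
    resolver 127.0.0.11 valid=30s;
    set $slack_upstream https://hooks.slack.com;
    proxy_pass $slack_upstream;
    # present the real upstream host name, not the internal proxy name
    proxy_set_header Host hooks.slack.com;
    # send SNI on the upstream TLS handshake
    proxy_ssl_server_name on;
}

The backend container would then point its webhook URL at the nginx service name instead of hooks.slack.com directly.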
Is there a "proper" structure for the directives of an NGINX Reverse Proxy? I have seen 2 main differences when looking for examples of an NGINX reverse proxy.
http directive is used to house all server directives. Servers with data are listed in a pool within the upstream directive.
server directives are listed directly within the main directive.
Is there any reason for this or is this just a syntactical sugar difference?
Example of #1 within ./nginx.conf file:
upstream docker-registry {
    server registry:5000;
}

http {
    server {
        listen 80;
        listen [::]:80;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 default_server;
        ssl on;
        ssl_certificate external/cert.pem;
        ssl_certificate_key external/key.pem;

        # set HSTS header because we only allow https traffic
        add_header Strict-Transport-Security "max-age=31536000;";

        proxy_set_header Host $http_host;        # required for the Docker client's sake
        proxy_set_header X-Real-IP $remote_addr; # pass on the real client IP

        location / {
            auth_basic "Restricted";
            auth_basic_user_file external/docker-registry.htpasswd;
            proxy_pass http://docker-registry;   # the docker container is the domain name
        }

        location /v1/_ping {
            auth_basic off;
            proxy_pass http://docker-registry;
        }
    }
}
Example of #2 within ./nginx.conf file:
server {
    listen 80;
    listen [::]:80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    error_log /var/log/nginx/error.log info;
    access_log /var/log/nginx/access.log main;

    ssl_certificate /etc/ssl/private/{SSL_CERT_FILENAME};
    ssl_certificate_key /etc/ssl/private/{SSL_CERT_KEY_FILENAME};

    location / {
        proxy_pass http://app1;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $remote_addr; # could also be $proxy_add_x_forwarded_for
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Request-Start $msec;
    }
}
I don't quite understand your question, but it seems to me that the second example is missing the http {} block; I don't think nginx will start without it, unless your example #2 file is included somehow in an nginx.conf that does have the http {} block.
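To illustrate the point (a sketch of a typical layout; the file names and the upstream port are just examples): the blocks from example #2 would normally live in a file that the main nginx.conf pulls into its http {} context:

# /etc/nginx/nginx.conf
events {
    worker_connections 1024;
}

http {
    # everything included here ends up inside the http {} context
    include /etc/nginx/conf.d/*.conf;
}

# /etc/nginx/conf.d/app1.conf  (included above, so it needs no http {} of its own)
upstream app1 {
    server app1:3000;
}

server {
    listen 80;
    location / {
        proxy_pass http://app1;
    }
}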
I want to have two different Rails apps on the same server with Thin and nginx, on the same port but at different locations.
Example:
http://domain.com/rails_app_1
http://domain.com/rails_app_2
This is my nginx.conf file with only one app:
upstream domain1 {
    server 127.0.0.1:80;
    server 127.0.0.1:81;
    server 127.0.0.1:82;
}

server {
    listen 80;
    server_name domain.com;

    access_log /var/www/rails_app_1/current/log/access.log;
    error_log /var/www/rails_app_1/current/log/error.log;

    root /var/www/rails_app_1/current/public/;
    index index.html;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        if (!-f $request_filename) {
            proxy_pass http://domain1;
            break;
        }
    }
}
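What I'm aiming for is roughly this (a sketch only; the Thin ports 3000/3001 and upstream names are placeholders, not my real setup):

upstream rails_app_1 {
    server 127.0.0.1:3000;   # Thin instance(s) for app 1 (port is a placeholder)
}

upstream rails_app_2 {
    server 127.0.0.1:3001;   # Thin instance(s) for app 2 (port is a placeholder)
}

server {
    listen 80;
    server_name domain.com;

    location /rails_app_1/ {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://rails_app_1/;
    }

    location /rails_app_2/ {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://rails_app_2/;
    }
}

Each app would presumably also need to know its sub-URI prefix (e.g. Rails' relative_url_root) so that generated links match the location it is mounted under.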
Regards,