Is there a way to put two NGINX servers in series?
In my configuration, I have multiple docker-compose container instances, which all run the same web applications. In addition, I have two NGINX servers. The NGINX1 server is located on my physical machine, and the other NGINX server (NGINX2) is located inside a docker-compose container.
Is there a way, when connecting to the NGINX1 server, to automatically reach the APP1 application (which is inside a container) by passing through the second NGINX (NGINX2, which is also internal to the container), simply by typing "mydomain.com/app1" in a browser?
I know that a simpler solution would be to point the external NGINX directly at the docker-compose container, but could I apply the scenario described above instead?
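(To be clear about what I mean by the simpler solution, it would be roughly something like this on NGINX1, where the address and port are just placeholders for wherever docker-compose publishes APP1:)

    location /app1 {
        # talk to the published container port directly, bypassing NGINX2;
        # 127.0.0.1:8081 is only a placeholder for the published APP1 port
        proxy_pass http://127.0.0.1:8081;
    }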
For better understanding, I made a simple image showing my architecture.
[Image: architecture of the project]
Here is my NGINX1 config file:
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 9999;

    server {
        listen 80;
        server_name client1.nginx.loc;
        access_log logs/nginx_client_loc-access.log;
        error_log logs/nginx_client_loc-error.log;

        location /loki {
            #proxy_http_version 1.1;
            #proxy_set_header Upgrade $http_upgrade;
            #proxy_set_header Connection "Upgrade";
            #proxy_set_header Host $http_host;
            proxy_pass http://172.29.161.227:3100;
        }

        #error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
And here is the second NGINX config (NGINX2, internal to the container):
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 99999;

    server {
        listen 80;
        server_name localhost 127.0.0.1;
        resolver 127.0.0.11;

        location /APP1 {
            proxy_pass http://APP1/content;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_read_timeout 300;
            proxy_connect_timeout 300;
            proxy_send_timeout 300;
        }

        location /App2 {
            include /etc/nginx/mime.types;
            proxy_pass http://APP2/targets;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
Thanks so much
If I understood correctly, you want NGINX1 to proxy to NGINX2, which would then pass the request onward to APP1?
In this case, the solution is rather straightforward:
Configure NGINX1 to forward the request to a specific port, e.g. port 777. Then add a server block in NGINX2 that listens on port 777 and forwards the request on to APP1.
NGINX1:
http {
    ...
    server {
        listen 80;
        ...
        location /loki {
            #proxy_http_version 1.1;
            #proxy_set_header Upgrade $http_upgrade;
            #proxy_set_header Connection "Upgrade";
            #proxy_set_header Host $http_host;
            proxy_pass http://172.29.161.227:3100;
        }

        location /APP1 {
            proxy_pass <URL for NGINX2>:777;
            proxy_read_timeout 300;
            proxy_connect_timeout 300;
            proxy_send_timeout 300;
        }

        #error_page 404 /404.html;
        ...
    }
NGINX2:
http {
    include mime.types;
    ...
    server {
        listen 80;
        ...
    }

    server {
        listen 777;
        server_name localhost 127.0.0.1;
        resolver 127.0.0.11;

        location /APP1 {
            proxy_pass http://APP1/content;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_read_timeout 300;
            proxy_connect_timeout 300;
            proxy_send_timeout 300;
        }
    }
    ...
This way, a request that arrives at /APP1 is forwarded by NGINX1 to port 777 of NGINX2, which in turn forwards it on to the APP1 content.
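Two details worth noting: because NGINX2 lives inside a docker-compose container, port 777 also has to be published to the host so NGINX1 can reach it, and because the proxy_pass above has no URI part after the host and port, the original /APP1 path is passed through unchanged. A minimal sketch of the NGINX1 side, with 127.0.0.1:777 as a placeholder for wherever the container publishes that port:

    location /APP1 {
        # no URI after host:port, so the request keeps its original /APP1 path
        # and still matches NGINX2's "location /APP1" block
        proxy_pass http://127.0.0.1:777;
    }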
Also, if you could include ports on your architecture diagram next time, it would make the packet flow easier to follow.
Hope this helps.
Hi
my problem is that I get a 502 error when trying to connect to localhost:8090.
The setup runs in a Docker container with MariaDB (MySQL) in it.
Ports 80 and 8080 work great. The database is running (Alpine Linux - MariaDB), and localhost on ports 80 and 8080 shows what it should.
I haven't had anything to do with nginx configuration before.
In the error log I have this:
2022/08/04 20:55:53 [emerg] 302#302: open() "/conf/nginx/nginx.conf" failed (2: No such file or directory)
In the conf file:

user root;
worker_processes 2;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    include /etc/nginx/sites-enabled/*;
}

daemon off;
In sites-enabled:

server {
    listen 8090;
    root /usr/bin;
    server_name localhost;
    access_log /dev/null;
    error_log /dev/null;

    location / {
        proxy_pass http://127.0.0.0:7001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Fowarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Fowarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        try_files $uri $uri/ =404;
    }

    location ~ \.(gif) {
        root /var/lib;
    }
}
What should I do?
Using the latest Jitsi Docker build on Docker Desktop with WSL2, I am having problems getting the wss socket to redirect when using an internal PUBLIC_URL behind an nginx reverse proxy.
Using the default localhost with no PUBLIC_URL, I can connect to a meeting with no issues and the URL is http://localhost.
.env
# Public URL for the web service (required)
#PUBLIC_URL=https://meet.example.com
Adding a reverse proxy with the following nginx default.conf:
server {
    listen 80;
    listen [::]:80;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    ssl_certificate /home/ssl/certs/meet.example.com.crt;
    ssl_certificate_key /home/ssl/private/meet.example.com.key;

    server_name meet.example.com;
    #charset koi8-r;
    access_log /home/meet.jitsi.access.log main;
    error_log /home/meet.jitsi.error.log;

    location / {
        proxy_pass http://meet.jitsi:80;
    }

    location /xmpp-websocket {
        proxy_pass http://jvb.meet.jitsi; # <- see error below
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
    }

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    location ~ /\.ht {
        deny all;
    }
}
I get an error when testing the above default.conf:
root@9c684:/# nginx -c /etc/nginx/nginx.conf -t
2021/01/25 15:53:14 [emerg] 300#300: invalid URL prefix in /etc/nginx/conf.d/default.conf:20
nginx: [emerg] invalid URL prefix in /etc/nginx/conf.d/default.conf:20
nginx: configuration file /etc/nginx/nginx.conf test failed
/etc/nginx/conf.d/default.conf:20 == proxy_pass http://jvb.meet.jitsi;
Following a number of threads, I am lost as to which config I should use, but I understand that two proxy_pass directives should be possible for the same server_name; is this correct?
Is there a better method to have a local URL redirect to the JVB server for the wss:// socket?
In the virtual host that Jitsi creates by default for nginx, there is an entry for websockets that I don't see in your configuration. This is the default configuration:
# colibri (JVB) websockets for jvb1
location ~ ^/colibri-ws/default-id/(.*) {
    proxy_pass http://127.0.0.1:9090/colibri-ws/default-id/$1$is_args$args;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    tcp_nodelay on;
}
In my case I have several JVB servers so I have an entry for each one.
# colibri (JVB) websockets for my jvb1
location ~ ^/colibri-ws/jvb1/(.*) {
    proxy_pass http://10.200.0.112:9090/colibri-ws/jvb1/$1$is_args$args;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    tcp_nodelay on;
}

# colibri (JVB) websockets for my jvb2
location ~ ^/colibri-ws/jvb2/(.*) {
    proxy_pass http://10.200.0.83:9090/colibri-ws/jvb2/$1$is_args$args;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    tcp_nodelay on;
}
To know which id you have to use (it must match the path segment in the location block, e.g. jvb2 above), you configure it in the /etc/jitsi/videobridge/jvb.conf file:
videobridge {
    http-servers {
        public {
            port = 9090
        }
    }
    websockets {
        enabled = true
        domain = "your.domain.com:443"
        tls = true
        server-id = jvb2
    }
}
Is there a "proper" structure for the directives of an NGINX Reverse Proxy? I have seen 2 main differences when looking for examples of an NGINX reverse proxy.
http directive is used to house all server directives. Servers with data are listed in a pool within the upstream directive.
server directives are listed directly within the main directive.
Is there any reason for this or is this just a syntactical sugar difference?
Example of #1 within ./nginx.conf file:
upstream docker-registry {
    server registry:5000;
}

http {
    server {
        listen 80;
        listen [::]:80;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 default_server;
        ssl on;
        ssl_certificate external/cert.pem;
        ssl_certificate_key external/key.pem;

        # set HSTS header because we only allow https traffic
        add_header Strict-Transport-Security "max-age=31536000;";

        proxy_set_header Host $http_host;        # required for Docker client sake
        proxy_set_header X-Real-IP $remote_addr; # pass on real client IP

        location / {
            auth_basic "Restricted";
            auth_basic_user_file external/docker-registry.htpasswd;
            proxy_pass http://docker-registry; # the docker container is the domain name
        }

        location /v1/_ping {
            auth_basic off;
            proxy_pass http://docker-registry;
        }
    }
}
Example of #2 within ./nginx.conf file:
server {
    listen 80;
    listen [::]:80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    error_log /var/log/nginx/error.log info;
    access_log /var/log/nginx/access.log main;

    ssl_certificate /etc/ssl/private/{SSL_CERT_FILENAME};
    ssl_certificate_key /etc/ssl/private/{SSL_CERT_KEY_FILENAME};

    location / {
        proxy_pass http://app1;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $remote_addr; # could also be `$proxy_add_x_forwarded_for`
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Request-Start $msec;
    }
}
I don't quite understand your question, but it seems to me that the second example is missing the http {} block; I don't think nginx will start without it.
Unless your example #2 file is included somehow in an nginx.conf that already has the http {} block.
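For illustration, a minimal sketch of what I mean, assuming the server-only snippets live under /etc/nginx/conf.d/ (the paths here are placeholders, adjust to your layout):

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    # files included here only need server { } blocks, because the include
    # itself already sits inside this http { } context
    include /etc/nginx/conf.d/*.conf;
}

With a main nginx.conf like that, a file containing only server { } blocks (as in example #2) is valid, because nginx reads it inside the http { } context.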
I have installed Docker and have a running container with the port mapping below.
0.0.0.0:32770->1414/tcp, 0.0.0.0:32769->4414/tcp, 0.0.0.0:32768->7800/tcp
I am able to open the page using http://localhost:32769 in a local browser, but I am not able to open it over the internet using http://server_name:32769.
I have Jenkins installed on the same machine and I am able to access it via nginx using http://server_name:80 over the internet. Nginx is installed locally and below is the setup in nginx.conf.
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    root /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_redirect default;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_max_temp_file_size 0;

        # this is the maximum upload size
        client_max_body_size 10m;
        client_body_buffer_size 128k;

        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_request_buffering off; # Required for HTTP CLI commands in Jenkins > 2.54
        proxy_set_header Connection ""; # Clear for keepalive
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
}
I'm trying to use an nginx container to forward traffic to other containers such as Ansible/Git/Jenkins, etc.
When using nci-ansible-ui as the Ansible service, I can reach the Ansible UI using the external host server IP http://10.97.98.6:3000 (which is according to the instructions). Yet, when trying to use nginx to forward to this container, 404 and 502 errors appear in the log and the web page is not loaded properly. Note that in this case I would like to use http://10.97.98.6/Master_Ansible as the URL, and that http://172.18.0.5 is the Docker network IP given to the container.
server {
    listen 80;
    listen 3000;
    server_name localhost;
    #charset koi8-r;

    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log warn;

    root /etc/nginx/html;
    index index.html index.php;

    location / {
        root /etc/nginx/html;
        try_files $uri /$uri $uri/ =404;
    }

    location /Master_Ansible {
        proxy_pass http://172.18.0.5:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
        proxy_max_temp_file_size 0;
        proxy_connect_timeout 150;
        proxy_send_timeout 100;
        proxy_read_timeout 100;
        proxy_buffer_size 8k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}
Any idea about this?