I'm trying to use an nginx container to forward traffic to other containers such as Ansible/Git/Jenkins.
When using nci-ansible-ui as Ansible, I can reach the Ansible UI via the external host server IP at http://10.97.98.6:3000 (as per the instructions). However, when I try to use nginx to proxy to this container, 404 and 502 errors appear in the log and the page does not load properly. In this case, I would like to use http://10.97.98.6/Master_Ansible as the URL. Note that http://172.18.0.5 is the Docker network IP assigned to the container.
server {
    listen 80;
    listen 3000;
    server_name localhost;

    #charset koi8-r;
    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log warn;

    root /etc/nginx/html;
    index index.html index.php;

    location / {
        root /etc/nginx/html;
        try_files $uri /$uri $uri/ =404;
    }

    location /Master_Ansible {
        proxy_pass http://172.18.0.5:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
        proxy_max_temp_file_size 0;
        proxy_connect_timeout 150;
        proxy_send_timeout 100;
        proxy_read_timeout 100;
        proxy_buffer_size 8k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}
Any idea about this?
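One detail worth checking: with proxy_pass http://172.18.0.5:3000; (no URI part), nginx forwards the original request path unchanged, so the backend receives /Master_Ansible, which an app that only knows / will typically answer with a 404. A minimal sketch of the trailing-slash variant, which replaces the matched prefix before proxying (assuming the UI can run behind a sub-path; IP and port taken from the question):

```nginx
location /Master_Ansible/ {
    # With a URI part ("/") on proxy_pass, nginx replaces the matched
    # prefix, so the backend sees "/" instead of "/Master_Ansible/".
    proxy_pass http://172.18.0.5:3000/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

Note that apps which emit absolute links (e.g. /assets/...) can still break behind a sub-path unless they support a configurable base URL.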
Related
Hi,
my problem is that I get a 502 error when trying to connect to localhost:8090.
The setup runs in a Docker container with MariaDB (MySQL) in it.
Ports 80 and 8080 work great: the database is running (Alpine Linux, MariaDB), and localhost on ports 80 and 8080 shows what it should show.
I haven't worked with nginx configuration before.
The error log shows this:
2022/08/04 20:55:53 [emerg] 302#302: open() "/conf/nginx/nginx.conf"
failed (2: No such file or directory)
In the conf file:
user root;
worker_processes 2;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    include /etc/nginx/sites-enabled/*;
}

daemon off;
In sites-enabled:

server {
    listen 8090;
    root /usr/bin;
    server_name localhost;
    access_log /dev/null;
    error_log /dev/null;

    location / {
        proxy_pass http://127.0.0.0:7001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Fowarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Fowarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        try_files $uri $uri/ =404;
    }

    location ~ \.(gif) {
        root /var/lib;
    }
What should I do?
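The [emerg] line says nginx was started with a configuration path (/conf/nginx/nginx.conf) that does not exist, so none of the config above is being loaded at all; the first step is to start nginx with -c pointing at the real file, or mount the file at that path in the container. Beyond that, here is a cleaned-up sketch of the sites-enabled block, assuming the backend really listens on local port 7001. It uses 127.0.0.1 rather than 127.0.0.0, corrects the X-Forwarded-* spellings, and drops the try_files line (combined with proxy_pass in the same location, try_files would serve files from the root instead of ever proxying):

```nginx
server {
    listen 8090;
    server_name localhost;

    location / {
        proxy_pass http://127.0.0.1:7001;   # 127.0.0.0 is a network address, not a host
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }

    location ~ \.gif$ {
        root /var/lib;
    }
}
```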
Is there a way to put two NGINX servers in series?
In my configuration, I have multiple docker-compose instances of containers, all running the same web applications. In addition, I have two NGINX servers. The NGINX1 server is located on my physical machine, and the other NGINX server (NGINX2) runs inside a docker-compose container.
Is there a way, when connecting to the NGINX1 server, to automatically reach the APP1 application (which is inside a container) by passing through the second NGINX (NGINX2, also internal to the container), simply by typing "mydomain.com/app1" in a browser?
I know that a simpler solution would be to point the docker-compose container directly at the external NGINX, but could I apply the scenario described instead?
For better understanding, I made a simple image showing my architecture.
image showing the architecture of the project
Here is my NGINX1 config file:
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 9999;

    server {
        listen 80;
        server_name client1.nginx.loc;
        access_log logs/nginx_client_loc-access.log;
        error_log logs/nginx_client_loc-error.log;

        location /loki {
            #proxy_http_version 1.1;
            #proxy_set_header Upgrade $http_upgrade;
            #proxy_set_header Connection "Upgrade";
            #proxy_set_header Host $http_host;
            proxy_pass http://172.29.161.227:3100;
        }

        #error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
And here is the second NGINX config (NGINX2, internal to the container):
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 99999;

    server {
        listen 80;
        server_name localhost 127.0.0.1;
        resolver 127.0.0.11;

        location /APP1 {
            proxy_pass http://APP1/content;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_read_timeout 300;
            proxy_connect_timeout 300;
            proxy_send_timeout 300;
        }

        location /App2 {
            include /etc/nginx/mime.types;
            proxy_pass http://APP2/targets;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
Thanks so much
If I understood correctly, you want NGINX1 to pass into NGINX2, which would pass the packet onward to APP1?
In this case, the solution is rather straightforward:
Configure NGINX1 to send the packet to a specific port, e.g. port 777. Then add an NGINX2 listener on port 777 that sends it onward.
NGINX1:
http {
    ...
    server {
        listen 80;
        ...
        location /loki {
            #proxy_http_version 1.1;
            #proxy_set_header Upgrade $http_upgrade;
            #proxy_set_header Connection "Upgrade";
            #proxy_set_header Host $http_host;
            proxy_pass http://172.29.161.227:3100;
        }

        location /APP1 {
            proxy_pass <URL for NGINX2>:777;
            proxy_read_timeout 300;
            proxy_connect_timeout 300;
            proxy_send_timeout 300;
        }

        #error_page 404 /404.html;
        ...
    }
NGINX2:
http {
    include mime.types;
    ...
    server {
        listen 80;
        ...
    }

    server {
        listen 777;
        server_name localhost 127.0.0.1;
        resolver 127.0.0.11;

        location /APP1 {
            proxy_pass http://APP1/content;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_read_timeout 300;
            proxy_connect_timeout 300;
            proxy_send_timeout 300;
        }
    }
    ...
This way, a packet that arrives at /APP1 is forwarded by NGINX1 to port 777 of NGINX2, which in turn forwards it to the APP1 content.
Also, if you could include ports on your architecture diagram next time, it would make the packet movement clearer to follow.
Hope this helps.
I have two Docker containers for my 2 different .NET Core API projects running on my machine (Linux) on ports 3333:80 and 6666:8088 respectively. I have deployed their front-end parts on an Nginx server, each having its own configuration in the sites-available folder.
The problem is that my 1st container (API) is working fine, getting responses from the front-end application as well as from Postman, but the 2nd container is not working, throwing an HTTP 502 Bad Gateway error with this message:
recv() failed (104: Connection reset by peer) while reading response header from upstream
What's wrong here? Kindly help me resolve this issue. The following are my config files:
nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    client_max_body_size 50M;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
site1.conf:
server {
    listen 80 default_server;
    server_name _;
    root /var/www/app.admin-crm.com;
    index index.html;

    location /api/ {
        proxy_pass http://127.0.0.1:3333/api/;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header Host $host;
        add_header Access-Control-Allow-Credentials true;
        client_max_body_size 50M;
    }

    location / {
        try_files $uri $uri/ /index.html;
    }
}
site2.conf:
server {
    listen 8088 default_server;
    server_name _;
    root /var/www/stilaar-web;
    index index.html;

    location /api/ {
        proxy_pass http://127.0.0.1:6666/api/;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header Host $host;
        add_header Access-Control-Allow-Credentials true;
        client_max_body_size 50M;
    }

    location / {
        try_files $uri $uri/ /index.html;
    }
}
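"Connection reset by peer" while reading the response header means the TCP connection reached something, but whatever accepted it dropped the connection, so it is worth confirming what is actually listening behind the second mapping (6666:8088 means host port 6666 maps to container port 8088). Testing the container directly, bypassing nginx (e.g. curl -v http://127.0.0.1:6666/api/), will show whether the reset comes from the container itself. A hedged sketch using an explicit upstream block, which keeps the target in one place while debugging (the upstream name is hypothetical):

```nginx
# Hypothetical upstream name; the port must match the *host* side
# of the published mapping (6666 in "6666:8088").
upstream stilaar_api {
    server 127.0.0.1:6666;
}

server {
    listen 8088 default_server;
    root /var/www/stilaar-web;
    index index.html;

    location /api/ {
        proxy_pass http://stilaar_api/api/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 50M;
    }
}
```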
I have installed Docker and have a running container with the port mappings below.
0.0.0.0:32770->1414/tcp, 0.0.0.0:32769->4414/tcp, 0.0.0.0:32768->7800/tcp
I can open the page using http://localhost:32769 in a local browser, but I cannot open it over the internet using http://server_name:32769.
I have Jenkins installed on the same machine, and I can access it via nginx using http://server_name:80 over the internet. Nginx is installed locally; below is the setup in nginx.conf.
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    root /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_redirect default;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_max_temp_file_size 0;

        # this is the maximum upload size
        client_max_body_size 10m;
        client_body_buffer_size 128k;

        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_request_buffering off; # Required for HTTP CLI commands in Jenkins > 2.54
        proxy_set_header Connection ""; # Clear for keepalive
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
}
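Since port 80 is already taken by the Jenkins proxy, reaching the container over the internet usually means either opening host port 32769 in the firewall/security group, or adding a second proxied entry point through nginx, the same way Jenkins is exposed. A sketch of the second option, assuming a hypothetical hostname (app.example.com) pointed at the same server:

```nginx
server {
    listen 80;
    server_name app.example.com;   # hypothetical name; Jenkins keeps the default_server

    location / {
        proxy_pass http://127.0.0.1:32769;   # host side of 0.0.0.0:32769->4414/tcp
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

If http://localhost:32769 works locally but http://server_name:32769 does not, the usual culprit is a firewall blocking that port; proxying through port 80 sidesteps that.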
I want to configure NGINX with Rails 4 and run my application in production mode. The problem is that I get a 403 error when running the command rails s -e production and typing localhost in a browser. Naturally, I set 755 permissions on all files in my application folder. Here is my nginx.conf:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    # include /etc/nginx/conf.d/*.conf;
    # include /etc/nginx/sites-enabled/*;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    client_max_body_size 50M;
    # fastcgi_buffers 8 16k;
    # fastcgi_buffer_size 32k;
    # fastcgi_connect_timeout 300;
    # fastcgi_send_timeout 300;
    # fastcgi_read_timeout 300;

    upstream proxy-user {
        server 127.0.0.1:2000;
    }

    upstream thin_cluster {
        server unix:/tmp/thin.0.sock;
        # server unix:/tmp/thin.1.sock;
        # server unix:/tmp/thin.2.sock;
    }

    server {
        listen 80;
        server_name localhost;
        # access_log /var/log/nginx-access.log;
        root /home/user/Apps/myapp/public;

        location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|mp3|flv|mpeg|avi)$ {
            try_files $uri @app;
        }

        location /home/user/Apps/myapp/ {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://proxy-user;
            if (!-f $request_filename) {
                proxy_pass http://proxy-user;
                break;
            }
        }
    }

    server {
        listen 443;
        server_name _;
        ssl on;
        ssl_certificate /etc/nginx/ssl/server.crt;
        ssl_certificate_key /etc/nginx/ssl/server.key;
        ssl_protocols SSLv3 TLSv1;
        ssl_ciphers HIGH:!ADH:!MD5;
        access_log /var/log/nginx-access-ssl.log;
        root /home/user/Apps/myapp/public;

        location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|mp3|flv|mpeg|avi)$ {
            try_files $uri @app;
        }

        location /home/user/Apps/myapp/ {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-FORWARDED_PROTO https;
            proxy_set_header SSL_CLIENT_S_DN $ssl_client_s_dn;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://proxy-user;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
You kinda have a few issues in your config. I'll write what I would have done, and you can ask me whatever questions you have. I'll assume that the server is on port 2000, because that's the upstream you used.
I'll also ignore the http block and only use the server and upstream blocks.
upstream rails {
    server 127.0.0.1:2000;
}

server {
    server_name domain.com; # or whichever
    listen 80;

    # ssl settings start
    listen 443 ssl;
    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    # ssl settings end

    root /home/user/Apps/myapp/public;
    error_page 500 502 503 504 /50x.html;
    access_log /var/log/nginx/domain-access.log;
    error_log /var/log/nginx/domain-error.log;

    location @pass_to_rails {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://rails;
    }

    location / {
        try_files $uri $uri/ @pass_to_rails;
    }
}
You should place this inside sites-available and symlink it into sites-enabled if you are on a Debian/Ubuntu distro, or use /etc/nginx/conf.d if you are on another distro, to keep things tidy and maintainable.
Also make sure to uncomment one of these lines, depending on which you want to use:
# include /etc/nginx/conf.d/*.conf;
# include /etc/nginx/sites-enabled/*;
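With the sites-enabled include active, the http block itself can stay minimal. A sketch along those lines, assuming the Debian/Ubuntu layout (settings carried over from the question's http block):

```nginx
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    client_max_body_size 50M;

    # Pick one include style and stick with it:
    include /etc/nginx/sites-enabled/*;
    # include /etc/nginx/conf.d/*.conf;
}
```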