Exposing a Docker container port for access from the internet

I have installed Docker and have a running container with the port mappings below.
0.0.0.0:32770->1414/tcp, 0.0.0.0:32769->4414/tcp, 0.0.0.0:32768->7800/tcp
I am able to open the page using http://localhost:32769 in a local browser, but I am not able to open it over the internet using http://server_name:32769.
I have Jenkins installed on the same machine, and I can access it over the internet via Nginx at http://server_name:80. Nginx is installed locally, and below is the setup in nginx.conf.
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    root /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_redirect default;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_max_temp_file_size 0;

        # This is the maximum upload size.
        client_max_body_size 10m;
        client_body_buffer_size 128k;

        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;

        proxy_request_buffering off; # Required for HTTP CLI commands in Jenkins > 2.54
        proxy_set_header Connection ""; # Clear for keepalive
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
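If the goal is to reach the container over the internet without opening port 32769 in the host firewall, one option is to proxy it through the same Nginx server that already fronts Jenkins. This is only a sketch: the /app/ prefix is a hypothetical path, 32769 is the published host port from the question, and it assumes the application works when served under a sub-path.
# Hypothetical additional location inside the server {} block above.
# /app/ is an illustrative prefix; 32769 is the container's published port.
location /app/ {
    # The trailing slash on proxy_pass makes Nginx strip the /app/ prefix
    # before forwarding, so the backend sees requests rooted at "/".
    proxy_pass http://127.0.0.1:32769/;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
If direct access via http://server_name:32769 is preferred instead, the port usually also has to be allowed through the host firewall or cloud security group, since Docker's 0.0.0.0 binding alone does not open it to the outside.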

Related

Putting two NGINX servers in series

Is there a way to put two NGINX servers in series?
In my configuration I have multiple docker-compose instances of containers, which all run the same web applications. In addition, I have two NGINX servers. The NGINX1 server is located on my physical machine, and the other NGINX server (NGINX2) is located inside a docker-compose container.
Is there a way, when connecting to the NGINX1 server, to automatically reach the APP1 application (which is inside a container) by passing through the second NGINX (NGINX2, which is also internal to the container), simply by typing the link "mydomain.com/app1" in a browser?
I know that a simpler solution would be to point the docker-compose container directly at the external NGINX, but could I apply the scenario described instead?
For better understanding, I made a simple image showing my architecture.
[Image: architecture of the project]
Here is my NGINX1 config file:
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 9999;

    server {
        listen 80;
        server_name client1.nginx.loc;
        access_log logs/nginx_client_loc-access.log;
        error_log logs/nginx_client_loc-error.log;

        location /loki {
            #proxy_http_version 1.1;
            #proxy_set_header Upgrade $http_upgrade;
            #proxy_set_header Connection "Upgrade";
            #proxy_set_header Host $http_host;
            proxy_pass http://172.29.161.227:3100;
        }

        #error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
And here is the second NGINX config (NGINX2, internal to the container):
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 99999;

    server {
        listen 80;
        server_name localhost 127.0.0.1;
        resolver 127.0.0.11;

        location /APP1 {
            proxy_pass http://APP1/content;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_read_timeout 300;
            proxy_connect_timeout 300;
            proxy_send_timeout 300;
        }

        location /App2 {
            include /etc/nginx/mime.types;
            proxy_pass http://APP2/targets;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
Thanks so much
If I understood correctly, you want NGINX1 to pass traffic to NGINX2, which would pass the packet onward to APP1?
In this case, the solution is rather straightforward:
Configure NGINX1 to send the packet to a specific port, e.g. port 777. Then add a listener in NGINX2 that listens on port 777 and forwards the request.
NGINX1:
http {
    ...
    server {
        listen 80;
        ...
        location /loki {
            #proxy_http_version 1.1;
            #proxy_set_header Upgrade $http_upgrade;
            #proxy_set_header Connection "Upgrade";
            #proxy_set_header Host $http_host;
            proxy_pass http://172.29.161.227:3100;
        }

        location /APP1 {
            proxy_pass <URL for NGINX2>:777;
            proxy_read_timeout 300;
            proxy_connect_timeout 300;
            proxy_send_timeout 300;
        }

        #error_page 404 /404.html;
        ...
    }
NGINX2:
http {
    include mime.types;
    ...
    server {
        listen 80;
        ...
    }

    server {
        listen 777;
        server_name localhost 127.0.0.1;
        resolver 127.0.0.11;

        location /APP1 {
            proxy_pass http://APP1/content;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_read_timeout 300;
            proxy_connect_timeout 300;
            proxy_send_timeout 300;
        }
    }
    ...
This way, a packet that arrives at /APP1 is forwarded by NGINX1 to port 777 of NGINX2, which in turn forwards it to the APP1 content.
Also, if you could include ports in your architecture diagram next time, it would make the packet flow clearer.
Hope this helps.

Rails + Nginx + activestorage + directupload with large files

I'm not able to upload files larger than 4.7 GB; anything larger fails with a 404 Not Found.
It works just fine in development without Nginx, so I assume Nginx is the source of the problem here.
Nginx config:
upstream bench {
    server 127.0.0.1:3001;
}

server {
    server_name servername;
    proxy_http_version 1.1;
    root /path/to/server;
    proxy_max_temp_file_size 10024m;
    client_body_in_file_only on;
    client_body_buffer_size 1M;
    client_max_body_size 100G;

    location / {
        try_files $uri @bench;
    }

    location /cable {
        proxy_http_version 1.1;
        proxy_pass http://bench/cable;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    location @bench {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://bench;
        proxy_headers_hash_max_size 512;
        proxy_headers_hash_bucket_size 128;
        proxy_buffering off;
        proxy_request_buffering off;
    }

    error_page 500 502 503 504 /500.html;
    keepalive_timeout 10;
}
routes.rb
Rails.application.routes.draw do
  resources :benches, :path => "benchmarks"
  root 'benches#index'
end
The problem wasn't Nginx. The service URL that Active Storage creates when you start the upload is only valid for 5 minutes by default, so if your file takes more than 5 minutes to upload, you're out of luck. Extending the expiry (e.g. in config/application.rb or an environment file) fixes it:
config.active_storage.service_urls_expire_in = 1.hour

Redirecting to an invalid URI: Jenkins in a Docker container behind Nginx

I successfully pulled an image from the official Jenkins Docker Hub repository and ran a container with the following parameters:
docker run -d --name=jenkins -p 8080:8080 -p 50000:50000 -e JENKINS_OPTS="--prefix=/build" -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
Also, I have Nginx installed on my host (not in a container!). Here is the Nginx configuration:
upstream jenkins {
    server localhost:8080;
    keepalive 16;
}

server {
    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    server_name example.com www.example.com;
    ignore_invalid_headers off;

    location /build/ {
        proxy_pass http://jenkins;
        proxy_http_version 1.1;
        proxy_redirect default;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Connection "";
        proxy_set_header X-Forwarded-Proto: $scheme;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffering off;
        proxy_request_buffering off;
    }

    access_log /var/log/nginx/jenkins.access.log;
    error_log /var/log/nginx/jenkins.error.log;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    listen 80;
    return 301 https://example.com$request_uri;
}
I am trying to access Jenkins via https://example.com/build. It asks me to input the initial admin password. After successful submission it gives me this page:
Page URL is https://example.com/build/:%20https://example:80/build/
I tried to add the prefix... I tried restarting both of them, but nothing changes.
Simply put the proxy_set_header directives before proxy_pass, like so:
location /build/ {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Connection "";
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://jenkins;
    proxy_http_version 1.1;
    proxy_redirect default;
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffering off;
    proxy_request_buffering off;
}

nginx - port forwarding to another container

I'm trying to use an Nginx container to port-forward communication to other containers such as Ansible, Git, Jenkins, and so on.
When using nci-ansible-ui as Ansible, I can reach the Ansible UI at the external host server IP, http://10.97.98.6:3000 (which is according to the instructions). Yet when I try to use Nginx to forward to this container, 404 and 502 errors appear in the log and the web page does not load properly. In this case I would like to use http://10.97.98.6/Master_Ansible as the URL. Note that http://172.18.0.5 is the Docker network IP given to the container.
server {
    listen 80;
    listen 3000;
    server_name localhost;
    #charset koi8-r;
    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log warn;
    root /etc/nginx/html;
    index index.html index.php;

    location / {
        root /etc/nginx/html;
        try_files $uri /$uri $uri/ =404;
    }

    location /Master_Ansible {
        proxy_pass http://172.18.0.5:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
        proxy_max_temp_file_size 0;
        proxy_connect_timeout 150;
        proxy_send_timeout 100;
        proxy_read_timeout 100;
        proxy_buffer_size 8k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}
Any idea about this?
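One thing worth checking: with the location and proxy_pass written as above, Nginx forwards the original /Master_Ansible/... path to the backend, and an app that only expects requests at / will answer 404. Below is a minimal sketch of the prefix-stripping variant, assuming the Ansible UI tolerates being served this way; the container IP and port are the ones from the question.
# Sketch: strip the /Master_Ansible prefix before proxying.
# Because proxy_pass here carries a URI (the trailing "/"), Nginx replaces
# the matched /Master_Ansible/ prefix with "/", so the backend sees requests
# rooted at "/". Absolute links generated by the app may still require the
# app itself to be sub-path aware.
location /Master_Ansible/ {
    proxy_pass http://172.18.0.5:3000/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
The 502s, on the other hand, usually mean Nginx cannot reach 172.18.0.5:3000 at all, e.g. because the Nginx container is not attached to the same Docker network as the Ansible container.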

nginx - configuration issue - production mode - code 403

I want to configure Nginx with Rails 4 and run my application in production mode. The problem is that I get a 403 when I run rails s -e production and type localhost in the browser. Naturally, I set 755 permissions on all files in my application folder. Here is my nginx.conf:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    # include /etc/nginx/conf.d/*.conf;
    # include /etc/nginx/sites-enabled/*;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    client_max_body_size 50M;
    # fastcgi_buffers 8 16k;
    # fastcgi_buffer_size 32k;
    # fastcgi_connect_timeout 300;
    # fastcgi_send_timeout 300;
    # fastcgi_read_timeout 300;

    upstream proxy-user {
        server 127.0.0.1:2000;
    }

    upstream thin_cluster {
        server unix:/tmp/thin.0.sock;
        # server unix:/tmp/thin.1.sock;
        # server unix:/tmp/thin.2.sock;
    }

    server {
        listen 80;
        server_name localhost;
        # access_log /var/log/nginx-access.log;
        root /home/user/Apps/myapp/public;

        location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|mp3|flv|mpeg|avi)$ {
            try_files $uri @app;
        }

        location /home/user/Apps/myapp/ {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://proxy-user;
            if (!-f $request_filename) {
                proxy_pass http://proxy-user;
                break;
            }
        }
    }

    server {
        listen 443;
        server_name _;
        ssl on;
        ssl_certificate /etc/nginx/ssl/server.crt;
        ssl_certificate_key /etc/nginx/ssl/server.key;
        ssl_protocols SSLv3 TLSv1;
        ssl_ciphers HIGH:!ADH:!MD5;
        access_log /var/log/nginx-access-ssl.log;
        root /home/user/Apps/myapp/public;

        location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|mp3|flv|mpeg|avi)$ {
            try_files $uri @app;
        }

        location /home/user/Apps/myapp/ {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-FORWARDED_PROTO https;
            proxy_set_header SSL_CLIENT_S_DN $ssl_client_s_dn;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://proxy-user;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
You have a few issues in your config. I'll write what I would have done, and you can tell me whatever questions you have. I'll assume that the app server is on port 2000, because that's the upstream you used.
I'll also ignore the http block and only show the server and upstream blocks.
upstream rails {
    server 127.0.0.1:2000;
}

server {
    server_name domain.com; # or whichever
    listen 80;

    # ssl settings start
    listen 443 ssl;
    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    # ssl settings end

    root /home/user/Apps/myapp/public;

    error_page 500 502 503 504 /50x.html;

    access_log /var/log/nginx/domain-access.log;
    error_log /var/log/nginx/domain-error.log;

    location @pass_to_rails {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-FORWARDED_PROTO $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://rails;
    }

    location / {
        try_files $uri $uri/ @pass_to_rails;
    }
}
You should place this inside sites-available and symlink it into sites-enabled if you are on a Debian/Ubuntu distro, or use /etc/nginx/conf.d if you are on another distro, to keep things tidy and maintainable.
Also make sure to uncomment one of these lines, depending on which layout you use:
# include /etc/nginx/conf.d/*.conf;
# include /etc/nginx/sites-enabled/*;
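For illustration only, here is a trimmed sketch of how the http block might look with the sites-enabled layout active; the file name myapp.conf is hypothetical, and the remaining directives are the ones from the question:
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    client_max_body_size 50M;

    # Per-site upstream and server blocks live in
    # /etc/nginx/sites-available/myapp.conf (hypothetical name),
    # symlinked into sites-enabled.
    include /etc/nginx/sites-enabled/*;
}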
