Enable Docker port access only with Nginx reverse proxy

I have a Docker container on port 8081 running on CentOS 7, and a reverse proxy with Nginx.
My domain has a Let's Encrypt SSL certificate installed, and it works well when I access "https://my.example.com": it proxies me to my Docker application on 8081.
But when I access "http://my.example.com:8081", I can still reach my Docker application... I don't want to allow this... I don't want to allow any plain HTTP access.
I want 8081 to be reachable only through the Nginx reverse proxy (which forces me to HTTPS)... I think it may be some configuration in my iptables, but I don't have experience with it.
Can someone help me?
Thanks!
This is my conf.d file in Nginx:
server {
    server_name my.example.com;

    location / {
        proxy_pass http://localhost:8081;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/my.example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/my.example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = my.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name my.example.com;
    return 404; # managed by Certbot
}

iptables does not understand the difference between HTTP and HTTPS; it only works at the level of IP addresses, ports, and MAC addresses. If you block port 8081 outright with iptables, even your HTTPS path will break, because Nginx proxies to the container on that same port; the connection will be dropped or rejected depending on the target you choose.
If your Docker container is accessible from the outside without passing through the reverse proxy, it is a container configuration issue; if your Nginx reverse proxy lets plain HTTP through, then it is an Nginx configuration issue. I think we need more details from your side.

I resolved this issue using the firewall application from my hosting provider (Vultr).
There, I restricted port 8081 to local access only, so now it's not possible to reach it without passing through the Nginx reverse proxy!
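For anyone without a provider firewall, the same result can be had on the host itself. Note that plain INPUT-chain iptables rules don't apply to published Docker ports, since Docker NATs that traffic before it reaches INPUT; Docker's documentation points to the DOCKER-USER chain instead. A minimal sketch, assuming the container publishes host port 8081 ("my-app-image" and the container-side port are placeholders):

# Option 1: publish the port on loopback only. Nginx on the same host can
# still proxy to 127.0.0.1:8081, but the port is unreachable from outside.
docker run -d -p 127.0.0.1:8081:8081 my-app-image

# Option 2: drop outside traffic in the DOCKER-USER chain. The packet is
# DNAT-ed before reaching this chain, so match the original destination
# port via conntrack.
iptables -I DOCKER-USER -p tcp -m conntrack --ctorigdstport 8081 --ctdir ORIGINAL -j DROP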

Related

Gitlab docker instance doesn't take my external URL

I launched a GitLab container like this:
sudo docker run --detach --hostname MY_URL.com \
    --publish 4433:443 --publish 8080:80 --publish 2222:22 \
    --name gitlab \
    --volume /data/gitlab/config:/etc/gitlab \
    --volume /data/gitlab/logs:/var/log/gitlab \
    --volume /data/gitlab/data:/var/opt/gitlab \
    gitlab/gitlab-ce:latest
And I have a NGINX configuration like this:
server {
    server_name MY_URL.com;

    location / {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $http_host;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/MY_URL.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/MY_URL.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = MY_URL.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name MY_URL.com;
    return 404; # managed by Certbot
}
With this configuration everything works fine: I can type https://MY_URL.com in the address bar of my browser and get access to my GitLab.
The problem is that the clone link shown in repositories is "HTTP" and not "HTTPS". Moreover, it seems that there is a configuration somewhere telling my CI jobs to use "HTTP://MY_URL.com" (and it doesn't work, because I get an HTTP basic auth error, which I think I wouldn't get with HTTPS). I read the documentation and thought I just had to modify the external_url parameter:
sudo vi /data/gitlab/config/gitlab.rb
adding external_url 'https://MY_URL.com', then:
sudo docker exec -it gitlab gitlab-ctl reconfigure
But after doing that I always get a "bad redirection", whether I browse to "http://MY_URL.com" or "https://MY_URL.com". In the nginx logs I see no errors, only 301s in the access.log.
What am I doing wrong here?
Thanks a lot in advance...
Because you are providing an external NGINX configuration that also terminates SSL, you have to apply a configuration to your GitLab instance for external proxy/load-balancer SSL termination.
Normally, when you don't provide external_url, the system host name is used and HTTPS is disabled. If you provide an external_url with an https:// scheme, this will activate HTTPS, which is not what you want since you are using an external server (NGINX) for SSL/TLS termination.
external_url "https://myhost.com"
nginx['listen_port'] = 80
nginx['listen_https'] = false
This should be all you need to get GitLab to display the correct hostname in the UI without any other behavior changes.
You'll probably also want to change the proxy headers since you already have a proxy server in front of GitLab. You'll want to configure trusted proxies as well as the real-ip header to make sure GitLab correctly logs the IP address of your users (instead of the IP of your proxy).
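A sketch of those extra gitlab.rb settings; the 172.17.0.1 address is an assumption (the default docker0 bridge gateway, i.e. where the host's NGINX appears from inside the container), so substitute the address your proxy actually connects from:

# Trust the external proxy and take the client IP from its X-Real-IP header.
# 172.17.0.1 is an assumption: the default docker0 gateway address.
gitlab_rails['trusted_proxies'] = ['172.17.0.1']
nginx['real_ip_header'] = 'X-Real-IP'
nginx['real_ip_trusted_addresses'] = ['172.17.0.1']

Then run sudo docker exec -it gitlab gitlab-ctl reconfigure again, as above.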

Need help troubleshooting custom docker image for nginx

I want to install a simple web service to browse a file directory tree on an internal server, and to comply with company policy it needs to use TLS ("https://...").
First I tried several images including davralin/nginx-autoindex and mounted the directory I want this service to share. It worked like a charm, but it didn't use a TLS connection.
To get something to work with TLS, I started from scratch and created my own default.conf file for nginx:
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name localhost;

    ssl_certificate /etc/ssl/certs/my-cert.crt;
    ssl_certificate_key /etc/ssl/certs/server.key;

    location / {
        root /usr/share/nginx/html;
        autoindex on;
    }

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Then I created the following Dockerfile:
FROM nginx:stable-alpine
MAINTAINER lsiden at gmail.com
COPY default.conf /etc/nginx/conf.d
COPY my-cert.crt /etc/ssl/certs/
COPY server.key /etc/ssl/certs/
Then I build it:
docker build -t lsiden/nginx-autoindex-tls .
Then I run it:
docker run -dt -v /var/www/data/files:/usr/share/nginx/html:ro -p 3453:80 lsiden/nginx-autoindex-tls
However, I can't reach it even from the host machine. I tried:
$ telnet localhost 3453
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connection closed by foreign host.
I tried to read log messages:
docker logs <container-id>
Silence.
I've already confirmed that the docker proxy is listening on the port:
tcp6 0 0 :::3453 :::* LISTEN 14828/docker-proxy
The port shows up on tcp6 but not "tcp" (IPv4), but I have read that netstat will show only the IPv6 socket even when the port is bound on both. To be sure, I verified:
sudo sysctl net.ipv6.bindv6only
net.ipv6.bindv6only = 0
To be thorough, I already opened this port in iptables, although iptables can't be playing a role here if I can't even get to it from the same machine via localhost.
I'm hoping someone with good networking chops can tell me where to look next. I can't figure out what I missed.
In case the configuration you shared is complete, you are not listening on port 80 inside your container at all.
Change your configuration to something like the following if you want to redirect incoming traffic on port 80 to 443:
server {
    listen 80;
    listen [::]:80;

    location / {
        return 301 https://$host$request_uri;
    }
}
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name localhost;

    ssl_certificate /etc/ssl/certs/my-cert.crt;
    ssl_certificate_key /etc/ssl/certs/server.key;

    location / {
        root /usr/share/nginx/html;
        autoindex on;
    }

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
If you don't want to do this, just change your docker run command:
docker run -dt -v /var/www/data/files:/usr/share/nginx/html:ro -p 3453:443 lsiden/nginx-autoindex-tls

nginx config reverse proxy + docker + http to https redirect

Context
I have an nginx container in front of several other containers. One of them runs a Node.js front end, published on host port 10001 from container port 3000.
I've managed to piece together an nginx config that partially works, allowing SSL termination on 443 to the container on 10001.
However, I now need to redirect all traffic to HTTPS, and ideally prevent port 10001 from working directly; some sort of HTTP catch-all?
Here is my localhost config for this.
user www-data;
error_log /var/log/nginx/error.log warn;
pid /run/nginx.pid;
worker_processes 2;

events {
    worker_connections 1024;
    multi_accept off;
}

stream {
    upstream stream_backend {
        server 172.17.0.1:10001;
        # server backend2.example.com:12345;
        # server backend3.example.com:12345;
    }

    server {
        listen 443 ssl;
        proxy_pass stream_backend;
        ssl_certificate /etc/letsencrypt/live/localhost/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/localhost/privkey.pem;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_session_cache shared:SSL:20m;
        ssl_session_timeout 4h;
        ssl_handshake_timeout 30s;
        #...
    }
}
Beyond this, everything I try produces a syntax error or some other error. Can anyone offer some plain advice?
If you are using docker-compose and your API and nginx are attached to the same bridge network, you can expose the port on your API container and remove the ports directive. This lets nginx communicate with the API container while leaving no publicly reachable port open to the API.
ports:
  - "8000:80"
expose:
  - "8000"
Above, the ports directive opens port 8000 to the public, while expose makes 8000 available only over the local subnet or bridge network. So, in this scenario, my suggestion is to remove the ports section; a full sketch follows. expose also works with the run command, but you will need to create a bridge network manually in that case.
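For illustration, a minimal docker-compose sketch of this layout; the service names, image, and port numbers are assumptions, not taken from the question:

version: "3"
services:
  api:
    build: ./api       # hypothetical build context for the API
    expose:
      - "8000"         # reachable by nginx over the compose network only
  nginx:
    image: nginx:stable
    ports:
      - "443:443"      # the only port published to the outside world
    depends_on:
      - api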
I had the same issue recently. Actually, I had several issues, but this was one of them. See my question and answer here for more detail.
NGINX reverse proxy not working for .NET core webAPI running in Docker

nginx responds to HTTPS but not HTTP

I am using the dockerized Nextcloud as shown here: https://github.com/nextcloud/docker/tree/master/.examples/docker-compose/with-nginx-proxy-self-signed-ssl/mariadb/fpm
I set this up with port 80 mapped to 12345 and port 443 mapped to 12346. When I go to https://mycloud.example.com:12346, I get the self-signed-certificate prompt, but otherwise everything is fine and I see the Nextcloud web UI. But when I go to http://mycloud.example.com:12345, nginx (the proxy container) gives the error "503 Service Temporarily Unavailable". The error also shows up in the proxy's logs.
How can I diagnose the issue? Why is HTTPS working but not HTTP?
Can you provide your docker command starting Nextcloud, or your docker-compose file?
Diagnosis is as usual with Docker: get the id of the currently running container
docker ps
Then check the logs
docker logs [id or name of your container]
docker-compose logs [name of your service]
Connect into the container
docker exec -ti [id or name of your container] [bash, or ash for an alpine-based container]
There, read the nginx conf files involved. In your case I'd check the redirect being made from HTTP to HTTPS; most likely it's something like the block below, with no explicit port on the https URL, so the browser is sent to the default port 443 instead of your mapped port 12346, hence not working.
server {
    listen 80;
    server_name my.domain.com;
    return 301 https://$server_name$request_uri; # <======== no port = 443
}
server {
    listen 443 ssl;
    server_name my.domain.com;

    # add Strict-Transport-Security to prevent man in the middle attacks
    add_header Strict-Transport-Security "max-age=31536000" always;

    [....]
}
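If that is indeed the cause, a sketch of the fix is to put the externally mapped HTTPS port (12346, per the question) into the redirect:

server {
    listen 80;
    server_name my.domain.com;
    # redirect to the published HTTPS port, not the default 443
    return 301 https://$server_name:12346$request_uri;
}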

spring boot application behind nginx with https

I have two Docker containers:
One container runs my Spring Boot application, which listens on port 8080.
This container exposes port 8080 to other Docker containers.
Its IP on the Docker network is 172.17.0.2.
The other container runs nginx, which publishes port 80.
I can successfully put my Spring Boot app behind nginx with the following conf in my nginx container:
server {
    server_name <my-ip>;
    listen 80;

    location / {
        proxy_pass http://172.17.0.2:8080/;
    }
}
Doing a GET request to my REST API (http://my-ip/context-url) works fine.
I am now trying to put my application behind nginx with HTTPS. My nginx conf is as follows:
server {
    server_name <my-ip>;
    listen 80;
    return 301 https://$server_name$request_uri;
}
server {
    server_name <my-ip>;
    listen 443;

    ssl on;
    ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;

    location / {
        proxy_pass http://172.17.0.2:8080/;
    }
}
However, I cannot access my application now through either HTTP or HTTPS.
HTTP redirects to HTTPS, and the result is ERR_CONNECTION_REFUSED.
The problem was that I was publishing only port 80 when running the nginx container, not port 443. The nginx configuration is right.
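For completeness, a sketch of a corrected run command; the image name is a placeholder:

# publish 443 alongside 80 so the HTTPS server block is reachable
docker run -d -p 80:80 -p 443:443 my-nginx-image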
