Spring Boot application behind nginx with HTTPS (Docker)

I have two Docker containers:
One container runs my Spring Boot application, which listens on port 8080.
This container exposes port 8080 to other Docker containers.
Its IP in the Docker network is 172.17.0.2.
The other container runs nginx, which publishes port 80.
I can successfully put my Spring Boot app behind nginx with the following conf in my nginx container:
server {
    server_name <my-ip>;
    listen 80;

    location / {
        proxy_pass http://172.17.0.2:8080/;
    }
}
Doing a GET request to my REST API (http://my-ip/context-url) works fine.
I am now trying to put my application behind nginx with HTTPS. My nginx conf is as follows:
server {
    server_name <my-ip>;
    listen 80;
    return 301 https://$server_name$request_uri;
}

server {
    server_name <my-ip>;
    listen 443;
    ssl on;
    ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;

    location / {
        proxy_pass http://172.17.0.2:8080/;
    }
}
However, I now cannot access my application through either HTTP or HTTPS:
HTTP redirects to HTTPS, and the result is ERR_CONNECTION_REFUSED.

The problem was that I was publishing only port 80 when running the nginx container, not port 443. The nginx configuration is right.
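For reference, a minimal sketch of a corrected run command, assuming the stock nginx image with the conf, certificate, and key mounted in (the image name and host paths below are illustrative, not from the original post):
# Publish both 80 and 443 so the HTTPS listener is reachable from the host
docker run -d \
  -p 80:80 \
  -p 443:443 \
  -v /path/to/default.conf:/etc/nginx/conf.d/default.conf:ro \
  -v /path/to/nginx-selfsigned.crt:/etc/ssl/certs/nginx-selfsigned.crt:ro \
  -v /path/to/nginx-selfsigned.key:/etc/ssl/private/nginx-selfsigned.key:ro \
  nginx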

Related

Enable Docker port access only with Nginx reverse proxy

I have a Docker container on port 8081 running on CentOS 7, and a reverse proxy with Nginx.
My domain has a Let's Encrypt SSL certificate installed, and it works well when I access "https://my.example.com": it sends me to my Docker application on 8081.
But when I access "http://my.example.com:8081", I can still reach my Docker application. I don't want to enable this, or any HTTP access at all.
I want to reach 8081 only through the Nginx reverse proxy (which forces me to HTTPS). I think it may be some configuration in my iptables, but I don't have experience with it.
Can someone help me?
Thanks!
This is my conf.d file in Nginx
server {
    server_name my.example.com;

    location / {
        proxy_pass http://localhost:8081;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/my.example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/my.example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = my.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name my.example.com;
    return 404; # managed by Certbot
}
iptables does not understand the difference between HTTP and HTTPS; it only works at the level of IP addresses, ports, and MAC addresses. If you try to block port 8081 with iptables, even your HTTPS connection will be dropped or rejected, depending on your choice of rule.
If your Docker container is accessible from the outside without passing through the reverse proxy, it is a container configuration issue; if your nginx reverse proxy lets HTTP packets through, then it is an nginx configuration issue. I think we need more details from your side.
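One way to handle it on the container side (a suggestion, not something from the original answer) is to publish the application port only on the loopback interface, so nginx on the same host can still reach it but outside clients cannot:
# Bind the published port to 127.0.0.1 only; external clients can no longer hit
# http://my.example.com:8081 directly, while the local nginx proxy still can.
# (the image name is a placeholder)
docker run -d -p 127.0.0.1:8081:8081 my-app-image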
I resolved this issue using the firewall application from my hosting provider (Vultr).
There, I left 8081 open for local access only, so now it's not possible to reach it without passing through the Nginx reverse proxy!

nginx reverse proxy proxy_pass wildcard

I have an application running on port 4343. This is a single page app, so hitting http://myApp:4343 will dynamically redirect me to somewhere like http://myApp:4343/#/pageOne.
Both the nginx container and the myApp container are running on the same Docker network, so they can resolve each other by container name.
I'm trying to proxy this via nginx with:
server {
    listen 80;
    server_name localhost;

    location /myApp {
        proxy_pass http://myApp:4343;
    }
}
How do I wildcard the rule?
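One common approach (a sketch, not from the original thread; it assumes the SPA requests all of its assets under the /myApp/ prefix) is to match the prefix with a trailing slash and let nginx strip it when passing upstream:
# Requests to /myApp/... are forwarded with the prefix removed,
# e.g. /myApp/static/app.js  ->  http://myApp:4343/static/app.js
location /myApp/ {
    proxy_pass http://myApp:4343/;
}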

Need help troubleshooting custom docker image for nginx

I want to install a simple web service to browse a file directory tree on an internal server, and to comply with company policy it needs to use TLS ("https://...").
First I tried several images including davralin/nginx-autoindex and mounted the directory I want this service to share. It worked like a charm, but it didn't use a TLS connection.
To get something to work with TLS, I started from scratch and created my own default.conf file for nginx:
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name localhost;

    ssl_certificate /etc/ssl/certs/my-cert.crt;
    ssl_certificate_key /etc/ssl/certs/server.key;

    location / {
        root /usr/share/nginx/html;
        autoindex on;
    }

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Then I created the following Dockerfile:
FROM nginx:stable-alpine
MAINTAINER lsiden at gmail.com
COPY default.conf /etc/nginx/conf.d
COPY my-cert.crt /etc/ssl/certs/
COPY server.key /etc/ssl/certs/
Then I build it:
docker build -t lsiden/nginx-autoindex-tls .
Then I run it:
docker run -dt -v /var/www/data/files:/usr/share/nginx/html:ro -p 3453:80 lsiden/nginx-autoindex-tls
However, I can't reach it even from the host machine. I tried:
$ telnet localhost 3453
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connection closed by foreign host.
I tried to read log messages:
docker logs <container-id>
Silence.
I've already confirmed that the docker proxy is listening to the port:
tcp6 0 0 :::3453 :::* LISTEN 14828/docker-proxy
The port shows up under "tcp6" but not "tcp" (IPv4), but I read here that netstat will show only the IPv6 socket even if the port is available on both. To be sure, I verified:
sudo sysctl net.ipv6.bindv6only
net.ipv6.bindv6only = 0
To be thorough, I already opened this port in iptables, although iptables can't be playing a role here if I can't even get to it from the same machine via localhost.
I'm hoping someone with good networking chops can tell me where to look next. I can't figure out what I missed.
If the configuration you shared is complete, you are not listening on port 80 inside your container at all.
Change your configuration to something like this if you want to redirect incoming traffic on port 80 to 443:
server {
    listen 80;
    listen [::]:80;

    location / {
        return 301 https://$server_name$request_uri;
    }
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name localhost;

    ssl_certificate /etc/ssl/certs/my-cert.crt;
    ssl_certificate_key /etc/ssl/certs/server.key;

    location / {
        root /usr/share/nginx/html;
        autoindex on;
    }

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
If you don't want to do this, just change your docker run command:
docker run -dt -v /var/www/data/files:/usr/share/nginx/html:ro -p 3453:443 lsiden/nginx-autoindex-tls
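If you go with the second option (publishing container port 443 on host port 3453), a quick check from the host, not part of the original answer, would be:
# -k skips certificate verification, which is needed for a self-signed cert
curl -vk https://localhost:3453/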

nginx responds to HTTPS but not HTTP

I am using the dockerized Nextcloud as shown here: https://github.com/nextcloud/docker/tree/master/.examples/docker-compose/with-nginx-proxy-self-signed-ssl/mariadb/fpm
I set this up with port 80 mapped to 12345 and port 443 mapped to 12346. When I go to https://mycloud.example.com:12346, I get the self-signed certificate prompt, but otherwise everything is fine and I see the Nextcloud web UI. But when I go to http://mycloud.example.com:12345, nginx (the proxy container) gives the error "503 Service Temporarily Unavailable". The error also shows up in the proxy's logs.
How can I diagnose the issue? Why is HTTPS working but not HTTP?
Can you provide your docker command starting Nextcloud, or your docker-compose file?
Diagnosis is as usual with Docker: get the id of the currently running container
docker ps
Then check the logs
docker logs [id or name of your container]
docker-compose logs [name of your service]
Connect into the container
docker exec -ti [id or name of your container] [bash, or ash if alpine based container]
There, read the nginx conf files involved. In your case I'd check the redirect being made from HTTP to HTTPS; most likely it's something like the example below, with no specific port given for HTTPS, hence port 443, hence not working:
server {
    listen 80;
    server_name my.domain.com;
    return 301 https://$server_name$request_uri; # <======== no port = 443
}

server {
    listen 443 ssl;
    server_name my.domain.com;

    # add Strict-Transport-Security to prevent man in the middle attacks
    add_header Strict-Transport-Security "max-age=31536000" always;

    [....]
}
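If that is indeed the cause, one possible fix (a sketch adapted to the port mapping described in the question, not the actual Nextcloud proxy config) is to make the redirect point at the externally published HTTPS port:
server {
    listen 80;
    server_name mycloud.example.com;
    # redirect to the published HTTPS port (12346), not the default 443
    return 301 https://$server_name:12346$request_uri;
}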

Jenkins behind nginx reverse proxy

I'm trying to keep a Jenkins container (Docker) behind an nginx reverse proxy. It works fine at the root path, https://example.com/, but it returns 502 Bad Gateway when I add a path segment, https://example.com/jenkins.
The Docker container for Jenkins is run like this:
docker container run -d -p 127.0.0.1:8080:8080 jenkins/jenkins
Here is my config:
server {
    listen 80;
    root /var/www/html;
    server_name schoolcloudy.com www.schoolcloudy.com;

    location / {
        proxy_pass http://localhost:8000;
    }
}

# Virtual Host configuration for example.com
upstream jenkins {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name jenkins;

    location /jenkins {
        proxy_pass http://jenkins;
        proxy_redirect 127.0.0.1:8080 https://schoolcloudy.com/jenkins;
    }
}
Specify the Jenkins container's network with the --network=host flag when you run the container. This way the container will be able to interact with the host network; alternatively, use the container's IP explicitly in the Nginx conf.
Good practice in such questions is to use the official documentation:
wiki.jenkins.io
I've configured Jenkins behind an Nginx reverse proxy several times; the wiki has worked for me each time.
P.S.: it looks like the proxy_pass option value in your config should be changed to http://127.0.0.1:8080
