How to implement (Certbot) SSL using Docker with the Nginx image

I'm trying to implement SSL in my application using Docker with the nginx image. I have two apps, one for the back-end (api) and another for the front-end (admin). Everything works over HTTP on port 80, but I need to use HTTPS. This is my nginx config file...
upstream ulib-api {
    server 10.0.2.229:8001;
}

server {
    listen 80;
    server_name api.ulib.com.br;

    location / {
        proxy_pass http://ulib-api;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    client_max_body_size 100M;
}

upstream ulib-admin {
    server 10.0.2.229:8002;
}

server {
    listen 80;
    server_name admin.ulib.com.br;

    location / {
        proxy_pass http://ulib-admin;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    client_max_body_size 100M;
}
I found some tutorials, but they all use docker-compose. I need to set it up with a Dockerfile. Can anyone shed some light?
... I'm using an ECS instance on AWS and the project is built with CI/CD.

This is just one of the possible ways:
First, issue the certificate using certbot. You will end up with a couple of *.pem files.
There are plenty of tutorials on installing and running certbot on different systems; I used Ubuntu with the command certbot --nginx certonly. You need to run this command on the host that serves your domain, because certbot verifies that you own the domain through a series of challenges.
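For example, to issue a single certificate covering both subdomains from the question (this is just an illustration; run it on the host the DNS records point to and answer the challenges certbot presents):

sudo certbot --nginx certonly -d api.ulib.com.br -d admin.ulib.com.br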
Second, create the nginx container. You will need a proper nginx.conf and you have to make the certificates available to the container. I use Docker volumes, but that is not the only way.
My nginx.conf looks like the following:
http {
    server {
        listen 443 ssl;

        ssl_certificate         /cert/<yourdomain.com>/fullchain.pem;
        ssl_certificate_key     /cert/<yourdomain.com>/privkey.pem;
        ssl_trusted_certificate /cert/<yourdomain.com>/chain.pem;

        # SSLv3 and TLS < 1.2 are obsolete and insecure; stick to modern protocols
        ssl_protocols TLSv1.2 TLSv1.3;
        ...
    }
}
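Adapted to the api.ulib.com.br app from the question, the HTTPS server block might look roughly like this; the certificate paths assume the /cert volume layout described below, and the existing port-80 block can be kept as-is (or turned into a redirect to HTTPS):

server {
    listen 443 ssl;
    server_name api.ulib.com.br;

    # assumed layout: certs copied/mounted under /cert/<domain>/
    ssl_certificate     /cert/api.ulib.com.br/fullchain.pem;
    ssl_certificate_key /cert/api.ulib.com.br/privkey.pem;

    location / {
        proxy_pass http://ulib-api;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    client_max_body_size 100M;
}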
Last, you run nginx with proper volumes connected:
docker run -d -v $PWD/nginx.conf:/etc/nginx/nginx.conf:ro -v $PWD/cert:/cert:ro -p 443:443 nginx:1.15-alpine
Notice:
I mapped $PWD/cert into the container as /cert. This is the folder where the *.pem files are stored; they live under ./cert/example.com/*.pem
Inside nginx.conf you refer to these certificates with the ssl_... directives
You need to publish port 443 (-p 443:443) to be able to connect
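Since the question asks for a Dockerfile-based setup (ECS with CI/CD) rather than docker-compose, one possible approach is to bake nginx.conf into the image and keep providing the certificates at runtime (for example through an ECS volume or EFS mount at /cert). This is only a sketch; the file names and image tag are assumptions:

# Dockerfile - nginx.conf is assumed to sit next to it in the build context
FROM nginx:1.15-alpine

# bake the reverse-proxy configuration into the image
COPY nginx.conf /etc/nginx/nginx.conf

# certificates are intentionally NOT copied into the image;
# mount them at runtime so they match the ssl_* paths in nginx.conf
EXPOSE 80 443

Build and run it the same way as the stock image, just without mounting nginx.conf:

docker build -t ulib-nginx .
docker run -d -v $PWD/cert:/cert:ro -p 80:80 -p 443:443 ulib-nginx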

Related

NGINX Reverse Proxy for JupyterHub inside docker

I want to configure nginx as a reverse proxy on my host Ubuntu VM to point to JupyterHub running inside Docker on port 8888. I am using subpaths for this rather than subdomains, and my corporate firewall only gives me access to ports 80 and 443 (all other ports are blocked), which is why I can't use a rewrite. I came up with the following nginx configuration, which works, but it does not serve the assets from JupyterHub (CSS files, images and so on).
The path myservername.com/jphub displays the page, but the assets are loaded from myservername.com (without the /jphub subpath).
E.g. the logo is loaded from myservername.com/hub/logo instead of myservername.com/jphub/hub/logo.
Does anyone know if I am doing this the right way? What should I change inside the config?
upstream jupyter {
    server localhost:8888;
    keepalive 32;
}

server {
    listen 80;
    server_name myservername.com;

    ssl_certificate /etc/ssl/cert-request/cert.pem;
    ssl_certificate_key /etc/ssl/private/cert.key;
    ssl_prefer_server_ciphers on;

    location /jphub/ {
        proxy_pass http://jupyter/;
        proxy_http_version 1.1;
        proxy_redirect default;
        proxy_redirect / /jphub/;
        proxy_redirect http://jupyter/ https://$host/jphub/;
        proxy_pass_header Set-Cookie;
        proxy_pass_header Cookie;
        proxy_pass_header X-Forwarded-For;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
        proxy_set_header X-Nginx-Proxy true;
        add_header X-Upstream $upstream_addr;
        proxy_read_timeout 86400;
    }
}
When the location path ends in / and proxy_pass carries a URI (here the trailing / in http://jupyter/), Nginx replaces the matched prefix before forwarding the request, so /jphub/... reaches the backend without the /jphub part.
To have it forward the full path, remove the trailing /, so you have
location /jphub {
    ...
    ...
}
in your Nginx configuration.
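A sketch of the adjusted block along those lines, with the URI part dropped from proxy_pass as well so the full /jphub/... path is passed through unchanged (JupyterHub itself then has to be configured with a matching base_url such as /jphub/ for its assets to resolve; the other directives are carried over from the question):

location /jphub {
    proxy_pass http://jupyter;   # no trailing slash: the original request URI is forwarded as-is
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_read_timeout 86400;
}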

Nexus3 stuck on initializing and not properly resolving content-type

I am running Nexus3 in a Docker container on a server that also runs an nginx reverse proxy. The problem is that when I try to access the Nexus repository from a browser, I get a broken page with many console errors. Here's what I see:
After looking at the network tab, I noticed that my server is not setting the proper Content-Type for my requests. This is an example of a request to a JS file:
Does anyone know what this could be? This is what my nginx.conf looks like:
server {
    listen 443 ssl http2;

    ssl_certificate /etc/ssl/confidential.com/fullchain.cer;
    ssl_certificate_key /etc/ssl/confidential.com/*.confidential.com.key;
    server_name confidential.com;

    location /test {
        proxy_pass http://nexus:8081/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto "https";
    }
}
You have:
location /test {
    proxy_pass http://nexus:8081/;
The context path of Nexus needs to match the context path served through the reverse proxy. Edit $workdir/etc/nexus.properties and set nexus-context-path=/test, then change the proxy_pass to proxy_pass http://nexus:8081/test.
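A sketch of both changes together (paths as described in the answer; where nexus.properties lives depends on how the container's data volume is set up):

# $workdir/etc/nexus.properties
nexus-context-path=/test

# nginx.conf
location /test {
    proxy_pass http://nexus:8081/test;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto "https";
}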

Broken UI and multiple console errors when deploying Nexus3 on Docker and an nginx reverse-proxy

I am running Nexus3 in Docker, as well as an nginx reverse proxy in Docker, and I have my own SSL certificate. I followed the instructions from Sonatype on how to properly set up a reverse proxy and use the Nexus3 certificates. The problem is that this is what I see when I go to my repository:
These are my errors:
What could cause this? This is my nginx.conf:
server {
    listen 443 ssl http2;

    ssl_certificate /etc/ssl/confidential.com/fullchain.cer;
    ssl_certificate_key /etc/ssl/confidential.com/*.confidential.com.key;
    server_name internal.confidential.com;

    location /test {
        proxy_pass http://nexus:8081/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto "https";
    }
}

Native Nginx reverse proxy to Docker container with Letsencrypt

I have an Ubuntu 18.04 LTS box with nginx installed and configured as a reverse proxy:
/etc/nginx/sites-enabled/default:
server {
    server_name example.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://0.0.0.0:3000;
    }
}
I have a website running in a Docker container listening on port 3000. With this configuration, if I browse to http://example.com I see the site.
I then installed Let's Encrypt using the standard install from their website, ran sudo certbot --nginx, and followed the instructions to enable HTTPS for example.com.
Now my /etc/nginx/sites-enabled/default looks like this, and I'm unable to load the site on either https://example.com or http://example.com:
server {
    server_name example.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://0.0.0.0:3000;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
Any ideas?
I figured it out. The problem wasn't with my nginx/Let's Encrypt config; it was a networking issue at the provider level (Azure).
I noticed the Network Security Group only allowed traffic on port 80. The solution was to add a rule for 443.
After adding this rule everything now works as expected.
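If you manage the NSG with the Azure CLI, the extra rule can be added along these lines (resource group, NSG name and priority are placeholders):

az network nsg rule create \
  --resource-group my-rg \
  --nsg-name my-nsg \
  --name allow-https \
  --priority 310 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 443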

How to run two apps on EC2 with nginx

I am a newbie with Ubuntu and server-side work in general. I have created a Rails app and deployed it on an Ubuntu EC2 instance.
I am using Nginx and the Thin server, and the app runs perfectly.
Now I want to deploy another app on the same server.
I have already put the app on the server, but when I try to start it, it does not start.
I guess it is because of the nginx.conf file.
Can someone please let me know how to run two apps on the same server?
When you try to browse to a machine on Amazon's EC2 and you don't get any response, the prime suspect is the AWS Security Group. Make sure that the port the application runs on is open in your machine's security group:
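For example, with the AWS CLI (the security group ID and port are placeholders; open whichever port the app actually listens on, or put it behind nginx on 80/443 instead):

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 3000 \
  --cidr 0.0.0.0/0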
For nginx to serve both of your apps, you need to configure them both in its nginx.conf:
upstream app1 {
    server 127.0.0.1:3000;
}

upstream app2 {
    server 127.0.0.1:3020;
}

server {
    listen 80;
    server_name .example.com;

    access_log /var/www/myapp.example.com/log/access.log;
    error_log /var/www/myapp.example.com/log/error.log;
    root /var/www/myapp.example.com;
    index index.html;

    location /app1 {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app1;
    }

    location /app2 {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app2;
    }
}
This configuration listens on port 80 and expects app1 on local port 3000 and app2 on local port 3020: requests starting with http://my.example.com/app1 are proxied to the first app, and requests starting with http://my.example.com/app2 to the second.
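After editing nginx.conf, validate the configuration and reload nginx so the change takes effect:

sudo nginx -t && sudo nginx -s reload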
