What is the recommended method for setting up HTTPS for Jenkins?
Setting up HTTPS in Jenkins itself?
Using Apache as a proxy for the HTTPS setup?
We have a VM in which Jenkins is the only application.
The method I use, and which I believe is the simplest, is to put nginx in front as a proxy. Example configuration:
root#redacted-jenkins-2:/etc/nginx/sites-available# cat jenkins_http.conf
#Ansible managed
server {
    listen 80;
    server_name jenkins.redacted.com.ar;
    return 301 https://jenkins.redacted.com.ar$request_uri;
}

server {
    listen 443 ssl;
    server_name jenkins.redacted.com.ar;

    ssl_certificate /etc/letsencrypt/live/jenkins.redacted.com.ar/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/jenkins.redacted.com.ar/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/jenkins.redacted.com.ar/fullchain.pem;
    include /etc/nginx/snippets/ssl.conf;

    location / {
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect http:// https://;
        proxy_pass http://127.0.0.1:8080;
    }
}
Related
I want to configure nginx as a reverse proxy on my Ubuntu host VM to point to the JupyterHub running inside a Docker container on port 8888. I am using subpaths for this, not subdomains, and my corporate firewall only gives me access to ports 80 and 443 (all other ports are blocked), which is why I can't use a rewrite. I came up with the following nginx configuration, which works but does not display the assets from JupyterHub (CSS files, images and so on).
The path myservername.com/jphub displays the page, but the assets are loaded from myservername.com (without the /jphub subpath), e.g. the logo is loaded from myservername.com/hub/logo instead of myservername.com/jphub/hub/logo.
Does anyone know if I am doing this the right way? What should I change in the config?
upstream jupyter {
    server localhost:8888;
    keepalive 32;
}

server {
    listen 80;
    server_name myservername.com;

    ssl_certificate /etc/ssl/cert-request/cert.pem;
    ssl_certificate_key /etc/ssl/private/cert.key;
    ssl_prefer_server_ciphers on;

    location /jphub/ {
        proxy_pass http://jupyter/;
        proxy_http_version 1.1;

        proxy_redirect default;
        proxy_redirect / /jphub/;
        proxy_redirect http://jupyter/ https://$host/jphub/;

        proxy_pass_header Set-Cookie;
        proxy_pass_header Cookie;
        proxy_pass_header X-Forwarded-For;

        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
        proxy_set_header X-Nginx-Proxy true;

        add_header X-Upstream $upstream_addr;
        proxy_read_timeout 86400;
    }
}
When the location path ends in /, Nginx strips that matched prefix from the URI before forwarding the request upstream.
To have it forward the full path, remove the trailing /, so you have
location /jphub {
    ...
    ...
}
in your Nginx configuration.
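Note that for the full /jphub/... path to actually reach the upstream unchanged, the proxy_pass should also be written without a URI part (no trailing slash after the upstream name); otherwise Nginx still substitutes the matched prefix. A minimal sketch under that assumption, reusing the upstream name jupyter from the question (JupyterHub itself then has to be configured to serve under the /jphub base path):

location /jphub {
    # No URI part on proxy_pass, so the original /jphub/... request path is passed through as-is.
    proxy_pass http://jupyter;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}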
I have a backend container on an internal Docker network which is not accessible from the internet.
Through an nginx proxy I want to send requests (webhooks to Slack) from the backend server to the outside world. Is that possible at all?
I have this config for nginx:
server {
    listen 80 default_server;
    server_name localhost;
    client_max_body_size 100M;
    charset utf-8;
    ... # setup for server containers
}

server {
    listen 443;
    server_name hooks.slack.com;

    location / {
        proxy_pass https://hooks.slack.com/;
        proxy_redirect off;
        proxy_set_header Host $http_host;
        #proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # Gets CSS working
        #proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
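One detail worth flagging with a setup like this (a sketch, not a verified fix): when proxy_pass points at an external HTTPS endpoint, Nginx does not send SNI to the upstream by default, so the TLS handshake to hooks.slack.com may be rejected. Enabling proxy_ssl_server_name and pinning the Host header is the usual workaround; the block below assumes the backend containers resolve hooks.slack.com to this Nginx instance:

server {
    listen 443;
    server_name hooks.slack.com;

    location / {
        proxy_pass https://hooks.slack.com/;
        # Send SNI to the upstream TLS server; this is off by default.
        proxy_ssl_server_name on;
        # Make sure the upstream sees its own hostname, not an internal one.
        proxy_set_header Host hooks.slack.com;
    }
}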
Is there a "proper" structure for the directives of an NGINX Reverse Proxy? I have seen 2 main differences when looking for examples of an NGINX reverse proxy.
http directive is used to house all server directives. Servers with data are listed in a pool within the upstream directive.
server directives are listed directly within the main directive.
Is there any reason for this or is this just a syntactical sugar difference?
Example of #1 within ./nginx.conf file:
upstream docker-registry {
    server registry:5000;
}

http {
    server {
        listen 80;
        listen [::]:80;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 default_server;
        ssl on;
        ssl_certificate external/cert.pem;
        ssl_certificate_key external/key.pem;

        # set HSTS header because we only allow https traffic
        add_header Strict-Transport-Security "max-age=31536000;";

        proxy_set_header Host $http_host;        # required for the Docker client's sake
        proxy_set_header X-Real-IP $remote_addr; # pass on real client IP

        location / {
            auth_basic "Restricted";
            auth_basic_user_file external/docker-registry.htpasswd;
            proxy_pass http://docker-registry; # the docker container is the domain name
        }

        location /v1/_ping {
            auth_basic off;
            proxy_pass http://docker-registry;
        }
    }
}
Example of #2 within the ./nginx.conf file:
server {
    listen 80;
    listen [::]:80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    error_log /var/log/nginx/error.log info;
    access_log /var/log/nginx/access.log main;

    ssl_certificate /etc/ssl/private/{SSL_CERT_FILENAME};
    ssl_certificate_key /etc/ssl/private/{SSL_CERT_KEY_FILENAME};

    location / {
        proxy_pass http://app1;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $remote_addr; # could also be $proxy_add_x_forwarded_for
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Request-Start $msec;
    }
}
I don't quite understand your question, but it seems to me that the second example is missing the http {} block; I don't think nginx will start without it.
Unless your example #2 file is somehow included from an nginx.conf that has the http {} block.
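For reference, a server-only file like example #2 normally gets its http {} context from the main nginx.conf that includes it; a rough sketch of the usual layout (Debian-style defaults, paths may differ on your system):

http {
    include /etc/nginx/mime.types;
    # server blocks kept in separate files are pulled into the http context here
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}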
I have a Shopify-like application, so my customers get a sub-domain when they create a store (i.e. customer1.myShopify.com).
To handle this case of dynamic sub-domains with nginx:
server {
    listen 443 ssl;
    server_name admin.myapp.com;

    ssl_certificate /etc/letsencrypt/live/myapp/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp/privkey.pem;

    location / {
        proxy_pass http://admin-front-end:80/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
    }
}

server {
    listen 443 ssl;
    server_name *.myapp.com;

    ssl_certificate /etc/letsencrypt/live/myapp/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp/privkey.pem;

    location / {
        proxy_pass http://app-front-end:80/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
    }
}
This works great: if you visit admin.myapp.com you see the admin application, and if you visit any xxx.myapp.com you see the shop front-end application.
The Problem
I want to allow my customers to connect their own domains, so I told them to connect with a CNAME and an A record:
A record => @ => 12.12.12.3 (my root nginx IP)
CNAME => www => their.myapp.com
Now each request to customer.com will be resolved by my nginx.
So I added this configuration to my nginx, to catch all other server_name requests:
server {
    listen 80;
    server_name ~^.*$;

    location / {
        proxy_pass http://app-front-end:80/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
    }
}
And it works fine.
But how can I handle SSL for this case? It could be any domain name; I don't know what the customer's domain will be.
How can I give them the ability to add an SSL certificate automatically, without creating it manually?
This server block should work, since variables are supported in the ssl_certificate and ssl_certificate_key directives.
http {
    map "$ssl_server_name" $domain_name {
        ~(.*)\.(.*)\.(.*)$ $2.$3;
    }

    server {
        listen 443 ssl;
        server_name ~^.*$;

        ssl_certificate /path/to/cert/files/$domain_name.crt;
        ssl_certificate_key /path/to/cert/keys/$domain_name.key;

        location / {
            proxy_pass http://app-front-end:80/;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Host $host;
        }
    }
}
P.S. Using variables here compromises performance, because nginx then has to load the certificate files on each SSL handshake.
Ref: https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_certificate
In my view, instead of compromising nginx performance, we should use a cron job (say, a Let's Encrypt bot) that fetches a certificate for the user-requested domain; you can then add the certificate to the nginx conf and restart the server.
Bonus:
I have used Traefik, a Kubernetes-based solution, which loads configs on the fly without a restart.
On a VM you can use certbot for managing the SSL/TLS certificates.
Now, if you are using the HTTP-01 method to verify your domain, you won't be able to get a wildcard certificate.
I would suggest using the DNS-01 method in certbot for domain verification; then you can get the wildcard certificate and use it.
Add the certificate to the Nginx config using:
ssl_certificate /path/to/cert/files/tls.crt;
ssl_certificate_key /path/to/cert/keys/tls.key;
If you are using certbot, it will also auto-inject the SSL config lines above into the configuration file.
For different domains you can also run the job or certbot with the HTTP-01 method and you will get the certificates.
If you are on Kubernetes you can use cert-manager, which will manage the SSL/TLS certificates.
I suggest using separate server blocks with different SSL certificates; that's the only solution that worked for me:
server {
    # ssl on the listen directive so the certificates below are actually used
    listen 443 ssl;
    root /var/www/html/example1.com;
    index index.html;
    server_name example1.com;

    ssl_certificate /etc/letsencrypt/live/example1.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example1.com/privkey.pem;

    location / {
        try_files $uri $uri/ =404;
    }
}

server {
    listen 443 ssl;
    root /var/www/html/example2.com;
    index index.html;
    server_name example2.com;

    ssl_certificate /etc/letsencrypt/live/example2.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example2.com/privkey.pem;

    location / {
        try_files $uri $uri/ =404;
    }
}
I'm using an Nginx proxy in a Docker container, and I have to run multiple applications on one server. I want to run them all in Docker containers except one. I run Jira and Confluence in containers. It took me a lot of time to configure the applications and the Nginx config. Now I want to run Graylog2 on the server as well, and I'm facing roughly the same problems as with Jira/Confluence. I guess it's because I don't really understand how all this works; that's why I made the following image:
That's how I understand the reverse proxy. The nginx conf looks like this:
upstream jenkins {
    server 43.3.34.333:8080 fail_timeout=0;
}

upstream docker-jira {
    server jira:8080;
}

upstream docker-conf {
    server conf:8090;
}

upstream docker-graylog {
    server graylog:9000;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name mySite.de;
    return 301 https://mySite.de;
}

server {
    # SSL configuration
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    server_name mySite.de;

    include snippets/ssl-mySite.de;
    include snippets/ssl-params.conf;

    location /jenkins {
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://jenkins;
        proxy_redirect http://jenkins $scheme://mySite.de;

        # Required for new HTTP-based CLI
        proxy_http_version 1.1;
        proxy_request_buffering off;
        proxy_buffering off; # Required for HTTP-based CLI to work over SSL

        # workaround for https://issues.jenkins-ci.org/browse/JENKINS-45651
        add_header 'X-SSH-Endpoint' 'jenkins.domain.tld:50022' always;
        client_max_body_size 2M;
    }

    location /graylog {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Graylog-Server-URL http://$server_name/api;
        proxy_pass http://docker-graylog/graylog;
    }

    location /jira {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://docker-jira/jira;
        client_max_body_size 100M;
        add_header X-Frame-Options ALLOW;
    }

    location /confluence {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://docker-conf/confluence;
        proxy_redirect http://docker-conf/confluence https://mySite.de;
        client_max_body_size 100M;
        add_header X-Frame-Options SAMEORIGIN;
    }

    location /synchrony {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://mySite.de:8091/synchrony;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        client_max_body_size 100M;
    }
}
To run Graylog2 behind a proxy you have to set a few settings (per the Graylog2 documentation):
set web_listen_uri
set rest_listen_uri
set web_endpoint_uri
I did it like this:
rest_listen_uri = http://localhost:9000/api/
web_listen_uri = http://localhost:9000/graylog
GRAYLOG_WEB_ENDPOINT_URI: https://mySite.de/api
When I go to https://mySite.de/graylog I get a 502 Bad Gateway error. Nginx log:
connect() failed (111: Connection refused) while connecting to upstream, client: 33.11.102.157, server: mySite.de, request: "GET /graylog HTTP/2.0", upstream: "http://172.18.0.9:9000/graylog", host: "mySite.de"
My Network:
NETWORK ID NAME DRIVER SCOPE
6c9de2d6b0ac MyNet bridge local
I don't really get it.
Leave your 80 -> 443 redirect as you have it, with NGINX doing the SSL termination and then sending to the backend over http.
Change these to listen on the LAN IP or docker DNS name:
web_listen_uri = http://docker-graylog:9000/graylog
rest_listen_uri = http://docker-graylog:9000/api
Note: the problem with your current config is that it is only listening on localhost, so a request coming in externally will never make it to the app; it is only listening for connections from within the graylog container itself. NGINX can't reach graylog on localhost:9000 across the LAN.
The bad gateway indicates that your proxy is probably working, but no connection to the app can be made.
More details on that:
https://forums.docker.com/t/access-to-localhost-from-bridge-network/22948/2
This config is basically what you already have, but copied from the Graylog documentation. Your current proxy config might work as is.
upstream docker-graylog {
    server graylog:9000;
}

server {
    listen 443 ssl spdy;
    server_name mySite.de;
    # <- your SSL settings here!

    location /graylog {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Graylog-Server-URL https://$server_name/api;
        proxy_pass http://docker-graylog/graylog;
    }
}