I have an nginx web app that runs on a Google VM instance and works perfectly, but when I try to add SSL support I can no longer access the site ("Unable to Connect"). I believe I have configured nginx correctly for SSL, and I can see in the logs that nginx and certbot have started.
http {
    upstream react {
        server client:3000;
    }

    upstream phoenix {
        server web:4000;
    }

    server {
        # Listen on port 443 on both IPv4 and IPv6.
        listen 80 default_server;
        listen 443 ssl default_server reuseport;
        listen [::]:443 ssl default_server reuseport;

        # Domain names this server should respond to.
        server_name tabi.blanknodes.com www.tabi.blanknodes.com;

        # Load the certificate files.
        ssl_certificate /etc/letsencrypt/live/tabi/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/tabi/privkey.pem;
        ssl_trusted_certificate /etc/letsencrypt/live/tabi/chain.pem;

        # Load the Diffie-Hellman parameter.
        ssl_dhparam /etc/letsencrypt/dhparams/dhparam.pem;

        location / {
            proxy_pass https://react;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }

        location /api {
            proxy_pass https://phoenix/api;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }

        location /socket {
            proxy_pass https://phoenix/socket;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}
Logs:
nginx_1 | Starting the Nginx service
nginx_1 | Starting the certbot autorenewal service
nginx_1 | 2021/06/23 13:26:34 [notice] 117#117: using the "epoll" event method
nginx_1 | 2021/06/23 13:26:34 [notice] 117#117: nginx/1.21.0
nginx_1 | 2021/06/23 13:26:34 [notice] 117#117: built by gcc 8.3.0 (Debian 8.3.0-6)
nginx_1 | 2021/06/23 13:26:34 [notice] 117#117: OS: Linux 5.4.104+
nginx_1 | 2021/06/23 13:26:34 [notice] 117#117: getrlimit(RLIMIT_NOFILE): 1048576:1048576
nginx_1 | 2021/06/23 13:26:34 [notice] 117#117: start worker processes
nginx_1 | 2021/06/23 13:26:34 [notice] 117#117: start worker process 122
nginx_1 | 2021/06/23 13:26:34 [notice] 117#117: start worker process 123
nginx_1 | Couldn't find the dhparam file '/etc/letsencrypt/dhparams/dhparam.pem'; creating it...
nginx_1 | mkdir: created directory '/etc/letsencrypt/dhparams'
nginx_1 |
nginx_1 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
nginx_1 | % ATTENTION! %
nginx_1 | % %
nginx_1 | % This script will now create a 2048 bit Diffie-Hellman %
nginx_1 | % parameter to use during the SSL handshake. %
nginx_1 | % %
nginx_1 | % >>>>> This MIGHT take a VERY long time! <<<<< %
nginx_1 | % (Took 65 minutes for 4096 bit on an old 3GHz CPU) %
nginx_1 | % %
nginx_1 | % However, there is some randomness involved so it might %
nginx_1 | % be both faster or slower for you. 2048 is secure enough %
nginx_1 | % for today and quite fast to generate. These files will %
nginx_1 | % only have to be created once so please be patient. %
nginx_1 | % A message will be displayed when this process finishes. %
nginx_1 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
nginx_1 |
nginx_1 | Will now output to the following file: '/etc/letsencrypt/dhparams/dhparam.pem'
nginx_1 | Generating DH parameters, 2048 bit long safe prime, generator 2
nginx_1 | This is going to take a long time
I'm using docker-nginx-certbot for my nginx + certbot.
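An editorial aside on the config above (an assumption, not a confirmed diagnosis): TLS normally terminates at nginx, and the upstreams client:3000 and web:4000 are plain-HTTP application servers, so the proxy_pass directives would usually use http:// rather than https://. A sketch of one location block under that assumption:

```nginx
location / {
    # nginx terminates TLS; the react container itself serves
    # plain HTTP on port 3000, so proxy with http://
    proxy_pass http://react;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

"Unable to Connect" can also simply mean tcp:443 is blocked before it reaches nginx: on a Google VM, HTTPS traffic must be allowed by a firewall rule for the instance in addition to any nginx configuration.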
Related
I am trying to run Spring Boot, nginx, and MySQL with Docker. The other stacks work fine, but nginx returns the following error:
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf is not a file or does not exist
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/06/14 03:41:48 [emerg] 1#1: unexpected end of file, expecting "}" in /etc/nginx/conf.d/app.conf:11
nginx: [emerg] unexpected end of file, expecting "}" in /etc/nginx/conf.d/app.conf:11
I think it is a syntax error. Below is the content of the nginx/conf.d/app.conf file.
server {
    listen 80;
    access_log off;
    location / {
        proxy_pass http://app:8080;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
The error log says there is a problem on line 11 of app.conf, but I don't know nginx syntax, so I have no idea what the problem is. Any advice would be appreciated.
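The config as posted opens two blocks (server and location) but closes only one, so nginx reaches the end of the file still expecting a "}". A quick sketch of how to confirm that kind of imbalance (the heredoc just mirrors the structure of the app.conf above):

```shell
# Count opening and closing braces in an nginx config to spot an
# unclosed block. The heredoc reproduces the shape of the file above.
cat > /tmp/app.conf <<'EOF'
server {
    listen 80;
    access_log off;
    location / {
        proxy_pass http://app:8080;
        proxy_set_header Host $host:$server_port;
    }
EOF

open=$(grep -c '{' /tmp/app.conf)
close=$(grep -c '}' /tmp/app.conf)
echo "unclosed blocks: $((open - close))"
```

A result of 1 here means one block was never closed; adding a final "}" for the server block at the end of app.conf should satisfy nginx.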
Does anyone see what I did wrong with my Nginx Reverse Proxy? I am getting a 502 Bad Gateway and I can't seem to figure out where my ports are wrong.
Nginx
/etc/nginx/sites-enabled/default
upstream reverse_proxy {
    server 35.237.158.31:8080;
}

server {
    listen 80;
    server_name 35.237.158.31;

    location / {
        proxy_pass http://reverse_proxy;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_cache_bypass $http_upgrade;
    }
}
/etc/nginx/sites-enabled/jesse.red [VHOST]
upstream jessered {
    server 127.0.0.1:2600; # <-- PORT 2600
}

server {
    server_name jesse.red;
    #root /var/www/jesse.red/;

    # ---------------------------------------------------------------
    # Location
    # ---------------------------------------------------------------
    location / {
        proxy_pass http://jessered;
        #proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 90;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/jesse.red/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/jesse.red/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = jesse.red) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name jesse.red;
    listen 80;
    return 404; # managed by Certbot
}
Docker
Below, the container is running on port 2600:
$ docker ps
9d731afed500 wordpress:php7.0-fpm-alpine "docker-entrypoint.s…" 3 days ago Up 17 hours 9000/tcp, 0.0.0.0:2600->80/tcp jesse.red
/var/www/jesse.red/docker-compose.yml
version: '3.1'

services:
  jessered:
    container_name: jesse.red
    image: wordpress:4-fpm-alpine
    restart: always
    ports:
      - 2600:80 # <-- PORT 2600
    env_file:
      - ./config.env # Contains .gitignore params
Testing Docker
docker-compose logs
Attaching to jesse.red
jesse.red | WordPress not found in /var/www/html - copying now...
jesse.red | Complete! WordPress has been successfully copied to /var/www/html
jesse.red | [03-Jul-2018 11:15:07] NOTICE: fpm is running, pid 1
jesse.red | [03-Jul-2018 11:15:07] NOTICE: ready to handle connections
System
$ ps aux | grep 2600
Below, port 2600 is in use.
root 1885 0.0 0.1 232060 3832 ? Sl Jul02 0:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 2600 -container-ip 172.20.0.2 -container-port 80
I'm not sure what went wrong, any help is really appreciated. I have scoured many places and haven't figured it out before asking.
Nginx request processing chooses a server block like this:
Check the listen directives for an exact IP:port match; if there are no exact matches, check for an IP or port match. An IP address with no port is treated as port 80.
From those matches nginx then checks the Host header of the request, looking for a matching server_name directive in the matched blocks. If it finds a match, that server handles the request; if not, and no default_server directive is set, the request is passed to the server listed first in your config.
So you have server_name 35.237.158.31; on port 80, and server_name jesse.red; also on port 80
IP addresses should be part of the listen directive, not the server_name, although this might match for some requests. Assuming this is being accessed from the outside world it's unlikely jesse.red will be in anyone's host headers.
Assuming no matches, the request gets passed to whatever server nginx finds first with a port match. I'm assuming nginx includes files alphabetically, so your configs will load like this:
/etc/nginx/sites-enabled/default
/etc/nginx/sites-enabled/jesse.red
and now all your requests on port 80 with no host match, or with the IP address in the Host field, are getting proxied to:
upstream reverse_proxy {
server 35.237.158.31:8080;
}
That's my guess anyway, your Nginx logs will probably give you a fairly definitive answer.
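Given the selection rules above, one way to make things deterministic is to state the listen/server_name pairing explicitly in each vhost. A minimal sketch (the catch-all with return 444 is an illustrative convention, not from the original configs):

```nginx
# Catch-all: requests whose Host header matches no server_name land here.
server {
    listen 80 default_server;
    server_name _;
    return 444;  # close the connection without a response
}

# Real vhost: match on the domain name, then redirect to HTTPS.
server {
    listen 80;
    server_name jesse.red;
    return 301 https://$host$request_uri;
}
```

With an explicit default_server, no request can silently fall through to whichever file nginx happened to include first.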
I am using nginx as a proxy to forward requests to other components (servers).
Each component, including nginx, is implemented as a Docker container: I have containers for 'nginx-proxy', 'dashboard-server', 'backend-server' (REST API), and 'landing-server' (landing page). The latter three components are all Node.js Express servers and work properly. docker-compose build completes with no errors, and when I start the containers with docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d the Node.js containers work fine, but the nginx container gives me this error in docker-compose logs nginx-proxy:
Attaching to docker_nginx-proxy_1
nginx-proxy_1 | /start.sh: line 5: openssl: command not found
nginx-proxy_1 | Creating dhparams…\c
nginx-proxy_1 | ok
nginx-proxy_1 | Starting nginx…
nginx-proxy_1 | 2017/08/23 23:27:20 [emerg] 6#6:
BIO_new_file("/etc/letsencrypt/live/admin.domain.com/fullchain.pem")
failed (SSL: error:02001002:system library:fopen:No such file or directory:
fopen('/etc/letsencrypt/live/admin.domain.com/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
nginx-proxy_1 | nginx: [emerg]
BIO_new_file("/etc/letsencrypt/live/admin.domain.com/fullchain.pem") failed (SSL: error:02001002:system library:fopen:
No such file or directory:fopen('/etc/letsencrypt/live/admin.domain.com/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
I am using Let's Encrypt for the SSL certificates; however, the command certbot certonly --webroot -w /var/www/letsencrypt -d admin.domain.com -d api.domain.com -d www.domain.com -d domain.com fails with Connection Refused because the nginx server never starts to handle the requests.
My nginx Dockerfile (nginx-proxy/Dockerfile):
FROM nginx:1.12
COPY start.sh /start.sh
RUN chmod u+x /start.sh
COPY conf.d /etc/nginx/conf.d
COPY sites-enabled /etc/nginx/sites-enabled
ENTRYPOINT ["/start.sh"]
My start.sh file (nginx-proxy/start.sh):
#!/bin/bash
if [ ! -f /etc/nginx/ssl/dhparam.pem ]; then
    echo "Creating dhparams…\c"
    openssl dhparam -out /etc/nginx/ssl/dhparam.pem 2048
    echo "ok"
fi
echo "Starting nginx…"
nginx -g 'daemon off;'
My default.conf file (nginx-proxy/conf.d/default.conf):
include /etc/nginx/sites-enabled/*.conf;
My api.conf file (the others are similar) (nginx-proxy/sites-enabled/api.conf):
server {
    listen 80;
    server_name api.domain.com;

    location ^~ /.well-known/acme-challenge/ {
        default_type "text/plain";
        root /var/www/letsencrypt;
    }

    location = /.well-known/acme-challenge/ {
        return 404;
    }

    return 301 https://$host$request_uri;
}
server {
    listen 443;
    server_name api.domain.com;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/api.domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.domain.com/privkey.pem;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:1m;
    ssl_dhparam /etc/nginx/ssl/dhparam.pem;

    client_max_body_size 0;
    chunked_transfer_encoding on;

    location ^~ /.well-known/acme-challenge/ {
        default_type "text/plain";
        root /var/www/letsencrypt;
    }

    location = /.well-known/acme-challenge/ {
        return 404;
    }

    location / {
        proxy_read_timeout 900;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_pass http://backend-server:3000;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
    }
}
Any ideas?
I found the solution.
In my nginx Dockerfile, I had to use
FROM nginx:1.12-alpine
RUN apk update \
&& apk add openssl
...
Then the openssl command worked properly.
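With openssl installed, the start.sh script could also fail fast instead of silently producing an empty dhparam file. A hedged sketch of such a check (illustrative, not part of the original script):

```shell
# Report whether openssl is reachable in PATH before trying to
# generate dhparams; the original error was a plain
# "openssl: command not found" halfway through startup.
if command -v openssl >/dev/null 2>&1; then
    status="openssl available at $(command -v openssl)"
else
    status="openssl not found in PATH"
fi
echo "$status"
```

In the real start.sh, the "not found" branch would print to stderr and exit non-zero, so the container fails visibly at startup rather than later with a missing dhparam.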
First, try editing your start.sh to call openssl by its full path, /usr/bin/openssl. Does /usr/bin/openssl exist in the container?
Second, your nginx server will not start until the /etc/letsencrypt/live/api.domain.com/fullchain.pem and /etc/letsencrypt/live/api.domain.com/privkey.pem files exist.
So delete or comment out all the server blocks handling port 443 and keep the block handling port 80. Your api.conf becomes this:
server {
    listen 80;
    server_name api.domain.com;

    location ^~ /.well-known/acme-challenge/ {
        default_type "text/plain";
        root /var/www/letsencrypt;
    }

    location = /.well-known/acme-challenge/ {
        return 404;
    }

    return 301 https://$host$request_uri;
}
Then start your nginx server and retry install Let's Encrypt certificate.
I have installed Elasticsearch, Logstash, and Kibana on my Debian server. The problem is that Kibana is not showing any statistics or logs. I don't know what is wrong or how to debug the problem. When I test each of the components (Elasticsearch, Kibana, and Logstash), everything looks to be working properly.
ElasticSearch Tests
Checking elasticsearch-cluster status:
curl 'localhost:9200/_cluster/health?v'
{"cluster_name":"elasticsearch","status":"yellow","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":71,"active_shards":71,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":71,"number_of_pending_tasks":0}
Checking elasticsearch-node status:
curl 'localhost:9200/_cat/nodes?v'
host ip heap.percent ram.percent load node.role master name
ais 193.xx.yy.zz 6 10 0.05 d * Shathra
Checking elasticsearch-index status:
curl 'localhost:9200/_cat/indices?v'
health status index pri rep docs.count docs.deleted store.size pri.store.size
yellow open countries 5 1 243 365 145.2kb 145.2kb
yellow open imports 5 1 26 7 49.6kb 49.6kb
yellow open categories 5 1 6 1 20.6kb 20.6kb
yellow open faculties 5 1 36 0 16.9kb 16.9kb
yellow open users 5 1 6602 29 1.8mb 1.8mb
yellow open cities 5 1 125 0 23.5kb 23.5kb
yellow open exam_languages 5 1 155 0 26.6kb 26.6kb
yellow open departments 5 1 167 70 166.4kb 166.4kb
yellow open examinations 5 1 4 0 14.1kb 14.1kb
yellow open certificates 5 1 1 0 3kb 3kb
yellow open .kibana 1 1 2 1 14kb 14kb
yellow open exam_centers 5 1 5 0 22.7kb 22.7kb
Checking elasticsearch-service status:
$ service elasticsearch status
[ ok ] elasticsearch is running.
Elasticsearch is also reachable at localhost:9200 in my browser and lists the indexes correctly.
/etc/nginx/sites-available/elasticsearch file =>
server {
    listen 443;
    server_name es.xxx.yyy.com;

    ssl on;
    ssl_certificate /etc/elasticsearch/ssl/es_domain.crt;
    ssl_certificate_key /etc/elasticsearch/ssl/es_domain.key;

    access_log /var/log/nginx/elasticsearch/access.log;
    error_log /var/log/nginx/elasticsearch/error.log debug;

    location / {
        rewrite ^/(.*) /$1 break;
        proxy_ignore_client_abort on;
        proxy_pass http://localhost:9200;
        proxy_redirect http://localhost:9200 http://es.xxx.yyy.com/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        auth_basic "Elasticsearch Authentication";
        auth_basic_user_file /etc/elasticsearch/user.pwd;
    }
}

server {
    listen 80;
    server_name es.xxx.yyy.com;
    return 301 https://$host$request_uri;
}
Kibana Tests
$ service kibana4 status
[ ok ] kibana is running.
/etc/nginx/sites-available/kibana file =>
server {
    listen 443;
    server_name kibana.xxx.yyy.com;

    ssl on;
    ssl_certificate /opt/kibana/ssl/es_domain.crt;
    ssl_certificate_key /opt/kibana/ssl/es_domain.key;

    access_log /var/log/nginx/kibana/access.log;
    error_log /var/log/nginx/kibana/error.log debug;

    location / {
        rewrite ^/(.*) /$1 break;
        proxy_ignore_client_abort on;
        proxy_pass http://localhost:5601;
        proxy_redirect http://localhost:5601 http://kibana.xxx.yyy.com/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        auth_basic "Kibana Authentication";
        auth_basic_user_file /etc/nginx/htpasswd.users;
    }
}

server {
    listen 80;
    server_name kibana.xxx.yyy.com;
    return 301 https://$host$request_uri;
}
Kibana is also reachable at localhost:5601 in my browser without any problem.
Logstash Tests
$ sudo /etc/init.d/logstash status
[ ok ] logstash is running.
/etc/logstash/conf.d/01-ais-input.conf file =>
input {
  file {
    type => "rails"
    path => "/srv/www/xxx.yyy.com/site/log/logstasher.log"
    codec => json {
      charset => "UTF-8"
    }
  }
}

output {
  elasticsearch {
    host => 'localhost'
    port => 9200
  }
}
Is there anything wrong with these services and config files? Each component looks to be working fine, but I cannot see anything in the Kibana interface. How can I test my ELK stack?
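One thing worth checking before looking at Kibana at all: the json codec on the logstash file input expects one well-formed JSON object per line. A quick sketch of that check on a sample line (the fields below are made up; run it against a real line from logstasher.log):

```shell
# Verify a log line is a single valid JSON object, as the logstash
# json codec expects. python3 is used only for the parse check.
line='{"method":"GET","path":"/users","status":200}'
if echo "$line" | python3 -c 'import json,sys; json.loads(sys.stdin.read())' >/dev/null 2>&1; then
    result="valid json line"
else
    result="invalid json line"
fi
echo "$result"
```

If the log file isn't line-delimited JSON, logstash will silently index nothing, which matches the "everything runs but Kibana is empty" symptom.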
You need to configure index patterns in Kibana to see the elasticsearch data.
Open Kibana from your browser http://localhost:5601
Click on Settings
Type your existing index name and click Create. (Uncheck the option 'Index contains time-based events' unless your index holds logs or other timestamp-based data.)
After doing this, you should be able to see all your Elasticsearch documents.
I've just setup nginx and unicorn. I start unicorn like this:
unicorn_rails -c /var/www/Web/config/unicorn.rb -D
I've tried the various commands for stopping unicorn but none of them work. I usually just restart the server and start unicorn again, but this is very annoying.
EDIT
unicorn.rb file (/var/www/Web/config/):
# Set the working application directory
# working_directory "/path/to/your/app"
working_directory "/var/www/Web"
# Unicorn PID file location
# pid "/path/to/pids/unicorn.pid"
pid "/var/www/Web/pids/unicorn.pid"
# Path to logs
# stderr_path "/path/to/log/unicorn.log"
# stdout_path "/path/to/log/unicorn.log"
stderr_path "/var/www/Web/log/unicorn.log"
stdout_path "/var/www/Web/log/unicorn.log"
# Unicorn socket
listen "/tmp/unicorn.Web.sock"
# Number of processes
# worker_processes 4
worker_processes 2
# Time-out
timeout 30
default.conf (/etc/nginx/conf.d/):
upstream app {
    # Path to Unicorn SOCK file, as defined previously
    server unix:/tmp/unicorn.Web.sock fail_timeout=0;
}

server {
    listen 80;
    server_name localhost;

    # Application root, as defined previously
    root /root/Web/public;

    try_files $uri/index.html $uri @app;

    location @app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}
This is what I do:
$ for i in `ps awx | grep unico | grep -v grep | awk '{print $1;}'`; do kill -9 $i; done && unicorn_rails -c /var/www/Web/config/unicorn.rb -D
If you don't want to type all of that each time, script it, like this:
/var/www/Web/unicorn_restart.sh:
#!/bin/bash
for i in `ps awx | grep unicorn | grep -v grep | awk '{print $1;}'`; do
    kill $i
done
unicorn_rails -c /var/www/Web/config/unicorn.rb -D
and then:
$ chmod +x /var/www/Web/unicorn_restart.sh
and run it each time with:
$ /var/www/Web/unicorn_restart.sh
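Since the unicorn.rb in the question already declares a pid file, a gentler variant (a sketch under that assumption) can stop the master via that file instead of pattern-matching ps output; kill -9 skips unicorn's graceful shutdown, while QUIT lets workers finish in-flight requests.

```shell
# Sketch: restart unicorn via the pid file declared in unicorn.rb.
# Paths are the ones from the question; adjust as needed.
restart_unicorn() {
    local pidfile=/var/www/Web/pids/unicorn.pid
    if [ -f "$pidfile" ]; then
        kill -QUIT "$(cat "$pidfile")"   # graceful stop of the master
        sleep 1
    fi
    unicorn_rails -c /var/www/Web/config/unicorn.rb -D
}
```

This avoids the risk of the grep-based loop killing an unrelated process whose command line happens to contain "unicorn".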