Nginx inside Docker throws error 500 (Internal Server Error)

I'm trying to deploy an nginx application in Docker. After installing certificates with certbot, I have this nginx.conf:
server {
    listen 80;
    server_name web.com www.web.com;

    location / {
        return 301 https://$server_name$request_uri;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}
server {
    listen 443 ssl default_server;
    server_name web.com www.web.com;

    location / {
        proxy_pass https://www.web.com;
    }

    ssl_certificate /etc/letsencrypt/live/web.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/web.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
When I try to access my web URL, the browser shows 500 Internal Server Error. nginx/1.15.12
I can't see the logs, so I don't know what I have to do.
The SSL certificate works fine, because the lock appears in the URL bar.

Can you check whether the container is running or not?
If the container is running, you can connect to it and then check the nginx logs (they should be available at /var/log/nginx/error.log).
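A minimal sketch of those checks, assuming the container is named nginx (substitute your own container name or ID):
# check whether the container is running
docker ps --filter name=nginx
# open a shell in the container and inspect the error log
docker exec -it nginx sh -c 'tail -n 50 /var/log/nginx/error.log'
# the official nginx images also forward their logs to stdout/stderr,
# so this alone may already show the error
docker logs nginx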

Related

Need help troubleshooting custom docker image for nginx

I want to install a simple web service to browse a file directory tree on an internal server and to comply with company policy it needs to use TLS ("https://...").
First I tried several images including davralin/nginx-autoindex and mounted the directory I want this service to share. It worked like a charm, but it didn't use a TLS connection.
To get something to work with TLS, I started from scratch and created my own default.conf file for nginx:
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name localhost;

    ssl_certificate /etc/ssl/certs/my-cert.crt;
    ssl_certificate_key /etc/ssl/certs/server.key;

    location / {
        root /usr/share/nginx/html;
        autoindex on;
    }

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Then I created the following Dockerfile:
FROM nginx:stable-alpine
MAINTAINER lsiden at gmail.com
COPY default.conf /etc/nginx/conf.d
COPY my-cert.crt /etc/ssl/certs/
COPY server.key /etc/ssl/certs/
Then I build it:
docker build -t lsiden/nginx-autoindex-tls .
Then I run it:
docker run -dt -v /var/www/data/files:/usr/share/nginx/html:ro -p 3453:80 lsiden/nginx-autoindex-tls
However, I can't reach it even from the host machine. I tried:
$ telnet localhost 3453
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connection closed by foreign host.
I tried to read log messages:
docker logs <container-id>
Silence.
I've already confirmed that the docker proxy is listening to the port:
tcp6 0 0 :::3453 :::* LISTEN 14828/docker-proxy
The port shows up under tcp6 but not tcp (IPv4), but I read here that netstat will show only the IPv6 socket even if the port is available on both. To be sure, I verified:
sudo sysctl net.ipv6.bindv6only
net.ipv6.bindv6only = 0
To be thorough, I already opened this port in iptables, although iptables can't be playing a role here if I can't even get to it from the same machine via localhost.
I'm hoping someone with good networking chops can tell me where to look next. I can't figure out what I missed.
If the configuration you shared is complete, you are not listening on port 80 inside your container at all.
Change your configuration to something like the following if you want to redirect incoming traffic from port 80 to 443:
server {
    listen 80;
    listen [::]:80;

    location / {
        return 301 https://$server_name$request_uri;
    }
}
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name localhost;

    ssl_certificate /etc/ssl/certs/my-cert.crt;
    ssl_certificate_key /etc/ssl/certs/server.key;

    location / {
        root /usr/share/nginx/html;
        autoindex on;
    }

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
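If you go this route the container listens on both 80 and 443, so the run command would need to publish both ports as well; for example (the host-side ports here are arbitrary placeholders):
docker run -dt -v /var/www/data/files:/usr/share/nginx/html:ro -p 8080:80 -p 3453:443 lsiden/nginx-autoindex-tls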
If you don't want to do this, just change your docker run command:
docker run -dt -v /var/www/data/files:/usr/share/nginx/html:ro -p 3453:443 lsiden/nginx-autoindex-tls
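Either way, a quick check from the host could be something like this (-k skips certificate verification, which is needed here because the certificate is not from a public CA):
curl -vk https://localhost:3453/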

ssl certificate or nginx proxy server not working

I have created a domain (domain.com) and a subdomain (abc.domain.com), and generated SSL certificates for both using Let's Encrypt. Both Django projects are hosted on AWS EC2, and I created a proxy server for them as follows:
server {
    listen 443 ssl;
    server_name example.com;

    location / {
        proxy_pass https://1.2.3.4:444;
        proxy_ssl_server_name on;
        proxy_ssl_verify on;
        proxy_ssl_certificate /home/domain/fullchain.pem;
        proxy_ssl_certificate_key /home/domain/privkey.pem;
    }
}
server {
    listen 443 ssl;
    server_name abc.example.com;

    location / {
        proxy_pass https://1.2.3.4:445;
        proxy_ssl_server_name on;
        proxy_ssl_verify on;
        proxy_ssl_certificate /home/subdomain/fullchain.pem;
        proxy_ssl_certificate_key /home/subdomain/privkey.pem;
    }
}
I start the proxy server and both projects, and starting them does not give any problem. The problem is that when I enter https://example.com in the browser it does not show the page, but when I request the domain with the port number, https://example.com:444, it shows the page. I do not know what I am missing.
In order to make https://example.com work, you need to configure Nginx with a proper SSL configuration, which includes the ssl_certificate and ssl_certificate_key directives; it does not look like you are using them.
proxy_ssl_certificate is for the HTTPS connection between Nginx and the proxied server, which in your case is the Django application.
ssl_certificate is for the HTTPS connection between the user's browser and Nginx, which is what you need for https://example.com to work as expected.
For more details, see configuring HTTPS servers.
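As a rough sketch (not a drop-in config), assuming the certificate files at /home/domain/ from the question were issued for example.com, the front server block could look something like:
server {
    listen 443 ssl;
    server_name example.com;

    # certificate presented to the user's browser
    ssl_certificate /home/domain/fullchain.pem;
    ssl_certificate_key /home/domain/privkey.pem;

    location / {
        # HTTPS connection between Nginx and the proxied Django app
        proxy_pass https://1.2.3.4:444;
        proxy_ssl_server_name on;
    }
}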

How to make a secure nginx-proxy point to different paths on a single server?

I want to use letsencrypt-nginx-proxy-companion in my Docker instance.
After some reading I still cannot find a solution for my setup:
HOST (vps) => DOCKER (containers)
- nginx-proxy
- letsencrypt-nginx-proxy-companion
- portainer [to manage self-hosted docker]
  https://projects.domain.com:4488
- jenkins [to manage projects from github]
  https://projects.domain.com:5533
- projects home [static website]
  https://projects.domain.com
- project #1
  https://projects.domain.com/project-1
- project #2
  https://projects.domain.com/project-2
Assuming I know how to manage multiple subdomains (one per container), what I am missing is how (and where) to specify the /path for each project.
Where do I start if I want to route all traffic through SSL (excluding the certificate-renewal script) and manage the projects with Jenkins? Is it a good idea to wrap it up this way?
Have you tried using a location / block with proxy_pass and sub_filter?
For example:
server {
    server_name jenkins.domain.com;
    listen 80;
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl;
    server_name jenkins.domain.com;
    ssl on;
    ssl_certificate /etc/nginx/ssl/sslcertificate.crt;
    ssl_certificate_key /etc/nginx/sslkey.key;

    proxy_set_header Accept-Encoding "";
    sub_filter_types text/html text/css;
    sub_filter 'http://jenkins.domain.com' 'https://$host';
    sub_filter_once off;

    location /project-1/ {
        proxy_pass http://jenkins.domain.com:4488/project-1/;
    }
}
server {
    server_name projects.domain.com;
    listen 80;
    return 301 https://$host$request_uri;
}

How to add a second app to Phusion Passenger?

I have Phusion Passenger for Nginx configured as below:
server {
    listen 80;
    server_name blog.abc.com;
    passenger_enabled on;
    root /app/public;
}
I'm about to host the main site, abc.com, on this machine as well. How can I do that (it's a separate app)? Is it possible to add another server block like this:
server {
    listen 80;
    server_name abc.com;
    passenger_enabled on;
    root /app2/public;
}
Phusion Passenger author here. Yes. Just add another virtual host block for the other app. It works exactly as expected.
I configured my second app on a sub-URI of the first app. Below are the nginx conf and the settings I used.
nginx.conf:
server {
    listen 80;
    server_name localhost;

    location / {
        root /var/www/demo/public;
        passenger_enabled on;
        rails_env production;
    }

    location /test {
        root /var/www/demo;
        passenger_base_uri /test;
        passenger_enabled on;
    }
}
Then add a symbolic link:
ln -s /var/www/logger/public /var/www/demo/test
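After creating the symlink, reload nginx so the new location block takes effect (the exact command depends on how nginx is managed on your system):
sudo nginx -s reload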

Multiple Ruby apps (Rails and Sinatra) deployed using Passenger for Nginx?

I have two Ruby applications; one is built with Rails and the other with Sinatra.
How can I deploy both of these apps with Nginx and Passenger, one at the root ("localhost:3000") and the other at a sub-URI ("localhost:3000/test")?
The Rails application is running with this configuration. Everything seems to work OK:
server {
    listen 80;
    server_name localhost;

    location / {
        root /var/www/demo/public;
        passenger_enabled on;
        rails_env production;
    }

    location /test/ {
        root /var/www/test/public;
        passenger_base_uri /test/;
        proxy_pass http://10.0.3.12:80/test/;
        passenger_enabled on;
    }
}
I am not able to access the second application.
The server returns 404 for the second app and the first app is still running.
I believe you need to define local servers that listen only on local ports and define your Passenger apps there. Your actual server listening on port 80 should only act as a proxy.
server {
    listen localhost:8181;
    server_name test_app;
    root /var/www/test/public;
    passenger_enabled on;
}
server {
    listen localhost:8182;
    server_name demo_app;
    root /var/www/demo/public;
    passenger_enabled on;
    rails_env production;
}
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://localhost:8182/;
    }

    location /test/ {
        proxy_pass http://localhost:8181/;
    }
}
I didn't have a chance to test this config, so it might have some minor flaws, but it should be correct at a high level.
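One way to sanity-check the routing from the host itself, assuming the front server is reachable on port 80 locally:
# should be served by the demo app via port 8182
curl -I http://localhost/
# should be served by the test app via port 8181
curl -I http://localhost/test/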
In nginx.conf:
server {
    listen 80;
    server_name localhost;

    location / {
        root /var/www/new/public;
        passenger_enabled on;
        rails_env production;
    }

    location /test {
        root /var/www/new;
        passenger_base_uri /test;
        passenger_enabled on;
    }
}
Add a soft link:
ln -s /var/www/loggerapp/public /var/www/new/test
