I am getting the following error when I run docker-compose up:
backend_1_a5b5a2caf6fc | 2019/04/28 21:40:49 [emerg] 1#1: no "ssl_certificate" is defined for the "listen ... ssl" directive in /etc/nginx/conf.d/default.conf:4
backend_1_a5b5a2caf6fc | nginx: [emerg] no "ssl_certificate" is defined for the "listen ... ssl" directive in /etc/nginx/conf.d/default.conf:4
...
...
production_backend_1_a5b5a2caf6fc exited with code 1
Here is my Dockerfile for nginx:
FROM nginx
COPY nginx.conf /etc/nginx/conf.d/default.conf
default.conf:
fastcgi_cache_path /dev/shm levels=1:2 keys_zone=laravel:100m;
fastcgi_cache_key "$scheme$request_method$host$request_uri$query_string";
server {
listen 80 default_server;
root /var/www/public;
index index.php index.html;
client_max_body_size 5M;
...
...
docker-compose.yml:
version: '3'
services:
backend:
build: ./nginx
depends_on:
- db
- redis
working_dir: /var/www
volumes:
- ../../src:/var/www
ports:
- 80:80
...
...
This means that you have not set up SSL correctly (you're missing a server certificate). Since you have mapped port 80 and not 443 in your docker-compose file, I assume you're not going to use SSL.
Simply remove the following line in your nginx.conf to disable ssl:
listen 443 ssl http2;
Be sure to rebuild and restart your nginx container afterwards.
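For example, something like the following would rebuild the image and recreate the container (a minimal sketch, assuming the backend service name from the compose file above):
docker-compose build backend
docker-compose up -d backend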
Do you have any other server listening on port 443? Try deleting all symbolic links from /etc/nginx/sites-enabled except the one for the server you want to make work.
But if someone needs to use SSL, this error means that your server section is missing the ssl_certificate and ssl_certificate_key declarations. You need a .crt and a .key file to run with SSL.
It should look like this:
server {
listen 80;
listen 443 default_server ssl;
ssl_certificate /etc/nginx/certs/default.crt;
ssl_certificate_key /etc/nginx/certs/default.key;
... other declarations
}
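Since nginx runs in a container here, the certificate and key also have to exist inside it. One rough way to do that is to extend the Dockerfile from the question (the certs/ directory and file names below are illustrative, not part of the original setup):
FROM nginx
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY certs/default.crt /etc/nginx/certs/default.crt
COPY certs/default.key /etc/nginx/certs/default.key
Alternatively, the files can be mounted at runtime with a docker-compose volume entry such as ./certs:/etc/nginx/certs:ro.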
I'm using nginx in a docker-compose file to handle my frontend and backend websites.
I had no problems for a long time, but at some point I started getting a "504 Gateway Time-out" error when I try to access my project through localhost and its port:
http://localhost:8080
When I use the Docker IP and its port,
http://172.18.0.1:8080
I can access the project and nginx works correctly.
I'm sure my config file is correct because it was working for 6 months, and I don't know what changed.
What should I check to find the problem?
docker-compose file:
...
nginx:
container_name: nginx
image: nginx:1.19-alpine
restart: unless-stopped
ports:
- '8080:80'
volumes:
- ./frontend:/var/www/html/frontend
- ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
networks:
- backend_appx
networks:
backend_appx :
external: true
...
nginx config file:
upstream nextjs_upstream {
server next_app:3000;
}
server {
listen 80 default_server;
server_name _;
server_tokens off;
# set root
root /var/www/html/frontend;
# set log
error_log /var/log/nginx/error.log;
access_log /var/log/nginx/access.log;
location /_next/static {
proxy_cache STATIC;
proxy_pass http://nextjs_upstream;
add_header X-Cache-Status $upstream_cache_status;
}
}
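A few checks that usually narrow this kind of problem down (a hedged sketch, reusing the nginx container name, the next_app upstream and the backend_appx network from the files above):
docker network inspect backend_appx                 # is the next_app container still attached to this network?
docker exec nginx wget -qO- http://next_app:3000    # can nginx resolve and reach the upstream from inside its container?
docker logs nginx                                   # look for "upstream timed out" or "connect() failed" entries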
I generated certificates on my host machine; they are located in /etc/letsencrypt.
Then, in my docker-compose.yml, I have the following configuration:
frontend:
build: ./frontend
container_name: frontend
ports:
- 8080:80
networks:
- not-exposed
- exposed
volumes:
- /etc/letsencrypt:/etc/letsencrpt
For testing purposes, I stopped at this point and started the container to check whether I can access those files, and they are inside my container.
However, moving on to my nginx default.conf file, I have the following code there:
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
root /usr/share/nginx/html;
include /etc/nginx/mime.types;
server_name example.com;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}
Unfortunately, when I run docker-compose up, it fails with:
2022/10/26 13:51:38 [emerg] 1#1: cannot load certificate
"/etc/letsencrypt/live/domain.com/fullchain.pem": BIO_new_file()
failed (SSL: error:02001002:system library:fopen:No such file or
directory:fopen('/etc/letsencrypt/live/domain.com/fullchain.pem','r')
error:2006D080:BIO routines:BIO_new_file:no such file)
Any idea why? Like I mentioned, the file is there in the container that holds nginx and my frontend, so why can't it be opened?
I followed these steps to set up nginx in Docker on my server:
I created an nginx/ folder and put docker-compose.yml and conf.d/ (with conf.d/default.conf) in it.
docker-compose.yml:
version: "3.8"
services:
web:
image: nginx:latest
restart: always
volumes:
- ./public:/var/www/html
- ./conf.d:/etc/nginx/conf.d
- ./certbot/conf:/etc/nginx/ssl
- ./certbot/data:/var/www/certbot
ports:
- 80:80
- 443:443
certbot:
image: certbot/certbot:latest
command: certonly --webroot --webroot-path=/var/www/certbot --email abc@xyz.com --agree-tos --no-eff-email -d example.com -d www.example.com
volumes:
- ./certbot/conf:/etc/letsencrypt
- ./certbot/logs:/var/log/letsencrypt
- ./certbot/data:/var/www/certbot
and my conf.d/default.conf:
server {
listen [::]:80;
listen 80;
server_name example.com www.example.com;
location ~ /.well-known/acme-challenge {
allow all;
root /var/www/certbot;
}
# redirect http to https www
return 301 https://www.example.com$request_uri;
}
server {
listen [::]:443 ssl http2;
listen 443 ssl http2;
server_name example.com;
# SSL code
ssl_certificate /etc/nginx/ssl/live/example.com/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/live/example.com/privkey.pem;
root /var/www/html;
location / {
index index.html;
}
return 301 https://www.example.com$request_uri;
}
server {
listen [::]:443 ssl http2;
listen 443 ssl http2;
server_name www.example.com;
# SSL code
ssl_certificate /etc/nginx/ssl/live/example.com/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/live/example.com/privkey.pem;
root /var/www/html/example/public;
location / {
index index.html;
}
}
I am sure the SSL is working because I can access https://example.com with no problem.
But I always get 404 not found.
I do have a public/ folder inside the nginx/ folder with an index.html, but somehow I always get a 404.
I am using Ubuntu 20.
How can I resolve this?
OK, after @Amin's comments, I read through my config file carefully and found out that, with all the redirecting from port 80 to port 443 and then to the www host, requests end up in the server block whose root is /var/www/html/example/public. So I just had to change my docker-compose volume binding from:
volumes:
- ./public:/var/www/html
to
volumes:
- ./public:/var/www/html/example/public
Now it works.
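Equivalently (just a sketch of the other direction, not what I ended up doing), the original volume could be kept and the root in the www.example.com server block aligned with the mount point instead:
root /var/www/html;
Either way, the directory nginx serves from and the directory the volume is mounted to have to match.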
I am playing with "nginx-proxy". I pulled the image "jwilder/nginx-proxy:latest" to my local host and tried to start it, but got this message: "Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'". When I then tried to reach the server on port 80, it returned a 503 error.
Here is my docker-compose file:
version: '3.0'
services:
proxy:
image: jwilder/nginx-proxy:latest
container_name: nginx-proxy
ports:
- "80:80"
- "443:443"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- /etc/nginx/vhost.d
- /usr/share/nginx/html
- /docker/certs:/etc/nginx/certs:ro
network_mode: "bridge"
and here is the output I got:
WARNING: /etc/nginx/dhparam/dhparam.pem was not found. A pre-generated dhparam.pem will be used for now while a new one is being generated in the background. Once the new dhparam.pem is in place, nginx will be reloaded.
forego | starting dockergen.1 on port 5000
forego | starting nginx.1 on port 5100
dockergen.1 | 2018/07/18 04:14:14 Generated '/etc/nginx/conf.d/default.conf' from 1 containers
dockergen.1 | 2018/07/18 04:14:14 Running 'nginx -s reload'
dockergen.1 | 2018/07/18 04:14:14 Watching docker events
dockergen.1 | 2018/07/18 04:14:14 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'
Any idea is much appreciated. Cheers
This is the default behavior for this image; you can see it in /etc/nginx/conf.d/default.conf:
server {
server_name _;
listen 80;
access_log /var/log/nginx/access.log vhost;
return 503;
}
So when you visit it without any proxied containers, it returns a 503 error.
This image works through service discovery, so you need to use it that way.
See the official example:
docker-compose.yaml
version: '2'
services:
nginx-proxy:
image: jwilder/nginx-proxy
ports:
- "80:80"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
whoami:
image: jwilder/whoami
environment:
- VIRTUAL_HOST=whoami.local
If you use docker-compose up to start it and then have a look at /etc/nginx/conf.d/default.conf again, you will see:
server {
server_name _;
listen 80;
access_log /var/log/nginx/access.log vhost;
return 503;
}
# whoami.local
upstream whoami.local {
## Can be connected with "a_default" network
# a_whoami_1
server 172.20.0.2:8000;
}
server {
server_name whoami.local;
listen 80 ;
access_log /var/log/nginx/access.log vhost;
location / {
proxy_pass http://whoami.local;
}
}
Here, jwilder/nginx-proxy watches Docker events and adds a reverse-proxy entry to the nginx configuration.
So if you execute curl -H "Host: whoami.local" localhost on your host machine, it will print something like I'm 5b129ab83266.
The server 172.20.0.2:8000 line in the nginx settings is your application container's IP; it can change every time you start a new container. With this method you don't need to track the IP of your application container yourself; you just reach it through the reverse proxy that nginx-proxy keeps up to date.
Other services provide a similar function, for example marathon-lb, a component of Marathon (a Mesos framework), and Kubernetes ingress controllers follow the same idea. In any case, it helps to understand the principle behind this image; a useful doc for reference: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
Hung Vu, was that the entirety of your docker-compose file, or just the section for jwilder/nginx-proxy? I ask because I experienced this error when I had simply forgotten to add the VIRTUAL_HOST environment variable to my other services after dropping in that service definition for nginx-proxy. Doh! :-) See its docs, or atline's example here.
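For reference, it only takes an environment entry on each service that should be proxied; a minimal sketch to sit next to the proxy definition from the question (the app service, its image and the host name are hypothetical):
  app:
    image: my-backend-image
    environment:
      - VIRTUAL_HOST=app.local
    network_mode: "bridge"
nginx-proxy will then generate an upstream for app.local when it sees that container start.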
I ran into this issue in 2021, but I was able to resolve it. I'm documenting it here because this thread is at the top of Google searches.
tl;dr: Run the containers without a custom NGINX config at least once
This happened to me as I was migrating from one host to another. I migrated all of my files over and then started running my containers one by one.
In my case, I had a custom NGINX config file that proxied a path to a separate Docker container that had not been created yet.
- "~/gotti/volumes/nginx-configs:/etc/nginx/vhost.d"
In this mount, I had a mydomain.com config file with the following contents:
# reverse proxy
location /help {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $http_host;
proxy_pass http://ghost:2368; # <----- Problem: this container didn't exist yet.
proxy_redirect off;
}
This invalid reference prevented NGINX from proxying my app, and thus prevented an SSL cert from being issued.
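One way to do the "run without the custom config once" step (a sketch only; the paths come from the mount above, and the nginx-proxy container name is taken from the question's compose file): move the custom vhost config aside, bring the proxy up, then restore the file once the missing container exists and reload nginx:
mv ~/gotti/volumes/nginx-configs/mydomain.com ~/gotti/volumes/nginx-configs/mydomain.com.bak
docker-compose up -d
# ...start the ghost container, then:
mv ~/gotti/volumes/nginx-configs/mydomain.com.bak ~/gotti/volumes/nginx-configs/mydomain.com
docker exec nginx-proxy nginx -s reload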
I want to be able to access an nginx Docker container over HTTPS at https://192.168.99.100. So far I have done the following:
Dockerfile:
FROM nginx
COPY certs/nginx-selfsigned.crt /etc/ssl/certs/
COPY certs/nginx-selfsigned.key /etc/ssl/private/
COPY default-ssl.conf /etc/nginx/sites-available/default
EXPOSE 443
I have the corresponding certificate files in the certs folder.
The default-ssl.conf:
server {
listen 80;
listen 443 ssl;
server_name localhost;
ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
docker-compose.yaml
version: '3'
services:
nginx:
image: mynamespace/nginx_pma
container_name: nginx_pma
build:
context: .
ports:
- 443:443
- 80:80
So, when I run this, I am able to access 192.168.99.100, which shows the NGINX welcome page, but I am unable to make https://192.168.99.100 work.
The host is Windows 7 with Docker Toolbox.
Any suggestions?
The reason for your error is that you're copying the nginx SSL configuration to a folder that nginx does not load by default: the official nginx image has no sites-available/sites-enabled setup and only includes the *.conf files under /etc/nginx/conf.d.
After changing this line in the Dockerfile -
COPY default-ssl.conf /etc/nginx/sites-available/default
To this -
COPY default-ssl.conf /etc/nginx/conf.d/default-ssl.conf
I'm able to reach Nginx with https.
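A quick check from the host (the -k flag makes curl skip certificate verification, since the certificate is self-signed):
curl -k https://192.168.99.100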