I want to be able to access an nginx Docker container over HTTPS at https://192.168.99.100. So far I have done the following:
Dockerfile:
FROM nginx
COPY certs/nginx-selfsigned.crt /etc/ssl/certs/
COPY certs/nginx-selfsigned.key /etc/ssl/private/
COPY default-ssl.conf /etc/nginx/sites-available/default
EXPOSE 443
I have the corresponding certificate files in the certs folder.
The default-ssl.conf:
server {
    listen 80;
    listen 443 ssl;
    server_name localhost;

    ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
docker-compose.yaml:
version: '3'
services:
  nginx:
    image: mynamespace/nginx_pma
    container_name: nginx_pma
    build:
      context: .
    ports:
      - 443:443
      - 80:80
So, when I run this, I am able to access 192.168.99.100, which shows the NGINX welcome page, but I am unable to make it work at https://192.168.99.100.
The host is Windows 7 with Docker Toolbox.
Any suggestions?
The reason for your error is that you're copying the nginx SSL configuration to a folder that nginx does not load by default.
After changing this line in the Dockerfile:
COPY default-ssl.conf /etc/nginx/sites-available/default
to this:
COPY default-ssl.conf /etc/nginx/conf.d/default-ssl.conf
I'm able to reach nginx over HTTPS.
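For completeness, one way to verify the change after rebuilding (a sketch; -k is needed because the certificate is self-signed):
docker-compose up -d --build
curl -k https://192.168.99.100/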
I'm using nginx in a docker-compose file to serve my frontend and backend.
I had no problems for a long time, but recently I got a "504 Gateway Time-out" error when I try to access my project through localhost and its port:
http://localhost:8080
When I use the Docker IP and its port instead,
http://172.18.0.1:8080
I can access the project and nginx works correctly.
I'm sure my config file is correct because it had been working for 6 months, and I don't know what happened to it.
What should I check to find the problem?
docker-compose file:
.
.
.
nginx:
  container_name: nginx
  image: nginx:1.19-alpine
  restart: unless-stopped
  ports:
    - '8080:80'
  volumes:
    - ./frontend:/var/www/html/frontend
    - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
  networks:
    - backend_appx

networks:
  backend_appx:
    external: true
.
.
nginx config file:
upstream nextjs_upstream {
    server next_app:3000;
}

server {
    listen 80 default_server;
    server_name _;
    server_tokens off;

    # set root
    root /var/www/html/frontend;

    # set log
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    location /_next/static {
        proxy_cache STATIC;
        proxy_pass http://nextjs_upstream;
        add_header X-Cache-Status $upstream_cache_status;
    }
}
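A few checks that might narrow this down (a sketch, assuming the upstream service really is called next_app and the container is named nginx, as in the snippets above):
# Is the upstream container attached to the shared backend_appx network?
docker network inspect backend_appx --format '{{range .Containers}}{{.Name}} {{end}}'
# Can nginx resolve and reach the upstream from inside its own container?
docker exec nginx wget -qO- -T 5 http://next_app:3000/ > /dev/null && echo reachable
# What does nginx log at the moment of the 504?
docker logs --tail 50 nginx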
I followed these steps to set up nginx in Docker on my server:
I created an nginx/ folder and put docker-compose.yml and conf.d/ (with conf.d/default.conf) in it.
docker-compose.yml:
version: "3.8"
services:
  web:
    image: nginx:latest
    restart: always
    volumes:
      - ./public:/var/www/html
      - ./conf.d:/etc/nginx/conf.d
      - ./certbot/conf:/etc/nginx/ssl
      - ./certbot/data:/var/www/certbot
    ports:
      - 80:80
      - 443:443
  certbot:
    image: certbot/certbot:latest
    command: certonly --webroot --webroot-path=/var/www/certbot --email abc#xyz.com --agree-tos --no-eff-email -d example.com -d www.example.com
    volumes:
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/logs:/var/log/letsencrypt
      - ./certbot/data:/var/www/certbot
and my conf.d/default.conf:
server {
    listen [::]:80;
    listen 80;
    server_name example.com www.example.com;

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/certbot;
    }

    # redirect http to https www
    return 301 https://www.example.com$request_uri;
}

server {
    listen [::]:443 ssl http2;
    listen 443 ssl http2;
    server_name example.com;

    # SSL code
    ssl_certificate /etc/nginx/ssl/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/live/example.com/privkey.pem;

    root /var/www/html;

    location / {
        index index.html;
    }

    return 301 https://www.example.com$request_uri;
}

server {
    listen [::]:443 ssl http2;
    listen 443 ssl http2;
    server_name www.example.com;

    # SSL code
    ssl_certificate /etc/nginx/ssl/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/live/example.com/privkey.pem;

    root /var/www/html/example/public;

    location / {
        index index.html;
    }
}
I am sure the SSL is working because I can access https://example.com with no problem.
But I always get 404 Not Found, even though I do have a public/ folder inside the nginx/ folder with an index.html in it.
I am using Ubuntu 20.
How can I resolve this?
Ok, after @Amin's comments, I read my config file carefully and found out that, with all the rerouting from port 80 to port 443, the root folder actually changes to /var/www/html/example/public. So I just have to change my docker-compose volume binding from:
volumes:
  - ./public:/var/www/html
to
volumes:
  - ./public:/var/www/html/example/public
Now it works.
I have a few web apps running on my VPS. I have one domain, and I run the apps on subdomains using Nginx and server_name directives.
I decided to containerize my newest one, using Docker and docker-compose.
However, I can't reach this app with Nginx.
On my VPS, I configured app-client-proxy.nginx so that it proxies to the Docker IP and the port that client-app (name changed) listens on:
server {
    root /home/dartungar/projects/app-client;
    server_name app.dartungar.com www.app.dartungar.com;

    location / {
        proxy_pass http://172.17.0.1:8043;
    }

    # omitted certbot things
}

server { # server that listens to :80 and redirects to :443 }
Here's docker-compose.yml:
version: "3"
services:
  client:
    image: dartungar/app-client
    restart: unless-stopped
    ports:
      - 8043:443
Inside the app-client container I also have nginx, which listens on 443 and serves an HTML file:
server {
    listen 443 ssl;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }
}
When I try to open the URL app.dartungar.com, I get 502 Bad Gateway.
I know it has to do with Docker networking; my guess is that I have to reverse-proxy to the network IP, not the Docker IP.
How do I do that? Perhaps I missed it in the docs; if so, just send me a link.
Cheers!
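A couple of things worth checking first (a sketch, not a confirmed diagnosis):
# Talk to the published port directly from the VPS; -k because curl will not trust the container's certificate
curl -vk https://127.0.0.1:8043/
# The host nginx error log usually names the upstream it failed to reach
tail -n 20 /var/log/nginx/error.log
# Confirm the container is running and the 8043:443 mapping is in place
docker ps --filter name=client
One detail that stands out: the nginx inside the container only listens with ssl on 443, while the host block proxies to it with http://. If the direct curl above answers, switching the proxy_pass scheme to https is the first thing to try.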
I am getting the following error when I run docker-compose up:
backend_1_a5b5a2caf6fc | 2019/04/28 21:40:49 [emerg] 1#1: no "ssl_certificate" is defined for the "listen ... ssl" directive in /etc/nginx/conf.d/default.conf:4
backend_1_a5b5a2caf6fc | nginx: [emerg] no "ssl_certificate" is defined for the "listen ... ssl" directive in /etc/nginx/conf.d/default.conf:4
...
...
production_backend_1_a5b5a2caf6fc exited with code 1
Here is my Dockerfile for nginx:
FROM nginx
COPY nginx.conf /etc/nginx/conf.d/default.conf
default.conf:
fastcgi_cache_path /dev/shm levels=1:2 keys_zone=laravel:100m;
fastcgi_cache_key "$scheme$request_method$host$request_uri$query_string";

server {
    listen 80 default_server;
    root /var/www/public;
    index index.php index.html;
    client_max_body_size 5M;
    ...
    ...
docker-compose.yml:
version: '3'
services:
  backend:
    build: ./nginx
    depends_on:
      - db
      - redis
    working_dir: /var/www
    volumes:
      - ../../src:/var/www
    ports:
      - 80:80
    ...
    ...
This means that you have not set up SSL correctly (you're missing a server certificate). Since you have mapped port 80 and not 443 in your docker-compose file, I assume you're not going to use SSL.
Simply remove the following line in your nginx.conf to disable SSL:
listen 443 ssl http2;
Be sure to rebuild and restart your nginx container.
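For example (assuming the compose service is named backend, as in the file above):
docker-compose build backend
docker-compose up -d backend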
Do you have any other server listening on port 443? Try deleting all symbolic links from /etc/nginx/sites-enabled except the one server you want to make work.
But if someone does need to use SSL, this error means that your server section is missing the ssl_certificate and ssl_certificate_key declarations. You need a .crt and a .key file to run with SSL.
It should look like:
server {
    listen 80;
    listen 443 default_server ssl;

    ssl_certificate /etc/nginx/certs/default.crt;
    ssl_certificate_key /etc/nginx/certs/default.key;

    ... other declarations
}
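If you don't have a certificate pair yet, a self-signed one for testing can be generated like this (the paths are only examples matching the snippet above):
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/nginx/certs/default.key \
    -out /etc/nginx/certs/default.crt \
    -subj "/CN=localhost"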
I have this architecture:
- one Docker container running nginx on the host's port 80
- an app with 2 services: one Node and one MongoDB
The docker-compose file:
version: '2'
services:
  backend:
    build: ./back-end/
    container_name: "app-back-end"
    volumes:
      - ./back-end/:/usr/src/dance-app-back
      - /usr/src/app-back/node_modules
    ports:
      - "3000:3050"
    links:
      - mongodb
  mongodb:
    image: mongo:3.2.15
    ports:
      - "3100:27017"
    volumes:
      - ./data/mongodb:/data/db
The nginx config file:
server {
    listen 80;
    server_name localhost;

    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;

    location /back_1 {
        #proxy_pass http://172.17.0.2:5050/;
        proxy_pass http://0.0.0.0:5050/;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
The nginx container does not seem to be able to reach port 3000 on the host.
What am I doing wrong?
If you have port 3000 on your host and would like to map it to your container port, then you need to do that in your docker-compose file.
Change:
backend:
  build: ./back-end/
  container_name: "app-back-end"
  volumes:
    - ./back-end/:/usr/src/dance-app-back
    - /usr/src/app-back/node_modules
  ports:
    - "3000:3000"
  links:
    - mongodb
Then you can link nginx to the container that you want to proxy-pass to.
Add to your compose file:
nginx:
  restart: always
  image: nginx:1.13.1
  ports:
    - "80:80"
  depends_on:
    - backend
  links:
    - backend:backend
Then, in your nginx file, refer to the application container you want to proxy to by its service name, together with the port that container exposes:
server {
    listen 80;
    server_name localhost;

    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;

    location /back_1 {
        proxy_pass http://backend:3000/;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Instead of hard-coding the container's IP in your nginx config, you can use the service name; with the containers linked as above, Docker resolves that name to the right IP for you.
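A quick way to confirm the wiring from the host once everything is up (the path matches the location block above):
docker-compose up -d --build
curl -i http://localhost/back_1/
As a side note, with both services on the same compose network the links: entries are optional; Compose's default network already provides name-based discovery.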