Docker nginx self-signed certificate - can't connect to https - docker

I have been following a few tutorials to try to get my SSL cert working with my Docker environment. I have decided to go down the route of a wildcard certificate from Let's Encrypt. I generated the certificate with the following command:
certbot certonly --manual \
--preferred-challenges=dns \
--email {email_address} \
--server https://acme-v02.api.letsencrypt.org/directory \
--agree-tos \
--manual-public-ip-logging-ok \
-d "*.servee.co.uk"
NOTE: I am using multi-tenancy, so I need the wildcard on my domain.
This works; the certificate has been generated on my server. I am now trying to use it with my Docker nginx container.
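Before wiring the certificate into nginx, it's worth confirming the wildcard actually made it into the cert. A sketch, using a throwaway self-signed cert purely for illustration; for the real certificate, run the same x509 command against certbot's fullchain.pem (typically under /etc/letsencrypt/live/servee.co.uk/):

```shell
# Illustration only: create a throwaway cert with the wildcard CN, then inspect it.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=*.servee.co.uk" \
  -keyout /tmp/privkey.pem -out /tmp/fullchain.pem -days 1
# The subject should show CN = *.servee.co.uk
openssl x509 -in /tmp/fullchain.pem -noout -subject
```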
My docker-compose.yml file looks like this:
...
services:
  nginx:
    build:
      context: docker/nginx
      dockerfile: Dockerfile
    ports:
      - 433:433
      - 80:80
    volumes:
      - ./src:/var/www/html:delegated
    depends_on:
      - app
      - mysql
    networks:
      - laravel
...
This is my Dockerfile
FROM nginx:stable-alpine
COPY ./fullchain.pem /etc/nginx/fullchain.pem
COPY ./privkey.pem /etc/nginx/privkey.pem
ADD nginx.conf /etc/nginx/nginx.conf
ADD default.conf /etc/nginx/conf.d/default.conf
RUN mkdir -p /var/www/html
RUN addgroup -g 1000 laravel && adduser -G laravel -g laravel -s /bin/sh -D laravel
RUN chown laravel:laravel /var/www/html
I am copying the pem files into the nginx container so I can use them.
Here is my default.conf file, which should be loading my certificate:
server {
    listen 80;
    index index.php index.html;
    server_name servee.co.uk;
    root /var/www/html/public;
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
server {
    listen 443 ssl;
    server_name servee.co.uk;
    ssl_certificate /etc/nginx/fullchain.pem;
    ssl_certificate_key /etc/nginx/privkey.pem;
    index index.php index.html;
    location / {
        proxy_pass http://servee.co.uk; # for demo purposes
    }
}
The nginx container builds successfully, and when I bash into it I can find the pem files. The issue is that when I go to https://servee.co.uk I just get an "Unable to connect" error. If I go to http://servee.co.uk it works fine.
I'm not sure what I've missed. This has really put me off Docker because it's such a pain to get SSL working, so hopefully it's an easy fix.

You need to update your docker-compose.yml file to publish port 443 instead of 433, so that it matches the listen 443 ssl directive in your default.conf. Try the docker-compose.yml file below.
...
services:
  nginx:
    build:
      context: docker/nginx
      dockerfile: Dockerfile
    ports:
      - 443:443
      - 80:80
    volumes:
      - ./src:/var/www/html:delegated
    depends_on:
      - app
      - mysql
    networks:
      - laravel
...

Related

Docker nginx routing to API in other container

I am going crazy not figuring this out.
I have a Docker network named isolated-network, and I have an nginx server and an API in that network. Nginx is exposed to the host on port 80 so I can access it, but the API isn't. I know I could expose the API to the host too, but I'd like it to stay isolated and only be accessible through nginx at, say, /api.
I have configured nginx at /api to route to http://my-api:8000, but I get 404 Not Found in return when accessing http://localhost/api. If I do 'docker exec -it nginx sh' and curl the same route, http://my-api:8000, I get the expected response.
Is what I'm trying even possible? I have not found any example doing the same thing. If I can't route to http://my-api:8000, can I at least send the API request to it and receive the response?
Below is an example in which nginx routes traffic to a php-fpm container.
Dockerfile:
FROM php:8.1-fpm
USER root
# Create the www group/user first, so the chown calls below have a user to chown to
RUN groupadd -g 1000 www && useradd -ms /bin/bash -G www -g 1000 www
RUN mkdir -p /var/www/html && mkdir -p /home/www/.composer && chown www:www /var/www/html && chown www:www /home/www/.composer
RUN usermod -aG sudo www
RUN chown -R www /var/www
EXPOSE 9000
CMD ["php-fpm"]
docker-compose.yml:
version: "3"
services:
  webserver:
    image: nginx:1.21.6-alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - ./configfiles/nginx/conf-fpm:/etc/nginx/conf.d
  php-fpm:
    image: php:8.1-fpm
    working_dir: /var/www
    stdin_open: true
    tty: true
    build:
      context: ./configfiles/php-fpm
      dockerfile: Dockerfile
    volumes:
      - ./src:/var/www
nginx config file:
server {
    listen 80;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public;
    proxy_read_timeout 3600;
    proxy_connect_timeout 3600;
    proxy_send_timeout 3600;
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php-fpm:9000;
        fastcgi_read_timeout 3600;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}
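The config above covers the php-fpm case; for the /api prefix routing the question actually asks about, a minimal location block would look something like this (a sketch, not part of the answer above; the service name my-api and port 8000 are taken from the question):

```nginx
# Sketch: proxy /api/... to the API container on the shared network.
location /api/ {
    proxy_pass http://my-api:8000/;   # trailing slash strips the /api prefix: /api/users -> /users upstream
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
```

Without the trailing slash on proxy_pass, the full /api/... path is forwarded to the upstream, which is a common cause of the 404 described in the question.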

Docker Container cant connect to other container - nginx in alpine to nginx in alpine

I'm confused about making a connection from one nginx alpine container to another.
Both use Laravel 9.
On the host I can access both using http://localhost:8080 and http://localhost:5001,
but when I try to use Guzzle in the frontend like this:
$response = $http->get('http://dashboard:5001');
the result is:
cURL error 7: Failed to connect to dashboard_service port 5001 after 0 ms:
Connection refused
And when I try to curl from the frontend container to the dashboard container, the result is also connection refused. I can ping it, but curl does not work.
This is my docker-compose.yml:
version: "3.8"
networks:
  networkname:
services:
  frontend:
    build:
      context: .
      dockerfile: ./file/Dockerfile
    container_name: frontend
    ports:
      - 8080:80
    volumes:
      - ./frontend:/code
      - ./.docker/php-fpm.conf:/etc/php8/php-fpm.conf
      - ./.docker/php.ini-production:/etc/php8/php.ini
      - ./.docker/nginx.conf:/etc/nginx/nginx.conf
      - ./.docker/nginx-laravel.conf:/etc/nginx/modules/nginx-laravel.conf
    networks:
      - networkname
  dashboard:
    build:
      context: .
      dockerfile: ./file/Dockerfile
    container_name: dashboard
    ports:
      - 5001:80
    volumes:
      - ./dashboard:/code
      - ./.docker/php-fpm.conf:/etc/php8/php-fpm.conf
      - ./.docker/php.ini-production:/etc/php8/php.ini
      - ./.docker/nginx.conf:/etc/nginx/nginx.conf
      - ./.docker/nginx-laravel.conf:/etc/nginx/modules/nginx-laravel.conf
    networks:
      - networkname
This is my Dockerfile:
FROM alpine:latest
WORKDIR /var/www/html/
# Essentials
RUN echo "UTC" > /etc/timezone
RUN apk add --no-cache zip unzip curl sqlite nginx supervisor
# Installing PHP
RUN apk add --no-cache php8 \
    php8-common \
    php8-fpm
# Installing composer
RUN curl -sS https://getcomposer.org/installer -o composer-setup.php
RUN php composer-setup.php --install-dir=/usr/local/bin --filename=composer
RUN rm -rf composer-setup.php
# Configure supervisor
RUN mkdir -p /etc/supervisor.d/
COPY .docker/supervisord.ini /etc/supervisor.d/supervisord.ini
# Configure PHP
RUN mkdir -p /run/php/
RUN mkdir -p /test
RUN touch /run/php/php8.0-fpm.pid
CMD ["supervisord", "-c", "/etc/supervisor.d/supervisord.ini"]
This is my nginx conf:
server {
    listen 80;
    server_name localhost;
    root /code/public;
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options "nosniff";
    index index.php;
    charset utf-8;
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }
    error_page 404 /index.php;
    location ~ \.php$ {
        fastcgi_pass localhost:9000;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }
    location ~ /\.(?!well-known).* {
        deny all;
    }
}
I'm confused about whether I have to set this up in Docker, nginx, or Alpine Linux.
Thanks.
Hello. If you call this from PHP inside the container, you don't need to specify the port.
From PHP, use:
$response = $http->get('http://dashboard');
But if you call it from outside the containers, you need to use ip:port, like:
$response = $http->get('http://127.0.0.1:5001');
If you are connecting to a docker container within the same network, then use the internal port. If not, use the external port.
In your case, you are trying to connect to the dashboard container from within the networkname network, so try http://dashboard instead of http://dashboard:5001.

docker compose nginx not starting during build

I found this very strange: I had set up a Rails app, a Postgres db, and an nginx server for production only, but nginx is only able to start if I type
docker-compose -f docker-compose.yml -f production.yml up --build
but not with the pre-built
docker-compose -f docker-compose.yml -f production.yml build
then
docker-compose up
The Rails app and db start just fine; it's just that nginx is not started and the port reverts back to 3000 instead of 80, which I find very strange. Aren't they doing the same thing?
nginx.conf
# This is a template. Referenced variables (e.g. $INSTALL_PATH) need
# to be rewritten with real values in order for this file to work.
upstream rails_app {
    server unix:///webapp/tmp/sockets/puma.sock;
}
server {
    listen 80;
    # define your domain
    server_name 127.0.0.1 localhost www.example.com;
    # define the public application root
    root /providre_api/public;
    # define where Nginx should write its logs
    access_log /providre_api/log/nginx.access.log;
    error_log /providre_api/log/nginx.error.log;
    # deny requests for files that should never be accessed
    location ~ /\. {
        deny all;
    }
    location ~* ^.+\.(rb|log)$ {
        deny all;
    }
    # serve static (compiled) assets directly if they exist (for rails production)
    location ~ ^/(assets|images|javascripts|stylesheets|swfs|system)/ {
        try_files $uri @rails;
        access_log off;
        gzip_static on; # to serve pre-gzipped version
        expires max;
        add_header Cache-Control public;
        # Some browsers still send conditional-GET requests if there's a
        # Last-Modified header or an ETag header even if they haven't
        # reached the expiry date sent in the Expires header.
        add_header Last-Modified "";
        add_header ETag "";
        break;
    }
    # send non-static file requests to the app server
    location / {
        try_files $uri @rails;
    }
    location @rails {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://rails_app;
    }
}
web.Dockerfile
# Base image:
FROM nginx
# Install dependencies
RUN apt-get update -qq && apt-get -y install apache2-utils
# establish where Nginx should look for files
ENV INSTALL_PATH /providre_api
# Set our working directory inside the image
WORKDIR $INSTALL_PATH
# create log directory
RUN mkdir log
# copy over static assets
COPY public public/
# Copy Nginx config template
COPY docker/web/nginx.conf /tmp/docker.nginx
# substitute variable references in the Nginx config template for real values from the environment
# put the final config in its place
RUN envsubst '$INSTALL_PATH' < /tmp/docker.nginx > /etc/nginx/conf.d/default.conf
EXPOSE 80
# Use the "exec" form of CMD so Nginx shuts down gracefully on SIGTERM (i.e. `docker stop`)
CMD [ "nginx", "-g", "daemon off;" ]
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    restart: always
    ports:
      - "5433:5432"
    environment:
      POSTGRES_USER: 'postgres'
      POSTGRES_PASSWORD: ''
  app:
    command: bundle exec puma -C config/puma.rb
    ports:
      - "3000"
    depends_on:
      - db
docker-compose.override.yml
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: ./docker/app/Dockerfile
    volumes:
      - .:/providre_api
    ports:
      - "3000:3000"
production.yml
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: ./docker/app/prod.Dockerfile
    volumes:
      - .:/providre_api
    ports:
      - "3000"
  nginx:
    container_name: web
    build:
      context: .
      dockerfile: ./docker/web/web.Dockerfile
    depends_on:
      - app
    volumes:
      - ./docker/web/nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - 80:80
Sorry, my bad. I didn't test it fully: plain docker-compose up uses the normal docker-compose.yml (plus docker-compose.override.yml), so I have to repeat the flags and use docker-compose -f docker-compose.yml -f production.yml up instead.
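As a follow-up: repeating the -f flags on every command is easy to forget. A sketch of an alternative, assuming the same file names, is the COMPOSE_FILE environment variable, which plain docker-compose commands will then pick up:

```shell
# Tell docker-compose which files to merge (in order) without -f flags.
# The path separator is ":" on Linux/macOS and ";" on Windows.
export COMPOSE_FILE=docker-compose.yml:production.yml
echo "$COMPOSE_FILE"   # every subsequent docker-compose command merges both files
```

With this set, docker-compose build and docker-compose up both use the production stack, so the nginx service can no longer be silently dropped.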

How to get Let's Encrypt SSL for a Flask application running on Docker with gunicorn as the web server?

I am trying to get SSL for my site. I have tried many tutorials and, yes, I can do it, but most of those tutorials use nginx as the web server.
But now I want SSL for my site, which runs on Docker with gunicorn as the web server. I have followed many tutorials and sources, but I can't get it working.
So how do I do that? Any example or tutorial would be much appreciated.
This is my Dockerfile:
FROM python:3.6.5-stretch
MAINTAINER Irwan Santosa
RUN apt-get update && apt-get install -y build-essential libpq-dev
ENV INSTALL_PATH_DOCKER /web_app_docker
RUN mkdir -p $INSTALL_PATH_DOCKER
WORKDIR $INSTALL_PATH_DOCKER
COPY requirements.txt requirements_docker.txt
RUN pip install -r requirements_docker.txt
COPY . .
CMD gunicorn -b 0.0.0.0:80 --access-logfile - "web_app.app:create_app()"
And this is my docker-compose.yml:
version: '3'
services:
  web_app_docker:
    build: .
    command: >
      gunicorn -b 0.0.0.0:80
      --access-logfile -
      --reload
      "web_app.app:create_app()"
    volumes:
      - '.:/web_app_docker'
    ports:
      - '9999:80'
  service_postgresql_docker:
    image: 'postgres:9.6'
    environment:
      POSTGRES_USER: 'irwan'
      POSTGRES_PASSWORD: '12345'
    volumes:
      - '/var/lib/postgresql/data'
    ports:
      - '5435:5432'
[SOLVED] I did it with an nginx reverse proxy.
This is my default config file at /etc/nginx/sites-available/default:
server
{
    listen 80;
    listen [::]:80;
    server_name irwan.trinanda.tk;
    return 301 https://$server_name$request_uri;
}
server
{
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name irwan.trinanda.tk;
    ssl on;
    ssl_certificate /etc/letsencrypt/live/irwan.trinanda.tk/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/irwan.trinanda.tk/privkey.pem;
    ssl_dhparam /etc/letsencrypt/live/irwan.trinanda.tk/dhparams.pem;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    location /.well-known
    {
        root /var/www/ssl/website1/;
    }
    location /
    {
        include proxy.conf;
        proxy_pass http://128.199.80.54:9999/;
    }
}
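Note that the location / block includes a proxy.conf whose contents the post doesn't show. Such a file typically carries the standard forwarding headers; the sketch below is an assumption, not the author's actual file:

```nginx
# Typical contents of an included proxy.conf (assumed; not shown in the post)
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
```

Passing X-Forwarded-Proto through matters here because the Flask app behind gunicorn otherwise has no way to know the original request was HTTPS.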
I followed this tutorial, and yeah, I got it:
https://www.guyatic.net/2017/05/09/configuring-ssl-letsencrypt-certbot-nginx-reverse-proxy-nat/
Many thanks to whoever wrote it.

Having issues with nginx and docker mapping the correct folder

For whatever reason, I keep getting the default nginx html page.
My docker files are here /Users/xxx/_sites/_blah:
Dockerfile
site.conf
docker-compose.yml
My app with the code is here /Users/xxx/_sites/_blah/app
Here are my files:
Dockerfile
FROM nginx
# Change Nginx config here...
RUN rm /etc/nginx/conf.d/default.conf
ADD ./site.conf /etc/nginx/conf.d/
COPY content /var/www/html/site
COPY conf /etc/nginx
VOLUME /var/www/html/site
VOLUME /etc/nginx/conf.d/
docker-compose.yml
version: '3.3'
services:
  php:
    container_name: pam-php
    image: php:fpm
    volumes:
      - ./app:/var/www/html/site
  nginx:
    container_name: pam-nginx
    image: nginx:latest
    volumes:
      - ./app:/var/www/html/site:rw
      - ./site.conf:/etc/nginx/conf.d/site.conf:rw
    ports:
      - 7777:80
site.conf
server {
    listen 80;
    server_name localhost;
    root /var/www/html/site;
    error_log /var/log/nginx/localhost.error.log;
    access_log /var/log/nginx/localhost.access.log;
    location / {
        # try to serve file directly, fallback to app.php
        try_files $uri /index.php$is_args$args;
    }
    location ~ ^/.+\.php(/|$) {
        fastcgi_pass 192.168.59.103:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS off;
    }
}
Every tutorial or block of code I try just doesn't work or is out of date.
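One detail worth checking (an observation about the files above, not a confirmed fix): the compose file runs the stock nginx:latest image, so the Dockerfile is never built and nginx's shipped default.conf stays in place alongside the mounted site.conf; since both server blocks listen on 80 with server_name localhost, the stock one can win and serve the default page. Mounting site.conf over default.conf sidesteps that, and fastcgi_pass should target the php service by name rather than a hard-coded IP. A sketch:

```yaml
# Sketch: make site.conf replace nginx's default.conf instead of sitting next to it.
nginx:
  container_name: pam-nginx
  image: nginx:latest
  volumes:
    - ./app:/var/www/html/site:rw
    - ./site.conf:/etc/nginx/conf.d/default.conf:rw
  ports:
    - 7777:80
# and in site.conf: fastcgi_pass php:9000;  (the compose service name, not an IP)
```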
