I have two containers: nginx and Jenkins.
The nginx container is built from the following docker-compose file:
nginx:
  build:
    context: .
    dockerfile: nginxDF # copy nginx.conf
  ports:
    - 80:80
    - 443:443
  container_name: nginx
  volumes:
    - ./data/certbot/conf:/etc/letsencrypt
    - ./data/certbot/www:/var/www/certbot
The Jenkins container is built by this:
version: '3.7'
services:
  jenkins:
    build:
      context: .
      dockerfile: jenkinsDF
    container_name: 'jenkins_docker'
    restart: always
    user: root
    ports:
      - '8081:8080'
      - '50200:50000'
    volumes:
      - './jenkins_home:/var/jenkins_home'
      - '/var/run/docker.sock:/var/run/docker.sock'
      - '/home/ubuntu/proj:/home/proj'
And my nginx.conf:
...
server {
    listen 80;
    listen [::]:80;
    server_name {mydomain};

    location ~ /.well-known/acme-challenge/ {
        allow all;
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name {mydomain};
    server_tokens off;

    ssl_certificate /etc/letsencrypt/live/{mydomain}/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/{mydomain}/privkey.pem;
    ...
}
...
All of those files are located in /home/ubuntu/proj.
Running "docker compose up -d --build" at /home/ubuntu/proj works fine as well.
But when I do this inside the Jenkins container, from the mounted volume (/home/proj), the nginx container stops with this log:
cannot load certificate "/etc/letsencrypt/live/{mydomain}/fullchain.pem": BIO_new_file() failed
(SSL: error:02001002:system library:fopen:No such file or directory:
fopen('/etc/letsencrypt/live/{mydomain}/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
I checked inside with 'docker exec -it jenkins bash' and all the .pem files look just fine.
I'm just curious why this happens and whether it can be fixed.
Perhaps referencing the mounted volume once more causes this problem...
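A likely explanation (my assumption, not something confirmed in the logs): because the Jenkins container mounts /var/run/docker.sock, docker compose run inside it talks to the host's Docker daemon, but relative bind-mount sources like ./data/certbot/conf are resolved against the directory where compose runs — /home/proj/data/certbot/conf — and that path does not exist on the host (the files live under /home/ubuntu/proj). Docker then creates it as an empty directory, so nginx finds no certificates. A minimal sketch of a workaround, assuming this is the cause, is to mount the project at the same path inside Jenkins as on the host so both resolve identically:

volumes:
  - './jenkins_home:/var/jenkins_home'
  - '/var/run/docker.sock:/var/run/docker.sock'
  # same path inside and outside the container, so relative bind mounts
  # like ./data/certbot/conf resolve to the same host path either way
  - '/home/ubuntu/proj:/home/ubuntu/proj'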
Okay, so I'm learning Docker and I am trying to deploy a test app with a subdomain (whose domain was bought from another provider) which is pointing to my server. The server already has a non-dockerized Nginx setup which serves a couple of other non-dockerized apps perfectly, which means Nginx is already using ports 80 and 443. It's also worth mentioning that the subdomain's main domain (example.dev) has a non-dockerized app with an active SSL cert from Let's Encrypt already running on the server. And now the subdomain (test.example.dev) somehow shows the Nginx default page when visited. This is my server situation. Now let me explain what happens with Nginx and Certbot in the dockerized app.
The app uses 4 images to create 4 containers: Nodejs, Mongodb, Nginx and Certbot (for SSL). Before adding Certbot, I could access the app perfectly with <IP>:<port>. But now I need to attach that subdomain (test.example.dev) to my app with Let's Encrypt SSL certificates.
So after the build is done with Docker Compose, I see that Nginx and Certbot have exited with errors.
This is my nginx/default.conf file:
server {
    listen 80;
    listen [::]:80;
    server_name test.example.dev;
    server_tokens off;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://test.example.dev$request_uri;
    }
}

server {
    listen 443 default_server ssl http2;
    listen [::]:443 ssl http2;
    server_name test.example.dev;

    ssl_certificate /etc/nginx/ssl/live/test.example.dev/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/live/test.example.dev/privkey.pem;

    location /api {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://practice-app:3050;
        proxy_redirect off;
    }
}
And here’s my docker-compose.yml file:
version: '3'
services:
  practice-app:
    build:
      context: .
      args:
        NODE_ENV: production
    environment:
      - NODE_ENV=production
    command: node index.js
    depends_on:
      - mongo
  nginx:
    image: nginx:stable-alpine
    ports:
      - "4088:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./certbot/www:/var/www/certbot/:ro
      - ./certbot/conf/:/etc/nginx/ssl/:ro
  certbot:
    image: certbot/certbot:latest
    volumes:
      - ./certbot/www/:/var/www/certbot/:rw
      - ./certbot/conf/:/etc/letsencrypt/:rw
    depends_on:
      - nginx
  mongo:
    image: mongo:4.4.6
    environment:
      - MONGO_INITDB_ROOT_USERNAME=test
      - MONGO_INITDB_ROOT_PASSWORD=test
    volumes:
      - mongo-db:/data/db
volumes:
  mongo-db:
The Nginx logs say:
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/01/31 13:42:28 [emerg] 1#1: cannot load certificate "/etc/nginx/ssl/live/test.example.dev/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/nginx/ssl/live/test.example.dev/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
nginx: [emerg] cannot load certificate "/etc/nginx/ssl/live/test.example.dev/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/nginx/ssl/live/test.example.dev/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
And the Certbot logs say:
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Certbot doesn't know how to automatically configure the web server on this system. However, it can still get a certificate for you. Please run "certbot certonly" to do so. You'll need to manually configure your web server to use the resulting certificate.
But after adding the following command under the certbot service:
command: certonly --webroot -w /var/www/certbot --force-renewal --email example@gmail.com -d test.example.dev --agree-tos
the log changed to this:
[17:00] [server1.com test] # docker logs test_certbot_1
Requesting a certificate for test.example.dev
Certbot failed to authenticate some domains (authenticator: webroot). The Certificate Authority reported these problems:
Domain: test.example.dev
Type: unauthorized
Detail: Invalid response from http://test.example.dev/.well-known/acme-challenge/HCFXwB1BXb-provr8lr6mJCDG9LRoGbVV0e9BWiiwAo [63.250.33.76]: "<html>\r\n<head><title>404 Not Found</title></head>\r\n<body>\r\n<center><h1>404 Not Found</h1></center>\r\n<hr><center>nginx</center>\r\n"
Hint: The Certificate Authority failed to download the temporary challenge files created by Certbot. Ensure that the listed domains serve their content from the provided --webroot-path/-w and that files created there can be downloaded from the internet.
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Some challenges have failed.
Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /var/log/letsencrypt/letsencrypt.log or re-run Certbot with -v for more details.
What am I doing wrong here? Please give me a beginner-friendly solution, as I am new to DevOps.
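One way to narrow this down (a sketch, assuming the compose file above is in use): drop a test file into the bind-mounted webroot on the host and try to fetch it through the domain. Note that the container only publishes port 4088:80, while Let's Encrypt's HTTP-01 validation always starts on port 80, where the host's non-dockerized Nginx is listening.

# create a test file in the bind-mounted webroot on the host
mkdir -p ./certbot/www/.well-known/acme-challenge
echo ok > ./certbot/www/.well-known/acme-challenge/ping
# if this returns 404, port 80 of test.example.dev is being answered by
# the host Nginx, not this container (which is published on 4088 instead)
curl -i http://test.example.dev/.well-known/acme-challenge/ping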
You have some mistakes in your docker-compose file. Your nginx service should be linked to practice-app, not to nginx, and your practice-app service should open port 3050, as shown here:
version: '3'
services:
  practice-app:
    build:
      context: .
      args:
        NODE_ENV: production
    environment:
      - NODE_ENV=production
    command: node index.js
    ports:
      - "3050:3050"
    depends_on:
      - mongo
  nginx:
    image: nginx:stable-alpine
    ports:
      - "4088:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./certbot/www:/var/www/certbot/:ro
      - ./certbot/conf/:/etc/nginx/ssl/:ro
    links:
      - practice-app
  certbot:
    image: certbot/certbot:latest
    volumes:
      - ./certbot/www/:/var/www/certbot/:rw
      - ./certbot/conf/:/etc/letsencrypt/:rw
    depends_on:
      - nginx
  mongo:
    image: mongo:4.4.6
    environment:
      - MONGO_INITDB_ROOT_USERNAME=test
      - MONGO_INITDB_ROOT_PASSWORD=test
    volumes:
      - mongo-db:/data/db
volumes:
  mongo-db:
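As an aside (my note, not part of the original answer): links is a legacy Compose option; on the default network that Compose creates, services already resolve one another by service name, so nginx can reach practice-app:3050 with just a dependency declared, for example:

nginx:
  image: nginx:stable-alpine
  # depends_on controls start order; name resolution works via the
  # default Compose network without links
  depends_on:
    - practice-app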
I'm running a multi-container Docker setup locally with docker-compose; the containers are a React front-end ('client'), a Node.js app ('api'), and an Nginx proxy that sits in front of the two. I have been using the following docker-compose setup for a while:
version: '3'
services:
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - /usr/app/node_modules
      - ./client:/usr/app
  api:
    build:
      dockerfile: Dockerfile.dev
      context: ./server
    volumes:
      - /usr/app/node_modules
      - ./server:/usr/app
  nginx:
    restart: always
    build:
      dockerfile: Dockerfile.dev
      context: ./nginx
    ports:
      - '8080:80'
And my Nginx setup is as follows:
upstream client {
    server client:3000;
}

upstream api {
    server api:5000;
}

server {
    listen 80;
    server_name _;

    location / {
        if ($http_x_forwarded_proto != 'https') {
            return 301 https://$host$request_uri;
        }
        proxy_pass http://client;
    }

    location /api {
        if ($http_x_forwarded_proto != 'https') {
            return 301 https://$host$request_uri;
        }
        rewrite /api/(.*) /$1 break;
        proxy_pass http://api;
    }
}
Recently when I tried to start up the containers, I got the following error:
nginx_1 | 2019/08/08 18:11:12 [emerg] 1#1: host not found in upstream "client:3000" in /etc/nginx/conf.d/default.conf:2
nginx_1 | nginx: [emerg] host not found in upstream "client:3000" in /etc/nginx/conf.d/default.conf:2
Any idea why Nginx is not able to find the upstream?
I have tried adding links to the nginx block as follows:
nginx:
  restart: always
  build:
    dockerfile: Dockerfile.dev
    context: ./nginx
  links:
    - client:client
    - api:api
  ports:
    - '8080:80'
I also tried 'depends_on' instead of links. After adding links, nginx no longer complains and exits with code 0. But when I visit localhost:8080, it gives a 301 redirect to https://localhost.
Any help or direction is greatly appreciated!
You should check the names of your services. Docker Compose will start your api service in a container named [YOUR_PROJECT_NAME]_api_1. Start only api and client and check the output of docker ps; you should get the list of container names.
In newer docker-compose syntax versions you can use external_links to map [YOUR_PROJECT_NAME]_api_1 to api.
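A quick way to verify what this answer describes (a sketch; the exact names depend on your project directory):

# start only the two services and list the generated container names
docker-compose up -d api client
docker ps --format '{{.Names}}'
# expected output is something like myproject_api_1, myproject_client_1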
I have two docker-compose.yml files
The first one is a global one; I am using it to configure the nginx webserver. The other one holds the application code. Below are their configurations.
The first one, with the nginx configuration:
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: globaldocker
    container_name: app
    restart: unless-stopped
    tty: true
    working_dir: /var/www
    volumes:
      - ./:/var/www
      - ./dockerconfig/php/local.ini:/usr/local/etc/php/conf.d/local.ini
    networks:
      - common_network
  webserver:
    image: nginx
    container_name: webserver
    restart: unless-stopped
    tty: true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./:/var/www
      - ./dockerconfig/nginx/:/etc/nginx/conf.d/
    networks:
      - webserver_network
      - common_network
networks:
  common_network:
    external: false
  webserver_network:
    external: false
The above creates two networks:
globaldocker_common_network, globaldocker_webserver_network
In the dockerconfig folder there is an nginx configuration like:
server {
    listen 80;
    server_name pos.test www.pos.test;
    index index.php index.html;
    # other nginx configuration for pos.test
}
ON THE docker-compose configuration with the PHP file
Now for the one holding the source code for pos.test, I have the following configuration:
app:
  build:
    context: .
    dockerfile: Dockerfile
  image: posapp/php
  container_name: envanto_pos
  restart: unless-stopped
  tty: true
  working_dir: /var/www/pos
  volumes:
    - ./:/var/www/pos
    - ./dockerconfig/nginx/:/etc/nginx/conf.d/
  networks:
    - globaldocker_webserver_network
networks:
  globaldocker_webserver_network:
    external: true
to which I have added the external network.
When I try accessing pos.test through nginx, it doesn't display the application but only shows the default nginx page.
I have tried accessing the first Docker nginx configuration's bash and checked the /var/www/pos folder, but I can't see the files from the second Docker config (the source code).
How do I share volumes with my nginx configuration container so that when I access Docker via the exposed port 80 I am able to access my site pos.test?
What am I missing to make this work?
UPDATE
The two docker configuration files are located in different folders on my host machine.
UPDATE ON THE QUESTION
This is my nginx config file
server {
    listen 80;
    server_name pos.test www.pos.test;
    index index.php index.html;
    error_log /var/log/nginx/pos_error.log;
    access_log /var/log/nginx/pos_access.log;
    root /var/www/pos/web;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}
You are mounting the current directory of each docker-compose file, so each container only has the source code that resides next to its own compose file. You need a common directory:
First file:
volumes:
  - /path_to_sc/common:/var/www
Second file:
volumes:
  - /path_to_sc/common:/var/www/pos
When I try accessing Nginx pos.test it doesn't display the application
but only shows the default Nginx page
Probably your first file is not picking up the correct configuration. Double-check ./dockerconfig/nginx/:/etc/nginx/conf.d/ or run a command inside Docker to verify the configuration file:
docker exec nginx bash -c "cat /etc/nginx/conf.d/filename.conf"
I have tried accessing the first docker nginx configuration bash and
checked on the var/www/pos folder but i cant see the files from the
second docker config(source code).
Mount the common directory so that it is accessible to both containers.
Update:
From your comment, it seems like there is a syntax error in your docker-compose file. Take a look at this example:
web:
  image: nginx
  volumes:
    - ./data:/var/www/html/
  ports:
    - 80:80
  command: [nginx-debug, '-g', 'daemon off;']
web2:
  image: nginx
  volumes:
    - ./data:/var/www/html
  command: [nginx-debug, '-g', 'daemon off;']
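To confirm the shared bind mount behaves as intended (a sketch assuming the example above): write a file into ./data on the host and check that both containers see it.

# create a file in the shared host directory
echo hello > ./data/index.html
# both services should list the same file
docker-compose exec web ls /var/www/html
docker-compose exec web2 ls /var/www/html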
I run nginx with Docker on my machine (localhost).
When I browse to localhost:8080 I expect to get "hello world", but I get the "Welcome to nginx!" screen.
What am I missing in the configuration?
docker-compose.yml
web:
  image: nginx
  volumes:
    - ./example.com.conf:/etc/nginx/conf.d/example.com.conf
  ports:
    - '8080:80'
example.com.conf
server {
    location / {
        return 200 "hello world";
    }
}
I run the command:
docker-compose up
There is a /etc/nginx/conf.d/default.conf file inside the nginx image, which has:
server {
    listen 80;
    server_name default_server;
    ...
}
You can either remove the default.conf file and properly set up your example.com.conf (listen port, server_name, etc.), or replace default.conf with your example.com.conf.
You can replace it by doing:
volumes:
  - ./example.com.conf:/etc/nginx/conf.d/default.conf
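If you take the first option instead, a minimal sketch of a complete example.com.conf (the server_name here is an assumption; adjust it to your host) that wins the server selection for port 80 might look like:

server {
    # default_server makes this block answer requests that match no other
    # server_name; the image's default.conf does not claim that flag
    listen 80 default_server;
    server_name _;

    location / {
        return 200 "hello world";
    }
}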
I configured my django-uwsgi-nginx stack using docker-compose with the following files.
From the browser, "http://127.0.0.1:8000/" works fine and gives me the Django default page.
From the browser, "http://127.0.0.1:80" throws a 502 Bad Gateway.
dravoka-docker.conf
upstream web {
    server 0.0.0.0:8000;
}

server {
    listen 80;
    server_name web;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        alias "/dravoka-static/";
    }

    location / {
        include uwsgi_params;
        proxy_pass http://web;
    }
}
nginx/Dockerfile
FROM nginx:latest
RUN echo "---------------------- I AM NGINX --------------------------"
RUN rm /etc/nginx/conf.d/default.conf
ADD sites-enabled/ /etc/nginx/conf.d
RUN nginx -t
web is just the output of "django-admin startproject web".
docker-compose.yaml
version: '3'
services:
  nginx:
    restart: always
    build: ./nginx/
    depends_on:
      - web
    ports:
      - "80:80"
  web:
    build: .
    image: dravoka-image
    ports:
      - "8000:8000"
    volumes:
      - .:/dravoka
    command: uwsgi /dravoka/web/dravoka.ini
Dockerfile
# Ubuntu base image
FROM ubuntu:latest
# Some installs........
EXPOSE 80
When you say "from the Docker instance", are you running curl from within the container, or are you running the curl command from your local machine?
If you are running it from your local machine, update your docker-compose web service to the following:
...
web:
  build: .
  image: dravoka-image
  expose:
    # expose takes a container port only; "8000:8000" is host:container
    # syntax and belongs under ports instead
    - "8000"
  volumes:
    - .:/dravoka
  command: uwsgi /dravoka/web/dravoka.ini
and try again.
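One more thing worth checking (my observation, not part of the answer above): the upstream in dravoka-docker.conf points at 0.0.0.0:8000, which is the nginx container itself. Once the web service is reachable only over the Compose network, nginx should address it by service name; a sketch:

upstream web {
    # the Compose service name resolves through Docker's internal DNS
    server web:8000;
}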