When I bring up the container, I get the following errors:
nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx | /docker-entrypoint.sh: Configuration complete; ready for start up
nginx | 2022/09/04 23:08:42 [emerg] 1#1: cannot load certificate "/etc/ssl/5master.crt": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/ssl/5master.crt','r') error:2006D080:BIO routines:BIO_new_file:no such file)
nginx | nginx: [emerg] cannot load certificate "/etc/ssl/5master.crt": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/ssl/5master.crt','r') error:2006D080:BIO routines:BIO_new_file:no such file)
nginx exited with code 1
This happens even though in the Dockerfile I copy the certificate files that are in the folder where I run the command.
nginx conf:
server {
    listen 80;
    listen 443 ssl;
    server_name 5master.com;
    ssl_certificate /etc/ssl/5master.crt;
    ssl_certificate_key /etc/ssl/5master.key;
}
Dockerfile:
FROM python:3.8
WORKDIR /usr/src/app
ADD . /usr/src/app
COPY requirements.txt ./
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["uwsgi", "app.ini"]
COPY nginx.conf /etc/nginx/conf.d
COPY 5master.crt /etc/ssl/5master.crt
COPY 5master.key /etc/ssl/5master.key
docker-compose:
version: "3.8"
services:
api:
build: .
restart: "always"
environment:
FLASK_APP: run.py
volumes:
- .:/usr/src/app
nginx:
build: ./nginx
container_name: nginx
restart: always
volumes:
- /application/static/:/static
depends_on:
- api
ports:
- "80:80"
- "443:443"
What could be the problem?
You are installing the certificates into your Python API image, not into your nginx image. That is, in your docker-compose.yaml you are building two images:
api:
  build: .
nginx:
  build: ./nginx
The Dockerfile in your question appears to be for the Python API
image. Since that image isn't used by nginx, it doesn't make any sense
to install the certificates there.
You need to modify your nginx/Dockerfile to install the
certificates.
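For example, nginx/Dockerfile could look something like this (a minimal sketch, assuming 5master.crt, 5master.key, and nginx.conf sit inside the nginx/ build context):
FROM nginx
# site configuration picked up by the stock nginx image
COPY nginx.conf /etc/nginx/conf.d/default.conf
# certificate and key at the exact paths nginx.conf references
COPY 5master.crt /etc/ssl/5master.crt
COPY 5master.key /etc/ssl/5master.key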
Related
I have this dockerfile:
FROM nginx
COPY .docker/certificates/fullchain.pem /etc/letsencrypt/live/mydomain.com/fullchain.pem
COPY .docker/certificates/privkey.pem /etc/letsencrypt/live/mydomain.com/privkey.pem
COPY .docker/config/options-ssl-nginx.conf /etc/nginx/options-ssl-nginx.conf
COPY .docker/config/ssl-dhparams.pem /etc/nginx/ssl-dhparams.pem
COPY .docker/config/nginx.conf /etc/nginx/conf.d/default.conf
RUN chmod +r /etc/letsencrypt/live/mydomain.com/fullchain.pem
I have this in my nginx configuration:
server {
    listen 443 ssl default_server;
    server_name _;
    # Why can't this file be found?
    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
    # ssl_certificate /etc/nginx/fullchain.pem;
    # ssl_certificate_key /etc/nginx/privkey.pem;
    include /etc/nginx/options-ssl-nginx.conf;
    ssl_dhparam /etc/nginx/ssl-dhparams.pem;
    ...
}
Nginx crashes with:
[emerg] 7#7: cannot load certificate "/etc/letsencrypt/live/mydomain.com/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/letsencrypt/live/mydomain.com/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
However, if I change the location of fullchain.pem and privkey.pem to, for example, /etc/nginx/fullchain.pem and /etc/nginx/privkey.pem and update the nginx configuration, it does find the files and works as expected.
Here's the service definition in docker-compose.yml:
nginx-server:
  container_name: "nginx-server"
  build:
    context: ../../
    dockerfile: .docker/dockerfiles/NginxDockerfile
  restart: on-failure
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - static-content:/home/docker/code/static
    - letsencrypt-data:/etc/letsencrypt
    - certbot-data:/var/www/certbot
  depends_on:
    - api
  command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  networks:
    - api-network
    - main
# Commented out to verify that the files aren't being deleted by certbot
# certbot:
#   image: certbot/certbot
#   container_name: "certbot"
#   depends_on:
#     - nginx-server
#   restart: unless-stopped
#   volumes:
#     - letsencrypt-data:/etc/letsencrypt
#     - certbot-data:/var/www/certbot
#   entrypoint: "/bin/sh -c 'sleep 30s && trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
The intention is to use fullchain.pem as an initial certificate until one can be requested from Let's Encrypt. Note that, at this point, there is no certbot service, and the /etc/letsencrypt/live/mydomain.com directory is not referenced anywhere else at all (only in NginxDockerfile and nginx.conf), so it shouldn't be a case of another service deleting the files. Rebuilding with --no-cache does not help.
Why can't nginx find the files in this specific location, but can find them if copied to a different location?
EDIT: As suggested, I ended up using a host volume instead. This didn't work when the host volume was located inside the repository (root_of_context/path/to/gitignored/directory/letsencrypt:/etc/letsencrypt), but did work with /etc/letsencrypt:/etc/letsencrypt, which I personally find ugly, but oh well.
Volumes are mounted at run time, after your image is built, and they hide whatever the image has at the mount point. Since you mounted letsencrypt-data on /etc/letsencrypt, nginx looks for your files inside the letsencrypt-data volume, not in the files you COPYed into the image at build time.
I don't know the purpose of this mount, but I'd guess your container would run fine if you removed - letsencrypt-data:/etc/letsencrypt from volumes.
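For example, a sketch of the same volumes block with only that one line dropped:
volumes:
  - static-content:/home/docker/code/static
  # letsencrypt-data:/etc/letsencrypt removed, so the certificates COPYed at build time stay visible
  - certbot-data:/var/www/certbot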
I have a docker-compose setup that brings up nginx with a static React site.
When I run docker-compose up, I try to share with the container a folder that contains my SSL certificates, but nginx is not able to find them.
version: "3.3"
services:
app:
container_name: frontend
image: colymore/frontend:latest
ports:
- 80:80
- 443:443
restart: always
volumes:
- /etc/letsencrypt/live/colymore.me/:/certs/:ro
labels:
- com.centurylinklabs.watchtower.enable=true
networks:
net:
In the nginx config I have:
ssl_certificate /certs/fullchain.pem;
ssl_certificate_key /certs/privkey.pem;
And the file exists:
colymore#colymore.me$ ls /etc/letsencrypt/live/colymore.me/fullchain.pem
/etc/letsencrypt/live/colymore.me/fullchain.pem
But when I run nginx, it gives an error:
52a5118c5c40_frontend | nginx: [emerg] cannot load certificate "/certs/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/certs/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
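One likely cause (an assumption on my part, since no fix is shown here): certbot stores the files under /etc/letsencrypt/live/ as symlinks into /etc/letsencrypt/archive/, so bind-mounting only the live/colymore.me/ directory hands the container dangling symlinks. Mounting the whole tree keeps the link targets reachable, e.g.:
volumes:
  # mount the full tree so the symlinks in live/ still resolve inside the container
  - /etc/letsencrypt/:/etc/letsencrypt/:ro
and then point ssl_certificate at /etc/letsencrypt/live/colymore.me/fullchain.pem instead of /certs/fullchain.pem.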
I have an application with 4 services. One of them is nginx, which acts as a proxy.
I use Docker Compose to run the services. In the nginx config, when I define a location to proxy, I want to be able to refer to the upstream by its service name. This is what I have done so far:
version: '3'
services:
  go_app:
    image: go_app
    depends_on:
      - mysql
    ports:
      - "8000:8000"
  mysql:
    image: mysql_db
    ports:
      - "3306:3306"
  flask_app:
    image: flask_app
    ports:
      - "8080:8080"
  nginx:
    image: nginx_app
    ports:
      - "80:80"
    depends_on:
      - mysql
      - flask_app
      - go_app
With the above I create all services. They all work on their respective ports. I want Nginx to listen on port 80 and proxy as defined in the config:
server {
    listen 0.0.0.0:80;
    server_name localhost;
    location / {
        proxy_pass http://${FLASK_APP}:8080/;
    }
}
You may ask where FLASK_APP comes from. I set it inside the nginx Docker image:
FROM nginx
ENV FLASK_APP=flask_app
RUN rm /etc/nginx/conf.d/default.conf
COPY config/default.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Nginx container keeps failing with the following error:
[emerg] 1#1: unknown "flask_app" variable
nginx: [emerg] unknown "flask_app" variable
The way I understand Docker Compose, flask_app should resolve to the flask_app service.
What am I doing wrong/misunderstanding?
The issue is that nginx does not read environment variables in its configuration files.
see https://github.com/docker-library/docs/tree/master/nginx#using-environment-variables-in-nginx-configuration
One solution: modify your nginx Dockerfile like this:
FROM nginx
ENV FLASK_APP=flask_app
RUN rm /etc/nginx/conf.d/default.conf
COPY default.conf /etc/nginx/conf.d/default.template
EXPOSE 80
CMD ["/bin/bash","-c","envsubst < /etc/nginx/conf.d/default.template > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"]
The COPY command is modified to copy your configuration as a template.
The last line is modified to substitute your environment variables into the template at startup.
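One caveat (my addition, not part of the original answer): run with no arguments, envsubst replaces every $variable it can find, which would also clobber nginx's own runtime variables such as $http_upgrade if the config used any. You can pass envsubst the list of variables it is allowed to substitute:
CMD ["/bin/bash","-c","envsubst '${FLASK_APP}' < /etc/nginx/conf.d/default.template > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"]
Recent official nginx images can also do this step for you: any *.template file placed in /etc/nginx/templates is run through envsubst into /etc/nginx/conf.d at startup.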
I'm new to Docker, and I have some issues with pushing images to a private registry and pulling them via a docker-compose.yml file on another computer in our office.
I have 2 folders in my project: nginx and client.
nginx is the server, and client is a create-react-app application.
nginx folder:
default.conf:
upstream client {
    server client:3000;
}
server {
    listen 80;
    location / {
        proxy_pass http://client;
    }
    location /sockjs-node {
        proxy_pass http://client;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}
Dockerfile:
FROM nginx
COPY ./default.conf /etc/nginx/conf.d/default.conf
client folder:
nginx/default.conf:
server {
    listen 3000;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}
Dockerfile:
FROM node:alpine as builder
ARG REACT_APP_NODE_ENV
ENV REACT_APP_NODE_ENV $REACT_APP_NODE_ENV
WORKDIR /app
COPY ./package.json ./
RUN npm i
COPY . .
RUN npm run build
FROM nginx
EXPOSE 3000
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /app/build /usr/share/nginx/html
Outside of the 2 folders, I have the docker-compose.yml file:
version: '3'
services:
  nginx:
    restart: always
    build:
      dockerfile: Dockerfile
      context: ./nginx
    ports:
      - '3050:80'
  client:
    build:
      context: ./client
      dockerfile: Dockerfile
      args:
        - REACT_APP_NODE_ENV=production
    volumes:
      - /app/node_modules
      - ./client:/app
When I run "docker-compose up --build" inside the project folder, everything works as I expect.
Now, I want to push the images and pull them on another computer in the office.
I first pushed the 2 images (nginx and client) to the registry with the following commands in the terminal:
docker build -t orassayag/osr_streamer_nginx:v1.0 .
docker tag orassayag/osr_streamer_nginx:v1.0 <office_ip_address>:5000/orassayag/osr_streamer_nginx:v1.0
docker push <office_ip_address>:5000/orassayag/osr_streamer_nginx:v1.0
docker build -t orassayag/osr_streamer_client:v1.0 .
docker tag orassayag/osr_streamer_client:v1.0 <office_ip_address>:5000/orassayag/osr_streamer_client:v1.0
docker push <office_ip_address>:5000/orassayag/osr_streamer_client:v1.0
Then, I updated my docker-compose.yml file as follows:
version: '3'
services:
  nginx:
    image: <office_ip_address>:5000/orassayag/osr_streamer_nginx
    restart: always
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - '3050:80'
  client:
    image: <office_ip_address>:5000/orassayag/osr_streamer_client
    build:
      context: ./client
      dockerfile: Dockerfile
      args:
        - REACT_APP_NODE_ENV=production
    volumes:
      - /app/node_modules
      - ./client:/app
I went to the other computer, created a folder named "TestDeploy", ran "docker-compose build --pull" in the terminal, and got the following error:
"ERROR: build path C:\Or\Web\StreamImages\TestDeploy\nginx either does not exist, is not accessible, or is not a valid URL."
What am I doing wrong?
I need help.
You need to remove the build: blocks in your deployment environment, or else Docker Compose will try to build the images rather than pulling them. You also need to remove the volumes: there or else it will expect to find source code locally instead of in the image.
(My personal recommendation would be to remove those volumes: everywhere, do development outside of Docker, and have your Docker setup accurately reflect your deployment environment, but I seem to be in a minority on this.)
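For example, the compose file used on the office computer could be trimmed to a sketch like this (note the explicit :v1.0 tags, matching what was pushed; without a tag, Compose would try to pull :latest):
version: '3'
services:
  nginx:
    image: <office_ip_address>:5000/orassayag/osr_streamer_nginx:v1.0
    restart: always
    ports:
      - '3050:80'
  client:
    image: <office_ip_address>:5000/orassayag/osr_streamer_client:v1.0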
I built my infrastructure with Docker and used docker-compose to tie the containers together.
Below are the images I used:
nginx:latest
mongo:latest
python:3.6.5
To deploy the Flask web server, I used uWSGI (installed in the python:3.6.5 image).
[docker-compose.yml]
version: '3.7'
services:
  nginx:
    build:
      context: .
      dockerfile: docker/nginx/dockerfile
    container_name: nginx
    hostname: nginx-dev
    ports:
      - '80:80'
    networks:
      - backend
    links:
      - web_project
  mongodb:
    build:
      context: .
      dockerfile: docker/mongodb/dockerfile
    container_name: mongodb
    hostname: mongodb-dev
    ports:
      - '27017:27017'
    networks:
      - backend
  web_project:
    build:
      context: .
      dockerfile: docker/web/dockerfile
    container_name: web_project
    hostname: web_project_dev
    ports:
      - '5000:5000'
    networks:
      - backend
    tty: true
    depends_on:
      - mongodb
    links:
      - mongodb
networks:
  backend:
    driver: 'bridge'
[/docker/nginx/dockerfile]
FROM nginx:latest
COPY . ./home
WORKDIR home
RUN rm /etc/nginx/conf.d/default.conf
COPY ./config/nginx.conf /etc/nginx/conf.d/default.conf
[/config/nginx.conf]
upstream flask {
    server web_project:5000;
}
server {
    listen 80;
    location / {
        uwsgi_pass flask;
        include /home/config/uwsgi_params;
    }
}
[/docker/web/dockerfile]
FROM python:3.6.5
COPY . ./home
WORKDIR home
RUN pip install -r app/requirements.txt
RUN apt-get update && apt-get install -y uwsgi-plugin-python
RUN uwsgi --ini config/uwsgi.ini
[uwsgi.ini]
[uwsgi]
chdir = /home/app
socket = :5000
chmod-socket = 666
logto = /home/web.log
master = true
process = 2
daemonize = /home/uwsgi.log
I defined socket = :5000.
After building and bringing everything up, accessing the website returns a 502 Bad Gateway, and this error is written to the console:
nginx | 2018/11/12 06:28:55 [error] 6#6: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.27.0.1, server: , request: "GET / HTTP/1.1", upstream: "uwsgi://172.27.0.3:5000", host: "localhost"
I searched Google for a long time, but I can't find the solution.
Is there a solution for this?
Thanks.
You must expose port 5000 in the Python app:
FROM python:3.6.5
COPY . ./home
WORKDIR home
RUN pip install -r app/requirements.txt
RUN apt-get update && apt-get install -y uwsgi-plugin-python
# <---- add this
EXPOSE 5000
RUN uwsgi --ini config/uwsgi.ini
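As a side note (my addition, beyond the original answer): EXPOSE is only metadata, and RUN executes while the image is being built, so this uwsgi line starts the server during the build rather than when the container runs; the daemonize option additionally sends uWSGI to the background. A sketch of a Dockerfile that starts uWSGI in the foreground at container start:
FROM python:3.6.5
COPY . ./home
WORKDIR home
RUN pip install -r app/requirements.txt
EXPOSE 5000
# start uWSGI when the container runs, not at build time
CMD ["uwsgi", "--ini", "config/uwsgi.ini"]
(and drop the daemonize line from uwsgi.ini so uWSGI stays in the foreground).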