Trying to set up an HTTPS server using NGINX and Docker. I keep getting the same error when checking the nginx configuration file with nginx -t:
2020/11/13 13:37:52 [emerg] 6#6: cannot load certificate "/etc/nginx/certs/cert.crt": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/nginx/certs/cert.crt','r') error:2006D080:BIO routines:BIO_new_file:no such file)
nginx: [emerg] cannot load certificate "/etc/nginx/certs/cert.crt": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/nginx/certs/cert.crt','r') error:2006D080:BIO routines:BIO_new_file:no such file)
nginx: configuration file /etc/nginx/nginx.conf test failed
I tried copying the certs directory into /etc/nginx in the Dockerfile, but it didn't work:
Dockerfile
FROM node:latest as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build
FROM nginx as production-stage
RUN mkdir /app
COPY --from=build-stage /app/dist /app
COPY certs /etc/nginx
COPY nginx.conf /etc/nginx/nginx.conf
RUN nginx -t
I tried setting up Docker volumes as well, and still got the same error.
nginx.conf
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 65;
server {
listen 80 default_server;
server_name test;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl;
server_name test;
ssl_certificate certs/cert.crt;
ssl_certificate_key certs/cert.key;
location / {
root /app;
index index.html;
try_files $uri $uri/ /index.html;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
}
P.S. The permissions of the certs are set to 444:
chmod -R 444 certs
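For what it's worth, Docker's COPY copies the contents of a source directory rather than the directory itself, so COPY certs /etc/nginx puts the files at /etc/nginx/cert.crt, while the relative ssl_certificate certs/cert.crt path (resolved against the /etc/nginx prefix) expects /etc/nginx/certs/cert.crt. A minimal sketch of the likely fix, naming the destination directory explicitly:
# copy the certs into a certs/ subdirectory, where the config expects them
COPY certs /etc/nginx/certs
COPY nginx.conf /etc/nginx/nginx.conf
RUN nginx -t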
Related
I am using docker-compose to serve a front-end (Vue.js), a back-end, and an nginx reverse proxy.
When I navigate to a route and hit refresh, I get a 404 nginx error.
Here is part of my docker-compose file (a few lines omitted for brevity):
version: '3'
services:
# Proxies requests to internal services
dc-reverse-proxy:
image: nginx:1.17.10
container_name: reverse_proxy_demo
depends_on:
- dc-front-end
- dc-back-end
volumes:
- ./reverse_proxy/nginx.conf:/etc/nginx/nginx.conf
ports:
- 5004:80
dc-front-end:
..
container_name: dc-front-end
ports:
- 8080:80
# API
dc-back-end:
container_name: dc-back-end
ports:
- 5001:5001
Here is the nginx.conf that belongs to the reverse-proxy service:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 1024;
}
http {
server {
listen 80;
server_name 127.0.0.1;
location / {
proxy_pass http://dc-front-end:80;
proxy_set_header X-Forwarded-For $remote_addr;
}
location /dc-back-end/ {
proxy_pass http://dc-back-end:5001/;
proxy_set_header X-Forwarded-For $remote_addr;
}
}
}
And this is the nginx.conf for the front-end:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 65;
server {
listen 80;
server_name localhost;
root /app;
#root /usr/share/nginx/html;
location / {
index index.html;
try_files $uri $uri/ /index.html;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
}
And finally the Dockerfile for the front-end service:
# build stage
FROM node:16-alpine3.12 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production stage
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
I have tried using try_files $uri $uri/ /index.html; in both nginx files, but it still gives a 404 on page refresh, or if I try to navigate to the page directly in the browser (rather than clicking a link).
As usual, the laws of Stack Overflow dictate that you only solve your own question once you post it.
The Dockerfile was wrong, which threw me because everything else worked: the production stage never copied the custom nginx.conf into the image (and the build output needed to live at /app, where that config's root points), so the try_files rule never took effect. The corrected version:
FROM node:16-alpine3.12 as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build
FROM nginx:stable-alpine as production-stage
RUN mkdir /app
COPY --from=build-stage /app/dist /app
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
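A quick way to confirm which configuration is actually active inside the running container (using the dc-front-end container name from the compose file above) is to dump it with nginx -T:
# print the full effective configuration and look for the SPA fallback
docker exec dc-front-end nginx -T | grep -A2 try_files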
I'm writing APK-uploading functionality in my React app. Every time I try to upload an .apk file bigger than 10 MB, I get back an error: 413 Request Entity Too Large. I've already used the client_max_body_size 888M; directive. Can anybody explain what I'm doing wrong?
Here is my nginx.conf:
user nginx;
worker_processes auto;
error_log /var/log/nginx/errors.log notice;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/accesses.log main;
sendfile on;
keepalive_timeout 65;
server {
listen 80;
listen [::]:80;
server_name up-app.unpluggedsystems.app;
# return 301 http://$server_name$request_uri;
root /usr/share/nginx/html;
index index.html;
client_max_body_size 888M;
location /api/7/apps/search {
proxy_pass http://ws75.aptoide.com/api/7/apps/search;
proxy_set_header X-Forwarded-For $remote_addr;
}
location ~ \.(apk)$ {
proxy_pass https://pool.apk.aptoide.com;
proxy_set_header X-Forwarded-For $remote_addr;
}
location /api {
proxy_pass https://up-app.unpluggedsystems.app;
proxy_set_header X-Forwarded-For $remote_addr;
}
}
}
And the Dockerfile that I use (maybe something is wrong here?):
# build environment
FROM node:13.12.0-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
COPY create-env-file.sh ./create-env-file.sh
RUN npm install
RUN npm install react-scripts@3.4.1 -g
COPY . ./
RUN npm run build
# production environment
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
COPY fullchain.crt /etc/ssl/fullchain.crt
COPY unpluggedapp.key.pem /etc/ssl/unpluggedapp.key.pem
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
I'm a front-end junior and weak in such things. Maybe I put client_max_body_size in the wrong place?
Please move client_max_body_size into the http {} section:
http {
# some code here
sendfile on;
client_max_body_size 888M;
#...
}
Make sure to restart nginx after modifying the configuration file.
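If nginx runs inside a container, you can reload it in place; the container name below is a placeholder:
# reload the configuration without stopping the container
docker exec <your-nginx-container> nginx -s reload
# or simply restart the container
docker restart <your-nginx-container>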
You can try
client_max_body_size 0;
(this sets no limit, but it is not recommended for production environments)
Moreover, remember that if you have SSL, you will need to set the same limit for the SSL server {} and location {} blocks in nginx.conf too. If a client (browser) tries to upload over http and you expect it to be 301-redirected to https, nginx will actually drop the connection before the redirect, because the file is too large for the http server; so the limit has to be set for both http and https.
The trick is to set it in both http {} and server {} (for each vhost):
http {
# some code
client_max_body_size 0;
}
server {
# some code
client_max_body_size 0;
}
This is the docker-compose for nginx
nginx:
container_name: nginx
image: nginx
build:
context: ./dockerfile
dockerfile: nginx
volumes:
- type: bind
source: ./config/nginx/nginx.conf
target: /etc/nginx/nginx.conf
- type: bind
source: ./config/nginx/credentials.list
target: /etc/nginx/.credentials.list
- type: bind
source: /mnt/raid
target: /webdav
dockerfile
FROM nginx:latest
RUN apt-get update && apt-get install -y nginx-extras libnginx-mod-http-dav-ext
nginx.conf
worker_processes auto;
include /etc/nginx/modules-enabled/*.conf;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
set_real_ip_from 10.0.0.0/8;
set_real_ip_from 172.0.0.0/8;
set_real_ip_from 192.168.0.0/16;
real_ip_header X-Real-IP;
gzip on;
server {
server_name _;
root /webdav;
dav_methods PUT DELETE MKCOL COPY MOVE;
dav_ext_methods PROPFIND OPTIONS;
dav_access user:rw group:r all:r;
client_body_temp_path /tmp;
client_max_body_size 0;
create_full_put_path on;
auth_basic realm_name;
auth_basic_user_file /etc/nginx/.credentials.list;
}
docker exec nginx ls -la / shows
drwxrwxr-x 12 nginx nginx 20 Jan 4 03:01 webdav
and docker exec nginx id -u nginx shows 1000.
1000 is the UID of the host-system user y2kbug, and /mnt/raid is owned by 1000:1000:
drwxrwxr-x 12 y2kbug y2kbug 20 Jan 4 11:01 raid/
Inside the Docker container (which runs as root by default), the mounted directory is writable. However, when connecting over WebDAV, the directory is readable but not writable. The nginx log shows this:
2021/01/04 03:20:32 [error] 29#29: *6 mkdir() "/webdav/test" failed (13: Permission denied), client: 10.0.0.7, server: _, request: "MKCOL /test/ HTTP/1.1", host: "10.0.0.10"
10.0.0.7 - y2kbug [04/Jan/2021:03:20:32 +0000] "MKCOL /test/ HTTP/1.1" 403 143 "-" "gvfs/1.46.1" "-"
10.0.0.7 - y2kbug [04/Jan/2021:03:20:32 +0000] "PROPFIND /test HTTP/1.1" 404 143 "-" "gvfs/1.46.1" "-"
May I know what I am doing wrong?
Thanks.
Adding user nginx; to nginx.conf solved the problem.
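For reference, the fixed nginx.conf just gains that one directive at the top. The likely explanation: without a user directive, the workers fall back to the build-time default user, which does not match UID 1000, whereas user nginx; makes them run as the nginx user that (per the id output above) has UID 1000 - the same as the owner of the bind-mounted /mnt/raid.
user nginx;  # workers run as nginx (UID 1000), matching the owner of /webdav
worker_processes auto;
include /etc/nginx/modules-enabled/*.conf;
# ... rest of the file unchanged ...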
I have an application composed of containerized web services deployed with docker-compose (it's a test env). One of the containers is nginx, which operates as a reverse proxy for the services and also serves static files. A public domain name points to the host machine, and nginx has a server section that utilizes it.
The problem I am facing is that I can't talk to nginx by that public domain name from containers launched on this same machine - the connection always times out. (For example, I tried doing a curl https://<mypublicdomain>.com.)
Referring to the containers by name (using Docker's hostnames) works just fine. Requests to the same domain name from other machines also work OK.
I understand this has to do with how docker does networking, but fail to find any docs that would outline what exactly goes wrong here. Could anyone explain the root of the issue to me or maybe just point in the right direction?
(For extra context: originally I was going to use this to set up monitoring with Prometheus and the Blackbox exporter, to make it see the server the same way anyone from the outside would, and to automatically check that SSL is working. For now I have pulled back to pointing the prober at nginx by its Docker hostname.)
Nginx image
FROM nginx:stable
COPY ./nginx.conf /etc/nginx/nginx.conf.template
COPY ./docker-entrypoint.sh /docker-entrypoint.sh
COPY ./dhparam/dhparam-2048.pem /dhparam-2048.pem
COPY ./index.html /var/www/index.html
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
docker-compose.yaml
version: "3"
networks:
mainnet:
driver: bridge
services:
my-gateway:
container_name: my-gateway
image: aturok/manuwor_gateway:latest
restart: always
networks:
- mainnet
ports:
- 80:80
- 443:443
expose:
- "443"
volumes:
- /var/stuff:/var/www
- /var/certs:/certsdir
command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
(I show only the nginx service, as the others are irrelevant - I would, for example, spin up a nettools container without connecting it to the mainnet network and still expect its requests to reach nginx, since I am using the public domain name. The problem also happens with containers connected to the same network.)
nginx.conf (normally it comes with a bunch of env vars; here they are substituted, and the irrelevant backend is removed):
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
#include /etc/nginx/conf.d/*.conf;
server {
listen 80;
server_name mydomain.com;
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
server_name mydomain.com;
ssl_certificate /certsdir/fullchain.pem;
ssl_certificate_key /certsdir/privkey.pem;
server_tokens off;
ssl_buffer_size 8k;
ssl_dhparam /dhparam-2048.pem;
ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
ssl_ecdh_curve secp384r1;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8;
root /var/www/;
index index.html;
location / {
root /var/www;
try_files $uri /index.html;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}
Note: the certificates are fine when I access the server from elsewhere.
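One way to narrow this kind of problem down is to compare a request made from the host's own network namespace with one made from a bridge-networked container; curlimages/curl is just a convenient public image here, and mydomain.com stands in for the real domain:
# uses the host's network stack, bypassing the docker bridge
docker run --rm --network host curlimages/curl -v https://mydomain.com
# uses the default bridge network, like the other services
docker run --rm curlimages/curl -v https://mydomain.com
If the first succeeds and the second times out, the problem lies in how traffic from the bridge network reaches the host's public address rather than in nginx itself.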
My Requirements
I am working on a Windows 10 machine
I have my test app running on http://localhost:3000/
I need to have a reverse-proxy setup so http://localhost:80 redirects to http://localhost:3000/ (I will be adding further rewrite rules once I get the basic setup up and running)
Steps
I am following instructions from
https://www.docker.com/blog/tips-for-deploying-nginx-official-image-with-docker/
I'm trying to create a container (name = mynginx1), specifying my own nginx conf file:
$ docker run --name mynginx1 -v C:/zNGINX/testnginx/conf:/etc/nginx:ro -P -d nginx
where "C:/zNGINX/testnginx/conf" contains the file "default.conf" and its contents are
server {
listen 80;
server_name localhost;
location / {
proxy_pass http://localhost:3000;
}
}
A container ID is returned, but "docker ps" does not show it running.
Viewing the container logs with "docker logs mynginx1" shows the following error:
2020/03/30 12:27:18 [emerg] 1#1: open() "/etc/nginx/nginx.conf" failed (2: No such file or directory)
nginx: [emerg] open() "/etc/nginx/nginx.conf" failed (2: No such file or directory)
What am I doing wrong?
There were two errors in what I was doing.
(1) In the conf file, I was using "proxy_pass http://localhost:3000;"
"localhost" inside the container refers to the container itself, not to MY computer (the Docker host). Therefore this needed changing to:
proxy_pass http://host.docker.internal:3000;
(2) The container path I was mounting my config directory to was not right; I needed to add "conf.d":
docker run --name mynginx1 -v C:/zNGINX/testnginx/conf:/etc/nginx/conf.d:ro -P -d nginx
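Putting both fixes together, the default.conf mounted into /etc/nginx/conf.d looks like this:
server {
    listen 80;
    server_name localhost;
    location / {
        # host.docker.internal resolves to the host machine on Docker Desktop
        proxy_pass http://host.docker.internal:3000;
    }
}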
The documentation I was reading (multiple websites) did not mention adding the "conf.d" directory at the end of the path. However, if you view the "/etc/nginx/nginx.conf" file, there is a clue on the last line:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
The "include /etc/nginx/conf.d/*.conf;" indicates that it loads any file ended in ".conf" from the "/etc/nginx/conf.d/" directory.