starting nginx docker container with custom nginx config file - docker

My Requirements
I am working on a Windows 10 machine.
I have my test app running on http://localhost:3000/.
I need a reverse proxy set up so that http://localhost:80 proxies to http://localhost:3000/ (I will be adding further rewrite rules once I have the basic setup up and running).
Steps
I am following instructions from
https://www.docker.com/blog/tips-for-deploying-nginx-official-image-with-docker/
I'm trying to create a container (name = mynginx1) specifying my own nginx conf file
$ docker run --name mynginx1 -v C:/zNGINX/testnginx/conf:/etc/nginx:ro -P -d nginx
where "C:/zNGINX/testnginx/conf" contains the file "default.conf" and its contents are
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://localhost:3000;
    }
}
A container ID is returned, but "docker ps" does not show it running.
Viewing the container logs using "docker logs mynginx1" shows the following error
2020/03/30 12:27:18 [emerg] 1#1: open() "/etc/nginx/nginx.conf" failed (2: No such file or directory)
nginx: [emerg] open() "/etc/nginx/nginx.conf" failed (2: No such file or directory)
What am I doing wrong?

There were 2 errors in what I was doing.
(1) In the conf file, I was using "proxy_pass http://localhost:3000;".
"localhost" inside the container refers to the CONTAINER itself, not MY computer. Therefore this needed changing to
proxy_pass http://host.docker.internal:3000;
(2) The path I was mounting my config directory to in the container was not right; I needed to add "conf.d":
docker run --name mynginx1 -v C:/zNGINX/testnginx/conf:/etc/nginx/conf.d:ro -P -d nginx
The documentation I was reading (multiple websites) did not mention adding the "conf.d" directory to the end of the path. However, if you view the "/etc/nginx/nginx.conf" file, there is a clue on the last line:
user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}
The "include /etc/nginx/conf.d/*.conf;" indicates that it loads any file ended in ".conf" from the "/etc/nginx/conf.d/" directory.

Related

how to allow docker nginx webdav writing into a mounted directory?

This is the docker-compose for nginx
nginx:
  container_name: nginx
  image: nginx
  build:
    context: ./dockerfile
    dockerfile: nginx
  volumes:
    - type: bind
      source: ./config/nginx/nginx.conf
      target: /etc/nginx/nginx.conf
    - type: bind
      source: ./config/nginx/credentials.list
      target: /etc/nginx/.credentials.list
    - type: bind
      source: /mnt/raid
      target: /webdav
dockerfile
FROM nginx:latest
RUN apt-get update && apt-get install -y nginx-extras libnginx-mod-http-dav-ext
nginx.conf
worker_processes auto;
include /etc/nginx/modules-enabled/*.conf;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    set_real_ip_from 10.0.0.0/8;
    set_real_ip_from 172.0.0.0/8;
    set_real_ip_from 192.168.0.0/16;
    real_ip_header X-Real-IP;

    gzip on;

    server {
        server_name _;
        root /webdav;

        dav_methods PUT DELETE MKCOL COPY MOVE;
        dav_ext_methods PROPFIND OPTIONS;
        dav_access user:rw group:r all:r;

        client_body_temp_path /tmp;
        client_max_body_size 0;
        create_full_put_path on;

        auth_basic realm_name;
        auth_basic_user_file /etc/nginx/.credentials.list;
    }
}
Running docker exec nginx ls -la / shows drwxrwxr-x 12 nginx nginx 20 Jan 4 03:01 webdav
docker exec nginx id -u nginx shows 1000
1000 is the UID of host system user y2kbug. /mnt/raid is owned by 1000:1000.
drwxrwxr-x 12 y2kbug y2kbug 20 Jan 4 11:01 raid/
Going into the docker container (which uses the root user by default), the mounted directory is writable. However, connecting with WebDAV, the directory is readable but not writable. The nginx log shows these:
2021/01/04 03:20:32 [error] 29#29: *6 mkdir() "/webdav/test" failed (13: Permission denied), client: 10.0.0.7, server: _, request: "MKCOL /test/ HTTP/1.1", host: "10.0.0.10"
10.0.0.7 - y2kbug [04/Jan/2021:03:20:32 +0000] "MKCOL /test/ HTTP/1.1" 403 143 "-" "gvfs/1.46.1" "-"
10.0.0.7 - y2kbug [04/Jan/2021:03:20:32 +0000] "PROPFIND /test HTTP/1.1" 404 143 "-" "gvfs/1.46.1" "-"
May I know what I am doing wrong?
Thanks.
Adding user nginx; to nginx.conf solved the problem.
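For reference, a minimal sketch of the fix in the nginx.conf above; the worker processes then run as the nginx user (UID 1000 in this container), which matches the owner of the bind-mounted /mnt/raid directory:

# top of nginx.conf -- everything else stays the same
user nginx;                                  # run worker processes as nginx (UID 1000) instead of the default
worker_processes auto;
include /etc/nginx/modules-enabled/*.conf;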

Can't connect from one docker container to another by its public domain name

I have an application composed of containerized web-services deployed with docker-compose (it's a test env). One of the containers is nginx that operates as a reverse proxy for services and also serves static files. A public domain name points to the host machine and nginx has a server section that utilizes it.
The problem I am facing is that I can't talk to nginx by that public domain name from the containers launched on this same machine - the connection always times out. (For example, I tried doing a curl https://<mypublicdomain>.com)
Referring to the container by name (using Docker's hostnames) works just fine. Requests to the same domain name from other machines also work fine.
I understand this has to do with how Docker does networking, but I have failed to find any docs that would outline what exactly goes wrong here. Could anyone explain the root of the issue to me or maybe just point me in the right direction?
(For extra context: originally I was going to use this to set up monitoring with prometheus and blackbox exporter to make it see the server the same way anyone from the outside would do + to automatically check that SSL is working. For now I pulled back to point the prober to nginx by its docker hostname)
Nginx image
FROM nginx:stable
COPY ./nginx.conf /etc/nginx/nginx.conf.template
COPY ./docker-entrypoint.sh /docker-entrypoint.sh
COPY ./dhparam/dhparam-2048.pem /dhparam-2048.pem
COPY ./index.html /var/www/index.html
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
docker-compose.yaml
version: "3"
networks:
mainnet:
driver: bridge
services:
my-gateway:
container_name: my-gateway
image: aturok/manuwor_gateway:latest
restart: always
networks:
- mainnet
ports:
- 80:80
- 443:443
expose:
- "443"
volumes:
- /var/stuff:/var/www
- /var/certs:/certsdir
command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
(I show only the nginx service, as the others are irrelevant - I would, for example, spin up a nettools container and not connect it to the mainnet network, and still expect the requests to reach nginx, since I am using the public domain name. The problem also happens with containers connected to the same network.)
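To make the symptom concrete, this is roughly how I reproduce it (a sketch; the curl image is just an example of a container to test from):

# from the host itself the public name works:
curl -I https://mydomain.com

# from a container on the same machine the request times out:
docker run --rm curlimages/curl:latest -I --max-time 10 https://mydomain.com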
nginx.conf (normally it contains a bunch of env vars; here they have been replaced, and the irrelevant backend removed)
user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    #include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        server_name mydomain.com;

        location /.well-known/acme-challenge/ {
            root /var/www/certbot;
        }

        location / {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen 443 ssl;
        server_name mydomain.com;

        ssl_certificate /certsdir/fullchain.pem;
        ssl_certificate_key /certsdir/privkey.pem;

        server_tokens off;

        ssl_buffer_size 8k;
        ssl_dhparam /dhparam-2048.pem;

        ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
        ssl_prefer_server_ciphers on;
        ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
        ssl_ecdh_curve secp384r1;
        ssl_session_tickets off;

        ssl_stapling on;
        ssl_stapling_verify on;
        resolver 8.8.8.8;

        root /var/www/;
        index index.html;

        location / {
            root /var/www;
            try_files $uri /index.html;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
Note: certificates are ok when I access the server from elsewhere

changing nginx conf file

I installed the openresty alpine docker image and mounted conf.d to define the server in there. It works fine.
Next, I want to change nginx.conf and set worker_processes auto;. However, worker_processes is defined in nginx.conf. I tried to volume-mount nginx.conf in the docker-compose file as:
volumes:
  - ./conf.d:/etc/nginx/conf.d
  - ./conf/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf
However, this creates a directory named nginx.conf in ./conf.
How can I mount/modify nginx.conf?
You are mounting it to the wrong directory inside the container if you want to override the root nginx configuration.
Nginx Config Files
The Docker tooling installs its own nginx.conf file. If you want to
directly override it, you can replace it in your own Dockerfile or via
volume bind-mounting.
For the Linux images, that nginx.conf has the directive include
/etc/nginx/conf.d/*.conf; so all nginx configurations in that
directory will be included. The default virtual host configuration has
the original OpenResty configuration and is copied to
/etc/nginx/conf.d/default.conf.
docker run -v /my/custom/conf.d:/etc/nginx/conf.d openresty/openresty:alpine
Second, it is better to use an absolute path for mounting.
docker run -v $PWD/conf/nginx.conf:/etc/nginx/nginx.conf openresty/openresty:1.15.8.2-1-alpine
or
docker run -v abs_path/nginx.conf:/etc/nginx/nginx.conf openresty/openresty:1.15.8.2-1-alpine
Openresty config:
docker run -v $PWD/conf/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf openresty/openresty:1.15.8.2-1-alpine
You should mount the exact file; otherwise it will break the container.
Here is the default config for /usr/local/openresty/nginx/conf/nginx.conf:
# nginx.conf -- docker-openresty
#
# This file is installed to:
# `/usr/local/openresty/nginx/conf/nginx.conf`
# and is the file loaded by nginx at startup,
# unless the user specifies otherwise.
#
# It tracks the upstream OpenResty's `nginx.conf`, but removes the `server`
# section and adds this directive:
# `include /etc/nginx/conf.d/*.conf;`
#
# The `docker-openresty` file `nginx.vh.default.conf` is copied to
# `/etc/nginx/conf.d/default.conf`. It contains the `server` section
# of the upstream `nginx.conf`.
#
# See https://github.com/openresty/docker-openresty/blob/master/README.md#nginx-config-files
#
#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}
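Since the question uses docker-compose rather than docker run, the same mounts expressed there might look like this (a sketch; the service name is an assumption, and both source paths must already exist on the host, otherwise Docker will create directories in their place):

services:
  openresty:
    image: openresty/openresty:alpine
    volumes:
      - ./conf.d:/etc/nginx/conf.d
      - ./conf/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf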

403 Forbidden Error when deploying app through nginx in docker container

I'm trying to deploy my frontend through nginx in a docker container. The URL should be http://10.122.45.116/sub.
I keep getting a 403 Forbidden error. The file exists. The permissions of both the /home directory and the index_.html file are 777. I suppose port 80 should be open by default.
Here is the content of /etc/nginx/conf.d/nginx.conf:
server {
    listen 80;
    server_name 10.122.45.116;

    location /sub {
        root /home;
        index index_.html;
        include uwsgi_params;
        uwsgi_pass 0.0.0.0:5000;
    }
}
Here is the content of /etc/nginx/nginx.conf:
user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}
Here is the code of supervisord.conf:
[supervisord]
nodaemon=true
[program:nginx]
command=/usr/sbin/nginx
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
# Graceful stop, see http://nginx.org/en/docs/control.html
stopsignal=QUIT
I've looked at various answers but none of them seemed to help me.
Please help!
10.122.45.116 is a private IP address.
What is your public IP address? Try with the public IP instead.
What is the difference between a Public IP address and a Private IP address?

nginx index directive works fine locally but gives 404 on ec2

I have a web project that I want to deploy using docker-compose and nginx.
Locally, I:
docker-compose build
docker-compose push
If I docker-compose up, I can access localhost/ and get redirected to my index.html.
Now on my ec2 instance (a regular ec2 instance where I installed docker and docker-compose) I docker-compose pull, then docker-compose up.
All the containers launch correctly and I can exec sh into my nginx container and see there's a /facebook/index.html file.
If I go to [instance_ip]/index.html, everything works as expected.
If I go to [instance_ip]/, I get a 404 response.
nginx receives the request (I see it in the access logs) but does not redirect to index.html.
Why is the index directive not able to redirect to my index.html file?
I tried to:
Reproduce locally by removing all local images and pulling from my registry.
Kill my ec2 instance and launch a new one.
But I got the same result.
I'm using docker-compose 1.11.1 and docker 17.05.0. On the ec2 instance it's docker 17.03.1, and I tried both docker-compose 1.11.1 and 1.14.1 (a sign that I'm a bit desperate ;)).
An extract from my docker-compose file:
nginx:
  image: [image from registry]
  build:
    context: ./
    dockerfile: deploy/nginx.dockerfile
  ports:
    - "80:80"
  depends_on:
    - web
My nginx image starts from alpine, installs nginx, adds the index.html file and copies my conf file in /etc/nginx/nginx.conf.
Here's my nginx config. I checked that it is present on the running containers (both locally and on ec2).
# prevent from exiting when using `run` to launch container
daemon off;

worker_processes auto;
#
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##

    sendfile off;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    server {
        error_log /var/log/nginx/file.log debug;
        listen 80 default_server;
        # root /home/domain.com;

        # Bad developers use underscore in headers.
        underscores_in_headers on;

        # root should be out of location block
        root /facebook;

        location / {
            index index.html;
            # autoindex on;
            try_files $uri @app;
        }

        location @app {
            include uwsgi_params;
            # Using docker-compose linking, the nginx docker-compose service depends on a 'web' service.
            uwsgi_pass web:3033;
        }
    }
}
I have no idea why the container is behaving differently on the ec2 instance.
Any pointers appreciated!
