jwilder/nginx-proxy giving 403 Forbidden - docker

I'm trying to serve multiple containers, each with a static index.html file, behind an nginx reverse proxy.
I've tried to follow the documentation here to create a default location:
location / {
root /app;
index index.html;
try_files $uri $uri/ /index.html;
}
If I check my default.conf in my container with
$ docker-compose exec nginx-proxy cat /etc/nginx/conf.d/default.conf
I get this result:
server {
server_name _; # This is just an invalid value which will never trigger on a real hostname.
listen 80;
access_log /var/log/nginx/access.log vhost;
return 503;
}
# xx.example.services
upstream xx.example.services {
## Can be connected with "nginx-proxy" network
# examplecontainer1
server 172.18.0.4:80;
}
server {
server_name xx.example.services;
listen 80 ;
access_log /var/log/nginx/access.log vhost;
location / {
proxy_pass http://xx.example.services;
include /etc/nginx/vhost.d/default_location;
}
}
# yy.example.services
upstream yy.example.services {
## Can be connected with "nginx-proxy" network
# examplecontainer2
server 172.18.0.2:80;
}
server {
server_name yy.example.services;
listen 80 ;
access_log /var/log/nginx/access.log vhost;
location / {
proxy_pass http://yy.example.services;
include /etc/nginx/vhost.d/default_location;
}
}
If I check the content of /etc/nginx/vhost.d/default_location it is exactly what I typed in the beginning, so that's fine.
However, when I go to xx.example.services I get a 403 Forbidden.
To my understanding this means that no index.html file was found, but if I exec into my container and cat app/index.html it does exist!
I've checked that all my containers are on the same network.
I'm running my container with this command
docker run -d --name examplecontainer1 --expose 80 --net nginx-proxy -e VIRTUAL_HOST=xx.example.services my-container-registry
Update
I checked the logs of my nginx-proxy container and found this error message:
[error] 29#29: *1 directory index of "/app/" is forbidden..
I tried removing $uri/ as per this SO post, but that just left me with redirect cycles. Right now I'm trying to see if I can set the correct permissions, but I'm struggling.
What am I missing?

My issue was a basic misunderstanding: the reverse proxy cannot reach into the filesystem of my containers, as stated by jwilder himself here.
Therefore the default location on the reverse proxy is unnecessary in my case. Instead I can simply let it point to my container, and have the nginx config inside my container determine the location of my app.
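For reference, here is a minimal sketch of the kind of default.conf the app container itself could serve the files with; the root and index come from the question, everything else is an assumption:
server {
    listen 80;
    server_name _;

    location / {
        root /app;
        index index.html;
        try_files $uri $uri/ /index.html;
    }
}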

Hi, it's simple: your containers are missing /app/index.html inside them.

Related

How to use nginx (installed on Docker) to reverse proxy GitLab (also installed on Docker)

I installed gitlab according to the official documentation.
sudo docker run --detach \
--hostname git.stupidpz.com \
--publish 8443:443 --publish 880:80 --publish 822:22 \
--name gitlab \
--restart always \
--volume $GITLAB_HOME/config:/etc/gitlab \
--volume $GITLAB_HOME/logs:/var/log/gitlab \
--volume $GITLAB_HOME/data:/var/opt/gitlab \
--shm-size 256m \
gitlab/gitlab-ee:latest
Now I want to use Nginx (installed by myself) to reverse proxy GitLab instead of the nginx that comes with the GitLab container.
According to the official documentation I added some code to gitlab.rb:
# Define the external url
external_url 'http://git.stupidpz.com'
# Disable the built-in nginx
nginx['enable'] = false
# Disable the built-in puma
puma['enable'] = false
# Set the internal API URL
gitlab_rails['internal_api_url'] = 'http://git.stupidpz.com'
# Define the web server process user (ubuntu/nginx)
web_server['external_users'] = ['nginx']
Then GitLab could not be accessed, and I found some error logs in this file: /var/log/gitlab/gitlab-workhorse/current
{"correlation_id":"","duration_ms":0,"error":"badgateway: failed to receive response: dial tcp 127.0.0.1:8080: connect: connection refused","level":"error","method":"GET","msg":"","time":"2023-01-25T20:57:21Z","uri":""}
{"correlation_id":"","duration_ms":0,"error":"badgateway: failed to receive response: dial tcp 127.0.0.1:8080: connect: connection refused","level":"error","method":"GET","msg":"","time":"2023-01-25T20:57:31Z","uri":""}
{"correlation_id":"","duration_ms":0,"error":"badgateway: failed to receive response: dial tcp 127.0.0.1:8080: connect: connection refused","level":"error","method":"GET","msg":"","time":"2023-01-25T20:57:41Z","uri":""}
{"correlation_id":"","duration_ms":0,"error":"badgateway: failed to receive response: dial tcp 127.0.0.1:8080: connect: connection refused","level":"error","method":"GET","msg":"","time":"2023-01-25T20:57:51Z","uri":""}
I did nothing else except add some code to gitlab.rb.
I wonder where this dial tcp 127.0.0.1:8080 comes from?
I hope you can help me, or give me a correct demo. Many thanks. This problem has been bothering me for two days.
Now I've figured out why I could not make it work: I mixed up "Using an existing Passenger/NGINX installation" and "Using a non-bundled web-server".
If you just need to use your own nginx to proxy GitLab (both of them installed on Docker),
you just need to add two lines to gitlab.rb:
# Disable the built-in nginx
nginx['enable'] = false
# Define the web server process user (ubuntu/nginx)
web_server['external_users'] = ['nginx']
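After editing gitlab.rb the change still has to be applied with a reconfigure; assuming the container is named gitlab as in the question, something like:
sudo docker exec gitlab gitlab-ctl reconfigure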
and here is the nginx conf:
upstream gitlab-workhorse {
server unix://var/opt/gitlab/gitlab-workhorse/sockets/socket fail_timeout=0;
}
server {
listen *:80;
server_name git.example.com;
server_tokens off;
root /opt/gitlab/embedded/service/gitlab-rails/public;
client_max_body_size 250m;
access_log /var/log/gitlab/nginx/gitlab_access.log;
error_log /var/log/gitlab/nginx/gitlab_error.log;
# Ensure Passenger uses the bundled Ruby version
passenger_ruby /opt/gitlab/embedded/bin/ruby;
# Correct the $PATH variable to included packaged executables
passenger_env_var PATH "/opt/gitlab/bin:/opt/gitlab/embedded/bin:/usr/local/bin:/usr/bin:/bin";
# Make sure Passenger runs as the correct user and group to
# prevent permission issues
passenger_user git;
passenger_group git;
# Enable Passenger & keep at least one instance running at all times
passenger_enabled on;
passenger_min_instances 1;
location ~ ^/[\w\.-]+/[\w\.-]+/(info/refs|git-upload-pack|git-receive-pack)$ {
# 'Error' 418 is a hack to re-use the @gitlab-workhorse block
error_page 418 = @gitlab-workhorse;
return 418;
}
location ~ ^/[\w\.-]+/[\w\.-]+/repository/archive {
# 'Error' 418 is a hack to re-use the @gitlab-workhorse block
error_page 418 = @gitlab-workhorse;
return 418;
}
location ~ ^/api/v3/projects/.*/repository/archive {
# 'Error' 418 is a hack to re-use the @gitlab-workhorse block
error_page 418 = @gitlab-workhorse;
return 418;
}
# Build artifacts should be submitted to this location
location ~ ^/[\w\.-]+/[\w\.-]+/builds/download {
client_max_body_size 0;
# 'Error' 418 is a hack to re-use the @gitlab-workhorse block
error_page 418 = @gitlab-workhorse;
return 418;
}
# Build artifacts should be submitted to this location
location ~ /ci/api/v1/builds/[0-9]+/artifacts {
client_max_body_size 0;
# 'Error' 418 is a hack to re-use the @gitlab-workhorse block
error_page 418 = @gitlab-workhorse;
return 418;
}
# Build artifacts should be submitted to this location
location ~ /api/v4/jobs/[0-9]+/artifacts {
client_max_body_size 0;
# 'Error' 418 is a hack to re-use the @gitlab-workhorse block
error_page 418 = @gitlab-workhorse;
return 418;
}
# For protocol upgrades from HTTP/1.0 to HTTP/1.1 we need to provide Host header if its missing
if ($http_host = "") {
# use one of values defined in server_name
set $http_host_with_default "git.example.com";
}
if ($http_host != "") {
set $http_host_with_default $http_host;
}
location @gitlab-workhorse {
## https://github.com/gitlabhq/gitlabhq/issues/694
## Some requests take more than 30 seconds.
proxy_read_timeout 3600;
proxy_connect_timeout 300;
proxy_redirect off;
# Do not buffer Git HTTP responses
proxy_buffering off;
proxy_set_header Host $http_host_with_default;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://gitlab-workhorse;
## The following settings only work with NGINX 1.7.11 or newer
#
## Pass chunked request bodies to gitlab-workhorse as-is
# proxy_request_buffering off;
# proxy_http_version 1.1;
}
## Enable gzip compression as per rails guide:
## http://guides.rubyonrails.org/asset_pipeline.html#gzip-compression
## WARNING: If you are using relative urls remove the block below
## See config/application.rb under "Relative url support" for the list of
## other files that need to be changed for relative url support
location ~ ^/(assets)/ {
root /opt/gitlab/embedded/service/gitlab-rails/public;
gzip_static on; # to serve pre-gzipped version
expires max;
add_header Cache-Control public;
}
## To access Grafana
location /-/grafana/ {
proxy_pass http://localhost:3000/;
}
error_page 502 /502.html;
}
Last but not least, you need to add another bind mount to your nginx container:
-v /var/opt/gitlab:/var/opt/gitlab
This will let your nginx container connect to the GitLab container. Otherwise you will get "cannot find var/opt/gitlab/gitlab-workhorse/sockets/socket".
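For illustration, a hedged sketch of how that mount could be wired into the nginx container's run command; the image name, port and config path are assumptions, not from the question:
docker run -d --name nginx \
  --publish 80:80 \
  --volume /var/opt/gitlab:/var/opt/gitlab \
  --volume /path/to/gitlab.conf:/etc/nginx/conf.d/gitlab.conf:ro \
  nginx:latest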
It looks like you are installing a GitLab instance as a custom git server on a remote host. There are 3 pieces of this that must work.
1. DNS setup, the remote host's ports, and firewall setup.
2. A working installation of GitLab on the remote host.
3. Valid SSL certificates, and a correct nginx config for HTTPS.
The first step really depends on your virtual machine and container's setup, but essentially, make sure it (the VM or container) has a public port that responds to requests.
The $GITLAB_HOME variable must be set in the remote host's environment for these volume mounts from the docker run command to resolve:
--volume $GITLAB_HOME/config:/etc/gitlab
--volume $GITLAB_HOME/logs:/var/log/gitlab
--volume $GITLAB_HOME/data:/var/opt/gitlab
The above URL covers all the GitLab install steps. Once you have signed in and verified that GitLab was installed correctly and runs as expected on that remote host, only then install and configure nginx. Since GitLab will likely transfer credentials and other secure data, you will need to set up HTTPS on nginx.
An example of an Nginx configuration can be found here. There is also a tool by Mozilla that makes building a custom nginx config easier, found here.
The error you show contains the URL 127.0.0.1:8080. It is likely you have supplied this URL to the gitlab.rb config somewhere, and that might be a mistake. I cannot be sure without the whole config file, however.
Also, it is likely the GitLab image will need to run its own nginx instance, so that the container, when launched, can do its job and act as a git server. To reverse proxy this GitLab instance, you may need to install nginx on your host machine and point it at the GitLab image's nginx.
You may be able to do away with the second nginx instance by appending a new server {} block to the GitLab image's nginx config. I would not recommend this.
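To make the two-nginx approach a bit more concrete, here is a hedged sketch of an HTTPS server block for the host nginx that forwards to the GitLab container's published HTTP port (880 in the question's docker run). The certificate paths are placeholders, and GitLab's external_url would still need to match the HTTPS address:
server {
    listen 443 ssl;
    server_name git.stupidpz.com;

    # placeholder certificate paths
    ssl_certificate /etc/nginx/ssl/gitlab.crt;
    ssl_certificate_key /etc/nginx/ssl/gitlab.key;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        # GitLab's bundled nginx, published on the host as port 880
        proxy_pass http://127.0.0.1:880;
    }
}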

Need help troubleshooting custom docker image for nginx

I want to install a simple web service to browse a file directory tree on an internal server and to comply with company policy it needs to use TLS ("https://...").
First I tried several images including davralin/nginx-autoindex and mounted the directory I want this service to share. It worked like a charm, but it didn't use a TLS connection.
To get something to work with TLS, I started from scratch and created my own default.conf file for nginx:
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name localhost;
ssl_certificate /etc/ssl/certs/my-cert.crt;
ssl_certificate_key /etc/ssl/certs/server.key;
location / {
root /usr/share/nginx/html;
autoindex on;
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
Then I created the following Dockerfile:
FROM nginx:stable-alpine
MAINTAINER lsiden at gmail.com
COPY default.conf /etc/nginx/conf.d
COPY my-cert.crt /etc/ssl/certs/
COPY server.key /etc/ssl/certs/
Then I build it:
docker build -t lsiden/nginx-autoindex-tls .
Then I run it:
docker run -dt -v /var/www/data/files:/usr/share/nginx/html:ro -p 3453:80 lsiden/nginx-autoindex-tls
However, I can't reach it even from the host machine. I tried:
$ telnet localhost 3453
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connection closed by foreign host.
I tried to read log messages:
docker logs <container-id>
Silence.
I've already confirmed that the docker proxy is listening to the port:
tcp6 0 0 :::3453 :::* LISTEN 14828/docker-proxy
The port shows up under tcp6 but not "tcp" (IPv4); however, I read here that netstat will show only the IPv6 socket even if it is available on both. To be sure, I verified:
sudo sysctl net.ipv6.bindv6only
net.ipv6.bindv6only = 0
To be thorough, I already opened this port in iptables, although iptables can't be playing a role here if I can't even get to it from the same machine via localhost.
I'm hoping someone with good networking chops can tell me where to look next. I can't figure out what I missed.
In case the configuration you shared is complete, you are not listening on port 80 inside your container at all.
Change your configuration to something like this in case you want to redirect incoming traffic from port 80 to 443:
server {
listen 80;
listen [::]:80;
location / {
return 301 https://$server_name$request_uri;
}
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name localhost;
ssl_certificate /etc/ssl/certs/my-cert.crt;
ssl_certificate_key /etc/ssl/certs/server.key;
location / {
root /usr/share/nginx/html;
autoindex on;
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
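If you go with the redirect block above, the container now listens on both 80 and 443, so both would need to be published, roughly like this (the host port numbers here are arbitrary placeholders):
docker run -dt -v /var/www/data/files:/usr/share/nginx/html:ro -p 8080:80 -p 3453:443 lsiden/nginx-autoindex-tls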
If you don't want to do this, just change your docker run command:
docker run -dt -v /var/www/data/files:/usr/share/nginx/html:ro -p 3453:443 lsiden/nginx-autoindex-tls

Nginx variables ignore case

I'm setting up Nginx using Docker's service discovery. My service name is webAdmin.
The relevant section of the current Nginx config reads:
resolver 127.0.0.11 valid=10s; # Docker DNS server
if (!-f $request_filename) {
set $upstream_admin_server webAdmin:8000;
proxy_pass http://$upstream_admin_server;
break;
}
When visiting the appropriate server, Nginx returns a 404. The logs reveal that Nginx is attempting to resolve a lower case version of my service name.
2019/08/26 21:53:46 [error] 3756#3756: *1569 webadmin could not be resolved (3: Host not found), client: 10.0.0.29, server: admin.mysite.com, request: "GET /favicon.ico HTTP/1.1", host: "admin.mysite.com"
When I avoid using a variable the config reads
resolver 127.0.0.11 valid=10s; # Docker DNS server
if (!-f $request_filename) {
proxy_pass http://webAdmin:8000;
break;
}
Nginx is then able to resolve the service name and correctly route my request.
I attempted to use quotes, single and double but neither seem to have any effect. The Nginx docs for set don't seem to offer any clues.
Why is my variable being converted to lower case?
When Nginx attempts to resolve a name it actually forces the name to lower case. Source can be found here.
I assume this decision was made with the knowledge that DNS names are supposed to be "case insensitive". But it results in inconsistent behavior between an explicitly declared resolver and the default resolver.
For now it seems that the best option is to avoid the use of capitalization in service names. (ie. webAdmin -> web_admin)
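For example, with the service renamed to web_admin, the snippet from the question would simply become:
resolver 127.0.0.11 valid=10s; # Docker DNS server
if (!-f $request_filename) {
    set $upstream_admin_server web_admin:8000;
    proxy_pass http://$upstream_admin_server;
    break;
}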
Thanks to Adiii for the guidance!
It's working fine for me using nginx:alpine. I tested it using the configuration below, changing the value to LocalhOst.
server {
listen 80;
server_name localhost;
location / {
set $upstream_admin_server LocalhOst:80;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
proxy_pass http://$upstream_admin_server/index.html;
}
location /index.html {
root /usr/share/nginx/html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
You can test it with the below command:
docker run --rm --name my-custom-nginx-container -p 80:80 -v $PWD/nginx.conf:/etc/nginx/conf.d/default.conf -it nginx:alpine

How to make a secure nginx-proxy to point different paths in single server?

I want to use letsencrypt-nginx-proxy-companion in my Docker instance.
After some reading I still cannot find a solution for my schema:
HOST (vps) => DOCKER (containers):
- nginx-proxy
- letsencrypt-nginx-proxy-companion
- portainer [to manage self-hosted docker]
  https://projects.domain.com:4488
- jenkins [to manage projects from github]
  https://projects.domain.com:5533
- projects home [static website]
  https://projects.domain.com
- project #1
  https://projects.domain.com/project-1
- project #2
  https://projects.domain.com/project-2
Assuming I know how to manage multiple subdomains (one per container), what I'm missing is how (and where) to specify a /path for the projects.
Where do I start if I want to route all traffic through SSL (excluding the script for certificate renewal) and manage the projects with Jenkins? Is it a good idea to wrap it up this way?
Did you try using a "location /" block with proxy_pass and sub_filters?
For example:
server {
server_name jenkins.domain.com;
listen 80;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
server_name jenkins.domain.com;
ssl_certificate /etc/nginx/ssl/sslcertificate.crt;
ssl_certificate_key /etc/nginx/sslkey.key;
proxy_set_header Accept-Encoding "";
sub_filter_types text/css;
sub_filter 'http://jenkins.domain.com' 'https://$host';
sub_filter_once off;
location /project-1/ {
proxy_pass http://jenkins.domain.com:4488/project-1/;
}
}
server {
server_name projects.domain.com;
listen 80;
return 301 https://$host$request_uri;
}
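Another option, if I'm reading the jwilder/nginx-proxy docs right, is its per-host include: a file in /etc/nginx/vhost.d/ named after the VIRTUAL_HOST (here projects.domain.com) is pulled into the generated server block, so path-based locations could live there instead of a hand-written config. A hedged sketch, where the container name and port are placeholders:
location /project-1/ {
    proxy_pass http://project1:80/;
}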

Nginx on kubernetes docker doing infinite redirect when generating conf

I have an nginx pod deployed in my kubernetes cluster to serve static files. In order to set a specific header in different environments I have followed the instructions in the official nginx docker image docs which uses envsubst to generate the config file from a template before running nginx.
This is my nginx template (nginx.conf.template):
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
server {
listen 443 ssl;
ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
root /usr/share/nginx/html;
#charset koi8-r;
#access_log /var/log/nginx/log/host.access.log main;
location ~ \.css {
add_header Content-Type text/css;
}
location ~ \.js {
add_header Content-Type application/x-javascript;
}
location / {
add_header x-myapp-env $MYAPP_ENV;
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
server {
listen 80;
server_name _;
return 301 https://$host$request_uri;
}
}
I use the default command override feature of Kubernetes to initially generate the nginx conf file before starting nginx. This is the relevant part of the config:
command: ["/bin/sh"]
args: ["-c", "envsubst < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf && nginx -g 'daemon off;'" ]
Kubernetes successfully deploys the pod; however, when I make a request I get an ERR_TOO_MANY_REDIRECTS error in my browser.
Strangely, when I deploy the container without the command override, using an nginx.conf almost identical to the above (but without the add_header directive), it works fine.
(All SSL certs and files to be served are happily copied onto the container at build time so there should be no issue there)
Any help appreciated.
I am pretty sure envsubst is biting you by making try_files $uri $uri/ /index.html; into try_files / /index.html; and return 301 https://$host$request_uri; into return 301 https://;. This results in a loop of redirections.
I suggest you run envsubst '$MYAPP_ENV' <template >nginx.conf instead. That will only replace that single variable and not the unintended ones. (Note the escaping around the variable in the sample command!) If later on you need to add variables you can specify them all like envsubst '$VAR1$VAR2$VAR3'.
If you want to replace all environment variables you can use this snippet:
envsubst `declare -x | sed 's/^declare -x \([^=]*\)=.*/$\1/' | tr -d '\n'` <template >nginx.conf
Also, while it's not asked in the question, you can save yourself some trouble by using ... && exec nginx -g 'daemon off;'. The exec will replace the running shell (PID 1) with the nginx process instead of forking it. This also means that signals will be received by nginx, etc.
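Putting both suggestions together, the command override from the question might end up looking like this (only $MYAPP_ENV is substituted, and exec is added):
command: ["/bin/sh"]
args: ["-c", "envsubst '$MYAPP_ENV' < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf && exec nginx -g 'daemon off;'"]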
