With the same Docker image (from Container Registry), the browser prompts for a username/password when I run it locally, but on Cloud Run it always returns 401.
nginx config:
server {
    listen 80;

    location / {
        auth_basic "Administrator’s Area";
        auth_basic_user_file /etc/nginx/.htpasswd;
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }

    include /etc/nginx/extra-conf.d/*.conf;
}
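For reference, the file named by auth_basic_user_file is normally generated with htpasswd and baked into the image at build time; a minimal sketch (the user name and file layout here are assumptions, not from the question):

# Generate the credentials file locally (htpasswd comes with apache2-utils / httpd-tools).
htpasswd -c .htpasswd admin

# Dockerfile (hypothetical layout)
FROM nginx:alpine
COPY .htpasswd /etc/nginx/.htpasswd
COPY default.conf /etc/nginx/conf.d/default.conf
COPY html/ /usr/share/nginx/html/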
Local run:
docker run -d -p 9091:80 asia.gcr.io/my-project/my-service/my-service@sha256:b09537845fc8be8db82a7c772dd6b47cf4460cbbd5bc7e209c405be102248619
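Locally, the basic-auth flow can be checked with curl; the credentials are whatever was put into .htpasswd, and the output below is illustrative:

$ curl -I http://localhost:9091/
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="Administrator’s Area"

$ curl -I -u admin:secret http://localhost:9091/
HTTP/1.1 200 OK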
Does GCP Cloud Run support nginx basic auth, or did I go wrong somewhere?
I'm trying to serve multiple containers, each with a static index.html file, behind an nginx reverse proxy.
I've tried to follow the documentation here to create a default location:
location / {
    root /app;
    index index.html;
    try_files $uri $uri/ /index.html;
}
If I check my default.conf in my container with
$ docker-compose exec nginx-proxy cat /etc/nginx/conf.d/default.conf
I get this result:
server {
    server_name _; # This is just an invalid value which will never trigger on a real hostname.
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    return 503;
}

# xx.example.services
upstream xx.example.services {
    ## Can be connected with "nginx-proxy" network
    # examplecontainer1
    server 172.18.0.4:80;
}

server {
    server_name xx.example.services;
    listen 80;
    access_log /var/log/nginx/access.log vhost;

    location / {
        proxy_pass http://xx.example.services;
        include /etc/nginx/vhost.d/default_location;
    }
}

# yy.example.services
upstream yy.example.services {
    ## Can be connected with "nginx-proxy" network
    # examplecontainer2
    server 172.18.0.2:80;
}

server {
    server_name yy.example.services;
    listen 80;
    access_log /var/log/nginx/access.log vhost;

    location / {
        proxy_pass http://yy.example.services;
        include /etc/nginx/vhost.d/default_location;
    }
}
If I check the content of /etc/nginx/vhost.d/default_location, it is exactly what I typed at the beginning, so that's fine.
However, when I go to xx.example.services I get a 403 Forbidden.
To my understanding this means that no index.html file was found, but if I exec into my container and cat app/index.html, it does exist!
I've checked that all my containers are on the same network.
I'm running my container with this command
docker run -d --name examplecontainer1 --expose 80 --net nginx-proxy -e VIRTUAL_HOST=xx.example.services my-container-registry
Update
I checked the logs of my nginx-proxy container and found this error message:
[error] 29#29: *1 directory index of "/app/" is forbidden..
I tried removing $uri/ as per this SO post, but that just left me with redirect cycles. Right now I'm trying to see if I can set the correct permissions, but I'm struggling.
What am I missing?
My issue was a basic misunderstanding: I assumed the reverse proxy could reach into the filesystem of my containers, which it cannot, as stated by jwilder himself here.
Therefore the default location on the reverse proxy is unnecessary in my case. Instead I can simply let it proxy to my container and have the nginx config inside my container determine the location of my app.
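A minimal sketch of what the nginx config inside examplecontainer1 might then look like, assuming the static files live under /app as described above:

server {
    listen 80;
    root /app;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }
}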
Hi, it's simple: your containers are missing /app/index.html inside them.
I run nginx in a Docker container on port 4000. It should serve a static website with the following structure:
/usr/share/nginx/html/
    index.html
    app.html
    app/
        some-other.html
The port is mapped to port 3000 on localhost. Now I am trying to use clean URLs such as http://localhost:3000/app. However, when I try to access this URL it redirects me to http://localhost:4000/app/ (notice the trailing /) with status code 301 Moved Permanently.
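The behaviour is easy to reproduce with curl; the response below simply restates what the browser shows:

$ curl -I http://localhost:3000/app
HTTP/1.1 301 Moved Permanently
Location: http://localhost:4000/app/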
I've tried using try_files in different locations and orders (commented out below), but nothing worked. In addition, I've also used a rewrite, without success, always resulting in the same redirect.
site.conf
server {
    listen 4000;
    server_name localhost;
    root /usr/share/nginx/html;

    # location / {
    #     index index.html;
    #     try_files $uri $uri.html $uri/ =404;
    # }

    location / {
        index index.html;
        try_files $uri @htmlext $uri/;
    }

    location ~ \.html$ {
        try_files $uri =404;
    }

    location @htmlext {
        rewrite ^(.*)$ $1.html last;
    }

    error_page 404 /404;
}
I'd expect nginx to return the content of app.html without touching the URL in any form. Can't I do this without adding a location /app block as well?
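For context (not from this thread): the port in that Location header appears because nginx builds absolute redirects from its own listen port by default. A commonly suggested pair of core-module directives to suppress this, assuming nginx 1.11.8 or newer, is:

# Added inside the server block of site.conf above (an assumption, not the asker's config):
absolute_redirect off;   # emit relative Location headers
port_in_redirect off;    # never put the internal listen port into redirects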
I have an online shop that takes credit card payments with 3D Secure.
When 3D Secure navigates back to my site using the URL example.com/confirmPage/token, I get a 405 Not Allowed from Nginx.
If I visit the page directly from my browser there is no problem; also, when I refresh the exact same page that showed the 405 error, it loads perfectly fine.
It seems to be related to the programmatic redirection back to my site from 3D Secure.
Details:
The site is hosted in an AWS ECS cluster, which redirects to HTTPS so Nginx doesn't have to.
The site runs in a Docker container with Nginx.
My Nginx config for the site looks like this:
events {
}

http {
    include /etc/nginx/mime.types;

    server {
        listen 443 ssl default_server;
        listen [::]:443 ssl default_server;
        server_name example.com *.example.com;

        access_log /var/log/example/access/example.access.log;
        error_log /var/log/example/error/example.error.log;

        ssl_certificate /etc/ssl/certs/example.com.crt;
        ssl_certificate_key /etc/ssl/private/example.com.pem;

        root /usr/share/nginx/html;
        index index.html;

        location / {
            try_files $uri $uri/ /index.html?/$request_uri;
        }
    }
}
This is copied over using a Dockerfile.
Any help would be greatly appreciated.
My teammates managed to find the fix for this:
location / {
    error_page 405 = 200 $uri;
    try_files $uri $uri/ /index.html?/$request_uri;
}
See also: 405 (Not Allowed) on POST request (Esa Jokinen)
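Some context on why the fix works (not spelled out in the thread): nginx's static file handler only accepts GET and HEAD, so the POST that 3D Secure issues when sending the shopper back to example.com/confirmPage/token is refused with 405. The error_page 405 = 200 $uri; line intercepts that error and re-serves the same URI with a 200. The original behaviour can be reproduced with curl (output illustrative):

$ curl -i -X POST https://example.com/confirmPage/token
HTTP/1.1 405 Not Allowed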
I have an nginx pod deployed in my Kubernetes cluster to serve static files. In order to set a specific header in different environments, I have followed the instructions in the official nginx Docker image docs, which use envsubst to generate the config file from a template before running nginx.
This is my nginx template (nginx.conf.template):
events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;

    server {
        listen 443 ssl;
        ssl_certificate /etc/nginx/ssl/nginx.crt;
        ssl_certificate_key /etc/nginx/ssl/nginx.key;
        root /usr/share/nginx/html;

        #charset koi8-r;
        #access_log /var/log/nginx/log/host.access.log main;

        location ~ \.css {
            add_header Content-Type text/css;
        }

        location ~ \.js {
            add_header Content-Type application/x-javascript;
        }

        location / {
            add_header x-myapp-env $MYAPP_ENV;
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
        }

        #error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }

    server {
        listen 80;
        server_name _;
        return 301 https://$host$request_uri;
    }
}
I use the default command override feature of Kubernetes to initially generate the nginx conf file before starting nginx. This is the relevant part of the config:
command: ["/bin/sh"]
args: ["-c", "envsubst < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf && nginx -g 'daemon off;'" ]
Kubernetes successfully deploys the pod however when I make a request I get a ERR_TOO_MANY_REDIRECTS error in my browser.
Strangely, when I deploy the container without running the command override using an nginx.conf almost identical to the above (but without the add_header directive) it works fine.
(All SSL certs and files to be served are happily copied onto the container at build time so there should be no issue there)
Any help appreciated.
I am pretty sure envsubst is biting you by making try_files $uri $uri/ /index.html; into try_files / /index.html; and return 301 https://$host$request_uri; into return 301 https://;. This results in a loop of redirections.
I suggest you run envsubst '$MYAPP_ENV' <template >nginx.conf instead. That will only replace that single variable and not the unintended ones. (Note the escaping around the variable in the sample command!) If you later need to add variables you can specify them all, like envsubst '$VAR1$VAR2$VAR3'.
If you want to replace all environment variables you can use this snippet:
envsubst `declare -x | sed 's/^declare -x \([^=]*\)=.*/$\1/' | tr -d '\n'` <template >nginx.conf
Also, while it's not asked in the question you can save yourself some trouble by using ... && exec nginx -g 'daemon off;'. The exec will replace the running shell (pid 1) with the nginx process instead of forking it. This also means that signals will be received by nginx, etc.
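Putting the two suggestions together, the Kubernetes command override from the question would become something like this (a sketch reusing the $MYAPP_ENV variable from the template above):

command: ["/bin/sh"]
args: ["-c", "envsubst '$MYAPP_ENV' < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf && exec nginx -g 'daemon off;'"]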
I'm trying to add a WordPress blog to a site that was built in Ruby on Rails. I just need it to be in a subdirectory. I made a folder in the public directory and put the WordPress files in there, and now I'm getting a routing error; I'm really not that familiar with Rails. Can someone help me figure out a way to do this?
You can get PHP and Rails working in the same project if you have access to the server configuration. I was able to get things working on a test VPS in just a few minutes. I didn't test with WordPress, just a simple phpinfo() call, but I don't see any reason why it would fail.
My install uses NGINX for the web server, Unicorn for Rails, and spawn-fcgi and php-cgi for the PHP processing.
I already had a Rails app working, so I just added PHP to that. The Rails app uses NGINX to proxy requests to Unicorn, so it was already serving the public directory as static. I will post my virtual host file below so you can see how it was done.
This was all done on an ArchLinux VPS, but other distros should be similar.
My virtual host file:
upstream unicorn {
    server unix:/tmp/unicorn.jrosw.sock fail_timeout=0;
}

server {
    listen 80 default deferred;
    server_name example.com www.example.com;
    root /home/example/app/current/public;

    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    try_files $uri/index.html $uri @unicorn;

    location @unicorn {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://unicorn;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;

    location ~ \.php$ {
        try_files $uri =404;
        include /etc/nginx/conf/fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /home/example/app/current/public$fastcgi_script_name;
    }
}
And then a small script to bring up php-cgi:
#!/bin/sh

# You may want to just set this to run as your app user
# if you upload files to the php app, just to avoid
# permissions problems
if [ `grep -c "nginx" /etc/passwd` = "1" ]; then
    FASTCGI_USER=nginx
elif [ `grep -c "www-data" /etc/passwd` = "1" ]; then
    FASTCGI_USER=www-data
elif [ `grep -c "http" /etc/passwd` = "1" ]; then
    FASTCGI_USER=http
else
    # Set the FASTCGI_USER variable below to the user that
    # you want to run the php-fastcgi processes as
    FASTCGI_USER=
fi

# Change 3 to the number of cgi instances you want.
/usr/bin/spawn-fcgi -a 127.0.0.1 -p 9000 -C 3 -u $FASTCGI_USER -f /usr/bin/php-cgi
The only problem I had was getting the fastcgi_index option to work, so you'd probably need to look into nginx's URL rewriting capabilities to get WordPress' permalink functionality working.
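For what it's worth, the rewrite that WordPress permalinks usually need in a subdirectory looks like the sketch below; the /blog path is hypothetical and would match wherever the WordPress folder sits under public/:

location /blog/ {
    # Send pretty permalinks that don't map to a real file to WordPress' front controller.
    try_files $uri $uri/ /blog/index.php?$args;
}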
I know this method isn't ideal, but hopefully it gets you on the right track.