Internal redirection inside a Docker container

I have a docker image that serves an Angular app via an nginx webserver. The image is to be deployed into a kubernetes cluster, exposed behind an nginx-ingress resource, so all my DNS config is handled by ingress.
A requirement appeared where I'm supposed to keep some legacy configuration, mainly a URI of the form /some/path/to/stuff/{a large code}/something/{another code}. I have the regex settled and everything works on that part.
I was asked to perform a redirect from www.example.com/some/path/to/stuff/{a large code}/something/{another code} to www.example.com/path/{a large code}/something/{another code}.
The constraint is that the only modification I can make is to the nginx.conf file that's injected in the Dockerfile.
I tried location regex with proxy_pass, rewrite and return 301.
From my tests it seems impossible to do while being DNS-agnostic.
TL;DR: can redirects be performed on a webserver while staying DNS-agnostic?
EDIT: This is the Dockerfile:
FROM node:12.16.1-alpine AS builder
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build-prod
FROM nginx:1.15.8-alpine
RUN rm -rf /usr/share/nginx/html/*
COPY ./nginx.conf /etc/nginx/nginx.conf
COPY --from=builder /usr/src/app/dist/dish-ui/ /usr/share/nginx/html
This is the base nginx.conf file:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name localhost;
        root /usr/share/nginx/html;
        index index.html index.htm;
        include /etc/nginx/mime.types;

        gzip on;
        gzip_min_length 1000;
        gzip_proxied expired no-cache no-store private auth;
        gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;

        location / {
            try_files $uri $uri/ /index.html;
        }
    }
}
What I tried:
Added rewrite rules in location blocks:
location ~* ".*([a-z0-9]{64})/something/([a-z0-9]{64})$" {
    rewrite ".*([a-z0-9]{64})/something/([a-z0-9]{64})$" /new/$1/$2/path;
}
The above configuration caused a 404 error because the rewritten path couldn't be found in the root folder.
Added a rewrite rule inside the server block:
rewrite ".*([a-z0-9]{64})/something/([a-z0-9]{64})$" /new/$1/$2/path;
This would only match once; subsequent requests wouldn't get rewritten. It also didn't redirect properly.
Added a proxy_pass to localhost:
location ~* ".*([a-z0-9]{64})/something/([a-z0-9]{64})$" {
    proxy_pass http://127.0.0.1/new/$1/$2/path;
}
This didn't work either. My index.html would always be loaded; the rule seemed to be ignored completely, even when I placed it BEFORE location /.
I settled on hardcoding the correct URL and using a 301 return.
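(For reference, nginx can issue a host-agnostic redirect: when return 301 is given a URI starting with /, the Location header is built from the request's own Host header, so no DNS name needs to be hardcoded. A minimal sketch along those lines — the regex anchor and the /path/... target are modeled on the question and are illustrative:)
location ~* "^/some/path/to/stuff/([a-z0-9]{64})/something/([a-z0-9]{64})$" {
    # $1 and $2 are the captures from the location regex; the scheme and
    # host of the resulting Location header come from the incoming request.
    return 301 /path/$1/something/$2;
}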

Related

Showing updates in the browser without hitting refresh when using Docker to deploy a Vue.js app

I am new to docker and following the tutorial on https://cli.vuejs.org/guide/deployment.html#docker-nginx.
I have managed to get it to work, but I'm wondering if there is a way to get the browser to update without having to hit the refresh button. Here are the steps I take.
1. Change content.
2. Stop container.
3. Remove container.
4. Build container.
5. Run container.
6. Refresh browser.
I am wondering if there is a way to avoid step 6 to see the new content in the browser.
Here is my Dockerfile:
FROM node:latest as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build
FROM nginx as production-stage
RUN mkdir /app
COPY --from=build-stage /app/dist /app
COPY nginx.conf /etc/nginx/nginx.conf
and here is my nginx.conf file:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;

        location / {
            root /app;
            index index.html;
            try_files $uri $uri/ /index.html;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}
Problem
Hot reloading is generally implemented using file-system-specific event APIs to determine when and what to “reload”. Since you’re running Vue in a docker container with a different OS, Vue has no way of communicating with your host file system via those APIs (and shouldn’t, for obvious security reasons).
Solution
Ensure Hot Reloading Is Enabled
You’ll need to ensure hot reloading is enabled in Vue.js and force Vue.js to use polling rather than sockets to determine when to reload.
Ensure hot reloading is enabled: minification must be disabled in development mode, the webpack target must not be “node”, and NODE_ENV must not be “production”.
Use Polling
Update webpack config to include the following:
watchOptions: {
    poll: 1000 // Check for changes every second
}
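In a Vue CLI project this would typically live in vue.config.js; a sketch under that assumption (the file name and structure follow Vue CLI conventions and are not from the original post):
// vue.config.js
module.exports = {
  configureWebpack: {
    // Poll the filesystem instead of relying on inotify events,
    // which don't cross the container boundary
    watchOptions: {
      poll: 1000 // check for changes every second
    }
  }
};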
or
Set the environment variable CHOKIDAR_USEPOLLING to true in your development Dockerfile or docker-compose file.
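For example (illustrative snippets, not from the original post):
# In a development Dockerfile:
ENV CHOKIDAR_USEPOLLING=true
# Or under the service in docker-compose.yml:
#   environment:
#     - CHOKIDAR_USEPOLLING=true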
References
Disabling Hot Reload in Vue.js: https://vue-loader.vuejs.org/guide/hot-reload.html#usage
Watch Options: https://webpack.js.org/configuration/watch/#watchoptionspoll

Docker build doesn't detect changes in React production build files

I am trying to run a React application in an NGINX Alpine container, but whenever I make a change in the React app, create a production build again, and copy the updated build folder from my machine into the container, the old version of the application is still running.
My Dockerfile looks like this:
FROM nginx:alpine
COPY ./nginx_conf /etc/nginx/nginx.conf
COPY ./build/ /var/www/build/
EXPOSE 80
I tried building the image without using the cache (sudo docker build --no-cache -t image_name .), deleting the image and then building it again, and even changing the name of the build folder on my machine and trying to copy it into different folders in the container (of course modifying the server configuration accordingly).
The server configuration looks like this:
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log warn;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    access_log /var/log/nginx/access.log;
    keepalive_timeout 3000;

    server {
        listen 80;
        root /var/www/build;
        index index.html;
        server_name localhost;
        client_max_body_size 64m;

        error_page 500 502 503 504 /50x.html;

        location / {
            try_files $uri /index.html;
        }
    }
}
The .dockerignore file (all files and folders except build/ are excluded):
/node_modules
/public
/src
/.env
/package.json
/package-lock.json
/.gitignore
/README.md
How can I solve this problem? Have I done something wrong, or is this a bug in Docker?
I know that I can just mount the build/ folder from my machine into the container, but I would like to copy it into the image instead of referencing it from the container.
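(A quick way to narrow this down — a hypothetical diagnostic, with image_name standing in for the real tag — is to check whether the fresh build actually made it into the image; this separates a Docker caching problem from a browser caching one:)
sudo docker build --no-cache -t image_name .
# List the files baked into the image; the timestamps should match the new build
sudo docker run --rm image_name ls -la /var/www/build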

Does nginx, as I suspect, put "Welcome to Nginx!" when it doesn't know what else to do?

I'm debugging a simple (Docker) proxy server which, so far as I know, doesn't have a "default web site" or anything like that. I think it's getting 302 responses from upstream, but I do not yet know why. What is interesting is that I'm getting "Welcome to Nginx!", even though I don't think there is a website file that would actually produce it, nor any reason to go to such a place.
So ... does nginx sometimes produce that response "on its own?" If so, it would greatly help my troubleshooting if someone could tell me, "under what circumstances?" If this is a clue, I'd like to understand that clue ...
This is, in fact, the default index.html for an Nginx webserver. If you spin up a vanilla Nginx container and connect inside of it you can even see these files on the filesystem. For example:
$> docker run --name nginx -d nginx
98da5173df23ea4690b9ce8bda87d844775c77609905f76b542115e4babcdcfa
$> docker exec -it nginx sh
$> ls /usr/share/nginx/
html
$> ls /usr/share/nginx/html
50x.html index.html
$> cat /usr/share/nginx/html/index.html
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
So how does this file get interpreted by Nginx? Well, unless you volume-mount your own /etc/nginx/nginx.conf file into the container, the Nginx process is going to use the default one that the original container developers added, which references the above index.html file. We can track this down too:
$> ls /etc/nginx/
conf.d fastcgi_params koi-utf koi-win mime.types modules nginx.conf scgi_params uwsgi_params win-utf
$> cat /etc/nginx/nginx.conf
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
So if we look here, one thing sticks out. The very last line in the file says: include /etc/nginx/conf.d/*.conf. This means that Nginx will go on to process any configuration it finds in *.conf files present in the /etc/nginx/conf.d directory after it finishes reading this configuration file.
So let's take a look at that directory:
$> ls /etc/nginx/conf.d/
default.conf
$> cat /etc/nginx/conf.d/default.conf
server {
    listen 80;
    server_name localhost;

    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root html;
    #    fastcgi_pass 127.0.0.1:9000;
    #    fastcgi_index index.php;
    #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
    #    include fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}
What do we see here? Well, mostly that almost the entire config file is commented out. Let's remove those to make it easier to read and understand.
server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
We can see here that Nginx is configured, by default by the container developers, to serve all traffic it receives on port 80 from /usr/share/nginx/html, and to serve the 50x error page from that same directory. As demonstrated in the STDOUT produced by cat earlier in my post, the contents of these files are what you are seeing in your browser.
Hope that helps, let me know if you have any other questions about this!
Solved thanks to TJ's tremendously useful help. It turns out that the page was actually being delivered by the nginx container that was being proxied, and that the root cause was that the proper volume was not being mounted into it.
I traced the problem to the container by attempting to curl the container's IP address directly, after specifying ports 8080:80 to allow me to do so. I then logged on to the container and looked around very carefully. Indeed it was as TJ described: an error was being thrown and the default screen was produced as a result.
When I corrected the volume (bind point) specification to expose the correct file, the problem disappeared.
Incidentally, I surmise (now) that my original presumption that somehow "Nginx produces this page by default" was incorrect. It really did produce the page because it had been told to do so. I just didn't fully understand why.
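(For readers hitting the same symptom, the fix amounts to bind-mounting the intended content over the default docroot; a sketch with illustrative paths:)
docker run -d --name nginx -p 8080:80 \
  -v /path/to/my/site:/usr/share/nginx/html:ro \
  nginx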

Nginx reverse proxy static assets 404 not found

I am using Nginx and reverse proxy, also Docker.
I have two Docker containers.
319f103c82e5 web_client_web_client "nginx -g 'daemon of…" 6 minutes ago Up 5 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp web_client
7636ddaeae99 admin_web_admin "nginx -g 'daemon of…" 2 hours ago Up 2 hours 0.0.0.0:6500->80/tcp, 0.0.0.0:7000->443/tcp web_admin
These are my two containers. When I enter http://website.com, it goes to the web_client_web_client container. When I enter http://website.com:6500, it goes to the admin_web_admin container. This is the flow right now.
What I want is for my admin users not to have to type http://website.com:6500 to get to the admin page. I would prefer them to type http://website.com/admin. So I decided to use proxy_pass, meaning that when accessing http://website.com/admin, it should proxy_pass to https://website.com:7000.
So now, I am posting the Nginx config for web_client_web_client, since it's the one which handles requests on ports 80 and 443.
Here it is:
server {
    listen 80 default_server;
    server_name website.com;

    location / {
        rewrite ^ https://$host$request_uri? permanent;
    }

    location /admin {
        proxy_pass https://website.com:7000/;
    }

    # I also tried
    #location /admin/ {
    #    proxy_pass https://website.com:7000/;
    #}

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}

server {
    listen 443 ssl;
    server_name website.com;

    gzip on;
    gzip_min_length 1000;
    gzip_types text/plain text/xml application/javascript text/css;

    ssl_certificate /etc/letsencrypt/live/website.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/website.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    root /usr/share/nginx/html;

    location / {
        add_header Cache-Control "no-store";
        try_files $uri $uri/index.html /index.html;
    }

    location ~ \.(?!html) {
        add_header Cache-Control "public, max-age=2678400";
        try_files $uri =404;
    }
}
Now, what happens is that static files (CSS and JS) are not loaded, and when inspecting from Chrome, the request gets made to https://website.com/static/css/app.597efb9d44f82f809fff1f026af2b728.css instead of https://website.com:7000/static/css/app.597efb9d44f82f809fff1f026af2b728.css. So it returns 404 Not Found. I don't understand why I can't get such a simple thing to work.
Your main problem is not really with nginx but with how the 2 applications are set up. I don't have your code, but this is what I can infer from your post:
In your pages you load the static content using absolute paths: /static/css/...
So even when you call your pages with /admin in front they will still try to load the static content from /static/
One solution is to use relative paths for your static content. Depending on how complex your application is, this might require some work... You need to change the path to static files to something like "./static/css/..." and make sure your files still work. Then your setup in nginx will work, because admin pages will try to load '/admin/static/...'
Another solution is to rename the 'static' folder in the admin app to something else and then proxy_pass that new path as well in your nginx config, as sketched below.
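A minimal sketch of that second approach, assuming the admin app's folder has been renamed to admin-static (the folder name is illustrative; the upstream address comes from the post):
location /admin/ {
    # The trailing slash on proxy_pass strips the /admin/ prefix,
    # so /admin/foo reaches the admin container as /foo
    proxy_pass https://website.com:7000/;
}

location /admin-static/ {
    # No URI part on proxy_pass: the path is forwarded unchanged,
    # so the admin app must serve its assets under /admin-static/
    proxy_pass https://website.com:7000;
}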
One last thing: your post mentions 2 ports, 6500 and 7000. I am assuming that is a mistake, so can you correct it? Or did I misunderstand?

Nginx on kubernetes docker doing infinite redirect when generating conf

I have an nginx pod deployed in my kubernetes cluster to serve static files. In order to set a specific header in different environments, I have followed the instructions in the official nginx docker image docs, which use envsubst to generate the config file from a template before running nginx.
This is my nginx template (nginx.conf.template):
events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    gzip on;

    server {
        listen 443 ssl;
        ssl_certificate /etc/nginx/ssl/nginx.crt;
        ssl_certificate_key /etc/nginx/ssl/nginx.key;

        root /usr/share/nginx/html;

        #charset koi8-r;
        #access_log /var/log/nginx/log/host.access.log main;

        location ~ \.css {
            add_header Content-Type text/css;
        }

        location ~ \.js {
            add_header Content-Type application/x-javascript;
        }

        location / {
            add_header x-myapp-env $MYAPP_ENV;
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
        }

        #error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }

    server {
        listen 80;
        server_name _;
        return 301 https://$host$request_uri;
    }
}
I use the default command override feature of Kubernetes to initially generate the nginx conf file before starting nginx. This is the relevant part of the config:
command: ["/bin/sh"]
args: ["-c", "envsubst < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf && nginx -g 'daemon off;'" ]
Kubernetes successfully deploys the pod however when I make a request I get a ERR_TOO_MANY_REDIRECTS error in my browser.
Strangely, when I deploy the container without the command override, using an nginx.conf almost identical to the above (but without the add_header directive), it works fine.
(All SSL certs and files to be served are happily copied onto the container at build time so there should be no issue there)
Any help appreciated.
I am pretty sure envsubst is biting you by making try_files $uri $uri/ /index.html; into try_files / /index.html; and return 301 https://$host$request_uri; into return 301 https://;. This results in a loop of redirections.
I suggest you run envsubst '$MYAPP_ENV' <template >nginx.conf instead. That will only replace that single variable and not the unintended ones. (Note the quoting around the variable in the sample command!) If later on you need to add more variables you can specify them all, like envsubst '$VAR1$VAR2$VAR3'.
If you want to replace all environment variables you can use this snippet:
envsubst `declare -x | sed 's/^declare -x \([^=]*\)=.*/$\1/' | tr -d '\n'` <template >nginx.conf
Also, while it's not asked in the question, you can save yourself some trouble by using ... && exec nginx -g 'daemon off;'. The exec will replace the running shell (pid 1) with the nginx process instead of forking it. This also means that signals will be received by nginx, etc.
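Putting both suggestions together, the Kubernetes override from the question would become something like this (a sketch; only $MYAPP_ENV is substituted, and exec hands pid 1 to nginx):
command: ["/bin/sh"]
args: ["-c", "envsubst '$MYAPP_ENV' < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf && exec nginx -g 'daemon off;'"]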
