Nginx serving two Angular-Apps: Config with multiple location blocks causes issues - docker

I created a website and a corresponding admin backend with Angular and want to serve both with nginx via Docker. The website should be available at 'mydomain.com/', whereas the backend should be available at 'mydomain.com/backend/'. Getting the website running works perfectly fine, but the path resolution for the backend just does not work.
This is my Dockerfile:
# Webpage
FROM node:16 as build-webpage
WORKDIR /usr/local/app
COPY ./webpage /usr/local/app/
RUN npm ci
RUN npm run build # ng build
# Backend
FROM node:16 as build-backend
WORKDIR /usr/local/app
COPY ./backend /usr/local/app/
RUN npm ci
RUN npm run build
# Nginx
FROM nginx:latest
COPY --from=build-webpage /usr/local/app/dist/webpage /usr/share/nginx/html/webpage
COPY --from=build-backend /usr/local/app/dist/backend /usr/share/nginx/html/backend
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
The nginx.conf contains these location blocks:
location /backend/ {
    root /usr/share/nginx/html/; # "backend" should get appended (due to root)
    index index.html index.htm;
    include /etc/nginx/mime.types;
}

location / {
    root /usr/share/nginx/html/webpage/; # only "/" is appended, so "webpage" is appended manually
    index index.html index.htm;
    include /etc/nginx/mime.types;
}
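As an aside, my understanding of the mapping I want, written with alias instead of root (alias replaces the matched prefix rather than appending the full URI); this is an untested sketch:

```nginx
location /backend/ {
    # alias substitutes the matched "/backend/" prefix with this path,
    # so /backend/index.html should resolve to /usr/share/nginx/html/backend/index.html
    alias /usr/share/nginx/html/backend/;
    index index.html index.htm;
}
```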
When I then try to open 'mydomain.com/backend' or 'mydomain.com/backend/index.html', nginx logs the following:
open() /usr/share/nginx/html/webpage/backend/index.html failed …
It seems to me that nginx matches the "/" location instead of the "/backend/" location, since the "webpage" folder appears in the path nginx resolves.
According to various blog posts, nginx chooses the longest matching prefix location when resolving a URL, so "/backend/" should be chosen here. I also tried replacing location /backend/ with location ^~ \/backend\/, hoping the ^~ modifier would force that block since it appears before the "/" location in the config file. Unfortunately, that made no difference either.
So I would be very grateful if someone could tell me what I'm doing wrong.
Thank you!

Related

How can I use environment variables with lua in nginx container

I want to use nginx in a container, and nginx should read the container's environment variables. I've searched and found that Lua modules make this possible, but for some reason I can't load the Lua modules in nginx itself. Please help; I'm adding my Dockerfile and nginx.conf.
Dockerfile
FROM nginx:1.15-alpine
RUN mkdir -p /run/nginx && \
    apk add nginx-mod-http-lua
WORKDIR /usr/src/app
COPY build /usr/src/app/build
COPY mime.types /usr/src/app
COPY nginx.conf /usr/src/app
EXPOSE 8080
CMD [ "nginx", "-c", "/usr/src/app/nginx.conf", "-g", "daemon off;" ]
nginx.conf
load_module /usr/lib/nginx/modules/ndk_http_module.so;
load_module /usr/lib/nginx/modules/ngx_http_lua_module.so;

pcre_jit on;

events {
}

http {
    server {
        listen 8080;

        set_by_lua $db_api 'return os.getenv("DB_API")';

        location /db/ {
            proxy_pass $db_api;
        }

        location / {
            root /usr/src/app/build;
            index index.html;
        }
    }
}
and these are the errors I get:
2020/09/24 17:06:49 [alert] 1#1: detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)
2020/09/24 17:06:49 [error] 1#1: lua_load_resty_core failed to load the resty.core module from https://github.com/openresty/lua-resty-core; ensure you are using an OpenResty release from https://openresty.org/en/download.html (rc: 2, reason: module 'resty.core' not found:
no field package.preload['resty.core']
no file './resty/core.lua'
no file '/usr/share/luajit-2.1.0-beta3/resty/core.lua'
no file '/usr/local/share/lua/5.1/resty/core.lua'
no file '/usr/local/share/lua/5.1/resty/core/init.lua'
no file '/usr/share/lua/5.1/resty/core.lua'
no file '/usr/share/lua/5.1/resty/core/init.lua'
no file '/usr/share/lua/common/resty/core.lua'
no file '/usr/share/lua/common/resty/core/init.lua'
no file './resty/core.so'
no file '/usr/local/lib/lua/5.1/resty/core.so'
no file '/usr/lib/lua/5.1/resty/core.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
no file './resty.so'
no file '/usr/local/lib/lua/5.1/resty.so'
no file '/usr/lib/lua/5.1/resty.so'
no file '/usr/local/lib/lua/5.1/loadall.so')
and this is the docker build and run commands:
docker build -t client:1.0.0 --no-cache .
docker run -p 80:8080 -it -e DB_API=DB_API_URL client:1.0.0
Adding the env directive at the top of nginx.conf, together with the fixes from the comments on the question, makes it work.
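A sketch of the top of nginx.conf with that directive (using the DB_API name from the question; this relies on the stock nginx behavior of clearing inherited environment variables unless they are whitelisted):

```nginx
# env must appear in the main (top-level) context of nginx.conf;
# without it, os.getenv("DB_API") returns nil inside set_by_lua
env DB_API;

load_module /usr/lib/nginx/modules/ndk_http_module.so;
load_module /usr/lib/nginx/modules/ngx_http_lua_module.so;
```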

Debugging a Go Process on a Container does not listen on the mapped port

I am using the GoLand IDE on macOS and I'm trying to debug an application running in a container. This is essentially remote debugging, except that the container is on my local machine.
When I run the debugger in my IDE it does stop on the breakpoint, but what it is debugging is the application on my local machine, not the one in the container.
For background, my application is supposed to listen on port 8000 and return "Hello, visitor!".
If I compile and run this file through a docker container, map my port 8000 and make a request through browser or through .http file, I do receive this response.
However, when I run it through Delve on the container, it does not respond through browser.
Also, once the container is up, when I start the debugger in my IDE it does not debug the application in the container; it complains:
2020/08/05 17:57:39 main.go:16: listen tcp :8000: bind: address already in use
I've tried following these 2 tutorials, both of which are mostly same, except for the version of their docker images that they use.
Tutorial1
Tutorial2
I have gone through all the comments on these 2 posts as well but haven't found anything that would solve my problem.
Here is my main.go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Set the flags for the logging package to give us the filename in the logs
	log.SetFlags(log.LstdFlags | log.Lshortfile)

	log.Println("starting server...")
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		_, _ = fmt.Fprintln(w, `Hello, visitor!`)
	})
	log.Fatal(http.ListenAndServe(":8000", nil))
}
Here is my Dockerfile:
# Compile stage
FROM golang AS build-env
# Build Delve
RUN git config --global http.sslVerify "false"
RUN git config --global http.proxy http://mycompanysproxy.com:80
RUN go get github.com/go-delve/delve/cmd/dlv
ADD . /dockerdev
WORKDIR /dockerdev
RUN go build -gcflags="all=-N -l" -o /server
# Final stage
FROM debian:buster
EXPOSE 8000 40000
WORKDIR /
COPY --from=build-env /go/bin/dlv /
COPY --from=build-env /server /
CMD ["/dlv", "--listen=:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/server"]
The container comes up successfully and the attached console's log says:
API server listening at: [::]:40000
However, it does not seem to be listening.
If I run
GET http://localhost:8000/
Accept: application/json
I expect it to stop on the breakpoint but it doesn't. Rather it complains:
org.apache.http.NoHttpResponseException: localhost:8000 failed to respond
Am I missing something?
Is this the way to invoke debugger on a containerized app?
Some more information:
I figured out that I was using the wrong debug configuration: you need to select the remote-debug configuration (top right) before pressing the debug button.
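A further note, offered as an assumption rather than a verified fix: a headless dlv exec halts the target at its entry point until a client attaches, so the server inside the container will not answer on port 8000 until the debugger connects and resumes it. Delve's --continue flag makes the headless server start the process immediately; a sketch of the CMD adjusted that way:

```dockerfile
CMD ["/dlv", "--listen=:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "--continue", "exec", "/server"]
```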

Traefik and my web app inside a container

I am trying to do something I thought was simple, but it looks like I'm missing something.
I have a web app, packaged in a Docker image I manage. It starts a server listening on port 9000 and has an endpoint publishing metrics on /admin/metrics.
The application is deployed on a system that requires me to publish those metrics on port 9100, under the path /metrics. I could change the application, run a second server, etc., but for fun I tried something quicker (I thought): running a companion reverse proxy.
I chose Traefik, and I managed to configure it properly using a file provider: when running on my machine (no container), it correctly routes calls from /metrics on port 9100 to my app's /admin/metrics. But when run inside the container, it only returns 404 errors, although the configuration is the same. I also tried running only the app in the container and having Traefik on my machine route to it, but that fails too.
This is my configuration:
#/app/traefik.toml
[entryPoints]
  [entryPoints.MetricsProxy]
    address = ":9100"

[providers]
  providersThrottleDuration = 42
  [providers.file]
    directory = "/app"
    watch = false

[api]
  insecure = false
  dashboard = false
  debug = false

[log]
  level = "TRACE"

#/app/metrics.toml
[http]
  [http.routers]
    [http.routers.Router0]
      entryPoints = ["MetricsProxy"]
      middlewares = ["PathConvert"]
      service = "MetricsService"
      rule = "Path(`/metrics`)"

  [http.services]
    [http.services.MetricsService]
      [http.services.MetricsService.loadbalancer]
        [[http.services.MetricsService.loadBalancer.servers]]
          url = "http://0.0.0.0:9000"

  [http.middlewares]
    [http.middlewares.PathConvert]
      [http.middlewares.PathConvert.addPrefix]
        prefix = "/admin"
Please note that I tried replacing 0.0.0.0 with 127.0.0.1 or localhost; neither works.
Finally, the Dockerfile:
FROM openjdk:8-jre-slim
WORKDIR /app
RUN \
    apt-get update -qq && apt-get install -y -qq curl && \
    curl -sSL https://github.com/containous/traefik/releases/download/v2.0.4/traefik_v2.0.4_linux_amd64.tar.gz | tar -xz
COPY bin/myapp.sh .
COPY target/universal/bluevalet-server.zip .
COPY deploy/traefik/traefik.toml .
COPY deploy/traefik/metrics.toml .
COPY deploy/nginx.conf .
COPY deploy/run.sh .
#run.sh ~~> ./traefik --configfile /app/traefik.toml & ./myapp.sh
CMD [ "/app/run.sh" ]
EXPOSE 9000
EXPOSE 9100
I guess the problem is something about "localhost" in the service definition, but I cannot understand what.
Anyone has an idea?
I'm not sure why it works this way, but I succeeded with another Traefik configuration:
[http]
  [http.routers]
    [http.routers.Router0]
      entryPoints = ["MetricsProxy"]
      middlewares = ["PathConvert"]
      service = "MetricsService"
      rule = "Path(`/metrics`)"

  [http.services]
    [http.services.MetricsService]
      [http.services.MetricsService.loadbalancer]
        [[http.services.MetricsService.loadBalancer.servers]]
          url = "http://localhost:9000/"

  [http.middlewares]
    [http.middlewares.PathConvert]
      [http.middlewares.PathConvert.replacePathRegex]
        regex = "^/metrics"
        replacement = "/admin/metrics/prometheus"

nginx "http_headers_more" module returns "not binary compatible" error

I am trying to get the nginx-mod-http-headers-more module running for nginx so that I can fully hide the server name/version from the response headers.
A bit of background: I am running nginx 1.16.1 inside a Docker container, built from a Dockerfile based on nginx:1.16.1-alpine. To hide the Server response header I need to use the nginx-mod-http-headers-more module.
I added the following commands into my dockerfile to get the module installed in my docker container:
RUN apk update && \
    apk upgrade && \
    apk add nginx-mod-http-headers-more
Inside my nginx.conf file, I added the following lines:
load_module modules/ngx_http_headers_more_filter_module.so;
...
http {
    server {
        more_clear_headers "Server: ";
        ...
    }
}
The load_module statement and the more_clear_headers are the two pieces of code needed to make this module work.
However, when the Docker container is created and run, it generates this error inside the container:
nginx: [emerg] module "/etc/nginx/modules/ngx_http_headers_more_filter_module.so" is not binary compatible in /etc/nginx/nginx.conf:1
I need help figuring out what to do from here. Thanks!
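A hedged note on the error itself: nginx refuses to load a dynamic module compiled against a different nginx binary, and the nginx:1.16.1-alpine image ships its own nginx build while apk installs a module built for Alpine's nginx package. One untested way to avoid the mismatch is to take both nginx and the module from the same Alpine package repository (the Alpine version below is an assumption):

```dockerfile
# nginx and the module come from the same Alpine package build,
# so the dynamic module matches the running binary
FROM alpine:3.10
RUN apk add --no-cache nginx nginx-mod-http-headers-more
```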

How to dynamically assign an environment variable to a angular cli project using Docker?

I have an Angular CLI project and a Node project running in two separate Docker containers.
Here is my Dockerfile
### STAGE 1: Build ###
# We label our stage as 'builder'
FROM node:carbon as builder
COPY package.json package-lock.json ./
RUN npm set progress=false && npm config set depth 0 && npm cache clean --force
## Storing node modules on a separate layer will prevent unnecessary npm installs at each build
RUN npm i && mkdir /ng-app && cp -R ./node_modules ./ng-app
WORKDIR /ng-app
COPY . .
## Build the angular app in production mode and store the artifacts in dist folder
RUN $(npm bin)/ng build --aot --build-optimizer --environment=test
### STAGE 2: Setup ###
FROM nginx:1.13.3-alpine
## Copy our default nginx config
COPY nginx/default.conf /etc/nginx/conf.d/
## Remove default nginx website
RUN rm -rf /usr/share/nginx/html/*
## From 'builder' stage copy over the artifacts in dist folder to default nginx public folder
COPY --from=builder /ng-app/dist /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
The node container URL is stored inside environment.ts (angular).
Environment.ts file
declare var require: any;
const pack = require('../../package.json');

export const environment = {
  production: false,
  API_URL: 'http://localhost:3000/',
  socket: 'http://localhost:3200',
  appName: pack.name,
  version: pack.version,
  envi: 'test'
};
The Node API_URL is baked in at build time of the Angular project, but I want to set it with the docker run command instead, i.e. dynamically inject the value into environment.ts at container runtime, such as:
docker run -e API_URL=192.168.10.147:3000 -p 4200:80 --name=angular angular_image
How can I achieve this?
I'll try to summarize the solution I worked out with a colleague developing an Angular app to solve exactly this problem. To better illustrate it, I start with a depiction of the dev folder tree of our Angular application (folder names are in square brackets); each relevant element is described below:
+---[my angular cli project]
│   │
│   +---[src]
│   │   +---[assets]
│   │   │   +---[json]
│   │   │   │   +---runtime.json
│   │   │   │
│   │   │   ...other angular application assets files...
│   │   │
│   │   ...other angular application source files...
│   │
│   +---[dist]
│   │   ...built angular files...
│   │
│   +---[docker]
│   │   +---[nginx]
│   │   │   +---default.conf
│   │   +---startup.sh
│   │
│   +---Dockerfile
│
...other angular cli project files in my project...
In your Angular CLI project, configuration data that needs to be replaced at runtime with environment-variable values is kept in a static JSON file in the application assets. We chose to locate it at assets/json/runtime.json. In this file, values to be replaced are written like the ${API_URL} placeholder in the following example:
./src/assets/json/runtime.json:
{
  "PARAM_API_URL": "${API_URL}"
  ...other parameters...
}
At runtime, the Angular code reads the value of PARAM_API_URL from this file, whose contents have been rewritten with environment values at container startup, as explained below. Technically, the JSON is read by an Angular service over HTTP; that is, the web application performs an HTTP GET against itself, at the URL of the static asset JSON file above.
@Injectable()
export class MyComponent {
  constructor( private http: Http ) {
  }
  ...
  someMethod() {
    this.http.get( 'assets/json/runtime.json' )
      .map( result => result.json().PARAM_API_URL )
      .subscribe( api_url => {
        // ... do something with the api_url,
        // e.g. invoke another http get on it ...
      } );
  }
}
To create a Docker container that performs the environment replacement at startup, a script startup.sh is put inside it (see the Dockerfile below) that, at container startup, performs an envsubst on the above file before launching the nginx web server:
./docker/startup.sh:
#!/bin/sh
echo "Starting application..."
echo "API_URL = ${API_URL}"
# envsubst must not read from and redirect to the same file:
# the shell truncates the file before envsubst reads it; go through a temp file
envsubst < /usr/share/nginx/html/assets/json/runtime.json > /tmp/runtime.json
mv /tmp/runtime.json /usr/share/nginx/html/assets/json/runtime.json
nginx -g 'daemon off;'
As shown below, the Dockerfile copies the compiled Angular files from ./dist into /usr/share/nginx/html and defines the startup.sh script as the CMD entry point (that is why /usr/share/nginx/html is the path used to locate the runtime.json file in the envsubst invocation above). Note that, differently from your Dockerfile, we don't include a build stage for the Angular CLI sources; instead, ng build is expected to have been run by the developers before the image is created, with its output in the ./dist folder. This is a minor difference as far as the solution to the problem at hand is concerned, though.
./Dockerfile:
FROM nginx:1.13.3-alpine
## Copy our default nginx config
COPY docker/nginx/default.conf /etc/nginx/conf.d/
## Remove default nginx website
RUN rm -rf /usr/share/nginx/html/*
## copy over the artifacts in dist folder to default nginx public folder
COPY dist /usr/share/nginx/html
## startup.sh script is launched at container run
ADD docker/startup.sh /startup.sh
CMD /startup.sh
Now, when you build your container you can run it with:
docker run -e "API_URL=<your api url>" <your image name>
and the given value will be replaced inside runtime.json before launching nginx.
For completeness, though not relevant to the specific problem, I also include the docker/nginx/default.conf file that configures the nginx instance:
./docker/nginx/default.conf:
server {
    listen 80;

    sendfile on;
    default_type application/octet-stream;

    gzip on;
    gzip_http_version 1.1;
    gzip_disable "MSIE [1-6]\.";
    gzip_min_length 256;
    gzip_vary on;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_comp_level 9;

    root /usr/share/nginx/html;

    location / {
        try_files $uri $uri/ /index.html =404;
    }
}
