Traefik - proxy to backend for Angular application - Docker

I have set up a proxy with Nginx which is as follows
server {
    listen 80;
    server_name localhost;

    location /api {
        proxy_pass https://api.mydomain.com/;
    }

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
My Dockerfile:
FROM node:12-alpine AS builder
WORKDIR /workspace
COPY ./package.json ./
RUN npm install
COPY . .
RUN npm run build

FROM nginx
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
# copy the compiled app from the builder stage (built under /workspace)
COPY --from=builder /workspace/www /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
This works fine, but I want to replace Nginx with Traefik for the above proxy settings. Any help would be much appreciated since I'm very new to Traefik.

With Traefik 2+, you need to configure 2 routers:
- One for the API
- One for the webapp
For the API proxy, you will have a rule like:
rule = "Host(`example.com`) && Path(`/api`)"
And the webapp will juste have the host as rule
rule = "Host(`example.com`)"
For Kubernetes, you can do it with an IngressRoute like this:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: simpleingressroute
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`example.com`) && PathPrefix(`/api`)
      kind: Rule
      services:
        - name: myapi-svc
          port: 80
    - match: Host(`example.com`)
      kind: Rule
      services:
        - name: mywebapp-svc
          port: 80
If the API is not inside the Kubernetes cluster, you can point the route at an ExternalName Service like this:
---
apiVersion: v1
kind: Service
metadata:
  name: myapi-svc
  namespace: default
spec:
  type: ExternalName
  externalName: api.mydomain.com
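Since the original Nginx config proxied to this API over HTTPS, you will likely also need to declare a port on the ExternalName Service and tell Traefik to use the https scheme; a sketch under that assumption (not from the original answer):

---
apiVersion: v1
kind: Service
metadata:
  name: myapi-svc
  namespace: default
spec:
  type: ExternalName
  externalName: api.mydomain.com
  ports:
    # declared so the IngressRoute below can reference port 443
    - port: 443

with the matching service entry in the IngressRoute:

services:
  - name: myapi-svc
    port: 443
    scheme: https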

If you want to step up from that manual configuration, you may use Traefik as described here. Watch how he uses Docker labels to define how to route HTTP traffic.
I personally use caddy-docker-proxy in Docker (Swarm, but not required), which I find easier to understand and use.
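For comparison, caddy-docker-proxy follows the same label-driven idea; a minimal sketch (hypothetical compose service, assuming the plugin's label syntax):

  webapp:
    image: my-angular-app
    labels:
      caddy: example.com
      # {{upstreams 80}} expands to the container's address on port 80
      caddy.reverse_proxy: "{{upstreams 80}}"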

Related

Kubernetes - nginx: [emerg] host not found in upstream

I'm attempting to expose a service from my home network via a sub-URL of my domain. This service is running on a Kubernetes (K3s) cluster on an Ubuntu 20.04 server. However, there's an issue with proxy_pass in that pod that is giving me the `nginx: [emerg] host not found in upstream in "webhook.lan"` error.
Here's an overview of my setup:
- The .lan DNS entry is set up on my router. I have many other services accessible (locally) that use .lan. I'm confident this is set up correctly.
- My Ubuntu server is set up to use my router (192.168.0.1) as a DNS server. I can run dig webhook.lan and it's successful. I can also run wget webhook.lan and it downloads the HTML. So, the server can definitely resolve the address. resolvectl status has 192.168.0.1 as the current DNS server. Everything seems okay.
- The K3s cluster is using an NGINX Ingress controller. I have two hosts set up: a wildcard host for *.lan addresses, along with a host to handle requests from my sub-URL sub.url.com. I'm not attempting any sort of communication within the cluster, as I don't see the point (they're both already exposed separately).
- Each container has its own NGINX configuration to handle / proxy requests.
The .lan config looks like this:
server {
    server_name service-a.lan;
    listen 80;
    listen [::]:80;

    location / {
        include proxy_params;
        proxy_pass http://192.168.0.27:8000;
    }
}

server {
    server_name service-b.lan;
    listen 80;
    listen [::]:80;

    location / {
        include proxy_params;
        proxy_pass http://192.168.0.27:3000;
    }
}
That's just a sample; there are many more. They all work perfectly on my local network.
Here's the config for the pod that is handling requests from the web. I currently use it to serve static files, along with a few basic HTML pages. This is where the proxy_pass issue is coming up:
upstream webhook {
    server webhook.lan;
}

server {
    server_name genweb;
    listen 80;
    listen [::]:80;

    # basic welcome screen
    location / {
        alias /srv/web/;
        try_files $uri $uri/ /index.html;
    }

    # static images, music, etc.
    location /static {
        alias /app/static;
        autoindex on;
    }

    # problem area!!
    location /webhook {
        #resolver 127.0.0.11;
        resolver 192.168.0.1;
        include proxy_params;
        proxy_pass http://webhook;
    }
}
Originally, I just imported proxy_params and used the proxy_pass directive, since that's all I was doing with the other container. Then I tried an upstream {} block; then resolver 192.168.0.1 to make it resolve via the router. Finally, I tried resolver 127.0.0.11, because that's apparently Docker's embedded DNS address.
No dice. I should also mention that the pod/deployment doesn't even start; I got that error from kubectl logs. It tries to restart a bunch of times, eventually reports a failure status, and never starts. Commenting out the webhook location in the NGINX config allows it to start right up.
I'm guessing the issue comes down to NGINX in the sub-URL's container not being able to resolve webhook.lan via 192.168.0.1, but I cannot figure out why.
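Not from the original post, but for context: nginx resolves hostnames in upstream {} and proxy_pass once, at startup, via /etc/resolv.conf; the resolver directive only applies when the target is held in a variable and resolved at request time. A hedged sketch of that commonly suggested workaround:

location /webhook {
    resolver 192.168.0.1 valid=30s;
    include proxy_params;
    # holding the target in a variable defers DNS resolution to
    # request time, where the resolver directive above is used,
    # instead of failing at startup with "host not found in upstream"
    set $webhook_upstream http://webhook.lan;
    proxy_pass $webhook_upstream;
}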
Here's the Dockerfile (which I basically copied from the other running app) and the Service/Deployment YAML file, in case that's where the problem lies:
Dockerfile
FROM nginx:1.21.4
RUN rm /etc/nginx/conf.d/default.conf
COPY web-nginx.conf /etc/nginx/conf.d
COPY proxy_params /etc/nginx
RUN mkdir -p /srv/web/static
COPY index.html /srv/web
ADD static /srv/web/static
web-service.yml
apiVersion: v1
kind: Service
metadata:
  name: gen-web-service
spec:
  selector:
    app: genweb
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gen-web-deployment
  labels:
    app: genweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: genweb
  template:
    metadata:
      labels:
        app: genweb
    spec:
      containers:
        - name: genweb
          image: genweb:v1.0.16
          imagePullPolicy: Never
          ports:
            - containerPort: 80
          volumeMounts:
            - name: gen-web-volume
              mountPath: /app/static
      volumes:
        - name: gen-web-volume
          hostPath:
            path: /srv/web/static

Dockerfile doesn't seem to be exposing ports

I'm trying to run a simple node server on port 8080, but with the following config any attempt at hitting the subdomain results in a 502 Bad Gateway error. If I go to the node, I can see there don't appear to be any ports open on the container itself. So, assuming I've checked everything correctly, is there anything else I need to do in the config to open the port for the node server?
Edit: If I SSH into the pod and curl localhost on 8080, I'm able to hit the node server.
Dockerfile
FROM node:12.18.1
WORKDIR /app
COPY ["package.json", "package-lock.json*", "./"]
RUN npm install
COPY . .
RUN npm run build
EXPOSE 8080
CMD [ "node", "server.js" ]
k8s deployment
spec:
  containers:
    - name: test
      image: test_image
      ports:
        - name: http
          protocol: TCP
          containerPort: 8080
service yaml
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
    - name: https
      port: 443
      targetPort: 8080
      protocol: TCP
  selector:
    app: test-deployment
  type: NodePort
  externalTrafficPolicy: Cluster
Ingress
spec:
  rules:
    - host: dev.test.com
      http:
        paths:
          - backend:
              serviceName: test-service
              servicePort: 80
            path: /
This wound up being on the application side. The server needed to be bound to 0.0.0.0 instead of 127.0.0.1.
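Not from the original post, but to illustrate the accepted fix: EXPOSE in a Dockerfile is documentation only, and a server bound to 127.0.0.1 accepts connections solely from inside its own network namespace, so the Service's targetPort can never reach it. A hypothetical server.js (all names assumed):

// server.js -- minimal sketch of the fix
const http = require('http');

const server = http.createServer((req, res) => {
  res.end('ok');
});

// Bind to 0.0.0.0 so the pod's port 8080 is reachable from the
// Service; binding to 127.0.0.1 is loopback-only and produces the
// 502 from the ingress.
server.listen(8080, '0.0.0.0');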

How do I configure nginx to serve files with a prefix?

I am trying to host static files in kubernetes with an nginx container, and expose them on a private network with istio.
I want the root of my server to exist at site.com/foo, as I have other applications existing at site.com/bar, site.com/boo, etc.
My istio virtual service configuration:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: cluster-foo
  namespace: namespace
spec:
  gateways:
    - cluster-gateway
  hosts:
    - site.com
  http:
    - match:
        - name: http
          port: 80
          uri:
            prefix: /foo
      route:
        - destination:
            host: app.namespace.svc.cluster.local
            port:
              number: 80
All of my static files exist in the directory /data on my nginx container. My nginx config:
events {}

http {
    server {
        root /data;

        location /foo {
            autoindex on;
        }
    }
}
Applying this virtual service and a kube deployment that runs an nginx container with this config, I get an nginx server at site.com/foo that serves all of the static files in /data on the container. All good. The problem is that the autoindexing nginx does, does not respect the /foo prefix: all the file links that nginx indexes at site.com/foo look like site.com/page.html rather than site.com/foo/page.html. Furthermore, when I put site.com/foo/page.html in my browser manually, it is displayed correctly, so I know that nginx is serving files at the correct location; only the links it indexes are wrong.
Is there any way to configure nginx autoindex with a prefix?
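Not from the original post, but one commonly suggested arrangement for serving a directory under a URL prefix is alias with trailing slashes; autoindex emits relative links, which resolve under the prefix as long as the directory URL ends in a slash. A sketch:

location /foo/ {
    # /foo/page.html is served from /data/page.html
    alias /data/;
    autoindex on;
}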

Can't get nginx to serve PHP application after ingress nginx "502 bad gateway"

I have this old vanilla PHP application I'm playing with, trying to Dockerize it and then put it into a Kubernetes cluster.
I upgraded the app to php7.3-fpm and I'm trying to add nginx to the same image, something like how the php:7.3-apache image works, but using nginx and php-fpm.
I came across this answer which offers a solution for building the image. I've changed it to fit my needs, but I'm having issues getting it to actually serve the application:
- It just returns "502 Bad Gateway nginx/1.14.2" if I navigate to /admin/
- It just returns "Welcome to nginx!" if I navigate to /admin
Seems like ingress-nginx and nginx are at least communicating. Just the index.php isn't being served.
Not quite sure where I'm going wrong.
Here is my configuration:
# project structure
root/
  conf/
    app.conf
    default.conf
    entrypoint.sh
    file_size.ini
  src/
    index.php
    (all other .php files)
  Dockerfile.dev
  Dockerfile
# ingress-nginx.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.org/client-max-body-size: "500m"
    nginx.ingress.kubernetes.io/use-regex: "true"
  name: ingress-service-dev
  namespace: default
spec:
  rules:
    - http:
        paths:
          - path: /admin/?(.*)
            backend:
              serviceName: admin-cluster-ip-service-dev
              servicePort: 4000
# admin.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: admin-deployment-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      component: admin
  template:
    metadata:
      labels:
        component: admin
    spec:
      containers:
        - name: admin
          image: testappacr.azurecr.io/test-app-admin
          ports:
            - containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
  name: admin-cluster-ip-service-dev
spec:
  type: ClusterIP
  selector:
    component: admin
  ports:
    - port: 4000
      targetPort: 4000
# Dockerfile
FROM php:7.3-fpm
# PHP_CPPFLAGS are used by the docker-php-ext-* scripts
ENV PHP_CPPFLAGS="$PHP_CPPFLAGS -std=c++11"
RUN apt-get update \
&& apt-get install -y nginx \
&& apt-get install -y libpq-dev zlib1g-dev libzip-dev \
&& docker-php-ext-install pgsql zip mbstring opcache
RUN { \
        echo 'opcache.memory_consumption=128'; \
        echo 'opcache.interned_strings_buffer=8'; \
        echo 'opcache.max_accelerated_files=4000'; \
        echo 'opcache.revalidate_freq=2'; \
        echo 'opcache.fast_shutdown=1'; \
        echo 'opcache.enable_cli=1'; \
    } > /usr/local/etc/php/conf.d/php-opocache-cfg.ini
COPY . /usr/share/nginx/html
COPY ./conf/default.conf /etc/nginx/conf.d/default.conf
COPY ./conf/entrypoint.sh /etc/entrypoint.sh
# COPY --chown=www-data:www-data . /app/src
RUN mv "$PHP_INI_DIR/php.ini-production" "$PHP_INI_DIR/php.ini"
RUN mv "/usr/share/nginx/html/conf/file_size.ini" "$PHP_INI_DIR/conf.d/"
WORKDIR /usr/share/nginx/html/src
EXPOSE 4000
ENTRYPOINT ["sh", "/etc/entrypoint.sh"]
# default.conf
server {
    listen 4000;

    root /usr/share/nginx/html/src;
    include /etc/nginx/default.d/*.conf;
    index app.php index.php index.html index.htm;
    client_max_body_size 500m;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ [^/]\.php(/|$) {
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        # Mitigate https://httpoxy.org/ vulnerabilities
        fastcgi_param HTTP_PROXY "";
        fastcgi_pass 127.0.0.1:4000;
        fastcgi_index index.php;
        include fastcgi.conf;
    }
}
What have I screwed up here?
OK, I figured it out...
What I had in the default.conf was wrong:
fastcgi_pass 127.0.0.1:4000;
It should have stayed as it was in the answer I was copying:
fastcgi_pass 127.0.0.1:9000;
I naively didn't know that 9000 is the default port php-fpm listens on; with 4000, nginx was passing FastCGI requests back to its own listen port instead of to php-fpm, hence the 502.

NGINX + Kubernetes Ingress + Google Cloud Load Balancing

I'm currently trying to configure a Kubernetes environment.
Currently I have a Docker image which serves my compiled Angular application via Nginx.
The default.conf of the Nginx is:
server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }
}
Ingress configuration (mapped to a Google Cloud Load Balancer, since I'm using a Kubernetes cluster):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fullstack-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /app
            backend:
              serviceName: frontend-main
              servicePort: 80
          - path: /app/*
            backend:
              serviceName: frontend-main
              servicePort: 80
          - path: /core
            backend:
              serviceName: backend-core
              servicePort: 80
Whenever I request my front-end using http://[loadbalancerip]/app/, all requests sent to my Angular container are GET /app/inline.js when they should be GET /inline.js in order to work.
I tried the file directly in my cluster with curl and it is served correctly. I only need to get rid of the "/app" prefix. How can I do that? Is that in the Nginx configuration or in the Ingress file?
I would rather fix it in the Ingress file, since I will probably have the same issue when I deploy my back-ends, which are .NET Core and Kestrel.
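Not from the original post, but worth noting: the ingress.kubernetes.io/rewrite-target annotation is implemented by the NGINX Ingress Controller, not by GCE's load balancer, so with that controller installed a capture-group rewrite is one way to strip the prefix; a hedged sketch:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fullstack-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
    # rewrites /app/inline.js -> /inline.js before it reaches the pod
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - http:
        paths:
          - path: /app/?(.*)
            backend:
              serviceName: frontend-main
              servicePort: 80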
