Which ports need to be set for a k8s deployment?

I do not understand how to configure ports correctly for a k8s deployment.
Assume there is a Next.js application which listens on port 3003 (the default is 3000). I build the Docker image with this Dockerfile:
FROM node:16.14.0-alpine
RUN apk add --no-cache dumb-init
# ...
EXPOSE 3003
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD npx next start -p 3003
So this Dockerfile defines the port value 3003 in two places. Is that necessary?
Then I define this k8s manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  spec:
    containers:
      - name: example
        image: "hub.domain.com/example:1.0.0"
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 3003
---
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3003
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  tls:
    - hosts:
        - domain.com
      secretName: tls-key
  rules:
    - host: domain.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: example
                port:
                  number: 80
The deployment does not work correctly: calling domain.com gives me a 503 Service Temporarily Unavailable error.
If I port-forward to the pod, I can see the working app at localhost:3003, but I cannot create a port-forward on the service.
So I'm obviously doing something wrong with the ports. Can someone explain which values have to be set and why?

You are missing the labels on the deployment's pod template and the selector on the service, so the service never gets any endpoints and the ingress answers with 503. (As for the Dockerfile: EXPOSE is documentation only and does not publish anything, so the port your app actually listens on is determined by the -p 3003 flag alone. Having the value in both places is fine.) Try this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
  labels:
    app: example
spec:
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: example
          image: "hub.domain.com/example:1.0.0"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3003
---
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  selector:
    app: example
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3003
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  tls:
    - hosts:
        - domain.com
      secretName: tls-key
  rules:
    - host: domain.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: example
                port:
                  number: 80
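A quick way to verify the fix (assuming everything runs in the default namespace): once the service's selector matches the pod labels, the service gets endpoints, and port-forwarding through the service works as well.
# An empty endpoints list is exactly what produces the 503 from the ingress.
kubectl get endpoints example
# Forward local port 8080 to service port 80, which targets 3003 in the pod:
kubectl port-forward service/example 8080:80
curl http://localhost:8080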
Deployment: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
Service: https://kubernetes.io/docs/concepts/services-networking/service/
Labels and selectors: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
You can name your label keys and values anything you like; you could even use a label like whatever: something instead of app: example. But these are some recommended labels: https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/
https://kubernetes.io/docs/reference/labels-annotations-taints/
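For example, using the recommended labels from the links above, the deployment metadata could look like this instead of a bare app: example key (illustrative values only):
metadata:
  name: example
  labels:
    app.kubernetes.io/name: example
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: kubectl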

Related

issues setting up traefik with kubernetes using a simple container

Not sure what I am missing; I'm trying to set up a simple Traefik environment with Kubernetes, proxying the errm/cheese:cheddar Docker container to cheddar.minikube.
Prerequisite:
have minikube set up
git clone # personal repo that is now deleted. see solution below
# setup.sh will delete current minikube environment then recreate it
./setup.sh
# add IP to minikube
echo `minikube ip` cheddar.minikube | sudo tee -a /etc/hosts
after running
minikube delete
minikube start
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.9/docs/content/reference/dynamic-configuration/kubernetes-crd-definition-v1.yml
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.9/docs/content/reference/dynamic-configuration/kubernetes-crd-rbac.yml
kubectl apply -f traefik-deployment.yaml -f traefik-whoami.yaml
with...
traefik-deployment.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: traefik-ingress-controller
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: traefik
  labels:
    app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      hostNetwork: true
      serviceAccountName: traefik-ingress-controller
      containers:
        - name: traefik
          image: traefik:v2.9
          args:
            - --api.insecure
            - --accesslog
            - --entrypoints.web.Address=:80
            - --entrypoints.websecure.Address=:443
            - --providers.kubernetescrd
          ports:
            - name: web
              containerPort: 8000
              # hostPort: 80
            - name: websecure
              containerPort: 4443
              # hostPort: 443
            - name: admin
              containerPort: 8080
              # hostPort: 8080
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
---
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  ports:
    - protocol: TCP
      name: web
      port: 80
    - protocol: TCP
      name: admin
      port: 8080
    - protocol: TCP
      name: websecure
      port: 443
  selector:
    app: traefik
traefik-whoami.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: whoami
  labels:
    app: whoami
spec:
  replicas: 2
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami
          ports:
            - name: web
              containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  ports:
    - protocol: TCP
      name: web
      port: 80
  selector:
    app: whoami
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: simpleingressroute
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: PathPrefix(`/notls`)
      kind: Rule
      services:
        - name: whoami
          port: 80
I was able to get a simple container working with Traefik in Kubernetes at the address printed by:
echo `minikube ip`/notls
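For reference, a quick way to verify the route from the host (assuming the manifests above are applied and Traefik's web entrypoint is reachable on port 80 of the minikube IP):
# whoami echoes the request back, so any response with Hostname/IP
# fields means the route works end to end.
curl "http://$(minikube ip)/notls"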

Expose basepath in container through service in nginx ingress on kubernetes

My ASP.NET Core web application sets a base path in Startup like this:
app.UsePathBase("/app1");
So the application runs under that base path. For example, if the application is running on localhost:5000, then app1 is accessible at 'localhost:5000/app1'.
With nginx ingress (or any ingress) we can expose the entire container through a service to the outside of the Kubernetes cluster.
My Kubernetes deployment YAML file looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-deployment
spec:
  selector:
    matchLabels:
      app: app1
  replicas: 1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - name: app1-container
          image: app1:latest
          ports:
            - containerPort: 5001
---
apiVersion: v1
kind: Service
metadata:
  name: app1-service
  labels:
    app: app1
spec:
  type: NodePort
  ports:
    - name: app1-port
      port: 5001
      targetPort: 5001
      protocol: TCP
  selector:
    app: app1
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app1-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - http:
        paths:
          - path: /app1(/|$)(.*)
            backend:
              serviceName: server-service
              servicePort: 5001
The above ingress exposes the entire container at the 'localhost/app1' URL, so the application runs under the '/app1' virtual path, but app1 is then accessible at 'localhost/app1/app1'.
So I want to know if there is any way in the ingress to route a 'localhost/app1' request to the base path of the container application, 'localhost:5001/app1'.

If I understand correctly, the app is now accessible on /app1/app1 and you would like it to be accessible on /app1.
To do this, don't use rewrite:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app1-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          - path: /app1
            backend:
              serviceName: server-service
              servicePort: 5001
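Without the rewrite annotation, nginx passes the request path through unchanged, so /app1 at the ingress arrives as /app1 at the container and lines up with UsePathBase("/app1"). A hypothetical check, assuming the ingress controller is reachable on localhost:
# The path is forwarded untouched, matching the app's base path:
curl http://localhost/app1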

Google Kubernetes Engine Ingress UNHEALTHY backend service

Kind note: I have googled a lot and looked at too many related questions on StackOverflow as well, but couldn't solve my issue, so please don't mark this as a duplicate!
I'm trying to deploy two services (one Python Flask, the other Node.js) on Google Kubernetes Engine. I have created two Kubernetes deployments, one for each service, and two Kubernetes services of type NodePort, one for each service. Then I created an Ingress and listed my endpoints, but the Ingress says that one backend service is UNHEALTHY.
Here are my Deployments YAML definitions:
# Pyservice deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: pyservice
  labels:
    app: pyservice
  namespace: default
spec:
  selector:
    matchLabels:
      app: pyservice
  template:
    metadata:
      labels:
        app: pyservice
    spec:
      containers:
        - name: pyservice
          image: docker.io/arycloud/docker_web_app:pyservice
          ports:
            - containerPort: 5000
      imagePullSecrets:
        - name: docksecret
---
# Nodeservice deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodeservice
  labels:
    app: nodeservice
  namespace: default
spec:
  selector:
    matchLabels:
      app: nodeservice
  template:
    metadata:
      labels:
        app: nodeservice
        tier: web
    spec:
      containers:
        - name: nodeservice
          image: docker.io/arycloud/docker_web_app:nodeservice
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: docksecret
And, here are my services and Ingress YAML definitions:
# pyservcie service
kind: Service
apiVersion: v1
metadata:
  name: pyservice
spec:
  type: NodePort
  selector:
    app: pyservice
  ports:
    - protocol: TCP
      port: 5000
      nodePort: 30001
---
# nodeservcie service
kind: Service
apiVersion: v1
metadata:
  name: nodeservcie
spec:
  type: NodePort
  selector:
    app: nodeservcie
  ports:
    - protocol: TCP
      port: 8080
      nodePort: 30002
---
# Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "gce"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: pyservice
              servicePort: 5000
          - path: /*
            backend:
              serviceName: pyservice
              servicePort: 5000
          - path: /node/svc/
            backend:
              serviceName: nodeservcie
              servicePort: 8080
The pyservice works fine, but the nodeservice shows as an UNHEALTHY backend. (Screenshot not included here.)
I have even edited the firewall rules for all gke-... rules and allowed all ports just to get past this issue, but it still shows the UNHEALTHY status for nodeservice.
What's wrong here?
Thanks in advance!
Why are you using a GCE ingress class and then specifying an nginx rewrite annotation? In case you haven't realised, the annotation won't do anything to the GCE ingress.
You have also got 'nodeservcie' as your selector instead of 'nodeservice'.
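For illustration, the corrected second service might look like this (a sketch: both the name and the selector fixed to nodeservice; the ingress backend for /node/svc/ would then need serviceName: nodeservice as well):
# nodeservice service
kind: Service
apiVersion: v1
metadata:
  name: nodeservice
spec:
  type: NodePort
  selector:
    app: nodeservice # must match the pod label, not 'nodeservcie'
  ports:
    - protocol: TCP
      port: 8080
      nodePort: 30002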

Rewriting paths with Traefik

Can someone give a simple example of how to route the following URLs:
http://monitor.app.com/service-one
http://monitor.app.com/service-two
http://monitor.app.com/service-three
http://monitor.app.com/service-four
To the following backend services?
http://service-one/monitor
http://service-two/monitor
http://service-three/monitor
http://service-four/monitor
Preferably using the [file] syntax of Traefik, although any is fine.
Here is a configuration for your example. Adjust it according to your real cluster configuration:
apiVersion: v1
kind: Service
metadata:
  name: service-one
spec:
  selector:
    k8s-app: service-one-app
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-two
spec:
  selector:
    k8s-app: service-two-app
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-three
spec:
  selector:
    k8s-app: service-three-app
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: monitor.app
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/rewrite-target: /monitor # path of the resulting request
spec:
  rules:
    - host: monitor.app.com
      http:
        paths:
          - path: /service-one # routing path; it is stripped and replaced by the rewrite target
            backend:
              serviceName: service-one
              servicePort: 80
          - path: /service-two # routing path; it is stripped and replaced by the rewrite target
            backend:
              serviceName: service-two
              servicePort: 80
          - path: /service-three # routing path; it is stripped and replaced by the rewrite target
            backend:
              serviceName: service-three
              servicePort: 80
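To illustrate the intended effect, a hypothetical test (the <traefik-ingress-ip> placeholder stands for wherever the Traefik controller is reachable):
# A request for /service-one on the monitor host should be rewritten
# and reach the service-one backend as /monitor.
curl -H "Host: monitor.app.com" http://<traefik-ingress-ip>/service-one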
Additional information can be found here:
Kubernetes Ingress Controller
Kubernetes Ingress Provider
kubernetes ingress rewrite-target implementation #1723

how to use AWS ELB with nginx ingress on k8s

1) I have SSL certs generated on AWS and this LoadBalancer service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:...fa5298fc
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
  labels:
    k8s-addon: ingress-nginx.addons.k8s.io
  name: ingress-nginx-lb-svc
  # namespace: ingress-nginx
spec:
  externalTrafficPolicy: Cluster
  ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: http
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
  selector:
    app: nginx-ingress-control-pod
  type: LoadBalancer
2) Then I have the nginx ingress controller pod:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-control-pod
  labels:
    app: nginx-ingress-control-pod
spec:
  replicas: 1
  selector:
    app: nginx-ingress-control-pod
  template:
    metadata:
      labels:
        app: nginx-ingress-control-pod
    spec:
      containers:
        - image: nginxdemos/nginx-ingress:1.0.0
          imagePullPolicy: Always
          name: nginx-ingress-control-pod
          ports:
            - name: http
              containerPort: 80
              hostPort: 80
            #- name: https
            #  containerPort: 443
            #  hostPort: 443
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          # Uncomment the lines below to enable extensive logging and/or customization of
          # NGINX configuration with configmaps
          args:
            #- -v=3
            #- -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
            #- -default-server-tls-secret=$(POD_NAMESPACE)/web-secret
3) Lastly, I am using Helm to deploy Grafana and Prometheus (this setup works when accessing via NodePort). I just cannot figure out the setup with the ELB and ingress.
By the way, the ingress is part of the Grafana deployment and is created correctly:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  creationTimestamp: 2018-04-06T09:28:10Z
  generation: 1
  labels:
    app: graf-helmf-default-ns-grafana
    chart: grafana-0.8.5
    component: grafana
    heritage: Tiller
    release: graf-helmf-default-ns
  name: graf-helmf-default-ns-grafana
  namespace: default
  resourceVersion: "995865"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/graf-helmf-default-ns-grafana
  uid: d2991870-397c-11e8-9d...5a37f5a
spec:
  rules:
    - host: grafana.my.valid.domain.com
      http:
        paths:
          - backend:
              serviceName: graf-helmf-default-ns-grafana
              servicePort: 80
status:
  loadBalancer: {}
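For what it's worth, a sketch of how the remaining wiring usually goes (not a verified answer for this exact setup):
# The ELB's DNS name appears as the service's external address:
kubectl get svc ingress-nginx-lb-svc
# Point grafana.my.valid.domain.com at that ELB hostname with a DNS
# CNAME record; requests then hit the ELB, reach the nginx controller
# pod on port 80, and are routed to the Grafana service by the
# ingress rule above.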
