Not sure what I am missing. I am trying to set up a simple Traefik environment on Kubernetes that proxies the errm/cheese:cheddar Docker container to cheddar.minikube.
Prerequisite:
have minikube set up
git clone # personal repo that is now deleted. see solution below
# setup.sh will delete current minikube environment then recreate it
./setup.sh
# map cheddar.minikube to the minikube IP in /etc/hosts
echo `minikube ip` cheddar.minikube | sudo tee -a /etc/hosts
After running:
minikube delete
minikube start
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.9/docs/content/reference/dynamic-configuration/kubernetes-crd-definition-v1.yml
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.9/docs/content/reference/dynamic-configuration/kubernetes-crd-rbac.yml
kubectl apply -f traefik-deployment.yaml -f traefik-whoami.yaml
with the following files:
traefik-deployment.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: default
name: traefik-ingress-controller
---
kind: Deployment
apiVersion: apps/v1
metadata:
namespace: default
name: traefik
labels:
app: traefik
spec:
replicas: 1
selector:
matchLabels:
app: traefik
template:
metadata:
labels:
app: traefik
spec:
hostNetwork: true
serviceAccountName: traefik-ingress-controller
containers:
- name: traefik
image: traefik:v2.9
args:
- --api.insecure
- --accesslog
- --entrypoints.web.Address=:80
- --entrypoints.websecure.Address=:443
- --providers.kubernetescrd
ports:
- name: web
containerPort: 8000
# hostPort: 80
- name: websecure
containerPort: 4443
# hostPort: 443
- name: admin
containerPort: 8080
# hostPort: 8080
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
---
apiVersion: v1
kind: Service
metadata:
name: traefik
spec:
ports:
- protocol: TCP
name: web
port: 80
- protocol: TCP
name: admin
port: 8080
- protocol: TCP
name: websecure
port: 443
selector:
app: traefik
traefik-whoami.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
namespace: default
name: whoami
labels:
app: whoami
spec:
replicas: 2
selector:
matchLabels:
app: whoami
template:
metadata:
labels:
app: whoami
spec:
containers:
- name: whoami
image: traefik/whoami
ports:
- name: web
containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: whoami
spec:
ports:
- protocol: TCP
name: web
port: 80
selector:
app: whoami
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: simpleingressroute
namespace: default
spec:
entryPoints:
- web
routes:
- match: PathPrefix(`/notls`)
kind: Rule
services:
- name: whoami
port: 80
I was able to get a simple container working with Traefik in Kubernetes at:
echo `minikube ip`/notls
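For the original goal of proxying errm/cheese:cheddar to cheddar.minikube, a minimal sketch of what the missing pieces could look like (placeholder names, not the deleted repo's actual files; it assumes the cheese image serves HTTP on port 80) is a cheddar Deployment/Service plus a Host-matched IngressRoute:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cheddar
  labels:
    app: cheddar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cheddar
  template:
    metadata:
      labels:
        app: cheddar
    spec:
      containers:
        - name: cheddar
          image: errm/cheese:cheddar
          ports:
            - name: web
              containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: cheddar
spec:
  ports:
    - protocol: TCP
      name: web
      port: 80
  selector:
    app: cheddar
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: cheddar-ingressroute
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`cheddar.minikube`)
      kind: Rule
      services:
        - name: cheddar
          port: 80
With the /etc/hosts entry from the prerequisites, http://cheddar.minikube should then be routed by Traefik to the cheese container.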
Related
I do not understand how to configure ports correctly for a k8s deployment.
Assume there is a Next.js application which listens on port 3003 (the default is 3000). I build the Docker image:
# alpine variant so that apk (used below) is available
FROM node:16.14.0-alpine
RUN apk add dumb-init
# ...
EXPOSE 3003
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD npx next start -p 3003
So in this Dockerfile there are two places defining the port value 3003. Is this needed?
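As a quick local sanity check before involving Kubernetes (the example:dev tag below is just a placeholder), note that EXPOSE is documentation only; what actually matters is the port the process binds, i.e. the -p 3003 in CMD, together with whatever port mapping you publish:
docker build -t example:dev .
docker run --rm -p 3003:3003 example:dev
# from another terminal
curl -i http://localhost:3003/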
Then I define this k8s manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
name: example
spec:
spec:
containers:
- name: example
image: "hub.domain.com/example:1.0.0"
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3003
---
apiVersion: v1
kind: Service
metadata:
name: example
spec:
ports:
- protocol: TCP
port: 80
targetPort: 3003
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example
namespace: default
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
tls:
- hosts:
- domain.com
secretName: tls-key
rules:
- host: domain.com
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: example
port:
number: 80
The deployment is not working correctly. Calling domain.com shows me a 503 Service Temporarily Unavailable error.
If I do a port forward on the pod, I can see the working app at localhost:3003. I cannot create a port forward on the service.
So obviously I'm doing something wrong with the ports. Can someone explain which value has to be set and why?
You are missing the labels and selector in the Deployment and the selector in the Service. Try this:
apiVersion: apps/v1
kind: Deployment
metadata:
name: example
labels:
app: example
spec:
selector:
matchLabels:
app: example
template:
metadata:
labels:
app: example
spec:
containers:
- name: example
image: "hub.domain.com/example:1.0.0"
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3003
---
apiVersion: v1
kind: Service
metadata:
name: example
spec:
selector:
app: example
ports:
- protocol: TCP
port: 80
targetPort: 3003
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example
namespace: default
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
tls:
- hosts:
- domain.com
secretName: tls-key
rules:
- host: domain.com
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: example
port:
number: 80
Deployment: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
Service: https://kubernetes.io/docs/concepts/services-networking/service/
Labels and selectors: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
You can name your label keys and values anything you like; you could even use a label like whatever: something instead of app: example. That said, these are some recommended labels: https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/
https://kubernetes.io/docs/reference/labels-annotations-taints/
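A quick way to verify the label/selector fix (assuming the default namespace) is to check that the Service now has endpoints, and to port-forward through the Service rather than the Pod:
kubectl get endpoints example          # should now list pod IPs on port 3003
kubectl port-forward svc/example 8080:80
curl -i http://localhost:8080/
If the endpoints list is empty, the Service selector still does not match the pod labels.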
I have a clean Ubuntu 18.04 server where I installed minikube, kubectl and Docker.
And I have several resources for it.
One deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo-express-deployment
labels:
app: mongo-express
spec:
replicas: 1
selector:
matchLabels:
app: mongo-express
template:
metadata:
labels:
app: mongo-express
spec:
containers:
- name: mongo-express
image: mongo-express
ports:
- containerPort: 8081
env:
- name: ME_CONFIG_MONGODB_ADMINUSERNAME
valueFrom:
secretKeyRef:
name: mongo-db-secret
key: mongo-db-root-username
- name: ME_CONFIG_MONGODB_ADMINPASSWORD
valueFrom:
secretKeyRef:
name: mongo-db-secret
key: mongo-db-root-password
- name: ME_CONFIG_MONGODB_SERVER
valueFrom:
configMapKeyRef:
name: mongo-db-configmap
key: mongo-db-url
One internal Service, because I tried to connect through an Ingress:
apiVersion: v1
kind: Service
metadata:
name: mongo-express-service
spec:
selector:
app: mongo-express
ports:
- protocol: TCP
port: 8081
targetPort: 8081
One Ingress for it:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: dashboard-ingress
spec:
rules:
- host: my-host.com
http:
paths:
- path: "/"
pathType: "Prefix"
backend:
service:
name: mongo-express-service
port:
number: 8081
And one external Service, because I also tried to connect through it directly:
apiVersion: v1
kind: Service
metadata:
name: mongo-express-external-service
spec:
selector:
app: mongo-express
type: LoadBalancer
ports:
- protocol: TCP
port: 8081
targetPort: 8081
nodePort: 30000
But none of these options works for me. I tried updating the hosts file by adding
192.168.47.2 my-host.com
but that didn't help either.
When I run curl my-host.com in the server terminal I receive the correct response, but I can't reach it from my browser.
My domain points to my server, and when I use plain nginx everything works fine.
Maybe I need to add something else or update my config?
I hope you can help me.
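A few checks that usually narrow this down on minikube (assuming the default namespace): the ingress addon has to be enabled for the Ingress to be served at all, and the minikube IP in /etc/hosts is normally only reachable from the server itself, so a browser on another machine needs a tunnel or a port-forward:
minikube addons enable ingress
kubectl get ingress dashboard-ingress
minikube service mongo-express-external-service --url
kubectl port-forward service/mongo-express-service 8081:8081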
I followed the official instructions and had no problem running the KIE server and workbench on Docker. However, when I try it with Kubernetes I run into a problem: there is no Execution Server in the list (Business Central -> Deploy -> Execution Servers). Both of them are up and running, I can access Business Central, and http://localhost:31002/kie-server/services/rest/server/ responds correctly:
<response type="SUCCESS" msg="Kie Server info">
<kie-server-info>
<capabilities>KieServer</capabilities>
<capabilities>BRM</capabilities>
<capabilities>BPM</capabilities>
<capabilities>CaseMgmt</capabilities>
<capabilities>BPM-UI</capabilities>
<capabilities>BRP</capabilities>
<capabilities>DMN</capabilities>
<capabilities>Swagger</capabilities>
<location>http://localhost:8080/kie-server/services/rest/server</location>
<messages>
<content>Server KieServerInfo{serverId='kie-server-kie-server-7fcc96f568-2gf29', version='7.45.0.Final', name='kie-server-kie-server-7fcc96f568-2gf29', location='http://localhost:8080/kie-server/services/rest/server', capabilities=[KieServer, BRM, BPM, CaseMgmt, BPM-UI, BRP, DMN, Swagger]', messages=null', mode=DEVELOPMENT}started successfully at Tue Oct 27 10:36:09 UTC 2020</content>
<severity>INFO</severity>
<timestamp>2020-10-27T10:36:09.433Z</timestamp>
</messages>
<mode>DEVELOPMENT</mode>
<name>kie-server-kie-server-7fcc96f568-2gf29</name>
<id>kie-server-kie-server-7fcc96f568-2gf29</id>
<version>7.45.0.Final</version>
</kie-server-info>
</response>
Here is the YAML file I am using to create the deployments and services:
apiVersion: apps/v1
kind: Deployment
metadata:
name: kie-wb
spec:
replicas: 1
selector:
matchLabels:
app: kie-wb
template:
metadata:
labels:
app: kie-wb
spec:
containers:
- name: kie-wb
image: jboss/drools-workbench-showcase:latest
ports:
- containerPort: 8080
- containerPort: 8001
securityContext:
privileged: true
---
kind: Service
apiVersion: v1
metadata:
name: kie-wb
spec:
selector:
app: kie-wb
ports:
- name: "8080"
port: 8080
targetPort: 8080
- name: "8001"
port: 8001
targetPort: 8001
# type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
name: kie-wb-np
spec:
type: NodePort
ports:
- port: 8080
targetPort: 8080
nodePort: 31001
selector:
app: kie-wb
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: kie-server
spec:
replicas: 1
selector:
matchLabels:
app: kie
template:
metadata:
labels:
app: kie
spec:
containers:
- name: kie
image: jboss/kie-server-showcase:latest
ports:
- containerPort: 8080
securityContext:
privileged: true
---
kind: Service
apiVersion: v1
metadata:
name: kie-server
spec:
selector:
app: kie
ports:
- name: "8080"
port: 8080
targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: kie-server-np
spec:
type: NodePort
ports:
- port: 8080
targetPort: 8080
nodePort: 31002
selector:
app: kie
# type: LoadBalancer
When deploying to Docker I am using --link drools-wb:kie-wb
docker run -p 8180:8080 -d --name kie-server --link drools-wb:kie-wb jboss/kie-server-showcase:latest
In Kubernetes I created a Service called kie-wb, but that doesn't help.
What am I missing here?
I was working on a similar setup and used your YAML file as a start (thanks for that)!
I had to add the following snippet to the kie-server-showcase container:
env:
- name: KIE_WB_ENV_KIE_CONTEXT_PATH
value: "business-central"
It does work now, at least as far as I can tell.
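For context, this is roughly where that snippet lands in the kie-server Deployment from the question (only the container section is shown):
spec:
  containers:
    - name: kie
      image: jboss/kie-server-showcase:latest
      ports:
        - containerPort: 8080
      env:
        - name: KIE_WB_ENV_KIE_CONTEXT_PATH
          value: "business-central"
      securityContext:
        privileged: true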
The Kubernetes YAML below creates the resources as expected and there are no errors/restarts. External requests to the Kafka broker at localhost:30005 hang indefinitely. My understanding is that the Docker notion of a "network" is achieved in Kubernetes via Services. In this case, I have a NodePort Service which brings traffic into the cluster and then a ClusterIP Service which handles communication between containers. Based on the non-Kubernetes docker run version of this Zookeeper/Kafka configuration, I suspect my issue lies somewhere in my Services and/or advertised listeners. The internet is replete with Docker Compose files and Helm charts, but I haven't been able to find a combination of configurations that works.
Kafka/Zookeeper YAML:
# Zookeeper
apiVersion: apps/v1
kind: Deployment
metadata:
name: zookeeper-deploy
spec:
replicas: 1
selector:
matchLabels:
app: zookeeper-app
template:
metadata:
labels:
app: zookeeper-app
spec:
containers:
- name: zookeeper-app
image: confluentinc/cp-zookeeper:5.4.1
ports:
- containerPort: 2181
env:
- name: ZOOKEEPER_TICK_TIME
value: "2000"
- name: ZOOKEEPER_CLIENT_PORT
value: "2181"
---
apiVersion: v1
kind: Service
metadata:
name: zk-cluster-svc
labels:
app: zk
spec:
type: ClusterIP
selector:
app: zookeeper-app
ports:
- port: 2181
targetPort: 2181
protocol: TCP
---
# Kafka
apiVersion: apps/v1
kind: Deployment
metadata:
name: kafka-deploy
spec:
replicas: 1
selector:
matchLabels:
app: kafka-app
template:
metadata:
labels:
app: kafka-app
spec:
containers:
- name: kafka-app
image: confluentinc/cp-kafka:5.4.1
ports:
- containerPort: 9092
- containerPort: 29092
env:
- name: KAFKA_BROKER_ID
value: "1"
- name: KAFKA_ZOOKEEPER_CONNECT
value: "zk-cluster-svc:2181"
- name: KAFKA_ADVERTISED_LISTENERS
value: "INSIDE://kafka-cluster-svc:29092,OUTSIDE://localhost:30005"
- name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
value: "INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT"
- name: KAFKA_INTER_BROKER_LISTENER_NAME
value: "INSIDE"
- name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
value: "1"
---
apiVersion: v1
kind: Service
metadata:
name: kafka-service
labels:
app: kafka-service
spec:
type: NodePort
ports:
- port: 9092
nodePort: 30005
protocol: TCP
selector:
app: kafka-app
---
apiVersion: v1
kind: Service
metadata:
name: kafka-cluster-svc
labels:
app: kafka-app
spec:
selector:
app: kafka-app
ports:
- port: 29092
targetPort: 29092
protocol: TCP
^ This is based on a working docker run version of these containers:
docker network create kafzoonet
docker run --name app-zookeeper -d --network="kafzoonet" --env ZOOKEEPER_TICK_TIME=2000 --env ZOOKEEPER_CLIENT_PORT=2181 confluentinc/cp-zookeeper:5.4.1
docker run --name app-kafka -p 30005:9092 -d --network="kafzoonet" --env KAFKA_BROKER_ID=1 --env KAFKA_ZOOKEEPER_CONNECT="app-zookeeper:2181" --env KAFKA_ADVERTISED_LISTENERS="PLAINTEXT://app-kafka:29092,PLAINTEXT_HOST://localhost:30005" --env KAFKA_LISTENER_SECURITY_PROTOCOL_MAP="PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT" --env KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT --env KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 confluentinc/cp-kafka:5.4.1
Are the Services configured correctly for a single Kafka node? It appears that Kafka is able to reach Zookeeper, but for some reason when a client connects it isn't getting the correct listener echoed back. Unfortunately I don't have any error messages to support this, as the client hangs and neither the Kafka nor the Zookeeper pod logs anything.
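One combination that is commonly used for a single broker on minikube, offered only as a sketch (not verified against this exact setup): give the OUTSIDE listener its own container port, advertise the node IP instead of localhost, and point the NodePort Service at that port explicitly. <node-ip> is a placeholder for the output of minikube ip.
# Broker env fragment, replacing the listener-related entries above
- name: KAFKA_LISTENERS
  value: "INSIDE://0.0.0.0:29092,OUTSIDE://0.0.0.0:9092"
- name: KAFKA_ADVERTISED_LISTENERS
  value: "INSIDE://kafka-cluster-svc:29092,OUTSIDE://<node-ip>:30005"
---
# NodePort Service with an explicit targetPort for the OUTSIDE listener
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
spec:
  type: NodePort
  selector:
    app: kafka-app
  ports:
    - port: 9092
      targetPort: 9092
      nodePort: 30005
      protocol: TCP
External clients would then bootstrap against <node-ip>:30005 rather than localhost:30005.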
1) I have SSL certs generated on AWS:
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:...fa5298fc
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
labels:
k8s-addon: ingress-nginx.addons.k8s.io
name: ingress-nginx-lb-svc
# namespace: ingress-nginx
spec:
externalTrafficPolicy: Cluster
ports:
- name: https
port: 443
protocol: TCP
targetPort: http
- name: http
port: 80
protocol: TCP
targetPort: http
selector:
app: nginx-ingress-control-pod
type: LoadBalancer
2) Then I have the nginx ingress controller pod:
apiVersion: v1
kind: ReplicationController
metadata:
name: nginx-ingress-control-pod
labels:
app: nginx-ingress-control-pod
spec:
replicas: 1
selector:
app: nginx-ingress-control-pod
template:
metadata:
labels:
app: nginx-ingress-control-pod
spec:
containers:
- image: nginxdemos/nginx-ingress:1.0.0
imagePullPolicy: Always
name: nginx-ingress-control-pod
ports:
- name: http
containerPort: 80
hostPort: 80
#- name: https
# containerPort: 443
# hostPort: 443
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# Uncomment the lines below to enable extensive logging and/or customization of
# NGINX configuration with configmaps
args:
#- -v=3
#- -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
#- -default-server-tls-secret=$(POD_NAMESPACE)/web-secret
3) Lastly, I am using Helm to deploy Grafana and Prometheus (this setup works when accessed via NodePort).
I just cannot figure out the setup with the ELB and the ingress.
By the way, the Ingress is part of the Grafana deployment and is created correctly:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
creationTimestamp: 2018-04-06T09:28:10Z
generation: 1
labels:
app: graf-helmf-default-ns-grafana
chart: grafana-0.8.5
component: grafana
heritage: Tiller
release: graf-helmf-default-ns
name: graf-helmf-default-ns-grafana
namespace: default
resourceVersion: "995865"
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/graf-helmf-default-ns-grafana
uid: d2991870-397c-11e8-9d...5a37f5a
spec:
rules:
- host: grafana.my.valid.domain.com
http:
paths:
- backend:
serviceName: graf-helmf-default-ns-grafana
servicePort: 80
status:
loadBalancer: {}
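A couple of checks that usually help with this kind of setup (names taken from the manifests above; <elb-dns-name> is a placeholder for the hostname the LoadBalancer Service receives): confirm the load-balancer Service actually has the controller pod as an endpoint, that the controller picked up the Ingress, and that a request with the right Host header reaches Grafana through the ELB:
kubectl get svc ingress-nginx-lb-svc -o wide
kubectl get endpoints ingress-nginx-lb-svc
kubectl describe ingress graf-helmf-default-ns-grafana
curl -v -H 'Host: grafana.my.valid.domain.com' http://<elb-dns-name>/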