KIE server and workbench on Kubernetes - docker

I followed the official instructions and had no problem running the KIE server and workbench on Docker. However, when I try it with Kubernetes I run into a problem: there is no Execution server in the list (Business Central -> Deploy -> Execution Servers). Both of them are up and running, I can access Business Central, and http://localhost:31002/kie-server/services/rest/server/ responds correctly:
<response type="SUCCESS" msg="Kie Server info">
  <kie-server-info>
    <capabilities>KieServer</capabilities>
    <capabilities>BRM</capabilities>
    <capabilities>BPM</capabilities>
    <capabilities>CaseMgmt</capabilities>
    <capabilities>BPM-UI</capabilities>
    <capabilities>BRP</capabilities>
    <capabilities>DMN</capabilities>
    <capabilities>Swagger</capabilities>
    <location>http://localhost:8080/kie-server/services/rest/server</location>
    <messages>
      <content>Server KieServerInfo{serverId='kie-server-kie-server-7fcc96f568-2gf29', version='7.45.0.Final', name='kie-server-kie-server-7fcc96f568-2gf29', location='http://localhost:8080/kie-server/services/rest/server', capabilities=[KieServer, BRM, BPM, CaseMgmt, BPM-UI, BRP, DMN, Swagger]', messages=null', mode=DEVELOPMENT}started successfully at Tue Oct 27 10:36:09 UTC 2020</content>
      <severity>INFO</severity>
      <timestamp>2020-10-27T10:36:09.433Z</timestamp>
    </messages>
    <mode>DEVELOPMENT</mode>
    <name>kie-server-kie-server-7fcc96f568-2gf29</name>
    <id>kie-server-kie-server-7fcc96f568-2gf29</id>
    <version>7.45.0.Final</version>
  </kie-server-info>
</response>
Here is the YAML file I am using to create the deployments and services:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kie-wb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kie-wb
  template:
    metadata:
      labels:
        app: kie-wb
    spec:
      containers:
        - name: kie-wb
          image: jboss/drools-workbench-showcase:latest
          ports:
            - containerPort: 8080
            - containerPort: 8001
          securityContext:
            privileged: true
---
kind: Service
apiVersion: v1
metadata:
  name: kie-wb
spec:
  selector:
    app: kie-wb
  ports:
    - name: "8080"
      port: 8080
      targetPort: 8080
    - name: "8001"
      port: 8001
      targetPort: 8001
  # type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: kie-wb-np
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 31001
  selector:
    app: kie-wb
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kie-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kie
  template:
    metadata:
      labels:
        app: kie
    spec:
      containers:
        - name: kie
          image: jboss/kie-server-showcase:latest
          ports:
            - containerPort: 8080
          securityContext:
            privileged: true
---
kind: Service
apiVersion: v1
metadata:
  name: kie-server
spec:
  selector:
    app: kie
  ports:
    - name: "8080"
      port: 8080
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: kie-server-np
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 31002
  selector:
    app: kie
  # type: LoadBalancer
When deploying with Docker I use --link drools-wb:kie-wb:
docker run -p 8180:8080 -d --name kie-server --link drools-wb:kie-wb jboss/kie-server-showcase:latest
In Kubernetes I created a service called kie-wb, but that doesn't help.
What am I missing here?

I was working on a similar setup and used your YAML file as a start (thanks for that)!
I had to add the following snippet to the kie-server-showcase container:
env:
  - name: KIE_WB_ENV_KIE_CONTEXT_PATH
    value: "business-central"
It does work now, at least as far as I can tell.
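For context, a minimal sketch of where that snippet sits in the kie-server Deployment from the question above; everything except the env block is taken unchanged from the original YAML, and the variable name and value come from this answer:
      containers:
        - name: kie
          image: jboss/kie-server-showcase:latest
          ports:
            - containerPort: 8080
          env:
            # value taken from the answer above
            - name: KIE_WB_ENV_KIE_CONTEXT_PATH
              value: "business-central"
          securityContext:
            privileged: true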

Related

issues setting up traefik with kubernetes using a simple container

Not sure what I am missing; I am trying to set up a simple Traefik environment with Kubernetes, proxying the errm/cheese:cheddar Docker container to cheddar.minikube.
Prerequisites:
have minikube set up
git clone # personal repo that is now deleted. see solution below
# setup.sh will delete current minikube environment then recreate it
./setup.sh
# add IP to minikube
echo `minikube ip` cheddar.minikube | sudo tee -a /etc/hosts
after running
minikube delete
minikube start
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.9/docs/content/reference/dynamic-configuration/kubernetes-crd-definition-v1.yml
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.9/docs/content/reference/dynamic-configuration/kubernetes-crd-rbac.yml
kubectl apply -f traefik-deployment.yaml -f traefik-whoami.yaml
with...
traefik-deployment.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: traefik-ingress-controller
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: traefik
  labels:
    app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      hostNetwork: true
      serviceAccountName: traefik-ingress-controller
      containers:
        - name: traefik
          image: traefik:v2.9
          args:
            - --api.insecure
            - --accesslog
            - --entrypoints.web.Address=:80
            - --entrypoints.websecure.Address=:443
            - --providers.kubernetescrd
          ports:
            - name: web
              containerPort: 8000
              # hostPort: 80
            - name: websecure
              containerPort: 4443
              # hostPort: 443
            - name: admin
              containerPort: 8080
              # hostPort: 8080
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
---
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  ports:
    - protocol: TCP
      name: web
      port: 80
    - protocol: TCP
      name: admin
      port: 8080
    - protocol: TCP
      name: websecure
      port: 443
  selector:
    app: traefik
traefik-whoami.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: whoami
  labels:
    app: whoami
spec:
  replicas: 2
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami
          ports:
            - name: web
              containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  ports:
    - protocol: TCP
      name: web
      port: 80
  selector:
    app: whoami
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: simpleingressroute
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: PathPrefix(`/notls`)
      kind: Rule
      services:
        - name: whoami
          port: 80
I was able to get a simple container working with Traefik in Kubernetes at the address printed by:
echo `minikube ip`/notls
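As a quick sanity check, assuming the IngressRoute above has been applied, a request to the /notls path on the minikube IP should be answered by one of the whoami pods:
# should print the whoami container's request dump, routed through Traefik
curl http://$(minikube ip)/notls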

Connecting to NodePort Minikube

I created a Service and a Deployment, but I am unable to access the service with minikube service --url accounts-service or minikube service accounts-service.
The second one opens the browser but never connects; the first one just hangs in my terminal.
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: accounts-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: accounts-service
  template:
    metadata:
      labels:
        app: accounts-service
    spec:
      containers:
        - name: accounts-service
          image: xxxx:latest
          ports:
            - containerPort: 3001
Service
apiVersion: v1
kind: Service
metadata:
  name: accounts-service
spec:
  selector:
    app: accounts-service
  ports:
    - port: 80
      targetPort: 3001
  type: NodePort
I don't know why my Minikube is not connecting to the port. I am using minikube with Docker.
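A couple of hedged things worth checking with this setup: which nodePort Kubernetes actually assigned (none is pinned in the Service above), and, if you are on the Docker driver, that the tunnel opened by minikube service is kept running:
# shows the randomly assigned NodePort in the PORT(S) column, e.g. 80:3xxxx/TCP
kubectl get svc accounts-service
# with the Docker driver this opens a tunnel and prints a URL; the command has to stay running
minikube service accounts-service --url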

Can't connect to minikube external service or ingress

I have a clean Ubuntu 18.04 server on which I installed minikube, kubectl and Docker.
I have several resources for it.
One Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express-deployment
  labels:
    app: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      containers:
        - name: mongo-express
          image: mongo-express
          ports:
            - containerPort: 8081
          env:
            - name: ME_CONFIG_MONGODB_ADMINUSERNAME
              valueFrom:
                secretKeyRef:
                  name: mongo-db-secret
                  key: mongo-db-root-username
            - name: ME_CONFIG_MONGODB_ADMINPASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-db-secret
                  key: mongo-db-root-password
            - name: ME_CONFIG_MONGODB_SERVER
              valueFrom:
                configMapKeyRef:
                  name: mongo-db-configmap
                  key: mongo-db-url
One internal Service, because I tried to connect through an Ingress:
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
One Ingress for it:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
spec:
  rules:
    - host: my-host.com
      http:
        paths:
          - path: "/"
            pathType: "Prefix"
            backend:
              service:
                name: mongo-express-service
                port:
                  number: 8081
And one external Service, because I also tried to connect through it:
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-external-service
spec:
  selector:
    app: mongo-express
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
      nodePort: 30000
But none of these options works for me. I tried updating the hosts file by adding
192.168.47.2 my-host.com
but that didn't help either.
When I run curl my-host.com in the server terminal I receive the correct response, but I can't reach it from my browser.
My domain points to my server, and when I use nginx alone everything works fine.
Do I need to add something else or update my config?
I hope you can help me.
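Not a definitive fix, but a few checks that usually narrow this kind of problem down (they assume the resource names above and minikube's bundled NGINX ingress addon):
# an Ingress only does something if an ingress controller is running in the cluster
minikube addons enable ingress
# the Ingress should show my-host.com and eventually an ADDRESS
kubectl get ingress dashboard-ingress
# the IP that my-host.com is expected to resolve to; note it is normally only reachable from the server itself
minikube ip
# the LoadBalancer Service also exposes NodePort 30000 directly on the node
curl http://$(minikube ip):30000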

Istio gateway redirects to HTML nginx image doesn't work

I have an Istio 1.3.3 gateway and a helloworld gateway in front of my application service.
Istio Gateway
apiVersion: v1
kind: Service
metadata:
  labels:
    app: istio-ingressgateway
    chart: gateways-1.0.0
    heritage: Tiller
    istio: ingressgateway
    release: RELEASE-NAME
  name: istio-ingressgateway
  namespace: istio-system
spec:
  externalTrafficPolicy: Cluster
  ports:
    - name: http2
      nodePort: 31380
      port: 80
      protocol: TCP
      targetPort: 80
    - name: https
      nodePort: 31390
      port: 443
      protocol: TCP
      targetPort: 443
    - name: tcp
      nodePort: 31400
      port: 31400
      protocol: TCP
      targetPort: 31400
    - name: tcp-pilot-grpc-tls
      nodePort: 32565
      port: 15011
      protocol: TCP
      targetPort: 15011
    - name: tcp-citadel-grpc-tls
      nodePort: 32352
      port: 8060
      protocol: TCP
      targetPort: 8060
    - name: http2-helloworld
      nodePort: 31750
      port: 15033
      protocol: TCP
      targetPort: 15033
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  type: LoadBalancer
HelloWorld Gateway
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: helloworld-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
    - port:
        number: 15033
        name: http2-helloworld
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
    - "*"
  gateways:
    - helloworld-gateway
  http:
    - match:
        - port: 15033
      route:
        - destination:
            host: helloworld
            port:
              number: 5000
HelloWorld.yaml
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  ports:
    - port: 5000
      name: http
  selector:
    app: helloworld
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-v1
  labels:
    version: v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: helloworld
      version: v1
  template:
    metadata:
      labels:
        app: helloworld
        version: v1
    spec:
      containers:
        - name: helloworld
          image: karthequian/helloworld:latest
          resources:
            requests:
              cpu: "100m"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
When I try to access the application through the Istio gateway at localhost:15033, other ports and Docker images work fine, but this image, which uses nginx, does not.
When I access localhost:15033 I get this error:
upstream connect error or disconnect/reset before headers. reset reason: connection termination
Information
Kubernetes was installed and started from the Docker Desktop application on Mac. The context was docker-desktop.
kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:49Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:16Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
kubectl cluster-info
Kubernetes master is running at https://kubernetes.docker.internal:6443
KubeDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubectl cluster-info dump > clusterInfoDump.txt
https://justpaste.it/5n1op
istioctl version
client version: 1.3.3
control plane version: 1.3.3
In your HelloWorld.yaml you are missing targetPort, and this is why nginx is unreachable.
This is how it should look:
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  ports:
    - port: 5000
      name: http
      targetPort: 80
  selector:
    app: helloworld
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-v1
  labels:
    version: v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: helloworld
      version: v1
  template:
    metadata:
      labels:
        app: helloworld
        version: v1
    spec:
      containers:
        - name: helloworld
          image: karthequian/helloworld:latest
          resources:
            requests:
              cpu: "100m"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
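After applying the corrected Service, a quick end-to-end check through the gateway port from the question might look like this (assuming the manifest above is saved as HelloWorld.yaml):
kubectl apply -f HelloWorld.yaml
# nginx in karthequian/helloworld listens on 80, which targetPort now points at
curl http://localhost:15033/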

Kubernetes (Minikube): environment variable

I'm running a simple Spring microservice project with Minikube. I have two projects: lucky-word-client (on port 8080) and lucky-word-server (on port 8888). lucky-word-client has to communicate with lucky-word-server. I want to inject the static NodePort of lucky-word-server (http://192.*..100:32002) as an environment variable in my Kubernetes deployment script of lucky-word-client. How can I do this?
This is the deployment of lucky-word-server:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lucky-server
spec:
  selector:
    matchLabels:
      app: lucky-server
  replicas: 1
  template:
    metadata:
      labels:
        app: lucky-server
    spec:
      containers:
        - name: lucky-server
          image: lucky-server-img
          imagePullPolicy: Never
          ports:
            - containerPort: 8888
This is the service of lucky-word-server:
kind: Service
apiVersion: v1
metadata:
  name: lucky-server
spec:
  selector:
    app: lucky-server
  ports:
    - protocol: TCP
      targetPort: 8888
      port: 80
      nodePort: 32002
  type: NodePort
This is the deployment of lucky-word-client:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lucky-client
spec:
  selector:
    matchLabels:
      app: lucky-client
  replicas: 1
  template:
    metadata:
      labels:
        app: lucky-client
    spec:
      containers:
        - name: lucky-client
          image: lucky-client-img
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
This is the service of lucky-word-client:
kind: Service
apiVersion: v1
metadata:
  name: lucky-client
spec:
  selector:
    app: lucky-client
  ports:
    - protocol: TCP
      targetPort: 8080
      port: 80
  type: NodePort
Kubernetes automatically injects services as environment variables: https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
But you should not use this. It won't work unless all the services are in place when you create the pod. It was inspired by Docker, which has also moved on to DNS-based service discovery, so environment-based service discovery is a thing of the past.
Please rely on DNS service discovery instead. Minikube ships with kube-dns, so you can just use the lucky-server hostname (or one of the lucky-server[.default[.svc[.cluster[.local]]]] names). Read the documentation: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
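As a sketch of what that looks like in practice, the client Deployment can simply be handed the DNS name of the lucky-server Service; the variable name LUCKY_WORD_SERVER_URI is only an illustration here, your Spring client would read whatever property it actually expects:
      containers:
        - name: lucky-client
          image: lucky-client-img
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
          env:
            # resolved by kube-dns; 80 is the Service port of lucky-server, not the NodePort
            - name: LUCKY_WORD_SERVER_URI   # hypothetical name, adjust to your client config
              value: "http://lucky-server:80"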
