RabbitMQ, .NET Core and Kubernetes (configuration)

I'm trying to set up a few microservices in Kubernetes. Everything works as expected, except the connection from one microservice to RabbitMQ.
Problem flow:
.NET Core app --> rabbitmq-kubernetes-service.yml --> RabbitMQ
In the .NET Core app the rabbit connection factory config looks like this:
"RabbitMQ": {
"Host": "rabbitmq-service",
"Port": 7000,
"UserName": "guest",
"Password": "guest"
}
The Kubernetes rabbit Service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-service
spec:
  selector:
    app: rabbitmq
  ports:
    - port: 7000
      targetPort: 5672
As well as the rabbit Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
  labels:
    app: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
        - name: rabbitmq
          image: <private ACR with vanilla cfg - the image is: rabbitmq:3.7.9-management-alpine>
          imagePullPolicy: Always
          resources:
            limits:
              cpu: "1"
              memory: 512Mi
            requests:
              cpu: "0.5"
          ports:
            - containerPort: 5672
So this setup is currently not working in k8s, while locally it works like a charm with a basic docker-compose.
However, what I can do in k8s is go from a LoadBalancer --> to the running rabbit pod and access the management GUI with these config settings:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmqmanagement-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: rabbitmq
  ports:
    - port: 80
      targetPort: 15672
Where am I going wrong?

I'm assuming you are running the .NET Core app outside the Kubernetes cluster.
If this is indeed the case, then you need to use type: LoadBalancer.
LoadBalancer is used to expose a service to the internet.
ClusterIP exposes the service on a cluster-internal IP, so the Service is only accessible from within the cluster; this is also the default ServiceType.
NodePort exposes the service on each Node's IP at a static port.
For more details regarding Services, please check the Kubernetes docs.
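As a sketch, reusing the Service from the question and only changing its type (whether an external IP actually gets provisioned depends on where your cluster runs):
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-service
spec:
  type: LoadBalancer   # provisions an external IP on supported cloud providers
  selector:
    app: rabbitmq
  ports:
    - port: 7000       # port the .NET Core client connects to
      targetPort: 5672 # AMQP port inside the RabbitMQ container
The app would then point its Host setting at the external IP Kubernetes assigns (visible via kubectl get service rabbitmq-service) instead of the internal DNS name.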
You can check if the connection is working using a Python script:
#!/usr/bin/env python
import pika

# Connect to RabbitMQ (pika defaults to port 5672)
connection = pika.BlockingConnection(
    pika.ConnectionParameters(host='RABBITMQ_SERVER_IP'))
channel = connection.channel()

# Declare a queue and publish a test message
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='', routing_key='hello', body='Hello World!')
print(" [x] Sent 'Hello World!'")
connection.close()
This script will try to connect to RABBITMQ_SERVER_IP on port 5672.
The script requires the pika library, which can be installed with pip install pika.

Related

Can't access service in AKS

I've created an ACR and AKS, and pushed a container to the ACR.
I then applied the following yaml file to AKS:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: readit-cart
spec:
  selector:
    matchLabels:
      app: readit-cart
  template:
    metadata:
      labels:
        app: readit-cart
    spec:
      containers:
        - name: readit-cart
          image: memicourseregistry.azurecr.io/cart:v2
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 5005
---
apiVersion: v1
kind: Service
metadata:
  name: readit-cart
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 5005
  selector:
    app: readit-cart
I can run the container locally on port 5005 and it runs just fine.
In Azure portal, in the AKS resources view, I can see the service and the pod, they are both running (green).
And yet, when I try to access the public IP of the service, I get a "This site can't be reached" error.
What am I missing?
OK, it looks like the problem was that the pod ran on port 80, not 5005, even though the container ran locally on 5005. Strange...
I'll look further into it.
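For reference, if the container really listens on port 80 inside the pod, one way to pin it back to 5005 is to set the listening URL explicitly. This is a hypothetical sketch that assumes the image runs an ASP.NET Core app honoring the standard ASPNETCORE_URLS variable:
      containers:
        - name: readit-cart
          image: memicourseregistry.azurecr.io/cart:v2
          env:
            - name: ASPNETCORE_URLS    # assumption: the app reads this standard variable
              value: "http://+:5005"   # force Kestrel to listen on 5005
          ports:
            - containerPort: 5005
With that in place, the Service's targetPort: 5005 would line up with the port the app actually listens on.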

Kubernetes load balancer External IP pending

I created a RabbitMQ cluster inside Kubernetes. I am trying to add a load balancer, but I can't get the load balancer's External-IP; it is still pending.
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    run: rabbitmq
spec:
  type: NodePort
  ports:
    - port: 5672
      protocol: TCP
      name: mqtt
    - port: 15672
      protocol: TCP
      name: ui
  selector:
    run: rabbitmq
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      run: rabbitmq
  template:
    metadata:
      labels:
        run: rabbitmq
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq:latest
          imagePullPolicy: Always
And my load balancer is below. In it I set:
nodePort to a random value,
port to the mqtt port number of the RabbitMQ service created above,
targetPort to the UI port number of the RabbitMQ service created above.
apiVersion: v1
kind: Service
metadata:
  name: loadbalanceservice
  labels:
    app: rabbitmq
spec:
  selector:
    app: rabbitmq
  type: LoadBalancer
  ports:
    - nodePort: 31022
      port: 30601
      targetPort: 31533
The LoadBalancer service type only works on cloud providers that support external load balancers. Setting the type field to LoadBalancer provisions a load balancer for your Service. Yours is pending because the environment you are in does not support the LoadBalancer service type. In a non-cloud environment, an easier option is a NodePort service. Here is a guide on using NodePort to access a service from outside the cluster.
A LoadBalancer service doesn't work on bare-metal clusters, but it still acts as a NodePort service as well, so you can use the nodeIP:nodePort combination to access your service from outside the cluster.
If you do want an external IP and custom port combination for your service, look into MetalLB, which implements support for LoadBalancer-type services on bare-metal clusters.
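A minimal sketch of the nodeIP:nodePort route (the service name is taken from this question; the IP and port are whatever your cluster reports):
# List the nodes' IPs
kubectl get nodes -o wide

# See which nodePort Kubernetes assigned to the named "mqtt" port
kubectl get service rabbitmq -o jsonpath='{.spec.ports[?(@.name=="mqtt")].nodePort}'

# Reach the AMQP listener from outside the cluster,
# substituting the IP and port printed above
telnet <node-ip> <node-port>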

How to connect to minikube exposed Nodeport from other computers

I am trying to expose one of my applications running on minikube to the outside world. I have already used a NodePort, and I can access the application from the same host machine using a web browser.
But I need to expose this application to a friend who lives far away, so he can see it in his browser too.
This is what my deployment.yaml file looks like. Should I use an Ingress, and if so, how?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      name: node-web-app
  template:
    metadata:
      labels:
        # you can specify any labels you want here
        name: node-web-app
    spec:
      containers:
        - name: node-web-app
          # image must be the same as you built before (name:tag)
          image: banuka/node-web-app
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          imagePullPolicy: Never
      terminationGracePeriodSeconds: 60
How can I expose this deployment, which is running a Node.js server, to the outside world?
You generally can’t. The networking is set up only for the host machine. You could probably use ngrok or something though?
You can use ngrok. For example:
ngrok http 8000
This will generate a publicly accessible URL.
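A hedged end-to-end sketch (the service name and the printed host:port are illustrative; substitute your own):
# Print the local URL minikube exposes for the NodePort service,
# e.g. http://192.168.49.2:30080
minikube service node-web-app --url

# Tunnel that endpoint to a public URL with ngrok,
# using the host:port printed above
ngrok http http://192.168.49.2:30080
Your friend can then open the generated ngrok URL in his browser for as long as the tunnel stays up.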

Kubernetes stateful apps with bare metal

I am pretty new to Kubernetes and cloud computing. I am working with bare-metal servers at home (actually virtual servers on VirtualBox) and trying to run a stateful app with a StatefulSet. I have 1 master and 2 worker nodes, and I am trying to run a database application on this cluster. Each node has 1 pod, and I am very confused about volumes. I use a hostPath volume (code below), but the volumes work separately (they do not synchronize), so my 2 pods behave like 2 different servers (same app) when I reach them.
How can I run that app in 2 synchronized pods?
I've tried to synchronize the volume files between the 2 workers. I also tried to synchronize the volume files using a Deployment, and with volume provisioning (PersistentVolumes and PersistentVolume provisioning).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cloud
spec:
  selector:
    matchLabels:
      app: cloud
  serviceName: "cloud"
  replicas: 2
  template:
    metadata:
      labels:
        app: cloud
    spec:
      containers:
        - name: cloud
          image: owncloud:v2
          imagePullPolicy: Never
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: cloud-volume
              mountPath: /var/www/html/
      volumes:
        - name: cloud-volume
          hostPath:
            path: /volumes/cloud/
---
kind: Service
apiVersion: v1
metadata:
  name: cloud
spec:
  selector:
    app: cloud
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
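For what it's worth, a StatefulSet usually gives each replica its own PersistentVolumeClaim through volumeClaimTemplates (placed under the StatefulSet's spec, next to template) rather than a node-local hostPath. A minimal sketch, assuming your cluster has a provisioner behind the named storage class (the class name here is hypothetical):
  # Each replica gets its own PVC (cloud-volume-cloud-0, cloud-volume-cloud-1);
  # the name matches the volumeMount above. Data is still per-pod: keeping
  # replicas in sync is the job of the application's own replication.
  volumeClaimTemplates:
    - metadata:
        name: cloud-volume
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: local-storage   # assumption: depends on your provisioner
        resources:
          requests:
            storage: 1Gi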

Kubernetes: Can not telnet to ClusterIP Service inside my Cluster

I want to deploy Jenkins on a local Kubernetes cluster (no cloud).
I will create 2 services on top of Jenkins.
One service of type NodePort for port 8080 (it is mapped to a random port, and I can access it outside the cluster; I can also access it inside the cluster using ClusterIP:8080). All fine.
My second service is so my Jenkins slaves can connect.
I chose ClusterIP (the default) as the type of my service.
I read about the 3 types of services:
ClusterIP: exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster.
NodePort: not necessary, since port 50000 does not need to be exposed outside the cluster.
LoadBalancer: I'm not working in the cloud.
Here is my .yml to create the services:
kind: Service
apiVersion: v1
metadata:
  name: jenkins-ui
  namespace: ci
spec:
  type: NodePort
  selector:
    app: master
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: master
---
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: ci
spec:
  #type: ClusterIP
  selector:
    app: jenkins
  ports:
    - protocol: TCP
      port: 50000
      targetPort: 50000
      name: slaves
The problem is that my slaves cannot connect to port 50000.
I tried to telnet to the ClusterIP:port of the jenkins-discovery service and got a connection refused. I can telnet to the ClusterIP:port of the jenkins-ui service. What am I doing wrong, or is there a part I don't understand?
It's solved. The mistake was the selector, a part that wasn't that clear to me. I was using different label selectors, which seemed to cause this issue. This worked:
kind: Service
apiVersion: v1
metadata:
  name: jenkins-ui
  namespace: ci
spec:
  type: NodePort
  selector:
    app: master
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: master
---
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: ci
spec:
  #type: ClusterIP
  selector:
    app: master
  ports:
    - protocol: TCP
      port: 50000
      targetPort: 50000
      name: slaves
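A quick way to catch this class of mistake (names taken from this question) is to check whether the Service's selector matches any pods at all:
# An empty ENDPOINTS column means the selector matches no pods
kubectl get endpoints jenkins-discovery -n ci

# Compare the pods' labels against the Service's selector
kubectl get pods -n ci --show-labels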
