I created a RabbitMQ cluster inside Kubernetes and I am trying to add a load balancer, but I can't get the LoadBalancer's External-IP; it is still pending.
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    run: rabbitmq
spec:
  type: NodePort
  ports:
  - port: 5672
    protocol: TCP
    name: mqtt
  - port: 15672
    protocol: TCP
    name: ui
  selector:
    run: rabbitmq
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      run: rabbitmq
  template:
    metadata:
      labels:
        run: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:latest
        imagePullPolicy: Always
And my load balancer is below. In the LoadBalancer service I set:
nodePort to a random port,
port to the mqtt port number of the RabbitMQ service created above,
targetPort to the UI port number of the RabbitMQ service created above.
apiVersion: v1
kind: Service
metadata:
  name: loadbalanceservice
  labels:
    app: rabbitmq
spec:
  selector:
    app: rabbitmq
  type: LoadBalancer
  ports:
  - nodePort: 31022
    port: 30601
    targetPort: 31533
The service type LoadBalancer only works on cloud providers that support external load balancers. Setting the type field to LoadBalancer provisions a load balancer for your Service. It's pending because the environment you are running in does not support the LoadBalancer service type. In a non-cloud environment, an easier option is a NodePort service. Here is a guide on using NodePort to access a service from outside the cluster.
A LoadBalancer service doesn't work on bare-metal clusters. Your LoadBalancer service will also act as a NodePort service, so you can use the nodeIP:nodePort combination to access your service from outside the cluster.
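For example, you can check which node ports were assigned and reach the service through any node (a sketch; the node IP is a placeholder you'd read from the first command):

# list node IPs
kubectl get nodes -o wide
# show the node ports assigned to the service
kubectl get svc loadbalanceservice
# reach the service from outside the cluster
curl http://<node-ip>:31022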
If you do want an external IP with a custom port combination to access your service, look into MetalLB, which implements support for LoadBalancer-type services on bare-metal clusters.
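If you go the MetalLB route, a minimal Layer 2 configuration might look like this (a sketch using MetalLB's v1beta1 CRDs; the pool name and the address range are placeholders to adapt to your network):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example-pool        # placeholder name
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250   # free IPs on your LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - example-pool

Once MetalLB is running with a pool like this, your LoadBalancer service should get an External-IP from the configured range instead of staying pending.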
Here is the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-task-tracker-deployment
spec:
  selector:
    matchLabels:
      app: my-task-tracker
  replicas: 5
  template:
    metadata:
      labels:
        app: my-task-tracker
    spec:
      containers:
      - name: hello-world
        image: shaikezam/task-tracker:1.0
        ports:
        - containerPort: 8080
          protocol: TCP
This is the service (NodePort):
apiVersion: v1
kind: Service
metadata:
  name: my-task-tracker-service
  labels:
    app: my-task-tracker
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8085
    nodePort: 30001
    protocol: TCP
  selector:
    app: my-task-tracker
Now I try to access localhost:8085 or localhost:30001, and nothing happens.
I'm running Kubernetes in Docker Desktop.
Any suggestions as to what I'm doing wrong?
targetPort should be 8080 in the service YAML if that is the port your container listens on, as per your deployment YAML file.
apiVersion: v1
kind: Service
metadata:
  name: my-task-tracker-service
  labels:
    app: my-task-tracker
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30001
    protocol: TCP
  selector:
    app: my-task-tracker
port exposes the Kubernetes Service on the specified port within the cluster; other pods within the cluster can communicate with it on that port.
targetPort is the port the Service forwards requests to, i.e. the port your pod, and the application inside the container, must be listening on.
nodePort exposes the Service outside the cluster via each node's IP address and the nodePort. If nodePort is not specified, Kubernetes assigns one at random from the 30000-32767 range. You should be able to reach your application through the NodePort as well.
In your case targetPort must be 8080, since that is what your container listens on. Within the cluster you can expose the app on port 8085 by changing the port field in the YAML, and externally by changing the nodePort.
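As a quick sanity check (a sketch assuming Docker Desktop, where the single node's ports are published on localhost):

# in-cluster access: service name and port (run from another pod)
curl http://my-task-tracker-service:8080
# external access: node address and nodePort
curl http://localhost:30001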
I am using Kubernetes and running one service. The service is running and shows up in the service list, but I am not able to access it from the public IP of the instance. Below is my deployment file.
apiVersion: v1
kind: Service
metadata:
  name: apache-service
spec:
  selector:
    app: apache
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: apache-deployment
spec:
  selector:
    matchLabels:
      app: apache
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache
        image: mobingi/ubuntu-apache2-php7:7.2
        ports:
        - containerPort: 80
Here is my list of services.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
apache-service NodePort 10.106.242.181 <none> 80:31807/TCP 9m5s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11m
But when I check the same service with the following telnet command, using the public IP of the cluster and node, it is not responding:
telnet public-ip:31807
Any kind of help will be appreciated.
What do you mean by cluster IP? Do you mean the node that acts as the Kubernetes master? It won't work if you use the master's IP, because deployments are not scheduled on masters due to security concerns.
Exposing a service via NodePort means that the service listens on a particular port on each of the worker nodes, so you can reach the service through any worker node's IP and the nodePort. However, if you created the cluster with a cloud provider like AWS, the worker nodes' security groups are locked down by default, and you probably need to edit them to allow access to the service.
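For example, with the AWS CLI (a sketch; the security group ID is a placeholder, and 31807 is the nodePort from the service listing above):

# allow inbound TCP on the nodePort from anywhere
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 31807 \
  --cidr 0.0.0.0/0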
I have Windows 10 Pro with Docker for Windows v18.06.1-ce with Kubernetes enabled.
Using kubectl create -f, I've created rc.yml:
apiVersion: v1
kind: ReplicationController
metadata:
  name: hello-rc
spec:
  replicas: 9
  selector:
    app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-ctr
        image: nigelpoulton/pluralsight-docker-ci:latest
        ports:
        - containerPort: 8080
and svc.yml:
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello-world
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30001
    protocol: TCP
  selector:
    app: hello-world
How do I access the website behind the service?
I would expect localhost:8080 to be working, but it isn't, nor is 10.108.96.27:8080
> kubectl describe service/hello-svc
Name: hello-svc
Namespace: default
Labels: app=hello-world
Annotations: <none>
Selector: app=hello-world
Type: NodePort
IP: 10.108.96.27
LoadBalancer Ingress: localhost
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30001/TCP
Endpoints: 10.1.0.10:8080,10.1.0.11:8080,10.1.0.12:8080 + 6 more...
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
There are two ways to expose a service to the outside world from a Kubernetes cluster:
type: LoadBalancer. However, it works only with cloud providers.
type: NodePort. This is what you used here. To access a NodePort service from outside the Kubernetes cluster, use the IP address of one of your nodes together with the port from the nodePort field.
For example, 12.34.56.78:30001
For more information, look through the official documentation.
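With Docker for Windows specifically, the single node's ports are published on the host, so the nodePort should also be reachable locally (a sketch, assuming the default Docker Desktop setup):

curl http://localhost:30001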
For local development:
kubectl port-forward <my-pod-name> 8080:8080
Your pod will be accessible on localhost:8080.
More about port forwarding here.
This might help someone (it took me half a day to work out!).
You can use the built-in port-forward utility (as @aedm suggests), but by default it only makes your service accessible locally, because it binds to the loopback interface. You can also bind to all interfaces and make the service accessible externally:
kubectl port-forward <service/name> 80:8080 --address='0.0.0.0'
This will make it accessible to a browser (over HTTP) from the outside.
I want to deploy Jenkins on a local Kubernetes cluster (no cloud).
I will create two services in front of Jenkins.
One service of type NodePort for port 8080 (it gets mapped to a random node port so I can access it from outside the cluster; I can also access it inside the cluster via ClusterIP:8080). All fine.
My second service is there so my Jenkins slaves can connect.
I chose ClusterIP (the default) as the type of my service.
I read about the 3 types of services:
ClusterIP: exposes the service on a cluster-internal IP. Choosing this value makes the service reachable only from within the cluster.
NodePort: not needed here, since port 50000 does not have to be exposed outside the cluster.
LoadBalancer: I'm not working in the cloud.
Here is my .yml to create the services:
kind: Service
apiVersion: v1
metadata:
  name: jenkins-ui
  namespace: ci
spec:
  type: NodePort
  selector:
    app: master
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    name: master
---
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: ci
spec:
  #type: ClusterIP
  selector:
    app: jenkins
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
    name: slaves
The problem is that my slaves cannot connect to port 50000.
I tried to telnet the ClusterIP:port of the jenkins-discovery service and got connection refused, while I can telnet to the ClusterIP:port of the jenkins-ui service. What am I doing wrong, or is there a part I don't understand?
It's solved. The mistake was the selector, a part that wasn't all that clear to me: I was using different selectors in the two services, which caused the issue. This worked:
kind: Service
apiVersion: v1
metadata:
  name: jenkins-ui
  namespace: ci
spec:
  type: NodePort
  selector:
    app: master
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    name: master
---
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: ci
spec:
  #type: ClusterIP
  selector:
    app: master
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
    name: slaves
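A quick way to confirm that a service's selector actually matches some pods is to look at its endpoints (using the names from this question):

# an empty ENDPOINTS column means the selector matches no pods
kubectl get endpoints jenkins-discovery -n ci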
I have the following deployment yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: authentication
  labels:
    name: authentication
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: authentication-deployment
  namespace: authentication
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: authentication
    spec:
      containers:
      - name: authentication
        image: blueapp/authentication:0.0.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: authentication-service
  namespace: authentication
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    name: authentication-deployment
  type: LoadBalancer
  externalName: authentication
I'm pretty new to Kubernetes, but my understanding of what I'm trying to do is: create a namespace, create a deployment of two pods in that namespace, and then create a load balancer to distribute traffic to those pods.
When I run
$ kubectl create -f deployment.yaml
everything creates fine, but the service never gets assigned an external IP.
Is there anything obvious that may be causing this?
Your service is of type NodePort.
To get a load balancer assigned to your service, you should use the LoadBalancer service type:
type: LoadBalancer
See the documentation here:
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types
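For reference, a cleaned-up version of the service might look like this (a sketch: it drops the duplicate type key and the externalName field, and assumes the selector should match the pod label app: authentication from the deployment above):

apiVersion: v1
kind: Service
metadata:
  name: authentication-service
  namespace: authentication
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: authentication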
External IPs get assigned only in supported cloud environments, provided your cloud provider is configured correctly.
Observe the error messages in the kube-controller-manager logs when you create your service.
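For example (a sketch assuming a kubeadm-style cluster, where the controller manager runs as a static pod in kube-system):

# events on the service often explain why no IP was assigned
kubectl describe service authentication-service -n authentication
# kube-controller-manager logs
kubectl logs -n kube-system -l component=kube-controller-manager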