I am trying to use a static IP address for the dashboard, created outside of the node resource group, following the guide located here, but it is not working. (This is for a firewalled dev-only cluster and won't go to production.)
What I have done so far:
Created a public IP address in resourcegroup1.
Created an AKS cluster in resourcegroup1 tied to an Azure AD application.
Added the Azure AD application's service principal as a "Network Contributor" on resourcegroup1.
Added service.beta.kubernetes.io/azure-load-balancer-resource-group: resourcegroup1 to my service.yaml file.
Added loadBalancerIP with the IP address from step 1.
Whenever I apply service.yaml, the service says it's in a pending state. When I run kubectl describe service, it shows the following output:
Name: kubernetes-dashboard
Namespace: kube-system
Labels: <none>
Annotations: externalTrafficPolicy=Local
service.beta.kubernetes.io/azure-load-balancer-resource-group=resourcegroup1
Selector: k8s-app=kubernetes-dashboard
Type: LoadBalancer
IP: 10.0.42.112
IP: <IP FROM STEP 1>
Port: <unset> 80/TCP
TargetPort: 9090/TCP
NodePort: <unset> 31836/TCP
Endpoints: 10.244.0.6:9090
Session Affinity: None
External Traffic Policy: Cluster
LoadBalancer Source Ranges: <SNIPPED>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 38s (x6 over 3m) service-controller Ensuring load balancer
Warning CreatingLoadBalancerFailed 38s (x6 over 3m) service-controller Error creating load balancer (will retry): failed to ensure load balancer for service kube-system/kubernetes-dashboard: user supplied IP Address <IP FROM STEP 1> was not found
Here is my service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: resourcegroup1
spec:
  type: LoadBalancer
  loadBalancerIP: <IP FROM STEP 1>
  ports:
    - port: 80
      protocol: TCP
      targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
  loadBalancerSourceRanges:
    - <SNIP>
    - <SNIP>
The error means the public IP cannot be found in resourcegroup1 in the same region as the AKS cluster. A public IP in a different region causes exactly the error you are seeing.
So create the public IP in the same region as your AKS cluster and it will work.
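For reference, here is a minimal sketch (the IP name, region, and IDs are placeholders, not taken from the question) of creating the public IP in the cluster's region and granting the service principal the Network Contributor role:

# Minimal sketch with placeholder names; the --location must match the AKS cluster's region.
az network public-ip create \
  --resource-group resourcegroup1 \
  --name dashboard-ip \
  --location eastus \
  --allocation-method Static

# Let the cluster's service principal manage networking resources in resourcegroup1.
az role assignment create \
  --assignee <service-principal-client-id> \
  --role "Network Contributor" \
  --scope /subscriptions/<subscription-id>/resourceGroups/resourcegroup1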
My AKS cluster was 1.9.x, which was older than the required 1.10.x. I was using Terraform to create the cluster, and there appears to be a bug with how a missing kubernetes_version is handled. I submitted an issue on their repo.
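If useful, one hedged way to confirm the control-plane version (the cluster name is a placeholder) is:

# Placeholder cluster name; prints the control-plane version so you can confirm it is >= 1.10.
az aks show --resource-group resourcegroup1 --name <cluster-name> --query kubernetesVersion -o tsv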
Related
I have a Spring Boot app where we specify the property below in application.properties. Kafka is installed on a remote machine with a self-signed certificate (outside the Kubernetes cluster).
camel.component.kafka.configuration.brokers=kafka-worker1.abc.com:9092,kafka-worker2.abc.com:9092,kafka-worker3.abc.com:9092
At application startup it tries to look up the Kafka brokers.
If I add hostAliases to the deployment, it works fine, like below:
hostAliases:
  - ip: 10.76.XX.XX
    hostnames:
      - kafka-worker1.abc.com
  - ip: 10.76.XX.XX
    hostnames:
      - kafka-worker2.abc.com
  - ip: 10.76.XX.XX
    hostnames:
      - kafka-worker3.abc.com
This works, but I don't want it, as hostAliases is not good practice; we may need to restart the pod if an IP changes.
We want hostname resolution to happen in CoreDNS, i.e. to resolve the names without adding IPs to the pod's hosts file.
How can I achieve this?
I followed Cannot connect to external database from inside kubernetes pod and created a Service and endpoint like below (the same was created for kafka-worker2 and kafka-worker3 with their respective IPs):
kind: Service
apiVersion: v1
metadata:
  name: kafka-worker1
spec:
  clusterIP: None
  ports:
    - port: 9092
      targetPort: 9092
  externalIPs:
    - 10.76.XX.XX
and added this to the property file:
camel.component.kafka.configuration.brokers=kafka-worker1.default:9092,kafka-worker2.default:9092,kafka-worker3.default:9092
but I am still getting the same WARN:
2020-05-13T11:57:12.004+0000 Etc/UTC docker-desktop WARN [main] org.apache.kafka.clients.ClientUtils(:74) - Couldn't resolve server hal18-coworker2.default:9092 from bootstrap.servers as DNS resolution failed for kafka-worker1.default
2020-05-13T11:57:12.318+0000 Etc/UTC docker-desktop WARN [main] org.apache.kafka.clients.ClientUtils(:74) - Couldn't resolve server hal18-coworker1.default:9092 from bootstrap.servers as DNS resolution failed for kafka-worker2.default
2020-05-13T11:57:12.567+0000 Etc/UTC docker-desktop WARN [main] org.apache.kafka.clients.ClientUtils(:74) - Couldn't resolve server hal18-coworker3.default:9092 from bootstrap.servers as DNS resolution failed for kafka-worker3.default
Update Section
Used "Services without selectors" as below still getting same error
2020-05-18T14:47:10.865+0000 Etc/UTC docker-desktop WARN [Camel (SMP-Proactive-Camel) thread #1 - KafkaConsumer[recommendations-topic]] org.apache.kafka.clients.NetworkClient(:750) - [Consumer clientId=consumer-hal-tr69-streaming-1, groupId=hal-tr69-streaming] Connection to node -1 (kafka-worker.default.svc.cluster.local/10.100.153.152:9092) could not be established. Broker may not be available.
2020-05-18T14:47:12.271+0000 Etc/UTC docker-desktop WARN [Camel (SMP-Proactive-Camel) thread #1 - KafkaConsumer[recommendations-topic]] org.apache.kafka.clients.NetworkClient(:750) - [Consumer clientId=consumer-hal-tr69-streaming-1, groupId=hal-tr69-streaming] Connection to node -1 (kafka-worker.default.svc.cluster.local/10.100.153.152:9092) could not be established. Broker may not be available.
2020-05-18T14:47:14.191+0000 Etc/UTC docker-desktop WARN [Camel (SMP-Proactive-Camel) thread #1 - KafkaConsumer[recommendations-topic]] org.apache.kafka.clients.NetworkClient(:750) - [Consumer clientId=consumer-hal-tr69-streaming-1, groupId=hal-tr69-streaming] Connection to node -1 (kafka-worker.default.svc.cluster.local/10.100.153.152:9092) could not be established. Broker may not be available.
Services & Endpoint yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-worker
spec:
  type: ClusterIP
  ports:
    - port: 9092
      targetPort: 9092
---
apiVersion: v1
kind: Endpoints
metadata:
  name: kafka-worker
subsets:
  - addresses:
      - ip: 10.76.XX.XX # kafka worker 1
      - ip: 10.76.XX.XX # kafka worker 2
      - ip: 10.76.XX.XX # kafka worker 3
    ports:
      - port: 9092
        name: kafka-worker
kubectl.exe get svc,ep
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller LoadBalancer 10.99.101.185 localhost 80:31247/TCP,443:31340/TCP 11d
service/ingress-nginx-controller-admission ClusterIP 10.103.212.117 <none> 443/TCP 11d
service/kafka-worker ClusterIP 10.100.153.152 <none> 9092/TCP 97s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17d
NAME ENDPOINTS AGE
endpoints/ingress-nginx-controller 10.1.0.XX:80,10.1.0.XX:443 11d
endpoints/ingress-nginx-controller-admission 10.1.0.xx:8443 11d
endpoints/kafka-worker 10.76.xx.xx:9092,10.76.xx.xx:9092,10.76.xx.xx:9092 97s
endpoints/kubernetes 192.168.XX.XX:6443 17d
Thank you for the question and for showing the effort you have put into solving the problem.
You are right that adding hostAliases is not good practice, because if a Kafka host's IP changes you will have to apply the new IP to the deployment, which triggers a pod reload.
I am not sure how externalIPs fits here as a solution, since:
Traffic that ingresses into the cluster with the external IP (as destination IP), on the Service port, will be routed to one of the Service endpoints. externalIPs are not managed by Kubernetes and are the responsibility of the cluster administrator.
But even if, for a moment, I take it for granted that the externalIPs solution is working, the way you are accessing your service is still not correct.
DNS resolution is failing because your domain name is wrong: changing camel.component.kafka.configuration.brokers=kafka-worker1.default:9092 to camel.component.kafka.configuration.brokers=kafka-worker1.default.svc.cluster.local:9092 may fix it. Note: if your k8s cluster has a different domain than the default, replace cluster.local with your cluster's domain.
Check DNS debugging REF
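As a hedged sketch of how to test resolution from inside the cluster (any image with nslookup works; the one below is just a commonly used DNS-debugging image), you can run a throwaway pod and query the service name:

# Throwaway pod with nslookup; replace the image if your cluster cannot pull it.
kubectl run -it --rm dnsutils --image=gcr.io/kubernetes-e2e-test-images/dnsutils:1.3 --restart=Never -- nslookup kafka-worker.default.svc.cluster.local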
There are two solutions which I can think of:
First, Service without selectors and manual Endpoints creation:
(example code) The name of the Endpoints object is used to attach it to the Service, therefore use the same name for both the Service and the Endpoints, which here is kafka-worker:
apiVersion: v1
kind: Service
metadata:
  name: kafka-worker
spec:
  type: ClusterIP
  ports:
    - port: 9092
      targetPort: 9092
---
apiVersion: v1
kind: Endpoints
metadata:
  name: kafka-worker
subsets:
  - addresses:
      - ip: 10.76.XX.XX # kafka worker 1
      - ip: 10.76.XX.XX # kafka worker 2
      - ip: 10.76.XX.XX # kafka worker 3
    ports:
      - port: 9092
        name: kafka-worker
Way to access this would be camel.component.kafka.configuration.brokers=kafka-worker.default.svc.cluster.local:9092
Note:
- You can add more information to your endpoint IPs, like nodeName and hostName; check out this API ref.
- An advantage of this approach is that k8s will load balance across the Kafka workers for you.
Second, ExternalName:
For this approach you need a single domain name defined already; how to do that is out of scope of this answer. For example, say kafka-worker.abc.com is your domain name; it is then your responsibility to attach all three Kafka worker node IPs to it, perhaps in a round-robin fashion, on your DNS server. Note: this kind of load balancing (via DNS) is not always preferred, because the DNS server performs no health checks to determine which nodes are alive and which are dead.
This approach is not guaranteed and may need additional tweaks depending on your system's networking to resolve domain names. That is to say, the node where your coredns/kube-dns is running must be able to resolve kafka-worker.abc.com; otherwise, when k8s returns the CNAME, your application will fail to resolve it.
Here is an example:
apiVersion: v1
kind: Service
metadata:
  name: kafka-worker
spec:
  type: ExternalName
  externalName: kafka-worker.abc.com
Update:
Following your update in the question:
Looking at the first error, it seems you have created 3 services, which generate 3 DNS names:
kafka-worker3.default.svc.cluster.local
kafka-worker2.default.svc.cluster.local
kafka-worker1.default.svc.cluster.local
I suggest you check my example code: you do NOT need to create 3 services, just one Service attached to one Endpoints object that contains the 3 IPs of your 3 brokers.
For your second error:
A hostname is not a domain name; a hostname is typically the name given to the machine (please check the difference). Just for the sake of simplicity, I would suggest using only ip in the Endpoints object.
Is it true that I cannot have two LoadBalancer services on a docker-desktop cluster (osx), because they would both use localhost (and all ports are forwarded)?
I created an example, and the latter service is never assigned an external IP address but stays in the pending state. However, the former is accessible on localhost.
> kubectl get all
NAME READY STATUS RESTARTS AGE
pod/whoami-deployment-9f9c86c4f-l5lkj 1/1 Running 0 28s
pod/whoareyou-deployment-b896ddb9c-lncdm 1/1 Running 0 27s
pod/whoareyou-deployment-b896ddb9c-s72sc 1/1 Running 0 27s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 95s
service/whoami-service LoadBalancer 10.97.171.139 localhost 80:30024/TCP 27s
service/whoareyou-service LoadBalancer 10.97.171.204 <pending> 80:32083/TCP 27s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/whoami-deployment 1/1 1 1 28s
deployment.apps/whoareyou-deployment 2/2 2 2 27s
NAME DESIRED CURRENT READY AGE
replicaset.apps/whoami-deployment-9f9c86c4f 1 1 1 28s
replicaset.apps/whoareyou-deployment-b896ddb9c 2 2 2 27s
Detailed state of whoareyou-service:
kubectl describe service whoareyou-service
Name: whoareyou-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"whoareyou-service","namespace":"default"},"spec":{"ports":[{"name...
Selector: app=whoareyou
Type: LoadBalancer
IP: 10.106.5.8
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 30333/TCP
Endpoints: 10.1.0.209:80,10.1.0.210:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I decided to copy my comments, as they partially explain the problem, and make a Community Wiki answer out of them so it is more clearly seen and available for possible further edits by the Community:
It probably works exactly the same way as in Minikube. Although docker-desktop is unable to provision a real LoadBalancer, it can still "simulate" a Service of that type using a NodePort (this can easily be seen from the port range it uses). I'm pretty sure you cannot reuse the same IP address as the ExternalIP of another LoadBalancer Service, and if you create one more Service of that type, docker-desktop has no choice but to use your localhost one more time. As it is already used by one Service, it cannot be used by another, which is why the second one remains in a pending state.
Note that if you create a real LoadBalancer in a cloud environment, a new IP is provisioned each time, and the next LoadBalancer you create never gets an IP that is already used by an existing one. Apparently here it cannot use any IP other than localhost, and that one is already in use. Anyway, I would recommend you simply use NodePort if you want to expose your Deployment to the external world.
Think about using an Ingress controller instead.
So basically, it's 3 steps after installing docker-desktop:
Wildcard certificate locally
SSL certificate for local env
Install Ingress controller
Detailed here: https://github.com/kubernetes-tn/guideline-kubernetes-enterprise/blob/master/general/desktop-env-setup.md
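As a hedged sketch of where this ends up (the host names are placeholders, and it assumes an nginx Ingress controller with class name nginx is installed and the two Services from the question exist), a single Ingress can then route to both services without any port conflicts:

# Hypothetical host names; assumes an nginx Ingress controller and the Services from the question.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: local-routing
spec:
  ingressClassName: nginx
  rules:
    - host: whoami.localtest.me
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami-service
                port:
                  number: 80
    - host: whoareyou.localtest.me
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoareyou-service
                port:
                  number: 80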
I came across this question while looking to set up a lightweight local environment with minimal dependencies.
I found that two LoadBalancer services work on localhost when using different port numbers.
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-one-lb
spec:
  ports:
    - name: 8081-tcp
      port: 8081
      protocol: TCP
      targetPort: 8080
  selector:
    name: webapp-one
  type: LoadBalancer
status:
  loadBalancer: {}
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-two-lb
spec:
  ports:
    - name: 8082-tcp
      port: 8082
      protocol: TCP
      targetPort: 8080
  selector:
    name: webapp-two
  type: LoadBalancer
status:
  loadBalancer: {}
As others have said, Ingress is more flexible and allows for sub-domain and path based routing without having to worry about port conflicts, but it comes with an additional learning curve.
I have configured my service to run on port 8080.
My Docker image also listens on port 8080.
I created my ReplicaSet with a configuration like this:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-app-backend-rs
spec:
  containers:
    - name: my-app-backend
      image: go-my-app-backend
      ports:
        - containerPort: 8080
      imagePullPolicy: Never
And finally I create a Service of type NodePort, also on port 8080, with the configuration below:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-app-backend-rs
  name: my-app-backend-svc-nodeport
spec:
  type: NodePort
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: my-app-backend
After running describe on the NodePort service, I see that I should be able to reach my app at http://127.0.0.1:31859 (e.g. curl http://127.0.0.1:31859/), but I get no response.
Type: NodePort
IP: 10.110.250.176
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31859/TCP
Endpoints: 172.17.0.6:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
What am I not understanding, and what am I doing wrong? Can anyone explain that to me?
From your output I can see that the endpoint below has been created, so one pod is ready to serve this NodePort service; the label is not an issue.
Endpoints: 172.17.0.6:8080
First, ensure you are able to access the app by running curl http://podhostname:8080 once you are logged into the pod using kubectl exec -it podname sh (if curl is installed on the image running in that pod's container). If not, run a curl ambassador container as a sidecar in the pod and, from that pod, try to access http://<>:8080 and ensure it is working.
Remember you can't access the NodePort service as localhost, since that would point to your master node if you are running this command from the master node.
You have to access this service by one of the methods below (examples follow the list):
<CLUSTERIP:PORT> --- in your case: 10.110.250.176:8080
<1st node's IP>:31859
<2nd node's IP>:31859
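A hedged sketch of those checks (the node IP is a placeholder; the port-forward is an extra suggestion for local testing, not one of the methods above):

# From inside the cluster (e.g. from another pod): hit the ClusterIP on the service port.
curl http://10.110.250.176:8080/

# From outside the cluster: hit any node's IP on the NodePort.
curl http://<node-ip>:31859/

# Or, for a quick local test without going through the NodePort at all:
kubectl port-forward service/my-app-backend-svc-nodeport 8080:8080
curl http://127.0.0.1:8080/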
I tried to use curl after kubectl exec -it podname sh
In this very example the double dash is missing in front of the sh command.
Please note that the correct syntax can be checked anytime with kubectl exec -h and looks like:
kubectl exec (POD | TYPE/NAME) [-c CONTAINER] [flags] -- COMMAND [args...] [options]
If you have only one container per Pod, it can be simplified to:
kubectl exec -it PODNAME -- COMMAND
The caveat of not specifying the container is that, in case of multiple containers in that Pod, you'll be connected to the first one :)
Example: kubectl exec -it pod/frontend-57gv5 -- curl localhost:80
I tried also hit on 10.110.250.176:80:31859 but this is incorrect I think. Sorry but I'm beginner at network stuff.
yes, that is not correct, as the value for :port occurs twice. In that example you need to hit 10.110.250.176:8080 (as 10.110.250.176 is the Cluster IP and 8080 is the service port).
After running describe on the NodePort service, I see that I should be able to reach my app at http://127.0.0.1:31859 (e.g. curl http://127.0.0.1:31859/), but I get no response.
It depends on where you are going to run that command.
In this very case it is not clear what exactly you have put into the ReplicaSet config (whether the Service's selector matches the ReplicaSet's labels), so let me explain how this is supposed to work.
Assuming we have the following ReplicaSet (the example below is a slightly modified version of the official documentation on the topic):
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend-rs
  labels:
    app: guestbook
    tier: frontend-meta
spec:
  # modify replicas according to your case
  replicas: 2
  selector:
    matchLabels:
      tier: frontend-label
  template:
    metadata:
      labels:
        tier: frontend-label ## shall match spec.selector.matchLabels.tier
    spec:
      containers:
        - name: php-redis
          image: gcr.io/google_samples/gb-frontend:v3
And the following service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: frontend
  name: frontend-svc-tier-nodeport
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    tier: frontend-label ## shall match labels from ReplicaSet spec
We can create the ReplicaSet (RS) and Service. As a result, we shall be able to see the RS, Pods, Service and Endpoints:
kubectl get rs -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
frontend-rs 2 2 2 10m php-redis gcr.io/google_samples/gb-frontend:v3 tier=frontend-label
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
frontend-rs-76sgd 1/1 Running 0 11m 10.12.0.31 gke-6v3n
frontend-rs-fxxq8 1/1 Running 0 11m 10.12.1.33 gke-m7z8
kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
frontend-svc-tier-nodeport NodePort 10.0.5.10 <none> 80:32113/TCP 9m41s tier=frontend-label
kubectl get ep -o wide
NAME ENDPOINTS AGE
frontend-svc-tier-nodeport 10.12.0.31:80,10.12.1.33:80 10m
kubectl describe svc/frontend-svc-tier-nodeport
Selector: tier=frontend-label
Type: NodePort
IP: 10.0.5.10
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 32113/TCP
Endpoints: 10.12.0.31:80,10.12.1.33:80
The important thing we can see from my example is that the port was set to 80:32113/TCP for the Service we created.
That allows us to access the "gb-frontend:v3" app in a few different ways:
from inside cluster: curl 10.0.5.10:80
(CLUSTER-IP:PORT) or curl frontend-svc-tier-nodeport:80
from external network (internet): curl PUBLIC_IP:32113
here PUBLIC_IP is the IP at which you can reach a Node in your cluster. All the nodes in the cluster listen on the NodePort and forward requests according to the Service's selector.
from the Node : curl localhost:32113
Hope that helps.
I am using Kubernetes and running one service. The service is running and shows up in kubectl get service, but I am not able to access it via the public IP of the instance. Below is my deployment file.
apiVersion: v1
kind: Service
metadata:
  name: apache-service
spec:
  selector:
    app: apache
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: apache-deployment
spec:
  selector:
    matchLabels:
      app: apache
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
        - name: apache
          image: mobingi/ubuntu-apache2-php7:7.2
          ports:
            - containerPort: 80
Here is my list of services:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
apache-service NodePort 10.106.242.181 <none> 80:31807/TCP 9m5s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11m
But when I check the same service with the following telnet to the public IP of the cluster and node, it does not respond.
telnet public-ip:31807
Any help will be appreciated.
What do you mean by cluster IP? Do you mean the node that acts as the Kubernetes master? It won't work if you use the master's IP, because masters don't have deployments scheduled on them due to security concerns.
Exposing a service via NodePort means that the service listens on a particular port on each of the worker nodes. So you can access the Kubernetes worker nodes on the NodePort and get a response. However, if you created the cluster using a cloud provider like AWS, the worker nodes' security groups are locked down. You probably need to edit the worker nodes' security groups to access the service, for example as shown below.
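A hedged example, assuming AWS (the security group ID is a placeholder, and the port is the NodePort from the question):

# Open the NodePort (31807 in the question) on the worker nodes' security group.
# Restrict the CIDR to something tighter than 0.0.0.0/0 if possible.
aws ec2 authorize-security-group-ingress \
  --group-id <worker-node-security-group-id> \
  --protocol tcp \
  --port 31807 \
  --cidr 0.0.0.0/0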
I am unable to connect to an exposed IP for a Docker container deployed on Google Cloud Kubernetes. I have been roughly following this tutorial but using my own application.
The deployment seems to work fine, everything is green and running when visiting the cloud dashboard but when trying to visit the deployed application on the exposed IP, I get a browser error:
This site can’t be reached
35.231.27.158 refused to connect
If I ping the IP I do get a response back.
kubectl get pods produces the following:
NAME READY STATUS RESTARTS AGE
mtg-dash-7874f6c54d-nktjn 1/1 Running 0 21m
and kubectl get service shows the following:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.7.240.1 <none> 443/TCP 23m
mtg-dash LoadBalancer 10.7.242.240 35.231.27.158 80:30306/TCP 20m
and kubectl describe svc shows the following:
Name: mtg-dash
Namespace: default
Labels: run=mtg-dash
Annotations: <none>
Selector: run=mtg-dash
Type: LoadBalancer
IP: 10.7.242.240
LoadBalancer Ingress: 35.231.27.158
Port: <unset> 80/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30306/TCP
Endpoints: 10.4.0.6:8080
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 37m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 37m service-controller Ensured load balancer
My Dockerfile contains the following:
FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
WORKDIR /usr/share/nginx/html
COPY dist/mtg-dash .
I have a feeling I have missed something obvious.
What more do I need to do to configure this to be accessible on the internet?
Here is a screenshot of running service:
As per the comments, the target port should be 80, since that is what the application (nginx) is set to listen on. Glad I could help. I picked it up from the documentation a month or so ago.
https://kubernetes.io/docs/concepts/services-networking/service/
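For completeness, a hedged sketch of the corrected Service (the field values are taken from the describe output in the question; the original manifest itself was not shown, so treat this as an approximation):

apiVersion: v1
kind: Service
metadata:
  name: mtg-dash
spec:
  type: LoadBalancer
  selector:
    run: mtg-dash
  ports:
    - port: 80        # port exposed on the external LoadBalancer IP
      targetPort: 80  # nginx in the container listens on 80, not 8080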