Gcloud Kubernetes connection refused to exposed service - docker

I am unable to connect to the exposed IP of a docker container deployed on Google Cloud Kubernetes. I have been roughly following this tutorial but using my own application.
The deployment seems to work fine: everything is green and running when visiting the cloud dashboard, but when I try to visit the deployed application on the exposed IP, I get a browser error:
This site can’t be reached
35.231.27.158 refused to connect
If I ping the IP I do get a response back.
kubectl get pods produces the following:
NAME READY STATUS RESTARTS AGE
mtg-dash-7874f6c54d-nktjn 1/1 Running 0 21m
and kubectl get service shows the following:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.7.240.1 <none> 443/TCP 23m
mtg-dash LoadBalancer 10.7.242.240 35.231.27.158 80:30306/TCP 20m
and kubectl describe svc shows the following:
Name: mtg-dash
Namespace: default
Labels: run=mtg-dash
Annotations: <none>
Selector: run=mtg-dash
Type: LoadBalancer
IP: 10.7.242.240
LoadBalancer Ingress: 35.231.27.158
Port: <unset> 80/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30306/TCP
Endpoints: 10.4.0.6:8080
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 37m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 37m service-controller Ensured load balancer
My Dockerfile contains the following:
FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
WORKDIR /usr/share/nginx/html
COPY dist/mtg-dash .
I have a feeling I have missed something obvious.
What more do I need to do to configure this to be accessible on the internet?
Here is a screenshot of the running service:

As per the comments, the target port should be 80, since that is the port nginx inside the container actually listens on (the Service's TargetPort is 8080, but nothing is listening there, hence the connection refused). Glad I could help. Picked it up from the documentation a month or so ago:
https://kubernetes.io/docs/concepts/services-networking/service/
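A minimal sketch of the corrected Service, assuming the deployment carries the run=mtg-dash label shown in the kubectl describe output above:

```yaml
# Hypothetical corrected Service manifest: targetPort changed from 8080 to 80,
# matching the port the nginx:alpine image actually serves on.
apiVersion: v1
kind: Service
metadata:
  name: mtg-dash
spec:
  type: LoadBalancer
  selector:
    run: mtg-dash
  ports:
  - port: 80        # port exposed on the load balancer
    targetPort: 80  # was 8080; nginx listens on 80
```

Alternatively, the nginx.conf copied into the image could be changed to listen on 8080, leaving the Service as-is.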

Related

Nginx ingress controller for aks using internal loadbalancer

The internal load balancer's external IP is stuck in a pending state. Could someone help me resolve this issue? Here is the install command I ran:
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace=ingress-private \
  --set rbac.create=true \
  --set controller.service.loadBalancerIP="10.0.0.0" \
  --set controller.replicaCount=2 \
  --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
  --set defaultBackend.nodeSelector."beta\.kubernetes\.io\/os"=linux \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"="true"
NAME: nginx-ingress
LAST DEPLOYED: Mon May 3 15:39:02 2021
NAMESPACE: ingress-private
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace ingress-private get services -o wide -w nginx-ingress-ingress-nginx-controller'
kubectl get svc -n ingress-private -w
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-ingress-nginx-controller LoadBalancer 10.0.0.0 <pending> 80:32392/TCP,443:32499/TCP 3m2s
nginx-ingress-ingress-nginx-controller-admission ClusterIP 10.0.0.223 <none> 443/TCP 3m2s
According to the Azure docs, you can't use the 10.0.0.0 IP address: it is the network address of the subnet and is always reserved. Try a different address. I am assuming you are trying to execute this scenario. Check the service details for any error information.
kubectl describe svc -n ingress-private nginx-ingress-ingress-nginx-controller
Ref:
Are there any restrictions on using IP addresses within these subnets?
Yes. Azure reserves 5 IP addresses within each subnet. These are
x.x.x.0-x.x.x.3 and the last address of the subnet. x.x.x.1-x.x.x.3 is
reserved in each subnet for Azure services.
x.x.x.0: Network address
x.x.x.1: Reserved by Azure for the default gateway
x.x.x.2, x.x.x.3: Reserved by Azure to map the Azure DNS IPs to the VNet space
x.x.x.255: Network broadcast address
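Assuming your subnet is 10.0.0.0/24 (hypothetical), the fix is simply to pick an address outside the reserved range. A values file is also easier to maintain than the long --set chain, for example:

```yaml
# Hypothetical values.yaml for ingress-nginx: any free host address in the
# subnet works; 10.0.0.0 is the network address and is always reserved.
controller:
  replicaCount: 2
  service:
    loadBalancerIP: "10.0.0.10"   # assumption: an unused IP in your subnet
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  nodeSelector:
    kubernetes.io/os: linux
defaultBackend:
  nodeSelector:
    kubernetes.io/os: linux
```

Then install with: helm install nginx-ingress ingress-nginx/ingress-nginx --namespace=ingress-private -f values.yaml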
Below is the service details
C:\Users\av13\Music\spoke-terraform-updated\kubernetes-nexus>kubectl describe svc -n ingress-private nginx-ingress-ingress-nginx-controller
Name: nginx-ingress-ingress-nginx-controller
Namespace: ingress-private
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=nginx-ingress
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/version=0.46.0
helm.sh/chart=ingress-nginx-3.30.0
Annotations: meta.helm.sh/release-name: nginx-ingress
meta.helm.sh/release-namespace: ingress-private
service.beta.kubernetes.io/azure-load-balancer-internal: true
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP: 10.0.188.67
IP: 10.145.72.102
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 32392/TCP
Endpoints: 10.145.72.50:80,10.145.72.94:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 32499/TCP
Endpoints: 10.145.72.50:443,10.145.72.94:443
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 78s (x278 over 22h) service-controller Ensuring load balancer
The external IP is still in a pending state:
C:\Users\av13\Music\spoke-terraform-updated\kubernetes-nexus>kubectl get svc -n ingress-private -w
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-ingress-nginx-controller LoadBalancer 10.0.188.67 <pending> 80:32392/TCP,443:32499/TCP 22h
nginx-ingress-ingress-nginx-controller-admission ClusterIP 10.0.159.223 <none> 443/TCP 22h

minikube and ingress-nginx does not have port 80 open

I'm new to ingress-nginx and I enabled it with minikube using minikube addons enable ingress. When looking for the services related to ingress-nginx, I ran kubectl get services -n kube-system and got:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller-admission ClusterIP 10.96.141.138 <none> 443/TCP 16m
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 16m
Noticing that I'm missing port 80, I ran kubectl describe service/ingress-nginx-controller-admission -n kube-system and got:
Name: ingress-nginx-controller-admission
Namespace: kube-system
Labels: addonmanager.kubernetes.io/mode=Reconcile
app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
Annotations: <none>
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: ClusterIP
IP: 10.96.141.138
Port: https-webhook 443/TCP
TargetPort: webhook/TCP
Endpoints: 172.17.0.3:8443
Session Affinity: None
Events: <none>
When trying to access an endpoint https://ingress-nginx-controller-admission.kube-system.svc.cluster.local/foo, I get this error:
FetchError: request to https://ingress-nginx-controller-admission.kube-system.svc.cluster.local/foo failed, reason: unable to verify the first certificate
although hitting the endpoint /foo from within a pod through the ingress works just fine. I was looking into the TLS-related documentation to no avail. Any help on this would be much appreciated.
EDIT: I've added the output of kubectl get svc -A per request:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default auth-svc ClusterIP 10.100.218.231 <none> 4000/TCP 13s
default client-svc ClusterIP 10.106.143.107 <none> 3000/TCP 12s
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 37m
kube-system ingress-nginx-controller-admission ClusterIP 10.96.141.138 <none> 443/TCP 36m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 37m
The service you're looking at is, as its name indicates, the Admission Controller of the minikube ingress addon.
It's used by the Ingress Controller to validate the Ingress definitions and configure the underlying nginx (you can take a look at the minikube ingress addon deployment).
To access the exposed Ingresses that you may have defined, you need to hit the ports 80 or 443 of the minikube machine (available via minikube ip). (You can see those listening ports by doing a minikube ssh -- docker container ls --filter label=app.kubernetes.io/name=ingress-nginx)
The documentation you need is this one: https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/
So, if you defined an Ingress with host=hello-world.info and path=/foo, you will be able to access it with:
curl --header 'Host: hello-world.info' http://`minikube ip`:80/foo
Of course, it's better to define a DNS entry to map hello-world.info to the minikube IP.
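For reference, a minimal Ingress matching that curl call might look like the following (the backend service name and port are assumptions, not taken from your cluster):

```yaml
# Sketch: routes hello-world.info/foo to a backend Service named "web".
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: web        # assumption: your backend Service
            port:
              number: 8080   # assumption: the port it exposes
```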
running kubectl expose deployment ingress-nginx-controller --target-port=80 --type=NodePort -n kube-system allows me to access the ingress from within a pod in the cluster.
Nginx ingress on minikube does not use a Service to provide access to the ingress pods.
It uses hostPort. Here is part of the yaml of the ingress controller's pod:
$ kubectl get pod -n kube-system ingress-nginx-controller-789d9c4dc-5wnc2 -oyaml
[...]
    ports:
    - containerPort: 80
      hostPort: 80
      name: http
      protocol: TCP
    - containerPort: 443
      hostPort: 443
      name: https
      protocol: TCP
    - containerPort: 8443
      name: webhook
      protocol: TCP
[...]
Its behavior is documented in the k8s API reference docs:
hostPort
Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this.
Thanks to this, the ingress can be easily accessed with:
curl $(minikube ip)

Kubernetes (docker-desktop) with multiple LoadBalancer services

Is it true that I cannot have two LoadBalancer services on a docker-desktop cluster (osx), because they would both use localhost (and all ports are forwarded)?
I created an example and the latter service is never assigned an external IP address but stays in state pending. However, the former is accessible on localhost.
> kubectl get all
NAME READY STATUS RESTARTS AGE
pod/whoami-deployment-9f9c86c4f-l5lkj 1/1 Running 0 28s
pod/whoareyou-deployment-b896ddb9c-lncdm 1/1 Running 0 27s
pod/whoareyou-deployment-b896ddb9c-s72sc 1/1 Running 0 27s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 95s
service/whoami-service LoadBalancer 10.97.171.139 localhost 80:30024/TCP 27s
service/whoareyou-service LoadBalancer 10.97.171.204 <pending> 80:32083/TCP 27s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/whoami-deployment 1/1 1 1 28s
deployment.apps/whoareyou-deployment 2/2 2 2 27s
NAME DESIRED CURRENT READY AGE
replicaset.apps/whoami-deployment-9f9c86c4f 1 1 1 28s
replicaset.apps/whoareyou-deployment-b896ddb9c 2 2 2 27s
Detailed state of whoareyou-service:
kubectl describe service whoareyou-service
Name: whoareyou-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"whoareyou-service","namespace":"default"},"spec":{"ports":[{"name...
Selector: app=whoareyou
Type: LoadBalancer
IP: 10.106.5.8
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 30333/TCP
Endpoints: 10.1.0.209:80,10.1.0.210:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I decided to copy my comments, as they partially explain the problem, and make a Community Wiki answer out of them so it is more clearly seen and available for possible further edits by the Community:
It probably works exactly the same way as in Minikube. As docker-desktop is unable to provision a real LoadBalancer, it can still "simulate" creating a Service of that type by falling back on a NodePort (this can easily be seen from the port range it uses). Two LoadBalancer Services cannot share the same ExternalIP, and if you create a second Service of this type, your docker-desktop has no choice but to hand out localhost one more time. As it is already used by one Service it cannot be used by another, and that's why the second one remains in a pending state.
Note that if you create a real LoadBalancer in a cloud environment, a new IP is provisioned each time; there is no situation in which the next LoadBalancer you create gets an IP that is already used by an existing one. Here, however, the only IP available is localhost, and it is already in use. Anyway, I would recommend you simply use NodePort if you want to expose your Deployment to the external world.
Think about using Ingress controller instead.
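As a sketch, exposing the second deployment with a NodePort Service instead (names taken from the question, the nodePort value is an assumption) avoids the localhost conflict entirely:

```yaml
# Hypothetical NodePort alternative for whoareyou: reachable on
# localhost:<nodePort> without competing for the single localhost LB address.
apiVersion: v1
kind: Service
metadata:
  name: whoareyou-service
spec:
  type: NodePort
  selector:
    app: whoareyou
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080   # optional; must fall in the 30000-32767 range
```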
So basically, it's 3 steps after installing docker-desktop:
Create a wildcard certificate locally
Set up the SSL certificate for the local env
Install the Ingress Controller
Detailed here: https://github.com/kubernetes-tn/guideline-kubernetes-enterprise/blob/master/general/desktop-env-setup.md
I came across this question while looking to set up a lightweight local environment with minimal dependencies.
I found that two LoadBalancer services work on localhost when they use different port numbers.
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-one-lb
spec:
  ports:
  - name: 8081-tcp
    port: 8081
    protocol: TCP
    targetPort: 8080
  selector:
    name: webapp-one
  type: LoadBalancer
status:
  loadBalancer: {}
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-two-lb
spec:
  ports:
  - name: 8082-tcp
    port: 8082
    protocol: TCP
    targetPort: 8080
  selector:
    name: webapp-two
  type: LoadBalancer
status:
  loadBalancer: {}
As others have said, Ingress is more flexible and allows for sub-domain and path based routing without having to worry about port conflicts, but it comes with an additional learning curve.
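For completeness, a single Ingress could route to both services by path, so only the ingress controller itself needs a LoadBalancer (the paths below are assumptions for illustration):

```yaml
# Sketch: path-based routing to the two backends through one entry point.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapps
spec:
  rules:
  - http:
      paths:
      - path: /one
        pathType: Prefix
        backend:
          service:
            name: webapp-one-lb
            port:
              number: 8081
      - path: /two
        pathType: Prefix
        backend:
          service:
            name: webapp-two-lb
            port:
              number: 8082
```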

How to access a Kubernetes Pod via a Service that is running on localhost's Docker-Kubernetes

I'm not sure how to access the Pod which is running behind a Service.
I have Docker CE installed and running. With this, I have the Docker 'Kubernetes' running.
I created a Pod manifest, created it with kubectl, and then used port-forwarding to test that it's working, and it was. Tick!
Next I created a Service of type LoadBalancer, created that with kubectl as well, and it's running ... but I'm not sure how to test it / access the Pod that is running.
Here's the terminal outputs:
Tests-MBP:k8s test$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
hornet-data 1/1 Running 0 4h <none>
Tests-MBP:k8s test$ kubectl get services --show-labels
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS
hornet-data-lb LoadBalancer 10.0.44.157 XX.XX.XX.XX 8080:32121/TCP 4h <none>
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 14d component=apiserver,provider=kubernetes
Tests-MBP:k8s test$
Not sure if the pod Label <none> is a problem? I'm using labels for the Service selector.
Here's the two files...
apiVersion: v1
kind: Pod
metadata:
  name: hornet-data
  labels:
    app: hornet-data
spec:
  containers:
  - image: ravendb/ravendb
    name: hornet-data
    ports:
    - containerPort: 8080
and
apiVersion: v1
kind: Service
metadata:
  name: hornet-data-lb
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: hornet-data
Update 1:
As requested by @vasily:
Tests-MBP:k8s test$ kubectl get ep hornet-data-lb
NAME ENDPOINTS AGE
hornet-data-lb <none> 5h
Update 2:
More info for/from Vasily:
Tests-MBP:k8s test$ kubectl apply -f hornet-data-pod.yaml
pod/hornet-data configured
Tests-MBP:k8s test$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
hornet-data 1/1 Running 0 5h app=hornet-data
Tests-MBP:k8s test$ kubectl get services --show-labels
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS
hornet-data-lb LoadBalancer 10.0.44.157 XX.XX.XX.XX 8080:32121/TCP 5h <none>
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 14d component=apiserver,provider=kubernetes
@vailyangapov basically answered this via comments in the OP. This answer is in two parts.
I didn't apply my changes in my manifest. I made some changes to my services yaml file but didn't push these changes up. As such I needed to do kubectl apply -f myPod.yaml.
I was in the wrong context. The current context was pointing at a test Azure Kubernetes Service cluster, while I thought everything was running on the localhost cluster that comes with Docker CE (called the docker-for-desktop cluster). As this is a new machine, I had failed to enable Kubernetes in Docker (it's a manual step AFTER Docker CE is installed, and the default setting leaves it NOT enabled / not ticked). Once I noticed that, I ticked the option to enable Kubernetes and the docker-for-desktop cluster was installed. Then I manually switched over to this context: kubectl config use-context docker-for-desktop
Both of these mistakes were simple. The reason for writing them up as an answer is to hopefully help others review their own settings when something similar isn't working.
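The sanity checks that would have caught both mistakes, as a sketch (the context name may differ between Docker versions):

```shell
# 1. Which cluster is kubectl actually talking to?
kubectl config get-contexts
kubectl config use-context docker-for-desktop

# 2. Re-apply the manifest so local edits actually reach the cluster.
kubectl apply -f hornet-data-pod.yaml
```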

Cannot Use a static IP address outside of the node resource group

I am trying to use a static ip address for the dashboard that is created outside of the node resource group following the guide located here, but it is not working. (This is for a firewalled dev-only cluster and won't go to production.)
What I have done so far:
Created a public ip address in resourcegroup1
Create an AKS cluster in resourcegroup1 tied to a azure ad application.
Added the Azure AD application's service principal as a "Network Contributor" in resourcegroup1.
Added service.beta.kubernetes.io/azure-load-balancer-resource-group: resourcegroup1 to my service.yaml file.
Added loadBalancerIP with the ip address from step 1.
Whenever I apply service.yaml, the service says it's in a pending state. When I run kubectl describe service, it shows the following output:
Name: kubernetes-dashboard
Namespace: kube-system
Labels: <none>
Annotations: externalTrafficPolicy=Local
service.beta.kubernetes.io/azure-load-balancer-resource-group=resourcegroup1
Selector: k8s-app=kubernetes-dashboard
Type: LoadBalancer
IP: 10.0.42.112
IP: <IP FROM STEP 1>
Port: <unset> 80/TCP
TargetPort: 9090/TCP
NodePort: <unset> 31836/TCP
Endpoints: 10.244.0.6:9090
Session Affinity: None
External Traffic Policy: Cluster
LoadBalancer Source Ranges: <SNIPPED>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 38s (x6 over 3m) service-controller Ensuring load balancer
Warning CreatingLoadBalancerFailed 38s (x6 over 3m) service-controller Error creating load balancer (will retry): failed to ensure load balancer for service kube-system/kubernetes-dashboard: user supplied IP Address <IP FROM STEP 1> was not found
Here is my service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: resourcegroup1
spec:
  type: LoadBalancer
  loadBalancerIP: <IP FROM STEP 1>
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
  loadBalancerSourceRanges:
  - <SNIP>
  - <SNIP>
The error you got means the public IP cannot be found in resourcegroup1 in the same region as the AKS cluster. A public IP in a different region produces exactly the error you are seeing.
So you should create the public IP in the same region as your AKS cluster. Then it will work for you.
My AKS cluster was 1.9.x which was older than the required 1.10.x. I was using Terraform to create the cluster and there appears to be a bug with how a missing kubernetes_version is handled. I submitted an issue on their repo.
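Assuming the Azure CLI, a sketch of creating the public IP in the cluster's own region (resource group and cluster names are placeholders from the question; the region is an assumption):

```shell
# Look up the AKS cluster's region first.
az aks show --resource-group resourcegroup1 --name myAksCluster \
  --query location -o tsv

# Create the static public IP in that same region.
az network public-ip create \
  --resource-group resourcegroup1 \
  --name dashboard-ip \
  --location eastus \
  --allocation-method Static
# assumption: eastus must be replaced with the region printed above
```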
