Cannot access K8s dashboard after installation of kubeadm-dind-cluster - docker

I am using kubeadm-dind-cluster, a Kubernetes multi-node cluster for developers of Kubernetes and of projects that extend Kubernetes, based on kubeadm and DIND (Docker in Docker).
I have a fresh CentOS 7 install on which I have just run ./dind-cluster-v1.13.sh up. I did not set any other values and am using all the default values for networking.
All appears well:
[root@node01 dind-cluster]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kube-master Ready master 23h v1.13.0
kube-node-1 Ready <none> 23h v1.13.0
kube-node-2 Ready <none> 23h v1.13.0
[root@node01 dind-cluster]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: http://127.0.0.1:32769
  name: dind
contexts:
- context:
    cluster: dind
    user: ""
  name: dind
current-context: dind
kind: Config
preferences: {}
users: []
[root@node01 dind-cluster]# kubectl cluster-info
Kubernetes master is running at http://127.0.0.1:32769
KubeDNS is running at http://127.0.0.1:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@node01 dind-cluster]#
and it appears healthy:
[root@node01 dind-cluster]# curl -w '\n' http://127.0.0.1:32769/healthz
ok
I know the dashboard service is there:
[root@node01 dind-cluster]# kubectl get services kubernetes-dashboard -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.102.82.8 <none> 80:31990/TCP 23h
however any attempt to access it is refused:
[root@node01 dind-cluster]# curl http://127.0.0.1:8080/api/v1/namespaces/kube-system/services/kubernetes-dashboard
curl: (7) Failed connect to 127.0.0.1:8080; Connection refused
[root@node01 dind-cluster]# curl http://127.0.0.1:8080/ui
curl: (7) Failed connect to 127.0.0.1:8080; Connection refused
I also see the following in the firewall log:
2019-02-05 19:45:19 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C DOCKER -p tcp -d 127.0.0.1 --dport 32769 -j DNAT --to-destination 10.192.0.2:8080 ! -i br-669b654fc9cd' failed: iptables: No chain/target/match by that name.
2019-02-05 19:45:19 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C DOCKER ! -i br-669b654fc9cd -o br-669b654fc9cd -p tcp -d 10.192.0.2 --dport 8080 -j ACCEPT' failed: iptables: Bad rule (does a matching rule exist in that chain?).
2019-02-05 19:45:19 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C POSTROUTING -p tcp -s 10.192.0.2 -d 10.192.0.2 --dport 8080 -j MASQUERADE' failed: iptables: No chain/target/match by that name.
Any suggestions on how I actually access the dashboard externally from my development machine? I don't want to use the proxy to do this.

You should be able to access kubernetes-dashboard using the following addresses:
ClusterIP (works for other pods in the cluster):
http://10.102.82.8:80/
NodePort (works for every host that can reach the cluster nodes by their IPs):
http://clusterNodeIP:31990/
The Kubernetes dashboard usually serves HTTPS, so you may need to use a different port (and scheme) in requests to the kubernetes-dashboard Service.
You can also access the dashboard using kube-apiserver as a proxy:
Directly to dashboard Pod:
https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/pods/https:kubernetes-dashboard-pod-name:/proxy/#!/login
To dashboard ClusterIP service:
https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
I can guess that <master-ip>:<apiserver-port> would mean 127.0.0.1:32769 in your case.
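For example, a minimal check of both routes from the host could look like this (a sketch; <node-ip> is a placeholder for an address shown by kubectl get nodes -o wide, and you may need https:// and curl -k if the dashboard serves TLS):
kubectl get nodes -o wide
curl http://<node-ip>:31990/
curl http://127.0.0.1:32769/api/v1/namespaces/kube-system/services/kubernetes-dashboard:/proxy/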

In that situation, you'd indeed expect everything to work out of the box. However, the setup seems to be missing a suitable service account to access and manage the cluster through the dashboard.
Note that I might be entirely misled here, and maybe kubeadm-dind-cluster in fact provides such an account. Please note also that this project was discontinued some time ago.
Anyway, here is how I fixed that problem. Hopefully it's of some help for other people (still) trying this out...
Define the missing account and role binding: create a YAML file (e.g. k8s-dashboard-RBAC.yaml, as applied below) with the following content:
# ------------------- Dashboard Secret ------------------- #
# ...already available
# ------------------- Dashboard Service Account ------------------- #
# ...already available
# ------------------- Dashboard Cluster Admin Account ------------------- #
#
# added by Ichthyo 2019-2
# - ServiceAccount and ClusterRoleBinding
# - allows administrative access into the kube-system namespace
# - necessary to log in via the Kubernetes Dashboard
#
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dash-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dash-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dash-admin
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
Apply it to the already running cluster
kubectl apply -f k8s-dashboard-RBAC.yaml
Then find out the security token corresponding to dash-admin
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep dash-admin | awk '{print $1}') | egrep '^token:\s+' | awk '{print $2}'
Finally, paste the extracted token into the login screen.
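Alternatively, a one-liner that prints the decoded token directly (same assumption that the secret name starts with dash-admin) might be:
kubectl -n kube-system get secret \
  $(kubectl -n kube-system get secret | grep dash-admin | awk '{print $1}') \
  -o jsonpath='{.data.token}' | base64 -d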

Related

Ingress configuration issue in Docker kubernetes cluster

I am fairly new to Kubernetes and Docker in general and am experiencing issues.
I am running a single local Kubernetes cluster via Docker and am using Skaffold to control the build-up and teardown of objects within the cluster. When I run skaffold dev the build seems successful, yet when I attempt to make a request to my cluster via Postman the request hangs. I am using an ingress-nginx controller and I feel the bug lies somewhere there. My request handling logic is simple, so I feel the issue is not in the route handling but in the configuration of my cluster, specifically of the ingress controller. I will post below my skaffold yaml config and my ingress yaml config.
Any help is greatly appreciated as I have struggled with this bug for some time.
ingress yaml config :
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
  - host: ticketing.dev
    http:
      paths:
      - path: /api/users/?(.*)
        pathType: Prefix
        backend:
          service:
            name: auth-srv
            port:
              number: 3000
Note that I have an entry in my /etc/hosts file mapping ticketing.dev to 127.0.0.1
Auth service yaml config :
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: auth
        image: conorl47/auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
  - name: auth
    protocol: TCP
    port: 3000
    targetPort: 3000
skaffold yaml config :
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: conorl47/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
For installing the ingress-nginx controller I followed the installation instructions at https://kubernetes.github.io/ingress-nginx/deploy/, namely the Docker Desktop installation instructions.
After running that command I see the following two Docker containers running in Docker Desktop.
The two services created in the ingress-nginx namespace are :
❯ k get services -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.103.6.146 <pending> 80:30036/TCP,443:30465/TCP 22m
ingress-nginx-controller-admission ClusterIP 10.108.8.26 <none> 443/TCP 22m
When I kubectl describe both of these services I see the following :
❯ kubectl describe service ingress-nginx-controller -n ingress-nginx
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/version=1.0.0
helm.sh/chart=ingress-nginx-4.0.1
Annotations: <none>
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.103.6.146
IPs: 10.103.6.146
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 30036/TCP
Endpoints: 10.1.0.10:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 30465/TCP
Endpoints: 10.1.0.10:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 32485
Events: <none>
and :
❯ kubectl describe service ingress-nginx-controller-admission -n ingress-nginx
Name: ingress-nginx-controller-admission
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/version=1.0.0
helm.sh/chart=ingress-nginx-4.0.1
Annotations: <none>
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.108.8.26
IPs: 10.108.8.26
Port: https-webhook 443/TCP
TargetPort: webhook/TCP
Endpoints: 10.1.0.10:8443
Session Affinity: None
Events: <none>
As it seems, you have made the ingress service of type LoadBalancer; this will usually provision an external load balancer from your cloud provider of choice. That's also why it's still pending: it is waiting for the load balancer to be ready, which will never happen here.
If you want that ingress service to be reachable outside your cluster, you need to use type NodePort.
Their docs are not great on this point, and it seems to be like this by default. You could download the content of https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/cloud/deploy.yaml and modify it before applying, or you use Helm, which lets you configure this.
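If you go the Helm route, the chart exposes the service type as a value; a sketch (the release name and namespace below are just examples) could be:
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.type=NodePort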
You could also do it in this dirty fashion.
kubectl apply --dry-run=client -o yaml -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/cloud/deploy.yaml \
| sed s/LoadBalancer/NodePort/g \
| kubectl apply -f -
You could also edit the controller service in place:
kubectl edit svc ingress-nginx-controller -n ingress-nginx
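Or, without an interactive editor, a quick sketch (assuming the 30036 HTTP NodePort from your output is kept, and that any path matching your /api/users/?(.*) rule will do):
kubectl -n ingress-nginx patch svc ingress-nginx-controller -p '{"spec":{"type":"NodePort"}}'
curl -H "Host: ticketing.dev" http://localhost:30036/api/users/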

K8s Kf-serving: Using a different storage other than Google Cloud Storage [duplicate]

Is it possible to replace the usage of Google Cloud Storage buckets with an alternative on-premises solution so that it is possible to run e.g. Kubeflow Pipelines completely independent from the Google Cloud Platform?
Yes, it is possible. You can use MinIO; it's like S3/GS but it runs on a persistent volume of your on-premises storage.
Here are the instructions on how to use it as a kfserving inference storage:
Validate that minio is running in your kubeflow installation:
$ kubectl get svc -n kubeflow |grep minio
minio-service ClusterIP 10.101.143.255 <none> 9000/TCP 81d
Enable a tunnel for your minio:
$ kubectl port-forward svc/minio-service -n kubeflow 9000:9000
Forwarding from 127.0.0.1:9000 -> 9000
Forwarding from [::1]:9000 -> 9000
Browse http://localhost:9000 to get to the minio UI and create a bucket/upload your model. Credentials minio/minio123. Alternatively you can use the mc command to do it from your terminal:
$ mc ls minio/models/flowers/0001/
[2020-03-26 13:16:57 CET] 1.7MiB saved_model.pb
[2020-04-25 13:37:09 CEST] 0B variables/
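If you don't have an mc alias configured yet, a minimal setup against the port-forwarded endpoint could look like this (assuming a recent mc release and the default minio/minio123 credentials; the local ./flowers/0001 path is just an example):
mc alias set minio http://localhost:9000 minio minio123
mc mb minio/models
mc cp --recursive ./flowers/0001 minio/models/flowers/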
Create a Secret and a ServiceAccount for the MinIO access. Note that the s3-endpoint defines the path to MinIO, and that the key ID and access key are the credentials encoded in base64:
$ kubectl get secret mysecret -n homelab -o yaml
apiVersion: v1
data:
  awsAccessKeyID: bWluaW8=
  awsSecretAccessKey: bWluaW8xMjM=
kind: Secret
metadata:
  annotations:
    serving.kubeflow.org/s3-endpoint: minio-service.kubeflow:9000
    serving.kubeflow.org/s3-usehttps: "0"
  name: mysecret
  namespace: homelab
$ kubectl get serviceAccount -n homelab sa -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa
  namespace: homelab
secrets:
- name: mysecret
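The base64 values in the secret are simply the plain credentials encoded; for reference, the ones above can be reproduced with:
echo -n 'minio'    | base64   # bWluaW8=
echo -n 'minio123' | base64   # bWluaW8xMjM=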
Finally, create your inferenceservice as follows:
$ kubectl get inferenceservice tensorflow-flowers -n homelab -o yaml
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: tensorflow-flowers
  namespace: homelab
spec:
  default:
    predictor:
      serviceAccountName: sa
      tensorflow:
        storageUri: s3://models/flowers

How to pull image from Docker registry within Kubernetes cluster?

I'm learning Kubernetes and want to set up a Docker registry to run within my cluster, deploy any custom code to this private registry, then have my nodes pull images from this private registry to create pods. I've described my setup in this StackOverflow question
Originally I was caught up trying to figure out SSL certificates, but for now I've postponed that and I'm trying to work with an insecure registry. To that end I've created the following pod to run my registry (I know it's a pod and not a replica set or deployment -- this is only for experimental purposes and I'll make it cleaner once it's working):
apiVersion: v1
kind: Pod
metadata:
  name: docker-registry
  labels:
    app: docker-registry
spec:
  containers:
  - name: docker-registry
    image: registry:2
    ports:
    - containerPort: 80
      hostPort: 80
    env:
    - name: REGISTRY_HTTP_ADDR
      value: 0.0.0.0:80
I then created the following NodePort service:
apiVersion: v1
kind: Service
metadata:
  name: docker-registry-external
  labels:
    app: docker-registry
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
    nodePort: 32000
  selector:
    app: docker-registry
I have a load balancer set up in front of my Kubernetes cluster which I configured to route traffic on port 80 to port 32000. So I can hit this registry at http://example.com
I then updated my local /etc/docker/daemon.json as follows:
{
"insecure-registries": ["example.com"]
}
With this I was able to push an image to my registry successfully:
> docker pull ubuntu
> docker tag ubuntu example.com/my-ubuntu
> docker push example.com/my-ubuntu
The push refers to repository [example.com/my-ubuntu]
cc9d18e90faa: Pushed
0c2689e3f920: Pushed
47dde53750b4: Pushed
latest: digest: sha256:1d7b639619bdca2d008eca2d5293e3c43ff84cbee597ff76de3b7a7de3e84956 size: 943
Now I want to try and pull this image when creating a pod. So I created the following ClusterIP service to make my registry accessible within my cluster:
apiVersion: v1
kind: Service
metadata:
  name: docker-registry-internal
  labels:
    app: docker-registry
spec:
  type: ClusterIP
  ports:
  - targetPort: 80
    port: 80
  selector:
    app: docker-registry
Then I created a secret:
apiVersion: v1
kind: Secret
metadata:
  name: local-docker
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: ewoJImluc2VjdXJlLXJlZ2lzdHJpZXMiOiBbImRvY2tlci1yZWdpc3RyeS1pbnRlcm5hbCJdCn0K
The base64 bit decodes to:
{
"insecure-registries": ["docker-registry-internal"]
}
Finally, I created the following pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-docker
  labels:
    name: test
spec:
  imagePullSecrets:
  - name: local-docker
  containers:
  - name: test
    image: docker-registry-internal/my-ubuntu
When I tried to create this pod (kubectl create -f test-pod.yml) and looked at my cluster, this is what I saw:
> kubectl get pods
NAME READY STATUS RESTARTS AGE
test-docker 0/1 ErrImagePull 0 4s
docker-registry 1/1 Running 0 34m
> kubectl describe pod test-docker
...
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m33s default-scheduler Successfully assigned default/test-docker to pool-uqa-dev-3sli8
Normal Pulling 3m22s (x2 over 3m32s) kubelet Pulling image "docker-registry-internal/my-ubuntu"
Warning Failed 3m22s (x2 over 3m32s) kubelet Failed to pull image "docker-registry-internal/my-ubuntu": rpc error: code = Unknown desc = Error response from daemon: pull access denied for docker-registry-internal/my-ubuntu, repository does not exist or may require 'docker login'
Warning Failed 3m22s (x2 over 3m32s) kubelet Error: ErrImagePull
Normal SandboxChanged 3m19s (x7 over 3m32s) kubelet Pod sandbox changed, it will be killed and re-created.
Normal BackOff 3m18s (x6 over 3m30s) kubelet Back-off pulling image "docker-registry-internal/my-ubuntu"
Warning Failed 3m18s (x6 over 3m30s) kubelet Error: ImagePullBackOff
It's clearly failing to find the host "docker-registry-internal", despite the ClusterIP service.
I tried inspecting a pod from the inside using a trick I found online:
> kubectl run -i --tty --rm debug --image=ubuntu --restart=Never -- bash
If you don't see a command prompt, try pressing enter.
root@debug:/# cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.244.1.67 debug
It doesn't seem like ClusterIP services are being added to the /etc/hosts file, so I'm not sure how services are supposed to find one another?
I tried watching several Kubernetes tutorials on general service communication (e.g. an app pod communicating with a redis pod) and every time all they did was supply the service name as a host and it magically connected. I'm not sure if I'm missing something. Bear in mind I'm brand new to Kubernetes so the internals are still mystical to me.
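For reference, in-cluster service discovery normally happens through the cluster DNS rather than /etc/hosts, so a Service is reachable by its name (and namespace). A minimal check from a throwaway pod, assuming the Service lives in the default namespace, might be:
kubectl run -i --tty --rm dns-test --image=busybox --restart=Never -- \
  nslookup docker-registry-internal.default.svc.cluster.local
Keep in mind, though, that image pulls are performed by each node's container runtime, not from inside a pod, so the runtime also has to be able to resolve and trust the registry name used in the image field.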

How can I access nginx ingress on my local?

I can't connect to my app running with nginx ingress (Docker Desktop win 10).
The nginx-ingress controller pod is running, the app is healthy, and I have created an ingress. However, when I try to connect to my app on localhost, I get "connection refused".
I see this error in the log:
[14:13:13.028][VpnKit ][Info ] vpnkit.exe: Connected Ethernet interface f6:16:36:bc:f9:c6
[14:13:13.028][VpnKit ][Info ] vpnkit.exe: UDP interface connected on 10.96.181.150
[14:13:22.320][GoBackendProcess ][Info ] Adding vpnkit-k8s-controller tcp forward from 0.0.0.0:80 to 10.96.47.183:80
[14:13:22.323][ApiProxy ][Error ] time="2019-12-09T14:13:22-05:00" msg="Port 443 for service ingress-nginx is already opened by another service"
I think port 443 is used by another app, possibly zscaler security or skype.
Excerpt from netstat -a -b:
[svchost.exe]
TCP 0.0.0.0:443 0.0.0.0:0 LISTENING 16012
[com.docker.backend.exe]
TCP 0.0.0.0:443 0.0.0.0:0 LISTENING 8220
I don't know how to make the ingress work. Please help!
My ingress:
$ kubectl describe ing kbvalues-deployment-dev-ingress
Name: kbvalues-deployment-dev-ingress
Namespace: default
Address: localhost
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
localhost
/ kbvalues-deployment-dev-frontend:28000 (10.1.0.174:8080)
Annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/cors-allow-headers: X-Forwarded-For, X-app123-XPTO
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 42m nginx-ingress-controller Ingress default/kbvalues-deployment-dev-ingress
Normal UPDATE 6s (x5 over 42m) nginx-ingress-controller Ingress default/kbvalues-deployment-dev-ingress
My service:
$ kubectl describe svc kbvalues-deployment-dev-frontend
Name: kbvalues-deployment-dev-frontend
Namespace: default
Labels: chart=tomcat-sidecar-war-1.0.4
environment=dev
name=kbvalues-frontend-dev
release=kbvalues-test
tier=frontend
Annotations: <none>
Selector: app=kbvalues-dev
Type: ClusterIP
IP: 10.98.89.94
Port: <unset> 28000/TCP
TargetPort: 8080/TCP
Endpoints: 10.1.0.174:8080
Session Affinity: None
Events: <none>
I am trying to access the app at: http://localhost:28000/health. I verified that the /health URL is accessible locally within the web server container.
I appreciate any help you can offer.
Edit:
I tried altering the ingress-nginx service to remove HTTPS, as suggested here:
https://stackoverflow.com/a/56303330/166850
This got rid of the 443 error in the logs, but didn't fix my setup (still getting connection refused).
Edit 2: Here is the Ingress YAML definition (kubectl get -o yaml):
$ kubectl get ing -o yaml
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    creationTimestamp: "2019-12-09T18:47:33Z"
    generation: 5
    name: kbvalues-deployment-dev-ingress
    namespace: default
    resourceVersion: "20414"
    selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/kbvalues-deployment-dev-ingress
    uid: 5c34bf7f-1ab4-11ea-80e4-00155d169409
  spec:
    rules:
    - host: localhost
      http:
        paths:
        - backend:
            serviceName: kbvalues-deployment-dev-frontend
            servicePort: 28000
          path: /
  status:
    loadBalancer:
      ingress:
      - hostname: localhost
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Edit 3: Output of kubectl get svc -A (ingress line only):
ingress-nginx ingress-nginx LoadBalancer 10.96.47.183 localhost 80:30470/TCP 21h
Edit 4: I tried to get the VM's IP address from Windows Hyper-V, but it seems like the VM doesn't have an IP?
PS C:\> (Get-VMNetworkAdapter -VMName DockerDesktopVM)
Name IsManagementOs VMName SwitchName MacAddress Status IPAddresses
---- -------------- ------ ---------- ---------- ------ -----------
Network Adapter False DockerDesktopVM DockerNAT 00155D169409 {Ok} {}
Edit 5:
Output of netstat -a -n -o -b for port 80:
TCP 0.0.0.0:80 0.0.0.0:0 LISTENING 4
Can not obtain ownership information
I have managed to create an Ingress resource in Kubernetes on Docker for Windows.
Steps to reproduce:
Enable Hyper-V
Install Docker for Windows and enable Kubernetes
Connect kubectl
Enable Ingress
Create deployment
Create service
Create ingress resource
Add host into local hosts file
Test
Enable Hyper-V
From Powershell with administrator access run below command:
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
System could ask you to reboot your machine.
Install Docker for Windows and enable Kubernetes
Install Docker application with all the default options and enable Kubernetes
Connect kubectl
Install kubectl .
Enable Ingress
Run these commands:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
Edit: Make sure no other service is using port 80
Restart your machine. From a cmd prompt running as admin, do:
net stop http
Stop the listed services using services.msc
Use: netstat -a -n -o -b and check for other processes listening on port 80.
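For example, to find out which process owns port 80 (the PID below is a placeholder taken from the netstat output), you could run from an elevated prompt:
netstat -a -n -o | findstr :80
tasklist /FI "PID eq <pid-from-netstat>"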
Create deployment
Below is a simple deployment with pods that will reply to requests:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  selector:
    matchLabels:
      app: hello
      version: 2.0.0
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
        version: 2.0.0
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:2.0"
        env:
        - name: "PORT"
          value: "50001"
Apply it by running command:
$ kubectl apply -f file_name.yaml
Create service
For you to be able to communicate with the pods, you need to create a service.
Example below:
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: NodePort
  selector:
    app: hello
    version: 2.0.0
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 50001
Apply this service definition by running command:
$ kubectl apply -f file_name.yaml
Create Ingress resource
Below is a simple Ingress resource using the service created above:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  rules:
  - host: hello-test.internal
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-service
          servicePort: http
Take a look at:
spec:
  rules:
  - host: hello-test.internal
hello-test.internal will be used as the hostname to connect to your pods.
Apply your Ingress resource by invoking command:
$ kubectl apply -f file_name.yaml
Add host into local hosts file
I found this Github link that will allow you to connect to your Ingress resource by hostname.
To achieve that add a line 127.0.0.1 hello-test.internal to your C:\Windows\System32\drivers\etc\hosts file and save it.
You will need Administrator privileges to do that.
Edit: The newest version of Docker Desktop for Windows already adds a hosts file entry:
127.0.0.1 kubernetes.docker.internal
Test
Display the information about Ingress resources by invoking command:
kubectl get ingress
It should show:
NAME HOSTS ADDRESS PORTS AGE
hello-ingress hello-test.internal localhost 80 6m2s
Now you can access your Ingress resource by opening your web browser and typing
http://kubernetes.docker.internal/
The browser should output:
Hello, world!
Version: 2.0.0
Hostname: hello-84d554cbdf-2lr76
Hostname: hello-84d554cbdf-2lr76 is the name of the pod that replied.
If this solution is not working, please check (with Administrator privileges) whether something else is using port 80, using the command:
netstat -a -n -o
On Windows the Kubernetes cluster is running in a VM. Try to access the ingress on that VM's IP address instead of localhost.
I was facing a similar issue while deploying the ingress-nginx controller using the manual steps described for a bare-metal node at ingress-nginx-deploy. I then referred to the GitHub link mentioned by @RMorrisey, which leads to other threads where they suggest installing ingress-nginx using the steps documented for Mac, and it worked without making any changes to the hosts file, etc.
The problem is that your service has a type of ClusterIP, which isn't accessible externally. You need it to be of type NodePort, which is what is done in Dawid Kruk's instructions.
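A minimal way to try that on the existing service without touching the chart (a sketch, not verified against your Helm templates) would be:
kubectl patch svc kbvalues-deployment-dev-frontend -p '{"spec":{"type":"NodePort"}}'
kubectl get svc kbvalues-deployment-dev-frontend   # note the assigned 3xxxx node port
After that you should be able to reach the app at http://localhost:<assigned-node-port>/health on Docker Desktop.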

By kubernetes nodeport I can't get access to the application

I have configured my service so that it is built on port 8080.
My Docker image also listens on port 8080.
I created my ReplicaSet with a configuration like this:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-app-backend-rs
spec:
  containers:
  - name: my-app-backend
    image: go-my-app-backend
    ports:
    - containerPort: 8080
    imagePullPolicy: Never
And finally I create a Service of type NodePort, also on port 8080, with the configuration below:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-app-backend-rs
  name: my-app-backend-svc-nodeport
spec:
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: my-app-backend
And after I describe the NodePort Service I see that I should be able to hit my app at http://127.0.0.1:31859 (e.g. curl http://127.0.0.1:31859/), but I get no response.
Type: NodePort
IP: 10.110.250.176
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31859/TCP
Endpoints: 172.17.0.6:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
What am I not understanding and what am I doing wrong? Can anyone explain this to me?
From your output, I can see that the endpoint below has been created, so one pod is ready to serve this NodePort service and the label selector is not the issue here.
Endpoints: 172.17.0.6:8080
First, ensure you can access the app by running curl http://podhostname:8080 once you are logged into the pod with kubectl exec -it podname sh (if curl is installed in the image running in that pod's container). If it is not, run a curl/ambassador container as a sidecar in that pod and from there try to access http://<>:8080 and make sure it works.
Remember that you can't access the NodePort service via localhost, since that will point to your master node if you are running the command from the master node.
You have to access this service by one of the methods below.
<CLUSTER-IP>:<PORT> (in your case 10.110.250.176:8080)
<1st node's IP>:31859
<2nd node's IP>:31859
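As a quick check, you could list the node addresses and curl the NodePort from outside the cluster (a sketch; <node-ip> is a placeholder, and if you are on minikube the same address is printed by minikube ip):
kubectl get nodes -o wide   # take the INTERNAL-IP (or EXTERNAL-IP) column
curl http://<node-ip>:31859/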
I tried to use curl after kubectl exec -it podname sh
In this very example the double dash is missing in front of the sh command.
Please note that the correct syntax can always be checked with kubectl exec -h and looks like:
kubectl exec (POD | TYPE/NAME) [-c CONTAINER] [flags] -- COMMAND [args...] [options]
if you have only one container per Pod it can be simplified to:
kubectl exec -it PODNAME -- COMMAND
The caveat of not specifying the container is that, in case of multiple containers in that Pod, you'll be connected to the first one :)
Example: kubectl exec -it pod/frontend-57gv5 -- curl localhost:80
I also tried to hit 10.110.250.176:80:31859 but this is incorrect, I think. Sorry, I'm a beginner at network stuff.
Yes, that is not correct, as the value for :port occurs twice. In that example you need to hit 10.110.250.176:8080 (as 10.110.250.176 is the Cluster IP).
And after I put describe on NodePort I see that I should hit (e.g. curl http://127.0.0.1:31859/) to my app on address http://127.0.0.1:31859, but I have no response.
It depends on where you are going to run that command.
In this very case it is not clear what exactly you have put into the ReplicaSet config (whether the Service's selector matches the ReplicaSet's labels), so let me explain how this is supposed to work.
Assuming we have the following ReplicaSet (the example below is a slightly modified version of the official documentation on the topic):
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend-rs
  labels:
    app: guestbook
    tier: frontend-meta
spec:
  # modify replicas according to your case
  replicas: 2
  selector:
    matchLabels:
      tier: frontend-label
  template:
    metadata:
      labels:
        tier: frontend-label   ## shall match spec.selector.matchLabels.tier
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
And the following service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: frontend
  name: frontend-svc-tier-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    tier: frontend-label   ## shall match labels from ReplicaSet spec
We can now create the ReplicaSet (RS) and the Service. As a result, we shall be able to see the RS, Pods, Service and Endpoints:
kubectl get rs -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
frontend-rs 2 2 2 10m php-redis gcr.io/google_samples/gb-frontend:v3 tier=frontend-label
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
frontend-rs-76sgd 1/1 Running 0 11m 10.12.0.31 gke-6v3n
frontend-rs-fxxq8 1/1 Running 0 11m 10.12.1.33 gke-m7z8
kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
frontend-svc-tier-nodeport NodePort 10.0.5.10 <none> 80:32113/TCP 9m41s tier=frontend-label
kubectl get ep -o wide
NAME ENDPOINTS AGE
frontend-svc-tier-nodeport 10.12.0.31:80,10.12.1.33:80 10m
kubectl describe svc/frontend-svc-tier-nodeport
Selector: tier=frontend-label
Type: NodePort
IP: 10.0.5.10
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 32113/TCP
Endpoints: 10.12.0.31:80,10.12.1.33:80
The important thing we can see from my example is that the Port was set to 80:32113/TCP for the service we have created.
That allows us to access the "gb-frontend:v3" app in a few different ways:
from inside the cluster: curl 10.0.5.10:80
(CLUSTER-IP:PORT) or curl frontend-svc-tier-nodeport:80
from the external network (internet): curl PUBLIC_IP:32113
here PUBLIC_IP is the IP at which you can reach a node in your cluster. All the nodes in the cluster listen on the NodePort and forward requests according to the Service's selector.
from the node: curl localhost:32113
Hope that helps.
