I ran the KServe sklearn-iris example and got `302 Found` - kubeflow

Serving the example model
Create the namespace
$ kubectl create namespace kserve-test
Create the InferenceService
$ kubectl apply -n kserve-test -f - <<EOF
apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "sklearn-iris"
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn
      storageUri: "gs://kfserving-examples/models/sklearn/1.0/model"
EOF
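Optionally, wait until the InferenceService reports Ready before querying it. This is only a sketch; it assumes kubectl wait can watch the CRD's Ready condition, which it normally can:
$ kubectl wait --for=condition=Ready inferenceservice/sklearn-iris -n kserve-test --timeout=300s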
check
$ kubectl get inferenceservices sklearn-iris -n kserve-test
NAME URL READY PREV LATEST PREVROLLEDOUTREVISION LATESTREADYREVISION AGE
sklearn-iris http://sklearn-iris.kserve-test.example.com True 100 sklearn-iris-predictor-default-00001 5h11m
Check SERVICE_HOSTNAME, INGRESS_PORT, and INGRESS_HOST
SERVICE_HOSTNAME
$ SERVICE_HOSTNAME=$(kubectl get inferenceservice sklearn-iris -n kserve-test -o jsonpath='{.status.url}' | cut -d "/" -f 3)
$ echo $SERVICE_HOSTNAME
sklearn-iris.kserve-test.example.com
INGRESS_PORT
$ INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
$ echo $INGRESS_PORT
31018
INGRESS_HOST
$ INGRESS_HOST=192.168.219.100
192.168.219.100 is the internal IP of my machine.
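If you prefer not to hard-code the address, a small sketch for deriving the first node's InternalIP instead:
$ INGRESS_HOST=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
$ echo $INGRESS_HOST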
Create the input file
$ cat <<EOF > "./iris-input.json"
{
  "instances": [
    [6.8, 2.8, 4.8, 1.4],
    [6.0, 3.4, 4.5, 1.6]
  ]
}
EOF
Send a request
$ curl -v -H "Host: ${SERVICE_HOSTNAME}" http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/sklearn-iris:predict -d #./iris-input.json
* Trying 192.168.219.100:31018...
* Connected to 192.168.219.100 (192.168.219.100) port 31018 (#0)
> POST /v1/models/sklearn-iris:predict HTTP/1.1
> Host: sklearn-iris.kserve-test.example.com
> User-Agent: curl/7.71.1
> Accept: */*
> Content-Length: 76
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 76 out of 76 bytes
* Mark bundle as not supporting multiuse
< HTTP/1.1 302 Found
< location: /dex/auth?client_id=kubeflow-oidc-authservice&redirect_uri=%2Flogin%2Foidc&response_type=code&scope=profile+email+groups+openid&state=MTY3MTU5MDMxOHxFd3dBRUhZek16WktlRlZJZFc1alowVlROVTA9fFynQ-3082qPF_-qUwnYllySrEQPAKGqpBuF-Pu9gcnx
< date: Wed, 21 Dec 2022 02:38:38 GMT
< x-envoy-upstream-service-time: 6
< server: istio-envoy
< content-length: 0
<
* Connection #0 to host 192.168.219.100 left intact
I got 302 Found.
As far as I can tell, this is because Dex is asking me to authenticate.
What I did to try to solve this
Authentication
I tried to get the authservice_session token by following the method here: kserve:github
From CLI
$ CLUSTER_IP=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.clusterIP}')
CLUSTER_IP: 10.103.239.220
$ curl -v http://${CLUSTER_IP}
* Trying 10.103.239.220:80...
* Connected to 10.103.239.220 (10.103.239.220) port 80 (#0)
> GET / HTTP/1.1
> Host: 10.103.239.220
> User-Agent: curl/7.71.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 302 Found
< content-type: text/html; charset=utf-8
< location: /dex/auth?client_id=kubeflow-oidc-authservice&redirect_uri=%2Flogin%2Foidc&response_type=code&scope=profile+email+groups+openid&state=MTY3MTU5MzE2MXxFd3dBRUdGM2JUSnlaMkZVYjFWUVNFa3dURGs9fEDuO8ql3cFsetSfKntLvFV0al5tEZJeh23VK-JrJubM
< date: Wed, 21 Dec 2022 03:26:01 GMT
< content-length: 269
< x-envoy-upstream-service-time: 4
< server: istio-envoy
<
Found.
* Connection #0 to host 10.103.239.220 left intact
I'm stuck at this stage; I don't think I should be getting 302 Found here.
From the browser
Copied the token from the authservice_session cookie:
$ SESSION=MTY3MTUyODQ2M3xOd3dBTkRkRVExbEdVa0kzVFRJMFMwOU9VRE5hV2pSS1VGVkNSRVJVUlRKVlVVOUlTa2hDVWpOU1RUZFRVRkJGVTFGV1N6UktXVkU9fCoQdbMu_diLBJAKLZSmF4qoqQTlINKq7A63hy-QNQcR
$ curl -v -H "Host: ${SERVICE_HOSTNAME}" -H "Cookie: authservice_session=${SESSION}" http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/sklearn-iris:predict -d ./iris-input.json
got 404 Not Found
curl -v -H "Host: ${SERVICE_HOSTNAME}" -H "Cookie: authservice_session=${SESSION}" http://${CLUSTER_IP}/v1/models/${MODEL_NAME}:predict -d #./iris-input.json
got 404 Not Found
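Two details worth double-checking here (assumptions on my part): MODEL_NAME is never set anywhere in this transcript, so the second request resolves to /v1/models/:predict, and it also carries no Host header for Istio to route on when hitting the cluster IP. A hedged retry might look like:
$ MODEL_NAME=sklearn-iris
$ curl -v -H "Host: ${SERVICE_HOSTNAME}" -H "Cookie: authservice_session=${SESSION}" -H "Content-Type: application/json" http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/${MODEL_NAME}:predict -d @./iris-input.json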
Added an EnvoyFilter to bypass Dex
$ vi envoyfilter.yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: sklearn-iris-filter
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: VIRTUAL_HOST
    match:
      routeConfiguration:
        vhost:
          name: sklearn-iris.kserve-test.example.com:31018
    patch:
      operation: MERGE
      value:
        per_filter_config:
          envoy.ext_authz:
            disabled: true
$ kubectl apply -f envoyfilter.yaml
Note: spec.configPatches[].match.routeConfiguration.vhost.name is sklearn-iris.kserve-test.example.com:31018
It's not working; I still get 302 Found.
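One possible reason the patch never matches (an assumption on my part) is the vhost name: Envoy usually names gateway virtual hosts after the gateway listener port (for example sklearn-iris.kserve-test.example.com:80), not the NodePort. The names Envoy actually uses can be inspected with istioctl:
$ INGRESS_POD=$(kubectl -n istio-system get pod -l istio=ingressgateway -o jsonpath='{.items[0].metadata.name}')
$ istioctl proxy-config routes ${INGRESS_POD} -n istio-system -o json | grep sklearn-iris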
External Authorization
reference: https://github.com/kubeflow/kubeflow/issues/4549#issuecomment-932259673
Add an AuthorizationPolicy
$ vi authorizationpolicy.yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: dex-auth
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  action: CUSTOM
  provider:
    # The provider name must match the extension provider defined in the mesh config.
    name: dex-auth-provider
  rules:
  # The rules specify when to trigger the external authorizer.
  - to:
    - operation:
        notPaths: ["/v1*"]
$ kubectl apply -f authorizationpolicy.yaml
Note: rules[].to[].operation.notPaths is ["/v1*"]
Then I deleted the pre-existing EnvoyFilter named authn-filter:
$ kubectl delete -n istio-system envoyfilters.networking.istio.io authn-filter
Next, restarted deployment/istiod:
$ kubectl rollout restart deployment/istiod -n istio-system
It's not working.
I still get 302 Found if I don't delete the pre-existing authn-filter EnvoyFilter, and the connection is blocked entirely if I do delete it.
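For the CUSTOM action to take effect, the dex-auth-provider referenced above has to be defined as an extensionProvider in the Istio mesh config; a quick check (a sketch, assuming the standard istio ConfigMap layout):
$ kubectl -n istio-system get configmap istio -o jsonpath='{.data.mesh}' | grep -A 5 extensionProviders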
What I need help with:
How can I authenticate with Dex and get a successful connection?
Or, if I can't authenticate with Dex, how can I bypass it?
Maybe my model serving example itself is wrong. Thanks for any advice on what's wrong.
Environment:
Ubuntu 20.04
$ kubectl version --client && kubeadm version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.13", GitCommit:"a43c0904d0de10f92aa3956c74489c45e6453d6e", GitTreeState:"clean", BuildDate:"2022-08-17T18:28:56Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
kubeadm version: &version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.13", GitCommit:
I installed the Istio and Knative components included in the Kubeflow manifests by following: kubeflow/manifests.
$ kubectl get pod -n istio-system
istio-system authservice-0 1/1 Running
istio-system cluster-local-gateway-5449f87d9b-bb4vs 1/1 Running
istio-system istio-ingressgateway-77b9d69b74-xmv98 1/1 Running
istio-system istiod-67fcb675b5-kzfvd 1/1 Running
$ kubectl get pod -n knative-eventing
$ kubectl get pod -n knative-serving
knative-eventing eventing-controller-8457bd9747-855lc 1/1 Running
knative-eventing eventing-webhook-69986cfb5d-hn7tx 1/1 Running
knative-serving activator-7c5cd78566-pz6ns 2/2 Running
knative-serving autoscaler-98487645d-vh5wk 2/2 Running
knative-serving controller-7546f544b7-mng9g 2/2 Running
knative-serving domain-mapping-5d56bfc7d-5cb9l 2/2 Running
knative-serving domainmapping-webhook-696559d49c-p8rwr 2/2 Running
knative-serving net-istio-controller-c4d469c-lt5fl 2/2 Running
knative-serving net-istio-webhook-855bcb6747-wbl4x 2/2 Running
knative-serving webhook-59f9fdd446-xsf6n 2/2 Running
And I installed KServe and the KServe built-in ClusterServingRuntimes by following: kserve installation
$ kubectl apply -f https://github.com/kserve/kserve/releases/download/v0.9.0/kserve.yaml
$ kubectl apply -f https://github.com/kserve/kserve/releases/download/v0.9.0/kserve-runtimes.yaml
$ kubectl get pod -n kserve
kserve-controller-manager-5fc887875d-m7rlp 2/2 Running
Check gateway selectors
knative-local-gateway in namespace knative-serving
$ kubectl get gateways knative-local-gateway -n knative-serving -o yaml
spec:
  selector:
    app: cluster-local-gateway
    istio: cluster-local-gateway
istio-ingressgateway in namespace istio-system
$ kubectl get gateways istio-ingressgateway -n istio-system -o yaml
spec:
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
cluster-local-gateway in namespace istio-system
$ kubectl get gateways cluster-local-gateway -n istio-system -o yaml
spec:
  selector:
    app: cluster-local-gateway
    istio: cluster-local-gateway
kubeflow-gateway in namespace kubeflow
$ kubectl get gateways kubeflow-gateway -n kubeflow -o yaml
spec:
  selector:
    istio: ingressgateway
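It can also be worth confirming which gateway and service KServe itself is configured to route through; assuming the default KServe 0.9 install layout, this lives in the inferenceservice-config ConfigMap:
$ kubectl -n kserve get configmap inferenceservice-config -o jsonpath='{.data.ingress}'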

Related

K8s Kf-serving: Using a different storage other than Google Cloud Storage [duplicate]

Is it possible to replace the usage of Google Cloud Storage buckets with an alternative on-premises solution so that it is possible to run e.g. Kubeflow Pipelines completely independent from the Google Cloud Platform?
Yes, it is possible. You can use MinIO; it's like S3/GCS, but it runs on a persistent volume on your on-premises storage.
Here are the instructions on how to use it as a kfserving inference storage:
Validate that minio is running in your kubeflow installation:
$ kubectl get svc -n kubeflow |grep minio
minio-service ClusterIP 10.101.143.255 <none> 9000/TCP 81d
Enable a tunnel for your minio:
$ kubectl port-forward svc/minio-service -n kubeflow 9000:9000
Forwarding from 127.0.0.1:9000 -> 9000
Forwarding from [::1]:9000 -> 9000
Browse to http://localhost:9000 to get to the MinIO UI and create a bucket / upload your model. The default credentials are minio/minio123. Alternatively, you can use the mc command to do it from your terminal:
$ mc ls minio/models/flowers/0001/
[2020-03-26 13:16:57 CET] 1.7MiB saved_model.pb
[2020-04-25 13:37:09 CEST] 0B variables/
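If mc is not configured yet, here is a minimal sketch; the alias name minio and the flowers/0001 layout are only examples, and newer mc releases use mc alias set while older ones use mc config host add instead:
$ mc alias set minio http://localhost:9000 minio minio123
$ mc mb minio/models
$ mc cp --recursive ./flowers/ minio/models/flowers/0001/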
Create a secret&serviceaccount for the minio access, note that the s3-endpoint defines the path to the minio, keyid&acceskey are the credentials encoded in base64:
$ kubectl get secret mysecret -n homelab -o yaml
apiVersion: v1
data:
  awsAccessKeyID: bWluaW8=
  awsSecretAccessKey: bWluaW8xMjM=
kind: Secret
metadata:
  annotations:
    serving.kubeflow.org/s3-endpoint: minio-service.kubeflow:9000
    serving.kubeflow.org/s3-usehttps: "0"
  name: mysecret
  namespace: homelab
$ kubectl get serviceAccount -n homelab sa -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa
  namespace: homelab
secrets:
- name: mysecret
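The base64 values above decode to the default minio/minio123 credentials; a quick sketch for generating them from your own keys:
$ echo -n 'minio' | base64
bWluaW8=
$ echo -n 'minio123' | base64
bWluaW8xMjM=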
Finally, create your inferenceservice as follows:
$ kubectl get inferenceservice tensorflow-flowers -n homelab -o yaml
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: tensorflow-flowers
  namespace: homelab
spec:
  default:
    predictor:
      serviceAccountName: sa
      tensorflow:
        storageUri: s3://models/flowers

Deployment pod cannot connect ClusterIP service

I'm trying to expose my server via Ingress.
The server is an Express.js app; it listens at http://localhost:5000 locally when run without Docker.
Here are my Kubernetes config files:
server-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: server
  template:
    metadata:
      labels:
        component: server
    spec:
      containers:
        - name: server
          image: hongbomiao/hongbomiao-server:latest
          ports:
            - containerPort: 5000
          env:
            - name: NODE_ENV
              value: development
server-cluster-ip-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: server
  ports:
    - port: 5000
      targetPort: 5000
ingress-service.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: server-cluster-ip-service
                port:
                  number: 5000
I got my minikube IP by
➜ minikube ip
192.168.64.12
When I open 192.168.64.12 in my browser, I got 502 Bad Gateway.
I got some debug idea after reading https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps#kubectl-apply. Here is what I have tried:
➜ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5h34m
server-cluster-ip-service ClusterIP 10.102.5.161 <none> 5000/TCP 4h39m
➜ kubectl get pods
NAME READY STATUS RESTARTS AGE
server-deployment-bc6777445-pj59f 1/1 Running 0 4h39m
➜ kubectl exec -it server-deployment-bc6777445-pj59f -- sh
/app # apk add --no-cache curl
...
/app # curl 10.102.5.161:5000
curl: (28) Failed to connect to 10.102.5.161 port 5000: Operation timed out
It seems my deployment pod has an issue connecting to the ClusterIP service. Any help would be nice!
It turns out the issue is caused by my VPN.
I didn't change anything for the Kubernetes config in my question.
Also, letting the Express.js server explicitly listen on 0.0.0.0 is not necessary either.
(Note that @David Maze's comment under the question about 0.0.0.0 is still valuable.)
const app = express()
  .use(bodyParser.json())
  .use(express.static(path.join(__dirname, '../dist')));
app.listen(5000); // This just works. No need to explicitly change it to app.listen(5000, '0.0.0.0');
At the time of writing, I was in China. To get rid of the VPN while still using Kubernetes / minikube, I found a way and posted it at GitHub here.
After turning off the VPN with this workaround solution, everything just works.
Copy my solution using minikube in China here:
Step 1 - Download the Aliyun version minikube
curl -Lo minikube https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/releases/v1.14.2/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
Note: you can check whether there is a newer version to replace v1.14.2 in the command above at https://github.com/AliyunContainerService/minikube/wiki#%E5%AE%89%E8%A3%85minikube
Step 2 - Start the minikube
minikube start --image-mirror-country cn \
--driver=hyperkit \
--iso-url=https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.15.0.iso \
--registry-mirror=https://xxxxxxxx.mirror.aliyuncs.com
Note 1: You can find the latest minikube version at https://github.com/kubernetes/minikube/blob/master/CHANGELOG.md, then replace v1.15.0 in the command above with the newer version.
However, Aliyun's minikube version is a little behind. To verify whether a new version exists, you can replace the version in the URL https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.15.0.iso with a newer version, such as v1.15.1, and then open it in the browser.
Note 2: For the xxxxxxxx in the command above, you can find yours at
https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors
(You need to register an Aliyun account first.)
Note 3: You can pass more parameters to this Aliyun version minikube start, check at https://github.com/AliyunContainerService/minikube/wiki#%E5%90%AF%E5%8A%A8
In my case, I am using the hyperkit driver on macOS, plus Aliyun's iso-url and registry-mirror to speed things up.

Can't Access Kubernetes Service Exposed via NodePort

I'm using minikube to test Kubernetes on the latest macOS.
Here are my relevant YAMLs:
namespace.yml
apiVersion: v1
kind: Namespace
metadata:
  name: micro
  labels:
    name: micro
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: adderservice
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: adderservice
    spec:
      containers:
      - name: adderservice
        image: jeromesoung/adderservice:0.0.1
        ports:
        - containerPort: 8080
service.yml
apiVersion: v1
kind: Service
metadata:
  name: adderservice
  labels:
    run: adderservice
spec:
  ports:
  - port: 8080
    name: main
    protocol: TCP
    targetPort: 8080
  selector:
    run: adderservice
  type: NodePort
After running minikube start, the steps I took to deploy are as follows:
kubectl create -f namespace.yml to create the namespace
kubectl config set-context minikube --namespace=micro
kubectl create -f deployment.yml
kubectl create -f service.yml
Then, I got the NodeIP and NodePort with the commands below:
kubectl get services to get the NodePort
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
adderservice NodePort 10.99.155.255 <none> 8080:30981/TCP 21h
minikube ip to get the nodeIP
$ minikube ip
192.168.99.103
But when I do curl, I always get Connection Refused like this:
$ curl http://192.168.99.103:30981/add/1/2
curl: (7) Failed to connect to 192.168.99.103 port 30981: Connection refused
So I checked the node, pod, deployment, and endpoints as follows:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready master 23h v1.13.3
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
adderservice-5b567df95f-9rrln 1/1 Running 0 23h
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
adderservice 1 1 1 1 23h
$ kubectl get endpoints
NAME ENDPOINTS AGE
adderservice 172.17.0.5:8080 21h
I also checked service list from minikube with:
$ minikube service -n micro adderservice --url
http://192.168.99.103:30981
I've read many posts about accessing a k8s service via NodePort. To my knowledge, I should be able to access the app with no problem. The only thing I suspect is that I'm using a custom namespace. Could this cause the access issue?
I know the namespace changes the DNS, so, to be complete, I also ran the commands below:
$ kubectl exec -ti adderservice-5b567df95f-9rrln -- nslookup kubernetes.default
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1
$ kubectl exec -ti adderservice-5b567df95f-9rrln -- nslookup kubernetes.micro
Server: 10.96.0.10
Address: 10.96.0.10#53
Non-authoritative answer:
Name: kubernetes.micro
Address: 198.105.244.130
Name: kubernetes.micro
Address: 104.239.207.44
Could anyone help me out? Thank you.
The error Connection Refused mostly means that the application inside the container does not accept requests on the targeted interface, or is not mapped through the expected ports.
Things you need to be aware of:
Make sure that your application binds to 0.0.0.0 so it can receive requests from outside the container, either externally (as in public) or from other containers.
Make sure that your application is actually listening on the containerPort and targetPort as expected.
In your case you have to make sure that ADDERSERVICE_SERVICE_HOST equals 0.0.0.0 and ADDERSERVICE_SERVICE_PORT equals 8080, which should be the same values as targetPort in service.yml and containerPort in deployment.yml.
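To separate an application problem from a Service/NodePort problem, it can also help to bypass the Service entirely and port-forward straight to the Deployment (a debugging sketch; the local port 8080 is arbitrary):
$ kubectl -n micro port-forward deployment/adderservice 8080:8080
$ curl http://localhost:8080/add/1/2   # run in another terminal
If this curl succeeds while the NodePort request still fails, the app itself is fine and the problem is in the Service or NodePort wiring.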
This doesn't answer the question directly, but for anyone who googled their way here with the same issue, here is my solution to the same problem.
My Mac's system IP and the minikube IP are different, so localhost:port didn't work. Instead, get the IP with:
minikube ip
Then use that IP:port to access the app, and it works.
Check whether the service is really listening on 8080.
Try telnet from within the container:
telnet 127.0.0.1 8080
telnet 172.17.0.5 8080

Cannot access K8s dashboard after installation of kubeadm-dind-cluster

I am using kubeadm-dind-cluster, a Kubernetes multi-node cluster for developers of Kubernetes and of projects that extend Kubernetes, based on kubeadm and DIND (Docker in Docker).
I have a fresh CentOS 7 install on which I have just run ./dind-cluster-v1.13.sh up. I did not set any other values and am using all the default values for networking.
All appears well:
[root@node01 dind-cluster]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kube-master Ready master 23h v1.13.0
kube-node-1 Ready <none> 23h v1.13.0
kube-node-2 Ready <none> 23h v1.13.0
[root@node01 dind-cluster]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: http://127.0.0.1:32769
  name: dind
contexts:
- context:
    cluster: dind
    user: ""
  name: dind
current-context: dind
kind: Config
preferences: {}
users: []
[root@node01 dind-cluster]# kubectl cluster-info
Kubernetes master is running at http://127.0.0.1:32769
KubeDNS is running at http://127.0.0.1:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@node01 dind-cluster]#
and it appears healthy:
[root@node01 dind-cluster]# curl -w '\n' http://127.0.0.1:32769/healthz
ok
I know the dashboard service is there:
[root@node01 dind-cluster]# kubectl get services kubernetes-dashboard -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.102.82.8 <none> 80:31990/TCP 23h
however any attempt to access it is refused:
[root@node01 dind-cluster]# curl http://127.0.0.1:8080/api/v1/namespaces/kube-system/services/kubernetes-dashboard
curl: (7) Failed connect to 127.0.0.1:8080; Connection refused
[root@node01 dind-cluster]# curl http://127.0.0.1:8080/ui
curl: (7) Failed connect to 127.0.0.1:8080; Connection refused
I also see the following in the firewall log:
2019-02-05 19:45:19 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C DOCKER -p tcp -d 127.0.0.1 --dport 32769 -j DNAT --to-destination 10.192.0.2:8080 ! -i br-669b654fc9cd' failed: iptables: No chain/target/match by that name.
2019-02-05 19:45:19 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C DOCKER ! -i br-669b654fc9cd -o br-669b654fc9cd -p tcp -d 10.192.0.2 --dport 8080 -j ACCEPT' failed: iptables: Bad rule (does a matching rule exist in that chain?).
2019-02-05 19:45:19 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C POSTROUTING -p tcp -s 10.192.0.2 -d 10.192.0.2 --dport 8080 -j MASQUERADE' failed: iptables: No chain/target/match by that name.
Any suggestions on how I actually access the dashboard externally from my development machine? I don't want to use the proxy to do this.
You should be able to access kubernetes-dashboard using the following addresses:
ClusterIP(works for other pods in cluster):
http://10.102.82.8:80/
NodePort(works for every host who can access cluster nodes using their IPs):
http://clusterNodeIP:31990/
Usually the Kubernetes dashboard uses the https protocol, so you may need to use different ports in requests to the kubernetes-dashboard Service for that.
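Since kubeadm-dind-cluster runs the nodes as Docker containers on the same host, the node InternalIPs should normally be reachable from the host itself. A sketch for trying the NodePort directly (trying https as well, in case the dashboard only serves TLS):
$ NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
$ curl -k http://${NODE_IP}:31990/
$ curl -k https://${NODE_IP}:31990/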
You can also access the dashboard using kube-apiserver as a proxy:
Directly to dashboard Pod:
https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/pods/https:kubernetes-dashboard-pod-name:/proxy/#!/login
To dashboard ClusterIP service:
https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
I can guess that <master-ip>:<apiserver-port> would mean 127.0.0.1:32769 in your case.
In that situation, you'd indeed expect that everything works out of the box. However, the setup seems to be missing a suitable service account to access and manage the cluster through the dashboard.
Note that I might be entirely misled here, and maybe kubeadm-dind-cluster in fact provides such an account. Please note also that this project was discontinued some time ago.
Anyway, here is how I fixed that problem. Hopefully it's of some help for other people (still) trying that out...
Define the missing account and role binding: create a YAML file
# ------------------- Dashboard Secret ------------------- #
# ...already available
# ------------------- Dashboard Service Account ------------------- #
# ...already available
# ------------------- Dashboard Cluster Admin Account ------------------- #
#
# added by Ichthyo 2019-2
# - ServiceAccount and ClusterRoleBinding
# - allows administrative access into the namespace kube-system
# - necessary to log in via Kubernetes-Dashboard
#
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dash-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dash-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dash-admin
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
Apply it to the already-running cluster:
kubectl apply -f k8s-dashboard-RBAC.yaml
Then find the security token corresponding to dash-admin:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep dash-admin | awk '{print $1}') | egrep '^token:\s+' | awk '{print $2}'
Finally, paste the extracted token into the login screen.

kubectl timeout inside kube-addon-manager

I was debugging an issue in my cluster; it seems kubectl commands time out inside the kube-addon-manager pod, while the equivalent curl command works fine.
bash-4.3# kubectl get node --v 10
I1119 16:35:55.506867 54 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.10.5 (linux/amd64) kubernetes/32ac1c9" http://localhost:8080/api
I1119 16:36:25.507550 54 round_trippers.go:405] GET http://localhost:8080/api in 30000 milliseconds
I1119 16:36:25.507959 54 round_trippers.go:411] Response Headers:
I1119 16:36:25.508122 54 cached_discovery.go:124] skipped caching discovery info due to Get http://localhost:8080/api: dial tcp: i/o timeout
Equivalent curl command output
bash-4.3# curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.10.5 (linux/amd64) kubernetes/32ac1c9" http://localhost:8080/api
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET /api HTTP/1.1
> Host: localhost:8080
> Accept: application/json, */*
> User-Agent: kubectl/v1.10.5 (linux/amd64) kubernetes/32ac1c9
>
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Mon, 19 Nov 2018 16:43:00 GMT
< Content-Length: 134
<
{"kind":"APIVersions","versions":["v1"],"serverAddressByClientCIDRs":[{"clientCIDR":"0.0.0.0/0","serverAddress":"172.16.1.13:6443"}]}
* Connection #0 to host localhost left intact
I also tried running a Docker container with host network mode; the kubectl command still times out.
kube-addon-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-addon-manager
  namespace: kube-system
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ''
  labels:
    component: kube-addon-manager
spec:
  hostNetwork: true
  containers:
  - name: kube-addon-manager
    image: gcr.io/google-containers/kube-addon-manager:v8.6
    imagePullPolicy: IfNotPresent
    command:
    - /bin/bash
    - -c
    - /opt/kube-addons.sh
    resources:
      requests:
        cpu: 5m
        memory: 50Mi
    volumeMounts:
    - mountPath: /etc/kubernetes/
      name: addons
      readOnly: true
  volumes:
  - name: addons
    hostPath:
      path: /etc/kubernetes/
It looks like your config is trying to talk to port 8080, which is the insecure port of the kube-apiserver.
You can try starting your kube-apiserver with this option:
--insecure-port
The default for the insecure port is 8080. Note that this option might be deprecated in the future.
Also, keep in mind that the kube-addon-manager is part of the legacy add-ons.
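For reference, on kubeadm-style clusters of that era this flag was set in the kube-apiserver static pod manifest. This is only a sketch of the historical approach, since the insecure port has been removed in current Kubernetes releases:
$ vi /etc/kubernetes/manifests/kube-apiserver.yaml
# add (or adjust) under the kube-apiserver command:
#   - --insecure-bind-address=127.0.0.1
#   - --insecure-port=8080
# the kubelet restarts the static pod automatically once the file is saved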
