kube-apiserver - invalid bearer token, Token has been invalidated - kubeadm

When I try to start kube-apiserver, I get the following logs:
I1215 14:18:23.130968 1 controller.go:83] Starting OpenAPI controller
I1215 14:18:23.131021 1 customresource_discovery_controller.go:208] Starting DiscoveryController
I1215 14:18:23.131047 1 naming_controller.go:288] Starting NamingConditionController
I1215 14:18:23.131067 1 establishing_controller.go:73] Starting EstablishingController
I1215 14:18:23.131084 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I1215 14:18:23.149275 1 controller_utils.go:1036] Caches are synced for crd-autoregister controller
I1215 14:18:23.191491 1 client.go:354] parsed scheme: ""
I1215 14:18:23.191600 1 client.go:354] scheme "" not registered, fallback to default scheme
I1215 14:18:23.191681 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
I1215 14:18:23.206235 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
E1215 14:18:23.224439 1 authentication.go:65] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
I1215 14:18:23.232453 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1215 14:18:23.234017 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1215 14:18:23.279768 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1215 14:18:23.289645 1 cache.go:39] Caches are synced for autoregister controller
I1215 14:18:23.292993 1 client.go:354] parsed scheme: ""
I1215 14:18:23.293085 1 client.go:354] scheme "" not registered, fallback to default scheme
I1215 14:18:23.293264 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
I1215 14:18:23.294432 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1215 14:18:23.328179 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1215 14:18:23.375123 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1215 14:18:23.834251 1 controller.go:107] OpenAPI AggregationController: Processing item
I1215 14:18:23.834297 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1215 14:18:23.849807 1 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
E1215 14:18:24.259439 1 authentication.go:65] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
I1215 14:18:24.943441 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
E1215 14:18:25.269310 1 authentication.go:65] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
E1215 14:18:26.277415 1 authentication.go:65] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
E1215 14:18:27.283456 1 authentication.go:65] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
E1215 14:18:28.290149 1 authentication.go:65] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
E1215 14:18:29.297812 1 authentication.go:65] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
E1215 14:18:30.303821 1 authentication.go:65] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
E1215 14:18:31.312197 1 authentication.go:65] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
E1215 14:18:32.320085 1 authentication.go:65] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
E1215 14:18:33.331813 1 authentication.go:65] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
E1215 14:18:34.337389 1 authentication.go:65] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
E1215 14:18:35.345553 1 authentication.go:65] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
E1215 14:18:36.354106 1 authentication.go:65] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
I1215 14:18:36.916194 1 client.go:354] parsed scheme: ""
I1215 14:18:36.916270 1 client.go:354] scheme "" not registered, fallback to default scheme
I1215 14:18:36.916389 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
I1215 14:18:36.916756 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1215 14:18:36.933986 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1215 14:18:36.944414 1 client.go:354] parsed scheme: ""
I1215 14:18:36.944474 1 client.go:354] scheme "" not registered, fallback to default scheme
[ more invalid-token log lines continue from here ]
When I try to run kubectl get pods, I get:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
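kubectl falls back to the unauthenticated localhost:8080 endpoint when it finds no kubeconfig, and that port is disabled here (--insecure-port=0), so this message mostly means kubectl is not pointed at a config yet. A minimal check, assuming the standard kubeadm layout where the admin kubeconfig lives at /etc/kubernetes/admin.conf:
# Point kubectl at the kubeadm-generated admin kubeconfig instead of the
# default localhost:8080 fallback (path assumes a stock kubeadm install).
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl cluster-info
kubectl -n kube-system get pods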
My API server config:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --advertise-address=...
- --allow-privileged=true
- --authorization-mode=Node,RBAC
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --enable-admission-plugins=NodeRestriction
- --enable-bootstrap-token-auth=true
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
- --insecure-port=0
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
- --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- --requestheader-allowed-names=front-proxy-client
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
- --secure-port=6443
- --service-account-key-file=/etc/kubernetes/pki/sa.pub
- --service-cluster-ip-range=10.96.0.0/12
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
image: k8s.gcr.io/kube-apiserver:v1.15.0
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: ...
path: /healthz
port: 6443
scheme: HTTPS
initialDelaySeconds: 15
timeoutSeconds: 15
name: kube-apiserver
resources:
requests:
cpu: 250m
volumeMounts:
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/ca-certificates
name: etc-ca-certificates
readOnly: true
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /usr/local/share/ca-certificates
name: usr-local-share-ca-certificates
readOnly: true
- mountPath: /usr/share/ca-certificates
name: usr-share-ca-certificates
readOnly: true
hostNetwork: true
priorityClassName: system-cluster-critical
volumes:
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /etc/ca-certificates
type: DirectoryOrCreate
name: etc-ca-certificates
- hostPath:
path: /etc/kubernetes/pki
type: DirectoryOrCreate
name: k8s-certs
- hostPath:
path: /usr/local/share/ca-certificates
type: DirectoryOrCreate
name: usr-local-share-ca-certificates
- hostPath:
path: /usr/share/ca-certificates
type: DirectoryOrCreate
name: usr-share-ca-certificates
status: {}
Kube controller config:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
component: kube-controller-manager
tier: control-plane
name: kube-controller-manager
namespace: kube-system
spec:
containers:
- command:
- kube-controller-manager
- --allocate-node-cidrs=true
- --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
- --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
- --bind-address=127.0.0.1
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --cluster-cidr=10.244.0.0/16
- --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
- --controllers=*,bootstrapsigner,tokencleaner
- --kubeconfig=/etc/kubernetes/controller-manager.conf
- --leader-elect=true
- --node-cidr-mask-size=24
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --root-ca-file=/etc/kubernetes/pki/ca.crt
- --service-account-private-key-file=/etc/kubernetes/pki/sa.key
- --use-service-account-credentials=true
image: k8s.gcr.io/kube-controller-manager:v1.15.0
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: 127.0.0.1
path: /healthz
port: 10252
scheme: HTTP
initialDelaySeconds: 15
timeoutSeconds: 15
name: kube-controller-manager
resources:
requests:
cpu: 200m
volumeMounts:
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/ca-certificates
name: etc-ca-certificates
readOnly: true
- mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
name: flexvolume-dir
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /etc/kubernetes/controller-manager.conf
name: kubeconfig
readOnly: true
- mountPath: /usr/local/share/ca-certificates
name: usr-local-share-ca-certificates
readOnly: true
- mountPath: /usr/share/ca-certificates
name: usr-share-ca-certificates
readOnly: true
hostNetwork: true
priorityClassName: system-cluster-critical
volumes:
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /etc/ca-certificates
type: DirectoryOrCreate
name: etc-ca-certificates
- hostPath:
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
type: DirectoryOrCreate
name: flexvolume-dir
- hostPath:
path: /etc/kubernetes/pki
type: DirectoryOrCreate
name: k8s-certs
- hostPath:
path: /etc/kubernetes/controller-manager.conf
type: FileOrCreate
name: kubeconfig
- hostPath:
path: /usr/local/share/ca-certificates
type: DirectoryOrCreate
name: usr-local-share-ca-certificates
- hostPath:
path: /usr/share/ca-certificates
type: DirectoryOrCreate
name: usr-share-ca-certificates
status: {}
I use Kubernetes version 1.15.0.
What is the root cause, and how can I refresh the token so that the apiserver functions correctly?
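For reference, one hedged way to check whether an expired kubeadm bootstrap token is behind the "Token has been invalidated" messages (the controller manager above runs the tokencleaner controller, which deletes expired bootstrap tokens) is to list the tokens and, if necessary, mint a fresh one. The commands assume a kubeadm-managed cluster and a working admin kubeconfig:
# List bootstrap tokens and their expiry (kubeadm stores them as secrets of
# type bootstrap.kubernetes.io/token in kube-system).
kubeadm token list --kubeconfig /etc/kubernetes/admin.conf
kubectl -n kube-system get secrets --field-selector type=bootstrap.kubernetes.io/token
# Create a fresh token if the old one has been cleaned up.
kubeadm token create --kubeconfig /etc/kubernetes/admin.conf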

Related

ListenAndServeTLS runs locally but not in Docker container

When running a Go HTTPS server locally with self-signed certificates, everything works fine.
When pushing the same code to a Docker container (via Skaffold, or Google GKE), ListenAndServeTLS hangs and the container loops on recreation.
The certificate was created via:
openssl genrsa -out https-server.key 2048
openssl ecparam -genkey -name secp384r1 -out https-server.key
openssl req -new -x509 -sha256 -key https-server.key -out https-server.crt -days 3650
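As an aside, the ecparam call above overwrites the RSA key written by genrsa, so https-server.key ends up holding an EC (secp384r1) key. A quick sanity check of what actually landed in the files, using standard openssl subcommands:
# Show the key type in https-server.key and the subject/validity of the cert.
openssl pkey -in https-server.key -noout -text | head -n 3
openssl x509 -in https-server.crt -noout -subject -dates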
main.go contains:
if IsSSL {
    err := http.ListenAndServeTLS(addr+":"+srvPort, os.Getenv("CERT_FILE"), os.Getenv("KEY_FILE"), handler)
    if err != nil {
        log.Fatal(err)
    }
} else {
    log.Fatal(http.ListenAndServe(addr+":"+srvPort, handler))
}
The crt and key files are passed via K8s secrets and my yaml file contains the following:
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
spec:
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
annotations:
sidecar.istio.io/rewriteAppHTTPProbers: "true"
spec:
volumes:
- name: google-cloud-key
secret:
secretName: ecomm-key
- name: ssl-cert
secret:
secretName: ecomm-cert-server
- name: ssl-key
secret:
secretName: ecomm-cert-key
containers:
- name: frontend
image: gcr.io/sca-ecommerce-291313/frontend:latest
ports:
- containerPort: 8080
readinessProbe:
initialDelaySeconds: 10
httpGet:
path: "/_healthz"
port: 8080
httpHeaders:
- name: "Cookie"
value: "shop_session-id=x-readiness-probe"
livenessProbe:
initialDelaySeconds: 10
httpGet:
path: "/_healthz"
port: 8080
httpHeaders:
- name: "Cookie"
value: "shop_session-id=x-liveness-probe"
volumeMounts:
- name: ssl-cert
mountPath: /var/secrets/ssl-cert
- name: ssl-key
mountPath: /var/secrets/ssl-key
env:
- name: USE_SSL
value: "true"
- name: CERT_FILE
value: "/var/secrets/ssl-cert/cert-server.pem"
- name: KEY_FILE
value: "/var/secrets/ssl-key/cert-key.pem"
- name: PORT
value: "8080"
I see the same behaviour when referencing the files directly in the code, like:
err := http.ListenAndServeTLS(addr+":"+srvPort, "https-server.crt", "https-server.key", handler)
The strange and unhelpful thing is that ListenAndServeTLS gives no log output about why it is hanging, nor any hint about the problem (using kubectl logs).
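Since ListenAndServeTLS exits via log.Fatal as soon as it cannot read the certificate or key, one hedged check is to confirm that the mounted secret files really exist at the paths the env vars point to (namespace, label and paths are taken from the manifests here; this assumes the image ships a shell and basic utilities):
# Verify the secret-mounted files and the env vars inside the running pod.
POD=$(kubectl -n ecomm-ns get pod -l app=frontend -o jsonpath='{.items[0].metadata.name}')
kubectl -n ecomm-ns exec "$POD" -- ls -l /var/secrets/ssl-cert /var/secrets/ssl-key
kubectl -n ecomm-ns exec "$POD" -- env | grep -E 'CERT_FILE|KEY_FILE|USE_SSL|PORT'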
Looking at the kubectl describe pod output:
Name: frontend-85f4d9cb8c-9bjh4
Namespace: ecomm-ns
Priority: 0
Start Time: Fri, 01 Jan 2021 17:04:29 +0100
Labels: app=frontend
app.kubernetes.io/managed-by=skaffold
pod-template-hash=85f4d9cb8c
skaffold.dev/run-id=44518449-c1c1-4b6c-8cc1-406ac6d6b91f
Annotations: sidecar.istio.io/rewriteAppHTTPProbers: true
Status: Running
IP: 192.168.10.7
IPs:
IP: 192.168.10.7
Controlled By: ReplicaSet/frontend-85f4d9cb8c
Containers:
frontend:
Container ID: docker://f867ea7a2f99edf891b571f80ae18f10e261375e073b9d2007bbff1600d272c7
Image: gcr.io/sca-ecommerce-291313/frontend:5110aa8a87655b07cc71ffb2c46fd8739e3c25c222a637b2f5a7a1af1bfccc22
Image ID: docker://sha256:5110aa8a87655b07cc71ffb2c46fd8739e3c25c222a637b2f5a7a1af1bfccc22
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 01 Jan 2021 17:05:08 +0100
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Fri, 01 Jan 2021 17:04:37 +0100
Finished: Fri, 01 Jan 2021 17:05:07 +0100
Ready: False
Restart Count: 1
Limits:
cpu: 200m
memory: 128Mi
Requests:
cpu: 100m
memory: 64Mi
Liveness: http-get http://:8080/_healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8080/_healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /var/secrets/google/key.json
CERT_FILE: /var/secrets/ssl-cert/cert-server.crt
KEY_FILE: /var/secrets/ssl-key/cert-server.key
PORT: 8080
USE_SSL: true
ONLINE_PRODUCT_CATALOG_SERVICE_ADDR: onlineproductcatalogservice:4040
ENV_PLATFORM: gcp
DISABLE_TRACING: 1
DISABLE_PROFILER: 1
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tm62d (ro)
/var/secrets/google from google-cloud-key (rw)
/var/secrets/ssl-cert from ssl-cert (rw)
/var/secrets/ssl-key from ssl-key (rw)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
google-cloud-key:
Type: Secret (a volume populated by a Secret)
SecretName: ecomm-key
Optional: false
ssl-cert:
Type: Secret (a volume populated by a Secret)
SecretName: https-cert-server
Optional: false
ssl-key:
Type: Secret (a volume populated by a Secret)
SecretName: https-cert-key
Optional: false
default-token-tm62d:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-tm62d
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 46s default-scheduler Successfully assigned ecomm-ns/frontend-85f4d9cb8c-9bjh4
Warning Unhealthy 17s (x2 over 27s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 400
Normal Pulled 8s (x2 over 41s) kubelet Container image "gcr.io/frontend:5110aa8a87655b07cc71ffb2c46fd8739e3c25c222a637b2f5a7a1af1bfccc22" already present on machine
Normal Created 8s (x2 over 39s) kubelet Created container frontend
Warning Unhealthy 8s (x3 over 28s) kubelet Liveness probe failed: HTTP probe failed with statuscode: 400
Normal Killing 8s kubelet Container frontend failed liveness probe, will be restarted
Normal Started 7s (x2 over 38s) kubelet Started container frontend
The liveness and readiness probes are getting a 400 response.
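That 400 is consistent with Go's HTTP server replying "Client sent an HTTP request to an HTTPS server" when kubelet's plain-HTTP probe (scheme HTTP, as shown in the describe output) hits the TLS listener. A hedged way to confirm, plus one possible fix (the pod IP below is a placeholder; -k is needed because the certificate is self-signed):
# Reproduce what the kubelet probe sees vs. a proper HTTPS request.
curl -i http://<pod-ip>:8080/_healthz
curl -ik https://<pod-ip>:8080/_healthz
# Possible fix: make the probes speak HTTPS.
kubectl -n ecomm-ns patch deployment frontend --type json -p '[
  {"op":"add","path":"/spec/template/spec/containers/0/livenessProbe/httpGet/scheme","value":"HTTPS"},
  {"op":"add","path":"/spec/template/spec/containers/0/readinessProbe/httpGet/scheme","value":"HTTPS"}]'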

Can't connect to the ETCD of Kubernetes

I've accidentally drained/uncordoned all nodes in Kubernetes (even the master) and now I'm trying to bring the cluster back by connecting to etcd and manually changing some keys there. I successfully got a shell into the etcd container:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8fbcb67da963 quay.io/coreos/etcd:v3.3.10 "/usr/local/bin/etcd" 17 hours ago Up 17 hours etcd1
a0d6426df02a cd48205a40f0 "kube-controller-man…" 17 hours ago Up 17 hours k8s_kube-controller-manager_kube-controller-manager-node1_kube-system_0441d7804a7366fd957f8b402008efe5_16
5fa8e47441a0 6bed756ced73 "kube-scheduler --au…" 17 hours ago Up 17 hours k8s_kube-scheduler_kube-scheduler-node1_kube-system_6f33d7866b72ca1b13c79edd42fa8dc6_14
2c8e07cf499f gcr.io/google_containers/pause-amd64:3.1 "/pause" 17 hours ago Up 17 hours k8s_POD_kube-scheduler-node1_kube-system_6f33d7866b72ca1b13c79edd42fa8dc6_3
2ca43282ea1c gcr.io/google_containers/pause-amd64:3.1 "/pause" 17 hours ago Up 17 hours k8s_POD_kube-controller-manager-node1_kube-system_0441d7804a7366fd957f8b402008efe5_3
9473644a3333 gcr.io/google_containers/pause-amd64:3.1 "/pause" 17 hours ago Up 17 hours k8s_POD_kube-apiserver-node1_kube-system_93ff1a9840f77f8b2b924a85815e17fe_3
and then I run:
docker exec -it 8fbcb67da963 /bin/sh
and then I try to run the following:
ETCDCTL_API=3 etcdctl --endpoints https://172.16.16.111:2379 --cacert /etc/ssl/etcd/ssl/ca.pem --key /etc/ssl/etcd/ssl/member-node1-key.pem --cert /etc/ssl/etcd/ssl/member-node1.pem get / --prefix=true -w json --debug
and here is the result I get:
ETCDCTL_CACERT=/etc/ssl/etcd/ssl/ca.pem
ETCDCTL_CERT=/etc/ssl/etcd/ssl/member-node1.pem
ETCDCTL_COMMAND_TIMEOUT=5s
ETCDCTL_DEBUG=true
ETCDCTL_DIAL_TIMEOUT=2s
ETCDCTL_DISCOVERY_SRV=
ETCDCTL_ENDPOINTS=[https://172.16.16.111:2379]
ETCDCTL_HEX=false
ETCDCTL_INSECURE_DISCOVERY=true
ETCDCTL_INSECURE_SKIP_TLS_VERIFY=false
ETCDCTL_INSECURE_TRANSPORT=true
ETCDCTL_KEEPALIVE_TIME=2s
ETCDCTL_KEEPALIVE_TIMEOUT=6s
ETCDCTL_KEY=/etc/ssl/etcd/ssl/member-node1-key.pem
ETCDCTL_USER=
ETCDCTL_WRITE_OUT=json
INFO: 2020/06/24 15:44:07 ccBalancerWrapper: updating state and picker called by balancer: IDLE, 0xc420246c00
INFO: 2020/06/24 15:44:07 dialing to target with scheme: ""
INFO: 2020/06/24 15:44:07 could not get resolver for scheme: ""
INFO: 2020/06/24 15:44:07 balancerWrapper: is pickfirst: false
INFO: 2020/06/24 15:44:07 balancerWrapper: got update addr from Notify: [{172.16.16.111:2379 <nil>}]
INFO: 2020/06/24 15:44:07 ccBalancerWrapper: new subconn: [{172.16.16.111:2379 0 <nil>}]
INFO: 2020/06/24 15:44:07 balancerWrapper: handle subconn state change: 0xc4201708d0, CONNECTING
INFO: 2020/06/24 15:44:07 ccBalancerWrapper: updating state and picker called by balancer: CONNECTING, 0xc420246c00
Error: context deadline exceeded
Here is my etcd.env:
# Environment file for etcd v3.3.10
ETCD_DATA_DIR=/var/lib/etcd
ETCD_ADVERTISE_CLIENT_URLS=https://172.16.16.111:2379
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://172.16.16.111:2380
ETCD_INITIAL_CLUSTER_STATE=existing
ETCD_METRICS=basic
ETCD_LISTEN_CLIENT_URLS=https://172.16.16.111:2379,https://127.0.0.1:2379
ETCD_ELECTION_TIMEOUT=5000
ETCD_HEARTBEAT_INTERVAL=250
ETCD_INITIAL_CLUSTER_TOKEN=k8s_etcd
ETCD_LISTEN_PEER_URLS=https://172.16.16.111:2380
ETCD_NAME=etcd1
ETCD_PROXY=off
ETCD_INITIAL_CLUSTER=etcd1=https://172.16.16.111:2380,etcd2=https://172.16.16.112:2380,etcd3=https://172.16.16.113:2380
ETCD_AUTO_COMPACTION_RETENTION=8
ETCD_SNAPSHOT_COUNT=10000
# TLS settings
ETCD_TRUSTED_CA_FILE=/etc/ssl/etcd/ssl/ca.pem
ETCD_CERT_FILE=/etc/ssl/etcd/ssl/member-node1.pem
ETCD_KEY_FILE=/etc/ssl/etcd/ssl/member-node1-key.pem
ETCD_CLIENT_CERT_AUTH=true
ETCD_PEER_TRUSTED_CA_FILE=/etc/ssl/etcd/ssl/ca.pem
ETCD_PEER_CERT_FILE=/etc/ssl/etcd/ssl/member-node1.pem
ETCD_PEER_KEY_FILE=/etc/ssl/etcd/ssl/member-node1-key.pem
ETCD_PEER_CLIENT_CERT_AUTH=True
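Before going further, a quick hedged check that etcd is actually serving TLS on 2379 with a certificate issued by the CA referenced above (openssl s_client is standard; the paths come from etcd.env):
# Show the subject/issuer/validity of etcd's serving certificate.
echo | openssl s_client -connect 172.16.16.111:2379 -CAfile /etc/ssl/etcd/ssl/ca.pem 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates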
Update 1:
Here is my kubeadm-config.yaml:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 172.16.16.111
bindPort: 6443
certificateKey: d73faece88f86e447eea3ca38f7b07e0a1f0bbb886567fee3b8cf8848b1bf8dd
nodeRegistration:
name: node1
taints: []
criSocket: /var/run/dockershim.sock
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
clusterName: cluster.local
etcd:
external:
endpoints:
- https://172.16.16.111:2379
- https://172.16.16.112:2379
- https://172.16.16.113:2379
caFile: /etc/ssl/etcd/ssl/ca.pem
certFile: /etc/ssl/etcd/ssl/node-node1.pem
keyFile: /etc/ssl/etcd/ssl/node-node1-key.pem
dns:
type: CoreDNS
imageRepository: docker.io/coredns
imageTag: 1.6.0
networking:
dnsDomain: cluster.local
serviceSubnet: 10.233.0.0/18
podSubnet: 10.233.64.0/18
kubernetesVersion: v1.16.6
controlPlaneEndpoint: 172.16.16.111:6443
certificatesDir: /etc/kubernetes/ssl
imageRepository: gcr.io/google-containers
apiServer:
extraArgs:
anonymous-auth: "True"
authorization-mode: Node,RBAC
bind-address: 0.0.0.0
insecure-port: "0"
apiserver-count: "1"
endpoint-reconciler-type: lease
service-node-port-range: 30000-32767
kubelet-preferred-address-types: "InternalDNS,InternalIP,Hostname,ExternalDNS,ExternalIP"
profiling: "False"
request-timeout: "1m0s"
enable-aggregator-routing: "False"
storage-backend: etcd3
runtime-config:
allow-privileged: "true"
extraVolumes:
- name: usr-share-ca-certificates
hostPath: /usr/share/ca-certificates
mountPath: /usr/share/ca-certificates
readOnly: true
certSANs:
- kubernetes
- kubernetes.default
- kubernetes.default.svc
- kubernetes.default.svc.cluster.local
- 10.233.0.1
- localhost
- 127.0.0.1
- node1
- lb-apiserver.kubernetes.local
- 172.16.16.111
- node1.cluster.local
timeoutForControlPlane: 5m0s
controllerManager:
extraArgs:
node-monitor-grace-period: 40s
node-monitor-period: 5s
pod-eviction-timeout: 5m0s
node-cidr-mask-size: "24"
profiling: "False"
terminated-pod-gc-threshold: "12500"
bind-address: 0.0.0.0
configure-cloud-routes: "false"
scheduler:
extraArgs:
bind-address: 0.0.0.0
extraVolumes:
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
clientConnection:
acceptContentTypes:
burst: 10
contentType: application/vnd.kubernetes.protobuf
kubeconfig:
qps: 5
clusterCIDR: 10.233.64.0/18
configSyncPeriod: 15m0s
conntrack:
maxPerCore: 32768
min: 131072
tcpCloseWaitTimeout: 1h0m0s
tcpEstablishedTimeout: 24h0m0s
enableProfiling: False
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: node1
iptables:
masqueradeAll: False
masqueradeBit: 14
minSyncPeriod: 0s
syncPeriod: 30s
ipvs:
excludeCIDRs: []
minSyncPeriod: 0s
scheduler: rr
syncPeriod: 30s
strictARP: False
metricsBindAddress: 127.0.0.1:10249
mode: ipvs
nodePortAddresses: []
oomScoreAdj: -999
portRange:
udpIdleTimeout: 250ms
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
- 169.254.25.10
Update 2:
Contents of /etc/kubernetes/manifests/kube-apiserver.yaml:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --advertise-address=172.16.16.111
- --allow-privileged=true
- --anonymous-auth=True
- --apiserver-count=1
- --authorization-mode=Node,RBAC
- --bind-address=0.0.0.0
- --client-ca-file=/etc/kubernetes/ssl/ca.crt
- --enable-admission-plugins=NodeRestriction
- --enable-aggregator-routing=False
- --enable-bootstrap-token-auth=true
- --endpoint-reconciler-type=lease
- --etcd-cafile=/etc/ssl/etcd/ssl/ca.pem
- --etcd-certfile=/etc/ssl/etcd/ssl/node-node1.pem
- --etcd-keyfile=/etc/ssl/etcd/ssl/node-node1-key.pem
- --etcd-servers=https://172.16.16.111:2379,https://172.16.16.112:2379,https://172.16.16.113:2379
- --insecure-port=0
- --kubelet-client-certificate=/etc/kubernetes/ssl/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/ssl/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalDNS,InternalIP,Hostname,ExternalDNS,ExternalIP
- --profiling=False
- --proxy-client-cert-file=/etc/kubernetes/ssl/front-proxy-client.crt
- --proxy-client-key-file=/etc/kubernetes/ssl/front-proxy-client.key
- --request-timeout=1m0s
- --requestheader-allowed-names=front-proxy-client
- --requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
- --runtime-config=
- --secure-port=6443
- --service-account-key-file=/etc/kubernetes/ssl/sa.pub
- --service-cluster-ip-range=10.233.0.0/18
- --service-node-port-range=30000-32767
- --storage-backend=etcd3
- --tls-cert-file=/etc/kubernetes/ssl/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/ssl/apiserver.key
image: gcr.io/google-containers/kube-apiserver:v1.16.6
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: 172.16.16.111
path: /healthz
port: 6443
scheme: HTTPS
initialDelaySeconds: 15
timeoutSeconds: 15
name: kube-apiserver
resources:
requests:
cpu: 250m
volumeMounts:
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/ca-certificates
name: etc-ca-certificates
readOnly: true
- mountPath: /etc/ssl/etcd/ssl
name: etcd-certs-0
readOnly: true
- mountPath: /etc/kubernetes/ssl
name: k8s-certs
readOnly: true
- mountPath: /usr/local/share/ca-certificates
name: usr-local-share-ca-certificates
readOnly: true
- mountPath: /usr/share/ca-certificates
name: usr-share-ca-certificates
readOnly: true
hostNetwork: true
priorityClassName: system-cluster-critical
volumes:
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /etc/ca-certificates
type: DirectoryOrCreate
name: etc-ca-certificates
- hostPath:
path: /etc/ssl/etcd/ssl
type: DirectoryOrCreate
name: etcd-certs-0
- hostPath:
path: /etc/kubernetes/ssl
type: DirectoryOrCreate
name: k8s-certs
- hostPath:
path: /usr/local/share/ca-certificates
type: DirectoryOrCreate
name: usr-local-share-ca-certificates
- hostPath:
path: /usr/share/ca-certificates
type: ""
name: usr-share-ca-certificates
status: {}
I used kubespray to install the cluster.
How can I connect to the etcd? Any help would be appreciated.
This context deadline exceeded error generally happens for one of the following reasons:
Using the wrong certificates. You could be using peer certificates instead of client certificates. Check the Kubernetes API server parameters, which tell you where the client certificates are located, because the API server is a client of etcd. Then use those same certificates in the etcdctl command from the node.
The etcd cluster is no longer operational because its peer members are down.
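In this particular setup, the kubeadm ClusterConfiguration and the kube-apiserver manifest above point at node-node1.pem rather than the member-node1.pem used in the failing command, so a hedged retry with the API server's own etcd client certificate would look like this (paths copied from the manifest; run it wherever those files are mounted):
# Retry with the same client cert/key kube-apiserver uses against etcd.
ETCDCTL_API=3 etcdctl --endpoints https://172.16.16.111:2379 \
  --cacert /etc/ssl/etcd/ssl/ca.pem \
  --cert /etc/ssl/etcd/ssl/node-node1.pem \
  --key /etc/ssl/etcd/ssl/node-node1-key.pem \
  endpoint health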

How does docker-registry persist images in OpenShift Origin

I'm new to OpenShift/Kubernetes/Docker and I was wondering where the Docker registry of OpenShift Origin persists its images, given that:
1. In the deployment's YAML of the docker registry, there is only an emptyDir volume declaration:
volumes:
- emptyDir: {}
name: registry-storage
2. On the machine where the pod is deployed, I can't see any volume using:
docker volume ls
3. The images are still persisted even if I restart the pod.
The docker registry deployment's YAML:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
creationTimestamp: '2020-04-26T18:16:50Z'
generation: 1
labels:
docker-registry: default
name: docker-registry
namespace: default
resourceVersion: '1844231'
selfLink: >-
/apis/apps.openshift.io/v1/namespaces/default/deploymentconfigs/docker-registry
uid: 1983153d-87ea-11ea-a4bc-fa163ee581f7
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
docker-registry: default
strategy:
activeDeadlineSeconds: 21600
resources: {}
rollingParams:
intervalSeconds: 1
maxSurge: 25%
maxUnavailable: 25%
timeoutSeconds: 600
updatePeriodSeconds: 1
type: Rolling
template:
metadata:
creationTimestamp: null
labels:
docker-registry: default
spec:
containers:
- env:
- name: REGISTRY_HTTP_ADDR
value: ':5000'
- name: REGISTRY_HTTP_NET
value: tcp
- name: REGISTRY_HTTP_SECRET
value:
- name: REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA
value: 'false'
- name: OPENSHIFT_DEFAULT_REGISTRY
value: 'docker-registry.default.svc:5000'
- name: REGISTRY_HTTP_TLS_CERTIFICATE
value: /etc/secrets/registry.crt
- name: REGISTRY_OPENSHIFT_SERVER_ADDR
value: 'docker-registry.default.svc:5000'
- name: REGISTRY_HTTP_TLS_KEY
value: /etc/secrets/registry.key
image: 'docker.io/openshift/origin-docker-registry:v3.11'
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 5000
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: registry
ports:
- containerPort: 5000
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 5000
scheme: HTTPS
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources:
requests:
cpu: 100m
memory: 256Mi
securityContext:
privileged: false
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /registry
name: registry-storage
- mountPath: /etc/secrets
name: registry-certificates
dnsPolicy: ClusterFirst
nodeSelector:
node-role.kubernetes.io/infra: 'true'
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: registry
serviceAccountName: registry
terminationGracePeriodSeconds: 30
volumes:
- emptyDir: {}
name: registry-storage
- name: registry-certificates
secret:
defaultMode: 420
secretName: registry-certificates
test: false
triggers:
- type: ConfigChange
status:
availableReplicas: 1
conditions:
- lastTransitionTime: '2020-04-26T18:17:12Z'
lastUpdateTime: '2020-04-26T18:17:12Z'
message: replication controller "docker-registry-1" successfully rolled out
reason: NewReplicationControllerAvailable
status: 'True'
type: Progressing
- lastTransitionTime: '2020-05-05T09:39:57Z'
lastUpdateTime: '2020-05-05T09:39:57Z'
message: Deployment config has minimum availability.
status: 'True'
type: Available
details:
causes:
- type: ConfigChange
message: config change
latestVersion: 1
observedGeneration: 1
readyReplicas: 1
replicas: 1
unavailableReplicas: 0
updatedReplicas: 1
To restart, I just delete the pod and a new one is created, since I'm using a deployment.
I'm creating the files in /registry.
Restarting does not mean the data is deleted; it still exists in the container's top (writable) layer. I suggest you get started by reading up on how container layers work.
Persistence means, for example in Kubernetes, that a pod can be deleted and re-created on another node and still maintain the same state of a volume.
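A hedged way to see where that emptyDir actually lives on the node is to look under the kubelet's pod directory (the path below is the kubelet default; OpenShift Origin installs may use /var/lib/origin/openshift.local.volumes as the root dir instead):
# Find the registry pod's UID, then inspect the emptyDir backing /registry.
POD_UID=$(oc -n default get pod -l docker-registry=default -o jsonpath='{.items[0].metadata.uid}')
sudo ls -l /var/lib/kubelet/pods/${POD_UID}/volumes/kubernetes.io~empty-dir/registry-storage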

Kubernetes - nginx-ingress is crashing after file upload via php

I'm running a Kubernetes cluster on Google Cloud Platform via their Kubernetes Engine. The cluster version is 1.13.11-gke.14. The PHP application pod contains 2 containers: Nginx as a reverse proxy and php-fpm (7.2).
In Google Cloud, a TCP Load Balancer is used, followed by internal routing via an Nginx Ingress.
The problem is: when I upload a bigger file (17 MB), the ingress crashes with this error:
W 2019-12-01T14:26:06.341588Z Dynamic reconfiguration failed: Post http+unix://nginx-status/configuration/backends: dial unix /tmp/nginx-status-server.sock: connect: no such file or directory
E 2019-12-01T14:26:06.341658Z Unexpected failure reconfiguring NGINX:
W 2019-12-01T14:26:06.345575Z requeuing initial-sync, err Post http+unix://nginx-status/configuration/backends: dial unix /tmp/nginx-status-server.sock: connect: no such file or directory
I 2019-12-01T14:26:06.354869Z Configuration changes detected, backend reload required.
E 2019-12-01T14:26:06.393528796Z Post http+unix://nginx-status/configuration/backends: dial unix /tmp/nginx-status-server.sock: connect: no such file or directory
E 2019-12-01T14:26:08.077580Z healthcheck error: Get http+unix://nginx-status/healthz: dial unix /tmp/nginx-status-server.sock: connect: connection refused
I 2019-12-01T14:26:12.314526990Z 10.132.0.25 - [10.132.0.25] - - [01/Dec/2019:14:26:12 +0000] "GET / HTTP/2.0" 200 541 "-" "GoogleStackdriverMonitoring-UptimeChecks(https://cloud.google.com/monitoring)" 99 1.787 [bap-staging-bap-staging-80] [] 10.102.2.4:80 553 1.788 200 5ac9d438e5ca31618386b35f67e2033b
E 2019-12-01T14:26:12.455236Z healthcheck error: Get http+unix://nginx-status/healthz: dial unix /tmp/nginx-status-server.sock: connect: connection refused
I 2019-12-01T14:26:13.156963Z Exiting with 0
Here is the YAML configuration of the Nginx ingress. The configuration is the default created by GitLab's system, which sets up the cluster on its own.
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "2"
creationTimestamp: "2019-11-24T17:35:04Z"
generation: 3
labels:
app: nginx-ingress
chart: nginx-ingress-1.22.1
component: controller
heritage: Tiller
release: ingress
name: ingress-nginx-ingress-controller
namespace: gitlab-managed-apps
resourceVersion: "2638973"
selfLink: /apis/apps/v1/namespaces/gitlab-managed-apps/deployments/ingress-nginx-ingress-controller
uid: bfb695c2-0ee0-11ea-a36a-42010a84009f
spec:
progressDeadlineSeconds: 600
replicas: 2
revisionHistoryLimit: 10
selector:
matchLabels:
app: nginx-ingress
release: ingress
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
creationTimestamp: null
labels:
app: nginx-ingress
component: controller
release: ingress
spec:
containers:
- args:
- /nginx-ingress-controller
- --default-backend-service=gitlab-managed-apps/ingress-nginx-ingress-default-backend
- --election-id=ingress-controller-leader
- --ingress-class=nginx
- --configmap=gitlab-managed-apps/ingress-nginx-ingress-controller
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 3
name: nginx-ingress-controller
ports:
- containerPort: 80
name: http
protocol: TCP
- containerPort: 443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 3
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
runAsUser: 33
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/nginx/modsecurity/modsecurity.conf
name: modsecurity-template-volume
subPath: modsecurity.conf
- mountPath: /var/log/modsec
name: modsecurity-log-volume
- args:
- /bin/sh
- -c
- tail -f /var/log/modsec/audit.log
image: busybox
imagePullPolicy: Always
name: modsecurity-log
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/log/modsec
name: modsecurity-log-volume
readOnly: true
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: ingress-nginx-ingress
serviceAccountName: ingress-nginx-ingress
terminationGracePeriodSeconds: 60
volumes:
- configMap:
defaultMode: 420
items:
- key: modsecurity.conf
path: modsecurity.conf
name: ingress-nginx-ingress-controller
name: modsecurity-template-volume
- emptyDir: {}
name: modsecurity-log-volume
I have no idea what else to try. I'm running the cluster on 3 nodes (2x 1 vCPU, 1.5 GB RAM and 1x preemptible 2 vCPU, 1.8 GB RAM), all of them on SSD drives.
Anytime I upload an image, disk I/O goes crazy.
(charts: Disk IOPS and Disk I/O)
Thanks for your help.
Found the solution. The nginx-ingress pod contained ModSecurity too. All requests were analyzed by ModSecurity, and larger uploaded files caused those crashes. It wasn't really a crash at all: the analysis took so much CPU and I/O that health-check responses for everything else slowed down. The solution is to configure ModSecurity correctly or to disable it, as sketched below.
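A hedged sketch of the two knobs involved, using standard ingress-nginx options (the Ingress name and namespace below are placeholders; the ConfigMap name comes from the --configmap flag in the Deployment above):
# Allow larger request bodies for the affected Ingress.
kubectl -n <app-namespace> annotate ingress <your-ingress> \
  nginx.ingress.kubernetes.io/proxy-body-size=32m --overwrite
# Or disable ModSecurity in the controller ConfigMap.
kubectl -n gitlab-managed-apps patch configmap ingress-nginx-ingress-controller \
  --type merge -p '{"data":{"enable-modsecurity":"false"}}'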

kubelet cannot set the cluster-dns parameter using kube-dns

I have a problem when I set the kubelet parameter cluster-dns.
My OS is CentOS Linux release 7.0.1406 (Core)
Kernel:Linux master 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
kubelet config file:
KUBELET_HOSTNAME="--hostname-override=master"
#KUBELET_API_SERVER="--api-servers=http://master:8080
KUBECONFIG="--kubeconfig=/root/.kube/config-demo"
KUBELET_DNS="–-cluster-dns=10.254.0.10"
KUBELET_DOMAIN="--cluster-domain=cluster.local"
# Add your own!
KUBELET_ARGS="--cgroup-driver=systemd --fail-swap-on=false --pod_infra_container_image=177.1.1.35/library/pause:latest"
config file:
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=4"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://master:8080"
kubelet.service file:
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_API_SERVER \
$KUBELET_DNS \
$KUBELET_DOMAIN \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_ARGS \
$KUBECONFIG
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
When I start the kubelet service, I can see that the "--cluster-dns=10.254.0.10" parameter appears to be set correctly:
root 29705 1 1 13:24 ? 00:00:16 /usr/bin/kubelet --logtostderr=true --v=4 –-cluster-dns=10.254.0.10 --cluster-domain=cluster.local --hostname-override=master --allow-privileged=false --cgroup-driver=systemd --fail-swap-on=false --pod_infra_container_image=177.1.1.35/library/pause:latest --kubeconfig=/root/.kube/config-demo
But when I check the service with systemctl status kubelet, the cluster-dns parameter has only one "-", like:
systemctl status kubelet -l
● kubelet.service - Kubernetes Kubelet Server
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2018-07-13 13:24:07 CST; 5s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 29705 (kubelet)
Memory: 30.6M
CGroup: /system.slice/kubelet.service
└─29705 /usr/bin/kubelet --logtostderr=true --v=4 -cluster-dns=10.254.0.10 --cluster-domain=cluster.local --hostname-override=master --allow-privileged=false --cgroup-driver=systemd --fail-swap-on=false --pod_infra_container_image=177.1.1.35/library/pause:latest --kubeconfig=/root/.kube/config-demo
The logs say nothing is set in the cluster-dns flag:
Jul 13 13:24:07 master kubelet: I0713 13:24:07.680625 29705 flags.go:27] FLAG: --cluster-dns="[]"
Jul 13 13:24:07 master kubelet: I0713 13:24:07.680636 29705 flags.go:27] FLAG: --cluster-domain="cluster.local"
The Pods with errors:
pod: "java-deploy-69c84746b9-b2d7j_default(ce02d183-864f-11e8-9bdb-525400c4f6bf)". kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.
My kube-dns config file:
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: 10.254.0.10
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
---
#apiVersion: v1
#kind: ServiceAccount
#metadata:
# name: kube-dns
# namespace: kube-system
# labels:
# kubernetes.io/cluster-service: "true"
# addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-dns
namespace: kube-system
labels:
addonmanager.kubernetes.io/mode: EnsureExists
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
spec:
# replicas: not specified here:
# 1. In order to make Addon Manager do not reconcile this replicas parameter.
# 2. Default is 1.
# 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
strategy:
rollingUpdate:
maxSurge: 10%
maxUnavailable: 0
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
volumes:
- name: kube-dns-config
configMap:
name: kube-dns
optional: true
containers:
- name: kubedns
image: 177.1.1.35/library/kube-dns:1.14.8
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
livenessProbe:
httpGet:
path: /healthcheck/kubedns
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
# we poll on pod startup for the Kubernetes master service and
# only setup the /readiness HTTP server once that's available.
initialDelaySeconds: 3
timeoutSeconds: 5
args:
- --domain=cluster.local.
- --dns-port=10053
- --config-dir=/kube-dns-config
- --kube-master-url=http://177.1.1.40:8080
- --v=2
env:
- name: PROMETHEUS_PORT
value: "10055"
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
- containerPort: 10055
name: metrics
protocol: TCP
volumeMounts:
- name: kube-dns-config
mountPath: /kube-dns-config
- name: dnsmasq
image: 177.1.1.35/library/dnsmasq:1.14.8
livenessProbe:
httpGet:
path: /healthcheck/dnsmasq
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- -v=2
- -logtostderr
- -configDir=/etc/k8s/dns/dnsmasq-nanny
- -restartDnsmasq=true
- --
- -k
- --cache-size=1000
- --no-negcache
- --log-facility=-
- --server=/cluster.local/127.0.0.1#10053
- --server=/in-addr.arpa/127.0.0.1#10053
- --server=/ip6.arpa/127.0.0.1#10053
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
# see: https://github.com/kubernetes/kubernetes/issues/29055 for details
resources:
requests:
cpu: 150m
memory: 20Mi
volumeMounts:
- name: kube-dns-config
mountPath: /etc/k8s/dns/dnsmasq-nanny
- name: sidecar
image: 177.1.1.35/library/sidecar:1.14.8
livenessProbe:
httpGet:
path: /metrics
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- --v=2
- --logtostderr
- --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
- --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
ports:
- containerPort: 10054
name: metrics
protocol: TCP
resources:
requests:
memory: 20Mi
cpu: 10m
dnsPolicy: Default # Don't use cluster DNS.
#serviceAccountName: kube-dns
Recheck your kubelet config:
KUBELET_DNS="–-cluster-dns=10.254.0.10"
It seems to me that the first dash is longer than the second.
A copy-and-paste probably introduced that stray character (an en dash instead of a regular hyphen), so kubelet never recognizes the flag.
Retype it and retry.
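A quick hedged way to spot such characters is to grep the env file for anything non-ASCII:
# An en dash (U+2013) pasted in place of "-" will show up here.
grep -nP '[^\x00-\x7F]' /etc/kubernetes/kubelet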
