Docker getsockopt: connection refused - docker

I am running a Docker server on my Ubuntu machine, and when one of the applications running on the server tries to open a socket, I get the following error. Any clue why this is happening?
I have run the same code in VirtualBox and I don't see it there... Maybe a lack-of-memory issue?
Thanks in advance,
regards!
croft_1 | 2016/04/15 13:55:03 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:05 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:07 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:09 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:11 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:13 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:15 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:17 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:19 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:21 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:23 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:25 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:27 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:29 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:31 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:33 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:35 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:37 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:39 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:41 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | panic: runtime error: invalid memory address or nil pointer dereference
croft_1 | [signal 0xb code=0x1 addr=0x20 pc=0x401570]
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
3b4a7e9023ad 0.00% 7.741 MB / 1.039 GB 0.74% 9.314 kB / 5.679 kB 5.046 MB / 0 B 0
6614c0b5c616 0.02% 4.379 MB / 1.039 GB 0.42% 10.02 kB / 648 B 5.009 MB / 0 B 0
658988a96412 3.09% 77.4 MB / 1.039 GB 7.45% 9.83 kB / 648 B 46.69 MB / 17.5 MB 0
9e1c772d7635 0.83% 115.7 MB / 1.039 GB 11.13% 15.12 kB / 5.484 kB 37.06 MB / 147.5 kB 0
0cc07e241f7a 0.22% 102.6 MB / 1.039 GB 9.87% 12.16 kB / 2.652 kB 37.25 MB / 471 kB 0
^C
lorabackbone@lorabackbone-desktop:~/server-devenv$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3b4a7e9023ad serverdevenv_croft "./croft" 3 hours ago Up 2 minutes 1700/tcp, 0.0.0.0:1700->1700/udp serverdevenv_croft_1
6614c0b5c616 ansi/mosquitto "/usr/local/sbin/mosq" 3 hours ago Up 3 minutes 0.0.0.0:1883->1883/tcp serverdevenv_mosquitto_1
658988a96412 serverdevenv_mongodb "/entrypoint.sh mongo" 3 hours ago Up 3 minutes 0.0.0.0:27017->27017/tcp serverdevenv_mongodb_1
9e1c772d7635 rabbitmq:3-management "/docker-entrypoint.s" 3 hours ago Up 3 minutes 4369/tcp, 5671/tcp, 0.0.0.0:5672->5672/tcp, 15671/tcp, 25672/tcp, 0.0.0.0:15672->15672/tcp serverdevenv_rabbitmq_1
0cc07e241f7a cpswan/node-red "/usr/local/bin/node-" 3 hours ago Up 3 minutes 0.0.0.0:1880->1880/tcp
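
For what it's worth: 5672 is RabbitMQ's AMQP port, and the docker ps output shows serverdevenv_rabbitmq_1 publishing it, so 172.17.0.5 is presumably that container. croft apparently starts dialing before RabbitMQ has finished booting, and once its retries run out it panics on the nil connection. A minimal sketch of waiting for the port before starting the app (assuming nc is available in the croft image; address and port are taken from the log above):
# entrypoint sketch: block until RabbitMQ accepts TCP connections, then start croft
until nc -z 172.17.0.5 5672; do
  echo "waiting for rabbitmq on 5672..."
  sleep 2
done
exec ./croft
Note that with docker-compose, depends_on only orders container startup; it does not wait for the AMQP listener, so a loop like this (or a longer retry window in the app itself) is still needed on a slow host.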

Related

Jaeger All in one docker gRPC issues

I get this error when I start up the latest jaeger all-in-one Docker image. Not sure why this is; can anyone help here? I am running this on Windows with Docker Desktop, behind a corporate proxy, if that's helpful. This is the command I am using to start it up:
docker run --name jaeger -e COLLECTOR_ZIPKIN_HTTP_PORT=9411 -p 14250:14250 -p 5775:5775/udp -p 6831:6831/udp -p 6832:6832/udp -p 5778:5778 -p 16686:16686 -p 14268:14268 -p 9411:9411 jaegertracing/all-in-one:latest
jaeger_1 | {"level":"warn","ts":1614280472.7859962,"caller":"grpc#v1.29.1/clientconn.go:1275","msg":"grpc: addrConn.createTransport failed to connect to {:14250 0 }. Err: connection error: desc = "transport: Error while dialing failed to do connect handshake, response: \"HTTP/1.0 400 Bad Request\\r\\nConnection: close\\r\\n\\r\\nBad request\"". Reconnecting...","system":"grpc","grpc_log":true}
jaeger_1 | {"level":"info","ts":1614280472.7862206,"caller":"grpc#v1.29.1/clientconn.go:1056","msg":"Subchannel Connectivity change to TRANSIENT_FAILURE","system":"grpc","grpc_log":true}
jaeger_1 | {"level":"info","ts":1614280472.7865677,"caller":"grpc#v1.29.1/clientconn.go:417","msg":"Channel Connectivity change to TRANSIENT_FAILURE","system":"grpc","grpc_log":true}
jaeger_1 | {"level":"info","ts":1614280472.786733,"caller":"grpc/builder.go:119","msg":"Agent collector connection state change","dialTarget":":14250","status":"TRANSIENT_FAILURE"}
jaeger_1 | {"level":"info","ts":1614280473.7868812,"caller":"grpc#v1.29.1/clientconn.go:1056","msg":"Subchannel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
jaeger_1 | {"level":"info","ts":1614280473.7870724,"caller":"grpc#v1.29.1/clientconn.go:1193","msg":"Subchannel picks a new address ":14250" to connect","system":"grpc","grpc_log":true}
jaeger_1 | {"level":"warn","ts":1614280473.9255273,"caller":"grpc#v1.29.1/clientconn.go:1275","msg":"grpc: addrConn.createTransport failed to connect to {:14250 0 }. Err: connection error: desc = "transport: Error while dialing failed to do connect handshake, response: \"HTTP/1.0 400 Bad Request\\r\\nConnection: close\\r\\n\\r\\nBad request\"". Reconnecting...","system":"grpc","grpc_log":true}
jaeger_1 | {"level":"info","ts":1614280473.925584,"caller":"grpc#v1.29.1/clientconn.go:1056","msg":"Subchannel Connectivity change to TRANSIENT_FAILURE","system":"grpc","grpc_log":true}
jaeger_1 | {"level":"info","ts":1614280475.517604,"caller":"grpc#v1.29.1/clientconn.go:1056","msg":"Subchannel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
jaeger_1 | {"level":"info","ts":1614280475.517681,"caller":"grpc#v1.29.1/clientconn.go:1193","msg":"Subchannel picks a new address ":14250" to connect","system":"grpc","grpc_log":true}
jaeger_1 | {"level":"warn","ts":1614280475.6554408,"caller":"grpc#v1.29.1/clientconn.go:1275","msg":"grpc: addrConn.createTransport failed to connect to {:14250 0 }. Err: connection error: desc = "transport: Error while dialing failed to do connect handshake, response: \"HTTP/1.0 400 Bad Request\\r\\nConnection: close\\r\\n\\r\\nBad request\"". Reconnecting...","system":"grpc","grpc_log":true}
jaeger_1 | {"level":"info","ts":1614280475.6555314,"caller":"grpc#v1.29.1/clientconn.go:1056","msg":"Subchannel Connectivity change to TRANSIENT_FAILURE","system":"grpc","grpc_log":true}
jaeger_1 | {"level":"info","ts":1614280478.3234975,"caller":"grpc#v1.29.1/clientconn.go:1056","msg":"Subchannel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
jaeger_1 | {"level":"info","ts":1614280478.3236346,"caller":"grpc#v1.29.1/clientconn.go:1193","msg":"Subchannel picks a new address ":14250" to connect","system":"grpc","grpc_log":true}
jaeger_1 | {"level":"warn","ts":1614280478.5104907,"caller":"grpc#v1.29.1/clientconn.go:1275","msg":"grpc: addrConn.createTransport failed to connect to {:14250 0 }. Err: connection error: desc = "transport: Error while dialing failed to do connect handshake, response: \"HTTP/1.0 400 Bad Request\\r\\nConnection: close\\r\\n\\r\\nBad request\"". Reconnecting...","system":"grpc","grpc_log":true}
jaeger_1 | {"level":"info","ts":1614280478.5105915,"caller":"grpc#v1.29.1/clientconn.go:1056","msg":"Subchannel Connectivity change to TRANSIENT_FAILURE","system":"grpc","grpc_log":true}
jaeger_1 | {"level":"info","ts":1614280481.8225894,"caller":"grpc#v1.29.1/clientconn.go:1056","msg":"Subchannel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
jaeger_1 | {"level":"info","ts":1614280481.8227663,"caller":"grpc#v1.29.1/clientconn.go:1193","msg":"Subchannel picks a new address ":14250" to connect","system":"grpc","grpc_log":true}
jaeger_1 | {"level":"warn","ts":1614280482.031132,"caller":"grpc#v1.29.1/clientconn.go:1275","msg":"grpc: addrConn.createTransport failed to connect to {:14250 0 }. Err: connection error: desc = "transport: Error while dialing failed to do connect handshake, response: \"HTTP/1.0 400 Bad Request\\r\\nConnection: close\\r\\n\\r\\nBad request\"". Reconnecting...","system":"grpc","grpc_log":true}

kubeadm init failing on ARM64

I'm trying to set up a single-master cluster on some sopine64s (quad A53 with 2GB RAM) running Armbian 5.38 (Ubuntu 16.04 based). The kernel is 3.10.107-pine64.
Steps taken so far (the swap and sysctl commands are sketched after the list):
usual IP address, hostname, timezone, DNS, etc. config
apt upgrade
disable swap
set net.bridge.bridge-nf-call-iptables to 1 in sysctl.conf (intending to use weavenet)
install docker 1.13.1 (docker.io package)
install kubeadm, kubelet, kubectl v1.11
systemctl enable and start kubelet and docker
reboot
kubeadm config images pull (all download ok)
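For reference, a minimal sketch of the swap and sysctl steps above (assuming the usual Debian/Ubuntu layout; Armbian paths may differ):
# disable swap now and across reboots; kubeadm's preflight checks require it off
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
# make bridged traffic visible to iptables, as weavenet and most CNIs expect
echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p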
Here's the output of kubeadm init:
I0712 18:58:42.149510 31708 feature_gate.go:230] feature gates: &{map[]}
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0712 18:58:42.301648 31708 kernel_validator.go:81] Validating kernel version
I0712 18:58:42.302621 31708 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [sopine0 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.16]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [sopine0 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [sopine0 localhost] and IPs [192.168.0.16 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- No internet connection is available so the kubelet cannot pull or find the following control plane images:
- k8s.gcr.io/kube-apiserver-arm64:v1.11.0
- k8s.gcr.io/kube-controller-manager-arm64:v1.11.0
- k8s.gcr.io/kube-scheduler-arm64:v1.11.0
- k8s.gcr.io/etcd-arm64:3.2.18
- You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
are downloaded locally and cached.
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster
If I look at the containers, the one for kube-apiserver is exiting and being recreated every few minutes. Here's its log file:
Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I0712 07:06:39.855921 1 server.go:703] external host was not specified, using 192.168.0.16
I0712 07:06:39.856998 1 server.go:145] Version: v1.11.0
I0712 07:07:05.966337 1 plugins.go:158] Loaded 7 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0712 07:07:05.966598 1 plugins.go:161] Loaded 5 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0712 07:07:05.975261 1 plugins.go:158] Loaded 7 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0712 07:07:05.975630 1 plugins.go:161] Loaded 5 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0712 07:07:06.459185 1 master.go:234] Using reconciler: lease
W0712 07:07:30.376324 1 genericapiserver.go:319] Skipping API batch/v2alpha1 because it has no resources.
W0712 07:07:33.264038 1 genericapiserver.go:319] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0712 07:07:33.325028 1 genericapiserver.go:319] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0712 07:07:33.508270 1 genericapiserver.go:319] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0712 07:07:38.454808 1 genericapiserver.go:319] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
[restful] 2018/07/12 07:07:38 log.go:33: [restful/swagger] listing is available at https://192.168.0.16:6443/swaggerapi
[restful] 2018/07/12 07:07:38 log.go:33: [restful/swagger] https://192.168.0.16:6443/swaggerui/ is mapped to folder /swagger-ui/
[restful] 2018/07/12 07:07:48 log.go:33: [restful/swagger] listing is available at https://192.168.0.16:6443/swaggerapi
[restful] 2018/07/12 07:07:48 log.go:33: [restful/swagger] https://192.168.0.16:6443/swaggerui/ is mapped to folder /swagger-ui/
I0712 07:07:48.845592 1 plugins.go:158] Loaded 7 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0712 07:07:48.845818 1 plugins.go:161] Loaded 5 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0712 07:08:11.577474 1 serve.go:96] Serving securely on [::]:6443
I0712 07:08:11.578033 1 available_controller.go:278] Starting AvailableConditionController
I0712 07:08:11.578198 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0712 07:08:11.578033 1 apiservice_controller.go:90] Starting APIServiceRegistrationController
I0712 07:08:11.581700 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0712 07:08:11.581449 1 crd_finalizer.go:242] Starting CRDFinalizer
I0712 07:08:11.581617 1 autoregister_controller.go:136] Starting autoregister controller
I0712 07:08:11.582060 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0712 07:08:11.583450 1 controller.go:84] Starting OpenAPI AggregationController
I0712 07:08:11.584707 1 customresource_discovery_controller.go:199] Starting DiscoveryController
I0712 07:08:11.585112 1 naming_controller.go:284] Starting NamingConditionController
I0712 07:08:11.585243 1 establishing_controller.go:73] Starting EstablishingController
I0712 07:08:11.585336 1 crdregistration_controller.go:112] Starting crd-autoregister controller
I0712 07:08:11.585379 1 controller_utils.go:1025] Waiting for caches to sync for crd-autoregister controller
I0712 07:08:13.059515 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41525: EOF
<above message repeats 9 more times on different ports in the 415xx range>
I0712 07:08:15.961160 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41566: EOF
I0712 07:08:16.582527 1 controller_utils.go:1032] Caches are synced for crd-autoregister controller
I0712 07:08:16.700615 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41564: EOF
<above message repeats 60 more times on different ports in the 41[5-7]xx range>
I0712 07:08:17.535106 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41720: EOF
I0712 07:08:17.560585 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0712 07:08:17.563061 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41723: EOF
I0712 07:08:17.577852 1 cache.go:39] Caches are synced for autoregister controller
I0712 07:08:17.596321 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41696: EOF
<above message repeats 6 more times on different ports in the 41[5-7]xx range>
I0712 07:08:17.686658 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41706: EOF
I0712 07:08:17.688440 1 trace.go:76] Trace[288588746]: "List /api/v1/services" (started: 2018-07-12 07:08:17.127883224 +0000 UTC m=+97.754900744) (total time: 560.373467ms):
Trace[288588746]: [560.004232ms] [559.9889ms] Listing from storage done
I0712 07:08:17.696643 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41726: EOF
<above message repeats 11 more times on different ports in the 41[5-7]xx range>
I0712 07:08:17.811279 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41716: EOF
I0712 07:08:17.831546 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0712 07:08:17.850811 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41717: EOF
<above message repeats 11 more times on different ports in the 41[5-7]xx range>
I0712 07:08:18.303267 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41752: EOF
I0712 07:08:18.359750 1 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I0712 07:08:18.386442 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41763: EOF
I0712 07:08:18.399648 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41759: EOF
I0712 07:08:18.431038 1 trace.go:76] Trace[413119584]: "GuaranteedUpdate etcd3: *core.Pod" (started: 2018-07-12 07:08:17.845710035 +0000 UTC m=+98.472727763) (total time: 585.187661ms):
Trace[413119584]: [499.634456ms] [499.240097ms] Transaction prepared
I0712 07:08:18.432293 1 trace.go:76] Trace[838520449]: "Patch /api/v1/namespaces/kube-system/pods/kube-apiserver-sopine0/status" (started: 2018-07-12 07:08:17.845257845 +0000 UTC m=+98.472275323) (total time: 586.889091ms):
Trace[838520449]: [272.406761ms] [271.550004ms] About to check admission control
Trace[838520449]: [586.455609ms] [314.048848ms] Object stored in database
I0712 07:08:18.590379 1 controller.go:158] Shutting down kubernetes service endpoint reconciler
I0712 07:08:18.591681 1 available_controller.go:290] Shutting down AvailableConditionController
I0712 07:08:18.592066 1 autoregister_controller.go:160] Shutting down autoregister controller
I0712 07:08:18.592253 1 apiservice_controller.go:102] Shutting down APIServiceRegistrationController
I0712 07:08:18.593252 1 crd_finalizer.go:254] Shutting down CRDFinalizer
I0712 07:08:18.593636 1 crdregistration_controller.go:143] Shutting down crd-autoregister controller
I0712 07:08:18.593831 1 establishing_controller.go:84] Shutting down EstablishingController
I0712 07:08:18.593962 1 naming_controller.go:295] Shutting down NamingConditionController
I0712 07:08:18.596110 1 customresource_discovery_controller.go:210] Shutting down DiscoveryController
I0712 07:08:18.596965 1 serve.go:136] Stopped listening on [::]:6443
I0712 07:08:18.597046 1 controller.go:90] Shutting down OpenAPI AggregationController
E0712 07:08:18.605877 1 memcache.go:147] couldn't get resource list for authorization.k8s.io/v1beta1: Get https://127.0.0.1:6443/apis/authorization.k8s.io/v1beta1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.608345 1 memcache.go:147] couldn't get resource list for autoscaling/v1: Get https://127.0.0.1:6443/apis/autoscaling/v1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.610552 1 memcache.go:147] couldn't get resource list for autoscaling/v2beta1: Get https://127.0.0.1:6443/apis/autoscaling/v2beta1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.613608 1 memcache.go:147] couldn't get resource list for batch/v1: Get https://127.0.0.1:6443/apis/batch/v1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.616508 1 memcache.go:147] couldn't get resource list for batch/v1beta1: Get https://127.0.0.1:6443/apis/batch/v1beta1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.619558 1 memcache.go:147] couldn't get resource list for certificates.k8s.io/v1beta1: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.620335 1 storage_rbac.go:193] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:discovery: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.623207 1 storage_rbac.go:193] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:basic-user: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.630429 1 available_controller.go:311] v1beta1.extensions failed with: Put https://127.0.0.1:6443/apis/apiregistration.k8s.io/v1/apiservices/v1beta1.extensions/status: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.632957 1 available_controller.go:311] v1beta1.batch failed with: Put https://127.0.0.1:6443/apis/apiregistration.k8s.io/v1/apiservices/v1beta1.batch/status: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.634480 1 available_controller.go:311] v1.authorization.k8s.io failed with: Put https://127.0.0.1:6443/apis/apiregistration.k8s.io/v1/apiservices/v1.authorization.k8s.io/status: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.636395 1 memcache.go:147] couldn't get resource list for networking.k8s.io/v1: Get https://127.0.0.1:6443/apis/networking.k8s.io/v1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.637222 1 available_controller.go:311] v1beta1.authentication.k8s.io failed with: Put https://127.0.0.1:6443/apis/apiregistration.k8s.io/v1/apiservices/v1beta1.authentication.k8s.io/status: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.637426 1 available_controller.go:311] v1.authentication.k8s.io failed with: Put https://127.0.0.1:6443/apis/apiregistration.k8s.io/v1/apiservices/v1.authentication.k8s.io/status: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.637987 1 storage_rbac.go:193] unable to reconcile clusterrole.rbac.authorization.k8s.io/admin: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/admin: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.638575 1 memcache.go:147] couldn't get resource list for policy/v1beta1: Get https://127.0.0.1:6443/apis/policy/v1beta1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.648978 1 repair.go:73] unable to refresh the port block: Get https://127.0.0.1:6443/api/v1/services: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.649411 1 controller.go:192] unable to sync kubernetes service: Post https://127.0.0.1:6443/api/v1/namespaces: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.649511 1 controller.go:179] unable to create required kubernetes system namespace kube-system: Post https://127.0.0.1:6443/api/v1/namespaces: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.652296 1 storage_rbac.go:193] unable to reconcile clusterrole.rbac.authorization.k8s.io/edit: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/edit: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.653581 1 memcache.go:147] couldn't get resource list for rbac.authorization.k8s.io/v1: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.656058 1 repair.go:88] unable to refresh the service IP block: Get https://127.0.0.1:6443/api/v1/services: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.657312 1 controller.go:179] unable to create required kubernetes system namespace kube-public: Post https://127.0.0.1:6443/api/v1/namespaces: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.657317 1 storage_rbac.go:193] unable to reconcile clusterrole.rbac.authorization.k8s.io/view: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/view: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.659613 1 memcache.go:147] couldn't get resource list for rbac.authorization.k8s.io/v1beta1: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.663703 1 storage_rbac.go:193] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.665261 1 memcache.go:147] couldn't get resource list for storage.k8s.io/v1: Get https://127.0.0.1:6443/apis/storage.k8s.io/v1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.666096 1 storage_rbac.go:193] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.667801 1 memcache.go:147] couldn't get resource list for storage.k8s.io/v1beta1: Get https://127.0.0.1:6443/apis/storage.k8s.io/v1beta1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.669445 1 storage_rbac.go:193] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.670988 1 memcache.go:147] couldn't get resource list for admissionregistration.k8s.io/v1beta1: Get https://127.0.0.1:6443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.672630 1 storage_rbac.go:193] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:heapster: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.673420 1 memcache.go:147] couldn't get resource list for apiextensions.k8s.io/v1beta1: Get https://127.0.0.1:6443/apis/apiextensions.k8s.io/v1beta1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.674753 1 storage_rbac.go:193] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:node: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.675802 1 controller.go:160] no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Apologies for what may seem like a dump of log files, but any help is appreciated.
I did some more digging, and apparently this is a known issue with bare-metal kubeadm deployments on all versions above 1.9.6. I was able to run init successfully by downgrading the version.
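If downgrading is the route taken, a hedged sketch of pinning the older packages on Ubuntu (the 1.9.6-00 revision string is an assumption; check apt-cache madison kubeadm for what your mirror actually carries):
sudo apt-get install -y --allow-downgrades \
  kubeadm=1.9.6-00 kubelet=1.9.6-00 kubectl=1.9.6-00
# keep a later apt upgrade from bumping them back up
sudo apt-mark hold kubeadm kubelet kubectl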

Can't see node after joining it to cluster - "getsockopt: connection refused"

I am new to Kubernetes.
I have two nodes:
Master
Worker
I installed Kubernetes on both of them, ran the kubeadm init... command on the Master node, and received the command to join a new worker to the cluster:
sudo kubeadm join --token 61a503.3bdf2341a37a2732 192.168.190.159:6443 --discovery-token-ca-cert-hash sha256:ef66d8b7284af9e80f18767af39b5f164e00fd7fe714d3092e8ff682f07076da
I ran the above command on the Worker node, and it seems that it succeeded.
This is the output:
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
[discovery] Trying to connect to API Server "192.168.190.159:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.190.159:6443"
[discovery] Requesting info from "https://192.168.190.159:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.190.159:6443"
[discovery] Successfully established connection with API Server "192.168.190.159:6443"
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
But when I go to the Master and run:
kubectl get nodes
I see only the master:
master@osboxes:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
osboxes Ready master 4h v1.9.1
Docker version on both nodes:
Client:
Version: 1.13.1
API version: 1.26
Go version: go1.6.2
Git commit: 092cba3
Built: Thu Nov 2 20:40:23 2017
OS/Arch: linux/amd64
Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Go version: go1.6.2
Git commit: 092cba3
Built: Thu Nov 2 20:40:23 2017
OS/Arch: linux/amd64
Experimental: false
How can I find out what the problem is? Any idea what it could be?
By the way, I tried the same thing on two nodes in AWS and it worked fine.
EDIT (15.5.2018): Logs
These are the logs from the kubelet daemon on the Worker node; I exported them with sudo journalctl -u kubelet > logs.txt:
May 15 06:39:05 osboxes kubelet[12160]: E0515 06:39:05.113840 12160 kubelet_node_status.go:375] Unable to update node status: update node status exceeds retry count
May 15 06:39:06 osboxes kubelet[12160]: E0515 06:39:06.060871 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://192.168.190.159:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:06 osboxes kubelet[12160]: E0515 06:39:06.072458 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.190.159:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:06 osboxes kubelet[12160]: E0515 06:39:06.075082 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://192.168.190.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:07 osboxes kubelet[12160]: E0515 06:39:07.064412 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://192.168.190.159:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:07 osboxes kubelet[12160]: E0515 06:39:07.082627 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://192.168.190.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:07 osboxes kubelet[12160]: E0515 06:39:07.084203 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.190.159:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:08 osboxes kubelet[12160]: E0515 06:39:08.084848 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://192.168.190.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:08 osboxes kubelet[12160]: E0515 06:39:08.085296 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://192.168.190.159:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:08 osboxes kubelet[12160]: E0515 06:39:08.086186 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.190.159:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:09 osboxes kubelet[12160]: E0515 06:39:09.091850 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.190.159:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:09 osboxes kubelet[12160]: E0515 06:39:09.092907 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://192.168.190.159:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:09 osboxes kubelet[12160]: E0515 06:39:09.093494 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://192.168.190.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:10 osboxes kubelet[12160]: E0515 06:39:10.094472 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.190.159:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:10 osboxes kubelet[12160]: E0515 06:39:10.097289 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://192.168.190.159:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:10 osboxes kubelet[12160]: E0515 06:39:10.098355 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://192.168.190.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:11 osboxes kubelet[12160]: E0515 06:39:11.101260 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://192.168.190.159:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:11 osboxes kubelet[12160]: E0515 06:39:11.102788 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://192.168.190.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:11 osboxes kubelet[12160]: E0515 06:39:11.103772 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.190.159:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:12 osboxes kubelet[12160]: E0515 06:39:12.109494 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://192.168.190.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:12 osboxes kubelet[12160]: E0515 06:39:12.126419 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://192.168.190.159:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:12 osboxes kubelet[12160]: E0515 06:39:12.127858 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.190.159:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:13 osboxes kubelet[12160]: E0515 06:39:13.128797 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://192.168.190.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:13 osboxes kubelet[12160]: E0515 06:39:13.130811 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://192.168.190.159:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:13 osboxes kubelet[12160]: E0515 06:39:13.132159 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.190.159:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:14 osboxes kubelet[12160]: E0515 06:39:14.132703 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://192.168.190.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:14 osboxes kubelet[12160]: E0515 06:39:14.133885 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://192.168.190.159:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:14 osboxes kubelet[12160]: E0515 06:39:14.134534 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.190.159:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:15 osboxes kubelet[12160]: E0515 06:39:15.123979 12160 kubelet_node_status.go:383] Error updating node status, will retry: error getting node "osboxes": Get https://192.168.190.159:6443/api/v1/nodes/osboxes?resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:15 osboxes kubelet[12160]: E0515 06:39:15.126886 12160 kubelet_node_status.go:383] Error updating node status, will retry: error getting node "osboxes": Get https://192.168.190.159:6443/api/v1/nodes/osboxes: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:15 osboxes kubelet[12160]: E0515 06:39:15.128832 12160 kubelet_node_status.go:383] Error updating node status, will retry: error getting node "osboxes": Get https://192.168.190.159:6443/api/v1/nodes/osboxes: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:15 osboxes kubelet[12160]: E0515 06:39:15.132161 12160 kubelet_node_status.go:383] Error updating node status, will retry: error getting node "osboxes": Get https://192.168.190.159:6443/api/v1/nodes/osboxes: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:15 osboxes kubelet[12160]: E0515 06:39:15.134043 12160 kubelet_node_status.go:383] Error updating node status, will retry: error getting node "osboxes": Get https://192.168.190.159:6443/api/v1/nodes/osboxes: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:15 osboxes kubelet[12160]: E0515 06:39:15.134746 12160 kubelet_node_status.go:375] Unable to update node status: update node status exceeds retry count
May 15 06:39:15 osboxes kubelet[12160]: E0515 06:39:15.142404 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://192.168.190.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:15 osboxes kubelet[12160]: E0515 06:39:15.143773 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.190.159:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:15 osboxes kubelet[12160]: E0515 06:39:15.144730 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://192.168.190.159:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:16 osboxes kubelet[12160]: E0515 06:39:16.146062 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.190.159:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:16 osboxes kubelet[12160]: E0515 06:39:16.147948 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://192.168.190.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:16 osboxes kubelet[12160]: E0515 06:39:16.148963 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://192.168.190.159:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:17 osboxes kubelet[12160]: E0515 06:39:17.153690 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.190.159:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:17 osboxes kubelet[12160]: E0515 06:39:17.169648 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://192.168.190.159:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:17 osboxes kubelet[12160]: E0515 06:39:17.170775 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://192.168.190.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:18 osboxes kubelet[12160]: E0515 06:39:18.171909 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.190.159:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:18 osboxes kubelet[12160]: E0515 06:39:18.174020 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://192.168.190.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:18 osboxes kubelet[12160]: E0515 06:39:18.175013 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://192.168.190.159:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:19 osboxes kubelet[12160]: E0515 06:39:19.178296 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.190.159:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:19 osboxes kubelet[12160]: E0515 06:39:19.182903 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://192.168.190.159:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:19 osboxes kubelet[12160]: E0515 06:39:19.184147 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://192.168.190.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:20 osboxes kubelet[12160]: E0515 06:39:20.183063 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.190.159:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:20 osboxes kubelet[12160]: E0515 06:39:20.198007 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://192.168.190.159:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:20 osboxes kubelet[12160]: E0515 06:39:20.199996 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://192.168.190.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:21 osboxes kubelet[12160]: E0515 06:39:21.186122 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.190.159:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:21 osboxes kubelet[12160]: E0515 06:39:21.203974 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://192.168.190.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dosboxes&limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
May 15 06:39:21 osboxes kubelet[12160]: E0515 06:39:21.207920 12160 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://192.168.190.159:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.190.159:6443: getsockopt: connection refused
Logs from Kubelet:
worker2@osboxes:~$ sudo kubelet
I0515 11:22:38.938557 33604 feature_gate.go:220] feature gates: &{{} map[]}
I0515 11:22:38.938712 33604 controller.go:114] kubelet config controller: starting controller
I0515 11:22:38.938757 33604 controller.go:118] kubelet config controller: validating combination of defaults and flags
W0515 11:22:38.949230 33604 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
I0515 11:22:38.987488 33604 server.go:182] Version: v1.9.1
I0515 11:22:38.987524 33604 feature_gate.go:220] feature gates: &{{} map[]}
I0515 11:22:38.987620 33604 plugins.go:101] No cloud provider specified.
W0515 11:22:38.987656 33604 server.go:328] standalone mode, no API client
W0515 11:22:39.028891 33604 server.go:236] No api server defined - no events will be sent to API server.
I0515 11:22:39.028997 33604 server.go:428] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
I0515 11:22:39.029367 33604 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
I0515 11:22:39.029459 33604 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s}
I0515 11:22:39.029627 33604 container_manager_linux.go:266] Creating device plugin manager: false
W0515 11:22:39.032405 33604 kubelet_network.go:139] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I0515 11:22:39.032456 33604 kubelet.go:571] Hairpin mode set to "hairpin-veth"
I0515 11:22:39.034512 33604 client.go:80] Connecting to docker on unix:///var/run/docker.sock
I0515 11:22:39.034568 33604 client.go:109] Start docker client with request timeout=2m0s
W0515 11:22:39.042224 33604 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
I0515 11:22:39.052530 33604 docker_service.go:232] Docker cri networking managed by kubernetes.io/no-op
I0515 11:22:39.075356 33604 docker_service.go:237] Docker Info: &{ID:N4M2:L4UZ:CZTV:LQHL:KAFZ:EB2Z:ZCF2:ED6G:KRR4:AI6X:KFQH:BTAH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:aufs DriverStatus:[[Root Dir /var/lib/docker/aufs] [Backing Filesystem extfs] [Dirs 25] [Dirperm1 Supported true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[]} MemoryLimit:true SwapLimit:false KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:34 SystemTime:2018-05-15T11:22:39.055044415-04:00 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.13.0-36-generic OperatingSystem:Ubuntu 16.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc4206ae690 NCPU:1 MemTotal:2066481152 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:osboxes Labels:[] ExperimentalBuild:false ServerVersion:1.13.1 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:0xc420690500} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1} RuncCommit:{ID:N/A Expected:9df8b306d01f59d3a8029be411de015b7304dd8f} InitCommit:{ID:N/A Expected:949e6facb77383876aeff8a6944dde66b3089574} SecurityOptions:[name=apparmor name=seccomp,profile=default]}
I0515 11:22:39.075476 33604 docker_service.go:250] Setting cgroupDriver to cgroupfs
I0515 11:22:39.103735 33604 remote_runtime.go:43] Connecting to runtime service unix:///var/run/dockershim.sock
I0515 11:22:39.105235 33604 kuberuntime_manager.go:186] Container runtime docker initialized, version: 1.13.1, apiVersion: 1.26.0
I0515 11:22:39.114094 33604 server.go:755] Started kubelet
E0515 11:22:39.114169 33604 server.go:511] Starting health server failed: listen tcp 127.0.0.1:10248: bind: address already in use
E0515 11:22:39.114305 33604 kubelet.go:1275] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
W0515 11:22:39.114329 33604 kubelet.go:1359] No api server defined - no node status update will be sent.
I0515 11:22:39.114672 33604 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
I0515 11:22:39.115490 33604 server.go:129] Starting to listen on 0.0.0.0:10250
I0515 11:22:39.117147 33604 server.go:299] Adding debug handlers to kubelet server.
F0515 11:22:39.119054 33604 server.go:141] listen tcp 0.0.0.0:10250: bind: address already in use
error reading /var/lib/kubelet/pki/kubelet.key, certificate and key must be supplied as a pair
Usually this is a permission issue. Check the permissions of the certificate file; it should be readable by the kubelet user.
If that does not help, please share the kubelet logs, namely the logs of the daemon; do not start it manually in the console.
Based on the update of the question:
192.168.190.159:6443: getsockopt: connection refused
This means that the kubelet on the node cannot connect to the master. Check the network connection between your node and the master: the node should be able to reach https://192.168.190.159:6443, which is your API server endpoint.
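A quick sanity check from the worker, using plain tools rather than anything Kubernetes-specific:
# can the worker open a TCP connection to the API server at all?
nc -vz 192.168.190.159 6443
# if that succeeds, the health endpoint should answer; -k skips verifying the self-signed cert
curl -k https://192.168.190.159:6443/healthz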
OK, the problem was that both nodes (Master and Worker) had the same hostname :).
I noticed it after I ran kubectl describe node: in the Addresses field I saw the IP address of the worker with the same hostname as the master:
Addresses:
InternalIP: 192.168.190.162
Hostname: worker2node
I ran sudo kubeadm reset on both Master and Worker.
On Master:
sudo swapoff -a
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.190.159
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f kube-flannel.yml
On Worker:
Changed the hostname:
hostnamectl set-hostname worker2node
sudo vi /etc/hosts # (edit this file with the new name for 127.0.1.1)
Then I restarted the worker and joined it again.
I checked, and now it was added.
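Since node names default to the hostname, a short sketch for catching this up front next time (the --node-name override exists on recent kubeadm versions; the token and hash below are placeholders):
# hostnames double as node names, so they must be unique across the cluster
hostnamectl --static          # run on every node and compare
kubectl get nodes -o wide     # on the master: the INTERNAL-IP column exposes collisions
# or override the node name explicitly when joining
sudo kubeadm join --node-name worker2node --token <token> 192.168.190.159:6443 \
  --discovery-token-ca-cert-hash sha256:<hash>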

docker private registry getsockopt: connection refused

I am using a self-signed certificate with an nginx configuration. When I run the command below, I get the following error:
root@ip-172-31-12-38:/etc/nginx/sites-available# docker push docker-reg.sogeti-aws.nl:5000/busybox
The push refers to a repository [docker-reg.sogeti-aws.nl:5000/busybox]
Put http://docker-reg.sogeti-aws.nl:5000/v1/repositories/busybox/: dial tcp 13.126.242.122:5000: getsockopt: connection refused
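Two hedged checks: connection refused means nothing answered on 13.126.242.122:5000 at all, and the push is going over http://, which suggests the daemon does not yet treat this host as a trusted TLS registry:
# on the registry host: is nginx (or the registry) actually listening on 5000?
sudo ss -tlnp | grep 5000
# from the client: does a TLS request get through the nginx proxy?
curl -v https://docker-reg.sogeti-aws.nl:5000/v2/
# for a self-signed certificate, the Docker daemon looks for the CA at
#   /etc/docker/certs.d/docker-reg.sogeti-aws.nl:5000/ca.crt
# on every machine that pushes or pulls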

Connection error when deploying chaincode

I just started using chaincode.
I am following this guide step by step:
http://hyperledger-fabric.readthedocs.io/en/latest/Setup/Chaincode-setup/#running-the-chaincode
I am using Docker toolbox on Windows.
But when I start to run chaincode_example02, I get the following errors:
2016/09/15 14:05:53 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 0.0.0.0:7051: connectex: The requested address is not valid in its context."; Reconnecting to {"0.0.0.0:7051" <nil>}
2016/09/15 14:05:54 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 0.0.0.0:7051: connectex: The requested address is not valid in its context."; Reconnecting to {"0.0.0.0:7051" <nil>}
2016/09/15 14:05:55 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 0.0.0.0:7051: connectex: The requested address is not valid in its context."; Reconnecting to {"0.0.0.0:7051" <nil>}
Why?
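A likely explanation: with Docker Toolbox the Docker daemon runs inside a VirtualBox VM, so 0.0.0.0:7051 is not a peer address that is valid from the Windows side, which is what connectex is complaining about. A sketch of pointing the chaincode at the VM instead, run from the Docker Quickstart Terminal (the machine name "default" is an assumption):
docker-machine ip default                      # usually prints something like 192.168.99.100
export CORE_PEER_ADDRESS=192.168.99.100:7051   # substitute the IP printed above
# then rerun chaincode_example02 against that address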
