Connection timed out with uwsgi / unable to connect to node - uwsgi

We're having some issues with uwsgi behind a load balancer; the logs are full of messages like the ones below.
I'm new to nginx/uwsgi, so I have no idea where to look.
[uwsgi-http] unable to connect() to node "127.0.0.1:55792" (0 retries): Connection timed out
[uwsgi-http] unable to connect() to node "127.0.0.1:55792" (0 retries): Connection timed out
[uwsgi-http] unable to connect() to node "127.0.0.1:55792" (1 retries): Connection timed out
[uwsgi-http] unable to connect() to node "127.0.0.1:55792" (0 retries): Connection timed out
[uwsgi-http] unable to connect() to node "127.0.0.1:55792" (0 retries): Connection timed out
[uwsgi-http] unable to connect() to node "127.0.0.1:55792" (0 retries): Connection timed out
[uwsgi-http] unable to connect() to node "127.0.0.1:55792" (0 retries): Connection timed out
[uwsgi-http] unable to connect() to node "127.0.0.1:55792" (0 retries): Connection timed out
[uwsgi-http key: app.example.com client_addr: 10.133.223.217 client_port: 54217] hr_instance_read(): Connection reset by peer [plugins/http/http.c line 646]
We also see messages in this format:
[uwsgi-http] unable to connect() to node "127.0.0.1:55792" (0 retries): Connection timed out
Wed Oct 5 07:24:42 2016 - *** uWSGI listen queue of socket "127.0.0.1:55792" (fd: 3) full !!! (257/256) ***
Wed Oct 5 07:24:44 2016 - *** uWSGI listen queue of socket "127.0.0.1:55792" (fd: 3) full !!! (257/256) ***
And also:
connect_to_tcp()/socket(): Too many open files [core/socket.c line 462]
[uwsgi-http] unable to connect() to node "127.0.0.1:55792" (0 retries): Connection timed out
[uwsgi-http] unable to connect() to node "127.0.0.1:55792" (0 retries): Connection timed out
connect_to_tcp()/socket(): Too many open files [core/socket.c line 462]
connect_to_tcp()/socket(): Too many open files [core/socket.c line 462]
There are 4 servers, each with 1 GB RAM and 1 core (hosted on DigitalOcean), seeing these events.
In case it helps:
cat /proc/sys/fs/file-max -> 1048576
ulimit -n 32768
net.core.somaxconn = 65000
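For anyone else staring at the same logs: the listen queue full (257/256) and Too many open files lines suggest the uwsgi workers cannot drain connections as fast as the http router accepts them, so the router's connect() eventually times out. Below is a minimal sketch of knobs worth trying, assuming a standalone uwsgi with its built-in http router; app.py and all numbers are illustrative, not tuned for this workload.

# raise the per-process fd limit in the shell that launches uwsgi
ulimit -n 65535
# raise uwsgi's listen backlog (default 100; the kernel caps it at
# net.core.somaxconn, already 65000 here) and uwsgi's own fd limit
uwsgi --http 0.0.0.0:8080 --wsgi-file app.py \
  --processes 4 --threads 2 \
  --listen 1024 --max-fd 65535

If the queue still fills, a bigger backlog only buys time; the durable fix is usually more or faster workers (or more droplets).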

Related

Why can't I access the Eclipse download site from a Docker container (connection refused)?

I am trying to install Eclipse inside a Docker container (Ubuntu 22.04, JDK 17) using the Oomph installer. It seems some of the Eclipse download sites can't be accessed.
Errors like:
!MESSAGE Connection to https://download.eclipse.org/releases/2023-03/202301131000/p2.index failed on Connect to https://download.eclipse.org:443 [download.eclipse.org/198.41.30.199] failed: Connection refused
!MESSAGE Connection to https://download.eclipse.org/oomph/updates/milestone/latest/compositeContent.jar failed on Connect to https://download.eclipse.org:443 [download.eclipse.org/198.41.30.199] failed: Connection refused. Retry attempt 0 started
!MESSAGE Connection to https://download.eclipse.org/oomph/updates/milestone/latest/compositeArtifacts.jar failed on Connect to https://download.eclipse.org:443 [download.eclipse.org/198.41.30.199] failed: Connection refused. Retry attempt 0 started
!MESSAGE Connection to https://download.eclipse.org/technology/epp/packages/2023-03/202301121200/features/org.eclipse.epp.package.common.feature_4.27.0.20230112-0751.jar failed on Connect to https://download.eclipse.org:443 [download.eclipse.org/198.41.30.199] failed: Connection refused. Retry attempt 0 started
!MESSAGE Connection to https://download.eclipse.org/technology/epp/packages/2023-03/202301121200/features/org.eclipse.epp.package.java.feature_4.27.0.20230112-0751.jar failed on Connect to https://download.eclipse.org:443 [download.eclipse.org/198.41.30.199] failed: Connection refused. Retry attempt 0 started
!MESSAGE Failure reporting download statistics to URL: https://download.eclipse.org/stats/technology/epp/packages/2023-03-M1/org.eclipse.epp.package.common/4.27.0.20230112-0751
!MESSAGE Unable to connect to repository https://download.eclipse.org/stats/technology/epp/packages/2023-03-M1/org.eclipse.epp.package.common/4.27.0.20230112-0751
!MESSAGE Connection to https://eclipse.mirror.rafal.ca/releases/2023-03/202301131000/plugins/org.eclipse.mylyn.wikitext.asciidoc.ui_3.0.42.202201072301.jar failed on Connect to https://eclipse.mirror.rafal.ca:443 [eclipse.mirror.rafal.ca/207.210.46.249] failed: Connection refused. Retry attempt 0 started
I can access these URLs on the host machine, but not from inside the Docker container. Furthermore, I can access and download many other files from inside the container, such as the JDK from Oracle, the Oomph installer file, and the Eclipse tarball.
Why does this error occur?
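A few checks from inside the container usually narrow this down. This is a diagnostic sketch, not a fix: it assumes curl and getent are available in the image, and <container> stands for your container's name.

docker exec -it <container> bash
# does the container resolve the name, and to the same address the host sees?
getent hosts download.eclipse.org
# is the TCP connect itself refused, or does it fail later (TLS, proxy)?
curl -v https://download.eclipse.org/releases/
# is a proxy configured in the container that the host does not use?
env | grep -i proxy

Connection refused to an address the host can reach often means the container resolves the name to a different or stale IP, or a firewall/proxy rule applies only to the Docker bridge network.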

Keycloak http-management returns Connection refused

I have Keycloak 14 running and would like to scrape metrics data from it.
So I configured Prometheus to scrape http://KEYCLOAK_HOST:9990/metrics. Unfortunately this gives me a "Connection refused".
When I try to connect manually from another host, I get the same error:
user@host:/$ curl -vvv 10.244.3.154:9990/metrics
* Expire in 0 ms for 6 (transfer 0x5566ecabbfb0)
* Trying 10.244.3.154...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x5566ecabbfb0)
* connect to 10.244.3.154 port 9990 failed: Connection refused
* Failed to connect to 10.244.3.154 port 9990: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 10.244.3.154 port 9990: Connection refused
user@host:/$
In fact, I get this error on every path on the management port, even ones that do not exist:
user@host:/$ curl -vvv 10.244.3.154:9990/some_endpoint
* Expire in 0 ms for 6 (transfer 0x55eea4059fb0)
* Trying 10.244.3.154...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x55eea4059fb0)
* connect to 10.244.3.154 port 9990 failed: Connection refused
* Failed to connect to 10.244.3.154 port 9990: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 10.244.3.154 port 9990: Connection refused
user@host:/$
From within the Keycloak host it works fine:
bash-4.4$ curl -vvv localhost:9990/metrics
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 9990 (#0)
> GET /metrics HTTP/1.1
> Host: localhost:9990
> User-Agent: curl/7.61.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Connection: keep-alive
...
bash-4.4$
It only works on localhost though:
bash-4.4$ hostname -I
10.244.3.154
bash-4.4$ curl -vvv 10.244.3.154:9990/metrics
* Trying 10.244.3.154...
* TCP_NODELAY set
* connect to 10.244.3.154 port 9990 failed: Connection refused
* Failed to connect to 10.244.3.154 port 9990: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 10.244.3.154 port 9990: Connection refused
bash-4.4$
So I assume Keycloak is not exposing the management endpoints to the outside world. But how can I enable that?
I'm using the codecentric Helm chart for deployment (https://github.com/codecentric/helm-charts/tree/master/charts/keycloak). I'm running Keycloak 14.0.0 right now, but had the same issue with 15.x (cannot update right now due to a bug).
Thanks in advance!
I found in the Helm chart's documentation that I had to enable it by setting the environment variable KEYCLOAK_STATISTICS to all:
https://github.com/codecentric/helm-charts/tree/master/charts/keycloak#prometheus-metrics-support
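For reference, the change amounts to something like this. A sketch only: the release name keycloak is hypothetical, and extraEnv is the chart's pass-through for extra container environment variables.

# write the extra env var into a values file and upgrade the release
cat > metrics-values.yaml <<'EOF'
extraEnv: |
  - name: KEYCLOAK_STATISTICS
    value: all
EOF
helm upgrade keycloak codecentric/keycloak -f metrics-values.yaml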

kubeadm init failing on ARM64

I'm trying to set up a single-master cluster on some sopine64s (quad A53 with 2 GB RAM) running Armbian 5.38 (Ubuntu 16.04 based). The kernel is 3.10.107-pine64.
Steps taken so far (a shell sketch of these follows the list):
usual IP address, hostname, timezone, DNS, etc. config
apt upgrade
disable swap
set net.bridge.bridge-nf-call-iptables to 1 in sysctl.conf (intending to use weavenet)
install docker 1.13.1 (docker.io package)
install kubeadm, kubelet, kubectl v1.11
systemctl enable and start kubelet and docker
reboot
kubeadm config images pull (all download ok)
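For completeness, the preparation steps above amount to roughly the following. A sketch: the version pins follow the Kubernetes apt repository's naming and are illustrative.

swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab    # keep swap off across reboots
echo 'net.bridge.bridge-nf-call-iptables = 1' >> /etc/sysctl.conf
sysctl -p
apt-get install -y docker.io
apt-get install -y kubeadm=1.11.0-00 kubelet=1.11.0-00 kubectl=1.11.0-00
systemctl enable --now docker kubelet
kubeadm config images pull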
Here's the output of kubeadm init:
I0712 18:58:42.149510 31708 feature_gate.go:230] feature gates: &{map[]}
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0712 18:58:42.301648 31708 kernel_validator.go:81] Validating kernel version
I0712 18:58:42.302621 31708 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [sopine0 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.16]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [sopine0 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [sopine0 localhost] and IPs [192.168.0.16 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- No internet connection is available so the kubelet cannot pull or find the following control plane images:
- k8s.gcr.io/kube-apiserver-arm64:v1.11.0
- k8s.gcr.io/kube-controller-manager-arm64:v1.11.0
- k8s.gcr.io/kube-scheduler-arm64:v1.11.0
- k8s.gcr.io/etcd-arm64:3.2.18
- You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
are downloaded locally and cached.
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster
If I look at the containers, the one for kube-apiserver is exiting and being recreated every few minutes. Here's its log file:
Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I0712 07:06:39.855921 1 server.go:703] external host was not specified, using 192.168.0.16
I0712 07:06:39.856998 1 server.go:145] Version: v1.11.0
I0712 07:07:05.966337 1 plugins.go:158] Loaded 7 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0712 07:07:05.966598 1 plugins.go:161] Loaded 5 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0712 07:07:05.975261 1 plugins.go:158] Loaded 7 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0712 07:07:05.975630 1 plugins.go:161] Loaded 5 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0712 07:07:06.459185 1 master.go:234] Using reconciler: lease
W0712 07:07:30.376324 1 genericapiserver.go:319] Skipping API batch/v2alpha1 because it has no resources.
W0712 07:07:33.264038 1 genericapiserver.go:319] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0712 07:07:33.325028 1 genericapiserver.go:319] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0712 07:07:33.508270 1 genericapiserver.go:319] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0712 07:07:38.454808 1 genericapiserver.go:319] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
[restful] 2018/07/12 07:07:38 log.go:33: [restful/swagger] listing is available at https://192.168.0.16:6443/swaggerapi
[restful] 2018/07/12 07:07:38 log.go:33: [restful/swagger] https://192.168.0.16:6443/swaggerui/ is mapped to folder /swagger-ui/
[restful] 2018/07/12 07:07:48 log.go:33: [restful/swagger] listing is available at https://192.168.0.16:6443/swaggerapi
[restful] 2018/07/12 07:07:48 log.go:33: [restful/swagger] https://192.168.0.16:6443/swaggerui/ is mapped to folder /swagger-ui/
I0712 07:07:48.845592 1 plugins.go:158] Loaded 7 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0712 07:07:48.845818 1 plugins.go:161] Loaded 5 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0712 07:08:11.577474 1 serve.go:96] Serving securely on [::]:6443
I0712 07:08:11.578033 1 available_controller.go:278] Starting AvailableConditionController
I0712 07:08:11.578198 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0712 07:08:11.578033 1 apiservice_controller.go:90] Starting APIServiceRegistrationController
I0712 07:08:11.581700 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0712 07:08:11.581449 1 crd_finalizer.go:242] Starting CRDFinalizer
I0712 07:08:11.581617 1 autoregister_controller.go:136] Starting autoregister controller
I0712 07:08:11.582060 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0712 07:08:11.583450 1 controller.go:84] Starting OpenAPI AggregationController
I0712 07:08:11.584707 1 customresource_discovery_controller.go:199] Starting DiscoveryController
I0712 07:08:11.585112 1 naming_controller.go:284] Starting NamingConditionController
I0712 07:08:11.585243 1 establishing_controller.go:73] Starting EstablishingController
I0712 07:08:11.585336 1 crdregistration_controller.go:112] Starting crd-autoregister controller
I0712 07:08:11.585379 1 controller_utils.go:1025] Waiting for caches to sync for crd-autoregister controller
I0712 07:08:13.059515 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41525: EOF
<above message repeats 9 more times on different ports in the 415xx range>
I0712 07:08:15.961160 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41566: EOF
I0712 07:08:16.582527 1 controller_utils.go:1032] Caches are synced for crd-autoregister controller
I0712 07:08:16.700615 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41564: EOF
<above message repeats 60 more times on different ports in the 41[5-7]xx range>
I0712 07:08:17.535106 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41720: EOF
I0712 07:08:17.560585 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0712 07:08:17.563061 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41723: EOF
I0712 07:08:17.577852 1 cache.go:39] Caches are synced for autoregister controller
I0712 07:08:17.596321 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41696: EOF
<above message repeats 6 more times on different ports in the 41[5-7]xx range>
I0712 07:08:17.686658 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41706: EOF
I0712 07:08:17.688440 1 trace.go:76] Trace[288588746]: "List /api/v1/services" (started: 2018-07-12 07:08:17.127883224 +0000 UTC m=+97.754900744) (total time: 560.373467ms):
Trace[288588746]: [560.004232ms] [559.9889ms] Listing from storage done
I0712 07:08:17.696643 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41726: EOF
<above message repeats 11 more times on different ports in the 41[5-7]xx range>
I0712 07:08:17.811279 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41716: EOF
I0712 07:08:17.831546 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0712 07:08:17.850811 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41717: EOF
<above message repeats 11 more times on different ports in the 41[5-7]xx range>
I0712 07:08:18.303267 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41752: EOF
I0712 07:08:18.359750 1 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I0712 07:08:18.386442 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41763: EOF
I0712 07:08:18.399648 1 logs.go:49] http: TLS handshake error from 192.168.0.16:41759: EOF
I0712 07:08:18.431038 1 trace.go:76] Trace[413119584]: "GuaranteedUpdate etcd3: *core.Pod" (started: 2018-07-12 07:08:17.845710035 +0000 UTC m=+98.472727763) (total time: 585.187661ms):
Trace[413119584]: [499.634456ms] [499.240097ms] Transaction prepared
I0712 07:08:18.432293 1 trace.go:76] Trace[838520449]: "Patch /api/v1/namespaces/kube-system/pods/kube-apiserver-sopine0/status" (started: 2018-07-12 07:08:17.845257845 +0000 UTC m=+98.472275323) (total time: 586.889091ms):
Trace[838520449]: [272.406761ms] [271.550004ms] About to check admission control
Trace[838520449]: [586.455609ms] [314.048848ms] Object stored in database
I0712 07:08:18.590379 1 controller.go:158] Shutting down kubernetes service endpoint reconciler
I0712 07:08:18.591681 1 available_controller.go:290] Shutting down AvailableConditionController
I0712 07:08:18.592066 1 autoregister_controller.go:160] Shutting down autoregister controller
I0712 07:08:18.592253 1 apiservice_controller.go:102] Shutting down APIServiceRegistrationController
I0712 07:08:18.593252 1 crd_finalizer.go:254] Shutting down CRDFinalizer
I0712 07:08:18.593636 1 crdregistration_controller.go:143] Shutting down crd-autoregister controller
I0712 07:08:18.593831 1 establishing_controller.go:84] Shutting down EstablishingController
I0712 07:08:18.593962 1 naming_controller.go:295] Shutting down NamingConditionController
I0712 07:08:18.596110 1 customresource_discovery_controller.go:210] Shutting down DiscoveryController
I0712 07:08:18.596965 1 serve.go:136] Stopped listening on [::]:6443
I0712 07:08:18.597046 1 controller.go:90] Shutting down OpenAPI AggregationController
E0712 07:08:18.605877 1 memcache.go:147] couldn't get resource list for authorization.k8s.io/v1beta1: Get https://127.0.0.1:6443/apis/authorization.k8s.io/v1beta1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.608345 1 memcache.go:147] couldn't get resource list for autoscaling/v1: Get https://127.0.0.1:6443/apis/autoscaling/v1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.610552 1 memcache.go:147] couldn't get resource list for autoscaling/v2beta1: Get https://127.0.0.1:6443/apis/autoscaling/v2beta1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.613608 1 memcache.go:147] couldn't get resource list for batch/v1: Get https://127.0.0.1:6443/apis/batch/v1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.616508 1 memcache.go:147] couldn't get resource list for batch/v1beta1: Get https://127.0.0.1:6443/apis/batch/v1beta1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.619558 1 memcache.go:147] couldn't get resource list for certificates.k8s.io/v1beta1: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.620335 1 storage_rbac.go:193] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:discovery: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.623207 1 storage_rbac.go:193] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:basic-user: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.630429 1 available_controller.go:311] v1beta1.extensions failed with: Put https://127.0.0.1:6443/apis/apiregistration.k8s.io/v1/apiservices/v1beta1.extensions/status: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.632957 1 available_controller.go:311] v1beta1.batch failed with: Put https://127.0.0.1:6443/apis/apiregistration.k8s.io/v1/apiservices/v1beta1.batch/status: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.634480 1 available_controller.go:311] v1.authorization.k8s.io failed with: Put https://127.0.0.1:6443/apis/apiregistration.k8s.io/v1/apiservices/v1.authorization.k8s.io/status: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.636395 1 memcache.go:147] couldn't get resource list for networking.k8s.io/v1: Get https://127.0.0.1:6443/apis/networking.k8s.io/v1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.637222 1 available_controller.go:311] v1beta1.authentication.k8s.io failed with: Put https://127.0.0.1:6443/apis/apiregistration.k8s.io/v1/apiservices/v1beta1.authentication.k8s.io/status: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.637426 1 available_controller.go:311] v1.authentication.k8s.io failed with: Put https://127.0.0.1:6443/apis/apiregistration.k8s.io/v1/apiservices/v1.authentication.k8s.io/status: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.637987 1 storage_rbac.go:193] unable to reconcile clusterrole.rbac.authorization.k8s.io/admin: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/admin: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.638575 1 memcache.go:147] couldn't get resource list for policy/v1beta1: Get https://127.0.0.1:6443/apis/policy/v1beta1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.648978 1 repair.go:73] unable to refresh the port block: Get https://127.0.0.1:6443/api/v1/services: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.649411 1 controller.go:192] unable to sync kubernetes service: Post https://127.0.0.1:6443/api/v1/namespaces: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.649511 1 controller.go:179] unable to create required kubernetes system namespace kube-system: Post https://127.0.0.1:6443/api/v1/namespaces: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.652296 1 storage_rbac.go:193] unable to reconcile clusterrole.rbac.authorization.k8s.io/edit: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/edit: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.653581 1 memcache.go:147] couldn't get resource list for rbac.authorization.k8s.io/v1: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.656058 1 repair.go:88] unable to refresh the service IP block: Get https://127.0.0.1:6443/api/v1/services: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.657312 1 controller.go:179] unable to create required kubernetes system namespace kube-public: Post https://127.0.0.1:6443/api/v1/namespaces: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.657317 1 storage_rbac.go:193] unable to reconcile clusterrole.rbac.authorization.k8s.io/view: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/view: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.659613 1 memcache.go:147] couldn't get resource list for rbac.authorization.k8s.io/v1beta1: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.663703 1 storage_rbac.go:193] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.665261 1 memcache.go:147] couldn't get resource list for storage.k8s.io/v1: Get https://127.0.0.1:6443/apis/storage.k8s.io/v1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.666096 1 storage_rbac.go:193] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.667801 1 memcache.go:147] couldn't get resource list for storage.k8s.io/v1beta1: Get https://127.0.0.1:6443/apis/storage.k8s.io/v1beta1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.669445 1 storage_rbac.go:193] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.670988 1 memcache.go:147] couldn't get resource list for admissionregistration.k8s.io/v1beta1: Get https://127.0.0.1:6443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.672630 1 storage_rbac.go:193] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:heapster: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.673420 1 memcache.go:147] couldn't get resource list for apiextensions.k8s.io/v1beta1: Get https://127.0.0.1:6443/apis/apiextensions.k8s.io/v1beta1?timeout=32s: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.674753 1 storage_rbac.go:193] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:node: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: dial tcp 127.0.0.1:6443: connect: connection refused
E0712 07:08:18.675802 1 controller.go:160] no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
Apologies for what may seem like a dump of log files; any help is appreciated.
I did some more digging, and apparently this is a known issue with bare-metal kubeadm deployments on all versions above 1.9.6. I was able to run init successfully by downgrading the version.
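In case it helps anyone else, the downgrade looked roughly like this (a sketch; 1.9.6 is the newest version that worked here, and the exact package revisions may differ):

kubeadm reset
apt-get install -y --allow-downgrades \
  kubeadm=1.9.6-00 kubelet=1.9.6-00 kubectl=1.9.6-00
kubeadm init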

Connection error when deploying chaincode

I just started using chaincode.
I am following this guide step by step:
http://hyperledger-fabric.readthedocs.io/en/latest/Setup/Chaincode-setup/#running-the-chaincode
I am using Docker Toolbox on Windows.
But when I start to run chaincode_example02, I get the following errors:
2016/09/15 14:05:53 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 0.0.0.0:7051: connectex: The requested address is not valid in its context."; Reconnecting to {"0.0.0.0:7051" <nil>}
2016/09/15 14:05:54 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 0.0.0.0:7051: connectex: The requested address is not valid in its context."; Reconnecting to {"0.0.0.0:7051" <nil>}
2016/09/15 14:05:55 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 0.0.0.0:7051: connectex: The requested address is not valid in its context."; Reconnecting to {"0.0.0.0:7051" <nil>}
Why?
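If this is the usual Docker Toolbox pitfall: the peer runs inside a VirtualBox VM, so 0.0.0.0:7051 is only meaningful inside that VM and is not a valid target from the Windows side, which is exactly what connectex reports. A sketch of the common workaround, assuming the standard docker-machine VM name default and the tutorial's example chaincode name mycc:

# point the chaincode at the VM's address instead of 0.0.0.0
export CORE_PEER_ADDRESS=$(docker-machine ip default):7051
export CORE_CHAINCODE_ID_NAME=mycc
# then launch chaincode_example02 exactly as the tutorial does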

Docker getsockopt: connection refused

I am running a Docker server on my Ubuntu machine, and I get an error when one of the applications running on the server tries to open a socket. Any clue why this is happening?
I have run the same code in VirtualBox and I don't see the error there... Maybe a lack of memory?
Thanks in advance, regards!
croft_1 | 2016/04/15 13:55:03 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:05 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:07 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:09 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:11 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:13 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:15 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:17 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:19 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:21 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:23 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:25 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:27 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:29 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:31 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:33 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:35 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:37 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:39 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | 2016/04/15 13:55:41 Failed to connect: dial tcp 172.17.0.5:5672: getsockopt: connection refused
croft_1 | panic: runtime error: invalid memory address or nil pointer dereference
croft_1 | [signal 0xb code=0x1 addr=0x20 pc=0x401570]
Here is the output of docker stats:
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
3b4a7e9023ad 0.00% 7.741 MB / 1.039 GB 0.74% 9.314 kB / 5.679 kB 5.046 MB / 0 B 0
6614c0b5c616 0.02% 4.379 MB / 1.039 GB 0.42% 10.02 kB / 648 B 5.009 MB / 0 B 0
658988a96412 3.09% 77.4 MB / 1.039 GB 7.45% 9.83 kB / 648 B 46.69 MB / 17.5 MB 0
9e1c772d7635 0.83% 115.7 MB / 1.039 GB 11.13% 15.12 kB / 5.484 kB 37.06 MB / 147.5 kB 0
0cc07e241f7a 0.22% 102.6 MB / 1.039 GB 9.87% 12.16 kB / 2.652 kB 37.25 MB / 471 kB 0
^Clorabackbone@lorabackbone-desktop:~/server-devenv$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3b4a7e9023ad serverdevenv_croft "./croft" 3 hours ago Up 2 minutes 1700/tcp, 0.0.0.0:1700->1700/udp serverdevenv_croft_1
6614c0b5c616 ansi/mosquitto "/usr/local/sbin/mosq" 3 hours ago Up 3 minutes 0.0.0.0:1883->1883/tcp serverdevenv_mosquitto_1
658988a96412 serverdevenv_mongodb "/entrypoint.sh mongo" 3 hours ago Up 3 minutes 0.0.0.0:27017->27017/tcp serverdevenv_mongodb_1
9e1c772d7635 rabbitmq:3-management "/docker-entrypoint.s" 3 hours ago Up 3 minutes 4369/tcp, 5671/tcp, 0.0.0.0:5672->5672/tcp, 15671/tcp, 25672/tcp, 0.0.0.0:15672->15672/tcp serverdevenv_rabbitmq_1
0cc07e241f7a cpswan/node-red "/usr/local/bin/node-" 3 hours ago Up 3 minutes 0.0.0.0:1880->1880/tcp
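A hunch worth testing: roughly 40 seconds of dial tcp 172.17.0.5:5672 ... connection refused followed by a nil-pointer panic looks like croft starting before RabbitMQ (the rabbitmq:3-management container above) is ready to accept connections, then crashing on the unchecked nil connection. A sketch of how to confirm the ordering, with the address and binary name taken from the output above (nc must be available):

# block until rabbitmq actually accepts TCP connections, then start the client
until nc -z 172.17.0.5 5672; do
  echo 'waiting for rabbitmq...'
  sleep 2
done
./croft

If croft connects cleanly once RabbitMQ is genuinely up, the fix is retry logic (and a nil check) in the application rather than more memory.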
