Connection refused on pushing a docker image - docker

I'm trying to set up a local registry by following https://docs.docker.com/registry/deploying/:
docker run -d -p 5000:5000 --restart=always --name reg ubuntu:16.04
When I try to run the following command:
$ docker push localhost:5000/my-ubuntu
I get this error:
Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: connect: connection refused
Any ideas?

Connection refused usually means that the service you are trying to connect to isn't actually up and running. There could be other reasons, as outlined in this question, but essentially, in your case it means that the registry is not up: the ubuntu:16.04 container you started does not run a registry.
Wait for the registry container to be created properly before you do anything else: docker run -d -p 5000:5000 --restart=always --name registry registry:2 creates a local registry from the official Docker image.
Make sure that the registry container is up by running docker ps | grep registry, and then proceed further.
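Once it's up, you can also sanity-check the registry API before pushing (a minimal check, assuming the default 5000:5000 port mapping):
$ curl http://localhost:5000/v2/_catalog
{"repositories":[]}
An empty repository list means the registry is reachable, and the push should now go through.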

More notes on Kubernetes (K8s) / Minikube and Docker images, registries, and containers.
If you are using Minikube and want to pull an image from 127.0.0.1:5000, you may hit the error below:
Failed to pull image "127.0.0.1:5000/nginx_operator:latest": rpc error: code = Unknown desc = Error response from daemon: Get http://127.0.0.1:5000/v2/: dial tcp 127.0.0.1:5000: connect: connection refused
Full logs:
$ kubectl describe pod/your_pod
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m29s default-scheduler Successfully assigned tj-blue-whale-05-system/tj-blue-whale-05-controller-manager-6c8f564575-kwxdv to minikube
Normal Pulled 2m25s kubelet Container image "gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0" already present on machine
Normal Created 2m24s kubelet Created container kube-rbac-proxy
Normal Started 2m23s kubelet Started container kube-rbac-proxy
Normal BackOff 62s (x5 over 2m22s) kubelet Back-off pulling image "127.0.0.1:5000/nginx_operator:latest"
Warning Failed 62s (x5 over 2m22s) kubelet Error: ImagePullBackOff
Normal Pulling 48s (x4 over 2m23s) kubelet Pulling image "127.0.0.1:5000/nginx_operator:latest"
Warning Failed 48s (x4 over 2m23s) kubelet Failed to pull image "127.0.0.1:5000/nginx_operator:latest": rpc error: code = Unknown desc = Error response from daemon: Get http://127.0.0.1:5000/v2/: dial tcp 127.0.0.1:5000: connect: connection refused
Warning Failed 48s (x4 over 2m23s) kubelet Error: ErrImagePull
Possible root cause:
The registry must be set up inside Minikube, not on the host side.
i.e.
host: registry (127.0.0.1:5000)
minikube: no registry (so K8s cannot find your image)
How to check?
Step1: check your Minikube container
$ docker ps -a
CONTAINER ID IMAGE ... STATUS PORTS NAMES
8c6f49491dd6 gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4 ... Up 15 hours 127.0.0.1:49156->22/tcp, 127.0.0.1:49155->2376/tcp, 127.0.0.1:49154->5000/tcp, 127.0.0.1:49153->8443/tcp minikube
# your Minikube is running
# host:49154 <--> minikube:5000
# where:
# - port 49154 was allocated randomly by the docker service
# - port 22: for ssh
# - port 2376: for docker service
# - port 5000: for registry (image repository)
# - port 8443: for Kubernetes
Step2: login to your Minikube
$ minikube ssh
docker@minikube:~$ curl 127.0.0.1:5000
curl: (7) Failed to connect to 127.0.0.1 port 5000: Connection refused
# setup
# =====
# You did not setup the registry.
# Let's try to setup it.
docker@minikube:~$ docker run --restart=always -d -p 5000:5000 --name registry registry:2
# test
# ====
# test the registry using the following commands
docker@minikube:~$ curl 127.0.0.1:5000
docker@minikube:~$ curl 127.0.0.1:5000/v2
Moved Permanently.
docker@minikube:~$ curl 127.0.0.1:5000/v2/_catalog
{"repositories":[]}
# it's successful
docker@minikube:~$ exit
logout
Step3: build your image, and push it into the registry of your Minikube
# Let's take nginx as an example. (You can build your own image)
$ docker pull nginx
# modify the repository (the source and the name)
$ docker tag nginx 127.0.0.1:49154/nginx_operator
# check the new repository (source and the name)
$ docker images | grep nginx
REPOSITORY TAG IMAGE ID CREATED SIZE
127.0.0.1:49154/nginx_operator latest ae2feff98a0c 3 weeks ago 133MB
# push the image into the registry of your Minikube
$ docker push 127.0.0.1:49154/nginx_operator
Step4: login to your Minikube again
$ minikube ssh
# check the registry
$ curl 127.0.0.1:5000/v2/_catalog
{"repositories":["nginx_operator"]}
# it's successful
# get the image info
$ curl 127.0.0.1:5000/v2/nginx_operator/manifests/latest
docker@minikube:~$ exit
logout
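From here the image is pullable inside Minikube as 127.0.0.1:5000/nginx_operator:latest (the reference from the error at the top). As a quick smoke test, you could start a throwaway pod from it (the pod name here is arbitrary):
$ kubectl run nginx-operator --image=127.0.0.1:5000/nginx_operator:latest
pod/nginx-operator created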
Customize exposed ports of Minikube
If you would like to use port 5000 on the host side instead of 49154 (which was allocated randomly by the docker service), i.e.
host:5000 <--> minikube:5000
you need to recreate the Minikube instance with the --ports flag:
# delete the old minikube instance
$ minikube delete
# create a new one (with the docker driver)
$ minikube start --ports=5000:5000 --driver=docker
# or
$ minikube start --ports=127.0.0.1:5000:5000 --driver=docker
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5d1e5b61a3bf gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4 "/usr/local/bin/entr…" About a minute ago Up About a minute 0.0.0.0:5000->5000/tcp, 127.0.0.1:49162->22/tcp, 127.0.0.1:49161->2376/tcp, 127.0.0.1:49160->5000/tcp, 127.0.0.1:49159->8443/tcp minikube
$ docker port minikube
22/tcp -> 127.0.0.1:49162
2376/tcp -> 127.0.0.1:49161
5000/tcp -> 127.0.0.1:49160
5000/tcp -> 0.0.0.0:5000
8443/tcp -> 127.0.0.1:49159
You can see 0.0.0.0:5000->5000/tcp.
Re-test your registry in Minikube:
# in the host side
$ docker pull nginx
$ docker tag nginx 127.0.0.1:5000/nginx_operator
$ docker ps -a
$ docker push 127.0.0.1:5000/nginx_operator
$ minikube ssh
docker#minikube:~$ curl 127.0.0.1:5000/v2/_catalog
{"repositories":["nginx_operator"]}
# Great!

Related

SSL (curl) connection error in Elasticsearch setup

I have set up a 3-node Elasticsearch cluster using docker-compose, following the steps listed below.
On one of the master nodes, es11, I get the error below; however, the same curl command works fine on the other 2 nodes, i.e. es12 and es13:
Error:
curl -X GET 'https://localhost:9316'
curl: (35) Encountered end of file
Below error in logs:
"stacktrace": ["org.elasticsearch.transport.RemoteTransportException: [es13][SOMEIP:9316][internal:cluster/coordination/join]",
"Caused by: org.elasticsearch.transport.ConnectTransportException: [es11][SOMEIP:9316] handshake failed. unexpected remote node {es13}{SOMEVALUE}{SOMEVALUE
"at org.elasticsearch.transport.TransportService.lambda$connectionValidator$6(TransportService.java:468) ~[elasticsearch-7.17.6.jar:7.17.6]",
"at org.elasticsearch.action.ActionListener$MappedActionListener.onResponse(ActionListener.java:95) ~[elasticsearch-7.17.6.jar:7.17.6]",
"at org.elasticsearch.transport.TransportService.lambda$handshake$9(TransportService.java:577) ~[elasticsearch-7.17.6.jar:7.17.6]",
Opening https://localhost:9316 in a browser gives a "site can't be reached" error as well. It seems the SSL certificate created in step 4 below has some issue on es11.
Any leads please? Or, if I repeat step 4, do I need to copy the certs again to es12 & es13?
Below is elasticsearch.yml:
cluster.name: "docker-cluster"
network.host: 0.0.0.0
Ports as defined in all 3 nodes' docker-compose.yml:
environment:
  - node.name=es11
  - transport.port=9316
ports:
  - 9216:9200
  - 9316:9316
1. Initialize a docker swarm. On es11 run docker swarm init. Follow the instructions to join es12 and es13 to the swarm.
2. Create an overlay network: docker network create -d overlay --attachable elastic
3. If necessary, bring down the current cluster and remove all the associated volumes by running docker-compose down -v
4. Create SSL certificates for ES with docker-compose -f create-certs.yml run --rm create_certs
5. Copy the certs for es12 and es13 to the respective servers
6. Use busybox to create the overlay network on es12 and es13: sudo docker run -itd --name containerX --net [network name] busybox
7. Configure certs on es12 and es13 with docker-compose -f config-certs.yml run --rm config_certs
8. Start the cluster with docker-compose up -d on each server
9. Set the passwords for the built-in ES accounts by logging into the cluster (docker exec -it es11 sh), then running bin/elasticsearch-setup-passwords interactive --url localhost:9316
(As per your https://discuss.elastic.co thread:)
You cannot talk HTTP to the transport protocol port, which you have defined in transport.port. You need to talk to port 9200 in the container, which you have mapped to 9216 outside the container.
The transport port runs a binary protocol that is not HTTP-accessible.
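In other words, point curl at the mapped HTTP port instead. A sketch, assuming the 9216:9200 mapping from your compose file and a self-signed certificate (hence -k), with basic auth once passwords are set:
curl -k -u elastic 'https://localhost:9216'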

How to run minikube inside a docker container?

I intend to test a non-trivial Kubernetes setup as part of CI and wish to run the full system before CD. I cannot run --privileged containers and am running the docker container as a sibling to the host using docker run -v /var/run/docker.sock:/var/run/docker.sock
The basic docker setup seems to be working on the container:
linuxbrew@03091f71a10b:~$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
However, minikube fails to start inside the docker container, reporting connection issues:
linuxbrew@03091f71a10b:~$ minikube start --alsologtostderr -v=7
I1029 15:07:41.274378 2183 out.go:298] Setting OutFile to fd 1 ...
I1029 15:07:41.274538 2183 out.go:345] TERM=xterm,COLORTERM=, which probably does not support color
...
...
...
I1029 15:20:27.040213 197 main.go:130] libmachine: Using SSH client type: native
I1029 15:20:27.040541 197 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1e20] 0x7a4f00 <nil> [] 0s} 127.0.0.1 49350 <nil> <nil>}
I1029 15:20:27.040593 197 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I1029 15:20:27.040992 197 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:49350: connect: connection refused
This is despite the network being linked and the port being properly forwarded:
linuxbrew@51fbce78731e:~$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
93c35cec7e6f gcr.io/k8s-minikube/kicbase:v0.0.27 "/usr/local/bin/entr…" 2 minutes ago Up 2 minutes 127.0.0.1:49350->22/tcp, 127.0.0.1:49351->2376/tcp, 127.0.0.1:49348->5000/tcp, 127.0.0.1:49349->8443/tcp, 127.0.0.1:49347->32443/tcp minikube
51fbce78731e 7f7ba6fd30dd "/bin/bash" 8 minutes ago Up 8 minutes bpt-ci
linuxbrew@51fbce78731e:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
1e800987d562 bridge bridge local
aa6b2909aa87 host host local
d4db150f928b kind bridge local
a781cb9345f4 minikube bridge local
0a8c35a505fb none null local
linuxbrew@51fbce78731e:~$ docker network connect a781cb9345f4 93c35cec7e6f
Error response from daemon: endpoint with name minikube already exists in network minikube
The minikube container seems to be alive and well when trying to curl from the host, and even ssh is responding:
mastercook@linuxkitchen:~$ curl https://127.0.0.1:49350
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 127.0.0.1:49350
mastercook@linuxkitchen:~$ ssh root@127.0.0.1 -p 49350
The authenticity of host '[127.0.0.1]:49350 ([127.0.0.1]:49350)' can't be established.
ED25519 key fingerprint is SHA256:0E41lExrrezFK1QXULaGHgk9gMM7uCQpLbNPVQcR2Ec.
This key is not known by any other names
What am I missing and how can I make minikube properly discover the correctly working minikube container?
Because minikube does not complete the cluster creation, running Kubernetes in a (sibling) Docker container works better with kind.
Given that the (sibling) container does not know enough about its setup, the networking connections are a bit flawed. Specifically, a loopback IP is selected by kind (and minikube) upon cluster creation, even though the actual container sits on a different IP in the host docker.
To correct the networking, the (sibling) container needs to be connected to the network actually hosting the Kubernetes image. The procedure is illustrated below:
1.) Create a Kubernetes cluster:
linuxbrew@324ba0f819d7:~$ kind create cluster --name acluster
Creating cluster "acluster" ...
βœ“ Ensuring node image (kindest/node:v1.21.1) πŸ–Ό
βœ“ Preparing nodes πŸ“¦
βœ“ Writing configuration πŸ“œ
βœ“ Starting control-plane πŸ•ΉοΈ
βœ“ Installing CNI πŸ”Œ
βœ“ Installing StorageClass πŸ’Ύ
Set kubectl context to "kind-acluster"
You can now use your cluster with:
kubectl cluster-info --context kind-acluster
Thanks for using kind! 😊
2.) Verify that the cluster is accessible:
linuxbrew@324ba0f819d7:~$ kubectl cluster-info --context kind-acluster
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 127.0.0.1:36779 was refused - did you specify the right host or port?
3.) Since the cluster cannot be reached, retrieve the control plane's master IP. Note the "-control-plane" suffix added to the cluster name:
linuxbrew@324ba0f819d7:~$ export MASTER_IP=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' acluster-control-plane)
4.) Update the kube config with the actual master IP:
linuxbrew@324ba0f819d7:~$ sed -i "s/^    server:.*/    server: https:\/\/$MASTER_IP:6443/" $HOME/.kube/config
5.) This IP is still not accessible by the (sibling) container, so to connect the container to the correct network, retrieve the docker network ID:
linuxbrew@324ba0f819d7:~$ export MASTER_NET=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.NetworkID}}{{end}}' acluster-control-plane)
6.) Finally, connect the (sibling) container ID (which should be stored in the $HOSTNAME environment variable) to the cluster's docker network:
linuxbrew@324ba0f819d7:~$ docker network connect $MASTER_NET $HOSTNAME
7.) Verify whether the control plane is accessible after the changes:
linuxbrew@324ba0f819d7:~$ kubectl cluster-info --context kind-acluster
Kubernetes control plane is running at https://172.18.0.4:6443
CoreDNS is running at https://172.18.0.4:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
If kubectl returns Kubernetes control plane and CoreDNS URL, as shown in the last step above, the configuration has succeeded.
You can run minikube in a Docker-in-Docker (dind) container. It will use the docker driver.
docker run --name dind -d --privileged docker:20.10.17-dind
docker exec -it dind sh
/ # wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
/ # mv minikube-linux-amd64 minikube
/ # chmod +x minikube
/ # ./minikube start --force
...
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
/ # ./minikube kubectl -- run hello --image=hello-world
/ # ./minikube kubectl -- logs pod/hello
Hello from Docker!
...
Also, note that --force is needed to run minikube with the docker driver as root, which we shouldn't do according to the minikube instructions.

Trying to pull/run docker images from docker hub on Minikube fails

I am very new to Kubernetes and I have done some work with Docker previously. I am trying to accomplish the following:
Spin up Minikube
Use kubectl to spin up a docker image from Docker Hub.
I started minikube and things look like they are up and running. Then I ran the following command:
kubectl run nginx --image=nginx (please note I do not have this image anywhere on my machine and I am expecting k8s to fetch it for me)
Now, when I do that, it spins up the pod but the status is ImagePullBackOff. So I ran the kubectl describe pod command on it and the results look like the following:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8m default-scheduler Successfully assigned default/ngix-67c6755c86-qm5mv to minikube
Warning Failed 8m kubelet, minikube Failed to pull image "nginx": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.64.1:53: read udp 192.168.64.2:52133->192.168.64.1:53: read: connection refused
Normal Pulling 8m (x2 over 8m) kubelet, minikube Pulling image "nginx"
Warning Failed 8m (x2 over 8m) kubelet, minikube Error: ErrImagePull
Warning Failed 8m kubelet, minikube Failed to pull image "nginx": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.64.1:53: read udp 192.168.64.2:40073->192.168.64.1:53: read: connection refused
Normal BackOff 8m (x3 over 8m) kubelet, minikube Back-off pulling image "nginx"
Warning Failed 8m (x3 over 8m) kubelet, minikube Error: ImagePullBackOff
Then I searched around to see if anyone had faced similar issues, and it turned out that some people resolved it by restarting minikube with some additional flags, like below:
minikube start --vm-driver="xhyve" --insecure-registry="$REG_IP":80
When I do an nslookup inside Minikube, it does resolve, with the following information:
Server: 10.12.192.22
Address: 10.12.192.22#53
Non-authoritative answer:
hub.docker.com canonical name = elb-default.us-east-1.aws.dckr.io.
elb-default.us-east-1.aws.dckr.io canonical name = us-east-1-elbdefau-1nlhaqqbnj2z8-140214243.us-east-1.elb.amazonaws.com.
Name: us-east-1-elbdefau-1nlhaqqbnj2z8-140214243.us-east-1.elb.amazonaws.com
Address: 52.205.36.130
Name: us-east-1-elbdefau-1nlhaqqbnj2z8-140214243.us-east-1.elb.amazonaws.com
Address: 3.217.62.246
Name: us-east-1-elbdefau-1nlhaqqbnj2z8-140214243.us-east-1.elb.amazonaws.com
Address: 35.169.212.184
Still no luck. Is there anything I am doing wrong here?
The error message suggests that the Docker daemon running in the minikube VM can't resolve the registry-1.docker.io hostname because the DNS nameserver it's configured to use for resolution (192.168.64.1:53) is refusing connections. It's strange to me that the Docker daemon is trying to resolve registry-1.docker.io via a nameserver at 192.168.64.1, but when you nslookup on the VM it uses a nameserver at 10.12.192.22. I did an Internet search for "minikube Get registry-1.docker.io/v2: dial tcp: lookup registry-1.docker.io on 192.168.64.1:53" and found an issue where someone made this comment; it seems identical to your problem and specific to xhyve.
In that comment the person says:
This issue does look like an xhyve issue not seen with virtualbox.
and
Switching to virtualbox fixed this issue for me.
I stopped minikube, deleted it, and started it without --vm-driver=xhyve (minikube uses the virtualbox driver by default), and then docker build -t hello-node:v1 . worked fine without errors.
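The equivalent command sequence would be something like this (assuming the VirtualBox driver is installed):
minikube stop
minikube delete
minikube start --vm-driver=virtualbox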
In my case it was caused by running dnsmasq, a dns server, on my Mac using Homebrew, which caused the DNS requests to fail inside minikube. After stopping dnsmasq, everything worked.
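If dnsmasq was installed via Homebrew, checking for it and stopping it could look like this (a sketch, assuming it runs as a Homebrew service):
brew services list | grep dnsmasq
sudo brew services stop dnsmasq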
I got this problem with my local minikube setup and I wasn't able to pull any images I added to a simple deployment manifest.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
test1 0/1 ImagePullBackOff 0 68s
I tried to execute the test below:
apiVersion: v1
kind: Pod
metadata:
  name: test1
  labels:
    site: blog
spec:
  containers:
  - name: web
    image: nginx:latest
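To reproduce the test, the manifest can be applied with (assuming it is saved as test1.yaml):
$ kubectl apply -f test1.yaml
pod/test1 created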
It was fixed only after restarting minikube.
Maybe dnsmasq really was the cause in this case.
You have:
minikube running with default settings
docker building your images
(*) minikube configured to point to your local docker image repo
And now minikube can't pull images from public container registries, like Docker Hub.
Stop and start minikube, then point it back to your local docker image repo. The commands to do this (including (*)):
minikube stop
minikube start
minikube -p minikube docker-env
eval $(minikube -p minikube docker-env)
After running the above I was able to pull nginx, alpine and friends from hub.docker.com just by setting image: alpine in the yaml spec.
In my case the issue was just a short drop in my network connectivity. So if you have no dns/vpn/xhyve complications and it just stops working, the fix is easy enough.

Minikube start stuck in waiting for pods and timeout

I'm trying to run a sample application in my Ubuntu 18 VM.
I have installed the Docker client and server, version 18.06.1-ce. I already have VirtualBox running.
I used the link below to install kubectl 1.14: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-linux
I also have Minikube v1.0.1 installed, but the minikube start command gets stuck at "Waiting for pods: apiserver" and times out:
harshana#-Virtual-Machine:~$ sudo minikube start
πŸ˜„ minikube v1.0.1 on linux (amd64)
🀹 Downloading Kubernetes v1.14.1 images in the background ...
⚠️ Ignoring --vm-driver=virtualbox, as the existing "minikube" VM was created using the none driver.
⚠️ To switch drivers, you may create a new VM using `minikube start -p <name> --vm-driver=virtualbox`
⚠️ Alternatively, you may delete the existing VM using `minikube delete -p minikube`
πŸ”„ Restarting existing none VM for "minikube" ...
βŒ› Waiting for SSH access ...
πŸ“Ά "minikube" IP address is xxx.xxx.x.xxx
🐳 Configuring Docker as the container runtime ...
🐳 Version of container runtime is 18.06.1-ce
βŒ› Waiting for image downloads to complete ...
✨ Preparing Kubernetes environment ...
πŸ’Ύ Downloading kubeadm v1.14.1
πŸ’Ύ Downloading kubelet v1.14.1
🚜 Pulling images required by Kubernetes v1.14.1 ...
πŸ”„ Relaunching Kubernetes v1.14.1 using kubeadm ...
βŒ› Waiting for pods: apiserver
sudo minikube logs:
May 19 08:11:40 harshana-Virtual-Machine kubelet[10572]: E0519 08:11:40.825465 10572 kubelet.go:2244] node "minikube" not found
May 19 08:11:40 harshana-Virtual-Machine kubelet[10572]: E0519 08:11:40.895848 10572 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I got the same behaviour because I had first created the VM using kvm. I followed the instructions and deleted the VM, then ran the below:
1- minikube delete -p minikube
2- minikube start

How to access etcd in docker

I created a container with:
docker run -d --name etcd \
-v /usr/share/ca-certificates/:/etc/ssl/certs \
quay.io/coreos/etcd:v3.0.4 /usr/local/bin/etcd -advertise-client-urls \
http://0.0.0.0:2379 -listen-client-urls http://0.0.0.0:2379
And used
docker exec 40cc9457f132 ifconfig
to get its IP, "172.17.0.2".
Then I used the local etcdctl to get data:
etcdctl --endpoint=http://172.17.0.2:2379 get /testdir/testkey1
but it fails with:
Error: client: etcd cluster is unavailable or misconfigured
error #0: dial tcp 0.0.0.0:2379: getsockopt: connection refused
What should I do?
PS:
To make sure the data was actually stored in the container, I stopped the local etcd first:
systemctl stop etcd
If I don't do that, I can get the data, but it's not the same as the result of
docker exec 40cc9457f132 etcdctl get /testdir/testkey1
("40cc9457f132" is the container id.)
OK, I fixed it. It was a version problem.
My local etcd was v2.2.4 (installed via apt), while the etcd image was v3.0.4.
I updated both of them to v3.3.5 and set ETCDCTL_API=3.
Now everything seems all right.
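For reference, the v3 invocation differs slightly: the flag is --endpoints (plural) and get takes the key directly. A sketch using the container IP from the question:
export ETCDCTL_API=3
etcdctl --endpoints=http://172.17.0.2:2379 get /testdir/testkey1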
It seems that the same port is being used by both the local etcd and the docker container.
Please assign a different host port to the container, e.g. map host port 2380 to container port 2379:
etcdctl --endpoint=http://localhost:2379 -> local etcd
etcdctl --endpoint=http://localhost:2380 -> docker container
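For example, publishing the container's client port on host port 2380 would look like this (a sketch based on the run command from the question):
docker run -d --name etcd \
  -v /usr/share/ca-certificates/:/etc/ssl/certs \
  -p 2380:2379 \
  quay.io/coreos/etcd:v3.0.4 /usr/local/bin/etcd -advertise-client-urls \
  http://0.0.0.0:2379 -listen-client-urls http://0.0.0.0:2379
etcdctl --endpoint=http://localhost:2380 get /testdir/testkey1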
