How to access kind control plane port from another docker container? - docker

I'm creating a kind cluster with kind create cluster --name kind and I want to access it from another docker container, but when I try to apply a Kubernetes file from a container (kubectl apply -f deployment.yml) I get this error:
The connection to the server 127.0.0.1:6445 was refused - did you specify the right host or port?
Indeed, when I try to curl the kind control plane from a container, it's unreachable.
> docker run --entrypoint curl curlimages/curl:latest 127.0.0.1:6445
curl: (7) Failed to connect to 127.0.0.1 port 6445 after 0 ms: Connection refused
However, the kind control plane is publishing the right port, but only on localhost.
> docker ps --format "table {{.Image}}\t{{.Ports}}"
IMAGE                  PORTS
kindest/node:v1.23.4   127.0.0.1:6445->6443/tcp
Currently the only solution I've found is to use host network mode.
> docker run --network host --entrypoint curl curlimages/curl:latest 127.0.0.1:6445
Client sent an HTTP request to an HTTPS server.
This solution doesn't look like the most secure one. Is there another way, like connecting the kind network to my container or something similar, that I missed?

Don't have enough rep to comment on the other answer, but wanted to comment on what ultimately worked for me.
Takeaways
Kind cluster running in its own bridge network, kind
Service with kubernetes client running in another container with a mounted kube config volume
As described above, the containers need to be in the same network unless you want your service to run in the host network.
The server address in the kubeconfig is the container name plus the internal port, e.g. kind-control-plane:6443. The port is NOT the exposed port; in the example below it is 6443, NOT 38669.
CONTAINER ID   IMAGE                  PORTS
7f2ee0c1bd9a   kindest/node:v1.25.3   127.0.0.1:38669->6443/tcp
Kube config for the container
# path/to/some/kube/config
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true # Don't use in prod; equivalent of --insecure on the CLI
    server: https://<kind-control-plane container name>:6443 # NOTE: port is the internal container port
  name: kind-kind # or whatever
contexts:
- context:
    cluster: kind-kind
    user: <some-service-account>
  name: kind-kind # or whatever
current-context: kind-kind
kind: Config
preferences: {}
users:
- name: <some-service-account>
  user:
    token: <TOKEN>
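If you don't already have a token for that service account, one way to create one is sketched below (assuming kubectl v1.24+ for kubectl create token; the account name and the cluster-admin binding are only illustrative, so scope the RBAC down for real use):
kubectl create serviceaccount some-service-account
kubectl create clusterrolebinding some-service-account-admin --clusterrole=cluster-admin --serviceaccount=default:some-service-account
kubectl create token some-service-account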
Docker container stuff
If using docker-compose you can add the kind network to the container such as:
# docker-compose.yml
services:
  foobar:
    build:
      context: ./.config
    networks:
      - kind # add this container to the kind network
    volumes:
      - path/to/some/kube/config:/somewhere/in/the/container
networks:
  kind: # define the kind network
    external: true # specifies that the network already exists in docker
If running a new container:
docker run --network kind -v path/to/some/kube/config:/somewhere/in/the/container <image>
Container already running?
docker network connect kind <container name>
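To sanity-check the wiring from the service container, something like the following should work (a sketch; foobar and the mount path are just the example names used above, and it assumes kubectl is installed in that image):
docker exec -it foobar kubectl --kubeconfig /somewhere/in/the/container get nodes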

I don't know exactly why you want to do this, but no problem, I think this could help you:
First, let's pull your docker image:
❯ docker pull curlimages/curl
In my kind cluster I have 3 control plane nodes and 3 worker nodes. Here are the containers of my kind cluster:
❯ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
39dbbb8ca320 kindest/node:v1.23.5 "/usr/local/bin/entr…" 7 days ago Up 7 days 127.0.0.1:35327->6443/tcp so-cluster-1-control-plane
62b5538275e9 kindest/haproxy:v20220207-ca68f7d4 "haproxy -sf 7 -W -d…" 7 days ago Up 7 days 127.0.0.1:35625->6443/tcp so-cluster-1-external-load-balancer
9f189a1b6c52 kindest/node:v1.23.5 "/usr/local/bin/entr…" 7 days ago Up 7 days 127.0.0.1:40845->6443/tcp so-cluster-1-control-plane3
4c53f745a6ce kindest/node:v1.23.5 "/usr/local/bin/entr…" 7 days ago Up 7 days 127.0.0.1:36153->6443/tcp so-cluster-1-control-plane2
97e5613d2080 kindest/node:v1.23.5 "/usr/local/bin/entr…" 7 days ago Up 7 days 0.0.0.0:30081->30080/tcp so-cluster-1-worker2
0ca64a907707 kindest/node:v1.23.5 "/usr/local/bin/entr…" 7 days ago Up 7 days 0.0.0.0:30080->30080/tcp so-cluster-1-worker
9c5d26caee86 kindest/node:v1.23.5 "/usr/local/bin/entr…" 7 days ago Up 7 days 0.0.0.0:30082->30080/tcp so-cluster-1-worker3
The container that is interesting for us here is the haproxy one (kindest/haproxy:v20220207-ca68f7d4), whose role is to load balance the incoming traffic to the nodes (and, in our example, especially the control plane nodes). We can see that port 35625 of our host machine is mapped to port 6443 of the haproxy container (127.0.0.1:35625->6443/tcp).
So our cluster endpoint is https://127.0.0.1:35625; we can confirm this in our kubeconfig file (~/.kube/config):
❯ cat .kube/config
apiVersion: v1
kind: Config
preferences: {}
users:
- name: kind-so-cluster-1
  user:
    client-certificate-data: <base64data>
    client-key-data: <base64data>
clusters:
- cluster:
    certificate-authority-data: <certificate-authority-dataBase64data>
    server: https://127.0.0.1:35625
  name: kind-so-cluster-1
contexts:
- context:
    cluster: kind-so-cluster-1
    user: kind-so-cluster-1
    namespace: so-tests
  name: kind-so-cluster-1
current-context: kind-so-cluster-1
Let's run the curl container in the background:
❯ docker run -d --network host curlimages/curl sleep 3600
ba183fe2bb8d715ed1e503a9fe8096dba377f7482635eb12ce1322776b7e2366
As expected, we can't make a plain HTTP request to an endpoint that listens on an HTTPS port:
❯ docker exec -it ba curl 127.0.0.1:35625
Client sent an HTTP request to an HTTPS server.
We can try to use the certificate from the "certificate-authority-data" field of our kubeconfig to check whether that changes something (it should).
Let's create a file named my-ca.crt that contains the decoded data of the certificate:
base64 -d <<< <certificate-authority-dataBase64dataFromKubeConfig> > my-ca.crt
Since the working directory of the curl docker image is "/", let's copy our cert to this location in the container and verify that it is actually there:
docker cp my-ca.crt ba183fe:/
❯ docker exec -it ba sh
/ $ ls my-ca.crt
my-ca.crt
Let's try our curl request again, but with the certificate:
❯ docker exec -it ba curl --cacert my-ca.crt https://127.0.0.1:35625
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
You can get the same result by adding the "--insecure" flag to your curl request:
❯ docker exec -it ba curl https://127.0.0.1:35625 --insecure
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
However, we can't access our cluster as the anonymous user! So let's get a token from Kubernetes (cf. https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/):
# Create a secret to hold a token for the default service account
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: default-token
  annotations:
    kubernetes.io/service-account.name: default
type: kubernetes.io/service-account-token
EOF
Once the token controller has populated the secret with a token:
# Get the token value
❯ kubectl get secret default-token -o jsonpath='{.data.token}' | base64 --decode
eyJhbGciOiJSUzI1NiIsImtpZCI6InFSTThZZ05lWHFXMWExQlVSb1hTcHNxQ3F6Z2Z2aWpUaUYwd2F2TGdVZ0EifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJzby10ZXN0cyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzYzY0OTg1OS0xNzkyLTQzYTQtOGJjOC0zMDEzZDgxNjRmY2IiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6c28tdGVzdHM6ZGVmYXVsdCJ9.VLfjuym0fohYTT_uoLPwM0A6u7dUt2ciWZF2K9LM_YvQ0UZT4VgkM8UBVOQpWjTmf9s2B5ZxaOkPu4cz_B4xyDLiiCgqiHCbUbjxE9mphtXGKQwAeKLvBlhbjYnHb9fCTRW19mL7VhqRgfz5qC_Tae7ysD3uf91FvqjjxsCyzqSKlsq0T7zXnzQ_YQYoUplGa79-LS_xDwG-2YFXe0RfS9hkpCILpGDqhLXci_gwP9DW0a6FM-L1R732OdGnb9eCPI6ReuTXQz7naQ4RQxZSIiNd_S7Vt0AYEg-HGvSkWDl0_DYIyHShMeFHu1CtfTZS5xExoY4-_LJD8mi
Now let's execute the curl command directly with the token!
❯ docker exec -it ba curl -X GET https://127.0.0.1:35625/api --header "Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6InFSTThZZ05lWHFXMWExQlVSb1hTcHNxQ3F6Z2Z2aWpUaUYwd2F2TGdVZ0EifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJzby10ZXN0cyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzYzY0OTg1OS0xNzkyLTQzYTQtOGJjOC0zMDEzZDgxNjRmY2IiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6c28tdGVzdHM6ZGVmYXVsdCJ9.VLfjuym0fohYTT_uoLPwM0A6u7dUt2ciWZF2K9LM_YvQ0UZT4VgkM8UBVOQpWjTmf9s2B5ZxaOkPu4cz_B4xyDLiiCgqiHCbUbjxE9mphtXGKQwAeKLvBlhbjYnHb9fCTRW19mL7VhqRgfz5qC_Tae7ysD3uf91FvqjjxsCyzqSKlsq0T7zXnzQ_YQYoUplGa79-LS_xDwG-2YFXe0RfS9hkpCILpGDqhLXci_gwP9DW0a6FM-L1R732OdGnb9eCPI6ReuTXQz7naQ4RQxZSIiNd_S7Vt0AYEg-HGvSkWDl0_DYIyHShMeFHu1CtfTZS5xExoY4-_LJD8mi" --insecure
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "172.18.0.5:6443"
    }
  ]
}
It works!
I still don't know why you want to do this, but I hope that this helped you.
Since this is not quite what you wanted (here I use the host network), you can instead connect the containers by hostname, as described in How to communicate between Docker containers via "hostname", as proposed by @SergioSantiago, thanks for your comment!
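For completeness, the non-host-network variant would look something like this (a sketch; it assumes the default kind docker network name kind and reuses the control-plane container name and <TOKEN> from above):
❯ docker run --rm --network kind --entrypoint curl curlimages/curl:latest -X GET https://so-cluster-1-control-plane:6443/api --header "Authorization: Bearer <TOKEN>" --insecure
This should return the same APIVersions document as above, without putting the client container on the host network.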
bguess

Related

SSL(curl) connection error in ElasticSearch setup

I have set up a 3-node Elasticsearch cluster using docker-compose, following the steps below.
On one of the master nodes, es11, I get the error below; however, the same curl command works fine on the other 2 nodes, i.e. es12 and es13:
Error:
curl -X GET 'https://localhost:9316'
curl: (35) Encountered end of file
Below error in logs:
"stacktrace": ["org.elasticsearch.transport.RemoteTransportException: [es13][SOMEIP:9316][internal:cluster/coordination/join]",
"Caused by: org.elasticsearch.transport.ConnectTransportException: [es11][SOMEIP:9316] handshake failed. unexpected remote node {es13}{SOMEVALUE}{SOMEVALUE
"at org.elasticsearch.transport.TransportService.lambda$connectionValidator$6(TransportService.java:468) ~[elasticsearch-7.17.6.jar:7.17.6]",
"at org.elasticsearch.action.ActionListener$MappedActionListener.onResponse(ActionListener.java:95) ~[elasticsearch-7.17.6.jar:7.17.6]",
"at org.elasticsearch.transport.TransportService.lambda$handshake$9(TransportService.java:577) ~[elasticsearch-7.17.6.jar:7.17.6]",
https://localhost:9316 in the browser gives a "site can't be reached" error as well. It seems the SSL certificate created in step 4 below has some issue on es11.
Any leads, please? Or, if I repeat step 4, do I need to copy the certs again to es12 & es13?
Below is elasticsearch.yml:
cluster.name: "docker-cluster"
network.host: 0.0.0.0
Ports as defined in all 3 nodes' docker-compose.yml:
environment:
  - node.name=es11
  - transport.port=9316
ports:
  - 9216:9200
  - 9316:9316
1. Initialize a docker swarm. On ES11 run docker swarm init. Follow the instructions to join 12 and 13 to the swarm.
2. Create an overlay network: docker network create -d overlay --attachable elastic
3. If necessary, bring down the current cluster and remove all the associated volumes by running docker-compose down -v
4. Create SSL certificates for ES with docker-compose -f create-certs.yml run --rm create_certs
5. Copy the certs for es12 and 13 to the respective servers
6. Use busybox to create the overlay network on 12 and 13: sudo docker run -itd --name containerX --net [network name] busybox
7. Configure certs on 12 and 13 with docker-compose -f config-certs.yml run --rm config_certs
8. Start the cluster with docker-compose up -d on each server
9. Set the passwords for the built-in ES accounts by logging into the cluster (docker exec -it es11 sh) and then running bin/elasticsearch-setup-passwords interactive --url localhost:9316
(as per your https://discuss.elastic.co thread)
You cannot talk HTTP to the transport protocol port, which you have defined in transport.port. You need to talk to port 9200 in the container, which you have mapped to 9216 outside the container.
The transport port runs a binary protocol that is not HTTP accessible.
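For example, something like this should reach the HTTP API instead (a sketch, assuming the 9216:9200 mapping from the compose file above and the built-in elastic user; -k skips verification of the self-signed certificate):
curl -k -u elastic https://localhost:9216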

How to run minikube inside a docker container?

I intend to test a non-trivial Kubernetes setup as part of CI and wish to run the full system before CD. I cannot run --privileged containers and am running the docker container as a sibling to the host using docker run -v /var/run/docker.sock:/var/run/docker.sock
The basic docker setup seems to be working on the container:
linuxbrew@03091f71a10b:~$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
However, minikube fails to start inside the docker container, reporting connection issues:
linuxbrew@03091f71a10b:~$ minikube start --alsologtostderr -v=7
I1029 15:07:41.274378 2183 out.go:298] Setting OutFile to fd 1 ...
I1029 15:07:41.274538 2183 out.go:345] TERM=xterm,COLORTERM=, which probably does not support color
...
...
...
I1029 15:20:27.040213 197 main.go:130] libmachine: Using SSH client type: native
I1029 15:20:27.040541 197 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1e20] 0x7a4f00 <nil> [] 0s} 127.0.0.1 49350 <nil> <nil>}
I1029 15:20:27.040593 197 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I1029 15:20:27.040992 197 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:49350: connect: connection refused
This is despite the network being linked and the port being properly forwarded:
linuxbrew@51fbce78731e:~$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
93c35cec7e6f gcr.io/k8s-minikube/kicbase:v0.0.27 "/usr/local/bin/entr…" 2 minutes ago Up 2 minutes 127.0.0.1:49350->22/tcp, 127.0.0.1:49351->2376/tcp, 127.0.0.1:49348->5000/tcp, 127.0.0.1:49349->8443/tcp, 127.0.0.1:49347->32443/tcp minikube
51fbce78731e 7f7ba6fd30dd "/bin/bash" 8 minutes ago Up 8 minutes bpt-ci
linuxbrew@51fbce78731e:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
1e800987d562 bridge bridge local
aa6b2909aa87 host host local
d4db150f928b kind bridge local
a781cb9345f4 minikube bridge local
0a8c35a505fb none null local
linuxbrew@51fbce78731e:~$ docker network connect a781cb9345f4 93c35cec7e6f
Error response from daemon: endpoint with name minikube already exists in network minikube
The minikube container seems to be alive and well when trying to curl from the host, and even ssh is responding:
mastercook@linuxkitchen:~$ curl https://127.0.0.1:49350
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 127.0.0.1:49350
mastercook@linuxkitchen:~$ ssh root@127.0.0.1 -p 49350
The authenticity of host '[127.0.0.1]:49350 ([127.0.0.1]:49350)' can't be established.
ED25519 key fingerprint is SHA256:0E41lExrrezFK1QXULaGHgk9gMM7uCQpLbNPVQcR2Ec.
This key is not known by any other names
What am I missing and how can I make minikube properly discover the correctly working minikube container?
Because minikube does not complete the cluster creation, running Kubernetes in a (sibling) Docker container favours kind.
Given that the (sibling) container does not know enough about its setup, the networking connections are a bit flawed. Specifically, a loopback IP is selected by kind (and minikube) upon cluster creation even though the actual container sits on a different IP in the host docker.
To correct the networking, the (sibling) container needs to be connected to the network actually hosting the Kubernetes image. To accomplish this, the procedure is illustrated below:
1.) Create a kubernetes cluster:
linuxbrew@324ba0f819d7:~$ kind create cluster --name acluster
Creating cluster "acluster" ...
✓ Ensuring node image (kindest/node:v1.21.1) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-acluster"
You can now use your cluster with:
kubectl cluster-info --context kind-acluster
Thanks for using kind! 😊
2.) Verify if the cluster is accessible:
linuxbrew@324ba0f819d7:~$ kubectl cluster-info --context kind-acluster
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 127.0.0.1:36779 was refused - did you specify the right host or port?
3.) Since the cluster cannot be reached, retrieve the control plane's master IP. Note the "-control-plane" suffix appended to the cluster name:
linuxbrew@324ba0f819d7:~$ export MASTER_IP=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' acluster-control-plane)
4.) Update the kube config with the actual master IP:
linuxbrew@324ba0f819d7:~$ sed -i "s/^    server:.*/    server: https:\/\/$MASTER_IP:6443/" $HOME/.kube/config
5.) This IP is still not accessible by the (sibling) container; to connect the container to the correct network, retrieve the docker network ID:
linuxbrew@324ba0f819d7:~$ export MASTER_NET=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.NetworkID}}{{end}}' acluster-control-plane)
6.) Finally connect the (sibling) container ID (which should be stored in the $HOSTNAME environment variable) with the cluster docker network:
linuxbrew@324ba0f819d7:~$ docker network connect $MASTER_NET $HOSTNAME
7.) Verify whether the control plane is accessible after the changes:
linuxbrew@324ba0f819d7:~$ kubectl cluster-info --context kind-acluster
Kubernetes control plane is running at https://172.18.0.4:6443
CoreDNS is running at https://172.18.0.4:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
If kubectl returns Kubernetes control plane and CoreDNS URL, as shown in the last step above, the configuration has succeeded.
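As an optional extra check (plain kubectl, nothing specific to this setup):
linuxbrew@324ba0f819d7:~$ kubectl --context kind-acluster get nodes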
You can run minikube in a docker-in-docker container. It will use the docker driver.
docker run --name dind -d --privileged docker:20.10.17-dind
docker exec -it dind sh
/ # wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
/ # mv minikube-linux-amd64 minikube
/ # chmod +x minikube
/ # ./minikube start --force
...
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
/ # ./minikube kubectl -- run hello --image=hello-world
/ # ./minikube kubectl -- logs pod/hello
Hello from Docker!
...
Also, note that --force is needed for running minikube with the docker driver as root, which we shouldn't do according to minikube's instructions.

127.0.0.1:5000: getsockopt: connection refused in Minikube

Using minikube and docker on my local Ubuntu workstation I get the following error in the Minikube web UI:
Failed to pull image "localhost:5000/samples/myserver:snapshot-180717-213718-0199": rpc error: code = Unknown desc = Error response from daemon: Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: getsockopt: connection refused
after I have created the below deployment config with:
kubectl apply -f hello-world-deployment.yaml
hello-world-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
        tier: backend
    spec:
      containers:
      - name: hello-world
        image: localhost:5000/samples/myserver:snapshot-180717-213718-0199
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 8080
And output from docker images:
REPOSITORY TAG IMAGE ID CREATED SIZE
samples/myserver latest aa0a1388cd88 About an hour ago 435MB
samples/myserver snapshot-180717-213718-0199 aa0a1388cd88 About an hour ago 435MB
k8s.gcr.io/kube-proxy-amd64 v1.10.0 bfc21aadc7d3 3 months ago 97MB
Based on this guide:
How to use local docker images with Minikube?
I have also run:
eval $(minikube docker-env)
and based on this:
https://github.com/docker/for-win/issues/624
I have added:
"InsecureRegistry": [
"localhost:5000",
"127.0.0.1:5000"
],
to /etc/docker/daemon.json
Any suggestion on what I'm missing to get the image pull to work in minikube?
I have followed the steps in the below answer but when I get to this step:
$ kubectl port-forward --namespace kube-system $(kubectl get po -n kube-system | grep kube-registry-v0 | awk '{print $1;}') 5000:5000
it just hangs like this:
$ kubectl port-forward --namespace kube-system $(kubectl get po -n kube-system | grep kube-registry-v0 | awk '{print $1;}') 5000:5000
Forwarding from 127.0.0.1:5000 -> 5000
Forwarding from [::1]:5000 -> 5000
and I get the same error in minikube dashboard after I create my deploymentconfig.
Based on the answer from BMitch I have now tried to create a local docker registry and push an image to it with:
$ docker run -d -p 5000:5000 --restart always --name registry registry:2
$ docker pull ubuntu
$ docker tag ubuntu localhost:5000/ubuntu:v1
$ docker push localhost:5000/ubuntu:v1
Next when I do docker images I get:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu latest 74f8760a2a8b 4 days ago 82.4MB
localhost:5000/ubuntu v1 74f8760a2a8b 4 days ago 82.4MB
I have then updated my deploymentconfig hello-world-deployment.yaml to:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
        tier: backend
    spec:
      containers:
      - name: hello-world
        image: localhost:5000/ubuntu:v1
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 8080
and
kubectl create -f hello-world-deployment.yaml
But in Minikube I still get similar error:
Failed to pull image "localhost:5000/ubuntu:v1": rpc error: code = Unknown desc = Error response from daemon: Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: getsockopt: connection refused
So it seems Minikube is not able to see the local registry I just created?
It looks like you're facing a problem with localhost on your computer versus localhost used within the context of the minikube VM.
To have the registry working, you have to set up additional port forwarding.
If your minikube installation is currently broken due to a lot of attempts to fix registry problems, I would suggest restarting the minikube environment:
minikube stop && minikube delete && rm -fr $HOME/.minikube && minikube start
Next, get kube registry yaml file:
curl -O https://gist.githubusercontent.com/coco98/b750b3debc6d517308596c248daf3bb1/raw/6efc11eb8c2dce167ba0a5e557833cc4ff38fa7c/kube-registry.yaml
Then, apply it on minikube:
kubectl create -f kube-registry.yaml
Test if registry inside minikube VM works:
minikube ssh && curl localhost:5000
On Ubuntu, forward ports to reach registry at port 5000:
kubectl port-forward --namespace kube-system $(kubectl get po -n kube-system | grep kube-registry-v0 | awk '{print $1;}') 5000:5000
If you would like to share your private registry from your machine, you may be interested in sharing local registry for minikube blog entry.
If you're specifying the image source as the local registry server, you'll need to run a registry server there, and push your images to it.
You can self host a registry server with multiple 3rd party options, or run this one that is packaged inside a docker container: https://hub.docker.com/_/registry/
This only works on a single node environment unless you setup TLS keys, trust the CA, or tell all other nodes of the additional insecure registry.
You can also specify the imagePullPolicy as Never.
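As a sketch of the imagePullPolicy: Never route (it assumes the image was built against minikube's Docker daemon, e.g. after eval $(minikube docker-env), so no registry is involved; only the container part of the deployment changes):
      containers:
      - name: hello-world
        image: samples/myserver:snapshot-180717-213718-0199
        imagePullPolicy: Never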
Both of these solutions were already in your linked question and I'm not seeing any evidence of you trying either in this question. Without showing how you tried those steps and experienced a different problem, this question should probably be closed as a duplicate.
It is unclear from your question how many nodes you have.
If you have more than one, your problem is in your deployment with replicas: 1.
If not, please ignore this answer.
You don't know where and what that replica will be. So if you don't have a local docker registry on all of your nodes, and you get unlucky and kubernetes tries to use a node without a docker registry, you will end up with that error.
The same thing happened to me: the same connection refused error, because the deployment went to a node without a local docker registry.
As I am typing this, I think this can be resolved with an in-cluster registry: run the registry as a deployment, add a service, add a volume for the images, and expose it through an ingress (see the sketch below). It's a little more work, but at least all your nodes (all of your pods, sorry) will be in sync.
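A minimal sketch of such an in-cluster registry (names and ports are illustrative; emptyDir means the images are lost when the pod restarts, so use a PersistentVolumeClaim in practice, and put an Ingress or NodePort in front to reach it):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
      - name: registry
        image: registry:2
        ports:
        - containerPort: 5000
        volumeMounts:
        - name: images
          mountPath: /var/lib/registry
      volumes:
      - name: images
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: registry
spec:
  selector:
    app: registry
  ports:
  - port: 5000
    targetPort: 5000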

Private registry with Kubernetes

I'm trying (for test purposes) to deploy to kubernetes a very simple HTTP pong image:
FROM golang:onbuild
EXPOSE 8000
I built the docker image:
docker build -t pong .
I started a private registry (with certificates):
docker run -d --restart=always --name registry -v `pwd`/certs:/certs -e REGISTRY_HTTP_ADDR=0.0.0.0:443 -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key -p 443:443 registry:2.6.2
I created a secret:
kubectl create secret docker-registry regsecret --docker-server=localhost --docker-username=johndoe --docker-password=johndoe --docker-email=johndoe@yopmail.com
I uploaded the image:
docker tag 9c0bb659fea1 localhost/pong
docker push localhost/pong
I had an insecure registry configuration
{
  "storage-driver" : "aufs",
  "insecure-registries" : [
    "localhost"
  ],
  "debug" : true,
  "experimental" : true
}
So I tried to create my kubernetes pods with:
apiVersion: v1
kind: Pod
metadata:
  name: pong
spec:
  containers:
  - name: pong
    image: localhost/pong:latest
    imagePullPolicy: Always
  imagePullSecrets:
  - name: regsecret
I'm on MacOS with docker Version 17.12.0-ce-mac49 (21995).
If I use image: localhost/pong:latest I get:
waiting:
  message: 'rpc error: code = Unknown desc = Error response from daemon: error
    parsing HTTP 404 response body: invalid character ''d'' looking for beginning
    of value: "default backend - 404"'
  reason: ErrImagePull
I've been stuck on this since the beginning of the week, without success.
It was not a problem with the registry configuration. I forgot to mention that I used minikube.
For the flags to be taken into account, I had to delete the minikube configuration and recreate it:
minikube delete
minikube start --insecure-registry="10.0.4.0/24"
Hey, try to browse your registry using this nice front-end app: https://hub.docker.com/r/konradkleine/docker-registry-frontend/
Perhaps this will give you some hint; it looks like the registry has some configuration issue...
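For example (a sketch; <registry host> and <registry port> are placeholders, and the environment variable names are the ones I recall from that image's README, so double-check them there):
docker run -d \
  -e ENV_DOCKER_REGISTRY_HOST=<registry host> \
  -e ENV_DOCKER_REGISTRY_PORT=<registry port> \
  -p 8080:80 \
  konradkleine/docker-registry-frontend:v2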
Instead of deleting the cluster first (minikube delete), the configuration JSON may be edited at ~/.minikube/config/config.json to add this section accordingly:
{
  ...
  "HostOptions": {
    ...
    "InsecureRegistry": [
      "private.docker.registry:5000"
    ],
    ...
  },
  ...
}
This only works on started clusters, as the configuration file won't be populated otherwise. The answer above using minikube --insecure-registry="" is fine.

Docker 1.10 access a container by its hostname from a host machine

I have Docker version 1.10 with the embedded DNS service.
I have created two service containers in my docker-compose file. They can reach each other by hostname and by IP, but when I try to reach one of them from the host machine, it doesn't work; it works only with the IP but not with the hostname.
So, is it possible to access a docker container from the host machine by its hostname in Docker 1.10, please?
Update:
docker-compose.yml
version: '2'
services:
  service_a:
    image: nginx
    container_name: docker_a
    ports:
      - 8080:80
  service_b:
    image: nginx
    container_name: docker_b
    ports:
      - 8081:80
Then I start it with: docker-compose up --force-recreate
when I run:
docker exec -i -t docker_a ping -c4 docker_b - it works
docker exec -i -t docker_b ping -c4 docker_a - it works
ping 172.19.0.2 - it works (172.19.0.2 is docker_b's ip)
ping docker_a - fails
The result of the docker network inspect test_default is
[
  {
    "Name": "test_default",
    "Id": "f6436ef4a2cd4c09ffdee82b0d0b47f96dd5aee3e1bde068376dd26f81e79712",
    "Scope": "local",
    "Driver": "bridge",
    "IPAM": {
      "Driver": "default",
      "Options": null,
      "Config": [
        {
          "Subnet": "172.19.0.0/16",
          "Gateway": "172.19.0.1/16"
        }
      ]
    },
    "Containers": {
      "a9f13f023761123115fcb2b454d3fd21666b8e1e0637f134026c44a7a84f1b0b": {
        "Name": "docker_a",
        "EndpointID": "a5c8e08feda96d0de8f7c6203f2707dd3f9f6c3a64666126055b16a3908fafed",
        "MacAddress": "02:42:ac:13:00:03",
        "IPv4Address": "172.19.0.3/16",
        "IPv6Address": ""
      },
      "c6532af99f691659b452c1cbf1693731a75cdfab9ea50428d9c99dd09c3e9a40": {
        "Name": "docker_b",
        "EndpointID": "28a1877a0fdbaeb8d33a290e5a5768edc737d069d23ef9bbcc1d64cfe5fbe312",
        "MacAddress": "02:42:ac:13:00:02",
        "IPv4Address": "172.19.0.2/16",
        "IPv6Address": ""
      }
    },
    "Options": {}
  }
]
As answered here, there is a software solution for this; copying the answer:
There is an open source application that solves this issue, it's called DNS Proxy Server
It's a DNS server that resolves container hostnames, and when it can't resolve a hostname then it can resolve it using public nameservers.
Start the DNS Server
$ docker run --hostname dns.mageddo --name dns-proxy-server -p 5380:5380 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /etc/resolv.conf:/etc/resolv.conf \
defreitas/dns-proxy-server
It will set itself as your default DNS automatically (and revert back to the original when it stops).
Start your container for the test
docker-compose up
docker-compose.yml
version: '2'
services:
  redis:
    container_name: redis
    image: redis:2.8
    hostname: redis.dev.intranet
    network_mode: bridge # this way it can also resolve other containers' names, e.g. elasticsearch
  elasticsearch:
    container_name: elasticsearch
    image: elasticsearch:2.2
    hostname: elasticsearch.dev.intranet
Now resolve your containers' hostnames
from host
$ nslookup redis.dev.intranet
Server: 172.17.0.2
Address: 172.17.0.2#53
Non-authoritative answer:
Name: redis.dev.intranet
Address: 172.21.0.3
from another container
$ docker exec -it redis ping elasticsearch.dev.intranet
PING elasticsearch.dev.intranet (172.21.0.2): 56 data bytes
It resolves Internet hostnames as well:
$ nslookup google.com
Server: 172.17.0.2
Address: 172.17.0.2#53
Non-authoritative answer:
Name: google.com
Address: 216.58.202.78
Here's what I do.
I wrote a Python script called dnsthing, which listens to the Docker events API for containers starting or stopping. It maintains a hosts-style file with the names and addresses of containers. Containers are named <container_name>.<network>.docker, so for example if I run this:
docker run --rm --name mysql -e MYSQL_ROOT_PASSWORD=secret mysql
I get this:
172.17.0.2 mysql.bridge.docker
I then run a dnsmasq process pointing at this hosts file. Specifically, I run a dnsmasq instance using the following configuration:
listen-address=172.31.255.253
bind-interfaces
addn-hosts=/run/dnsmasq/docker.hosts
local=/docker/
no-hosts
no-resolv
And I run the dnsthing script like this:
dnsthing -c "systemctl restart dnsmasq_docker" \
-H /run/dnsmasq/docker.hosts --verbose
So:
dnsthing updates /run/dnsmasq/docker.hosts as containers stop/start.
After an update, dnsthing runs systemctl restart dnsmasq_docker.
dnsmasq_docker runs dnsmasq using the above configuration, bound to a local bridge interface with the address 172.31.255.253.
The "main" dnsmasq process on my system, maintained by NetworkManager, uses this configuration from /etc/NetworkManager/dnsmasq.d/dockerdns:
server=/docker/172.31.255.253
That tells dnsmasq to pass all requests for hosts in the .docker domain to the docker_dnsmasq service.
This obviously requires a bit of setup to put everything together, but after that it seems to Just Work:
$ ping -c1 mysql.bridge.docker
PING mysql.bridge.docker (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.087 ms
--- mysql.bridge.docker ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms
To specifically solve this problem I created a simple "etc/hosts" domain injection tool that resolves names of local Docker containers on the host. Just run:
docker run -d \
-v /var/run/docker.sock:/tmp/docker.sock \
-v /etc/hosts:/tmp/hosts \
--name docker-hoster \
dvdarias/docker-hoster
You will be able to access a container using the container name, hostname, container id, and via the network aliases they have declared for each network.
Containers are automatically registered when they start and removed when they are paused, dead or stopped.
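For example (a quick check, assuming the resolver behaves as described above; webtest is just an illustrative name):
docker run -d --name webtest nginx
ping -c1 webtest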
The easiest way to do this is to add entries to your hosts file.
For Linux: add 127.0.0.1 docker_a docker_b to the /etc/hosts file (see the sketch below).
For Mac: similar to Linux, but use the IP of the virtual machine from docker-machine ip default.
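A minimal sketch of the resulting hosts entry (the names match the compose file above):
# /etc/hosts (Linux; on macOS use the output of docker-machine ip default instead of 127.0.0.1)
127.0.0.1   docker_a docker_b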
Similar to @larsks, I wrote a Python script too, but implemented it as a service. Here it is: https://github.com/nicolai-budico/dockerhosts
It launches dnsmasq with the parameter --hostsdir=/var/run/docker-hosts and updates the file /var/run/docker-hosts/hosts each time the list of running containers changes.
Once the file /var/run/docker-hosts/hosts is changed, dnsmasq automatically updates its mapping and the container becomes available by hostname within a second.
$ docker run -d --hostname=myapp.local.com --rm -it ubuntu:17.10
9af0b6a89feee747151007214b4e24b8ec7c9b2858badff6d584110bed45b740
$ nslookup myapp.local.com
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: myapp.local.com
Address: 172.17.0.2
There are install and uninstall scripts. All you need is to allow your system to interact with this dnsmasq instance. I registered it in systemd-resolved:
$ cat /etc/systemd/resolved.conf
[Resolve]
DNS=127.0.0.54
#FallbackDNS=
#Domains=
#LLMNR=yes
#MulticastDNS=yes
#DNSSEC=no
#Cache=yes
#DNSStubListener=udp
