Docker container gives an unknown host error when I ping a private-network hostname:
ping "private Network hostname"
ping: unknown host
But when I ping by IP, it works:
8 packets transmitted, 8 packets received, 0% packet loss
The workaround seems to be adding a host entry to the /etc/hosts file in the running Docker container, but I am using Docker on a Kubernetes (K8s) platform that dynamically creates new containers, so I cannot add host entries manually. I was wondering why it cannot resolve the hostname. Any help appreciated :)
You can add hostAliases in the Pod spec. For details, see the official doc.
Here is an example of a Pod where hostAliases are used:
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "8.8.8.8"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: cat-hosts
    image: busybox
    command:
    - ping
    args:
    - "foo.local"
If we check the logs of the Pod:
$ kubectl logs po/hostaliases-pod
PING foo.local (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=61 time=51.333 ms
64 bytes from 8.8.8.8: seq=1 ttl=61 time=59.600 ms
....
As stated in the official doc, there are some limitations:
HostAlias is only supported in 1.7+.
HostAlias support in 1.7 is limited to non-hostNetwork Pods because kubelet only manages the hosts file for non-hostNetwork Pods.
In 1.8, HostAlias is supported for all Pods regardless of network configuration.
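Since kubelet writes these aliases into the container's hosts file, a quick way to verify the result (assuming the hostaliases-pod above is still running) is:

$ kubectl exec hostaliases-pod -- cat /etc/hosts

The foo.local and bar.local entries should show up there alongside the default ones.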
Related
I am running a cluster in the default namespace with all the Pods in the Running state.
I have an issue: I am trying to telnet from one Pod to another Pod using the Pod hostname 'abcd-7988b76669-lgp8l', but I am not able to connect, although it works if I use the Pod's internal IP. Why is the DNS name not resolved?
I looked at
kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6955765f44-5lpfd 1/1 Running 0 12h
coredns-6955765f44-9cvnb 1/1 Running 0 12h
Does anybody have any idea how to connect from one Pod to another using hostname resolution?
First of all, it is worth mentioning that typically you won't connect to individual Pods using their domain names. One good reason for that is their ephemeral nature. Note that typically you don't create plain Pods but a controller such as a Deployment, which manages your Pods and ensures that a specific number of Pods of a certain kind is constantly up and running. Pods may often be deleted and recreated, hence you should never rely on their domain names in your applications. Typically you will expose them to other apps, e.g. those running in other Pods, via a Service.
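For illustration only, a minimal sketch of that typical setup; the my-app names and the port are hypothetical, not taken from your cluster. A Deployment keeps the Pods running and a Service gives them a single stable DNS name:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: busybox:1.28
        command: ["sleep", "3600"]
---
apiVersion: v1
kind: Service
metadata:
  name: my-app                 # other Pods reach the app via this stable name
spec:
  selector:
    app: my-app                # must match the Pod labels above
  ports:
  - port: 8080                 # hypothetical port
    targetPort: 8080

Other Pods in the same namespace can then reach it as my-app (or my-app.<namespace>.svc.cluster.local), no matter how often the underlying Pods are replaced.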
Although using an individual Pod's domain name is not recommended, it is still possible. You can do it just for fun or for learning/experimenting purposes.
As @David already mentioned, you would help us much more in providing a comprehensive answer if you EDIT your question and add a few important details showing what you've already tried, such as your Pod and Service definitions in YAML format.
Answering literally the question posted in the title:
minikube how to connect from one pod to another using hostnames?
You won't be able to connect to a Pod using simply its hostname. You can, for example, ping your backend Pods exposed via a ClusterIP Service by simply pinging the <service-name> (provided it is in the same namespace as the Pod you're pinging from).
Keep in mind, however, that this doesn't work for Pods: neither Pod names nor their hostnames are resolvable by the cluster DNS.
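You can see this for yourself from inside any Pod in the namespace. In the sketch below, backend stands for a hypothetical ClusterIP Service and busybox1 is the Pod defined further down:

/ # nslookup backend                               # Service name: resolves
/ # nslookup backend.default.svc.cluster.local     # Service FQDN: resolves
/ # nslookup busybox1                              # plain Pod name: fails, not in cluster DNS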
You should be able to connect to an individual Pod using its fully qualified domain name (FQDN), provided you have configured everything properly. Just make sure you didn't overlook any of the steps described here:
Make sure you've created a simple Headless Service, which may look like this:
apiVersion: v1
kind: Service
metadata:
  name: default-subdomain
spec:
  selector:
    name: busybox
  clusterIP: None
Make sure that your Pod definitions don't lack any important details:
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  labels:
    name: busybox
spec:
  hostname: busybox-1
  subdomain: default-subdomain
  containers:
  - image: busybox:1.28
    command:
    - sleep
    - "3600"
    name: busybox
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox2
  labels:
    name: busybox
spec:
  hostname: busybox-2
  subdomain: default-subdomain
  containers:
  - image: busybox:1.28
    command:
    - sleep
    - "3600"
    name: busybox
Speaking about important details, pay special attention that you correctly defined hostname and subdomain in the Pod specification and that the labels used by the Pods match the labels used by the Service's selector.
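Putting those pieces together, the FQDN that cluster DNS creates for each Pod follows the pattern below; all concrete values come from the manifests above, and the default cluster domain cluster.local is assumed:

<pod-hostname>.<headless-service-name>.<namespace>.svc.cluster.local
# e.g. busybox-2.default-subdomain.default.svc.cluster.local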
Once everything is configured properly, you will be able to attach to Pod busybox1 and ping Pod busybox2 using its FQDN, as in the example below:
$ kubectl exec -ti busybox1 -- /bin/sh
/ # ping busybox-2.default-subdomain.default.svc.cluster.local
PING busybox-2.default-subdomain.default.svc.cluster.local (10.16.0.109): 56 data bytes
64 bytes from 10.16.0.109: seq=0 ttl=64 time=0.051 ms
64 bytes from 10.16.0.109: seq=1 ttl=64 time=0.082 ms
64 bytes from 10.16.0.109: seq=2 ttl=64 time=0.081 ms
I hope this helps.
I am trying to connect two locally developed projects running on docker-compose by using external networking.
On one side I have the 1st application, intended to be exposed. Its Compose file contains the hosts app and rabbit:
version: '3.4'
services:
  app:
    # ...
  rabbit:
    # ...
networks:
  default:
    driver: bridge
On the other side I have the 2nd application, which is expected to see the 1st application:
version: '3.4'
services:
  app:
    # ...
    networks:
      - paymentservice_default
      - default
networks:
  paymentservice_default:
    external: true
Reaching the host rabbit.paymentservice_default is possible.
However, the service app (1st) conflicts with app (2nd):
root@6db86687229c:/app# ping app.paymentservice_default
PING app.paymentservice_default (192.168.80.6) 56(84) bytes of data.
root@6db86687229c:/app# ping app
PING app (192.168.80.6) 56(84) bytes of data.
In general, from the 2nd Compose project's perspective, the hosts app and app.paymentservice_default share the same IP, making app.paymentservice_default undiscoverable.
The question here is: is my configuration correct, and can the conflict be avoided without changing the service name app? Why this constraint? Bear in mind that every docker-compose configuration is shared across projects and can be developed independently in a micro-services world.
$ docker-compose --version
docker-compose version 1.17.1, build unknown
$ docker --version
Docker version 19.03.4, build 9013bf583a
Thank you.
I used the following configuration on Docker Playground.
paymentservice.docker-compose.yml
version: '3.4'
services:
  app:
    image: busybox
    # keep container running
    command: tail -f /dev/null
  rabbit:
    image: rabbitmq
networks:
  default:
    driver: bridge
other.docker-compose.yml
version: '3.4'
services:
  app:
    image: busybox
    # keep container running
    command: tail -f /dev/null
    networks:
      - paymentservice_default
      - default
networks:
  paymentservice_default:
    external: true
Run both projects
$ COMPOSE_PROJECT_NAME=paymentservice docker-compose -f paymentservice.docker-compose.yml up -d
$ COMPOSE_PROJECT_NAME=other docker-compose -f other.docker-compose.yml up -d
Show Docker IPs
$ docker ps -q | xargs -n 1 docker inspect --format '{{ .Name }} {{range .NetworkSettings.Networks}} {{.IPAddress}}{{end}}' | sed 's#^/##';
I got
other_app_1 172.20.0.2 172.19.0.4
paymentservice_app_1 172.19.0.3
paymentservice_rabbit_1 172.19.0.2
and I pinged paymentservice_app_1 (172.19.0.3) from other_app_1 using app.paymentservice_default
$ docker exec -it other_app_1 ping -c 1 app.paymentservice_default
PING app.paymentservice_default (172.19.0.3): 56 data bytes
64 bytes from 172.19.0.3: seq=0 ttl=64 time=0.258 ms
--- app.paymentservice_default ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.258/0.258/0.258 ms
and I pinged other_app_1 (172.20.0.2) from other_app_1 using app
$ docker exec -it other_app_1 ping -c 1 app
PING app (172.20.0.2): 56 data bytes
64 bytes from 172.20.0.2: seq=0 ttl=64 time=0.054 ms
--- app ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.054/0.054/0.054 ms
As you can see, I can access the 1st app (of paymentservice.docker-compose.yml) from the 2nd app (of other.docker-compose.yml).
The same works in the other direction. I pinged other_app_1 (172.19.0.4) from paymentservice_app_1 using app.paymentservice_default
$ docker exec -it paymentservice_app_1 ping -c 1 app.paymentservice_default
PING app.paymentservice_default (172.19.0.4): 56 data bytes
64 bytes from 172.19.0.4: seq=0 ttl=64 time=0.198 ms
--- app.paymentservice_default ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.198/0.198/0.198 ms
I pinged paymentservice_app_1 (172.19.0.3) from paymentservice_app_1 using app
$ docker exec -it paymentservice_app_1 ping -c 1 app
PING app (172.19.0.3): 56 data bytes
64 bytes from 172.19.0.3: seq=0 ttl=64 time=0.057 ms
--- app ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.057/0.057/0.057 ms
As you can see, I can access the app service of both projects. If I'd like to access a service of the same project, I use the project's default network. If I'd like to access a service of another project, I use the external network shared between both projects.
Note: I would recommend making this more explicit by creating the shared network outside of the projects using the command line
docker network create shared-between-paymentservice-and-other
and declaring it as external in both projects.
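A sketch of what each Compose file could then look like: both projects simply reference the pre-created network by name instead of one project owning it (service contents elided as in the snippets above):

version: '3.4'
services:
  app:
    # ...
    networks:
      - shared-between-paymentservice-and-other
      - default
networks:
  shared-between-paymentservice-and-other:
    external: true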
Note: There is still the limitation that service discovery may not work if you have 3 projects with the same service name (e.g. app) in the same (external) network (a sort of namespace). In that case, it might be a better idea to rename your services, use multiple external networks, define aliases, or use a totally different approach to discover/identify the Docker containers.
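For example, Compose network aliases would give each project's app an extra, non-conflicting DNS name on the shared network; the alias payment-app below is made up for illustration:

version: '3.4'
services:
  app:
    # ...
    networks:
      default:
      paymentservice_default:
        aliases:
          - payment-app   # other projects on this network can resolve "payment-app"
networks:
  paymentservice_default:
    external: true

Each project can pick its own alias, so the colliding app service names no longer matter for cross-project discovery.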
Afterword
Was that the requirement? I tried to reproduce your issue, but I'm not sure if I did the same as you. For example, I'm not sure where you are running ping. Is root@6db86687229c the Docker host or a Docker container? Which container? I assumed it is the Docker container of the app service of other.docker-compose.yml. Please comment if I'm missing something or misinterpreted your question and I will update my answer. Then I may explain in more detail or make another suggestion on how to do service discovery between multiple Docker Compose projects.
Appendix
Cleanup
$ COMPOSE_PROJECT_NAME=other docker-compose -f other.docker-compose.yml down
$ COMPOSE_PROJECT_NAME=paymentservice docker-compose -f paymentservice.docker-compose.yml down
Versions
$ docker --version
Docker version 20.10.0, build 7287ab3
$ docker-compose --version
docker-compose version 1.26.0, build unknown
I am a beginner with Docker. Please correct me if anything is wrong.
As shown in this Docker Swarm tutorial https://www.youtube.com/watch?v=nGSNULpHHZc, I am trying to set up a multi-host environment for my Hyperledger Fabric application.
I am using two Oracle Linux servers, namely server 1 and server 2.
I connected both servers as managers using Docker Swarm and created an overlay network called my-net.
I followed the same syntax given in the above-mentioned tutorial and created the service using the command below:
docker service create --name myservice --network my-net --replicas 2 alpine sleep 1d
As expected, it created one container on each server.
Say, for example, the server 1 container IP is 10.0.0.4 and the server 2 container IP is 10.0.0.5.
Now, I am trying to ping from the second server's container to the first server's container as shown below, and it pings successfully:
# docker exec -it ContainerID sh
/ # ping 10.0.0.4
PING 10.0.0.4 (10.0.0.4): 56 data bytes
64 bytes from 10.0.0.4: seq=0 ttl=64 time=0.082 ms
64 bytes from 10.0.0.4: seq=1 ttl=64 time=0.062 ms
64 bytes from 10.0.0.4: seq=2 ttl=64 time=0.067 ms
^C
--- 10.0.0.4 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.062/0.070/0.082 ms
Now, I am trying to create my second service (myservice1) using the command below:
docker service create --name myservice1 --network my-net --replicas 2 hyperledger/fabric-peer sleep 1d
As expected, this also created one container on each server.
Say, for example, the server 1 container IP is 10.0.0.6 and the server 2 container IP is 10.0.0.7.
Now, I am trying to ping from the second server's container to the first server's container as shown below.
This time I am getting a "ping: not found" error:
# docker exec -it ContainerID sh
# ping 10.0.0.6
sh: 1: ping: not found
Can anyone please help me figure out what the problem is with the second service, myservice1?
The Fabric Docker images are based on a bare-bones Ubuntu base image and do not include utilities like ping. Once you "exec" into the peer containers, you can use "apt" to install ping:
apt-get update
apt-get install inetutils-ping
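For example, assuming you grab the peer container ID from docker ps (the <peer-container-id> below is a placeholder), both steps can be run in one go:

docker exec -it <peer-container-id> sh -c "apt-get update && apt-get install -y inetutils-ping"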
Expanding on Gari Singh's answer: on a Fabric network I've spun up this week, inetutils has been split into several packages:
# apt-cache search inetutils
inetutils-ftp - File Transfer Protocol client
inetutils-ftpd - File Transfer Protocol server
inetutils-inetd - internet super server
inetutils-ping - ICMP echo tool
inetutils-syslogd - system logging daemon
inetutils-talk - talk to another user
inetutils-talkd - remote user communication server
inetutils-telnet - telnet client
inetutils-telnetd - telnet server
inetutils-tools - base networking utilities (experimental pac
so to install, for example, ping, the correct command has become:
# apt-get install inetutils-ping
The Ubuntu version of the peer is:
# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"
I am trying to scale a cluster with replicas using docker-compose scale graylog-es-slave=2, but with a version 3 Compose file, unlike Docker compose and hostname.
What I am trying to figure out is how to reach a specific node in the replica set.
Here is what I have tried:
D:\p\liberty-docker>docker exec 706814bf33b2 ping graylog-es-slave -c 2
PING graylog-es-slave (172.19.0.4): 56 data bytes
64 bytes from 172.19.0.4: icmp_seq=0 ttl=64 time=0.067 ms
64 bytes from 172.19.0.4: icmp_seq=1 ttl=64 time=0.104 ms
--- graylog-es-slave ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.067/0.085/0.104/0.000 ms
D:\p\liberty-docker>docker exec 706814bf33b2 ping graylog-es-slave.1 -c 2
ping: unknown host
D:\p\liberty-docker>docker exec 706814bf33b2 ping graylog-es-slave_1 -c 2
ping: unknown host
The docker-compose.yml
version: '3'
services:
  graylog-es-slave:
    image: elasticsearch:2
    command: "elasticsearch -Des.cluster.name='graylog'"
    environment:
      ES_HEAP_SIZE: 2g
    deploy:
      replicas: 2 # <-- this is ignored by docker-compose, just putting it here for completeness
Instead of . (dot), use _ (underscore), and add the project name as a prefix (it defaults to the name of the directory that holds your docker-compose.yml, which I assume is liberty-docker):
ping liberty-docker_graylog-es-slave_1
You can verify this by running docker network ls, finding the right network, and then running docker network inspect network_id.
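For example, assuming the Compose project name really is liberty-docker (derived from the directory D:\p\liberty-docker), you could confirm the network and the per-replica names like this:

$ docker network ls                               # look for the project network, e.g. liberty-docker_default
$ docker network inspect liberty-docker_default --format '{{range .Containers}}{{.Name}} {{end}}'
$ docker exec 706814bf33b2 ping -c 2 liberty-docker_graylog-es-slave_1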
I'm running https://github.com/openshift/origin/tree/v0.3.3 on CentOS 6.6. When I run:
sudo /opt/bin/openshift start
I see an error:
I0301 22:02:04.738381 18093 pod_cache.go:194] error getting pod deploy-docker-registry-16mttp status: Get http://localhost:10250/api/v1beta1/podInfo?podID=deploy-docker-registry-16mttp&podNamespace=default: dial tcp 127.0.0.1:10250: connection refused, retry later
E0301 22:02:04.738422 18093 pod_cache.go:260] Error getting info for pod default/deploy-docker-registry-16mttp: Get http://localhost:10250/api/v1beta1/podInfo?podID=deploy-docker-registry-16mttp&podNamespace=default: dial tcp 127.0.0.1:10250: connection refused
If I do:
docker ps -a | grep origin-deployer
then I see:
b207ce593385 openshift/origin-deployer:v0.3.3 "/usr/bin/openshift- 31 hours ago Exited (255) 31 hours ago k8s_deployment.6c8f5c13_deploy-docker-registry-16mttp.default.api_11ae6e53-bf85-11e4-b8b2-080027bb06ce_8c701fc0
so I run:
docker logs b207ce593385
and get:
228 20:06:37.955877 1 deployer.go:64] Get https://10.0.2.15:8443/api/v1beta1/replicationControllers/docker-registry-1?namespace=default: dial tcp 10.0.2.15:8443: no route to host
If I do:
ping 10.0.2.15
it works. If I try:
https://10.0.2.15:8443
it returns:
404 Page Not Found
so the server is responsive. If I open the OpenShift Web Console at https://10.0.2.15:8444/ and browse the default project, it shows one deploy-docker-registry-16mttp pod with a status of Failed. The "IP on node" is 172.17.0.3, and it does respond to a ping. If I run:
osc describe service docker-registry
it returns:
Name: docker-registry
Labels: docker-registry=default
Selector: docker-registry=default
Port: 5000
Endpoints: <empty>
No events.
it should be returning:
Endpoints: 172.17.0.60:5000
according to the instructions. When I try:
ping 172.17.0.60
it returns:
PING 172.17.0.60 (172.17.0.60) 56(84) bytes of data.
From 172.17.42.1 icmp_seq=2 Destination Host Unreachable
From 172.17.42.1 icmp_seq=3 Destination Host Unreachable
...
There are a lot of moving parts and I'm new to this, so any suggestions would be appreciated. I've probably missed one of the configuration steps.
It appears to be related to CentOS 6.6. When I try the same process on CentOS 7 (using netinstall), there is no problem.