I created a Docker network on a server with Ubuntu 14.04, ran 2 containers in this network (hub and node), and tried to ping the hub from the node by container name. It works for a few seconds or minutes, but after some time the connection is lost.
Network and containers:
docker network create grid
docker run -d --rm --net grid --name selenium-hub hub:v0.1
docker run -d --rm --net grid -it node:v0.1 bash
Ping:
root@54385bbb4922:/# ping selenium-hub
PING selenium-hub (172.24.0.2) 56(84) bytes of data.
64 bytes from selenium-hub.grid (172.24.0.2): icmp_seq=1 ttl=64 time=0.088 ms
64 bytes from selenium-hub.grid (172.24.0.2): icmp_seq=2 ttl=64 time=0.046 ms
64 bytes from selenium-hub.grid (172.24.0.2): icmp_seq=3 ttl=64 time=0.045 ms
64 bytes from selenium-hub.grid (172.24.0.2): icmp_seq=4 ttl=64 time=0.060 ms
64 bytes from selenium-hub.grid (172.24.0.2): icmp_seq=5 ttl=64 time=0.043 ms
64 bytes from selenium-hub.grid (172.24.0.2): icmp_seq=6 ttl=64 time=0.048 ms
64 bytes from selenium-hub.grid (172.24.0.2): icmp_seq=7 ttl=64 time=0.046 ms
64 bytes from selenium-hub.grid (172.24.0.2): icmp_seq=8 ttl=64 time=0.040 ms
64 bytes from selenium-hub.grid (172.24.0.2): icmp_seq=9 ttl=64 time=0.047 ms
64 bytes from selenium-hub.grid (172.24.0.2): icmp_seq=10 ttl=64 time=0.042 ms
64 bytes from selenium-hub.grid (172.24.0.2): icmp_seq=11 ttl=64 time=0.047 ms
64 bytes from selenium-hub.grid (172.24.0.2): icmp_seq=12 ttl=64 time=0.049 ms
64 bytes from selenium-hub.grid (172.24.0.2): icmp_seq=13 ttl=64 time=0.048 ms
64 bytes from selenium-hub.grid (172.24.0.2): icmp_seq=14 ttl=64 time=0.045 ms
64 bytes from selenium-hub.grid (172.24.0.2): icmp_seq=15 ttl=64 time=0.068 ms
64 bytes from selenium-hub.grid (172.24.0.2): icmp_seq=16 ttl=64 time=0.065 ms
64 bytes from selenium-hub.grid (172.24.0.2): icmp_seq=17 ttl=64 time=0.059 ms
64 bytes from selenium-hub.grid (172.24.0.2): icmp_seq=18 ttl=64 time=0.055 ms
64 bytes from selenium-hub.grid (172.24.0.2): icmp_seq=19 ttl=64 time=0.056 ms
64 bytes from selenium-hub.grid (172.24.0.2): icmp_seq=20 ttl=64 time=0.062 ms
64 bytes from selenium-hub.grid (172.24.0.2): icmp_seq=21 ttl=64 time=0.048 ms
64 bytes from selenium-hub.grid (172.24.0.2): icmp_seq=22 ttl=64 time=0.043 ms
64 bytes from selenium-hub.grid (172.24.0.2): icmp_seq=23 ttl=64 time=0.056 ms
64 bytes from selenium-hub.grid (172.24.0.2): icmp_seq=24 ttl=64 time=0.054 ms
^C
--- selenium-hub ping statistics ---
54 packets transmitted, 24 received, 55% packet loss, time 52999ms
rtt min/avg/max/mdev = 0.040/0.052/0.088/0.012 ms
After the connection is lost, Docker still resolves the container name to an IP:
root@2313d40e2018:/# ping selenium-hub
PING selenium-hub (172.17.0.2) 56(84) bytes of data.
^C
--- selenium-hub ping statistics ---
2458 packets transmitted, 0 received, 100% packet loss, time 2476656ms
This situation reproduces each time I remove and recreate the network.
Meanwhile on my local system with Ubuntu 16.04 everything works fine.
I found this answer https://askubuntu.com/a/708487, but:
1) As far as I understand, there is no NetworkManager on the server:
~$ ps ax | grep anager
120 ? S< 0:00 [charger_manager]
19798 pts/5 S+ 0:00 grep --color=auto anager
2) Adding iface docker0 inet manual to /etc/network/interfaces does not change anything.
What else could automatically corrupt/reconfigure the Docker network?
Docker version 18.06.3-ce, build d7080c1
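Not from the original post, but a few diagnostic commands that may help narrow down whether the bridge itself or the DNS/ARP path is breaking; the network name matches the example above, everything else is an assumption about the environment:

# Confirm both containers are still attached to the user-defined network
docker network inspect grid

# Look at the Linux bridge backing it (named br-<first 12 chars of the network id>)
ip addr show
brctl show   # from bridge-utils, if installed

# Watch ICMP on that bridge while pinging from the node container
sudo tcpdump -ni br-$(docker network inspect -f '{{.Id}}' grid | cut -c1-12) icmp

If the replies stop appearing on the bridge at the same moment the ping stalls, something on the host (a firewall manager or another bridge tool) is likely rewriting iptables or the bridge configuration.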
Related
I have set up a Docker stack with telegraf, influxdb and grafana to monitor URLs using telegraf's http_request input.
When it monitors external URLs like Google there is no problem, but when it makes the request for the hostname mydomain.com, which resolves to the host machine's own IP, the telegraf container gets a timeout.
I have tried launching a curl from inside the container and indeed it fails, but from the host (outside the container) the curl works.
Any idea what could be going on, or how I could move forward?
root@08ad708c4a09:/# curl -m 5 https://mydomain1.com:9443
curl: (28) Connection timed out after 5001 milliseconds
root@08ad708c4a09:/# ping mydomain1.com
PING mydomain1.com (itself.ip.host.machine) 56(84) bytes of data.
64 bytes from 1.vps.net (itself.ip.host.machine): icmp_seq=1 ttl=64 time=0.148 ms
64 bytes from 1.vps.net (itself.ip.host.machine): icmp_seq=2 ttl=64 time=0.138 ms
64 bytes from 1.vps.net (itself.ip.host.machine): icmp_seq=3 ttl=64 time=0.126 ms
^C
--- mydomain1.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 25ms
rtt min/avg/max/mdev = 0.126/0.137/0.148/0.013 ms
root@08ad708c4a09:/# curl -m 5 mydomain2.com
Hello world
Thank you very much, community.
I hope that telegraf's http_request can handle a domain that points to the host's own IP without responding with a timeout.
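Not part of the original question, but a quick way to check whether this is the usual hairpin problem (a container cannot reach the host's public IP on a published port) is to force the domain to resolve to the Docker bridge gateway inside a container; 172.17.0.1 is only the typical default bridge gateway and may differ on your host:

# Hypothetical test: map the domain to the bridge gateway and retry the request
docker run --rm --add-host=mydomain1.com:172.17.0.1 curlimages/curl -m 5 https://mydomain1.com:9443

If this succeeds, adding a similar extra_hosts entry to the telegraf service in the compose file should remove the timeout.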
I know "host.docker.internal" points to the host running docker daemon. I'd like to achieve the following:
services:
xx:
extra_hosts: ["example.com:host.docker.internal"]
But I can only use a specific IP address in extra_hosts.
My question: Is there a way to do this?
If your Docker version is 20.10 or above, you can use the following:
extra_hosts:
- "host.docker.internal:host-gateway"
For details, see this.
Then you can use host.docker.internal to communicate with the host, e.g.:
$ docker run --rm -it --add-host=host.docker.internal:host-gateway debian:10 ping -c 4 host.docker.internal
PING host.docker.internal (172.17.0.1) 56(84) bytes of data.
64 bytes from host.docker.internal (172.17.0.1): icmp_seq=1 ttl=64 time=0.064 ms
64 bytes from host.docker.internal (172.17.0.1): icmp_seq=2 ttl=64 time=0.094 ms
64 bytes from host.docker.internal (172.17.0.1): icmp_seq=3 ttl=64 time=0.094 ms
64 bytes from host.docker.internal (172.17.0.1): icmp_seq=4 ttl=64 time=0.095 ms
--- host.docker.internal ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 74ms
rtt min/avg/max/mdev = 0.064/0.086/0.095/0.017 ms
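To get the alias from the original question (example.com resolving to the host), the same special host-gateway value can be used for any hostname. A minimal sketch, not from the original answer, using docker run instead of compose:

# Hypothetical: alias example.com to the host gateway and check it from a container
docker run --rm --add-host=example.com:host-gateway debian:10 ping -c 2 example.com

In a compose file the equivalent should be an extra_hosts entry of "example.com:host-gateway" on the service.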
I have GitLab running in a Docker container.
Also, I have gitlab-runner installed on my CentOS 7 machine. The runner is configured to use the docker executor.
config.toml
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "test"
  url = "http://localhost"
  token = "gQfiiD4PUyPs4TiXLX9-"
  executor = "docker"
  log_level = "debug"
  pre_clone_script = "ls -la"
  clone_url = "http://localhost/"
  [runners.docker]
    tls_verify = false
    image = "node"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
    # network_mode = "gitlab_default"
    # pull_policy = "never"
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
When the runner takes a job, it can't clone the repo and prints an error:
Cloning repository...
Cloning into '/builds/root/project'...
fatal: unable to access 'http://gitlab-ci-token:xxxxxxxxxxxxxxxxxxxx@localhost/root/project.git/': Failed to connect to localhost port 80: Connection refused
/bin/bash: line 64: cd: /builds/root/project: No such file or directory
ERROR: Job failed: exit code 1
Also, I tried clone_url = "http://172.17.0.1", but got the same error.
Ping from the docker executor:
ping 172.17.0.1
PING 172.17.0.1 (172.17.0.1): 56 data bytes
64 bytes from 172.17.0.1: seq=0 ttl=64 time=0.158 ms
64 bytes from 172.17.0.1: seq=1 ttl=64 time=0.090 ms
64 bytes from 172.17.0.1: seq=2 ttl=64 time=0.086 ms
64 bytes from 172.17.0.1: seq=3 ttl=64 time=0.084 ms
64 bytes from 172.17.0.1: seq=4 ttl=64 time=0.086 ms
64 bytes from 172.17.0.1: seq=5 ttl=64 time=0.087 ms
64 bytes from 172.17.0.1: seq=6 ttl=64 time=0.087 ms
64 bytes from 172.17.0.1: seq=7 ttl=64 time=0.109 ms
64 bytes from 172.17.0.1: seq=8 ttl=64 time=0.089 ms
64 bytes from 172.17.0.1: seq=9 ttl=64 time=0.088 ms
64 bytes from 172.17.0.1: seq=10 ttl=64 time=0.098 ms
64 bytes from 172.17.0.1: seq=11 ttl=64 time=0.088 ms
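Not part of the original post, but the usual explanation for this symptom: inside the job container, localhost is the job container itself, so nothing listens on port 80 there. If GitLab runs via compose on a network such as gitlab_default (the commented-out network_mode line in config.toml above hints at this), attaching job containers to that network and pointing clone_url at the GitLab service name generally works; the service name gitlab below is an assumption:

# Quick reachability check: join the GitLab compose network and hit the service by name
docker run --rm --network gitlab_default curlimages/curl -sI http://gitlab/users/sign_in

If that returns an HTTP response, uncomment network_mode = "gitlab_default" under [runners.docker] and set clone_url to http://gitlab (url, which the runner process on the host uses, can stay as it is if it already reaches GitLab).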
I have a very weird problem:
I have a swarm cluster, and one of my services has the wrong IP:
$ docker service inspect nginx_backend | grep Addr
"Addr": "10.0.0.107/24"
From any container in the cluster:
/ # ping nginx_backend
PING nginx_backend (10.0.0.107): 56 data bytes
64 bytes from 10.0.0.107: seq=0 ttl=64 time=0.057 ms
64 bytes from 10.0.0.107: seq=1 ttl=64 time=0.061 ms
64 bytes from 10.0.0.107: seq=2 ttl=64 time=0.064 ms
64 bytes from 10.0.0.107: seq=3 ttl=64 time=0.083 ms
^C
--- nginx_backend ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.057/0.066/0.083 ms
But on the server which hosts the nginx_backend container:
root@backend:~# docker inspect nginx_backend.1.myzy10psfdl9r4jljrsz5zd5t | grep IPv4
"IPv4Address": "10.0.0.87"
And when some service tries to connect by name, it gets a connection error, but if I manually put a record like 10.0.0.87 nginx_backend into /etc/hosts inside a container, it connects successfully.
What did I do wrong?
Docker creates (by default) a Virtual IP (VIP) for each service; that's the 10.0.0.107. It then balances requests between the backend containers. In the second example (10.0.0.87) you're seeing the IP address of one of the containers. That's routable within Docker as well (which is why hitting the IP works). However, the name (nginx_backend.1.myzy10psfdl9r4jljrsz5zd5t) is not DNS-resolvable, which is why that fails.
You can find a list of the 'backing' containers for a service by doing a DNS lookup on tasks.nginx_backend.
Some more background here: https://docs.docker.com/network/overlay/
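Not from the original answer, but a quick way to see both addresses at once is to run the two lookups from any container attached to the overlay network; the IPs shown are just the ones from this question:

# DNS lookup of the service name returns the VIP
nslookup nginx_backend          # -> 10.0.0.107
# DNS lookup of tasks.<service> returns the individual task/container IPs
nslookup tasks.nginx_backend    # -> 10.0.0.87 (one entry per replica)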
After creating the following network using docker:
sudo docker network create --driver bridge mynet_nw
how can I issue commands from inside this network? I.e., it would be like simulating that my machine is inside mynet_nw, so I can issue ping commands to the different Docker containers created inside mynet_nw?
Thanks
You issue commands from any Docker container inside your network. You just need to add a container first. See docker network connect for that.
Either way, any Docker container on a network can talk to any other Docker container on the same network. Assuming you have your host, containerA, and containerB, you'll be able to ping any of those three from each other:
containerA:~$ ping 172.18.0.1 # from containerA to host
56 bytes from 172.18.0.1: icmp_seq=0 ttl=64 time=0.101 ms
56 bytes from 172.18.0.1: icmp_seq=0 ttl=64 time=0.098 ms
56 bytes from 172.18.0.1: icmp_seq=0 ttl=64 time=0.102 ms
56 bytes from 172.18.0.1: icmp_seq=0 ttl=64 time=0.100 ms
containerA:~$ ping 172.18.0.3 # from containerA to containerB
56 bytes from 172.18.0.3: icmp_seq=0 ttl=64 time=0.082 ms
56 bytes from 172.18.0.3: icmp_seq=0 ttl=64 time=0.079 ms
56 bytes from 172.18.0.3: icmp_seq=0 ttl=64 time=0.116 ms
56 bytes from 172.18.0.3: icmp_seq=0 ttl=64 time=0.094 ms
Assuming you know the hostnames, you should even be able to ping with hostnames directly. This might be a bit trickier when attempting to ping your host, but it should work between containers.
containerA:~$ ping containerB # from containerA to containerB
56 bytes from 172.18.0.3: icmp_seq=0 ttl=64 time=0.067 ms
56 bytes from 172.18.0.3: icmp_seq=0 ttl=64 time=0.084 ms
56 bytes from 172.18.0.3: icmp_seq=0 ttl=64 time=0.080 ms
56 bytes from 172.18.0.3: icmp_seq=0 ttl=64 time=0.0811 ms
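As mentioned above, a container has to be attached to the network before it can be reached this way. A short sketch (the container names are just examples) of attaching an existing container with docker network connect and then pinging it by name:

# Attach an already-running container to the network
docker network connect mynet_nw containerB
# Start a throwaway container on the same network and ping containerB by name
docker run -it --rm --network mynet_nw alpine ping -c 3 containerB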
Generally, I suggest reading the documentation for the docker network commands: https://docs.docker.com/engine/reference/commandline/network_connect/#related-commands
https://docs.docker.com/engine/userguide/networking/work-with-networks/
Anyway, docker network creates a network, and you need a container in it to execute commands there.
Example:
docker run -it --network mynet_nw alpine /bin/sh
ping mydb
You can always check the network from the host itself if you know the name of the container:
INSTANCE_NAME='myalpine'
IPADD=$( docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $INSTANCE_NAME )
ping $IPADD