I can't ping a service by its service name from another container on the same overlay network in Docker swarm. My steps are:
# docker swarm init
# docker network create -d overlay --attachable net1
# docker service create --name dns1 --network net1 tutum/dnsutils sleep 3000
# docker service create --name dns2 --network net1 tutum/dnsutils sleep 3000
This creates a one-node swarm, a user-defined overlay network, and two services. I should be able to exec into one container and ping the other via its service name, but it does not work:
# docker exec -it dns1.1.6rned8409m9jkqoxgutzjz4y4 /bin/bash
root@05cba6fd8a0b:/# ping dns2
PING dns2 (10.0.5.5) 56(84) bytes of data.
From 05cba6fd8a0b (10.0.5.3) icmp_seq=1 Destination Host Unreachable
From 05cba6fd8a0b (10.0.5.3) icmp_seq=2 Destination Host Unreachable
From 05cba6fd8a0b (10.0.5.3) icmp_seq=3 Destination Host Unreachable
^C
--- dns2 ping statistics ---
4 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3062ms
I can ping the other container directly, either via its full task name (dns2.1.idkledfjgd5dwknv6pirywpfk) or its IP (10.0.5.6).
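As a side check, the dnsutils image includes nslookup, which shows how Docker's embedded DNS splits these names: the bare service name should resolve to the service's VIP, while tasks.<service> should resolve to the individual task IPs. A sketch of what I'd expect inside the dns1 container, using the addresses from the inspect output below:
root@05cba6fd8a0b:/# nslookup dns2
(should return the service VIP, 10.0.5.5 here)
root@05cba6fd8a0b:/# nslookup tasks.dns2
(should return the task IP, 10.0.5.6 here)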
Environment Info:
# docker network inspect -v net1
[
{
"Name": "net1",
"Id": "ngzwl7l7m0zb5brvee21mvfcz",
"Created": "2020-12-14T22:05:25.962132239Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.5.0/24",
"Gateway": "10.0.5.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"05cba6fd8a0bc4e480b50f91fb395d27ee4998277d480109cb95249c38852909": {
"Name": "dns1.1.6rned8409m9jkqoxgutzjz4y4",
"EndpointID": "6bcc76c8688527fcf26d2ed313e351a54b8de69d28cde4388032849a2ff91a3e",
"MacAddress": "02:42:0a:00:05:03",
"IPv4Address": "10.0.5.3/24",
"IPv6Address": ""
},
"c1d9252f528b177ac397b7b9bf627996993ddc0f54aad3ee3862d93dcac407a3": {
"Name": "dns2.1.idkledfjgd5dwknv6pirywpfk",
"EndpointID": "fafd8335715737c26c83ff8a3e7c52a302eb48cbb6b7bb75e396ed6a483bfd31",
"MacAddress": "02:42:0a:00:05:06",
"IPv4Address": "10.0.5.6/24",
"IPv6Address": ""
},
"lb-net1": {
"Name": "net1-endpoint",
"EndpointID": "09e3b875528a05dc39a910b8cfe5cfd57756681c4aeffd56a0c9fb41d6bffd23",
"MacAddress": "02:42:0a:00:05:04",
"IPv4Address": "10.0.5.4/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4101"
},
"Labels": {},
"Peers": [
{
"Name": "4dc98c7e5f08",
"IP": "192.168.1.26"
}
],
"Services": {
"dns1": {
"VIP": "10.0.5.2",
"Ports": [],
"LocalLBIndex": 269,
"Tasks": [
{
"Name": "dns1.1.6rned8409m9jkqoxgutzjz4y4",
"EndpointID": "6bcc76c8688527fcf26d2ed313e351a54b8de69d28cde4388032849a2ff91a3e",
"EndpointIP": "10.0.5.3",
"Info": {
"Host IP": "192.168.1.26"
}
}
]
},
"dns2": {
"VIP": "10.0.5.5",
"Ports": [],
"LocalLBIndex": 270,
"Tasks": [
{
"Name": "dns2.1.idkledfjgd5dwknv6pirywpfk",
"EndpointID": "fafd8335715737c26c83ff8a3e7c52a302eb48cbb6b7bb75e396ed6a483bfd31",
"EndpointIP": "10.0.5.6",
"Info": {
"Host IP": "192.168.1.26"
}
}
]
}
}
}
]
and
# docker info
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.4.2-docker)
Server:
Containers: 3
Running: 2
Paused: 0
Stopped: 1
Images: 7
Server Version: 20.10.0
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: active
NodeID: x2o135d3kkfxw6lb6mfyx8s3h
Is Manager: true
ClusterID: v5x80quwm3vwsubwdd6pclj4r
Managers: 1
Nodes: 1
Default Address Pool: 10.0.0.0/8
SubnetSize: 24
Data Path Port: 4789
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 10
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
Node Address: 192.168.1.26
Manager Addresses:
192.168.1.26:2377
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 269548fa27e0089a8b8278fc4fc781d7f65a939b
runc version: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
init version: de40ad0
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 5.4.73-1-pve
Operating System: Ubuntu 20.10
OSType: linux
Architecture: x86_64
CPUs: 6
Total Memory: 15.62GiB
Name: dockerHost
ID: CCGD:MQRE:PGJJ:YRU5:M4IM:5INT:EGA5:IER3:22UL:7CI3:PZOU:EZZ2
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No blkio weight support
WARNING: No blkio weight_device support
For anyone looking at this in the future: the issue for me was that I was running Docker in an LXC container on Proxmox (Ubuntu 20.04 template). I tested this in an Ubuntu 20.04 VM and it works exactly as expected. I don't know exactly what the root cause is or whether it can be fixed, but essentially, running this setup inside an LXC container will not work.
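If anyone wants to dig further, my unverified guess is that it comes down to kernel facilities the overlay data path and VIP load balancing depend on (vxlan, ip_vs and friends) not being available or loadable from inside the LXC container, since LXC shares the host's kernel. A diagnostic sketch to compare the VM and the LXC container:
# lsmod | grep -E 'vxlan|ip_vs|overlay'
# modprobe ip_vs
(inside an unprivileged LXC container the modprobe will typically fail)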
Related
I have an external network that is used by docker-compose as well as docker run. I can specify a network alias with 'docker run' and it resolves to the actual container IP, but the alias I define in docker-compose doesn't resolve to the actual IP. Why is this? What should I do to get the alias defined in docker-compose to resolve to the actual IP?
step 1: create an external network
docker network create --attachable -d overlay test-docker-network
step 2: create a docker-compose file that defines an alias
docker-compose.yml
version: '3.0'
services:
  host1:
    image: linuxserver/openssh-server
    environment:
      USER_PASSWORD: 'password'
      USER_NAME: 'user'
      PASSWORD_ACCESS: 'true'
      SUDO_ACCESS: 'true'
    ports:
      - 2222:2222
    networks:
      default:
        aliases:
          - netcatalias
networks:
  default:
    external:
      name: test-docker-network
step 3: deploy the stack
docker stack deploy -c docker-compose.yml netcat
step 4: run a 'docker run' container on the same network
docker run --rm --name host2 --network-alias=myalias -ti --network test-docker-network debian:buster bash
step 5: resolve both aliases
root@de1f75728a7e:~/gitprojects/docker-network-troubleshoot# docker run --rm --name host2 --network-alias=myalias -ti --network test-docker-network debian:buster bash
root@ea765c15dae8:/# ping myalias
PING myalias (10.0.8.5) 56(84) bytes of data.
64 bytes from ea765c15dae8 (10.0.8.5): icmp_seq=1 ttl=255 time=0.022 ms
64 bytes from ea765c15dae8 (10.0.8.5): icmp_seq=2 ttl=255 time=0.042 ms
64 bytes from ea765c15dae8 (10.0.8.5): icmp_seq=3 ttl=255 time=0.034 ms
^C
--- myalias ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 49ms
rtt min/avg/max/mdev = 0.022/0.032/0.042/0.010 ms
root@ea765c15dae8:/# ping netcatalias
PING netcatalias (10.0.8.2) 56(84) bytes of data.
64 bytes from ip-10-0-8-2.ec2.internal (10.0.8.2): icmp_seq=1 ttl=255 time=0.069 ms
64 bytes from ip-10-0-8-2.ec2.internal (10.0.8.2): icmp_seq=2 ttl=255 time=0.068 ms
64 bytes from ip-10-0-8-2.ec2.internal (10.0.8.2): icmp_seq=3 ttl=255 time=0.067 ms
^C
--- netcatalias ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 19ms
rtt min/avg/max/mdev = 0.067/0.068/0.069/0.000 ms
root@ea765c15dae8:/#
step 6: get the actual IP from 'docker network inspect'
root@de1f75728a7e:~/gitprojects/docker-network-troubleshoot# docker network inspect test-docker-network
[
{
"Name": "test-docker-network",
"Id": "3ev3r0eo2rg81pyb2yovlmmg3",
"Created": "2020-01-18T03:09:58.748025872Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.8.0/24",
"Gateway": "10.0.8.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"2ba6c329d157b4a03480f978645e558bb6b46d9d5c7af3d152d943aae75c696a": {
"Name": "netcat_host1.1.180sln82qyxp03rk8o5od5p9a",
"EndpointID": "cf2eaf42b10083296696c3cad8e43fe392ed2374cd65fd8aa8c048a134171bd2",
"MacAddress": "02:42:0a:00:08:03",
"IPv4Address": "10.0.8.3/24",
"IPv6Address": ""
},
"ea765c15dae8c1cf6f6945447897a126fdf03ae1e42d2811c95d94a9d9112f39": {
"Name": "host2",
"EndpointID": "67ca483fd4bd231db74a39ba8f782a95c102fc04937ef9e245bfc14100f61d11",
"MacAddress": "02:42:0a:00:08:05",
"IPv4Address": "10.0.8.5/24",
"IPv6Address": ""
},
"lb-test-docker-network": {
"Name": "test-docker-network-endpoint",
"EndpointID": "0754c146c555fdf0e2d683c8ead3e0670196e201148c411f35899df226d77cc4",
"MacAddress": "02:42:0a:00:08:04",
"IPv4Address": "10.0.8.4/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4106"
},
"Labels": {},
"Peers": [
{
"Name": "08bdcafe53fb",
"IP": "10.0.0.30"
}
]
}
]
Issue:
We can see from 'docker network inspect' that the 'docker run' alias 'myalias' correctly resolves to 10.0.8.5, but 'netcatalias' resolves to 10.0.8.2 when it should actually resolve to 10.0.8.3. Why is this happening, and how can I make netcatalias resolve to 10.0.8.3?
It's the IP of the virtual service load balancer (the service VIP) that sits in front of the service and distributes traffic to its replicas.
If you change the service's endpoint mode to dnsrr instead of vip (virtual IP), the Docker DNS service will resolve the name to the container IPs in round-robin fashion.
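For example, a minimal sketch using the CLI, with the names from this question (note that dnsrr cannot be combined with ports published through the default ingress mode, so the 2222 port mapping would need host-mode publishing):
docker service create --name host1 --endpoint-mode dnsrr --network test-docker-network linuxserver/openssh-server
In a compose file, the equivalent is endpoint_mode: dnsrr under the service's deploy: key, which requires a newer 3.x file format than the 3.0 declared above.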
I have a docker container that likes to shut down without restarting, despite having a restart=unless-stopped policy set.
Other containers with similar startup configuration parameters are running on the same host, and I don't have any problems with them. The host is a node in a swarm on a somewhat unstable network, and the container is a frequent user of the node network (talking to the master node), so I'm not surprised that it fails regularly, but I expect it to restart itself.
This is due to the restart manager. The docker inspect State.Error shows a message which clearly came from Docker and not from my container. The logs show:
... time="2019-09-21T02:06:31.969473802Z" level=error msg="restartmanger wait error: Could not attach to network cqr3v2jode1boqh2yofqrh7bx: context deadline exceeded"
So it appears that, occasionally, the network is down when the container gets restarted, and the restart manager decides to stop restarting it. The question becomes how to override this behavior.
docker info:
Client:
Debug Mode: false
Server:
Containers: 4
Running: 2
Paused: 0
Stopped: 2
Images: 43
Server Version: 19.03.1
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: active
NodeID: wgn64s7lx9jvgw36gtlu0dsou
Is Manager: false
Node Address: 10.0.0.2
Manager Addresses:
10.0.0.1:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.19.66-v7+
Operating System: Raspbian GNU/Linux 9 (stretch)
OSType: linux
Architecture: armv7l
CPUs: 4
Total Memory: 874.5MiB
Name: sensors-2
ID: NTRC:WPLS:GH2P:ZTLM:EDAN:H7HB:HGP6:6G6A:3YVW:T2I7:TVJU:XV3N
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
Here are the relevant bits from docker inspect on the non-restarting container. Note that it has restarted a few times, that it exited due to a network error, and that MaximumRetryCount is 0 (which I assume means unlimited). Most recently it wasn't up for long... but my understanding of unless-stopped is that Docker would continue restarting the container, though with an increasing delay between restarts.
[
{
"Id": "fa7c59dfa38f25c70d4c1293db27965c2e76af950fa19a2097b4ce63e1af2be4",
"Created": "2019-06-24T05:25:10.792698029Z",
"Path": "/srv/bin/weather_collector_server",
"Args": [
"/etc/config.ini"
],
"State": {
"Status": "exited",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 1,
"Error": "Could not attach to network cqr3v2jode1boqh2yofqrh7bx: context deadline exceeded",
"StartedAt": "2019-09-21T03:56:40.911764904Z",
"FinishedAt": "2019-09-21T03:58:07.234852939Z"
},
"Image": "sha256:ee0e5023f37917f074dd0bf03dca328833eafd117fe69041203533768a196789",
"ResolvConfPath": "/var/lib/docker/containers/fa7c59dfa38f25c70d4c1293db27965c2e76af950fa19a2097b4ce63e1af2be4/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/fa7c59dfa38f25c70d4c1293db27965c2e76af950fa19a2097b4ce63e1af2be4/hostname",
"HostsPath": "/var/lib/docker/containers/fa7c59dfa38f25c70d4c1293db27965c2e76af950fa19a2097b4ce63e1af2be4/hosts",
"LogPath": "",
"Name": "/weather_collector_server",
"RestartCount": 3,
"Driver": "overlay2",
"Platform": "linux",
...
"HostConfig": {
...
"RestartPolicy": {
"Name": "unless-stopped",
"MaximumRetryCount": 0
},
...
],
"NetworkSettings": {
"Bridge": "",
"SandboxID": "0e901219511bb618d66943a12af1e09d8bbcb78ca4caa0bad88880f21d843c55",
...
"Networks": {
"hostnet": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"fa7c59dfa38f"
],
"NetworkID": "cqr3v2jode1boqh2yofqrh7bx",
"EndpointID": "",
"Gateway": "",
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "",
"DriverOpts": {}
}
}
}
}
]
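One blunt workaround until the restart manager's behavior can actually be overridden (a sketch only; it papers over the give-up rather than fixing it): docker start is a no-op on a container that is already running, so a host-side cron entry can nudge an exited container back up. The container name is taken from the inspect output above:
*/5 * * * * docker start weather_collector_server >/dev/null 2>&1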
I feel like this is simple, but I can't figure it out. I have two services, consul and traefik, up in a single-node swarm on the same host.
> docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
3g1obv9l7a9q consul_consul replicated 1/1 progrium/consul:latest
ogdnlfe1v8qx proxy_proxy global 1/1 traefik:alpine *:80->80/tcp, *:443->443/tcp
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
090f1ed90972 progrium/consul:latest "/bin/start -server …" 12 minutes ago Up 12 minutes 53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8500/tcp, 8301-8302/udp consul_consul.1.o0j8kijns4lag6odmwkvikexv
20f03023d511 traefik:alpine "/entrypoint.sh -c /…" 12 minutes ago Up 12 minutes 80/tcp
Both containers have access to the "consul" overlay network, which was created like so:
> docker network create --driver overlay --attachable consul
ypdmdyx2ulqt8l8glejfn2t25
Traefik is complaining that it can't reach consul.
time="2019-03-18T18:58:08Z" level=error msg="Load config error: Get http://consul:8500/v1/kv/traefik?consistent=&recurse=&wait=30000ms: dial tcp 10.0.2.2:8500: connect: connection refused, retrying in 7.492175404s"
I can go into the traefik container and confirm that I can't reach consul through the overlay network, although it is pingable.
> docker exec -it 20f03023d511 ash
/ # nslookup consul
Name: consul
Address 1: 10.0.2.2
/ # curl consul:8500
curl: (7) Failed to connect to consul port 8500: Connection refused
# ping consul
PING consul (10.0.2.2): 56 data bytes
64 bytes from 10.0.2.2: seq=0 ttl=64 time=0.085 ms
However, if I look a little deeper, I find that they are connected; the overlay network just isn't delivering traffic to the actual destination for some reason. If I go directly to the actual consul IP, it works.
/ # nslookup tasks.consul
Name: tasks.consul
Address 1: 10.0.2.3 0327c8e1bdd7.consul
/ # curl tasks.consul:8500
Moved Permanently.
I could work around this (technically there will only ever be one copy of consul running), but I'd like to know why the traffic isn't being routed in the first place before I get deeper into it. I can't think of anything else to try. Here is various information related to this setup.
> docker --version
Docker version 18.09.2, build 6247962
> docker network ls
NETWORK ID NAME DRIVER SCOPE
cee3cdfe1194 bridge bridge local
ypdmdyx2ulqt consul overlay swarm
5469e4538c2d docker_gwbridge bridge local
5fd928ea1e31 host host local
9v22k03pg9sl ingress overlay swarm
> docker network inspect consul
[
{
"Name": "consul",
"Id": "ypdmdyx2ulqt8l8glejfn2t25",
"Created": "2019-03-18T14:44:27.213690506-04:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.2.0/24",
"Gateway": "10.0.2.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"0327c8e1bdd7ebb5a7871d16cf12df03240996f9e590509984783715a4c09193": {
"Name": "consul_consul.1.8v4bshotrco8fv3sclwx61106",
"EndpointID": "ae9d5ef1d19b67e297ebf40f6db410c33e4e3c0266c56e539e696be3ed4c81a5",
"MacAddress": "02:42:0a:00:02:03",
"IPv4Address": "10.0.2.3/24",
"IPv6Address": ""
},
"c21f5dfa93a2f43b747aedc64a343d94d6c1c2e6558d81bd4a52e2ba4b5fa90f": {
"Name": "proxy_proxy.sb6oindhmfukq4gcne6ynb2o2.4zvco02we58i3ulbyrsw1b2ok",
"EndpointID": "7596a208e0b05ba688f318814e24a2a1a3401765ed53ca421bf61c73e65c235a",
"MacAddress": "02:42:0a:00:02:06",
"IPv4Address": "10.0.2.6/24",
"IPv6Address": ""
},
"lb-consul": {
"Name": "consul-endpoint",
"EndpointID": "23e74716ef54f3fb6537b305176b790b4bc4132dda55f20588d7ce4ca71d7372",
"MacAddress": "02:42:0a:00:02:04",
"IPv4Address": "10.0.2.4/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4099"
},
"Labels": {},
"Peers": [
{
"Name": "e11b9bd30b31",
"IP": "10.8.0.1"
}
]
}
]
> cat consul/docker-compose.yml
version: '3.1'
services:
  consul:
    image: progrium/consul
    command: -server -bootstrap
    networks:
      - consul
    volumes:
      - consul:/data
    deploy:
      labels:
        - "traefik.enable=false"
networks:
  consul:
    external: true
> cat proxy/docker-compose.yml
version: '3.3'
services:
  proxy:
    image: traefik:alpine
    command: -c /traefik.toml
    networks:
      # We need an external proxy network and the consul network
      # - proxy
      - consul
    ports:
      # Send HTTP and HTTPS traffic to the proxy service
      - 80:80
      - 443:443
    configs:
      - traefik.toml
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      # Deploy the service to all nodes that match our constraints
      mode: global
      placement:
        constraints:
          - "node.role==manager"
          - "node.labels.proxy==true"
      labels:
        # Traefik uses labels to configure routing to your services
        # Change the domain to your own
        - "traefik.frontend.rule=Host:proxy.mcwebsite.net"
        # Route traffic to the web interface hosted on port 8080 in the container
        - "traefik.port=8080"
        # Name the backend (not required here)
        - "traefik.backend=traefik"
        # Manually set entrypoints (not required here)
        - "traefik.frontend.entryPoints=http,https"
configs:
  # Traefik configuration file
  traefik.toml:
    file: ./traefik.toml
# This service will be using two external networks
networks:
  # proxy:
  #   external: true
  consul:
    external: true
There were two optional kernel configs, CONFIG_IP_VS_PROTO_TCP and CONFIG_IP_VS_PROTO_UDP, disabled in my kernel, which, you guessed it, enable TCP and UDP load balancing.
I wish I'd checked that about four hours sooner than I did.
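For anyone hitting the same symptom (the service VIP resolves and answers ping but connections are refused, while tasks.<service> works), a quick check, assuming your distro ships the kernel config under /boot:
> grep -E 'CONFIG_IP_VS_PROTO_(TCP|UDP)' /boot/config-$(uname -r)
Both should show =y or =m for swarm's IPVS-based VIP load balancing to work; lsmod | grep ip_vs shows whether the modules are actually loaded.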
I am facing a problem with Tomcat inside my Docker container: it throws a socket exception while connecting to a URL. The same setup was working fine until a few days ago, and the same URL can be reached from the host server of the Docker service.
[localhost-startStop-1] Error getting Properties from Config URL :http://config.server.com/config/
org.springframework.web.client.ResourceAccessException: I/O error on POST request for "http://config.server.com/config/public/rest-less-api/query-configurations": Connection reset; nested exception is java.net.SocketException: Connection reset
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:666) ~[spring-web-4.3.9.RELEASE.jar:4.3.9.RELEASE]
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:613) ~[spring-web-4.3.9.RELEASE.jar:4.3.9.RELEASE]
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:531) ~[spring-web-4.3.9.RELEASE.jar:4.3.9.RELEASE]
Unfortunately, our organization-level base Docker image doesn't include ping or ssh tools, so I am a bit clueless about how to troubleshoot this.
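One trick that needs no extra tooling, assuming the image ships bash: its built-in /dev/tcp redirection can stand in for ping to test plain TCP reachability:
sh-4.2# timeout 5 bash -c 'echo > /dev/tcp/config.server.com/80' && echo reachable || echo unreachable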
[root@mylin]# docker info
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 2
Server Version: 1.13.1
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: journald
Cgroup Driver: systemd
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Authorization: rhel-push-plugin
Swarm: inactive
Runtimes: docker-runc runc
Default Runtime: docker-runc
Init Binary: /usr/libexec/docker/docker-init-current
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: 5eda6f6fd0c2884c2c8e78a6e7119e8d0ecedb77 (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: fec3683b971d9c3ef73f284f176672c44b448662 (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
seccomp
WARNING: You're not using the default seccomp profile
Profile: /etc/docker/seccomp.json
selinux
Kernel Version: 3.10.0-862.14.4.el7.x86_64
Operating System: Red Hat Enterprise Linux Server 7.5 (Maipo)
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 3
CPUs: 4
Total Memory: 7.638 GiB
Name: vc2crtp1287181n.fmr.com
ID: B4VP:4BCJ:476O:RUWA:IT3G:O7NO:DZOQ:RR6Z:QMBG:FPB5:DMSE:G5HG
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://registry.access.redhat.com/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Registries: registry.access.redhat.com (secure), docker.io (secure)
Edited:
I did some more analysis and found that the same Docker image works on another host server. When I run the curl command inside the Docker container on the server where I have the problem, I get the following error message:
sh-4.2# curl --header "Content-Type: application/json" --request POST --data '{"search-query" :"q21321", "structure-format":"FLAT"}' http://config.server.com/config/public/rest-less-api/query-configurations
curl: (56) Recv failure: Connection reset by peer
On the other host server, where the image works fine, the same curl command returns the values.
Any direction to resolve this problem would be of great help.
Additional Info:
Below is the network inspect information from the host where the container works:
docker network inspect bridge
[
{
"Name": "bridge",
"Id": "4b8207ce56b3741b7bd864f7adffdc324ba2e9db9e07ae031e10c90f351be158",
"Created": "2018-12-06T04:29:23.258033812-05:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16"
}
]
},
"Internal": false,
"Attachable": false,
"Containers": {
"6bcf920c0dc86d60dd288fd086f4d971aee217cf2ee49d71fd47dc1570460504": {
"Name": "GRK-BRK-EVENT",
"EndpointID": "b875dcdf4db8832fe518620801ae87137c6df44697ae7035148921f6a179b64a",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
},
"de1dc8c4a9e09b2612d2d4e0ede5b875b42c4a819f27fe32ed9728d3cc4d756b": {
"Name": "GRK-BRK-REST",
"EndpointID": "d0149fb42645e63c0d8e9c8ad1c605f9ddcb3afa4c41e52c10a554cd31452727",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
and below is the inspect output from the host where the container does not work:
linux-x86-64]# docker inspect bridge
[
{
"Name": "bridge",
"Id": "c245b3b5c4cedca3b9fa5370b464e0e9c2aef0dc2c520daeedf3e726e8b153e4",
"Created": "2018-12-18T11:14:10.806753755-05:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Containers": {
"e5bfa7fefa002b50f7e763ea30e2e602b4b577b1b558000725453773a4f10903": {
"Name": "GRK-BRK-REST",
"EndpointID": "64ff097ad0c72e107845c00aac2708ced6c9e896f37c317a247be7d3f482fcc0",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
The only difference I could find is that a Gateway entry is present in the IPAM section on the host where the container is not working.
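If the goal is just to rule that difference out, the daemon-level bip option pins the docker0 address (and therefore the gateway reported in the IPAM block), so both hosts end up with an identical bridge config. A sketch, assuming you can edit the daemon config and restart Docker; whether the gateway entry is actually related to the connection resets is unproven:
# /etc/docker/daemon.json
{
  "bip": "172.17.0.1/16"
}
# then: systemctl restart docker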
I'm trying to create 3 zookeeper services in my docker swarm, but I only managed to create 2 of the 3 containers:
docker ps -a returns:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2c883f9148ff hyperledger/fabric-zookeeper:latest "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 2181/tcp, 2888/tcp, 3888/tcp fabric_zookeeper1.1.td4wpq2t9uj5yjnw0q76gsqi0
068ef5d9075b hyperledger/fabric-zookeeper:latest "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 2181/tcp, 2888/tcp, 3888/tcp fabric_zookeeper2.1.u3zr2o8lifcncjo6g2u2yqhwu
docker network ls returns:
NETWORK ID NAME DRIVER SCOPE
0e17f2cd7e8d bridge bridge local
4f78c376719f docker_gwbridge bridge local
djds6rgg0pqc fabric overlay swarm
o1es27fz05i1 fabric_net overlay swarm
2f99d3b30b86 host host local
ls05jfjuekg0 ingress overlay swarm
e7d8a3ff8bb2 net_blockcord bridge local
42ec3d9a4f1b none null local
docker network inspect fabric_net returns:
[
{
"Name": "fabric_net",
"Id": "o1es27fz05i1g9cjrq5nvv0ok",
"Created": "2018-10-26T07:41:49.436040523Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.6.0/24",
"Gateway": "10.0.6.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"068ef5d9075bc9c61b313b97cfbb36189401bc4eb72258b4346f659add5b3a0a": {
"Name": "fabric_zookeeper2.1.u3zr2o8lifcncjo6g2u2yqhwu",
"EndpointID": "3274a8bc693c742a0acedd786174a1c7ed4c2843cd28a6ff9140a2e977059657",
"MacAddress": "02:42:0a:00:06:11",
"IPv4Address": "10.0.6.17/24",
"IPv6Address": ""
},
"2c883f9148ff3b53228e8d02a8bd60db754cd2677155307e5db31f426e356223": {
"Name": "fabric_zookeeper1.1.td4wpq2t9uj5yjnw0q76gsqi0",
"EndpointID": "f58c3c303a6f2fe22ba410e0881f67ce002cbfc5e0afe9cd1104f7f11e2c6ecf",
"MacAddress": "02:42:0a:00:06:15",
"IPv4Address": "10.0.6.21/24",
"IPv6Address": ""
},
"lb-fabric_net": {
"Name": "fabric_net-endpoint",
"EndpointID": "d70a81ad2631c3b76feac7484599e0715c9b901d2ed72153a38105b236b4c882",
"MacAddress": "02:42:0a:00:06:02",
"IPv4Address": "10.0.6.2/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4103"
},
"Labels": {
"com.docker.stack.namespace": "fabric"
},
"Peers": [
{
"Name": "a2beaca62ca3",
"IP": "10.0.0.5"
},
{
"Name": "fa12393e1d65",
"IP": "137.116.149.79"
}
]
}
]
The container list shows only 2 of my 3 zookeepers.
I first created an overlay network:
docker network create --attachable --driver overlay fabric
and deployed the docker compose file below using the command:
docker stack deploy -c docker-compose-zookeeper.yaml fabric
docker-compose-zookeeper.yaml
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '3'
networks:
  net:
services:
  zookeeper0:
    hostname: zookeeper0.example.com
    image: hyperledger/fabric-zookeeper
    ports:
      - 2181
      - 2888
      - 3888
    environment:
      - ZOO_MY_ID=1
      - ZOO_SERVERS=server.1=0.0.0.0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
    networks:
      - net
  zookeeper1:
    hostname: zookeeper1.example.com
    image: hyperledger/fabric-zookeeper
    ports:
      - 2181
      - 2888
      - 3888
    environment:
      - ZOO_MY_ID=2
      - ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=0.0.0.0:2888:3888 server.3=zookeeper2:2888:3888
    networks:
      - net
  zookeeper2:
    hostname: zookeeper2.example.com
    image: hyperledger/fabric-zookeeper
    ports:
      - 2181
      - 2888
      - 3888
    environment:
      - ZOO_MY_ID=3
      - ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=0.0.0.0:2888:3888
    networks:
      - net
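(Side note on the network names: docker stack deploy prefixes compose networks with the stack name. That is why docker network ls above shows both fabric, the attachable overlay I created manually, which this compose file never actually references, and fabric_net, which the stack created for the net: entry; the inspect output accordingly shows fabric_net with "Attachable": false. To make the stack use the manually created fabric network instead, net: would have to be declared external with name: fabric.)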
docker info:
Containers: 2
Running: 2
Paused: 0
Stopped: 0
Images: 15
Server Version: 18.06.1-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
NodeID: x8mooygnt8mzruof5c5d3p0vp
Is Manager: true
ClusterID: vmqqjuwztz3sraag3e8dgpqbl
Managers: 2
Nodes: 2
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 10
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
Node Address: 10.0.0.5
Manager Addresses:
137.116.149.79:2377
168.63.239.163:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.15.0-1023-azure
Operating System: Ubuntu 16.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.853GiB
Name: blockcord-staging2
ID: UT5F:4ZFW:4PRT:LGFS:JIV4:3YAD:DK5I:BIYL:FU6P:ZFEB:3OD3:U5EX
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
I found out that the third container had been created on my other node, but it wasn't able to resolve the address of the service.
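For cross-node overlay problems like this, the usual first check is whether the swarm control and data plane ports are open between all nodes. This is the standard swarm requirements checklist, not a confirmed diagnosis for this cluster:
2377/tcp (cluster management), 7946/tcp and 7946/udp (gossip/service discovery), 4789/udp (VXLAN data path)
A quick probe from one node to another, using the node address from docker info above:
nc -zv 10.0.0.5 7946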