How to reach internet URLs from inside containers - Docker

Is there any way to get access to the internet from inside a Docker container?
My containers have to reach some URLs in order to work...
My containers are:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
457c79c831b6 rancher/k3s:v1.17.0-k3s.1 "/bin/k3s agent" 15 hours ago Up 10 minutes k3d-k3s-default-worker-1
b9b39e82a6b2 rancher/k3s:v1.17.0-k3s.1 "/bin/k3s agent" 15 hours ago Up 10 minutes k3d-k3s-default-worker-0
fb795905ec64 rancher/k3s:v1.17.0-k3s.1 "/bin/k3s server --h…" 15 hours ago Up 10 minutes 0.0.0.0:6443->6443/tcp k3d-k3s-default-server
As you can see, they are running the rancher/k3s:v1.17.0-k3s.1 image.
I've taken a look at the logs:
E0205 08:07:07.844781 6 kuberuntime_manager.go:729] createPodSandbox for pod "vault-helm-1580888075-agent-injector-b7647bf59-vght5_default(7210fa15-5ba4-4c61-9e2c-2bce05cd3bc0)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
It seems the container cannot resolve registry-1.docker.io at all: the DNS lookup itself fails ("lookup registry-1.docker.io: Try again").
However, I'm able to pull images from my host:
$ docker pull busybox
Using default tag: latest
latest: Pulling from library/busybox
bdbbaa22dec6: Pull complete
Digest: sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a
Status: Downloaded newer image for busybox:latest
docker.io/library/busybox:latest
My host works behind a corporate proxy:
$ cat /etc/systemd/system/docker.service.d/proxy.conf
[Service]
Environment="HTTP_PROXY=http://10.49.0.1:8080/"
Environment="HTTPS_PROXY=http://10.49.0.1:8080/"
Environment="NO_PROXY=localhost,127.0.0.1,::1"
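Note that the systemd drop-in above only sets the proxy for the Docker daemon itself (which is why docker pull works on the host); containers do not inherit these variables. One way to pass the proxy into containers automatically (assuming Docker 17.07+) is the client configuration in ~/.docker/config.json, sketched here with the same proxy values:
{
  "proxies": {
    "default": {
      "httpProxy": "http://10.49.0.1:8080/",
      "httpsProxy": "http://10.49.0.1:8080/",
      "noProxy": "localhost,127.0.0.1,::1"
    }
  }
}
New containers then start with HTTP_PROXY, HTTPS_PROXY and NO_PROXY set in their environment.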
Also, I've tested whether the containers are able to reach the proxy IP:
$ docker exec -it 457c79c831b6 sh
/ # ping 10.49.0.1
PING 10.49.0.1 (10.49.0.1): 56 data bytes
<no response>
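Keep in mind that ping uses ICMP, which corporate networks (and proxies) frequently don't answer, so a silent ping doesn't prove the proxy is unreachable. A quick way to test whether DNS itself is the problem, from a throwaway container, with and without an explicit resolver (10.49.0.2 below is a placeholder for the corporate DNS server):
$ docker run --rm busybox nslookup registry-1.docker.io
$ docker run --rm --dns 10.49.0.2 busybox nslookup registry-1.docker.io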
EDIT
/etc/resolv.conf content:
$ cat /etc/resolv.conf
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "systemd-resolve --status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 127.0.0.53
options edns0
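127.0.0.53 is the systemd-resolved stub listener, which only exists on the host's loopback interface. Docker strips loopback nameservers from the resolv.conf it hands to containers and falls back to 8.8.8.8/8.8.4.4, which corporate firewalls typically block; that would explain the "lookup registry-1.docker.io: Try again" error. A sketch of a fix, assuming the corporate DNS server is reachable at 10.49.0.2 (a placeholder), is to pin the daemon's DNS in /etc/docker/daemon.json and restart:
{
  "dns": ["10.49.0.2"]
}
$ sudo systemctl restart docker
The k3d cluster then has to be recreated so the node containers pick up the new resolv.conf.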
EDIT 2
Network-related inspection of the k3d server (master) container:
$ docker inspect k3d-k3s-default-server | grep -i networks -A10
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "57705be8c175394ac122b95f070321dbe48d4c7b7752482391fc243562babb75",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "6443/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "6443"
--
            "Networks": {
                "k3d-k3s-default": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": [
                        "k3d-k3s-default-server",
                        "fb795905ec64"
                    ],
                    "NetworkID": "337e73b268477428e97798665dd8013fd1e17d2003e33dcce694ab78f7f8b4bb",
                    "EndpointID": "a35a783664dff4d68d199c6e23cd6d2c5a7cd0eac7a5f4b1691d524befe4ec01",
                    "Gateway": "172.18.0.1",
EDIT 3
$ docker network inspect k3d-k3s-default
[
    {
        "Name": "k3d-k3s-default",
        "Id": "337e73b268477428e97798665dd8013fd1e17d2003e33dcce694ab78f7f8b4bb",
        "Created": "2020-02-04T17:40:01.13490488+01:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "457c79c831b6a76ae9b78cf360ae437eed04b18bd18429ac2e8436801ba0f4f7": {
                "Name": "k3d-k3s-default-worker-1",
                "EndpointID": "af38a2ecd618cf31df3dd4c88dea58ddc54de621e580934eb308105835f549d1",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            },
            "b9b39e82a6b2ef0863cbc8ed9f09cabbbcf8618fc14a2877feac9218b6803575": {
                "Name": "k3d-k3s-default-worker-0",
                "EndpointID": "87aacc1963289bca9097586cfc28fa17c7a98ee7716d5918a4c83143c35c8b00",
                "MacAddress": "02:42:ac:12:00:04",
                "IPv4Address": "172.18.0.4/16",
                "IPv6Address": ""
            },
            "fb795905ec64f99aac5ed1ad654d3e44a73e702327d15a91e4f60df4e5d03724": {
                "Name": "k3d-k3s-default-server",
                "EndpointID": "a35a783664dff4d68d199c6e23cd6d2c5a7cd0eac7a5f4b1691d524befe4ec01",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "app": "k3d",
            "cluster": "k3s-default"
        }
    }
]

Related

Problems with network connectivity and docker on Synology

I run Docker containers on a Synology NAS. All containers using the host driver have a network connection, but none of the containers using the bridge driver do. It worked in the past, but some months ago one of my experimental containers started having network problems.
Environment:
Synology DS218+
DSM 6.2.3-25426 Update 2
10 GB internal memory
To simplify the description of the problem I have followed the tutorial from docker:
docker run -dit --name alpine1 alpine ash
docker run -dit --name alpine2 alpine ash
The containers have 172.17.0.2 and 172.17.0.3 as IP addresses. When I attached to alpine1 I wasn't able to ping alpine2 by IP address (I used the IP address since the default bridge doesn't do name resolution).
I also tried to use a user defined bridge:
docker network create --driver bridge test
and connected the containers to this network (and disconnected them from the default bridge network)
bash-4.3# docker network inspect test
[
    {
        "Name": "test",
        "Id": "e0e203000f5cfae8103ed9b80dce113633e0e198c542f943ac2e7026cb684784",
        "Created": "2020-12-22T22:47:08.331525073+01:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.22.0.0/16",
                    "Gateway": "172.22.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "3da4fda1508743b36540d6848c5334c84c3c9c02df88170e617d08f15e85999b": {
                "Name": "alpine1",
                "EndpointID": "ccf4be3f89c45dc73183210fafcfdafee9bbe30309ef15cf27e37bbb3783ea58",
                "MacAddress": "02:42:ac:16:00:03",
                "IPv4Address": "172.22.0.3/16",
                "IPv6Address": ""
            },
            "c024024eb5a0e57720f7c2abe76ea5f5396a29eb02addd1f60d23075fcfcad78": {
                "Name": "alpine2",
                "EndpointID": "d4a8cf285d6dae7e8b7f96426a390b73ea800a72bf1739b0ea88c122de975650",
                "MacAddress": "02:42:ac:16:00:02",
                "IPv4Address": "172.22.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
Also in this case I wasn’t able to ping one container from the other.
Apart from DSM updates I also upgraded the internal memory. I don't think this has anything to do with the problem, but you never know.
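A minimal connectivity check against the user-defined bridge, using alpine2's address from the inspect output above (on a user-defined bridge, pinging by name should work as well):
$ docker exec -it alpine1 ping -c 2 172.22.0.2
$ docker exec -it alpine1 ping -c 2 alpine2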
I had a similar issue; have you tried disabling the firewall rules on the NAS?
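As a sketch of what to check on the NAS over SSH (both DSM's firewall and Docker program iptables, so inter-container traffic can be dropped in the FORWARD chain):
$ sysctl net.ipv4.ip_forward     # should print 1
$ sudo iptables -nvL FORWARD     # look for DROP rules whose counters grow while pinging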

How to set up replication from one Docker CouchDB to another?

I have the following Docker containers running on my Windows 10 host:
PS C:\Users\jj2> docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
aacbb0c8f189 couchdb:2.1.1 "tini -- /docker-ent…" 15 seconds ago Up 12 seconds 4369/tcp, 9100/tcp, 0.0.0.0:15984->5984/tcp, 0.0.0.0:15986->5986/tcp jj2_server-1_1
b00138d9c030 couchdb:2.1.1 "tini -- /docker-ent…" 16 seconds ago Up 12 seconds 4369/tcp, 9100/tcp, 0.0.0.0:25984->5984/tcp, 0.0.0.0:25986->5986/tcp jj2_server-2_1
e4c984413ac1 couchdb:2.1.1 "tini -- /docker-ent…" 16 seconds ago Up 12 seconds 0.0.0.0:5984->5984/tcp, 4369/tcp, 9100/tcp, 0.0.0.0:5986->5986/tcp jj2_server-0_1
And I'm able to launch Fauxton like so for each instance:
http://127.0.0.1:5984/
http://127.0.0.1:15984/
http://127.0.0.1:25984/
Now I try to set up replication on the main container, but I must be messing up the value for the replication target.
These are the values I'm specifying:
Replication Source: Local Database
Source Name: widgets
Replication Target: New Remote Database
New Database: http://127.0.0.1:15984/widgets
Replication Type: Continuous
When I save this, the replication attempt fails... and if I reopen the configuration tool, the target is changed to "Existing local database".
This is what the original config JSON looks like:
{
    "_id": "310ab1c7a68d4ae4aba039d2fa00320f",
    "_rev": "2-cf1a3abced5f09ceebd9d54f42ebd65d",
    "user_ctx": {
        "name": "couchdb",
        "roles": [
            "_admin",
            "_reader",
            "_writer"
        ]
    },
    "source": {
        "headers": {
            "Authorization": "Basic Y291Y2hkYjpwYXNzd29yZA=="
        },
        "url": "http://127.0.0.1:5984/widgets"
    },
    "target": {
        "headers": {
            "Authorization": "Basic Y291Y2hkYjpwYXNzd29yZA=="
        },
        "url": "http://127.0.0.1:15984/widgets"
    },
    "create_target": true,
    "continuous": true,
    "owner": "couchdb"
}
The hint/help for the "New Database" field seems to indicate I need to use a URL, which is why I tried 127.0.0.1.
Any suggestions would be appreciated.
EDIT 1
One thing I should add is that the two additional nodes have not had setup run on them. Meaning, I created the cluster, but when I launch the web app, it prompts me to create either a single node or a cluster. Do I have to set up each node as a single node before replication will work?
Also, this is how I created the cluster / containers in the first place:
https://github.com/apache/couchdb-docker/issues/74
I used that docker-compose.yml file.
EDIT 2
I now realize / have learned that anything 127.0.0.1 points to the HOST machine, which is where I've strayed. But how do I point one container at another?
As far as the cluster goes, using Fauxton running on 127.0.0.1:5984, for server-0 I have added the following two nodes like so:
couchdb-1:5984 bind address 0.0.0.0
couchdb-2:5984 bind address 0.0.0.0
Then when I do this (notice the port):
http://127.0.0.1:15984/_node/couchdb@couchdb-1/_config
I get a legit JSON response showing that something is running under the name "couchdb-1". However, I realize that I'm still using my HOST machine to get a view into the couchdb-1 server (server-1).
Via the command line, I confirmed I have nodes like so:
PS C:\Users\jj2> curl -X GET "http://127.0.0.1:5984/_membership" --user couchdb
Enter host password for user 'couchdb':
{"all_nodes":["couchdb@couchdb-0"],"cluster_nodes":["couchdb@couchdb-0","couchdb@couchdb-1","couchdb@couchdb-2"]}
PS C:\Users\jj2>
Lastly, I thought maybe I could use the IP addresses of the containers assigned by Docker, but none of them are pingable from the host. They are all 172.x.x.x addresses.
EDIT 3
In case it helps.
PS C:\Users\jj2> docker network inspect jj2_network
[
    {
        "Name": "jj2_network",
        "Id": "a0a799f7069ff49306438d9cb7884399a66470a7f0e9ac5364600c462153f53c",
        "Created": "2020-01-30T21:18:55.5841557Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "006b6d02cd4e962f3df9d6584d58b36b67864872446f2d00209001ec58d3cd52": {
                "Name": "jj2_server-1_1",
                "EndpointID": "91260368a2d5014743b41c9ab863a2acbfe0a8c7f0a18ea7ad35a3c16efb4445",
                "MacAddress": "02:42:ac:13:00:03",
                "IPv4Address": "172.19.0.3/16",
                "IPv6Address": ""
            },
            "15b261831c46fb89cdc83f9deb638ada0d9d8a89ece0bc065e0a45818e9b4ce3": {
                "Name": "jj2_server-2_1",
                "EndpointID": "cf072d0bbd95ab86308ac4c15b71b47223b09484506e07e5233d526f46baca1e",
                "MacAddress": "02:42:ac:13:00:04",
                "IPv4Address": "172.19.0.4/16",
                "IPv6Address": ""
            },
            "aeaf74cf591cffa8e7463e82b75e9ca57ebbcfd1a84d3f893ea5dcae324dbd1e": {
                "Name": "jj2_server-0_1",
                "EndpointID": "0a6d66b95bf973f0432b9ae88c61709e63f9e51c6bbf92e35ddf6eab5f694cc1",
                "MacAddress": "02:42:ac:13:00:02",
                "IPv4Address": "172.19.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "network",
            "com.docker.compose.project": "jj2",
            "com.docker.compose.version": "1.24.1"
        }
    }
]
Do you have the Docker instances bound to 0.0.0.0 or just 127.0.0.1?
If 0.0.0.0 then you can replicate by setting source and target as remote database with the IP address of the local machine and the specific ports for each instance.
If only 127.0.0.1 and they are both on the same docker network (see docker network ls and docker network inspect <network_name>) then you can use the docker network IP addresses to replicate between the containers.
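For example, using the addresses from the jj2_network inspect above (172.19.0.2 is jj2_server-0_1 and 172.19.0.3 is jj2_server-1_1; "password" is a placeholder), a replication posted from a bash shell to server-0's published port would look roughly like:
$ curl -X POST http://couchdb:password@127.0.0.1:5984/_replicate \
    -H "Content-Type: application/json" \
    -d '{"source": "http://couchdb:password@172.19.0.2:5984/widgets", "target": "http://couchdb:password@172.19.0.3:5984/widgets", "create_target": true, "continuous": true}'
Between containers on the same user-defined network, the container names (jj2_server-0_1 etc.) also resolve, so they can replace the raw IP addresses.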

deploy running docker in external IP (VM host)

My API can't be published on the specific IP address (the VM host's) when using Docker.
First, I run the file in the terminal:
Rscript run.R
This works fine; my API is up and running at http://35.157.131.3:8000/swagger/. After that, I wanted to deploy it with Docker:
docker run --rm -p 8000:8000 --expose 8000 -d --name diemdiem trestletech/plumber
This showed the file was plumbed successfully; however, when I went to the API link, http://35.157.131.3:8000/swagger/ showed a 404 error.
After reading the Docker documentation, I created a container network which specifies the host IP address that I want the Docker container to be reachable on:
docker network create \
-o "com.docker.network.bridge.host_binding_ipv4"="35.157.131.3" \
simple-network
Then I connected the running diemdiem container to simple-network:
docker network connect simple-network diemdiem
I inspected to check whether the container is connected or not:
docker network inspect simple-network
The result is:
[
    {
        "Name": "simple-network",
        "Id": "95ec0c55aeb984952459edda2d4d0bb7c9eea71824e6cec184b7c61d2e807e7b",
        "Created": "2019-07-08T17:30:23.709654207Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "c83125bf68a89aebda3effe28ebee4d6323657e1427cf08fd3d63b6e411f8448": {
                "Name": "diemdiem",
                "EndpointID": "7fab3354e051dc81ef798bd86c19361f6a721b578237b3a3695cb415b1aee2e4",
                "MacAddress": "02:42:ac:15:00:02",
                "IPv4Address": "172.21.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.host_binding_ipv4": "35.157.131.3"
        },
        "Labels": {}
    }
]
The final API is still not up and running at the IP address I specified. I'd appreciate your advice.
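Worth noting: published ports are wired up when a container is created, and connecting a running container to another network afterwards does not re-publish them. A simpler sketch is to bind the published port to the desired address directly at run time (dropping the redundant --expose), assuming 35.157.131.3 is actually configured on one of the VM's interfaces; on cloud VMs the public address is often NATed, in which case the default 0.0.0.0 binding is the right one:
$ docker run --rm -d --name diemdiem -p 35.157.131.3:8000:8000 trestletech/plumber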

containers in the same network unable to communicate

I have a Docker bridge network with 2 containers attached to it:
a Node.js server running on port 3333
a Flask server running on port 5000
The network has this configuration:
[
    {
        "Name": "mynetwork",
        "Id": "f94f76533b065d39515b65d20b8645c22617be51ec9335fcfad8ce707ca48841",
        "Created": "2019-02-20T17:17:29.029434324+01:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.1.0.0/16",
                    "Gateway": "10.1.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "Containers": {
            "c8084141e36c756710cbfa020f664127f234e407986362331ab127d415c9b074": {
                "Name": "nodeContainer",
                "EndpointID": "e25f8797c1b7488d7c3810d8f38c4b3dea6b9f19f17558a164b710015fdd9e1a",
                "MacAddress": "02:42:0a:01:00:03",
                "IPv4Address": "10.1.0.3/16",
                "IPv6Address": ""
            },
            "f9c582d031515f4bba910286118df806a6a2b04a36917234eca09fdf335d4457": {
                "Name": "flaskContainer",
                "EndpointID": "fbf053f97acc7b9491c536966b640862d366d1599fbfb400915cd8bc26b04f6a",
                "MacAddress": "02:42:0a:01:00:02",
                "IPv4Address": "10.1.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
Normally, those 2 containers communicate (nodeContainer makes requests to http://flaskContainer:5000), but this stopped working after I set a different subnet and gateway (because of external network constraints).
In particular, I get an error like ETIMEDOUT 10.1.0.2:3333. This makes me think the address is correctly resolved, but for some reason there is no answer (and in fact flaskContainer logs nothing).
As additional information:
docker exec flaskContainer curl flaskContainer
docker exec nodeContainer curl nodeContainer
obviously do not work (Failed to connect to flaskContainer port 80).
docker exec flaskContainer curl flaskContainer:5000
docker exec nodeContainer curl nodeContainer:3333
correctly give results.
docker exec flaskContainer curl nodeContainer:3333
docker exec nodeContainer curl flaskContainer:5000
time out.
Do you have any idea what the reason could be? How can I solve this?
Thank you
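One thing worth trying, as a sketch (a name that resolves followed by a timeout often points at stale iptables/bridge state left over from the subnet change; recreating the network rebuilds its rules):
$ docker network disconnect mynetwork nodeContainer
$ docker network disconnect mynetwork flaskContainer
$ docker network rm mynetwork
$ docker network create --subnet 10.1.0.0/16 --gateway 10.1.0.1 mynetwork
$ docker network connect mynetwork nodeContainer
$ docker network connect mynetwork flaskContainer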

Container published port accepting TCP connection without anything in container listening

Edit: I found a workaround: disabling the userland proxy (docker-proxy) via daemon.json seems to resolve this. This likely means it is a bug in docker-proxy; as far as I can tell, everything I was running before works correctly.
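For reference, that workaround is a /etc/docker/daemon.json like the following, followed by a daemon restart:
{
  "userland-proxy": false
}
$ sudo systemctl restart docker
With the userland proxy disabled, published ports are handled purely by iptables DNAT rules, so connecting to a published port with no listener behind it gets a plain connection refused.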
I am attempting to debug an issue configuring TCP health checks with Consul. The configuration of Consul etc. isn't relevant, as I have isolated this to a rather simple scenario. All containers are connected to bridge0 (config below). The host OS is CentOS 7.
What I expect is for nc from Container 2 to return "connection refused", but instead it appears to connect, and then, after I send some random characters, it ends with a broken pipe. Is this to be expected?
Container 1:
[user@192.168.1.2 ~]$ docker run --net=bridge0 -it -p 50032:8000 centos:7 bash
[root@a691f149c045 /]#
Container 2:
[user@192.168.1.2 ~]$ docker run -it --net=bridge0 centos:7 bash
[root@e9c1cbaf3922 /]# nc 192.168.1.2 50032
asd
asd
Ncat: Broken pipe.
Host:
[user@192.168.1.2 ~]$ nc 192.168.1.2 50032
Ncat: Connection refused.
Docker bridge0 config
[user@host ~]$ docker network inspect bridge0
[
    {
        "Name": "bridge0",
        "Id": "b50864883bb2c9482b2d0da595abbe4b12e0de6b7fa91657119316fd75dcac83",
        "Created": "2018-08-16T21:38:11.501721012-04:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.24.0.0/16",
                    "IPRange": "172.24.0.0/24",
                    "Gateway": "172.24.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {
            "a691f149c045be06ad90c66221a9c35f2586b75e9a5e2f104c443ced311fdf03": {
                "Name": "gallant_lamport",
                "EndpointID": "e66a16eac8d7a405d3698fa37f6d6a47484b63c9cf07a714bbab6caf107741d6",
                "MacAddress": "02:42:ac:18:00:09",
                "IPv4Address": "172.24.0.9/16",
                "IPv6Address": ""
            },
            "e9c1cbaf3922774183afe613c6641e19346cac8d707bb2374d1251b02855a94f": {
                "Name": "xenodochial_bose",
                "EndpointID": "a01755d0468a2aa188f1b607ee63590bda4dc3e89e15dc78f1556b79fa1aac42",
                "MacAddress": "02:42:ac:18:00:0a",
                "IPv4Address": "172.24.0.10/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
