How to set up replication from one Docker CouchDB to another?

I have the following docker containers running on my windows 10 host:
PS C:\Users\jj2> docker ps -a
CONTAINER ID   IMAGE           COMMAND                  CREATED          STATUS          PORTS                                                                   NAMES
aacbb0c8f189   couchdb:2.1.1   "tini -- /docker-ent…"   15 seconds ago   Up 12 seconds   4369/tcp, 9100/tcp, 0.0.0.0:15984->5984/tcp, 0.0.0.0:15986->5986/tcp   jj2_server-1_1
b00138d9c030   couchdb:2.1.1   "tini -- /docker-ent…"   16 seconds ago   Up 12 seconds   4369/tcp, 9100/tcp, 0.0.0.0:25984->5984/tcp, 0.0.0.0:25986->5986/tcp   jj2_server-2_1
e4c984413ac1   couchdb:2.1.1   "tini -- /docker-ent…"   16 seconds ago   Up 12 seconds   0.0.0.0:5984->5984/tcp, 4369/tcp, 9100/tcp, 0.0.0.0:5986->5986/tcp     jj2_server-0_1
And I'm able to launch Fauxton like so for each instance:
http://127.0.0.1:5984/
http://127.0.0.1:15984/
http://127.0.0.1:25984/
Now I try to set up replication on the main container… but I must be messing up the value for the replication target.
These are the values I'm specifying:
Replication Source: Local Database
Source Name: widgets
Replication Target: New Remote Database
New Database: http://127.0.0.1:15984/widgets
Replication Type: Continuous
When I save this, the replication attempt fails... and if I reopen the configuration tool, the target is changed to "Existing local database".
This is what the original config JSON looks like:
{
    "_id": "310ab1c7a68d4ae4aba039d2fa00320f",
    "_rev": "2-cf1a3abced5f09ceebd9d54f42ebd65d",
    "user_ctx": {
        "name": "couchdb",
        "roles": [
            "_admin",
            "_reader",
            "_writer"
        ]
    },
    "source": {
        "headers": {
            "Authorization": "Basic Y291Y2hkYjpwYXNzd29yZA=="
        },
        "url": "http://127.0.0.1:5984/widgets"
    },
    "target": {
        "headers": {
            "Authorization": "Basic Y291Y2hkYjpwYXNzd29yZA=="
        },
        "url": "http://127.0.0.1:15984/widgets"
    },
    "create_target": true,
    "continuous": true,
    "owner": "couchdb"
}
The hint/help for the "New Database" field seems to indicate I need to use a URL... which is why I tried 127.0.0.1.
Any suggestions would be appreciated.
EDIT 1
One thing I should add is that the 2 additional nodes have not had a setup run on them. Meaning, I created the cluster, but when I launch the webapp, it prompts me to create either a single node or a cluster. Do I have to set up each node as a single node before replication will work?
Also, this is how I created the cluster / containers in the first place:
https://github.com/apache/couchdb-docker/issues/74
I used that docker-compose.yml file.
EDIT 2
I now realize / have learned that anything 127.0.0.1 points to the HOST machine, which is where I've gone wrong. But how do I point one container to another?
As far as the cluster goes, using Fauxton running on 127.0.0.1:5984, for server-0 I have added the following 2 nodes like so:
couchdb-1:5984 bind address 0.0.0.0
couchdb-2:5984 bind address 0.0.0.0
Then when I do this (notice the port):
http://127.0.0.1:15984/_node/couchdb@couchdb-1/_config
I get a legit JSON response showing that something is running under the name "couchdb-1". However, I realize that I'm still using my HOST machine to get a view into the couchdb-1 server (server-1).
Via the command line, I confirmed I have nodes like so:
PS C:\Users\jj2> curl -X GET "http://127.0.0.1:5984/_membership" --user couchdb
Enter host password for user 'couchdb':
{"all_nodes":["couchdb#couchdb-0"],"cluster_nodes":["couchdb#couchdb-0","couchdb#couchdb-1","couchdb#couchdb-2"]}
PS C:\Users\jj2>
Lastly, I thought maybe I could use the IP addresses of the containers assigned by Docker, but none of them are pingable from the host. They are all 172.x.x.x addresses.
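For what it's worth, the _membership output above shows that only couchdb@couchdb-0 has actually joined all_nodes; couchdb-1 and couchdb-2 are listed in cluster_nodes but have not joined. CouchDB 2.x exposes a /_cluster_setup endpoint that can drive the join from the command line; a hedged sketch, assuming the couchdb/password admin credentials shown earlier exist on every node:
curl -X POST http://couchdb:password@127.0.0.1:5984/_cluster_setup \
  -H "Content-Type: application/json" \
  -d '{"action":"add_node","host":"couchdb-1","port":5984,"username":"couchdb","password":"password"}'
# repeat for couchdb-2, then finish the cluster:
curl -X POST http://couchdb:password@127.0.0.1:5984/_cluster_setup \
  -H "Content-Type: application/json" \
  -d '{"action":"finish_cluster"}'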
EDIT 3
In case it helps:
PS C:\Users\jj2> docker network inspect jj2_network
[
    {
        "Name": "jj2_network",
        "Id": "a0a799f7069ff49306438d9cb7884399a66470a7f0e9ac5364600c462153f53c",
        "Created": "2020-01-30T21:18:55.5841557Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "006b6d02cd4e962f3df9d6584d58b36b67864872446f2d00209001ec58d3cd52": {
                "Name": "jj2_server-1_1",
                "EndpointID": "91260368a2d5014743b41c9ab863a2acbfe0a8c7f0a18ea7ad35a3c16efb4445",
                "MacAddress": "02:42:ac:13:00:03",
                "IPv4Address": "172.19.0.3/16",
                "IPv6Address": ""
            },
            "15b261831c46fb89cdc83f9deb638ada0d9d8a89ece0bc065e0a45818e9b4ce3": {
                "Name": "jj2_server-2_1",
                "EndpointID": "cf072d0bbd95ab86308ac4c15b71b47223b09484506e07e5233d526f46baca1e",
                "MacAddress": "02:42:ac:13:00:04",
                "IPv4Address": "172.19.0.4/16",
                "IPv6Address": ""
            },
            "aeaf74cf591cffa8e7463e82b75e9ca57ebbcfd1a84d3f893ea5dcae324dbd1e": {
                "Name": "jj2_server-0_1",
                "EndpointID": "0a6d66b95bf973f0432b9ae88c61709e63f9e51c6bbf92e35ddf6eab5f694cc1",
                "MacAddress": "02:42:ac:13:00:02",
                "IPv4Address": "172.19.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "network",
            "com.docker.compose.project": "jj2",
            "com.docker.compose.version": "1.24.1"
        }
    }
]

Do you have the Docker instances bound to 0.0.0.0, or just 127.0.0.1?
If 0.0.0.0, then you can replicate by setting source and target as remote databases, using the IP address of the local machine and the specific published port of each instance.
If only 127.0.0.1, and they are both on the same Docker network (see docker network ls and docker network inspect <network_name>), then you can use the Docker network IP addresses to replicate between the containers.
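Since the containers share jj2_network, a cleaner option than raw IPs is to use their in-network names. A minimal sketch against CouchDB's _replicate endpoint, assuming the node hostnames couchdb-0 and couchdb-1 from the question's EDIT 2 resolve on that network, and reusing the couchdb/password credentials from the config above:
curl -X POST http://couchdb:password@127.0.0.1:5984/_replicate \
  -H "Content-Type: application/json" \
  -d '{
    "source": "http://couchdb:password@couchdb-0:5984/widgets",
    "target": "http://couchdb:password@couchdb-1:5984/widgets",
    "create_target": true,
    "continuous": true
  }'
The curl itself runs on the host against the published port 5984, but the source and target URLs are resolved by the CouchDB server container, so they must use in-network names, never 127.0.0.1.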

Related

How to get Docker Containers in one Docker Network to talk to each other

A problem with a Docker container running a NextJS application trying to access another Docker container running a NestJS API.
The environment looks like this:
$ docker ps -a
CONTAINER ID   IMAGE             COMMAND                  CREATED         STATUS         PORTS                    NAMES
b04de77cb381   ui                "docker-entrypoint.s…"   9 minutes ago   Up 9 minutes   0.0.0.0:8004->3000/tcp   ui
6af7c952afd6   redis:latest      "docker-entrypoint.s…"   2 hours ago     Up 2 hours     0.0.0.0:8003->6379/tcp   redis
784b6f925817   api               "docker-entrypoint.s…"   2 hours ago     Up 2 hours     0.0.0.0:8001->3001/tcp   api
c0fb02031834   postgres:latest   "docker-entrypoint.s…"   21 hours ago    Up 21 hours    0.0.0.0:8002->5432/tcp   db
All containers are in the same bridged network.
A docker network inspect <network_name> shows all containers.
The containers are started from different docker-compose files (ui, redis+api, db).
API to DB
The api talks to the database db with postgresql://username:password@db:5432/myDb?schema=public
Notice 'db' being the name on the Docker network, and port 5432 in the URL.
Since they are on the same network, you need to use the internal port 5432 instead of the published port 8002.
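In other words, the same database has two addresses depending on where the client runs. A sketch (username, password and myDb are the placeholders from the question):
# from another container on the same Docker network: service name + internal port
postgresql://username:password@db:5432/myDb?schema=public
# from the host: localhost + the published port (0.0.0.0:8002->5432/tcp)
postgresql://username:password@localhost:8002/myDb?schema=public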
Local UI
When I run the UI on the Host (on port 3000), it is able to access the API (in the Container).
Data is transferred from db-container to api-container to ui-on-the-host.
UI in the Container
Now I also start a browser on localhost:8004. This is the UI in the container.
The UI is accessing the API on http://api:3001/*.
It sounds logical to use the Docker network name and internal port; I also do that from API to DB.
But, this does not work: "net::ERR_NAME_NOT_RESOLVED".
Test: ncat test
Exec-ing into the UI container and doing a check (with ncat) shows the port is open:
/app $ nc -v api 3001
api (192.168.48.4:3001) open
Test: curl in the UI Container
(Added later)
When doing a curl test out of the UI container to the API container, I do get a result.
(See the simple/stupid debug endpoint called /dbg.)
$ docker exec -u 0 -it ui /bin/bash
UI$ curl http://api:3001/dbg
{"status":"I'm alive and kicking!!"}
About the Network
I created my own bridged network.
So the network is there, and it looks like all containers are connected to it.
/Users/bert/_> docker network inspect my-net
[
    {
        "Name": "my-net",
        "Id": "e786d630f252cf12856372b708c309f90f8bf177b6b1f742d6ae02f5094c7223",
        "Created": "2021-03-11T14:10:50.417675Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.48.0/20",
                    "Gateway": "192.168.48.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "6af7c952afd60a3b4f36e244273db5f9f8a993a6f738785f624ffb59f381cf3d": {
                "Name": "redis",
                "EndpointID": "d9a6e6f6a4467bf38d3348889c86dcc53c5b0fa5ddc9fcf17c7e722fc6673a25",
                "MacAddress": "02:42:c0:a8:30:05",
                "IPv4Address": "192.168.48.5/20",
                "IPv6Address": ""
            },
            "784b6f9258179e8ac03ee8bbc8584582dd9199ef5ec1af1404f7cf600ac707e1": {
                "Name": "api",
                "EndpointID": "d4b82f37559a4ee567cb304f033e1394af8c83e84916dc62f7e81f3b875a6e5f",
                "MacAddress": "02:42:c0:a8:30:04",
                "IPv4Address": "192.168.48.4/20",
                "IPv6Address": ""
            },
            "c0fb02031834b414522f8630fcde31482e32d948de98d3e05678b34b26a1e783": {
                "Name": "db",
                "EndpointID": "dde944e1eda2c69dd733bcf761b170db2756aad6c2a25c8993ca626b48dc0e81",
                "MacAddress": "02:42:c0:a8:30:03",
                "IPv4Address": "192.168.48.3/20",
                "IPv6Address": ""
            },
            "d678b3e96e0f0765ed62a70cc880b07836cf1ebf17590dc0e3351e8ee8b9b639": {
                "Name": "ui",
                "EndpointID": "c5a8d7e3d8b31d8dacb2f343bb77f4b364f0f3e3a5ed1025cc4ec5b65b44fd27",
                "MacAddress": "02:42:c0:a8:30:02",
                "IPv4Address": "192.168.48.2/20",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
Conclusion:
UI-container with curl in the container can talk to the API-container.
UI-on-the-host with a browser on the host can talk to the API-container.
UI-container with a browser on the host cannot talk to the API-container. Why?
The question then is how to use a UI container in the browser and talk to other containers over the Docker bridged network.
OK, problem solved.
It was a matter of confusion about where the NextJS application gets the API location from.
Since the NextJS application (the UI) is in the end just running in a browser, you need to specify the API location as seen from the browser, not as inter-container communication.
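A minimal sketch of what that means in practice, assuming a hypothetical NEXT_PUBLIC_API_URL variable (Next.js only exposes variables prefixed with NEXT_PUBLIC_ to browser code):
# .env for the ui container: the browser runs on the host, so it must use the
# published port of the api container (0.0.0.0:8001->3001/tcp), not http://api:3001
NEXT_PUBLIC_API_URL=http://localhost:8001
Server-side code inside the api container can keep using db:5432, because that code really does run on the Docker network.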

Problems with network connectivity and Docker on Synology

I run Docker containers on a Synology NAS. All containers using the host driver have a network connection, but none of the containers using the bridge driver do. It worked in the past, but some months ago one of my experimental containers started experiencing network problems.
Environment:
Synology DS218+
DSM 6.2.3-25426 Update 2
10 GB internal memory
To simplify the description of the problem, I have followed the tutorial from Docker:
docker run -dit --name alpine1 alpine ash
docker run -dit --name alpine2 alpine ash
The containers have 172.17.0.2 and 172.17.0.3 as IP addresses. When I attached to alpine1, I wasn't able to ping alpine2 using its IP address (I used the IP address since the default bridge doesn't do name resolution).
I also tried to use a user-defined bridge:
docker network create --driver bridge test
and connected the containers to this network (and disconnected them from the default bridge network)
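The steps described, as commands, look roughly like this (a sketch; the network name test is from above):
# detach from the default bridge and attach to the user-defined network
docker network disconnect bridge alpine1
docker network disconnect bridge alpine2
docker network connect test alpine1
docker network connect test alpine2
# user-defined bridges also do name resolution, so the name should work as well as the IP
docker exec -it alpine1 ping -c 2 alpine2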
bash-4.3# docker network inspect test
[
    {
        "Name": "test",
        "Id": "e0e203000f5cfae8103ed9b80dce113633e0e198c542f943ac2e7026cb684784",
        "Created": "2020-12-22T22:47:08.331525073+01:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.22.0.0/16",
                    "Gateway": "172.22.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "3da4fda1508743b36540d6848c5334c84c3c9c02df88170e617d08f15e85999b": {
                "Name": "alpine1",
                "EndpointID": "ccf4be3f89c45dc73183210fafcfdafee9bbe30309ef15cf27e37bbb3783ea58",
                "MacAddress": "02:42:ac:16:00:03",
                "IPv4Address": "172.22.0.3/16",
                "IPv6Address": ""
            },
            "c024024eb5a0e57720f7c2abe76ea5f5396a29eb02addd1f60d23075fcfcad78": {
                "Name": "alpine2",
                "EndpointID": "d4a8cf285d6dae7e8b7f96426a390b73ea800a72bf1739b0ea88c122de975650",
                "MacAddress": "02:42:ac:16:00:02",
                "IPv4Address": "172.22.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
Also in this case I wasn't able to ping one container from the other.
Apart from DSM updates, I also upgraded the internal memory. I don't think this has anything to do with the problem, but you never know.
I had a similar issue; have you tried disabling the firewall rules on the NAS?
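If SSH access to the NAS is available, one quick check (a sketch; DSM manages its own iptables rules, so the output will vary) is whether the firewall is dropping forwarded container traffic:
# run as root on the NAS while pinging from alpine1: watch the packet/byte
# counters on the FORWARD chain for DROP rules that are incrementing
iptables -L FORWARD -n -v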

How to reach internet URLs from inside containers

Is there any way to get access to the internet from inside a Docker container?
My containers have to reach some URLs in order to work...
My containers are:
$ docker ps
CONTAINER ID   IMAGE                       COMMAND                  CREATED        STATUS          PORTS                    NAMES
457c79c831b6   rancher/k3s:v1.17.0-k3s.1   "/bin/k3s agent"         15 hours ago   Up 10 minutes                            k3d-k3s-default-worker-1
b9b39e82a6b2   rancher/k3s:v1.17.0-k3s.1   "/bin/k3s agent"         15 hours ago   Up 10 minutes                            k3d-k3s-default-worker-0
fb795905ec64   rancher/k3s:v1.17.0-k3s.1   "/bin/k3s server --h…"   15 hours ago   Up 10 minutes   0.0.0.0:6443->6443/tcp   k3d-k3s-default-server
As you can see, they are running the rancher/k3s image.
I've taken a look at the logs:
E0205 08:07:07.844781 6 kuberuntime_manager.go:729] createPodSandbox for pod "vault-helm-1580888075-agent-injector-b7647bf59-vght5_default(7210fa15-5ba4-4c61-9e2c-2bce05cd3bc0)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
It seems it's not able to reach the registry-1.docker.io registry.
However, I'm able to pull images from my host:
$ docker pull busybox
Using default tag: latest
latest: Pulling from library/busybox
bdbbaa22dec6: Pull complete
Digest: sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a
Status: Downloaded newer image for busybox:latest
docker.io/library/busybox:latest
My host is working behind a corporate proxy:
$ cat /etc/systemd/system/docker.service.d/proxy.conf
[Service]
Environment="HTTP_PROXY=http://10.49.0.1:8080/"
Environment="HTTPS_PROXY=http://10.49.0.1:8080/"
Environment="NO_PROXY="localhost,127.0.0.1,::1"
Also, I've tried to test if containers are able to reach proxy ip:
$ docker exec -it 457c79c831b6 sh
/ # ping 10.49.0.1
PING 10.49.0.1 (10.49.0.1): 56 data bytes
<no response>
EDIT
/etc/resolv.conf content:
cat /etc/resolv.conf
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "systemd-resolve --status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 127.0.0.53
options edns0
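Note that nameserver 127.0.0.53 is the systemd-resolved stub, which is only reachable on the host itself; when Docker finds only localhost nameservers, it falls back to public DNS (8.8.8.8/8.8.4.4), which corporate networks often block. One possible fix, a sketch assuming a hypothetical corporate DNS server at 10.49.0.53, is to pin container DNS in /etc/docker/daemon.json and restart the daemon:
{
  "dns": ["10.49.0.53"]
}
sudo systemctl restart docker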
EDIT 2
Network-related inspection of the k3d master container node:
$ docker inspect k3d-k3s-default-server | grep -i networks -A10
"NetworkSettings": {
"Bridge": "",
"SandboxID": "57705be8c175394ac122b95f070321dbe48d4c7b7752482391fc243562babb75",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"6443/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "6443"
--
"Networks": {
"k3d-k3s-default": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"k3d-k3s-default-server",
"fb795905ec64"
],
"NetworkID": "337e73b268477428e97798665dd8013fd1e17d2003e33dcce694ab78f7f8b4bb",
"EndpointID": "a35a783664dff4d68d199c6e23cd6d2c5a7cd0eac7a5f4b1691d524befe4ec01",
"Gateway": "172.18.0.1",
EDIT 3
$ docker network inspect k3d-k3s-default
[
    {
        "Name": "k3d-k3s-default",
        "Id": "337e73b268477428e97798665dd8013fd1e17d2003e33dcce694ab78f7f8b4bb",
        "Created": "2020-02-04T17:40:01.13490488+01:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "457c79c831b6a76ae9b78cf360ae437eed04b18bd18429ac2e8436801ba0f4f7": {
                "Name": "k3d-k3s-default-worker-1",
                "EndpointID": "af38a2ecd618cf31df3dd4c88dea58ddc54de621e580934eb308105835f549d1",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            },
            "b9b39e82a6b2ef0863cbc8ed9f09cabbbcf8618fc14a2877feac9218b6803575": {
                "Name": "k3d-k3s-default-worker-0",
                "EndpointID": "87aacc1963289bca9097586cfc28fa17c7a98ee7716d5918a4c83143c35c8b00",
                "MacAddress": "02:42:ac:12:00:04",
                "IPv4Address": "172.18.0.4/16",
                "IPv6Address": ""
            },
            "fb795905ec64f99aac5ed1ad654d3e44a73e702327d15a91e4f60df4e5d03724": {
                "Name": "k3d-k3s-default-server",
                "EndpointID": "a35a783664dff4d68d199c6e23cd6d2c5a7cd0eac7a5f4b1691d524befe4ec01",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "app": "k3d",
            "cluster": "k3s-default"
        }
    }
]

Containers in the same network unable to communicate

I have a Docker bridge network, to which 2 containers are attached:
a Node.js server running on port 3333
a Flask server running on port 5000
The network has this configuration:
[
    {
        "Name": "mynetwork",
        "Id": "f94f76533b065d39515b65d20b8645c22617be51ec9335fcfad8ce707ca48841",
        "Created": "2019-02-20T17:17:29.029434324+01:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.1.0.0/16",
                    "Gateway": "10.1.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "Containers": {
            "c8084141e36c756710cbfa020f664127f234e407986362331ab127d415c9b074": {
                "Name": "nodeContainer",
                "EndpointID": "e25f8797c1b7488d7c3810d8f38c4b3dea6b9f19f17558a164b710015fdd9e1a",
                "MacAddress": "02:42:0a:01:00:03",
                "IPv4Address": "10.1.0.3/16",
                "IPv6Address": ""
            },
            "f9c582d031515f4bba910286118df806a6a2b04a36917234eca09fdf335d4457": {
                "Name": "flaskContainer",
                "EndpointID": "fbf053f97acc7b9491c536966b640862d366d1599fbfb400915cd8bc26b04f6a",
                "MacAddress": "02:42:0a:01:00:02",
                "IPv4Address": "10.1.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
Normally, those 2 containers communicate (nodeContainer makes requests to http://flaskContainer:5000), but this stopped working after setting a different subnet and gateway (due to external network constraints).
In particular, I get an error like ETIMEDOUT 10.1.0.2:3333. This makes me think that the address is correctly resolved, but for some reason there is no answer (and in fact flaskContainer logs nothing).
As additional information:
docker exec flaskContainer curl flaskContainer
docker exec nodeContainer curl nodeContainer
obviously does not work (Failed to connect to flaskContainer port 80).
docker exec flaskContainer curl flaskContainer:5000
docker exec nodeContainer curl nodeContainer:3333
correctly give results.
docker exec flaskContainer curl nodeContainer:3333
docker exec nodeContainer curl flaskContainer:5000
time out.
Do you have any idea what the reason could be? How can I solve this?
Thank you
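One way to narrow a problem like this down is to take DNS out of the picture and curl the IP addresses from the network config directly; if these also time out, the issue is packet filtering on the new subnet rather than name resolution. A diagnostic sketch using the addresses above:
docker exec nodeContainer curl -m 5 http://10.1.0.2:5000
docker exec flaskContainer curl -m 5 http://10.1.0.3:3333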

Connecting across containers in Docker network

I have this:
CONTAINER ID   IMAGE                                        COMMAND                  CREATED          STATUS                      PORTS                    NAMES
e3252abd7587   cdt-tests                                    "/bin/bash /home/new…"   5 seconds ago    Exited (1) 22 seconds ago                            cdt-tests
f492760705e3   cdt-server                                   "/bin/bash /usr/loca…"   52 seconds ago   Up About a minute           0.0.0.0:3040->3040/tcp   cdt-server
89c5a28855df   mongo                                        "docker-entrypoint.s…"   55 seconds ago   Up About a minute           27017/tcp                cdt-mongo
1eaa4aa684a9   selenium/standalone-firefox:3.4.0-chromium   "/opt/bin/entry_poin…"   59 seconds ago   Up About a minute           4444/tcp                 cdt-selenium
The cdt-tests container is attempting to make connections to the other containers in the same network. The network looks like this:
$ docker network inspect cdt-net # this yields the JSON below
[
    {
        "Name": "cdt-net",
        "Id": "8c2b486e950076130860e0c6c09f06eaf8cccf02127786b80bf7cc169f8eae0f",
        "Created": "2018-01-23T21:52:34.5021152Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "1eaa4aa684a9d7c1ad7a1b7ac28418b100e6b8d7a22ceb2a97cf51e4487c5fb2": {
                "Name": "cdt-selenium",
                "EndpointID": "674ce85e14339e67e767ab9844cd2fd1356fc3e60ab050e1cd1457e4168ac9fc",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            },
            "89c5a28855dfde05fe9db9a35bbe7bce232eb56d9024022785d2a65570c423b5": {
                "Name": "cdt-mongo",
                "EndpointID": "ed497939965363cd194b4fea8e6a26ee47ef7f24bef56c9726003a897be83dd1",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            },
            "f492760705e30be4fe8469ae422e96548ee2192f41314e3815762a9e39a4cc82": {
                "Name": "cdt-server",
                "EndpointID": "17e8bd6f7735a52669f0fe92b2d5717c7a3ae6954c108af3f29c13233ef20ee6",
                "MacAddress": "02:42:ac:12:00:04",
                "IPv4Address": "172.18.0.4/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
In my cdt-tests container, I run these commands:
export CDT_SELENIUM_HOST="cdt-selenium.cdt-net";
export OPENSHIFT_NODEJS_IP="127.0.0.1";
export OPENSHIFT_NODEJS_PORT="3040";
export CDT_SERVER_HOST="127.0.0.1";
export CDT_SERVER_PORT=3040;
export OPENSHIFT_MONGODB_DB_HOST="127.0.0.1"
export OPENSHIFT_MONGODB_DB_PORT=27017
nc -zv "$CDT_SELENIUM_HOST" 4444 > /dev/null 2>&1
nc_exit=$?
if [ ${nc_exit} -eq 0 ]; then
echo "selenium server is live"
else
echo "selenium server is NOT live"
exit 1;
fi
nc -zv "$OPENSHIFT_MONGODB_DB_HOST" 27017 > /dev/null 2>&1
nc_exit=$?
if [ ${nc_exit} -eq 0 ]; then
echo "mongo server is live"
else
echo "mongo server is NOT live"
exit 1;
fi
nc -zv "$CDT_SERVER_HOST" 3040 > /dev/null 2>&1
nc_exit=$?
if [ ${nc_exit} -eq 0 ]; then
echo "cdt server is live"
else
echo "cdt server is NOT live"
exit 1;
fi
All of those connection tests fail. Does anyone know how to connect between containers in the same Docker network? Is there some surefire pattern to use?
It looks like you're trying to use 127.0.0.1 as the address for your other containers. Docker containers all have a unique ip address in an isolated network space. Much like your own physical host, 127.0.0.1 is a special address that means "this container". So since none of those services are running in the container in which you're running your tests, you can't connect to anything.
You need to use the ip address of the container running the service you want to test. Because ip addresses change with every deployment, it's not convenient to use the literal address. You need some way to get the information dynamically. For this reason, Docker maintains a DNS service on each network so that you can simply use the name of a container like any other hostname.
For example, in your environment, you could set:
export OPENSHIFT_MONGODB_DB_HOST="cdt-mongo"
And then your mongo test should succeed. And so forth for the other _HOST and _IP variables you're using.
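Applied to the script in the question, the exports would look something like this (ports unchanged, hosts switched from 127.0.0.1 to the container names):
export CDT_SELENIUM_HOST="cdt-selenium"
export CDT_SERVER_HOST="cdt-server"
export CDT_SERVER_PORT=3040
export OPENSHIFT_MONGODB_DB_HOST="cdt-mongo"
export OPENSHIFT_MONGODB_DB_PORT=27017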
I can assure you that the way @larsks mentioned works well.
But there is an even easier way. I can see that the cdt-tests container is in the same network as the other containers, based on your description above. So you can attach all the containers to a user-defined bridge network created with docker network create -d bridge mynet.
When starting containers, just add the --net=mynet flag.
E.g. docker run -tid --name=cdt-server --net=mynet cdt-server
Thus, there is no need to add any environment variables to your container(s), since user-defined bridge networks provide automatic DNS resolution between containers. You may see the introduction here.
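Already-running containers can also be attached without restarting them; a small sketch:
docker network connect mynet cdt-mongo
docker network connect mynet cdt-selenium
docker network connect mynet cdt-server
docker network connect mynet cdt-tests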
