How can I modify the internal swarm load balancer IP address? - docker

I have a Docker swarm running on a cluster of machines, and my use case is deploying several standalone containers with static IP configurations that need to be connected to each other, so I created an overlay network to connect all the nodes of the swarm. I don't use (and don't want to use) anything related to Docker services or their replication in my swarm; it's not a real-world scenario, it's a test one.
The problem is that when I deploy a container to a certain host, a swarm load balancer is created with a random IP address, and I need it to be static because I can't change the configuration of the containers I want to deploy. I already searched for how to remove this load balancer because, as far as I understand, it's only used for external traffic coming into the swarm services/containers, which is not useful for my use case.
A solution would be to deploy a dummy container, check which IP was assigned to the swarm load balancer on each node, and then adjust the configuration files of the containers I want to deploy, but this approach does not scale well and is a workaround rather than a fix for the actual problem. My problems started when my containers randomly failed to start with docker: Error response from daemon: attaching to network failed, make sure your network options are correct and check manager logs: context deadline exceeded. I could not identify the reason, and then inferred it was because these load balancers were using the same IP addresses I wanted to use for my containers.
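For reference, the load balancer shows up as an extra endpoint named lb-<network> in the network's endpoint list on each node, so the check looks roughly like this (with my overlay called my-network):
docker network inspect my-network --format '{{json .Containers}}'
# the entry keyed lb-my-network holds the load balancer's IPv4Address on that node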
My question is how can I statically define the IP of these load balancers or remove them completely for every node? Thank you for your time.
[Diagram: Docker Swarm Architecture]
Here is the output of docker network inspect <my-overlay-network>:
"Name": "my-network",
"Id": "mo8rcf8ozr05qrnuqh64wamhs",
"Created": "2020-11-16T01:59:20.100290182Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.1.0/24",
"Gateway": "10.0.1.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"95b8e9c3ab5f9870987c4077ce264b96a810dad573a7fa2de485dd6f4b50f307": {
"Name": "unruffled_haslett",
"EndpointID": "422d83efd66ae36dd10ab0b1eb1a70763ccef6789352b06b8eb3ec8bca48410f",
"MacAddress": "02:42:0a:00:01:0c",
"IPv4Address": "10.0.1.12/24",
"IPv6Address": ""
},
"lb-my-network": {
"Name": "my-network-endpoint",
"EndpointID": "192ffaa13b7d7cfd36c4751f87c3d08dc65e66e97c0a134dfa302f55f77dcef3",
"MacAddress": "02:42:0a:00:01:08",
"IPv4Address": "10.0.1.8/24",
"IPv6Address": ""
}

I just used a wider subnet mask of /16 instead of /24, which gave me more IP addresses and thus avoided collisions with the internal load balancers.
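If the overlay network can be recreated, a minimal sketch of that change looks like this (network name and subnet are just examples):
docker network rm my-network
docker network create --driver overlay --attachable --subnet 10.0.0.0/16 my-network
# the lb-<network> endpoint on each node still gets a random address from this pool,
# but the much larger range makes collisions with statically configured containers unlikely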

Related

Docker Container not able to access in localhost and also same network segment

New to Docker, please correct my statement.
I'm trying to access a Docker container (e.g. an nginx web server) on port 80 from the Docker Engine machine itself, but I am unable to access it.
Here is my Docker Engine network: 10.20.20.0/24.
Docker Engine IP : 10.20.20.3
> Telnet 10.20.20.3 80 Connection failed
tcp 0 0 10.20.20.3:80 0.0.0.0:* LISTEN 28953/docker-proxy
Docker Container IP : 172.18.0.4
> Telnet 172.18.0.4 80 Connection success
Docker network detail
[root@xxxxxxxxx]# docker network inspect 1984f08c739d
[
{
"Name": "xxxxxxxxxxxxx",
"Id": "1984f08c739d6d6fc6b4769e877714844b9e57ca680f61edf3a528bd12ca5ad1",
"Created": "2021-11-13T21:01:27.53591809+05:30",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"126d5128621fa6cde0389f4c6e0b53be36670bce5f62e981da6e17722b88c4a9": {
"Name": "xxxxxxxxxxxxxxx",
"EndpointID": "b011052062ae137e3d190032311e0fbc5850f0459c9b24d8cc1d315a9dc18773",
"MacAddress": "02:42:ac:13:00:02",
"IPv4Address": "172.18.0.4/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "xxxxxxxx",
"com.docker.compose.version": "1.29.2"
}
} ]
I can access this nginx from other networks like 10.20.21.0/24 and so on, but not from the same network 10.20.20.0/24 or from the Docker Engine host itself.
My environment: the Docker Engine VM has two interfaces, eth0 and eth1, on different subnets, running on a Nutanix AHV hypervisor. Previously it didn't work because both interfaces had separate persistent routing tables and rules in /etc/sysconfig/network-scripts (route-eth0, route-eth1 and rule-eth0, rule-eth1). I tried removing the persistent route for eth0; since eth0 doesn't need a persistent routing table, it falls back to the default Linux route table. Then I restarted the network and, voilà, docker-proxy was reachable on eth0; I did the same for eth1 and it worked. I can now map both eth0 and eth1 to the Docker network and it works like a charm. I believe AHV VMs don't need per-interface routing tables for either the same or different subnets, so I concluded it was a routing issue. The issue is resolved: I can access the Docker container through eth0 and eth1 from other subnets and from the same subnet.
Both interfaces kept working without the persistent routes after restarting the AHV VM, and even after a power off.
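A rough sketch of the steps described above, assuming a RHEL/CentOS-style system where the persistent per-interface routes live in /etc/sysconfig/network-scripts (file names taken from the answer; adapt to your own setup):
# move the persistent routes/rules for eth0 aside so it falls back to the default Linux route table
mv /etc/sysconfig/network-scripts/route-eth0 /root/route-eth0.bak
mv /etc/sysconfig/network-scripts/rule-eth0 /root/rule-eth0.bak
systemctl restart network   # or reboot the VM
# verify the published port is now reachable from the same subnet
curl -I http://10.20.20.3:80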

How to get Docker Containers in one Docker Network talk to each other

I have a problem with a Docker container running a NextJS application that is trying to access another Docker container running a NestJS API.
The environment looks like this:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b04de77cb381 ui "docker-entrypoint.s…" 9 minutes ago Up 9 minutes 0.0.0.0:8004->3000/tcp ui
6af7c952afd6 redis:latest "docker-entrypoint.s…" 2 hours ago Up 2 hours 0.0.0.0:8003->6379/tcp redis
784b6f925817 api "docker-entrypoint.s…" 2 hours ago Up 2 hours 0.0.0.0:8001->3001/tcp api
c0fb02031834 postgres:latest "docker-entrypoint.s…" 21 hours ago Up 21 hours 0.0.0.0:8002->5432/tcp db
All containers are in the same bridged network.
A 'docker network inspect ' shows all Containers.
Containers are started in different docker-compose files (ui, redis+api, db)
API to DB
The api talks to the database db with postgresql://username:password@db:5432/myDb?schema=public
Notice the 'db' being the name on the Docker Network and port 5432 in the url.
Since they are on the same network you need to use the internal port 5432 instead of 8002.
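To make the distinction explicit, here are the two URL forms side by side (credentials and database name are placeholders from the post):
# container-to-container on the same Docker network: service name + internal port
postgresql://username:password@db:5432/myDb?schema=public
# from the host machine: localhost + the published port
postgresql://username:password@localhost:8002/myDb?schema=public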
Local UI
When I run the UI on the Host (on port 3000), it is able to access the API (in the Container).
Data is transferred from db-container to api-container to ui-on-the-host.
UI in the Container
Now I also open a browser on localhost:8004. This is the UI in the Container.
The UI is accessing the api on http://api:3001/*.
It sounds logical to use the Docker network name and internal port; I also do that from API to DB.
But, this does not work: "net::ERR_NAME_NOT_RESOLVED".
Test: ncat test
Docker-Exec into the UI Container and doing a check (with ncat) shows the port is open:
/app $ nc -v api 3001
api (192.168.48.4:3001) open
Test: curl in the UI Container
(Added later)
When doing a Curl test out of the UI-Container to the API-Container I do get a result.
(See the simple/stupid debug endpoint called /dbg)
$ docker exec -u 0 -it ui /bin/bash
UI$ curl http://api:3001/dbg
{"status":"I'm alive and kicking!!"}
About the Network
I did create my own Bridged Network.
So, the network is there and it looks like all Containers are connected to the network.
/Users/bert/_> docker network inspect my-net
[
{
"Name": "my-net",
"Id": "e786d630f252cf12856372b708c309f90f8bf177b6b1f742d6ae02f5094c7223",
"Created": "2021-03-11T14:10:50.417675Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.48.0/20",
"Gateway": "192.168.48.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"6af7c952afd60a3b4f36e244273db5f9f8a993a6f738785f624ffb59f381cf3d": {
"Name": "redis",
"EndpointID": "d9a6e6f6a4467bf38d3348889c86dcc53c5b0fa5ddc9fcf17c7e722fc6673a25",
"MacAddress": "02:42:c0:a8:30:05",
"IPv4Address": "192.168.48.5/20",
"IPv6Address": ""
},
"784b6f9258179e8ac03ee8bbc8584582dd9199ef5ec1af1404f7cf600ac707e1": {
"Name": "api",
"EndpointID": "d4b82f37559a4ee567cb304f033e1394af8c83e84916dc62f7e81f3b875a6e5f",
"MacAddress": "02:42:c0:a8:30:04",
"IPv4Address": "192.168.48.4/20",
"IPv6Address": ""
},
"c0fb02031834b414522f8630fcde31482e32d948de98d3e05678b34b26a1e783": {
"Name": "db",
"EndpointID": "dde944e1eda2c69dd733bcf761b170db2756aad6c2a25c8993ca626b48dc0e81",
"MacAddress": "02:42:c0:a8:30:03",
"IPv4Address": "192.168.48.3/20",
"IPv6Address": ""
},
"d678b3e96e0f0765ed62a70cc880b07836cf1ebf17590dc0e3351e8ee8b9b639": {
"Name": "ui",
"EndpointID": "c5a8d7e3d8b31d8dacb2f343bb77f4b364f0f3e3a5ed1025cc4ec5b65b44fd27",
"MacAddress": "02:42:c0:a8:30:02",
"IPv4Address": "192.168.48.2/20",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Conclusion:
UI-Container with Curl in Container can talk to API-Container.
UI-on-Host with Browser on Host can talk to API-Container.
UI-Container with Browser on Host cannot talk to API-Container. Why??
The question then is: how can a UI container running in the browser talk to other containers over the Docker bridged network?
Ok, problem solved.
It was a matter of confusion about where the NextJS application gets the API location from.
Since the NextJS application (the UI) ultimately just runs in a browser, you need to specify the API location as seen from the browser, not as seen for inter-container communication.
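A quick way to see the distinction from the host, using the debug endpoint and the published 8004->3000 / 8001->3001 mappings from this setup (whatever URL the UI hands to the browser must be of the second form):
# only resolvable between containers on my-net:
#   http://api:3001/dbg
# what the browser (and anything else on the host) must use, via the published 8001->3001 mapping:
curl http://localhost:8001/dbg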

Problems with network connectivity and docker on Synology

I run Docker containers on a Synology NAS. All containers using the host driver have a network connection, but none of the containers using the bridge driver do. It worked in the past, but some months ago one of my experimental containers started experiencing network problems.
Environment:
Synology DS218+
DSM 6.2.3-25426 Update 2
10 GB internal memory
To simplify the description of the problem I have followed the tutorial from docker:
docker run -dit --name alpine1 alpine ash
docker run -dit --name alpine2 alpine ash
The containers have 172.17.0.2 and 172.17.0.3 as IP addresses. When I attached to alpine1 I wasn't able to ping alpine2 using its IP address (I used the address directly since the default bridge doesn't do name resolution).
I also tried to use a user defined bridge:
docker network create --driver bridge test
and connected the containers to this network (and disconnected them from the default bridge network)
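A minimal sketch of that connect/ping test (assuming the alpine1/alpine2 containers and the test network from above already exist):
docker network disconnect bridge alpine1
docker network disconnect bridge alpine2
docker network connect test alpine1
docker network connect test alpine2
# user-defined bridges do provide name resolution, so on a working setup this succeeds:
docker exec -it alpine1 ping -c 2 alpine2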
bash-4.3# docker network inspect test
[
{
"Name": "test",
"Id": "e0e203000f5cfae8103ed9b80dce113633e0e198c542f943ac2e7026cb684784",
"Created": "2020-12-22T22:47:08.331525073+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"3da4fda1508743b36540d6848c5334c84c3c9c02df88170e617d08f15e85999b": {
"Name": "alpine1",
"EndpointID": "ccf4be3f89c45dc73183210fafcfdafee9bbe30309ef15cf27e37bbb3783ea58",
"MacAddress": "02:42:ac:16:00:03",
"IPv4Address": "172.22.0.3/16",
"IPv6Address": ""
},
"c024024eb5a0e57720f7c2abe76ea5f5396a29eb02addd1f60d23075fcfcad78": {
"Name": "alpine2",
"EndpointID": "d4a8cf285d6dae7e8b7f96426a390b73ea800a72bf1739b0ea88c122de975650",
"MacAddress": "02:42:ac:16:00:02",
"IPv4Address": "172.22.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
Also in this case I wasn’t able to ping one container from the other.
Apart from DSM updates, I also upgraded the internal memory. I don't think this has anything to do with the problem, but you never know.
I had a similar issue, have you tried disabling the firewall rules on the NAS?

How to restrict access to Windows Server docker containers using Firewall?

I am trying to restrict access to Windows Docker containers to specific IP(s). It looks like this can easily be done with iptables for Linux containers, but I'm having a difficult time finding a proper solution for Windows Server containers. What I'm trying to do on a Windows container is similar to what is described in the first answer to THIS StackOverFlow question, but that's for Linux.
For starters, I cannot seem to start the Windows Defender Firewall service INSIDE the container (not on the host). What exactly happens is described in THIS StackOverFlow question. But in short, Start-Service -Name MpsSvc simply does not work. Modifying registry keys don't work. It's described in that post. So does this mean using Windows Firewall inside a container to restrict access is out of the question?
The Network isolation and security documentation states that the default outbound/inbound traffic policy on Windows Server containers is ALLOW ALL. I want to lock it down so that traffic to the container only comes from one source.
The orchestrator we're using is Azure Service Fabric. In the cluster, there's a main SYSTEM node/host that hosts Traefik (a traffic manager/load balancer), and Traefik instances run on each of the other nodes in the cluster as well. Each node has a bunch of containers - simple. So the basic idea is: I want only traffic from the primary SYSTEM node to hit the containers. Everything else needs to be blocked.
How can I achieve this?
The base image for the container we're using is mcr.microsoft.com/windows/servercore:ltsc2019. I'm trying to create our own version of the base image with some modifications (like this security hardening, logging, etc.), publish it to our Azure Container Registry (ACR), and the plan is for developers to pull from our ACR instead of Microsoft's public hub.
As for the network, not sure if it matters or not but it's using the default NAT network.
docker network inspect nat
[
{
"Name": "nat",
"Id": "78l2lk902jsxu82jskais92alxp51mcf2907djsla81m154985snjo1d69xh51da",
"Created": "2020-04-15T20:18:39.9097816-07:00",
"Scope": "local",
"Driver": "nat",
"EnableIPv6": false,
"IPAM": {
"Driver": "windows",
"Options": null,
"Config": [
{
"Subnet": "172.27.128.0/20",
"Gateway": "172.27.128.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.windowsshim.hnsid": "5045b0b6-d9a6-4b50-b1da-f66b0b770feb",
"com.docker.network.windowsshim.networkname": "nat"
},
"Labels": {}
}
]

How to set up replication from one docker couchDB to another?

I have the following docker containers running on my windows 10 host:
PS C:\Users\jj2> docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
aacbb0c8f189 couchdb:2.1.1 "tini -- /docker-ent…" 15 seconds ago Up 12 seconds 4369/tcp, 9100/tcp, 0.0.0.0:15984->5984/tcp, 0.0.0.0:15986->5986/tcp jj2_server-1_1
b00138d9c030 couchdb:2.1.1 "tini -- /docker-ent…" 16 seconds ago Up 12 seconds 4369/tcp, 9100/tcp, 0.0.0.0:25984->5984/tcp, 0.0.0.0:25986->5986/tcp jj2_server-2_1
e4c984413ac1 couchdb:2.1.1 "tini -- /docker-ent…" 16 seconds ago Up 12 seconds 0.0.0.0:5984->5984/tcp, 4369/tcp, 9100/tcp, 0.0.0.0:5986->5986/tcp jj2_server-0_1
And I'm able to launch Fauxton like so for each instance:
http://127.0.0.1:5984/
http://127.0.0.1:15984/
http://127.0.0.1:25984/
Now I try to set up replication on the main container … but I must be messing up the value for replication target.
these are the values I'm specifying:
Replication Source: Local Database
Source Name: widgets
Replication Target: New Remote Database
New Database: http://127.0.0.1:15984/widgets
Replication Type: Continuous
When I save this, the replication attempt fails... and if I reopen the configuration tool, the target is changed to "Existing local database".
This is what the original config JSON looks like:
{
"_id": "310ab1c7a68d4ae4aba039d2fa00320f",
"_rev": "2-cf1a3abced5f09ceebd9d54f42ebd65d",
"user_ctx": {
"name": "couchdb",
"roles": [
"_admin",
"_reader",
"_writer"
]
},
"source": {
"headers": {
"Authorization": "Basic Y291Y2hkYjpwYXNzd29yZA=="
},
"url": "http://127.0.0.1:5984/widgets"
},
"target": {
"headers": {
"Authorization": "Basic Y291Y2hkYjpwYXNzd29yZA=="
},
"url": "http://127.0.0.1:15984/widgets"
},
"create_target": true,
"continuous": true,
"owner": "couchdb"
}
the hint / help for the "New Database" field seems to indicate I need to use a URL... which is why I tried the 127.0.0.1.
Any suggestions would be appreciated.
EDIT 1
One thing I should add is that the two additional nodes have not had setup run on them. Meaning, I created the cluster, but when I launch the web app, it prompts me to create either a single node or a cluster. Do I have to set up each node as a single node before replication will work?
Also, this is how I created the cluster / containers in the first place:
https://github.com/apache/couchdb-docker/issues/74
I used that docker-compose.yml file.
EDIT 2
I now realize/have learned that anything pointing at 127.0.0.1 will point at the HOST machine, which is where I've strayed. But how do I point one container to another?
As far as the cluster goes, using Fauxton running on 127.0.0.1:5984, for server-0 I have added the following 2 nodes like so:
couchdb-1:5984 bind address 0.0.0.0
couchdb-2:5984 bind address 0.0.0.0
Then when I do this (notice the port):
http://127.0.0.1:15984/_node/couchdb@couchdb-1/_config
I get a legit JSON response showing that something is running under the name "couchdb-1". However, I realize that I'm still using my HOST machine to get a view into the couchdb-1 server (server-1).
Via commandline, I confirmed I have nodes like so:
PS C:\Users\jj2> curl -X GET "http://127.0.0.1:5984/_membership" --user couchdb
Enter host password for user 'couchdb':
{"all_nodes":["couchdb#couchdb-0"],"cluster_nodes":["couchdb#couchdb-0","couchdb#couchdb-1","couchdb#couchdb-2"]}
PS C:\Users\jj2>
Lastly, I thought maybe I could use the IP addresses Docker assigned to the containers, but none of them are pingable from the host. They are all 172.x.x.x addresses.
EDIT 3
In case it helps.
PS C:\Users\jj2> docker network inspect jj2_network
[
{
"Name": "jj2_network",
"Id": "a0a799f7069ff49306438d9cb7884399a66470a7f0e9ac5364600c462153f53c",
"Created": "2020-01-30T21:18:55.5841557Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.19.0.0/16",
"Gateway": "172.19.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"006b6d02cd4e962f3df9d6584d58b36b67864872446f2d00209001ec58d3cd52": {
"Name": "jj2_server-1_1",
"EndpointID": "91260368a2d5014743b41c9ab863a2acbfe0a8c7f0a18ea7ad35a3c16efb4445",
"MacAddress": "02:42:ac:13:00:03",
"IPv4Address": "172.19.0.3/16",
"IPv6Address": ""
},
"15b261831c46fb89cdc83f9deb638ada0d9d8a89ece0bc065e0a45818e9b4ce3": {
"Name": "jj2_server-2_1",
"EndpointID": "cf072d0bbd95ab86308ac4c15b71b47223b09484506e07e5233d526f46baca1e",
"MacAddress": "02:42:ac:13:00:04",
"IPv4Address": "172.19.0.4/16",
"IPv6Address": ""
},
"aeaf74cf591cffa8e7463e82b75e9ca57ebbcfd1a84d3f893ea5dcae324dbd1e": {
"Name": "jj2_server-0_1",
"EndpointID": "0a6d66b95bf973f0432b9ae88c61709e63f9e51c6bbf92e35ddf6eab5f694cc1",
"MacAddress": "02:42:ac:13:00:02",
"IPv4Address": "172.19.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "network",
"com.docker.compose.project": "jj2",
"com.docker.compose.version": "1.24.1"
}
}
]
Do you have the Docker instances bound to 0.0.0.0 or just 127.0.0.1?
If 0.0.0.0, then you can replicate by setting source and target as remote databases, using the IP address of the local machine and the specific published port of each instance.
If only 127.0.0.1, and they are both on the same Docker network (see docker network ls and docker network inspect <network_name>), then you can use the Docker network IP addresses to replicate between the containers.
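As a hedged sketch of the second option, run from the host against server-0's published port (bash-style quoting; the couchdb-0/couchdb-1 hostnames and the couchdb admin credentials are assumptions based on the compose setup and the _membership output above; the container IPs from the network inspect, e.g. 172.19.0.3, would work the same way):
curl -X POST http://127.0.0.1:5984/_replicate \
  -H "Content-Type: application/json" \
  --user couchdb \
  -d '{"source": "http://couchdb:password@couchdb-0:5984/widgets",
       "target": "http://couchdb:password@couchdb-1:5984/widgets",
       "create_target": true, "continuous": true}'
# the replication is executed inside the server-0 container, which resolves
# couchdb-0/couchdb-1 via the Docker network's DNS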
