I'm trying to create multiple user-defined bridge networks, but it seems Docker can only create 31 user-defined bridge networks per host machine. Can I raise this limit above 31 networks on my machine? Does anyone know how to do that, i.e. how to configure my host machine? Thank you for your time!
Have a look at this:
This is because Docker uses a hardcoded list of broad network ranges – 172.17-31.x.x/16 and 192.168.x.x/20 – for the bridge network driver.
You can get around this by manually specifying the subnet for each network created:
for net in {1..50}
do
docker network create -d bridge --subnet=172.18.${net}.0/24 net${net}
done
Since Docker version 18.06 the allocation ranges can be customized in the daemon configuration like so:
/etc/docker/daemon.json
{
    "default-address-pools": [
        {"base": "172.17.0.0/16", "size": 24}
    ]
}
This example would create a pool of /24 subnets out of a /16 range, increasing the bridge network limit from 31 to 255. More pools could be added as necessary. Note that this limits the number of attached containers per network to about 254.
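After restarting the Docker daemon so the new pool takes effect, a quick sanity check is to create a throwaway network (the name pool-test here is arbitrary) and confirm it was assigned a /24 from the pool; the exact subnet you get back may differ:
docker network create pool-test
docker network inspect pool-test --format '{{(index .IPAM.Config 0).Subnet}}'
docker network rm pool-test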
Yes, it is possible to extend the network limit. Edit /etc/docker/daemon.json:
{
    "default-address-pools": [
        {
            "base": "172.17.0.0/12",
            "size": 16
        },
        {
            "base": "192.168.0.0/16",
            "size": 20
        },
        {
            "base": "10.99.0.0/16",
            "size": 24
        }
    ]
}
(add the parameter if it does not exist), then run sudo service docker restart
The first two entries are Docker's default address pools; the last is an additional private network range.
With this change you have an additional 255 networks available. New networks can then be allocated from the added 10.99.0.0/16 pool.
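Since a syntax error in daemon.json will prevent the daemon from starting, it can be worth validating the file before the restart, for example (assuming python3 is available):
python3 -m json.tool /etc/docker/daemon.json && sudo service docker restart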
Related
I have an existing (MacVLAN) docker network called "VPN" to which I normally attach all the docker containers that I want to run on a VPN. There are two single 'host' docker containers running openvpn, each with their own IP, and I attach other containers to these as I please.
I have recently moved, and my new router is at address 192.168.0.1. However, the old house's router had the gateway at 192.168.2.254, and the existing docker network had the subnet mask, the IP range and the gateway all configured for this.
If I run docker network inspect VPN it gives:
[
    {
        "Name": "VPN",
        "Id": [anidea],
        "Created": [sometimenottolongago],
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.2.0/24",
                    "IPRange": "192.168.2.128/28",
                    "Gateway": "192.168.2.254"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "parent": "enp5s0"
        },
        "Labels": {}
    }
]
There were two machines on the network, and I cannot currently access them. Each is a container to which other containers are attached.
I have tried:
Recreating the docker containers with new IP addresses on the subnet of the new home network. This doesn't work, as the docker network "VPN" only allows IPs in the old range.
Accessing the docker containers/machines at their old IPs. Then I get a timeout; possibly I need to set up some IP routing? This is where my knowledge (if any) starts getting cloudy.
I think the best option is to just update the docker network "VPN" to play nicely with the new gateway/router/home network; I would like to change the IPAM["Config"] parameters to reflect the new gateway and subnet. However, I can't find out how to do this online (the only things that come up are how to change the default settings for the default docker network).
Long story short:
How do I change configuration/parameters of an existing docker network?
If, in the end, this is a bad way of doing things (for instance, if I can access the containers on the network as it currently is), I'm also open to other ideas.
The host machine is running ubuntu-server 20.04.1 LTS.
Thanks in advance.
The simplest approach to this would be to delete the VPN network and create it anew with new parameters but the same name. If you use docker-compose up to recreate containers, include the networks section in the first container that you recreate.
First, run this to delete the existing network:
docker network rm VPN
Then add the macvlan network definition to the yml of your first re-created container. Here is the networks section I used, adapted somewhat to your situation:
networks:
  VPN:
    driver: macvlan
    enable_ipv6: true   # if needed
    driver_opts:
      parent: eth0
    ipam:
      config:
        - subnet: 192.168.0.0/24
          gateway: 192.168.0.1
          ip_range: 192.168.0.8/30   # reserve some IP addresses for other machines
                                     # in that subnet - adjust as needed
        - subnet: xx:xx:xx:xx::/63   # put your IPv6 subnet here if needed
          gateway: xx:xx:xx:xx:xx::xx # IPv6 (external) of your router
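If you are not using docker-compose, roughly the equivalent network can be recreated with the plain CLI (here reusing the parent interface from your inspect output; adjust the addresses as needed):
docker network create -d macvlan \
  --subnet 192.168.0.0/24 \
  --gateway 192.168.0.1 \
  --ip-range 192.168.0.8/30 \
  -o parent=enp5s0 \
  VPN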
Alternatively, you could change your new router config to match the old one, and leave your macvlan VPN as is.
I have an HBase + HDFS setup in which the HBase master, regionservers, HDFS namenode, and datanodes are each containerized.
When running all of these containers on a single host VM, things work fine as I can use the docker container names directly, and set configuration variables as:
CORE_CONF_fs_defaultFS: hdfs://namenode:9000
for both the regionserver and datanode. The system works as expected in this configuration.
When attempting to distribute these across multiple host VMs, however, I run into issues.
I updated the config variables above to look like:
CORE_CONF_fs_defaultFS: hdfs://hostname:9000
and made sure the namenode container exposes port 9000 and maps it to the host machine's port 9000.
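For reference, that mapping is just a normal host port binding, along these lines (the image name here is only a placeholder):
docker run -d --name namenode -p 9000:9000 your-namenode-image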
It looks like the names are not resolving correctly when I use the hostname, and the error I see in the datanode logs looks like:
2019-08-24 05:46:08,630 INFO impl.FsDatasetAsyncDiskService: Deleted BP-1682518946-<ip1>-1566622307307 blk_1073743161_2337 URI file:/hadoop/dfs/data/current/BP-1682518946-<ip1>-1566622307307/current/rbw/blk_1073743161
2019-08-24 05:47:36,895 INFO datanode.DataNode: Receiving BP-1682518946-<ip1>-1566622307307:blk_1073743166_2342 src: /<ip3>:48396 dest: /<ip2>:9866
2019-08-24 05:47:36,897 ERROR datanode.DataNode: <hostname>-datanode:9866:DataXceiver error processing WRITE_BLOCK operation src: /<ip3>:48396 dst: /<ip2>:9866
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:101)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:786)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
at java.lang.Thread.run(Thread.java:748)
Where <hostname>-datanode is the name of the datanode container, and the IPs are various container IPs.
I'm wondering if I'm missing some configuration variable that would let containers from other VMs connect to the namenode, or some other change that'd allow this system to be distributed correctly. I'm wondering if the system is expecting the containers to be named a certain way, for example.
I am trying to use Test Containers to run an integration test against HBase launched in a Docker container. The problem I am running into may be a bit unique to how a client interacts with HBase.
When the HBase Master starts in the container, it stores its hostname:port in Zookeeper so that clients can find it. In this case, it stores "localhost:16000".
In my test case running outside the container, the client retrieves "localhost:16000" from Zookeeper and cannot connect. The connection fails because the port has been remapped by TestContainers to some other random port, other than 16000.
Any ideas how to overcome this?
(1) One idea is to find a way to tell the HBase Client to use the remapped port, ignoring the value it retrieved from Zookeeper, but I have yet to find a way to do this.
(2) If I could get the HBase Master to write the externally accessible host:port in Zookeeper that would also fix the problem. But I do not believe the container itself has any knowledge about how Test Containers is doing the port remapping.
(3) Perhaps there is a different solution that Test Containers provides for this sort of situation?
You can take a look at KafkaContainer's implementation where we start a Socat (fast tcp proxy) container first to acquire a semi-random port and use it later to configure the target container.
The algorithm is:
In doStart, first start Socat, targeting the original container's network alias and a port such as 12345 (see the socat one-liner after this list)
Get the mapped port (it will be something like 32109, pointing to 12345)
Make the original container (e.g. with environment variables) use the mapped port in addition to the original one, or, if only one port can be configured, see CouchbaseContainer for the more advanced option
Return Socat's host & port to the client
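For reference, the proxying that the Socat container performs boils down to a one-liner of this form (the port numbers and the hbase alias are only placeholders):
socat TCP-LISTEN:12345,fork,reuseaddr TCP:hbase:16000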
We built a new image of HBase to work with Testcontainers.
Use this image:
docker run --env HBASE_MASTER_PORT=16000 --env HBASE_REGION_PORT=16020 jcjabouille/hbase-standalone:2.4.9
Then create this container (in Scala here):
private[test] class GenericHbase2Container
  extends GenericContainer[GenericHbase2Container](
    DockerImageName.parse("jcjabouille/hbase-standalone:2.4.9")
  ) {

  // Pick free host ports for the HBase master and region server up front, so the
  // ports the container advertises are also reachable from the host.
  private val randomMasterPort: Int = FreePortFinder.findFreeLocalPort(18000)
  private val randomRegionPort: Int = FreePortFinder.findFreeLocalPort(20000)
  private val hostName: String = InetAddress.getLocalHost.getHostName

  val hbase2Configuration: Configuration = HBaseConfiguration.create

  addExposedPort(randomMasterPort)
  addExposedPort(randomRegionPort)
  addExposedPort(2181)

  // Give the container the host's hostname, so the address the HBase master
  // registers in Zookeeper resolves from the test JVM.
  withCreateContainerCmdModifier { cmd: CreateContainerCmd =>
    cmd.withHostName(hostName)
    ()
  }

  waitingFor(Wait.forLogMessage(".*0 row.*", 1))
  withStartupTimeout(Duration.ofMinutes(10))

  // Tell HBase inside the container to listen on the chosen ports, and bind
  // them 1:1 to the same ports on the host, so the advertised host:port pairs
  // are valid both inside and outside the container.
  withEnv("HBASE_MASTER_PORT", randomMasterPort.toString)
  withEnv("HBASE_REGION_PORT", randomRegionPort.toString)
  setPortBindings(Seq(s"$randomMasterPort:$randomMasterPort", s"$randomRegionPort:$randomRegionPort").asJava)

  override protected def doStart(): Unit = {
    super.doStart()
    hbase2Configuration.set("hbase.client.pause", "200")
    hbase2Configuration.set("hbase.client.retries.number", "10")
    hbase2Configuration.set("hbase.rpc.timeout", "3000")
    hbase2Configuration.set("hbase.client.operation.timeout", "3000")
    hbase2Configuration.set("hbase.client.scanner.timeout.period", "10000")
    hbase2Configuration.set("zookeeper.session.timeout", "10000")
    hbase2Configuration.set("hbase.zookeeper.quorum", "localhost")
    hbase2Configuration.set("hbase.zookeeper.property.clientPort", getMappedPort(2181).toString)
  }
}
More details here: https://hub.docker.com/r/jcjabouille/hbase-standalone
From a regular ECS container running with the bridge mode, or from a standard EC2 instance, I usually run
curl http://169.254.169.254/latest/meta-data/local-ipv4
to retrieve my IP.
In an ECS container running with the awsvpc network mode, I get the IP of the underlying EC2 instance, which is not what I want. I want the address of the ENI attached to my container. How do I do that?
A new convenience environment variable is injected by the AWS container agent into every container in AWS ECS: ${ECS_CONTAINER_METADATA_URI}
This contains the URL to the metadata endpoint, so now you can do
curl ${ECS_CONTAINER_METADATA_URI}
The output looks something like
{
    "DockerId": "redact",
    "Name": "redact",
    "DockerName": "ecs-redact",
    "Image": "redact",
    "ImageID": "redact",
    "Labels": { },
    "DesiredStatus": "RUNNING",
    "KnownStatus": "RUNNING",
    "Limits": { },
    "CreatedAt": "2019-04-16T22:39:57.040286277Z",
    "StartedAt": "2019-04-16T22:39:57.29386087Z",
    "Type": "NORMAL",
    "Networks": [
        {
            "NetworkMode": "awsvpc",
            "IPv4Addresses": [
                "172.30.1.115"
            ]
        }
    ]
}
Under the Networks key you'll find IPv4Addresses.
Your application code can then look something like this (Python):
import os
import requests

METADATA_URI = os.environ['ECS_CONTAINER_METADATA_URI']
container_metadata = requests.get(METADATA_URI).json()
ALLOWED_HOSTS.append(container_metadata['Networks'][0]['IPv4Addresses'][0])
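If you just need the address in a shell script instead, the same field can be pulled out with curl and jq (assuming jq is available in the image):
curl -s ${ECS_CONTAINER_METADATA_URI} | jq -r '.Networks[0].IPv4Addresses[0]'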
If what you actually need is the container's public IP, another option (Node.js, using the public-ip package) is:
import * as publicIp from 'public-ip';
const publicIpAddress = await publicIp.v4(); // your container's public IP
Docker version 18.09.0, build 4d60db4
OS version: CentOS 7
Problem statement:
I have a master machine and a worker machine on the same LAN. Both are connected by the docker-machine utility. The worker machine is expected to run 3 standalone Zookeeper containers. I have created a docker overlay network on my master machine using:
docker network create --attachable --subnet 200.184.152.0/24 --gateway 200.184.152.254 --driver overlay my-network
I want to bind the Zookeeper containers to fixed IP addresses as follows (a sketch of the run command is shown after the list):
zoo1 : 200.184.152.2
zoo2: 200.184.152.3
zoo3: 200.184.152.4
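The command used for the first instance looks roughly like this (the image tag and other options are just illustrative):
docker run -d --name zoo1 --network my-network --ip 200.184.152.2 zookeeper:3.4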
After moving to the shell of the worker machine by using the command :
eval $(docker-machine env worker-1)
I run Zookeeper's first instance, zoo1, at 200.184.152.2. It runs successfully.
Then I try to run the second Zookeeper instance, zoo2, at 200.184.152.3, but it fails with a network error.
Then I inspect the network using docker network inspect my-network on the worker machine, and the Containers section shows:
"Containers": {
"1a2471c777c907dc05bb0f23e819919354669de699f862e4fdc093293d145b31": {
"Name": "zoo1",
"EndpointID": "8bb5eebd1cacf36dc7cdad7fc3d72c45047c66b455c568cc9e8911d2121fd8d6",
"MacAddress": "02:42:c8:b8:98:02",
"IPv4Address": "200.184.152.2/24",
"IPv6Address": ""
},
"lb-my-network": {
"Name": "my-network-endpoint",
"EndpointID": "8f747b26e921ee1543631b758b7dfaf8c7e820c47a95849608dc3568f9957e79",
"MacAddress": "02:42:c8:b8:98:03",
"IPv4Address": "200.184.152.3/24",
"IPv6Address": ""
}
},
This lb-my-network endpoint is using an IP address required by my components, and because of this the entire automated deployment fails. Worse, the IP address of lb-my-network changes, so it keeps clashing with the IP addresses of other containers.
Is this expected? If yes, then how can I deploy a number of containers using automation when Docker randomly takes IP addresses from the pool reserved by the user's logic?
The documentation states that Docker needs three addresses for its own use (a network address, a broadcast address, and a gateway address), so 253 addresses in a /24 subnet can be used by the user. But here lb-my-network is taking an IP address from the pool that my containers are expected to use.
Help will be appreciated.