Forbid Docker from using a specific network

Is there a way to tell Docker not to use a specific network when running docker-compose up?
I am using some out-of-the-box examples (Hyperledger), and each time Docker picks an address range that breaks my remote connection.
[xxx@xxx fabric]$ docker network inspect some_network
[
    {
        "Name": "some_network",
        "Id": "xxx",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.30.0.0/16",
                    "Gateway": "172.30.0.1/16"
                }
            ]
        },
        "Internal": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
What I would like is to tell docker (without editing the docker-compose.yaml file) to never use the network 172.30.0.0/16.

When you create a network inside docker-compose.yml, you can specify the IP range, subnet, gateway, etc. The IPAM settings go under the network definition, like this:
version: "3"
services:
  service1:
    build: .
    ...
networks:
  default:
    ipam:
      config:
        - subnet: 172.28.0.0/16
          ip_range: 172.28.5.0/24
          gateway: 172.28.5.254
To do this at the Docker daemon level, you have to pass the --bip and --fixed-cidr parameters.
From the Docker docs: https://docs.docker.com/engine/userguide/networking/default_network/custom-docker0/
--bip=CIDR: supply a specific IP address and netmask for the
docker0 bridge, using standard CIDR notation. For example:
192.168.1.5/24.
--fixed-cidr=CIDR: restrict the IP range from the docker0 subnet,
using standard CIDR notation. For example: 172.16.1.0/28. This range
must be an IPv4 range for fixed IPs, such as 10.20.0.0/16, and must
be a subset of the bridge IP range (docker0 or set using --bridge).
For example, with --fixed-cidr=192.168.1.0/25, IPs for your
containers will be chosen from the first half of addresses included in
the 192.168.1.0/24 subnet.
To make these changes permanent, create or edit /etc/docker/daemon.json and add the bip and fixed-cidr options:
{
    "bip": "192.168.1.5/24",
    "fixed-cidr": "192.168.1.0/25"
}

Related

How to connect additional network to container with ipvlan l3 network?

My setup: I have an externally defined ipvlan L3 network connected to the host NIC, named dmz_net. It spans an isolated subnet connecting several containers. This works as expected.
Now I want to create a service stack with Docker Compose. It has a backend container (database) and a service container. The backend container has its own internally defined network (default bridge mode). The service container should be connected to the dmz_net network and to the backend network.
docker compose extract
networks:
  dmz:
    external:
      name: dmz_net
  backend:
    internal: true
services:
  service:
    networks:
      dmz:
        ipv4_address: ${IPV4}
      backend:
docker network inspect dmz_net:
[
    {
        "Name": "dmz_net",
        "Id": "9b98f5e01245c8081a10fe377a450e1e5eedd08511b4e715b4469986d7aadce6",
        "Created": "2022-02-21T20:37:58.688032649+01:00",
        "Scope": "local",
        "Driver": "ipvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.20.10.0/24"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "ipvlan_mode": "l3",
            "parent": "enp36s0f1.20"
        },
        "Labels": {}
    }
]
Starting the service container fails with the error message "failed to set gateway while updating gateway: file exists".
How can I get it to work? Is it possible at all?
After a long night of experiments and too little sleep, I found the solution...
Unfortunately it isn't mentioned in the Docker documentation; only a seven-year-old issue describes the problem, along with a PR to fix it. A look into the code showed me the way.
The solution: using the ipvlan L3 driver (instead of the default bridge) for the internal (backend) network, combined with setting internal: true, does the trick.
This definition prevents the network connection from later creating a default gateway, which is not possible when another externally reachable (internal: false) L3 network is also connected.
Hopefully others find this information helpful.
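Putting the answer into compose terms, the extract from the question would end up roughly like this (a sketch; the database service name and option placement are assumptions based on the description above):

```yaml
networks:
  dmz:
    external:
      name: dmz_net
  backend:
    internal: true        # no outside reachability
    driver: ipvlan        # L3 instead of the default bridge, per the fix
    driver_opts:
      ipvlan_mode: l3
services:
  service:
    networks:
      dmz:
        ipv4_address: ${IPV4}
      backend:
  database:
    networks:
      backend:
```

With both attached networks running as ipvlan L3, Docker no longer tries to install a second default gateway on the service container, which is what triggered the "file exists" error.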

Docker container not accessible on localhost or from the same network segment

I'm new to Docker, so please correct my statements.
I'm trying to access a Docker container (e.g. an nginx web server) on port 80 from the Docker engine machine itself, but I am unable to reach it.
The Docker engine network is 10.20.20.0/24, and the Docker engine IP is 10.20.20.3.
> telnet 10.20.20.3 80 -> connection failed
tcp 0 0 10.20.20.3:80 0.0.0.0:* LISTEN 28953/docker-proxy
The Docker container IP is 172.18.0.4:
> telnet 172.18.0.4 80 -> connection success
Docker network detail:
[root@xxxxxxxxx]# docker network inspect 1984f08c739d
[
    {
        "Name": "xxxxxxxxxxxxx",
        "Id": "1984f08c739d6d6fc6b4769e877714844b9e57ca680f61edf3a528bd12ca5ad1",
        "Created": "2021-11-13T21:01:27.53591809+05:30",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "126d5128621fa6cde0389f4c6e0b53be36670bce5f62e981da6e17722b88c4a9": {
                "Name": "xxxxxxxxxxxxxxx",
                "EndpointID": "b011052062ae137e3d190032311e0fbc5850f0459c9b24d8cc1d315a9dc18773",
                "MacAddress": "02:42:ac:13:00:02",
                "IPv4Address": "172.18.0.4/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "xxxxxxxx",
            "com.docker.compose.version": "1.29.2"
        }
    }
]
I can access this nginx from other networks such as 10.20.21.0/24, but not from the same network 10.20.20.0/24 or from the Docker engine host itself.
My environment: the Docker engine VM has two interfaces, eth0 and eth1, on different subnets (the hypervisor is AHV). Each interface had its own persistent routing table and rules under /etc/sysconfig/network-scripts (route-eth0, route-eth1, rule-eth0, rule-eth1), which is why this didn't work before.
I removed the persistent routes for eth0, since eth0 doesn't need them; it then falls back to the default Linux routing table. After restarting the network, the container published on eth0 became reachable, and after the equivalent change for eth1 that worked too. Both interfaces can now be mapped to Docker networks; it works like a charm.
So the conclusion is that this was a routing issue: AHV VMs apparently don't need those per-interface routing tables, for either same-subnet or different-subnet access. The issue is resolved; I can access the Docker container via eth0 and eth1 across different subnets and on the same subnet.
Both interfaces kept working after removing the routes, a network restart, and a power cycle of the AHV VM.
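For anyone hitting a similar symptom, the checks that narrowed this down can be sketched as shell commands (values taken from the question; this is a diagnostic outline, not a fix):

```shell
# Is docker-proxy actually listening on the host port?
ss -ltnp | grep ':80 '

# Look for per-interface policy routing rules that could send reply
# packets out of the wrong interface.
ip rule show
ip route show

# Try the published port from the engine host itself.
curl -v http://10.20.20.3:80/
```

If the listener exists but connections from the same subnet time out while other subnets work, asymmetric routing from per-interface rules/tables is a likely suspect, as it was here.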

How to change the configuration of an existing Docker network

I have an existing (macvlan) Docker network called "VPN", to which I normally attach the containers that I want to run over a VPN. There are two single 'host' containers running OpenVPN, each with its own IP, and I attach other containers to these as I please.
I have recently moved. My new router is at 192.168.0.1, while the old house's router had its gateway at 192.168.2.254, and the existing Docker network has its subnet mask, IP range and gateway all configured for the old network.
If I run docker network inspect VPN it gives:
[
    {
        "Name": "VPN",
        "Id": [anidea],
        "Created": [sometimenottolongago],
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.2.0/24",
                    "IPRange": "192.168.2.128/28",
                    "Gateway": "192.168.2.254"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "parent": "enp5s0"
        },
        "Labels": {}
    }
]
There were two machines on the network, and I currently cannot access them. Both machines are containers to which other containers are attached.
I have tried:
Recreating the Docker containers with new IP addresses on the subnet of the new home network. This doesn't work, as the Docker network "VPN" only allows IPs in the old range.
Accessing the containers/machines at their old IPs. Then I get a timeout; possibly I need to set up some IP routing? This is where my knowledge (if any) starts getting cloudy.
I think the best approach is to update the Docker network "VPN" to play nicely with the new gateway/router/home network; i.e. to change the IPAM "Config" parameters to the new gateway and subnet. However, I can't find out how to do this online (the only things that come up are how to change the default settings of the default Docker network).
Long story short:
How do I change configuration/parameters of an existing docker network?
If, in the end, this is a bad way of doing things (for instance, if I can access the containers on the network as-currently-is), I'm also open for ideas.
The host machine is running ubuntu-server 20.04.1 LTS.
Thanks in advance.
The simplest approach would be to delete the VPN network and create it anew with new parameters but the same name. If you use docker-compose up to recreate containers, include the networks section in the compose file of the first container that you recreate.
First, run this to delete the existing network:
docker network rm VPN
Then add the macvlan network definition to the yml of your first re-created container. Here is the networks section I used, adapted somewhat to your situation:
networks:
  VPN:
    driver: macvlan
    enable_ipv6: true # if needed
    driver_opts:
      parent: eth0
    ipam:
      config:
        - subnet: 192.168.0.0/24
          gateway: 192.168.0.1
          ip_range: 192.168.0.8/30  # reserve some IP addresses for other
                                    # machines in that subnet - adjust as needed
        - subnet: xx:xx:xx:xx::/63  # put your IPv6 subnet here if needed
          gateway: xx:xx:xx:xx:xx::xx # IPv6 (external) address of your router
Alternatively, you could change your new router config to match the old one, and leave your macvlan VPN as is.
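The same delete-and-recreate step can also be done with the plain Docker CLI instead of compose (a sketch using the values from the question's inspect output, with the new home subnet; adjust the ranges to your LAN):

```shell
# Remove the old network (attached containers must be stopped or
# disconnected first).
docker network rm VPN

# Re-create it with the new subnet/gateway, same name and parent interface.
docker network create -d macvlan \
  --subnet=192.168.0.0/24 \
  --ip-range=192.168.0.8/30 \
  --gateway=192.168.0.1 \
  -o parent=enp5s0 \
  VPN
```

Any compose files or scripts that pin container IPs from the old 192.168.2.x range will also need updating to match.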

What is IP of bridge localhost for Docker?

I am dockerizing my application. I have two containers now, and one of them needs to talk to the other; in its config I have "mongo": "127.0.0.1". I suppose they should talk through the bridge network:
$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.0.1/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "9001"
        },
        "Labels": {}
    }
]
Should I now change "mongo": "127.0.0.1" to "mongo": "0.0.0.0"?
You can check a container's IP:
$ docker inspect <container_name> -f "{{json .NetworkSettings.Networks}}"
You will find the IPAddress attribute in the JSON output.
Yes, you should use a bridge network. The default "bridge" can be used, but it won't give you DNS resolution; check https://docs.docker.com/engine/userguide/networking/#user-defined-networks for details.
The easiest way is to use the --link option, to avoid too many changes.
For example, --link mongo01:mongo instructs Docker to use the container named mongo01 as a linked container and to name it mongo inside your application container.
So in your application you can use mongo:27017 without making any changes. (Note that container links are a legacy Docker feature; user-defined networks are the recommended replacement.)
Refer to this for more details:
https://www.thachmai.info/2015/05/10/docker-container-linking-mongo-node/
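As an alternative to legacy links, a user-defined network (which Compose creates for you by default) gives you DNS resolution by service name. A hedged sketch, assuming the app reads its Mongo address from an environment variable and uses the standard mongo image:

```yaml
# docker-compose.yml (illustrative; service names are assumptions)
version: "2"
services:
  app:
    build: .
    environment:
      # Reachable by service name thanks to the network's built-in DNS.
      MONGO_URL: mongodb://mongo:27017/mydb
  mongo:
    image: mongo
# Compose puts both services on one user-defined network automatically,
# so the hostname "mongo" resolves to the mongo container's IP.
```

So instead of "mongo": "127.0.0.1", the application config would simply point at the hostname "mongo".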

Access docker bridge using docker exec

First of all, I'm a total n00b in Docker, but I got into a project that runs in Docker, so I've been reading up on it.
My problem: I have to inspect my development environment on a mobile device (iOS). I tried to access it via the Docker IP, because that is basically what I do on my computer. After a few failed attempts I noticed that I have to access it through the Docker bridge network instead of the Docker host (the default).
I already have my Docker bridge defined (I think it's the default one), but I have no idea how to run my server on this network. Can you guys help me?
A few important notes:
I'm using Mac OS X El Capitan (10.11.1).
The device and the Mac are on the same Wi-Fi network, and outside Docker I can access localhost from the device as usual.
The steps I follow to run my server are:
cd gsat_grupo_5/docker && docker-compose -p gsat_grupo_5 up -d
docker exec -it gsatgrupo5_web_1 bash
python manage.py runserver 0.0.0.0:8000
When I run docker ps, the containers are up (the output was posted as an image).
My Docker bridge output:
[
    {
        "Name": "bridge",
        "Id": "1b3ddfda071096b16b92eb82590326fff211815e56344a5127cb0601ab4c1dc8",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "565caba7a4397a55471bc6025d38851b1e55ef1618ca7229fcb8f8dfcad68246": {
                "Name": "gsatgrupo5_mongo_1",
                "EndpointID": "471bcecbef0291d42dc2d7903f64cba6701f81e003165b6a7a17930a17164bd6",
                "MacAddress": "02:42:ac:11:00:05",
                "IPv4Address": "172.17.0.5/16",
                "IPv6Address": ""
            },
            "5e4ce98bb19313272aabd6f56e8253592518d6d5c371d270d2c6331003f6c541": {
                "Name": "gsatgrupo5_thumbor_1",
                "EndpointID": "67f37d27e86f4a53b05da95225084bf5146261304016809c99c7965fc2414068",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            },
            "a0b62a2da367e720d3a55deb7377e517015b06ebf09d153c6355b8ff30cc9977": {
                "Name": "gsatgrupo5_web_1",
                "EndpointID": "52687cc252ba36825d9e6d8316d878a9aa8b198ba2603b8f1f5d6ebcb1368dad",
                "MacAddress": "02:42:ac:11:00:06",
                "IPv4Address": "172.17.0.6/16",
                "IPv6Address": ""
            },
            "b3286bbbe9259648f15e363c8968b64473ec0a9dfe1b1a450571639b8fa0ef6f": {
                "Name": "gsatgrupo5_mysql_1",
                "EndpointID": "53290cb44cf5ed8322801d2dd0c529518f7d414b3c5d71cb6cca527767dd21bd",
                "MacAddress": "02:42:ac:11:00:04",
                "IPv4Address": "172.17.0.4/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
If there's another smart approach to accessing my environment from my mobile device, I'm listening.
I've to access with the docker network bridge instead of docker host(the default).
Unless you have a protocol that does something odd, like connecting back out to the device from the server, accessing <macip>:8000 from your device would normally be enough. Can you test the service from any other computer?
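For the common case, it's usually enough to publish the port in compose and point the device at the Mac's LAN IP; a minimal sketch (the service name web is an assumption, matching the container the question runs manage.py in):

```yaml
# docker-compose.yml fragment (hypothetical service name)
services:
  web:
    build: .
    ports:
      - "8000:8000"  # publish container port 8000 on all host interfaces
```

With that, the runserver process bound to 0.0.0.0:8000 inside the container should be reachable at http://<mac-ip>:8000 from the iOS device on the same Wi-Fi.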
If you do require direct access to the container network, that's a bit harder when using a Mac...
Docker for Mac doesn't support direct access to the bridge networks of the Linux virtual machine in which your containers run.
Docker Toolbox runs a VirtualBox VM with the boot2docker VM image. It would be possible to use this, but it's a little harder to apply custom network config to a VM that is set up and run via the docker-machine tools.
Plain VirtualBox is probably your best option: run your own VM with Docker installed.
Add two bridged network interfaces to the VM in VirtualBox, one for the VM and one for the containers, so that both can be available on your main network.
The first interface is for the host. It should pick up an address from DHCP like normal, and Docker will then be available on your normal network.
The second bridged interface can be attached to your Docker bridge, and the containers on that bridge will then be on your home network.
On pre-1.10 versions of Docker, Pipework can be used to physically map an interface into the container.
Some specific VirtualBox interface setup is required for both methods to make sure all this works.
Vagrant
Vagrant might make the VM setup a bit easier and repeatable.
$ mkdir dockervm
$ cd dockervm
$ vagrant init debian/jessie64
Vagrantfile network config:
config.vm.network "public_network", bridge: "en1: Wi-Fi (AirPort)"
config.vm.network "public_network", bridge: "en1: Wi-Fi (AirPort)"
config.vm.provider "virtualbox" do |v|
  v.customize ['modifyvm', :id, '--nictype1', 'Am79C973']
  v.customize ['modifyvm', :id, '--nicpromisc1', 'allow-all']
  v.customize ['modifyvm', :id, '--nictype2', 'Am79C973']
  v.customize ['modifyvm', :id, '--nicpromisc2', 'allow-all']
end
Note that this VM will have 3 interfaces. The first interface is for Vagrant to use as a management address and should be left as is.
Start up
$ vagrant up
$ vagrant ssh
