How to change the configuration of an existing docker network

I have an existing (macvlan) docker network called "VPN" to which I attach the docker containers that I want to run over a VPN. There are two single 'host' docker containers running OpenVPN, each with its own IP, and I attach other containers to these as I please.
I have recently moved, and my new router is at 192.168.0.1. The old house's router, however, had its gateway at 192.168.2.254, and the existing docker network has the subnet mask, the IP range and the gateway all configured for that.
If I run docker network inspect VPN it gives:
[
    {
        "Name": "VPN",
        "Id": [anidea],
        "Created": [sometimenottolongago],
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.2.0/24",
                    "IPRange": "192.168.2.128/28",
                    "Gateway": "192.168.2.254"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "parent": "enp5s0"
        },
        "Labels": {}
    }
]
There were two machines on the network, and I currently cannot access them. Both machines are containers to which other containers are attached.
I have tried:
Recreating the docker containers with new IP addresses on the subnet of the new home network. This doesn't work, as the docker network "VPN" only allows IPs in the old range.
Accessing the docker containers/machines at their old IPs. This just gives a timeout; possibly I need to set up some IP routing? This is where my knowledge (if any) starts getting cloudy.
I think the best option is to simply update the docker network "VPN" to play nice with the new gateway/router/home network; I would like to change the IPAM["Config"] parameters to reflect the new gateway and subnet. However, I can't find out how to do this online (the only things that come up are how to change the default settings for the default docker network).
Long story short:
How do I change configuration/parameters of an existing docker network?
If, in the end, this is a bad way of doing things (for instance, if I can already access the containers on the network as it currently is), I'm also open to ideas.
The host machine is running ubuntu-server 20.04.1 LTS.
Thanks in advance.

The simplest approach would be to delete the VPN network and create it anew with the new parameters but the same name. If you use docker-compose up to recreate your containers, include the networks section in the yml of the first container that you recreate.
First, run this to delete the existing network:
docker network rm VPN
Then add the macvlan network definition to the yml of your first re-created container. Here is the networks section I used, adapted somewhat to your situation:
networks:
  VPN:
    driver: macvlan
    enable_ipv6: true  # if needed
    driver_opts:
      parent: eth0
    ipam:
      config:
        - subnet: 192.168.0.0/24
          gateway: 192.168.0.1
          ip_range: 192.168.0.8/30  # reserve some IP addresses for other machines
                                    # in that subnet - adjust as needed
        - subnet: xx:xx:xx:xx::/63    # put your IPv6 subnet here if needed
          gateway: xx:xx:xx:xx:xx::xx # IPv6 (external) of your router
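If you don't manage this network through compose, the same delete-and-recreate can be sketched with the plain docker CLI (values lifted from the question; adjust the parent interface, subnet, range and gateway to your LAN):

```shell
# Remove the stale network, then recreate it with the new-LAN settings
docker network rm VPN
docker network create -d macvlan \
  --subnet 192.168.0.0/24 \
  --ip-range 192.168.0.8/30 \
  --gateway 192.168.0.1 \
  -o parent=enp5s0 \
  VPN
```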
Alternatively, you could change your new router config to match the old one, and leave your macvlan VPN as is.

Related

Can't deploy a docker stack or create an overlay nw in raspberry with ubuntu 22.04.1

I have a small docker swarm of 7 Raspberry Pis:
ubuntu@rpi105:~/stacks$ docker node ls
ID                          HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
me5ma5mtkl98iutcztnyar7nf   rpi102     Ready    Active                          20.10.23
125x7tjps3om8qp4awmlt9nkn   rpi103     Ready    Active                          20.10.23
cxoluolounydb8wxydfhd0pd3   rpi104     Ready    Active                          20.10.23
6psveckpp209kx29je9bdug67 * rpi105     Ready    Active         Leader           20.10.23
tva9hlxlsgsagoic92b5nsv9e   rpi106     Ready    Active         Reachable        20.10.23
qu2wboooaoux1kiy86yw2nkdk   rpi107     Ready    Active                          20.10.23
iu3nnacxqlz34lgy2tzzczxf2   rpi108     Ready    Active         Reachable        20.10.23
I can deploy services on command line without any problem, but when I try to deploy them with a stack file, it gets stuck. I am trying with this basic stack:
version: "3.9"
services:
  nginx_test:
    image: nginx:latest
    deploy:
      replicas: 1
    ports:
      - 81:80
Nothing is deployed, no matter how many hours I wait. No error is raised; it just stays forever in "New":
ubuntu@rpi105:~/stacks$ docker stack deploy -c test_nginx.yml testng
Creating network testng_default
Creating service testng_nginx_test
ubuntu@rpi105:~/stacks$ docker stack ps testng
ID             NAME                  IMAGE          NODE   DESIRED STATE   CURRENT STATE        ERROR   PORTS
owgslvqtbgfx   testng_nginx_test.1   nginx:latest          Running         New 21 minutes ago
Very suspiciously, the network that this command creates has no driver:
ubuntu@rpi105:~/stacks$ docker network ls
NETWORK ID     NAME             DRIVER    SCOPE
b19cd2106cf0   bridge           bridge    local
d6ecdf2de829   host             host      local
nys70xbvgset   ingress          overlay   swarm
46fa0761429f   none             null      local
mhggl0kyq5o5   testng_default             swarm
I don't know why it creates a network instead of using the ingress one.
Furthermore, if I manually create a network with a driver (overlay below), the network created has no driver:
ubuntu@rpi105:~/stacks$ docker network create -d overlay testnet
j5pg96332w5hvy7qoratpyvzc
ubuntu@rpi105:~/stacks$ docker network ls
NETWORK ID     NAME             DRIVER    SCOPE
b19cd2106cf0   bridge           bridge    local
d6ecdf2de829   host             host      local
nys70xbvgset   ingress          overlay   swarm
46fa0761429f   none             null      local
j5pg96332w5h   testnet                    swarm
mhggl0kyq5o5   testng_default             swarm
ubuntu@rpi105:~/stacks$ docker network inspect testnet
[
    {
        "Name": "testnet",
        "Id": "j5pg96332w5hvy7qoratpyvzc",
        "Created": "2023-02-02T19:30:36.372991984Z",
        "Scope": "swarm",
        "Driver": "",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "",
            "Options": null,
            "Config": null
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": null,
        "Options": null,
        "Labels": null
    }
]
Obviously, I am missing something. But after quite a few hours, I do not know what else I can do. Any help will be much appreciated.
For info, all Rpis in this cluster have:
Ubuntu 22.04.1 LTS
Docker Engine - Community 20.10.23
Docker Compose version v2.15.1
The ingress overlay network was totally messed up because of a wrong instruction in the Ansible code that I used to create the swarm master:
- name: Initialize Docker Swarm
  community.docker.docker_swarm:
    state: present
    advertise_addr: "{{ hostvars[inventory_hostname]['ansible_host'] }}"
    default_addr_pool: "{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}/24"  # <-- THIS ONE!
  ignore_errors: true
  tags: swarm
Commenting out that line and recreating the whole swarm got it working, up to a point.
I am still having problems with nodes in the swarm that fail to attach to the ingress overlay, but this is for a different question.
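For reference, the equivalent fix when initializing the swarm by hand is to either leave the default address pool alone or pass a dedicated private range that does not collide with the nodes' own subnet (the addresses below are just examples):

```shell
# The pool must NOT be derived from a node's own IP, as in the broken
# Ansible task above; pick an unused private range instead (example value).
docker swarm init \
  --advertise-addr 192.168.1.105 \
  --default-addr-pool 10.20.0.0/16
```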

How to connect an additional network to a container with an ipvlan L3 network?

My setup: I have an externally defined ipvlan L3 network named dmz_net, connected to the host NIC. It spans an isolated subnet connecting several containers. This works as expected.
Now I want to create a service stack with docker compose. It has a backend container (database) and a service container. The backend container has its own internally defined network (default bridge mode). The service container should be connected to both the dmz_net network and the backend network.
docker compose extract
networks:
  dmz:
    external:
      name: dmz_net
  backend:
    internal: true

services:
  service:
    networks:
      dmz:
        ipv4_address: ${IPV4}
      backend:
docker network inspect dmz_net:
[
    {
        "Name": "dmz_net",
        "Id": "9b98f5e01245c8081a10fe377a450e1e5eedd08511b4e715b4469986d7aadce6",
        "Created": "2022-02-21T20:37:58.688032649+01:00",
        "Scope": "local",
        "Driver": "ipvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.20.10.0/24"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "ipvlan_mode": "l3",
            "parent": "enp36s0f1.20"
        },
        "Labels": {}
    }
]
Starting the service container failed with the error message failed to set gateway while updating gateway: file exists.
How can I get it to work? Is it possible at all?
After a long night of experiments and too little sleep, I found the solution...
Unfortunately it isn't mentioned in the docker documentation; only a seven-year-old issue describes the problem, along with a PR to fix it. A look into the code gave me the answer...
The solution: using the ipvlan L3 driver (instead of the default bridge) for the internal (backend) network and setting internal: true does the trick.
This prevents the network connection from creating a default gateway later on, which is not possible when another externally reachable (internal=false) L3 network is connected.
Hopefully others find this information helpful.
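As a sketch, the same kind of backend network can also be created ahead of time from the CLI (the subnet below is just a placeholder; the name follows the compose extract above):

```shell
# Internal ipvlan L3 network for the backend: --internal prevents docker
# from programming a default gateway when a container joins this network,
# which is what clashed with the gateway handling of the external dmz_net.
docker network create -d ipvlan \
  -o ipvlan_mode=l3 \
  --internal \
  --subnet 172.28.10.0/24 \
  backend
```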

Windows Docker container networking to Postgres on host (Windows 10) [duplicate]

This question already has answers here:
Docker container to connect to Postgres not in docker
(2 answers)
Closed 2 years ago.
OK.. Sorry to clog up this site with endless questions.
I have a .NET REST API that works in Docker (Windows container).
But the moment I try to connect to Postgres on my host, I am unable to connect. I get "unable to connect", "request timed out", "connection was actively refused"... I have modified my connection string over a thousand times trying to get this to work.
When I look at the docker networks, I get:
C:\Windows\SysWOW64>docker network ls
NETWORK ID     NAME             DRIVER   SCOPE
4c79ae3895aa   Default Switch   ics      local
40dd0975349e   nat              nat      local
90a25f9de905   none             null     local
When I inspect my container, it says it is using the nat network:
C:\Windows\SysWOW64>docker network inspect nat
[
    {
        "Name": "nat",
        "Id": "40dd0975349e1f4b334e5f7b93a3e8fb6aef864315ca884d8587c6fa7697dec5",
        "Created": "2020-07-08T15:02:17.5277779-06:00",
        "Scope": "local",
        "Driver": "nat",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "windows",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.22.96.0/20",
                    "Gateway": "172.22.96.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "0d2dc2658a9948d84b01eaa9f5eb5a0e7815933f5af17e5abea17b82a796e1ec": {
                "Name": "***MyAPI***",
                "EndpointID": "3510dac9e5c5d49f8dce18986393e2855008980e311fb48ed8c4494c9328c353",
                "MacAddress": "00:15:5d:fc:4f:8e",
                "IPv4Address": "172.22.106.169/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.windowsshim.hnsid": "3007307C-49DC-4DB5-91C8-0E05DAC8E2B6",
            "com.docker.network.windowsshim.networkname": "nat"
        },
        "Labels": {}
    }
]
When I look at my network properties of my host I have :
Name: vEthernet (nat)
Description: Hyper-V Virtual Ethernet Adapter #2
Physical address (MAC): 00:15:5d:fc:43:56
Status: Operational
Maximum transmission unit: 1500
IPv4 address: 172.22.96.1/20
IPv6 address: fe80::d017:d598:692a:2e67%63/64
DNS servers: fec0:0:0:ffff::1%1, fec0:0:0:ffff::2%1, fec0:0:0:ffff::3%1
Connectivity (IPv4/IPv6): Disconnected
I am guessing that the nat network in docker network ls is linked to this Hyper-V adapter; both have 172.22.96.1 as the IP address.
connection string:
Server=172.22.96.1;Port=5433;Database=QuickTechAssetManager;Uid=QuickTech;Pwd=password;
So when I try to connect from the container to the host to reach Postgres, I get errors, even though I can ping the IP address.
When I look at my hosts file, host.docker.internal is set to 10.0.0.47 (my wifi connection).
Is this "Disconnected" status part of my network problems?
I have posted a few questions on this and I get one answer and then nothing further.
I would absolutely love to have someone work with me for a bit to resolve this (what should be a minor) issue.
I have modified my pg_hba.conf file; I have done everything I can find...
I will give a phone number or email to anyone who wants to help me solve this. I have been killing myself for over a week and am getting nowhere. I am not even sure if this sort of request is allowed here, but I am desperate. I am three months into a project and can't get paid until I get this one minor problem solved.
here is the other question I asked a few days ago:
Docker container to connect to Postgres not in docker
rentedtrout@gmail.com for anyone who wants to work with me on this.
Please and thank you in advance.
Have you tried using host networking?
docker run --network host "imagename"
This lets the container use the same network stack as the host, i.e. if you are able to connect to Postgres from the host, then you will be able to connect to it from the container as well (with the same IP address). Note that the host network driver is only available for Linux containers, not Windows containers.

Why can't I attach a container to a docker network?

I've created a user-defined attachable overlay swarm network. I can inspect it, but when I attempt to attach a container to it, I get the following error when running on the manager node:
$ docker network connect mrunner baz
Error response from daemon: network mrunner not found
The network is defined and is attachable
$ docker network inspect mrunner
[
    {
        "Name": "mrunner",
        "Id": "kviwxfejsuyc9476eznb7a8yw",
        "Created": "2019-06-20T21:25:45.271304082Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.1.0/24",
                    "Gateway": "10.0.1.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4098"
        },
        "Labels": null
    }
]
$ docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
4a454d677dea   bridge            bridge    local
95383b47ee94   docker_gwbridge   bridge    local
249684755b51   host              host      local
zgx0nppx33vj   ingress           overlay   swarm
kviwxfejsuyc   mrunner           overlay   swarm
a30a12f8d7cc   none              null      local
uftxcaoz9rzg   taskman_default   overlay   swarm
Why is this network connection failing?
This was answered here: https://github.com/moby/moby/issues/39391
See this:
To create an overlay network for use with swarm services, use a command like the following:
$ docker network create -d overlay my-overlay
To create an overlay network which can be used by swarm services or standalone containers to communicate with other standalone containers running on other Docker daemons, add the --attachable flag:
$ docker network create -d overlay --attachable my-attachable-overlay
So, by default an overlay network cannot be used by standalone containers; if you want that, you need to create it with --attachable to allow the network to be used by standalone containers.
Thanks to thaJeztah on the docker git repo:
The solution is as follows; essentially, make the flow service-centric:
docker network create -d overlay --attachable --scope=swarm somenetwork
docker service create --name someservice nginx:alpine
If you want to connect the service to somenetwork after it was created, update the service:
docker service update --network-add somenetwork someservice
After this; all tasks of the someservice service will be connected to somenetwork (in addition to other overlay networks they were connected to).
https://github.com/moby/moby/issues/39391#issuecomment-505050610

Forbid docker to use specific network

Is there a way to tell docker not to use a specific network when running docker-compose up?
I am using some out-of-the-box examples (Hyperledger), and each time docker takes an address range that breaks my remote connection.
[xxx#xxx fabric]$ docker network inspect some_network
[
{
"Name": "some_network",
"Id": "xxx",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.30.0.0/16",
"Gateway": "172.30.0.1/16"
}
]
},
"Internal": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]
What I would like is to tell docker (without editing the docker-compose.yaml file) to never use the network 172.30.0.0/16.
When you create a network inside docker-compose.yml, you can specify the IP range, subnet, gateway, etc. You can do it this way:
version: "3"
services:
  service1:
    build: .
    ...

networks:
  default:
    ipam:
      config:
        - subnet: 172.28.0.0/16
          ip_range: 172.28.5.0/24
          gateway: 172.28.5.254
To do this at the Docker daemon level, you have to pass the --bip and --fixed-cidr parameters.
From Docker docs: https://docs.docker.com/engine/userguide/networking/default_network/custom-docker0/
--bip=CIDR: supply a specific IP address and netmask for the docker0 bridge, using standard CIDR notation. For example: 192.168.1.5/24.
--fixed-cidr=CIDR: restrict the IP range from the docker0 subnet, using standard CIDR notation. For example: 172.16.1.0/28. This range must be an IPv4 range for fixed IPs, such as 10.20.0.0/16, and must be a subset of the bridge IP range (docker0 or set using --bridge). For example, with --fixed-cidr=192.168.1.0/25, IPs for your containers will be chosen from the first half of addresses included in the 192.168.1.0/24 subnet.
To make these changes permanent, create or edit /etc/docker/daemon.json and add the bip and fixed-cidr options:
{
    "bip": "192.168.1.5/24",
    "fixed-cidr": "192.168.1.0/25"
}
