docker macvlan network dhcp dns on production network

We're running a Docker Swarm with 3 nodes (Ubuntu 18.04 LTS, Docker 18.09.2).
I've created a macvlan network "CONTAINER" on one of the nodes, "Swarm01".
I can connect new containers to the "CONTAINER" network on Swarm01 and ping them.
But I want to use our external production DHCP/DNS server, which is in the same subnet as the "CONTAINER" network (192.168.101.0/24).
Is this possible, and if so, how can I do it?
Many thanks!
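Worth knowing up front: Docker's macvlan driver does not lease addresses from an external DHCP server; its built-in IPAM assigns container IPs itself. A common workaround is to reserve a range your DHCP server never leases out and point containers at the production DNS server explicitly. A sketch, in which the gateway address, parent NIC name, reserved range, and DNS server IP are all assumptions to adapt:

```shell
# Create the macvlan network on the production subnet; Docker will only
# hand out addresses from --ip-range, so exclude that range from DHCP.
docker network create -d macvlan \
  --subnet=192.168.101.0/24 \
  --gateway=192.168.101.1 \
  --ip-range=192.168.101.192/27 \
  -o parent=eth0 CONTAINER

# Containers can still use the production DNS server via --dns.
docker run -d --network CONTAINER --dns 192.168.101.10 nginx
```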

Related

Access to a kubernetes cluster from inside a docker container

I have some Docker containers running with Docker Compose (node.js, databases, nginx...). I also have a minikube Kubernetes cluster.
I am trying to communicate from a node.js container to Kubernetes to manage some nodes (using the Kubernetes API and the generated config file). But I can't reach Kubernetes: I tried to ping the minikube IP from a Docker container but get no connection. From my local machine it works without problems.
Can someone help? What is wrong?
My machine is a Linux Ubuntu 20.04 and minikube uses the docker driver.
For Kubernetes to work, you need two network interfaces configured: one on a private network, and another on DHCP that gives you an internet connection. Keep in mind that the cluster is initialized against the address the DHCP interface has at that moment, so you must assign it a static IP.
Check your etcd.yaml file.
Maybe this network file configuration can help you:
# Let NetworkManager manage all devices on this system
network:
  version: 2
  renderer: NetworkManager
  ethernets:
    enp0s3:
      addresses: [10.0.0.1/24]
      dhcp4: no
      dhcp6: no
    enp0s8:
      addresses: [192.168.XX.XX/24]
      dhcp4: true
      dhcp6: no
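Assuming this is a netplan file under /etc/netplan/, it can be checked and applied with:

```shell
sudo netplan try    # applies the config and reverts automatically if not confirmed
sudo netplan apply  # applies it permanently
```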

How to access docker container in a custom network from another docker container running with host network

My program consists of a network of ROS1 and ROS2 nodes, which are pieces of software that communicate in a publish/subscribe fashion.
Assume there are 4 nodes running inside a custom network, onboard_network.
Those 4 nodes (ROS1) can only communicate with each other, so we have a bridge node (ROS1 & ROS2) that needs to sit on the edge of both onboard_network and the host network. The reason we need the host network is that the host is inside a VPN (Zerotier). Inside the VPN we also have our server (ROS2).
We also need the bridge node to use the host network because ROS2 relies on multicast, which only works in host mode.
So basically, I want a docker-compose file running 4 containers inside onboard_network and one container running on the host network. The last container needs to be reachable from the containers in onboard_network and be able to reach them too. How could I do it? Is it even possible?
If you're running a container on the host network, its network setup is identical to a non-container process running on the host.
A container can't be set to use both host networking and Docker networking.
That means your network_mode: host container can call other containers using localhost as the hostname and their published ports: (because its network is the host's network). Your bridge-network containers can call the host-network container using the special hostname host.docker.internal on macOS or Windows hosts; on Linux they need to find some reachable IP address (this is discussed further in "From inside of a Docker container, how do I connect to the localhost of the machine?").
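On Linux with Docker 20.10 or newer, the host-gateway keyword gives bridge-network containers a working host.docker.internal alias. A sketch of both directions; the image and port numbers are placeholders:

```shell
# Host-network container: reaches other containers through their
# published ports on localhost.
docker run --rm --network host curlimages/curl -s http://localhost:8080/

# Bridge-network container: map host.docker.internal to the host's
# gateway address so it can call the host-network container.
docker run --rm --add-host=host.docker.internal:host-gateway \
  curlimages/curl -s http://host.docker.internal:9090/
```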

Docker: how to access the host's network with a docker container?

How can I access the host's network with a docker container? Can I put a container on the host's network with another IP from that network?
Current situation:
Docker container (default bridge network): 172.17.0.2/16
Host (server): 10.0.0.2/24
Question:
Can I put the docker container on the 10.0.0.0/24 network as a secondary address?
(or) Can I access the host's network from the container and vice versa?
Reason:
I want to access the host's network from my container (for example: a monitoring server).
I want the container to act as a server reachable from the host's network on all ports.
Note:
I run several docker containers, so a few ports are already forwarded from the host and these should remain so. Forwarding all ports from the host's IP isn't really a solution here.
Setup on host:
basic docker system
Centos 7
Macvlan networks may be the solution you are looking for.
You can assign multiple MAC/IP addresses on virtual NICs over a single physical NIC.
There are some prerequisites for using Macvlan.
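A minimal sketch of this setup, assuming the physical NIC is eth0 and 10.0.0.1 is the LAN gateway (adjust both, and the image is a placeholder). One macvlan caveat to plan for: the host itself cannot reach its own macvlan containers directly, which a macvlan "shim" interface on the host works around:

```shell
# Macvlan network on the host's LAN subnet.
docker network create -d macvlan \
  --subnet=10.0.0.0/24 --gateway=10.0.0.1 \
  -o parent=eth0 lan

# Fixed LAN address for the container; all its ports are reachable
# from the LAN without any -p port forwarding.
docker run -d --name monitor --network lan --ip 10.0.0.50 nginx

# Host <-> macvlan-container traffic is blocked by design; a shim
# interface on the host restores it.
sudo ip link add macvlan0 link eth0 type macvlan mode bridge
sudo ip addr add 10.0.0.250/32 dev macvlan0
sudo ip link set macvlan0 up
sudo ip route add 10.0.0.50/32 dev macvlan0
```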

Network routing for docker container using macvlan

TLDR; I cannot ping my docker containers from my other network clients. Only when a container actively pings the gateway I am able to reach the containers afterwards.
On my home network (192.168.0.0/24) I run a gateway, 192.168.0.1, which hosts a DNS server and also routes the internet traffic. My docker host (192.168.0.100) has a macvlan network, created with
docker network create -d macvlan --subnet=192.168.0.0/24 --gateway=192.168.0.100 -o parent=eth0 dockernet
My containers now do get static IPs, like 192.168.0.200. The containers can actively ping other physical hosts on the network, so that works fine.
But if I spin up a new container, it cannot be pinged from my physical network. Not from the docker host (which is expected as this seems to be a limitation of the macvlan network), nor from the gateway or any other client.
Once the container actively pings the gateway, it gets also reachable for other clients.
So I guess some routing needs to be done and there I need your help.
Clients run on debian buster and I use an unmanaged switch to connect the clients.
The missing information above is that I am running Docker on Raspbian.
So this question is actually a duplicate of "Docker MACVLAN only works Outbound".
Run sudo rpi-update on the host to make it work.
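Until the firmware update is applied, a workaround consistent with the symptom (the container only becomes reachable after it sends traffic) is to announce the container's address with a gratuitous ARP. This assumes arping from the iputils package is available in the container image, and <container> and the address are placeholders:

```shell
# Send unsolicited (gratuitous) ARP replies so LAN peers learn the
# container's MAC address without the container pinging out first.
docker exec <container> arping -c 3 -U -I eth0 192.168.0.200
```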

Host unreachable after docker swarm init

I have a Windows Server 2016 Core (Hyper-V VM). Docker is installed and working, and I want to create a swarm.
IP config at the beginning:
1. Ethernet - 192.168.0.1
2. vEthernet (HNS Internal NIC) - 172.30.208.1
Then I run
docker swarm init --advertise-addr 192.168.0.1
Swarm is created, but I have lost my main IP address. IP config:
1. vEthernet (HNS internal NIC) - 172.30.208.1
2. vEthernet (HNS Transparent) - 169.254.225.229
The created swarm manager node is not reachable at the main address 192.168.0.1. I can't connect to it, and swarm workers are not able to join using this IP. Where is the problem?
A little late answering this, but... Docker is going to take over your network card when you bring up the swarm. What I did was use two network cards: one I left alone for Docker to use, and the second I used for everything else, including virtual machines.
Currently, you cannot use Docker for Mac or Docker for Windows alone to test a multi-node swarm. For a single-node swarm cluster:
If you are using Docker for Mac or Docker for Windows to test a single-node swarm, simply run docker swarm init with no arguments.
However, you can use the included version of Docker Machine to create the swarm nodes (see "Get started with Docker Machine and a local VM"), then follow the tutorial for all multi-node features.
For further info read this.
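For example (a sketch; the virtualbox driver, VM names, and <manager-ip> placeholder are assumptions to adapt, e.g. the hyperv driver on Windows):

```shell
# Create two local VMs to act as swarm nodes.
docker-machine create -d virtualbox manager1
docker-machine create -d virtualbox worker1

# Initialize the swarm on the manager VM, advertising its own address.
docker-machine ssh manager1 "docker swarm init --advertise-addr <manager-ip>"
```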
Edit:
Also refer to this
