Connect Docker container to specific network interface (e.g. eth1)? - docker

I have been here, here, and all over the Docker documentation. I think I need this explained in much simpler terms.
My Docker container needs to be able to do two things:
Connect to a device (a camera) on the host machine
Connect to a specific network interface (eth1) to send data
To satisfy them both, I have chosen to run the container as a command given by a superservice.
My docker-compose.yml looks like this:
version: "3"
services:
  superservice:
    image: docker
    command: docker run -it --device=/dev/vchiq my/image-to-run:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      replicas: 1
      mode: replicated
    stdin_open: true
    tty: true
This way I can still docker stack deploy my service onto a swarm (otherwise, specifying --device doesn't work with swarm mode). When I initiate the swarm I do so with docker swarm init --advertise-addr my:wlan:ip:addr --data-path-addr eth1.
I thought this would route the packets that I'm sending from my container through eth1 to the destination.
But, when I tcpdump -i eth1 nothing goes through it. It's all still going through wlan0.
Why is this happening, and how can I fix it?

The way I have found to do this is with a simple sudo route set default eth1. For some reason I thought that setting --data-path-addr=eth1 would also set the network interface used. Apparently, this is not the case.
I haven't spent much time with networking (though that's on my list to learn), so perhaps with some more sophisticated routing I'll be able to address the issue with a better solution. For now, I just sudo route delete default eth1 when I want to return to the wlan0 interface.
This is an OK solution for me right now, because when the machine is deployed it only uses the eth1 interface.
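For reference, the same switch can be done with iproute2, and a narrower rule avoids changing the default route for the whole machine. A sketch, assuming a hypothetical destination subnet 192.0.2.0/24 reachable via eth1:

```shell
# Route only the data traffic through eth1 and leave everything else on wlan0
# (192.0.2.0/24 is a placeholder; substitute the subnet your receiver lives on).
sudo ip route add 192.0.2.0/24 dev eth1

# To flip the default route explicitly instead:
sudo ip route replace default dev eth1    # send all traffic via eth1
sudo ip route replace default dev wlan0   # revert to wlan0

# Check which interface a given destination would use:
ip route get 192.0.2.10
```

With the subnet route in place, only traffic for that subnet leaves via eth1, so there is nothing to undo when moving between networks.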

Related

Can't connect to localhost of the host machine from inside of my Docker container

The question is basic: how do I connect to the localhost of the host machine from inside a Docker container?
I tried answers from this post, using --add-host host.docker.internal:host-gateway or --network=host when running my container, but none of these methods seem to work.
I have a simple hello-world webserver up on my machine, and I can see its contents with curl localhost:8000 from my host, but I can't curl it from inside the container. I tried curl host.docker.internal:8000, curl localhost:8000, and curl 127.0.0.1:8000 from inside the container (depending on which solution I used to make localhost available there), but none of them work and I get a Connection refused error every time.
I asked somebody else to try this out for me on their own machine and it worked for them, so I don't think I'm doing anything wrong.
Does anybody have any idea what is wrong with my containers?
Host machine: Ubuntu 20.04.01
Docker version: 20.10.7
Used image: Ubuntu 20.04 (and i386/ubuntu18.04)
Temporary solution
This does not completely solve the problem for production purposes, but at least to get localhost working, adding these lines to docker-compose.yml solved my issue for now (source):
services:
  my-service:
    network_mode: host
I am using Apache NiFi to expose Java REST endpoints, with the same Ubuntu and Docker versions, so in my case it looks like this:
services:
  nifi:
    network_mode: host
After changing docker-compose.yml, I recommend stopping the docker containers, removing them (docker-compose rm - do not use this if you need to keep some containers; otherwise use docker container rm container_id), and building again with docker-compose up --build.
In this case, I needed to use a different localhost IP to reach my service from a browser (NiFi started on another IP, 127.0.1.1, but works fine as well).
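Independent of the workaround above, one common cause of Connection refused through host.docker.internal is that the server on the host is bound to 127.0.0.1 only, which is unreachable from a container's network namespace. A quick check, assuming the server is on port 8000:

```shell
# See which address the server is listening on:
ss -tlnp | grep :8000
# 127.0.0.1:8000 -> only reachable from the host itself
# 0.0.0.0:8000   -> reachable from containers via host.docker.internal

# For example, Python's built-in server can be told to listen on all interfaces:
python3 -m http.server 8000 --bind 0.0.0.0
```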
Searching for the problem / deeper into ubuntu-docker networking
Firstly, I will write down some useful commands that may be useful to find out a solution for the docker-ubuntu networking issue:
ip a - show all routing, network devices, interfaces and tunnels (mainly I can observe state DOWN for docker0)
ifconfig - list all interfaces
brctl show - ethernet bridge administration (docker0 has no attached interface / veth pair)
docker network ls - lists docker networks - names, drivers, scope...
docker network inspect bridge - I can see that the docker0 bridge has no attached docker containers - an empty, unused bridge
(useful link for ubuntu-docker networking explanation)
I guess the problem lies within the veth pair (see link above): when docker-compose runs, a new bridge (not docker0) is created and connected to a veth pair, and docker0 goes unused. My guess is that if docker0 were used, then host.docker.internal:host-gateway would work. Somehow in Ubuntu networking docker0 is not used as the default bridge, and this may be what needs to change.
I don't have much more time to spend on this, but I hope someone can use this information to resolve the core of the problem later on.
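One more datapoint that may help whoever picks this up: the compose-created bridge still gives the container a route to the host, so the host's address on that bridge can stand in for host.docker.internal. A sketch (container name and port are placeholders):

```shell
# From inside the container, the default gateway is the host's address
# on the compose-created bridge:
docker exec my-container ip route
# default via 172.18.0.1 dev eth0 ...

# That gateway address can then be curled instead of host.docker.internal
# (the host server must not be bound to 127.0.0.1 only):
docker exec my-container curl 172.18.0.1:8000
```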

Problem exposing port with Docker swarm and windows containers

I am trying to run a container in docker swarm mode using docker stack deploy. I have read quite a lot of things about exposing ports of deployed services in docker swarm for windows containers, but none of them work for me.
I am trying to access a container from my local machine and the swarm stack is deployed on the same machine.
I have tried
deploy:
  mode: global
ports:
  - target: 80
    published: 8080
    mode: host
but that does not work. In this case it seems like the container fails to start and new containers are created immediately. How can I inspect what the reason for the crashing is? When I do docker container ls, no results are returned.
I have also tried to assign an external NAT network but this does not seem to work and I get the following message
network is declared as external, but it is not in the right scope: "local" instead of "swarm"
so I cannot attach to it.
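Not an answer to the Windows-specific part, but two things may help with the debugging described above. Exited swarm tasks do not appear in docker container ls; they are inspected at the service level. And the scope error usually means the network was created as a plain (local-scoped) bridge/NAT network, while an external network referenced by a stack must be swarm-scoped. A sketch (stack and network names are placeholders):

```shell
# Find out why the tasks keep restarting:
docker service ps --no-trunc mystack_myservice   # task states and full error messages
docker service logs mystack_myservice            # container output, if any

# Recreate the external network with swarm scope
# (on Linux this would typically be an overlay network):
docker network create --driver overlay --scope swarm mynet
```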
Using
deploy:
  endpoint_mode: dnsrr
does not help either. What I do in this case is look at the IP when getting into the service, but the IP address I get is 10.0.7.2, which does not allow me to connect to the service from my host.
I would really appreciate it if someone could give me some tips on what I am doing wrong and how I can access my own container in a docker swarm service from my machine.
I have been stuck for quite a while and am probably missing some important bits.

Hiding a docker container behind OpenVPN, in docker swarm, with an overlay network

The goal: To deploy on docker swarm a set of services, one of which is only available for me when I am connected to the OpenVPN server which has also been spun up on docker swarm.
How can I, step by step, only connect to a whoami example container, with a domain in the browser, when I am connected to a VPN?
Background
The general idea would be have, say, kibana and elasticsearch running internally which can only be accessed when on the VPN (rather like a corporate network), with other services running perfectly fine publicly as normal. These will all be on separate nodes, so I am using an overlay network.
I do indeed have OpenVPN running on docker swarm along with a whoami container, and I can connect to the VPN; however, it doesn't look like my IP is changing, and I have no idea how to make the whoami container available only when on the VPN, especially considering I'm using a multi-host overlay network. I'm also using traefik, a reverse proxy which provides me with a mostly automatic Let's Encrypt setup (via DNS challenge) for wildcard domains. With this I can get:
https://traefik.mydomain.com
But I also want to connect to vpn.mydomain.com (which I can do right now), and then be able to visit:
https://whoami.mydomain.com
...which I cannot. Yet. I've posted my traefik configuration in a different place in case you want to take a look, as this thread will grow too big if I post it here.
Let's start with where I am right now.
OpenVPN
Firstly, the interesting thing about OpenVPN and docker swarm is that OpenVPN needs to run in privileged mode because it has to make network interface changes, amongst other things, and swarm doesn't support CAP_ADD capabilities yet. So the idea is to launch the container via a sort of 'proxy container' that runs the container manually with these privileges added for you. It's a workaround for now, but it means you can deploy the service with swarm.
Here's my docker-compose for OpenVPN:
vpn-udp:
  image: ixdotai/swarm-launcher:latest
  hostname: mainnode
  environment:
    LAUNCH_IMAGE: ixdotai/openvpn:latest
    LAUNCH_PULL: 'true'
    LAUNCH_EXT_NETWORKS: 'app-net'
    LAUNCH_PROJECT_NAME: 'vpn'
    LAUNCH_SERVICE_NAME: 'vpn-udp'
    LAUNCH_CAP_ADD: 'NET_ADMIN'
    LAUNCH_PRIVILEGED: 'true'
    LAUNCH_ENVIRONMENTS: 'OVPN_NATDEVICE=eth1'
    LAUNCH_VOLUMES: '/etc/openvpn:/etc/openvpn:rw'
  volumes:
    - '/var/run/docker.sock:/var/run/docker.sock:rw'
  networks:
    - my-net
  deploy:
    placement:
      constraints:
        - node.hostname==mainnode
I can deploy the above with docker stack deploy --with-registry-auth --compose-file docker/docker-compose.prod.yml my-app-name, and this is what I'm using for the rest. Importantly, I cannot just deploy this, as it won't start yet: the OpenVPN configuration needs to exist in /etc/openvpn on the node, which is then mounted into the container. I do this during provisioning:
# Note that you have to create the overlay network with --attachable for standalone containers
docker network create -d overlay app-net --attachable
# Create the config
docker run -v /etc/openvpn:/etc/openvpn --log-driver=none --rm ixdotai/openvpn ovpn_genconfig -u udp://vpn.mydomain.com:1194 -b
# Generate all the vpn files, setup etc.
docker run -v /etc/openvpn:/etc/openvpn --log-driver=none --rm ixdotai/openvpn bash -c 'yes yes | EASYRSA_REQ_CN=vpn.mydomain.com ovpn_initpki nopass'
# Set up a client config and grab the .ovpn file used for connecting
docker run -v /etc/openvpn:/etc/openvpn --log-driver=none --rm ixdotai/openvpn easyrsa build-client-full client nopass
docker run -v /etc/openvpn:/etc/openvpn --log-driver=none --rm ixdotai/openvpn ovpn_getclient client > client.ovpn
So now, I have an attachable overlay network, and when I deploy this, OpenVPN is up and running on the first node. I can grab a copy of client.ovpn and connect to the VPN. Even if I check "send all traffic through the VPN" though, it looks like the IP isn't being changed, and I'm still nowhere near hiding a container behind it.
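A sketch of the routing check: "send all traffic through the VPN" only takes effect if the server pushes a redirect-gateway route to clients. The config path below assumes the ixdotai/openvpn layout, with the server config under /etc/openvpn on the node:

```shell
# On the node, check whether the server pushes a default-route override:
grep redirect-gateway /etc/openvpn/openvpn.conf
# If missing, add:
#   push "redirect-gateway def1"
# then redeploy the vpn service and reconnect.

# On a connected client, the tunnel should now own the default route,
# visible as two half-default routes via the tun interface:
ip route | grep -E '^(0\.0\.0\.0/1|128\.0\.0\.0/1)'
```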
Whoami
This simple container can be deployed with the following in docker-compose:
whoami:
  image: "containous/whoami"
  hostname: mainnode
  networks:
    - ${DOCKER_NETWORK_NAME}
  ports:
    - 1337:80
  deploy:
    placement:
      constraints:
        - node.hostname==mainnode
I put port 1337 there for testing, as I can visit my IP:1337 and see it, but this doesn't achieve my goal of having whoami.mydomain.com only resolving when connected to OpenVPN.
I can ping a 192.168 address when connected to the vpn
I ran the following on the host node:
ip -4 address add 192.168.146.16/24 dev eth0
Then, when connected to the VPN, I can reach this address! So it looks like something is working at least.
How can I achieve the goal stated at the top? What is required? What OpenVPN configuration needs to exist, what network configuration, and what container configuration? Do I need a custom DNS solution as I suggest below? What better alternatives are there?
Some considerations:
I can make the domains, including the private whoami.mydomain.com, public. This means I would have https and could get wildcard certificates for them easily, I suppose? But my confusion here is: how can I have those domains resolve only on the VPN while still having TLS certs for them, without using a self-signed certificate?
I can also run my own DNS server for resolving. I have tried this, but I just couldn't get it working, probably because the VPN part isn't working properly yet. I found dnsmasq for this, and I had to add the aforementioned local IP to resolv.conf to get anything resolving locally. But domains would still not resolve when connected to the VPN, so it doesn't look like DNS traffic was going over the VPN either (even though I set it as such - my client is Viscosity).
Some mention using a bridge network, but a bridge network does not work for multi-host setups.
Resources thus far (I will update with more)
- Using swarm-launcher to deploy OpenVPN
- A completely non-explanatory answer on stackexchange which I have seen referenced as basically unhelpful by multiple people across other Github threads, and one of the links is dead
So I was banging my head against a brick wall about this problem and just sort of "solved" it by pivoting your idea:
Basically, I opened the port of the VPN container to its host and then enabled a proxy. This means that I can reach that proxy by visiting the IP of the machine on which the VPN resides (i.e. the Docker host of the VPN container/stack).
Hang with me:
I used gluetun for the VPN, but I think this also applies if you use the OpenVPN one; I just find gluetun easier.
Also, IMPORTANT NOTE: I tried this in a localhost environment, but theoretically this should also work in a multi-host situation, since I'm working with separate stacks. In a multi-host situation you probably need to use the public IP of the main docker host.
1. Create the network
So, first of all you create an attachable network for this docker swarm stacks:
docker network create --driver overlay --attachable --scope swarm vpn-proxy
By the way, I'm starting to think this step is superfluous, but I need to test it more.
2. Set the vpn stack
Then you create your vpn stack file, lets call it stack-vpn.yml:
(Here I used gluetun through the swarm-launcher "trick". This gluetun service connects to a VPN via WireGuard. It also enables an HTTP proxy on port 8888, and this port is mapped to its host by setting LAUNCH_PORTS: '8888:8888/tcp'.)
version: '3.7'
services:
  vpn_launcher:
    image: registry.gitlab.com/ix.ai/swarm-launcher
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock:rw'
    networks:
      - vpn-proxy
    environment:
      LAUNCH_IMAGE: qmcgaw/gluetun
      LAUNCH_PULL: 'true'
      LAUNCH_EXT_NETWORKS: 'vpn-proxy'
      LAUNCH_PROJECT_NAME: 'vpn'
      LAUNCH_SERVICE_NAME: 'vpn-gluetun'
      LAUNCH_CAP_ADD: 'NET_ADMIN'
      LAUNCH_ENVIRONMENTS: 'VPNSP=<your-vpn-service> VPN_TYPE=wireguard WIREGUARD_PRIVATE_KEY=<your-private-key> WIREGUARD_PRESHARED_KEY=<your-preshared-key> WIREGUARD_ADDRESS=<addrs> HTTPPROXY=on HTTPPROXY_LOG=on'
      LAUNCH_PORTS: '8888:8888/tcp'
    deploy:
      placement:
        constraints: [ node.role == manager ]
      restart_policy:
        condition: on-failure
networks:
  vpn-proxy:
    external: true
Notice that both the swarm-launcher and the gluetun containers use the previously created vpn-proxy network.
3. Set the workers stack
For now we will use an example with 3 replicas of the alpine image (filename stack-workers.yml):
version: '3.7'
services:
  alpine:
    image: alpine
    networks:
      - vpn-proxy
    command: 'ping 8.8.8.8'
    deploy:
      replicas: 3
networks:
  vpn-proxy:
    external: true
They also use the vpn-proxy overlay network.
4. Launch our stacks
docker stack deploy -c stack-vpn.yml vpn
docker stack deploy -c stack-workers.yml workers
Once they are up you can access any worker task and try to use the proxy by using the host ip where the proxy resides.
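For example, to verify from inside a worker task that traffic actually leaves through the VPN (the host IP 192.168.1.10 is a placeholder for wherever the proxy port is published):

```shell
# Get a shell in one of the worker containers:
docker exec -it <worker-container-id> sh

# busybox wget honors the http_proxy environment variable:
http_proxy=http://192.168.1.10:8888 wget -qO- http://ifconfig.me
# The printed address should be the VPN exit IP, not the host's own IP.
```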
As I said before, theoretically this should work in a multi-host situation, but you probably need to use the public IP of the main docker host (although if the hosts share the same overlay network, it could also work with the internal 192.168.x.x address).

Docker for windows: how to access container from dev machine (by ip/dns name)

Questions like this seem to have been asked before, but I really don't get it at all.
I have a Windows 10 dev machine with Docker for Windows installed. Besides other networks, it has a DockerNAT network with IP 10.0.75.1.
I run some containers with docker-compose:
version: '2'
services:
  service_a:
    build: .
    container_name: docker_a
It created some network bla_default, and the container has IP 172.18.0.4. Of course I cannot connect to 172.18.0.4 from the host - it doesn't have any network interface for this.
What should I do to be able to access this container from the HOST machine (by IP, and if possible by some DNS name)? What should I add to my docker-compose.yml, and how should I configure the networks?
This should be something basic, but I really don't understand how all this stuff works or how to access the container from the host directly.
Allow access to internal docker networks from dev machine:
route /P add 172.0.0.0 MASK 255.0.0.0 10.0.75.2
Then use this https://github.com/aacebedo/dnsdock to enable DNS discovery.
Tips:
> docker run -d -v /var/run/docker.sock:/var/run/docker.sock --name dnsdock --net bridge -p 53:53/udp aacebedo/dnsdock:latest-amd64
> add 127.0.0.1 as DNS server on dev machine
> Use labels described in docs to have pretty dns names for containers
So the answer to the original question:
YES WE CAN!
(Though this is no longer relevant for me.)
MAKE DOCKER GREAT AGAIN!
The easiest option is port mapping: https://docs.docker.com/compose/compose-file/#/ports
just add
ports:
  - "8080:80"
to the service definition in compose. If your service listens on port 80, requests to localhost:8080 on your host will be forwarded to the container. (I'm using docker-machine, so my docker host is at another IP, but I think localhost is how Docker for Windows appears.)
Treating the service as a single process listening on one (or a few) ports has worked best for me, but if you want to start reading about networking options, here are some places to dig in:
https://docs.docker.com/engine/userguide/networking/
Docker's official page on networking - a very high level introduction, with most of the detail on the default bridge behavior.
http://www.linuxjournal.com/content/concerning-containers-connections-docker-networking
More information on network layout within a docker host
http://www.dasblinkenlichten.com/docker-networking-101-host-mode/
Host mode is kind of mysterious, and I'm curious if it works similarly on windows.

Why does using DOCKER_OPTS="--iptables=false" break the DNS discovery for docker-compose?

When I add this line to my /etc/default/docker
DOCKER_OPTS="--iptables=false"
then DNS no longer works. A group of containers started by docker-compose is no longer able to find each other:
version: '2'
services:
elasticsearch:
image: elasticsearch:latest
volumes:
- ./esdata:/usr/share/elasticsearch/data
kibana:
image: kibana:latest
environment:
- ELASTICSEARCH_URL=http://elasticsearch:9200
The above stops working when iptables=false is set: the kibana container is not able to 'find' the elasticsearch container. But when the option is removed (and the docker engine restarted), it works fine.
Why is this?
(And more to the point, why is iptables=false not the default setting when ufw is used?)
Thanks.
From https://docs.docker.com/v1.5/articles/networking/#between-containers
Whether a container can talk to the world is governed by two factors.
Is the host machine willing to forward IP packets? This is governed by the ip_forward system parameter. Packets can only pass between containers if this parameter is 1. Usually you will simply leave the Docker server at its default setting --ip-forward=true and Docker will go set ip_forward to 1 for you when the server starts up.
Do your iptables allow this particular connection? Docker will never make changes to your system iptables rules if you set --iptables=false when the daemon starts. Otherwise the Docker server will append forwarding rules to the DOCKER filter chain.
Docker will not delete or modify any pre-existing rules from the DOCKER filter chain. This allows the user to create in advance any rules required to further restrict access to the containers.
From https://docs.docker.com/engine/installation/linux/ubuntulinux/#enable-ufw-forwarding
If you use UFW (Uncomplicated Firewall) on the same host as you run Docker, you’ll need to do additional configuration. Docker uses a bridge to manage container networking. By default, UFW drops all forwarding traffic. As a result, for Docker to run when UFW is enabled, you must set UFW’s forwarding policy appropriately.
I think the entire recipe for your case would be:
DEFAULT_FORWARD_POLICY="ACCEPT"
DOCKER_OPTS="--iptables=false"
Configure NAT in iptables
For more details you could see Running Docker behind the ufw firewall
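A sketch of the "Configure NAT in iptables" step, assuming the default docker0 subnet 172.17.0.0/16 and an uplink on eth0 (both placeholders for your setup). With --iptables=false, Docker no longer installs its MASQUERADE rule, so outbound container traffic has to be NATed manually:

```shell
# Masquerade container traffic leaving through any interface other than docker0:
sudo iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE

# Allow forwarding between the bridge and the uplink:
sudo iptables -A FORWARD -i docker0 -o eth0 -j ACCEPT
sudo iptables -A FORWARD -i eth0 -o docker0 -m state --state RELATED,ESTABLISHED -j ACCEPT
```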