Docker exposed port stops working when connected to a VPN - docker

I'm trying to create a Docker image which will forward a port through a VPN. I've created a simple image which exposes port 5144, and tested that it works properly:
sudo docker run -t -d -p 5144:5144 \
--name le-bridge \
--cap-add=NET_ADMIN \
--device=/dev/net/tun \
bridge
sudo docker exec -it le-bridge /bin/bash
I check that the port is exposed correctly like this:
[CONTAINER] root@6116787b1c1e:~# nc -lvvp 5144
[HOST] user$ nc -vv 127.0.0.1 5144
Then, whatever I type is correctly echoed in the container's terminal. However, as soon as I start the openvpn daemon, this doesn't work anymore:
[CONTAINER] root@6116787b1c1e:~# openvpn logger.ovpn &
[1] 33
Sun Apr 5 22:52:54 2020 OpenVPN 2.4.4 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on May 14 2019
Sun Apr 5 22:52:54 2020 library versions: OpenSSL 1.1.1 11 Sep 2018, LZO 2.08
Sun Apr 5 22:52:54 2020 TCP/UDP: Preserving recently used remote address: [AF_INET]
Sun Apr 5 22:52:54 2020 UDPv4 link local (bound): [AF_INET][undef]:1194
Sun Apr 5 22:52:54 2020 UDPv4 link remote:
Sun Apr 5 22:52:54 2020 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
Sun Apr 5 22:52:55 2020 [] Peer Connection Initiated with [AF_INET]
Sun Apr 5 22:53:21 2020 TUN/TAP device tun0 opened
Sun Apr 5 22:53:21 2020 do_ifconfig, tt->did_ifconfig_ipv6_setup=0
Sun Apr 5 22:53:21 2020 /sbin/ip link set dev tun0 up mtu 1500
Sun Apr 5 22:53:21 2020 /sbin/ip addr add dev tun0 10.X.0.2/24 broadcast 10.X.0.255
Sun Apr 5 22:53:21 2020 Initialization Sequence Completed
root@6116787b1c1e:~#
root@6116787b1c1e:~# nc -lvvp 5144
listening on [any] 5144 ...
From here, using the exact same netcat command, I cannot reach the exposed port anymore from the host.
What am I missing?
EDIT: It's maybe worth mentioning that after the VPN is started, the connection still succeeds from the host; it just never reaches the netcat process inside the container.

I'm not exactly sure why, but it turns out that routes need to be fixed inside the container. In my case, the following command solves the issue:
ip route add 192.168.0.0/24 via 172.17.42.1 dev eth0
...where 172.17.42.1 is the IP of the docker0 interface on my host.
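To adapt this, here's a minimal sketch of both steps (the 192.168.0.0/24 subnet and the 172.17.42.1 bridge address are from my setup; substitute your own):
# On the host: find the docker0 bridge address
ip -4 addr show docker0
# Inside the container: send replies for the host's LAN back through
# the Docker bridge instead of the VPN tunnel
ip route add 192.168.0.0/24 via 172.17.42.1 dev eth0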
Hopefully this is helpful to someone one day.

Related

problem with docker container creating a VPN tunnel

I'm trying to make an OpenVPN server using Docker. I started by creating a tunnel between 2 containers. After installing OpenVPN on both containers, the command:
openvpn --dev tun1 --ifconfig 10.0.0.1 10.0.0.2
gave me this error:
Mon Jul 12 12:26:28 2021 disabling NCP mode (--ncp-disable) because not in P2MP client or server mode
Mon Jul 12 12:26:28 2021 OpenVPN 2.4.7 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Apr 27 2021
Mon Jul 12 12:26:28 2021 library versions: OpenSSL 1.1.1f 31 Mar 2020, LZO 2.10
Mon Jul 12 12:26:28 2021 ******* WARNING *******: All encryption and authentication features disabled -- All data will be tunnelled as clear text and will not be protected against man-in-the-middle changes. PLEASE DO RECONSIDER THIS CONFIGURATION!
Mon Jul 12 12:26:28 2021 ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such file or directory (errno=2)
Mon Jul 12 12:26:28 2021 Exiting due to fatal error
Is the problem related to working inside a container?
Is it fine to run an OpenVPN server in an Ubuntu-based container?
If there are any other tips for setting up an OpenVPN server, please tell me; I'm new to this topic.
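For context, the first question on this page hits this exact error head-on: the container has to be started with the host's tun device passed through, otherwise /dev/net/tun doesn't exist and OpenVPN exits with errno=2. A minimal sketch of the flags, mirroring the docker run at the top of the page (the image name is a placeholder):
docker run -t -d \
--cap-add=NET_ADMIN \
--device=/dev/net/tun \
your-openvpn-image
Both containers would need these flags, since openvpn opens a tun device on each end.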

Creating a PHP Web Server Based on CentOS 8 with Docker

Here is my Dockerfile to create a simple web server based on CentOS 8:
FROM centos:8
RUN yum -y update && \
yum -y install httpd php
COPY . /var/www/html
CMD ["httpd", "-D", "FOREGROUND"]
I build and run the container with the following commands:
docker build -t web .
docker run --rm --name web -p 8000:80 --network net1 --mount type=bind,source=`pwd`,target=/var/www/html web
The error I see when accessing http://localhost:8000 is:
Service Unavailable
The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
The httpd logs say:
[Sat Jun 20 04:42:02.970003 2020] [suexec:notice] [pid 1:tid 140041021270272] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.19.0.2. Set the 'ServerName' directive globally to suppress this message
[Sat Jun 20 04:42:02.994125 2020] [lbmethod_heartbeat:notice] [pid 1:tid 140041021270272] AH02282: No slotmem from mod_heartmonitor
[Sat Jun 20 04:42:02.995333 2020] [http2:warn] [pid 1:tid 140041021270272] AH02951: mod_ssl does not seem to be enabled
[Sat Jun 20 04:42:03.001899 2020] [mpm_event:notice] [pid 1:tid 140041021270272] AH00489: Apache/2.4.37 (centos) configured -- resuming normal operations
[Sat Jun 20 04:42:03.002120 2020] [core:notice] [pid 1:tid 140041021270272] AH00094: Command line: 'httpd -D FOREGROUND'
[Sat Jun 20 04:42:04.782201 2020] [proxy:error] [pid 8:tid 140040377865984] (2)No such file or directory: AH02454: FCGI: attempt to connect to Unix domain socket /run/php-fpm/www.sock (*) failed
[Sat Jun 20 04:42:04.782280 2020] [proxy_fcgi:error] [pid 8:tid 140040377865984] [client 172.19.0.1:41072] AH01079: failed to make connection to backend: httpd-UDS
The problem is not clear to me. I think it's a php-fpm issue, but I have no idea how to fix it. I looked this up, but all the solutions seem complicated. Is there a simple way to tell PHP to work with the server in this Docker image?
I recently read about using process managers like supervisord for cases where one needs to start several services per container. But is it possible to start PHP-FPM in a simpler way inside the web container?
I managed to tackle it when using a base RHEL 8 image with:
CMD ["bash", "-c", "/usr/sbin/apachectl start; /usr/sbin/php-fpm --nodaemonize"]
But when I tried it with a base CentOS 8 image, it doesn't work, and I don't know why.
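For what it's worth, a sketch of the same idea on a centos:8 base, assuming (untested) that php-fpm's /run/php-fpm socket directory has to be created by hand because systemd-tmpfiles doesn't run inside the container:
FROM centos:8
RUN yum -y update && \
yum -y install httpd php php-fpm
COPY . /var/www/html
# php-fpm daemonizes by default; create its socket directory first,
# then keep httpd in the foreground as the container's main process
CMD ["bash", "-c", "mkdir -p /run/php-fpm && php-fpm && exec httpd -D FOREGROUND"]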

attempt to change docker data-root fails - why

I am trying to set my docker storage dir as other than default, something I've done on other machines:
/etc/docker/daemon.json:
{
  "data-root": "/mnt/x/y/docker_data"
}
where the storage dir looks like
jeremyr#snorble:~$ ls -ltr /mnt/x/y
total 4
drwxrwxrwx 11 jeremyr 5001 122 Mar 19 08:14 docker_data
With the daemon.json file in place, sudo systemctl restart docker hits "Job for docker.service failed" (without that daemon.json, docker restarts fine and docker run hello-world runs fine). With the daemon.json in place, journalctl -xn shows:
Mar 25 14:20:33 bolt88 systemd[1]: docker.service start request repeated too quickly, refusing to start.
Mar 25 14:20:33 bolt88 systemd[1]: Failed to start Docker Application Container Engine.
-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker.service has failed.
--
-- The result is failed.
Mar 25 14:20:33 bolt88 systemd[1]: Unit docker.service entered failed state.
Mar 25 14:20:34 bolt88 sudo[23961]: jeremyr : TTY=pts/18 ; PWD=/home/jeremyr ; USER=root ; COMMAND=/bin/journalctl -xn
Mar 25 14:20:34 bolt88 sudo[23961]: pam_unix(sudo:session): session opened for user root by jeremyr(uid=0)
while systemctl status docker.service just shows code=exited, status=1/FAILURE
and in dmesg I see this:
[Mon Mar 25 14:21:41 2019] aufs au_opts_verify:1570:dockerd[20714]: dirperm1 breaks the protection by the permission bits on the lower branch
[Mon Mar 25 14:21:41 2019] device veth34d1dfd entered promiscuous mode
[Mon Mar 25 14:21:41 2019] IPv6: ADDRCONF(NETDEV_UP): veth34d1dfd: link is not ready
[Mon Mar 25 14:21:41 2019] IPv6: ADDRCONF(NETDEV_CHANGE): veth34d1dfd: link becomes ready
[Mon Mar 25 14:21:41 2019] docker0: port 1(veth34d1dfd) entered forwarding state
[Mon Mar 25 14:21:41 2019] docker0: port 1(veth34d1dfd) entered forwarding state
[Mon Mar 25 14:21:41 2019] docker0: port 1(veth34d1dfd) entered disabled state
[Mon Mar 25 14:21:41 2019] device veth34d1dfd left promiscuous mode
[Mon Mar 25 14:21:41 2019] docker0: port 1(veth34d1dfd) entered disabled state
[Mon Mar 25 14:21:59 2019] systemd-sysv-generator[20958]: Ignoring creation of an alias umountiscsi.service for itself
Docker version 17.05.0-ce, build 89658be, on a Debian 8.8 setup.
Does anyone know why docker isn't allowing use of that dir as data-root?
TL;DR -- this worked on Ubuntu 18.04 just before posting.
Follow these instructions:
sudo systemctl stop docker
sudo rsync -axPS /var/lib/docker/ /mnt/x/y/docker_data #copy all existing data to new location
sudo vi /lib/systemd/system/docker.service # or your favorite text editor
In the docker.service file, find a line like this:
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
Add --data-root /mnt/x/y/docker_data to it (on one line):
ExecStart=/usr/bin/dockerd --data-root /mnt/x/y/docker_data -H fd:// --containerd=/run/containerd/containerd.sock
Save and quit, then:
sudo systemctl daemon-reload
sudo systemctl start docker
docker info | grep "Root Dir"
The last command should output: Docker Root Dir: /mnt/x/y/docker_data
That's it; you should be done here.
The too-long version, if you do want to read:
After some investigating, I found some outdated articles, including this one; they mention confident-sounding solutions. These are the typical suggestions:
Add the -g option in docker.service:
not working, because -g and --graph were deprecated in release v17.05.0.
Add data-root in /etc/docker/daemon.json (the method tried by the question author):
not working, for some unknown reason.
After reading those solutions on about a dozen web pages, I got the inspiration from:
How To Change Docker Data Folder Configuration
Not a very good solution -- not popular -- but the interesting part is below "Update":
graph has been deprecated in v17.05.0. You can use data-root instead.
Yeah, graph => data-root, and --graph is just the long form of -g, so I tried this substitution in the "add -g option in docker.service" solution, and ta-da ~
Something is off with the docker_data directory.
Solution:
1. Remove the /etc/docker/daemon.json file.
2. Start docker.
3. Copy the /var/lib/docker contents to the path you've put in /etc/docker/daemon.json.
4. Put back the /etc/docker/daemon.json file and restart docker.
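Spelled out as commands, this is roughly (a sketch using the paths from the question; I've added a stop before the copy so data isn't copied while the daemon is writing to it):
sudo mv /etc/docker/daemon.json /tmp/daemon.json # 1. remove the config file
sudo systemctl start docker # 2. start docker with the default data-root
sudo systemctl stop docker # stop again before copying
sudo rsync -axPS /var/lib/docker/ /mnt/x/y/docker_data # 3. copy the existing data over
sudo mv /tmp/daemon.json /etc/docker/daemon.json # 4. put the config back
sudo systemctl restart docker # and restart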
Well, I'm not a Docker expert, but I see "dirperm1 breaks the protection by the permission bits on the lower branch" in your log. And I also see this:
"drwxrwxrwx 11 jeremyr 5001 122 Mar 19 08:14 docker_data"
As I understand it, the Docker daemon requires access permission to that directory. Does 5001 correspond to the docker group?
However, if you ran the daemon with root permission, this shouldn't happen.
Check the Docker version on your machine with:
docker --version
I was facing the same issue, and it got solved after upgrading Docker to the latest available version. Even the documentation on Docker's official website doesn't mention anything like this.
Once you upgrade Docker, restart it with:
systemctl restart docker
The error will be gone, and the new changes will take effect.

Container exits if invoked from compose

I have a dockerized server process that merely listens on port 5000:
[admin@gol05854 compose]$ cat ../proc1/server.sh
#!/bin/sh
echo `date` "Starting server"
nc -v -l -p 5000
echo `date` "Exiting server"
I have a client that is expected to continuously send messages to the server:
[admin@gol05854 compose]$ cat ../client/client.sh
#!/bin/sh
echo `date` "Starting client"
while true
do
date
done | nc my_server 5000
echo `date` "Ending client"
I start these together using compose. However, the server exits with the following messages:
[admin@gol05854 compose]$ docker logs e1_my_server_1
Wed Oct 26 04:10:34 UTC 2016 Starting server
listening on [::]:5000 ...
connect to [::ffff:172.27.0.2]:5000 from e1_my_client_1_1.e1_default:36500 ([::ffff:172.27.0.3]:36500)
Wed Oct 26 04:10:36 UTC 2016
Wed Oct 26 04:10:36 UTC 2016
Wed Oct 26 04:10:36 UTC 2016
Wed Oct 26 04:10:36 UTC 2016
Wed Oct 26 04:10:36 UTC 2016
Exiting server
What is surprising is that if the same containers are started without compose, using docker run, the server remains running.
What is it that docker compose does that causes the server to exit after receiving a few messages?
The code can be found at https://github.com/yashgt/dockerpoc
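For context, the two services are presumably wired together with a compose file along these lines (a sketch only; the real file is in the repository above, and the service names and build paths here are inferred from the prompts and log output):
version: "2"
services:
  my_server:
    build: ../proc1 # runs server.sh, listens on port 5000
  my_client:
    build: ../client # runs client.sh, pipes dates to my_server:5000
    depends_on:
      - my_server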

Docker container published ports not accessible?

So here is the situation: I have a running container built with this Dockerfile:
FROM python:2-onbuild
EXPOSE 8888
CMD [ "nohup", "mock-server", "--dir=/usr/src/app", "&" ]
I build and run it with these commands:
docker build -t mock_server .
docker run -d -p 8888:8888 --name mocky mock_server
I am using it on a Mac, so boot2docker is running, and I hit it from the boot2docker IP on 8888. I tried boot2docker ssh and hitting the container from there. I ran docker exec -it mocky bash, and ps aux shows:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.9 113316 18576 ? Ss 15:16 0:00 /usr/local/bin/python2 /usr/local/bin/mock-server --dir=/usr/src/app &
root 5 1.6 0.1 21916 3440 ? Ss 17:52 0:00 bash
root 9 0.0 0.1 19180 2404 ? R+ 17:53 0:00 ps aux
When I cURL it:
curl -I -XGET localhost:8888/__manage
HTTP/1.1 200 OK
Content-Length: 183108
Set-Cookie: flash_msg_success=; expires=Thu, 04 Sep 2014 17:54:58 GMT; Path=/
Set-Cookie: flash_msg_error=; expires=Thu, 04 Sep 2014 17:54:58 GMT; Path=/
Server: TornadoServer/4.2.1
Etag: "efdb5b362491b8e4b8347b97ccafeca02db8d27d"
Date: Fri, 04 Sep 2015 17:54:58 GMT
Content-Type: text/html; charset=UTF-8
So the app is running inside the container, but I can't get anything from outside it. What can be done here?
First guess is that the Python program is explicitly binding to the loopback IP address 127.0.0.1, which disallows any remote connections. Check the docs for that Python mock Tornado server for something like --bind=0.0.0.0 and adjust accordingly.
You can confirm whether this is the case by doing a docker exec and, in the container, running netstat -ntlp | grep 8888 to see which IP is bound. If it's 127.0.0.1, that confirms the problem.
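For example (mocky is the container name from the question):
docker exec -it mocky netstat -ntlp | grep 8888
# a local address of 127.0.0.1:8888 means loopback only (unreachable from outside);
# 0.0.0.0:8888 means all interfaces (reachable through -p 8888:8888)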
Docker runs on top of an OS, and the docker machine has its own IP address. One possible reason the port is not accessible is that you are using localhost, which tries to hit 127.0.0.1, while your docker machine is likely running on another IP address; curl that address instead of localhost.
$ docker-machine ip default
This gives you the docker machine's IP address; use it in place of localhost.
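For example, reusing the curl from the question (a sketch):
curl -I -XGET "$(docker-machine ip default):8888/__manage"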
