Container exits if invoked from compose - docker

I have a dockerized server process that merely listens on port 5000:
[admin@gol05854 compose]$ cat ../proc1/server.sh
#!/bin/sh
echo `date` "Starting server"
nc -v -l -p 5000
echo `date` "Exiting server"
I have a client that is expected to continuously send messages to the server:
[admin@gol05854 compose]$ cat ../client/client.sh
#!/bin/sh
echo `date` "Starting client"
while true
do
date
done | nc my_server 5000
echo `date` "Ending client"
I start these together using compose. However, the server exits with the following messages:
[admin@gol05854 compose]$ docker logs e1_my_server_1
Wed Oct 26 04:10:34 UTC 2016 Starting server
listening on [::]:5000 ...
connect to [::ffff:172.27.0.2]:5000 from e1_my_client_1_1.e1_default:36500 ([::ffff:172.27.0.3]:36500)
Wed Oct 26 04:10:36 UTC 2016
Wed Oct 26 04:10:36 UTC 2016
Wed Oct 26 04:10:36 UTC 2016
Wed Oct 26 04:10:36 UTC 2016
Wed Oct 26 04:10:36 UTC 2016
Exiting server
What is surprising is that if the same containers are started without compose, using docker run, the server remains running.
What is it that docker compose does that causes the server to exit after receiving a few messages?
The code can be found at https://github.com/yashgt/dockerpoc
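For reference, most nc variants exit as soon as their single connection closes, so a server that should survive client reconnects has to re-listen in a loop. A minimal sketch of such a variant of server.sh (an assumption for illustration, not the repository's code, and assuming a BusyBox/traditional nc that accepts -l -p) might look like this:

#!/bin/sh
# Hypothetical server.sh variant: re-listen after every connection,
# since nc exits once a single client disconnects.
echo `date` "Starting server"
while true
do
    nc -v -l -p 5000
    echo `date` "Client disconnected, listening again"
done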

Related

[Docker x ColdFusion][Apache2] - (95)Operation not supported: mod_jk

Apache2 in my Docker container keeps failing to start; I already checked the config using apachectl configtest and it returns OK. The error below is what I found under /var/log/apache2/error.log:
[Wed Aug 10 15:17:30.643137 2022] [mpm_event:notice] [pid 465:tid 139744629492672] AH00489: Apache/2.4.52 (Ubuntu) mod_jk/1.2.46 configured -- resuming normal operations
[Wed Aug 10 15:17:30.643188 2022] [core:notice] [pid 465:tid 139744629492672] AH00094: Command line: '/usr/sbin/apache2'
[Mon Oct 31 22:14:51.535467 2022] [jk:crit] [pid 63:tid 274907793600] (95)Operation not supported: mod_jk: could not create jk_log_lock
But when I uninstalled and reinstalled apache2, I could access localhost:80; the ColdFusion behind it, however, was not working. It just shows me a directory listing of the working directory.
Docker Desktop: v4.13.1
Docker: version 20.10.20, build 9fdeb9c
ColdFusion: 2018
This happens only on my MacBook 13 M2. I tried running it on a Windows laptop, and it works fine.

problem with docker container creating a VPN tunnel

I'm trying to make an OpenVPN server using Docker. I started by creating a tunnel between 2 containers. After installing OpenVPN in both containers, the command:
openvpn --dev tun1 --ifconfig 10.0.0.1 10.0.0.2
gave me this error:
Mon Jul 12 12:26:28 2021 disabling NCP mode (--ncp-disable) because not in P2MP client or server mode
Mon Jul 12 12:26:28 2021 OpenVPN 2.4.7 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Apr 27 2021
Mon Jul 12 12:26:28 2021 library versions: OpenSSL 1.1.1f 31 Mar 2020, LZO 2.10
Mon Jul 12 12:26:28 2021 ******* WARNING *******: All encryption and authentication features disabled -- All data will be tunnelled as clear text and will not be protected against man-in-the-middle changes. PLEASE DO RECONSIDER THIS CONFIGURATION!
Mon Jul 12 12:26:28 2021 ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such file or directory (errno=2)
Mon Jul 12 12:26:28 2021 Exiting due to fatal error
Is the problem related to running inside a container?
Is it fine to run an OpenVPN server in an Ubuntu image-based container?
If there are any other tips for setting up an OpenVPN server, please share them; I'm new to this topic.
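For what it's worth, the "Cannot open TUN/TAP dev /dev/net/tun" error usually means the container was started without access to the TUN device; a later question on this page passes it in explicitly with --cap-add and --device. A hedged sketch of such a run command (image and container names are placeholders) could be:

# start a container with the TUN device and NET_ADMIN capability passed through
sudo docker run -t -d \
    --name vpn-test \
    --cap-add=NET_ADMIN \
    --device=/dev/net/tun \
    ubuntu:20.04

# then install and run openvpn inside it
sudo docker exec -it vpn-test /bin/bash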

Failed to start LSB: Start Jenkins at boot time

I've been trying for a while to add and modify things within Jenkins. Jenkins was running on port 8080; I redirected traffic to port 80 with this command:
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 54.185.x.x:8080
I made some modifications and now I cannot start Jenkins:
Jun 08 13:20:17 ip-10-173-x-x jenkins[32108]: Correct java version found
Jun 08 13:20:17 ip-10-173-x-x jenkins[32108]: * Starting Jenkins Automation Server jenkins
Jun 08 13:20:17 ip-10-173-x-x su[32157]: Successful su for jenkins by root
Jun 08 13:20:17 ip-10-173-x-x su[32157]: + ??? root:jenkins
Jun 08 13:20:17 ip-10-173-x-x su[32157]: pam_unix(su:session): session opened for user jenkins by (uid=0)
Jun 08 13:20:17 ip-10-173-x-x su[32157]: pam_unix(su:session): session closed for user jenkins
Jun 08 13:20:18 ip-10-173-x-x jenkins[32108]: ...fail!
Jun 08 13:20:18 ip-10-173-x-x systemd[1]: jenkins.service: Control process exited, code=exited status=7
Jun 08 13:20:18 ip-10-173-x-x systemd[1]: jenkins.service: Failed with result 'exit-code'.
Jun 08 13:20:18 ip-10-173-x-x systemd[1]: Failed to start LSB: Start Jenkins at boot time.
I changed a few lines in the Jenkins config file and here is how it looks:
JENKINS_ARGS="--javahome=$JAVA_HOME --httpListenAddress=$HTTP_HOST --httpPort=$HTTP_PORT --webroot=~/.jenkins/war"
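The systemd messages above only show that the LSB init script exited with status 7; the actual reason is usually in the unit journal or in Jenkins' own log. Assuming a standard Debian/Ubuntu package layout, these commands should surface it:

sudo journalctl -u jenkins.service --no-pager -n 100   # init script output around the failure
sudo tail -n 100 /var/log/jenkins/jenkins.log          # default Jenkins log location on Debian/Ubuntu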

Docker exposed port stops working when connected to a VPN

I'm trying to create a Docker image which will forward a port through a VPN. I've created a simple image which exposes port 5144, and tested that it works properly:
sudo docker run -t -d -p 5144:5144 \
--name le-bridge \
--cap-add=NET_ADMIN \
--device=/dev/net/tun \
bridge
sudo docker exec -it le-bridge /bin/bash
I check that the port is exposed correctly like this:
[CONTAINER] root@6116787b1c1e:~# nc -lvvp 5144
[HOST] user$ nc -vv 127.0.0.1 5144
Then, whatever I type is correctly echoed in the container's terminal. However, as soon as I start the openvpn daemon, this doesn't work anymore:
[CONTAINER] root@6116787b1c1e:~# openvpn logger.ovpn &
[1] 33
Sun Apr 5 22:52:54 2020 OpenVPN 2.4.4 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on May 14 2019
Sun Apr 5 22:52:54 2020 library versions: OpenSSL 1.1.1 11 Sep 2018, LZO 2.08
Sun Apr 5 22:52:54 2020 TCP/UDP: Preserving recently used remote address: [AF_INET]
Sun Apr 5 22:52:54 2020 UDPv4 link local (bound): [AF_INET][undef]:1194
Sun Apr 5 22:52:54 2020 UDPv4 link remote:
Sun Apr 5 22:52:54 2020 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
Sun Apr 5 22:52:55 2020 [] Peer Connection Initiated with [AF_INET]
Sun Apr 5 22:53:21 2020 TUN/TAP device tun0 opened
Sun Apr 5 22:53:21 2020 do_ifconfig, tt->did_ifconfig_ipv6_setup=0
Sun Apr 5 22:53:21 2020 /sbin/ip link set dev tun0 up mtu 1500
Sun Apr 5 22:53:21 2020 /sbin/ip addr add dev tun0 10.X.0.2/24 broadcast 10.X.0.255
Sun Apr 5 22:53:21 2020 Initialization Sequence Completed
root@6116787b1c1e:~#
root@6116787b1c1e:~# nc -lvvp 5144
listening on [any] 5144 ...
From here, using the exact same netcat command, I cannot reach the exposed port anymore from the host.
What am I missing?
EDIT: It's maybe worth mentioning that after the VPN is started, the connection still succeeds from the host; it just never reaches the netcat process inside the container.
I'm not exactly sure why, but it turns out that routes need to be fixed inside the container. In my case, the following command solves the issue:
ip route add 192.168.0.0/24 via 172.17.42.1 dev eth0
...where 172.17.42.1 is the IP of the docker0 interface on my host.
Hopefully this is helpful to someone one day.
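To adapt the fix to another setup, the bridge address and the container's routing table can be checked first; these are standard iproute2 commands, and the addresses shown are just the ones from this answer:

# on the host: find the docker0 bridge address (172.17.42.1 in this answer)
ip addr show docker0

# inside the container: inspect the routes the VPN pushed, then add the fix
ip route
ip route add 192.168.0.0/24 via 172.17.42.1 dev eth0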

Trying to set up PIA with OVPN client (docker)

I have been trying to get an OpenVPN client running with Docker, but I got this error while setting it up. My VPN provider is Private Internet Access. This is the Docker image I used.
docker-compose up -d && docker logs -f openvpn
openvpn
openvpn
Creating openvpn
Wed Dec 18 02:17:32 2019 OpenVPN 2.4.7 armv6-alpine-linux-musleabihf [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [MH/PKTINFO] [AEAD] built on May 6 2019
Wed Dec 18 02:17:32 2019 library versions: OpenSSL 1.1.1d 10 Sep 2019, LZO 2.10
Wed Dec 18 02:17:32 2019 TCP/UDP: Preserving recently used remote address: [AF_INET][IP]:1197
Wed Dec 18 02:17:32 2019 UDP link local: (not bound)
Wed Dec 18 02:17:32 2019 UDP link remote: [AF_INET][IP]:1197
Wed Dec 18 02:17:32 2019 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
Wed Dec 18 02:17:33 2019 [[LONG_RANDOM_STRING]] Peer Connection Initiated with [AF_INET][IP]:1197
Wed Dec 18 02:17:39 2019 WARNING: INSECURE cipher with block size less than 128 bit (64 bit). This allows attacks like SWEET32. Mitigate by using a --cipher with a larger block size (e.g. AES-256-CBC).
Wed Dec 18 02:17:39 2019 WARNING: INSECURE cipher with block size less than 128 bit (64 bit). This allows attacks like SWEET32. Mitigate by using a --cipher with a larger block size (e.g. AES-256-CBC).
Wed Dec 18 02:17:39 2019 WARNING: cipher with small block size in use, reducing reneg-bytes to 64MB to mitigate SWEET32 attacks.
Wed Dec 18 02:17:39 2019 TUN/TAP device tun0 opened
Wed Dec 18 02:17:39 2019 /sbin/ip link set dev tun0 up mtu 1500
Wed Dec 18 02:17:39 2019 /sbin/ip addr add dev tun0 local [SHORTER_IP] peer [SHORTER_IP]
Wed Dec 18 02:17:39 2019 Initialization Sequence Completed
Wed Dec 18 02:17:49 2019 Authenticate/Decrypt packet error: packet HMAC authentication failed
Wed Dec 18 02:17:59 2019 Authenticate/Decrypt packet error: packet HMAC authentication failed
Wed Dec 18 02:18:05 2019 Authenticate/Decrypt packet error: packet HMAC authentication failed
Wed Dec 18 02:18:05 2019 Authenticate/Decrypt packet error: packet HMAC authentication failed
Wed Dec 18 02:18:15 2019 Authenticate/Decrypt packet error: packet HMAC authentication failed
Wed Dec 18 02:18:25 2019 Authenticate/Decrypt packet error: packet HMAC authentication failed
Wed Dec 18 02:18:35 2019 Authenticate/Decrypt packet error: packet HMAC authentication failed
Wed Dec 18 02:18:39 2019 [[LON_RANDOM_STRING]] Inactivity timeout (--ping-restart), restarting
Wed Dec 18 02:18:39 2019 SIGUSR1[soft,ping-restart] received, process restarting
Wed Dec 18 02:18:44 2019 TCP/UDP: Preserving recently used remote address: [AF_INET][IP]:1197
Wed Dec 18 02:18:44 2019 UDP link local: (not bound)
Wed Dec 18 02:18:44 2019 UDP link remote: [AF_INET][IP]:1197
Wed Dec 18 02:18:45 2019 [[LONG_RANDOM_STRING]] Peer Connection Initiated with [AF_INET][IP]:1197
Wed Dec 18 02:18:46 2019 AUTH: Received control message: AUTH_FAILED
Wed Dec 18 02:18:46 2019 SIGUSR1[soft,auth-failure (auth-token)] received, process restarting
These are the files I use:
[pia.ovpn]
client
dev tun
proto udp
remote [server].privateinternetaccess.com 1197
resolv-retry infinite
keepalive 10 60
nobind
persist-key
persist-tun
tls-client
remote-cert-tls server
auth-user-pass /vpn/vpn.auth
comp-lzo
verb 1
reneg-sec 0
redirect-gateway def1
disable-occ
fast-io
ca /vpn/ca.rsa.2048.crt
crl-verify /vpn/crl.rsa.2048.pem
vpn.auth contains my username and password. I got both ca.rsa.2048.crt and crl.rsa.2048.pem from this PIA support page.
Not sure if it is relevant, but this is the docker-compose file I used.
version: '2'
services:
  openvpn:
    image: dperson/openvpn-client:armhf
    container_name: openvpn
    cap_add:
      - net_admin
    environment:
      - TZ=[timezone]
    networks:
      - vpn
    read_only: true
    tmpfs:
      - /run
      - /tmp
    restart: always
    security_opt:
      - label:disable
    stdin_open: true
    tty: true
    volumes:
      - /dev/net:/dev/net:z
      - [PATH_TO]/vpn:/vpn
networks:
  vpn:
I hope that someone sees what goes wrong here!
As I can see in your logs, you received an "Inactivity timeout (--ping-restart), restarting" message a short time after a successful connection.
I had the same issue.
My client connected successfully and was restarted within a few seconds (20-40).
In my case I had actually run two clients with the same client name (CN) on different hosts.
To fix it, I generated a separate client for each host.
For me, the problem was using the default PIA config. Once I switched to the OPENVPN CONFIGURATION FILES (STRONG), the problem was gone.
You can find the configs at https://www.privateinternetaccess.com/helpdesk/kb/articles/where-can-i-find-your-ovpn-files-2, and if the link goes down, try googling "pia config".
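For reference, the SWEET32 warnings in the log come from the 64-bit block cipher negotiated by the default profile, and the log itself suggests a larger block cipher such as AES-256-CBC. A hedged sketch of what the strong-profile settings typically look like in pia.ovpn (verify against the files actually downloaded from PIA) is:

# stronger cipher/auth, as used by PIA's "strong" profiles
cipher AES-256-CBC
auth SHA256
# the strong profiles ship 4096-bit CA/CRL files instead of the 2048-bit ones
ca /vpn/ca.rsa.4096.crt
crl-verify /vpn/crl.rsa.4096.pem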
