I have been trying to get an OpenVPN client running with Docker, but I got this error while setting it up. My VPN provider is Private Internet Access, and this is the Docker image I used.
docker-compose up -d && docker logs -f openvpn
openvpn
openvpn
Creating openvpn
Wed Dec 18 02:17:32 2019 OpenVPN 2.4.7 armv6-alpine-linux-musleabihf [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [MH/PKTINFO] [AEAD] built on May 6 2019
Wed Dec 18 02:17:32 2019 library versions: OpenSSL 1.1.1d 10 Sep 2019, LZO 2.10
Wed Dec 18 02:17:32 2019 TCP/UDP: Preserving recently used remote address: [AF_INET][IP]:1197
Wed Dec 18 02:17:32 2019 UDP link local: (not bound)
Wed Dec 18 02:17:32 2019 UDP link remote: [AF_INET][IP]:1197
Wed Dec 18 02:17:32 2019 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
Wed Dec 18 02:17:33 2019 [[LONG_RANDOM_STRING]] Peer Connection Initiated with [AF_INET][IP]:1197
Wed Dec 18 02:17:39 2019 WARNING: INSECURE cipher with block size less than 128 bit (64 bit). This allows attacks like SWEET32. Mitigate by using a --cipher with a larger block size (e.g. AES-256-CBC).
Wed Dec 18 02:17:39 2019 WARNING: INSECURE cipher with block size less than 128 bit (64 bit). This allows attacks like SWEET32. Mitigate by using a --cipher with a larger block size (e.g. AES-256-CBC).
Wed Dec 18 02:17:39 2019 WARNING: cipher with small block size in use, reducing reneg-bytes to 64MB to mitigate SWEET32 attacks.
Wed Dec 18 02:17:39 2019 TUN/TAP device tun0 opened
Wed Dec 18 02:17:39 2019 /sbin/ip link set dev tun0 up mtu 1500
Wed Dec 18 02:17:39 2019 /sbin/ip addr add dev tun0 local [SHORTER_IP] peer [SHORTER_IP]
Wed Dec 18 02:17:39 2019 Initialization Sequence Completed
Wed Dec 18 02:17:49 2019 Authenticate/Decrypt packet error: packet HMAC authentication failed
Wed Dec 18 02:17:59 2019 Authenticate/Decrypt packet error: packet HMAC authentication failed
Wed Dec 18 02:18:05 2019 Authenticate/Decrypt packet error: packet HMAC authentication failed
Wed Dec 18 02:18:05 2019 Authenticate/Decrypt packet error: packet HMAC authentication failed
Wed Dec 18 02:18:15 2019 Authenticate/Decrypt packet error: packet HMAC authentication failed
Wed Dec 18 02:18:25 2019 Authenticate/Decrypt packet error: packet HMAC authentication failed
Wed Dec 18 02:18:35 2019 Authenticate/Decrypt packet error: packet HMAC authentication failed
Wed Dec 18 02:18:39 2019 [[LONG_RANDOM_STRING]] Inactivity timeout (--ping-restart), restarting
Wed Dec 18 02:18:39 2019 SIGUSR1[soft,ping-restart] received, process restarting
Wed Dec 18 02:18:44 2019 TCP/UDP: Preserving recently used remote address: [AF_INET][IP]:1197
Wed Dec 18 02:18:44 2019 UDP link local: (not bound)
Wed Dec 18 02:18:44 2019 UDP link remote: [AF_INET][IP]:1197
Wed Dec 18 02:18:45 2019 [[LONG_RANDOM_STRING]] Peer Connection Initiated with [AF_INET][IP]:1197
Wed Dec 18 02:18:46 2019 AUTH: Received control message: AUTH_FAILED
Wed Dec 18 02:18:46 2019 SIGUSR1[soft,auth-failure (auth-token)] received, process restarting
These are the files I use:
[pia.ovpn]
client
dev tun
proto udp
remote [server].privateinternetaccess.com 1197
resolv-retry infinite
keepalive 10 60
nobind
persist-key
persist-tun
tls-client
remote-cert-tls server
auth-user-pass /vpn/vpn.auth
comp-lzo
verb 1
reneg-sec 0
redirect-gateway def1
disable-occ
fast-io
ca /vpn/ca.rsa.2048.crt
crl-verify /vpn/crl.rsa.2048.pem
vpn.auth contains my username and password. I got both ca.rsa.2048.crt and crl.rsa.2048.pem from this PIA support page.
Not sure if it is relevant, but this is the docker-compose file I used.
version: '2'
services:
  openvpn:
    image: dperson/openvpn-client:armhf
    container_name: openvpn
    cap_add:
      - net_admin
    environment:
      - TZ=[timezone]
    networks:
      - vpn
    read_only: true
    tmpfs:
      - /run
      - /tmp
    restart: always
    security_opt:
      - label:disable
    stdin_open: true
    tty: true
    volumes:
      - /dev/net:/dev/net:z
      - [PATH_TO]/vpn:/vpn
networks:
  vpn:
I hope that someone sees what is going wrong here!
As I can see in your logs, you received the "Inactivity timeout (--ping-restart), restarting" message a short time after a successful connection.
I had the same issue.
My client connected successfully and was restarted a few seconds (20-40) later.
In my case, I was actually running two clients with the same common name (CN) on different hosts.
To fix it, I generated a separate client certificate for each host.
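A rough sketch of that fix, assuming an easy-rsa 3 based CA on a self-hosted server (the client names here are made up):

```shell
# Issue one certificate per host, each with its own common name (CN),
# instead of reusing a single client certificate everywhere.
./easyrsa build-client-full laptop-client nopass
./easyrsa build-client-full rpi-client nopass
```

(If you really must share one certificate across hosts, the server-side duplicate-cn option also stops the clients from kicking each other off, at the cost of per-client addressing.)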
For me, the problem was using the default PIA config. Once I switched to the OPENVPN CONFIGURATION FILES (STRONG) bundle, the problem was gone.
You can find the configs at https://www.privateinternetaccess.com/helpdesk/kb/articles/where-can-i-find-your-ovpn-files-2, and if the link goes down, try googling "pia config".
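For comparison, the strong configs differ from the default ones mainly in port, cipher, and certificate strength. From memory (so verify against the downloaded bundle), the relevant lines look roughly like this:

```
remote [server].privateinternetaccess.com 1197
cipher aes-256-cbc
auth sha256
ca ca.rsa.4096.crt
crl-verify crl.rsa.4096.pem
```

Note that the config in the question already points at port 1197 but uses the 2048-bit certificates, which may explain the HMAC authentication failures in the log.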
Related
I'm trying to make an OpenVPN server using Docker. I just started by creating a tunnel between two containers. After installing OpenVPN on both containers, the command:
openvpn --dev tun1 --ifconfig 10.0.0.1 10.0.0.2
gave me this error:
Mon Jul 12 12:26:28 2021 disabling NCP mode (--ncp-disable) because not in P2MP client or server mode
Mon Jul 12 12:26:28 2021 OpenVPN 2.4.7 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Apr 27 2021
Mon Jul 12 12:26:28 2021 library versions: OpenSSL 1.1.1f 31 Mar 2020, LZO 2.10
Mon Jul 12 12:26:28 2021 ******* WARNING *******: All encryption and authentication features disabled -- All data will be tunnelled as clear text and will not be protected against man-in-the-middle changes. PLEASE DO RECONSIDER THIS CONFIGURATION!
Mon Jul 12 12:26:28 2021 ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such file or directory (errno=2)
Mon Jul 12 12:26:28 2021 Exiting due to fatal error
Is the problem related to running inside a container?
Is it fine to make an OpenVPN server in an Ubuntu-image-based container?
If there are any other tips for making an OpenVPN server, please share them; I'm new to this topic.
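For what it's worth, the "Cannot open TUN/TAP dev /dev/net/tun" error usually means the container was started without access to the TUN device. A minimal sketch of the flags typically needed (the image name is a placeholder):

```shell
# Expose the host TUN device to the container and grant the
# capability needed to configure network interfaces.
docker run -it \
  --cap-add=NET_ADMIN \
  --device=/dev/net/tun \
  ubuntu-openvpn-image
```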
I'm trying to create a Docker image which will forward a port through a VPN. I've created a simple image which exposes port 5144, and tested that it works properly:
sudo docker run -t -d -p 5144:5144 \
--name le-bridge \
--cap-add=NET_ADMIN \
--device=/dev/net/tun \
bridge
sudo docker exec -it le-bridge /bin/bash
I check that the port is exposed correctly like this:
[CONTAINER] root@6116787b1c1e:~# nc -lvvp 5144
[HOST] user$ nc -vv 127.0.0.1 5144
Then, whatever I type is correctly echoed in the container's terminal. However, as soon as I start the openvpn daemon, this doesn't work anymore:
[CONTAINER] root@6116787b1c1e:~# openvpn logger.ovpn &
[1] 33
Sun Apr 5 22:52:54 2020 OpenVPN 2.4.4 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on May 14 2019
Sun Apr 5 22:52:54 2020 library versions: OpenSSL 1.1.1 11 Sep 2018, LZO 2.08
Sun Apr 5 22:52:54 2020 TCP/UDP: Preserving recently used remote address: [AF_INET]
Sun Apr 5 22:52:54 2020 UDPv4 link local (bound): [AF_INET][undef]:1194
Sun Apr 5 22:52:54 2020 UDPv4 link remote:
Sun Apr 5 22:52:54 2020 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
Sun Apr 5 22:52:55 2020 [] Peer Connection Initiated with [AF_INET]
Sun Apr 5 22:53:21 2020 TUN/TAP device tun0 opened
Sun Apr 5 22:53:21 2020 do_ifconfig, tt->did_ifconfig_ipv6_setup=0
Sun Apr 5 22:53:21 2020 /sbin/ip link set dev tun0 up mtu 1500
Sun Apr 5 22:53:21 2020 /sbin/ip addr add dev tun0 10.X.0.2/24 broadcast 10.X.0.255
Sun Apr 5 22:53:21 2020 Initialization Sequence Completed
root@6116787b1c1e:~#
root@6116787b1c1e:~# nc -lvvp 5144
listening on [any] 5144 ...
From here, using the exact same netcat command, I cannot reach the exposed port anymore from the host.
What am I missing?
EDIT: It's maybe worth mentioning that after the VPN is started, the connection still succeeds from the host; it just never reaches the netcat process inside the container.
I'm not exactly sure why, but it turns out that the routes need to be fixed inside the container. In my case, the following command solves the issue:
ip route add 192.168.0.0/24 via 172.17.42.1 dev eth0
...where 172.17.42.1 is the IP of the docker0 interface on my host.
Hopefully this is helpful to someone one day.
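One way to apply that fix automatically is to sketch it as a hypothetical container entrypoint (the subnet, gateway address, and config filename here are taken from this setup and should be substituted with your own):

```shell
#!/bin/sh
# Restore the return route to the host's LAN before OpenVPN
# replaces the default route, then run OpenVPN in the foreground.
ip route add 192.168.0.0/24 via 172.17.42.1 dev eth0
exec openvpn --config logger.ovpn
```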
I have a very simple environment that uses Redis on Docker, and it used to work pretty well until I moved my stack to Digital Ocean. My application stops working and then I have to restart it. It works for several hours (less than a day) and then it stops again.
When I print out the logs of the container this is what I got:
1:S 30 Aug 2019 22:07:17.573 * Connecting to MASTER x.x.x.x:38606
1:S 30 Aug 2019 22:07:17.574 * MASTER <-> REPLICA sync started
1:S 30 Aug 2019 22:07:17.655 # Error condition on socket for SYNC: Connection refused
1:S 30 Aug 2019 22:07:18.577 * Connecting to MASTER x.x.x.x:38606
1:S 30 Aug 2019 22:07:18.578 * MASTER <-> REPLICA sync started
1:S 30 Aug 2019 22:07:18.660 # Error condition on socket for SYNC: Connection refused
1:S 30 Aug 2019 22:07:19.582 * Connecting to MASTER x.x.x.x:38606
1:S 30 Aug 2019 22:07:19.582 * MASTER <-> REPLICA sync started
1:S 30 Aug 2019 22:07:19.664 * Non blocking connect for SYNC fired the event.
1:S 30 Aug 2019 22:07:19.746 * Master replied to PING, replication can continue...
1:S 30 Aug 2019 22:07:19.910 * Trying a partial resynchronization (request a3f877d059813e333a734a91b16e8ebf822e3d20:1).
1:S 30 Aug 2019 22:07:19.993 * Full resync from master: ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ:0
1:S 30 Aug 2019 22:07:19.994 * Discarding previously cached master state.
1:S 30 Aug 2019 22:07:19.994 * MASTER <-> REPLICA sync: receiving 42680 bytes from master
1:S 30 Aug 2019 22:07:20.075 * MASTER <-> REPLICA sync: Flushing old data
1:S 30 Aug 2019 22:07:20.076 * MASTER <-> REPLICA sync: Loading DB in memory
1:S 30 Aug 2019 22:07:20.076 # Wrong signature trying to load DB from file
1:S 30 Aug 2019 22:07:20.077 # Failed trying to load the MASTER synchronization DB from disk
1:S 30 Aug 2019 22:07:20.584 * Connecting to MASTER x.x.x.x:38606
1:S 30 Aug 2019 22:07:20.585 * MASTER <-> REPLICA sync started
1:S 30 Aug 2019 22:07:20.664 * Non blocking connect for SYNC fired the event.
1:S 30 Aug 2019 22:07:21.996 * Module 'system' loaded from /tmp/exp_lin.so
1:S 30 Aug 2019 22:07:22.076 # Error condition on socket for SYNC: Connection reset by peer
1:M 30 Aug 2019 22:07:22.078 # Setting secondary replication ID to a3f877d059813e333a734a91b16e8ebf822e3d20, valid up to offset: 1. New replication ID is e4c7f742ac612d2fdc2124c73a14f68641f1c61e
1:M 30 Aug 2019 22:07:22.078 * MASTER MODE enabled (user request from 'id=8 addr=x.x.x.x:43490 fd=9 name= age=5 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=34 qbuf-free=32734 obl=0 oll=0 omem=0 events=r cmd=slaveof')
sh: 1: killall: not found
./xmrig-notls: unrecognized option '--max-cpu-usage'
I didn't add any special configuration to replicate data (master, slave, or anything like that). This is my compose file:
version: '3'
services:
  server:
    image: server
    build: .
    ports:
      - "8091:8091"
    container_name: server
    environment:
      - NODE_ENV=production
    external_links:
      - redis
  redis:
    image: redis:5.0.5
    ports:
      - "6379:6379"
    container_name: redis
Does anyone know what is going on? It didn't happen before I moved to Digital Ocean.
Your Redis is exposed to the Internet and has been hacked. Close the port by removing the ports section from the redis service:
ports:
- "6379:6379"
Then remove the container with docker-compose rm and bring it up again.
This post explains what happened.
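A sketch of the corrected redis service, assuming the server container only needs to reach Redis over the internal Compose network (other services on the same network can still connect to redis:6379 by service name):

```yaml
  redis:
    image: redis:5.0.5
    container_name: redis
    # No "ports:" section: Redis stays reachable inside the
    # Compose network but is no longer published to the Internet.
```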
I have a dockerized server process that merely listens on port 5000:
[admin@gol05854 compose]$ cat ../proc1/server.sh
#!/bin/sh
echo `date` "Starting server"
nc -v -l -p 5000
echo `date` "Exiting server"
I have a client that is expected to continuously send messages to the server:
[admin@gol05854 compose]$ cat ../client/client.sh
#!/bin/sh
echo `date` "Starting client"
while true
do
date
done | nc my_server 5000
echo `date` "Ending client"
I start these together using Compose. However, the server exits with the following messages:
[admin@gol05854 compose]$ docker logs e1_my_server_1
Wed Oct 26 04:10:34 UTC 2016 Starting server
listening on [::]:5000 ...
connect to [::ffff:172.27.0.2]:5000 from e1_my_client_1_1.e1_default:36500 ([::ffff:172.27.0.3]:36500)
Wed Oct 26 04:10:36 UTC 2016
Wed Oct 26 04:10:36 UTC 2016
Wed Oct 26 04:10:36 UTC 2016
Wed Oct 26 04:10:36 UTC 2016
Wed Oct 26 04:10:36 UTC 2016
Exiting server
What is surprising is that if the same containers are started without compose, using docker run, the server remains running.
What is it that docker compose does that causes the server to exit after receiving a few messages?
The code can be found at https://github.com/yashgt/dockerpoc
I installed MongoDB using this: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
When I run mongod, I get this:
mongod --help for help and startup options
Fri Mar 1 18:11:06
Fri Mar 1 18:11:06 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
Fri Mar 1 18:11:06
Fri Mar 1 18:11:06 [initandlisten] MongoDB starting : pid=6265 port=27017 dbpath=/data/db/ 32-bit host=aboelseoud
Fri Mar 1 18:11:06 [initandlisten]
Fri Mar 1 18:11:06 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
Fri Mar 1 18:11:06 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
Fri Mar 1 18:11:06 [initandlisten] ** with --journal, the limit is lower
Fri Mar 1 18:11:06 [initandlisten]
Fri Mar 1 18:11:06 [initandlisten] db version v2.2.3, pdfile version 4.5
Fri Mar 1 18:11:06 [initandlisten] git version: f570771a5d8a3846eb7586eaffcf4c2f4a96bf08
Fri Mar 1 18:11:06 [initandlisten] build info: Linux bs-linux32.10gen.cc 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
Fri Mar 1 18:11:06 [initandlisten] options: {}
Fri Mar 1 18:11:06 [initandlisten] exception in initAndListen: 10296
*********************************************************************
ERROR: dbpath (/data/db/) does not exist.
Create this directory or give existing directory in --dbpath.
See http://dochub.mongodb.org/core/startingandstoppingmongo
*********************************************************************
, terminating
Fri Mar 1 18:11:06 dbexit:
Fri Mar 1 18:11:06 [initandlisten] shutdown: going to close listening sockets...
Fri Mar 1 18:11:06 [initandlisten] shutdown: going to flush diaglog...
Fri Mar 1 18:11:06 [initandlisten] shutdown: going to close sockets...
Fri Mar 1 18:11:06 [initandlisten] shutdown: waiting for fs preallocator...
Fri Mar 1 18:11:06 [initandlisten] shutdown: closing all files...
Fri Mar 1 18:11:06 [initandlisten] closeAllFiles() finished
Fri Mar 1 18:11:06 dbexit: really exiting now
When I type mongo, I get this:
MongoDB shell version: 2.2.3
connecting to: test
Fri Mar 1 18:13:00 Error: couldn't connect to server 127.0.0.1:27017 src/mongo/shell/mongo.js:91
exception: connect failed
When I browse to localhost:3000, I get this:
Moped::Errors::ConnectionFailure in MembersController#lawlab
Could not connect to any secondary or primary nodes for replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>
What am I missing here?
You should create the data/db/ folder and give your user access to that folder.
For clarity's sake: there is a difference between creating a data/db folder and /data/db. One is at the filesystem root; the other is not. So you should create the /data/db/ folder and then give your user access to it, e.g. with sudo chown -R $USER /data/db/.
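Put together, a minimal sketch of the fix (this assumes the default --dbpath; $USER expands to your login name):

```shell
# Create MongoDB's default data directory at the filesystem root
# and give the current user ownership so mongod can write to it.
sudo mkdir -p /data/db
sudo chown -R "$USER" /data/db
```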