Error while running mongod - ruby-on-rails

I installed mongodb using this: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
When I run mongod, I get this:
mongod --help for help and startup options
Fri Mar 1 18:11:06
Fri Mar 1 18:11:06 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
Fri Mar 1 18:11:06
Fri Mar 1 18:11:06 [initandlisten] MongoDB starting : pid=6265 port=27017 dbpath=/data/db/ 32-bit host=aboelseoud
Fri Mar 1 18:11:06 [initandlisten]
Fri Mar 1 18:11:06 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
Fri Mar 1 18:11:06 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
Fri Mar 1 18:11:06 [initandlisten] ** with --journal, the limit is lower
Fri Mar 1 18:11:06 [initandlisten]
Fri Mar 1 18:11:06 [initandlisten] db version v2.2.3, pdfile version 4.5
Fri Mar 1 18:11:06 [initandlisten] git version: f570771a5d8a3846eb7586eaffcf4c2f4a96bf08
Fri Mar 1 18:11:06 [initandlisten] build info: Linux bs-linux32.10gen.cc 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
Fri Mar 1 18:11:06 [initandlisten] options: {}
Fri Mar 1 18:11:06 [initandlisten] exception in initAndListen: 10296
*********************************************************************
ERROR: dbpath (/data/db/) does not exist.
Create this directory or give existing directory in --dbpath.
See http://dochub.mongodb.org/core/startingandstoppingmongo
*********************************************************************
, terminating
Fri Mar 1 18:11:06 dbexit:
Fri Mar 1 18:11:06 [initandlisten] shutdown: going to close listening sockets...
Fri Mar 1 18:11:06 [initandlisten] shutdown: going to flush diaglog...
Fri Mar 1 18:11:06 [initandlisten] shutdown: going to close sockets...
Fri Mar 1 18:11:06 [initandlisten] shutdown: waiting for fs preallocator...
Fri Mar 1 18:11:06 [initandlisten] shutdown: closing all files...
Fri Mar 1 18:11:06 [initandlisten] closeAllFiles() finished
Fri Mar 1 18:11:06 dbexit: really exiting now
When I type mongo, I get this:
MongoDB shell version: 2.2.3
connecting to: test
Fri Mar 1 18:13:00 Error: couldn't connect to server 127.0.0.1:27017 src/mongo/shell/mongo.js:91
exception: connect failed
When I browse to localhost:3000, I get this:
Moped::Errors::ConnectionFailure in MembersController#lawlab
Could not connect to any secondary or primary nodes for replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>
What am I missing here?

You should create the /data/db/ folder and give your user access to it.

For clarity's sake: there is a difference between creating a data/db folder and /data/db. One lives in the filesystem root, the other does not. So you should create the /data/db/ folder and then give your user access to it, e.g. with sudo chown -R $USER /data/db/
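A minimal sketch of those steps (assuming your login user is the one that will run mongod; adjust the path if you prefer a custom --dbpath):
sudo mkdir -p /data/db
sudo chown -R $USER /data/db
mongod
# or keep the data somewhere you already own:
mkdir -p ~/mongodb-data
mongod --dbpath ~/mongodb-data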

Related

[Docker x ColdFusion][Apache2] - (95)Operation not supported: mod_jk

Apache2 on my Docker container keeps failing to start; I already checked the config using apachectl configtest, and it returns OK. The error below is what I found under /var/log/apache2/error.log:
[Wed Aug 10 15:17:30.643137 2022] [mpm_event:notice] [pid 465:tid 139744629492672] AH00489: Apache/2.4.52 (Ubuntu) mod_jk/1.2.46 configured -- resuming normal operations
[Wed Aug 10 15:17:30.643188 2022] [core:notice] [pid 465:tid 139744629492672] AH00094: Command line: '/usr/sbin/apache2'
[Mon Oct 31 22:14:51.535467 2022] [jk:crit] [pid 63:tid 274907793600] (95)Operation not supported: mod_jk: could not create jk_log_lock
But when I tried uninstalling and reinstalling apache2, I could access localhost:80, but the ColdFusion instance behind it was not working. It just shows me a listing of the working directory.
Docker Desktop: v4.13.1
Docker: version 20.10.20, build 9fdeb9c
ColdFusion: 2018
This happens only on my MacBook 13 M2. I tried running it on a Windows laptop, and it works fine.
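One thing that may be worth ruling out on an Apple Silicon host (this is an assumption on my part, not something the post establishes) is whether the image is an amd64 build running under emulation, since lock and shared-memory creation can fail there with (95)Operation not supported. A quick check, with placeholder image/container names:
# Architecture the image was built for
docker image inspect --format '{{.Architecture}}' your-coldfusion-image
# Architecture the running container actually reports
docker exec your-coldfusion-container uname -m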

problem with docker container creating a VPN tunnel

I'm trying to set up an OpenVPN server using Docker. I started by creating a tunnel between 2 containers. After installing openvpn on both containers, the command:
openvpn --dev tun1 --ifconfig 10.0.0.1 10.0.0.2
gave me this error:
Mon Jul 12 12:26:28 2021 disabling NCP mode (--ncp-disable) because not in P2MP client or server mode
Mon Jul 12 12:26:28 2021 OpenVPN 2.4.7 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Apr 27 2021
Mon Jul 12 12:26:28 2021 library versions: OpenSSL 1.1.1f 31 Mar 2020, LZO 2.10
Mon Jul 12 12:26:28 2021 ******* WARNING *******: All encryption and authentication features disabled -- All data will be tunnelled as clear text and will not be protected against man-in-the-middle changes. PLEASE DO RECONSIDER THIS CONFIGURATION!
Mon Jul 12 12:26:28 2021 ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such file or directory (errno=2)
Mon Jul 12 12:26:28 2021 Exiting due to fatal error
Is the problem related to running inside a container?
Is it fine to run an OpenVPN server in an Ubuntu image-based container?
If there are any other tips for setting up an OpenVPN server, please tell me; I'm new to this topic.
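For reference, here is a minimal sketch of the two usual ways to make the TUN device available inside a container (the image name is just an example):
# Pass the device and the NET_ADMIN capability through when starting the container
docker run -it --cap-add=NET_ADMIN --device /dev/net/tun ubuntu bash
# Or, inside a privileged container, create the device node by hand
mkdir -p /dev/net
mknod /dev/net/tun c 10 200
chmod 600 /dev/net/tun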

Trying to set up PIA with OVPN client (docker)

I have been trying to get an OpenVPN client running with Docker, but I got this error while setting it up. My VPN provider is Private Internet Access. This is the Docker image I used.
docker-compose up -d && docker logs -f openvpn
openvpn
openvpn
Creating openvpn
Wed Dec 18 02:17:32 2019 OpenVPN 2.4.7 armv6-alpine-linux-musleabihf [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [MH/PKTINFO] [AEAD] built on May 6 2019
Wed Dec 18 02:17:32 2019 library versions: OpenSSL 1.1.1d 10 Sep 2019, LZO 2.10
Wed Dec 18 02:17:32 2019 TCP/UDP: Preserving recently used remote address: [AF_INET][IP]:1197
Wed Dec 18 02:17:32 2019 UDP link local: (not bound)
Wed Dec 18 02:17:32 2019 UDP link remote: [AF_INET][IP]:1197
Wed Dec 18 02:17:32 2019 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
Wed Dec 18 02:17:33 2019 [[LONG_RANDOM_STRING]] Peer Connection Initiated with [AF_INET][IP]:1197
Wed Dec 18 02:17:39 2019 WARNING: INSECURE cipher with block size less than 128 bit (64 bit). This allows attacks like SWEET32. Mitigate by using a --cipher with a larger block size (e.g. AES-256-CBC).
Wed Dec 18 02:17:39 2019 WARNING: INSECURE cipher with block size less than 128 bit (64 bit). This allows attacks like SWEET32. Mitigate by using a --cipher with a larger block size (e.g. AES-256-CBC).
Wed Dec 18 02:17:39 2019 WARNING: cipher with small block size in use, reducing reneg-bytes to 64MB to mitigate SWEET32 attacks.
Wed Dec 18 02:17:39 2019 TUN/TAP device tun0 opened
Wed Dec 18 02:17:39 2019 /sbin/ip link set dev tun0 up mtu 1500
Wed Dec 18 02:17:39 2019 /sbin/ip addr add dev tun0 local [SHORTER_IP] peer [SHORTER_IP]
Wed Dec 18 02:17:39 2019 Initialization Sequence Completed
Wed Dec 18 02:17:49 2019 Authenticate/Decrypt packet error: packet HMAC authentication failed
Wed Dec 18 02:17:59 2019 Authenticate/Decrypt packet error: packet HMAC authentication failed
Wed Dec 18 02:18:05 2019 Authenticate/Decrypt packet error: packet HMAC authentication failed
Wed Dec 18 02:18:05 2019 Authenticate/Decrypt packet error: packet HMAC authentication failed
Wed Dec 18 02:18:15 2019 Authenticate/Decrypt packet error: packet HMAC authentication failed
Wed Dec 18 02:18:25 2019 Authenticate/Decrypt packet error: packet HMAC authentication failed
Wed Dec 18 02:18:35 2019 Authenticate/Decrypt packet error: packet HMAC authentication failed
Wed Dec 18 02:18:39 2019 [[LONG_RANDOM_STRING]] Inactivity timeout (--ping-restart), restarting
Wed Dec 18 02:18:39 2019 SIGUSR1[soft,ping-restart] received, process restarting
Wed Dec 18 02:18:44 2019 TCP/UDP: Preserving recently used remote address: [AF_INET][IP]:1197
Wed Dec 18 02:18:44 2019 UDP link local: (not bound)
Wed Dec 18 02:18:44 2019 UDP link remote: [AF_INET][IP]:1197
Wed Dec 18 02:18:45 2019 [[LONG_RANDOM_STRING]] Peer Connection Initiated with [AF_INET][IP]:1197
Wed Dec 18 02:18:46 2019 AUTH: Received control message: AUTH_FAILED
Wed Dec 18 02:18:46 2019 SIGUSR1[soft,auth-failure (auth-token)] received, process restarting
These are the files I use:
[pia.ovpn]
client
dev tun
proto udp
remote [server].privateinternetaccess.com 1197
resolv-retry infinite
keepalive 10 60
nobind
persist-key
persist-tun
tls-client
remote-cert-tls server
auth-user-pass /vpn/vpn.auth
comp-lzo
verb 1
reneg-sec 0
redirect-gateway def1
disable-occ
fast-io
ca /vpn/ca.rsa.2048.crt
crl-verify /vpn/crl.rsa.2048.pem
vpn.auth contains my username and password. I got both ca.rsa.2048.crt and crl.rsa.2048.pem from this PIA support page.
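For reference, auth-user-pass expects a two-line plain-text file: the username on the first line and the password on the second (placeholders shown below, not real values):
[vpn.auth]
[your_pia_username]
[your_pia_password]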
Not sure if it is relevant, but this is the docker-compose file I used.
version: '2'
services:
  openvpn:
    image: dperson/openvpn-client:armhf
    container_name: openvpn
    cap_add:
      - net_admin
    environment:
      - TZ=[timezone]
    networks:
      - vpn
    read_only: true
    tmpfs:
      - /run
      - /tmp
    restart: always
    security_opt:
      - label:disable
    stdin_open: true
    tty: true
    volumes:
      - /dev/net:/dev/net:z
      - [PATH_TO]/vpn:/vpn
networks:
  vpn:
I hope that someone sees what goes wrong here!
As I can see in your logs, you received the Inactivity timeout (--ping-restart), restarting message a short time after a successful connection.
I had the same issue.
My client connected successfully and was restarted a few seconds (20-40) later.
In my case I was actually running two clients with the same client name (CN) on different hosts.
To fix it I generated a separate client for each host.
For me, the problem was using the default PIA config. Once I switched to the OPENVPN CONFIGURATION FILES (STRONG) ones, the problem was gone.
You can find the configs at https://www.privateinternetaccess.com/helpdesk/kb/articles/where-can-i-find-your-ovpn-files-2, and if the link goes down, try googling "pia config".
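For what it's worth, the repeated SWEET32 warnings in the log mean the tunnel negotiated a 64-bit block cipher, which is exactly what the strong profiles avoid. As a rough sketch of the settings a strong profile pins (an assumption on my part; verify against the .ovpn you actually download rather than copying these lines):
cipher aes-256-cbc
auth sha256
ca /vpn/ca.rsa.4096.crt
crl-verify /vpn/crl.rsa.4096.pem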

Is it normal for the docker daemon to kill/restart containers in a short time span?

We started to monitor docker events in our k8s cluster and noticed that there are a lot of Kill/Die/Stop/Destroy events for various containers in a short time period.
Is that normal? (I assume it's not)
Apparently it is not a capacity problem:
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 16 Aug 2018 11:19:30 -0300 Tue, 14 Aug 2018 14:02:37 -0300 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 16 Aug 2018 11:19:30 -0300 Tue, 14 Aug 2018 14:02:37 -0300 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 16 Aug 2018 11:19:30 -0300 Tue, 14 Aug 2018 14:02:37 -0300 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 16 Aug 2018 11:19:30 -0300 Fri, 11 May 2018 16:37:48 -0300 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 16 Aug 2018 11:19:30 -0300 Tue, 14 Aug 2018 14:02:37 -0300 KubeletReady kubelet is posting ready status
All Pods show status "Running".
Any tips on how to debug this further?
You can inspect the docker container status with the following command on the node hosts where the pods run:
docker inspect <container id>
More options are described here.
The event logs and journal logs are also helpful for debugging:
kubectl get events
journalctl --no-pager
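If the cluster events don't explain it, a possible next step (a sketch only; the filters and names are examples) is to watch the docker event stream on the affected node and correlate it with each pod's restart reason:
# On the node: stream only kill/die events
docker events --filter 'event=kill' --filter 'event=die'
# In the cluster: order events by time and check a pod's last state and restart count
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp
kubectl describe pod <pod-name>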

Container exits if invoked from compose

I have a dockerized server process that merely listens on port 5000:
[admin@gol05854 compose]$ cat ../proc1/server.sh
#!/bin/sh
echo `date` "Starting server"
nc -v -l -p 5000
echo `date` "Exiting server"
I have a client that is expected to continuously send messages to the server:
[admin@gol05854 compose]$ cat ../client/client.sh
#!/bin/sh
echo `date` "Starting client"
while true
do
date
done | nc my_server 5000
echo `date` "Ending client"
I start these together using compose. However, the server exits with the following messages:
[admin@gol05854 compose]$ docker logs e1_my_server_1
Wed Oct 26 04:10:34 UTC 2016 Starting server
listening on [::]:5000 ...
connect to [::ffff:172.27.0.2]:5000 from e1_my_client_1_1.e1_default:36500 ([::ffff:172.27.0.3]:36500)
Wed Oct 26 04:10:36 UTC 2016
Wed Oct 26 04:10:36 UTC 2016
Wed Oct 26 04:10:36 UTC 2016
Wed Oct 26 04:10:36 UTC 2016
Wed Oct 26 04:10:36 UTC 2016
Exiting server
What is surprising is that if the same containers are started without compose, using docker run, the server remains running.
What is it that docker compose does that causes the server to exit after receiving a few messages?
The code can be found at https://github.com/yashgt/dockerpoc
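In case it helps while debugging, here is a minimal sketch of a server script that survives client disconnects by looping around nc (an assumption about the intended behaviour, not an explanation of the compose difference):
#!/bin/sh
echo `date` "Starting server"
# Restart the listener after each client disconnect so a single closed
# connection does not end the container's main process.
while true
do
    nc -v -l -p 5000
    echo `date` "Client disconnected, listening again"
done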
