Random failure creating a new Cassandra cluster using OpsCenter - datastax-enterprise

OpsCenter version: 5.1.0
DSE version: 4.6.0
Creating a brand new cluster directly through OpsCenter gives us the following error. It occasionally works with the same settings, but about 95% of the time it fails with the same error. OpsCenter is running on its own box but shares the same security groups as the cluster instances. For good measure, I have opened all TCP ports to all IPs. The following is the stack trace of the error from opscenterd.log:
2015-03-19 10:06:12+0000 [] INFO: Starting provisioning process
2015-03-19 10:06:12+0000 [] INFO: Starting installation phase of cluster provisioning
2015-03-19 10:06:13+0000 [] WARN: HTTP request http://10.x.x.x:61621/alive? failed: Connection was refused by other side: 111: Connection refused.
2015-03-19 10:06:13+0000 [] INFO: Beginning install of OpsCenter agent to 54.x.x.x
2015-03-19 10:06:26+0000 [] WARN: HTTP request http://10.x.x.x:61621/alive? failed: Connection was refused by other side: 111: Connection refused.
2015-03-19 10:06:31+0000 [] INFO: Agent for ip 10.x.x.x is version None
2015-03-19 10:06:31+0000 [] INFO: Agent for ip 10.x.x.x is version u'5.1.0'
2015-03-19 10:07:23+0000 [] INFO: Successfully installed agent and dse on node 10.x.x.x
2015-03-19 10:07:23+0000 [] INFO: Beginning "stop" phase of cluster provisioning
2015-03-19 10:07:25+0000 [] WARN: Marking request '10.x.x.x: /ops/stop' (f6708fa2-b45f-42b4-b992-90a82b460ac7) as failed: /usr/sbin/service dse stop failed
exit status: 1
stdout:
log_daemon_msg is a shell function
Cassandra 2.0 and later require Java 7 or later.
2015-03-19 10:07:25+0000 [] ERROR: Failed to stop node 10.x.x.x: /usr/sbin/service dse stop failed
exit status: 1
stdout:
log_daemon_msg is a shell function
Cassandra 2.0 and later require Java 7 or later.
2015-03-19 10:07:25+0000 [] WARN: Marking request 'stop stage' (0b6fcb6b-96ba-404e-a484-b4b6b167b309) as failed: Failed to stop node 10.x.x.x: /usr/sbin/service dse stop failed
exit status: 1
stdout:
log_daemon_msg is a shell function
Cassandra 2.0 and later require Java 7 or later.
2015-03-19 10:07:25+0000 [] ERROR: Stop stage failed: Failed to stop node 10.x.x.x: /usr/sbin/service dse stop failed
exit status: 1
stdout:
log_daemon_msg is a shell function
Cassandra 2.0 and later require Java 7 or later.
2015-03-19 10:07:25+0000 [] WARN: Marking request 'provision' (daf1c15d-92e3-40b0-83ca-34d548ea835b) as failed: Stop stage failed: Failed to stop node 10.x.x.x: /usr/sbin/service dse stop failed
exit status: 1
stdout:
log_daemon_msg is a shell function
Cassandra 2.0 and later require Java 7 or later.
2015-03-19 10:07:25+0000 [] ERROR:
2015-03-19 10:07:25+0000 [] ERROR: Cluster provisioning failed: Exception: Stop stage failed: Failed to stop node 10.x.x.x: /usr/sbin/service dse stop failed
exit status: 1
stdout:
log_daemon_msg is a shell function
Cassandra 2.0 and later require Java 7 or later.
2015-03-19 10:07:25+0000 [] ERROR: Failed to provision cluster: Cluster provisioning failed: Exception: Stop stage failed: Failed to stop node 10.x.x.x: /usr/sbin/service dse stop failed
exit status: 1
stdout:
log_daemon_msg is a shell function
Cassandra 2.0 and later require Java 7 or later.
2015-03-19 10:07:25+0000 [] WARN: Marking request 28c021fd-d21a-4fed-bb5c-a4fe17d362e0 as failed: Cluster provisioning failed: Exception: Stop stage failed: Failed to stop node 10.x.x.x: /usr/sbin/service dse stop failed
exit status: 1
stdout:
log_daemon_msg is a shell function
Cassandra 2.0 and later require Java 7 or later.
2015-03-19 10:07:41+0000 [] WARN: Unable to find a matching cluster for node with IP [u'fe80:0:0:0:2000:aff:feeb:31c7%2', u'10.x.x.x', u'0:0:0:0:0:0:0:1%1', u'127.0.0.1']; the message was [u'5.1.0', u'/1947480708/conf']. This usually indicates that an OpsCenter agent is still running on an old node that was decommissioned or is part of a cluster that OpsCenter is no longer monitoring.
Appreciate any help!
Thanks in advance
Harsha

OpsCenter developer here. I make the OpsCenter provisioning features go zoom (or splat occasionally, as you've seen). It is with sadness and shame that I must tell you that you're hitting a bug.
The DataStax AMI version 2.4 used by OpsCenter provisioning (https://github.com/riptano/ComboAMI/tree/2.4) does quite a bit of work at boot time via startup scripts. One of those tasks is to set up some gpg repository keys used to validate packages. Intermittently that process can fail, breaking package installs and leading to the series of errors you're seeing; the failure has greatly increased in frequency recently. If you check /home/ubuntu/datastax-ami/ami.log you should see the gpg key failures that begin the rest of the failure chain.
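To confirm, something like this on the affected instance should surface the failures (the exact log wording is a guess on my part):
grep -i gpg /home/ubuntu/datastax-ami/ami.log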
Unfortunately, this error is pretty far down the technology stack and is difficult to work around manually. If you just need to provision a single cluster, you can retry until you get a good run. Otherwise your best bet is to manually launch the instances and use local provisioning to deploy dse/dsc to their private IP addresses (a sketch of the launch command follows the list):
Launch instances using ami-ada2b6c4 (assuming you're in us-east-1)
Make sure to add the instances to the OpsCenterSecurity group.
Make sure you have the private half of the keypair you use (you'll need it during local provisioning)
On the instance data page, hit the advanced pulldown and add the following userdata as text "--raidonly --java7"
Do a local-provisioning run against the private IPs
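A minimal sketch of the launch step with the AWS CLI, assuming us-east-1 and a setup where security groups can be referenced by name; instance count, type, and key name are placeholders:
aws ec2 run-instances --image-id ami-ada2b6c4 --count 3 --instance-type m3.large --key-name your-keypair --security-groups OpsCenterSecurity --user-data="--raidonly --java7"
The = form keeps the CLI from parsing the userdata value, which starts with --, as a flag; the console's advanced userdata box ends up in the same place.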
Not a super-simple workaround. I wish your experience with OpsCenter this time around was more awesome. The good news is I'm on this bug and it will be fixed in an upcoming point release.
Edit: No longer necessary to manually remove /etc/security/limits.d/cassandra.conf

If it's just complaining about Java, install Java 7; DataStax recommends the Oracle JDK and JRE. You might already have Java 7 alongside another version on your nodes, with Java 7 not set as the default. To change that, run:
sudo update-java-alternatives -s java-7-oracle
This command can be scripted over ssh so you don't have to log in to each node, as sketched below.
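A minimal sketch, assuming ssh access as the ubuntu user and placeholder node IPs:
for host in 10.0.0.1 10.0.0.2 10.0.0.3; do
  # -t allocates a terminal in case sudo prompts for a password
  ssh -t ubuntu@"$host" 'sudo update-java-alternatives -s java-7-oracle'
done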

Related

GPU Acceleration with WSL 2

I'm trying to set up TensorFlow to use GPU acceleration with WSL 2 running Ubuntu 20.04. I'm following this tutorial and am running into the error seen here. When I follow the solution there and try to start Docker with sudo service docker start, I get told docker is an unrecognized service. However, considering I can access the help menu and whatnot, I know Docker is installed. I can get Docker to work with the desktop tool, but since that doesn't support CUDA (as mentioned in the SO post from earlier), it's not very helpful. It's not really giving me error logs or anything, so please ask if you need more details.
Edit:
Considering the lack of details, here is a list of solutions I've tried to no avail. 1 2 3
Update: I used sudo dockerd to get the Docker daemon started and tried running the nvidia benchmark container, only to be met with
INFO[2020-07-18T21:04:05.875283800-04:00] shim containerd-shim started address=/containerd-shim/021834ef5e5600bdf62a6a9e26dff7ffc1c76dd4ec9dadb9c1fcafb6c88b6e1b.sock debug=false pid=1960
INFO[2020-07-18T21:04:05.899420200-04:00] shim reaped id=70316df254d6b2633c743acb51a26ac2d0520f6f8e2f69b69c4e0624eaac1736
ERRO[2020-07-18T21:04:05.909710600-04:00] stream copy error: reading from a closed fifo
ERRO[2020-07-18T21:04:05.909753500-04:00] stream copy error: reading from a closed fifo
ERRO[2020-07-18T21:04:06.001006700-04:00] 70316df254d6b2633c743acb51a26ac2d0520f6f8e2f69b69c4e0624eaac1736 cleanup: failed to delete container from containerd: no such container
ERRO[2020-07-18T21:04:06.001045100-04:00] Handler for POST /v1.40/containers/70316df254d6b2633c743acb51a26ac2d0520f6f8e2f69b69c4e0624eaac1736/start returned error: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request\\\\n\\\"\"": unknown
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request\\\\n\\\"\"": unknown.
ERRO[0000] error waiting for container: context canceled
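For reference, the benchmark run that produced the errors above was launched roughly like this, following NVIDIA's CUDA-on-WSL guide (the exact image tag is an assumption on my part):
docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark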
Update 2: After installing Windows Insider and making everything as up to date as possible, I encountered a different error.
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
-fullscreen (run n-body simulation in fullscreen mode)
-fp64 (use double precision floating point values for simulation)
-hostmem (stores simulation data in host memory)
-benchmark (run benchmark to measure performance)
-numbodies=<N> (number of bodies (>= 1) to run in simulation)
-device=<d> (where d=0,1,2.... for the CUDA device to use)
-numdevices=<i> (where i=(number of CUDA devices > 0) to use for simulation)
-compare (compares simulation results running once on the default GPU and once on the CPU)
-cpu (run n-body simulation on the CPU)
-tipsy=<file.bin> (load a tipsy model file for simulation)
NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
Error: only 0 Devices available, 1 requested. Exiting.
I have a GTX 970, so I'm not sure why it isn't being detected. Running sudo lshw -C display confirmed that the graphics card isn't detected. I got:
*-display UNCLAIMED
description: 3D controller
product: Microsoft Corporation
vendor: Microsoft Corporation
physical id: 4
bus info: pci@941e:00:00.0
version: 00
width: 32 bits
clock: 33MHz
capabilities: bus_master cap_list
configuration: latency=0

Nebula Graph fails on CentOS 6.5

Nebula Graph fails on CentOS 6.5, the error message is as follows:
# storage log
Heartbeat failed, status:RPC failure in MetaClient: N6apache6thrift9transport19TTransportExceptionE: AsyncSocketException: connect failed, type = Socket not open, errno = 111 (Connection refused): Connection refused
# meta log
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0415 22:32:38.944437 15532 AsyncServerSocket.cpp:762] failed to set SO_REUSEPORT on async server socket Protocol not available
E0415 22:32:38.945001 15510 ThriftServer.cpp:440] Got an exception while setting up the server: 92failed to bind to async server socket: [::]:0: Protocol not available
E0415 22:32:38.945057 15510 RaftexService.cpp:90] Setup the Raftex Service failed, error: 92failed to bind to async server socket: [::]:0: Protocol not available
E0415 22:32:38.949586 15463 NebulaStore.cpp:47] Start the raft service failed
E0415 22:32:38.949597 15463 MetaDaemon.cpp:88] Nebula store init failed
E0415 22:32:38.949796 15463 MetaDaemon.cpp:215] Init kv failed!
Nebula service status is as follows:
[root@redhat6 scripts]# ./nebula.service status all
[WARN] The maximum files allowed to open might be too few: 1024
[INFO] nebula-metad: Exited
[INFO] nebula-graphd: Exited
[INFO] nebula-storaged: Running as 15547, Listening on 44500
Reason for the error: the CentOS 6.5 kernel is version 2.6.32, which is older than 3.9, but SO_REUSEPORT is only supported on Linux 3.9 and above.
Upgrading the system to CentOS 7.5 solves the problem.
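A quick way to confirm the running kernel (a simple sanity check, nothing Nebula-specific):
uname -r
On CentOS 6.5 this reports a 2.6.32-based kernel, which predates the introduction of SO_REUSEPORT in Linux 3.9.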

Running a bitcoin node on regtest network fails

I'm trying to run a bitcoin network on regtest with this version of bitcoin node so I can test out bitpay's insight-ui block explorer.
Running on regtest, I get this repeating error:
Assertion failed: (psocket), function Shutdown, file zmq/zmqpublishnotifier.cpp, line 92.
[2017-05-19T00:42:44.515Z] warn: Bitcoin process unexpectedly exited with code: null
[2017-05-19T00:42:44.515Z] warn: Restarting bitcoin child process in 5000ms
[2017-05-19T00:42:49.516Z] info: Using bitcoin config file: /Users/harshagoli/BTCT/bitcoin.conf
[2017-05-19T00:42:49.517Z] warn: Stopping existing spawned bitcoin process with pid: 12690
[2017-05-19T00:42:49.517Z] warn: Unclean bitcoin process shutdown, process not found with pid: 12690
[2017-05-19T00:42:49.517Z] info: Starting bitcoin process
Which eventually becomes
[2017-05-19T00:42:54.133Z] error: RPCError: Bitcoin JSON-RPC: Request Error: connect ECONNREFUSED 127.0.0.1:8332
at Bitcoin._wrapRPCError (/Users/harshagoli/mynode/node_modules/bitcore-node/lib/services/bitcoind.js:449:13)
at /Users/harshagoli/mynode/node_modules/bitcore-node/lib/services/bitcoind.js:781:28
at ClientRequest.<anonymous> (/Users/harshagoli/mynode/node_modules/bitcore-node/node_modules/bitcoind-rpc/lib/index.js:116:7)
at emitOne (events.js:77:13)
at ClientRequest.emit (events.js:169:7)
at Socket.socketErrorListener (_http_client.js:269:9)
at emitOne (events.js:77:13)
at Socket.emit (events.js:169:7)
at emitErrorNT (net.js:1269:8)
at nextTickCallbackWith2Args (node.js:458:9)
[2017-05-19T00:42:54.133Z] info: Beginning shutdown
[2017-05-19T00:42:54.133Z] info: Stopping insight-ui (not started)
[2017-05-19T00:42:54.134Z] info: Stopping insight-api (not started)
[2017-05-19T00:42:54.134Z] info: Stopping web (not started)
[2017-05-19T00:42:54.135Z] info: Stopping bitcoind
After which I have the recurring error
[2017-05-19T00:42:54.221Z] error: Error: Stopping while trying to spawn bitcoind.
at /Users/harshagoli/mynode/node_modules/bitcore-node/lib/services/bitcoind.js:905:25
at /Users/harshagoli/mynode/node_modules/bitcore-node/node_modules/async/lib/async.js:676:51
at /Users/harshagoli/mynode/node_modules/bitcore-node/node_modules/async/lib/async.js:726:13
at /Users/harshagoli/mynode/node_modules/bitcore-node/node_modules/async/lib/async.js:52:16
at /Users/harshagoli/mynode/node_modules/bitcore-node/node_modules/async/lib/async.js:264:21
at /Users/harshagoli/mynode/node_modules/bitcore-node/node_modules/async/lib/async.js:44:16
at /Users/harshagoli/mynode/node_modules/bitcore-node/node_modules/async/lib/async.js:723:17
at /Users/harshagoli/mynode/node_modules/bitcore-node/node_modules/async/lib/async.js:167:37
at /Users/harshagoli/mynode/node_modules/bitcore-node/node_modules/async/lib/async.js:652:25
at /Users/harshagoli/mynode/node_modules/bitcore-node/lib/services/bitcoind.js:887:16
Thoughts on how I can get this up and running with a block to look at so I can use the block explorer?
Okay, I figured it out. Some other bitcoind processes had zombied out and were listening on the port this process was trying to access. I ran this command to kill the other processes:
killall -9 bitcoind
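If you want to confirm which processes are the culprits before killing anything, a generic check like this works (not specific to bitcore):
ps aux | grep '[b]itcoind'
The bracketed pattern keeps the grep process itself out of the output.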
Also, to create more blocks on regtest (while in your node directory), use this command:
./node_modules/bitcore-node/bin/bitcoin-0.12.1/bin/bitcoin-cli -datadir=/Users/harshagoli/mynode/data -regtest generate 150

Why does Jenkins hang when I try to build with the gradle-release plugin?

I set up the release plugin on my Grails project and successfully ran it on my localhost.
When I try to set up the same build in Jenkins, the build hangs indefinitely. The last thing in the output before it hangs is the checkCommitNeeded step.
Anything I can do to figure out what's going wrong?
I have set -Prelease.useAutomaticVersion=true and the two version params in switches, as mentioned in the plugin docs (full invocation sketched below).
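For concreteness, the full invocation per the plugin docs looks like this (the version numbers here are placeholders):
./gradlew release -Prelease.useAutomaticVersion=true -Prelease.releaseVersion=1.0.0 -Prelease.newVersion=1.1.0-SNAPSHOT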
Update
On the researchgate Gitter, Christian Gonzalez mentioned that Jenkins is detecting another commit caused by the release plugin, and getting itself stuck in a loop. For Git, an additional behavior can be added to ignore changes committed by the plugin. However, my project is using SVN.
Update
Below is a snippet of the output from adding -d:
11:12:48.907 [DEBUG] [org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter] Executing actions for task ':checkCommitNeeded'.
11:12:48.908 [INFO] [org.gradle.api.Project] Running [svn, status] in [/var/lib/jenkins/jobs/MyTeam/jobs/MyProject/jobs/MyProject-release/workspace]
11:12:48.924 [INFO] [org.gradle.api.Project] Running [svn, status] produced output: []
11:12:48.926 [DEBUG] [org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter] Finished executing task ':checkCommitNeeded'
11:12:48.926 [INFO] [org.gradle.execution.taskgraph.AbstractTaskPlanExecutor] :checkCommitNeeded (Thread[Daemon worker,5,main]) completed. Took 0.02 secs.
11:12:48.926 [DEBUG] [org.gradle.internal.operations.DefaultBuildOperationWorkerRegistry] Worker root.3 completed (0 in use)
11:12:48.926 [DEBUG] [org.gradle.internal.operations.DefaultBuildOperationWorkerRegistry] Worker root.4 started (1 in use).
11:12:48.926 [INFO] [org.gradle.execution.taskgraph.AbstractTaskPlanExecutor] :checkUpdateNeeded (Thread[Daemon worker,5,main]) started.
11:12:48.927 [LIFECYCLE] [class org.gradle.internal.buildevents.TaskExecutionLogger] :myproject:checkUpdateNeeded
11:12:48.927 [DEBUG] [org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter] Starting to execute task ':checkUpdateNeeded'
11:12:48.927 [DEBUG] [org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter] Determining if task ':checkUpdateNeeded' is up-to-date
11:12:48.927 [INFO] [org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter] Executing task ':checkUpdateNeeded' (up-to-date check took 0.0 secs) due to:
Task has not declared any outputs.
11:12:48.927 [DEBUG] [org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter] Executing actions for task ':checkUpdateNeeded'.
11:12:48.928 [INFO] [org.gradle.api.Project] Running [svn, status, -q, -u] in [/var/lib/jenkins/jobs/MyTeam/jobs/MyProject/jobs/MyProject-release/workspace]
11:12:51.477 [DEBUG] [org.gradle.launcher.daemon.server.Daemon] DaemonExpirationPeriodicCheck running
11:12:51.479 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Waiting to acquire shared lock on daemon addresses registry.
11:12:51.480 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Lock acquired.
11:12:51.481 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Releasing lock on daemon addresses registry.
11:13:01.477 [DEBUG] [org.gradle.launcher.daemon.server.Daemon] DaemonExpirationPeriodicCheck running
11:13:01.477 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Waiting to acquire shared lock on daemon addresses registry.
11:13:01.478 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Lock acquired.
11:13:01.480 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Releasing lock on daemon addresses registry.
11:13:11.477 [DEBUG] [org.gradle.launcher.daemon.server.Daemon] DaemonExpirationPeriodicCheck running
11:13:11.477 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Waiting to acquire shared lock on daemon addresses registry.
11:13:11.477 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Lock acquired.
11:13:11.479 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Releasing lock on daemon addresses registry.
...
The last 4 lines are repeated over and over.
I faced the same issue. For me, the reason was a wrong setup configuration for the project: for example, a GitHub URL added without the .git extension, an incorrect Poll SCM config, etc.
The fix for me was to restart the Jenkins server, correct the settings under 'Manage' for the project, and build again.

Erlang version 18.0 and ejabberd nodename conflict

I wanted to install the latest ejabberd from https://github.com/processone/ejabberd. For this, Erlang/OTP 18 is required, which I also installed manually from https://github.com/erlang/otp. Then I need to start the ejabberd server with the command ejabberdctl start, but it fails with an error.
My Mnesia node name is akash@akash-Latitude-3450 and the ejabberd node name is akash@localhost. Because of this, the server is not starting. How do I resolve this conflict?
Log ->
2016-01-07 18:38:20.410 [critical] <0.39.0>@ejabberd_app:db_init:125 Node name mismatch: I'm [ejabberd@localhost], the database is owned by ['ejabberd@akash-Latitude-3450']
2016-01-07 18:38:20.410 [critical] <0.39.0>@ejabberd_app:db_init:127 Either set ERLANG_NODE in ejabberdctl.cfg or change node name in Mnesia
2016-01-07 18:38:20.410 [error] <0.38.0> CRASH REPORT Process <0.38.0> with 0 neighbours exited with reason: node_name_mismatch in ejabberd_app:db_init/0 line 129 in application_master:init/4 line 134
You have two options:
Name your Erlang node to match the name your Mnesia database was created with when you start ejabberd. As the error message suggests, this can be set via the ERLANG_NODE variable in ejabberdctl.cfg (see the sketch below).
Back up the Mnesia database by starting the node under the old name, do a fresh install, and restore your data with the node started under the new name.
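For option 1, a minimal sketch of the change, using the node name the log says owns the database (the ejabberdctl.cfg path varies by install, e.g. /etc/ejabberd/ejabberdctl.cfg):
ERLANG_NODE=ejabberd@akash-Latitude-3450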
