Datastax: CorruptSSTableException in the debug log file - datastax-enterprise

We are getting the following exception in the debug log under '/var/log/cassandra':
java.lang.RuntimeException: org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: /var/lib/cassandra/data/keyspace/table-name-10c4a6813cc011e7bb285ba6dc91a4cc/mc-3600-big-Data.db
We perform nodetool repair daily on one node. After the crashes started to occur more often, we performed a full nodetool repair a couple of times, but it didn't help for long.
We are running on DSE 5.0.7.
Can you please help us figure out how we can address this?
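For context, here is a rough sketch (in Python, via subprocess) of what the repairs described above might look like when scripted; the keyspace name is a placeholder and nodetool is assumed to be on the PATH:
import subprocess

KEYSPACE = "keyspace"  # placeholder, not the real keyspace name

# Daily repair on one node, as described above.
subprocess.run(["nodetool", "repair", KEYSPACE], check=True)

# Full repair, as attempted after the corruption errors became more frequent.
subprocess.run(["nodetool", "repair", "--full", KEYSPACE], check=True)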

Related

Urbit error: Assertion '!"plan: no pier"' failed in vere/pier.c:2091

I downloaded Urbit and am attempting to boot my ship for the first time. Following the instructions online, I ran this command:
./urbit -w sampel-palnet -k path/to/my-planet.key
The script progressed until this line:
boot: downloading pill https://bootstrap.urbit.org/urbit-v0.10.4.pill
After some time, I received the error in the title of this question.
Does anyone know what the error means and what I can do to resolve it?
Have you tried checking this issue on GitHub? It's not exactly the same error as yours, but it's similar, so it could be related.
Did you also check the Urbit docs?
If none of this works, I would recommend creating an issue on Urbit's GitHub page.

Dask asyncio tornado TimeoutError

I'm running a Dask-YARN job on a YARN cluster on a schedule. The job creates a list of Delayed Dask tasks, and submits it to the cluster using the following code:
import dask
from dask.distributed import Client
from dask_yarn import YarnCluster

cluster = YarnCluster()
cluster.scale(8)
app_id = cluster.application_client.id
client = Client(cluster)
dask.compute(dask_tasks)
cluster.shutdown()
client.close()
Then it fetches the application worker logs using the command:
yarn logs -applicationId {app_id} -log_files dask.worker.log
After printing all the worker logs, I see the following error message:
End of LogType:dask.worker.log
********************************************************************************
2019/11/28 11:16:24 - asyncio - ERROR - Future exception was never retrieved
future: <Future finished exception=TimeoutError('Timeout')>
tornado.util.TimeoutError: Timeout
This job runs on a schedule, and the error message above appears intermittently. The job also completes successfully every time this error shows up. Does anyone have an idea what causes this error?
Logged warnings like these can sometimes occur if things didn't clean up perfectly. In practice it's not a big deal though. If your job completes successfully then I would probably ignore it.
If you're able to provide a minimal reproducible example then you might consider submitting an issue to the dask-yarn issue tracker.
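For reference, a minimal sketch of one way to make the teardown explicit; the delayed task list here is a placeholder, and the idea is simply to close the Client before shutting down the YarnCluster so nothing is left waiting when the cluster goes away:
import dask
from dask.distributed import Client
from dask_yarn import YarnCluster

# Placeholder task list standing in for the real delayed tasks.
dask_tasks = [dask.delayed(sum)([i, 1]) for i in range(8)]

cluster = YarnCluster()  # assumes a configured YARN environment
cluster.scale(8)
client = Client(cluster)
try:
    results = dask.compute(*dask_tasks)
finally:
    # Close the client first, then tear down the cluster, so pending
    # connections are resolved instead of timing out during shutdown.
    client.close()
    cluster.shutdown()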

Archive querying error (mod_mam)

This is the error I am getting from mod_mam; I need help understanding it.
Error log:
[error] <0.27129.40> CRASH REPORT Process <0.27129.40> with 0 neighbours exited with reason: {process_limit,{max_queue,5590}} in p1_fsm:terminate/8 line 755
2015-12-07 07:25:47.714 [error] <0.336.0> Supervisor ejabberd_c2s_sup had child undefined started with {ejabberd_c2s,start_link,undefined} at <0.27129.40> exit with reason {process_limit,{max_queue,5590}} in context child_terminated
2015-12-07 07:25:53.209 [error] <0.479.0>#gen_iq_handler:process_iq:128 {badarg,[{erlang,binary_to_atom,[null,utf8],[]},{jlib,binary_to_atom,1,[{file,"src/jlib.erl"},{line,934}]},{mod_mam,'-select/8-fun-2-',3,[{file,"src/mod_mam.erl"},{line,675}]},{lists,map,2,[{file,"lists.erl"},{line,1237}]},{mod_mam,select,8,[{file,"src/mod_mam.erl"},{line,669}]},{mod_mam,select_and_send,10,[{file,"src/mod_mam.erl"},{line,569}]},{gen_iq_handler,process_iq,6,[{file,"src/gen_iq_handler.erl"},{line,127}]},{gen_iq_handler,handle_info,2,[{file,"src/gen_iq_handler.erl"},{line,171}]}]}
This is related to querying the archive. Due to my lack of knowledge of Erlang, I am not able to understand it. Please help me understand this.
The reason for the first error is that your service is overloaded:
{process_limit,{max_queue,5590}}
Note that ejabberd's process_limit error message is about a per-process limit on the number of messages in the queue. This is to avoid having a slow process or message receiver take all server resources.
It has nothing to do with Erlang's limit on the number of allowed processes per node.
Regarding your second error in the logs, I guess this is because you have upgraded your instance from an older version and have old messages in the archive. We improved the ejabberd code to support these old stored messages.
This is already committed in ejabberd head, and the fix will be published in the ejabberd 15.12 release.
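As a rough analogy for the first error (this is not ejabberd code, just a Python illustration of a bounded per-process mailbox), new work is rejected once the queue is full rather than letting a slow receiver consume all server resources:
import queue

# Toy bounded mailbox; 5590 mirrors the max_queue value in the crash report.
mailbox = queue.Queue(maxsize=5590)

def deliver(message):
    try:
        mailbox.put_nowait(message)
    except queue.Full:
        # ejabberd instead terminates the overloaded process with
        # {process_limit,{max_queue,N}}.
        raise RuntimeError("mailbox full: receiver is too slow")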

Meteor [RangeError: Out of memory] exited with code: 3

I'm running a Meteor app that connects to SQL and Mongo databases. When I start the app I get this error after successfully connecting to MSSQL. I don't know if the error is related to Kadira or not, but here's what happens:
Every once in a while the app will run successfully, only to exit and restart randomly after I see the out-of-memory error.
I also sometimes see:
C:\Users...\.meteor\local\build\programs\server\packages\meteorhacks_kadira.js:3130
originalRun.call(this,val);
^
RangeError: Out of memory
at Fibers.run (packages/meteorhacks:kadira/.../async.js:25:1)
at Object._onImmediate (packages/meteor/fiber_helpers.js:126:1)
at processImmediate [as _immediateCallback] (timers.js:354:15)
Does this mean my computer doesn't have enough RAM to run my application? I'm not sure how to track down the source of this error further, but I was wondering if anyone has ever encountered something like this. I don't think this should be happening, because others run the application without this error.

Windows service setup: Error 1001

I am getting the following error when I try to run my service installer on my machine:
Error 1001. The source was not found, but some or all event logs could be searched. Inaccessible logs: Security.
Error 1001. An exception occurred during the Rollback phase of the installation. This exception will be ignored and the rollback will continue. However, the machine might not fully revert to its initial state after the rollback is complete. --> The source was not found, but some or all event logs could be searched. Inaccessible logs: Security.
I have created the Windows service as described in this link: http://support.microsoft.com/kb/816169
Any help on this is much appreciated.
Please note: I don't have admin rights on this machine.
