Does anyone know why I might receive the following error while Neo4j is running: "Waiting for all transactions to close... out-of-order-sequence"? This just repeats again and again.
This seems to occur after heavy write activity, but I'm not sure what is actually causing it.
Thanks.
Related
We have an issue with long-running transactions that has forced us to split our destroys into smaller transactions. The next requirement is to keep the PT functionality. How can I do this? Has anyone run into this issue? We are on version 6.0.2.
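For reference, a minimal sketch of the batched-destroy approach described above, assuming an ActiveRecord model (the Order name, scope, and batch size are purely illustrative):

# Hypothetical example: destroy records in small batches so each batch
# commits in its own short transaction instead of one long-running one.
Order.where(archived: true).in_batches(of: 500) do |batch|
  Order.transaction do
    batch.each(&:destroy) # destroy (not delete) so model callbacks still run
  end
end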
We have a Worker which had a bug that caused erroneous responses to a method being called. The issue has since been fixed; however, when we restart the background workers we still seem to experience the issue.
We know the issue is resolved because, in the meantime, we have moved the logic to a rake task and it is now working fine. We suspect the issue relates to failed or unperformed jobs in the Sidekiq queue.
We tried to overcome this by clearing the Redis DB with the approach below:
# NOTE: FLUSHALL wipes every key in the Redis instance, not just Sidekiq's data
Sidekiq.redis { |r| puts r.flushall }
Has anyone experienced a similar issue when using Sidekiq/Redis, and how did you overcome it?
I think flushall might just cause all of the failed-but-still-queued jobs to be retried immediately.
In most cases, if you fix a bug in the worker without changing the signature, you may be able to just let them retry (assuming the job itself is idempotent, which is kinda recommended for this reason and others).
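For illustration, a rough sketch of an idempotent Sidekiq worker, assuming a hypothetical Invoice model with a processed flag (all names here are made up):

class InvoiceWorker
  include Sidekiq::Worker

  def perform(invoice_id)
    invoice = Invoice.find_by(id: invoice_id)
    # Re-running is harmless: missing or already-processed invoices become no-ops.
    return if invoice.nil? || invoice.processed?

    invoice.process!
    invoice.update!(processed: true)
  end
end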
If however, the bug was in what you were passing into the async job call, then you'll have to remove all those entries because they will continue to fail every time they are retried which, by default, can go on for weeks.
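If that's the case, the already-failed entries sit in Sidekiq's retry set, which you can inspect and prune through sidekiq/api. A rough sketch (the worker class name is just a placeholder):

require 'sidekiq/api'

retries = Sidekiq::RetrySet.new
retries.each do |job|
  # job.klass is the worker class, job.args the arguments it was enqueued with
  job.delete if job.klass == 'MyBuggyWorker'
end
# or, to drop every pending retry regardless of class:
# retries.clear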
I think what you want to do is clear them all... you can do it for that particular queue. You may have to be careful not to remove jobs that were newly queued, if that's a concern, perhaps by examining the job entries (a selective version is sketched after the snippet below). If you just want to nuke them all:
require 'sidekiq/api' # Sidekiq::Queue is part of the optional API module

queue = Sidekiq::Queue.new('your_queue')
queue.clear
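If wiping the whole queue is too blunt, the same API lets you walk the entries and delete only the ones that match; a hedged sketch (the class name and argument check are hypothetical):

require 'sidekiq/api'

Sidekiq::Queue.new('your_queue').each do |job|
  # Delete only jobs from the buggy worker that were enqueued with bad arguments.
  job.delete if job.klass == 'MyBuggyWorker' && job.args.first.nil?
end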
I am losing my patience with "delayed write failed" errors. The error silently disconnects the database from the application, so nothing gets saved while the application is in use. Is there a way to detect the occurrence itself so I can flash a warning? Or perhaps a way to monitor the connection for a disconnection? Everyone seems to miss the balloon tip from Windows XP, so I figured I'd flash a more visible warning that the application must be restarted. It seems Microsoft has found a way to force people to upgrade....
I suppose this could be done with a timer that constantly checks connected users:
// show how many connections the database file currently has
cxlabel1.Caption := IntToStr(DataModule2.ABSDatabase1.GetDBFileConnectionsCount);
But I was thinking more of checking/detecting the occurrence itself. Is there something in Delphi that can detect this?
I would like to hear your ideas on this...
Putting this as an answer because the comment length is limited.
I have seen this before. IIRC, the problem you have is that the Delayed Write Failed error is an OS error; it has nothing to do with your application. Windows has already told your application that the write was committed correctly to disk, and the failure only surfaces later when the cached data cannot actually be written. You would have to hook into the OS errors to see when this is happening.
You need to get the network issues resolved because that's where the problem is. In our situation it was a faulty router that was causing the problem.
It's unfair to expect users to check for the error message and then handle it. They could be out at lunch when it occurs, as it's not immediate. They also have no way of knowing what has been saved and what hasn't. It's only a matter of time before your database is corrupted.
The problem with a timer is that it might tell you everything is fine simply because it fires after the network has recovered from the problem.
A far better approach would be to switch to a client/server setup. You can do this by writing your own server that accepts web-service or other remote calls, or by switching to a database that supports client/server access instead of a file-based one. The client is then told immediately when saving data fails.
I know how useless and vague the title is. Sorry. I don't have much other than some observation and evidence that nothing changed in my code.
I have a Rails 3.2.14 app using DelayedJob and PostgreSQL 9.2. For months, I have had code that has background workers process file contents into the database. Each job/task will load 100K to 1M records. Until very, very recently, when I would watch the database, I could see the records accumulating by calling Product.count, for example.
Now, I only see Product.count jump to a new total when a job/task completes. It is almost as if the entire operation is now being wrapped in a transaction, preventing me from seeing the incremental changes. I have verified that nothing in the relevant areas of code has changed, and I've been on 3.2.14 for some time now.
Does anyone know what this could be? DelayedJob?
I am also using Ruby 2.0.0-p247.
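One way to check whether the work really is running inside a wrapping transaction is to log the connection's open-transaction count from inside the job itself. A rough sketch, assuming a hypothetical ImportJob (the class name and the file-loading step are illustrative):

class ImportJob < Struct.new(:path)
  def perform
    # 0 means no surrounding transaction; 1 or more means something has wrapped the job,
    # so nothing becomes visible to other connections until the job finishes.
    Rails.logger.info "open transactions: #{ActiveRecord::Base.connection.open_transactions}"

    # ... load the file contents into Product records here ...
  end
end

# enqueue as usual:
# Delayed::Job.enqueue ImportJob.new('/path/to/file')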
From this post https://github.com/collectiveidea/delayed_job/issues/585#issuecomment-56743773, it appears that delayed_job is not wrapping the job in a transaction.
I assume there's some kind of race condition, although the code executes in the same transaction, so I'm not quite sure how these numbers could be getting out of sync.
All counter caches on the site have at least a few of these errors.
Make a cron job to reset them.
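A hedged sketch of what such a task could look like, using Rails' built-in reset_counters (the Post/comments names are hypothetical):

# lib/tasks/counters.rake
namespace :counters do
  desc 'Recalculate counter caches from the actual row counts'
  task reset: :environment do
    Post.find_each do |post|
      # reset_counters recomputes posts.comments_count from the comments table
      Post.reset_counters(post.id, :comments)
    end
  end
end

A cron entry can then run the task periodically, e.g. bundle exec rake counters:reset.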