We have an issue with long-running transactions that has forced us to split up our destroys into smaller transactions. The next requirement is to keep the PT functionality. How can I do this? Has anyone run into this issue? Also, we are on version 6.0.2.
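A minimal sketch of the batching idea in plain Ruby (the method name, batch size, and commented-out model call are all illustrative, and I'm assuming "PT" refers to a callback-based versioning library such as PaperTrail — in that case each slice runs in its own transaction, and using `destroy` rather than `delete` keeps the versioning callbacks firing):

```ruby
# Hypothetical sketch: split one huge destroy into fixed-size batches so
# no single transaction runs for the whole set. In the real app each
# slice would be wrapped in its own transaction, e.g.
#   Model.transaction { Model.where(id: slice).destroy_all }
# and destroy_all (not delete_all) keeps callback-based versioning working.
def in_batches(ids, batch_size)
  ids.each_slice(batch_size).map do |slice|
    # commit point: one transaction per slice
    slice
  end
end

batches = in_batches((1..2_500).to_a, 1_000)
puts batches.length        # 3
puts batches.last.length   # 500
```

The trade-off is that a failure partway through leaves earlier batches committed, so the operation is no longer all-or-nothing.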
I have a Rails active job running on a Heroku dyno via Sidekiq.
When the job runs, the memory grows from 150MB to 550MB.
I found out the cause was N+1 queries to the DB, and fixed it by doing my calculations in the DB query instead of in the code.
Afterwards I wanted to refactor a bit, as I generally like to keep the SQL somewhat simple and have the logic in the code. For this reason I switched my "joins" with "includes", which allows me to use the associations of the objects. It turned out, though, that this refactoring reintroduced the memory issue.
So my conclusion is that it is the use of the associations that causes the memory growth, as the number of SQL queries is the same after I fixed the N+1 issue. Please note the job handles a lot of objects, around 250,000. That seems reasonable given the delete vs. destroy distinction, where delete performs better because the objects themselves are not instantiated as they are with destroy.
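The instantiation cost behind this can be illustrated in plain Ruby, without Rails (the `Item` struct and the numbers are made up; it stands in for an ActiveRecord model being materialized per row):

```ruby
# Rough illustration of the includes-vs-joins / destroy-vs-delete point:
# materializing an object per row costs memory even when the final
# answer is identical to a raw aggregate.
Item = Struct.new(:amount)

rows = Array.new(10_000) { 5 }  # stand-in for 10,000 DB rows

# "includes"-style: build an object per row, then compute in Ruby
object_sum = rows.map { |v| Item.new(v) }.sum(&:amount)

# "DB aggregate"-style: no per-row objects (think SUM(...) in SQL)
raw_sum = rows.sum

puts object_sum == raw_sum  # same result, very different allocations
```

With ~250,000 rows, the per-object allocation (plus association proxies) is exactly the kind of overhead that would explain the 150MB → 550MB growth.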
Is my conclusion accurate, or am I missing something?
Thanks,
-Louise
I have been developing an iOS app that utilizes the CloudKit feature available to Apple Developers. I've found it to be a wonderful resource, especially since the very day I started designing my backend, the service I was intending to use (Parse) announced it was shutting down. It's very appealing due to its small learning curve, but I'm starting to notice some annoying little issues here and there, so I'm seeking out some experts for advice and help. I posted another CloudKit question a couple of days ago, which is still occurring: CloudKit Delete Self Option Not Working. But I want to limit this to a different issue that may be related.
Problem ~ Ever since I started using CloudKit, I have noticed that whenever I manually try to edit a record (delete an entry, remove or add part of a list, or even add a DeleteSelf option to a CKReference after creation) and then try to save the change, I get an error message and cannot proceed. Here is a screenshot of the error window that appears:
It's frustrating, because any time I want to manipulate a record to perform some sort of test, I either have to do it through my app or delete the record entirely and create a new one (which I am able to do without issue). I have been working around this issue for over a month now because it wasn't fatal to my progress. However, I am starting to think that it could be related to my other CloudKit issues, and maybe if I could get some advice on how to fix it, I could also solve my other problems. I have filed numerous bug reports with Apple, but haven't received a response or seen any changes.
I'd also like to mention that for a very long time now (at least a few days), I've noticed in the bottom-left corner of my Dashboard that it consistently says it's "Reindexing Development Data". At first that wasn't an issue: I would get that notification after making a change, but it would go away once the operation completed. Now it seems to be stuck somewhere inside the process. It's a chronic issue; the message is there all the time, even right when I log into my dashboard.
Here is what I'm talking about:
As time goes on I find more small issues with CloudKit, and I'm concerned that once I go into production more problems could start manifesting, and then I could have a serious issue. I'd love to stick with CloudKit and avoid the learning curve of a different service like Amazon Web Services, but I also don't want to set myself up for failure.
Can anyone help me with this issue, or has anyone else experienced it on a regular basis? Thanks for the advice and help!
Pierce,
I found myself in a similar situation; the issue seemed to be linked to Assets (I had an Asset in my record definition). I, and several others I noted, reported the re-indexing issue on the Apple support website, and after about a month it eventually disappeared.
Have you tried resetting your database schema completely? Snapshot the definition first, since a reset zaps it entirely and you then restore it; see inset.
Ultimately I simply created a new project, linked it to CloudKit, and used the new container in my original app.
I know how useless and vague the title is. Sorry. I don't have much other than some observations and evidence that nothing changed in my code.
I have a Rails 3.2.14 app using DelayedJob and PostgreSQL 9.2. For months, I have had code that has background workers process file contents into the database. Each job/task will load 100K to 1M records. Until very, very recently, when I would watch the database, I could see the records accumulating by calling Product.count, for example.
Now, I only see Product.count update to a new sum when a job/task completes. It is almost as if the entire operation is now being wrapped in a transaction, preventing me from seeing the incremental changes. I have verified that nothing in the relevant areas of code has changed, and I've been on 3.2.14 for some time now.
Does anyone know what this could be? DelayedJob?
I am also using Ruby 2.0.0-p247.
From this post https://github.com/collectiveidea/delayed_job/issues/585#issuecomment-56743773, it appears that delayed_job does not wrap the job in a transaction.
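If incremental visibility is what you want regardless of the cause, one pattern (a sketch; the model call is commented out and the names are hypothetical) is to commit in slices, so an outside observer calling `Product.count` sees the number climb step by step rather than jump once at the end:

```ruby
# Sketch of per-batch commits. Each slice would be inserted inside its
# own transaction, so other connections see the count grow as the job
# progresses instead of only when it finishes.
def commit_points(total, batch_size)
  counts = []
  running = 0
  total.times.each_slice(batch_size) do |slice|
    # Product.transaction { slice.each { |attrs| Product.create!(attrs) } }
    running += slice.size
    counts << running  # what Product.count would report after each commit
  end
  counts
end

puts commit_points(10_500, 5_000).inspect  # [5000, 10000, 10500]
```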
I have a multiuser Delphi program with a Firebird database behind it.
I want two users to be able to insert two records at the same time, each receiving an automatically generated number for a field.
On the other hand, I am not sure Firebird can do this without one user committing and closing the table while the other refreshes it...
I have heard bad things about CommitRetaining and don't know what to do now. For example:
Which transaction setting is best for me?
Wait or no-wait? And if I have to use CommitRetaining, how can I do that safely?
Use GENERATORS. With generators you always get unique numbers; it doesn't matter how many transactions are active, because generators live outside transaction control.
See Firebird Generator Guide
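A minimal sketch in Firebird SQL (the table, column, and generator names are made up): a generator plus a BEFORE INSERT trigger hands out the number, and because GEN_ID runs outside transaction control, two simultaneous inserts never receive the same value.

```sql
-- Create the sequence source
CREATE GENERATOR gen_order_no;

-- Assign a number on insert when the client didn't supply one
SET TERM ^ ;
CREATE TRIGGER trg_orders_bi FOR orders
ACTIVE BEFORE INSERT POSITION 0
AS
BEGIN
  IF (NEW.order_no IS NULL) THEN
    NEW.order_no = GEN_ID(gen_order_no, 1);
END^
SET TERM ; ^
```

Note that numbers consumed by a rolled-back insert are not reused, so generators guarantee uniqueness but not a gap-free sequence.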
Our setup is Rails 3 with six app servers behind a load balancer and one PostgreSQL database.
In our app, a user can "tip" an artist during a performance.
The process flow looks like this:
User clicks on "tip" button
Tip object is created
An after_create callback checks that the user's account has enough money; if so, a financial transaction moves the money. Otherwise, a Rollback exception is raised.
What can happen is that if the user "spams" the tip button, multiple tips can be in process at once. When this occurs, the "does this user have enough money?" check returns the same value for many tips, since the financial transaction has not happened yet.
What I need is to make sure each tip gets processed sequentially, so that the balance check for tip #2 does not happen until tip #1 has updated the balance.
We're already using Resque for other things, so that might be one solution, although I don't know of a way to make sure multiple workers don't start processing jobs in parallel and cause the same issue. Having one worker do tip jobs would not be a viable solution, as our app processes a lot of tips at any given instant.
If you enforce this within database transactions, it is a fairly simple problem to solve.
http://www.postgresql.org/docs/9.1/interactive/mvcc.html
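In ActiveRecord terms that usually means a row lock around the check-then-debit (e.g. `Account.lock.find(id)`, which issues `SELECT ... FOR UPDATE`): the second tip blocks on the locked row until the first commits, so its balance check sees the already-debited amount. The serialization itself can be shown in plain Ruby, with a `Mutex` standing in for the row lock (the class and method names are made up for the sketch):

```ruby
# Each tip acquires the lock, checks the balance, and debits atomically,
# so a later tip always observes the earlier tip's debit. In the real app,
# the transaction holding SELECT ... FOR UPDATE plays the Mutex's role.
class Account
  attr_reader :balance

  def initialize(balance)
    @balance = balance
    @lock = Mutex.new
  end

  def tip(amount)
    @lock.synchronize do
      return false if @balance < amount  # balance check under the lock
      @balance -= amount                 # debit before releasing it
      true
    end
  end
end

account = Account.new(10)
results = Array.new(15) { Thread.new { account.tip(1) } }.map(&:value)
puts results.count(true)  # 10 -- button-spamming never overdraws
puts account.balance      # 0
```

Because the lock serializes only tips against the same account, this scales across many workers and app servers: unrelated tips still run in parallel.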