I have a long-running database import on a web application which massively skews my charts on New Relic.
On the controller I am calling NewRelic.IgnoreTransaction(), but I am still seeing a huge spike in time spent in the database.
The actual import itself is done on a separate thread and wrapped in a transaction, and I'm wondering if this is the reason. Do I need to call IgnoreTransaction() again, either within the transaction or the thread, or is it simply not possible to make this work?
I work for New Relic, and you're correct in your assumption about calling it again.
You will need to call NewRelic.IgnoreTransaction() in both threads for this to work properly.
Calling IgnoreTransaction in the controller ignores only the controller's own transaction, not transactions running on threads it spawns.
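For example, a minimal sketch of the pattern with the .NET agent API (RunImport() here is a placeholder for the actual import work):

public ActionResult Import()
{
    // ignores the controller's own transaction
    NewRelic.Api.Agent.NewRelic.IgnoreTransaction();

    var thread = new System.Threading.Thread(() =>
    {
        // the agent traces this thread separately, so ignore it as well
        NewRelic.Api.Agent.NewRelic.IgnoreTransaction();
        RunImport(); // placeholder for the long-running import
    });
    thread.Start();

    return View();
}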
I am building an app which lists a set of files. When the user performs an action on one of the files, the action is process-intensive (it performs multiple AJAX requests, updates an offline SQLite database, performs various checks on the file, etc.). The user can trigger this action on multiple files at a time.
The problem is that when the action is performed on more than one file, it blocks the UI. Digging into the issue, removing certain parts of the action (e.g. the section that verifies the file size and type and calculates the metadata when it is missing) lessens the UI lock, though it is still there.
From what I've read so far, it is suggested to extract such logic out of the main process, i.e. to have a separate process handle it. The proposed solutions I've read are:
use worker thread
create a child process
create a server
I've been struggling to understand and come up with a working example for each. I came across a module named piscina which is used to spawn workers (my understanding is that it is a wrapper around worker threads?).
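For reference, the pattern I'm attempting looks roughly like this; it's only a sketch, and the worker file name and the doHeavyChecks helper are stand-ins for my actual code:

const path = require('path');
const Piscina = require('piscina');

// create the pool once at module scope, not per event
const piscina = new Piscina({
  filename: path.resolve(__dirname, 'file-worker.js')
});

async function processFile(filePath) {
  // run() posts the task to a worker thread and resolves with its result
  return piscina.run({ filePath });
}

// file-worker.js: runs off the main thread
module.exports = ({ filePath }) => doHeavyChecks(filePath);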
On the repo here I've set up an example which simulates the issue: if I trigger multiple instances of the action by pressing the "Test" button at the bottom and then try to increment the count at the top, I can see the UI lock/lag. It takes approximately 10-15 clicks on the "Test" button to replicate, and I can also hear the CPU fan speeding up while this runs.
Are there any projects/examples around which give a simplified implementation of how to mitigate or work around this problem?
EDIT
After tromgy's suggestions I've performed the following updates/tests:
Adjusting the await run to a regular promise did seem to solve the issue.
But triggering multiple instances simultaneously (for example, 20) still causes UI lock. Attempting the same on my actual project causes memory heap errors.
I tried moving the Piscina instance outside, rather than creating a new instance on each event; this solves the memory leak, but not the UI lock/lag/freeze issue.
I've tried converting the invoke to an actual event, but the UI issue still persists.
I'm using MiniProfiler and noticed that immediately after a rebuild it's reporting a huge number of SQL calls (485).
But the next time I load the page the call times are minimal, so it seems to be caching/re-reading the result, and the SQL calls have dropped to a reasonable number (3). This also seems to be a new occurrence, but I can't pinpoint exactly when it started.
Therefore I'm confused as to whether or not I have a problem. Does anybody know if this pertains to the rebuild, and can I safely ignore it?
Or is it something I should investigate further?
Looks like MiniProfiler is recording all of the DB calls made when you drop and recreate the database.
If you don't want to record these but still want to profile the rest of the request, try using the Ignore command in MiniProfiler:
using (MiniProfiler.Current.Ignore())
{
DbDatabase.SetInitializer<MyDBContext>(
new DropCreateDatabaseIfModelChanges<MyDBContext>());
DbDatabase.Initialize(false);
}
Include the Initialize call within the Ignore block to make sure that initialization happens when you want it to.
The app I am working on fetches a bunch of different newsfeeds when it first starts up and updates any expired ones. While this is happening the interface often freezes and you can't click anything. The network calls are done on a separate thread, but the database operations are done on the main thread. Would this cause the interface to freeze?
I have been told that I should only insert two feeds at a time into the network operation queue so that it won't try all of them at once, but it's already set up to limit how many network calls run at once. I don't understand how having fewer things in the queue at a time would make it faster if they're just going to be put in there sequentially anyway. Please correct me if I am wrong; I'm still pretty new to this.
Any kind of help regarding what could cause the UI to freeze up during startup like this would be much appreciated!
It is always a good idea to move time-consuming operations away from the main thread.
Fortunately, this is pretty simple to do on iOS. If the time-consuming task is fairly simple, you could consider using performSelectorInBackground,
e.g.:
[self performSelectorInBackground:@selector(myFunction:)
                       withObject:myParam];
It is, however, important to remember that you must not access the UI from the background thread. To get back to the main thread, use performSelectorOnMainThread,
e.g.:
[self performSelectorOnMainThread:@selector(myFunction:)
                       withObject:myParam
                    waitUntilDone:YES];
Try applying this strategy to your database calls. Depending on your scenario, you might want to wrap the work in an NSOperation or use an NSThread once the cause of the freeze is found.
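Applied to the database work, it might look roughly like this (the method names below are made up for illustration):

- (void)startFeedUpdate {
    [self performSelectorInBackground:@selector(updateFeedDatabase:)
                           withObject:self.feeds];
}

- (void)updateFeedDatabase:(NSArray *)feeds {
    @autoreleasepool {
        // heavy database work happens off the main thread
        [self saveFeeds:feeds];
        // hand control back to the main thread before touching the UI
        [self performSelectorOnMainThread:@selector(refreshUI)
                               withObject:nil
                            waitUntilDone:NO];
    }
}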
Hi,
I'm going to set up a Rails website where, after some initial user input, some heavy calculations are done (via a C extension to Ruby, using multithreading). As these calculations are going to consume almost all CPU time (and memory too), there should never be more than one calculation running at a time. Also, I can't use (asynchronous) background jobs (as with Delayed Job), because Rails has to show the results of that calculation and the site should work without JavaScript.
So I suppose I need a separate process where all Rails instances have to queue their calculation requests and wait for the answer (maybe an error message if the queue is full): a kind of synchronous job manager.
Does anyone know if there is a gem/plugin with such functionality?
(Nanite seemed pretty cool to me, but it seems to be asynchronous only, so the Rails instances would not know when the calculation is finished. Is that correct?)
Another idea is to write my own using Distributed Ruby (DRb), but why reinvent the wheel if it already exists?
Any help would be appreciated!
EDIT:
Thanks to zaius's tips, I think I will be able to do this asynchronously, so I'm going to try Resque.
Ruby has mutexes / semaphores.
http://www.ruby-doc.org/core/classes/Mutex.html
You can use a mutex to make sure only one resource-intensive process is running at a time.
http://en.wikipedia.org/wiki/Mutex
http://en.wikipedia.org/wiki/Semaphore_(programming)
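A minimal sketch of that idea (run_heavy_calculation is a placeholder for your C extension call):

require 'thread' # Mutex lives here in older Rubies

# shared across the process; only one calculation may hold it at a time
CALC_LOCK = Mutex.new

def calculate
  if CALC_LOCK.try_lock
    begin
      run_heavy_calculation
    ensure
      CALC_LOCK.unlock
    end
  else
    # the lock is taken: reject the request instead of blocking
    raise "a calculation is already running"
  end
end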
However, the idea of blocking a front-end process while other tasks finish doesn't seem right to me. If I were doing this, I would use a background worker and then a page (or an iframe) with the refresh meta tag to continuously check on the progress.
http://en.wikipedia.org/wiki/Meta_refresh
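For example, a status page can re-check progress every few seconds with a single tag (the /status URL here is made up):

<meta http-equiv="refresh" content="5; url=/status">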
That way, you can use the same code for both JavaScript-enabled and JavaScript-disabled clients, and your web app threads aren't blocked.
If you have a separate process, then you have a background job... so either you can have it or you can't...
What I have done is have the website write the request params to a database. Then a separate process, built with the daemons gem, looks for pending requests in the database, does the work, and writes the results back to the database.
The website then polls the database until the results are ready and then displays them.
Although I use JavaScript to make it do the polling.
If you really can't use JavaScript, then it seems you need to either do the work in the web request thread or make that thread wait for the background process to finish.
To make the web request thread wait, just loop inside it, checking the database until the reply is saved back into it. Once it's there, you can complete the request.
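As a rough sketch (the CalculationResult model and request_id are stand-ins for whatever you store):

# poll until the daemon writes the result back, giving up after a minute
result = nil
60.times do
  result = CalculationResult.find_by_request_id(request_id)
  break if result
  sleep 1
end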
HTH, chris
I am trying to spawn a thread in Rails. I am usually not comfortable using threads, as it requires in-depth knowledge of Rails' request/response cycle, yet I cannot avoid using one, as my request times out.
To avoid the timeout, I am using a thread within the request. My question is simple: the thread I've spawned accesses a params[] variable inside it, and things seem to work OK now. I want to know whether this is right. I'd be happy if someone could throw some light on using threads in Rails during the request/response cycle.
[Starting a bounty]
The short answer is yes, but only to a degree; the binding in which the thread was created will continue to persist. The params will still exist only as long as no one (including Rails) goes out of their way to modify or delete the params hash; instead, they rely on the garbage collector to clean up these objects.

Since the thread has access to the context in which it was created (called the "binding" in Ruby), all the variables reachable from that scope (effectively the entire program state when the thread was created) cannot be collected by the garbage collector. However, as execution continues in the main thread, the values of the variables in that context can be changed by the main thread, or even by the thread you created, if it can access them. This is both the benefit and the downfall of threads: they share memory with everything else.
You can emulate a very similar environment to Rails to test your problem, using a function as such: http://gist.github.com/637719. As you can see, the code still prints 5.
However, this is not the correct way to do this. The better way to pass data to a thread is to pass it to Thread.new, like so:
# Always dup objects when passing them into a thread; otherwise you really
# haven't done yourself any good, since it would still be the same memory.
Thread.new(params.dup) do |params|
puts params[:foo]
end
This way, you can be sure that any modifications to params will not affect your thread. Best practice is to use only data you pass to your thread in this way, or things the thread itself created. Relying on the state of the program outside the thread is dangerous.
So as you can see, there are good reasons this is not recommended. Multithreaded programming is hard, even in Ruby, and especially when you're dealing with as many libraries and dependencies as Rails uses. In fact, Ruby makes it deceptively easy, but it's basically a trap: the bugs you encounter will be very subtle, happen "randomly", and be very hard to debug. This is why background processing libraries like Resque and Delayed Job are so widely used and recommended for Rails apps, and I would recommend the same.
The question is more whether Rails keeps the request open while the thread is running than whether it persists the value.
It won't persist the value once the request ends, and I also wouldn't recommend holding the request open unless there is a real need. As other users have said, some things are just better done in a delayed job.
Having said that, we have used threading a couple of times to query multiple sources concurrently and actually reduce the response time of an app (one that was only for admins, so it didn't need fast response times). If memory serves correctly, the thread can keep the request open if you call join at the end and wait for each thread to finish before continuing.
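For illustration, that pattern looks roughly like this (NewsSource and PriceSource are made-up stand-ins for the actual sources):

# fire off both queries concurrently, then block until both finish
threads = [
  Thread.new { @news   = NewsSource.fetch },
  Thread.new { @prices = PriceSource.fetch }
]
# join keeps the request open until every thread is done
threads.each(&:join)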