Backgroundrb scheduled task ending - ruby-on-rails

I have a backgroundrb scheduled task that takes quite a long time to run. However, the process seems to end after only 2.5 minutes.
My background.yml file:
:schedules:
  :named_worker:
    :task_name:
      :trigger_args: 0 0 12 * * * *
      :data: input_data
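(For reference: if I recall the backgroundrb/rufus-scheduler convention correctly, :trigger_args takes a seconds-first, seven-field cron expression (sec min hour day month weekday year), so this schedule fires daily at 12:00:00.)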
I have zero activity on the server when the process is running. (Meaning I am the only one on the server watching the log files do their thing until the process suddenly stops.)
Any ideas?

There's not much information here that allows us to get to the bottom of the problem.
Because backgroundrb operates in the background, it can be quite hard to monitor/debug.
Here are some ideas I use:
Write a unit test for the worker code itself to make sure there are no problems there.
Put "puts" statements at multiple points in the code so you can at least see some output while the worker is running.
Wrap the entire worker in a begin..rescue..end block so that you can catch any errors that might be occurring and cutting the process short (see the sketch below).
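A minimal sketch of that last idea; the worker class, method, and task names here are hypothetical (not from the question), and it assumes the standard logger that backgroundrb's MetaWorker exposes:

# Hypothetical BackgroundRb worker illustrating the begin..rescue pattern.
class DataImportWorker < BackgrounDRb::MetaWorker
  set_worker_name :data_import_worker

  def import(input_data)
    logger.info "worker started"        # checkpoint in the log
    do_long_running_import(input_data)  # placeholder for the real task
    logger.info "worker finished"       # only reached if nothing raised
  rescue => e
    # Without this, an unhandled exception can end the task
    # with nothing at all in the log.
    logger.error "worker died: #{e.class}: #{e.message}"
    logger.error e.backtrace.join("\n")
    raise
  end
end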

Thanks Andrew. Those debugging tips helped, especially the begin..rescue..end block.
It was still a pain to debug, though. In the end it wasn't BackgroundRB cutting the task short after 2.5 minutes: a network connection was being opened but never closed properly. Once that was found and closed, everything worked great.

Related

Run Rufus-Scheduler with Ruby on Rails

I'm trying to make an application that serves as a REST API, which is related to information regarding X.
Simultaneously, I'd like to schedule some task to run from time to time so that it retrieves remote information and inserts it into the database.
It looked like a very attractive solution for a beginner like me, so I decided to use rufus-scheduler as in https://github.com/jmettraux/rufus-scheduler#so-rails.
First, I wrote some dummy code to test it out, and it appeared to work as intended.
First Try
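(The linked first try isn't reproduced here, but the README pattern it follows looks roughly like this; the fetch call and model are hypothetical stand-ins:)

# config/initializers/scheduler.rb
require 'rufus-scheduler'

scheduler = Rufus::Scheduler.singleton

scheduler.every '10m' do
  Rails.logger.info 'fetching remote information...'
  data = RemoteSource.fetch        # hypothetical remote call
  Record.create!(payload: data)    # hypothetical insert into the database
end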
The problem came after this, when I tried the same thing but with some real logic added: "some task to be run from time to time so that it retrieves remote information and inserts it into the database."
Here is where the issue begins: after the (usually) very first execution, the app doesn't do anything else; it even stops answering REST requests. Then, as soon as I press Ctrl-C, it immediately makes up for what it hasn't done: it runs the "pending" tasks, printing the first logger.info as if it were at the intended time (although the insertion into the database only happens after I press Ctrl-C), and it answers the REST requests.
After searching the internet I haven't found anything close to my problem, so I believe I have some misconfiguration, or maybe I'm not running things as intended.
EDIT: Turns out I'm stupid and was pausing the program's execution by selecting the terminal's text (which suspends a console program's output on Windows), as it was my first time developing on Windows.

Slow TTFB / Server Response time under ManagePipelineHandler Module

Every time I load the homepage of the project I'm working on, it takes a long time to load (6-8 seconds page load time). Checking with webpagetest.org, the initial request takes more than 5000 ms (5 seconds) for the TTFB.
Environment Set-up:
ASP.Net MVC
IIS 8.5
Upon thorough investigation of the IIS tracing logs, the long wait happens under the ManagePipelineHandler module, accounting for 7515 ms (7.5 seconds); it consistently happens around the AspNetPagePreInitEnter, AspNetPageInitEnter and AspNetPageInitLeave events. A sample from the IIS trace log is below:
[screenshot: IIS trace log showing where the hang happens]
I have searched for what a hang on these ASP.NET events means, but I can't find any concrete answer on what causes it or why it happens.
Any help is much appreciated.
First of all, you could try disabling anti-virus.
To figure out the root cause, it is recommended to capture dump files, because the long wait happens in ManagePipelineHandler.
We may need to capture 2-3 dump files, generating a dump every 2 seconds.
Then we can compare the managed call stacks; they will tell us what the thread is doing. We can only improve the performance once we know what it is doing.
If you are not sure how to analyze a dump file, you could try
https://www.microsoft.com/en-us/download/details.aspx?id=58210
If you know how to analyze a dump file, you could try the mex extension with WinDbg.
!mex.aspxpagesext will show all current requests and their threads.
Once we know the thread ID, we can use !mex.clrstack2 to show the managed call stack.
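If you don't already have a dump workflow, Sysinternals ProcDump can capture consecutive full dumps of the IIS worker process; a sketch (assuming the default w3wp.exe worker process, and that you reproduce the slow request while it runs):

procdump -ma -s 2 -n 3 w3wp C:\dumps

This writes three full memory dumps two seconds apart; each can then be opened in WinDbg and the call stacks compared across the snapshots.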

How to get the CPU time of Delphi program?

Problem I'm trying to solve: my program uses System.Win.ScktComp.TServerSocket to communicate with another local process via Ethernet. Between receiving a packet from the local process and sending a response, 100 ms elapse, which shouldn't take this long. I'm trying to step through my program with the debugger to see where that 100 ms is being spent.
The problem is that if I read the current time while in the debugger, it will obviously include the time spent paused in the debugger. Another problem is that the relevant part of my app is TTimer- and event-driven, so when a routine returns you're not sure which routine will be called next.
My attempt: I could forgo the debugger and print the current time everywhere, as in all the OnTimer procedures and other events.
Much better solution: step through with the debugger, reading the CPU time (which isn't affected by time spent paused in the debugger) here and there to pinpoint where that 100 ms is being lost.
I don't believe that you are tackling your problem the correct way, and have made that point in comments. Leaving that aside, the function that you are asking for is GetProcessTimes.
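A minimal sketch of calling it from Delphi, summing kernel and user time for the current process (the helper name is mine; FILETIME values are in 100-nanosecond units):

uses
  Winapi.Windows, System.SysUtils;

// CPU time (kernel + user) consumed so far by this process, in milliseconds.
function ProcessCpuTimeMs: UInt64;
var
  CreationTime, ExitTime, KernelTime, UserTime: TFileTime;
  Kernel, User: UInt64;
begin
  if not GetProcessTimes(GetCurrentProcess, CreationTime, ExitTime,
                         KernelTime, UserTime) then
    RaiseLastOSError;
  Kernel := (UInt64(KernelTime.dwHighDateTime) shl 32) or KernelTime.dwLowDateTime;
  User   := (UInt64(UserTime.dwHighDateTime) shl 32) or UserTime.dwLowDateTime;
  Result := (Kernel + User) div 10000; // convert 100 ns units to ms
end;

Reading this before and after a suspect stretch of code gives a clock that does not advance while the process sits paused at a breakpoint.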
I'm trying to ... see where that 100ms is being spent.
A debugger will not be able to tell you that very easily. You need to use a profiler instead, like AQTime or similar, and let it clock your code in real-time and report the results, such as how much time was spent in specific functions and class methods.

max_execution_time always 300

I am unable to change my max execution time. I have changed it in my php.ini file, but it still shows as 300 when I run phpinfo(), even though I've set it to 0 and even ridiculous amounts (9000000000). Is there a setting I'm missing? I've restarted Apache and rebooted the actual server, and I am still encountering this issue.
I ended up using the script on this post https://stackoverflow.com/a/7700253/815437 along with standard ini_set of memory limit and max execution time. This was not a solution to the underlying problem, but it did get me rolling for now.
I am still accepting answers on this problem though because I'm sure it'll be an issue in the future with some large databases I have to import.
Try adding this line of code:
set_time_limit(5);
This sets the max execution time to 5 seconds. Note that 0 is not a normal timeout: it means no limit at all, i.e. the script may run forever, which is usually not what you want, so use a positive number.
The best way to deal with execution time issues is to set the max execution time temporarily for that specific script. It's debatable, but it works fine as a balanced solution.
You can read more about it here:
http://php.net/manual/en/function.set-time-limit.php
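For the large database imports mentioned above, a per-script override is the usual pattern (the values here are illustrative, not from the question):

<?php
// Raise limits for this script only; other requests keep the php.ini defaults.
ini_set('memory_limit', '512M'); // imports may need extra memory
set_time_limit(600);             // allow up to 10 minutes for this run

// ... long-running import work ...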

ERLANG wait() and blocking

Does the following function block on its running core?
wait(Sec) ->
    receive
    after (1000 * Sec) -> ok
    end.
A great answer would detail the inner workings of Erlang and/or the CPU.
The process that executes that code will block; the scheduler that is currently running that process will not. The code you posted is equivalent to a yield, but with a timeout.
The Erlang VM scheduler for that core will continue to execute other processes until that timeout fires and that process will be scheduled for execution again.
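You can see this from the shell: spawn a process that sits in an empty receive with a timeout, and the shell (itself just another Erlang process) stays responsive. A sketch:

1> spawn(fun() -> receive after 5000 -> io:format("timeout fired~n") end end).
<0.85.0>
2> 1 + 1.  % the shell is not blocked while the spawned process waits
2
%% ...five seconds later, "timeout fired" is printed.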
Short answer: this blocks only the current (lightweight) process; it does not block the whole VM. For more details, read about the Erlang scheduler. A nice description comes from the book Erlang Programming by Francesco Cesarini and Simon Thompson.
...snip...
When a process is dispatched, it is assigned a number of reductions† it is allowed to execute, a number which is reduced for every operation executed. As soon as the process enters a receive clause where none of the messages matches or its reduction count reaches zero, it is preempted. As long as BIFs are not being executed, this strategy results in a fair (but not equal) allocation of execution time among the processes.
...snip...
Nothing Erlang-specific; this is a pretty classical situation: timeouts can only fire on a system clock interrupt. Same answer as above: that process is blocked waiting for the clock interrupt; everything else keeps working just fine.
There is a separate discussion about the actual time the process will wait, which is not perfectly precise because it depends on the clock period (and that is system dependent), but that's another topic.
