EventMachine chat server stops without error trace - ruby-on-rails

I have modified the SimpleChatServer example from EventMachine to work with a Rails app as a chat server. I initialize the chat server in a separate thread as follows:
Thread.new do
  puts "I entered a new thread"
  EventMachine.run do
    puts "EventMachine reactor started" # distinguish from the thread-entry message
    EventMachine.start_server("0.0.0.0", 3100, SimpleChatServer)
  end
end
I have hosted my app on a VPS running Apache and am using Phusion Passenger to serve the Rails app. The chat server works perfectly except for one problem: it stops automatically after a few minutes. When I check the error log I find nothing related to the shutdown. One interesting thing I have observed, however, is a pattern to the shutdowns: during daytime at my location (11 am - 5 pm; my timezone is 10 hours ahead of my server's), the chat server stops within a few minutes of starting. During nighttime on my side, though, it keeps running without shutting down. This strange behavior has me stumped.
My own assumption is that during my daytime the VPS has more load to handle and therefore kills the chat server thread. Can I somehow avoid that? I would also love to know if there is any other reason for this strange behavior. Please help me with this.
Addition: When I check my error log I see this:
"[ 2015-03-06 08:00:20.5859 25041/7f20f1439700 agents/HelperAgent/Main.cpp:722 ]: Disconnecting long-running connections for process 25069"
Here 25069 is the PID of my chat server. How can I avoid this? How can I instruct Linux never to kill my process?

A long while back I found the solution to this in a thread on GitHub. The process gets killed because Passenger shuts down idle application processes to save memory. To disable this and keep my process running, I needed to set the pool idle time to 0 in my Passenger configuration (the directive is PassengerPoolIdleTime under Apache, passenger_pool_idle_time under Nginx). Here is a link to the relevant documentation: https://www.phusionpassenger.com/documentation/Users%20guide%20Nginx.html#_configuring_phusion_passenger
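Since the question describes an Apache setup, a minimal sketch of that change might look like the following. Passenger's pool idle time is a server-wide setting, so it belongs in the global Apache configuration rather than inside a virtual host:

# Global Apache configuration: 0 tells Passenger never to shut down
# application processes just for being idle.
PassengerPoolIdleTime 0

The trade-off is memory: Passenger will now keep idle application processes alive indefinitely instead of reclaiming them.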

Related

What happens if an IIS application worker process hangs?

I am totally new to web programming... Now I am working on an already implemented ASP.NET MVC application deployed in IIS. The app is bound to an application pool that has only one worker process. At the moment I am trying to understand what happens if the worker process freezes/hangs due to an uncontrolled exception thrown by app code. Could someone explain this to me?
What we have observed is that when this happens, the application stops working correctly and we need to restart its application pool for the app to start working again. After observing this behavior, I have a doubt. In the application pool's advanced configuration, under Process Model, the ping maximum response time (seconds) is set to 90. As far as I know, when the application pool pings the worker process and it does not respond because it is hung, the worker process should terminate after 90 seconds. But it does not seem to terminate, because when this happens we need to restart the application pool for the app to work again. So why does the worker process not terminate in this case?
First off, you have "only" one worker process and should probably keep it that way. Web Gardening often causes more issues than it solves, particularly with .NET apps. Second, you say it freezes/hangs due to an "uncontrolled" (unhandled?) exception thrown by app code. Why do you think this is the case? Do you have an error page or something else indicating it's an exception? The "ping" check verifies that the process is still doing work, not necessarily that it is finishing requests. So from the perspective of WAS, IIS is still responding.
If you want to troubleshoot further, you could capture a memory dump with DebugDiag and run its automated analysis on it: https://support.microsoft.com/en-us/help/919792/how-to-use-the-debug-diagnostics-tool-to-troubleshoot-a-process-that-h

Phusion Passenger is killing my process?

As described here, I detect that I've been forked by Phusion Passenger and revive a background thread that aggregates data, which will eventually get packaged and sent to a remote server after a set amount of time. But sometimes, before the thread wakes up from its sleep, the process disappears, and (according to my log messages, which report the PID when the thread wakes up) I never hear from it again. Is there any way to control or prevent this?
You shouldn't be creating threads within a Passenger-hosted process. If Passenger doesn't think your process is busy servicing requests, it is free to shut the process down without warning. Background threads should be used only in the course of your request processing.
What you want is a background job processing facility like delayed_job to offload this; a sketch follows.
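As a hedged sketch of that approach with delayed_job (the ReportUploader class and its aggregate_data/send_to_remote methods are hypothetical stand-ins for whatever the background thread was doing):

# Hypothetical job object for delayed_job; a worker process started
# with `rake jobs:work` calls #perform outside Passenger's control.
class ReportUploader
  def perform
    data = aggregate_data   # hypothetical: gather what the thread was collecting
    send_to_remote(data)    # hypothetical: package and ship to the remote server
  end
end

# Enqueue from within normal request processing:
Delayed::Job.enqueue(ReportUploader.new)

Because the worker runs as its own process, Passenger's idle-process reaping never touches it.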

Ruby mod_passenger process timeout

A few Ruby apps I've worked with hang for a long time on slow calls, causing processes to back up on the machine and eventually requiring a reboot. Is there a quick and easy way in Passenger to limit the execution time for a single Apache call?
In PHP, if a process exceeds the max_execution_time setting in php.ini, it returns an error to Apache and the server keeps merrily plugging away.
I would take a look at fixing the application. Cutting off requests at the web server level is really a band-aid that doesn't address the core problem - which is request failures, one way or another. If the Ruby app is dependent on another service that is timing out, you can patch the code like this, using the timeout.rb library:
require 'timeout'

begin
  status = Timeout.timeout(5) do
    # Something that should be interrupted if it takes too much time...
  end
rescue Timeout::Error
  # Timed out: give up and return a graceful error response instead.
end

This will let the code "give up" and close out the request gracefully when needed.

What happens to a user request when a Mongrel thread locks up & gets restarted by monit?

I cannot find an answer to this anywhere I've looked, so I'm hoping someone can help.
We run a pack of 30 Mongrel servers and have just started using monit to identify locked threads and restart them.
My question is: what happens to the user's request that was being handled by the locked thread - in particular, what do they see in their browser?
I assume they get some sort of error?
Thanks.
If Mongrel is forcibly restarted, the user usually gets a "Connection reset by peer" message, or in some cases just a blank screen. If you want to test it, you can simulate it with an action that just runs an infinite loop, then kill the Mongrel running it.
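As a concrete Ruby version of that test (the controller and action names here are hypothetical):

# Hypothetical Rails controller used only to simulate a hung worker.
class DebugController < ApplicationController
  def hang
    # Spin forever so this Mongrel never finishes the request; monit
    # should then flag the process as locked and restart it.
    loop { sleep 1 }
  end
end

Hit the action in one browser tab, kill (or let monit restart) the Mongrel serving it, and watch what that tab displays.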

Rails keeps being rebooted in production Passenger

I'm running an application that kicks off a Rufus Scheduler process in an initializer. The application runs with Passenger in production, and I've noticed a couple of weird behaviors:
First, in order to restart the server and make sure the initializer gets run, you have to both touch tmp/restart.txt and load the app in a browser. At that point, the initializer fires. The horrible thing is that if you only do the touch, the processes scheduled by Rufus get reset and aren't rescheduled until you load the app in a browser.
This alone I can deal with. But this leads to the second problem: I'll notice that the scheduled process hasn't run, so I load a page and suddenly the log file is telling me that it's running the initializers as if I'd rebooted. So, at some point, Passenger is randomly rebooting as if I'd touched tmp/restart.txt and wiping out my scheduled processes.
I have an incredibly poor understanding of Passenger and Rails's integration, so I don't know whether this occasional rebooting is aberrant or all part of the architecture. Can anyone offer any wisdom on this situation?
What you describe is the way Passenger works. It spawns new instances of the application when traffic warrants them, and shuts them down after periods of inactivity to free resources.
You should read the Passenger documentation, particularly the Resource Control and Optimization section. There are settings which can prevent the application from being shut down by Passenger, if that is what you want.
Using the PassengerPoolIdleTime setting, you could keep at least one process running, but you'll almost certainly want Passenger to start up other instances of the app as necessary. This thread on the Rufus Scheduler Google Group mentions using lock files to prevent more than one process from starting the scheduler, which may be useful to you; a sketch follows.
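A common shape for that lock-file idea, as a hedged sketch (the file path is an assumption, and the scheduler calls are rufus-scheduler 3.x style; the Google Group thread may differ in its details):

# config/initializers/scheduler.rb (hypothetical): only the process that
# wins the exclusive, non-blocking flock starts the scheduler. The file
# handle lives in a constant so the lock is held for the process's life.
SCHEDULER_LOCK = File.open(Rails.root.join('tmp', 'scheduler.lock'),
                           File::RDWR | File::CREAT)
if SCHEDULER_LOCK.flock(File::LOCK_EX | File::LOCK_NB)
  require 'rufus-scheduler'
  scheduler = Rufus::Scheduler.new
  scheduler.every '10m' do
    # ... the work that keeps getting wiped out by restarts ...
  end
end

The lock releases automatically when the owning process dies, so the next Passenger instance to boot picks up scheduling duty.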
