I have a Clojure web application I am running on a free plan on Heroku. The app had been working well for roughly a month, but for the last 3 days the logs have been showing this error and the app is not working.
I am not able to reproduce this error locally, where everything starts up fine.
I tried restarting the app several times, deploying a new instance and fiddling around with JAVA_OPTS and JAVA_TOOL_OPTIONS, but nothing has helped and I am stuck with the same errors.
The whole code for the application is here. Does anyone have experience with this error and possible ways to work around it?
As the error message says, the app consumes more memory than Heroku allocated for it. Heroku lets you look at app metrics; the memory graph there might be useful for identifying the cause.
Heroku also has a dedicated guide for memory-related problems in JVM applications, java-memory-issues. You might find it useful.
Try setting your max heap size lower by running:
$ heroku config:set JAVA_TOOL_OPTIONS="-Xmx256m"
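You can then watch the dyno's memory warnings in the log stream to see whether the lower limit helps (a quick check; the grep filter is just one way to narrow the output):
$ heroku logs --tail | grep -i memory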
I found the culprit of the exceeded memory.
The command run by Heroku on startup was not using the jar file. What I had before in the Procfile was web: lein ring server-headless, and I changed it to execute the jar instead: web: java -jar target/<app-name>-standalone.jar.
Since I am using ring, I also have Heroku run lein ring uberjar instead of lein uberjar before startup: this is as easy as setting LEIN_BUILD_TASK='ring uberjar' as a global config var in Heroku.
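For reference, the two pieces together look roughly like this (a sketch; <app-name> stays whatever your project.clj actually produces). In the Procfile:
web: java -jar target/<app-name>-standalone.jar
And the config var:
$ heroku config:set LEIN_BUILD_TASK='ring uberjar'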
Related
Ok, I'm deploying my Rails app using Capistrano. I'm also using Puma. I've followed this tutorial to get it to work, although I'm using Debian rather than Ubuntu.
Everything works fine and I can deploy my app without issues. However, if my server crashes or restarts, the app doesn't restart itself, and the only way I've found to bring it back is to deploy it again with cap production deploy from my app on my local machine, which we can all agree is not ideal.
There's plenty of information on the web on how to deploy a Rails app with Passenger, which I'd rather avoid using due to the limited resources on the server. I've also found this tutorial, which seems to be a bit outdated.
Can someone please point me to an updated tutorial, or give some directions on how I could get my app to start/restart with the server?
Many thanks
EDIT
As per @mudasobwa's comments, I'm detailing the steps I've taken after reading this page:
I have copied the contents of https://github.com/puma/puma/blob/master/tools/jungle/init.d/puma into /etc/init.d/puma and made it executable. I've also copied the contents of https://github.com/puma/puma/blob/master/tools/jungle/init.d/run-puma into /usr/local/bin/run-puma and made it executable as well.
Lastly I've created a puma.conf file in /etc.
After that I've created the following directory: /path/to/app/tmp/puma and added these two files: pid and state. Note that I've also added the aforementioned folders into Capistrano's shared links structure.
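For clarity, here is roughly what the steps above amount to on the server (a sketch; paths are assumed, and I'm not sure whether the init script also has to be registered to run at boot, e.g. with update-rc.d):
$ sudo cp puma /etc/init.d/puma
$ sudo cp run-puma /usr/local/bin/run-puma
$ sudo chmod +x /etc/init.d/puma /usr/local/bin/run-puma
$ sudo touch /etc/puma.conf
$ mkdir -p /path/to/app/tmp/puma
$ touch /path/to/app/tmp/puma/pid /path/to/app/tmp/puma/state
$ sudo update-rc.d puma defaults    # assumption: needed on Debian so the script runs at boot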
After the above I've restarted my server and the App did not start as I expected.
What am I missing here?
Some of my apps on Heroku have no dynos anymore, although they previously worked fine:
heroku logs says No web processes running. My other applications are working well.
How do I fix it?
I was having the same problem. It really sucks; I was stuck on it for 3 hours or more. Eventually it was fixed just by deleting the whole Heroku app and then specifying the buildpack to use. You can do that from your terminal with heroku buildpacks:set heroku/php, or you can set it directly when creating the app, which is what I did, and it was fixed like that:
heroku create myapp --buildpack heroku/php
The main reason was a Python library that had been installed even though I wasn't using it, so Heroku found two buildpacks, Python and PHP, and used the Python one. Once I specified that I'm actually using PHP, everything was fine.
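If you want to check what Heroku detected before deleting anything, the CLI can list and reset the buildpacks on an existing app (a sketch; run it against your own app):
$ heroku buildpacks                 # list the buildpacks currently set on the app
$ heroku buildpacks:clear           # remove them all
$ heroku buildpacks:set heroku/php  # pin the one you actually want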
I have had a Rails 3 app deployed on Elastic Beanstalk for close to 2 years now. For the most part, I haven't had any issues; however, I recently upgraded to one of their new Ruby configurations (64bit Amazon Linux 2014.09 v1.0.9 running Ruby 2.1 (Passenger Standalone)) and I've been fighting an issue for several days where one or more Ruby processes will consume the CPU - to the point where my site becomes unresponsive. I was using a single m3.medium instance, but I've since moved to an m3.large, which only buys me some time to manually log into the EC2 instance and kill the runaway process(es). I would say this happens once or twice a day.
The only thing I had an issue with when moving to the new Ruby config was that I had to add the following to my .ebextensions folder so Nokogiri could install (w/bundle install)...
commands:
  build_nokogiri:
    command: "bundle config build.nokogiri --use-system-libraries"
I don't think this would cause these hanging processes, but I could be wrong. I also don't want to rule out something unrelated to the Elastic Beanstalk upgrade, but I can't think of any other significant change that would cause this problem. I realize this isn't a whole lot of information, but has anyone experienced anything similar to this? Does anyone have suggestions for tracing these processes to their root cause?
Thanks in advance!
Since you upgraded your Beanstalk configuration, I guess you also upgraded your Ruby/Rails version. This bumped up all gem versions. The performance issue probably originates from one of these changes (and not the hardware change).
So this brings us into the domain of RoR performance troubleshooting:
1. Check the Beanstalk logs for errors (see the sketch after this list). If you're lucky, you'll find a configuration issue this way. Give it an hour.
2. Assuming all is well there, try to set up the exact same versions on your localhost (Passenger + Ruby 2.1 + gem versions). If you're lucky, you will witness the same slowness and be able to debug.
3. If you'd like to shoot straight for production debugging, I suggest you install New Relic (or any other application monitoring tool) and then drill into the details of the slowness in its dashboard. I found it extremely useful.
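For step 1, a quick way to pull the logs is either through the Elastic Beanstalk console (Logs section of your environment) or with the eb command line tool (a sketch; assumes the eb CLI is installed and initialised against your environment):
$ eb logs    # fetch the tail of the environment's log files
$ eb ssh     # or SSH onto an instance and read the full logs directly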
I was able to resolve my runaway Ruby process issue by SSHing into my EC2 instance and installing/running gdb. Here's a link - http://isotope11.com/blog/getting-a-ruby-backtrace-from-gnu-debugger - with the steps I followed. I did have to run sudo yum install gdb first.
gdb uncovered an infinite loop in a section of my code that was looping through days in a date range.
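In case it helps anyone else, the gdb session boils down to something like this (a sketch from memory; note that rb_backtrace() prints the Ruby-level backtrace to the process's own stderr/log, not to the gdb console):
$ sudo yum install gdb
$ sudo gdb -p <pid-of-runaway-ruby-process>
(gdb) bt                   # C-level backtrace
(gdb) call rb_backtrace()  # Ruby-level backtrace, written to the process's stderr
(gdb) detach
(gdb) quit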
I'm running a Rails 2.3.3 application which is deployed with Passenger/mod_rails on ruby-enterprise-1.8.6-20090610 and Apache httpd.
The problem is that whenever I deploy our application, hundreds of httpd processes start dying. I'm getting this error:
[notice] child pid NNNNN exit signal Segmentation fault(11)
After a short period of time (10-20 min) those errors stop.
This problem started after migrating our database to a separate, dedicated machine, so I think it could be a problem with the MySQL connection pools and their management; however, I cannot pin it down.
Could anyone help me with this problem, or just give me a clue how to debug it more deeply? Thank you in advance.
Start by enabling core dumps on your server.
Then run the server until you get a core file, load it in a debugger to get a backtrace, and get an initial idea of where the server is core dumping.
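A rough outline of what that involves (a sketch; the Apache user, paths and service name depend on your distribution, and the ulimit has to apply to the environment that actually starts httpd):
$ ulimit -c unlimited                          # allow core files to be written
# in the Apache config: CoreDumpDirectory /tmp/apache-cores
$ sudo mkdir -p /tmp/apache-cores && sudo chown apache /tmp/apache-cores
$ sudo /etc/init.d/httpd restart
# after the next segfault:
$ gdb /usr/sbin/httpd /tmp/apache-cores/core.NNNNN
(gdb) bt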
I'm going through the same problem at the moment. Not with Rails though.
HTH
I'm having ruby instances from mod_rails go "rogue" -- these processes are no longer listed in passenger-status and utilize 100% cpu.
Other than installing god/monit to kill the instance, can anyone give me some advice on how to prevent this? I haven't been able to find anything in the logs that helps.
If you're using Linux, you can install the "strace" utility to see what the Ruby process is doing that's consuming all the CPU. That will give you a good low-level view. It should be available in your package manager. Then you can:
$ sudo strace -p 22710
Process 22710 attached - interrupt to quit
...lots of stuff...
(press Ctrl+C)
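If the raw output scrolls by too fast to be useful, a syscall summary gives a quicker picture (the -c flag is standard strace; press Ctrl+C to stop and print the table):
$ sudo strace -c -p 22710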
Then, if you want to stop the process in the middle and dump a stack trace, you can follow the guide on using GDB in Ruby at http://eigenclass.org/hiki.rb?ruby+live+process+introspection, specifically doing:
gdb --pid=(ruby process)
session-ruby
stdout_redirect
(in other terminal) tail -f /tmp/ruby_debug.(pid)
eval "caller"
You can also use the ruby-debug Gem to remotely connect to debug sockets you open up, described in http://duckpunching.com/passenger-mod_rails-for-development-now-with-debugger
There also seems to be a project on Github concerned with debugging Passenger instances that looks interesting, but the documentation is lacking:
http://github.com/ddollar/socket-debugger/tree/master
I had a Ruby process related to Phusion Passenger which consumed lots of CPU, even though it should have been idle.
The problem went away after I ran
date -s "`date`"
as suggested in this thread. (That was on Debian Squeeze)
Apparently, the problem was related to a leap second, and could affect many other applications like MySQL, Java, etc. More info in this thread on LKML.
We saw something similar to this with very long running SQL queries.
MySQL would kill the queries because they exceeded the long running limit and the thread never realized that the query was dead.
You may want to check the database logs.
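A couple of places to look if you suspect the same thing (a sketch; the slow-log path varies by distribution and my.cnf settings):
$ mysql -e "SHOW FULL PROCESSLIST"              # long-running queries and their state
$ sudo tail -f /var/log/mysql/mysql-slow.log    # slow query log, if enabled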
This is a recurring issue with Passenger. I've seen this problem many times while helping people who run Ruby on Rails with Passenger. I don't have a fix, but you might want to try this: http://www.modrails.com/documentation/Users%20guide%20Apache.html#debugging_frozen