Versions I use:
uWSGI: 2.0.19.1 (64bit)
OS: Linux-3.10.0-1062.4.1.el7.x86_64
I currently want to set up my vassal app with the uWSGI cheaper subsystem to handle the workers.
I decided to use the "spare2" algorithm, as explained in the uWSGI docs:
https://uwsgi-docs.readthedocs.io/en/latest/Cheaper.html?highlight=spare2#spare2-cheaper-algorithm
However, I get this message in my app log:
unable to find requested cheaper algorithm, falling back to spare
So I looked into my uWSGI installation with
uwsgi --cheaper-algos-list
*** uWSGI loaded cheaper algorithms ***
busyness
spare
backlog
manual
--- end of cheaper algorithms list ---
It seems there is no "spare2" algorithm. In the uWSGI docs and changelog I could not find any hint whether "spare2" was replaced or needs some special installation.
Question:
What happened to the "spare2" algorithm? Did I miss something in my uWSGI prerequisites? Do I have to download it as a plugin? Do I have to install the uWSGI cheaper algorithms separately?
Yeah, I ran into the same problem, debugging for hours why spare2 was behaving exactly like spare would, without noticing the log line saying that spare2 was unavailable.
Anyway, yes: the PyPI version of uwsgi is 2.0.x, while the documentation and the GitHub code in master are 2.1.x. From what I'm reading, this difference has been around for quite some time.
The author of spare2 kindly backported the plugin to 2.0.x: https://github.com/KLab/uwsgi-cheaper-spare2.
I'm inclined to use the built-in busyness, but then in 2.1.x the situation reverses: spare2 is built-in and busyness is a plugin.
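For reference, here is a minimal sketch of what the cheaper settings could look like in a vassal ini on 2.0.x with the built-in busyness algorithm (the worker counts and thresholds below are made-up placeholder values, not recommendations):

[uwsgi]
# cheaper subsystem using the built-in busyness algorithm (present in 2.0.x)
cheaper-algo = busyness
# absolute maximum number of workers
workers = 8
# minimum number of workers to keep alive
cheaper = 2
# workers started at boot
cheaper-initial = 4
# how many workers to spawn at a time when scaling up
cheaper-step = 1
# spawn more workers when average busyness goes above this percentage
cheaper-busyness-max = 50
# stop workers when average busyness drops below this percentage
cheaper-busyness-min = 25

If you install the backported plugin linked above, switching should mostly be a matter of changing cheaper-algo to spare2 (plus whatever plugin-loading option the plugin's README specifies).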
What would be the best way to profile a dataflow job if the scale does not permit doing so locally?
In the past, I tried using jstack to check what the Java threads are doing on the worker instances, but that doesn't seem to work anymore.
Of course I can use stopwatches and log the measured timing data, but I was hoping maybe there is a better way.
Update: The instructions here still seem to work, with the only difference that instead of installing Java with apt-get install openjdk-7-jdk, I had to download it from Oracle's site.
Thanks,
GB
As mentioned in the question, you can install jstack if you install the JDK.
We have a GitHub issue tracking the need for user-code profiling -- check there for progress.
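If you do end up going the jstack route manually, a rough sketch of the steps on a worker VM might look like this (assuming you can SSH into the instance, e.g. with gcloud compute ssh, and that there is a single worker JVM on it):

# find the worker JVM and dump its threads; jstack may need to run as the
# same user that owns the JVM process
PID=$(pgrep -f java | head -n 1)
sudo jstack "$PID" > /tmp/worker-threads.txt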
I have had a Rails 3 app deployed on Elastic Beanstalk for close to 2 years now. For the most part, I haven't had any issues; however, I recently upgraded to one of their new Ruby configurations (64bit Amazon Linux 2014.09 v1.0.9 running Ruby 2.1 (Passenger Standalone)) and I've been fighting an issue for several days where one or more Ruby processes will consume the CPU - to the point where my site becomes unresponsive. I was using a single m3.medium instance, but I've since moved to an m3.large, which only buys me some time to manually log into the EC2 instance and kill the runaway process(es). I would say this happens once or twice a day.
The only thing I had an issue with when moving to the new Ruby config was that I had to add the following to my .ebextensions folder so Nokogiri could install (with bundle install):
commands:
  build_nokogiri:
    command: "bundle config build.nokogiri --use-system-libraries"
I don't think this would cause these hanging processes, but I could be wrong. I also don't want to rule out something unrelated to the Elastic Beanstalk upgrade, but I can't think of any other significant change that would cause this problem. I realize this isn't a whole lot of information, but has anyone experienced anything similar to this? Anyone have suggestions for tracing these processes to their root cause?
Thanks in advance!
Since you upgraded your Beanstalk configuration, I guess you also upgraded the Ruby/Rails version. This bumped up all gem versions. The performance issue probably originates from one of these changes (and not the hardware change).
So this brings us into the domain of RoR performance troubleshooting:
1. Check the Beanstalk logs for errors. If you're lucky you'll find a configuration issue this way. Give it an hour.
2. Assuming all is well there, try to set up the exact same versions on your localhost (Passenger + Ruby 2.1 + gem versions). If you're lucky, you will witness the same slowness and be able to debug.
3. If you'd like to shoot straight for production debugging, I suggest you install New Relic (or any other application monitoring tool) and then drill into the details of the slowness in its dashboard (see the sketch below). I found it extremely useful.
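For point 3, a minimal sketch of wiring in New Relic (the gem is newrelic_rpm; the newrelic.yml with your license key comes from your New Relic account, and the deploy command depends on your setup):

# add the New Relic agent to the app
echo "gem 'newrelic_rpm'" >> Gemfile
bundle install
# drop the generated newrelic.yml into config/ and redeploy
# (eb deploy or git aws.push, depending on your tooling)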
I was able to resolve my runaway Ruby process issue by SSHing into my EC2 instance and installing/running gdb. Here's a link - http://isotope11.com/blog/getting-a-ruby-backtrace-from-gnu-debugger - with the steps I followed. I did have to run sudo yum install gdb first.
gdb uncovered an infinite loop in a section of my code that was looping through days in a date range.
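For anyone who doesn't want to dig through the linked post, the gist of that gdb approach is roughly this (a sketch only: rb_backtrace() is an MRI-internal function, and the backtrace is written to the process's stderr/log rather than shown inside gdb):

sudo yum install gdb
# attach to the spinning Ruby process (replace 1234 with the actual PID)
sudo gdb -p 1234
# then, inside gdb:
#   (gdb) call rb_backtrace()
#   (gdb) detach
#   (gdb) quit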
Can anyone point me to an article with the best way to quickly set up a Slicehost slice with Rails/Git from scratch?
Slicehost has a number of useful articles on how to set up Rails. These Capistrano recipes might also come in handy.
If you aren't an experienced Linux/Apache admin, you can follow a sequence of 6-8 of PickledOnion's posts: apt-get update, SSH, iptables, MySQL, Ruby, gems, Rails, Apache, mod_rails.
Here's the sequence for Ubuntu Intrepid.
Here's what I used for Hardy:
http://articles.slicehost.com/2008/4/25/ubuntu-hardy-setup-page-1
http://articles.slicehost.com/2008/4/25/ubuntu-hardy-setup-page-2
http://articles.slicehost.com/2009/2/2/ubuntu-intrepid-installing-mysql-with-rails-and-php-options
http://articles.slicehost.com/2008/4/25/ubuntu-hardy-installing-apache-and-php5
http://articles.slicehost.com/2008/4/28/ubuntu-hardy-apache-config-layout
http://articles.slicehost.com/2008/4/28/ubuntu-hardy-apache-configuration-1
http://articles.slicehost.com/2008/4/28/ubuntu-hardy-apache-configuration-2
http://articles.slicehost.com/2008/4/30/ubuntu-hardy-ruby-on-rails
(This is a good minimal sequence. I would recommend spending more time learning iptables, denyhosts, how to blacklist IPs, and how to summarize log files to lock the server down.)
I just did a Slicehost installation (Ubuntu Hardy/RoR):
Install Ruby
Thin installation (your RoR server)
Nginx installation (the web server/vhost/proxy)
Watch git tutorials here - they are great!
Thin/Nginx is very easy to set up compared to Apache/Mongrel, and uses less memory. Apache wins in some performance tests, but uses more memory.
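As a rough sketch of the Thin side (paths and the port are placeholders; nginx just needs a proxy_pass pointing at the same port):

sudo gem install thin
cd /var/www/myapp
# start Thin in production mode, daemonized, on a local port nginx can proxy to
thin start -e production -p 3000 -d
# in the nginx vhost: location / { proxy_pass http://127.0.0.1:3000; }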
Rails core team member Josh Peek has some Capistrano recipes for setting up and deploying to Slicehost.
I searched the site already but couldn't find any suitable information. As there is always some expert around, I'm sure one of you guys knows exactly what I'm searching for :-)
We're on a balanced system:
Machine 1: HAProxy load balancer
Machine 2 & 3: Apache mod_rails and (of course) our Rails applications
Back in the Mongrel days we were able to monitor all Mongrel processes using monit (or other monitoring tools).
Is there an easy and clever way to monitor Passenger processes with monit (or other tools), too? How can I dynamically get all PIDs of the running processes and pass them to the monitoring tool?
Matt
There are various options available. Here are some of them:
The passenger-status tool lets you inspect Passenger's internal status (see the sketch below)
FiveRuns Manage can monitor a Passenger installation
Scout can also monitor Passenger
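For the "get all PIDs dynamically" part, something along these lines can work as input for a monitoring script (a sketch only: the exact passenger-status output format differs between Passenger versions, so the grep pattern may need adjusting):

# pull the worker PIDs out of passenger-status output
sudo passenger-status | grep -oE 'PID[: ]+[0-9]+' | grep -oE '[0-9]+'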
I made a plugin which makes Passenger processes monitorable by Monit:
https://github.com/romanbsd/passenger_monit
It's a little ghetto, but run these commands:
watch passenger-status
watch passenger-memory-stats
then install and run htop
I did a quick search and I think I found the thing you're looking for. He uses a script which runs off passenger-status, as John Topley said.
http://blog.slowb.ro/2013/06/18/add-passenger-status-to-monitoring-on-zenoss/
I'm having Ruby instances from mod_rails go "rogue" -- these processes are no longer listed in passenger-status and utilize 100% CPU.
Other than installing god/monit to kill the instance, can anyone give me some advice on how to prevent this? I haven't been able to find anything in the logs that helps.
If you're using Linux, you can install the "strace" utility to see what the Ruby process is doing that's consuming all the CPU. That will give you a good low-level view. It should be available in your package manager. Then you can:
$ sudo strace -p 22710
Process 22710 attached - interrupt to quit
...lots of stuff...
(press Ctrl+C)
Then, if you want to stop the process in the middle and dump a stack trace, you can follow the guide on using GDB in Ruby at http://eigenclass.org/hiki.rb?ruby+live+process+introspection, specifically doing:
gdb --pid=(ruby process)
session-ruby
stdout_redirect
(in other terminal) tail -f /tmp/ruby_debug.(pid)
eval "caller"
You can also use the ruby-debug gem to remotely connect to debug sockets you open up, described in http://duckpunching.com/passenger-mod_rails-for-development-now-with-debugger
There also seems to be a project on GitHub concerned with debugging Passenger instances that looks interesting, but the documentation is lacking:
http://github.com/ddollar/socket-debugger/tree/master
I had a Ruby process related to Phusion Passenger which consumed lots of CPU, even though it should have been idle.
The problem went away after I ran
date -s "`date`"
as suggested in this thread. (That was on Debian Squeeze)
Apparently, the problem was related to a leap second, and could affect many other applications like MySQL, Java, etc. More info in this thread on LKML.
We saw something similar to this with very long running SQL queries.
MySQL would kill the queries because they exceeded the long running limit and the thread never realized that the query was dead.
You may want to check the database logs.
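A quick way to check for that from the shell (the Time column is in seconds; anything unexpectedly large is worth a closer look):

mysql -u root -p -e "SHOW FULL PROCESSLIST"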
This is a recurring issue with Passenger. I've seen this problem many times helping people who ran Ruby on Rails with Passenger. I don't have a fix, but you might want to try this: http://www.modrails.com/documentation/Users%20guide%20Apache.html#debugging_frozen