Difference between workers and processes in uwsgi

I am trying to be careful and understand every setting in my uWSGI configuration. I have seen a configuration example that has something like:
workers = 8
processes = 10
But the uWSGI docs do not seem to differentiate. Are they synonyms? If so, does that make this configuration incorrect?

workers and processes are indeed synonyms: they set exactly the same thing. (I'm sure you've seen the configuration option documentation for both of them.)
That configuration is therefore incorrect, and part of it may simply have no effect. (I just found out that having the cheaper option twice in one of my ini files meant neither was being picked up, but I don't know if the same applies to workers/processes.)
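For example, assuming ten workers is the value you actually want (a sketch, not taken from your config), the ini would just contain one of the two lines:

processes = 10

Writing workers = 10 instead would mean exactly the same thing; the point is to set the option only once.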

Related

uwsgi master and emperor modes and binary upgrade

I've been digging around for some time on this and have not been able to find any clear answers.
First.
What are the benefits of running uwsgi with both the --master and --emperor options?
I know it is recommended to always use --master, however I have not been able to find any benefit of using both over just using --emperor.
Second.
Is there a way to upgrade the binary of uwsgi "on the fly" in the same way you are able to upgrade nginx's binary?
It seems you have to do a complete stop and start of the uWSGI processes for them to pick up the new binary, during which requests stop being processed.
If these have been discussed previously, I apologize; any links to earlier discussions would be appreciated so I can take a look.
Thanks in advance.
Running the Emperor with the Master is only required if you want advanced logging or alarms for the Emperor itself; generally it is not needed. As for upgrades, every time you make a graceful reload of the stack you end up with binary patching, so the new binary is picked up: https://uwsgi-docs.readthedocs.org/en/latest/Management.html
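For reference, enabling both in the Emperor's own ini is just this (the vassal directory is a placeholder):

[uwsgi]
emperor = /etc/uwsgi/vassals
master = true

For the upgrade question, the Management page above describes the reload mechanisms: a graceful reload of the master (for example via SIGHUP or a configured touch-reload file) re-executes whatever uWSGI binary is installed at that point, which is the "binary patching" mentioned above.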

Carrierwave/Paperclip and Resque - Sharing across several computers

I am working on a Rails application that requires files to be uploaded to my server and then have Resque workers (running on several other computers) use those files to do some tasks. I have my workers all set up to do the task, but I can't seem to find a nice way to get the files from my host computer to my worker computers. I've tried Carrierwave (and looked at the documentation for Paperclip), but all I see is using S3, which I cannot use. My only idea is to store a string containing the URI where the file may be found so that the workers can download it and start working, but I'm not particularly fond of this idea. Does anyone have any suggestions on the best way to do this? Thank you!
Update
I should also note that the files that need to be shared are roughly 200 MB each.
Have you considered something like a Network File System instead of doing this inside your application?
Depending on what platform your workers and server run on, you have numerous options for sharing a filesystem (I assume you have a LAN running between them).
And even without a real LAN, sshfs could work too.
The upsides are obvious: your Ruby application only has to deal with a regular filesystem using FileUtils, and the heavy lifting of pushing data around is handled by much more reliable infrastructure.
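To make that concrete, here is a minimal sketch of the worker side once a shared mount is in place (the mount point /mnt/shared_uploads and the job arguments are assumptions, not from your app):

# Resque job that reads an upload from a shared (NFS/sshfs) mount
require 'fileutils'

class ProcessUpload
  @queue = :uploads

  def self.perform(relative_path)
    shared_root = '/mnt/shared_uploads'   # assumed to be the same mount point on every worker
    source = File.join(shared_root, relative_path)

    # copy to local scratch space so the heavy processing hits local disk,
    # not the shared mount
    work_dir = '/tmp/upload_work'
    FileUtils.mkdir_p(work_dir)
    local_copy = File.join(work_dir, File.basename(source))
    FileUtils.cp(source, local_copy)

    # ... do the real work on local_copy ...
  end
end

The uploader on the Rails side then only needs to write into the same shared directory and enqueue the relative path.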

Multiple database.yml but one application

The Setup
I have a Ruby on Rails application that I manage from a sysadmin perspective. This application is installed on a pool of load-balanced application servers, which are running Apache 2 and Passenger 3.0. The application files are stored on a ramdisk because IO on the application servers is ridiculously slow.
Database configuration
The backend database is stored on a pair of MySQL clusters (active/passive). Several clients use our application and each have a separate MySQL database.
Currently, we have X copies of the application (X being the number of clients). The only difference between the copies is the database.yml. Since we are using ramdisks, "disk" space is expensive, and I'm thinking that there has to be a better way to solve this problem.
Potential solutions
Ideally, I would like to be able to specify the database.yml in the Apache virtualhost, but that doesn't look possible with my current setup. The database.yml used would depend on the domain name accessed. If there is a way to do this, it would be really fantastic.
Another approach would be to make a whole lot of symbolic links instead of storing copies of the application. I guess that doesn't sound too bad but I don't really like this solution.
How would you approach this problem and solve it?
If you need more information, just ask and I'll be happy to answer. I'm not so sure whether this belongs to SF or SO but smells more SO to me.
As Mladen said, you can use RailsEnv. It's pretty much the ideal solution to your problem; it's intended to be used exactly that way. Just don't forget to set different PassengerAppGroupName values for each env, because Phusion Passenger normally identifies an application based on its path only. Also don't forget to create the config/environments/[env name].rb file.
Perhaps you can use the same database.yml, but with different environments? You should be able to set a different Rails environment per virtualhost using the RailsEnv parameter.
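A sketch of how the pieces could fit together (client names, adapter, paths and domains are placeholders, not from your setup): one database.yml with an entry per client, and one small virtualhost per domain that picks the matching environment.

# config/database.yml - one entry per client
client_a:
  adapter: mysql
  host: db-cluster.internal
  database: app_client_a

# Apache virtualhost for that client
<VirtualHost *:80>
  ServerName client-a.example.com
  DocumentRoot /srv/app/current/public
  RailsEnv client_a
  PassengerAppGroupName app_client_a
</VirtualHost>

Each client environment also needs its own config/environments/client_a.rb (it can simply load the production settings), and PassengerAppGroupName keeps Passenger from mixing processes of different clients even though they all share one application directory.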

Mongrel hangs after several hours

I'm running into a problem in a Rails application.
After some hours, the application seems to start hanging, and I wasn't able to find where the problem was. There was nothing relevant in the log files, but when I tried to load the URL from a browser nothing happened (as if Mongrel accepted the request but wasn't able to respond).
What do you think I can test to understand where the problem is?
I might get voted down for dodging the question, but I recently moved from nginx + mongrel to mod_rails and have been really impressed. Moving to a much simpler setup will undoubtedly save me headaches in the future.
It was a really easy transition, I'd highly recommend it.
Are you sure the problem is caused by Mongrel? Have you tried running your application under WEBrick?
There are a few things you can check, but since you say there's nothing in the logs to indicate an error, it sounds like you might be running into a bug in the log rotation feature of the Logger class, which causes Mongrel to lock up. Instead of relying on Logger to rotate your logs, consider using logrotate or some other external log rotation service.
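Something like this for logrotate, as a sketch (the log path and schedule are assumptions; adjust to your deployment):

/srv/myapp/shared/log/production.log {
  daily
  rotate 14
  compress
  missingok
  copytruncate
}

copytruncate lets logrotate truncate the file in place, so the mongrels keep writing to the same file handle and do not need to be restarted just for rotation.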
Does this happen at a set number of hours/days every time? How much RAM do you have?
I had this same problem. The couple of options I had narrowed it down to were both MySQL-adapter related. I was running on Red Hat Enterprise Linux 4 (or 5), and the app would hang after a given amount of idle time.
One suggested solution was to compile native MySQL bindings, I had been using the pure Ruby one.
The other was to set the timeout on the MySQL adapter higher than what the connection would idle out on. (I don't have the specific configuration recorded, but as I recall it was in environment.rb and it was some class variable in the mysql adapter.)
I don't recall if either of those solutions fixed it; we moved to Ubuntu shortly after that and haven't had a problem since.
Check the Mongrel FAQ:
http://mongrel.rubyforge.org/wiki/FAQ
From my experience, mongrel hangs when:
the log file got too big (hundreds of megabytes in size) - you have to set up log rotation;
the MySQL driver times out - you have to change the timeout settings of your MySQL driver by adding this to your environment.rb:
ActiveRecord::Base.verification_timeout = 14400
(this is further explained in the deployment section of the FAQ)
Unfortunately, Rails (and thereby Mongrel) using up too much memory over time and crashing is a known problem (50K+ Google hits for "Ruby, rails, crashing, memory"). The current Ruby interpreter sometimes simply fails to give memory back to the system: it may reuse the memory it has, but it won't give it up.
There are numerous schemes for monitoring, killing and restarting Mongrel instances in a production environment - for example (choosing at random): rails monitor. Until the problem is fixed more decisively, one of these may be your best bet.
We have experienced this same issue. First off, install the mongrel_proctitle gem
http://github.com/rtomayko/mongrel_proctitle/tree/master
This gem/plugin will allow you to view the Mongrel processes via "ps", so you can see when a Mongrel is hung. An issue we have seen with Mongrel is that it will happily accept connections and enqueue them, then wedge itself. This plugin will help you see when a Mongrel has become wedged, but then you must use another monitoring app to actually restart the wedged Mongrel, something like Monit or God.
You might also want to consider putting a more balanced reverse proxy in front of your Mongrels, something like HAProxy, instead of nginx, Apache or Lighttpd. With a setting of "maxconn 1" in HAProxy you can ensure that the queue is maintained by HAProxy rather than Mongrel. The other reverse proxies (nginx, Apache, Lighttpd) only do round-robin, which means that they can inadvertently load up your Mongrel queue.
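As a sketch, the relevant HAProxy backend section would look something like this (addresses and ports are placeholders):

backend mongrels
  balance roundrobin
  server mongrel_8000 127.0.0.1:8000 maxconn 1 check
  server mongrel_8001 127.0.0.1:8001 maxconn 1 check

With maxconn 1, HAProxy sends each Mongrel a single request at a time and queues the rest itself, so a wedged Mongrel only stalls its own request rather than a whole backlog.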
My personal choice is God as it is much more flexible.
tl;dr Install this gem plugin and keep an eye on your Mongrels. Try Apache+Phusion Passenger.

Big things to do when deploying a rails app

In the question "What little things do I need to do before deploying a rails application" I am getting a lot of answers that are bigger than "little things". So this question is slightly different.
What reasonably major steps do I need to take before deploying a rails application? In this case, I mean things which are going to take more than 5 minutes, and so need to be scheduled. For small one-line config changes, please use the little-things question.
Set up Capistrano to deploy
You'll want to learn Capistrano if you don't already know it, and use it to deploy your code in an automated way. This will involve setting up your shared directory and shared resources like database.yml.
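A minimal Capistrano 2-style config/deploy.rb sketch (repository URL, server names and paths are placeholders):

set :application, "myapp"
set :repository,  "git@example.com:myapp.git"
set :scm,         :git
set :deploy_to,   "/srv/myapp"
set :user,        "deploy"

role :web, "app1.example.com", "app2.example.com"
role :app, "app1.example.com", "app2.example.com"
role :db,  "db1.example.com", :primary => true

The shared/ directory Capistrano creates under deploy_to is where database.yml and other per-server resources live (see the note on passwords below).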
Install C-based MySQL gem
If you don't have all the required libs, this can take a little while, but less than 20 minutes.
Make sure you aren't vulnerable to common web application attacks
Session fixation, session hijacking, cross-site scripting, SQL injection (though you probably don't have to worry much about SQL injection). Be sure you use h() when outputting user-entered data in a view. Lots of good material online about this.
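A quick illustration of the escaping point (Rails 2-era ERB, where output is not escaped automatically; @comment is a placeholder model):

<%= @comment.body %>      <!-- unsafe: renders whatever HTML/JS the user submitted -->
<%= h(@comment.body) %>   <!-- safe: HTML-escapes the user-entered data -->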
Choose a server architecture
Nginx, Mongrel, FastCGI, CGI, Apache, Passenger: there is a lot to choose from. Think about how your app will be used and decide on the best architecture, then set it up.
Set up Exception Notifier or Exception Logger
You will want your app to warn you when it breaks. Set one of these tools up to track production exceptions. Note: Exception Notifier will warn you when routing errors occur (i.e. when people fat-finger URLs or script kiddies attack you), so think about what you want the framework to do when that happens and adjust accordingly.
Make sure all of your passwords are out of source control
If you have database.yml, mail.yml (if you use yaml_mail_config) or other sensitive files in source control, get them out of there, replace them with database.yml.example, and put them in the shared/ folder on your server.
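A common Capistrano 2-style way to do that (paths are placeholders): keep the real database.yml in shared/config on the server and symlink it into each release.

namespace :deploy do
  # link the server-side database.yml into the freshly checked-out release
  task :symlink_config, :roles => :app do
    run "ln -nfs #{shared_path}/config/database.yml #{release_path}/config/database.yml"
  end
end
after "deploy:update_code", "deploy:symlink_config"

The database.yml.example stays in source control so new developers know what the file should look like.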
Ensure that your DB is locked down
A lot of people forget to secure MySQL when setting up their new production Rails box. Don't be like them.
Make sure all of the little web files are in place
If you are planning to be listed in Google, generate a sitemap.xml file. If you are planning to use an .htaccess file for something, make sure it's there. If you need a robots.txt file to prevent certain areas of your site from being indexed, make one. If you want a good-looking 404 page, make sure it's set up correctly. If you want a "Be Right Back" page to be present when you deploy, make sure that you have a Capistrano maintenance file specified and Nginx or Apache knows how and when to redirect to it.
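For the "Be Right Back" part, a widely used nginx snippet (inside the server block; paths are placeholders) serves a static page whenever Capistrano's maintenance file exists:

if (-f $document_root/system/maintenance.html) {
  return 503;
}
error_page 503 @maintenance;
location @maintenance {
  rewrite ^(.*)$ /system/maintenance.html break;
}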
Get your SSL certs in place
If you are going to use SSL, make sure you get certificates that are valid on your production domain, and set them up.
Use some process monitoring
Sometimes your processes (mongrels in many cases) will die or other bad things will happen to them. For example a memory leak could cause the memory consumption to increase indefinitely or a process could start using all your CPU.
monit and god are both good choices to save you from this fate. They can also be set to hit a URL on your site and check for a 200 response code.
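A sketch of a monit entry for a single mongrel (pidfile, port and commands are placeholders; god's config is similar in spirit):

check process mongrel_8000 with pidfile /srv/myapp/shared/pids/mongrel.8000.pid
  start program = "/usr/bin/mongrel_rails start -d -e production -p 8000 -P /srv/myapp/shared/pids/mongrel.8000.pid -c /srv/myapp/current"
  stop program  = "/usr/bin/mongrel_rails stop -P /srv/myapp/shared/pids/mongrel.8000.pid -c /srv/myapp/current"
  if failed port 8000 protocol http request "/" then restart
  if totalmem > 110 MB for 4 cycles then restart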
Set up server monitoring
Some suggestions in this space: fiveruns, newrelic, scout
These tools will record detailed metrics on your servers and are invaluable when something goes wrong and you need to see what actually happened. They also give you real time information on server load.
If you have a cluster this kind of reporting is even more critical.
Backup
Write a script to periodically back up your database and any other assets that your users can upload. S3 might be a good choice for this.
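As a sketch (database name, paths and destination host are placeholders), a small script you can drop into cron:

#!/usr/bin/env ruby
# Nightly backup: dump the database, compress it, copy it off the host.
db   = 'myapp_production'
dump = "/var/backups/#{db}-#{Time.now.strftime('%Y%m%d')}.sql.gz"

system("mysqldump --single-transaction #{db} | gzip > #{dump}") or abort("dump failed")
system("rsync -a #{dump} backup@backup-host:/srv/backups/")     or abort("copy failed")

The same idea works with an S3 upload tool instead of rsync if S3 is your off-site target; the important part is that the dump ends up on a different machine.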
Choose a web server / load balancer
My preferred server is nginx, but the common pattern is to start with apache + mod_proxy_http.
