Running a Rails site: development vs production - ruby-on-rails

I'm learning Ruby on Rails. At the moment I'm just running my site locally with rails server in the OS X Terminal. What changes when a Rails site is run on a production box?
Is the site still started with rails server?
Any differences with how the db is setup?
Note: I'm running Rails 3.

A Rails app can be run in production by calling rails server -e production, although 99% of the time you'll be serving it with something like Passenger or Thin instead of WEBrick, which means there's a different command to start the server (thin start -e production, for instance).
This is a complicated question, but the best place to start learning about the differences is to look at the environment-specific config files. When Rails boots it loads the file that matches the requested environment, i.e. if you start it in development it loads your development.rb file, and if you're in production it loads the production.rb file. The differences between environments are mostly the result of the settings in these files.
Basically, if a Rails 3.1 app is in production mode, then by default it is not going to be compiling assets on the fly, and a lot of caching will be going on that isn't happening in development. Also, error messages will be logged but not rendered to the user; instead, the static error page from your public directory will be used.
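As a rough illustration, here is a sketch of the sort of defaults you would find in a generated Rails 3.1 config/environments/production.rb (the MyApp application name is just a placeholder):
# config/environments/production.rb -- a sketch of typical Rails 3.1 production defaults
MyApp::Application.configure do
  config.cache_classes = true                 # application code is cached, not reloaded per request
  config.consider_all_requests_local = false  # errors are logged; the static page in public/ is shown
  config.serve_static_assets = false          # the web server, not Rails, serves files under public/
  config.assets.compile = false               # assets are precompiled rather than compiled on the fly
  config.assets.digest = true                 # fingerprint asset filenames for long-lived caching
end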
To get more insight into this, I would suggest reading the relevant rails guides:
Rails Initialization Guide: http://guides.rubyonrails.org/initialization.html
Rails Configuration Guide: http://guides.rubyonrails.org/configuring.html

There are two contexts in which you can use the word "production" here. One of them is running the server in production mode, which you can do locally with:
RAILS_ENV=production ./script/server
The configuration for this is picked up from config/environments/production.rb. Try comparing this file with config/environments/development.rb; there are only subtle differences, like class caching. Development mode makes life easier by responding instantly to any changes you make. Also, two different databases will be used by default, namely yourproject_development and yourproject_production, depending on which of these modes you run the server in.
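For example, a typical config/database.yml might look something like this (the adapter and credentials are only illustrative; the separate database name per environment is the point):
# config/database.yml -- illustrative only
development:
  adapter: mysql2
  database: yourproject_development
  username: root
  password:

production:
  adapter: mysql2
  database: yourproject_production
  username: root
  password: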
On the other hand, deploying Rails to a production box is something different. You will need to pick your server carefully. You may have to deal with a deployment script, maybe Capistrano. You may also need a load balancer, such as a Netgear appliance. The database may also require deeper consideration: expected size, master/slave clustering, and so on.
Note: I have never used Rails 3. This answer is biased towards 2.3.x.

Related

Why is the WEBrick server faster in production mode than in development mode? (Rails)

I have been developing Ruby on Rails applications for a couple of months. I use the default WEBrick server to run them, and I have found that when I start WEBrick in development and production modes, the server is noticeably faster in production mode than in development mode.
Is there any specific reason behind that? Can anybody explain it to me?
In production mode, the server caches your code, which makes things quick. That's not the case in development mode (since you don't want to restart WEBrick every time you make a change): every request reloads the relevant code again, which takes a bit of time.
The biggest time-eater of all is the asset pipeline. In production, you get a compiled version of your assets (JavaScript and CSS) in maybe one or two requests. In development, you get them served separately, for debugging purposes (based on your environment settings, of course). And because a browser does not handle all requests simultaneously, some assets are only loaded after others have finished loading. You can watch this behaviour with, for example, Firebug's network console. That means: the more assets you have, the longer your page takes to load in development mode.
In dev mode classes are not cached, so Rails reloads all the classes each time you refresh. Also, asset compilation is not done in development (by default), so Rails reloads all the assets (CSS, JavaScript, etc.) each time you refresh.
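Those behaviours come straight from the environment file; a sketch of the relevant defaults in config/environments/development.rb (MyApp is a placeholder name) looks like this:
# config/environments/development.rb -- a sketch of the defaults behind the behaviour above
MyApp::Application.configure do
  config.cache_classes = false               # reload application classes on every request
  config.consider_all_requests_local = true  # render full error pages with backtraces
  config.assets.debug = true                 # Rails 3.1+: serve each JS/CSS file individually
end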
The difference is between two environments. In Rails there are several environments, and each has its own database configuration and Rails options.
You can use the Rails.env variable to make changes specific to a particular environment.
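For example (a minimal sketch):
# branch on the current environment at runtime
if Rails.env.production?
  # behaviour you only want on the production site
else
  # behaviour for development, test, etc.
end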
By default, the development environment disables caching and enables auto-reloading, while the production environment enables caching.
But if you want, you can make the production environment behave like development, or development behave like production.
You can also add new environments of your own.
Creating new Environment:
Assuming you want to create a hudson environment:
Create a new environment file in config/environments/hudson.rb.
You can start by cloning an existing one, for instance config/environments/test.rb (a sketch of such a file follows the steps below).
Add a new configuration block in config/database.yml for your environment.
That's all.
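A minimal config/environments/hudson.rb, cloned from test.rb as suggested above, might look roughly like this (the exact options depend on your Rails version; MyApp is a placeholder):
# config/environments/hudson.rb -- a sketch based on the generated test.rb
MyApp::Application.configure do
  config.cache_classes = true                      # no per-request reloading, as in test/production
  config.consider_all_requests_local = true        # still show full error reports on this internal box
  config.action_controller.perform_caching = false # keep controller caching off for repeatable builds
end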
Now you can start the server:
ruby script/server -e hudson
Run the console:
ruby script/console hudson
And so on.

Where can I find developer output from a Rails application deployed to TorqueBox

I have a JRuby on Rails application, which is usually deployed as a war to Tomcat. In development mode we use either WEBrick or Trinidad (usually the former). Now we are considering using TorqueBox.
I was able to deploy the app using TorqueBox, but I wonder where I can find the development logs (things like request/response details, executed SQL queries, etc.). I'm used to that stuff. JBoss's console, boot.log and server.log don't contain them, only TorqueBox-specific logging.
Thanks
When you are in your application's directory, use
$ less log/development.log
For any environment <env>:
$ less log/<env>.log
For following it (as it appends) use:
$ tail -f log/<env>.log

Want to develop rails site offline then move to server

Is there an issue with developing my site on my MacBook and then moving it to a server when done? Are there issues I need to plan ahead for, DB- or Ruby-related maybe? Dependencies, or something a server could have that's different from my dev environment and could cause a nightmare later? I'd rather develop it offline since it'd be faster and wouldn't require an internet connection, but in the past I've always done everything with live sites, so this would be a first, and I'm new to Ruby on Rails.
Developing locally and then deploying to your server(s) via something like Capistrano is standard practice.
It's a good idea to keep your development environment as close as possible to your production environment (Ruby versions, database versions, etc.). Bundler makes keeping your gems in sync easy.
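For instance, a minimal Gemfile that pins versions so both machines resolve the same gems might look like this (the versions shown are only illustrative):
# Gemfile -- pin the versions you actually develop against
source "https://rubygems.org"

gem "rails", "3.0.9"
gem "pg", "~> 0.11"   # use the same database adapter in development and production

group :production do
  gem "unicorn"       # server-only gems can be isolated in a group
end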
I used Heroku for some projects. The deployment was as easy as it could be. I just did a git push and it worked without problems... I really like bundler and rake :-)
Your question embodies THE way to develop in Rails. Your development environment is an offline representation of what your production site will be.
A quick workflow analysis for you could be:
rails new ~/my_app -d postgresql; cd ~/my_app; rm public/index.html
Next, create the database:
bundle exec rake db:create:all
Now that the db and app are all set up, let's set up your main pages:
bundle exec rails generate controller Site index about_us contact_us
Now you'll have something to see on the site, so run:
bundle exec rails server
This server acts as your offline connection and will handle the rendering of any text, images, HTML, etc. you want to serve in your Rails app. Now you can join in the debates of TDD, to TATFT or JITT, RSpec vs Test::Unit. Welcome.
Developing locally is definitely the way to go. However, I would look into getting it onto production as soon as possible and pushing often. This way you can see changes happen as you make them and stay aware of any possible breaking changes.
I use Heroku a lot, and when I start a new project I push it to Heroku almost immediately. While developing, I can publish new changes simply with git push heroku master. Everyone has to find their own workflow, but this has always worked well for me.
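The whole loop is only a few commands (sketched from memory here; the exact flags depend on your stack):
# from the root of a git-tracked Rails app
heroku create                 # provision a new Heroku app
git push heroku master        # deploy the current master branch
heroku run rake db:migrate    # run migrations (Cedar stack)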
If you are interested in Heroku here is a good link to get you started:
https://devcenter.heroku.com/articles/rails3

Serving static assets from S3

I am running a Rails 3.0.9 app on Heroku's Cedar stack and have S3 serving static assets. In my production.rb file, there was a config set to:
config.serve_static_assets = false
If I change this to true, will it serve cached content quicker, or should I leave it as is?
The answer from Rafal is not strictly correct, as it essentially comes down to what stack and what version of Rails you choose to run in your application.
With Rails 3.0 on the Bamboo stack there is a Varnish cache which sits in front of the Thin processes that Heroku run. This caches any static assets and returns them without hitting your application.
With Rails 3.0 on the Cedar stack there is no Varnish cache. Therefore all requests will be hitting your Rails process regardless of whether they are static or not.
With Rails 3.1, which should be on Cedar, Heroku will try to run rake assets:precompile as part of the slug compilation process. If this fails for any reason, it will inject some code into your slug, meaning that static assets are compiled and served at run time.
There is a Rails 3.1 document on the dev center which is particularly useful regarding this: http://devcenter.heroku.com/articles/cdn-asset-host-rails31
So, if you're looking for the correct setting, bear this in mind. However, before you do that, notice that Heroku will alter this setting as they see fit when you deploy, so any setting you do put in will be overwritten during slug compilation anyway. Therefore it doesn't really matter what you put in here.
(and for the record, Heroku uses Nginx, you just don't tend to see it)
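If you do want browsers to fetch assets from S3 rather than from your app, the usual approach is to point the asset host at your bucket in production (the bucket name below is hypothetical):
# config/environments/production.rb
config.action_controller.asset_host = "https://my-bucket.s3.amazonaws.com"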
From the Rails guide:
config.serve_static_assets configures Rails itself to serve static assets. Defaults to true, but in the production environment it is turned off, as the server software (e.g. Nginx or Apache) used to run the application should serve static assets instead. Unlike the default setting, set this to true when running (absolutely not recommended!) or testing your app in production mode using WEBrick. Otherwise you won't be able to use page caching, and requests for files that exist regularly under the public directory will hit your Rails app anyway.
Hope this helps.

How do I force Capistrano deployed app to use my development database?

I have an app that I'm deploying to a development server using Capistrano. I'd like to force this deployment to use the development database. So far the only way I've managed to do it is to make my production database info in database.yml equal to the development info. But this is a complete hack.
I've tried setting rails_env to development in deploy.rb but that hasn't worked.
Thoughts?
I ended up using the solution over here. Basically a recipe to replace a line in environment.rb after deploy but before restart.
The problem seems to be with DreamHost's Passenger config. It assumes you're running in production mode.
I'd use Capistrano Ext in order to define multiple deployment environments. I have used this in the past to deploy staging and production installations of my apps, so I think it'd work well for you.
Jamis Buck has a writeup if you'd like an overview on how to use it.
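A rough sketch of a multistage setup with capistrano-ext (server names and paths here are hypothetical):
# config/deploy.rb
require "capistrano/ext/multistage"
set :stages, %w(development staging production)
set :default_stage, "development"

# config/deploy/development.rb
server "dev.example.com", :app, :web, :db, :primary => true
set :rails_env, "development"
set :deploy_to, "/var/www/myapp_dev"
You then deploy with cap development deploy (or just cap deploy, given the default stage), and that stage file's settings take effect.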
