Although Rails and PHP have different deployment methods, what is the preferable way to distribute a FOSS Rails app? Suppose one of the major PHP apps - Magento, Drupal, Wordpress - had been built on RoR; what would have been the preferable way for them to distribute their application?
Packaging up the code as a gem seems to be the wrong approach for a complete out-of-the-box application, but I could be wrong.
Coming from the world of PHP with its upload-and-go approach, and being a newcomer to Rails, I find it rather opaque at the moment how code could be easily and effectively distributed.
Packaging a completed Rails app as a gem is probably the wrong approach. I think the best solution is to provide access to a git repository or a tarball of your git repo.
If you want to offer your users something more than rake db:schema:load to set up your app, it's pretty easy to create custom setup commands.
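For example, a small rake task can chain the standard steps into one command; the task name and steps below are only an illustrative sketch, not a convention:

    # lib/tasks/setup.rake -- hypothetical one-shot setup task
    namespace :app do
      desc "Create the database, load the schema and seed data"
      task :setup => :environment do
        Rake::Task["db:create"].invoke
        Rake::Task["db:schema:load"].invoke
        Rake::Task["db:seed"].invoke
        puts "Setup complete."
      end
    end

A user could then get a working install with nothing more than rake app:setup after fetching the code.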
Many applications are packaged with the source code, just like typical PHP applications. While deploying Rails applications may seem difficult, it's expected that the user will know how to set up the server properly according to their environment and needs. The only issue you need to worry about is distributing the code; setting up the server is not something you are going to want to help with.
For information on deployment in Rails you should see the deployment page here.
Well, Rails apps usually run in an environment with Apache + Passenger (aka mod_rails).
Deployment is easily done with the Capistrano gem.
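A minimal config/deploy.rb for Capistrano 2 might look roughly like this (the application name, repository and server are placeholders):

    # config/deploy.rb -- minimal Capistrano 2 sketch with placeholder values
    set :application, "myapp"
    set :repository,  "git@example.com:myapp.git"
    set :scm,         :git
    set :deploy_to,   "/var/www/myapp"

    role :web, "example.com"
    role :app, "example.com"
    role :db,  "example.com", :primary => true

    # Passenger is restarted by touching tmp/restart.txt after each deploy
    namespace :deploy do
      task :restart, :roles => :app do
        run "touch #{current_path}/tmp/restart.txt"
      end
    end

Running cap deploy would then check the code out on the server and restart the app.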
When you're running a Rails app in a shared hosting environment, it usually uses FCGI/CGI dispatchers to run Ruby.
Related
Is it possible to deploy a Ruby on Rails app over FTP?
If so, how do I run migrations on it?
My app also has a cron job. How do I set that up?
How do I deploy my website over FTP?
Are there any tutorials etc. available?
Technically it is possible to deploy by FTP, but the question is, why would you want to? It's a nightmare compared to a modern, automated deployment system. There are also serious security concerns, since FTP is not encrypted in any way and is extremely easy to crack. Using public Wi-Fi exposes you to the risk of your credentials being captured.
The traditional way to deploy a Rails application is with Capistrano which handles packaging up your application through your version control system and rolling it on to your production system.
If you're not using a version control system that's the first thing you need to fix. Hacking away on files randomly and throwing them to a server over FTP produces quick results but over time it makes it very difficult to get a consistent, tested, reliable build over to your target server.
Remember that Rails is not something that runs automatically the way .php files can; you'll need to use something like Passenger to handle launching your application.
If all this seems a bit convoluted, it's worth trying Heroku to get started. They have a very streamlined approach.
If I understand right what you are asking (is it possible to run a Ruby program using only FTP as a protocol), the answer is no, it is not possible. Ruby files are not static web content (HTML, JS, CSS) that is executed in the browser, which you could simply upload somewhere (for instance over FTP) and then access via the web. In the case of Ruby, apart from uploading the content you need to execute commands on the server (start the interpreter, run rake, etc.), and that is not possible over plain FTP.
Normally you would use an SSH channel to the deployment server to run the program after it has been uploaded. In that case uploading is possible via FTP, but also via its secure counterpart, SFTP (or SCP, to simply copy files between the local and remote machines).
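As a rough illustration with the net-ssh and net-scp gems (host, user, paths and commands are placeholders), you can copy files and then run the commands that plain FTP cannot:

    # upload_and_migrate.rb -- hedged sketch using the net-ssh / net-scp gems
    require 'net/ssh'
    require 'net/scp'

    host = "example.com"
    user = "deploy"

    # Copy the packaged application to the server (the secure analogue of an FTP upload)
    Net::SCP.upload!(host, user, "myapp.tar.gz", "/var/www/myapp.tar.gz")

    # Run the commands that FTP alone could never run
    Net::SSH.start(host, user) do |ssh|
      ssh.exec!("cd /var/www && tar xzf myapp.tar.gz")
      ssh.exec!("cd /var/www/myapp && rake db:migrate RAILS_ENV=production")
    end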
Hope it helps.
We have a website using Rails 2.3.x, bundler, nginx, passenger and git, and would now like to use the same code to deploy a very similar site. Differences between the two will include:
Locale
Databases
Validations in some cases
Views in some cases
What is the best way to manage these differences while using the same code base?
Some ideas we've had:
Create new Rails environments, such as production-a and production-b, and handle differences in the appropriate environment files (a sketch follows this list). One potential problem is that many gems and plugins are hardcoded to look for the production or development environments.
Use Passenger to set a global variable, or use the domain per request to determine which context to use. The problem with this is that rake tasks, cron jobs, etc. would not have access to this state.
Maintain two versions of the config directory. This would mean inconveniently maintaining two versions of all the config files, many of which would be identical. Also, I'm not sure how to leverage git to do this correctly.
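To give an idea of the first option, a custom environment file could mirror the usual production settings and add the site-specific overrides; the file name and values below are only a sketch:

    # config/environments/production_a.rb -- hypothetical environment for site A
    # Same caching behaviour as the regular production environment...
    config.cache_classes = true
    config.action_controller.perform_caching = true

    # ...plus the per-site differences (locale here; database.yml would
    # also need a matching "production_a" entry).
    config.i18n.default_locale = :fr

Anything that checks for Rails.env == 'production' would still need attention, as noted above.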
Any ideas, tips, or examples would be greatly appreciated! Question #6753275 is related but seems incomplete.
One solution I have used in a Rails 2.3.x project was to convert the entire site to an engine. That is actually pretty easy: create a folder under vendor/plugins/ and move all the app stuff there. You can see an explanation for Rails 2.3 here.
If needed you can even move all migrations and stuff there as well, and use a rake task to run those.
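A rough sketch of such a task for an engine living under vendor/plugins (the plugin name is a placeholder):

    # lib/tasks/engine_migrations.rake -- hypothetical task to migrate the shared engine
    namespace :db do
      namespace :migrate do
        desc "Run the migrations bundled with the shared engine"
        task :engine => :environment do
          ActiveRecord::Migrator.migrate("vendor/plugins/shared_engine/db/migrate/")
        end
      end
    end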
Everything that needs to be overridden can then just be placed in the actual Rails project using the engine. So you would have two Rails projects, each with its own configuration, locales and some local overrides, and one big shared plugin/engine.
We used git submodules to keep the code in sync over different projects.
In rails 3 this is even easier, since the engine can now just be a gem.
Hope this helps.
I've been spending the last few months developing a (my first) Rails application all by myself, just me and my Linux box, all in my development RAILS_ENV, no SCM ("for shame!") or anything. It has become quite the beast now and I am getting ready to release it onto the world. My question is: how am I ever going to make this work?
I installed gems, plugins, servers (MySQL, node.js, nginx, sphinx, juggernaut), photo compression apps that I call, video compression tools (FFMPEG), etc. I also obviously have a DB and (a lot of) seed data. I can't even remember all the things I did to my system to make it all work, but it does.
So now, when I deploy this on some stranger's server, how do I make sure that all those things get installed and configured correctly? How is, e.g., FFMPEG ever going to get installed on this server when I deploy my application? How will the seed data get uploaded, and how will the servers get started, with the right parameters, etc.?
I have read (a little bit) about Capistrano, which seems to be the deployment tool of choice in the Rails community, but I am not sure if that will cover all my needs. For example, how do I figure out all the gems or plugins I used (do I even need to know?)? Is there any way I can test the deployment on my own Linux box, the same one I am developing on, i.e. pretend that I am hosting my own production server/rails_env and "deploy" it there?
Any help will be much appreciated.
Cheers.
There are a lot of standards to follow that make life easier...
As far as figuring out which gems you need, you could try and use RVM and make a local config that you keep adding gems to until your app works. This will be kind of like starting from scratch so that you are sure to know precisely what configuration you need to run. (And it should make it easy to stand up a new, identical environment each time.)
The RVM route will allow you to test in a specific environment, which should help.
You can list the required gems in your environment.rb file so that the server demands them on start-up.
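On Rails 2.x without Bundler, that means config.gem declarations; the gem names below are just examples:

    # config/environment.rb -- example gem declarations (Rails 2.x style)
    Rails::Initializer.run do |config|
      config.gem "paperclip"
      config.gem "thinking-sphinx", :lib => "thinking_sphinx"
      config.gem "will_paginate"
    end

rake gems:install will then install anything that is missing, though it cannot help with system packages such as FFMPEG.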
Good luck, cowboy.
I've been using Wordpress for a while, installed on my own server, and one of the features I most love about it is how every time there is a new version it alerts you within the app's web-based admin, and with one click I can upgrade the app. I don't have to get anywhere near a console.
Personally I wouldn't mind updating manually, but I can see how this could significantly affect the adoption of a piece of software. I'm working on creating a full-featured ruby on rails forum software and I would love to figure out how to include this feature. Any ideas if this could be done with rails?
Could a Rails app modify its own files? If it did, would the server need to be restarted?
To complicate things further, what if the app was deployed from a repo. Could the rails app check in a commit of itself after updating?
Maybe packaging the core of the app as a gem would be simpler? Then the upgrade would not actually modify the Rails MVC stack (the Rails app would just be super-basic); instead, if the forum was all contained within a gem, all it has to do is trigger a gem update [name]. If this occurred, I don't think the Gemfile would even need to be updated. Would a server restart even be required to load the updated gem?
Ideas or feedback on any of this?
Rails files can be modified and even deleted in production - in my case the application keeps working unchanged, as all classes are cached in memory. It means the Rails instances must be restarted to pick up the changes.
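That caching behaviour comes from the standard production setting:

    # config/environments/production.rb
    # Classes are loaded once and kept in memory, so edits on disk are not
    # picked up until the application is restarted.
    config.cache_classes = true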
I suppose with WordPress (PHP) you just drop the application into a web directory to have it working immediately - same with updates: just overwrite the files and Apache picks them up immediately.
The problem in the case of Rails is that you don't know the target deployment architecture, so restarting the application may not be trivial. E.g. with Passenger I can just do touch tmp/restart.txt and then all instances are killed and started again. Some deployments may need an init.d script restart invocation.
Maybe you could recommend or prepare a ready-to-use deployment model which supports auto-update. In other cases users could do updates manually.
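As a very rough sketch of that idea for a Passenger deployment (the controller, gem name and route here are all hypothetical), the upgrade action could boil down to:

    # app/controllers/admin/upgrades_controller.rb -- hedged sketch, not a complete solution
    require 'fileutils'

    class Admin::UpgradesController < ApplicationController
      def create
        # Update the gem that contains the forum engine (hypothetical gem name)
        system("gem", "update", "my_forum_engine")

        # Ask Passenger to restart the application on the next request
        FileUtils.touch(File.join(Rails.root, "tmp", "restart.txt"))

        redirect_to admin_root_path, :notice => "Upgrade triggered"
      end
    end

A real implementation would also need to run any new migrations and restart background workers, which is where this gets harder than the WordPress case.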
I just looked at the Rails sources and found a folder named "dispatches". There are four files in it. I want to know the purpose of these files. I know I use these files on my production server, but I have never thought about their purpose. I know it is something about attaching a Rails app to an Apache server. On my production server the rails appname command adds these files to the public folder automatically. Can I set up this behavior on my development machine?
The rails dispatcher is the entry point for a rails application and is used to bootstrap the environment.
They have a long history and in a lot of ways they are almost obsolete. In days gone by rails apps used to be powered using cgi or fastcgi, which was a way for the webserver to communicate with a rails process. The boot process would be initiated by dispatch.fcgi or dispatch.cgi. Nowadays people are more likely to use apache/nginx+passenger or apache/nginx+mongrel/thin. (Does anyone still use lighttpd?)
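For reference, the generated dispatch.fcgi was only a few lines, roughly like this (reproduced from memory, so treat it as approximate):

    #!/usr/bin/env ruby
    # public/dispatch.fcgi -- boots the Rails environment and hands requests
    # from the web server's FastCGI interface to the Rails dispatcher
    require File.dirname(__FILE__) + "/../config/environment"
    require 'fcgi_handler'
    RailsFCGIHandler.process!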
I'm a little fuzzy on how dispatch.rb is used, but I think it's used by upstream rails servers such as mongrel/thin to bootstrap the rails process. However, now that rails is rack compatible I'm not entirely sure if that has changed.
You don't need to pay the dispatch.* files any attention.
I hope this helps.