When do I want to use Ruby on Rails submodules?

I like the idea of using submodules, but I am worried that I am leaving my code in someone else's hands. The main issue is that every time I deploy with Capistrano, a new copy of the submodule is checked out, since I am using:
set :git_enable_submodules, 1
So what happens if someone commits broken code? Then my app breaks on deploy.
Are submodules generally a bad idea unless you control the repository?
If so, is it common practice to just keep a copy of every plugin in your local repo and under your SCM?
Thanks!

Yes, you should keep local copies of everything that may be updated without warning (such as git submodules or svn externals). Take no risk when it comes to deployment on production!
Some even argue you should freeze Rails and all your pure-Ruby gems to the vendor directory as well, so that they only get updated when you want to. You avoid having to install all dependencies on every server you deploy to. This is slightly less relevant now that Rails makes it really easy to install all required gems with a simple rake task, though (rake gems:install).
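For reference, if you do go the freezing route on a Rails 2-era app, the relevant tasks look roughly like this (run them locally, then commit the vendor directory to your own repository):
rake gems:unpack          # copy your pure-Ruby gem dependencies into vendor/gems
rake rails:freeze:gems    # freeze the Rails framework itself into vendor/rails
git add vendor && git commit -m "Vendor gems and Rails"
After that, deploys use whatever is committed in vendor/, regardless of what upstream does.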

Related

Workflow for building Rails engines as gems

We are looking into building a very large Rails application and are considering using engines for better separation of "modules" out of the main app.
We started this process by creating a small engine using the motorhead gem (we want the idea behind its active_if component).
That engine was then removed from the main app, given its own git repository, and pushed to GitHub.
The main app was then able to pull the gem in via the Gemfile.
As a proof of concept this works, but it is not very efficient, and updating the new engine/gem is awkward because it behaves somewhat like a submodule. What is the proper workflow for building and maintaining engines/gems when building a modular app like this?
Thanks in advance
The most awkward part about deploying gems or engines as modules is the constant need to update. We had a lot of success with using:
bundle config local.my_gem ~/projects/my_gem/
It'll point to the Gem/Engine version on disk without modifying the Gemfile and Gemfile.lock.
To remove the local override run:
bundle config --delete local.my_gem
With this, you should be able to limit updates to the Gemfile.lock to deployment time.
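Note that the local override only applies to gems that the Gemfile pulls from a git source with a branch specified, so the entry would look something like this (gem name and URL are illustrative):
# Gemfile
gem 'my_gem', git: 'https://github.com/your-org/my_gem.git', branch: 'master'
Bundler then checks the branch of your local checkout against the Gemfile and uses the code on disk instead of cloning the remote.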

Capistrano 3 database migrations fail and the current symlink is not created

I've never worked with Capistrano before and currently I am fighting the urge to just scrap it and go back to my old manual ways.
As I understand it, Capistrano v3 does not create the initial database because its authors feel that is the duty of the DB administrator.
I must be missing something: I have followed their instructions, but the initial cap staging deploy fails when it gets to the rake db:migrate step because the database does not exist.
Because of the failure, the symlink for current -> releases never gets created.
Is it just generally accepted practice that we SSH into our boxes, cd into the first folder under releases, and manually run rake db:create...?
And then from there, am I supposed to just run cap staging deploy again so that it finishes creating the symlinks?
That seems hacky for something that is supposed to make things easier, and I am not sure whether I am understanding this correctly.
Thanks.
It does make sense to leave certain things out of a deployment: initial setup and routine deployments are very separate functions that require different specialties, or in large deployments even different skill sets. That said, I'm totally with you; on the first deploy, having to manually set up the database and certain files (specifically linked files like secrets.yml) is a step that just wastes my time.
I use this plugin:
https://github.com/capistrano-plugins/capistrano-postgresql
Just add require 'capistrano/postgresql' to your Capfile as you would for any plugin,
then run cap staging setup before the first time you run cap staging deploy.
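A rough sketch of the whole setup, following the plugin's documented usage (the require path and task names come from that repository's README):
# Gemfile
group :development do
  gem 'capistrano-postgresql'
end

# Capfile
require 'capistrano/postgresql'

# shell
cap staging setup     # runs the plugin's one-time setup (creates the database) before the first deploy
cap staging deploy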

Fixing GitHub Rails deployment

I'm just getting to know rails deployment so forgive me if this question sounds silly.
I have (on my local machine) created a Rails app, capified it, initialized git, pushed it (origin -> master to a private repo), and then cloned it on a VPS. However, after that I had to modify several files on the server due to the server's peculiarities, and now the repos obviously don't match. I want to start working with Capistrano, but as it is, I cannot do anything (I didn't even catch up with the changes on my local machine). So, I've got several questions.
a) What's the best way to go on about this? Can I delete the github repo, then create another one by pushing from the VPS and clone it on my local machine? If so, should I 'degit' (delete the .git folder) the repo on a server first? Or is the best way to copy the app directory to the local machine and then go through all this once again?
b) As far as I can tell, the only file that has to be different between the server and the development machine is the database.yml file. Do I have to add it to .gitignore, and if so, will it be deleted the next time I pull the changes from master?
c) If (at first) I push it to github from the VPS as origin, will I be able to change the role afterwards, for it to change every time I push changes to the master from my local machine?
d) Does it even make sense to use Capistrano if there's another way to pull the changes automatically (I've heard some people use commit hooks somehow)? Because at this point I just want to keep the app folder on the VPS up to date with what's on GitHub, and the Capfile and deploy.rb seem to offer far too many options. Not elegant in the slightest.
Thank you for your attention, have a nice day.
Sincerely yours,
Eugene.
For a), can't you get the combined diff as a patch file and apply it to the local code? Or just commit from the server? Then you will have the changes in the GitHub repo. (Ignore the changes that expose private information; use git add -p on the critical files.)
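For example, a rough way to bring the server-side edits back down (host and paths are illustrative):
# on the VPS: capture everything that differs from the last commit
git diff > server-changes.patch
# on your local machine: copy the patch down and apply it
scp deploy@your-vps:/var/www/app/server-changes.patch .
git apply server-changes.patch
git add -p    # stage hunks selectively, skipping anything private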
For b), the database.yml file is already set up to hold different configuration per environment (development/production). If you need something more, you can read environment variables (set on the production machine) using ERB:
Failing to access environment variables within `database.yml` file
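For example, a database.yml that reads its credentials from the environment might look like this (variable names are illustrative):
production:
  adapter: postgresql
  database: myapp_production
  username: <%= ENV["DB_USERNAME"] %>
  password: <%= ENV["DB_PASSWORD"] %>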
On the other hand, if you add database.yml to the ignored files, you'll need some other way to generate it on the server.
For c), the concern does not make sense to me. The VPS's repo will have a default remote configured, and so will your local repo; you can push, pull, and fetch from wherever you want.
For d), Capistrano is good enough; just keep in mind not to overload the configuration with lots of hooks etc., or it may become difficult to manage. Deploying through Capistrano takes a matter of seconds, so I wouldn't worry about it.

Can I update a single source file on Heroku without recompiling the slug?

I'm working on a rails project that's being hosted on Heroku.
I'm wondering if it's possible to update one file, without restarting the app.
Why? I have a bug, but I can't track it down. It works perfectly on my local system, but it seems to stop midway through processing on Heroku.
As there are no break points, I'm scattering status updates in the code. (to be removed later)
But adding one line of code to a Rails app on Heroku is like a five-minute process:
change file
stage file to git
commit file
push git (all the above is quite fast)
wait for Heroku to pull down the app and do what looks like a gem install, or at least a gem update
change a few files to reflect the local url
start up the service again.
Is there a way to push the git without running all those other things? Perhaps a special parameter to add to the push?
A further annoyance is that my git history now has a bunch of check-ins that I don't want my co-workers to see. I'm targeting my own non-production instance of Heroku (testing only), and there's no reason to include all these attempts in the shared source control.
There's a good reason as to why it's not possible. When you push to Heroku, they produce a 'slug' of your application (https://devcenter.heroku.com/articles/slug-compiler). To provide the massive scalability that Heroku provides this slug is read only so that it can be spun up on multiple dynos which are likely to be distributed across many different physical machines. Each of these dynos runs a separate instance of your application whilst the routing mesh ensures that requests to your application goes to the correct dynos.
Now consider what would occur if any of these instances were writeable. If you're running 5 dynos, you'd have your application running on 5 separate instances; if a file is written, how is it then distributed across all of your running dynos? Yes, Heroku could have considered some kind of shared file system for running applications out of, but that's complicated. By making the file system read only (https://devcenter.heroku.com/articles/read-only-filesystem) this problem is alleviated.
If you've built an app and deployed it to Heroku but forgotten to use S3-style persistent storage, your application will let you upload files to it (via Paperclip or the like in the Ruby world), but that uploaded asset will only exist on the dyno that received it and will then be lost when new code is deployed or the application is restarted, as the dyno receives the latest code from the slug.
If you're debugging against Heroku, don't forget you've got the usual git arsenal available, such as git commit --amend. Alternatively, work in a branch and deploy that directly to Heroku (git push heroku <yourbranchname>:master); then, when you've isolated the problem, rebase (http://git-scm.com/book/en/Git-Branching-Rebasing) your branch onto master, squashing any commits you no longer need.
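For instance, a throwaway-branch workflow might look like this (branch name is illustrative):
git checkout -b heroku-debugging
git commit -am "Add temporary status output"
git push heroku heroku-debugging:master         # deploys the branch as Heroku's master
# after a git commit --amend, force-push the rewritten branch:
git push --force heroku heroku-debugging:master
# once the bug is isolated, squash the noise before merging back:
git rebase -i master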
XY Problem
This is a classic XY Problem. X is your non-working code; Y is your search for a non-existent Git misfeature.
How Git Works
Git commits fundamentally work at the tree level, not the file level. As a gross over-simplification, a commit points to a tree, which points to a set of files. When you push a commit, you have to push all the objects related to that commit unless the objects already exist on the receiver.
How Heroku Works
Heroku compiles the application in your Git repository into a slug. While you can ignore certain files during compilation, you can't avoid compiling the slug. That's just the way the platform works.
This is not a problem if you have a reasonable slug size; my Heroku apps only take a couple of seconds to compile. If your slugs are very large, and therefore take a long time to compile (you claim it takes you 5+ minutes), then you have another XY problem on your hands if you're trying to solve for "don't compile."
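If slug size really is the bottleneck, Heroku's documented mechanism for keeping files out of the compile is a .slugignore file at the repository root; the patterns below are just examples:
*.psd
*.pdf
/spec
/doc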
Debugging on Heroku
Heroku has lots of features and add-ons to aid debugging. Here's a short list to get you started.
interactive console sessions
logging
Exceptional Add-On
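For example, from the Heroku command line (the app name is illustrative):
heroku run rails console -a your-app-name     # interactive console against the running release
heroku logs --tail -a your-app-name           # stream application and router logs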
See Also
https://devcenter.heroku.com/articles/read-only-filesystem
https://devcenter.heroku.com/articles/dynos#ephemeral_filesystem
It's not possible. The way Heroku is set up, every time you push up a change the server will restart.
Yes you can.
First, list all the files that have been modified since the last commit.
git status
Add the files you want to commit individually.
git add location/file_name.rb
git add location/file_name2.rb
...
Commit the files you added:
git commit -m "committing files one at a time or two at a time"
Now push
git push heroku

How should I deploy a patch to a Passenger-based production Rails application without downtime?

I have a Passenger-based production Rails application which has thousands of users. Occasionally we need to apply a code patch (we use git) and the current process for doing this (you can assume there are no data migrations) is:
Perform git pull origin [production-branch-name] on the server
touch tmp/restart.txt to restart Passenger
This allows us to patch the server without having to resort to putting up a maintenance page, which is great, but it doesn't feel quite right since it's not actually a proper 'deployment': we still need to manually update the revision file, and our deployment doesn't appear in the Hoptoad or New Relic services we use.
Ideally I would run cap production deploy and just let the standard Capistrano deployment script take care of everything, but is this a dangerous thing to do without putting up a maintenance page? This deployment process seems to be fairly safe in that the new revision is deployed to a completely separate folder and only right at the end of the process is a symlink re-created to switch the currently deployed version, but I'm still fairly paranoid about this somehow resulting in a lost or failed request.
No problems here doing cap production deploy. If the deployment fails, the previous release is still good; nothing will fail because the old release is loaded (cached) in the current Passenger process. The touch tmp/restart.txt will pick up the new release and all is good in the world.
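For reference, the usual Capistrano 2-era Passenger restart task is a small override like this (a sketch; it assumes the standard current_path variable):
# config/deploy.rb
namespace :deploy do
  task :restart, :roles => :app, :except => { :no_release => true } do
    run "touch #{current_path}/tmp/restart.txt"
  end
end
So cap production deploy ends by touching restart.txt for you, exactly like the manual process, just after the current symlink has been switched to the new release.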
