Fixing GitHub Rails deployment - ruby-on-rails

I'm just getting to know rails deployment so forgive me if this question sounds silly.
I have (on my local machine) created a Rails app, capified it, initialized Git, pushed it (origin -> master, to a private repo), and then cloned it on a VPS. However, after that I had to modify several files on the server due to the server's peculiarities, and now the repos obviously don't match. I want to start working with Capistrano, but as it is, I cannot do anything (I haven't even caught the server's changes up on my local machine). So, I've got several questions.
a) What's the best way to go about this? Can I delete the GitHub repo, then create another one by pushing from the VPS, and clone it on my local machine? If so, should I 'degit' (delete the .git folder) the repo on the server first? Or is the best way to copy the app directory to the local machine and then go through all this once again?
b) As far as I understand, the only file that has to differ between the server and the development machine is database.yml. Do I have to add it to .gitignore, and if so, will it be deleted the next time I pull the changes from master?
c) If (at first) I push it to GitHub from the VPS as origin, will I be able to swap the roles afterwards, so that the VPS gets updated every time I push changes to master from my local machine?
d) Does it even make sense to use Capistrano if there's another way to pull the changes automatically (I've heard some people are somehow using commit hooks)? Because at this point in time I just want to keep the app folder on the VPS up-to-date with what's on github and the capfile and deploy.rb seem to be offering far too many options. Not elegant in the slightest bit.
Thank you for your attention, have a nice day.
Sincerely yours,
Eugene.

For a), can't you get the combined diff as a diff file and apply it to the local code? Or just commit from there? Then you will have the changes in the GitHub repo. (Ignore the changes that expose private information; use git add -p on the critical files.)
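A minimal sketch of that workflow, assuming both checkouts share the same base commit (file names are placeholders). On the VPS:

git diff > server-changes.diff

Then on your local machine, after copying the file over:

git apply server-changes.diff
git add -p    # stage the changes hunk by hunk, skipping anything private
git commit -m "incorporate server-side fixes"
git push origin master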
For b), the database.yml file is already split into a separate configuration per environment (development/test/production). If you need something more, you can read environment variables (set on the production machine) using ERB:
Failing to access environment variables within `database.yml` file
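For example, a sketch of a production section that reads its credentials from the environment (the variable, adapter and database names here are placeholders):

production:
  adapter: postgresql
  database: myapp_production
  username: <%= ENV["MYAPP_DB_USER"] %>
  password: <%= ENV["MYAPP_DB_PASSWORD"] %>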
On the other hand, if you add database.yml to the ignored files, you'll need some other way to generate it.
c) does not quite make sense to me. The VPS's repo will have a default remote configured, and so will your local repo; you can push, pull, and fetch from wherever you want.
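If you ever need to repoint a clone at a different remote, something like this works (the URL is a placeholder):

git remote set-url origin git@github.com:you/yourapp.git
git remote -v    # verify the new remote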
d) Capistrano is good enough; just keep in mind not to overload the configuration with lots of hooks etc., as it may become difficult to manage. Deploying through Capistrano is a matter of seconds, so I wouldn't worry about it.

Related

Rails: Capistrano change database.yml to database.yml.example causes error

When I deployed a new app to nginx using Capistrano, I followed a tutorial and did git mv database.yml database.yml.example and git mv secrets.yml secrets.yml.example, then created a new database.yml file on the remote server. But now when I want to run the app on my local machine, it shows me an error:
No such file - ["config/database.yml"]
because there is no database.yml in my local repo.
Can I create a new and empty database.yml to fix this?
The guide just tells you that storing database credentials in a repository is bad practice and you shouldn't do it, but that doesn't mean you don't need to have these files at all. Your application still needs them, so you definitely need to create the file; just don't store it in the main repo with the code. This security-critical information is better stored wherever you keep the rest of your authentication data: a separate repository for credentials, a key/password store, or whatever place you choose for such critical information.
PS Of course, if you're just learning it's not a big deal, and you COULD keep your "root-123" credentials in the repository, but it's better to develop the right habit from the beginning, or at least understand why it should be separated.
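In practice, the usual fix for the error above is to generate the real files from the committed examples on each machine (assuming the .example files contain a valid skeleton):

cp config/database.yml.example config/database.yml
cp config/secrets.yml.example config/secrets.yml

Then fill in the local credentials, and make sure config/database.yml and config/secrets.yml are listed in .gitignore so they never get committed.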

From manual pull on server to Capistrano

I've always deployed my apps through SSH by manually logging in and running git pull origin master, running migrations and pre-compiling assets.
Now I've started to get more interested in Capistrano, so I gave it a try. I set up a recipe with the repository pointing to GitHub and deploy_to set to /home/myusername/apps/greatapp.
The current app on the server is already hooked up with Git too, so I didn't know why I had to specify the GitHub URL in the recipe again, but I ran cap deploy, which was successful.
The changes didn't apply, so out of curiosity I browsed to the app folder on the server and found that Capistrano had created folders: shared, releases and current. The latter contained the app, so now I have two copies: one in /home/myusername/apps/greatapp and another in /home/myusername/apps/greatapp/current.
Is this how it should be? Do I have to migrate user uploads to current and destroy the old app?
Does Capistrano pull the repo on my localhost and then upload it through SSH, or run the pull on the server? In other words, can someone outline how the deployment works?
Does Capistrano run precompile:assets?
/releases/ is for previous versions, in case you want to do cap deploy:rollback.
/current/ as you rightly pointed out is for the current version of your app.
/shared/ is for files and folders that you want to persist between deployments, they typically get symlinked to your /current/ folder as part of your recipe.
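A sketch of what that symlinking typically looks like in a Capistrano 2-style recipe (the paths are examples, not something your recipe necessarily contains):

after "deploy:finalize_update", "deploy:symlink_shared"

namespace :deploy do
  task :symlink_shared do
    # link persistent files/folders from shared/ into the new release
    run "ln -nfs #{shared_path}/config/database.yml #{release_path}/config/database.yml"
    run "ln -nfs #{shared_path}/system #{release_path}/public/system"
  end
end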
Capistrano connects to your server in a shell and then executes the git commands on the server.
Capistrano should automatically put anything in public/system (the rails convention for stored user-uploaded files) into the shared directory, and set up the necessary symlinks.
If you put in the github url, it actually fetches from your github repo. Read https://help.github.com/articles/deploying-with-capistrano for more info.
It does, by default.
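In Capistrano 2.x that behaviour comes from the asset-pipeline tasks; if I remember the generated Capfile correctly, capify includes this line for asset-pipeline apps (uncomment it if yours left it commented out):

load 'deploy/assets'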

Can I update a single source file on Heroku without recompiling the slug?

I'm working on a rails project that's being hosted on Heroku.
I'm wondering if it's possible to update one file, without restarting the app.
Why? I have a bug, but I can't track it down. It works perfectly on my local system, but it seems to stop midway through processing on Heroku.
As there are no breakpoints, I'm scattering status updates in the code (to be removed later).
But adding one line of code to a Rails app is like a five-minute process:
change file
stage file to git
commit file
push git (all the above is quite fast)
wait for Heroku to pull down the app and do what looks like a gem install, or at least a gem update.
change a few files to reflect the local url
start up the service again.
Is there a way to push the git without running all those other things? Perhaps a special parameter to add to the push?
A further annoyance is that my Git history now has a bunch of check-ins that I don't want my co-workers to see. I'm targeting my own non-production instance of Heroku (testing only) and there's no reason to include all these attempts in the global source control.
There's a good reason as to why it's not possible. When you push to Heroku, they produce a 'slug' of your application (https://devcenter.heroku.com/articles/slug-compiler). To provide the massive scalability that Heroku offers, this slug is read only so that it can be spun up on multiple dynos, which are likely to be distributed across many different physical machines. Each of these dynos runs a separate instance of your application, whilst the routing mesh ensures that requests to your application go to the correct dynos.
Now consider what would occur if any of these instances were writeable: if you're running 5 dynos, you'd have your application running on 5 separate instances, and if a file is written, how is it then distributed across all of your running dynos? Yes, Heroku could have considered some kind of shared file system for running applications out of, but that's complicated. By making the file system read only (https://devcenter.heroku.com/articles/read-only-filesystem) this problem is alleviated.
If you've built an app and deployed it to Heroku but forgotten to use S3-type persistent storage, your application will let you upload files to it (via Paperclip or the like in the Ruby world), but that uploaded asset will only exist on the dyno that received it and will then be lost when new code is deployed or the application is restarted, as the dyno receives the latest code from the slug.
If you're debugging against Heroku don't forget you've got the usual git arsenal available, git commit --amend. Alternatively work in a branch and deploy that to directly to Heroku (git push heroku <yourbranchname>:master) then when you've isolated the problem rebase (http://git-scm.com/book/en/Git-Branching-Rebasing) your branch onto master squashing any commits you no longer need.
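A sketch of that branch-based flow (the branch name is just an example):

git checkout -b heroku-debug
git commit -am "add tracing output"      # repeat as you narrow the bug down
git push heroku heroku-debug:master      # deploy the branch to Heroku
git rebase -i master                     # afterwards, squash or drop the debug commits
git checkout master && git merge heroku-debug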
XY Problem
This is a classic XY Problem. X is your non-working code; Y is your search for a non-existent Git misfeature.
How Git Works
Git commits fundamentally work at the tree level, not the file level. As a gross over-simplification, a commit points to a tree, which points to a set of files. When you push a commit, you have to push all the objects related to that commit unless the objects already exist on the receiver.
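You can see this structure directly with Git's plumbing (run in any repository):

git cat-file -p HEAD          # shows the "tree" line the commit points to
git cat-file -p HEAD^{tree}   # lists the files and sub-trees that tree points to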
How Heroku Works
Heroku compiles the application in your Git repository into a slug. While you can ignore certain files during compilation, you can't avoid compiling the slug. That's just the way the platform works.
This is not a problem if you have a reasonable slug size; my Heroku apps only take a couple of seconds to compile. If your slugs are very large, and therefore take a long time to compile (you claim it takes you 5+ minutes), then you have another XY problem on your hands if you're trying to solve for "don't compile."
Debugging on Heroku
Heroku has lots of features and add-ons to aid debugging. Here's a short list to get you started.
interactive console sessions
logging
Exceptional Add-On
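With the Heroku CLI those look roughly like this (add-on names and CLI subcommands change over time, so treat these as era-appropriate examples):

heroku run rails console        # interactive console session on a one-off dyno
heroku logs --tail              # stream the application logs
heroku addons:add exceptional   # install the Exceptional add-on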
See Also
https://devcenter.heroku.com/articles/read-only-filesystem
https://devcenter.heroku.com/articles/dynos#ephemeral_filesystem
It's not possible. The way Heroku is set up, every time you push up a change the server will restart.
Yes you can.
First, list all the files that have been modified since the last commit.
git status
Add the files you want to commit individually.
git add location/file_name.rb
git add location/file_name2.rb
...
Commit the files you added for the push.
git commit -m "committing files one at a time or two at a time"
Now push
git push heroku

How to push a Git update for my Rails app on DotCloud.com without losing the SQLite prod db

This could be a noob problem but I couldn't find a solution so far.
I'm developing a Rails app locally that uses SQLite. I've set up a local Git repo, and the dotcloud push command is using it. Locally I use the dev environment, and on DotCloud it automatically uses the prod env, which is great. The problem is that each time I do a push my prod db on DotCloud gets lost, no matter how minor the changes to the codebase are, and I have to run 'rake db:migrate' to set it up again. I don't have a prod db locally, only the dev and test dbs.
Put your DB in ~/data/ as described here and create a symbolic link at deploy time:
ln -s ~/data/production.sqlite3 ~/current/db/production.sqlite3
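One way to automate that, assuming DotCloud runs a postinstall script from the app root on each push (check their docs for your service type; the paths follow the answer above):

#!/bin/sh
# postinstall: keep the SQLite db outside the code tree and relink it on every deploy
mkdir -p ~/data
touch ~/data/production.sqlite3
ln -sf ~/data/production.sqlite3 ~/current/db/production.sqlite3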
You should not have your SQLite database file in version control. If you had multiple developers it would conflict every single time somebody merges the latest changes. And as you've noticed, it will also be pushed up to production.
You should add the db file to .gitignore. If it's already in version control, you'll probably have to git rm the file first.
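Something along these lines, run from the repo root (the path assumes the default Rails layout):

echo "db/*.sqlite3" >> .gitignore
git rm --cached db/production.sqlite3   # stop tracking, but keep the file on disk
git commit -m "stop tracking SQLite databases"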
The problem is that every time you deploy, the old version of your deployed app is wiped and replaced with the new code, and your SQLite db usually sits within your app files. I'm not a DotCloud user, so I don't know if it works there, but you could try to set up a shared folder on the server, outside of your Rails app, where you put the production database.
Not really sure how Git is set up on DotCloud.com, but I'm assuming there is a bare repo that you push to and another repo that pulls from the bare one when a suitable Git hook has been executed. You need to find out if you can configure that last pull to use the ours merge strategy.

When do I want to use Ruby On Rails submodules?

I like the idea of using submodules, but I am worried that I am leaving my code in someone else's hands. The main issue is that every time I deploy with Capistrano, a new copy of the submodule is checked out, since I am using:
set :git_enable_submodules, 1
So what happens if someone commits broken code? Then my app breaks on deploy.
Are submodules generally a bad idea unless you control the repository?
If so, is it common practice to just keep a copy of every plugin in your local repo and under your SCM?
Thanks!
Yes, you should keep local copies of everything that may be updated without warning (such as git submodules or svn externals). Take no risk when it comes to deployment on production!
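A common way to do that is to fork the upstream repository and point the submodule at your fork, so nothing changes unless you pull it in yourself (the URL and path are placeholders):

git submodule add git@github.com:you/some-plugin.git vendor/plugins/some_plugin
git commit -m "vendor some_plugin via our own fork"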
Some even argue you should freeze Rails and all your pure-Ruby gems to the vendor directory as well, so that they only get updated when you want to. You avoid having to install all dependencies on every server you deploy to. This is slightly less relevant now that Rails makes it really easy to install all required gems with a simple rake task, though (rake gems:install).
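For gems, that era's workflow looked roughly like this (Rails 2.x conventions, as the rake task implies; the gem name and version are placeholders):

# config/environment.rb
config.gem "will_paginate", :version => "2.3.15"

Then:

rake gems:install   # install the declared gems
rake gems:unpack    # copy them into vendor/gems so deploys don't fetch them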
