Firstly, Happy New Year everyone.
I'm new to Rails, so please tolerate any incorrect use of terminology...
I have developed a simple Rails application, backed by a MySQL database.
I would now like to deploy this application to multiple, independent groups of users (i.e. it is a club application, and I would like to deploy it to a number of completely independent clubs).
I would like to use the same Rails Application code as much as possible, and just have a separate instance of the database for each club.
As each instance will be running on the same server (until server load proves to be an issue), I assume I can use a different port for each Rails server to steer users to the correct group?
I'd read that there are test and production modes; is it possible to have multiple additional production-style environments, e.g. club1 and club2, all sharing the same code but with unique databases?
My questions are how to support multiple separate database instances, and also how best to route to these?
Any advice on how to go about this would be much appreciated.
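For reference, the club1/club2 idea from the question is feasible: Rails environments are just named configurations, so each club can get its own environment file and database entry, with one server instance per club on its own port. A minimal sketch (the club names, database names, and ports are all illustrative):

# config/database.yml (one entry per club)
club1:
  adapter: mysql2
  database: club1_production
club2:
  adapter: mysql2
  database: club2_production

# create config/environments/club1.rb and club2.rb (e.g. copies of production.rb),
# then start one server per club, each on its own port
rails server -e club1 -p 3001
rails server -e club2 -p 3002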
If you are using Git (I suggest you should be!) then you can keep a central version of your code in one place and then deploy it multiple times, changing only the database.yml file (it should not be checked in to your git repository in that case). http://git-scm.com/
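Concretely, the "not checked in" part is a one-line ignore rule; each deployment target then gets its own hand-written copy of the file:

# .gitignore (keep the per-deployment database config out of the repo)
config/database.yml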
Let's say you put all of your code up on github.com with username 'snips' and the project is called 'clubster'. Using something like Heroku you would then do:
git clone https://github.com/snips/clubster.git
cd clubster
heroku create boxingclub
Because Heroku auto-configures your database, there is no need for a database.yml file.
git push heroku master
And you'd have a version of your code deployed at boxingclub.heroku.com
When you make changes to your code you just go to each of your installations and do:
git pull origin master
git push heroku master
Which updates your code on that particular instance of your application.
And if you're getting a little more advanced, you'd be looking at Chef to manage the whole setup for you. http://www.rubyinside.com/chef-tasty-server-configuraiton-2162.html
The other approach would be to have some kind of subdomain system, but I'll leave that to others to cover.
I have a Heroku app that is running on the Cedar-10 stack which will soon be deprecated. I am following the Migrating to the Celadon Cedar-14 Stack guide and have created an instance of my app on the new stack. However, this has also created another (empty) PostgreSQL database automatically.
Can I upgrade to the new stack but continue to use the existing database?
When I delete the app on the old stack, will it automatically delete the database associated with it?
I see that the database URL is set in the environment variable $DATABASE_URL - does that mean I can somehow update that and "link" the old database to the new app?
When searching for information on this I've come across Heroku's pg:copy and pg:transfer commands, but it seems strange to duplicate the database when it is working fine, and has a paid "upgrade" and backups already associated with it.
The problem with the configuration variable approach
While it is possible to share a database between two applications in Heroku by copying the DATABASE_URL configuration variable of the first application to the second, only the first application will have a primary connection (add-on) to the database.
If you remove the first application, your database will disappear from the Postgres dashboard in Heroku, although it seems the second application will still be able to use it. (Tested on 2015-10-25.)
It is also worth noting that:
Heroku do reserve the right to change database URLs as required. If this occurs, it will cause your secondary connections to fail, and you'll need to update the URLs accordingly.
What this means is that if Heroku decide to change the URL for your database, your new application will fail, and since the database is now gone from your dashboard, you will not be able to retrieve the new URL without approaching Heroku. (If at all.)
This implies that there is a more intrinsic connection between an application and its database than merely the DATABASE_URL, and it might not be the best idea to use this as a mechanism for transfer.
An alternative approach using PGBackups
Heroku's official documentation recommends using the free PGBackups add-on (which, honestly, you should always be running anyway) to transfer data between Postgres instances. The tool uses the native pg_dump and pg_restore utilities.
In particular, you can use the PG copy feature, described below:
PG copy uses the native PostgreSQL backup and restore utilities. Instead of writing the backup to a disk however, it streams it over the wire directly to the restore process on the new database.
They conveniently provide a means for you to copy a database between applications with a single command, and without any intermediate storage required:
As an alternative, you can migrate data between databases that are attached to different applications.
To copy the source database to the target database you will need to invoke pg:copy from the target application, referencing a source database.
Example:
heroku pg:copy source-application::OLIVE HEROKU_POSTGRESQL_PINK -a target-application
Here source-application is your existing stack, and target-application your newly created one. Not relevant to you, but still worth mentioning: any existing data in target-application's database will be destroyed.
After this, you might need to promote the database in the new application like so:
heroku pg:promote HEROKU_POSTGRESQL_PINK -a target-application
You certainly can change your configuration variables to point from the new app's database to the existing database. I've done this before without issue (just make sure you have a fresh backup).
So within your new app run heroku config. You'll see a reference to your database in DATABASE_URL. You can then change the value with:
heroku config:set DATABASE_URL=<existing-database-url>
At that point you'd have your old app and new app pointing at the same database without issue. As you said, you're paying for the database upgrade on your existing database, so deleting your old app shouldn't delete the database; if it does (again, you have backups, right?), that would be the time to contact Heroku!
If you're not comfortable working with the command line, you can also change the values in your app's Settings tab on the Heroku app dashboard.
With all of that said, you shouldn't actually need to create a new app in order to upgrade your stack. Again, I've done a stack upgrade before, and you ought to be able to do it in place, as the Heroku docs confirm.
Although @Drenmi's approach is probably the safest option, it is possible to transfer database "ownership" between apps.
Make sure you make a backup first.
In Heroku, databases are treated as add-ons. So when you, for example, push a Rails app, Heroku will add a Postgres database add-on to the app.
First use heroku addons to get a list of the addons attached:
$ heroku addons --app old_app
Add-on Plan Price
─────────────────────────────────────── ──── ─────
heroku-postgresql (resting-fairly-4672) dev free
└─ as HEROKU_POSTGRESQL_CYAN
We can then attach the database to the new app.
heroku addons:attach resting-fairly-4672 --app new_app
And detach it from the old app.
heroku addons:detach resting-fairly-4672 --app old_app
See https://devcenter.heroku.com/articles/managing-add-ons
I'm writing a rails app that is not intended to act as a web server; I only want to use it to manipulate the database easily. The same application can be used to manipulate many databases on one machine.
This situation is quite similar to git; one git app, one machine, and several git repositories.
I am using sqlite3.
Rails has only one config/database.yml per app, and I'm looking for a way to switch between database.yml files, one per "repository" my app manages. In git, you switch repositories by changing the current directory; for my app, any alternative that achieves this swapping of config/database.yml is OK for now.
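One possible way to get this swapping (a sketch, not the only approach): database.yml is run through ERB, so it can read the database path from an environment variable, mirroring git's "the current directory selects the repository" behaviour. The APP_DB variable name is made up for illustration:

# config/database.yml (the SQLite file is chosen via an env var)
development:
  adapter: sqlite3
  database: <%= ENV["APP_DB"] || "db/development.sqlite3" %>
  timeout: 5000

# usage: point the same app at different databases
APP_DB=/path/to/projectA.sqlite3 rails console
APP_DB=/path/to/projectB.sqlite3 rails console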
I am currently using Bitbucket and am handling a Ruby on Rails repository across users. By default, when one user pushes the repository (default command: git push origin master, i.e. the entire Rails folder), I assume that the db also gets pushed to Bitbucket, right?
When a second user downloads the repository off git, shouldn't I expect all the db files to also get downloaded?
Is it necessary for the second user to run rake db:migrate again after downloading the files?
In the specific case above, I am the second user, and I get the following error message upon downloading the repository off Bitbucket, while the app runs perfectly on the uploader's computer:
ActiveRecord::StatementInvalid in StaticPagesController#home
Could not find table 'users'
I want to make sure that both of us are working on the same DB and are not working in parallel on different datasets.
Data in your database will reside only in the database; it will not be in the git repository. The repository contains database config files and migration files to create the database schema on the fly. Again, it doesn't contain data.
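The reason is the .gitignore that rails new generates, which keeps the SQLite files out of the repository. The exact contents vary by Rails version, but it includes a line like:

# .gitignore (generated by Rails)
db/*.sqlite3

That is why the second user gets an empty database and must recreate the schema with rake db:migrate.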
If you want to work on the same database, I would look into using Amazon AWS RDS. Setting it up is doable, but not trivial enough for me to expound on exactly how you do that here.
I guess you are new to Rails. The way Rails handles databases in development is:
With the database structure: you maintain the structure through migration files. Yes, if you pull new code that contains a new migration file, you need to run rake db:migrate; you will be notified if you don't.
With the database data: in development, you can maintain the data you test against through the seed file; a minimal sketch follows below. You can watch this great screencast: http://railscasts.com/episodes/179-seed-data
Better yet, use the seed_fu gem: https://github.com/mbleigh/seed-fu
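As a sketch of the seed-file idea (the User model and its attributes are illustrative), db/seeds.rb can hold data that is safe to load repeatedly with rake db:seed:

# db/seeds.rb -- creates the record only if it doesn't exist yet, so re-running is safe
if User.where(:email => "admin@example.com").empty?
  User.create!(:email => "admin@example.com", :name => "Admin")
end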
The company I'm working for currently has a Rails 3 application on a dedicated remote server. The current development environment is a local machine.
We are trying to put some infrastructure in place to hire some contractors to be able to do some development remotely. Obviously this won't work with our current development setup because it's local.
I was thinking that I could put the development and test code on separate subdomains, i.e. test.mydomain.com and dev.mydomain.com.
This is a small (but growing) project and we would not have more than one developer working on one or two changes on our system at any given time. We are starting out with smaller enhancements and working our way up to bigger ones.
My question is, what is the best way to deploy a development system that contractors would be able to access remotely and securely?
Normal practice would be for developers to still develop locally on their own systems, cloning the code using a version-control system (VCS) such as git. Then you'd either 'pull' new code from a location they give you, or allow them to 'push' code to a location you give them. There might well be a 'staging' server set up, though, where the application is deployed as an additional check before deploying to the 'live' server. Rails lets you set up an arbitrary number of environments ('development', 'production' and 'test' are the defaults, but more can be added; a sketch follows below), or you could use a deployment solution which ignores these settings and uses a different approach (such as Heroku).
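As a sketch of the extra-environment idea (all names and values here are illustrative): copy config/environments/production.rb to config/environments/staging.rb, add a matching database entry, and start the server in that environment:

# config/database.yml
staging:
  adapter: mysql2
  database: myapp_staging
  username: staging
  password: <%= ENV["STAGING_DB_PASSWORD"] %>

# run the app against the staging configuration
rails server -e staging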
You need to have source control, preferably SVN; then you can access that source code anywhere you want. Give the contractor (and yourself) a user ID and password for the SVN server, and set up a local development environment, with a local development database, on your and the contractor's PCs. Anyone can deploy to the dev environment, but to production only if they have the credentials.
I am doing something similar. We use Git to manage the code. To manage those different environments, I think the Git workflow is great, but the next step is configuring a remote database for development that all the developers access. They would just need to pull the code via Git; once you configure a remote database in database.yml, everyone is in sync because you are developing against the same "dev" database.
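What that shared-dev entry in database.yml might look like (the host and credentials are placeholders):

# config/database.yml (every developer's machine points at one shared server)
development:
  adapter: mysql2
  host: dev-db.example.com
  database: myapp_dev
  username: dev
  password: <%= ENV["DEV_DB_PASSWORD"] %>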
I am trying to deploy my first Rails app and struggling a bit. My plan is to initially host it on a Heroku free account to get a feel for live deployments and do some production testing. Eventually I might move it to a VPS.
I use git and do not use Capistrano at the moment.
Heroku primarily uses git, which is fine, but git manages the entire project state, not individual files. So I have issues managing configuration files that differ between production and development, for example captcha keys in environment.rb or Google JS API keys.
So what I did was:
1 - Take the environment-specific configuration out of environment.rb and put it in development.rb and production.rb. I created a branch called dev where I do my development, then merge it into master and push master to the production Heroku remote.
This all works ok, but wondering if there is a better way to do it.
The other massive problem is that I might have to use different gems in dev and on Heroku. For example, I use Thinking Sphinx for search in dev, but on Heroku I have to use acts_as_solr, which means my Article.search call in the controller will have to be Article.find_by_solr in production. This can become messy very fast.
What's the best way to deal with this kind of situation?
Thanks
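For reference, the mechanics of per-environment gems are just Bundler groups, though the answers below explain why diverging code paths like this are risky:

# Gemfile (different search gems per environment)
group :development do
  gem 'thinking-sphinx'
end
group :production do
  gem 'acts_as_solr'
end

Locally you would then run bundle install --without production, so the production-only gems are never loaded in development.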
For non-sensitive keys such as Google's JS API key etc., I found this RailsCasts episode very helpful.
Just create a config file under config/ and store your settings for each environment in there.
# /config/google.yml
development:
  google:
    js:
      key: 123456
test:
  google:
    js:
      key: 345678
production:
  google:
    js:
      key: 567890
Then create an initializer inside config/initializers/ that will parse the yaml and create an object which can be used without worrying about the current environment.
# /config/initializers/google.rb
GOOGLE_CONFIG = YAML.load_file("#{RAILS_ROOT}/config/google.yml")[RAILS_ENV]["google"]
RAILS_ENV holds the name of the current environment, so on application startup the initializer picks out the settings for that environment, and you can refer to them in your code through GOOGLE_CONFIG:
<script type="text/javascript" src="http://www.google.com/jsapi?key=<%= GOOGLE_CONFIG['js']['key'] %>"></script>
For the latter issue, where code itself differs from environment to environment, I believe Capistrano would be a better solution.
For values that you want to keep different between environments, Heroku offers config vars.
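For example (the GOOGLE_JS_KEY name is just illustrative):

heroku config:set GOOGLE_JS_KEY=123456

Your code then reads ENV['GOOGLE_JS_KEY'], so the same code runs everywhere while the value differs per environment.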
As for using one indexing program in production and another in development, that's a bad idea, and will make things way messier than they need to be. Either start using Solr locally, or set up a Thinking Sphinx instance in EC2 yourself, and have your dynos connect to it.
I would suggest that it is very unwise to have different code in development and production.
Your development, test and production environments should be as similar as possible.
In fact, I would go so far as to say the entire point of the different environments is simply to provide an easy system for allowing minor configuration changes between setups: different databases, different API parameters, different caching options. But the core system MUST be the same.
The issue you will face otherwise is doubling your development effort. You still have to write the code. So in the search example you provide above, you would have to develop and test twice: once for the Solr production system and once for your local Sphinx. Then you need to be able to switch and test between the two approaches in all of your environments to ensure test coverage and functionality.