I have two heroku applications, staging and production. As the names would suggest, we use the staging app for testing/new code. Once we deem that it's relatively bug free we move it to production.
The one thing this doesn't carry over is data. We might add a bunch of data to staging to see how it looks, then have to do it all again in production for the general user to see it.
For example, we have a news reader which pulls in stories from many different RSS feeds. Say we add 20 feeds to staging and decide that they're all good for production. Is there any way to write a method that connects to the production database from staging and adds each feed? I'm aware that there are ways to do this by overwriting the whole database, but that doesn't work here since user-specific data in production would be deleted.
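One way to sketch this (an assumption, not a built-in feature): define a namespaced model that opens a second connection to the production database and copy the records across. The model names (`Feed`, `ProductionFeed`) and the `production` entry in `config/database.yml` are placeholders; on Heroku you would build the connection settings from the production app's `DATABASE_URL` instead.

```ruby
# A sketch: copy Feed records from the staging database to production.
# Assumes a `production` entry exists in config/database.yml that the
# staging app is allowed to reach.
class ProductionFeed < ActiveRecord::Base
  self.table_name = 'feeds'
  establish_connection 'production'
end

Feed.find_each do |feed|
  # Skip feeds that already exist in production to avoid duplicates.
  next if ProductionFeed.exists?(url: feed.url)
  ProductionFeed.create!(feed.attributes.except('id'))
end
```

Because only the listed rows are inserted, existing user-specific data in production is untouched, unlike a full database overwrite.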
Related
When a model is scaffolded in Rails - let's say to keep records for users - we also get a route/controller/view for handling these (CRUD). Therefore visiting "root_url/users" would list all the users, "root_url/users/1" would display the first user, etc.
While this is handy in a dev environment, it's inappropriate for production (currently production for me is Heroku).
I could just remove the extra controllers, views etc. but I was wondering whether there is a standard way of approaching this issue (like a flag in a config file) so that there isn't a mismatch between dev and production.
Yes, there is a standard way of approaching this issue: it's called testing, code review, and generally just doing what is required for your application to work.
Scaffolding code is a good thing, but you just have to use it when it's needed. If you commit often enough, you can scaffold and easily reset what you've done if it does not meet your need.
It is your responsibility not to put any unneeded code into production :)
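There is no built-in flag for this, but one common approach is to only draw the scaffolded routes outside production in `config/routes.rb` (a sketch; the app and resource names are placeholders):

```ruby
# config/routes.rb -- only expose the scaffolded CRUD routes
# in development/test; they simply won't exist in production.
MyApp::Application.routes.draw do
  unless Rails.env.production?
    resources :users
  end
end
```

With the routes gone, requests to "root_url/users" in production just 404, even though the controller and views are still deployed.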
I've been searching and doing my research but I can't seem to find any articles directly talking about this. I have an art gallery Rails app with around 6 models of different attributes, pieces of art, etc. When I make changes to the site and redeploy, will the databases also be reset? Or are Postgres and the Rails app separate on Heroku?
I also read that someone takes all of their data and puts it into db/seeds.rb, then repopulates the database with the seed data once it's redeployed? Does that sound right? Any insight would be very helpful. Thank you
If you're using a database, your data won't get lost on redeploys. Only data stored on the dyno's ephemeral filesystem (such as /tmp) is lost when a deploy is performed.
I'm going to assume you're using Heroku Postgres. In that case, check this out; it's good to regularly create backups: https://devcenter.heroku.com/articles/heroku-postgres-backups
In db/seeds.rb you should only add data which is necessary to set up the project, and nothing more, e.g. creating an admin user.
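A minimal sketch of that, assuming a `User` model with `email` and `password` attributes (placeholder names for your own schema):

```ruby
# db/seeds.rb -- keep it minimal and idempotent so that running
# `rake db:seed` more than once is safe and never duplicates rows.
admin = User.find_or_initialize_by(email: 'admin@example.com')
admin.password = ENV.fetch('ADMIN_PASSWORD', 'changeme')
admin.save!
```

Because `find_or_initialize_by` reuses the existing row, redeploying and reseeding won't create a second admin.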
No data will be lost on redeploy to Heroku unless you intentionally delete it.
Seed data is only for populating default database values in a Rails application.
I am assuming you are uploading pictures in your application and they don't persist after a deploy. That's true: Heroku does allow you to upload images, but they do not persist after deployment.
Uploaded images only persist for a particular interval of time, until the dyno restarts or is redeployed.
If this is the case for you, try uploading images to an Amazon S3 bucket; all the common upload gems support that.
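For example, with Paperclip the S3 storage is configured on the attachment (a sketch; the `Artwork` model, `:image` attachment, and environment variable names are assumptions):

```ruby
# Requires gems 'paperclip' and 'aws-sdk' in the Gemfile.
# Files are then written to S3 instead of the ephemeral dyno filesystem.
class Artwork < ActiveRecord::Base
  has_attached_file :image,
    storage: :s3,
    s3_credentials: {
      bucket:            ENV['S3_BUCKET_NAME'],
      access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
      secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
    }
end
```

CarrierWave and other upload gems offer equivalent S3 storage options.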
I'm building a Rails 3.2 app upon a legacy database which also has some broken records in different tables. One of the issues giving the most headache is that it includes invalid dates.
I've setup a sandbox which I manually fixed one time to get my code working. Now it's time for deployment. For this reason, the sandbox is reset every night and copied from the live database, ferret indexes are rebuilt, and migrations are re-applied. We are going to deploy to the sandbox often to get in the last fixes before deploying to the live setup.
As the legacy PHP app and this new Rails app need to run in parallel for a few weeks to months, we cannot simply one-time-fix the dates (Update: just for clarification, that means they run on the same database at the same time). I need a way to automate this, maybe with a migration or rake task (I'd go for the latter).
But the problem is: ActiveRecord chokes on loading such records so I have no way to investigate the record and fix the dates by some hardcoded assumptions made in ruby code.
A second problem is that the legacy database has inconsistencies because the PHP code did not use transactions, and some code paths are broken and left orphans and broken table constraints behind. I will deal with those as they occur; most of them are already taken care of in the models. The first problem to tackle is the dates.
How would you usually fix this? Maybe there's even some magic gem out there which supports migrating legacy databases with broken records by intercepting exceptions and running some try-to-fix code...
The migration path uses MySQL, and three production environments (stable with the live database, staging with the same database, and sandbox with a database clone reset every night). We decided against doing a one-time data mapping / migration because we cannot replace the complete legacy application in one step (it consists of a CMS with about 50,000 articles, hundreds of topics, a huge file database with images and downloads, supporting about 10 websites, about 12 years of data and work, messy PHP code of varying programming skill, duplicated code from different migration stages, pulling in RSS content from partner sites to mix articles/posts from there into the article timelines in our own application's topics, and a lot more fun stuff...).
First step is to migrate the backend application to get a consistent admin and publishing interface. The legacy frontend applications still need to write to the database (comments and other content created by visitors). So the process of fixing the database must be able to run unattended on a regular basis.
We already have fixes in place that gracefully handle broken model dependencies in belongs_to and has_many. Paperclip integration has been designed to work with all the fantastic filename mappings invented. And the airbrake gem reports all application crashes to our Redmine installation so we get a quick overview of all the remaining quirks.
The legacy applications have already been modified to work with the latest MySQL version and have been migrated to a current MySQL database server.
I had the same problem. The solution was to tell mysql2 not to perform casting, like this:
client.query(sql, cast: false).each do |row|
  row['some_date'] = Date.parse(row['some_date']) rescue nil
end
See mysql2 documentation for details on how to build client object. If required, access rails db config via ActiveRecord::Base.configurations.
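Building the client from the Rails config could look like this (a sketch; the table and column names are examples, and the config keys follow a Rails 3.2-style database.yml):

```ruby
require 'mysql2'

# Reuse the Rails database settings for a raw mysql2 client.
config = ActiveRecord::Base.configurations[Rails.env]
client = Mysql2::Client.new(
  host:     config['host'],
  username: config['username'],
  password: config['password'],
  database: config['database']
)

# With cast: false, dates come back as strings, so invalid values like
# '0000-00-00' no longer blow up before you can inspect them.
client.query('SELECT id, published_on FROM articles', cast: false).each do |row|
  row['published_on'] = Date.parse(row['published_on']) rescue nil
end
```

This lets you read the broken rows, decide on a fix, and write corrected values back with plain SQL.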
Create a data import rake task that does all the conversions and fixes you need (including the date parsing and fixing), and run it every time you get a fresh update from the legacy app. The task can use raw SQL (look up the "execute" and "exec_query" methods); it doesn't have to work with models. This will be your magical "gem" that you were looking for. Obviously, you cannot have a one-size-fits-all tool for that, as every case of broken data is unique.
But just don't create kludges in your new code base.
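A sketch of such a task, suitable for the nightly sandbox reset (the table and column names are examples, not from the original question):

```ruby
# lib/tasks/legacy_fixes.rake
namespace :legacy do
  desc 'Repair broken records copied from the legacy database'
  task fix: :environment do
    conn = ActiveRecord::Base.connection

    # MySQL zero-dates choke ActiveRecord's casting; null them out.
    conn.execute(
      "UPDATE articles SET published_on = NULL WHERE published_on = '0000-00-00'"
    )

    # Remove orphaned comments left behind by the legacy PHP code.
    conn.execute(
      "DELETE c FROM comments c
       LEFT JOIN articles a ON a.id = c.article_id
       WHERE a.id IS NULL"
    )
  end
end
```

Because it only uses raw SQL, the task runs unattended even when the broken rows would make the corresponding models unloadable, and it can simply be re-run after every copy from the live database.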
Similar to: Rails: How to handle existing invalid dates in database?, which also lacks a correct answer, so I repost my solution below.
I think the simplest solution that worked for me was to set cast: false in the database.yml file, e.g. for the development section:
development:
  <<: *default
  adapter: mysql2
  (... some other settings ...)
  cast: false
I believe it will solve your problem; you can then parse the dates yourself with Date.parse(), e.g. Date.parse(foo.created_at)
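Since the legacy rows contain invalid dates, it's worth wrapping that parse in a helper that swallows the failure (a small sketch; the helper name is my own):

```ruby
require 'date'

# Returns a Date, or nil when the stored value is an invalid
# legacy date such as MySQL's '0000-00-00' (or nil/empty).
def safe_parse_date(value)
  Date.parse(value.to_s)
rescue ArgumentError, TypeError
  nil
end

safe_parse_date('2024-03-01')  # => Date for 2024-03-01
safe_parse_date('0000-00-00')  # => nil
```

Rescuing only ArgumentError/TypeError keeps genuine programming errors visible instead of hiding everything behind a bare rescue.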
Most Rails database-deployment discussions assume there are two facets of a database: the schema, which is handled in code via migrations, and the data, which is all user-generated and never needs to move from test to production. What about the stuff that lies in between?
For example, we have a number of mostly-static tables that contain complex surveys our users can take: questions, choices, branching. We want to be able to edit those surveys via our web app, but we want to be able to test changes on the staging server before we push them to production.
What's a good way to handle this in Rails, which wants all the models to exist in one database, and certainly wouldn't like the same model (with different contents) to exist in two databases? Are there any good discussions online, or any gems that have abstracted out this type of functionality?
I've worked with a large, complex CMS system that had its own multi-environment version control and deployment, so you could deploy your change to the test system (without riskily linking the test and production databases), test it thoroughly, and then do a one-click deploy to production. I guess I'm looking for something like that on a smaller scale.
I would use ActiveResource to pull the desired records from the staging environment to production. Alternatively, you could create a name-spaced set of ActiveRecord models to connect to the staging database directly. Either way, the implementation is roughly the same, but ActiveResource allows more flexibility with changing deployment details and the ActiveRecord method requires less setup code.
The actual implementation code should be fairly simple - pull a list of un-imported records from staging (you'll probably want to map the production records to their source staging records to easily prevent duplication) and copy the data.
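The ActiveResource variant could be sketched like this (all names and the staging URL are assumptions; it presumes the staging app exposes the surveys as a JSON/XML resource):

```ruby
module Staging
  # Reads surveys over HTTP from the staging app's REST interface.
  class Survey < ActiveResource::Base
    self.site = 'https://staging.example.com'  # assumed staging URL
  end
end

Staging::Survey.all.each do |remote|
  # Map production rows to their staging source to prevent duplication.
  next if ::Survey.exists?(source_id: remote.id)
  ::Survey.create!(remote.attributes.except('id').merge(source_id: remote.id))
end
```

The namespaced-ActiveRecord variant looks the same apart from `Staging::Survey` calling `establish_connection` to the staging database instead of going over HTTP.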
Not sure about Rails, but I use a Python script called Migraine, which is useful for synchronizing development, staging, and live (production) site databases for the Drupal CMS. For more info, see the Migraine presentation and the script download page.
Is there a recommended practice for managing multiple testing and production database users under one Rails app? I've a Rails app with four different database users associated with it:
owner, the DB user who owns the app schema. Permissions: just about everything. (This is the maintenance/migration account.)
app, the DB account that powers the web application. Permissions: read on most tables and views, write on some temporary caching tables.
writer, the DB account that feeds data in. Permissions: write on a few tables.
auditor, the DB account that logs DB write activity. Permissions: owns a few triggers and functions.
Right now my migration files contain GRANT/REVOKE logic for these specific, named users. However, in the "development" environment it is often convenient for these users to all be the very same account. Additionally, the hardcoded names of these users may conflict with already-existing DB user names in the final production environment.
It sounds like you're going to need to manage separate database connections for each class of user you've got (app/writer). This is often handled by mixing in helpers that set up those connections on the model classes that need them.
There's no reason you can't configure this in your development environments, but you'll get the most bang for the buck by using a Staging environment that exactly resembles your Production environment for issues like this, where you can do a final shakedown of behavior before something is pushed live.
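One common pattern for this (a sketch; the class names and the `<env>_writer` database.yml convention are assumptions) is an abstract base class per DB role:

```ruby
# Models that need the writer account inherit from this base class,
# which points at a matching entry in config/database.yml, e.g.
# `development_writer`, `staging_writer`, `production_writer`.
class WriterRecord < ActiveRecord::Base
  self.abstract_class = true
  establish_connection "#{Rails.env}_writer"
end

class FeedEntry < WriterRecord
  # Uses the writer connection; ordinary models keep the app connection.
end
```

In development, the `development_writer` entry can simply duplicate the main credentials, so all "users" collapse into one account there while staging and production keep the real separation.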