Rails + Elastic Beanstalk: Abort deployments when database migration fails - ruby-on-rails

I've got a Rails 5/Ruby 2.3 app running on Elastic Beanstalk. I want deployments to abort when a database migration fails.
My RAILS_SKIP_MIGRATIONS is set to false, so migrations execute during deployment. However, I had an issue where one migration failed but the deployment still completed. This, of course, resulted in several 500 errors afterwards.
I've considered writing an ebextension that runs post-deployment and checks whether there's an issue; if there is, I roll back to the previous app version. However, I'm not sure this approach is the right one.
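One hedged alternative to a post-deployment rollback check (file name and command below are illustrative, not from the post): skip the platform's built-in migration hook and run the migration yourself in a container_command, since a non-zero exit status from a container_command aborts the deployment before the new version goes live.

```yaml
# .ebextensions/10_migrate.config — sketch; assumes RAILS_SKIP_MIGRATIONS=true
# so the platform's own migration hook is skipped and this command is the
# only place migrations run. A non-zero exit status here aborts the deploy.
container_commands:
  10_migrate:
    command: "bundle exec rake db:migrate"
    leader_only: true
```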

Related

Rails 6.1 failed migrations no longer block rails from running?

In the past when there were migrations which didn't run yet, there would be an error 'pending migrations' that would prohibit rails (server, console) from starting.
I've updated to Rails 6.1 and have a failing migration. Nevertheless the server is running and I can go into the console without any warning.
The migration that is failing is an ActiveRecord::Migration[6.1] migration.
Is this intended? Is there a way to get the old behavior back? We use Kubernetes, and a failed migration blocking the new pod from coming live is exactly what we want.
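One way to approximate the old behavior (the initializer path is hypothetical) is to check for pending migrations explicitly at boot, which raises ActiveRecord::PendingMigrationError and keeps the pod from going live:

```ruby
# config/initializers/check_pending_migrations.rb — sketch, not from the post.
# Raises ActiveRecord::PendingMigrationError at boot if any migration has not
# been run, restoring the "server refuses to start" behavior. Note this
# catches *pending* migrations; a migration that ran and failed halfway
# still needs its own handling.
Rails.application.config.after_initialize do
  ActiveRecord::Migration.check_pending! if Rails.env.production?
end
```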

Ruby on Rails accidentally db:rollback

I'm new to Rails, and I accidentally ran the rails db:rollback command in development.
Next I ran rails db:migrate:up VERSION=XXXX to change the status of the file I rolled back from down back to up.
The migration file was about images. However, my images were gone in development because of the rollback, even though the migration's status is now the same as before I ran rails db:rollback.
In this case, if I push this to the remote repository and it's merged into production, will the images already there be gone, as mine were in development?
When the add_column method in a migration runs, it just adds a column, and the migration will run in both the production and development environments. The images you then added through localhost are stored in the database, regardless of the migration.
Rolling back removes the column by running remove_column, so it hampers your development database: removing a column loses all the data stored in that column of the table. Production is not touched in the same way.
Images are not pushed to the production database or the remote repository; the migration only adds or removes the column, so the rollback affects only your local/development database.
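As a sketch (table and column names are hypothetical), the migration in question would look something like this; db:rollback reverses it with an implicit remove_column, which drops the column and its data only in the database the command runs against:

```ruby
# db/migrate/20230101000000_add_image_url_to_posts.rb — illustrative only.
class AddImageUrlToPosts < ActiveRecord::Migration[6.1]
  def change
    # Reversible: `rails db:rollback` turns this into remove_column,
    # destroying every value stored in the column — but only in the
    # database the command is run against (here, development).
    add_column :posts, :image_url, :string
  end
end
```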
Unless you're doing something wacky in your migrations, any goofs like this you make to your development database will not affect production. That's why the dev and prod databases are separate.
The general problem of "is it safe to push to production" can be mitigated by adding a staging server which runs in the production Rails environment, but is used for additional manual testing of new features. Once everything checks out in staging then push to production. Many services provide a "pipeline" to do this for you, for example Heroku Pipelines.

After setting up local dev environment with database-migration plugin, how to then update test servers

Following the documentation I was able to get the database-migration plugin working on an existing project which already has quite a few tables and is running in production. I went through the following process locally:
Pulled down latest production database
Sourced the production database into local dev
grails dbm-generate-changelog changelog.groovy
grails dbm-changelog-sync
grails dbm-gorm-diff 2012-06-25-latest.groovy --add
grails dbm-update
I understand why I had to do each of those locally to get to a point of applying future change sets. However, now I want to run my 2012-06-25-latest.groovy on one of my test servers. It already has the latest database based on our production database.
I tried just running dbm-update, but without the sync it failed trying to create tables that already exist. So I ran dbm-changelog-sync, but then running dbm-update didn't actually apply the latest file.
I know that I can add a context (tag) to the change sets and specify that context when running dbm-update but I want to know if this is the only way to go about this or if my workflow needs to be modified; what is the best way to go about applying the changelog to the test server?
What I ended up doing is deleting all the rows in the DATABASECHANGELOG table where the FILENAME = '2012-06-25-latest.groovy'. I then ran dbm-status and it told me I had 256 changes waiting. I then ran dbm-update and all is well.
I'm not sure this is how it was supposed to be done, but it seems to have worked.
UPDATE: In addition to this I could probably run the entire migration on an empty database, and then do a mysqldump of the production database with INSERT statements only.
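The manual fix described above amounts to something like the following (table and file names as in the post; exact identifier casing depends on your database):

```sql
-- Remove the change sets recorded for the generated changelog so that
-- dbm-update will see them as pending and re-apply them.
DELETE FROM DATABASECHANGELOG
WHERE FILENAME = '2012-06-25-latest.groovy';
```

After this, dbm-status should report the change sets as waiting, and dbm-update applies them.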

Avoiding deployment gem-fail downtime while using Passenger, bundler & git-based deploys

I'm deploying a Rails 3 app on Passenger 3.0.7 using capistrano and git-based deployment, similar to GitHub's setup:
https://github.com/blog/470-deployment-script-spring-cleaning -- this means the app operates entirely out of one directory, with no /releases/123456 and symlink switching.
If we've added any gems, our app starts throwing 500 errors during deployment, during the bundle:install phase but before deploy:restart. The code has been updated, it seems like Passenger is already starting to use it, and the newly required gems can't be found yet.
This is not caused by new workers being spun up, as I've tried setting the Passenger idle_time to 0 and max_instances and min_instances to the same value, so that workers are never spun down.
Running on Linux with ruby-ee 1.8.7-2011.03. Sample error from Passenger: https://gist.github.com/54794e85d2c799e4f697
I've also considered doing "two-directory" git-based deployment as a hack -- swapping in the new code once the bundle is complete. Ideas welcome.
Go with the two-directory deployment. Apart from avoiding the 500s during deployment, this will also act as a safety net if you need to rollback during/after deployment.
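The two-directory swap can be sketched as a symlink flip (all paths below are made up for illustration; in a real deploy you would fetch the code and run bundle install in the new directory before flipping, then touch tmp/restart.txt so Passenger reloads):

```shell
#!/bin/sh
set -e
ROOT=/tmp/deploy_demo            # illustrative root, not from the post
mkdir -p "$ROOT/releases/old" "$ROOT/releases/new"

# Passenger serves whatever "current" points at.
ln -sfn "$ROOT/releases/old" "$ROOT/current"

# ... git fetch + bundle install into releases/new would happen here ...

# Flip the symlink so requests switch to the fully prepared tree,
# instead of Passenger picking up a half-updated checkout mid-bundle.
ln -sfn "$ROOT/releases/new" "$ROOT/current"
readlink "$ROOT/current"   # -> /tmp/deploy_demo/releases/new
```

This is essentially what Capistrano's releases/ + current layout gives you for free, which is also why it doubles as a rollback safety net.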

How should I deploy a patch to a Passenger-based production Rails application without downtime?

I have a Passenger-based production Rails application which has thousands of users. Occasionally we need to apply a code patch (we use git) and the current process for doing this (you can assume there are no data migrations) is:
Perform git pull origin [production-branch-name] on the server
touch tmp/restart.txt to restart Passenger
This allows us to patch the server without having to resort to putting up a maintenance page, which is great, but it doesn't feel quite right since it's not actually a proper 'deployment', and we still need to manually update the revision file and our deployment doesn't appear in the Hoptoad or NewRelic services we use.
Ideally I would run cap production deploy and just let the standard Capistrano deployment script take care of everything, but is this a dangerous thing to do without putting up a maintenance page? This deployment process seems to be fairly safe in that the new revision is deployed to a completely separate folder and only right at the end of the process is a symlink re-created to switch the currently deployed version, but I'm still fairly paranoid about this somehow resulting in a lost or failed request.
No problems here doing cap production deploy. If the deployment fails, the previous release is still good. Nothing will fail, since the old release is still loaded (cached) in the current Passenger process. Touching tmp/restart.txt then picks up the new release and all is good in the world.
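For reference, the Passenger restart step in a Capistrano 2-era deploy.rb looks roughly like this (the standard recipe, shown as a sketch):

```ruby
# config/deploy.rb — Capistrano 2-style sketch; current_path is the symlink
# Capistrano re-points only at the end of a successful deploy, so a failed
# deploy never touches the running release.
namespace :deploy do
  task :restart, :roles => :app, :except => { :no_release => true } do
    # Passenger reloads the app on the next request after this touch.
    run "touch #{current_path}/tmp/restart.txt"
  end
end
```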
