GitLab CI - deploy to Heroku and run migrations - ruby-on-rails

I have a Rails app hosted on gitlab.com, and I am configuring it to deploy to Heroku following this guide: http://docs.gitlab.com/ce/ci/examples/test-and-deploy-ruby-application-to-heroku.html. It works fine.
My question is: how can I run migrations every time I deploy to Heroku? When deploying via the CLI I would usually do:
git push heroku master && heroku run rake db:migrate
but using .gitlab-ci.yml I have no clue how to do this...

If you want to be able to use the full power of the Heroku CLI in your GitLab CI process (including having the build fail if a migration fails for whatever reason), you can also try this approach, which installs the Heroku CLI and delivers the status codes of your Heroku commands back to GitLab, as well as, of course, the command-line output. Using heroku run without credentials on the command line requires the HEROKU_API_KEY environment variable to be set to a key which has access to the app in question.
before_script:
  - echo "deb http://toolbelt.heroku.com/ubuntu ./" > /etc/apt/sources.list.d/heroku.list
  - wget -O- https://toolbelt.heroku.com/apt/release.key | apt-key add -
  - apt-get update
  - apt-get install -y heroku-toolbelt
  - gem install dpl
stages:
  - deploy
test_on_heroku:
  type: deploy
  script:
    - dpl --provider=heroku --app=my_heroku_app --api-key=$HEROKU_API_KEY
    - heroku run <your command here> --exit-code --app my_heroku_app
I actually run my tests on a Heroku instance to be sure the environment is exactly the same, and this is where this comes in really handy.
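For the migration from the original question, that last line could be, for example:
    - heroku run rake db:migrate --exit-code --app my_heroku_app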

The information in this answer may be out of date. Please see both answers below, and remember to upvote the answers that are up to date to help future visitors.
Here is a sample .gitlab-ci.yml I have that runs my tests, then deploys to the Heroku staging app (for master branch pushes) or the production app (for tag pushes):
image: "ruby:2.3"
test:
script:
- apt-get update -qy
- apt-get install -y nodejs
- gem install bundler
- bundle install -j $(nproc) --without production
- bundle exec rails db:create RAILS_ENV=test
- bundle exec rails db:migrate RAILS_ENV=test
    - bundle exec rails test RAILS_ENV=test
staging:
  type: deploy
  environment: staging
  script:
    - gem install dpl
    - dpl --provider=heroku --app=$HEROKU_STAGING_APP_NAME --api-key=$HEROKU_API_KEY
    - "curl -n -X POST https://api.heroku.com/apps/$HEROKU_STAGING_APP_NAME/ps -H \"Accept: application/json\" -H \"Authorization: Bearer ${HEROKU_API_KEY}\" -d \"command=bundle exec rails db:migrate\""
  only:
    - master
production:
  type: deploy
  environment: production
  script:
    - gem install dpl
    - dpl --provider=heroku --app=$HEROKU_PRODUCTION_APP_NAME --api-key=$HEROKU_API_KEY
    - "curl -n -X POST https://api.heroku.com/apps/$HEROKU_PRODUCTION_APP_NAME/ps -H \"Accept: application/json\" -H \"Authorization: Bearer ${HEROKU_API_KEY}\" -d \"command=bundle exec rails db:migrate\""
  only:
    - tags
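The /apps/.../ps endpoint used in the curl calls above appears to be the older Heroku API, which fits the out-of-date warning at the top of this answer. A rough sketch of the equivalent call against the current Platform API's one-off dyno endpoint, in case the legacy one no longer works, might look like this (same variables as above assumed):
    - |
      curl -X POST https://api.heroku.com/apps/$HEROKU_STAGING_APP_NAME/dynos \
        -H "Content-Type: application/json" \
        -H "Accept: application/vnd.heroku+json; version=3" \
        -H "Authorization: Bearer ${HEROKU_API_KEY}" \
        -d '{"command": "bundle exec rails db:migrate"}'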

To update @huesforalice's answer, this also works with the new Heroku CLI, which replaced the Heroku Toolbelt in November 2016:
before_script:
  - apt-get update
  - apt-get install -y apt-transport-https
  - echo "deb https://cli-assets.heroku.com/branches/stable/apt ./" > /etc/apt/sources.list.d/heroku.list
  - wget -O- https://cli-assets.heroku.com/apt/release.key | apt-key add -
  - apt-get update
  - apt-get install -y heroku
  - gem install dpl
staging:
  type: deploy
  variables:
    HEROKU_API_KEY: $HEROKU_STAGING_API_KEY
  script:
    - dpl --provider=heroku --app=$HEROKU_STAGING_APP --api-key=$HEROKU_STAGING_API_KEY
    - heroku run rails db:migrate --exit-code --app $HEROKU_STAGING_APP
  only:
    - master
production:
  type: deploy
  variables:
    HEROKU_API_KEY: $HEROKU_PRODUCTION_API_KEY
  script:
    - dpl --provider=heroku --app=$HEROKU_PRODUCTION_APP --api-key=$HEROKU_PRODUCTION_API_KEY
    - heroku run rails db:migrate --exit-code --app $HEROKU_PRODUCTION_APP
  only:
    - tags

To further improve @huesforalice's and @Jimmy Bosse's answers: if you want to
- avoid putting the Heroku CLI installation in the global before_script and only use it in the deployment stages, and
- at the same time avoid copying and pasting the installation snippet into different stages,
you can do something like this, using YAML anchors to DRY things up:
before_script:
  # the global before_script
  - gem install bundler --no-document
  - bundle check || bundle install --jobs $(nproc)

.deployment_before_script: &deployment_before_script
  before_script:
    - echo "deb http://toolbelt.heroku.com/ubuntu ./" > /etc/apt/sources.list.d/heroku.list
    - wget -O- https://toolbelt.heroku.com/apt/release.key | apt-key add -
    - apt-get update
    - apt-get install -y heroku-toolbelt
    - gem install dpl

# other stages...

staging:
  stage: deploy
  <<: *deployment_before_script
  script:
    - dpl --provider=heroku --app=$HEROKU_APP_STAGING --api-key=$HEROKU_API_KEY_STAGING
    - heroku run bundle exec rails db:migrate --exit-code --app $HEROKU_APP_STAGING
  only:
    - master

production:
  stage: deploy
  <<: *deployment_before_script
  script:
    - dpl --provider=heroku --app=$HEROKU_APP_PRODUCTION --api-key=$HEROKU_API_KEY_PRODUCTION
    - heroku run bundle exec rails db:migrate --exit-code --app $HEROKU_APP_PRODUCTION
  when: manual
  only:
    - tags
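As a side note, newer GitLab versions (11.3 and later) also offer the extends: keyword, which achieves the same reuse without YAML anchors. A rough sketch of the staging job rewritten that way, under that assumption:
.deployment_template:
  stage: deploy
  before_script:
    - echo "deb http://toolbelt.heroku.com/ubuntu ./" > /etc/apt/sources.list.d/heroku.list
    - wget -O- https://toolbelt.heroku.com/apt/release.key | apt-key add -
    - apt-get update
    - apt-get install -y heroku-toolbelt
    - gem install dpl

staging:
  extends: .deployment_template
  script:
    - dpl --provider=heroku --app=$HEROKU_APP_STAGING --api-key=$HEROKU_API_KEY_STAGING
    - heroku run bundle exec rails db:migrate --exit-code --app $HEROKU_APP_STAGING
  only:
    - master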

Related

Problems storing video/screenshots when testing Rails app with Cypress/CircleCI

I am running Cypress to test a Rails app on CircleCI. I have the tests running on CircleCI with the following config, but no video/screenshot assets are created in the CircleCI artifacts if tests fail.
version: 2.1
orbs:
  ruby: circleci/ruby@1.0.6
  node: circleci/node@3.0.1
jobs:
  build:
    docker:
      - image: cimg/ruby:2.7.2-node
    steps:
      - checkout # pull down our git code.
      - ruby/install-deps # use the ruby orb to install dependencies
      - node/install-packages:
          pkg-manager: yarn
  test:
    parallelism: 3
    docker:
      - image: cimg/ruby:2.7.2-node # this is our primary docker image, where step commands run.
      - image: circleci/postgres:12.3
        environment: # add POSTGRES environment variables.
          POSTGRES_USER: user
          POSTGRES_PASSWORD: password
          POSTGRES_DB: testdb
    environment:
      BUNDLE_JOBS: "3"
      BUNDLE_RETRY: "3"
      PGHOST: 127.0.0.1
      PGUSER: user
      PGPASSWORD: password
      RAILS_ENV: test
    steps:
      - checkout
      - ruby/install-deps
      - node/install-packages:
          pkg-manager: yarn
      - run:
          name: Wait for DB
          command: dockerize -wait tcp://localhost:5432 -timeout 1m
      - run:
          name: Database setup
          command: bundle exec rails db:schema:load --trace
      - run:
          name: Load test data
          command: bundle exec rails db:seed --trace
      - run:
          name: Run Rails Server
          background: true
          command: CYPRESS=1 bundle exec rails s -p 5017
      - run:
          name: Wait for server
          command: |
            until $(curl --retry 10 --output /dev/null --silent --head --fail http://127.0.0.1:5017); do
              printf '.'
              sleep 5
            done
      - run: sudo apt-get update
      - run: sudo apt-get install libgtk2.0-0 libgtk-3-0 libgbm-dev libnotify-dev libgconf-2-4 libnss3 libxss1 libasound2 libxtst6 xauth xvfb
      - run: yarn cypress install --force
      - run:
          yarn: true
          command: yarn run cypress run
      - store_test_results:
          path: test-reports/
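One detail worth checking (not confirmed by the question, just a common cause): the config only stores test results, and CircleCI does not keep Cypress videos or screenshots unless they are uploaded explicitly. A minimal sketch, assuming Cypress writes to its default cypress/videos and cypress/screenshots folders, would add store_artifacts steps at the end of the test job:
      # hypothetical extra steps after store_test_results:
      - store_artifacts:
          path: cypress/videos
      - store_artifacts:
          path: cypress/screenshots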

pg_dump: aborting because of server version mismatch during gitlab CI

Here is the error I get when the GitLab runner runs my CI script:
pg_dump: server version: 13.2 (Debian 13.2-1.pgdg100+1); pg_dump version: 11.11 (Debian 11.11-0+deb10u1)
pg_dump: aborting because of server version mismatch
rails aborted!
failed to execute:
pg_dump -s -x -O -f /builds/steady-install-inc/steady-install-backend/db/structure.sql test
And here is the .gitlab-ci.yml file running rspec:
image: ruby:2.6.3
stages:
  - test
  - deploy
cache:
  key: $CI_COMMIT_REF_SLUG
  paths:
    - vendor/bundle
before_script:
  - gem install bundler
  - bundle install --deployment --without development -j $(nproc)
rspec:
  stage: test
  services:
    - postgres:13.2
  variables:
    POSTGRES_DB: test
    POSTGRES_USER: test
    POSTGRES_PASSWORD: test
    DATABASE_URL: "postgres://$POSTGRES_USER:$POSTGRES_PASSWORD@postgres/$POSTGRES_DB"
    DATABASE_CLEANER_ALLOW_REMOTE_DATABASE_URL: 'true'
  script:
    # Use example environment variables
    - cp config/application.yml.example config/application.yml
    - apt-get update -qy && apt-get install -y nodejs
    - apt-get install -y postgresql postgresql-client libpq-dev
    - bundle exec rails db:migrate RAILS_ENV=test
    - bundle exec rspec
  coverage: '/\(\d+.\d+\%\) covered/'
  only:
    - merge_requests
Can anyone see what I'm doing wrong?
I've tried around 5-10 suggestions online, but most seemed to be either for Ubuntu or Docker, which I'm not fully sure how to implement, or they just didn't work.
Anything helps!
Edit:
I also forgot to mention that I recently switched from a schema.rb format to a structure.sql format, but I'm not sure if that is part of the problem, as my specs pass when I run them locally.
I lack expertise with gitlab-ci.yml, but I wanted to mention GitLab's CI Lint tool - have you tried that? It's kinda hidden. On the left menu choose CI/CD > Pipelines and it will be in the far right corner of the pipeline view.
[Screenshot: GitLab pipeline view with the CI Lint button]
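Regarding the version mismatch itself: the script installs the distribution's default postgresql-client (version 11 on Debian Buster, as the error output shows), while the postgres:13.2 service is newer, and pg_dump refuses to dump a newer server. One common approach, a sketch assuming a Debian-based image and the PostgreSQL PGDG apt repository, is to install a matching client at the start of the rspec job's script instead of the distribution packages:
    # hypothetical replacement for the "apt-get install -y postgresql ..." line:
    - apt-get update -qy && apt-get install -y wget gnupg lsb-release
    - echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list
    - wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
    - apt-get update -qy && apt-get install -y postgresql-client-13 libpq-dev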

Circle CI fails to execute psql command when running rake db:structure:load

So this is my config.yml
version: 2.1
orbs:
  queue: eddiewebb/queue@1.5.0
executors:
  node_postgres_redis:
    docker:
      - image: circleci/ruby:2.4.10-node-browsers
        environment:
          CC_TEST_REPORTER_ID: 7ceff1524bdc09dsd3e232321cf9daa531154170d590823232eddqw2330f3570
          PGHOST: 127.0.0.1
          RAILS_ENV: test
          TEST_REPORT_PATH: "test/reports"
      - image: circleci/postgres:9.6.2-alpine
        environment:
          POSTGRES_USER: circleci
          POSTGRES_DB: circleci-jet-test
          POSTGRES_PASSWORD: ""
      - image: circleci/redis:3.2
  ubuntu:
    machine:
      image: ubuntu-1604:202004-01
commands:
  setup_and_run_test:
    steps:
      - checkout
      # Restore bundle cache
      - restore_cache:
          key: jet-bundle-{{ .Branch }}-{{ checksum "Gemfile.lock" }}
      # imagemagick
      - run:
          name: Setup Image Magick
          command: |
            sudo apt-get install libmagickwand-dev imagemagick imagemagick-6.q16 libmagickcore-dev
            sudo ln -s /usr/lib/x86_64-linux-gnu/ImageMagick-6.9.7/bin-q16/Magick-config /usr/bin/Magick-config
      - run:
          name: Install Bundler
          command: |
            gem install bundler -v 1.17.3
      # Install gem dependencies
      - run: bundle check --path=vendor/bundle || bundle install --path=vendor/bundle
      # Store bundle cache
      - save_cache:
          key: jet-bundle-{{ .Branch }}-{{ checksum "Gemfile.lock" }}
          paths:
            - vendor/bundle
      # Database setup
      - run:
          name: Database Setup
          command: |
            bundle exec rake db:create
            bundle exec rake db:migrate
            bundle exec rake db:structure:load
Everything runs properly until it tries to execute
bundle exec rake db:structure:load
Then it throws this error, which is a bit weird:
failed to execute:
psql -q -f /home/circleci/circleci-jet/db/structure.sql *****_test
Please check the output above for any errors and make sure that psql is installed in your PATH and has proper permissions.
I cannot understand this error or what needs to be done on my side.
Can anyone show me the direction, ideally with examples?
Thanks
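The error message is fairly literal: rake db:structure:load shells out to psql, and no psql binary is available on the PATH inside the primary container. A minimal sketch of one way to handle it, assuming the Debian-based circleci/ruby image (where passwordless sudo is available), is to install the client before the Database Setup step:
      # hypothetical extra step before "Database Setup":
      - run:
          name: Install PostgreSQL client
          command: |
            sudo apt-get update
            sudo apt-get install -y postgresql-client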

How to run bitbucket pipeline to deploy php based app on nanobox

I am trying to set up a Bitbucket pipeline for a PHP-based (Laravel Lumen) app intended to be deployed on nanobox.io. I want this pipeline to deploy my app as soon as code changes are committed.
My bitbucket-pipelines.yml looks like this:
image: php:7.1.29
pipelines:
  branches:
    staging:
      - step:
          name: Publish to staging version
          deployment: staging
          caches:
            - composer
          script:
            - apt-get update && apt-get install -y unzip
            - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
            - composer install
            # - vendor/bin/phpunit
            - bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
            - nanobox deploy
This gives the following error:
+ nanobox deploy
Failed to validate provider - missing docker - exec: "docker": executable file not found in $PATH
Using nanobox with native requires tools that appear to not be available on your system.
docker
View these requirements at docs.nanobox.io/install
I then followed this page and changed the second-last line to look like this:
sudo bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
When I did that, I got the following error:
+ sudo bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
bash: sudo: command not found
I've run out of tricks here, and I don't have experience in this area. Any help is very much appreciated.
First, you can't use sudo in Pipelines, but that's probably not relevant here. The issue is that the nanobox CLI wants to execute docker, which isn't installed. You should enable the docker service for your step.
image: php:7.1.29
pipelines:
  branches:
    staging:
      - step:
          name: Publish to staging version
          deployment: staging
          # Enable docker service
          services:
            - docker
          caches:
            - composer
          script:
            - docker version
You might want to have a look at the Pipelines docs as well: Run Docker commands in Bitbucket Pipelines.
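Putting the two pieces together, a sketch based on the original bitbucket-pipelines.yml with the docker service enabled and sudo removed (not a verified nanobox setup, just the combination of the snippets above):
image: php:7.1.29
pipelines:
  branches:
    staging:
      - step:
          name: Publish to staging version
          deployment: staging
          # Enable docker so the nanobox CLI can find it
          services:
            - docker
          caches:
            - composer
          script:
            - apt-get update && apt-get install -y unzip
            - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
            - composer install
            - bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
            - nanobox deploy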

Circle CI - Run rspec tests in parallel

I added an additional container in Circle CI and tried to run tests in parallel.
This is my circle.yml file:
machine:
  ruby:
    version: 2.3.0
database:
  post:
    - cp config/sunspot.ci.yml config/sunspot.yml
    - bundle exec sunspot-solr start -p 8981
dependencies:
  pre:
    - sudo apt-get update; sudo apt-get -y install solr-tomcat
test:
  override:
    - rvm use 2.3.0 && bundle exec rspec --color --format progress:
        environment:
          RAILS_ENV: test
        parallel: true
        files:
          - "spec/**/*_spec.rb"
but tests don't seem to be running in parallel.
What am I missing? Thanks in advance
For some reason, removing rvm use 2.3.0 from the test section fixed my problem.
machine:
  ruby:
    version: 2.3.0
database:
  # After the default database tasks have finished
  post:
    # Circle CI already replaced our original sunspot.yml, so restore it from the backup copy `config/sunspot.ci.yml`
    - cp config/sunspot.ci.yml config/sunspot.yml
    - bundle exec sunspot-solr start -p 8981
dependencies:
  pre:
    - sudo apt-get update; sudo apt-get -y install solr-tomcat
test:
  override:
    - bundle exec rspec --tag ~@flaky --color --profile 20 --format progress:
        timeout: 240
        environment:
          RAILS_ENV: test
        parallel: true
        files:
          - my files here
