GitLab CI for Rails App Using Postgres and Elasticsearch (searchkick gem)

How does one go about configuring a .gitlab-ci.yml file for a Rails app that depends on PostgreSQL and Elasticsearch (via the searchkick gem), so that my tests run when I push to GitLab?

I wanted to post this question because it took me far too long to find the answer, and I don't want others to feel my pain. The example below not only builds my application, but also runs all my specs.
Setup
Rails 5+
PostgreSQL 9.6
RSpec gem
Searchkick gem (handles Elasticsearch queries and configuration)
Configuration
Add the following files to your Rails app with the configurations listed.
config/gitlab-ci/gitlab-database.yml
test:
  adapter: postgresql
  encoding: unicode
  pool: 5
  timeout: 5000
  host: postgres
  database: test_db
  user: runner
  password: ""
.gitlab-ci.yml
image: ruby:2.4.1

services:
  - postgres:latest
  - elasticsearch:latest

variables:
  POSTGRES_DB: test_db
  POSTGRES_USER: runner
  POSTGRES_PASSWORD: ""
  ELASTICSEARCH_URL: "http://elasticsearch:9200"

stages:
  - test

before_script:
  - bundle install --without postgres production --jobs $(nproc) "${FLAGS[@]}"
  - cp config/gitlab-ci/gitlab-database.yml config/database.yml
  - RAILS_ENV=test bundle exec rails db:create db:schema:load

test:
  stage: test
  script:
    - bundle exec rspec
And that's it! You're now configured to auto-run your specs on GitLab for each push.
Further Explanation
Let's start with PostgreSQL. When our runner starts, the application we copy in won't know how to connect to Postgres. So we create a new database.yml file, prefixed with gitlab- so it doesn't conflict with our actual configuration, and copy it into the runner's config directory. The cp command not only copies the file, but replaces it if it already exists.
The values that have to line up are database:, user:, and password:. We set the same values in the POSTGRES_DB, POSTGRES_USER, and POSTGRES_PASSWORD environment variables, which the postgres service reads, so everything connects properly.
Okay, connecting to PostgreSQL is well explained and documented on GitLab's website. So how did I get Elasticsearch working, which isn't explained very well anywhere?
The magic happens again in variables. We need to set the ELASTICSEARCH_URL environment variable, which the Searchkick gem reads, because Searchkick looks for Elasticsearch at http://localhost:9200 by default. Since we're running Elasticsearch as a service, we have to tell it explicitly not to use the default and to use our service's hostname instead. So we replace http://localhost:9200 with http://elasticsearch:9200, which maps to our service.
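One caveat: the elasticsearch service can take a few seconds to become ready, so the very first spec that touches Searchkick can fail with a connection error. If that happens, a small wait loop in before_script helps; this is a sketch, assuming curl is available in the ruby:2.4.1 image:

before_script:
  # ... the steps above ...
  # Wait up to ~30 seconds for the elasticsearch service to respond.
  - 'for i in $(seq 1 30); do curl -s "$ELASTICSEARCH_URL" > /dev/null && break; sleep 1; done'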

Related

How to separate environments of a project in docker and docker-compose

I have a couple of basic conceptual questions regarding the environments of a webapp.
I am trying to build a dockerized Rails app, having: test, development, staging and production.
My first question is, should the Dockerfile and docker-compose be the same for every environment?
The only thing that would change would be that when I want to run the tests I pass RAILS_ENV=test when creating a container, when I want to do development I pass RAILS_ENV=development, and so on.
Would this be the correct idea behind it?
Or can they be different? (In my case I build nginx in production together with the app and db, but I have just a simple setup with only the app and db for testing and development.)
My second question is, when I pass RAILS_ENV=test, for example, should I do it in the Dockerfile (conditionally passing a different environment when building the image):
# Set environment
ARG BUILD_DEVELOPMENT
# if --build-arg BUILD_DEVELOPMENT=1, set RAILS_ENV to 'development' or set to null otherwise.
ENV RAILS_ENV=${BUILD_DEVELOPMENT:+development}
# if RAILS_ENV is null, set it to 'production' (or leave as is otherwise).
ENV RAILS_ENV=${RAILS_ENV:-production}
Or should I keep the same image and pass RAILS_ENV when running docker-compose?
docker-compose -f docker-compose.production.yml run rake db:create db:migrate RAILS_ENV=production
Thank you!
Should the Dockerfile be the same for every environment?
Yes. Build a single image and reuse it in all environments. Do not use a Dockerfile ARG to pass in "is it production", or host names, or host-specific user IDs. In particular, you should use an identical image in your pre-production and production environments, to avoid deploying an untested image.
Should docker-compose.yml be the same for every environment?
No. This is the main place you control deploy-time options such as where your database is located or what log level to output, so it makes sense to have a separate Compose file per environment.
Compose supports multiple Compose files. You can pass docker-compose -f ... multiple times to specify which Compose files to use; or, if you don't use that option, Compose will read both docker-compose.yml and docker-compose.override.yml. So you might have a base docker-compose.yml file that names the image to use:
# docker-compose.yml
version: '3.8'
services:
  app:
    image: registry.example.com/app:${APP_TAG:-latest}
In the question you suggest a docker-compose.prod.yml. That can set $RAILS_ENV and point at your production database:
# docker-compose.prod.yml
services:
  app:
    environment:
      - RAILS_ENV=production
      - DB_HOST=db.example.com
      - DB_USERNAME=...
    # (but don't repeat image:)
You could separately have a docker-compose.dev.yml that launches a local database and includes instructions on how to build the image:
# docker-compose.dev.yml
version: '3.8'
services:
  app:
    build: .
    environment:
      - RAILS_ENV=development
      - DB_HOST=db
      - DB_USERNAME=db
      - DB_PASSWORD=passw0rd
  db:
    image: postgres:14
    environment:
      - POSTGRES_USER=db
      - POSTGRES_PASSWORD=passw0rd
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
If you use the docker-compose -f option, you always need to mention every file you're using:
docker-compose \
  -f docker-compose.yml \
  -f docker-compose.dev.yml \
  run app \
  rake db:migrate
You could also symlink docker-compose.override.yml to point at an environment-specific file, and then Compose would be able to find it by default.
ln -sf docker-compose.test.yml docker-compose.override.yml
docker-compose run app rspec
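The docker-compose.test.yml referenced here isn't shown above; a minimal sketch might look like this (the service names and credentials are assumptions that mirror the dev file):

# docker-compose.test.yml (hypothetical)
version: '3.8'
services:
  app:
    build: .
    environment:
      - RAILS_ENV=test
      - DB_HOST=db
      - DB_USERNAME=db
      - DB_PASSWORD=passw0rd
  db:
    image: postgres:14
    environment:
      - POSTGRES_USER=db
      - POSTGRES_PASSWORD=passw0rd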

Docker plugin: Access drone services from within the Dockerfile build process

I'm using drone/drone:0.8 along with the Docker plugin, and I'm kinda stuck with a Dockerfile I use to build the app.
This Dockerfile runs the app's test suite as part of its build process - relevant fragment shown:
# ENV & ARG settings:
ENV RAILS_ENV=test RACK_ENV=test
ARG DATABASE_URL=postgres://postgres:3x4mpl3#postgres:5432/app_test
# Run the tests:
RUN rails db:setup && rspec
The test suite requires a connection to the database, for which I'm including the postgres service in the .drone.yml file:
pipeline:
  app:
    image: plugins/docker
    repo: vovimayhem/example-app
    tags:
      - ${DRONE_COMMIT_SHA}
      - ${DRONE_COMMIT_BRANCH/master/latest}
    compress: true
    secrets: [ docker_username, docker_password ]
    use_cache: true
    build_args:
      - DATABASE_URL=postgres://postgres:3x4mpl3@postgres:5432/app_test

services:
  postgres:
    image: postgres:9-alpine
    environment:
      - POSTGRES_PASSWORD=3x4mpl3
But it looks like the services defined in the drone file are not accessible from within the build process:
Step 18/36 : RUN rails db:setup && rspec
---> Running in 141734ca8f12
could not translate host name "postgres" to address: Name does not resolve
Couldn't create database for {"encoding"=>"unicode", "schema_search_path"=>"partitioning,public", "pool"=>5, "min_messages"=>"log", "adapter"=>"postgresql", "username"=>"postgres", "password"=>"3x4mpl3", "port"=>5432, "database"=>"sibyl_test", "host"=>"postgres"}
rails aborted!
PG::ConnectionBad: could not translate host name "postgres" to address: Name does not resolve
Is there any configuration I'm missing, or is this a feature not currently present in the plugin?
I know this could be related somehow to the --network and/or --add-host options of the docker build command... I could help in case you think we should include this behavior.
So a couple of things jump out at me (although I don't have the full context, so take whatever makes sense):
I would probably separate the build/testing piece into its own step, and then use the docker plugin to publish the artifacts once they've passed.
I think the docker plugin is really for publishing the image (I don't believe its container is going to be able to reach the service containers, due to dind).
If you do separate it out, you'll probably need a sleep 15 in the commands section of the build step to give the db time to start up.
http://docs.drone.io/postgres-example/ has examples of how to use postgres, but again, it requires separating the build pieces from creating and publishing the docker image :)
Here's a sample of what I'm talking about ;)
pipeline:
  tests-builds:  # should probably be separate :)
    image: python:3.6-stretch
    commands:
      - sleep 15  # wait for postgres to start
      - pip install --upgrade -r requirements.txt
      - pip install --upgrade -r requirements-dev.txt
      - pytest --cov=sfs tests/unit
      - pytest --cov=sfs tests/integration  # this tests the db interactions

  publish:
    image: plugins/docker
    registry: quay.io
    repo: somerepot
    auto_tag: true
    secrets: [ docker_username, docker_password ]
    when:
      event: [ tag, push ]

services:
  database:
    image: postgres
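Applied to the Rails app in the question, the same split might look roughly like this. It's a sketch under assumptions: the ruby image tag, the sleep, and moving db:setup/rspec out of the Dockerfile into a test step are mine, while the repo name, DATABASE_URL and postgres service come from the question:

pipeline:
  test:
    image: ruby:2.4
    environment:
      - RAILS_ENV=test
      - DATABASE_URL=postgres://postgres:3x4mpl3@postgres:5432/app_test
    commands:
      - sleep 15  # give the postgres service time to start
      - bundle install --jobs $(nproc)
      - bundle exec rails db:setup
      - bundle exec rspec

  publish:
    image: plugins/docker
    repo: vovimayhem/example-app
    tags:
      - ${DRONE_COMMIT_SHA}
    secrets: [ docker_username, docker_password ]
    use_cache: true

services:
  postgres:
    image: postgres:9-alpine
    environment:
      - POSTGRES_PASSWORD=3x4mpl3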

Could not find Docker hostname on Gitlab CI

I have an app inside a Docker container based on an Elixir image that needs to connect to a database and run tests using a GitLab runner.
The build stage works fine, but there is a problem connecting to a database to run the tests. I tried both connecting to a service and running another database container, but from the logs it looks like the problem is with the Phoenix app:
** (RuntimeError) :database is nil in repository configuration
lib/ecto/adapters/postgres.ex:121: Ecto.Adapters.Postgres.storage_up/1
lib/mix/tasks/ecto.create.ex:40: anonymous fn/3 in Mix.Tasks.Ecto.Create.run/1
(elixir) lib/enum.ex:675: Enum."-each/2-lists^foreach/1-0-"/2
(elixir) lib/enum.ex:675: Enum.each/2
(mix) lib/mix/task.ex:301: Mix.Task.run_task/3
(mix) lib/mix/cli.ex:75: Mix.CLI.run_task/2
This is what the config/test.exs file looks like:
config :app, App.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: System.get_env("POSTGRES_USER"),
  password: System.get_env("POSTGRES_PASSWORD"),
  database: System.get_env("POSTGRES_DB"),
  hostname: System.get_env("POSTGRES_HOSTNAME"),
  pool: Ecto.Adapters.SQL.Sandbox
This is the output from the runner:
$ docker run --rm -t $CONTAINER echo $MIX_ENV $POSTGRES_USER $POSTGRES_HOSTNAME $POSTGRES_DB
test username db test_db
I'm trying to figure out why I get this error :database is nil, and if it is related to Gitlab, Ecto or Phoenix.
Edit
I wrote static values into the config/*.exs files (for some reason it didn't pick up the environment variables), but now it can't find the postgresql hostname, even though the postgresql instance is running (I checked with docker ps).
Based on the message :database is nil in repository configuration, it seems to me that your POSTGRES_DB variable is not set. You can try changing that configuration line to
database: System.get_env("POSTGRES_DB") || "postgres"
to see whether you still get the same error. If you don't, you can debug from there.
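If missing variables are indeed the cause, a minimal sketch of the relevant .gitlab-ci.yml pieces might look like this. The values here are assumptions matching the config above, and it assumes the tests run directly in the job's image rather than inside a nested docker run, since job variables are not automatically passed into containers you start yourself:

services:
  - postgres:latest

variables:
  POSTGRES_USER: username
  POSTGRES_PASSWORD: secret
  POSTGRES_DB: test_db
  POSTGRES_HOSTNAME: postgres  # the postgres service is reachable under the hostname "postgres"
  MIX_ENV: test

test:
  script:
    - mix deps.get
    - mix ecto.create
    - mix test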

How to set up Travis CI and postgresql using custom db credentials?

I'm trying to set up custom Postgres credentials with Travis CI, since I'd rather not change the credentials already defined in the code to be tested.
The testing code defines that the database should be accessed on:
'sqlalchemy.url': 'postgresql://foo:bar@localhost/testing_db'
I've therefore created a database.travis.yml file:
postgresql: &postgresql
adapter: postgresql
username: foo
password: bar
database: testing_db
...and added the following to my .travis.yml:
services:
  - postgresql

before_script:
  - psql -c 'create database stalker_test;' -U postgres
  - mkdir config && cp database.travis.yml config/database.yml
However, I am still getting this during testing:
OperationalError: (psycopg2.OperationalError) FATAL: role "foo" does not exist
What am I doing wrong?
Adding the following to .travis.yml solved my issue. No need for a database.travis.yml file.
before_script:
  - psql -c "CREATE DATABASE testing_db;" -U postgres
  - psql -c "CREATE USER foo WITH PASSWORD 'bar';" -U postgres
database.yml is a Ruby on Rails thing. Travis CI started out with Rails/Ruby testing, so the docs may still reflect that.
You most probably need to do your setup in a separate script or migration step, and not rely on Travis for anything except running the service.
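Putting the accepted approach together in one place, a minimal .travis.yml sketch for this setup might be (the language and test command are assumptions, based on the psycopg2/SQLAlchemy error in the question):

language: python

services:
  - postgresql

before_script:
  - psql -c "CREATE DATABASE testing_db;" -U postgres
  - psql -c "CREATE USER foo WITH PASSWORD 'bar';" -U postgres
  # optional, but avoids permission issues for the new role
  - psql -c "GRANT ALL PRIVILEGES ON DATABASE testing_db TO foo;" -U postgres

script:
  - pytest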

How to setup database.yml to connect to Postgres Docker container?

I have a Rails app. In the development and test environments, I want the Rails app to connect to a dockerized Postgres. The Rails app itself will not be in a container though - just Postgres.
What should my database.yml look like?
I have a docker default machine running. I created docker-compose.yml:
postgres:
  image: postgres
  ports:
    - "5432:5432"
  environment:
    - POSTGRES_USER=timbuktu
    - POSTGRES_PASSWORD=mysecretpassword
I ran docker-compose up to get Postgres running.
Then I ran docker-machine ip default to get the IP address of the Docker virtual machine, and I updated database.yml accordingly:
...
development:
  adapter: postgresql
  host: 192.168.99.100
  port: 5432
  database: timbuktu_development
  username: timbuktu
  password: mysecretpassword
  encoding: unicode
  pool: 5
...
So all is well and I can connect to Postgres in its container.
But, if someone else pulls the repo, they won't be able to connect to Postgres using my database.yml, because the IP address of their Docker default machine will be different from mine.
So how can I change my database.yml to account for this?
One idea I have is to ask them to get the IP address of their Docker default machine by running docker-machine env default and pasting the DOCKER_HOST line into their .bashrc. For example,
export DOCKER_HOST="tcp://192.168.99.100:2376"
Then my database.yml host can include the line
host: <%= ENV['DOCKER_HOST'].match(/tcp:\/\/(.+):\d{3,}/)[1] %>
But this feels ugly and hacky. Is there a better way?
You could set a correct environment variable first, and access it from your database.yml:
host: <%= ENV['POSTGRES_IP'] %>
With a bashrc like (using bash substring removal):
eval "$(docker-machine env default)"   # sets DOCKER_HOST, e.g. tcp://192.168.99.100:2376
POSTGRES_IP=${DOCKER_HOST#tcp://}      # strip the tcp:// prefix
export POSTGRES_IP=${POSTGRES_IP%%:*}  # strip the :port suffix
I found a simpler way:
host: <%= `docker-machine ip default` %>
